Discussion:
Obsolete processors resurrected in FPGAs
Ryan
2004-11-12 17:28:56 UTC
Permalink
As part of an academic project I'm going to be looking at the pros and cons
of re-producing microprocessors in current FPGA technologies that are no
longer available on the open market. This is to address the problem that
occurs in some specialised areas where the lifetime of a product is very
long and the cost of rewriting the software is prohibitively high (e.g. it
was written in a language and/or tools that aren't supported anymore). The
idea is to be able to use an FPGA implementation as either a drop-in
replacement component on a legacy board or to produce a new board with
identical functionality. Either way, no changes to the application object
code stored in ROM are required.

There are many different factors that I'll have to look into before I can
draw any conclusions, and I'm concerned that some important ones could be
missed. Obviously there are:
1) Availability of the original processor HDL or equivalent.
2) How can the exact EBI timings of the original be recreated (or how close
to the original is practical)?
3) Cache memory cannot be recreated on-chip.
4) How close can the internal timings be recreated?
5) Verification ?!

If anyone would like to contribute to this initial brainstorming, I'd be
grateful.

Rupert.
B. Joshua Rosen
2004-11-12 18:38:49 UTC
Permalink
Post by Ryan
As part of an academic project I'm going to be looking at the pros and cons
of re-producing microprocessors in current FPGA technologies that are no
longer available on the open market. This is to address the problem that
occurs in some specialised areas where the lifetime of a product is very
long and the cost of rewriting the software is prohibitively high (e.g. it
was written in a language and/or tools that aren't supported anymore). The
idea is to be able to use an FPGA implementation as either a drop-in
replacement component on a legacy board or to produce a new board with
identical functionality. Either way, no changes to the application object
code stored in ROM are required.
There are many different factors that I'll have to look into before I can
make any conclusions and I'm concerned that some important ones could be
1) Availability of the original processor HDL or equivalent.
Systems that were designed 20 years ago were designed with paper
schematics, not HDLs. If the manufacturer still exists (and most of them
are long gone), the designs are going to be hidden away in a long-forgotten
file cabinet. The architecture manuals may still be available somewhere,
maybe even on the net. I've seen sites devoted to the Data General
machines for example (I was one of the designers of the DG MV8000 in the
late 70s which is why I've looked). The good news is that machines of that
vintage were relatively simple because of the limited number of gates that
we had available. The smallest FPGA has more gates than any minicomputer
of the 70s and the available block RAM in a decent size FPGA exceeds the
main memory sizes of many of those machines. The caches, if any, were
tiny. A couple of block RAMs is enough. Also modern HDLs like Verilog
vastly simplify the design task. One person using Verilog and a decent
simulator can do in a few weeks what it took a team of people a couple of
years to do in the 70s and early 80s.
Post by Ryan
2) How can the exact EBI timings of the original be recreated (or how
close to the original is practical)?
This probably isn't practical.
Post by Ryan
3) Cache memory cannot be recreated on-chip.
Easily done on-chip.

Post by Ryan
4) How close can the internal timings be recreated?
It would be hard to reproduce it exactly, but why would you want to?
Minicomputers of the 70s had clock speeds of 5-10MHz, modern FPGAs run at
over 100MHz without any work at all, and much faster if you put even
modest effort into it.

5) Verification ?!
If you can find the original diagnostics, that would give you a start. In
the 70s and early 80s, machines were debugged in the lab using
instruction set diagnostics. The prototype machines were built on wirewrap
boards which could be fixed almost as easily as we change a line of
Verilog today. The simulators that existed weren't very good and the
machines that they ran on were too slow to do any serious debugging so
there was no such thing as a testbench as we know it today. The real
debugging was in the lab.
Post by Ryan
If anyone would like to contribute to this initial brainstorming, I'd be
grateful.
Rupert.
While it is practical to emulate an obsolete architecture in an FPGA it's
not clear that it's the right thing to do. Using a software emulator is
the more cost-effective way to do this. Moore's law works out to a factor
of 100 per decade which means that in the last 25 years the
performance/price ratio has improved by a factor of 100,000. Today's
desktop PC is several thousand times faster than the super minicomputers
of the late 70s while being a factor of 100 cheaper. What this means is
that even if it took you 100 instructions to emulate a single instruction on
an antique machine the emulator would still run 20-30 times faster than
the original machine did. Of course a decent emulator should be able to do
a lot better than this but my point is that even the crudest software
emulator could do the job.
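To put a figure like "100 host instructions per emulated instruction" in
concrete terms, here is a minimal sketch of the dispatch loop at the heart
of any software emulator. The four-instruction accumulator machine is
entirely hypothetical, invented for illustration; it is not any real ISA.

```python
# Minimal sketch of a software instruction-set emulator: a fetch-decode-
# execute loop for a hypothetical 4-instruction accumulator machine.
# Each emulated instruction costs tens of host operations -- the overhead
# factor the cost argument above is about.

def run(program, memory):
    """Interpret `program` (a list of (opcode, operand) pairs) until HALT."""
    acc = 0          # accumulator
    pc = 0           # program counter
    while True:
        op, arg = program[pc]     # fetch
        pc += 1
        if op == "LOAD":          # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":         # acc <- acc + memory[arg], 16-bit wrap
            acc = (acc + memory[arg]) & 0xFFFF
        elif op == "STORE":       # memory[arg] <- acc
            memory[arg] = acc
        elif op == "HALT":
            return acc
        else:
            raise ValueError(f"unknown opcode {op}")

# Example: add two words and store the result.
mem = {0: 40, 1: 2, 2: 0}
prog = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)]
print(run(prog, mem))   # -> 42, and mem[2] == 42
```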
mike_treseler
2004-11-12 19:53:15 UTC
Permalink
It might be an intriguing exercise, but
don't expect much outside interest in the
results. Those parts are obsolete for good reasons.

-- Mike Treseler
Kryten
2004-11-13 04:07:23 UTC
Permalink
Post by mike_treseler
It might be an intriguing exercise, but
don't expect much outside interest in the
results. Those parts are obsolete for good reasons.
True.

It only becomes interesting when you have an application that cannot be
economically or practically replaced by software re-writes or a desktop PC.

I heard that the space shuttle is having a hard time sourcing 8086 chips.
These were decent in the late seventies when the shuttle was developed.

Many military flight systems suffer from part obsolescence.

Concorde used many analogue computers. It was never going to be economic to
upgrade those to modern technology and test them all.

In all the cases above, it is too dear to re-write code and impractical to
stick in a modern PC running an emulator. A Blue Screen of Death might
easily become a White Hot Screen of Death.

It is interesting to look at space craft over the years.

The early space craft looked rather like pinball machine panels.

The recent SpaceShipOne cockpit looked like it had a single LCD panel and
a joystick.


In another field, the wartime Colossus is said to be on a par with recent
PCs, but its internal workings are still quite closely guarded. Partly
because the guy looking after it regards it very much as his own project
and doesn't want to give too much away. He is also under the Official
Secrets Act, so there's always the risk of a suspicious suicide in the
woods. ;-)
Jim Granville
2004-11-12 20:56:46 UTC
Permalink
Post by Ryan
As part of an academic project I'm going to be looking at the pros and cons
of re-producing microprocessors in current FPGA technologies that are no
longer available on the open market. This is to address the problem that
occurs in some specialised areas where the lifetime of a product is very
long and the cost of rewriting the software is prohibitively high (e.g. it
was written in a language and/or tools that aren't supported anymore). The
idea is to be able to use an FPGA implementation as either a drop-in
replacement component on a legacy board or to produce a new board with
identical functionality. Either way, no changes to the application object
code stored in ROM are required.
There are many different factors that I'll have to look into before I can
make any conclusions and I'm concerned that some important ones could be
1) Availability of the original processor HDL or equivalent.
2) How can the exact EBI timings of the original be recreated (or how close
to the original is practical)?
3) Cache memory cannot be recreated on-chip.
4) How close can the internal timings be recreated?
5) Verification ?!
If anyone would like to contribute to this initial brainstorming, I'd be
grateful.
A good place to start is to look at what is already out there.
This is a good launch site
http://www.lug-kiel.de/links/details/hdl.html

Look at devices like 6502, 6809, & 8080 as simpler examples of cores
that have a wider code base, but are not active hardware any more.

Peripheral logic is likely to be as much/more work than the core.

There is also a lot of work being done on game-machine emulation.

Someone has mentioned SW emulation, for an interesting take on that
see
http://www.xilinx.com/publications/xcellonline/xcell_48/xc_picoblaze48.htm

[this uses a Soft CPU in a FPGA to SW Emulate a more complex CPU ! ]

Where you have spare speed, this approach can save resource.

What would be interesting research would be a tool chain that allowed
a soft-boundary and generic approach to the replacement of any core.
The most flexible emulation, would be to start using a tinycore, and
calling Target Opcode Sw emulation blocks. This gets the system working,
but at a lower speed.
Then, you analyse the blocks, and target the frequent/slow ones, to be
replaced by either FPGA Logic resource, or opcode extension on the
original core.

-jg
Hal Murray
2004-11-12 21:28:34 UTC
Permalink
Post by Jim Granville
Someone has mentioned SW emulation, for an interesting take on that
see
http://www.xilinx.com/publications/xcellonline/xcell_48/xc_picoblaze48.htm
[this uses a Soft CPU in a FPGA to SW Emulate a more complex CPU ! ]
There was a time when it was common to implement instruction
sets by writing microcode that ran on real hardware. The general
idea on the ones I'm familiar with was to use a wide instruction
word that was easy to decode and simple to implement. (and ran
very fast)

It might be fun to do that in an FPGA. I wonder how much it takes
to implement a 6502 or such.
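A toy sketch of that microcoded style, with entirely hypothetical control
fields (not modeled on any real machine): each macro-instruction expands
into wide microwords whose fields need no further decoding, because each
field directly enables one datapath action.

```python
# Toy microcoded interpreter (hypothetical machine, illustrative only).
# Each microword is a dict of independent control fields -- the "wide,
# easy to decode" format: no field needs further decoding, it simply
# enables a datapath action.

MICROCODE = {
    # macro-opcode -> list of microwords
    "INC_M": [                          # memory[addr] += 1
        {"mem_read": True, "latch_a": True},
        {"alu": "add1"},
        {"mem_write": True},
    ],
}

def step(opcode, addr, memory):
    """Execute one macro-instruction by sequencing its microwords."""
    a = 0                               # datapath latch
    bus = 0                             # internal bus
    for uw in MICROCODE[opcode]:
        if uw.get("mem_read"):          # drive memory onto the bus
            bus = memory[addr]
        if uw.get("latch_a"):           # capture the bus into the latch
            a = bus
        if uw.get("alu") == "add1":     # ALU operation, 8-bit wrap
            a = (a + 1) & 0xFF
        if uw.get("mem_write"):         # write the latch back to memory
            memory[addr] = a

mem = {5: 9}
step("INC_M", 5, mem)
print(mem[5])   # -> 10
```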
--
The suespammers.org mail server is located in California. So are all my
other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's. I hate spam.
Alex Gibson
2004-11-13 07:29:11 UTC
Permalink
Post by Hal Murray
Post by Jim Granville
Someone has mentioned SW emulation, for an interesting take on that
see
http://www.xilinx.com/publications/xcellonline/xcell_48/xc_picoblaze48.htm
[this uses a Soft CPU in a FPGA to SW Emulate a more complex CPU ! ]
There was a time when it was common to implement instruction
sets by writing microcode that ran on real hardware. The general
idea on the ones I'm familiar with was to use a wide instruction
word that was easy to decode and simple to implement. (and ran
very fast)
It might be fun to do that in an FPGA. I wonder how much it takes
to implement a 6502 or such.
quite a few 6502 cores available

www.opencores.org

www.fpgaarcade.com
http://home.freeuk.com/fpgaarcade/platforms.htm vic20,6502
http://home.freeuk.com/fpgaarcade/pac_main.htm z80

http://zxgate.sourceforge.net/ zx81 ,zx spectrum, jupiter ace, trs80

http://www.fpga-games.com/

Commodore 64 http://c64upgra.de/c-one/
http://www.howell1964.freeserve.co.uk/logic/index_logic.htm lots of stuff
see the links

Alex
Kryten
2004-11-14 01:04:03 UTC
Permalink
"Alex Gibson" <al xx at tpg dot com dot au - remove spaces replace dot>
Post by Alex Gibson
quite a few 6502 cores available
Always read the small print though.

Several 6502 cores have been published but most of them come with notes
about not being completely finished. For example, BCD instructions missing.

Daniel Wallner seems to be the only one who gets round to finishing stuff
and testing it.
Post by Alex Gibson
http://www.howell1964.freeserve.co.uk/logic/index_logic.htm lots of stuff
Recently announced plans to clone Atari 800XL, which will be a bigger task
than the Atom.
Rainer Buchty
2004-11-14 12:37:01 UTC
Permalink
Kryten
2004-11-14 15:49:00 UTC
Permalink
Post by Rainer Buchty
Next thing is that almost all 6502 cores I've seen so far emulate the 65C02.
For quite a number of legacy stuff which exploited the NMOS 6502 illegal
opcodes for timing reasons a 65C02 is plain unusable.
Daniel's T65 can be configured as either. :-)

It can't do the Atari "sally" variant yet, but I imagine that most games
writers would want their code to run on the ordinary 6502 in old 800
machines as well. So code using 'sally' illegals would be even rarer than
those using the usual illegals.

I wonder what fraction of all code used illegal opcodes, and were they ever
much use?
Rainer Buchty
2004-11-14 17:29:38 UTC
Permalink
In article <M7Lld.340$***@newsfe4-gui.ntli.net>,
"Kryten" <***@ntlworld.com> writes:
|> I wonder what fraction of all code used illegal opcodes, and were they ever
|> much use?

On the C64 they were used quite often for fancy video manipulation; AFAIK
they were also used within a floppy speeder to speed up GCR decoding.

IIRC the most commonly used ones were of the "do something and wire-OR the
accumulator in" kind, plus multi-cycle NOPs.
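For a concrete flavor of that effect, here is a behavioral sketch of two of
the better-documented NMOS illegals, LAX and SAX. This is a simplification
for illustration only: addressing modes, flags and cycle behavior are
omitted, and the bus-conflict explanation is the usual informal account
rather than a transistor-level claim.

```python
# Behavioral sketch of two NMOS 6502 "illegal" opcodes (simplified model).
# They arise from two internal operations firing at once: SAX drives both
# A and X onto the internal bus, and the NMOS bus is commonly described as
# resolving the conflict to a bitwise AND.

def lax(cpu, value):
    """LAX: load A and X with the same value in one instruction."""
    cpu["A"] = value
    cpu["X"] = value

def sax(cpu, memory, addr):
    """SAX: store A AND X (the bus conflict resolves to AND)."""
    memory[addr] = cpu["A"] & cpu["X"]

cpu = {"A": 0, "X": 0}
mem = {}
lax(cpu, 0b1100)
cpu["A"] = 0b1010        # change A afterwards; X keeps 0b1100
sax(cpu, mem, 0x10)
print(bin(mem[0x10]))    # -> 0b1000 (A AND X)
```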

You'll find a list here
http://www.funet.fi/pub/cbm/documents/chipdata/6502-NMOS.extra.opcodes

and how they map into the overall opcode table here
http://www.funet.fi/pub/cbm/documents/chipdata/64doc

Rainer
Monte Dalrymple
2004-11-13 18:48:57 UTC
Permalink
Post by Ryan
As part of an academic project I'm going to be looking at the pros and cons
of re-producing microprocessors in current FPGA technologies that are no
longer available on the open market. This is to address the problem that
occurs in some specialised areas where the lifetime of a product is very
long and the cost of rewriting the software is prohibitively high (e.g. it
was written in a language and/or tools that aren't supported anymore). The
idea is to be able to use an FPGA implementation as either a drop-in
replacement component on a legacy board or to produce a new board with
identical functionality. Either way, no changes to the application object
code stored in ROM are required.
There are many different factors that I'll have to look into before I can
make any conclusions and I'm concerned that some important ones could be
1) Availability of the original processor HDL or equivalent.
Even if the design was originally done in an HDL, getting the original owner
to release it will be next to impossible. However, recreating an older
design in HDL form doesn't take an unreasonable amount of time if the
specifications are fairly complete (and accurate, see below).
Post by Ryan
2) How can the exact EBI timings of the original be recreated (or how close
to the original is practical)?
If you are talking nanoseconds, it's going to be time-consuming and probably
not worth the effort as long as the timings are cycle-accurate.
Post by Ryan
3) Cache memory cannot be recreated on-chip.
And unless you have the time and inclination to figure out the internal
timing for this kind of subsystem (a non-trivial task) you will never be
able to achieve cycle accuracy. A similar problem arises whenever you have
more than one clock domain in the device, as the original synchronization
strategy will be very difficult to discern.
Post by Ryan
4) How close can the internal timings be recreated?
It's actually an interesting exercise to try to figure out how the
original design was implemented to come up with the original timings. I
have done this twice, for the Z180 and the Z8000. I was able to match the
Z180 clock cycle timing in all cases. The Z8000 was a different story, for
two reasons. First, the exact timing for the case of the interrupt
acknowledge was not specified except as "1 to 7 clock cycles" for an
aborted instruction fetch. Second, the published timing for both the
multiply and divide instructions was clearly incorrect, as it did not
account for the different addressing modes correctly, besides not making
sense (from a clock cycle standpoint) relative to whether a bit in the
divisor was set or not.
Post by Ryan
5) Verification ?!
If you write your testbench properly, and cover all the boundary cases,
it's possible to exercise the original chip with the same stimulus and
compare the results. This was very useful in my Z8000 case, where a number
of instructions are described as having "undefined" flag values. I was
able to figure out what the chip was doing in these "undefined" cases and
match the behavior in my implementation. It's usually a case of figuring
out where the original designer used "don't cares" in his logic design.
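The compare-against-silicon flow can be sketched as follows. Both sides
here are hypothetical Python stand-ins invented for illustration; in
practice one side would be output captured from the real chip and the
other an HDL simulation.

```python
# Sketch of the compare-against-silicon approach: drive the real chip and
# the model with identical stimulus and diff the results. Both functions
# below are hypothetical stand-ins for illustration.

def reference_chip(a, b):
    # stand-in for captured silicon behavior: a result plus a flag whose
    # spec value is "undefined" but which the silicon computes anyway
    r = (a - b) & 0xFF
    undef_flag = (r >> 7) & 1      # the "don't care" fell out as the sign bit
    return r, undef_flag

def model(a, b):
    # the HDL model, adjusted until it matches the silicon
    r = (a - b) & 0xFF
    return r, (r >> 7) & 1

def compare(stimuli):
    """Return the stimuli for which chip and model disagree."""
    return [s for s in stimuli if reference_chip(*s) != model(*s)]

stimuli = [(a, b) for a in range(0, 256, 17) for b in range(0, 256, 17)]
print(compare(stimuli))    # -> [] once the model matches the chip
```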

This step is critical if you are working from a published spec. For example,
the Z8000 divide instruction is completely specified, including boundary
cases. This was obviously the work of the Z8000 architect, and is what I
implemented. My testbench tested each of the specified boundary cases,
only to find that the actual chip did not properly handle one boundary case!
This was actually the hardest case to implement, and it seems that the
designer decided to signal overflow rather than properly handle the case of
the most-negative quotient (recall that the range of a 2's complement
number is asymmetric). Well, this change never made it into the published
documentation for the chip.
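The boundary case can be stated numerically. This is an illustrative
sketch only, not the Z8000's actual divide algorithm; the function names
and range constants are chosen for the example.

```python
# Worked sketch of the divide boundary case: a signed divide overflows
# when the quotient leaves the 16-bit two's-complement range
# [-32768, +32767]. Because that range is asymmetric, q == -32768 is a
# legal result; the shortcut of checking |q| > 32767 wrongly rejects it,
# which matches the behavior described above.

Q_MIN, Q_MAX = -0x8000, 0x7FFF

def divide_correct(dividend, divisor):
    """Signed divide with the spec's range check; None signals overflow."""
    q = int(dividend / divisor)        # truncate toward zero
    return None if not (Q_MIN <= q <= Q_MAX) else q

def divide_shortcut(dividend, divisor):
    """Same divide, but with the symmetric |q| check."""
    q = int(dividend / divisor)
    return None if abs(q) > Q_MAX else q

# The most-negative-quotient case: -32768 / 1
print(divide_correct(-32768, 1))   # -> -32768 (legal per the spec)
print(divide_shortcut(-32768, 1))  # -> None   (flagged as overflow)
```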

This "specification/implementation disconnect" is one of the more
difficult aspects of this process. Without detailed and accurate
specifications, the task can be impossible. I would add this to your list
of important factors to consider.
Post by Ryan
If anyone would like to contribute to this initial brainstorming, I'd be
grateful.
There are a number of cases where this avenue is the most logical or even
the least expensive. (Think about the cost of requalifying flight-critical
software, for example.) Emulating an older design on a fast new whiz-bang
chip does nothing except postpone the problem, because what happens when
that chip is obsolete in 18 months (or worse, in the middle of your
redesign)? Having the design in a retargetable HDL format makes the
obsolescence problem manageable. Emulation is also grossly inefficient in
terms of power, and can't hope to be hardware-compatible except at the
edges of the system, if then.
Post by Ryan
Rupert.
Hal Murray
2004-11-13 19:22:33 UTC
Permalink
Post by Monte Dalrymple
And unless you have the time and inclination to figure out the internal
timing for this kind of subsystem (a non-trivial task) you will never be
able to achieve cycle accuracy. A similar problem arises whenever you have
more than one clock domain in the device, as the original synchronization
strategy will be very difficult to discern.
Why not? Old cycle times were slow by modern standards. Just add
enough delay to match the specs.
--
rickman
2004-11-14 17:15:30 UTC
Permalink
Post by Hal Murray
Post by Monte Dalrymple
And unless you have the time and inclination to figure out the internal
timing for this kind of subsystem (a non-trivial task) you will never be
able to achieve cycle accuracy. A similar problem arises whenever you have
more than one clock domain in the device, as the original synchronization
strategy will be very difficult to discern.
Why not? Old cycle times were slow by modern standards. Just add
enough delay to match the specs.
You wouldn't need to *match* the timings, only meet them. You can
always provide more setup time or allow less setup time from the
peripheral.
--
Rick "rickman" Collins

***@XYarius.com
Ignore the reply address. To email me use the above address with the XY
removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design URL http://www.arius.com
4 King Ave 301-682-7772 Voice
Frederick, MD 21701-3110 301-682-7666 FAX
vax, 9000
2004-11-13 22:30:13 UTC
Permalink
Post by Monte Dalrymple
It's actually an interesting exercise to try to figure out how the
original design was implemented to come up with the original timings. I
have done this twice, for the Z180 and the Z8000. I was able to match the
Z180 clock cycle timing in all cases. The Z8000 was a different story, for
two reasons. First, the exact timing...
Since you are an expert on the Z8000, I'd like to ask you a question about
the Z8000, out of curiosity. I heard that the Z8000 had bugs. Do you know
what these bugs were, and whether they were corrected? Thank you.

vax, 9000
Monte Dalrymple
2004-11-14 01:01:22 UTC
Permalink
Post by vax, 9000
Post by Monte Dalrymple
It's actually an interesting exercise to try to figure out how the
original design was
implemented to come up with the original timings. I have done this twice,
for the
Z180 and the Z8000. I was able to match the Z180 clock cycle timing in all
cases. The Z8000 was a different story, for two reasons. First, the exact
timing...
Since you are an expert on the Z8000, I'd like to ask you a question about
the Z8000, out of curiosity. I heard that the Z8000 had bugs. Do you know
what these bugs were, and whether they were corrected? Thank you.
vax, 9000
You can download the spec for my clone design at

http://www.systemyde.com/pdf/y8002.pdf

All of the differences between the published spec and what we found with
the actual silicon are detailed there. I think that most of the bugs were
back-annotated into the spec. For example, register R0 can't be used
with some instructions, for no obvious reason. I think that some of the
"undefined" flag settings were actually bugs that were deemed not
important. The biggest one, in my opinion, had to do with divide not
handling the one boundary case, as I mentioned in the previous post.
And there is also the issue of cycle counts for both multiply and divide
that can't be correct in the published spec.

Monte
Jim Granville
2004-11-14 00:16:51 UTC
Permalink
Monte Dalrymple wrote:

<snip>
Post by Monte Dalrymple
Post by Ryan
4) How close can the internal timings be recreated?
It's actually an interesting exercise to try to figure out how the original
design was implemented to come up with the original timings. I have done this twice,
for the Z180 and the Z8000. I was able to match the Z180 clock cycle timing in all
cases. The Z8000 was a different story, for two reasons. First, the exact
timing for the case of the interrupt acknowledge was not specified except "1 to 7
clock cycles" for an aborted instruction fetch. Second, the published timing for
both the multiply and divide instruction was clearly incorrect, as it did not
account for the different addressing modes correctly, besides not making sense (from
a clock cycle standpoint) relative to whether a bit in the divisor was set or
not.
<snip>

Interesting. Sounds like a lot of work on the Z8000; can you elaborate on
the reasons/needs for this core in particular?
It could also be a good example for the OP.

-jg
Monte Dalrymple
2004-11-14 00:48:37 UTC
Permalink
Post by Jim Granville
<snip>
Post by Monte Dalrymple
Post by Ryan
4) How close can the internal timings be recreated?
It's actually an interesting exercise to try to figure out how the original
design was implemented to come up with the original timings. I have done this twice,
for the Z180 and the Z8000. I was able to match the Z180 clock cycle timing in all
cases. The Z8000 was a different story, for two reasons. First, the exact
timing for the case of the interrupt acknowledge was not specified except "1 to 7
clock cycles" for an aborted instruction fetch. Second, the published timing for
both the multiply and divide instruction was clearly incorrect, as it did not
account for the different addressing modes correctly, besides not making sense (from
a clock cycle standpoint) relative to whether a bit in the divisor was set or
not.
<snip>
Interesting, Sounds a lot of work on the Z8000, can you elaborate on the
reasons/needs for this core, in particular.
Could also be a good example, for the OP.
-jg
The original customer for this design makes air data computers, and
projects demand to continue well beyond when the "obsolete part stock"
quantities of the Z8000 will be around. Since the software for this system
has to be FAA certified, changing even one line of code is horrendously
expensive. I'm sure that the OP was talking about exactly these kinds of
applications. There are a number of similar applications out there,
because the Z8000 was the first MIL-qualified 16-bit CPU and was designed
into quite a few military and mil-spec systems. These are the kinds of
systems with very long lifetimes. I know that the Z8000 was used in the
F-15, the F-16, the 747 and the 757, for example. All of these aircraft
are still flying and are still in production as far as I know. These kinds
of applications are the exact opposite of the more common
"throw-it-away-in-18-months" mindset that most people deal with today.

Monte
Jim Granville
2004-11-14 02:16:03 UTC
Permalink
Post by Monte Dalrymple
Post by Jim Granville
Interesting, Sounds a lot of work on the Z8000, can you elaborate on the
reasons/needs for this core, in particular.
Could also be a good example, for the OP.
-jg
The original customer for this design makes air data computers, and
projects demand to continue well beyond when the "obsolete part stock"
quantities of the Z8000 will be around. Since the software for this system
has to be FAA certified, changing even one line of code is horrendously
expensive. I'm sure that the OP was talking about exactly these kinds of
applications. There are a number of similar applications out there,
because the Z8000 was the first MIL-qualified 16-bit CPU and was designed
into quite a few military and mil-spec systems. These are the kinds of
systems with very long lifetimes. I know that the Z8000 was used in the
F-15, the F-16, the 747 and the 757, for example. All of these aircraft
are still flying and are still in production as far as I know. These kinds
of applications are the exact opposite of the more common
"throw-it-away-in-18-months" mindset that most people deal with today.
...and industrial systems are somewhere in between.

Did you need to get certification for the Z8000 SoftCPU? It seems this
would need to be fully qualified as well, or has the MIL/FAA not quite
caught up with the idea of a SoftCPU?
-jg
Monte Dalrymple
2004-11-14 03:37:59 UTC
Permalink
Post by Jim Granville
Post by Monte Dalrymple
Post by Jim Granville
Interesting, Sounds a lot of work on the Z8000, can you elaborate on the
reasons/needs for this core, in particular.
Could also be a good example, for the OP.
-jg
The original customer for this design makes air data computers, and
projects demand to continue well beyond when the "obsolete part stock"
quantities of the Z8000 will be around. Since the software for this system
has to be FAA certified, changing even one line of code is horrendously
expensive. I'm sure that the OP was talking about exactly these kinds of
applications. There are a number of similar applications out there,
because the Z8000 was the first MIL-qualified 16-bit CPU and was designed
into quite a few military and mil-spec systems. These are the kinds of
systems with very long lifetimes. I know that the Z8000 was used in the
F-15, the F-16, the 747 and the 757, for example. All of these aircraft
are still flying and are still in production as far as I know. These kinds
of applications are the exact opposite of the more common
"throw-it-away-in-18-months" mindset that most people deal with today.
..and industrial systems are somewhere in-between.
Did you need to get certification for the Z8000 SoftCPU ? - it seems
this would need to be fully qualified as well, or have the MIL/FAA
not quite caught up with the idea of SoftCPU ?
-jg
I am not sure of the specifics, because the customer took this
responsibility. I suspect that the hardware certification process is
easier than the software certification process. I saw a copy of the
procedure used for software certification and I don't know how anyone
could actually get any code written in a finite amount of time following
the procedure. But then, I'm a hardware guy. I do know that the customer
found my testbench to be very useful. As I mentioned in an earlier post,
it checked every instruction, flag, register combination, addressing mode
and boundary condition. The customer ran it with a real chip and my design
and compared the results. Then I modified my design to match the chip
where there was a difference. The only things I didn't match were the
interrupt acknowledge timing and the execution time for multiply and
divide. My design used fewer clocks, and the customer was satisfied with
this result.

Monte
Jim Granville
2004-11-14 03:49:25 UTC
Permalink
<snip>
Post by Monte Dalrymple
Post by Jim Granville
Did you need to get certification for the Z8000 SoftCPU ? - it seems
this would need to be fully qualified as well, or have the MIL/FAA
not quite caught up with the idea of SoftCPU ?
-jg
I am not sure of the specifics, because the customer took this
responsibility.
I suspect that the hardware certification process is easier than the
software
certification process. I saw a copy of the procedure used for software
certification and I don't know how anyone could actually get any code
written in a finite amount of time following the procedure. But then, I'm a
hardware guy. I do know that the customer found my testbench to be
very useful.
Testbenches are always underappreciated.
Post by Monte Dalrymple
As I mentioned in an earlier post, it checked every instruction, flag,
register combination, addressing mode and boundary condition. The customer
ran it with a real chip and my design and compared the results. Then I
modified my design to match the chip where there was a difference. The
only things I didn't match were the interrupt acknowledge timing and the
execution time for multiply and divide. My design used fewer clocks, and
the customer was satisfied with this result.
An impressive case study. Did they use an ASIC or an FPGA (which one?),
and has that needed a process revision yet?
-jg
Monte Dalrymple
2004-11-14 04:36:10 UTC
Permalink
Post by Jim Granville
Post by Monte Dalrymple
I am not sure of the specifics, because the customer took this
responsibility.
I suspect that the hardware certification process is easier than the
software
certification process. I saw a copy of the procedure used for software
certification and I don't know how anyone could actually get any code
written in a finite amount of time following the procedure. But then, I'm a
hardware guy. I do know that the customer found my testbench to be
very useful.
Testbenches are always underappreciated.
Indeed.
Post by Jim Granville
Post by Monte Dalrymple
As I mentioned in an earlier post, it checked every
instruction,
flag, register combination, addressing mode and boundary condition. The
customer ran it with a real chip and my design and compared the results.
Then I modified my design to match the chip where there was a difference.
The only things I didn't match was the interrupt acknowledge timing and
the execution time for multiply and divide. My design used fewer clocks,
and the customer was satisfied with this result.
An impressive case-study. Did they use an ASIC, or an FPGA (which?),
and has that needed process revision yet ?
-jg
The target was an Actel FPGA. It hasn't gone through a process revision
yet that I am aware of.
vax, 9000
2004-11-14 05:35:05 UTC
Permalink
Post by Monte Dalrymple
Post by Jim Granville
Post by Monte Dalrymple
I am not sure of the specifics, because the customer took this
responsibility.
I suspect that the hardware certification process is easier than the
software
certification process. I saw a copy of the procedure used for software
certification and I don't know how anyone could actually get any code
written in a finite amount of time following the procedure. But then,
I'm a
Post by Jim Granville
Post by Monte Dalrymple
hardware guy. I do know that the customer found my testbench to be
very useful.
Testbenches are always underappreciated.
Indeed.
Agree. I only test my hobby project with a few settings and pray that
everything else works.

Did you try Zilog 16C01/16C00 (CMOS version Z8000)? Did you find any
difference between Z8000 and 16C00?

vax, 9000
Monte Dalrymple
2004-11-14 16:19:57 UTC
Permalink
Post by vax, 9000
Post by Monte Dalrymple
Post by Jim Granville
Post by Monte Dalrymple
I am not sure of the specifics, because the customer took this
responsibility.
I suspect that the hardware certification process is easier than the
software
certification process. I saw a copy of the procedure used for software
certification and I don't know how anyone could actually get any code
written in a finite amount of time following the procedure. But then,
I'm a
Post by Jim Granville
Post by Monte Dalrymple
hardware guy. I do know that the customer found my testbench to be
very useful.
Testbenches are always underappreciated.
Indeed.
Agree. I only test my hobby project with a few settings and pray that
everything else works.
Did you try Zilog 16C01/16C00 (CMOS version Z8000)? Did you find any
difference between Z8000 and 16C00?
vax, 9000
As far as I know, the comparison was only done with the Z8000.
However, I am pretty sure that the CMOS conversion at Zilog was
done directly from the schematics, so the devices would be identical.

As an aside, it's amusing to look at the Zilog website for a description
of the 16C0X. It starts off with "RISC-like load/store architecture"...
The Z8000 is classic CISC and is not anywhere near being load/store.
The only thing RISC-like about it is the fact that the instruction set is
quite regular. Classic marketing.
bh
2004-11-14 19:44:21 UTC
Permalink
There are a number of applications where old 8080 or Z8000 code
has gone through a significant number of code reviews and the logic
is considered safe for some special applications. Nevertheless, having
confidence in the soft CPU is pretty important if the
qualify-by-similarity argument is to hold. Something like an 8080
is a much simpler task to verify, but the verification cannot be skipped.

Likewise, you really need to do some overall system testing to
ensure subtle timing differences have not resulted in unexpected
consequences. Just meeting/exceeding the timing of the original
component is not good enough (IMHO) since there may be
unknown dependencies on timing that might have been caught
in the original qualification program.

With respect to FAA certification, I believe RTCA DO-254 addresses
some of these issues.

-BH
Post by Jim Granville
Post by Monte Dalrymple
Post by Jim Granville
Interesting, Sounds a lot of work on the Z8000, can you elaborate on the
reasons/needs for this core, in particular.
Could also be a good example, for the OP.
-jg
The original customer for this design makes air data computers, and projects
demand to continue well beyond when the "obsolete part stock" quantities of
the Z8000 will be around. Since the software for this system has to be FAA
certified, changing even one line of code is horrendously expensive. I'm sure
that the OP was talking about exactly these kinds of applications. There are
a number of similar applications out there, because the Z8000 was the first
MIL-qualified 16-bit CPU and was designed into quite a few military and
mil-spec systems. These are the kinds of systems with very long lifetimes. I
know that the Z8000 was used in the F-15, the F-16, the 747 and the 757,
for example. All of these aircraft are still flying and are still in
production as far as I know. These kinds of applications are the exact
opposite of the more common "throw-it-away-in-18-months" that most people
deal with today.
..and industrial systems are somewhere in-between.
Did you need to get certification for the Z8000 SoftCPU? It seems
this would need to be fully qualified as well, or have the MIL/FAA
not quite caught up with the idea of a SoftCPU?
-jg
Jonathan Bromley
2004-11-15 00:22:04 UTC
Permalink
"bh" <***@nosuch.com> wrote in message news:<pAOld.5262$***@news4.srv.hcvlny.cv.net>...

[snip]
Post by bh
Just meeting/exceeding the timing of the original
component is not good enough (IMHO) since there may be
unknown dependencies on timing that might have been caught
in the original qualification program.
Tee hee. How very true.

Many years ago I worked on an academic project that
involved modifying an existing industrial robot controller
(an ASEA IRb6, if anyone's interested). It was a very
early design with a single Intel 8008 CPU, and it had no
useful external data comms links. So we replaced the
CPU board with our own version that had a Z80 on it
instead (advanced stuff, eh!) and, at least to start
with, we wanted to run the original code on it.
We disassembled the maker's 8008 machine code and
re-assembled it for the Z80 (the Z80's instruction set and
architecture were a proper superset of the 8008's, but the
binary instruction encodings were different).

Everything worked perfectly except that, in one mode
of operation, the robot moved at double speed. It
turned out that the original designers had not been
able to make the 8008's interrupt service routine
run fast enough, and therefore it missed every second
clock interrupt when in that particular mode. They
had knowingly compensated for that by multiplying
all the "speed" constants by two. Our Z80 design
processed everything about 5x faster than the old
version, and therefore it *didn't* miss alternate
clock interrupts.

We had not anticipated this behaviour, because we knew
that everything was controlled by clock interrupts
and therefore assumed that the timing and speeds would
all be OK.
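The speed-doubling Jonathan describes drops straight out of the arithmetic: an ISR that runs longer than one interrupt period loses every second tick, so the firmware's speed constants were pre-doubled to compensate; a CPU fast enough to catch every tick then runs those same constants at twice the intended rate. A throwaway simulation (all timing numbers invented for the sketch, nothing measured from the real controller) makes the dependency visible:

```python
# Illustrative only: how ISR execution time changes the *effective* tick
# rate a control loop sees, when a busy CPU simply misses interrupts.

def ticks_serviced(interrupt_period_us, isr_time_us, n_interrupts):
    """Count interrupts actually serviced when the ISR can't be re-entered.

    If the ISR is still running when the next interrupt fires, that
    interrupt is lost outright (no queuing), as on the original 8008 board.
    """
    busy_until = 0.0
    serviced = 0
    for i in range(n_interrupts):
        t = i * interrupt_period_us
        if t >= busy_until:            # CPU is free: service this tick
            serviced += 1
            busy_until = t + isr_time_us
    return serviced

# Slow 8008-era ISR, 1.5 periods long: every second interrupt is lost,
# which is why the speed constants were pre-doubled.
print(ticks_serviced(100, 150, 1000))   # 500

# A 5x-faster port of the same ISR misses nothing, so the pre-doubled
# constants now drive the robot at twice the intended speed.
print(ticks_serviced(100, 30, 1000))    # 1000
```

This is exactly the kind of hidden timing dependency bh warns about above: both systems "meet the timing", yet the observable behaviour differs by a factor of two.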

Happy days, when it was possible to reverse-engineer
by hand the whole embedded firmware of a non-trivial
product...
--
Jonathan Bromley