Teensy Z80 Part 1 – Intro, Memory, Serial I/O and Display (domipheus.com)
96 points by mrry on Jan 11, 2015 | 16 comments


This gave me a nice warm fuzzy feeling as I remembered bringing up bare metal Z80s 'back in the day' (can it really be 30 years ago? - yes). One trick I used over and over was to start with a blank EPROM. Every byte is 0xff (pretty much the exact opposite of his 0x00 starting point). The nice thing about that was that 0xff is (from memory) a RST 7 instruction, which again (from memory) is a kind of one-byte subroutine call to address 0x0038. This is nice because it means the CPU (should) settle into a kind of infinite loop, endlessly reading 0xff from 0x0038 AND (very importantly) pushing the return address 0x0039 onto the stack. Of course there is no stack at this stage - but that won't stop the CPU trying, and the effect is an endless series of two-byte writes of 0x39 then 0x00 (little endian!) to an address that decrements forever through the 64K address space. It would typically take on the order of a second to cycle once through 64K, and the A15 line would toggle at this rate, the A14 line twice as fast, etc. You could troubleshoot the entire CPU bus interface this way with nothing more than an oscilloscope. Good times.
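
If it helps to picture it, here's a rough C simulation of what the CPU ends up doing (my own toy sketch, nothing to do with the article's code, and not cycle accurate):

    /* Toy model of a Z80 free-running on an erased EPROM (every byte 0xff).
       0xff decodes as RST 38h: push the return address, jump to 0x0038.
       Since 0x0038 also reads back as 0xff, the CPU just keeps pushing
       0x0039 while SP walks down through the whole 64K address space. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t sp = 0x0000;              /* SP is whatever it powers up as */
        for (int i = 0; i < 4; i++) {      /* show the first few pushes */
            sp--;
            printf("write 0x00 to 0x%04X\n", (unsigned)sp);  /* high byte of 0x0039 */
            sp--;
            printf("write 0x39 to 0x%04X\n", (unsigned)sp);  /* low byte, so the pair
                                                                sits little-endian */
        }
        /* 32768 pushes later SP has wrapped through all 64K, so A15 toggles
           once per pass, A14 twice as fast, and so on - scope-friendly. */
        return 0;
    }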


Just to be clear here: yes, he is using a 32-bit 72 MHz Cortex-M4 and a 12 MHz Cortex-M0 as an I/O controller for an 8-bit 8 MHz CPU. It also appears that the RAM for the Z80 is coming out of the 64 KB built into the M4.
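
In other words the Teensy is essentially the Z80's RAM and glue logic all at once. In spirit it boils down to something like this (my own sketch with made-up pin helpers, not the author's code):

    /* Sketch only: the MCU watches the Z80's bus and answers memory cycles
       from an array in its own SRAM. The bus helpers are hypothetical
       stand-ins for whatever GPIO reads/writes the Teensy really does. */
    #include <stdint.h>

    #define Z80_RAM_SIZE 4096u                  /* illustrative size only */
    static uint8_t z80_ram[Z80_RAM_SIZE];

    /* hypothetical pin helpers */
    extern int      mreq_active(void);          /* /MREQ asserted?   */
    extern int      rd_active(void);            /* /RD asserted?     */
    extern uint16_t read_address_bus(void);     /* sample A0-A15     */
    extern uint8_t  read_data_bus(void);        /* sample D0-D7      */
    extern void     drive_data_bus(uint8_t b);  /* drive D0-D7       */

    void service_memory_cycle(void) {
        if (!mreq_active())
            return;                             /* not a memory cycle    */
        uint16_t addr = read_address_bus() % Z80_RAM_SIZE;
        if (rd_active())
            drive_data_bus(z80_ram[addr]);      /* read: answer the Z80  */
        else
            z80_ram[addr] = read_data_bus();    /* write: latch the byte */
    }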

Even the MCUs built into the screen and SD card are probably all more powerful.

This is awesome.


It is awesome. I had a Z80 machine running CP/M (and then Z/PM) in the '80s, and when SCSI hard drives became a thing I remember hooking up a 20 MB one to the Z80 (max hard drive size 8 MB, so it showed up as 3 volumes :-) and noting that the computer system on the disk drive was much more powerful than my Z80 system. It was an interesting inversion, but one which resurfaces in computation a lot, where a base class for something might be quite complex but the controlling class relatively simple. In the software world you can write 'hello_world.c' in like 5 lines of C code, but the amount of code that actually has to run on a Linux machine to make that work is quite extensive.
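
Those five lines being, more or less:

    /* hello_world.c - trivial on its face, mountains of OS underneath */
    #include <stdio.h>

    int main(void) {
        printf("Hello, world!\n");
        return 0;
    }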


The SCSI controller I got for my Amiga 2000 had a Z80 on it.

I've come to see IBM-compatible PCs of the '80s and up until some point in the '90s as a curious aberration, in that until GPUs became common they were the systems most likely to "just" have a single CPU and no co-processors, outside of really stripped-down low-end home computers.

E.g. an 8-bit home computer with just a tape deck might have had just one CPU. But as with your Z80 machine, for many you got a second CPU as part of the package the moment you added a hard drive or even a floppy drive (the 1541 floppy drive for the C64, for example, which was a full 6502-based computer that you could download programs to over the serial bus). And many others had CPUs all over the place (the Amiga 500 and 2000 I had used 6502-compatible cores in their keyboard controllers, for example...)

The PC first "caught up" in the early '90s, and finally got where we are now, where I have x86 servers at work with dozens of non-x86 CPUs in things like hard drives and on controller cards.


Interesting viewpoint. I always thought of the Amiga as catching up when they threw away the funny custom chips and went full PowerPC! Part of the problem with the later models was the chip bandwidth bottleneck (as was related to me by an Amiga programmer), and of course the dreadful expense for Commodore of keeping everything up to date. But you're right about the ARM CPUs everywhere these days. I suppose we're back there again today.

So, now that we're back where we were, I wonder if the cycle will repeat. After all, even if we didn't get Larrabee, we did get the Intel HD3000.


Dropping the custom chips didn't happen in any Amiga. AmigaOS only came to run on PowerPC after years of wrangling over rights following the bankruptcy of Commodore, by which point the IP was severely outdated, and nobody really knows who owns all the rights anymore anyway.

The PPC transition happened because there had been a number of 3rd-party PPC co-processor cards for classic Amigas (that let you run code on both m68k and PPC), so there was already a viable, reasonably popular target to port AmigaOS to.

At the time of the Commodore bankruptcy, Commodore was actually not going towards PPC but towards PA-RISC, coupled with a new set of custom chips ("Hombre") that dropped planar graphics for chunky pixels and included 3D acceleration. Dropping m68k was more out of necessity: Motorola failed to get the performance needed out of the 68040 and 68060, and the next generation was outright cancelled, so there was no alternative but to transition. But there were never any plans to stop using custom chips.

> part of the problem with the later models being the chip bandwidth bottleneck (as was related to me by an Amiga programmer)

Chip bandwidth was a bottleneck for "everyone". Commodore was dealing with that by including VRAM in the next generation chip designs. But the Amiga was more vulnerable in this respect because a lead in multimedia was essential to the Amiga image. Lots of people bought PCs even without graphics cards. People still started stuff from DOS at the time of the Commodore bankruptcy. Windows 3.x was not something people bought for the graphics and animation.

But nobody would buy an Amiga without good graphics performance.

A much bigger deal was that the planar graphics, which were the default for the Amiga until '92, also hampered efficient 3D.

> the dreadful expense for Commodore of keeping everything up to date.

Commodore actually never invested much in engineering. It was one of the ongoing stories of missed opportunities. They got extremely lucky breaks time and time again thanks to a series of extremely talented people delivering fantastic products at just the right time, and it finally caught up with them. Jack Tramiel was famously tightfisted, and after he was ousted Commodore had a bunch of managers who, to the outside, seemed to care very little about the company, and even less about engineering. What kept costing them money on engineering was a complete lack of focus: repeated re-designs, products that were canned right before release, management edicts to make changes way too late in product cycles.

> After all, even if we didn't get Larrabee, we did get the Intel HD3000.

Sure, but they don't represent a single general-purpose CPU core. It's inevitable we'll get more functionality on-chip as yields for larger dies increase. But what matters is whether we continue to see parallelisation and off-loading vs. seeing single-core performance start to rise rapidly again (I wouldn't bet on the latter).


And the wheel turns: http://www.catb.org/jargon/html/W/wheel-of-reincarnation.htm...

It's been going on for a long time.


I did kind of the same thing, but using an Arduino to simulate a disk drive in an MSX, a Z80-based computer.

http://codinglab.blogspot.be/2013/01/virtual-msx-disk-drive....


There's been a bunch of this kind of thing lately; see also the Propeddle, a 6502 paired with a Parallax Propeller MCU (http://propeddle.com/).


That's a cool piece of work. I spent a lot of time interfacing the Z80 (TRS-80) to the outside world using both memory mapping and port I/O. I seem to recall (hell, it's been over 25 years) that the documentation stated that the OUT opcode put only the lower 8 bits on the address bus, but in reality the whole 16 bits went out. Not sure if this was a documentation error or a Z80 bug. The Z80 had quite a few "undocumented" opcodes that could be used with care.


Yep, if you do "OUT (C),r", C goes to A0-A7 and B goes to A8-A15. 16-bit addressing!
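
So if you're decoding I/O on the bus side, the effective port address really is a full 16 bits. Roughly (my own C illustration, just to make the point concrete):

    /* What lands on the address bus during Z80 I/O output cycles. */
    #include <stdint.h>
    #include <stdio.h>

    /* OUT (C),r : C -> A0-A7, B -> A8-A15, i.e. the whole BC pair */
    static uint16_t io_addr_out_c(uint8_t b, uint8_t c) {
        return (uint16_t)(((uint16_t)b << 8) | c);
    }

    /* OUT (n),A : n -> A0-A7, A -> A8-A15 */
    static uint16_t io_addr_out_n(uint8_t a, uint8_t n) {
        return (uint16_t)(((uint16_t)a << 8) | n);
    }

    int main(void) {
        printf("OUT (C),r with BC=0x12FE  -> bus 0x%04X\n",
               (unsigned)io_addr_out_c(0x12, 0xFE));
        printf("OUT (0xFE),A with A=0x7F  -> bus 0x%04X\n",
               (unsigned)io_addr_out_n(0x7F, 0xFE));
        return 0;
    }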

I owned several TRS-80s (model I, III, IV), a Xerox 820 "Big Board", and a couple of Vector Graphics S-100 machines. All of them were lots of fun and I learned a ton.


Hey folks, writer here. Thanks for the great comments! Part 2 is already up, talking about interrupts (same blog), and I'll be cheating even more when the SD card interface comes online.


I think it would have more merit if he were using a CPLD/FPGA. But in any case, it's awesome!


Great low-level hacking; interesting to see an LED board for debugging. I was thinking about something similar a few weeks ago; see http://ledlogics.com


Awesome. I have this chip, going to try something similar.


Beautiful.



