Friday, 20 December 2013

Adventures with Programmable Logic

So I have received the following goodies:
  1. XC9572 x 3 (PLCC44)
  2. Platform Cable USB programmer (Model DLC9G)
  3. Some PLCC44 PCB sockets (through hole)
  4. PLCC44 to DIP44 converter for the breadboard
  5. PLCC extraction tool
The XC9572 is a simple part, by modern programmable logic standards.  Not exactly capable of running a soft processor, the '9572 can still be used for one of the less exciting parts of a computer (at least a small one): glue logic.  It can also, given a suitable design, be used to implement some fairly complex IO functions.

If I'd relied on a supplier in China for my JTAG programmer I would still be waiting.  After about 4 weeks I finally gave up and ordered another one from a chap in the UK.  This gave me my first small headache: getting the programmer working in a VirtualBox Windows 7 environment.  The DLC9G needs, like many USB gadgets, firmware to be loaded before it can be used.  This causes a problem for VirtualBox: upon loading the firmware the USB product ID changes, which breaks USB pass-through.  The solution is simple enough: add a USB filter so that devices with the programmer's vendor and product IDs are always claimed by the Windows VM.

Once that was done, it was time to test out some HDL designs!

But first a word on the various ways of expressing the design to implement.  There are roughly three ways:
  • By drawing a schematic.  In the good old days, this was about the only way.
  • By using a low-level hardware description language like ABEL.
  • By using a "higher" level language like VHDL or Verilog.
My initial efforts focused on ABEL.  This seemed sensible because it most closely describes the actual wiring in the circuit; every connection (more or less) is described in text form.  I figured that if I could draw a shift register using flip-flops, I could describe it in ABEL.  Combinational logic, too, seemed simple enough.

The initial goal was (and is) to implement the glue logic for the IO "half" of the computer in a CPLD.  This includes the IO decoder enable generator as well as the AY-3-8912 glue.  In other words, these bits of the circuit:


After reading some tutorials and references on ABEL, I came up with the following implementation for my IO decoder:

MODULE ioglue

// Defined inputs

ECLOCK pin 1;
WRITE pin 2;
READ pin 3;
IO pin 4;
A0 pin 8;
A4..A6 pin 9,11,12;

// Defined outputs

DUART pin 38 istype 'com'; // /DUART CS
LATCH pin 39 istype 'com'; // /LATCH CS
IDE pin 42 istype 'com'; // /IDE CS
AY pin 43 istype 'com'; // /AY CS
VIA pin 44 istype 'com'; // /VIA CS

BDIR pin 29 istype 'com'; // AY BDIR
BC1 pin 33 istype 'com'; // AY BC1
AY_A0 pin 34 istype 'com'; // AY A0

RESET pin 34 istype 'com'; // RESET

EQUATIONS

DUART = !(!IO & (!A6 & !A5 & !A4));
LATCH = !(!IO & (!A6 & !A5 &  A4));
IDE = !(!IO & (!A6 &  A5 & !A4));
AY = !(!IO & (!A6 &  A5 &  A4));
VIA = !(!IO & ( A6 & !A5 & !A4));

BDIR = !(WRITE # AY); // NOR
BC1 = !(A0 # AY); // NOR
AY_A0 = !AY;

END

This is just simple Boolean logic used to implement a decoder, with some NOR gates for the sound IC.  Notice the pin assignments at the top of the file - they tie the inputs and outputs to particular pins on the CPLD.
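
The decoder equations are easy to sanity-check in software.  Here is a minimal Python model of them (my naming, not part of the ABEL source; all enables are active-low, as in the design above):

```python
# Python model of the ABEL IO decoder equations, for sanity-checking.
# All outputs are active-low chip selects, as on the '9572.

def decode(io, a6, a5, a4):
    """Return (/DUART, /LATCH, /IDE, /AY, /VIA) for the given inputs."""
    duart = int(not (not io and (not a6 and not a5 and not a4)))
    latch = int(not (not io and (not a6 and not a5 and a4)))
    ide   = int(not (not io and (not a6 and a5 and not a4)))
    ay    = int(not (not io and (not a6 and a5 and a4)))
    via   = int(not (not io and (a6 and not a5 and not a4)))
    return duart, latch, ide, ay, via

# With /IO asserted (low) and A6..A4 = 000, only the DUART is selected:
print(decode(0, 0, 0, 0))  # -> (0, 1, 1, 1, 1)
# With /IO deasserted, nothing is selected:
print(decode(1, 0, 0, 0))  # -> (1, 1, 1, 1, 1)
```

Running each of the eight A6..A4 patterns through this confirms that at most one enable is ever active at a time, which is the whole point of a decoder.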

I'm not quite ready to replace the 74HC138 and NOR gates in the IO breadboard with the CPLD, but after programming an XC9572 I tried out my glue design on the breadboard with some switches and LEDs and all looked good!

The next experiment to further my ABEL skills was to implement some sequential logic.  Probably the simplest thing to implement was a counter.  For that I would need a clock source and, predictably, I went for one of the all-time famous ICs: the 555 timer.

At this point things started to go wrong.  No matter what I tried, I could not get a counter to work properly.  The oddest thing is the fault shared by all the counters I implemented: upon counting to binary 1111 (in a 4 bit counter), the next value output is not 0000 but 0001, which is obviously not right.  Back to the drawing board then.

The next step is to follow some tutorials in a different HDL and try out some different designs, including counters.  After playing about with Verilog, I have decided to focus my efforts on VHDL instead.  It looks "cleaner" than Verilog (don't ask how).  I certainly need to slow down a bit and start learning again.

Over the next few weeks (the Christmas holidays) I hope to follow some very interesting looking tutorials that I've found, and achieve a working counter.  There is not really much point in trying to do more complicated things until this is done.

In the meantime, to prove that the XC9572 will work in my 8 bit computer project, I have implemented someone else's design in my CPLD.  The 65SPI is an SPI bus driver, implemented in ABEL by Daryl Rictor, and intended to be implemented in an XC9572.  After playing with it for a few days, I've come to the conclusion that it is an amazing piece of HDL.  It has 8 Slave Select outputs, multiple speed options, and is really easy to program. Here it is plugged into the breadboard, replacing the 6522 VIA which was performing SPI duty:


As before, the SPI controller is interfaced to a DS1305 Real Time Clock.

Code for talking to the 65SPI is implemented in the assembly file spi.asm, and replaces the previous bit-banging on the VIA.  Interestingly, I tried exactly what I described previously in this blog: I did not recompile my setclock.asm and showclock.asm user programs, and instead changed the implementation of the SPI API from bit-banging to talking to the 65SPI, and (as I hoped and expected) everything worked just perfectly.  The "user level" code neither knew nor cared that the implementation of the SPI interface had changed.

One of the interesting things about using the 65SPI is that SPI is now much faster.  As can be seen in this grab from the Logic Analyser, it is now so fast that the 6809 cannot keep up:


The SPI clock line is now running at 500kHz (on a 4MHz 6809), which is at least 10 times faster than was achieved when bit-banging.  As can be seen, it is so fast that the 6809 cannot supply data fast enough. This is because of two things: polling for the 65SPI going into the idle state before it loads the next byte (this polling loop could probably be replaced with a couple of nops (no operation)), and actually getting the bytes to send from memory.  Even with the large gap between bytes, the 65SPI is still at least 5 times faster than I achieved with the bit-banging approach.

The 65SPI is very flexible and I've only just started to learn about its subtleties. One thing you can configure is the data rate, by dividing the clock down by a configurable amount.  It also has a "fast" mode, which I've yet to play with.  All in all the 65SPI is a kind of model for me to follow with what I want to do with the CPLDs in my computer.  One thing to look into is whether it's possible to implement an I2C bus driver in the same way.  I2C is quite a bit more complicated than SPI though, so it may not be possible.

But this is far in the future.  In the mean time, I have ordered some more DIP adapters so I can play with one of my other XC9572s and try, once again, to implement some simple sequential logic, starting with getting a fully working counter.

In "other news", I have received a replacement CompactFlash and a couple of 8MHz 6809s.  At some point I will solder up a new "computer PCB" and switch back to an 8MHz core computer, laying to rest the ghosts of blown-up parts caused by a mistake with a power supply.

There's not much time before Christmas to get my computer to play Christmas music, so I will focus on that for the next few days....

Sunday, 10 November 2013

Hardware problems, XMODEM, and musings on programmable logic

In my last post I mentioned the problems I've been having with powering my computer. A multimeter on various ICs shows a big drop across the power pins on some of the ICs. The AY-3-8912, for instance, was getting only about 3V. I have worked around these problems by using an additional power source to counter the voltage drops. I have also bought an old computer PSU. These are nice because not only can they supply high currents, but they also have lines for pretty much any voltage I will ever need: 5V, 12V, 3.3V etc.

Sadly I have made a big blunder: in playing with the power supply I inadvertently connected the power around the wrong way, frying the 6809, RAM, EEPROM and CompactFlash. Very annoying, but completely predictable because I didn't use the correct power connector onto the breadboard. Luckily I have spares for the 6809 (at least a 4Mhz version) and memories, but will have to buy a new CompactFlash and eventually a replacement 8Mhz 6809.

In any case none of this is very satisfactory, and it should not be necessary either, since the Apple USB power supply I am using should be well capable of powering the computer. So after reading about other people's similar problems I have had some success in getting the whole computer powered by the original USB power supply.

The two tweaks I've done so far seem to have mostly fixed the voltage drop:
  • Instead of chaining the breadboards together, and making the length of power wires and number of connections larger and larger as more boards are added, the boards are instead arranged in a star.  So no board is any more than one board away from the power connection.
  • I have also added some decoupling capacitors (100uF electrolytics) to each board.
The voltage drop has now all but disappeared.  I would still like to switch to using the PC power supply at some point because eventually the USB power supply will reach its limit, but I'll have to make it impossible to plug in the power supply backwards first!

Since I no longer have a working CompactFlash card, I need another way to get programs into RAM if I'm going to continue working on "user" programs. Since I have the serial line, the logical thing to do is use that. There are several methods for sending files down serial lines, including XMODEM, YMODEM, ZMODEM and Kermit.  The simplest is XMODEM.  It is so simple, in fact, that I've implemented it in the monitor in a few dozen lines of assembly.  I can now transfer files from the terminal program (minicom) into the computer's RAM.  It isn't perfect though; because XMODEM file transfers are initiated by the receiving terminal, there is a small delay before the file transfer starts as the sender has to timeout.  It works well though; I can send a program from the Linux PC into the 6809 computer and run it, just as I had with the CompactFlash.  I'm of course still going to buy a new CF card so I can work on the Minix filesystem code.
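
To give a flavour of just how simple basic XMODEM is, here is the packet format sketched in Python (names are mine; the real receive-side code in the monitor is 6809 assembly): each 132-byte packet is an SOH byte, the block number, its complement, 128 data bytes and an 8-bit arithmetic checksum.

```python
# Sketch of basic (checksum-mode) XMODEM packet building and validation.

SOH = 0x01  # marks the start of a 128-byte data packet

def make_packet(block_num, data):
    """Build one 132-byte XMODEM packet; data is padded to 128 bytes."""
    data = data.ljust(128, b'\x1a')          # pad with SUB, as most senders do
    checksum = sum(data) & 0xFF              # 8-bit arithmetic checksum
    return bytes([SOH, block_num & 0xFF, (~block_num) & 0xFF]) + data + bytes([checksum])

def check_packet(packet, expected_block):
    """Receiver-side validation: header, block number, checksum."""
    if len(packet) != 132 or packet[0] != SOH:
        return None
    blk, blk_inv = packet[1], packet[2]
    if blk != (expected_block & 0xFF) or (blk ^ blk_inv) != 0xFF:
        return None
    data = packet[3:131]
    if (sum(data) & 0xFF) != packet[131]:
        return None
    return data

pkt = make_packet(1, b'Hello, 6809!')
print(check_packet(pkt, 1)[:12])  # -> b'Hello, 6809!'
```

The receiver ACKs or NAKs each packet, which is why the whole thing fits in a few dozen lines of assembly.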

Programmable Logic then.  I have wanted to try my hand at programmable logic for many years.  Ever since I found out about the ULA in the ZX Spectrum, the idea that a generic IC could be configured to carry out almost any logic function seemed compelling.  And modern versions of this idea, namely CPLDs (Complex Programmable Logic Devices) and FPGAs (Field Programmable Gate Arrays), are now available to the hobbyist, people like me!

In terms of uses to put to this technology my eventual aim is to implement some kind of simple custom IC for my 6809 computer. Hopefully it will contain various functions including replacing the glue logic (currently implemented in 74HC discrete logic), and some communications functions like an SPI controller, to replace the bit-banged SPI in the VIA.  There are quite a few other possibilities, like a simple display controller, or I could design my own sound interface.

But that is in the future.  I know next to nothing about this area, except some theory.  I have yet to find out anything detailed about the various HDLs (Hardware Description Languages) available, but will probably experiment with Verilog first.  I need to select which vendor to use, and select what parts from that vendor are suitable:
  • Generally it has to fit on my breadboard. Modern FPGAs have hundreds of pins and are surface mounted. 
  • It has to use 5V, since this is what the computer uses.  Again, modern PL devices run on 3.3V or lower.
  • Ideally the programming environment should run on OS X and should be free.
  • None of the parts should be too expensive.
So far I have narrowed my selection to Altera or Xilinx and their CPLD parts.  Both companies offer free tooling software, though unfortunately only for Windows or Linux and not OS X.  I have set up a Windows 7 virtual machine though.  So far I have installed the Xilinx tools (ISE) but the Altera software refuses to install correctly.  I will persevere, but if I don't have any luck I will likely go with Xilinx anyway; the tools look good and the actual CPLDs are cheap.

Assuming I go with Xilinx my shopping list so far looks like the following:
  1. XC9572PC44-10C (PDF) (several)
  2. PLCC->DIP adapter for the breadboard
  3. PLCC socket for when I eventually use the parts in a PCB 
  4. Xilinx USB Cable Platform 
  5. PLCC extraction tool
Hopefully the whole lot shouldn't cost more than £50, which will include a couple of XC9572s.  These are 5V CPLDs with 72 macrocells and about 1,600 usable gates.  This should easily be enough to implement all the glue logic I will need, as well as a communication controller or two, in a single IC.  The fact that these parts are PLCC is a bit of a pain, but luckily there are DIP adapters so I can still use the breadboard.

But first up I will be happy to get something really simple, like a counter, running in the CPLD.

I have to thank the folks over at forum.6502.org for their guidance with checking that my plan for getting into Programmable Logic wasn't completely nuts...

Tuesday, 5 November 2013

Sound, and running programs from CF

My investigation into MinixFS has revealed a nice and simple filesystem; probably the simplest filesystem you could implement that has UNIX-like semantics.  It has inodes, permissions, and all the usual UNIX things, but (luckily for me) the first iteration of Minix-FS mostly uses 16 bit datatypes.  This means it is "fairly" easy to interpret the data structures which make up directories, superblocks and inodes.

So far the following filesystem code has been written:
  • MBR parsing reads the offset into the first partition (if there is no MBR then the entire disk is the partition).
  • From this offset, the Minix superblock is read in.  From this block, the filesystem type can be determined, and the offset into the inode table can be worked out.
  • inodes are 32 bytes long, so 32 inodes fit in a 1Kbyte block.  Inodes are stored in consecutive sequence following the superblock.  So to read inode number N, N is divided by 32 to work out the block number to read (with the inode table's start offset added), and the remainder gives the starting point of the inode within the block.  This, of course, can be done with bit shifting and masking.
  • inodes contain links (block numbers) to data blocks, for both files and directories.  They also contain a 16 bit word with bits for type (file, directory, symlink etc) and permissions. Also an inode contains the file (or directory) size in bytes. Of course the filename is not in the inode, but is instead in a directory entry. A routine has been written to summarise this info, for a given inode.
  • Directory entries are 32 bytes long, with the first two bytes holding the inode number, and the remaining 30 bytes containing the file or directory name, padded with null characters.
  • So to list a directory by inode, the inode needs to be read in, and the datablocks followed.  For each directory entry the name can be retrieved.  To know if the entry references a file or directory, the inode for the entry can be read in, and the type (directory, file, devnode, etc), file size and permissions can be determined.  Without following the inode pointer when listing a directory, only the filename can be known.  Linux's ls command works the same way; only if an option like -l is given will inodes be loaded for each file to be listed.  This is the "l" monitor command, which takes an inode number for the directory to list.  0001 is the root inode.
  • To read the content of a file by inode, a similar process can be followed  except that the referenced datablocks are file data and not directory entries.  This is the "f" monitor command.  It takes a starting memory location and an inode number.
So far the above has been implemented and I can list directories and read files into RAM, but only through an inode reference.  Eventually reading a file or directory by filename will be supported.  This requires string matching against the files in the directory entries, something that is not entirely trivial in 8-bit assembly.
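
The block arithmetic above is easy to get wrong by one shift, so here is the same calculation sketched in Python (the 6809 code does this with shifts and masks; the constant names and the example table offset are mine).  One subtlety worth noting: Minix inode numbers start at 1, so inode 1 sits at offset 0 of the table.

```python
# Minix V1 filesystem layout arithmetic, mirroring the monitor's
# shift-and-mask approach.

BLOCK_SIZE = 1024    # Minix blocks are 1KB
INODE_SIZE = 32      # so 32 inodes per block
DIRENT_SIZE = 32     # 2-byte inode number + 30-byte null-padded name

def inode_location(n, inode_table_start_block):
    """Return (block_number, byte_offset) for inode number n (numbered from 1)."""
    index = n - 1                                     # inode 1 is the first entry
    block = inode_table_start_block + (index >> 5)    # divide by 32
    offset = (index & 0x1F) * INODE_SIZE              # remainder times 32
    return block, offset

def parse_dirent(raw):
    """Split one 32-byte directory entry into (inode_number, name)."""
    inode = int.from_bytes(raw[0:2], 'little')        # as written by the Linux PC
    name = raw[2:DIRENT_SIZE].rstrip(b'\x00').decode('ascii')
    return inode, name

# With a (hypothetical) inode table starting at block 2: inode 1, the root
# directory, is at the very start, and inode 33 begins the table's second block.
print(inode_location(1, 2))   # -> (2, 0)
print(inode_location(33, 2))  # -> (3, 0)
```

The same divide-by-32 trick works for finding directory entries within a data block, since both structures are 32 bytes long.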

One interesting quirk of reading this data from the CompactFlash is to do with endianness.  Since the Motorola 6809 is big endian, all words (and longs) read from the CF which are things like block numbers or file sizes have to be byte swapped, because the CF was written by a little-endian Linux PC.
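
In Python terms the fix is just a byte swap on every 16-bit field; a sketch (on the 6809, loading the two bytes into A and B and using EXG achieves the same thing):

```python
# Byte-swapping the CF's little-endian fields for the big-endian 6809.
import struct

def swap16(value):
    """Swap the two bytes of a 16-bit word."""
    return ((value << 8) | (value >> 8)) & 0xFFFF

# A block number stored on disk as bytes 0x34, 0x12 reads as 0x3412 on the
# 6809 but is really block 0x1234:
print(hex(swap16(0x3412)))  # -> 0x1234

# Equivalently, interpret the raw bytes as little-endian directly:
raw = b'\x34\x12'
print(hex(struct.unpack('<H', raw)[0]))  # -> 0x1234
```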

The following screenshot shows the monitor being used to mount a partition, read the superblock, list the root directory, and then read a file (me.jpg at inode 2) into RAM.  When listing a directory, the inode number, type (with file mode), and file size (in hex) is shown.  Not very user friendly, but serviceable. The first 64 bytes of the file, now in RAM at $4000, is then displayed.  You can see it is a JPG file from the first few bytes in the file.  Reading a file by filename will be much more useful than reading files by inode number!


While working on the monitor, I finally took the plunge and restructured the code for the ASxxxx assembler.  This has a number of advantages, including much friendlier ways of holding static data (including null terminated strings) in the code, and it generally has a more "modern" feel to the assembly syntax, using 0x instead of $ etc.  The monitor is now spread across half a dozen files, with a file for serial communications, a file for string handling etc.

At the same time, some of the "reusable" monitor functionality is now exported through a jump table.  Jump tables are an age-old technique for publishing APIs (in this case, through a function number) to user-level code through a table of subroutine jumps.  Thus, to execute a particular ROM routine (to output a string to the serial port, say), the user-level code merely jumps to the table position for the routine in question.  The table, in turn, is a sequence of jump instructions into the real routines.  With this technique, if the location in ROM of a real routine has to change because of implementation changes, then only the jump inside the jump table needs to change.  The position that the user level program jumps to can remain the same.  Thus the "binding" between user code and ROM code can be static and nothing needs to break when the ROM is changed.  As long as newly exported routine jump table entries are added to the bottom of the table, and not anywhere in the middle, the routines available to user code can be expanded.
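
The idea can be modelled in a few lines of Python (function names are hypothetical; in the ROM each "slot" is a JMP instruction at a fixed address):

```python
# The jump-table idea: user code binds to a table slot (a function number),
# never to a routine's actual address in ROM.

def _putstr_v1(s):           # the "real" ROM routine, at some address
    return f"OUT:{s}"

# The jump table: slot N -> current implementation.  New entries are only
# ever appended, so existing slot numbers stay valid across ROM revisions.
JUMP_TABLE = [_putstr_v1]

def rom_call(slot, *args):
    """What a user program does: jump via the table, not into the routine."""
    return JUMP_TABLE[slot](*args)

print(rom_call(0, "hi"))     # -> OUT:hi

# A ROM update rewrites or relocates the routine; only the table entry
# changes, so old user binaries calling slot 0 keep working unmodified:
def _putstr_v2(s):
    return f"OUT:{s}"

JUMP_TABLE[0] = _putstr_v2
print(rom_call(0, "hi"))     # -> OUT:hi
```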

None of this really matters to my little computer; I could easily rebuild user programs when the ROM is changed and just use direct jumps instead of indirect ones.  But it is satisfying to do things "properly".

With all this in mind, I have implemented two real programs which are loaded from CompactFlash and executed as user code:
  • showtime.bin: performs SPI reads and displays the time, much like the "t" command in the monitor (which has now been removed).
  • settime.bin: prompts the user to enter the time and date before performing SPI writes to set the time and date accordingly.
These programs use the SPI API and do not hit the VIA directly. Thus even if I were to change the hardware (to a dedicated SPI bus driver, say) these programs would still work, as long as the API remained the same.  Also, only PC-relative addressing is used inside the user-level programs.  The programs can thus be loaded into memory at any location.

The following screenshot shows these two programs in action.  The settime program (at inode 3) is read into location $4000, and the showtime program (at inode 4) into location $4200.  Each program is then run:


The new monitor code, as well as the two user programs have been added to github, in case anyone is interested.

Another thing implemented in the last few weeks is sound.  I've had a couple of General Instrument AY-3-8912 chips in the parts drawer for months, and have finally got around to integrating them into my circuit.  These chips (and their variants) were amazingly popular and were used in many home micros, including the Sinclair Spectrum 128K and Atari ST, and many, many arcade machines.  These ICs are a little unusual in that they were designed to integrate seamlessly into GI's slightly strange CPU bus.  This bus has multiplexed data and address lines, and instead of Chip Select, Read/Write and friends has Bus Direction and two Bus Control lines.  Glue logic is needed to convert the 6809 R/W and CS signals into the BDIR and BC lines needed by the sound IC.  After playing with several glue logic schemes, I ended up using the same logic as Matt Sarnoff used in his 8-bit computer.

In summary my computer now beeps when it starts! I have written some generic sound routines, but unfortunately I'm not musically talented... I might have a go at covering some sheet music later though.

In other hardware news, I am now using the PCB I made up for the core computer. Here is a picture of the top of the board:


The decoupling caps are yet to be soldered.  It seems to work OK without them, but I'll get round to it one day.

And here is a picture of the bottom:


As can be seen there were a couple of small errors:
  • The largest and most embarrassing problem is with the reset button. Somehow, when making a lovely custom PCB footprint, I managed to get the orientation wrong. This required cutting a track and adding a small jumper wire.
  • The crystal was not given enough room. I fixed this by standing the crystal off the board by a few mm, just enough to clear the lead of a nearby resistor.
  • I really wish I'd remembered to leave room for the feet, or (better yet) include some holes for some proper mounting posts.  Next time....
The final problem is more subtle and a little mysterious. It has to do with the first line of address decoding. This is done with a 1-in-4 decoder, which generates the RAM, EEPROM and IO enable signals. In most of the 6809 circuits online the decoder is always enabled; the outputs continually reflect the address bus. But I found this caused some glitching, because the address bus is not updated "atomically", so in the circuit I made up the decoder is enabled by an inverted Q signal. I noticed this problem with the latch-driven LEDs: sometimes activity on the serial port would flash the LEDs, which should of course never happen. Per the datasheet, Q goes high when the address is on the bus. This worked well, until I added the IDE interface, which only works when the address decoder is always enabled. So it is all a bit tricky. Currently the circuit is in the always-enabled state and everything "seems" OK, but I may try other combinations of address decoder enabling. I'm getting quite good at hardware hacking with a soldering iron, so I could try out some other schemes for controlling the first-line address decoder, and if I do ruin this board I have four more.  It's equally possible that I'm seeing problems only because of using socketed glue logic.

I have made one other cool improvement which I'm really happy about. I have now ditched the 6850 UART in favor of an 88C681 (PDF) DUART. This is an 8 bit bus compatible version of the 68681, a DUART for the 68000 series of MPUs released by Motorola around 1985. Other than having two UARTs in one, the other big advantage is that it has an integrated baud rate generator. This allows me to divorce the system clock from the baud rate used on the console. My computer is thus now running on an 8MHz crystal, with a 3.6864MHz crystal driving just the DUART. I have taken the opportunity to crank the monitor baud rate up to 115,200 baud in the process. Software-wise, interfacing this chip was only moderately harder than interfacing the 6850, with just some extra registers to set up. It is still polled; interrupts are something I would like to look at soon.

Here's a picture of the PCB hooked up to the breadboard:


I jury-rigged a 3.5mm socket out of a small piece of stripboard and a socket which was salvaged from an old MP3 player.  Quite pleased with this bodge: the socket is "surface mounted" to the back-side of the strip board.  I had to splay the leads slightly to get the (non 0.1") connectors onto the stripboard track, but it seems to work well.

One small problem I'm having is with power. The various ICs in the circuit consume more power than can be reliably drawn from a USB hub. While I have always wanted a bench PSU, I think I will instead buy an old PC AT power supply.  They are a damn sight cheaper, and I'm unlikely to ever need anything other than 5V.  A good computer PSU will easily give me 10A at 5V, far more than my little computer will ever need, even with its inefficient NMOS parts.

And here are the two circuits. First the core computer, much as it was in the last post:


And finally the long awaited IO circuit:


Future plans now include:
  • Get filename searching working in the filesystem code.
  • More sounds with the AY. Maybe Greensleeves, because I like the tune? Or another simple tune.
  • Investigate the address decoding oddities.
  • Play with interrupts.
  • Start designing a proper SBC board incorporating the DUART and IDE interfaces, and possibly sound and the VIA.
Fun, fun, fun. :)

Saturday, 28 September 2013

SPI, IDE, and a PCB!

It's been another couple of months with no blog update. Busy busy. Anyway. In the few hours here and there (work lunch times, early morning, etc) I have managed a fair bit of progress.

Firstly I've laid out the circuit again on some fresh breadboard slabs. I'm fairly sure the breadboard I used previously had some bad links, because the circuit is now miles better than it previously was from a reliability point of view. I've also broken it into two halves, with the core computer on one set of breadboards and IO on the other. A bunch of ribbon cable joins the two halves, with 16 strands for address, 8 for data and a bunch more for power and the various control lines. All in all I'm very happy with the new, slightly neater, breadboard setup.

Everything was working great on the new breadboard except.... I2C. More specifically, the PCF8584, which just didn't want to work right in its new home. Actually, I was never really happy with this device in the circuit; it never quite achieved 100% reliability, and the code was always a bit strange because the device didn't conform to its datasheet. In the end I decided to try something new: SPI.

SPI serves a similar purpose to I2C in that it links ICs to ICs, typically microcontrollers to memories, sensors and other devices. There are a number of differences though:
  • Generally simpler signalling because in and out are on two different wires. No pesky ACKing of bytes, or any ACKing at all for that matter, unless the programmer implements it themselves.
  • Speeds in the MHz range, vs I2C which is a few hundred kHz, not that it makes much odds here.
  • The specification is much more "loose". SPI does not even define word length, let alone bit ordering. There are some commonalities across devices though.
I did look for an equivalent part to the '8584, but in the end I decided to have a go at bit-banging the protocol. To do this required an I/O port, i.e. a parallel port, with which to assert and read the SPI signals. I had a number of options:
  • Motorola 6821 (PDF): this was the companion part to the 6850 and features two eight bit parallel ports along with some peripheral control lines.
  • MOS 6522: this part was used in many 8 bit micros. Like the 6821 it has two parallel ports, plus some counters/timers and an in and out shift register.
In the end I settled on the 6522. The timers will no doubt come in handy. Maybe the shift register too.

Bit-banging SPI is fairly straightforward. In addition to the in and out serial lines, plus the clock, there is one enable pin per device. So half of one of the eight bit ports is occupied by the SPI. Each byte to be shifted out of the interface is sent bit by bit, with a toggling of the clock. To keep things as fast as possible, the eight bits are processed in an unrolled loop; just updating a counter each time a bit is sent would slow things down. Reading a byte is similar, and uses the carry bit in the status register to move each read-in bit into the output byte register.
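
The shift-in/shift-out procedure can be sketched like this in Python (the pin helpers are hypothetical stand-ins for reads and writes of the VIA's port register; the real routine is unrolled 6809 assembly):

```python
# Bit-banged SPI transfer, MSB first, modelled in Python.  FakeVIA stands in
# for the 6522's port register; here MISO simply loops back MOSI for testing.

class FakeVIA:
    def __init__(self):
        self.mosi = 0
    def set_mosi(self, bit):
        self.mosi = bit
    def pulse_clock(self):
        pass                     # clock edge; the slave samples MOSI here
    def read_miso(self):
        return self.mosi         # loopback test double

def spi_transfer(via, byte_out):
    """Shift one byte out while shifting one byte in, MSB first."""
    byte_in = 0
    for bit in range(7, -1, -1):             # the 6809 version unrolls this loop
        via.set_mosi((byte_out >> bit) & 1)  # present the next output bit
        via.pulse_clock()                    # toggle SCK
        byte_in = (byte_in << 1) | via.read_miso()  # 6809 shifts via the carry flag
    return byte_in

via = FakeVIA()
print(hex(spi_transfer(via, 0xA5)))  # -> 0xa5 (loopback)
```

With the loop unrolled and the carry flag doing the shifting, each bit costs only a handful of cycles on the 6809, which is where the roughly 32kHz clock rate seen later comes from.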

To test my SPI port I would need a device to talk to. In the end, I thought, my computer is still going to need a clock, so after a long search - most SPI ICs are not in handy DIP format - I found what I was looking for: a DS1305 (PDF). Similar to the '1307, it is an RTC with an SPI interface. It is not completely equivalent to the '1307 though. It has more pins, but lacks the handy 1Hz output mode. Instead it has two "alarm clocks" which generate interrupts. Storage of the time is in pretty much the same BCD format, and like the '1307 it has a few dozen bytes of RAM which might be useful someday.  Currently only four lines on the VIA are being used, which is obviously a big waste!

Suffice to say my computer has a nice fully working clock.  The pictures show the clock being set and then read back in, as well as a grab of the logic analyser capturing a read of the clock.



From the Logic16 screen you can easily see how long it takes to bit-bang the 10 or so bytes.  The clock rate works out at about 32kHz, or around 4KB/s.  Not exactly quick, but quick enough for the small transfers I need to do.

And here is a picture of the new breadboard setup. One half of the Logic16 is currently attached to the data bus, allowing me to see the full contents of the data bus which is useful for seeing what data is being sent to the various ICs.  You can see this in action in the screenshot above, though nothing useful is done here with the databus signals.  The left hand breadboard is the main computer, whilst the right hand side is for IO.


A summary of the picture.  First the left hand section, from top to bottom:
  • EEPROM in its ZIF slot
  • RAM (buried under a load of wires)
  • Ribbon cable jumping off to the IO section
  • On the middle slab is the 6809 and glue logic
On the right hand set of boards we have:
  • Ribbon cables coming into the boards
  • IO decoder (1 of 8)
  • Serial port (6850)
  • Latch with 8 LEDs used for occasional debugging
  • 6522 VIA and DS1305 on the bit-banged SPI bus
In the picture (right hand side, bottom row) is one other surprise: a CompactFlash card plugged into an IDE (also known as ATA) interface! This is the latest addition to my computer.

Computers are not that useful without some form of mass storage. Having a CF port will allow me to load programs (chunks of 6809 machine code) from CF - after saving them from my Linux box - without having to write them into the EEPROM. It goes a long way to making the machine a "real" computer, albeit one that would have been top of the line in 1980.

IDE is great because all the cleverness is in the hard drive, or CompactFlash in my case. This makes interfacing fairly simple. Early IDE even has an 8 bit mode, though only very early (up to the mid 90s) hard disks support it. But luckily most CF cards also do. Besides a direct connection to the data bus, the IDE interface has 3 address lines for accessing various registers, two Chip Select lines, a read line and a write line, and that's about it. The CF card plugs into an IDE adapter which then plugs into a 40 pin header block, made up on a piece of stripboard, which in turn plugs into the breadboard (phew). Hardware-wise, that's about it if all you need is PIO mode operation. DMA would be cool but much more complex.

It's the software where things get interesting. So far I have simple sector reads as well as the IDENTITY command working. The code is broken down into two layers: monitor commands and low level IDE commands.

The m (for mount) command sets the IDE interface into 8 bit mode and reads the Master Boot Record, which is at the first sector of the disk. The start of the first partition is retrieved from the Master Boot Record and displayed.
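For the curious, the partition lookup boils down to reading a little-endian dword out of the first partition entry. Here is a minimal C sketch (the function name is mine; the 6809 version does the same thing with four separate byte loads and shifts):

```c
#include <stdint.h>

/* The first partition entry lives at offset 0x1BE in the 512 byte
   MBR; its starting sector (LBA) is a little-endian 32 bit value
   8 bytes into the entry. */
#define MBR_PART1 0x1BE
#define PART_LBA  8

uint32_t mbr_first_partition_lba(const uint8_t mbr[512])
{
    const uint8_t *e = mbr + MBR_PART1 + PART_LBA;
    /* assemble the little-endian dword with shifts */
    return (uint32_t)e[0] | ((uint32_t)e[1] << 8)
         | ((uint32_t)e[2] << 16) | ((uint32_t)e[3] << 24);
}
```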

The I (for identify) command performs the IDENTITY command on the drive and reads back a sector (512 bytes) containing various information. The monitor extracts the model name and some other information and outputs it. This command was implemented just to prove everything was working properly.

Finally the < command (yes, I'm running out of command characters) can be used to read up to 256 sectors from an arbitrary starting point. Because I only want to deal with 16 bit quantities, my computer can only access the first 32 megabytes (65,536 lots of 512 byte sectors) of the CompactFlash. This is plenty of room for the type of storage I need.
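The 32 megabyte limit is just arithmetic: 65,536 sectors of 512 bytes each. A tiny sketch of the sum, in C:

```c
#include <stdint.h>

/* The sector number is a 16 bit quantity, so the reachable part of
   the card is 65,536 sectors of 512 bytes: exactly 32 MiB. */
uint32_t sector_to_offset(uint16_t sector)
{
    return (uint32_t)sector * 512u;
}
```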

The following screenshot shows these three commands being used. First the m command is run, then the identity command is performed, before finally the first sector of the disk, the Master Boot Record, is read in and displayed.


I didn't think I had a spare CompactFlash card and assumed I would have to buy one specially for this project. But after a thorough search I managed to find one, which happened to be 128MB in size, in the bottom of a drawer. I had no clue what was on the card until I started doing some random reads and realised I had found the old CF that used to be in my old Amiga 1200. Just to prove it, here is a screenshot of a random block, which I think is the AmigaGuide help file for the classic program SnoopDOS.


Of course reading raw disk blocks is only partly useful.  While it is possible to read and write raw disk blocks in a semi-structured way, the familiar way to interact with a storage device is through a file system.  Initially the focus will be on read-only access, with writes (perhaps) coming later.  I have several choices to consider:
  • FAT.  Ie. FAT16.  Everyone does FAT, so I won't.
  • Ext2/3/4.  This would be great but it is likely extremely difficult as even Ext2 is a highly complicated file system.
  • MinixFS.  This is more like it.  The first version of Minix had a fairly simple UNIX-inspired filesystem.  One of the nice aspects to it is that it only supports small (64MB ish) hard drives, which means it has 16bit data types for block pointers and the like.
Unfortunately my efforts to decode Minix are currently on hold, for a very stupid reason: I have no means of accessing a CompactFlash from my Linux PC.  This means I have no way to make a test file system on a CompactFlash, etc.  A USB reader has been ordered from good old eBay and should be with me soon.  In the meantime I'm looking into the format of Minix's superblocks, inodes, and other file system gubbins.

While all this has been going on, I have been engaged in one other area: PCB design.  Since the core of my computer is now effectively a stable design, I have decided I can forego the left hand breadboard for a PCB, and to this end one has been designed by yours truly; in fact, I am now in possession of five identical boards.

In terms of software for my layout, I had a few choices but decided to stick with the gEDA suite.  To that end, I spent several days learning the imaginatively named "pcb".  I have to say the software is fair to mediocre.  Not as polished as the schematic capture tool, but good enough.  In designing a PCB layout I had to be extremely accurate in my schematic design, since the layout of the PCB flows directly from it.  To that end I spent a while getting the schematic for the core computer right, fixing some previous problems and adding some nice things, like decoupling caps and tying off unused inputs.  These then map into PCB "nets".

Considering I know very little about PCB design I think I have done quite well!  First up is the schematic:


You can see everything is now documented in the schematic; even the power LED.  I have added LEDs for the BA and BS CPU signals, as well as a run/halt toggle switch, and a jumper for write enabling or write protecting the EEPROM.  There is also a 40 pin header at the right of the diagram.  This is the header which will join the core computer to the IO breadboard (and eventually PCB) parts.

And now a picture of the PCB design:


The dual layer PCB measures just over 10cm by 10cm, which is pretty good I think.  While the gEDA PCB software can autoroute, the results were dreadful and in fact it could not autoroute the whole circuit in "only" two layers.  In the end the whole thing was manually routed, which took probably a couple of man days of effort.  But that did include learning the software as well.

After exporting the design in the right format, I found someone on eBay to manufacture the PCB.  Total cost was about £40, and turnaround time was a little under 10 days.  Pretty impressive!  Coincidentally I found the same company that made my EEPROM circuit: Botech Circuits.  I used this item on eBay.


After getting the PCBs home, of course, I noticed a few minor problems.  They should be easy to work around, hopefully.

My next tasks are, roughly:
  1. Investigate the Minix filesystem and have a go at parsing the superblock and root directory in 6809 ASM.  Eventually I hope to be able to read a file into RAM.
  2. Put the updated schematic and PCB (along with the necessary symbol and footprint files) into github.
  3. Solder up and test the CPU board!
  4. I can dream: more IO devices.  I'm especially keen on making my computer be able to make noises!
Hopefully the next post won't be 2 months away...

Links:
  1. http://www.t13.org/Documents/UploadedDocuments/project/d0948r4c-ATA-2.pdf - The spec for ATA-2, the last one to support 8 bit operation.

Thursday, 18 July 2013

I2C on a 6809 computer and a mini review of a logic analyser

I haven't had much time to work on my electronics projects lately, or write on this blog. The main reason being the arrival of a bouncing baby boy! So for once hobbies must take a back seat. If I'm lucky I will be able to write here about once a month.

I have had a little time to work on improving my 6809 computer, however, by adding a Philips PCF8584 (PDF) I2C parallel bus controller. This 20 pin IC was released around 1997 as a means to add I2C functionality to 8 and 16 bit micros, and can work with bus signalling from Z80, 6800 and 68000 CPUs. It also seems to work nicely enough with the 6809, but more on that later.

One of the challenges I had when working on the I2C interface on the AVR was knowing what the circuit was doing. When my circuit was struggling to read data from the EEPROM I wasn't really sure if the data had, in fact, been properly written in the first place. The typical solution to these kinds of problems is a Logic Analyser. These are hardware devices that can monitor and record digital signals, in a similar way to how an oscilloscope is used for monitoring analog signals.

In days gone by this kit would cost thousands of pounds. Luckily with the use of a USB interface, the cost of a usable logic analyser can be reduced significantly since the analysis and graphical display are offloaded onto a PC, with only the capturing needing to be done in specialist hardware. So began my hunt for an affordable logic analyser, eventually leading me to the Saleae Logic16. This is a 16 channel 100MHz device costing about 200 pounds. Still not cheap but just about within budget.

One of the things that attracted me to the Logic16 was the company behind the product. Saleae run blogs documenting the company and the people that work there, giving an insight into how the company is run and the people who designed the products. It reminded me a little bit of how Smoothwall was when it was a smaller company. They also seem open to improving the products based on customer feedback, and I have to say I found their support to be totally awesome. There are a number of reviews of the Logic16 on the net, including an excellent video on YouTube, so I won't bother going into too much detail here. Suffice to say that I couldn't have gotten my I2C interface working without it.

Here's a photo of the Logic16 hooked up to the breadboard, capturing various signals including the E clock, various Chip Enable lines, key CPU control lines, as well as the two I2C lines generated by the I2C bus driver. You can see how cute and small the Logic16 is.


Along the bottom piece of breadboard is the PCF8584 I2C bus driver, 24LC256 EEPROM, DS1307 RTC and button battery.

Not only can the Logic16 display waveforms, but in the case of I2C and other signalling formats, it will even decode them for you. See the screenshots below for an illustration.

The Logic16 can do loads more and I've only just scratched the surface in terms of its functionality. There's even an SDK so the software can be expanded. And the software is cross platform, being available for all three main OSes. It should come in handy as my homebrew computer grows ever more complex. I have to say also that having an analyser makes learning and experimenting with digital electronics much more fun too.

So back to the I2C bus chip. The PCF8584 is somewhat similar to the trusty 6850 UART in that it hooks into the data bus, R/W line, and is clocked by E. It has control, status, and data registers, and like the UART is currently programmed in a polling mode instead of using interrupts.

Currently the 6809 circuit only supports a single I/O device, though it takes up a quarter of the address space. To accommodate the PCF8584 the I/O space has been cut into quarters with another layer of address decoding on A8 and A9, meaning I can have four devices each with up to 256 registers. The 6850 and PCF8584 are in this new I/O space, which has been implemented with another 1 in 4 decoder. Therefore there are another two lines available for other devices. Like the 6850, only a single address line is required on the chip as it only exposes 2 registers.
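The new decoding scheme amounts to splitting the address within the IO space into a device number (A8-A9) and a register number (A0-A7). In C terms (the addresses here are offsets into the IO space, not absolute 6809 addresses):

```c
#include <stdint.h>

/* Split an offset within the IO space into a device number (taken
   from A8 and A9) and a register number (taken from A0-A7). */
unsigned io_device(uint16_t io_offset)   { return (io_offset >> 8) & 0x03; }
unsigned io_register(uint16_t io_offset) { return io_offset & 0xFF; }
```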

The datasheet nicely describes how the chip needs to be configured for different data rates, by dividing its clock in various ways. I had to cheat a little since the chip has no options for a 1 MHz clock, only going down to 3 MHz. Luckily I2C is very flexible when it comes to data transfer rates.  My I2C runs at a third the speed it should, at about 5 kHz, or about 700 usable bytes per second.  Slow, but fast enough for the trivial bits of information floating about in my computer. The datasheet contains flowcharts describing read and write operations which I've turned into various 6809 subroutines. Oddly the I2C implementation in 6809 assembly seems simpler to follow than the equivalent AVR C code. Maybe that just means I've been looking at assembly too long.

For now I have two devices on the I2C bus:
  1. A 24LC256 EEPROM at I2C slave address $a0
  2. A DS1307 Real Time Clock at I2C slave address $d0
The circuit now looks like the following. Because it's getting quite large, I have split the circuit diagram in two.  First the core computer, with CPU, memory, glue logic and core address decoding:


Followed by the IO components; the 6850, and the newly added I2C hardware:


The additional layer of address decoding, for selecting which IO device should be used, is on the bottom left.

Two new commands have been implemented in the monitor to exercise the I2C functionality.
  • s for store. Takes an I2C bus address, an I2C memory address, a 6809 memory address and a byte count. Bytes from the 6809 memory are written onto the I2C bus. The I2C memory address can either be a word or a byte, since the EEPROM has a 16 bit address requirement and the RTC an 8 bit requirement.
  • l for load. Like store except the transfer is the other way.
Writes are trivial, and consist of sending a START, followed by an I2C slave address, followed by an I2C memory address, followed by the actual data, before finishing up with a STOP.  These were implemented easily and using the Logic16 I was able to verify that the data was flowing across the bus.  Reads in I2C, on the other hand, are a bit more complicated: they are implemented as a write to set the address, followed by a read after a repeated START, and this gave me trouble for a couple of weeks. Essentially I worked around the problem by ignoring the datasheet and doing an extra read when "discarding" the slave address just after the repeated START. It seems to work, save for the ability to properly do single byte reads, but I would like to better understand what is going on.
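To make the read sequence clearer, here is a mock C sketch of the bus events in the combined write-then-read transaction. It models only the order of events, not the actual PCF8584 register pokes, and the function names are made up:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Mock trace of the combined write-then-read I2C transaction used
   to read from a device with a memory address.  This only records
   the order of bus events as text. */
static char trace[256];

static void bus(const char *ev) { strcat(trace, ev); strcat(trace, " "); }

void i2c_random_read(uint8_t slave, uint16_t memaddr, int nbytes)
{
    char buf[32];
    bus("START");
    snprintf(buf, sizeof buf, "W:%02X", slave);     bus(buf); /* slave, write */
    snprintf(buf, sizeof buf, "A:%04X", memaddr);   bus(buf); /* memory address */
    bus("RESTART");                                 /* the repeated START */
    snprintf(buf, sizeof buf, "R:%02X", slave | 1); bus(buf); /* slave, read */
    for (int i = 0; i < nbytes; i++) bus("DATA");
    bus("STOP");
}
```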

The below screenshots show the monitor being used to copy a 4 byte sequence from 6809 ROM into the serial EEPROM; the serial EEPROM is then read into a free area of RAM and the RAM dumped out.

First the screenshot of the monitor session, showing ROM and RAM dumps prior to the transfer, the two transfers and finally the RAM dump.


Next the screenshot from Logic16 showing the store operation.


Note how the START and STOP sequences are shown, as well as the hex values.

And finally the screenshot of the load operation showing the I2C data being outputted from the EEPROM, after the START sequence (the green blob) is sent a second time.


EEPROMs are all very well but the Real Time Clock IC is far cooler. The DS1307 is fitted with a 3V button battery for keeping the time when the board is powered off, and a 32.768 kHz crystal. This chip also has an extra feature: it can output a waveform at various configurable frequencies on a spare pin. The circuit uses this feature to flash an LED at 1Hz. Because it was so simple to do, this output is also fed into the non maskable interrupt pin on the 6809.  When the interrupt routine runs it simply increments an "uptime" value, stored in memory.  This is my first use of interrupts on the 6809, and was surprisingly simple to do.  This uptime count can be displayed using a simple new command, cunningly called "u".  Because a 16 bit value for seconds is only enough to count up for a day or so, the uptime count is actually stored as a 32 bit value, enough for a bit more than 100 years!  Unfortunately the "u" command is not especially friendly, and simply outputs the uptime value in hex.

Interestingly this RTC stores all values in binary coded decimal, making display a virtual no-op since I already have routines for displaying hex digits. Setting the clock is currently done by preparing a block of memory in the following format (all values are bytes in BCD):

Seconds, Minutes, Hours, Day, Date, Month, Year, Output control

And sending it up to the IC.

Day is from 01 to 07 and is simply incremented at midnight. The choice of the start of the week is arbitrary, but I will use Sunday. Output control sets what the output pin should do. $10 is the magic value which enables the 1Hz pulse. Higher frequencies are also possible as well as simply turning the output pin on and off but this seems less useful.
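Converting between BCD and binary is simple nibble arithmetic. A quick C sketch of the two directions:

```c
#include <stdint.h>

/* BCD packs the tens digit in the high nibble and the units in the
   low nibble, so conversion is simple nibble arithmetic. */
uint8_t bcd_to_bin(uint8_t bcd) { return (uint8_t)((bcd >> 4) * 10 + (bcd & 0x0F)); }
uint8_t bin_to_bcd(uint8_t bin) { return (uint8_t)(((bin / 10) << 4) | (bin % 10)); }
```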

I have written a special command for obtaining and printing the time. It somewhat crudely borrows the I2C load command subroutine to send a specific I2C request which has the effect of dumping the clock values into a memory buffer. This is then formatted for printing, the most complicated bit being to turn the numbers 01 to 07 into a day name (Sun to Sat).

The code, for anyone perverted enough to want to see, is on github as usual.

The following screenshot shows the monitor being used to load some memory with a buffer containing the current time, then the time being loaded across the I2C bus, and finally the time is retrieved and printed using the showtime command, "t". Notice that when storing the buffer to the I2C RTC IC the I2C slave address $d0 is used, along with the 8bit start address $00.  This is because the load and store commands can deal with 8 and 16bit addresses.


And here is a capture obtained when the "t" command is run.  You can easily see the seconds, minutes, hours, etc being outputted from the RTC IC.


I have loads of potential next ideas, but not enough time:
  1. First, I feel I'm reaching the limits of what a09, the 6809 assembler, is useful for. I need to split the monitor into multiple files and structure it better, but it does not seem very good at larger programs. Thankfully there are other assemblers out there. Hopefully it will be easy enough to port my code to a more sophisticated assembler.
  2. I need to make the breadboard more reliable. Often I have to hit the reset switch several times before sensible text comes across the terminal. I'm not sure if this is a real problem with my circuit or just down to flakey wiring inherent in breadboards. Either way it needs to be fixed.
  3. I have a GI AY-3-8912 soundchip I would love to get making noises. This is the same chip used in the 128Kbyte ZX Spectrum, a computer I played with back in the good old days.
  4. I have one other I2C chip I could get working, a MCP23017 16 bit I/O port. Not terribly interesting but could be cool to make some LEDs flash.
  5. I have some 8x8 bicolour LED matrices I could turn into a display. One idea is to use an AVR with I2C in slave mode to drive the LEDs, perhaps with enough intelligence that it can scroll text etc. The 6809 would send high level commands like "scroll this message" etc. This sounds quite hard, but could be a lot of fun.
  6. For something simpler, I could use some 7 segment displays, perhaps driven from the 16 bit I/O expander mentioned above.
So many ideas but not enough time...

Friday, 17 May 2013

An I2C EEPROM programmer with an ATMega8

To learn about I2C I have implemented a serial EEPROM programmer using the same AVR based ATMega8 as my old school parallel programmer. I2C, like SPI, is nice because all signalling between the microcontroller and the memory is along only a handful of wires. This also, at I2C speeds at least, makes it slower, but it is adequate for many applications like data logging.

I2C, being fairly modern tech, is extremely well documented with many good tutorials available on the subject. I got a lot of information from this nice tutorial which roughly describes I2C and how I2C is implemented in the AVR microcontrollers. The serial EEPROM I had to hand, an AT24C256 (PDF), was my target I2C device. This is a 32KByte serial memory in an 8 pin PDIP package. Quite a bit smaller than the parallel version of the same capacity my 6809 uses!

Though the bus is radically different, many things about this serial EEPROM are the same as its parallel sibling, including having a 64 byte page size and a similar time taken to write a page (5ms as opposed to 10ms).
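One practical consequence of the page size, for both chips, is that a single write operation must not cross a 64 byte page boundary, or the address wraps around within the page. A small helper (mine, not from the programmer code) that gives the most bytes one write may cover:

```c
#include <stdint.h>

/* Writes that cross a 64 byte page boundary wrap around within the
   page on these EEPROMs, so a long write is broken up at each
   boundary.  This gives the most bytes one write may cover. */
#define PAGE_SIZE 64

unsigned bytes_left_in_page(uint16_t addr)
{
    return PAGE_SIZE - (addr % PAGE_SIZE);
}
```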

Below is the circuit for the programmer. As you can see, it is very much simpler than the parallel programmer:



It also fits easily on a single slab of breadboard:



From left to right -
  • Reset button
  • EEPROM
  • ISP programming header
  • ATMega8
  • USB serial interface (which also powers the circuit)
Because these EEPROMs are so similar to the parallel version, the programmer code can be similar. The same commands and mechanisms, like upblock, dumphex etc, as devised for the parallel programmer can be used with the serial memory. Only the implementation of the functions which read and write bytes and pages to the memory need be different.

For this reason, I decided to indulge in a little refactoring: the programmer software is now split in two, with a "front end", which handles the UART text interface to the computer, and a "back end" which does the actual memory reads and writes. The cool part is that because the serial interface to the computer is the same in each case the same front end code can be used for both programmer types. It also means the same upload program can be used; it neither knows or cares what type of memory is used.

This is made easier because the page size, which is the unit of memory written at a time, is the same in both instances: 64 bytes. But there is one slight "regression" compared to before: it isn't possible to disable page writes in the upblock command. This is because doing single byte writes is pointless except for testing.

The code is now in four files:
  1. main.c : the shared front end
  2. parallel.c : parallel backend for the AT28C256 and similar memories
  3. i2c.c : I2C backend for AT24LC256 and equivalents
  4. programmer.h : shared header containing an "API" that the memory backends have to implement, as well as the services main.c provides, like UART output
Two AVR EEPROM images (.hex files) are now built by linking main.o with either i2c.o or parallel.o.

An interesting aspect of this change relates to the address counter. The parallel programmer circuit contains two 8 bit counters which generate the address bus signals, but this isn't needed for the I2C programmer. Indeed, it would be possible to program the memory "randomly" to any memory address. But to keep everything consistent with the parallel programmer, and not introduce any new commands, the counter is retained purely as a software construct. The reading and writing memory address is incremented and reset just as if there was a real counter present, but its value is maintained by the code instead of in a piece of hardware.
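A rough C sketch of the software counter (names hypothetical; the real code lives in i2c.c):

```c
#include <stdint.h>

/* The two hardware 8 bit counters of the parallel programmer become
   a single 16 bit variable here; reset and increment mimic what the
   counter control lines used to do. */
static uint16_t addr_counter;

void counter_reset(void)     { addr_counter = 0; }
void counter_increment(void) { addr_counter++; }
uint16_t counter_value(void) { return addr_counter; }
```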

One final change was needed to the programmer. Because the I2C programming code is a bit more complex than the parallel equivalent, the ATMega8 flash was filled. The 'Mega8, as its name implies, has 8KByte of program space and this was initially not enough.

One of the reasons the code did not fit is that it makes use of some standard C functions including strtok, strcmp and snprintf. While it's not really possible to write a simpler, and therefore smaller, implementation of a function like strcmp, it is possible to write a simpler printf-type function, and thus save enough room to make everything fit.

The standard printf can format all simple types including integers, floats, character arrays, and format them in various ways including printing integers in binary, hex, octal etc. They can also be padded, printed to various precisions, printed as signed or unsigned etc. This makes a full printf implementation somewhat large considering the limited space available in a typical microcontroller.

The programmer firmware only really needs to output hex values, and the odd decimal. And even outputting decimal is only done once, for the getcount function. So that can go, leaving just hex output for byte and double byte (word) values.

Thus the "data formatting" requirements for the programmer are much like those of the 6809 asm code written for the monitor, and indeed writing that code helped me write the same code in C. Instead of a fancy formatting function which would write into a character buffer, I have implemented two functions: writehexbyte and writehexword. Both use shifting and hex to ASCII arithmetic to form a character, then sendchar is used to output the hex on the UART. dumphex and a few other commands have been changed to form up the output using these new functions. The code is a little less readable, but this is the price you pay for a "lower level" implementation.
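The two helpers look roughly like this; here sendchar appends to a buffer rather than driving the UART, so the formatting logic can be shown standalone (whether the real code emits lowercase or uppercase hex is a detail):

```c
#include <stdint.h>

/* Stand-in for the UART output routine: append to a buffer so the
   formatting logic can be demonstrated on its own. */
static char out[16];
static int outpos;
static void sendchar(char c) { out[outpos++] = c; out[outpos] = '\0'; }

/* Hex to ASCII arithmetic for one nibble. */
static char hexdigit(uint8_t n) { return n < 10 ? '0' + n : 'a' + (n - 10); }

void writehexbyte(uint8_t b)
{
    sendchar(hexdigit(b >> 4));    /* shift down the high nibble */
    sendchar(hexdigit(b & 0x0F));  /* mask off the low nibble */
}

void writehexword(uint16_t w)
{
    writehexbyte((uint8_t)(w >> 8));  /* high byte first */
    writehexbyte((uint8_t)(w & 0xFF));
}
```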

The parallel programmer has been retested and still works nicely, as does the brand new I2C programmer.

As usual, the code is on github.

Now that's working, the next thing to do is integrate an I2C interface into my 6809 computer. There are a couple of ways to do this, but the one I will try is to use a Philips PCF8584 (PDF), an 8 bit parallel interface to I2C. This chip, designed to be used with Z80, 68000 and other microprocessors, gives a system an I2C bus.  Hopefully it will also work with a 6809. I can then attempt to repeat the EEPROM reading and writing, but this time with the 6809 computer. Once this has been accomplished, which isn't very useful in and of itself, I can look at interfacing more useful peripherals, like a real time clock...

Saturday, 11 May 2013

The 6809 computer is now self-updateable

I can now update the contents of the EEPROM without removing the chip! This marks a minor milestone in this little project.

The flow of the update process is a little different from what was previously described, but has quite a bit in common with the process used by the AVR EEPROM programmer:
  1. At startup a greeting prompts the user to either type 'f' (for flash) or hit any other key
  2. If they type anything but 'f' normal startup continues.
  3. The first thing the flash code does is copy the whole of the EEPROM (16 KByte) into the top half of RAM.
  4. The flashing code, now in RAM, is jumped to by calculating a jump address from the location of the code in EEPROM and working out where it now is in RAM.
  5. It outputs a marker (+++) which the sending side uses to know it can start the upload.
  6. The uploader sends 64 bytes (a page for the AT28C256) which the 6809 writes to a RAM buffer.
  7. Before acknowledging the page, the 6809 writes the copy of it into EEPROM and waits about 10ms, as per the AVR code. It then acknowledges the page by sending back a hash character.
  8. After all pages have been uploaded and written out, the 6809 sends the new EEPROM content back to the uploader so it can check it. Nothing can be done if there was a write error, but at least it will be known.
  9. At the end of the upload process a reset is performed so the new code is run.
The uploader side of the process is a Linux program (flasher) somewhat similar to upload.c written for the AVR EEPROM programmer.
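The uploader's page loop might be sketched like this. The serial hooks are mocked out so the chunking and acknowledgement logic can be shown on its own; the real flasher reads and writes a tty:

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE 64

/* Mock serial hooks so the chunking logic can run standalone; the
   real flasher program talks to a serial port instead. */
static size_t bytes_sent;
static void serial_write(const uint8_t *buf, size_t n) { (void)buf; bytes_sent += n; }
static int serial_read_byte(void) { return '#'; } /* pretend every page is acked */

/* Send an EEPROM image one 64 byte page at a time, waiting for the
   '#' acknowledgement after each page.  Returns the number of pages
   sent, or -1 on a bad acknowledgement. */
int send_image(const uint8_t *image, size_t len)
{
    int pages = 0;
    for (size_t off = 0; off < len; off += PAGE) {
        serial_write(image + off, PAGE);
        if (serial_read_byte() != '#')
            return -1;
        pages++;
    }
    return pages;
}
```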

The process works well although it would be better if the flash code in the 6809 and the actual monitor were two distinct programs and only the monitor was updated. This would mean if the upload failed the "core" update system would still be available so the upload could be retried. Doing this is a fair amount of work though, so I think I will skip it for now. I do, of course, still have the AVR programmer so I can always recover the system.

Both the updated monitor code and Linux flasher program are in git, if anyone is interested.

This week I also had a problem with the MAX232 level shifter. In the end I gave up on that part of the circuit and now have a USB TTL serial interface plugged into the 6850 on the breadboard. It has the side benefit of breaking out +5V and GND from the USB, so a single connector onto the breadboard for both serial and power is all that is needed. At some point I will figure out why the MAX232 no longer works as it once did.

Below is the latest pic of the breadboard. The USB port looks weird but actually works quite well.  The circuit, except for the removal of the MAX232, is near enough identical to the previous circuit.  The only change is that /WE on the EEPROM is now connected in the same way as the RAM /WE pin.  Previously it was tied high, ie. writes were impossible.


Finally, a screen shot of two terminal windows. One shows a session with the monitor, the other a run of the flasher program.

As a break from working on the 6809 my next task will be to learn more about I2C and how I might use it to connect up some I2C peripherals including sensors, real time clocks, and the like to the 6809 computer.

The easiest way to learn about these "modern" serial busses is to use a microcontroller. The AVR comes with both an SPI interface (it is used for flashing the internal EEPROM) and an I2C interface, which AVR terms TWI (two wire interface). I hope to interface a serial EEPROM to an ATMega8 as a learning exercise. Once I've figured this out, there are a number of options for interfacing a 6809 to an I2C bus...

Saturday, 27 April 2013

Monitor progress

So far the monitor can do four things:
  1. Dump memory contents in hex and ASCII (d command)
  2. Write memory contents in hex (w command)
  3. Exit the monitor and run arbitrary code (e command)
  4. Show the registers as they were just before the monitor was reentered (r command)
For ease of parsing each command is a single character.

Monitors are generally implemented as a big software interrupt handler. When external code needs to be run, a "rti" (return from interrupt) is performed, directly after modifying the return address ie. the Program Counter as it was just before the monitor was entered is modified to point at the external code. To return to the monitor, external code should do a "swi" (software interrupt).  This will cause the entire CPU state to be saved to the stack.

Probably the most complicated code written so far is for the ASCII to byte conversion. Each time a memory address or other data needs to be displayed, or read in, it needs to be converted. This would be trivial in C. I'm slowly gaining proficiency in 6809 asm though, and am enjoying the change of pace from languages like Perl and C, which are a million miles away and which I'm more used to using.

The parser, while not good at spotting every user error, is fairly flexible and will parse a stream of ASCII space-separated hex into a list of bytes and words, with markers to indicate whether a byte or word, or the end of the list, was found. On the downside I've still to implement backspace key handling.
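A rough C translation of the tokeniser idea (this is a sketch of the approach, not the actual 6809 code): one or two hex digits make a byte, three or four make a word, and anything else ends the list.

```c
#include <stdint.h>

/* Token markers roughly as in the monitor: each parsed item is
   tagged as a byte, a word, or the end of the list. */
enum { TOK_BYTE, TOK_WORD, TOK_END };

struct token { int type; uint16_t value; };

/* Parse one space-separated hex token from *s, advancing the
   pointer.  1-2 digits make a byte, 3-4 digits a word. */
struct token next_token(const char **s)
{
    struct token t = { TOK_END, 0 };
    int digits = 0;
    while (**s == ' ') (*s)++;
    for (;;) {
        char c = **s;
        int v;
        if (c >= '0' && c <= '9')      v = c - '0';
        else if (c >= 'a' && c <= 'f') v = c - 'a' + 10;
        else if (c >= 'A' && c <= 'F') v = c - 'A' + 10;
        else break;
        t.value = (uint16_t)((t.value << 4) | v);
        digits++;
        (*s)++;
    }
    if (digits > 0)
        t.type = digits <= 2 ? TOK_BYTE : TOK_WORD;
    return t;
}
```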

The following screenshot shows how a trivial program is input, in machine code, at memory location $2000, dumped back out to check it was loaded successfully, and run. Finally the registers are displayed, showing what they were when the trivial program finished and the monitor was reentered:



The program is as follows:

2000   lda #$40    86 40
       ora #$02    8a 02
       swi         3f


As you can see, the A register has the value $42 ($40 OR'd with $02) when the monitor is re-entered, and the program counter has moved on to location $2005. Other registers will have fairly random values.

As it currently stands the monitor is not terribly useful, but it's getting there.

The next stage, after finally implementing backspace handling, is to implement EEPROM programming in the monitor. The rough plan for the uploading command is:
  1. Copy the whole monitor code to RAM
  2. Jump to the upload subroutine in RAM
  3. Read the 16KByte EEPROM image from the serial port and write it to RAM
  4. Send the image back to the uploader machine so it can check it
  5. If the uploader isn't happy with what it received, it will output an N
  6. If it's ok the uploader will send a Y and the monitor will copy the RAM image into the EEPROM
  7. Either way a reset will be performed
The reason for the convoluted copying is twofold.

First, the monitor shouldn't be overwritten while it's running. That way leads to madness. Second, the upload should be validated before being made active and overwriting the current EEPROM code. The idea is that if the upload was corrupt for some reason, the current monitor won't be lost. Luckily there is 32KByte of RAM available to hold the whole EEPROM image (which is 16KByte). Of course a plain old bug in the monitor could still cause me to reach for the EEPROM programmer, but generally this approach should mean I can improve the monitor more quickly.

Other approaches to solving this problem are possible. One would involve using a loader program which is protected from the main program and never normally overwritten. Even if junk was uploaded it would still be available, and this loader "mini program" would always be run on reset instead of the actual monitor. This is probably more complicated than the plan outlined above, so I will choose the simpler option.

Thursday, 25 April 2013

Serial communications and the start of a "monitor"

"Monitors" were pouplar tools for debugging programs, along with other useful facilities. Some were also powerfull enough to be considered development environments in there own right. They provided the user with the ability to read memory content, modify memory contents and registers, execute external code, set breakpoints etc. Some had built in assemblers and dissasemblers. I will be writing my own, without external reference except to old books and datasheets. This will form the building blocks of the computer's software and allow me to experiment with talking to different hardware devices, as well as improve my knowledge of the 6809.
 
The input and output to the monitor is RS-232 via the previously described 6850 UART (PDF), so the first thing to do is get that working. No graphical display for my computer just yet!

Here I found the first big problem: baud rate generation. The 6850 is extremely basic in this regard, and lacks a proper baud rate generator. Instead it can only be configured to use the E clock as-is, or divide it by 16 or 64. Since my E clock is a quarter of the crystal frequency (4 MHz), this leaves me with no useful baud rates: 1000000, 62500 or 15625. None of these is even close to a standard rate.
 
There are two solutions:
  1. Use a dedicated baud rate generator circuit with its own crystal.
  2. Switch the main crystal to something more useful for baud rate generation.
Option 1 is ideal but requires extra circuitry and I have very little room as it is. As it happens a 3.6864 MHz crystal is available, which when divided down by 4 and then 16 yields a nice and standard 57600 baud rate. So I duly ordered several of these. Actually I meant to order 3 but ended up with 12 because I didn't notice that they were being sold in batches of 4!
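The arithmetic behind the crystal choice is easy to check (Python used purely as a calculator here; the divisors are the 6850's ÷1/÷16/÷64 modes and the 6809's ÷4 E clock generation):

```python
def baud_rates(crystal_hz):
    """Baud rates a 6850 can produce when clocked from the 6809 E line.

    E is the crystal frequency divided by 4; the 6850 can then use it
    as-is, or divided by 16 or 64.
    """
    e_clock = crystal_hz // 4
    return [e_clock // d for d in (1, 16, 64)]

print(baud_rates(4_000_000))   # 4 MHz crystal: nothing standard
print(baud_rates(3_686_400))   # 3.6864 MHz crystal: 57600 in /16 mode
```

With the 3.6864 MHz crystal, even the ÷64 mode lands on a tidy 14400, which is why "baud rate crystals" like this exist at all.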

Of course a downside of this change is that my computer is now a tiny bit slower than it was with the 4 MHz crystal. Oh well.

In addition to the UART, the circuit also needs a level shifter. This is because RS-232 generally uses -12V and +12V signalling levels, versus the 0V and 5V used in TTL. A MAX232 IC fits the bill nicely, and only needs four electrolytic capacitors to form a complete circuit. Whilst I could have used the same USB serial port converter as on my EEPROM programmer, I feel that would not be in the spirit of the 80s tech the rest of my computer uses.

Programming the UART is fairly simple. It has four 8 bit registers:
  • Read data to receive
  • Write data to send 
  • Read status
  • Write configuration
A single Register Select pin is used in combination with the R/W pin to select which register to manipulate. Register Select is connected to A0. With the same address decoding as used on the LED output circuit, the UART is therefore at addresses:

$8000 - send and receive data
$8001 - status and configuration
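Put another way, register selection is just A0 plus the R/W line. A trivial model of the decoding (Python, for illustration only; it assumes the chip is selected anywhere in the $8000 block as described above):

```python
def mc6850_register(address, read):
    """Which 6850 register a bus cycle hits, given A0 and R/W."""
    rs = address & 1                    # Register Select is wired to A0
    if rs == 0:
        return "receive data" if read else "transmit data"
    return "status" if read else "configuration"

print(mc6850_register(0x8000, read=True))    # receive data
print(mc6850_register(0x8001, read=False))   # configuration
```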

The 6850 has three Chip Select lines, to aid in address decoding. But since I already have an address decoder, only the active low CS2 line is required.

The only other connections of note are the E signal and the rx/tx clocking inputs. All three are connected to the MPU E line. (If I had an external baud rate generator it would connect to the rx/tx pins instead of E.)

That pretty much describes the circuit:


As usual, a PostScript version can be downloaded here.

Software-wise, the first job is to test my understanding of the UART by running some code.  A way to do this is to write a program which can both output text to the port and read user input. To keep things simple, polling is used instead of interrupts.

The 6850 is configured through the configuration register. The UART must first be reset, then the comms parameters configured.  In my case I need a divisor of 16 on the clock, and 8 data bits without parity. Interrupts should also be disabled.
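From my reading of the 6850 datasheet, that works out to two writes to the control register (the bit values below are worth double-checking against the datasheet before trusting them):

```python
# 6850 control register fields (per the datasheet):
#   CR1:CR0 = 11  -> master reset; 01 -> divide rx/tx clock by 16
#   CR4:CR2 = 101 -> 8 data bits, no parity, 1 stop bit
#   CR6:CR5 = 00  -> /RTS low, transmit interrupt disabled
#   CR7     = 0   -> receive interrupt disabled
MASTER_RESET     = 0b00000011   # first write: reset the ACIA
CONFIG_8N1_DIV16 = 0b00010101   # second write: 8N1, E/16 clocking, no IRQs

print(hex(MASTER_RESET), hex(CONFIG_8N1_DIV16))
```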

Sending data is as simple as polling for "transmit empty" and, when it is, writing a byte. Receiving is similar: poll for "receive full" and then read a byte. Subroutines for sending and receiving strings (or any other data) can be built up out of these basic blocks.
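Polled I/O on the 6850 boils down to spinning on a status bit. A behavioural sketch (Python standing in for the 6809 assembly; the bit positions, RDRF in bit 0 and TDRE in bit 1, come from the datasheet, but the `uart` object is a hypothetical stand-in for the memory-mapped registers):

```python
RDRF = 0x01   # status bit 0: Receive Data Register Full
TDRE = 0x02   # status bit 1: Transmit Data Register Empty

def put_byte(uart, byte):
    while not (uart.status() & TDRE):   # spin until the transmitter is free
        pass
    uart.write_data(byte)

def get_byte(uart):
    while not (uart.status() & RDRF):   # spin until a byte has arrived
        pass
    return uart.read_data()

def put_string(uart, text):
    for ch in text.encode("ascii"):     # strings built from the basic blocks
        put_byte(uart, ch)
```

The assembly versions are the same shape: load the status register, AND with the bit mask, branch back if zero, then touch the data register.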

Thus the serialtest.a program was written. It outputs a greeting, waits for some input, echoes that input back to the user, and then loops back to wait for another string. It is fairly crude and does not support backspace or any input editing, but it works well. In addition, because serial terminal software does not generally echo keystrokes directly, each byte is sent back to the terminal as it is received. Thus the user can see what they are typing.

The code is on github, for anyone curious.

And here is a picture of the breadboard, along with a screenshot of the program in "action". The terminal software used was minicom, running on my Linux box, which in turn was accessed over SSH from my Mac.



I have "bodged" serial input into the computer via the breadboard by removing one of the plugs from a null modem cable and soldering on three PCB pins. This is the grey lead coming onto the board at the bottom left.

The next task, which I've started, is to turn the serialtest.a program into a monitor. I doubt I will ever write an assembler, but for debugging code cross-assembled on my Linux machine it should prove invaluable. One other use I plan to make of the monitor is to make the EEPROM programmable from the 6809 directly. Once this is done I can try new programs without having to move the EEPROM chip from the 6809 breadboard to my homemade programmer and back again...

[This blog post had to be rewritten due to the Android "blogger" software consuming the finished posting.  Utterly rubbish "modern" software which I have now removed from my phone. Thank you Google!]

Sunday, 14 April 2013

EEPROM programmer finished, and perspective on a 1982 programming book

The EEPROM programmer is finished!  I soldered up the PCB, and after fixing a couple of issues it checks out.  My programmer has some very bright LEDs, and to show them off I've added a new "debugdelay" option to set how fast the programming should happen.  In the process I've removed the old "debug" option because the 8KByte flash in the ATMEGA8 is now full.  It wasn't really that useful once the code was shown to be working anyway.

Top of the programmer PCB:



Bottom of the programmer PCB:



As you can see a couple of small modifications were needed:
  • The 3.5mm jack socket was too far from the edge of the PCB, such that the plug was blocked by the edge.  Cured by hacking out a notch in the board.
  • The EEPROM socket had no power pins connected.  This was cured by the addition of some unsightly jump wires. 
Luckily I noticed the missing power connections to the EEPROM before I soldered the decoupling caps.  This meant I could use the holes meant for the capacitors for the jump leads.  The caps were only ever a "nice to have", so hopefully they won't be missed.

Anyway, the programmer is done and works just lovely!

I'm now knee deep in learning 6809 assembly.  From a modern perspective, it's an extremely labour intensive way to write code, but it's fun.  In comparison to other 8-bit MPUs the 6809 was clearly quite a way ahead, with its addressing modes and "massive" selection of registers (9, including 2 general purpose 8-bit accumulators).  But it remains far behind its 16-bit big brothers, such as the 68000, in terms of the ease with which it can be programmed in assembly.

Whilst there's a fair amount of info on this processor online, there's no substitute for a contemporary book on the subject.  So I bought one: Programming the 6809, by Rodnay Zaks and William Labiak, published in 1982.  As well as a useful technical resource, it's also got some fascinating insights into programming computers in the early 80s.  Some interesting quotes:
Programming also requires a strict documentation discipline. Well documented programs are understandable to others, as well as to the author.  Documentation must be both internal and external to the program.  Internal program documentation refers to the comments used in the body of a program to explain its operation. External documentation refers to the design documents that are separate from the program, including written explanations, manuals, and flowcharts.
This is as true today as it was in 1982, the fixation on flowcharts perhaps notwithstanding.

Another gem, illustrating the simpler times:
The representation of alphanumeric data, i.e. characters, is completely straightforward: all characters are encoded in an eight-bit code.
 If only that was still true!