Saturday, November 24, 2007

The First DOS Machine

Seattle Computer Products (SCP) introduced their 8086 16-bit computer system in October 1979, nearly two years before the introduction of the IBM PC. By "computer system", I actually mean a set of three plug-in cards for the S-100 bus: the 8086 CPU card, the CPU Support Card, and the 8/16 RAM. At that time SCP did not manufacture the S-100 chassis these cards would plug into. That chassis and a computer terminal would be needed to make a complete working computer.

The S-100 Bus

Inside of a modern personal computer you'll find a motherboard crammed with the heart of the computer system: CPU, memory, disk interface, network interface, USB interface, etc. Off in one corner you usually find four or five PCI slots for add-in cards, but for most people no additional cards are needed (except maybe a graphics card).

In contrast, the S-100 Bus motherboard contained no components at all. A full-size motherboard had nothing but 18 to 22 card slots. Each slot accepted a 5" x 10" S-100 card with its 100-pin edge connector. A typical computer system would have a card with a CPU, possibly several cards with memory, a card for a floppy disk interface, a card for serial I/O, possibly a video card, etc.

This arrangement was started by MITS with the Altair 8800 computer, but eventually became standardized by the Institute of Electrical and Electronics Engineers as IEEE-696. During the standardization process, the S-100 bus was extended from being an 8-bit bus (typically used by 8080 and Z80 processors) to a 16-bit bus. It was this extension to 16-bits that made the S-100 bus a suitable target for the 8086 computer from SCP.

SCP also wanted to take advantage of the vast range of existing cards for the S-100 bus. They didn't need to make cards for disk interface, serial I/O, video, etc. since they were already available. Even the (empty) chassis itself was a standard item. An existing computer owner could simply swap out his 8-bit CPU card and replace it with the 16-bit SCP card, and all the hardware would work together (but the software was another matter).

The SCP 16-bit Computer System

The 8086 CPU card carried an Intel 8086 microprocessor along with the dozens of logic chips needed to interface it to the S-100 bus. The signals and timings of the bus were built around the original 8-bit Intel 8080, and it took a lot of "glue" logic to create the same signals with a different microprocessor (this was also true for the Z80). For the 8086, however, there was also a significant added layer of working in "8-bit mode" or "16-bit mode". These modes were defined by the IEEE standard so that a 16-bit CPU could be used with existing 8-bit memory with a performance penalty, or with new 16-bit memory at full speed. Essentially, the CPU card could request 16 bits for each memory access. If there was no response, the card went into 8-bit mode: the microprocessor would be stopped momentarily while logic on the card ran two consecutive 8-bit memory cycles to fetch the required 16 bits.

The SCP 8086 CPU card had a mechanical switch that allowed the microprocessor to run at either 4 MHz or 8 MHz (while a processor you get today would run at around 3,000 MHz = 3 GHz). When SCP first started sales, Intel could not yet provide the 8 MHz CPU chip, so early units could only be run at 4 MHz.

The CPU Support Card was a collection of stuff needed to make a working computer. The most important items were:

  • A boot ROM with debugger.
  • A serial port to connect to a computer terminal.
  • A time-of-day clock.

The 8/16 RAM had 16 KB (not MB!) of memory. It could operate in "16-bit mode" so the 16-bit processor could run at full speed. In the early days, using four of these cards to build a system with 64 KB of memory would have been considered plenty.

The only 16-bit software available when the system was first released was Microsoft Stand-Alone Disk BASIC. SCP did not make a floppy disk controller, but supported disk controllers made by Cromemco and Tarbell.

Development Timeline

I earned my BS in Computer Science in June of 1978 and started work at SCP. Intel had just announced their new 8086 microprocessor, and I was sent to a technical seminar to check it out. The May 1978 issue of IEEE Computer Magazine had published the first draft of the S-100 standard which included 16-bit extensions, so I was excited about the possibility of making a 16-bit S-100 computer. My primary duties at SCP were to improve and create new S-100 memory cards, which at that time were SCP's only products. But I was given the go-ahead to investigate a computer design when I had the time.

By January of 1979 my design for the 8086 CPU and CPU Support Card had been realized in prototypes. I was able to do some testing, but then hardware development needed to be put on hold while I wrote some software. Not even Intel could provide me with an 8086 assembler to do the most basic programming, so I wrote one myself. It was actually a Z80 program that ran under CP/M, but it generated 8086 code. Next I wrote a debugger that would fit into the 2 KB ROM on the CPU Support Card.

In May we began work with Microsoft to get their BASIC running on our machine. I brought the computer to their offices and sat side by side with Bob O'Rear as we debugged it. I was very impressed with how quickly he got it working. They had not used a real 8086 before, but they had simulated it so BASIC was nearly ready to go when I arrived. At Microsoft's invitation, I took the 8086 computer to New York to demonstrate it at the National Computer Conference in the first week of June.

There was a small setback when the June 1979 issue of IEEE Computer Magazine came out. The draft of the IEEE S-100 standard had changed significantly. I got involved in the standards process to correct some errors that had been introduced. But I still had to make design changes to the 8086 CPU card, which required a whole new layout of the circuit board, with a risk of introducing new errors. It turned out, however, that no mistakes were made, so production was able to start 3 months later.

Evolution

The 16 KB memory card was eventually replaced by a 64 KB card. Intel introduced the 8087 coprocessor that performed floating-point math, and SCP made an adapter that plugged into the 8086 microprocessor socket to make room for it. Later SCP updated the 8086 CPU card so it had space for the 8087.

The software situation did not change until I wrote DOS for this machine, first shipping it in August 1980.

When IBM introduced their PC in August 1981, its 8088 processor used 8-bit memory, virtually identical in performance to using 8-bit memory with the SCP 8086 CPU, except that IBM ran their processor at 4.77 MHz while the SCP machine ran at 8 MHz. So the SCP 8086 computer system was about three times faster than the IBM PC.

IBM also reintroduced memory limitations that I had specifically avoided in designing the 8086 CPU. For S-100 computers, a low-cost alternative to using a regular computer terminal was to use a video card. The video card, however, used up some of the memory address space. The boot ROM would normally use up address space as well. SCP systems were designed to be used with a terminal, and the boot ROM could be disabled after boot-up. This made the entire 1 MB of memory address space available for RAM. IBM, on the other hand, had limited the address space in their PC to 640 KB of RAM due to video and boot/BIOS ROM. This limitation has been called the "DOS 640K barrier", but it had nothing to do with DOS.

Microsoft took full advantage of the SCP system capability. In 1988, years after SCP had shut down, they were still using the SCP system for one task that only it could perform ("linking the linker"). Their machine was equipped with the full 1 MB of RAM – 16 of the 64 KB cards. That machine could not be retired until 32-bit software tools were developed for Intel's 386 microprocessor.

Sunday, September 30, 2007

Design of DOS

I set to work writing an operating system (OS) for the 16-bit Intel 8086 microprocessor in April of 1980. At that point my employer, Seattle Computer Products (SCP), had been shipping their 8086 computer system (which I had designed) for about 6 months. The only software we had for the computer system was Microsoft Stand-Alone Disk BASIC. “Stand-Alone” means that it worked without an OS, managing the disk directly with its own file system. It was fast for BASIC, but it wouldn’t have been suitable for writing, say, a word processor.

We knew Digital Research was working on a 16-bit OS, CP/M-86. At one point we were expecting it to be available at the end of 1979. Had it made its debut at any time before DOS was working, the DOS project would have been dropped. SCP wanted to be a hardware company, not a software company.

I envisioned the power of the 8086 making it practical to have a multi-user OS, and I laid out a plan to the SCP board of directors to develop a single-user OS and a multi-user OS that would share the same Application Program Interface (API). This would be a big design job that would take time to get right – but we were already shipping our computer system and needed an OS now. So I proposed to start with a “quick and dirty” OS that would eventually be thrown away.

Baseline Experience

I had graduated from college (BS in Computer Science) less than two years earlier. I spent the next year tentatively in graduate school while also working at SCP. So I didn’t have much experience in the computer industry.

This is not to say I had no experience at all. In college I had an IMSAI 8080 with a North Star floppy disk system. I made my own peripherals for it, which included a Qume daisy-wheel printer with its own Z80-based controller that I designed and programmed myself. I also designed my own Z80-based road-rally computer that used a 9-inch CRT video display mounted in the glove box of my car. And my school work included writing a multitasking OS for the Z80 microprocessor as a term project. The thrust of that project had been to demonstrate preemptive multitasking with synchronization between tasks.

For SCP, my work had been ostensibly hardware-oriented. But the 8086 CPU had required me to develop software tools including an assembler (the most basic tool for programming a new processor) and a debugger. These tools shipped with the product.

My hands-on experience with operating systems was limited to those I had used on microcomputers. I had never used a “big computer” OS at all. All programming projects in high school and college were submitted on punched cards. On my own computer I used North Star DOS, and at SCP we had Cromemco CDOS, a CP/M look-alike.

File System Performance

An important design parameter for the OS was performance, which drove my choice for the file system. I had learned a handful of file management techniques, and I spent some time analyzing what I knew before making a choice. These were the candidates:

North Star DOS and the UCSD p-system used the simplest method: contiguous file allocation. That is, each file occupies consecutive sectors on the disk. The disk directory only needs to keep track of the first sector and the length for each file – very compact. Random access to file data is just as fast as sequential access, because it’s trivial to compute the sector you want. But the big drawback is that once a file is boxed in by other files on the disk, it can’t grow. The whole file would then have to be moved to a new spot with more contiguous free space, with the old location leaving a “hole”. After a while, all that’s left are the holes. Then you have to do a time-consuming “pack” to shuffle all the files together and close the holes. I decided the drawbacks of contiguous allocation were too severe.
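
To see why random access is trivial with contiguous allocation, here is a minimal sketch in C (the struct and names are mine, not North Star's or UCSD's): the directory entry only records where the file starts and how many sectors it fills, and the sector holding any byte offset follows from one division.

    #include <stdint.h>

    #define SECTOR_SIZE 256          /* North Star used 256-byte sectors */

    /* Hypothetical directory entry for a contiguously allocated file. */
    struct dir_entry {
        char     name[8];
        uint16_t first_sector;       /* file starts here...             */
        uint16_t length_sectors;     /* ...and fills this many in a row */
    };

    /* Which sector holds a given byte offset? One division, no table lookups,
     * no chain walking, no extra disk reads. */
    uint16_t sector_for_offset(const struct dir_entry *f, uint32_t offset)
    {
        return f->first_sector + (uint16_t)(offset / SECTOR_SIZE);
    }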

UNIX uses a clever multi-tiered approach. For small files, the directory entry for a file has a short table of the sectors that make up the file. These sectors don’t have to be contiguous, so it’s easy to extend the file. If the file gets too large for the list to fit in the table, UNIX adds a tier. The sectors listed in the table no longer reference data in the file; instead, each entry identifies a sector which itself contains nothing but a list of sectors of file data. If the file gets huge, yet another tier is added – the table entries each reference a sector whose entries reference a sector whose entries identify the file data. Random access to file data is very fast for small files, but as the files get larger and the number of tiers grows, it will take one or two additional disk reads just to find the location of the data you really want.
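
Here is a rough sketch of that tiered lookup, assuming index sectors that each hold 256 sector numbers (the numbers and names are illustrative only, not the actual UNIX on-disk format): each added tier costs one more disk read before the data itself can be fetched.

    #include <stdint.h>

    extern void read_sector(uint16_t sector_num, void *buffer);  /* low-level driver, assumed */

    /* Follow 'tiers' levels of indirection to reach a data sector.
     * slot[t] picks one of the 256 entries in each index sector.
     * With 0 tiers the table entry is the data sector itself; each added
     * tier costs one more disk read just to locate the data. */
    uint16_t find_data_sector(uint16_t table_entry, int tiers, const uint8_t slot[])
    {
        uint16_t sector = table_entry;
        for (int t = 0; t < tiers; t++) {
            uint16_t index[256];          /* a sector holding nothing but sector numbers */
            read_sector(sector, index);
            sector = index[slot[t]];
        }
        return sector;
    }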

CP/M didn’t track disk space directly in terms of sectors. Instead it grouped sectors together into a “cluster” or “allocation unit”. The original CP/M was designed specifically around 8” disks, which had 2002 sectors of 128 bytes each. By making each cluster 8 sectors (1K), there were fewer than 256 clusters on a disk. Thus clusters could be identified using only one byte. The directory entry for CP/M had a table of 16 entries listing the clusters in the file, so for a file of 16K or less both random and sequential access were fast and efficient. But when a file exceeded 16K, it needed a whole new directory entry to store an additional 16K of cluster numbers. There was no link between these entries; they simply contained the same name and a number identifying which section of the file they represented (the “extent”). This led to a potential performance nightmare, especially for random access. When switching between extents, the system had to perform its standard linear search of the directory for a file of the correct name and extent. This search could take multiple disk reads before the requested data was located.
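
For reference, this is approximately what a CP/M directory entry looked like, written as a C struct with my own field names and some details simplified (consult CP/M documentation for the exact layout). Each entry maps at most 16K of data; a bigger file needs more entries, and seeking into a different 16K section means another linear search of the directory for the entry with the matching name and extent number.

    #include <stdint.h>

    /* Approximate layout of a 32-byte CP/M directory entry (illustrative). */
    struct cpm_dir_entry {
        uint8_t user;            /* user number (0xE5 marks an unused entry)       */
        char    name[8];         /* file name, space padded                        */
        char    type[3];         /* file type ("extension")                        */
        uint8_t extent;          /* which 16K section of the file this entry maps  */
        uint8_t reserved[2];
        uint8_t record_count;    /* 128-byte records in use within this extent     */
        uint8_t clusters[16];    /* the 1K allocation units holding the data       */
    };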

Microsoft Stand-Alone Disk BASIC used the File Allocation Table (FAT). Unlike all the other file systems, the FAT system separates the directory entry (which has the file name, file size, etc.) from the map of how the data is stored (the FAT). I will not give a detailed explanation of how that worked here as the system has been well documented, such as my 1983 article An Inside Look at MS-DOS at http://www.patersontech.com/dos/Byte/InsideDos.htm. Like CP/M, BASIC used a 1K cluster so that, once again, there were fewer than 256 clusters on the standard 8” floppy disk of the day. The FAT needs one entry per cluster, and for BASIC the entry needed to be just one byte, so the FAT fit within two 128-byte sectors. This small size also meant it was practical, even with the limited memory of the time, to keep the entire FAT in memory at all times.

To me, the big appeal of the FAT system was that you never had to read the disk just to find the location of the data you really wanted. FAT entries are in a chain – you can’t get to the end without visiting every entry in between – so it is possible the OS would have to pass through many entries finding the location of the data. But with the FAT entirely in memory, passing through a long chain would still be 100 times faster than a single sector read from a floppy disk.
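
A minimal sketch of that chain walk, assuming the one-byte-per-cluster FAT described above (the end-of-chain marker and the names are placeholders, not the exact values BASIC used): the whole table lives in memory, so following even a long chain never touches the disk.

    #include <stdint.h>

    #define FAT_ENTRIES  256     /* one byte per cluster: fits in two 128-byte sectors */
    #define END_OF_CHAIN 0xFF    /* placeholder end-of-file marker for this sketch     */

    static uint8_t fat[FAT_ENTRIES];   /* the whole FAT stays resident in memory */

    /* Find the cluster that holds the Nth cluster's worth of a file's data.
     * The walk is pure memory traffic; no disk read is needed to locate data. */
    uint8_t nth_cluster(uint8_t first_cluster, unsigned n)
    {
        uint8_t c = first_cluster;
        while (n-- > 0 && fat[c] != END_OF_CHAIN)
            c = fat[c];                /* each entry names the next cluster in the file */
        return c;
    }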

Another thing I liked about FAT was its space efficiency. There were no tables of sectors or clusters that might be half full because the file wasn’t big enough to need them all. The size of the FAT was set by the size of the disk.

When I designed DOS I knew that fitting the cluster number in a single byte, limiting the number of clusters to 256, wouldn’t get the job done as disks got bigger. I increased the FAT entry to 12 bits, allowing over 4000 clusters. With a cluster size of as much as 16K bytes, this would allow for disks as large as 64MB. You could even push it to a 32K cluster and 128MB disk size, although that large cluster could waste a lot of space. These disk sizes seemed enormous to me in 1980. Only recently had we seen the first 10MB hard disks come out for microcomputers, and that size seemed absurdly lavish (and expensive).
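
The arithmetic behind those limits, as a quick check (this assumes the full 4096 values a 12-bit entry can hold, a few of which are reserved in practice):

    #include <stdio.h>

    int main(void)
    {
        /* A 12-bit FAT entry can hold 2^12 = 4096 cluster numbers. */
        long clusters = 4096;
        printf("16K clusters: %ld MB max disk size\n", clusters * 16 / 1024);  /* 64 MB  */
        printf("32K clusters: %ld MB max disk size\n", clusters * 32 / 1024);  /* 128 MB */
        return 0;
    }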

Obviously I’m no visionary. Disk size has grown faster than any other computer attribute, up by a factor of 100,000. (Typical memory size is up by a factor of 30,000, while clock speed is only up 1000x.) Microsoft extended the FAT entry to 16 bits, then to 32 bits to keep up. But this made the FAT so large that it was no longer kept entirely in memory, taking away the performance advantage.

Hardware Performance

On the few computer systems I personally used, I had experienced a range of disk system performance. North Star DOS loaded files with lightning speed, while the CP/M look-alike Cromemco CDOS took much longer. This was not a file system issue – the difference still existed with files less than 16K (just one CP/M “extent”) and contiguously allocated.

North Star DOS did the best job possible with its hardware. Each track had 10 sectors of 256 bytes each, and it could read those 10 sectors consecutively into memory without interruption. To read in a file of 8KB would require reading 32 sectors; the nature of the file system ensured they were contiguous, so the data would be found on four consecutive tracks. When stepping from track to track, it would have missed the start of the new track and had to wait for it to spin around again. That would mean it would take a total of 6.2 revolutions (including three revolutions lost to track stepping) to read the 8KB. The 5-inch disk turned at 5 revolutions per second so the total time would be less than 1.3 seconds.

The standard disk with CP/M (CDOS) had a track with 26 sectors of 128 bytes each. CP/M could not read these sectors consecutively. It used an interleave factor of 6, meaning it would read every sixth sector. The five-sector gap between reads presumably allowed for processing time. An 8KB file would occupy about 2½ tracks, which, at 6 revolutions per track (because of the interleave), would take about 17 revolutions of the disk to be read (including two revolutions lost to track stepping). The 8-inch disk turned at 6 revolutions per second so the total time would be over 2.8 seconds. This is more than twice as long as the North Star system, which used fundamentally slower hardware.
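
As a check on those two estimates, here is a small C sketch that simply reproduces the arithmetic from the two paragraphs above (sector counts, rotation speeds, and the rough track-stepping allowances are taken from the text):

    #include <stdio.h>

    int main(void)
    {
        /* North Star: 8 KB = 32 sectors of 256 bytes, 10 sectors per track,
         * read back to back; roughly 3 extra revolutions lost stepping tracks. */
        double ns_revs = 32.0 / 10.0 + 3.0;                    /* about 6.2 revolutions */
        printf("North Star: %.1f revs = %.2f s at 5 rev/s\n", ns_revs, ns_revs / 5.0);

        /* CP/M (CDOS): 8 KB spans about 2.5 tracks of 26 128-byte sectors;
         * interleave 6 means about 6 revolutions per track, plus about
         * 2 revolutions lost stepping. */
        double cpm_revs = 2.5 * 6.0 + 2.0;                     /* about 17 revolutions */
        printf("CP/M:       %.1f revs = %.2f s at 6 rev/s\n", cpm_revs, cpm_revs / 6.0);
        return 0;
    }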

At least part of the reason CP/M was so much slower was its poor interface to the low-level “device driver” software. CP/M called this the BIOS (for Basic Input/Output System). Reading a single disk sector required five separate requests, and only one sector could be requested at a time. (The five requests were Select Disk, Set Track, Set Sector, Set Memory Address, and finally Read Sector. I don’t know if all five were needed for every read if, say, the disk or memory address was the same as in the previous request.)

I called the low-level driver software the I/O System, and I was determined its interface would not be a bottleneck. Only a single request was needed to read disk data, and that request could be for any number of sectors. This put more of the work on the I/O System, but gave it the freedom to maximize performance. The floppy disk format did not use interleave, and an 8KB file could be read from an 8-inch disk in 4½ revolutions, which is less than 0.8 seconds.
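
The contrast between the two driver interfaces, sketched as C call sequences (the function names and prototypes are hypothetical; the real CP/M BIOS was reached through a jump table of assembly entry points, not C calls):

    #include <stdint.h>

    /* Hypothetical prototypes standing in for the CP/M BIOS jump-table entries. */
    extern void select_disk(int drive);
    extern void set_track(int track);
    extern void set_sector(int sector);
    extern void set_dma_address(void *buffer);
    extern void read_one_sector(void);

    /* Hypothetical prototype in the style of the DOS I/O System interface. */
    extern int io_read(int drive, uint32_t first_sector, unsigned count, void *buffer);

    /* CP/M style: five requests, and only one 128-byte sector per sequence. */
    void read_cpm_style(int drive, int track, int sector, void *buf)
    {
        select_disk(drive);
        set_track(track);
        set_sector(sector);
        set_dma_address(buf);
        read_one_sector();
    }

    /* I/O System style: one request for any number of consecutive sectors,
     * leaving the driver free to transfer them back to back without missing
     * sectors as they pass under the head. */
    void read_dos_style(int drive, uint32_t first, unsigned count, void *buf)
    {
        io_read(drive, first, count, buf);
    }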

When hard disks were first introduced on the IBM PC/XT in 1983, the I/O System provided by IBM once again used an interleave factor of 6. Some aftermarket add-on hard disks were available with an interleave factor as low as 3. In 1983 I founded Falcon Technology, which made the first PC hard disk system that required no interleave. Once hard disks started having built-in memory, interleave was completely forgotten.

CP/M Translation Compatibility

For DOS to succeed, it would need useful applications (like word processing) to be written for it. I was concerned that SCP might have trouble persuading authors of application software to put in the effort to create a DOS version of their programs. Few people had bought SCP’s 16-bit computer, so the installed base was small. Without the applications, there wouldn’t be many users, and without the users, there wouldn’t be many applications.

My hope was that by making it as easy as possible to port existing 8-bit applications to our 16-bit computer, we would get more programmers to take the plunge. And it seemed to me that CP/M translation compatibility was what would make the job as easy as possible. Intel had defined rules for translating 8-bit programs into 16-bit programs; CP/M translation compatibility means that when a program’s request to CP/M goes through the translation, it becomes an equivalent request to DOS. My first blog entry explains this in more detail.

So I made CP/M translation compatibility a fundamental design goal. This required me to create a very specific Application Program Interface that implemented the translation compatibility. I did not consider this the primary API – there was, in fact, another API more suited to the 16-bit world and that had more capabilities. Both APIs used CP/M-defined constructs (such as the “File Control Block”); the compatibility API had to, and I didn’t see a reason to define something different for the primary API.

I myself took advantage of translation compatibility. The development tools I had written, such as the assembler, were originally 8-bit programs that ran under CP/M (CDOS). I put them through the translator and came up with 16-bit programs that ran under DOS. These translated tools were included with DOS when shipped by SCP. But I don’t think anyone else ever took advantage of this process.

Friday, August 17, 2007

The Contributions of CP/M

Gary Kildall’s CP/M was the first general-purpose operating system (OS) for 8-bit computers. It demonstrated that it was possible to pare down the giant operating systems of mainframes and minicomputers into an OS that provided the essential functionality of a general-purpose Application Program Interface, while leaving enough memory for applications to run. This was a radical idea.

Beyond this technical achievement is the success CP/M had in the marketplace. It stands alone as the catalyst that launched the microcomputer industry as a successful business. Without those signs of success, companies like IBM wouldn’t have been tempted to enter the business and fuel its growth.

The Significance of the BIOS Interface

The concept of a software interface between separate programs or program components has been around for a long, long time. The most obvious example is the Application Program Interface (API) that all operating systems provide to application programs.

But interfaces can exist at other levels, not just between the OS and applications. Interfaces are used whenever two components must interact, but the components are developed independently or may need to be changed independently.

Much has been made of the layers of interfaces used by CP/M. (John Wharton: “Gary’s most profound contribution”; Harold Evans: “truly revolutionary”, “a phenomenal advance”; Tom Rolander: “supreme accomplishment”, “originator of that layering of the software”.) The core of CP/M was the Basic Disk Operating System (BDOS). On one side of the BDOS was the API exposed to application programs; on the other side was the Basic Input/Output System (BIOS) that connected the BDOS to specific computer hardware.

Certainly the idea of these layers of interfaces was not new to the computer industry. For example, UNIX (like all operating systems) provided an API for application programs, and connected to the hardware with an interface to low-level device driver software. These are layers equivalent to CP/M’s BDOS and BIOS, but much more sophisticated. So I am a bit mystified as to what is so revolutionary about CP/M’s design.

Of course, being the first microcomputer OS, CP/M was the first to put these layers on a microcomputer. So I guess in distilling down the essence of an OS so it would fit on a microcomputer, we can give Kildall credit for not distilling out too much and leaving out the interface layers. Except that he actually did, originally. As described in his memoirs, quoted by Evans, in early versions of CP/M he had to rewrite the parts that manage the hardware “so many times that the tips of my fingers were wearing thin, so I designed a general interface, which I called the BIOS.” I read somewhere that the BIOS interface was first added to version 1.3 of CP/M. It may well be that Kildall had not seen a device driver or BIOS-level interface before. But the advantages that became obvious to him had been just as visible to his predecessors.

I equipped my first computer, an IMSAI 8080, with the North Star floppy disk system. It came with North Star DOS, which was the first OS that I personally used. It, of course, had a low-level interface so it could be tailored to work with any hardware. This is where my experience with interface layers began. I had not seen CP/M at the time.

CP/M & the DOS Connection

I can think of no specific technical innovations demonstrated by CP/M. The new idea was simply that you could do it – you could actually put a general-purpose operating system on a microcomputer. It worked and the market loved it.

DOS was built on top of this general groundwork. Kildall distilled the OS to a minimal, useful set of capabilities that would fit on a microcomputer. This simplified my task in setting the functionality of DOS. It was a matter of adding or subtracting to an existing & working set rather than developing the list from scratch.

DOS has been accused of being a “rip-off” or “knockoff” of the CP/M “design” or “architecture”. I guess this may refer to the fact that DOS has the same interface layers. Or it may refer to the similar function set. In that sense, since Kildall picked an appropriate set of functions, any subsequent microcomputer OS would have the same ones and would be some sort of “knockoff”.

Wednesday, August 8, 2007

Is DOS a Rip-Off of CP/M?

Paterson v. Evans lawsuit dismissed

In his book They Made America (Little, Brown & Co., 2004), author Harold Evans revives claims that DOS is based on the late Gary Kildall's CP/M, using words such as "slapdash clone" and "blatant copies". I sued the author & publisher for making false and defamatory statements.

The case was dismissed last week shortly before it was to go to trial. The main reason this happened is because the judge ruled that I am a “limited purpose public figure.” This sets a very high bar of protection for free speech, leading the judge to then rule that the book represented protected opinions.

Facts not in dispute

What may be most surprising about the issue is that there is no significant dispute on the actual relationship between DOS and CP/M. The relationship is simply this: DOS implements the same Application Program Interface (API) as CP/M. The API is how an application program (such as a word processor) asks the operating system to perform a task, such as to read or write a disk file.

There is no suggestion that I copied any CP/M code when I wrote DOS. (To this day, I have never seen any CP/M code.) And the internal workings of DOS are quite different. For example, unlike CP/M, DOS used the FAT (File Allocation Table) system for organizing disk files, which made it much faster but meant floppy disks were not interchangeable between CP/M and DOS.

One point of disagreement: In his memoirs (quoted by Evans), Kildall claims that I dissected CP/M to learn how it worked. This is not true, and it doesn’t even make sense. Since DOS worked so differently, there would have been nothing I could learn from CP/M’s internal workings to help in writing DOS.

What do I mean by “implement the same API”? Every operating system has basic functions like reading and writing disk files. The API defines the exact details of how to make it happen and what the results are. For example, to “open” a file in preparation for reading or writing, the application would pass the location of an 11-character file name and the function code 15 to CP/M through the “Call 5” mechanism. The very same sequence would also open a file in DOS, while, say, UNIX did not use function code 15, 11-character file names, or “Call 5” to open a file.
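
As a concrete illustration, here is roughly what that open request looks like from the application's point of view, written as a C sketch with hypothetical names and a simplified layout (a real 8080 program would load register C with the function code 15, point DE at the File Control Block, and CALL address 5):

    #include <stdint.h>

    /* File Control Block as the application lays it out before the call.
     * Field names and sizes are a simplified sketch, not a system header. */
    struct fcb {
        uint8_t drive;          /* 0 = default drive                     */
        char    name[8];        /* file name, space padded ("MYFILE  ")  */
        char    type[3];        /* file type / extension   ("TXT")       */
        uint8_t rest[24];       /* extent number, record counters, etc.  */
    };

    #define F_OPEN 15           /* function code 15 = open file */

    /* Stand-in for the "Call 5" entry point; in a real 8080 program this is
     * C = function code, DE = parameter address, CALL 5. */
    extern uint8_t bdos_call5(uint8_t function, void *param);

    uint8_t open_file(struct fcb *f)
    {
        /* The same function code, FCB layout, and entry mechanism are honored
         * by both CP/M and (after Intel's 8080-to-8086 translation) DOS. */
        return bdos_call5(F_OPEN, f);
    }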

Translation Compatibility

Since CP/M and DOS both had the same API, you would think that a program for one would run on the other, right? Wrong. CP/M was only for 8-bit computers based on the 8080 or Z80 microprocessors. DOS was only for 16-bit computers based on the Intel 8086 microprocessor. At the time DOS was written, there was a vast library of 8-bit CP/M programs, none of which could run on 16-bit DOS computers.

While 8-bit programs could not run on 16-bit computers, Intel documented how the original software developer could mechanically translate an 8-bit program into a 16-bit program. Only the developer of the program with possession of the source code could make this translation. I designed DOS so the translated program would work the same as it had with CP/M – translation compatibility. The key to making this work was implementing the CP/M API.

So sue me

When you boil it all down, the thrust of Evans’ story is that Kildall and his company, Digital Research (DRI), should have sued for copyright infringement and DOS would be dead. CP/M would have then been chosen for the IBM PC (because it was the only choice left), and the history of the PC would be much different.

While DRI was free to sue for copyright infringement, the likely success of such an action is still controversial at best. The question was whether the published API, used by all the applications that ran under CP/M, was protected from being implemented in another operating system. There are experts who say no, and there are experts who say maybe, but from what I can tell as of 2007 there is yet to be a successful finalized case.

I say that because Evans & I each brought our own copyright experts into our negotiations to settle the case. Mine was Lee Hollaar of the University of Utah, while Evans used Douglas Lichtman of the University of Chicago. Lichtman provided a handful of case citations to show “that the issues here are not clear in either direction, and, in a hypothetical copyright suit filed by Kildall, he might have won and he might have lost.” But the only case that seemed to support DRI’s side was a preliminary injunction in 2003 (Positive Software v. New Century Mortgage). We didn’t think it really applied, and as a preliminary injunction it hadn’t gone through a full trial, let alone appeal. I would suppose this is the best he had. Hollaar said to me “I feel that it's clear from the cases under similar circumstances that you would have won.” I have wished my suit against Evans could have really become a copyright suit about CP/M & DOS so this could be settled.

If tiny Seattle Computer Products had been sued by Digital Research back when DOS was new, I’m sure we would have caved instead of fighting it. I would have changed DOS so the API details were nothing like CP/M, and translation compatibility would have been lost. But in the end that would have made absolutely no difference. No one ever used or cared about translation compatibility. I had been wrong to think it was a valuable feature.

Lichtman mentioned to me that he was working for SCO in their lawsuits against Linux. If I understood him correctly, SCO was trying to establish that an API can be protected by copyright, so it could be used to collect royalties on Linux, whose API is the same as or similar to UNIX.