The Intel 8086 microprocessor (1978) revolutionized the computer industry and led to the popular x86 architecture. It uses microcode, breaking machine instructions down into simpler micro-instructions. Ken Shirriff explains how it works, studying the chip under a microscope.
The groundbreaking 8086 microprocessor was introduced by Intel in 1978 and led to the x86 architecture that still dominates desktop and server computing. One way that the 8086 increased performance was by prefetching: the processor fetches instructions from memory before they are needed, so the processor can execute them without waiting on the (relatively slow) memory. I’ve been reverse-engineering the 8086 from die photos and this blog post discusses what I’ve uncovered about the prefetch circuitry.
The 8086 was introduced at an interesting point in microprocessor history, where memory was becoming slower than the CPU. For the first microprocessors, the speed of the CPU and the speed of memory were comparable. However, as processors became faster, the speed of memory failed to keep up. The 8086 was probably the first microprocessor to prefetch instructions to improve performance. While modern microprocessors have megabytes of fast cache to act as a buffer between the CPU and much-slower main memory, the 8086 has just 6 bytes of prefetch queue. However, this was enough to increase performance by about 50%.
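To make the idea concrete, here is a minimal sketch of a 6-byte FIFO prefetch queue in Python. It is only an illustration of the concept described above, not the 8086's actual circuitry: the names (`PrefetchQueue`, `bus_idle_cycle`, `next_instruction_byte`, `flush`) are invented for this example, and the real chip fetches 16-bit words and handles bus timing in far more detail.

```python
from collections import deque

QUEUE_SIZE = 6  # the 8086's prefetch queue holds 6 bytes

class PrefetchQueue:
    """Toy model of a byte-oriented FIFO prefetch queue (illustrative only)."""

    def __init__(self, memory):
        self.memory = memory      # program bytes, indexed by address
        self.queue = deque()      # prefetched instruction bytes
        self.fetch_addr = 0       # next address the bus side will fetch

    def bus_idle_cycle(self):
        """When the bus isn't needed for data, prefetch the next byte."""
        if len(self.queue) < QUEUE_SIZE and self.fetch_addr < len(self.memory):
            self.queue.append(self.memory[self.fetch_addr])
            self.fetch_addr += 1

    def next_instruction_byte(self):
        """Execution side takes a byte; it only stalls if the queue is empty."""
        if self.queue:
            return self.queue.popleft()
        return None  # queue empty: execution must wait on slow memory

    def flush(self, new_addr):
        """A jump discards the queued bytes and restarts fetching at the target."""
        self.queue.clear()
        self.fetch_addr = new_addr


# Example: the bus side fills the queue while the execution side drains it.
program = list(range(16))             # stand-in for instruction bytes
pq = PrefetchQueue(program)
for _ in range(8):                    # a few idle bus cycles fill the queue
    pq.bus_idle_cycle()
print([pq.next_instruction_byte() for _ in range(4)])  # bytes 0..3, no waiting
pq.flush(12)                          # a jump empties the queue
pq.bus_idle_cycle()
print(pq.next_instruction_byte())     # first byte fetched from the jump target
```

The key point the toy model captures is the trade-off Ken describes: as long as the queue has bytes, execution never waits on memory, but a jump throws the prefetched bytes away.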
See Ken’s full article for detailed pictures of how the 8086 achieved this.