Medium discusses a debate from the 1980s: are complex instruction sets superior to reduced instruction set processors?
RISC chips are huge today, but it is fun to recall the *huge* debate in deep academic tech circles over the arrival of RISC chips. In the early 1980s the Intel 8086 began to dominate with the PC. Chips mostly followed the basic microprocessor model. What was next?
Because most commercial programming happened in assembly language, a natural evolution of instruction sets was to move “high level operations” into the silicon. This approach, “Complex Instruction Set Computing” or CISC, was the dominant force, riding Moore’s Law. Then came the Intel 80286.
Some in academia, particularly up 101 a bit at Berkeley & Stanford, began to question putting all this complexity in silicon. It required a lot of complex engineering and really complicated compilers. What if compilers could make faster code from simpler instructions?
This became known as “Reduced Instruction Set Computing” or RISC. It was quite counter-intuitive and definitely ran up against Intel. The raging debate was RISC v. CISC.
Articles in Byte, a preeminent magazine at the time, discussed the issue.
RISC v. CISC continued literally for decades. RISC turned into ARM and the whole world of billions of embedded chips. The economics and customization enabled by the licensing business model worked well for devices that could not carry heavy royalties. Then came phones! Then the iPod!
Read the details in the insightful article here.