Wyoming | Yeah, I've heard that before.
I heard that with the iAPX 432. x86 survived. The 432 went nowhere. I mean, really, nowhere. Part of the problem was that the 432 was tied to an Ada compiler project, but still, the chip architecture was vastly superior to the x86 at that time.
I heard it with the i860. The architecture was cleaner, faster and better than the x86. The 860 flamed out too.
And I heard it with the i960. By this point Intel was so fed up with pouring millions into new chip architectures that they tried to salvage what they could from the effort and relegated the 960 to embedded systems like printers. In the end, it went nowhere.
All three of those efforts were Intel themselves trying to replace the x86 architecture. Everyone who looked at the x86 architecture in the '80s and early '90s, inside and outside of Intel, said "You know... this sucks. There are lots of better memory, addressing, and instruction set architectures out there. Let's do something cleaner, faster, better, more scalable, etc."
Intel's own customers said "Give us faster x86 chips. You can keep those other things."
There's such a huge investment in x86 code now that any replacement will have to offer a mode or memory model that provides backwards compatibility with the Pentium line of CPUs.
This isn't a new issue; I used to hear how the next generation of mainframes would replace the System/370 instruction set. Here we are, 40 years and billions of dollars invested in the System/370 architecture, and when I look around IBM's mainframe product offerings, I can still find a way to run an object deck from even a S/360 on the most modern machines, which support hundreds of gigabytes of physical address space.
x86 instructions might be emulated by microcode or executed in a virtual machine, but the investment in x86 code is now so large that it is effectively in the same situation as the S/360 or S/370. Not only will it not be replaced, it is now impossible to kill even with deliberate force.