So there I was, sitting in the library, and I thought to myself, why not just change the program counter register instead of using a JMP instruction?

The program counter holds the memory address of the instruction the CPU is currently executing. The CPU references it to fetch the next instruction, and it gets incremented after each fetch (this is a simplified view - pipelines, superscalar designs, etc. complicate things). But programs don't always progress in a linear fashion like this. To implement a real computer, you need conditional branching instructions, which will either jump to another section of code or not, depending on some condition.
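
For instance, the effect of a conditional branch on PC boils down to something like this (a sketch in C; the flag and target names are made up for illustration):

    #include <stdio.h>

    /* What a conditional branch (say, BEQ) does to the program counter:
       pick the branch target if the condition holds, else fall through. */
    unsigned next_pc(unsigned pc, int zero_flag, unsigned target)
    {
        return zero_flag ? target : pc + 1;
    }

    int main(void)
    {
        printf("not taken: PC -> %u\n", next_pc(100, 0, 200));  /* 101 */
        printf("taken:     PC -> %u\n", next_pc(100, 1, 200));  /* 200 */
        return 0;
    }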

Well, given a CPU that can do basic arithmetic, it's easy enough to replace the functionality of "jump" with equivalent code that changes the program counter (EIP/IP - instruction pointer - in Intel-ese). A simplified CPU cycle might go like this:

    while (true) {
        fetch instruction from memory pointed to by PC
        increment PC
        execute it - and "execute" may simply mean writing PC from a register
    }
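
To make that concrete, here's a toy interpreter in C in which PC is simply register 0 and there is no jump opcode at all - writing a value into register 0 is the jump. The instruction set and program here are invented for this sketch:

    #include <stdio.h>

    enum { PC = 0, NREGS = 4 };                /* PC is just register 0 */
    enum Op { LOADI, ADD, SKIPEQ, HALT };
    struct Insn { enum Op op; int a, b; };

    int main(void)
    {
        unsigned reg[NREGS] = {0};

        /* Count r2 up to 3. There is no JMP: instruction 3 loops back
           to instruction 1 by loading 1 into register 0 (the PC). */
        struct Insn prog[] = {
            { LOADI,  1, 1 },   /* 0: r1 = 1                        */
            { ADD,    2, 1 },   /* 1: r2 = r2 + r1                  */
            { SKIPEQ, 2, 3 },   /* 2: if r2 == 3, skip next insn    */
            { LOADI,  0, 1 },   /* 3: PC = 1 - an ordinary load!    */
            { HALT,   0, 0 },   /* 4:                               */
        };

        for (;;) {
            struct Insn i = prog[reg[PC]];     /* fetch              */
            reg[PC]++;                         /* increment PC       */
            switch (i.op) {                    /* execute            */
            case LOADI:  reg[i.a] = (unsigned)i.b; break;
            case ADD:    reg[i.a] += reg[i.b]; break;
            case SKIPEQ: if (reg[i.a] == (unsigned)i.b) reg[PC]++; break;
            case HALT:   printf("r2 = %u\n", reg[2]); return 0;
            }
        }
    }
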
So why would you want to do this? Well, it seems this cuts down on instruction set size, allowing very RISCy instruction sets with tiny instructions - Chuck Moore's F21 CPU packs four 5-bit instructions into each 20-bit word. I have a feeling it might make pipelining simpler, but on the other hand, it's not all that much different from JMP, so maybe not. This would probably only work for very RISCy designs. Letting ordinary instructions modify the program counter will probably mess up a lot of things, too - what happens if an interrupt comes in while the PC is being written? Or maybe not: it's just one instruction, after all.

Anyway, in the interest of approaching ultimate RISC, this is one more instruction it might be possible to do without.


Magenta: my excuse is that it's an academic argument ;-)

I hadn't considered that making PC a regular register would slow things down. Okay, on a conventional CPU it might, but how about a stack machine? It probably shows that I don't know what I'm talking about, but on the other hand, stack machines seem to have been pretty much abandoned for the CISCy RISCs we have today.

Everything I've read about stack machines seems to say something along the lines of "this is a promising design that's fast and simple, but it's been abandoned because it makes life a little difficult for programmers."

(To elaborate - stack machines get acceptable performance without intricate pipelining tricks. They get to have very short instructions that can be fetched as a group, which achieves a similar effect. That's why I brought them up.)
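
For what it's worth, here's roughly what "fetched as a group" means, sketched in C: the F21 packs four 5-bit opcodes into each 20-bit word, so a single memory access delivers four instructions. The field layout below is my own assumption for illustration, not the real F21 encoding:

    #include <stdio.h>

    typedef unsigned int word;   /* only the low 20 bits are used */

    /* Pack four 5-bit opcodes into one 20-bit word. */
    word pack4(unsigned a, unsigned b, unsigned c, unsigned d)
    {
        return (a & 0x1f) << 15 | (b & 0x1f) << 10 |
               (c & 0x1f) << 5  | (d & 0x1f);
    }

    int main(void)
    {
        word w = pack4(1, 2, 3, 4);   /* one fetch, four instructions */

        /* Decode all four slots without touching memory again: */
        for (int slot = 0; slot < 4; slot++)
            printf("slot %d: opcode %u\n", slot, (w >> (15 - 5 * slot)) & 0x1f);
        return 0;
    }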