The processor is the circuitry responsible for running the computer: ultimately it’s where the programs that you run are executed.
Although the circuits you can most easily see are printed circuit boards, processors are implemented as integrated circuits. Instead of circuits made of wire, the electricity flows along tiny etched channels in a semiconductor chip.
It’s not uncommon for computers to have more than one processor on a single chip: these are called multicore processors, and each core is, in effect, a processor in its own right.
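As a quick illustration, the Python standard library can report how many logical processors the operating system exposes. Note that this is a sketch: `os.cpu_count()` counts logical processors, which on chips with simultaneous multithreading may be double the number of physical cores.

```python
import os

# os.cpu_count() reports the number of logical processors the
# operating system exposes; on a multicore chip this is at least
# the number of physical cores.
cores = os.cpu_count()
print(f"This machine exposes {cores} logical processors")
```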
Processors are, by necessity, immensely complex devices but there are a few key aspects that define their behaviour.
Because a processor performs billions of operations every second, its speed is a critically important feature. However, the effective speed of a processor depends on a number of different factors:
- word size
- clock rate
- instruction set
There are other important factors too, including dedicated caches and parallelism.
The word size is the size, in binary digits, of the numbers that the processor’s arithmetic circuits can handle in a single operation. The digits are binary, called bits, because in practice each one is represented by the presence (1) or absence (0) of electrical charge.
This is a critical limit because those numbers also serve as memory addresses, so the word size effectively limits the size of the memory the processor can efficiently access.
Today, 64-bit machines are common.
Note that architectures with smaller word sizes can still process bigger numbers, just less efficiently, by splitting each calculation across several instructions. The complexity of processors, and hence the difficulty of designing and implementing them, increases as their word size increases.
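To make that concrete, here is a sketch (in Python, with function and variable names of my own choosing) of how a 32-bit machine must handle a 64-bit addition: two narrower additions plus a carry, where a 64-bit machine needs only one instruction.

```python
MASK32 = 0xFFFFFFFF

def add64_with_32bit_ops(a, b):
    """Add two 64-bit numbers using only 32-bit-wide additions,
    the way a 32-bit processor must: low halves first, then high
    halves plus the carry out of the low addition."""
    a_lo, a_hi = a & MASK32, (a >> 32) & MASK32
    b_lo, b_hi = b & MASK32, (b >> 32) & MASK32

    lo = a_lo + b_lo                        # first 32-bit add (may carry)
    carry = lo >> 32
    hi = (a_hi + b_hi + carry) & MASK32     # second 32-bit add

    return (hi << 32) | (lo & MASK32)

print(hex(add64_with_32bit_ops(0xFFFFFFFF, 1)))
```

Each 64-bit addition costs at least two instructions here instead of one, which is the inefficiency the paragraph above describes.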
The clock rate is the speed of the processor’s “heartbeat”: roughly, a measure of how fast it’s running. This alone doesn’t tell you how fast it can work, because that depends on the other factors, including how powerful a single instruction can be and what optimisations (like caches) are built in.
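A rough back-of-the-envelope model shows why clock rate alone is not speed. The figures below are made up for illustration; real chips vary widely in how many instructions they complete per cycle (IPC).

```python
def instructions_per_second(clock_hz, ipc):
    """Rough throughput estimate: cycles per second multiplied by
    instructions completed per cycle (IPC)."""
    return clock_hz * ipc

# Illustrative, made-up figures: a 3 GHz chip completing 4
# instructions per cycle outperforms a 4 GHz chip completing 2.
faster_clock = instructions_per_second(4e9, 2)
higher_ipc = instructions_per_second(3e9, 4)
print(higher_ipc > faster_clock)  # True
```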
The instruction set is the collection of all the instructions that the processor can execute. This is “close to the metal” because every instruction requires circuitry for running it, on the chip. So the design of instruction sets is a balance between having a large set of complex, expressive instructions or a small set of simple, fast ones. The extremes of these two approaches are sometimes described as complex instruction set computers (CISC) and reduced instruction set computers (RISC) respectively.
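The trade-off can be sketched with a toy model in Python (this is not real assembly; the instruction names are invented for illustration). A CISC-style machine might add two values in memory with one complex instruction, where a RISC-style machine uses a sequence of simple load, add, and store instructions.

```python
# Toy machine state: a named memory and a small set of registers.
memory = {"x": 7, "y": 5}
registers = {}

def cisc_add(dst, src):
    # CISC-style: one instruction reads memory, adds, writes memory.
    memory[dst] = memory[dst] + memory[src]

# RISC-style: each instruction does one simple thing.
def risc_load(reg, addr):
    registers[reg] = memory[addr]

def risc_add(rd, ra, rb):
    registers[rd] = registers[ra] + registers[rb]

def risc_store(addr, reg):
    memory[addr] = registers[reg]

cisc_add("x", "y")               # x becomes 12 in a single step

memory.update({"x": 7, "y": 5})  # reset, then the RISC version:
risc_load("r1", "x")
risc_load("r2", "y")
risc_add("r1", "r1", "r2")
risc_store("x", "r1")            # x becomes 12 again, in four steps
```

The CISC instruction is more expressive but needs more circuitry; each RISC instruction is simpler and faster, at the cost of longer instruction sequences.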
Programmers and the processor
When you first learn to write programs, you do so in high-level languages, which another program (a compiler or interpreter) turns into commands that match the processor’s instruction set. That is, the translation into machine code is done for you. High-level languages are now so expressive that it’s not uncommon for a professional programmer never to need to know about that machine code.
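You can glimpse this translation step in Python itself. CPython compiles each function to bytecode, a lower-level instruction stream that is analogous to, though not the same as, a processor’s machine code, and the standard-library `dis` module lists those instructions.

```python
import dis

def average(a, b):
    return (a + b) / 2

# Show the lower-level instructions CPython compiled this
# high-level one-liner into: loads, a binary add, a divide,
# and a return.
dis.dis(average)
```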
However, for industrial applications where optimal efficiency is required, programmers may need to write in low-level languages. This is where knowing how to exploit the characteristics of the processor on which the program will run (the “target device”) becomes a consideration.