No MHz processing

In the 1990s, chip manufacturers went to great lengths to push the envelope when it came to CPU speed. Companies spared no expense in the race to bring the first gigahertz processor to market. But in cranking up the cycles at such a dramatic rate, microprocessor design hit a natural barrier: the clock itself. Having millions of microscopic logic circuits running in a well-choreographed dance with picosecond precision is a computational conundrum in itself.

Why clocks in the first place?

Computer CPUs are, like all complex things, built from very simple parts. At the very lowest level there are the logic gates. A logic gate is a small, usually electronic, circuit that performs an elementary logic operation on one or two input bits: AND, OR, XOR and NOT. Out of such simple operations, one can build circuits that perform arithmetic on binary numbers: addition, subtraction, and so on. We will call these circuits operational circuits, because they perform mathematical operations.
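To make this concrete, here is a minimal sketch in Python (ordinary software standing in for hardware) of how an operational circuit can be composed from nothing but elementary gates: a one-bit full adder, chained into a 4-bit adder. The function names and the least-significant-bit-first ordering are our own illustrative choices, not anything standardized.

    # Elementary gates, modeled as functions on single bits (0 or 1).
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def XOR(a, b): return a ^ b

    def full_adder(a, b, carry_in):
        """Add three input bits; return (sum_bit, carry_out)."""
        s1 = XOR(a, b)
        sum_bit = XOR(s1, carry_in)
        carry_out = OR(AND(a, b), AND(s1, carry_in))
        return sum_bit, carry_out

    def add_4bit(a_bits, b_bits):
        """Add two 4-bit numbers, given least significant bit first."""
        result, carry = [], 0
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            result.append(s)
        return result, carry

    # 5 + 3 = 8: [1,0,1,0] + [1,1,0,0] -> ([0,0,0,1], carry 0)
    print(add_4bit([1, 0, 1, 0], [1, 1, 0, 0]))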

A processor is essentially a collection of such operational circuits together with a routing circuit. The routing circuit is itself a logic circuit: it takes a binary code, the opcode, and uses it to send the incoming operand bits (which may represent two numbers) to the proper operational circuit (for instance, the addition circuit).
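The routing idea can likewise be sketched in a few lines of Python. The opcode values and the set of circuits below are invented purely for illustration; a real instruction set is far richer.

    # A toy routing circuit: the opcode selects which operational
    # circuit receives the operand bits. Opcode values are made up.
    OPERATIONAL_CIRCUITS = {
        0b00: lambda a, b: a + b,   # addition circuit
        0b01: lambda a, b: a - b,   # subtraction circuit
        0b10: lambda a, b: a & b,   # bitwise AND circuit
        0b11: lambda a, b: a | b,   # bitwise OR circuit
    }

    def route(opcode, a, b):
        """Send the operands to the circuit selected by the opcode."""
        return OPERATIONAL_CIRCUITS[opcode](a, b)

    print(route(0b00, 5, 3))  # 8: operands routed to the addition circuit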

The great problem for chip designers is that each operational circuit is different, and each logic gate in those circuits takes a certain amount of time before its output is ready. Not all operations finish equally fast, because one operation may simply route its signals through more layers of gates than another. Moreover, the individual bits of a single result may take different amounts of time to settle.

In a computer, it is often necessary to use the result of one calculation in the next. The computer therefore needs to be absolutely sure that all bits are really finished before going on with the next calculation. The easiest way to do this is to determine the longest propagation time over all bits in all operational circuits, and then build the CPU so that it only routes the next batch of bits through on the next clock cycle. The clock period is made longer than that worst-case propagation time, ensuring that every previous calculation is complete by the time a new cycle begins.
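A back-of-the-envelope calculation shows what this costs. In the Python sketch below the per-circuit delays are made-up numbers; the point is that the safe clock frequency is dictated entirely by the slowest circuit.

    # The clock period must exceed the worst-case propagation delay of
    # any operational circuit. Delay figures are purely illustrative.
    delays_ns = {"add": 0.6, "subtract": 0.5, "shift": 0.2, "compare": 0.3}

    worst_case_ns = max(delays_ns.values())    # 0.6 ns
    max_clock_ghz = 1.0 / worst_case_ns        # a 1 ns period equals 1 GHz

    print(f"worst-case delay: {worst_case_ns} ns")
    print(f"fastest safe clock: {max_clock_ghz:.2f} GHz")
    # Even the 0.2 ns shift circuit must sit idle for the full 0.6 ns cycle.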

Ensuring good results without the clock

When clocks didn't run at such breakneck speeds, clocked design was a very neat way of doing things. At 2 GHz, however, the clock soaks up a lot of power and generates a lot of heat as well. Besides, there is something intuitively wrong with having the whole CPU run at the speed of its slowest operational circuit. That is why clockless, or asynchronous, designs are making a comeback after having long been dismissed as too complicated.

Although complex in practice, clockless computing is quite simple in theory. Into every operational circuit we build a set of control bits whose codes change while the logic gates are still composing the answer. A circuit that needs the result can thus see whether the value currently at the output is reliable, or still partly incomplete. The next computation can start the moment the 'ready' code appears, instead of waiting for the clock pulse to end.
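One way such control codes can work (loosely inspired by the dual-rail encoding used in some asynchronous designs; the details here are simplified for illustration, not any particular company's scheme) is to give every output bit two wires, with one reserved combination meaning 'not ready yet':

    # Each output bit is carried on a pair of wires.
    # (0, 0) means "still computing"; (1, 0) encodes a 0 bit and
    # (0, 1) encodes a 1 bit. A consumer can tell valid data from
    # work in progress without any clock at all.
    NOT_READY = (0, 0)

    def encode(bit):
        return (1, 0) if bit == 0 else (0, 1)

    def is_ready(wire_pair):
        return wire_pair != NOT_READY

    def decode(wire_pair):
        return 0 if wire_pair == (1, 0) else 1

    # The consumer simply inspects the wires.
    outputs = [NOT_READY, encode(1), encode(0)]   # bit 0 still being computed
    if all(is_ready(p) for p in outputs):
        print("result:", [decode(p) for p in outputs])
    else:
        print("answer still partly incomplete; keep waiting")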

This way, CPU instructions can simply 'drain away' from memory like water down a sink, instead of being fetched at fixed times like cars on an assembly line. Several companies have produced clockless versions of their processors that run up to three times as fast as the clocked originals. At this point, getting design and production facilities ready for clockless technology is the biggest hurdle, which is why clockless chips are unlikely to take over the market any time soon.

Some information drawn from www.techreview.com

Disclaimer:

The above presents a highly simplified and unrealistic view of microprocessor technology. Concepts like pipelining, branch prediction, delayed branching, operand forwarding, caching, etc. have been kept out of this discussion in order to cater to the layman and the techie alike.
