Originally, most microprocessors used in small computer systems could only do integer math natively. Floating point (FP) operations had to be implemented in software, as library routines built out of integer instructions. This was much slower than performing the operations directly in hardware, but since most applications relied mainly on integer math and didn't require high FP performance, it wasn't a problem.
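
To give a rough idea of what such software emulation involves, here is a simplified C sketch (my own illustration, not taken from any particular emulation library) that multiplies two IEEE 754 single-precision floats using only integer operations. It ignores zeros, infinities, NaNs, subnormals and rounding, all of which a real library has to handle:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Multiply two single-precision floats, given as their raw bit
       patterns, using only integer arithmetic. Normal values only. */
    static uint32_t soft_fmul(uint32_t a, uint32_t b)
    {
        uint32_t sign = (a ^ b) & 0x80000000u;            /* result sign      */
        int32_t  exp  = (int32_t)((a >> 23) & 0xFF)       /* add biased       */
                      + (int32_t)((b >> 23) & 0xFF) - 127;/* exponents        */
        uint64_t ma   = (a & 0x007FFFFFu) | 0x00800000u;  /* restore implicit */
        uint64_t mb   = (b & 0x007FFFFFu) | 0x00800000u;  /* leading 1 bit    */

        uint64_t prod = ma * mb;        /* 48-bit product of 24-bit mantissas */
        if (prod & (1ULL << 47)) {      /* renormalize if product >= 2.0      */
            prod >>= 1;
            exp++;
        }
        uint32_t mant = (uint32_t)(prod >> 23) & 0x007FFFFFu; /* truncate     */
        return sign | ((uint32_t)exp << 23) | mant;
    }

    int main(void)
    {
        float x = 1.5f, y = -2.25f, z;
        uint32_t xb, yb, zb;
        memcpy(&xb, &x, 4);
        memcpy(&yb, &y, 4);
        zb = soft_fmul(xb, yb);
        memcpy(&z, &zb, 4);
        printf("%f\n", z);   /* prints -3.375000 */
        return 0;
    }

Every step here (shifts, masks, an integer multiply) is something an integer-only CPU can do, which is why the approach works at all, and also why it is so much slower than a single hardware FP instruction.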

For applications that did require high FP performance, many computer systems could accept a math coprocessor: essentially a second chip that worked alongside the main CPU. The coprocessor had its own set of instructions for doing FP math, and that was all it could do. In some cases, FP operations ran on the coprocessor an order of magnitude faster than in software. However, math coprocessors were expensive because of their relatively low production volume, often costing as much as half the price of the computer itself. A well-known example is the Intel 387, which worked in tandem with the Intel 386 processor.
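
That instruction set is still visible today as the x87 instructions. As a purely illustrative sketch (the function name is mine, and GCC extended inline assembly is a compiler-specific extension), this is how a multiplication looks when expressed in the fld/fmulp/fstp instructions a 387 would execute:

    /* Illustrative only: multiply two doubles via x87 instructions,
       using GCC extended inline assembly (AT&T syntax). */
    double x87_mul(double a, double b)
    {
        double result;
        __asm__ ("fldl  %1\n\t"   /* push a onto the x87 register stack */
                 "fldl  %2\n\t"   /* push b                             */
                 "fmulp\n\t"      /* st(1) := st(1) * st(0), then pop   */
                 "fstpl %0"       /* store the product and pop          */
                 : "=m" (result)
                 : "m" (a), "m" (b)
                 : "st", "st(1)");
        return result;
    }

On a 386/387 pair, the CPU handed these FP opcodes off to the coprocessor; on a modern chip they run on the integrated FPU, but the instructions themselves are the same.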

Once the cost of microprocessors came down and transistor counts went up, it became more economical to simply include the math coprocessor on the same die as the main CPU. For instance, Intel integrated the floating point unit (FPU) into their x86 processors starting with the 486DX. Today, pretty much all general-purpose CPUs have an FPU built in. This has enabled widespread development of software that takes advantage of the FPU; most modern games do.
