Exponential Backoff is a technique often used in networking in which the time between retransmissions doubles with each retransmission. The delay therefore increases exponentially.
It is used, for example, in CSMA/CD Ethernet networks. When one machine tries to transmit on the network, another machine may try to transmit at the same time, and a collision occurs. The two machines solve this by detecting the collision and transmitting again. However, this brings about another problem: if they both simply retry immediately, they will collide again. The solution is to wait a random length of time before retransmitting.
Despite this, the two nodes may still collide on the next attempt. This is where Exponential Backoff comes into play: each time they collide, the range from which the random wait is drawn is doubled. The wider the range, the lower the chance of picking the same wait, so with each collision the probability of colliding next time decreases.
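A minimal sketch of this doubling scheme, in the style of Ethernet's binary exponential backoff (the function name, the slot-count abstraction, and the cap of 10 doublings are illustrative assumptions, not taken from any particular implementation):

```python
import random

def backoff_slots(collisions, max_exponent=10):
    """Pick a random wait, in slot times, after the given number of collisions.

    After the n-th collision, the wait is drawn uniformly from
    [0, 2**n - 1], so the range doubles with every collision.
    The exponent is capped (here at 10, an assumed limit) so the
    range does not grow without bound.
    """
    exponent = min(collisions, max_exponent)
    return random.randint(0, 2 ** exponent - 1)
```

After one collision a node waits 0 or 1 slots; after two collisions, 0 to 3; after three, 0 to 7; and so on, which is why repeated collisions between the same two nodes become increasingly unlikely.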
A naive attempt at optimisation is to ignore Exponential Backoff and simply wait a constant time. I believe some old Sun machines used to do this. It improves performance for you, but it assumes nobody else is doing the same thing; if they are, it can kill overall network performance.
Exponential Backoff is also used for a different purpose in TCP: retransmission timeouts. If a packet times out, it is retransmitted, but each time it is retransmitted the timeout length is doubled. This helps deal with temporary interruptions in network service without a constant flood of retransmitted packets.
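The timeout-doubling idea can be sketched as follows. This is a simplified illustration, not real TCP: `send_packet` is a hypothetical callable that returns True once the packet is acknowledged within the given timeout, and the initial timeout and retry limit are assumed values.

```python
def send_with_backoff(send_packet, initial_timeout=1.0, max_retries=5):
    """Retransmit a packet, doubling the timeout after each failure.

    `send_packet(timeout)` is a stand-in for "transmit and wait up to
    `timeout` seconds for an acknowledgement"; it returns True on
    success. Returns True once acknowledged, False if all retries fail.
    """
    timeout = initial_timeout
    for _ in range(max_retries):
        if send_packet(timeout):
            return True
        timeout *= 2  # double the timeout before the next retransmission
    return False
```

Because the waiting period grows exponentially, a brief outage costs only a few quick retries, while a longer one is probed at ever-sparser intervals rather than flooding the network.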
(r) lj says: re Exponential Backoff, you might want to mention that at full congestion, exponential backoff throttles down the rate at which each node transmits (as it spends more and more time backing off and not transmitting) until each node is using a fair proportion of the available bandwidth. Random or constant backoff becomes unusable at high contention, as the nodes only throttle down by a constant amount.