A clock synchronization algorithm for keeping the time on a machine in step with a remote time server. It is a very straightforward algorithm and quite easy to understand.

The procedure:

  1. A process p requests the time in a message mr and receives the time value t in a reply message mt.
  2. t is inserted into mt at the last possible point before transmission from the server S.
  3. Tround is the round-trip time: the interval between p sending mr and p receiving mt. On a LAN this is typically (1-10) * 10^-3 seconds, i.e. 1-10 ms.
  4. min is the minimum one-way message transmission time, i.e. the shortest time a message can possibly take to travel between p and S.
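
Putting the four steps above into code, here is a minimal client-side sketch in Python. The server address and the wire format (a request string answered with an 8-byte big-endian double holding S's current time) are assumptions made up for illustration, not part of the algorithm itself.

```python
import socket
import struct
import time

SERVER = ("time.example.com", 9000)       # hypothetical time server address/port

def sync_with_server(sock):
    """One round of the synchronization against a hypothetical UDP service
    that replies with its current time as a big-endian double (seconds)."""
    t_send = time.monotonic()             # local clock when mr is dispatched
    sock.sendto(b"time?", SERVER)         # mr: ask S for the time
    data, _ = sock.recvfrom(64)           # mt: reply carrying S's time t
    t_recv = time.monotonic()             # local clock when mt arrives

    t = struct.unpack_from("!d", data)[0] # t, stamped by S just before sending mt
    t_round = t_recv - t_send             # Tround, the measured round-trip time
    # Assume the delay was split evenly between the two directions, so S's
    # clock should now read roughly t + Tround/2.
    return t + t_round / 2

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(1.0)                     # don't hang forever if the reply is lost
    print("estimated server time:", sync_with_server(s))
```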

The earliest point at which S could have placed the time in mt was min after p dispatched mr, and the latest was min before p received mt. The time by S's clock when p receives mt therefore lies in the range [t + min, t + Tround - min]. The total width of this range is Tround - 2*min, so if p sets its clock to t + Tround/2 (the midpoint of the range), the accuracy is +/- (Tround/2 - min).
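
To put some made-up but plausible numbers on that, using the LAN figures quoted above:

```python
# Hypothetical values within the 1-10 ms LAN range mentioned earlier.
t_round = 0.010    # measured round-trip time Tround: 10 ms
min_delay = 0.001  # assumed minimum one-way transmission time min: 1 ms

width = t_round - 2 * min_delay      # width of the uncertainty range: 8 ms
accuracy = t_round / 2 - min_delay   # accuracy of the synced clock: +/- 4 ms
print(f"range width {width * 1e3:.1f} ms, accuracy +/- {accuracy * 1e3:.1f} ms")
```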

If all of that made absolutely no sense to you, here's a much simpler (but far less rigorous) explanation. Basically, the client sends a request for the current time to the time server. When it receives the response, it measures the round-trip delay (the time between the request being sent and the response being received), divides that in half to estimate the one-way delay, and adds that to the time received back from the server. The idea is to cancel out the inaccuracy caused by network delays. This assumes that the link is equally fast both ways, which may not always be the case. But as with any algorithm, you have to make tradeoffs.
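
To see what the equal-speed assumption costs when it doesn't hold, here is a toy one-round simulation (all numbers invented); it shows that the leftover error is half the difference between the outbound and return delays:

```python
def sync_error(server_offset, out_delay, back_delay):
    """Residual error after one round of the sync against a server whose
    clock runs server_offset seconds ahead of the client's (toy model)."""
    t_send = 0.0                              # client clock when mr leaves
    t = t_send + out_delay + server_offset    # the time S stamps into mt
    t_recv = t_send + out_delay + back_delay  # client clock when mt arrives
    estimate = t + (t_recv - t_send) / 2      # client's new idea of S's time
    actual = t_recv + server_offset           # what S's clock really reads then
    return estimate - actual

# Equal delays both ways: the estimate is exact.
print(sync_error(server_offset=2.5, out_delay=0.004, back_delay=0.004))  # 0.0
# Asymmetric link: the estimate is off by half the 4 ms asymmetry.
print(sync_error(server_offset=2.5, out_delay=0.002, back_delay=0.006))  # -0.002
```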