In parallel programming, it is common to measure an algorithm's performance in wallclock time rather than total CPU time. Many processors may be applied to a problem in order to reduce its wallclock time. Measured in total CPU time across all processors, a parallel algorithm always requires more effort than a serial one, because of the overhead of coordinating the processors. Since CPU time is comparatively cheap, parallel programmers study improvements in wallclock time instead.

Dividing serial wallclock time by parallel wallclock time yields the parallel speedup. Interestingly, adding processors can sometimes increase wallclock time, because the coordination overhead grows. An analogue is the question:
If one man can dig a hole in one minute, can sixty men dig the same hole in one second?
Often, the best parallel algorithm for a problem is not the best serial algorithm for that same problem.
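The speedup calculation above can be sketched in a few lines. This is an illustrative example, not code from any particular library; the timing figures are hypothetical, chosen to show speedup falling short of the processor count because of coordination overhead:

```python
def parallel_speedup(serial_time, parallel_time):
    """Speedup = serial wallclock time / parallel wallclock time."""
    return serial_time / parallel_time

def parallel_efficiency(speedup, n_processors):
    """Efficiency = speedup / number of processors; 1.0 is the ideal."""
    return speedup / n_processors

# Hypothetical timings (seconds): the serial run takes 60 s,
# while 8 processors take 10 s rather than the ideal 7.5 s,
# because of the overhead of managing the processors.
speedup = parallel_speedup(60.0, 10.0)
efficiency = parallel_efficiency(speedup, 8)
print(speedup)      # 6.0 -- not the ideal 8.0
print(efficiency)   # 0.75
```

The gap between the measured speedup (6.0) and the ideal speedup (8.0) is exactly the "sixty men digging one hole" problem: past some point, workers mostly get in each other's way.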
