Too many programs make a bad assumption about how long
a disk operation will take. That assumption is
The amount of time left in an operation is equal to
the total remaining bytes divided by the quotient of
the bytes completed so far and the time spent
completing them.
The assumption is, in many cases, plain wrong.
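This naive estimate can be sketched in a few lines of code; the function name and signature here are illustrative, not taken from any real program:

```python
def naive_time_remaining(bytes_done, bytes_total, seconds_elapsed):
    """The naive estimate: remaining bytes / observed throughput."""
    throughput = bytes_done / seconds_elapsed  # bytes per second so far
    return (bytes_total - bytes_done) / throughput

# 30 MB done of 100 MB in 10 seconds: throughput is 3 MB/s,
# so the naive estimate predicts 70 / 3, about 23.3 seconds left.
```

The estimate assumes throughput stays constant for the rest of the operation, which is exactly the assumption the next paragraphs pick apart.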
Generally, copying n one-byte files takes far
longer than copying one n-byte file.
Disk seeks are costly. File creation and lookup are
often slow as well. For example, finding and
creating files over WebDAV is much slower than
streaming reads or writes.
To accurately estimate the time to complete an
operation, you must first figure out how much
overhead there is per file, then add that overhead
into the calculation. This still isn't perfect, but
it's much closer than the estimate most programs use.
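A minimal sketch of an overhead-aware estimate, assuming a fixed per-file cost on top of raw throughput (all names and numbers here are illustrative):

```python
def time_remaining(files_left, bytes_left, per_file_overhead, throughput):
    """Charge a fixed cost per remaining file, plus the streaming cost
    for the remaining bytes at the observed throughput."""
    return files_left * per_file_overhead + bytes_left / throughput

def measured_overhead(seconds_for_batch, file_count):
    """One way to estimate per-file overhead: time a batch of tiny files.
    Nearly all of that time is seek/create overhead, so the average
    per file approximates the fixed cost."""
    return seconds_for_batch / file_count

# 100 small files left at 50 ms overhead each, plus 1 MB left
# at 1 MB/s, predicts 5 + 1 = 6 seconds remaining.
```

In practice the overhead varies by filesystem and protocol (recall the WebDAV case above), so it should be measured on the target volume rather than hard-coded.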
Many disks don't run at the same speed on the inner
and outer tracks. Disks have multiple platters and
heads, and can sometimes use several at once. Seek
times vary with the current head position. The one
sure thing about all predictions of the future is
that they are wrong.