In the property business, time sharing is a way of providing a holiday home to a number of subscribers. This was the subject of several scams and frauds in Spain.
In computing terms, a time sharing operating system is one which divides run time equably between a number of users or processes.

In the 1960s and early 1970s, batch processing was king

Programming involved a pen and paper. Many hours were spent writing code on pads of paper, often laid out in a special way as coding sheets. The programs were then submitted to a pool of typists to be punched onto cards (or paper tape).

The decks of cards were passed to the operators on shift, who would load them into the card reader's hopper. Each deck would contain control cards (with commands punched on them), which would instruct the batch operating system what to do. The job ran to completion (or was timed out if it did not finish), and the output was printed on a line printer.

The operator's next job was to separate the output for each run and marry up the card decks with their output, pigeonholed ready for the programmer to collect.

This whole process meant a minimum 24-hour turnaround for program edits and runs.

Late 1970s to mid 1990s, interactive sessions

The first improvement to this was to have terminals connected to the computer, capable of having interactive sessions. Before this, the only terminal which was capable of interactive sessions was the operators' console, of which there was only one per machine.

Early terminals were teletypes, all output appearing in hard copy. These were the days of line based text editors.

It was something of an innovation having a computer system that many people could log into at the same time. The prevailing view was that machine time (CPU ticks) was precious, and hence rationed, as indeed was connect time. This meant that part of the operating system's job was keeping account of how much time each user had consumed.

These were the days of the first generation of midnight oil burning hackers, who would dial into machines at weird hours, because machine resources were cheaper at those times.

At a fine-grained level, the operating system needed to arbitrate between different user sessions and decide how to allocate CPU resources. In many cases this worked on a round robin basis, allocating time to each user process in turn. The system clock would interrupt the processor every tick (usually at mains frequency, 50 or 60 times a second), and the operating system would decide which process should run next.
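The round robin idea can be sketched in a few lines of Python (a toy simulation, not real scheduler code - the process names are invented for illustration):

```python
from collections import deque

def round_robin(processes, ticks):
    """Simulate round robin scheduling: on each clock tick,
    the process at the head of the queue gets the CPU."""
    queue = deque(processes)
    schedule = []
    for _ in range(ticks):
        proc = queue.popleft()   # head of the queue runs this tick
        schedule.append(proc)
        queue.append(proc)       # then goes to the back of the queue
    return schedule

# Three user processes sharing six ticks of CPU time
print(round_robin(["editor", "compiler", "payroll"], 6))
# -> ['editor', 'compiler', 'payroll', 'editor', 'compiler', 'payroll']
```

Each process gets exactly one tick per trip around the queue, which is the equable division described above.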

There was also the notion of user processes being in one of a number of states (this is a simplification):

  • Compute bound
  • Waiting for the user to type something
  • Waiting for disk I/O

As an improvement on basic round robin, processes which had just completed an I/O operation (the user had pressed carriage return, or a disk transfer had finished) would receive a short-term priority boost. On one operating system (RSTS) this was termed a run burst. This improved general responsiveness, as it favoured interactive programs over those that simply crunched numbers in the CPU.
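A toy Python sketch of the run burst idea (the Process class and the boosted flag are my own illustration, not RSTS internals):

```python
from collections import deque

class Process:
    def __init__(self, name):
        self.name = name
        self.boosted = False   # set when an I/O operation completes

def pick_next(ready_queue):
    """Choose the next process to run: prefer any process that
    just finished I/O (run burst), otherwise plain round robin."""
    for proc in ready_queue:
        if proc.boosted:
            ready_queue.remove(proc)
            proc.boosted = False   # the boost applies only once
            return proc
    return ready_queue.popleft()

# cruncher is compute bound; editor just received a carriage return
ready = deque([Process("cruncher"), Process("editor")])
ready[1].boosted = True
print(pick_next(ready).name)   # editor jumps the queue
```

The compute-bound process still runs, just a tick later - the boost trades a little fairness for responsiveness at the terminal.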

Windows and the desktop

The public perception of computers has altered markedly, with the ubiquity of PCs. 1970s sci-fi is full of large hardware with flashing lights (and usually magtape drives).

The multitasking model that Windows uses is not based on time sharing (unlike Unix, which is a time sharing operating system). I am typing this writeup on the PC in front of me, and my unconscious perception is that everything on it is mine - I do not need to share machine resources with anybody else. If I write a program that loops forever and hangs the machine, that is my lookout.

Windows programming traditionally required the application to design in co-operative multitasking, as the operating system would not pre-emptively multitask for you (though NT and the Windows versions derived from it do have pre-emptive task scheduling).
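The co-operative model can be illustrated with Python generators (an analogy only - real co-operative Windows applications yielded control through the message loop, not generators). Each task runs until it voluntarily yields; a task that never yields would hang everything else:

```python
def cooperative(tasks):
    """Co-operative multitasking: run each task until it
    voluntarily yields, then give the next task a turn."""
    order = []
    while tasks:
        task = tasks.pop(0)
        try:
            order.append(next(task))  # run until the task yields
            tasks.append(task)        # reschedule it for another turn
        except StopIteration:
            pass                      # task finished; drop it
    return order

def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"           # yield control back to the loop

print(cooperative([task("A", 2), task("B", 2)]))
# -> ['A:0', 'B:0', 'A:1', 'B:1']
```

Notice that fairness here depends entirely on each task yielding promptly - the scheduler has no clock interrupt to fall back on.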

Server side technology

The time sharing model is a better fit than Windows when computers are used as servers. The operating system needs to arbitrate between incoming requests, much as a time sharing operating system arbitrates between user processes.
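That arbitration can be sketched in Python (the client names and requests are invented for illustration): take one pending request from each client in turn, just as a round robin scheduler gives each process one time slice:

```python
from collections import deque

def serve_fairly(client_queues):
    """Arbitrate between clients' pending requests the way a
    time sharing system arbitrates between user processes:
    serve one request from each client in turn."""
    order = []
    queues = deque(client_queues.items())
    while queues:
        client, pending = queues.popleft()
        if pending:                              # client still has work
            order.append((client, pending.pop(0)))
            queues.append((client, pending))     # back of the queue
    return order

requests = {"alice": ["GET /a", "GET /b"], "bob": ["GET /c"]}
print(serve_fairly(requests))
# -> [('alice', 'GET /a'), ('bob', 'GET /c'), ('alice', 'GET /b')]
```

No single busy client can starve the others, which is exactly the property a time sharing scheduler gives interactive users.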