
ยง 2. Switching and Software

Two other significant discoveries in computing hardware, made by US companies soon after, suggested that private research was pulling ahead of government and university efforts. The first emerged from Bell Laboratories in 1958 with the invention of the modulator-demodulator (shortened to 'modem'), which enabled electronic data to be carried along conventional telephone lines, much like the automatic telegraph devices of an earlier era. Here Shannon's notion of the bit was merged with a 'per second' measurement of transmission speed, to be known as bps (bits per second) or baud.1 The first Bell modem was capable of transmitting roughly 300 bits per second and retailed for nearly $2000 US (clearly not yet priced with a wide commercial market in mind).2 The second invention, Texas Instruments' integrated circuit of 1959, would in many ways shape the future of computers even more profoundly. The solid-state integrated circuit immediately began to shrink the size of computers while increasing the number of circuits they could contain. The days of mammoth computers filling gymnasium-sized rooms seemed to be giving way to the possibility of much smaller units (ones which might, say, fit in a single room).
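To make the arithmetic concrete, here is a minimal Python sketch (not from the original text) estimating how long a short message would take over a 300 bit-per-second link; the figure of roughly ten bits per character (to allow for start/stop framing) and the sample message are assumptions of the sketch.

# Rough illustration: how long a short message takes over a 300 bit-per-second
# modem, assuming about 10 bits per character once start/stop framing is
# included -- an assumption made only for this sketch.

def transmission_seconds(text: str, bits_per_second: int = 300, bits_per_char: int = 10) -> float:
    """Return the approximate time in seconds to send `text` at the given rate."""
    total_bits = len(text) * bits_per_char
    return total_bits / bits_per_second

if __name__ == "__main__":
    message = "HELLO WORLD, THIS IS 1958 CALLING."
    print(f"{len(message)} characters ~ {transmission_seconds(message):.1f} seconds at 300 bps")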

As work continued on optimizing these two new mechanisms, the inter-connection of computer systems began to occupy researchers' attention. In 1962, the economist Fritz Machlup focused for the first time on the importance of information to commercial systems in his study The Production and Distribution of Knowledge in the United States, which pointed out the burgeoning percentage of the American workforce employed in industries centered primarily on distributing, manipulating or reporting specific forms of information.3 This certainly foreshadowed the likes of Daniel Bell and Alvin Toffler (The Coming of Post-Industrial Society and The Third Wave would not appear until 1973 and 1980 respectively), but it also seemed to strike a chord in the computing community itself. If information itself was becoming the basis of the economy, and if computers were increasingly to be the means by which that information was worked with, then the long lines in university computing labs as people waited to run one batch of program cards at a time did not bode well. Surely there had to be a more efficient, less centralized approach.

That question was almost immediately taken up by a researcher at the RAND Corporation, Paul Baran, who in early 1962 published a paper titled (somewhat obliquely) On Distributed Communications Networks. The technical discussion took as its starting point the notion of a group of computers, all linked by modems, which could break data up into discrete units, 'packets', and shuffle them through the network via those computers whose resources were least occupied. Baran even foresaw the advent of the 'information utility':
...while it may seem odd to think of piping computing power into homes, it may not be as far-fetched as it sounds. We will speak to the computers of the future in a language simple to learn and easy to use. Our home computers will be used to send and receive messages---like telegrams... we could pay our bills and compute our taxes via the console. We could ask questions and receive answers from 'information banks'---automated versions of today's libraries.
Again, like Shannon's concept of the bit, this does not seem a radical idea to anyone now familiar with network 'packet switching', but at the time it generated a great deal of excited discussion. In 1967, the annual ARPA symposium took as its main focus possible designs for the ARPANET, using the 'nodes' and 'packet switching' protocols first proposed by Baran.
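As a rough illustration of the idea rather than Baran's actual design, the following Python sketch cuts a message into numbered packets, hands each packet to whichever simulated node currently carries the lightest load, and reassembles the message by sequence number at the far end; the node names, packet size and load counts are all invented for the example.

# A toy sketch of packet switching: a message is cut into numbered packets,
# each packet is handed to whichever "node" currently has the lightest load,
# and the receiver reassembles them by sequence number.
import random

PACKET_SIZE = 8  # bytes per packet -- an arbitrary choice for the sketch

def split_into_packets(message: str):
    data = message.encode("utf-8")
    return [(seq, data[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(data), PACKET_SIZE))]

def route(packets, node_loads):
    """Assign each packet to the least-busy node; return (node, seq, chunk) records."""
    delivered = []
    for seq, chunk in packets:
        node = min(node_loads, key=node_loads.get)   # the least-occupied node
        node_loads[node] += 1                        # that node is now a little busier
        delivered.append((node, seq, chunk))
    random.shuffle(delivered)                        # packets may arrive out of order
    return delivered

def reassemble(delivered):
    ordered = sorted(delivered, key=lambda record: record[1])
    return b"".join(chunk for _, _, chunk in ordered).decode("utf-8")

if __name__ == "__main__":
    nodes = {"A": 0, "B": 2, "C": 1}
    packets = split_into_packets("We could pay our bills and compute our taxes via the console.")
    print(reassemble(route(packets, nodes)))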

This movement towards 'open' computing, distributed through networks, held out the promise of freeing the computer's power from the shrine of the locked and sealed computer lab. Many in the development community saw the potential of computing as being put into the hands of individual 'users' (a radical idea at the time), with computational work and data spread throughout a 'free' system. As Borgmann summarizes, this is the inherent strength of networked computing, since
historically considered, information technology has resulted from the convergence of two technologies: the transmission of information and the automation of computation...we can be more explicit and define it structurally as the information that is measured in bits, ordered by Boolean algebra and conveyed by electrons...representing the fundamental and universal alphabet and grammar of information...powerful enough to do the most complex calculations, contain and control music, capture and manipulate images, process words, steer robots and guide missiles. 4
In 1965, Ted Nelson coined the term 'hypertext', drawing on the ideas of earlier thinkers like H.G. Wells (who in 1936 had proposed the idea of a World Encyclopaedia, or World Brain5) and Vannevar Bush (who in 1945 theorized about the 'memex' machine).* The notion of linking words in one document to related terms in another was an elegant elaboration of these ideas, especially in combination with work being done by a computer designer in California named Doug Engelbart. Engelbart had spent most of his youth tinkering with amateur radio and electronics, ended up a radar operator during W.W.II, and returned to California with a passion for technology. However, he apparently had not been terribly enthusiastic about what his friends in the commercial and university computing communities had been coming up with over the years, and felt intuitively that these machines had more to offer people.
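A toy Python sketch may help to show what 'linking words in one document to related terms in another' amounts to in practice; the documents, terms and link table below are invented for illustration and have nothing to do with Nelson's own designs.

# A minimal sketch of the hypertext idea: terms in one document point to other
# documents, so a reader can follow links between texts.

documents = {
    "world_brain": "Wells imagined a constantly updated World Encyclopaedia.",
    "memex": "Bush's memex would tie images and documents together on a desk.",
    "hypertext": "Nelson's hypertext links words in one document to related terms in another.",
}

links = {  # document -> {term: the document it points to}
    "hypertext": {"memex": "memex", "World Encyclopaedia": "world_brain"},
}

def follow(doc_id: str, term: str) -> str:
    """Return the text of the document that `term` in `doc_id` links to."""
    target = links.get(doc_id, {}).get(term)
    return documents[target] if target else f"No link for '{term}' in '{doc_id}'."

print(follow("hypertext", "memex"))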

In 1968, at the Fall Joint Computer Conference trade show in San Francisco, he unveiled what he felt was the answer to the computer's problems, and serious computer people giggled and snorted in response. Engelbart rigged up the standard monitor and keyboard on a desk, but just to the right of them was attached a little box with a button. If you moved the box, the flashing cursor on the screen followed the motion. This allowed a user to choose between the different frames (another Engelbart innovation) displayed on the monitor, like 'windows'. The consensus at the time seemed to be that Doug had been through a rough time, 'in the war and all', but more than a decade later these features, the mouse and the windowed interface, would be included in the newly marketed Xerox Star (retailing for $50,000 at the time).6
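In modern terms (and emphatically not as Engelbart implemented it), the two behaviours described above can be sketched as follows: mouse motion arrives as small deltas which a cursor tracks, and the cursor's position determines which rectangular frame, or 'window', the user is pointing at; the frame names and coordinates are invented for the example.

# An illustrative sketch of the demo's two behaviours: mouse motion moves a
# cursor, and the cursor's position decides which frame ("window") is selected.

from dataclasses import dataclass

@dataclass
class Frame:
    name: str
    x: int
    y: int
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.width and self.y <= py < self.y + self.height

cursor = [0, 0]
frames = [Frame("text window", 0, 0, 400, 300), Frame("graphics window", 400, 0, 240, 300)]

def move_mouse(dx: int, dy: int):
    """Mouse motion is reported as deltas; the cursor tracks them."""
    cursor[0] += dx
    cursor[1] += dy

def frame_under_cursor():
    for frame in frames:
        if frame.contains(*cursor):
            return frame.name
    return "desktop"

move_mouse(410, 120)
print(frame_under_cursor())   # -> "graphics window"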

Another hint that 'monolithic' centralized computing processes were on their way out was the convening in 1968 of the NATO (North Atlantic Treaty Organization) Science Committee's software engineering conference in Garmisch, Germany, responding to what had been labeled the 'Software Crisis', coming just after the dramatic failure of IBM's OS/360, which had been four years in development. The dizzying complexity of writing source code for ever-expanding system needs and applications had moved far beyond what individuals, or even groups, could grasp. The processes within these Byzantine programs (by this point running into the hundreds of thousands of lines) were so interwoven that single-line fixes (de-bugging) seemed to produce two new errors for every one fixed. One programmer from MIT confessed, "We build systems like the Wright brothers built airplanes---build the whole thing, push it off the cliff, let it crash, and start over again."7

The response was to propose "software engineering" and "structured design methodology", in which large operating systems or programs must, at the design stage, be broken down into the smallest possible 'modular' components still able to perform a function.8 In this way, the art of programming was to become (at least in theory) gradually more 'scientific'. One of the greatest exponents of the 'fragmentary, structured design' school represented at the conference above was Edsger Dijkstra, who soon after, in 1972, wrote another entreaty to his peers, The Humble Programmer:
The only problems we can really solve in a satisfactory manner are those that finally admit a nicely factored (i.e. hierarchical) solution. At first glance this view of human limitations may strike you as a rather depressing view of our predicament...on the contrary, the best way to learn to live with our limitations is to know them. 9
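A small Python sketch may make the contrast concrete: in the 'structured' style urged at Garmisch, each step is factored into the smallest module still able to perform a function, and the top-level routine merely composes them; the payroll example and its figures are invented purely for illustration.

# A sketch of structured, hierarchical design: instead of one monolithic
# routine, each step is its own small module, and the top level composes them.

def gross_pay(hours: float, rate: float) -> float:
    return hours * rate

def tax_withheld(gross: float, tax_rate: float = 0.2) -> float:
    return gross * tax_rate

def net_pay(hours: float, rate: float) -> float:
    gross = gross_pay(hours, rate)
    return gross - tax_withheld(gross)

def payroll_report(employees: list[tuple[str, float, float]]) -> str:
    # The hierarchy is explicit: report -> net_pay -> gross_pay / tax_withheld.
    lines = [f"{name}: {net_pay(hours, rate):.2f}" for name, hours, rate in employees]
    return "\n".join(lines)

print(payroll_report([("Ada", 40, 30.0), ("Edsger", 35, 32.5)]))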
For the first, but certainly not the last, time, public expectations about what computers could deliver were dashed and blame was cast. Corporate greed and unrealistic promises were faulted, while the bickering and anti-social nature of programmers (undisciplined 'in a world that indulges in indisciplines') was also blamed. Clearly not much has changed, but out of that conference arose a wider appreciation (both within and outside the community) of what computers could and could not be expected to do.
Notes:

1 Baudot code was an international binary code, named after its French inventor, Inspector-Engineer Jean-Maurice-Émile Baudot, a pioneer of multiplex telegraphy. Multiplex signaling allowed the simultaneous transmission and reception of numerous communications over the same wire, and greatly advanced the speed and versatility of the telegraph, particularly in Europe and her colonies, where the system was widely adopted. At the same time, it also largely automated the system, as signals were now transmitted and decoded by relay machines at either end of the wire (Baudot filed his patent in June of 1874, under the name 'printing multiple telegraph system apparatus'). The new binary code was based on groups of five 'bits' (on or off signals) used to represent a wide variety of characters, and was later used as a standard for teletypes. It also became the model for the adoption of the ASCII (American Standard Code for Information Interchange) standard in 1963. See French Inspector of Posts and Telegraphs M. E. Montriol's "Baudot and his Works", Annales des Postes, Télégraphes et Téléphones, v. 5, n. 4, December 1916 (trans. Eric Fischer).
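The principle behind such a five-bit code can be sketched in a few lines of Python: each character maps to a fixed pattern of five on/off signals. The particular bit patterns below are invented for the sketch and are not the historical Baudot assignments.

# Each character maps to a fixed five-bit pattern; decoding reads the signal
# back five bits at a time. The patterns here are illustrative, not historical.

FIVE_BIT_CODE = {
    "A": "00011", "B": "11001", "C": "01110",
    "D": "01001", "E": "00001", " ": "00100",
}
DECODE = {bits: char for char, bits in FIVE_BIT_CODE.items()}

def encode(text: str) -> str:
    return "".join(FIVE_BIT_CODE[ch] for ch in text.upper())

def decode(bits: str) -> str:
    return "".join(DECODE[bits[i:i + 5]] for i in range(0, len(bits), 5))

signal = encode("CAB ED")
print(signal)           # thirty bits for six characters
print(decode(signal))   # -> "CAB ED"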

2 Neal Stephenson details in his essay In the Beginning Was the Command Line how similar these early modems were to telegraphs. He relates how (as a teenage computer enthusiast in 1973, still five years before the first desktop workstations appeared) he used to write his programs out long-hand at home, trek to his high school, and type each program into a device which converted it to binary notation and imprinted it onto tape; the tape was then fed into a modem which sent the program for processing to a nearby university mainframe, which sent the 'answer' back as binary code for his school's machine to convert into a readable format. In other words, "Human beings have various ways of communicating to each other...but some of these are more amenable than others to being expressed as a string of symbols. Written language is the easiest of all because, of course, it consists of a string of symbols to begin with. If the symbols happen to belong to a phonetic alphabet (as opposed to, say, ideograms), converting them into bits is a trivial process, and one that was nailed, technologically, in the early nineteenth century, with the introduction of Morse code and other forms of telegraphy. We had a human/computer interface a hundred years before we had computers."

3 See sections five and six, "Information Machines" and "Information Services" (pp. 265-347), of Fritz Machlup, The Production and Distribution of Knowledge in the United States (Princeton: Princeton University Press, 1962), for an astonishingly accurate projection of economic trends in the 1980s and 90s.

4 Albert Borgmann, Holding On to Reality: The Nature of Information at the Turn of the Millennium (Chicago: University of Chicago Press, 1999), 166.

* Bush's 'memex' was to be a sort of 'memory-extender', a souped-up differential analyzer which would link images and documents together on a desk; it has no inherent correlation to the 'meme' (Richard Dawkins' term, from The Selfish Gene, for a type of viral idea which travels through a community). Thanks to Illumina for pointing this out.

5 H.G. Wells explained in his speech, given first to the British Royal Society, then later to radio listeners in New York, that "a World Encyclopedia no longer presents itself to a modern imagination as a row of volumes printed and published once and for all, but as a sort of mental clearing house for the mind, a depot where knowledge and ideas are received, sorted, summarized, digested, clarified and compared. This Encyclopedic organization need not be concentrated now in one place; it might have the form of a network...with a great number of workers engaged perpetually in perfecting this index of human knowledge and keeping it up to date." See Martin Campbell-Kelly and William Aspray, Computer: A History of the Information Machine (NY: BasicBooks, 1996), 284.

6 Paul E. Ceruzzi, A History of Modern Computing (Cambridge: MIT Press, 1999), 260.

7 Martin Campbell-Kelly and William Aspray, Computer: A History of the Information Machine (NY: BasicBooks, 1996), 196-203.

8 Is it not so still?

9 Interestingly enough, the 'software crisis' was also born of the fact that there were no programming schools, or even courses, available in many areas throughout the 1950s and into much of the 60s. People who 'became' software developers tended to flock from other mathematical and scientific fields because the work was interesting and lucrative, but they often had little or no formal training with computers. As one programmer pointed out twenty years ago, "this resulted in the situation where we have many people in the computer profession with insufficient training and an unwillingness to catch up. This is the situation at which the manufacturers connived: They wanted to sell machines and convinced the customer that programmers were at hand." F.L. Bauer, "Software and Software Engineering," Society for Industrial and Applied Mathematics Journal 15 (2) (Apr. 1973): 472.

10 E. W. Dijkstra, "The Humble Programmer," Communications of the ACM 15 (10) (Oct. 1972): 865.

11 Ibid., 475.
