Hypertext is nothing more than the inclusion of links within a body of text. These links can be distinct markers or icons placed between words or sentences, but more typically the link is simply a run of the words themselves. Selecting the link opens a new hypertext document, or moves to a different section of the same document.

HTML is short for HyperText Markup Language, and has long been the de facto standard for formatting and displaying hypertext on the World Wide Web. You see non-HTML hypertext all the time, however -- in a CD-ROM encyclopedia, a HyperCard stack, or a Windows Help panel, and that's only on your personal computer.

The term dates back to the mid-1960s, when Ted Nelson proposed it as one of the then-alternative uses of technology:

By "hypertext" I mean non-sequential writing--text that branches and allows choice to the reader, best read at an interactive screen. As popularly conceived, this is a series of text chunks connected by links which offers the reader different pathways.

In June 2000, British Telecom briefly threatened to enforce a patent on hyperlinking which predated the WWW, and asked ISPs in the United States for voluntary cooperation. Apparently they didn't get it, because their "patent" has hardly been heard of since.

According to Gérard Genette, French literary critic and author of Palimpsests: Literature in the Second Degree, a hypertext is any text that is related to an earlier text (which he calls the hypotext) "in a manner that is not that of commentary".

Almost any text is a hypertext, related to other works via hypertextuality, or more generally intertextuality, "grafted", as Genette says, onto these other works, because no text is completely isolated or original. Every text is recycled culture.

The recent, technological meaning of "hypertext" is really about a method for making the linkages between texts obvious. The links have always been there for as long as there has been writing (or perhaps before, with oral tradition), but the technology of the World Wide Web and other systems has simplified the navigation of these relationships.

Considering that most nonfiction text contains references to related documents, hypertext is a logical progression for these references. It is arguably the job of computers to take care of repetitive or tedious work, such as actually retrieving the document being referenced.

Encyclopedias benefit greatly from hypertext, as it allows the reader to explore different avenues of thought, providing elaborations on exactly what she or he is interested in. This ability for non-linear retrieval of information makes learning something much easier as the individual has the freedom to explore related topics at her own pace.

Given how our brains group things together by seemingly obscure associations - something impossible to capture by sorting information in any standard kind of index - it seems logical that we would one day use technology to help us link documents together by such erratic connections. The journey that has so far led to our current system of global hypertext isn't as sudden as you might think, however.

1934: Paul Otlet's database of linked documents

In 1934, Paul Otlet had a vision: a machine that would let people search, read and write documents stored in a mechanical database. They would be able to access this database remotely, via a telephone line, and even connect documents together. He called such connections links. He called the project in its entirety a web of human knowledge.

Perhaps the most useful practical achievement of Paul Otlet was his improvement of the existing classification systems, such as the Dewey Decimal System. His own system, Universal Decimal Classification, was the first full implementation of a faceted classification system.

Sadly, his operation was shut down, and the remains of his work were destroyed by Nazi troops.

1945: Vannevar Bush's mechanical home encyclopedia

In the July 1945 issue of The Atlantic Monthly, Vannevar Bush wrote an article speculating about possible future technologies. While it dealt with a broad range of ideas, most of them were in some way related to the storage and retrieval of information. Arguably the most interesting of the ideas related to finding useful information, at least in hindsight, was the concept of information being linked together by association.

The human mind... operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain.

Selection by association, rather than by indexing, may yet be mechanized.

With this, the idea of links that work the same way human brains group ideas together had been formally proposed - a useful way of linking non-fiction together, especially encyclopedias.

Something that Vannevar Bush didn't predict was that computers would be able to store text digitally, translating characters into numbers, such as with ASCII or Unicode. This enables them to store, manipulate, update and retrieve text in ways dramatically more efficient than analogue technology such as microfilm could achieve. He did, however, talk about speech recognition and speech synthesis as a way for information to be stored at a level closer to plain text. Eventually, technology caught up with these ideas and allowed them to be not only realized but also improved upon.
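To make that character-to-number translation concrete, here is a small Python illustration (the example strings are my own, not drawn from the writeup above): each character maps to a code point, and Unicode characters beyond ASCII become multi-byte sequences under an encoding such as UTF-8.

```python
# Characters stored as numbers: the basis of digital text.
text = "Hi"
ascii_codes = [ord(c) for c in text]   # ASCII/Unicode code points
utf8_bytes = "é".encode("utf-8")       # a non-ASCII char as UTF-8 bytes

print(ascii_codes)        # [72, 105]
print(list(utf8_bytes))   # [195, 169] -- two bytes for one character
```

Because the text is just numbers, searching, copying and updating it are simple arithmetic and memory operations, which is exactly what makes digital storage so much more flexible than microfilm.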

1960: Ted Nelson's ambitious Xanadu® Docuverse

With the advent of digital computers, which stored written words in such a way that they could be easily manipulated, real systems started to appear in place of dreams filled with mechanical levers and microfilm. One of the most ambitious of these projects, if not the most ambitious, is Xanadu®, currently forty-five years in the making.

The first thing Xanadu® supports is parallel documents: one document based on, or otherwise related to, another one. Its ability to display two related documents next to each other, with similarities and differences highlighted, is useful for keeping track of revisions in different drafts, or for comparing both sides of a debate. You can also pull data from one document to another, then build upon it, letting the computer automate the tedious tasks such as working out royalty payments for the various authors cited.

Xanadu® allows transclusion, which is the existence of the same information in more than one place. Two completely different documents can share a few paragraphs, for instance, and when those paragraphs are updated on one of the documents, the new version is automatically seen on the other. Ted Nelson sees transclusion as what quotation, copying and cross-referencing merely attempt.
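The core idea of transclusion can be sketched in a few lines of Python. This is my own minimal model, not Xanadu®'s actual design: documents hold references to shared chunks of text rather than copies, so editing a chunk updates every document that includes it.

```python
# A minimal transclusion sketch (illustrative only): documents reference
# shared chunks instead of copying them.
chunks = {}      # chunk id -> current text
documents = {}   # document name -> ordered list of chunk ids

def write_chunk(chunk_id, text):
    chunks[chunk_id] = text

def render(doc_name):
    # A document is assembled from the live chunks it references,
    # so it always reflects the latest version of shared material.
    return " ".join(chunks[cid] for cid in documents[doc_name])

documents["essay"] = ["intro", "shared"]
documents["reply"] = ["shared", "conclusion"]
write_chunk("intro", "My argument:")
write_chunk("shared", "Hypertext is non-sequential writing.")
write_chunk("conclusion", "I agree.")

# Editing the shared chunk changes what both documents display.
write_chunk("shared", "Hypertext is branching, non-sequential writing.")
```

The design choice is the whole point: because there is only ever one copy of the shared paragraphs, quotation and cross-referencing stop being stale snapshots and become live views.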

Links are also available in Xanadu®, although in 1965 Ted Nelson coined a new word for them: hyperlinks. They are bidirectional, and cannot be broken. The way they work is that a block of text in one document is linked to a block of text in another document. The link has an identity as far as both text blocks are concerned (such as "my comment on someone's idea" at one side, and "someone has commented on my idea" at the other). No matter how much either end is updated, the link remains between any individual characters that were present in the original versions of the texts, and so it cannot become obsolete.
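The difference between Xanadu®-style links and the web's one-way links can also be sketched briefly. Again, this is my own illustrative model rather than Xanadu®'s implementation: each link is a first-class record that names both of its ends, so either document can discover the links that touch it.

```python
# Bidirectional links as first-class records (illustrative sketch).
links = []

def add_link(doc_a, label_a, doc_b, label_b):
    links.append({"a": (doc_a, label_a), "b": (doc_b, label_b)})

def links_touching(doc):
    # The same link record is visible from either side -- unlike a
    # one-way web hyperlink, which only the linking page knows about.
    return [l for l in links if l["a"][0] == doc or l["b"][0] == doc]

add_link("alice.txt", "my comment on Bob's idea",
         "bob.txt", "Bob's idea that I commented on")
```

Here "bob.txt" can find the link pointing at it just as easily as "alice.txt" can, which is what gives each end its own identity, as in the "my comment" / "someone has commented" example above.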

Perhaps the most ambitious part of the Xanadu® system is transcopyright: every author has the right to demand a very small amount of money every time someone reads a piece of her work, whether the reader is trying to access it directly or it is transcluded in someone else's document.

Despite all of these innovative ideas, Xanadu® has so far failed to become popular. It is proprietary and centralized, and few people have taken the time to try it out. In the end, the hypertext protocol that changed the world was the one that was, in many ways, the least ambitious.

1989: Tim Berners-Lee's open, decentralized web

Tim Berners-Lee started off developing a hypertext system called Enquire, which was much like the other systems before it: centralized. It had one place where everything was stored. It also used bidirectional hyperlinks, just like Xanadu®. The new feature it added was external links that could connect different files together. These only went in one direction, however, to avoid cluttering a page with thousands of links, not to mention all the problems associated with storing redundant data - stating the same thing in more than one place.

Like the other hypertext systems before it, Enquire never took off. It did, however, give Tim Berners-Lee a starting point for a more adventurous idea.

The system had to have one fundamental property: it had to be completely decentralized. That would be the only way a new person somewhere could start to use it without asking for access from anyone else.

One of the main advantages of the web is that it is built on top of an existing technology: the Internet. Although the Internet had been growing since its inception in 1969, there was very little information permanently stored on it at the time, and certainly no standardized way of accessing that information. The bulk of data passing through it was in the form of e-mails, newsgroup posts and other transient messages. It offered the ideal place for a new protocol to reside, however, as anyone could put a computer on the Internet and get it to start talking to any other computer on the network. It fitted in with the decentralized philosophy perfectly. Anyone was free to join, without having to ask anyone else for permission.

In much the same way as Paul Otlet invented Universal Decimal Classification, Tim Berners-Lee invented Uniform Resource Locators, or URLs for short. These allow anything on the Internet, from a newsgroup message to a file on an FTP server, to be linked to from a hypertext document. In practice, this means that people can treat almost anything already on the Internet as if it is on the web, making it easy to include the wealth of information already available without having to move it or rewrite it.
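You can see this one-address-scheme-for-everything idea directly with Python's standard urllib.parse module (the example addresses are my own): the scheme at the front of a URL names the service, while the rest identifies the resource within it.

```python
# One addressing scheme covering many Internet services.
from urllib.parse import urlparse

for url in ["http://example.org/page.html",
            "ftp://ftp.example.org/pub/file.txt",
            "news:comp.infosystems.www.misc"]:
    parts = urlparse(url)
    print(parts.scheme, parts.netloc or parts.path)
```

Because every scheme fits the same general shape, a hypertext document can link to an FTP file or a newsgroup message with exactly the same syntax it uses for a web page.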

In many ways, the web isn't as advanced as other hypertext systems, such as Xanadu®. It doesn't support transclusion or bidirectional links, and as anyone who has used it can attest, it is full of broken links. For all these shortcomings, however, it remains the most popular hypertext system so far because of its advantages: it uses an open format that anyone can use freely, and it is decentralized, so that anyone can add to it. These factors ensure that, while it isn't the most elegant solution, it is the most accessible and widely adopted. No one person owns it, and everybody is free to add to it in any way they like.

In the end, it seems that most people agree that these freedoms outweigh the web's minor technical shortcomings.

Forgotten Forefather (http://www.boxesandarrows.com/archives/forgotten_forefather_paul_otlet.php)
As We May Think (http://www.press.umich.edu/jep/works/vbush/vbush-all.html)
Xanalogical Structure (http://www.xanadu.com.au/ted/XUsurvey/xuDation.html)
Weaving the Web, by Tim Berners-Lee (ISBN 1-58799-018-0)

This writeup is in the public domain.

Ted Nelson, who coined the term hypertext in 1963, described the “writer’s console,” or word processor, as “lonely and pointless.” Nelson began a revolution in composition arrangement that would, he thought, be far more significant than word processing.

It is the choice among [ideas and structures]… that is the process of writing…. You take a structured complex of thought… that you are trying to communicate, and you break it into individual sequential parts that can be put end to end, and this is a wholly artificial process… based upon the fact that it has to be published eventually in a sequence.

Nelson objected to the necessity of sequence not only in the writing process but also in the final text. Computers, he discovered, did not impose this format on text but instead allowed a much more diverse variety of arrangement in writing, including “branching literature” and new techniques for organizing and comparing ideas. While word processing can make the writing process less linear, hypertext can do the same to the product.

One of Nelson’s main arguments against the paradigms used to handle text on computers, including the word processor, is that they operate on metaphors drawn from our pre-computerized lives, metaphors that limit the range of things we can do with a computer. “Today’s arbitrarily constructed computer world,” writes Nelson, “is also based on paper simulation.… Paper is the flat heart of most of today’s software concepts.”

Nelson argues that rejecting metaphors altogether would be better, writing that “as soon as you draw a comparison to something familiar, you are drawn into that comparison—and stuck with the resemblance.” Because computers do not actually function in the same way as paper, it is not necessary that they follow the same constraints. The so-called technorati have often focused on the impact of the internet in terms of the ability to access a vast amount of information quickly, but Nelson has always focused on the importance of connections between this information. A self-described contrarian, he describes the World Wide Web as “watered down and oversimplified” because it lacks the permanent two-way links and annotation that would help make connections in an orthodox Nelsonian hypertext system. The Web, with rectangular pages, represents the continuing hegemony of paper. “We must,” declares Nelson, “overthrow the paper model, with its four prison walls and peephole 1-way links.”

