
Tell Me a Story I Can Live:
Chapter Three:
Technological Underpinnings of New Media



“What is Fantasy Adventure exactly? It’s vicarious escape through consumerism, catharsis through consumption. It’s a momentary, wild-and-crazy retreat from the world into an exotic flavor, a ‘foreign’ experience, some product-assisted derring-do of the imagination…it’s an escapist identification with a hero who’s gutsier than you are, able to get rid of all the bad guys and still get you back home for dinner… It’s an out-of-body experience you enjoy from your favorite chair. It’s ever-more-exciting exploits undertaken in the safest possible ways: for the key to the Fantasy Adventure trend is that the risk-taking is risk-free. You cavort through your favorite exotic…or dangerous…or wicked…or luxurious…or mysterious world, confident that you’re guaranteed safe return. It’s adventure-by-association, secondhand sensation. And for most of our overloaded sensory systems, it’s all that we need.”


– Faith Popcorn, The Popcorn Report




The history of the development of computer technology is broad, has been dealt with more than adequately elsewhere and, in any case, is beyond the scope of this thesis, so I do not intend to recapitulate it. However, certain areas of research are relevant to the emergence of computer-aided forms of narrative art, so I will look at them briefly here.

The Memex

Imagine an information processing machine so big that it has to be built into the desk where you work. Information is stored on microfilm; when you ask for a document, mechanical devices quickly sort through the microfilm to find it. It is then projected onto a screen built into the desk; the screen can be divided in such a way that more than one document can be projected onto it at a time. If you want, a special pen will allow you to write on the document projected onto the screen; the annotated document is then saved on microfilm so that it can be accessed by you (or anybody else) at any future date. In addition, you will be able to save the connections you make between files in case you want to retrace your steps or expect somebody else will benefit from the connections you have made.

This is the basis of the memex, a system proposed by Vannevar Bush in a paper published in 1945 called “As We May Think.” In most of its particulars, Bush’s device has been rendered comically obsolete by successive waves of technological innovation, especially in the area of digital technology. However, the logic behind the memex is as relevant today as it was 50 years ago.

Bush looked at the explosion of information in the first half of the twentieth century, and was dismayed at what he found. “The summation of human experience is being expanded at a prodigious rate,” he wrote, “and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.” [38] As Bush pointed out, the expansion of knowledge was of limited value if a researcher could not find all of the information relevant to his or her field of study: “There may be millions of fine thoughts, and the account of the experience on which they are based, all encased within stone walls of acceptable architectural form; but if the scholar can get at only one a week by diligent search, his syntheses are not likely to keep up with the current scene.” [39]

Until this point, the traditional method of organizing information was indexing: the universe of knowledge was divided into different fields, which were then subdivided into different specialties, which were further subdivided, and so on. Moving down through the various subdivisions, you would eventually come to the very specific information you were looking for. The Dewey decimal system, much beloved by libraries, uses this method of organization.

Bush found this system inadequate for what now seems an obvious reason: “The human mind does not work that way. It operates by association.” [40] He proposed the then-radical idea that a different approach to organizing information was required: “Man cannot hope fully to duplicate this mental process artificially, but he certainly ought to be able to learn from it… Selection by association, rather than by indexing, may yet be mechanized.” [41]

The memex, employing the technology of the time, was a proposal for a new method of storing and accessing information based on this observation. Using it, you could move from a ballistics report to other, previously stored ballistics reports, which would be easy enough using traditional methods. However, starting from the same ballistics report, you could also access weather reports or histories of the migratory patterns of birds or papers on the explosive properties of new chemical compounds or schematic diagrams of different types of rockets or, in short, any information which you felt might be relevant to your subject but which would not necessarily be easy to find by traditional means. Bush championed “associative indexing, the basic idea of which is a provision whereby any item may be caused at will to select immediately and automatically another. This is the essential feature of the memex. The process of tying two items together is the important thing.” [42]

Bush had the foresight to realize that a whole new way of looking at knowledge would develop out of such a system: “…when numerous items have been thus joined together to form a trail, they can be reviewed in turn, rapidly or slowly… It is exactly as though the physical items had been gathered together to form a new book. It is more than this, for any item can be joined into numerous trails.” [43] These trails could be reproduced so that anybody else could follow them, adding annotations of his or her own. The end result would be the transformation of literature: “Wholly new forms of encyclopedias will appear, ready-made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.” [44]
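To make the mechanism concrete, here is a minimal sketch, in Python, of associative links and a saved trail; the `Item` class and the item titles are hypothetical illustrations of Bush’s idea, not anything he specified.

```python
# A minimal sketch of "associative indexing": items link directly to one
# another, and a trail is simply a saved, replayable sequence of items.

class Item:
    def __init__(self, title):
        self.title = title
        self.links = []          # associative links to other items

    def link_to(self, other):
        # "The process of tying two items together is the important thing."
        self.links.append(other)

ballistics = Item("Ballistics report")
weather = Item("Weather observations")
migration = Item("Migratory patterns of birds")

# Associative links cut across any subject hierarchy.
ballistics.link_to(weather)
weather.link_to(migration)

# A trail is a named path through the links, which a researcher could
# save, retrace later, or hand to somebody else.
trail = [ballistics, weather, migration]
for item in trail:
    print(item.title)
```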

Anybody who has spent time on the World Wide Web can see that Bush’s ideas form its intellectual underpinnings. In addition, as we shall see, most interactive media, including those which are primarily fictional, are associative. But it took another visionary, working 20 years later, to start transforming Bush’s ideas into a workable reality: Theodore Nelson.

Ted Nelson Dreams of Xanadu

“The system described in this book [Literary Machines] builds and fuses…two great visions. It is very close to Bush’s original memex, but now computerized; and its purpose is the augmentation of human intellect as Doug Engelbart foresaw; it is intended to be especially simple for beginning users but easily extended to applications of great complexity; and it is constructed for orderly and sweeping growth as a universal archive and publishing system.” [45]

In the early 1960s, Theodore (Ted) Nelson realized that then-developing computer technology could be used to fulfil Bush’s vision. He set out his ideas in Literary Machines, which was distributed in digital form until the mid-1980s, when it finally appeared in print. In the book, Nelson made much the same argument as Bush for a new way of organizing information: “The structure of ideas is never sequential; and indeed, our thought processes are not very sequential either. True, only a few thoughts at a time pass across the central screen of our mind; but as you consider a thing, your thoughts crisscross constantly, reviewing first one connection, then another. Each new idea is compared with many parts of the whole picture, or with some mental visualization of the whole picture itself.” [46] Since the nature of thought is not linear but associative, human beings needed a method of storing and retrieving information which would stimulate such associative thinking.

Nelson went beyond this, however, to argue that “breaking up…ideas into a presentational sequence is an arbitrary and complex process. It is often also a destructive process, since in taking apart the whole system of connection to present it sequentially, we can scarcely avoid breaking — that is, leaving out — some of the connections that are part of the whole.” [47] A traditional text, by this argument, is always incomplete because there are always more connections to be made between its various parts; a text made up of smaller, interconnected parts, on the other hand, can be read in many different ways.

Nelson adds that forcing a single sequence of information upon every reader is inappropriate, and that “it would be greatly preferable if we could easily create different pathways for different readers based upon background, taste and probable understanding.” [48] In placing the reader at the centre of the literary experience, Nelson was echoing the ideas of French literary theorists who were just beginning to be known in North American academic circles at that time, as when he suggested that, “Within bodies of writing, everywhere, there are linkages we tend not to see,” [49] and that his aim was to create a system which would make such linkages explicit. (I deal with these theories in more detail below.)

Nelson called his system hypertext. “By hypertext,” he wrote, “I simply mean non-sequential writing.” [50] To illustrate what he was talking about, he divided each of the chapters of Literary Machines into small sections and encouraged readers to read them in whatever order they liked. Reading Nelson’s book for the first time in 1994, I was struck by how artificial the device seemed. But by then hypertext had unquestionably become a reality; the book’s structure may have been more helpful to people reading it in the 1960s and 70s.

Nelson clarified some ramifications of this system not addressed by Bush. He realized, for instance, that no document need ever be stored whole (as word processors store them to this day), but could be stored as successive parts of a whole. “When you ‘go to a certain part’ of a document, the whole document is not ready to show; yet the system gives you that part instantly, assembling it on the run from the many fragments of its actual storage… This method stores the document canonically as a system of evolving and alternative versions, instantly constructed as needed from the stored fragments, pointers and lists of our unusual data structure. Thus there is no ‘main’ version of a thing, just an ongoing accumulation of pieces and changes, some of whose permutations have names and special linkages.” [51] This method of creation would allow writers to return to previous versions of a document, compare them with the current version and create alternative versions – two major goals of Nelson’s design – while saving storage space, since successive drafts of a document would not be stored in their entirety, only those parts which did not appear in previous drafts.
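The storage scheme Nelson describes can be sketched briefly. In the sketch below, the fragment table and per-version fragment lists are my own illustration of the idea, not Xanadu’s actual data structure.

```python
# A minimal sketch of storing a document as shared fragments plus
# per-version fragment lists: there is no "main" copy, and a new draft
# costs only the fragments that changed.

fragments = {
    "f1": "Call me Ishmael. ",
    "f2": "Some years ago ",
    "f2b": "Some years ago (never mind how long precisely) ",
    "f3": "I thought I would sail about a little.",
}

versions = {
    1: ["f1", "f2", "f3"],
    2: ["f1", "f2b", "f3"],   # the revision stores only its new fragment
}

def assemble(version):
    """Construct a draft on demand from its stored fragments."""
    return "".join(fragments[fid] for fid in versions[version])

print(assemble(1))   # the earlier draft, rebuilt instantly
print(assemble(2))   # the current draft shares unchanged fragments
```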

Rather than linking one document to a fixed address on a particular computer – the traditional way of locating material – links would be made to specific areas (such as characters or words) within another document. This had the advantage of ensuring that even if a document was changed, or a part which contained a link was moved, the link would remain. “Thus a link to one version of a Prismatic Document is a link to all versions.” [52] Taking this to its logical conclusion, Nelson saw that a link to a document is a link to all other documents to which it is linked, which means that a link to one document is a potential link to every other document on the system. He called this web of interlinked text fragments the “docuverse,” or universe of documents.
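Such a link can be sketched as a pointer to content rather than to a location. The fragment identifiers below are hypothetical; the point is only that the link survives a revision which moves the passage.

```python
# A link that names a fragment of content, not a byte address, remains
# valid in any version of the document that still contains the fragment.

fragments = {
    "f1": "In Xanadu did Kubla Khan ",
    "f2": "a stately pleasure-dome decree",
}

versions = {
    1: ["f1", "f2"],
    2: ["f2", "f1"],   # a revision that moves the linked passage
}

link = {"fragment": "f2"}    # the link points at the content itself

def resolve(link, version):
    """Return the linked passage if this version still contains it."""
    if link["fragment"] in versions[version]:
        return fragments[link["fragment"]]
    return None

print(resolve(link, 1))   # the linked passage in the original version
print(resolve(link, 2))   # the same target, even though it has moved
```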

The computer system which Nelson developed to embody his theories was named Xanadu, after the city where, in Coleridge’s poem, Kubla Khan erected his pleasure dome.

In Xanadu, a link from one document to another was looked upon as a window onto it: “Through a ‘window’ in the new document we see a portion of the old… The windows of a windowing document are themselves actually particular links between documents. No copy is made of quoted material; rather a quote-link symbol (or its essential equivalent) is placed in the quoting document. This quotation does not affect the integrity or uniqueness of ownership of the original document, since nothing has been copied from it.” [53] This distinction – that a document may contain other documents, in whole or in part, without affecting their unique integrity – was crucial to the functioning of Xanadu because it made an accurate assessment of ownership of information possible. “We make sure that everything stored is divided precisely into separate documents…” Nelson wrote. “What this convention really does is stress the singularity of each document, its external and internal borders. Thus, we focus on the integrity of the ‘document’ as we’ve known it… Every document has an owner, the person who created and stored it (or someone who arranged it to be created and stored, such as a publishing company). The rightful copyright holder, or someone who has permission from the copyright holder and pays for storage, is the owner as far as the system is concerned.” [54]
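A quote-link can be sketched the same way: the quoting document stores a window (a reference to a span of the original) and the quoted words are pulled in at read time. The document name and span positions below are hypothetical.

```python
# A minimal sketch of a quote-link "window": nothing is copied from the
# quoted document; the quoting document stores only a span reference.

documents = {
    "bush-1945": "Selection by association, rather than by indexing, "
                 "may yet be mechanized.",
}

quoting_document = [
    'Bush argued that "',
    {"doc": "bush-1945", "start": 0, "end": 24},   # window onto "Selection by association"
    '" could replace hierarchical indexes.',
]

def render(parts):
    """Assemble the quoting document, resolving windows at read time."""
    out = []
    for part in parts:
        if isinstance(part, dict):                 # a window, not a copy
            source = documents[part["doc"]]
            out.append(source[part["start"]:part["end"]])
        else:
            out.append(part)
    return "".join(out)

print(render(quoting_document))
```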

When ownership of bits of information can be established, a system for paying for them can be created: “In our planned service, there is a royalty on every byte transmitted. This is paid automatically by the user to the owner every time a fragment is summoned, as part of the proportional use of byte delivery. Each publishing owner must consent to the royalty – say, a thousandth of a cent per byte – and each reader contributes those few cents automatically as he or she reads along, as part of the cost of using the system.” [55] Determining which writer of a hypertext document owns the copyright, and, therefore, should be paid for the work, was simple: “The windowing approaches already mentioned automatically furnish a general solution to the ‘copyright problem’ with regard to quotation and citation, simply by this means: authors who are windowed in a document automatically get royalties as well. When a quotation is sent out, the owner of the quoted document gets that increment of royalty.” [56]

People on this system would have accounts from which electronic money would be debited every time they accessed somebody else’s document. In this way, a problem which has plagued the World Wide Web since its inception – how to maintain copyright on electronic networks so that writers can be properly compensated for their work – was largely solved by Nelson in the 1960s.
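The arithmetic involved is simple enough to work through. The account handling below is a hypothetical sketch using Nelson’s own example rate of a thousandth of a cent per byte.

```python
# A worked example of the per-byte royalty: every fragment delivered
# debits the reader and credits the document's owner.

ROYALTY_PER_BYTE = 0.001    # cents per byte, Nelson's example figure

accounts = {"reader": 500.0, "owner": 0.0}    # balances in cents

def deliver(fragment, reader, owner):
    """Debit the reader and credit the owner for every byte summoned."""
    fee = len(fragment.encode("utf-8")) * ROYALTY_PER_BYTE
    accounts[reader] -= fee
    accounts[owner] += fee
    return fragment

deliver("Selection by association may yet be mechanized.", "reader", "owner")
print(accounts)    # 47 bytes move roughly 0.047 cents from reader to owner
```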

Nelson recognized that it would not be possible to create a substantial docuverse on a single computer. “For the services described here to be seriously expanded to large numbers, it will be necessary to ‘network’ the service through multiple computers distributed throughout the nation and the world… A user should get anything he or she asks for an instant or so after the request, even if it comes from far away – even if some parts of it come from one faraway place, other parts from other far away places; however widely scattered its parts may be in their storage and ownership.” [57]

Finally, Nelson believed his ideas of hypertext could be applied to a variety of other media. “This freedom of windowing applies, of course, to all forms of data, including pictures, musical notation, etc.” he wrote. [58] This essentially brings us to the present, where interactive hypermedia, as well as being used for information storage and retrieval, are emerging as a new art form.

Reading Literary Machines can be a saddening experience, like reading a novel about an alternative universe which is tantalizingly close to our own but which we know can never be. (“In one business scenario, the intended public operation of the publishing system will be out of a chain of suburban or roadside stations, called Silverstands™. New users will learn the operation of the system at such stands, and local users may dial into their nearest Silverstand. Silverstand personnel (‘Conductors’) will include both local people and an itinerant corps of circulating smarties.” [59]) Although it has been in development for decades, consuming vast amounts of talented programmers’ time, Xanadu has never been widely tested, let alone made available to the general public. [60] The ascendance of the World Wide Web in the last few years makes it unlikely that Nelson’s vision will ever be realized in the form he wrote about. Still, today’s interactive media owe a great debt to the ideas which Nelson was the first to doggedly pursue and popularize.

Games Computer Users Play

Suppose you had a medium of communication that could take you anywhere, allow you to inhabit any character, give you the chance to experience any adventure — what would you do with it? Would you go back in history to learn what life was like for prehistoric humans? Would you learn philosophy at the foot of Plato? Playwriting at the work table of William Shakespeare? Politics in the cabinet of Abraham Lincoln?

Get real. If the history of computer games teaches us anything, it’s that you would want to shoot things — and the more things to shoot the better!

Computers have been used to play games almost from the beginning of their history. In the early 1960s, for instance, a crude space game, complete with enemy craft, was created for the mainframe, gray-screen computers of the time. It was visual (in the simplest way imaginable) and interactive, the two touchstones of current computer-based entertainment.

Because of the crudeness of computer screens at the time, a more common form of game was text-based. “Will Crowther and Don Woods of the Stanford Artificial Intelligence Laboratory programmed the first text-exploration game, the illustrious Adventure, in 1976. Adventure in turn launched a genre.” [61] In these games, players are given a brief description of an environment. By typing questions into their computers, they can learn more about the environment; by typing simple commands, they can move about in it, picking up objects which they may use at some future point in the adventure, and interact with other characters, typing in both “dialogue” and commands for physical action (e.g., “Attack the monster with sword.”).
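As a rough illustration of how this command parsing worked, here is a minimal sketch; the vocabulary and responses are hypothetical, and the real parsers of the period were considerably more elaborate.

```python
# A minimal sketch of two-word command parsing in the style of early
# text adventures: the first word is the verb, the rest is its object.

inventory = []

def parse(command):
    words = command.lower().strip(".").split()
    verb, rest = words[0], words[1:]
    if verb in ("look", "examine"):
        return "You are in a maze of twisty little passages, all alike."
    if verb in ("take", "get") and rest:
        inventory.append(rest[-1])               # keep the object for later
        return f"You pick up the {rest[-1]}."
    if verb == "attack" and "sword" in rest:
        return "You swing the sword. The monster flees!"
    return "I don't understand that."

print(parse("Look"))
print(parse("Take the sword"))
print(parse("Attack the monster with sword."))
```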

It didn’t take long for such games to become immensely popular:



Zork was the first commercially distributed interactive adventure game. It required the user to enter commands on a keyboard to find hidden treasure in a sprawling fantasy world.

The original version of Zork was written in 1977 by some playful MIT students using a US$400,000 mainframe, the PDP-10. The game could be freely accessed through Arpanet (the forerunner of the Internet). A year later, it was rewritten in Fortran and became a well-known time waster in computer labs throughout the world.

In 1979, some of Zork‘s original creators formed a software publishing company named Infocom, Inc., and in 1980, they released a commercial version of Zork for Radio Shack’s first home computer, the TRS-80 Model 1. Zork II and Zork III soon followed, selling a combined total of more than 3.5 million units. [62]



Infocom, by then purchased by Activision Inc., published Return to Zork in 1993, a CD-ROM title with modern graphics that required no text entry. Activision also owns the rights to the original Zork games and has rereleased them as The Zork Anthology.

Around the same time, another visual computer game was created, this time for use in people’s homes: Pong. “In the early 1970s, people began playing a novel new game on their TVs. It was called Pong. In hindsight, it was moronically simple. Two ‘paddles’ appeared as small rectangles on either side of the TV screen. A bright dot representing a ball appeared between them. The objective of the game, like real tabletop ping pong, was to use the electronic paddles to keep the ball in play. The novelty of interacting with the TV and being able to match skill with another player, regardless of how simple the game, caught the public’s interest. Even though Pong wasn’t the first video game, it was the first to achieve popular recognition and financial success.” [63]

Nolan Bushnell, creator of Pong (1972), founded Atari, a maker of computer games which remained popular into the mid-1980s. Bushnell’s strategy was to take popular arcade games such as Pong and recreate them to be played on home systems. At the time, remember, computers were not a common household item; most people were not aware that the game machines they allowed into their homes were actually based on the same technology on which they could do word or information processing.

Atari’s success was eclipsed in the mid-1980s by an upstart Japanese company: Nintendo. “In the video-game market, where shooting and mass destruction were the norm, the first ‘Super Mario Bros.’ game created a revolution in 1985 by introducing elements not often associated with computer terminals and controllers: wit and humor.” [64] The Nintendo Entertainment System (NES) became a staple in every home and a necessary part of growing up for a generation of children. Few parents knew it, but the NES was designed with all the capabilities of a computer.

Games were the Trojan horse of the computer industry, putting more computing power into people’s homes than most people realized they wanted. Where computers once had to be disguised as game machines so as not to worry a generally technophobic public, they are now out in the open: “A recent survey by Inteco, a U.S.-based research firm, shows that 70 percent of people who own computers use them for playing games. Computers are, of course, used for other functions, but they are heavily used for nontraditional computing functions.” [65]

As computers become more sophisticated, they are seen as a better and better way of giving people – you guessed it – games: “As Macs and PCs become multimedia machines, they are being viewed as a new untapped market by game developers.” [66] In addition, computer networks offer the opportunity to play against other human beings, rather than against machines. “‘Soon, everyone will be playing games by phone – it’s inevitable,’ said John Bermingham, AT&T consumer products vice-president… If things go as planned, kids will be shooting it out in national game tournaments without ever leaving home.” [67]

This (admittedly highly selective) history of computer gaming should illustrate that interactive entertainment is not a new phenomenon; it has been around since at least the 1970s. In fact, many of the principles of interactivity were explored a couple of decades ago; as Julian Arnold points out, “Adventure games are an early form of computer-based IF [Interactive Fiction].” [68] In addition, artists should be aware that, because of this history, there is an ingrained belief in the computer industry that interactive entertainment is solely about games; convincing people in the industry of the legitimacy of more sophisticated forms of interactive “art” is, at the moment, an uphill battle.

Convergence

We all know what the different media are. In the corner sits a television set which brings us video programs when we turn it on. On a desk in another room of the house (or possibly as far away as our office) rests a computer on which we work. There’s no point trying to combine the two, because all we would get would be a television that could do word processing and spreadsheets, or a computer on which we could watch Wide World of Sports, right?

Hardly.

“When computing technology becomes part of a product, it transforms the basic nature of the product until it becomes something else altogether,” writes futurist Frank Koelsch. [69] Thus, when plain text was combined with computers, the result wasn’t just a fancier form of typewriter or book, but a combination of the two, as well as a new medium in which the reader navigates through fragments of text, creating a unique whole with every reading – hypertext.

We live in a time when all our information media are being combined and recombined, facilitated by the omnipresent computer. This is known as “convergence.” “In essence,” Koelsch explains, “convergence is the coming together of different technologies, the fusion of two or more technologies to form something new and different – something that has attributes of each but is altogether unique. The new technologies and products that result from convergence are greater than the sum of the original parts…” [70]

Adding computers to television, for instance, can give us video on demand, freeing us from the tyranny of network scheduling. Beyond this, computers allow our television programs to be interactive: with the click of a button, we can see a scene from the point of view of any character in it or, perhaps, decide what direction the scene will take. Add CD-ROM technology, and you have an interactive dramatic work which you can experience on your desktop computer. Add the telephone, and you have an interactive experience which you can share with 100 of your closest friends.
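The branching structure implied here can be sketched simply; the scenes and choices below are hypothetical stand-ins for such an interactive drama.

```python
# A minimal sketch of a branching scene graph: each scene offers
# choices, and the viewer's pick determines which scene plays next.

scenes = {
    "confrontation": {
        "text": "The detective faces the suspect.",
        "choices": {"press harder": "confession", "back off": "escape"},
    },
    "confession": {"text": "The suspect confesses.", "choices": {}},
    "escape": {"text": "The suspect slips away.", "choices": {}},
}

def play(scene_id, pick=None):
    """Show a scene, then follow whichever branch the viewer picks."""
    scene = scenes[scene_id]
    print(scene["text"])
    return scene["choices"].get(pick, scene_id)

next_scene = play("confrontation", "press harder")
play(next_scene)
```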

Koelsch calls the end result of this convergence “infomedia”: traditional media combined with information technologies. Most of the technology already exists, or is the subject of furious research which should bear fruit in the near future. This explosion of new media offers a tremendous arena for artists to experiment with form. This thesis looks at some emerging principles of these new media, particularly as they relate to dramatic narrative media such as film, television and theatre.
