
Literature at Lightspeed:
Chapter Three:
The Economics of Information on the Web




The Internet is the hottest topic in the country… Companies of all kinds — big and small, technology-oriented or service-based, employing hundreds of people or consisting of one person at a desk — are trying desperately to take advantage of the benefits, both factual and perceived, of being online. Writers are no different. Surf the Net and check the web sites writers have put up in an attempt to draw attention to their services. You’ll be surprised at what you see. (Winchester, 1997, 23)

For a huge number of Americans, a single company will control the electronic pipeline into the house and most, if not all, of the content and services that are pumped through it. (Reguly, 2000, B1)

Walter Hale Hamilton: “Business succeeds rather better than the state in imposing restraints upon individuals, because its imperatives are disguised as choices.” (Herman and McChesney, 1997, 190/191)


Introduction

As I write this, “A Year in the Life of the Digital Gold Rush” blares from the cover of the latest issue of Wired magazine. (Bronson, 1999, front cover) The comparison of efforts to make money off the Internet to the Alaskan or Californian gold rushes of the last century is common in the popular literature on electronic commerce. “[T]here are corporate prospectors on the electronic frontier, rubbing their hands at the trillion-dollar, digital goldmine expected by the year 2000,” goes one example. (Biocca and Levy, 1995, 21)

It is true that a lot of people and organizations are putting up Web sites in the hope of making money. As we saw in Chapter Two, many of the writers in my survey said that they were hoping to figure out a way to make money from the writing they published on the Internet. However, people and organizations who supply digital information (as opposed to physical goods) over the Internet have, for the most part, not been able to find an economic model which works. “Prodigy, once the third-biggest online service in the U.S., practically vanished after blowing more than a million dollars on largely unwatched content. Ted Turner spent an undisclosed amount on his Web ‘zine, Spiv, before pulling the plug. And the Microsoft Network took a bath on the online magazine Mint.” (Thompson, 1998, 57) Announcements that “…in recent months, well-regarded sites Word and Charged have been forced to seek financial rescue” (Sandberg, 1998, B7) are almost as frequent as announcements of new efforts to create content which will be profitable. There seems to be a lot of wisdom in the observation that, “As [with the gold rushes], it is likely that more money will be made by those who provide the supportive infrastructure of hardware, software, and intellectual property — the picks and shovels of cyberspace…” (Whittle, 1997, 42)

A workable mechanism for paying for digital information was developed in the 1960s. As described in Chapter One, Theodore Nelson’s hypertext system, which he came to call Xanadu, involved a series of documents in different windows; when a user activated a link, the material would appear in a new window. Xanadu had an internal copyright to which everybody who signed a contract to be on the network agreed: anybody could link to anything to which they had legitimate access; once a document was published, you had no control over who linked to it or how; and once a document was posted, you could not remove it (because doing so would break all the connections made to and from it), but you could publish versions which superseded it. Most important to the current discussion, “In our planned service, there is a royalty on every byte transmitted. This is paid automatically by the user to the owner every time a fragment is summoned, as part of the proportional use of byte delivery. Each publishing owner must consent to the royalty — say, a thousandth of a cent per byte — and each reader contributes those few cents automatically as he or she reads along, as part of the cost of using the system.” (Nelson, 1992, 2/43 and 2/44) If one document quoted another, activating the link to the quote sent the quoted author a small percentage of the fee from the original page. If another link was activated within the quote, the third author would receive a percentage of the second’s fee, and so on.
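Nelson’s cascading royalty scheme can be made concrete with a short sketch. The per-byte rate follows his “thousandth of a cent per byte” example, but the pass-through share forwarded along the chain of quoted authors is an invented figure, since his description does not fix one; the function and variable names are likewise hypothetical.

```python
# Illustrative model of Xanadu-style transclusion royalties.
# The pass-through share is an assumption made for this sketch;
# Nelson specified only that fees flow automatically and proportionally.

RATE_PER_BYTE = 0.00001  # a thousandth of a cent per byte, in dollars
PASS_THROUGH = 0.20      # hypothetical share forwarded to the next quoted author

def royalties(bytes_read, chain_of_authors):
    """Split a reading fee along a chain of quoted documents.

    chain_of_authors[0] wrote the page being read; each later
    author is quoted by the one before. Returns a dict mapping
    author -> payment in dollars.
    """
    fee = bytes_read * RATE_PER_BYTE
    payments = {}
    for i, author in enumerate(chain_of_authors):
        if i < len(chain_of_authors) - 1:
            kept = fee * (1 - PASS_THROUGH)  # forward the rest down the chain
        else:
            kept = fee  # the last quoted author keeps the remainder
        payments[author] = payments.get(author, 0.0) + kept
        fee -= kept
    return payments

# A reader pulls 50,000 bytes of a page by A, which quotes B, which quotes C.
print(royalties(50_000, ["A", "B", "C"]))
```

Under these assumed numbers, the page’s author receives most of the half-cent fee, with progressively smaller shares flowing to each quoted author, which is the cascade Nelson envisioned.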

Nelson’s Xanadu was never put into practice, and has largely been overtaken by the World Wide Web, so we will probably never know if it would have worked. This chapter will look at the financial repercussions of the digital communications network which does exist, the World Wide Web.

We must start by recognizing that writers are not the only people with a financial stake in the Internet. Those who run the largest entertainment conglomerates in the world are also eagerly eyeing the Internet as a potentially lucrative addition to their revenue streams. As we shall see later in the chapter, the interests of the major corporations are often in conflict with those of individual producer/consumers. It is necessary, then, to begin the chapter with a description of these conglomerates and their interests in this medium.

Having done that, we can begin to look at the problems which both individuals and corporations have making money supplying content for digital communications media like the Web. I start by considering the question of what information, as a generic commodity, is worth, concluding that, as the amount of information grows, its value approaches zero. This accords with the early ethos of the Internet, which has been described as a “gift economy,” in which information was exchanged for reasons other than financial gain. Thus, I consider how the Internet compares to models of gift economies based on the physical world.

One technical solution to the diminishing value of information is known as micropayments, which would have allowed people to buy things over the Internet valued at fractions of cents. This would have been a boon to individual content creators, who would have had an effective means of charging for their small inventory of individual pieces of writing. Unfortunately, as I show, there are problems with the technology which have yet to be overcome.

Without an acceptable form of exchange, it becomes important to be able to distinguish one’s content from that of the generic content flow. This can be accomplished by “branding,” making the name of the producer or the product stand out in the mind of the consumer. As I show in a section on the subject, large producers can take advantage of branding in a way which is not available to most smaller information producers, especially individual writers.

While branding can make a product more attractive, it isn’t an economic model, per se. So, next, I turn my attention to models from existing media — advertising and subscriptions — and find them mostly inadequate for generating revenue over the Web. One of the main problems with both models is that they require large numbers of consumers, while the billions of pages on the Web fracture audiences, making the audience for any given page too small to make it financially viable. One possible method of dealing with this is to change the nature of the medium, to make it close enough to an existing medium that the model of revenue generation for the existing medium can be applied to it. Individuals do not have this power, but entertainment conglomerates might. Therefore, I next look at ways in which they have attempted, and continue to attempt, to turn the Web into a glorified form of television, introducing push technologies, streaming video and multicasting, Web TV and asymmetrical bandwidth transmission. These technologies are not necessarily sinister; there are good reasons for adopting them. However, I hope to show that the form in which they are introduced into the marketplace can have an adverse effect on individual Web content creators.

The chapter ends with a look at a non-traditional form of exchange which has been suggested to deal with the fundamental problem of information as a commodity on digital networks: the attention economy.

Corporate Conglomeration in the Information and Entertainment Industries

The primary (some would say only) purpose of a corporation is to make money for its shareholders. The more it dominates a market, the greater the potential profit. There are two ways a corporation can attempt to dominate a market: through vertical or horizontal integration. A horizontally integrated corporation tries to corner the market on a given product. A corporation in this position is known as a monopoly. A vertically integrated corporation tries to control every aspect of production and distribution of its product (and, sometimes, related products). A film corporation which has production, distribution, marketing and exhibition capabilities is vertically integrated. Most often, a small number of large vertically integrated corporations settle into a comfortable competition with each other: this is known as an oligopoly.

According to Herman and McChesney, this describes the current state of the entertainment industry.



The 1990s has seen an unprecedented wave of mergers and acquisitions among global media giants. What is emerging is a tiered global media market. In the first tier are around ten colossal vertically integrated media conglomerates. Six firms that already fit that description are News Corporation, Time Warner, Disney, Bertelsmann, Viacom and TCI. These firms are major producers of entertainment and media software and have global distribution networks… Four other firms that round out this first group include PolyGram (owned by Philips), NBC (owned by General Electric), Universal (owned by Seagram), and Sony. All four of these firms are conglomerates with non-media interests, and three of them (Sony, GE, and Philips) are huge electronics concerns that at least double the annual sales of any first-tier media firm. None of them is as fully integrated as the first six firms, but they have the resources to do so if they wish. (1997, 53/54)



These first tier corporations, which work on a global scale, not only control all of the steps for production and distribution of their products in a given medium, but own subsidiary corporations in a wide variety of related media.

This gives these corporations tremendous advantages. “Disney’s 1996 Hunchback of Notre Dame generated a disappointing $99 million at the U.S. and Canadian box offices. According to Adweek magazine, however, it is expected to generate $500 million in profit (not just revenues), after the other revenue streams are taken into account… In sum, the profit whole for the vertically integrated firm can be significantly greater than the profit potential of the individual parts in isolation. Firms without this cross-selling and cross-promotional potential are at a serious disadvantage in competing in the global marketplace.” (ibid, 54) These advantages lie not only in cross-promotion of a product across a wide variety of media owned by a single corporation, but can also occur with large non-media conglomerates: “In 1996 Disney signed a ten-year deal with McDonald’s, giving the fast food chain exclusive global rights to promote Disney products in its restaurants. Disney can use McDonald’s 18,700 outlets to promote its global sales, while McDonald’s can use Disney to assist it in its unabashed campaign to ‘dominate every market’ in the world. PepsiCo. signed a similar promotional deal for a 1996 release of the Star Wars film trilogy, in which all of PepsiCo.’s global properties — including Pepsi-Cola, Frito-Lay snacks, Pizza Hut, and Taco Bell — were committed to the promotion [footnotes omitted].” (ibid, 55) Again, this sort of promotion is not available to smaller content creators, certainly not individuals.

McChesney and Herman identify two other tiers of entertainment corporation. There is “a second tier of approximately three dozen quite large media firms…that fill regional or niche markets within the global system… These second tier firms tend to have working agreements and/or joint ventures with one or more of the giants in the first tier and with each other; none attempts to ‘go it alone…'” (ibid, 53/54) The global corporations “dominat[e] the activities of the other, weaker competitors in the market.” (Serexhe, 1997, 301/302) Access to their marketing and distribution systems gives the global corporations a strong negotiating position in relation to smaller firms, although this may be mitigated partially by the fact that they need a constant stream of content to maximize the profit potential of their distribution systems.

Finally, “there are thousands of relatively small national and local firms that provide services to the large firms or fill small niches, and their prosperity is dependent in part upon the choices of the large firms.” (Herman and McChesney, 1997, 54) These are “independent” producers: small film companies, regional publishers, community radio stations, et al. These companies can have a positive relationship with the larger corporations. As Julie Schwerin, president of InfoTech, a firm that tracks the multimedia industry, points out, “Having a big partner…greases the skids for raising more money to keep the [small] company growing…” (Carlson, 1996, 32) However, the independence of these companies depends upon how much of their revenue can be generated from their local audience; as Herman and McChesney point out, to the extent that they rely on the larger firms for revenue, their independence is compromised.

One might expect that the major entertainment conglomerates would be in intense competition with each other, but this is not necessarily the case. For one thing, there is a pattern of overlapping ownership: “Seagram, for example, owner of Universal, also owns 15 percent of Time Warner and has other media equity holdings. TCI is a major shareholder in Time Warner and has holdings in numerous other media firms. The Capital Group Companies’ mutual funds, valued at $250 billion, are among the very largest shareholders in TCI, News Corporation, Seagram, Time Warner, Viacom, Disney, Westinghouse and several other smaller media firms [footnotes omitted].” (Herman and McChesney, 1997, 56/57) As has also been mentioned, these conglomerates are increasingly “tied, either directly or by overlapping directorships, to the major manufacturing and financial powers.” (Drew, 1995, 73)

Furthermore, “In establishing new ventures, media firms frequently participate in joint ventures with one or more of their rivals on specific media projects. Joint ventures are attractive because they reduce the capital requirements and risk of the participants and permit them to spread their resources more widely… The ten largest global media firms have, on average, joint ventures with five of the other nine giants. They each also average six joint ventures with second-tier media firms.” (Herman and McChesney, 1997, 56) An example might help illuminate this point: the increase in the budgets for Hollywood movies which are dominated by special effects means that they increasingly cost more than $100 million; if such films don’t take in at least $300 million at the box office, the studio could lose enough money to threaten its existence. So, for some films, studios agree to co-produce the films. While this decreases their potential profit, it also decreases the amount of their investment and, therefore, the amount which they risk losing if the film doesn’t do well.

Oligopolies, as a rule, foster only a selective form of competition; in many ways, there is agreement at the highest levels on how to run the system to the benefit of those companies large enough to be included in it. Joint ventures and overlapping ownership are two ways in which companies in an oligopoly work with each other for their mutual benefit.

Although well known for their traditional media holdings, the major entertainment conglomerates are becoming increasingly active in creating content for CD-ROMs and computer mediated communications systems like the World Wide Web. “Now, Fox and Universal have their own large interactive divisions — as do Disney, LucasArts, Time Warner, Virgin, Paramount, Turner, MGM, etc. These companies compete neck-and-neck with other big exclusive software developers like Broderbund, Interplay, Electronic Arts, Accolade and others.” (Lewinski, 1997, 41) Not only do they produce content for their own streams in competition with new media producers, but these corporations also ally themselves with new media companies; America Online, for instance, “established a joint venture in Germany with Bertelsmann, the world’s third largest publishing group,” (Meissner, 1997, 16) while ZDF, the public German broadcasting service, “together with Microsoft and NBC, runs the most ambitious Internet news channel in Germany…with 19 editors who are on Microsoft’s payroll.” (ibid, 17) In fact, “Throughout the 1990s companies like Lucasfilms and Time-Warner began to explore alliances with the biggest players in the information technology industries, and computer companies courted broadcasters. In 1996 Microsoft began a joint venture with NBC to create MSNBC — a traditional television network, delivered by cable and satellite, with an associated Web site… By 1996 all the major television networks had established Internet footholds.” (Friedman, 1997, 179/80)

Computer companies are not the only ones with an interest in electronic communications with which the entertainment corporations are allying themselves. Phone companies with an eye towards delivering digital content are also looking for partners: “Several Bells, including Ameritech and SNET have hired former Hollywood executives to negotiate strategic alliances with film studios… The joint synergy of studio-Bells makes the following possible: movies on demand, home shopping, interactive games, educational programs and travel assistance. The alliances are win-win — studios receive extra distribution and the Bells develop competitive programming.” (Carlson, 1996, 37/38) This was made possible partially by government deregulation of phone services, allowing them to enter fields they were previously forbidden from entering, and partially from privatization of what were once public utilities. “Forty-four [Public Telecommunications Operator]s have made this shift since 1984, generating almost US$159 billion…” (Barkow, 1997, 80) The competition brought about by deregulation and privatization has forced phone companies to aggressively pursue avenues of revenue generation which were previously closed to them.

Completing this picture are the cable companies, which, along with the telephone companies, are becoming increasingly interested in exploiting their capability of distributing digital communications. According to Baldwin, McVoy and Steinfield, “The cable system is vertically integrated. Large multiple system operators have investments in program networks, often shared with other MSOs. Some cable operators own television and film production subsidiaries as well, completing the vertical integration — that is, retailer (systems operator), distributor (program network), and producer (film or television studio). [original emphasis]” (1996, 261) At the same time, Microsoft invested $1 billion in cable company Comcast. [1] (Reid, 1997, 125)

Although still largely separate, these various media may ultimately merge, a process known as convergence. “For years, most of us have had three different sets of wires and cables entering our homes and offices: one for electricity, one for conversation or computer data, and one for news and entertainment… When all of these signals are digitized, it becomes possible to carry TV pictures on the telephone wires, computer data on the TV cable, or both of them on the electric utility’s meter-checking lines. That’s convergence.” (Cetron, 1997, 19) The mergers and alliances with computer, phone and cable companies in which first tier corporations are engaged are their way of ensuring that they can maintain the control vertical integration gives them over a completely digitally converged system. As Edmond Sanctis, senior vice president of NBC Digital Productions, explains, “The whole idea is to develop media franchises and creative properties, and then float them across any platform that is viable.” (Goldman, 1997, 42)

Early in the new century, the first major event in the convergence of the old media companies typified by Herman and McChesney’s first-tier transnational corporations and the new computer-based media corporations took place when AOL took over Time Warner. Time Warner owned, among other entertainment and information companies, CNN, Time and People magazines, and the Warner Brothers film studio and WB television network. AOL’s assets included its Internet service, which had about 20 million subscribers; Netscape, the second most popular Web browser; and MovieFone, a telephone and online movie-booking service. (Milner, 2000, A1)

The attraction of Time Warner for AOL would seem to be the production companies’ content, which could be cross-promoted to its online customers. (I shall look at this phenomenon in more detail below.) However, AOL had a more immediate purpose for the takeover: “Time Warner fills [AOL’s] need for a high-speed network with its cable business which covers 20 per cent of the United States. AOL no longer has to plead with other cable companies for access to their systems and it has a way to stop its customers from leaving for high-speed providers such as @Home.” (Evans, 2000b, B13) The high-speed pipes were necessary for what some analysts see as the next phase of the Internet: video on demand.

Some suggested that the advantage for Time Warner was that it had “a treasure trove of archived material that it will now be able to remarket to a vastly expanded audience.” (MacDonald, 2000, A1) This can only be partially true, however; while some of its older material may be repackaged for the Internet, it’s hard to see how AOL’s 20 million subscribers could give Time Warner more viewers than its own CNN or WB networks.

A different motivation for Time Warner emerged close to two weeks later, when it announced that it was taking over the British music company EMI. “Of the treasure trove of content within the AOL Time Warner portfolio, music has the biggest business potential because it is already the most pervasive and accepted form of content on the Web today. There are thousands of Web sites that accept orders for music on-line and ship CDs to customers.” (Evans, 2000a, B5) An added bonus is that, because of its cable holdings, AOL Time Warner would be able to remedy the problem of slow download times for music. “If AOL Time Warner can convince [its subscribers] to purchase [its] high-speed cable access to the Internet, it would give the company a large audience for on-line music purchases.” (ibid) The immediate import of the deal was that AOL Time Warner’s purchase of EMI meant that four corporations controlled 90% of the music sold in Canada. (Bertin, 2000, B5)

AOL’s joint venture with Bertelsmann (described above) was expected to be unaffected by its takeover of Time Warner, even though Time Warner and Bertelsmann were competitors. (Milner, 2000, A8) This is another example of the interlocking nature of first tier entertainment corporations.

Some commentators believe that the AOL takeover of Time Warner signaled a fundamental shift in the economics of entertainment. One claimed that the deal “has created what industry watchers are calling the new model for the media industry — both on line and off.” (Cribb, 2000, C1) This seems to me to be highly overstated: the takeover is an extension of the logic of vertical integration to digital communications corporations. I would tend to agree more with Robert Barnard, author and co-founder of d-Code, who said, “So what’s so new? Nothing I’ve seen or read so far tells me that AOL Time Warner is going to do anything differently other than being bigger. The iMac was new, Netscape was new, but this is just bigger.” (Potter, 2000, A20)

The same logic suggests that other new and old media companies will have to combine in order to compete with AOL Time Warner. “Insiders expect the AOL-Time Warner deal will open the floodgates to a number of mergers, not just between media and entertainment companies, but between media, telephone, cable television and entertainment businesses as they move to combine their resources.” (Craig, 2000, B14)

So, the entertainment industry at the beginning of the century was a dizzying complex of large players allying or merging with other large players in order to increase their profitability.



Three RBOCs are attempting to form partnerships in Hollywood. Cable and telephone companies are aligning with software designers and hardware manufacturers. Broadcast networks are ‘in play,’ with Hollywood studios, cable MSOs, and telephone companies all mentioned as prospective buyers. Most of the converging companies are also buying into or creating online services, a business strategy useful in its own right and as a stepping stone to integrated broadband networks. We can expect that in the end the new industry will thoroughly integrate the businesses of television and audio production, multimedia production, program distribution, database creation and distribution, and broadband networks to the home. (Baldwin, McVoy and Steinfield, 1996, 400/401)


This is the marketplace into which individual content producers who wish to distribute their work will be entering. It is sometimes argued that the innovations which large corporations introduce into the market also benefit small players. For instance, if a workable electronic cash system were developed by a major distributor of online information, individuals would also be able to use it for their benefit. However, as we are about to see, size does matter. Large corporations have economies of scale which are not available to individual content creators; furthermore, the corporations may have the power to restructure the Internet in ways which would be of great benefit to them, but at the cost of completely disenfranchising individuals. As I hope to show, the interests of the major entertainment conglomerates are in competition, for the most part, with the interests of individual content providers.

Before we look at this, however, we must ask a basic question which will affect all of the players who hope to make money by putting original content on the World Wide Web.

What is Information Worth?

Before we can determine what information is worth, we must know what information is. Shannon and Weaver suggest that information is something we did not know before. (Fiske, 1982) The repetition of a fact may have value, but it is only information the first time we hear it. To this definition, I would like to add that the information with which this dissertation is primarily concerned, prose fiction, is a deliberate human construction (unlike the myriad information from our environment which constantly floods our senses).

Many commentators have argued that the economics of information is different from traditional economics. (To simplify the argument, we will look at information as a generic product; later in the chapter, we shall see how specific information complicates this theory.) To explore this difference, it is necessary to look at some of the basic tenets of traditional economics. The most fundamental of these is the issue of scarcity:



Scarcity means that we do not and cannot have enough income or wealth to satisfy our every desire. We are not referring to any measurable standard of wants, because when we deal with an individual’s desires, they are always relative to what is available at any moment. Indeed, this concept of relative scarcity in relation to our wants generates the reason for being for the subject we call economics. As long as we cannot get everything we want at a zero price, scarcity will always be with us. (Miller, 1988, 4) [2]



Economics is an attempt to find the most efficient means of distributing these scarce resources.

One important contributor to the condition of scarcity is what can be called the perishability of goods. When you use something, it is gone. When you buy and eat food, you cannot bring that food back. Even goods which seem permanent (for example, buildings), deteriorate over time and must eventually be replaced. While some goods can be renewed (for instance, food can be replaced with a new year’s crop), far more cannot.

Information is not like that: when you use it, it is still there to be used by somebody else. When you have read a book, for instance, even if you lend the book to another person, you can still hold its contents in your memory. Two or more people can watch a recorded video or listen to a taped song, and it will still be there for them (or others) to use at a later date. Digital information is considered by many to be the paradigmatic case: if I download an article from the World Wide Web, I have a copy, but the original is still there for anybody else to access; when I email a copy of that article to a friend, we both have copies; and so on. Unlike any physical good, information is not depleted through use, but can be said to accumulate.

In oral societies, where there was no lasting record of information, the amount of information available to anybody was the total of the memory of every member of the tribe. Since the population of tribes was more or less stable (since the number of births would more or less offset the number of deaths), the amount of information in the world was relatively stable: the amount of information in the memory of every living human being. With the advent of cave paintings and markings on stone and wood, the amount of information in the world increased: now, it was the sum of all living human memory, plus all cave paintings and everything carved into sticks and stones. Artificial storage systems increase the amount of available information in the world. Applying this idea to the present, we can say that the amount of information available in the world equals the sum of the content of all living human memory AND all books and magazines in existence AND all television shows and movies AND all recorded music AND every digital storage system AND other storage systems too numerous to elaborate upon here.

Information accumulates. [3]

This facet of information affects its value. In traditional economics, the price of a good is determined by the interaction between the number of units of the good which are available and the number of people who want the good; that is, between the supply of the good and the demand for it. The relationship between supply and demand can be summed up in two very simple rules. According to the law of demand, “More of a good will be bought the lower its price, other things equal.” or “Less of a good will be bought the higher its price, other things equal.” (ibid, 37) That is to say, when we go shopping, we compare the cost of a good against how much we want it; generally, the higher the cost, the less likely we are to buy it. According to the law of supply, “At higher prices, a larger quantity will generally be supplied than at lower prices, all other things held constant.” or “At lower prices, a smaller quantity will generally be supplied than at higher prices, all other things held constant.” (ibid, 48) That is, companies will tend to produce goods with higher prices in order to make the most profits. The point at which supply equals demand is known as the point of equilibrium. Here, the number of buyers of a good is the same as the number of units of the good which producers make available. This is also the point which determines the price of the good. (ibid, 55)

Because of the way it accumulates, information cannot be considered a scarce commodity, but an abundant one, and, as O’Donnell observes, “The shift from an economics of scarcity to an economics of abundance becomes painfully relevant and threatens to change the landscape dramatically.” (1998, 134) One commonsense result of the interaction of the laws of supply and demand is that as the supply of a good increases relative to the demand for it, the price per unit goes down. The abundance of information is a corollary to Miller’s argument about scarcity: abundance drives the price of information ever closer to zero.

This has been true for a long time, but it has been obscured by the fact that information had to be embodied in physical form. When you buy a book, for example, most of the money you pay goes to the people who produce and distribute the tangible artifact; very little of the price of the book is actually returned to its author. “Typical author royalty rates for hardbacks range from 10% to 15%, or $2.50-$3.75 per copy [for a book with a $25 cover price].” (Eberhard, 1999, unpaginated) A similar argument can be made for pre-recorded music. The actual information content of previous media was usually the least valuable component of the artifact.

Digitization releases information from its reliance on a physical container. It is true that the computer networks through which such disembodied information flows amount to a vast physical system, and that, to be useful to a human being, such information must manifest itself on a very physical computer screen, or frequently be printed on quite physical paper. Unlike a book, however, where you buy the physical artifact along with the information, with digital information you buy a machine (a computer) which is disconnected from any specific content; you choose the information you want from the abundance of it in digital form. This severance of information from its physical container has made much clearer the reality that an abundance of information drives the value of information asymptotically towards zero in the traditional economic system. [4]

As it happens, for most of the history of computer networks, users have shared information with no expectation of monetary reward, so this issue didn’t come up. Before we can properly discuss the current economics of the Internet, it is worth considering how this system developed and thrived without direct economic incentives.

The Gift Economy and Generalized Exchange of Public Goods

For much of its existence, the Internet was a non-commercial place to obtain information. The general impression, which persists among many people, was that “One of the keystones of the Net is free stuff.” (Zgodzinski, 1988, F10) This seems to fly in the face of the common belief that content providers would not create anything for the Internet unless they were financially rewarded for it. Another model had to be applied to the Internet, one known as the “gift economy.”

“The culture of the Internet is marked by a circle-of-gifts mentality, according to which people produce materials and contribute them to a common stock on which they draw themselves.” (O’Donnell, 1998, 96) Gift economies dominated ancient tribal cultures; although gift-giving certainly continues in modern societies (for instance, for weddings, anniversaries and birthdays), it does not have a central place in our economy. To better understand how gift giving may have been the basis of Internet culture, it is necessary to see how theories created to explain the behaviour in tribal cultures might apply to this new social grouping.

To begin, we have to go beyond traditional concepts of selflessness. “Gift giving is often described by sociological theorists as a process of exchange through which individuals rationally pursue their self-interests… According to the exchange theorists…the generosity that we observe in gift giving is only an apparent altruism. In reality…giving to others is motivated by the expectation of some reward….” (Cheal, 1988, 7) Since the reward in a gift economy is, by definition, not economic, we must look elsewhere to understand what motivates people to participate in such exchanges.

Yan claims that “It has been widely recognized that gift giving is one of the most important modes of social exchange in human societies. The obligatory give-and-take maintains, strengthens and creates various social bonds…” (1996, 1) People who participate in gift economies, therefore, do so as a means of building and maintaining relationships to others. Rheingold argues that the ease of distributing digital information helps this process in the online world:



I have to keep my friends in mind and send them pointers instead of throwing my informational discards into the virtual scrap heap. It doesn’t take a great deal of energy to do that, since I sift that information anyway in order to find the knowledge I seek for my own purposes; it takes two keystrokes to delete the information, three to send it to someone else. And with scores of other people who have an eye out for my interests while they explore sectors of the information space that I normally wouldn’t frequent, I find that help I receive far outweighs the energy I expend helping others: a marriage of altruism and self-interest. (1993b, 68)



Implicit in this model of a gift economy is the concept of reciprocity. “Interpersonal dependence is everywhere the result of socially constructed ties between human agents. The contents of those ties are defined by the participants’ reciprocal expectations. It is these reciprocal expectations between persons that make social interaction possible, both in market exchange and in gift exchange.” (Cheal, 1988, 11) When we give a birthday gift to a friend, to take one example, most of us assume that we will receive a comparable gift when our birthday rolls around. When we put up a site on the Web, on the other hand, we do not expect everybody who visits the site to give us the URL to their site in return (in fact, many if not most of those visitors may not even have a site on the Web). Relationships between information providers and computer users are, therefore, for the most part, asymmetrical, although, as Rheingold pointed out above and for reasons we shall consider in further depth below, one could have a reasonable expectation of receiving more information from the Internet than one put on it.

Traditionally, gifts have been physical objects, but there seems to be no reason why the theory cannot be stretched to accommodate digital information, which need not have a physical form. Perhaps more importantly, “Gift transactions almost always occur between individuals who possess the kind of reciprocal interpersonal knowledge that can only be acquired in face-to-face interaction.” (ibid, 174) Face-to-face interaction clearly need not take place in relationships conducted over computer networks, where the participants may never physically meet, or, indeed, have any personal contact whatsoever (as in the case of a user who downloads a Web page). This would seem to suggest that online information exchange does not fall under the gift economy model.

Another traditional feature of gift economics is that, “To be given as a gift an object must be alienable, in the dual sense that the donor has the right to renounce ownership of it and that the recipient has the right to possess it as his or her own property.” (ibid, 10) By this theory, an object is not a gift if the giver can reclaim ownership and retake possession of it. As we have seen, though, information does not work this way: I can give it to others and still keep a copy for myself. Information is not alienable.

The concept of the alienability of a gift has been challenged. “A new approach to the study of the gift gradually emerged in the 1980s, emphasizing the inalienability of objects from their owners.” (Yan, 1996, 10) In this view, although the gift-giver may give up possession of an object, it is imbued with his or her spirit, which can never be given up. This is, perhaps, closer to the spirit of online information exchange, although few people would think of information in this way.

When considering the gift value of information, an important thing to remember is that the first users of computer networked communication were primarily university researchers. (Rheingold, 1993a) There is a culture of sharing information in the academic community; it is important for academics to get published in peer-reviewed journals, for instance, even though there is no financial reward for doing so. To be sure, publishing articles has the potential to help academics advance their careers, particularly those who are attempting to get tenure. However, it is also true that, to the extent that academics see their role as expanding the base of human knowledge, freely flowing information has always been a major part of academic culture, a part which greatly informed the early culture of computer-mediated communications.

This was augmented by one of the first groups to take up CMC after it grew beyond the academy: former hippies. (ibid) Many of these people felt that the new form of communication could help them spread their communitarian beliefs, and were attracted to the ARPANet and Internet because they thought that the free flow of information would further their utopian goals.

These two cultures contributed to the development of the Internet as a place to exchange information at no cost; their beliefs would likely have continued to dominate had these two groups remained the majority of Net users. However, as the Net has expanded, especially with the popularity of the World Wide Web, the number of users who belong to neither group, and who therefore have no allegiance to the belief in the free exchange of information, has grown substantially. Moreover, a computer user can download information from a Web site anonymously, without entering into any sort of relationship with the person who created the site. To be sure, personal relationships can develop between Web designers and the people who visit their pages, but we don’t know how often this occurs, or how strong such ties are.

As it happens, personal relationships are not the only type of relationship which might benefit from gift-giving. Another “type of social reproduction does not necessarily involve intimate relations (although it may do), and is often conducted through forms of communal action. It consists of the reproduction of social, rather than personal, relations [note omitted].” (ibid, 90) This may better explain why information is often freely exchanged on the Internet, even among people who may never know each other; as Sproull and Kiesler point out, “open-access networks favor the free flow of information. Respondents seem to believe that sharing information enhances the overall electronic community and leads to a richer information environment.” (1993, 116)

Unlike personal relationships, which exist between individuals, “…communal relations may involve very large numbers of people. Such ties are inevitably specialized in content and limited in emotional involvement. Communal relations involve actors who share specific interests and whose knowledge about each other may be limited to what is necessary in order to get things done.” (Cheal, 1988, 108) This characterizes much of the information exchange on the Net, which is often described as a collection of communities of interest, particularly in news groups and other areas organized by subject matter. Although the Web is not organized around subject matter, it could be argued that people tend to go online to search for specific information, allowing them to form loose communities around specific pages or clusters of pages on any given topic; in the last chapter, I tried to show that just such a community was being formed around fiction writers.

Rather than think of information exchange on the Internet as an exchange of gifts, which does not seem completely accurate, one writer refers to it as a “generalized exchange,” which “is both more generous and riskier than traditional gift exchange. It is more generous because an individual provides a benefit without the expectation of immediate reciprocation, but this is also a source of risk. There is the temptation to gather valuable information and advice without contributing anything back. If everyone succumbs to this temptation, however, everyone is worse off than they would have been otherwise: no one benefits from the valuable information that others might have. Thus, generalized exchange has the structure of a social dilemma — individually reasonable behavior (gathering but not offering information) leads to collective disaster…” (Kollock, 1999, 222)

Some argue that if rationality would suggest that we take information without giving any in return, there must be other reasons why people contribute to general exchanges such as the Internet. For example, “…the process of providing support and information on the Net is a means of expressing one’s identity, particularly if technical expertise or supportive behaviour is perceived as an integral part of one’s self-identity. Helping others can increase self-esteem, respect from others, and status attainment.” (Wellman and Gulia, 1999, 177) Thus, writers who offer constructive criticism to each other may do so in order to show off their own knowledge of writing, or to make themselves look better to other members of the writing community. While these types of personal motivations undoubtedly play their part, I do not believe it is necessary to resort to them to resolve the dilemma of why people contribute to general exchanges.

With a traditional exchange, one unit is given and another received. If you wanted five units of information from five different people, you would need to exchange information five times (which could require you to have five different units of information to exchange, since the five people might have different needs, although it might sometimes work out that you could offer each the same information). With generalized exchange, you enter your unit of information into the pool and can draw on the information which already exists in it; in a single transaction, you can obtain more units of information than you could with a traditional exchange. For this reason, Kollock’s suggestion that individuals who gather but do not offer information threaten the system is perhaps overstated. A rational person, knowing that if nobody contributes to the general pool of information it will stagnate and die, and realizing that the great benefits of its existence outweigh the minimal effort any single person must make to keep it going, will reasonably decide to contribute. To be sure, some will try to calculate the minimum amount they need to put in in order to get out the rewards. It’s also true that some will not contribute. The question is, how tolerant can a general exchange system be of non-contributors?
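The arithmetic of the two systems can be set out explicitly. A minimal sketch, with hypothetical numbers (five units wanted, one unit offered): bilateral exchange requires one transaction, and potentially one distinct unit to trade, per unit obtained, while generalized exchange requires only a single contribution to the common pool.

```python
# Hypothetical comparison of bilateral (traditional) exchange
# with generalized exchange through a common pool.
units_wanted = 5

# Bilateral exchange: one transaction per unit obtained, and possibly
# a distinct unit to offer each of the five partners.
bilateral_transactions = units_wanted
bilateral_units_offered = units_wanted

# Generalized exchange: contribute once to the pool, then draw freely from it.
pool_contributions = 1
pool_units_obtained = units_wanted

# Return on contribution: units obtained per unit offered.
bilateral_return = units_wanted / bilateral_units_offered   # 1.0
pool_return = pool_units_obtained / pool_contributions      # 5.0
```

The fivefold return on a single contribution is what makes it rational, rather than merely altruistic, to keep the pool alive.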

Suppose ten people read a collaborative short story on the Web and five are moved to contribute new material. In exchange for the one unit each contributed, each has received four new units of the story from the others. This may well be enough for them to feel that they have received fair value, even though others who read the story did not contribute. (In fact, the way the Web works, most people other than the creators of a page will not know how many people have accessed it, so they will likely not be aware of how many people are not contributing.) How tolerant different communities on the Web are of non-participant users of their information (referred to online as “lurkers”) is an interesting question which requires more study.

Generalized exchange leads to the creation of public goods, the term “good” referring here not to commodities but to benefits. Kollock claimed that public goods were easier to create and maintain on digital communication networks than they were in the physical world. For one thing, “To the extent costs are lowered, the more likely it is that individuals will take part in the collective action.” (1999, 224) Because it is easier for an individual to contribute information through an online generalized exchange, that individual is less likely to merely lurk. For another thing, “The fact that many of the public goods produced on the Internet consist of digital information means that the goods are purely nonrival — one person’s use of the information in no way diminishes what is available for someone else.” (ibid, 225) Public goods in the physical world, such as the shared common lands which were spread throughout Europe until the 16th century, require much greater coordination because the resource can often be used up; the incentive for individuals to get as much as they can out of such public goods while giving back as little as possible is, therefore, greater than with digital public goods. Finally, “while the provision of many public goods requires the actions of groups…the nature of a digital network turns even a single individual’s contribution of information or advice into a public good.” (ibid) The common lands often required farmers to coordinate their efforts to maximize their benefit from the land; with digital information, people add to the store of ideas with a minimum of coordination.

If there is a threat to the public good of the general exchange of information on the World Wide Web, it arises out of the fact that it has become a site for a tremendous amount of commercial activity. A large number of corporate Web sites have been created for the purpose of promoting products, which has inspired individuals to seek financial remuneration for their Web sites, potentially lessening the amount of work available in the common pool. Advertising is virtually ubiquitous. “Traffic suggests that half of all pages sent over the Web every day contain an ad.” (Wallich, 1999, 37) This creates a tension between two basically different systems: “It is the extended reproduction of…relationships that lies at the heart of a gift economy, just as it is the extended reproduction of financial capital which lies at the heart of a market economy. Between these two principles there is a fundamental opposition, as a result of which any attempt to combine them is likely to result in strain and conflict [note omitted].” (Cheal, 1988, 40)

Commercialization does not mean an end to material being freely shared on the Internet; some people will continue to offer information at no financial cost. “…[D]espite the enormous changes associated with capitalist modernization, gift transactions continue to have a vital importance in social life.” (ibid, 19) Indeed, as we saw in Chapter Two, almost all of the writers surveyed for this dissertation put their information on the Web without expectation of making money (although many harboured vague hopes of doing so). However, what commercialization does is marginalize other forms of exchange. As commercial sites proliferate on the Web, it becomes harder and harder to find non-commercial sites, which become a smaller and smaller percentage of the whole.

Commercialization does not affect different uses of digital communication networks in the same way: email, for example, remains dominated by the free exchange of information between individuals. Still, even though some form of generalized exchange may continue to exist on the Internet, many people will place pages on the World Wide Web in the hope of making (or visit them with the expectation of spending) money; in addition, as we have seen, major economic forces are at work to exploit the medium for their profit. So, we must return to the search for an economic model which would make this work. As it happens, one which takes into account the vanishingly small value of generic information has been developed for the Net: it is known as micropayments.

Micropayments

As commerce slowly began to develop on the Internet in the 1990s, the most common way of paying for goods and services was with a credit card. There was a practical lower limit to what could be bought, however: $10. This was because, “It costs large national acquirers somewhere in the neighborhood of 19 to 20 cents to process a card transaction, according to analysts’ estimates. Thus, using a credit card to buy something on the ’Net for a nickel will be a money loser.” (Patricia Murphy, 1998, 50) Below roughly $10, the cost of processing a credit card purchase ate up whatever a merchant could make on the sale. As a result, there was no mechanism by which consumers could buy information at its true value. “Currently, most minor services [on the Internet] are provided free of charge because it is impossible to get Web consumers to pay for them.” (Chartier, 1999, 28)
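The break-even arithmetic behind this floor can be made explicit. In the sketch below, the roughly 20-cent processing cost comes from the analysts' estimate quoted above; the merchant's gross margin rate is a hypothetical assumption chosen for illustration.

```python
# Why a nickel sale on a credit card loses money.
PROCESSING_COST = 0.20   # per-transaction cost, per the analysts' estimate above
MARGIN_RATE = 0.50       # hypothetical: half of each sale is gross margin

def net_on_sale(price):
    """Merchant's gross margin on a sale, less the card processing cost."""
    return price * MARGIN_RATE - PROCESSING_COST

nickel_result = net_on_sale(0.05)                  # a 5-cent item: a clear loss
break_even_price = PROCESSING_COST / MARGIN_RATE   # $0.40 under these assumptions
```

Under these assumed numbers a merchant loses money on anything priced below 40 cents; with the thinner margins typical of real retail, the viable floor climbs toward the $10 figure cited above.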

One method of dealing with this problem is to aggregate content. Collect enough information in one place, and you can charge more than the minimum $10 for it. This is common enough in the world of newspapers, magazines and articles collected into books. The problem with aggregating content on the Net (as, indeed, it is a problem with newspapers, magazines or articles collected into books) is that the consumer must pay for information he or she does not necessarily want. Online, where space is not nearly as costly as in print, the temptation for the aggregator is to increase the amount of information available at his or her site, allowing him or her to charge more; but, for most consumers, this means paying increasing amounts of money for information with a decreasing amount of overall usefulness. Moreover, it seems to go against one of the advantages of digital information: the ability of a reader to choose information. There is no reason inherent in the technology for a consumer to buy any information other than that which he or she specifically wants.

Digital cash seemed to be a way out of this dilemma.

In the mid-1990s, several schemes to create an electronic version of money were developed; Mondex, DigiCash, Cybercash and First Virtual, among other companies, vied to offer a form of currency which could be spent over digital communications networks. A typical digital cash set-up would go something like this: merchants with wares to sell and a presence on the then-emerging World Wide Web (or who were willing to set up on the WWW) would sign up with one of the digital cash companies. They would use software from the company to allow them to accept digital cash over the Internet. Consumers would sign up with the company and download software which would allow them to connect to the merchants who had previously made arrangements with the company. The consumer would then have to transfer money from a bank or (most frequently) through a credit card to his or her online account. (Some systems also allowed consumers to transfer this money to cards which could be used to purchase goods from vendors in the real world.) Only after all of these steps were taken could online transactions take place. (Godin, 1995)

One of the major advantages of digital cash is that it automates the processing of orders, which no longer requires the shuffling of paper. “The cost of processing credit card transactions is high because the merchant has to ask the credit card issuer to verify the card holder’s ability to pay for each transaction. Micropayment schemes eliminate this costly step. The micropayment system broker – typically a bank – usually simply verifies that the encrypted serial number on an electronic token or purchase order is valid.” (Patch and Smalley, 1998, 72) This means that, “The system can handle financial transactions as little as a few cents…” (Harrison, 1999, 16) Some of the early experimenters with digital cash were able to do just that: “MilliCent supports charges as low as 1/10th of a cent; IBM Micro Payment: 1 cent; BT Array: 10 pence; and CyberCoin: 25 cents.” (ibid)

Had this system worked, it would have allowed consumers to pay the market value for small amounts of information (a single newspaper article, for instance, or a short story). On the one hand, it would allow them to be more specific about what information they consumed, since what they bought need no longer be aggregated with information which they did not want. On the other hand, micropayments would encourage consumers to explore the possibility of purchasing information they didn’t have a prior interest in since “there is little or no [financial] risk” involved when paying such small amounts per article or story. (Balfour, 1998, 23)

Moreover, micropayments enabled through digital cash systems would have given even the smallest producer (such as a fiction writer currently on the Web) a mechanism by which she or he could make some money. Patch and Smalley offer one example: “The Guitar Heroes Web site, based in St. Paul, Minn., offered songs that customers could play along with for 25 cents in the MilliCent trial and made $75 in test money in one month. Each day of the month Magic had 120 downloads at $2 to $5 each, earning about $450 in test money.” (1998, 72) The amount of money per transaction may not be large, but “millions of transactions worth even a few cents represent a very impressive flow of income.” (Mosley-Matchett, 1997, 10) Even a writer whose page only attracted hundreds or thousands of readers willing to pay a fraction of a cent to download a story had the potential to make more money than if she or he tried to sell it to a magazine.
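The writer's side of this arithmetic is easy to sketch. The numbers below are hypothetical illustrations, not figures from the trials described above: a readership in the thousands paying a fraction of a cent per download, compared against an assumed flat fee from a small magazine.

```python
# Hypothetical earnings from micropayment downloads of a single story.
PRICE_PER_DOWNLOAD = 0.005   # half a cent per download (assumed)
READERS = 10_000             # assumed readership

micropayment_income = READERS * PRICE_PER_DOWNLOAD   # $50

# An assumed one-time fee for selling the same story to a small magazine.
MAGAZINE_FEE = 25.00

comes_out_ahead = micropayment_income > MAGAZINE_FEE
```

Whether the writer actually comes out ahead depends entirely on the assumed readership and price; the point is only that, with no minimum transaction size, even tiny per-reader payments aggregate into real income.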

Unfortunately, there were several problems with digital cash. For one thing, setting up an account to sell products cost money: “Subscriber enrollment [in CommerceNet] costs $400 a year for U.S. companies and U.S. subsidiaries, and $800 a year for non-U.S. organizations. There is a one-time $250 initiation fee.” (Godin, 1995, 214) Individuals with a small number of works to sell might not make that much money in micropayments in six months or more. This likely discouraged many people who might have been interested in participating in this form of commerce.

Another problem with proposed digital cash systems was that they required a lot of effort on the part of consumers to set up. “Typically, consumers must open an account with a micropayment system before using it and then download ‘wallet’ software to use with their browsers. Depending on the system, customers can either run a tab that is paid with a credit card when a set dollar amount is reached or they can buy ‘funds’ to spend later.” (Machlis, 1998, 39) In order to minimize fraud, one system, First Virtual, required that users respond positively to an email message asking if they had authorized every single purchase. (Godin, 1995, 207) Consumers already comfortable with using credit cards to make purchases in the real world were more likely to transfer this purchasing behaviour to the online world, especially given that most digital cash schemes required a credit card to set up an account in the first place.

Finally, there was the chicken and egg problem which plagues many new technologies: “…consumers didn’t want to download unproved e-commerce software without an attractive range of things they could buy. But most Web firms weren’t willing to invest in digital-cash servers and parcel up their sites into easily saleable chunks without a guaranteed audience of willing buyers.” (Wallich, 1999, 37) The fact that there were many competing firms exacerbated this problem, since consumers couldn’t be guaranteed that the digital cash account they set up today would allow them to buy the goods they wanted, or even be in service tomorrow.

The results of these problems were predictable: the first efforts at creating a digital cash system failed. “First Virtual, which billed itself as the first Internet bank, has abandoned the business altogether; DigiCash…is in Chapter 11 reorganization, and its only telephone number leads to a message from the company’s ‘interim president’ saying he no longer listens to messages left there. Ostensible market leader CyberCash has stopped offering ‘cybercoin’ transactions in its U.S. software. In the U.S., at least, all the banks that once supported micropayments have taken their resources elsewhere.” (ibid, 37)

The lack of acceptance of the first wave of digital cash illustrates an important principle of technological adoption: a new technology will not be able to compete with existing technologies unless it allows users to do something they could not do before, or to do something they already do more easily. It is true that digital cash allows producers to divide information into increasingly smaller units, giving consumers greater control over what they can buy, usually at a smaller price. However, this advantage was outweighed by the fact that credit cards were much easier for consumers to use than digital cash, which required too steep a learning curve. As Amy Larsen pointed out, “ease-of-use may be the most important factor in determining if a new online payment method gains acceptance.” (1999, 46)

Too many producers recognize the advantages of micropayments for the idea to die, however; although they were “long ago cast by the side of the Infobahn as unrealistic technology, [they] are on the comeback trail.” (Kerstetter, 1999, N12) A second wave of companies is currently creating new digital cash schemes. As more people become comfortable with the idea of spending money over digital communications networks, they may be more willing to accept the idea of digital cash. Perhaps with this in mind, “Datamonitor, a London-based research firm, recently estimated that by 2002, micropayments could account for 12% of the total projected U.S. online purchases of $12.5 billion.” (Machlis, 1998, 39) There is also the possibility that if more people do more of their banking online, banks will develop their own workable micropayment schemes.

In the meantime, companies and individuals continue to put information on the Web and continue to dream of making money from it. Since the option of digital cash hasn’t been open to most of them, they have had to fall back on more traditional methods of valuing information. The first step is to create a popular conception about one’s product in the mind of potential consumers. That is, to create a brand.

Branding

In traditional economic theory, we decide what commodities to buy based on an assessment of whether or not they will satisfy a given need. Thus, if we are hungry, we are willing to pay a lot for a banana, but are not likely to be willing to pay much for a Mont Blanc pen. The reason we can make such judgments is that the information about the product is separate from the consumption of the product. We can know what a banana and a pen are without having to purchase them. When information is the product, however, it is difficult to divorce assessment from use. To know if a specific newspaper article has information we need, we must read the article; to know if a specific film will give us pleasure, we must watch it.

In the computer software realm, one method of dealing with this problem is known as shareware. In this model, creators give away their product for nothing, asking the people who make copies of it to pay a certain amount (sometimes set, other times whatever the user thinks is fair) if they have found it useful. Unfortunately, this is, at best, a haphazard way of making a living; most people exposed to shareware do not seem to pay for it. Some commentators have suggested that this is because of a basic flaw in human nature. Perhaps. I would suggest, however, that it points to a fundamental paradox in the nature of information as a commodity: one has to be exposed to information to be able to determine if it has value; but, that very exposure to information lessens its value.

Some online payment systems take this into account. First Virtual, for instance, was considering allowing its users to look at information before deciding whether or not to pay for it, in a more formalized version of the shareware concept. However, “To ensure that customers do not abuse the privilege of trying before they buy, First Virtual may limit the number of times a consumer may evaluate information products without paying for them.” (Godin, 1995, 132) This system could not guarantee that information would never be used without payment (especially if users signed on to a variety of digital cash systems, each allowing free access to a specified amount of information), but it could stop some abuses.
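The limit First Virtual contemplated amounts to a simple per-user counter. A minimal sketch, with an invented three-evaluation cap (First Virtual's actual limit, if any, was never specified):

```python
from collections import defaultdict

FREE_EVALUATIONS = 3          # hypothetical cap on try-before-you-buy looks

evaluations = defaultdict(int)

def may_evaluate(user):
    """Allow a free look only while the user is under the cap."""
    if evaluations[user] < FREE_EVALUATIONS:
        evaluations[user] += 1
        return True
    return False

results = [may_evaluate("alice") for _ in range(5)]
print(results)   # [True, True, True, False, False]
```

The weakness noted above is visible in the sketch: the counter is keyed to an identity, so a user with accounts on several payment systems simply gets several counters.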

One way out of this paradox is branding. A product brand is “a set of expectations and associations that a given community has about a product, and attaching a brand to one’s content stream is a way of enabling satisfied consumers to get ‘more like that…'” (Agre, 1998, 92) You read The New York Times in the morning because, being familiar with the brand, you have a reasonable expectation that it will give you news which has value to you. In a similar fashion, you go to a film starring a particular actor because you have seen that actor’s films before and have a reasonable expectation that you will enjoy this one. (Film aficionados may go to a film because of a specific director, or even a specific special effects house.) Of course, that particular issue of the newspaper or that particular film may not satisfy your needs; however, sooner or later a consumer’s loyalty to a brand will fade away if the brand does not satisfy the person on a regular basis, so it is necessary for a producer to maintain product consistency.

Branding is considered an important part of the marketing of an entertainment franchise by entertainment conglomerates. Branding is also a solution to the larger problem of determining the value of information in a time of abundance, since brands create a form of scarcity. “A successful branding program creates in the consumer’s mind a perception of singularity, that there is no other product on the market like ‘The Brand.'” (Diekmeyer, 1998, E2) There are millions of fiction books in the world. However, fewer than a dozen of them were written by Thomas Pynchon. Even the works of such a prolific writer as Isaac Asimov, which number in the hundreds, make up a very small fraction of all the books available. Readers will seek out these writers’ books, as opposed to those of less well known writers, because readers believe their books have qualities which the books of no other writers have.

There are two types of brand which have very different effects and consequences. The first is direct branding, where a consumer keeps buying the same product from a producer because he or she expects it will continue to fill his or her needs. Thus, a person might pick up a Globe and Mail every day because he or she believes, based on past experience reading it, that it will deliver a certain level of international or business news. (On the other hand, a person might pick up the Toronto Sun in order to obtain high quality sports reports — branding occurs at all levels of perceived quality.)
The other type of branding is associated branding. This occurs when a company attempts to associate its name with a product for which it is not necessarily known. When the Globe and Mail makes an information database available online, for instance, it can use its reputation as a source of print information to attract customers for its online venture. This type of branding is most commonly associated with film: when Disney releases a new cartoon, for example, a wide variety of products associated with the film also enter the marketplace. These may include: a soundtrack CD; videogames based on the film; action figures of characters in the film; TV specials on the making of the film; a book based on the film (as well as other books loosely based on the characters or situations in the film); tie-ins with restaurants or food and beverage manufacturers; mugs, bedspreads, keychains and other products which can carry images from the film; and so on.

Direct branding can be used by individuals; when this happens, it is sometimes called personal branding. Associated branding, on the other hand, requires large expenditures on marketing, since it is marketing that makes the brand attractive to creators of other products and makes them want to associate themselves with it; it is, therefore, only open to the largest entertainment corporations.

A small number of personal brands which originated on the Internet have migrated beyond its borders to become known in the larger culture. Matt Drudge’s The Drudge Report, for instance, has developed a reputation for delivering information which other news sources (including most traditional sources) will not report. This led him to a cross-over career in traditional media: as the anchor of a weekly television show on Fox News Channel and the anchor of a two-hour weekly radio show on the ABC network. (“ABC signs Drudge,” 1999, D14)

However, a much larger number of associated brands have migrated from the real world to the Net. Most films, television shows and books released by major entertainment corporations now boast Web addresses. Many corporations consider a Web presence an important part of their larger promotional efforts. According to Lynda Keeler, vice president marketing of Columbia Tristar Interactive/Sony Pictures Entertainment, “First, SPE has properties, Wheel [of Fortune] and Jeopardy, that are big mass-market brands. They’re proven audience pleasers with an inherent interactive game play to them… It’s a natural for us to look for other ways to extend the brand… A percentage of the show’s fan base is online; plus other people online are looking for a destination for fun, which we hope to deliver.” (Cury, 1997b, 50) Paramount Digital Entertainment President David Wertheimer says much the same thing: “Our focus has been on leveraging the brands in the online world that Paramount has [created in the real world] and building online places for fans to congregate, and really look at how to build a business around networks and multimedia entertainment.” (Goldman Rohm, 1997, 116)

For the most part, major entertainment producers have been content to put up Web pages with little original content; in fact, some are nothing more than blatant advertisements for the real world product. While this may satisfy fans of the original work, it does little to attract others. For this reason, developers of online material are starting to develop content associated with real world works which is, itself, original, offering an experience which cannot be obtained anywhere else. For example, “…HBO online will again venture into uncharted territory with a virtual reality companion piece to the upcoming HBO series From the Earth to the Moon, about NASA’s Apollo space program. The TV component, produced by Tom Hanks and airing in early ’98, will consist of 13 one-hour episodes. Webheads inspired by the mini-series can get a pseudo-lunar experience of their own by tuning into the site and taking a VR trip to the moon and (with luck) back.” (Ivry, 1997, 28) The Web site for the television show 3rd Rock from the Sun features: trivia contests; chat rooms with stars and producers; behind-the-scenes video and audio; episode scripts, including material that didn’t air; and humourous features. (Goldman, 1997, 42)

Associated branding has more effect on consumer choice than direct branding. With direct branding, there is a single product around which to build a reputation. With associated branding, any of a hundred products may gain an individual’s attention. You might see the film first. However, you may buy the book first. You might hear the single from the soundtrack on the radio first. You might see an image on a t-shirt. One real world example should suffice: “Time-Warner produced the film Space Jam using a Warner Brothers cartoon character, of course. The film was plugged shamelessly in Time, Inc. magazines — Sports Illustrated for Kids even ran a 64-page special issue devoted to it. The soundtrack was released on Warner Music, and included a roster of Warner Music artists in its track-list. That’s film, print and music…oh, of course: ads for the movie aired during basketball games on Time-Warner controlled TBS Sports TV and on CNN, along with ‘The Making of Space Jam‘ specials.” (Spiegelman, 1998, 16) Any individual cultural artifact may lead you into the entire chain. Moreover, every additional product which carries a brand reinforces knowledge of all of the other products in the chain in the mind of the potential consumer.

Some commentators are very wary of this process. “In spite of the utopian promises made by the promoters of the Net, I didn’t notice traditional media powers getting any weaker,” writes John Seabrook.



On the contrary: Instead of distributing power to the edges of society, the Net offered the media megamachine a new way of consolidating its hold. The Net would not develop into a revolutionary new medium that replaced existing media — the people who used that kind of rhetoric (like me, in my newbie days) were like fog machines. They obscured the truth. What was more likely to happen, it now seemed to me, was that the few advantages and innovations that the Net offered would be seized by the megamachine and used to further entrench itself into our daily lives. And with the growth of corporate Web sites, it appeared that one of those innovations was a new way of marketing off-line goods and content. Net dot marketing got into your head in the same way that, say, MTV got into your head — it worked the brand and the desire to have it right into your cortex, like the mink oil I was forever massaging into my leather boots, to soften them. (1997, 241)



Or, as another person put it, even on the Web, “at the end of the day the big brands win, and the little brands lose.” (Kline, 1997, 65)

Others believe that the Internet shifts power away from large brands. Esther Dyson, for instance, states, “I’m not saying everybody has the power to become Disney, but people have the power to suck a little power away from them. It does create a flatter landscape.” (Nee, 1998, 118) It is true that Matt Drudge takes a little attention away from the major mainstream news outlets. However, it is too early to know whether this can be duplicated by thousands of other Web sites run by individuals or, more specifically, if this type of success will come to writers of fiction.

Another argument is that the proliferation of brands calls their effectiveness into question. According to Silicon Valley marketing specialist Regis McKenna, “‘Other’ owns the leading market share of personal computers, cookies, tires, jeans, beer and fast foods. Since 1984 American television viewers have been watching ‘other’ more often than the three major networks. Brand names do not hold the lock on consumers they once held.” (Davidow and Malone, 1992, 221) While there may be some validity to this, the truth is that most of the ‘others’ McKenna is referring to have their own brand (i.e., MTV and CNN, competitors to the three major networks). It seems a little premature, therefore, to say that “The more [brands] strive to please the masses, the more we see of the same — everywhere, all the time — the less appealing our brands become.” (Abramson, 1999, 58)

In any case, while branding is an important step in being able to differentiate between different kinds of information delivered by computer mediated communications systems, it isn’t, by itself, an economic model: examples abound of real world information producers who have not been able to profit from their brand in the online world. For instance, “Although the success of Wired magazine is truly remarkable, many of the subsequent spinoffs of Wired Ventures have not enjoyed such good fortune. Wired Digital, the branch of Wired Ventures Inc. online, has reported huge financial losses. Despite the fact that the magazine’s online equivalent, HotWired, has been critically acclaimed and is one of the busiest Web sites on the Internet, profits have remained elusive.” (Stewart Millar, 1998, 82)

To understand why existing corporate information brands have had little success on the Web, we need to look at the effectiveness of traditional economic models in the online environment.

Traditional forms of income

“Until now, the biggest lie on the Internet hasn’t been about alien abductions. It’s been: ‘Don’t worry, the Web will make money.'” (“Pandesic advertisement,” 1998, 157)

One approach to generating income from information online would be to transfer traditional models from other media. These include advertising and subscriptions. As the Pandesic ad quoted above suggests, these have largely failed. (Keep in mind, though, that Pandesic’s business is supplying shovels for those under the spell of the new Gold Rush, so it’s in the company’s interest to hold out the possibility that making money is now possible.) This section will look at these models.

The Advertising Model

Advertising on the World Wide Web has shown a large growth curve: “The Internet Advertising Bureau reported that Net ad revenues totaled $906 million in 1997, up 240 percent from the previous year.” (Danko, 1998, 49) Online ad revenue has the potential to be quite substantial: “The research firms of Jupiter Communications and Forrester Research have both projected that ad spending on the Web will approach US$5 billion by 2000. This bonanza will make up more than 90 percent of total revenues for content-providing Web sites…” (Madsen, 1996, 206) However, to put this in perspective, “Web ad spending in the second quarter of 1996 was about $43 million, up 347% over the fourth quarter of 1995, but a small fraction of the $60 billion spent on traditional advertising.” (Voight, 1996, 196)

We are all familiar with advertising, from the pages of newspapers and magazines to the periodic interruptions of television shows. Advertising is adaptable, taking a different form in each medium: on the Web, advertising to date most often means banners. Banner advertisements usually appear at the top of a Web page (the first thing a user sees), often loading before the page’s content (the first thing a user can see).

Some have suggested that the banner is not an effective advertising space: “It’s a skimpy piece of acreage to work within — smaller than a cereal box top, and limited graphically by the need for quick downloads. Sized at 480-by-55 pixels or smaller, the typical ad banner occupies less than 10 percent of a 640-by-480 screen display. As the computer standard has moved to a finer-grained 832-by-624 screen, the banner looks even smaller.” (Madsen, 1996, 208) This is compounded by the fact that some sites fill their first screen with advertising banners of different sizes, which means they compete visually for the user’s attention, but, perhaps more damaging, the user can simply scroll down (or link) to the content and ignore all of the advertising.

Another drawback of banner ads is that they can be turned off. Web browsers allow computer users to suspend their ability to see graphics, a necessary function for those whose connection to the Internet is slow since it allows them to maximize the amount of information they can access while minimizing their connection costs. When the graphics function is disabled, all graphics, including advertisements, appear on the user’s screen as a generic graphic. “There’s some evidence that more experienced users turn browser graphics off, a trend that might make advertisers uncomfortable and cheer retailers of high-speed modems. Only 16 per cent of first-year subscribers who use the Internet daily say they frequently turn graphics off, but that rises to 32 per cent for people who have at least three years of experience.” (Solomon, 1998, 13)

Moreover, for those who do want to access graphics on the Web, but not ads, programs have been created which remove ads from sites before Web pages are downloaded. Ad blocking software goes by names like WebWasher, InterMute and AtGuard. “Many online advertisers dismiss the trend toward ad-blocking, noting that when faster connections are available, consumers will not be so annoyed about being forced to download cumbersome advertisement files. ‘Consumers understand the basic proposition that all the free things are enabled by advertising,’ says the chairman of the Internet Advertising Bureau.” (“And viewers fight back against Web ad overload,” 1999, unpaginated) This may have been true of older media; however, given that a substantial amount of the content on the Web has not been enabled by advertising (that is, it is available for free), and that many users of computer networks are members of online communities which still subscribe to the ethos that information should be free, it seems, at best, to be a dubious assertion.

In any case, the basis of advertising is getting as large a number of people as possible to take in your information, on the assumption that a fraction of them will be motivated by the ad to buy your product. Despite the Web’s reputation as having the capability of personalizing advertising messages, “the advertisers with the throw weight to unleash the online economy are eager for audience consolidation. Old-fashioned economies of scale means delivering messages more cheaply and efficiently.” (Beato, 1997, 193) As a result, “Since more users equals more ad dollars, media outlets — old and new — strive to reach more eyeballs.” (Behar, 1998, 48) Some numbers may help put the Web in perspective in this regard: CNN has 69 million weekly viewers; ESPN, 53.5; TV Guide has 39.2 million weekly readers; Time, 25.2; Newsweek, 22. (ibid) While these numbers may be a little misleading (for one thing, different media have different production costs, meaning they can become profitable with far different levels of auditorship), they are useful as a general basis of comparison: “The 120,000 hits a day that even popular Web sites such as Salon brag about are eclipsed at least twentyfold in typical TV channel-surfing downtime.” (Eisenberg, 1997, 68) As we saw in Chapter Two, writers on the Web are happy to measure the hits their pages get in the low thousands. This helps explain not only why the level of advertising on the Web is not nearly as great as that of established media, but why individual pages cannot command the amount of ad revenue that works in other media can.

Furthermore, the method used to measure how many people see ads on Web sites, page hits, is in serious dispute. “The number of hits is not the same as the number of visitors or even visits to a given page. It is a measure of the number of files loaded for any given page or site and is thus less than desirable as a measure of impressions or actions.” (Whittle, 1997, 300) We have already seen some of the problems with using hits as a measurement of individual Web page viewers. Another is that if somebody keeps going back to a page, repeatedly downloading it (as can happen with the home page of a Web site with a lot of pages), each visit counts as a separate set of hits and is treated as the experience of a separate viewer (did 1,000 different people access your page, or did one person access it 1,000 times?). This leads to overestimations of how many people saw an ad.
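The gap between hits and visitors can be made concrete with a small sketch. The log entries and numbers here are hypothetical; the point is only that counting raw requests conflates one repeat visitor with many distinct ones:

```python
# Hypothetical server log: (visitor_address, page_requested).
# One person reloading a home page generates many entries.
log = [("10.0.0.1", "/index.html")] * 1000 + [("10.0.0.2", "/index.html")]

hits = len(log)                            # raw request count
visitors = len({ip for ip, page in log})   # distinct addresses

print(hits)      # 1001 hits...
print(visitors)  # ...from only 2 people
```

Even this sketch flatters the measurement: real logs of the period rarely recorded anything as stable as a per-person address, so the "visitors" figure itself was an estimate.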

On the other hand, Web browsers have the ability to store Web pages in what is known as a cache. When a user clicks on a link, her or his computer sends commands to the server on which the page is stored to retrieve its contents. To speed this process up, those contents can then be stored on the user’s computer. The next time she or he clicks on a link to the page, instead of calling its contents up from a distant computer, it simply calls the contents up from its own memory. Moreover, a single ad which appears on many different pages need only be cached once. In either situation, since the user no longer needs to request the ad from the server on which it is stored, this does not count as a “hit” (unless the cache is emptied by the user, or disabled before browsing). This can lead to underestimating how often some people have seen an ad. One proposed solution to the caching problem is to embed tags in advertisements which would count the number of people who view an ad, regardless of whether it resides on its original server or in their computer’s memory. (“Committee adopts standard for counting Web ad viewers…”, 1999)
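A toy model of a browser cache shows why cached views escape the server's count. The classes and figures here are illustrative, not any real browser's behaviour:

```python
class Server:
    """Stand-in for the ad server; counts only requests that reach it."""
    def __init__(self):
        self.hits = 0
    def fetch(self, url):
        self.hits += 1
        return f"contents of {url}"

class CachingBrowser:
    """Stand-in for a browser that keeps fetched files in local memory."""
    def __init__(self, server):
        self.server = server
        self.cache = {}
    def view(self, url):
        if url not in self.cache:            # only a cache miss reaches the server
            self.cache[url] = self.server.fetch(url)
        return self.cache[url]

server = Server()
browser = CachingBrowser(server)
for _ in range(10):                          # the user views the same ad ten times
    browser.view("/ads/banner.gif")

print(server.hits)  # 1 -- nine of the ten views are invisible to the server
```

An embedded counting tag, as proposed above, would move the tally from the server's request log into the ad itself, so that a view from cache could still be reported.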

This is an important issue. “‘Real [advertising] budgets don’t get spent until you have some kind of accountability,’ says the president of the Advertising Research Foundation. ‘That’s where audience measurement is critical.'” (ibid) Without an accurate accounting of how many people see an advertisement, it is impossible to know how much to charge. Yet, as late as 1998, Media Metrix and Relevant Knowledge, two companies that provide advertisers with statistics on how people use the World Wide Web, could not agree on what the most viewed Web sites were. “Rich Lefurgy, head of an industry trade group, says: ‘It was very hard to understand why 10 of the top 25 sites rated by Media Metrix weren’t on Relevant Knowledge’s Top 25 list.'” (“Merger of Web measurement firms will smooth out differences,” 1998, unpaginated) This example underscores the unreliability of Web usage statistics.

There is one further wrinkle in attempts at measuring Web page viewers by the number of hits pages get: some pages are not accessed by human beings. These hits are “generated by ‘spiders’ and ‘crawlers’ — the index services’ software engines that travel the Net cataloguing new sites.” (Bayers, 1996, 126) It’s hard to know how many hits are attributable to non-human sources (which will increasingly include bots — personal programs which travel around Web sites looking for specific goods or services desired by their human owners), but to the extent that this happens, it leads to an overestimation of the human viewership of Web pages.
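Separating human from non-human traffic was, and is, typically attempted by inspecting the User-Agent string each client reports, though this is only a heuristic, since the string is self-declared and a spider can misreport it. A minimal sketch with made-up log entries:

```python
# Hypothetical log of (user_agent, page) pairs.
log = [
    ("Mozilla/4.0", "/story.html"),
    ("Scooter/2.0 (AltaVista crawler)", "/story.html"),
    ("Mozilla/4.0", "/story.html"),
    ("ArchitextSpider", "/story.html"),
]

BOT_MARKERS = ("crawler", "spider", "bot")

def looks_human(user_agent):
    """Heuristic: treat any agent whose name admits to being automated as a bot."""
    ua = user_agent.lower()
    return not any(marker in ua for marker in BOT_MARKERS)

human_hits = sum(1 for ua, _ in log if looks_human(ua))
print(human_hits)   # 2 of the 4 hits survive the filter
```

To the extent that a site does not apply such a filter (or that automated agents do not identify themselves), the overestimation described above goes uncorrected.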

The Web allows for other forms of audience measurement. Click-through, the number of times people actually activate the link in a banner ad, is one alternative form. Advertisers argue that it isn’t cost-effective for them to take out an ad which “may cost as much as $10,000 per month, [when] only 3 to 13 out of every hundred people who notice the banner actually open it.” (Voight, 1996, 196) Unlike print, where an advertiser cannot know how many people act upon an ad, click-through is supposed to give an immediate indication of how many people are interested in a product. However, click-through may mislead advertisers as to the utility of their ads inasmuch as it measures the attractiveness of the ad, and not the interest it has generated in the computer user for the product. “I might see some cool cyber-ad with a zooming airplane and click. But it may be for an airline in Arizona that I’ll never use, and my clicking is insignificant data for the advertisers: news they can’t use.” (Israelson, 1998, C2) Furthermore, the goal of traditional advertising is to impress the brand name on the auditor in the hope that, if he or she is in the market for a product in that general category, he or she will remember the advertised product. This effect is not measured by click-through.

Perhaps more importantly, paying for the number of people who click on advertisements, rather than simply viewing them, changes the nature of the relationship between advertiser and creator as it has existed in other media, to the detriment of some Web content providers: “The argument for [charging for traditional impressions rather than click-throughs] is straightforward: whose fault is it if an ad doesn’t pull people in? But smaller Web publishers can often be strong-armed, if only because they never counted on making any money in the first place. They often agree to by-the-click ad contracts — generally with a bigger site, which takes the traffic and then turns around and charges someone else for the impressions.” (Anuff, 1998, 94) Since the number of people who click through an ad is substantially smaller than those who see it, Web page designers who can only command advertising for click-through are at a serious economic disadvantage.
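The economic disadvantage of by-the-click contracts follows directly from the arithmetic. Using the low end of the 3-to-13-per-hundred click-through figure quoted above, and rates that are purely hypothetical, revenue per thousand viewers under a per-click contract comes to a small fraction of per-impression revenue:

```python
viewers = 1000
clicks = viewers * 3 // 100       # 3-per-hundred click-through, low end of the quoted range

# Both dollar rates below are invented for illustration.
rate_per_impression = 0.05        # paid for each viewer under an impression contract
rate_per_click = 0.25             # paid for each click under a by-the-click contract

impression_revenue = viewers * rate_per_impression
click_revenue = clicks * rate_per_click

print(impression_revenue)  # 50.0
print(click_revenue)       # 7.5 -- well under a fifth of the impression revenue
```

The per-click rate would have to be several times the per-viewer rate before the two contracts paid a small publisher equally, which is precisely the leverage the larger sites described above exploit.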

Given the limitations of banners, some are suggesting that advertising on Web sites needs to be more prominent.



Radical surgery awaits. Publishers must reinvent the banner in larger sizes, different shapes, and surprising locations (as Duracell has done with its batteries ripping through background screens). Get rid of the box, add audio and animations, incorporate useful Java apps, and devise more interaction with site content. Delay the ad’s appearance on the page, let it pop up later, or let a rollover reveal it as a hidden Easter egg. Give it continuous presence on the site through the use of frames. Create serial messaging from banner to banner. Make us focus on the ad exclusively for a few seconds (much as the Riddler and Word sites have done by interposing ads on splash screens). Above all, let the advertisement add value to the site experience. That equals entertainment. And revenue. (Madsen, 1996, 212)



By using features specific to the Web (especially interactivity), it is hoped that surfers will actually want to experience advertisements rather than avoid them. Since this is a relatively new field, it is hard to know where it will go, but we should be aware that a counter argument can be made: Web surfers who go to a site for specific branded content may resent having to negotiate complex advertising material which they weren’t looking for.

Some suggest that advertisers go even further: “Unlike traditional media that broadcast a blaring brand identity, interactive technologies can — and should — mesh entertainment with service, support, and full-blown applications. Online advertising is more than business as usual.” (Freund, 1997b, 92) Or, as Esther Dyson puts it, “The challenge for advertisers is to make sure that their advertising messages are inextricable from the content.” (1995, 142) Some have suggested that this is necessary with an interactive medium like computer mediated communications since, “No longer can advertisers count on catching a passive audience unawares; they must now focus on ways to entice viewers to ‘tune in’ or ‘visit’ a Web site…” (Hindle, 1997, xi) This is complicated by the fact that the Internet has, for most of its history, been a non-commercial communications medium, resulting in the fact that “magazine readers expect blatant advertising but computer users don’t.” (Katz, 1994, 56) This combination of two usually clearly defined objects — advertising and content — would make it even more likely that users would experience the advertisements, since avoiding them might mean also missing the content which the user went to the site to obtain in the first place.

Several examples of this kind of advertising already exist. One is referred to as a “microsite,” a corporate Web page which sponsors other kinds of content. A microsite is linked to the site it sponsors, and returns the computer user to the sponsored site when she or he is finished with the microsite. “The e-zine Word helped Altoids breath mints build such a site, a targeted 15- to 20-page microsite that tries to blend the personality of the product with the idiosyncratic tone of Word. The Altoids microsite was codeveloped by Word technical and design staff and is connected to the zine with a banner. Word editor Marisa Bowe says she offers ideas to sponsors but stays away from creating any ad copy.” (Voight, 1996, 204)

Bowe goes on to say that the line between advertising and content on her site is clear, although the point is to make them similar enough that the user doesn’t object too strenuously when moving between them. “Many [microsites] fall short of their full potential because they are not customized to mesh thematically with the infotainment sites on which they appear and thus remain outside the site’s essential experience. Content providers are in a unique position to customize these modules more effectively to their own content, for premium charges.” (Madsen, 1996, 216) There will always be a tension between the advertiser’s need to make the microsite as near to the main site as possible, and the content provider’s need to keep the line between content and ad clear.

Another example of the blurring between advertising and content occurred on Lifetime TV online,



a spin-off of the popular female-oriented television station of the same name. The online publication offers to develop ‘content-related’ advertising for companies interested in trying out new promotional techniques. Sites like Say Cheese, which appears in Lifetime’s parenting section, are the result. Created in conjunction with Sears portrait studio, the site encourages users to vote for the cutest baby out of a selection of 10 new photographs displayed every month. The site’s content is co-produced by Sears and Lifetime, and Sears offers a free photo session at one of their portrait studios as a prize to participants.

Brian Donlon, VP-new media for Lifetime TV online, says the page has been a wild success. Buoyed by positive feedback, Donlon says Lifetime is making plans for the insertion of various products into one of its online digital dramas. Users will be able to stop the unfolding action and find out all about the displayed items. Eventually, people will be able to order products right then and there. (Groves, 1997, 34)



Supporters of this kind of advertising compare it to forms of advertising from previous media. One argues that it is “much like sponsored programming of the 1940s or ’50s, whether it was the quiz show, the ‘Texaco Star Theatre’ or the ‘Hallmark Hall of Fame…'” (Goldman Rohm, 1997, 120) However, the advertising did not intrude onto the actual programs to nearly the extent that is being suggested on the Web. “You already have those product placements in movies,” another person argues, “and everybody knows companies pay for those.” (Casey and Grierson, 1999, 29) Product placements in films are not supposed to break the forward momentum of the story, though; a better analogy to this form of online advertising would be a film that stopped dead for a 20 minute infomercial.

Some argue that the blurring of content and advertising will inevitably undermine the credibility of the content (as it has, to some extent, with product placement in movies). According to Chris Barr, editor in chief of CNET, “Advertising must be clearly marked, or else you’re compromising the content. There may be short-term benefits, but in the long term you really hurt the publication.” (Voight, 1996, 200) Others argue that “the traditional publishing barrier between advertising and editorial could be eliminated in cyberspace without harming readers if the readers are offered information about how much revenue is derived by the publisher, or content provider, from each advertiser.” (Whittle, 1997, 116) I find it hard to see, though, how a short disclaimer at the bottom of a home page will adequately prepare readers for the fact that what they are about to experience contains advertising in the guise of editorial content.

For non-fiction, the blurring of ads and content undermines the reader’s belief in the quality of the information. For fiction, the experience of product placement in movies is that, when it becomes too blatant, it destroys the audience’s suspension of disbelief. Furthermore, as Aristotle pointed out millennia ago (and we saw in the previous chapter), a properly constructed narrative requires that each action follow from the preceding action by logic and necessity. (1987) The intrusive kind of advertising being discussed, unless very carefully handled, would likely break this flow.

What theorists of Web advertising describe is a symbiotic relationship between content and advertising: the content gives legitimacy to the advertising, while the advertising can be both as entertaining and informative as the content (while paying for it). However, it may also be possible that, by accepting this trend in advertising, content creators are making themselves obsolete. After all, if advertising is as entertaining and informative as content, what need do advertisers have for other content, which will only compete with their sales message?

Calvin Klein had a campaign for a fragrance called cKone which illustrates this point. In print and video advertising, characters are established, each of whom has an email address. If you write to one of the cKone people, “they’ll start writing back.” (Casey and Grierson, 1999, 29) The cKone campaign is a soap opera played out in email. It is subtle: the email correspondence doesn’t mention the product (although it is in the return address on all the email messages); the only way readers of the email know it is a promotion for perfume is from the original advertisement from which they got the email address. For our purposes, the important thing to note about the cKone campaign is that it employs fictional devices but it is not tied to specific fictional content on the Internet. If more advertisers take this direct approach, the amount of advertising money available to content providers will diminish.

There is one other Web strategy which could change the nature of advertising. Advertising generally has been described as “an inefficient medium for paying for its accompanying information.” (“The Place of the Internet in National and Global Information Infrastructure,” 1997, 351) This was eloquently explained by Marshall McLuhan in 1964: “Advertisers pay for space and time in paper and magazine, on radio and TV; that is, they buy a piece of the reader, listener, or viewer as definitely as if they hired our homes for a public meeting. They would gladly pay the reader, listener, or viewer directly for his time and attention if they knew how to do so.” (McLuhan, 1996, 168) Computer mediated communications media such as the Web give advertisers just this possibility. Cybergold, for instance, one of the failed forms of electronic cash, allowed “Subscribers [to] earn cash by visiting websites and reading ads, then spend their cyber cheques on MP3 files and other posted merchandise.” (Platt, 1999, 40) A viable electronic money system would make this type of advertising more likely. How it would affect content is an open question.

Although, as has been mentioned, the amount of advertising dollars devoted to the Web as a whole is increasing, “The truth is that the companies bankrolling Web sites are, for the most part, seeing rivers of red ink.” (Larsen, 1999, 94) This means that, for most Web sites whose main product is information, the advertising model is failing to generate revenues which can cover their costs. This is due, in part, to the problems with online advertising which we have looked at, problems which make advertisers reluctant to fully embrace the medium. However, there is a more fundamental structural problem with the Web which may make it impossible for the advertising model of revenue generation to succeed: there are “too many Web sites chasing too few ad dollars.” (“Web profits still elusive,” 1998, unpaginated) One writer suggested that “ad inventory exceeds the demand from advertisers by probably 10,000 percent.” (Kline, 1997, 65) Unlike the magazine market, where there are established profitable publications and the potential for a small number of new publications to succeed given enough time, the sheer number of Web sites means that “What…online publishers seem to be running up against is an accelerated business model, where the marketplace is flooded with equally unprofitable competitors.” (Eisembeis, 1998, 38) We will look at one of the few exceptions to this rule, Web portals, in Chapter Five.

Given the super-abundance of content relative to the amount of advertising available, the law of supply and demand has led to a retrenchment in advertising rates: “The prices charged for every thousand page views delivered — or CPMs — dropped from $15 per thousand at the beginning of 1996 to less than a dollar per thousand by the end of the year.” (Bayers, 1996, 127)
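The scale of that collapse in rates is easy to miss in the abstract. A minimal sketch of the arithmetic behind CPM pricing, using a hypothetical site's traffic figure (the 500,000 page views are an assumption for illustration; only the $15 and sub-$1 CPM figures come from the quotation above):

```python
def ad_revenue(page_views: int, cpm_dollars: float) -> float:
    """Banner-ad revenue: CPM is the price charged per thousand page views."""
    return (page_views / 1000) * cpm_dollars

# A hypothetical site serving 500,000 page views a month:
early_1996 = ad_revenue(500_000, 15.0)  # $15 CPM -> $7,500 a month
late_1996 = ad_revenue(500_000, 1.0)    # under $1 CPM -> at most $500 a month
```

The same traffic that paid $7,500 a month at the start of 1996 paid under $500 by the end of it, a drop of more than ninety percent with no change in readership.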

This may be part of a larger trend in advertising: “Madison Avenue is already suffering, having watched corporate advertising shrink from 60 percent of corporate promotional budgets to just 40 percent — the difference having shifted to direct promotions.” (Davidow and Malone, 1992, 221) Ironically, the Web seems to be developing the need for advertising at just the moment when advertisers are moving away from advertising and putting their money in direct marketing campaigns such as junk mail and telephone solicitation.

In response to this trend, “most [Internet] analysts are predicting a wave of consolidations and failures this year.” (“Web profits still elusive,” 1998, unpaginated) This is classical economic theory: since supply of Web pages far outstrips the demand for advertising which can economically sustain them, the number of pages will have to be reduced. This would bode ill for small producers, since if advertisers are “best served by a Web in which 60 to 70 megasites receive the overwhelming majority of traffic, as was predicted at a recent industry conference, then that’s the sort of medium the Web will become. With these megasites offering daily, comprehensive, state-of-the-art content for free, it’s unlikely that smaller sites will have much luck charging users for programming.” (Beato, 1997, 193)

It should be pointed out, however, that these analysts are looking primarily at corporate Web sites. The idea that the number of Web sites must decrease so that the remaining sites may become profitable does not take into account the fact that as millions of new computer users begin accessing the Web, many of them will want to put up Web sites of their own (and continue to be willing to do so without making money). All other things being equal, therefore, the glut of Web sites is likely to continue into the foreseeable future, resulting in pressure which will continue to depress advertising revenues.

The Subscription Model

The concept of subscriptions translates fairly easily into the online world. For physical magazines, a subscription requires a reader to pay a fixed amount of money for access to a set number of issues. In the online world, readers pay a set amount for access to a certain amount of information for a set period of time. This may be all of the information on a site, but there is no reason why different levels of subscription fees could not cover different degrees of access. In addition, “The distinction between whether a ‘subscription’ means delivery by email or simply access to a restricted Web site is vanishing; most online publications offer either, at the user’s option.” (Dyson, 1998, 192)

For the most part, subscriptions to online information have not succeeded. The experience of Slate, an online publication available through the Microsoft Network, is instructive. In January, 1998, Slate announced that it would start charging for access to the site, which, up to that point, had been accessible for free. “We don’t believe that the advertising-only approach is sustainable for us,” the publisher of the electronic magazine claimed. (“Slate tries subscription model,” 1998, unpaginated) At the time, The Wall Street Journal, The New York Times, The Economist and Business Week, among other publications, were experimenting with charging for subscriptions. (ibid) Slate established a subscription rate of $29.95 per year. “Nothing that I have seen in the past one-and-a-half years has dissuaded me from the notion that we need subscriptions to have a viable business model,” Slate‘s publisher insisted soon after. “The longer you stay a free site, the harder it becomes to switch to paid. For us, it’s not a question of if, but when.” (“The increasing cost of surfing,” 1998, unpaginated) By April, Slate claimed a paid subscriber base of 20,000, although at the reduced rate of $19.95 a year. (“Pay-per-view Internet news becoming more common,” 1998, unpaginated)

In February, 1999, Slate announced that it would drop its subscription fee and allow people to access it for free once again. According to a Microsoft spokesperson, dropping the subscription fee was “part of an aggressive company strategy to focus on development of a Web site.” (“Slate drops subscription fees,” 1999, unpaginated) It’s hard to know exactly what to make of this statement. Some observers stated the case more directly: “Web-based publications such as Microsoft’s Slate — before the company gave up on paid subscriptions — found themselves with only a small fraction of the subscribers they needed to break even (or to match their print competitors).” (Wallich, 1999, 37)

Nor was Slate the only online publication to fail to make enough money from subscriptions. “Even such commercial communication giants as the Wall Street Journal (www.wsj.com) and sports network ESPN (www.SportsZone.com) have found it difficult to obtain a viable pool of subscribers.” (Mosley-Matchett, 1997, 10) According to Esther Dyson, “Time Inc.’s Pathfinder, which once hoped to charge, is still free. And the New York Times has an interesting strategy that prices according to value to the customer: Its online version is now free (although you have to register for access). As of mid-1997, the Times still charged subscribers outside the United States and had 4,000 overseas at $35 a month (the price of home delivery of the paper edition in the United States). But in 1998, the rate for overseas subscribers was dropped in the interest of competitiveness and long-term international growth potential.” (1998, 181/182) In addition, “USA Today had to cut the monthly subscription fee on its Web site from $15 to $13 and finally to nothing.” (Rose, 1997, 221)

It seems hard to argue with the conclusion that, “for the most part, subscriptions on the Internet have failed miserably.” (Goldman Rohm, 1997, 118) This may be because there are too few readers currently online for a large enough subscription base to develop; if this is the case, it is a temporary problem which will be relieved as more people get connected to computer mediated communications networks like the Web. On the other hand, there may be too many publications for any to gain enough readers to be economically self-sustaining through subscriptions, even if a large number of new computer users come online.

Microsoft Chief Technical Officer Nathan Myhrvold predicted (ironically, in Slate) that “Web readers wouldn’t pay online subscriptions until they became both addicted to the medium and bored by their free options. ‘Imagine trying to sell subscriptions to HBO back in the 1950s,’ he wrote. ‘People clustered around their primitive sets to watch the damndest things (Milton Berle, for instance).'” (Romano, 1998, 62) Perhaps, but cable television was competing with a small number of free channels; all other things being equal, there may always be millions of free Web pages, a lot of which are of high quality. In this case, it is by no means certain that enough people will come to accept subscriptions to support the economic viability of online publications through them.

It can be argued that if the largest corporations cannot use their well known brands to make money from subscription sales to their electronic publications, individual content creators don’t have a chance. It is worth keeping in mind, however, that these publications have high overhead costs which have to be recouped; a much smaller publisher, if he or she can become well enough known to attract a lot of readers (a big if, to be sure) needs to charge fewer people to recover his or her costs and start making money.

* * *

There is no need to choose between advertising and subscriptions, of course; it is likely that the two systems will coexist. (Varian, 1997, 30) On the other hand, at least one prediction is that by 2005, online advertising revenue will grow to $8.9 billion, while subscription revenue will only grow to $360 million. (Wolf and Sands, 1999, 113) Still, to the extent that different payment models will be available, different kinds of information will be subject to different pricing schemes: “[I]nformation that has a broad appeal will remain free. If a large enough user base is interested in it, it will be supported by advertising. Sole suppliers and niche suppliers of information will be able to sell their goods on a per piece basis or subscription basis.” (“The New Economics,” 1997, 418)

With respect to advertising and subscriptions, there are two foreseeable outcomes, neither of which bodes well for individual content producers.

It is possible that the Web will continue to have too many pages for any to be sustainable by traditional economic methods. Indeed, as more people buy computers and get connected to the Internet, we can expect the number of Web pages to continue to grow, exacerbating the problem. If this happens, no content creator, large or small, will make an appreciable income.

On the other hand, it is possible that a small number of sites will garner enough traffic to make them economically viable. If this happens, it is most likely that they will not be the work of individuals, but large corporate sites which are taking advantage of brands existing in the real world and economies of scale.

Perhaps there is another alternative: aggregation. Aggregation can work on the level of advertising or content. In terms of advertising, a single agency will place advertising on a variety of Web pages, charging a single rate for them as a package. “Plenty of small and medium-size companies without ad sales forces are generating online advertising income thanks to a growing list of ‘ad-reach’ networks. These networks — from companies like DoubleClick, Adbot, and Softbank Interactive Marketing — place ad banners on your site. They even connect with the ad agency media buyer — that all-important person who decides where clients’ ad banners will appear. In addition, the ad-reach networks deliver, rotate, and track ads; provide click-through rates to the advertiser and ad agency; and best of all, pay you for the privilege of using your ‘inventory’ or Web pages.” (Carr, 1998, 55) Web Wide Media, a joint venture between BSkyB, the world’s largest direct-to-home pay television operator, and OzEmail, Australia’s largest Internet service provider, claims that, “By representing and profiling thousands of sites we are able to effectively deliver target audiences to advertisers from all over the world… With backing like this, you can be sure advertising on your site is in good hands.” (“Web Wide Media advertisement,” 1997, 97) Aggregation of advertising could be a boon for small content producers; aggregating their readers would make them more attractive to potential advertisers who might otherwise have no use for them. In addition, the aggregation companies would do all the promotional work and billing, two aspects of advertising which many small content producers are not capable of and/or interested in doing for themselves.

Aggregation of content happens when you combine the work of different people into a single site. This can be an electronic magazine, where all of the content is stored on a single server, or it can be a series of pages on a variety of servers linked together. The theoretical advantage of aggregated content is that you can charge a subscription rate that is more attractive to potential readers than if each person tried to sell his or her content individually. In addition, the more content you offer, the more attractive an aggregated site can be to those potential readers (although, as we have seen, it is also possible that they will balk at paying for a lot of content which they would not otherwise want).

There is a problem with both forms of aggregation. Aggregation is part of a process sometimes referred to as “reintermediation”: the return of intermediaries between information producers and consumers. It is possible that the aggregators will make money since they take a percentage from all advertising sales or subscriptions. What is not at all clear is whether, once the money has been divided up between all of the content providers, there will be enough for any of them individually to sustain their work.

One final aspect of traditional methods of making money from the sale of information needs to be noted: while it seems true that, as Hakim Bey states, “The Net is gradually being enclosed by corporate capital,” (I26, 12) nothing we have seen so far prevents small content providers from putting their own material on the Web. The danger from these corporate maneuvers is not that small producers will be entirely disenfranchised, it is that they will become irrelevant. “What seems probable, then, is a ‘web within the web,'” write Herman and McChesney, “where the [conglomerate] media firm websites and some other fortunate content providers are assured recognition and audiences, and less well-endowed websites drift off to the margins… The relevant media analogy for the Internet, then, is not that of broadcasting, with its limited number of channels, but, rather, that of magazine or book publishing. Assuming no explicit state censorship, anyone can produce a publication, but the right to do so means little without distribution, resources, and publicity.” (1997, 124/125) Or, as Ted Leonsis, chief programmer of America Online, puts it, “The big will get bigger and the small will get marginalized… This isn’t going to be a business where 380,000 Web sites are going to be important.” (Rose, 1996, 295)

Their size, in terms of financial and human resources, gives multinational entertainment conglomerates considerable advantages over individual content producers. According to a 1996 Forrester Research report, the average commercial Web site cost $2 million per year (and was losing $1 million). (Herman and McChesney, 1997, 124) The big corporations can afford this: “‘Brand building is being done today,’ one media executive said of his firm’s Internet activities in 1996, ‘for reward in 10 years’ time.'” (ibid) While small players come and go, a corporation which can maintain a brand on the Web for a decade will attract users because of its stability: people will come to rely on it because it is simply always there.

In addition, conglomerates have the ability to produce tremendous amounts of material. “Will the bandwidth bonanza herald the death of the Web as a populist medium? An individual author might create 100 kilobytes of text and a few megabytes of rendered graphics in a day. Compare that with the amount pumped out by the armies of programmers and graphic artists at Microsoft or Time Warner. When the bandwidth logjam breaks, individually produced content will drown in the corporate flood. Turning up the bandwidth effectively turns down the volume on all the small sites that make the Internet what it is today.” (Claburn, 1997, 158) The fear is that, when a user goes looking for entertainment, conglomerate products are all she or he will be able to see.

Some, like Wired magazine’s Nicholas Negroponte, do not believe that this is a problem. “Companies like Time Inc., News Corp., and Bertelsmann keep getting bigger and bigger. People worry about control of the world’s media being concentrated in so few hands. But those who are concerned forget that, at the same time, there are more and more mom-and-pop information services doing just fine, thank you very much.” (1997, 208) Rawlins believes that once the first wave of corporate control has run its course, “small independents will start up again to service various niche markets.” (1996, 67)

There is certainly validity to this: many of the authors of online material we saw in Chapter Two would not have been able to get much of an audience for their work if they hadn’t been able to put it on the Web. It can be argued that for somebody who is likely to photocopy only 20 or 30 copies of a work and distribute them to friends, or even somebody who can offset print 100 copies of a work and try for a little wider distribution, getting a few hundred or even a thousand or more readers on the Web is a great step forward.

There are, however, a couple of reasons why this development is not as positive as it could be. For one thing, the rhetoric of the Web is that anybody with a modem and an account with an ISP can be the equal of a multinational conglomerate; some of the writers in the survey expressed this sentiment. Yet, it clearly isn’t so. “For all the hype about information wanting to be free, and the glorious cyberlibertarian future of the Net combined with the market, the oligopolies are moving in.” (Wark, 1997, 33) A lot of people may be putting material online because of a romantic ideal of the Web which increasingly has little to do with its reality: “Far from demonstrating a revolution in patterns of social and political influence, empirical studies of computers and social change…usually show how powerful groups adapting computerized methods regain control.” (Fernback, 1997, 47) On a practical level, if small content producers cannot attract larger audiences, they will not be able to share in the financial rewards if a workable system of advertising, subscriptions or micropayments is, ultimately, developed.

It is also true that information consumers will suffer because the idea that they have a wide variety of choices will prove to be largely illusory. Moreover, because they will be discouraged from looking beyond corporate Web sites, users will not necessarily get the best information or entertainment. “When you vertically link things, you don’t need to have the best in order to prevail. You only need to have something that’s adequate because bundled generally beats better.” (Brandt, 1998, 136) The promise of digital communication is one of increasing diversity of information and entertainment, where smaller and smaller niche interests can be served. However, this promise “is largely illusory if carried out within a commercial framework: the new channels tend to offer the same fare as the old, and instead of filling new niches they attempt to skim off some of the thinning cream in entertainment and sports.” (Herman and McChesney, 1997, 195)

Other means of generating revenue, means which would effectively exclude all but the largest producers of entertainment, are also being explored. It is to these efforts that we must now turn our attention.

Old Media for New

As we have seen, traditional models for making income have, for the most part, failed when applied to selling information over the new digital medium. As this reality has become increasingly obvious, one response from the business community has been to change the nature of the medium, to make it more like traditional media, media from which business knows how to profit.



“Here’s how one technology executive described what’s going on: ‘Where we see this going is bringing more TV-like experiences to the Web. There’ll be more sound, more graphics and more animation. It’s what advertisers and agencies have been waiting for to express themselves better.’

“So the future lies before us. The future of the Net is…Television!” (Gehl, 1998, 5)



Television, of course, works on a one-to-many model of communications, a model which, at first, seems antithetical to the Internet’s many-to-many form. (I shall consider these models in further depth in the next chapter.) However, attempts have been made to change the working of the Internet, and, particularly, the World Wide Web, to make it more closely resemble a one-to-many medium.

Before we explore these efforts, it is useful to remember that there are two kinds of software: function-oriented and content-oriented. Content-oriented software includes games, music, still images, etc. Function-oriented software is the program on which content is displayed: image readers, music players, Web browsers, etc. This is an important distinction which too often is conflated in the popular press, and, therefore, in the public understanding of technology. It is important because the nature of function-oriented software largely determines what content-oriented software can be displayed on a computer, a lesson not lost on those who hope to shape the future of the Web.

The Push for Push

How people access information on the World Wide Web is fundamentally different than how they access it on broadcast media. “The Web is basically a ‘pull’ medium. Users decide what they want; they point their Netscape or Microsoft browsers at the relevant website; they then pull the designated pages back to their PCs.” (“When shove comes to push,” 1998, 14) On TV, by way of contrast, the networks “push” their programming according to their schedules, and viewers must accept what the networks offer when they offer it (at least, until the advent of VCRs). While push media are synchronous, limited to a small number of channels and generally require users to be passive, pull media are asynchronous, far less limited and require users to be active.

Push applied to the Internet would work as follows: “On-line users download and install software that has a push application. Then they choose which channels they want to receive and how often. Channels will come from content providers that include news organizations such as CNN and The New York Times, and sports and entertainment sites including CBS SportsLine and Daily Variety. Not all push technologies are compatible, but many content companies will make their information available in several formats.” (Kramer, 1997, C25) The “channels” [5] would periodically appear as pop-up boxes on computer users’ screens. Much control of what information appeared on his or her screen would be taken out of the hands of the computer user: “With software now emerging, such as various ‘webcasting’ systems, Netscape’s ‘kiosk mode,’ and Microsoft’s ActiveX programming, content arrives in an unbroken, often uninterruptible stream once the user completes the initial link. Since these schemes aim to make the Web safe for advertising, it is reasonable to assume that users will not be encouraged to make other connections, but rather to keep the channel open and await instructions.” (Moulthrop, 1997, 654)

The channels would be free, offering a much larger number of information sources than television. Otherwise, the economics look eerily familiar: “The pushers would make their profits by requiring people to fill out brief questionnaires about themselves in exchange for free subscriptions; and by using these ‘demographics’ to sell their audience to advertisers. PointCast, for instance, charged advertisers up to $54,000 to run 30-second ‘intermercials’ within the content it pushed to its 1.2m (mostly affluent) subscribers.” (“When shove comes to push,” 1998, 14)

As with other such innovations, push technologies offered Web users some advantages. For one thing, push technology held out the possibility of “solving the problem of information overload.” (“‘Push’ found to be too pushy,” 1998, unpaginated) Rather than having to search the Web every time a computer user needed some information, she or he could cultivate a small number of trusted sources, who would deliver (hopefully useful) information directly to her or him. The literature on push technology assumed these trusted sources would be the established brands of existing media corporations. In addition, push technology could save a computer user valuable time; once he or she had found a trustworthy site, he or she could ask it to send regular updates to the desktop, which would “make repeated visits to Web sites unnecessary.” (Kramer, 1997, C25)

For a brief period, push technology “was being hyped as the blockbuster application of the Internet.” (“When shove comes to push,” 1998, 14) Extravagant claims were made about push’s potential: “The technology is expected to become so widespread that push-delivered advertising, transactions and subscriptions will account for a third of a projected $19-billion (U.S.) in annual worldwide revenues just three years from now, according to the Yankee Group, Boston-based market researchers.” (Kramer, 1997, C25)

The technology didn’t live up to the hype, though. Consumer reaction to push technologies was largely negative: “Push media’s promises [were] often met with outright resentment from computer owners glad to be free of TV’s broadcast model.” (Boutin, 1998, 86) In fact, the reaction could be quite vehement. “The push model grabbed the attention of Internet publishers because it allows them to dispatch information without depending on users to visit their sites,” one letter writer to Wired magazine commented. “Of course, you and I both know the real reason these Internet publishers aren’t getting visitors — their content sucks. Publishers aren’t willing to accept that low traffic might be their problem. So what do they do? These oh-so-thoughtful publishers force themselves on us and ram their worthless information right down our pipelines.” (Peterson, 1997, 32) Another person wrote: “The Web is a success because it provides information to users and doesn’t pander to advertisers. Television is a vast wasteland of useless predigested mush because the people running it put commercial interests before those of the viewers. If push media is going to follow the model of television, it’s going to be a waste of time.” (Freeman, 1997, 32)

Virtual Reality pioneer Jaron Lanier summed up the attitude of existing computer users to push technologies when he stated that, “Push is not a technology, but a way of using technologies to make new media emulate old media. Push indicts the business minds of new media for failure of imagination. Push ultimately will mean television all over again, because that’s the only business model our moribund investment sector seems able to fathom.” (1997, 156)

This opposition led to “the demise or reinvention of many of the many start-ups hoping to make a success of push technology…” (“‘Push’ found to be too pushy,” 1998, unpaginated) PointCast, one of the biggest names in push technology, is “retreating from the consumer market and concentrating more on such businesses as corporate banking, telecom, health and property.” (“When shove comes to push,” 1998, 14)

Proponents of push technologies attempted to change the way computer users accessed online information through the nature of a specific type of software. Although they failed, other efforts continue.

Streaming Video and Multicasting

In order to get a video file, one has to download it to one’s computer. This is time-consuming (large files downloaded over slow modems could take hours) and frequently frustrating (imagine spending all that time downloading a video that, it turns out, you aren’t interested in). A better approach would be to have the video appear on your desktop as soon as it is called for, unspooling in real time. This would allow you to watch only as much as you needed to get the information or experience you wanted.

This is the idea behind streaming video.

Rob Glaser, Chairman and CEO of RealAudio and RealVideo maker Progressive Networks, believed that they would turn “the Net into the next mass media.” (Jones, 1997, 14) The advantage of the Net as a mass medium is, of course, that traditional economic models could then be applied to it, particularly the advertising model, giving major corporations a way of making money by selling their information assets through it.

An optimistic view of streaming video was that it “could some day challenge the landed interests of the TV industry…” (Reid, 1997, 123) by giving small video producers an outlet for their work. At the same time, however, large economic interests have been involved with the technology: “…Microsoft, which already owned 10 percent of [streaming video company] VDOnet, added 10 percent of Progressive Networks and 100 percent of [streaming video company] VXtreme to its collection. It also announced agreements with several other vendors to support its Active Streaming Format. By assimilating the best technology from its competition and assuring that most player modules will support ASF, Microsoft changed the entire Web video landscape in a single week.” (Avgerakis and Waring, 1997, 47) At the same time as the giant of the computer industry was buying up streaming video properties, existing television networks were beginning to see its potential for their reach: “Jeff Garrard, executive producer for CNN Interactive, says if CNN can reach people with information relevant to their work, via streaming media on the Web, it will have maintained its reach, even if those same people watch less CNN on TV at home.” (“Broadcasters Target the Office Worker,” 1998, unpaginated)

As we have seen, the World Wide Web is a pull medium where individuals ask for information from distant sites. Pulling streaming video off the Web has a fundamental problem, though: it can put a large load on a server and take up a lot of space on a network. If 1,000 people each download a large video file at their convenience, it requires 1,000 different requests from the server and 1,000 different streams of information.

One solution which has been tried is known as Internet Protocol (IP) Multicasting. “[I]f the server site were to use a multicasting protocol, it could send just one stream of packets into the ether — and any number of users could tune into the signal, with no extra load on the server. Multicasting changes the rules of the road: it allows packets of information to be ‘broadcast’ to anyone who is ‘listening,’ rather than a single, specific computer. These packets aren’t sent individually to each recipient; instead, only one stream is sent, but it is received at all destinations at (more or less) the same time.” (Savets, 70) The main difference between streaming video and multicasting, a vitally important one, is a matter of time: streaming video is asynchronous, meaning that computer users can download information when they want; multicasting is synchronous, meaning that users must be online when a video stream is sent.

More or less like television.
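The server-load arithmetic behind multicasting can be sketched in a few lines of Python. This is a minimal illustration, not a description of any actual protocol implementation; the 300 kbps stream rate is an assumed figure, not one from the sources cited above.

```python
def server_egress_mbps(viewers, stream_kbps, multicast=False):
    """Total bandwidth the server must send out, in megabits per second.

    With unicast streaming, every viewer receives a separate stream;
    with multicasting, a single stream serves every listener.
    """
    streams = 1 if multicast else viewers
    return streams * stream_kbps / 1000

# 1,000 viewers of a 300 kbps stream, as in the scenario above
print(server_egress_mbps(1000, 300))                  # 300.0 Mbps, unicast
print(server_egress_mbps(1000, 300, multicast=True))  # 0.3 Mbps, multicast
```

The trade-off is the temporal one just described: the multicast figure only holds when all 1,000 viewers are tuned in at the same moment.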

Many commentators have claimed that “a multicast-enabled network is the foundation for the next major evolutionary step in the life of the Internet.” (Hovis, 1997, 24) Certainly, important players in the information industry are acting as though it will. “MCI and Progressive Networks recently announced an ambitious mass-market hosting service called RealNetwork. MCI has placed Progressive splitters and multicast technology throughout its IP network and has signed up the likes of ESPN, ABC News, and Atlantic Records.” (Avgerakis and Waring, 1997, 48) And, of course, there’s always Microsoft: “UUNET, the sole access provider for Microsoft Network, is fully multi-cast enabled. Microsoft plans to exploit IP multicast in Windows 98, in important new technologies like DirectX and DirectShow, and in products like NetMeeting and NetShow.” (Doyle, 1997, 62)

Some people believe that streaming video is currently economically viable. “Web-based streaming-media technologies already make possible live telecasts to audiences as big as 50,000 people at a cost per viewer lower than cable. When you’re paying to reach people, it pays to reach only the ones you want.” (Browning and Reiss, 1998, 102) This belief is predicated on the idea that advertisers are willing to pay higher costs for smaller numbers of audience members if those audience members are part of a demographic segment of the population more likely to buy the products of the advertisers. This is a disputed proposition. As a representative of America Online (AOL) argued, “We talk about Webcasting here a lot, and we think audio and multimedia is going to be a big part of what we do going forward. Yet, it is totally unproven. The biggest webcast ever reached 10 or 20 thousand people. If you’re running your business on an advertising model where you’re getting $60 per thousand, an audience of 20,000 is not going to cut it as a business model.” (Geirland, 1997, 233) Without the revenues of a mass audience, these producers will not be able to produce television-quality shows, which may be to the advantage of smaller producers with much less overhead, but which may also mean that Webcasting will never be economically feasible.
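The AOL representative’s arithmetic is easy to check. At a CPM (cost per thousand viewers) of $60, revenue from a single ad scales linearly with audience size; the 20-million figure for a mass television audience below is an illustrative assumption, not one from the quote.

```python
def ad_revenue(audience, cpm_dollars):
    """Revenue from showing one ad to an audience, at a given CPM
    (cost per thousand viewers)."""
    return audience / 1000 * cpm_dollars

# The webcast audience cited in the quote vs. a hypothetical mass TV audience
print(ad_revenue(20_000, 60))      # $1,200 per ad -- "not going to cut it"
print(ad_revenue(20_000_000, 60))  # $1,200,000 per ad at television scale
```

At $1,200 per ad, a webcast cannot begin to cover television-scale production costs, which is the substance of the AOL objection.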

As it happens, multicasting need not necessarily turn the Internet into a one-way medium; it has been used in trials of online videoconferencing, for instance. However, this type of interactivity (which is, after all, a defining feature of networked digital communications) only works with a small number of sites, which is the opposite of the mass medium envisioned by the entertainment corporations pushing the technology. “‘The big players in content production — the TV networks and Hollywood — are used to a broadcast method that reaches tens of millions of users, and the Internet today simply cannot deliver data in this way,’ said Martin Hall, co-chair of the IP Multicast Initiative (www.ipmulticast.com), a coalition of more than 65 major companies. ‘In order to attract the creators of expensive and elaborate content, the Internet must change to deliver those users, and that’s where IP Multicast comes in.’” (Hovis, 1997, 24) There can be no clearer statement of the proposition that where there is a conflict between a medium and the needs of major content producers, the medium must change. Some advocates are a little more circumspect, claiming that multicasting technology promises “the richness of television broadcasting coupled with the interactivity of the Web.” (ibid) Of course, interactivity with a broadcast meant for tens of millions of other people necessarily would be limited to relatively trivial matters such as camera angles, replays, purchasing products, etc.

The irony is that streaming video may not be necessary. According to the famous law propounded by Gordon Moore, computer processing power doubles approximately every 18 months, a process expected to continue for the foreseeable future. If network capacity grows at a comparable pace, it is only a matter of time before computer networks have the bandwidth to distribute full-motion video on an asynchronous basis. As Steinberg points out, “Advocates [of multicasting] are overlooking one detail: with the coming of the Gigabit Ethernet, network managers are trying to find ways to use all the bandwidth, not save it.” (1997b, 100) Multicasting may turn out to be a permanent solution to a temporary problem.
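The doubling arithmetic behind this argument can be made concrete. The 18-month figure is Moore’s; applying the same growth rate to network bandwidth is, as the argument itself acknowledges, an extrapolation.

```python
def capacity_multiplier(years, doubling_months=18):
    """How many times capacity grows over a span of years,
    assuming it doubles every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

print(capacity_multiplier(3))   # 4.0 -- two doublings in three years
print(capacity_multiplier(15))  # 1024.0 -- ten doublings in fifteen years
```

At this rate, a network a thousand times faster is only about fifteen years away, which is why advocates of conservation technologies like multicasting may be solving a problem that solves itself.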

WebTV

To this point, we have seen attempts to change the nature of the Web through the introduction of specific kinds of software. One major drawback of these approaches for the corporations which are trying to impose the broadcast model on the Internet is that they are voluntary: computer users must sign up for push channels or invoke a streaming video program in order to use them. The users are still able to use all of the other features of the Web, if they so choose, ignoring push and streaming video. Other changes now being pursued curtail more directly what a computer user can do with the Web. These changes are being introduced at the level of hardware.

WebTV is one of the technologies which is currently being introduced into people’s homes. At its simplest, it is a box which sits on top of a normal television. The box contains a computer chip which allows it to process information delivered through the TV set. WebTV uses “a broadcasting technology called ‘the vertical blanking interval’ — a space between TV signals that can be adapted for sending data — to automatically integrate data from the World Wide Web into TV programs in progress.” (“Oracle’s plans for integrating Web with TV,” 1997, unpaginated)

What might a WebTV experience look like? “In the not-so-distant future, you’ll see television programming on your screen, complete with interactive elements. Just imagine: Click here to (1) shoot Barney, (2) break Michael Flatley’s kneecaps, or (3) download Martha Stewart’s recipe for purple dinosaur stew.” (Li-Ron, 1997, 48) A perhaps less dramatic vision of WebTV suggests that “a person viewing, say, a football game could interact with other viewers through a Web-based chat session appearing in one window on the screen.” (“Oracle’s plans for integrating Web with TV,” 1997, unpaginated)

As with other attempts to integrate television-like features into a Web environment, some major economic players are involved in WebTV. “There’s no stopping this runaway train. All you really need to know is that Citizen Bill has already invested in the necessary infrastructure: First Microsoft bought Web TV; then the company poured a billion dollars into a cable business. Next stop: your desktop.” (Li-Ron, 1997, 48) And, as with the other attempts, there has been a certain amount of retrenchment as the kinks of the business model are worked out: “NetChannel Inc., which provides an Internet-via-TV service similar to Microsoft’s WebTV, plans to shut down its service this weekend, as it continues to talk with America Online about an acquisition… AOL lent the beleaguered company $5 million in November, and is said to be more interested in NetChannel’s technology, employees and expertise, than in the NetChannel service.” (“Netchannel likely to turn off its Internet-via-TV-service,” 1998, unpaginated)

There are two modes of WebTV, neither of which is without problems. With one, direct access to the Web is available on your television set. But Web surfing is an essentially active pursuit: you must go out and find the information you want. Television viewing, on the other hand, is a primarily passive pursuit: turn the set on, pick a channel and watch. Some critics of WebTV are afraid that it will ultimately change the nature of information on the Web: “Some sites are already feeding the push beast by altering the basic shape of information they send to consumers, essentially creating bit-size chunks for easier transmission to cybersavvy consumers.” (Li-Ron, 1997, 49)

The other mode, as we have seen, is to allow a small amount of interactivity into traditional programming. “Using phrases such as ‘web shows rather than web sites’ and ‘choreographing your (Internet) experience,’ [executive producer of the Microsoft Network for Microsoft Canada, Inc. Martin] Katz outlined a future where the Web could be ‘programmed,’ just like CBC or CBS, but with some pointing and clicking thrown in to juice up the particip-action factor.” (Zerbisias, 1997, D1) This diminished form of interaction has been referred to as “lazy interactivity.” Josh Bernoff, an analyst for Forrester Research, defines lazy interactivity as “interactivity you can do with a remote in one hand and a beer in the other.” (O’Harrow Jr., 1998, F12) Critics see this as a great diminution of what the Web can be: “Television isn’t expanding into the Net; it’s shrinking the Net to fit the cramped dimensions of the box.” (Kingwell, 1997, 93)

WebTV, although computer chip-based, does not offer many of the features of a personal computer. “You can’t download files and save them on a hard drive, for example. If you want to write a letter offline, there’s no word processor.” (Riga, 1998, C1) Perhaps the greatest drawback of WebTV, though, is that, “the way the technology is for the foreseeable future, Web TV does not allow users to create their own programming, or their own Web sites, as they now can with their computer. Instead, much like conventional TV now works, Web TV will set the schedule — and the agenda.” (Zerbisias, 1997, D1) As the Web is currently configured, each consumer of information is also a potential producer; as television is currently configured, only a small number of producers create work for a large number of consumers. Any attempt to introduce the television paradigm into the Web will necessarily have to reduce the role of the individual from producer/consumer to consumer. This distinction, so important to the writers surveyed in Chapter Two, will come up again.

Why would people currently on the Web accept such a reduced role? Odds are, most wouldn’t, but that doesn’t matter. WebTV was not designed for people who are already online. “Do you hanker to surf the Net via your TV? I don’t, and I wonder how many of you can honestly say you do. A growing number of companies, however, are betting that many people will turn to settop boxes for Internet access. You and I are not their market, but for people who either can’t afford computers or are too intimidated to use them, these devices offer a viable alternative.” (Coursey, 1997, 63) The CEO of set-top box maker Curtis Mathes Holding Corp. bluntly stated that, “We’re hitting the TV viewer, not a computer person.” (“New set-top box challenges WebTV,” 1997, unpaginated) As I never tire of pointing out to my friends, 50 million people may currently use the Internet in North America, but that leaves 300 million who are not regularly online. WebTV offers these people a convenient means of getting online, and, since they have no experience with the World Wide Web, they have no allegiance to it, no idea of how WebTV is a reduction of it. As Mark Kingwell points out, “The troubling thing is that, under cover of the allegedly democratic character of wider access, the revolutionary interactive possibilities of a direct-communication medium are gradually being allowed to slip away.” (1997, 3)

One way to look at WebTV is as an example of a phenomenon called “convergence.” As we have seen, at its simplest, convergence is about the merging of the computer and the television into one appliance. (At its most complex, convergence is about the merging of the computer and all other appliances into one cybernetic system.) WebTV incorporates elements of computer networks into the television; streaming video, on the other hand, incorporates elements of television into computer networks.

If a converged system succeeds, it will be because it offers users experiences they could not get with television or a computer on their own, in accordance with the theory that any new technology must offer new experiences or other benefits which outweigh those of old technologies to succeed. However, there is a corollary to this theory: an existing technology will continue to thrive in the face of a challenge from a new technology to the extent that it can accentuate its own unique features. As long as a large enough group of individuals see value in putting their own information on the Web, or prefer to surf for themselves rather than have their choices limited to what television producers program into their shows, the original, computer-mediated Web will continue to exist. As one commentator put it, “I used to be on interactive television panels all the time and people would always ask, ‘What’s going to win, the PC or the TV?’ The question is nonsensical because there are certain types of applications you would never want to do on your TV, and there are certain types of entertainment you would never want to do on your PC. So, you have to assume that both forms will continue to live, grow and morph.” (Goldman Rohm, 1997, 116)

WebTV need not be a direct threat to the Internet, since individuals will still be able to choose between a direct or a televisually mediated experience of the Web. However, one thing it will do is allow current television networks to maintain their advertising base, and possibly expand it slightly as people who might otherwise have found their way onto the Web watch the new, somewhat interactive television offerings. This may slow the projected growth of online advertising, to the detriment of those who are trying to derive some of their income from it.

The main argument for such things as push and WebTV is economic: business can apply models from existing media to new media in order to profit from them. However, the effect is not just economic. Recall from last chapter the difference between prescriptive and holistic technologies drawn by Ursula Franklin. As it is currently configured, the Internet is a holistic publishing medium. All of these efforts, in addition to applying existing business models to the Internet, require the application of existing organizational models to the Internet. To a greater or lesser extent, this may turn digital communications networks into prescriptive technologies, to the detriment of individuals who are currently using them in holistic ways.

Bandwidth Issues

The Internet currently piggybacks on the phone system. As anybody who uses it privately (as opposed to from a school or business) knows, you must dial into an Internet service provider (ISP) through a phone line; the ISP usually connects to an access reseller which in turn connects to an Internet backbone (although a small number of ISPs connect directly to a backbone). The phone lines were not, of course, created to handle digital data, and especially not the volume of digital information required for higher-end uses such as video. As a result, there can be lags in the flow of information across the lines, which is relatively unimportant for email, somewhat unpleasant when downloading large Web sites and potentially deadly for video or online gaming.

To solve this problem, telephone companies are upgrading their equipment. However, they are not simply replicating the two-way telephone system; they are fundamentally changing the way information is delivered to the home. “In an attempt to find a way to offer video services over standard telephone lines, asymmetrical digital subscriber line (ADSL) technology has been developed. ADSL offers transmission speeds of up to 7 megabits per second from the central office to the subscriber, and up to 576 kilobits per second transmission speed from the home to the central office. This is enough to send two medium-quality video channels to a home.” (Baldwin, McVoy, Steinfield, 1996, 118) ADSL, once implemented, would allow about 12 times more information to enter your house than you could send out in a given time period.

The telephone companies were not the only ones to envisage an asymmetrical information pipeline. Digital information can also be transmitted to the home by cable modem; some systems allow 800 to 3000 kbps to come into the home, but only 33.6 kbps out (a difference of 23 to 89 times). It is also possible to transmit digital information over satellite dishes; in this case, you could get 200 to 400 kbps into the home, and the same 33.6 kbps out (a difference of 6 to 12 times). (Reinhardt, 1998, 83) Some of the developers of these systems insist that additional bandwidth leading out of the home will be added in the future (ibid); however, it seems unlikely that, once patterns of usage have become entrenched, these companies will want to jeopardize their dominance by offering more interactivity.
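The asymmetry ratios quoted above follow directly from the cited capacities. A quick check, using the upper end of each downstream range (all figures in kbps, taken from the text; small differences from the quoted ratios are rounding):

```python
# Downstream and upstream capacities cited in the text, in kbps
links = {
    "ADSL":      (7000, 576),   # up to 7 Mbps down, 576 kbps up
    "cable":     (3000, 33.6),  # upper end of the 800-3000 kbps range
    "satellite": ( 400, 33.6),  # upper end of the 200-400 kbps range
}

def asymmetry(name):
    """Ratio of incoming to outgoing capacity for a cited technology."""
    down, up = links[name]
    return down / up

for name in links:
    print(f"{name}: about {asymmetry(name):.0f} times more in than out")
```

Whatever the delivery technology, the design decision is the same: far more capacity flowing into the home than out of it.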

Although the competing technologies are completely different, the vision of the executives of the various companies is remarkably similar. By the mid-1990s,


a major turf war had erupted with cablecos and telcos jockeying for position in what was thought by industry mavens to be the new frontier of the multibillion-dollar home-entertainment business, the so-called information highway. This was to be a high-bandwidth delivery system to homes, for what was envisaged in corporate boardrooms as digital, pay-per-view television with an interactive component. The interactivity would be confined mainly to games, and to searchable databases for news and various kinds of information including financial services and shopping. It was perceived from the start as a television-based rather than PC-based enterprise; almost no thought was given to adapting the innovations in interactivity in use on the Internet; indeed, the number of senior executives in either the cable or telephone industry who had Internet experience was vanishingly small. (Rowland, 1997, 313/314)



In the middle of the decade, “expensive trials of so-called video-on-demand [were conducted] in Britain, the United States, and Canada, and these received widespread journalistic coverage. Typically, a demographically-correct subdivision or urban neighbourhood would be rewired with high-capacity, two-way cable connected to a bank of computer servers. The computer databases would contain dozens of digitized movies along with other video entertainment, video games, news outlets and on-line shopping services.” (ibid, 314/315) Within two years, all of the trials had been wrapped up, none of them showing enough economic promise to be rolled out for larger groups of people. Time Warner’s Full Service Network, for example, failed partially because competition between cable and phone companies decreased, lessening the push to open new markets, and partially because “people don’t want a lot of what’s being offered.” (“Postmortem on Time Warner’s Full Service Network,” 1997, unpaginated) This failure is perhaps understandable: those already online would not be attracted to the circumscribed world of new digital services, while those with no computer experience faced a fairly steep learning curve and daunting new machines for what must have seemed like little benefit. Or it may simply confirm that, “A number of pilot projects have shown that, with regard to the use of interactive television services, audience habits tend to change very slowly…” (Schroeder, 1997, 105)

Still, such services as video-on-demand and home shopping are very attractive to telephone and cable companies. As Unsworth points out, they “fit well into the current market system and require no alteration whatsoever in the role of the consumer.” (Unsworth, 1996, 242) Yet, as we have seen, on the Internet, every consumer of information also has the potential to be a producer of information. The underlying assumption of the passive information consumer in these scenarios is very different from the current reality of information consumption on the Internet, and opposed to the active production of content that attracted so many of the writers in my survey. “What is disturbing is how, in the general current discourse of these ‘user-centered’ trials, the users of the technologies are posited as consumers of the services that will be delivered to their homes, rather than as active and inquisitive citizens who might use the technologies for personal ’empowerment’ or edification.” (Shade, 1997, 200)

* * *

Efforts to remake the Web in the image of television are being conducted by many of the largest names in corporate communications. “The giants are ‘pushing’ the types of sport and commercial news and entertainment that play well in broadcasting,” write Herman and McChesney.



CBS and Disney, for example, have developed major sports on-line services. The General-Electric-Microsoft joint venture MSNBC plans to join them… In 1997, MSNBC, CNN and News Corporation began ‘pushing’ 24-hour live video feeds over the Internet. News Corporation also launched its TV Guide Entertainment Network website, to capitalize on the firm’s widespread media properties… To complement its existing websites, in 1997 Time Warner established CityWeb, meant to replicate a TV network on the Internet with hundreds of local affiliates. Disney launched an online kids’ service in 1997 with basic and premium options and several different ‘channels’ targeting different youth demographic categories. After failing in its effort to launch German digital television, Bertelsmann announced that it would concentrate on developing a widespread digital TV presence through the Internet. [footnotes omitted] (1997, 125/126)



An advertisement for AT&T has a former infomercial producer proclaiming that, “The Web is a natural extension of television.” (“AT&T Web Site Services advertisement,” 1997, 45)

These efforts to remake the Web into the image of television are driven by perceived economic necessity: by creating a medium in which mass audiences will come together for their Web programs, these corporations hope to be able to make money through traditional forms of advertising. The irony is that these corporations are trying to recreate the mass paradigm online at the same time as it is breaking down in existing media. The early history of television was dominated by three networks which vied for the largest audiences. By the 1980s, however, cable delivery of television signals allowed for the delivery of additional networks (CNN, HBO, et al), and deregulation in the mid-1980s allowed even more networks to develop (Fox, the WB, et al). “In 1978 three television networks — ABC, CBS, and NBC — captured 90 percent of the American prime-time television viewing audience. Over the following decade, that figure dipped to 64 percent… ‘There’s really no mass media left,’ an ad buyer told Forbes magazine in 1990.” (Shenk, 1997, 112) Tellingly, “A contemporary television blockbuster like Seinfeld draws only one-third the audience, as a percentage of total, that saw 1960s network hits like The Beverly Hillbillies.” (Rothenberg, 1998, 74)

This plays havoc with the advertising model which relied on reaching the largest possible number of viewers. “Indeed, fragmenting audiences are robbing entertainment companies of the mass scale that made their businesses so attractive in the first place. It’s extremely difficult to amortize higher costs over fewer customers.” (Stevens and Grover, 1998, 90)

Attempts to recreate the Web along the lines of broadcast television may be self-defeating. Adding new channels on the Web will only continue the process of fragmenting the audience, further undermining the mass advertising model on which television depends. This would greatly disadvantage those creating work specifically for the Web, but it would be less of a problem for those leveraging brands across a variety of media: “The formula for success is straightforward enough: Produce something for a fixed cost and exploit the hell out of it, selling it over and over in different markets, venues and formats.” (ibid, 88)

Only time will tell if any of these efforts to change the fundamental nature of the Web by changing the underlying software or hardware by which users get connected will succeed, either singly or in some combination. The failure of heavily hyped push technologies suggests that individual computer users can, by the choices of which technologies they use and which they ignore, determine the future of the Web. However, this requires vigilance, since what industry cannot do in one form, it simply finds different ways of accomplishing. The consequences of users losing sight of their long-term interests in order to cope with short-term problems are very serious. As Derrick de Kerckhove comments: “If business is left to guide the values of the Internet, the latter will be marked by the proliferation of real-time, full-bandwidth communications. We can only hope that the architecture of these communications remains sufficiently open to let everyone in on them.” (Rushkoff, 1998, 68)

The Attention Economy

“Does a place in cyberspace exist if no one visits it?” (Dyson, 1995, 142)

As we have seen, information is abundant, and, therefore, almost valueless as a generic commodity. Attempts to make money from the sale of online information using traditional models have been, for the most part, unsuccessful. Some have suggested that qualities related to information may be where value lies. According to Esther Dyson, “The source of commercial value will be people’s attention, not the content that consumes that attention. There will be too much content, and not enough people with time for it all. That will change our attitudes to everything; it will bring back a new respect for people, for personal attention, for service, and for human interaction.” (Dyson, 1998, 175)

The basic idea behind so-called “attention economics” is that, while information is abundant, the time we have to devote to consuming it is scarce. Dyson points out that the time we make available for such consumption is directly tied to our demand for information (ibid, 173); attention, therefore, should be used as a measure of value, since the more time we devote to something, the more valuable it can be assumed it is to us.

Advocates of attention economics argue that computer users will pay for customized service. This assertion is less straightforward than it appears. Because it is being posited as the scarce commodity, the attention of the consumer is the product in this economic equation; this would seem to suggest that the attention of consumers is what artists will be competing for, not cash. By this logic, content creators should be paying computer users for their attention. Micropayments, for instance, would allow computer users to “‘earn’ as well as spend small sums” (Wallich, 1999, 37) by clicking on advertisements on an artist’s Web site. The problem with this is that most individual producers of content cannot afford to pay users of computer networks to come and look at their work, nor do they have access to advertising which would cover such a cost.

In such an environment, Dyson suggests that “The likely best course for content providers is to exploit the situation, to distribute intellectual property free in order to sell services and relationships.” (1995, 137) Thus, content becomes the lure by which artists sell other things. This may seem an exotic solution, but, in fact, the model has existed for decades: television programs are given away for free in exchange for viewers’ attention to the advertising within them. “Precisely because it is scarce and unreplicable, this unreplicable kind of content is likely to command the highest rewards in the commercial world of the future.” (ibid, 141)

For some, attention economics seems to fit nicely with the artist’s agenda. “What do writers want when a book is published?” James J. O’Donnell asks. “Attention, acclaim, response, notoriety: they want the act of imposition to succeed in seizing the public stage, the stage that has been inaccessible until the act of publication occurs.” (1998, 12) While this is certainly one motivation, it is by no means the only one. After all, artists, like other people, have to eat. The time they devote to making a living by other means is time that is taken away from their craft. Given the choice, most artists would prefer making their living from their art so that they can devote all of their time to it. [6]

How might attention economics work on computer networks? Dyson suggests that some online content creators “‘will write highly successful works and then go out and make speeches.’ And what if they are shy? ‘Then they won’t make any money.'” (“Advice to Emily Dickinson: Speak Up!”, 1996, 16) Of course, many authors currently augment their writing income with speaking engagements; according to Dyson, this money will be an increasing percentage of their income as the amount of money to be made directly from writing decreases.

Some people have applied traditional economic approaches to this issue: “People, I think, are going to be increasingly rewarded for their personal effort with processes and services rather than for simply owning the assets. It’s a kind of intellectual property which today is called ‘context.’” (“Opening the Gate,” 1997, 153) Dyson suggests that “players may simply try their hands at creative endeavors based on service, not content assets: filtering content, hosting online forums, rating others’ (free) content, custom programming, consulting, or performing.” (1995, 138) This raises a question, though: filtering or rating assumes pre-existing content to work with, but if people cannot make any income out of producing such content, where would it come from?

There are other models by which artists might be able to make money through a relationship with their audience. Musician Todd Rundgren, for instance, has experimented with “a project he hopes will radically change the way artists and musicians market their work” to the public: “‘The idea,’ says Rundgren, ‘is that, instead of, say, a record company buying an artist’s music and then selling that music to the public, anyone who wanted to could subscribe for a year.’ As a subscriber, you’d be let in on the creative process, receiving, via email or a Web page, any newly recorded tracks as they were happening — run-throughs, second or third takes, finished tracks.” (“I Want My…PatroNet?” 1997, 92) Subscribers would receive whatever work was created in the period for which they had paid, but that is almost a by-product of what was really being sold: access to the artist’s creative process.

Economics played a big part in Rundgren’s decision to take this route. “Although a popular producer, Rundgren isn’t an easy sell as a musician, and lately he’s had trouble getting record deals. He knows that every time a record label releases a new album it represents a gamble of about $300,000 in up-front costs for production, CD manufacturing and distribution. Very few of those gambles pay off, so Rundgren wants to remove some of those costs, thereby reducing a lot of the financial risk involved.” (Houpt, 1998, C9) This rationale is similar to that of writers who hope to avoid the costs of producing and distributing books by publishing on the Web; it differs from theirs, however, in the model on which it relies to generate revenue.

A different approach was taken by a writer named Dan Hurley, who billed himself as the “Five Minute Novelist.” Hurley started with a typewriter on a street corner, offering to write a story for passersby; he would ask them questions about their lives, and then create the story based on what they had told him. Eventually, Hurley’s writing moved to the Net: “America Online has created a special area for Hurley — officially launched in September — and the novelist may not be back on street corners any time soon. ‘It seems like the online medium was made for the kind of work I do,’ Hurley beams.” (van Bakel, 1995, 90) Users of Hurley’s service email him details of their lives, which he turns into a story. It is important to note that what makes Hurley’s work unique is not the stories themselves but the personal relationship he develops with each of his readers. It is also worth noting that, while Rundgren’s personal brand, developed out of his music in the real world, would make his subscription-based Web site attractive to people familiar with it, Hurley became much better known after he appeared on the Web than he would have been had he remained on his street corner. There is no single approach to garnering attention online.

The attention model may not be familiar to most people, but, as John Perry Barlow points out, it “was applied to much of what is now copyrighted until the late eighteenth century. Before the industrialization of creation, writers, composers, artists, and the like produced their products in the private service of patrons. Without objects to distribute in a mass market, creative people will return to a condition somewhat like this, except they will serve many patrons, rather than one.” (Barlow, 1996, 168) Or, as Dyson puts it, “Just as prominent patrons such as the Medicis sponsored artists during the Renaissance, corporations and the odd rich person will sponsor artists and entertainers in the new era. The Medicis presumably had the pleasure of seeing or listening to their beneficiaries and sharing access to them with their friends. This won them renown and attention as well as a certain amount (we hope) of sheer pleasure at experiencing the art.” (1995, 142) Whether or not the Medicis took sheer pleasure in the art produced by the artists they patronized, many people since have had that experience thanks to their largesse, just as many people will be able to purchase the music Rundgren produces because of the patrons whose money allowed him to create for a year.

Some commentators have gone so far as to suggest that attention is “the hard currency of cyberspace.” (Goldhaber, 1997, 182) Goldhaber points out that “…transactions in which money is involved may be growing in total number, but the total number of global attention transactions is growing even faster. By attention transactions I mean occasions when attention is paid to someone who can possibly make some use of having it, or is able to pass it on to someone else. People trade attention at work, at home, and in between, day in and day out. Anyone tied into the Web might make hundreds of such transactions a day, far more than the number of monetary transactions they are likely to be involved in.” (ibid, 188/190) The implication is that attention could ultimately replace cash as the unit of exchange, in digital communications networks if not in the real world.

This may strike many as absurd. After all, the money which we currently use as a token of exchange is actually worth something. Right? Increasingly, this is not the case. A dollar bill, for instance, is just a piece of paper; if it has any value beyond what one can ordinarily do with a piece of paper, it is because of a social convention: we have agreed that it has such a value. Some have argued that a dollar represents the labour which went into earning it. By this logic, a dollar is worth one sixth of a fast food worker’s hour of labour (or one five hundredth of a lawyer’s). There are many problems with this formulation (not the least of which is that money is often created without labour: when governments print more bills, for example, or when banks lend out deposits while keeping only a fraction, around 10%, in reserve), but let us assume it is correct. Since the assignment of value based on labour is essentially a social convention, there is no reason to believe that it could not be supplanted by another social convention, such as the assignment of value based on attention.

The tricky bit would be handling the transition from a labour economy to an attention economy.
