Chapter Six: Conclusion: Web Fiction Writers in Society

Like other sub-systems, it [technology] can be geared to certain social ends. The choice, determination and implementation of these ends and technological ventures which are linked to them are matters of general policy. Technology cannot escape the process of value judgment resulting from political struggles and orientations of society. (Hetman, 1977, 4)

If our thinking centres on the effect of technology on society, then we will tend to pose questions like, 'How can society best adapt to changing technology?' We will take technological change as a given, as an independent factor, and think through our social actions as a range of (more or less) passive responses. If, alternatively, we focus on the effect of society on technology, then technology ceases to be an independent factor. Our technology becomes, like our economy or our political system, an aspect of the way we live socially... It even becomes something whose changes we might think of consciously shaping -- though we must warn right at the beginning that to say technology is socially shaped is not to say that it can necessarily be altered easily. (Winner, 1985, 2/3)

...broad synthesizing descriptions of on-line culture overstate both the Internet's homogeneity and its independence from off-line contexts. (Kendall, 1999, 68)

We are supported as scholars and faculty in great measure by the public purse, and, unlike most arts and humanities, the justification for the money is largely that our activities inform public action. More important, we know much that is vital to national decisions and ought, as citizens, to contribute our knowledge -- both detailed social facts and general social perspectives -- to public discourse. (Fischer, 1990, 50)

The word made digital means different things to different people. For many writers, it means the potential to gain a large readership, and perhaps even make some money. For the executives at many entertainment conglomerates, it means new revenue streams. For many government officials, it is a confusing morass which offers them few effective guidelines for action. How these points of view come together is the main subject of this chapter. Before we get to that, however, it is worth taking a second look at the theories through which we view the media.

Media Theories Revisited

In the first chapter, I suggested that neither technological determinism nor social constructivism was sufficient in itself to explain all of the aspects of the relationship between society and technology; rather, both are parts of a larger process of technological change which has been called "mutual shaping." In Chapter Five, I looked at the stake of page designers over time, and showed that it could change not because the technology changed, but because the social structures around the technology changed. Neither determinism nor constructivism alone can adequately explain this phenomenon.

This isolated situation is not the only argument which favours mutual shaping. In Chapter Three, different forms of Web technology were explored, including "push" technology, WebTV and asymmetrical signal distribution. My argument was that these forms of the technology would change what individuals could do on the Web; in the worst case, they would no longer be able to create and upload their own work. In such a case, the formation of communities of individual creators on the Web would be next to impossible. Notice that this is a deterministic, rather than a constructivist, argument. I would not be able to make this case if I looked at the stakeholder groups alone.

Why have no other constructivists encountered this problem? I would suggest that it is because one of the original aspects of the current study is that it looks at a technology which is still highly contested, whereas previous constructivist studies were of technologies which had already achieved a high degree of stability. If a technology has stabilized, then what happens after its widespread dissemination into society is moot, since its social effects are, in a sense, an inevitable consequence of its adoption. If, on the other hand, you're looking at technological change from the inside, that is, while the form an artifact takes is still being contested, the effects of the various forms the technology could take matter. Nobody can predict the future. Nonetheless, it is possible to determine some of the social effects of a technology before it is introduced into society. Before stability occurs, individuals and society have choices; in weighing those choices, we must consider the foreseeable outcomes for the stakeholders involved. We must combine constructivist and determinist considerations to decide on personal technological use as well as political policy.

One of the potential pitfalls of social constructivist research is the temptation to write the history of a technology with the knowledge of the form in which it finally stabilized. This can lead to linear histories that simplify the conflict over the shape of the technology and lend its stable form a kind of inevitability. One of the advantages of studying a technology which hasn't achieved stability is that the diversity of relevant social groups, their visions of what the technology should be and the conflicts between them become apparent in all their messy, human contingency. One of the disadvantages, however, is that, because there is no clear guide to which groups will be relevant to the stability of the technology, the researcher must cast a wide net when defining which groups may be relevant. This is most obvious in Chapter Five, which contains my best guess as to which stakeholder groups will be involved in the emergence of the Internet as a publishing medium. The involvement of some of the groups may be decisive; the involvement of other groups may be irrelevant. We will not be able to say definitively which groups are which until the technology has stabilized.

My approach in this dissertation has been to identify relevant social groups, their stake in the technology, and how their view of the technology might affect the stakes other groups have in it. This appears in statements of the general form: "[STAKEHOLDER A] wants [TECHNOLOGY X] to develop in accord with [INTEREST a], but [STAKEHOLDER B] would be affected because of [INTEREST b]." Thus: writers [A] would like to use the World Wide Web [X] to distribute their work in order to be able to get more readers and perhaps make money [a], but this would mean that traditional print publishers [B] could lose much of their existing market [b]. Notice that this is not a predictive statement; I am not suggesting that the interests of one or the other stakeholder will ultimately determine the direction of the development of the technology. The main advantage of stating the interests of various stakeholders this way is that it foregrounds the contested nature of the technology by making clear how the visions of a pair of stakeholders differ.

One aspect of the general form of the statement is that it is commutative: it would work just as well (although the meaning would be somewhat different) if Stakeholder B's interest were stated first and Stakeholder A's second. Another aspect of this approach is that it can also be used to describe conflicts over technology where closure has been achieved, even where the technology failed. For example: entertainment conglomerates [A] tried to use "push" technologies on the Internet [X] because they thought they could make money from them [a], but individual users [B] did not accept push technologies because they preferred to search the Web for information they wanted rather than have information they may or may not have wanted thrust upon them [b]. Of course, this kind of statement is a simplification of a complex reality. However, it is useful for summing up the relationship between the interests of a pair of stakeholders in a given technology.

This kind of statement does have an inherent problem: it makes it appear as if technology is determined by the outcome of a single conflict between two stakeholders. We may come to the conclusion that the creation of technology involves a tug of war, with each side pulling in a different direction, the outcome of which is determined by the relative social and/or economic power of the stakeholders. Yet, throughout this study, I have stressed that the stakeholder groups in publishing on the World Wide Web are numerous, each with its own technological frame, its own understanding of how the technology should develop and what it should be used for. Rather than a single line, a better graphic representation of this situation would be a diagram of vectors. In vector geometry, several vectors of varying magnitudes and directions can be drawn in a single graph; summing them requires taking account not only of their lengths, but also of their directions. This is analogous to the present situation, where a large number of stakeholder groups are pushing for the development of a technology along a variety of lines.
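
To make the analogy concrete (the numbers are invented purely for illustration), imagine two stakeholder groups pulling with equal strength in nearly opposite directions, represented as the vectors v1 = (1, 0) and v2 = (-1, 0.2). Their sum is R = v1 + v2 = (0, 0.2): a short resultant pointing in a direction neither group intended. The technology still develops, but slowly, and along a line that corresponds to no single group's vision.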

To understand the larger picture of technological development, therefore, it is necessary to look at it as a series of statements about the competing interests of different stakeholders. Thus, to what has already been written in this section, we would have to add that publishers, whether individual or corporate [A], who use the World Wide Web [X] to distribute their work [a], may cause a decline in the use of printing presses [B], which stand to lose substantial work and, therefore, revenue [b]. We could also add that online booksellers [A] selling through the World Wide Web [X] hope to reap substantial profits [a], which seems to be adding to the financial burdens [b] of real-world independent bookstores [B]. And so on. (A summation of the major relationships between stakeholders developed in this dissertation is provided in Chart 6.1.)
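
For readers who prefer formal notation, the general form can also be rendered as a simple data structure. The following sketch (written in Python; the class, field and variable names are my own, invented for illustration, and come from no constructivist source) encodes two of the rows of Chart 6.1 and prints them as statements of the general form:

    from dataclasses import dataclass

    @dataclass
    class ConflictStatement:
        # One [A]-[X]-[a]-[B]-[b] statement of the general form used above.
        stakeholder_a: str  # group whose vision of the technology is stated first
        technology_x: str   # the contested technology
        interest_a: str     # what stakeholder A hopes to gain
        stakeholder_b: str  # group affected by A's vision of the technology
        interest_b: str     # how stakeholder B would be affected

    # Two rows from Chart 6.1, encoded as instances.
    statements = [
        ConflictStatement("writers", "the World Wide Web", "more readers, income",
                          "traditional publishers", "fewer readers (lose income)"),
        ConflictStatement("corporations", "push technology", "generate revenue",
                          "individual Web users", "dislike obtrusive information"),
    ]

    for s in statements:
        print(f"{s.stakeholder_a} want {s.technology_x} to develop in accord with "
              f"[{s.interest_a}], but {s.stakeholder_b} would be affected "
              f"because of [{s.interest_b}].")

Like the prose form, the sketch is descriptive rather than predictive; it merely makes the contest between a pair of visions explicit.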

Another aspect of social constructivist theory which requires revisiting is the assignment of actors into stakeholder groups: at some level, this must always be an arbitrary process. When I began the current study, for instance, the subject I thought I would be looking at was "fiction writers who put their work on the World Wide Web." It soon became apparent, however, that this was not a homogeneous group; at the very least, it consisted of sub-groups of people who put their work on their own Web pages, people whose work appeared on the pages of electronic magazines and people who wrote hypertext fiction. These groups do not have the same set of interests in the new technology. Most of the individual Web page creators, for instance, have also had their fiction published in print; they see the Web as one more venue for their work. Hypertext authors, on the other hand, could not create their works in other media (early print experiments in hyperfiction notwithstanding); for them, computers are not a convenience, but a necessity. In a similar way, we can see that individual Web page creators require different skills (coding in HTML, uploading material to the Web and so on) than writers whose work is published in an ezine (who need know no more than how to email their work to the zine editors, who are then responsible for designing the pages and putting them on the Web). All three sub-groups are united by a common goal (publishing fiction on the Web), yet have different investments in the technology (or, as Bijker might put it, define the technology in three different ways, seeing, in essence, different technologies).

Stakeholder A | Technology X | Interest a | Stakeholder B | Interest b
writers | World Wide Web | readers, income | traditional publishers | fewer readers (lose income)
writers | Web | readers, income | print editors, designers | fewer print jobs (lose income)
publishers | Web | cheaper distribution | printing presses | lose business
publishers | Web | increase readers | other media | lose business
online booksellers | Web | revenue | independent stores | lose business
portal sites | "sticky" Web design | keep surfers onsite | individual writers | smaller potential readership
governments | Web | censor pornography | writers | banning of their work
governments | Web | censor pornography | corporations | banning of their work
governments | Web | copyright enforcement | writers | lose fair use
governments | Web | copyright enforcement | corporations | maintain tight control of work
corporations | micropayments | generate revenue | individuals | generate revenue
corporations | push | generate revenue | individual Web users | dislike obtrusive information
corporations | Web TV | generate revenue | individual Web users | cannot upload own material
corporations | asymmetrical signals | generate revenue | individual Web users | cannot upload own material

Chart 6.1
Selected Applications of the Conflictual Model to Stakeholders in Web Publishing

Nor does the process stop there. People who put fiction on their own Web pages can be divided into two groups: those who are mostly interested in developing a readership and those who are mostly interested in finding a way to make money from their work. It is in the best interests of the former group to keep the Internet as open to individual contributions as possible; it may be in the best interests of the latter group to accept some form of corporate control if they can use the economic models the corporations come up with for their own profit. Moreover, some people would like both more readers and increased revenue, goals which, with some forms of the technology, may not be entirely incompatible. Thus, there are at least three different sub-sub-groups within this sub-group, each with its own interests which lead it to define the technology in different ways.

We could go further. Of the sub-sub-group which wants to use the Web to profit from their fiction writing, there are those who hope to be able to do it within the existing technology (i.e., by putting a chapter of a novel on the Web and asking those who would like the rest of the novel to pay by credit card) and those who feel that a more sophisticated economic model is necessary. The former group is less likely to go along with corporate economic schemes than the latter.

My survey of fiction writers on the Web effectively stops there, but I suspect that, if we had enough information, we could keep subdividing stakeholder groups into smaller and smaller units until we inevitably reached the level of the individual. Of course, research on all of the 80 million or so individuals currently on the Internet would tax the resources of even the most well-endowed research institutions! My purpose in pointing this out is not to invalidate the stakeholder model, but simply to stress that we must always probe how stakeholder groups are defined to ensure that statements about their interests have some validity.

The Use of Description

As the reader will have noted, this dissertation is primarily descriptive, explaining what people are doing, the reasons they give for doing it, who they are, etc. The proper relationship between description and theory should be a matter best left to the individual researcher; as Becker said, "The appropriate ratio of description to interpretation is a real problem every describer of the social world has to solve." (1998, 79) However, the truth is that the academy values theory over description; theory is felt to be the proper path to a deeper understanding of real world phenomena. It is necessary, therefore, for me to justify my largely descriptive approach.

I would like to start by observing that computer-mediated communication is a new phenomenon in communications history with unique features. While the former point is obvious, the latter isn't, since most of the people who write about the Internet apply existing theoretical constructs to it. If the Internet were simply an extension of an existing communications medium, this would be unproblematic. However, as implied throughout this dissertation, the Internet is developing into an extension of all existing communications media (see the variable-to-variable model I develop in Chapter Four). Applying existing theory to this new medium, as I argued in Chapter One, will necessarily distort our understanding of it by accentuating some of its features and downplaying others. Moreover, one of the main problems with the way governments approach the Internet is that they are trying to fit it into existing communication models, leading to attempts at regulation which are doomed to fail (as I showed in Chapter Four).

Before we can go too deeply into theories about how the Internet functions, then, we need some empirical evidence about what the Internet actually is. In the absence of this, researchers are like the blind men trying to describe an elephant: each may have a perfectly workable theory about their small part of it, but none are able to grasp the whole. The Internet thus becomes a Rorschach test for researchers; if you look hard enough, you will likely be able to find some area of online communications to support your theory. Great for the academy, perhaps; not so useful for people who are trying to understand the medium (including businesspeople and legislators).

In addition, as mentioned at various points in the dissertation, the World Wide Web is in a constant state of flux.

The replicability of CMC field research is difficult, if not impossible, for two main reasons. On a technological level, the Internet is permanently changing its configuration and supporting technology. The underlying networking protocols cannot guarantee the same conditions when replicating experiments simply because each time the path of information communication is unique; thus, the time delay and consequences connected with it are different. On a communication level, the difficulties in replication come from the creative aspect of language use. (Sudweeks and Simoff, 1999, 38)

For this reason, researchers on digital technologies have to be especially careful to provide detailed descriptions of their subjects; without such descriptions, it soon becomes impossible to determine whether their theories accurately explain the phenomenon under study.

The combination of the newness of the medium and its ephemerality suggests another reason that current studies should be heavily descriptive: they should be seen as the foundation on which theoretical constructs can, in the future, be built. "A definitive history of the Internet of our times is decades from being written. The various perspectives being written now are the basis on which we build this history." (Costigan, 1999, xx) Becker made the same point in a different way: "I worked my way through graduate school playing piano in taverns and strip joints in Chicago. Should ethnomusicologists study what every tavern piano player (the kind I was) plays in all the joints on all the streets in all the world's cities? No one would have thought it worthwhile to do that around 1900, when a definitive study could have been done, say, of the origins of ragtime. But wouldn't it be wonderful if they had?" (1998, 74)

A related reason for doing descriptive work is that it can upset the cozy assumptions which develop around a subject. "What does all this description do for us? Perhaps not the only thing, but a very important one, is that it helps us get around conventional thinking. A major obstacle to proper description and analysis of social phenomena is that we think we know most of the answers already." (ibid, 83) For instance, many people believe that fiction on the World Wide Web is dominated by science fiction and fantasy, the literature of the young, technically literate people demographers tell us make up the majority of Netizens. In this dissertation, I devoted many pages to describing the wide variety of stories actually available. I could have simply said that the general impression was wrong, but I believe the evidence I have gathered is a much more eloquent argument.

Resolving Conflicts

To this point, we have looked at the various stakeholders and shown that their perceived interests often conflict. Theory, as well as our own experience, suggests that technological artifacts do achieve a form of stability. Perhaps the most important question open to us is: how do we go from an initial state of conflict to a state of stability? One answer is enrolment.

"This describes the process by which a social group propagates its variant of solution by drawing in other groups to support its sociotechnical ensemble. More than in the other configurations, the success of an innovation will here depend upon the formation of a new constituency -- a set of relevant social groups that adopts the technological frame [note omitted]." (Bijker, 1995, 278) To enroll somebody in your stakeholder group, you must convince him or her to align his or her interests with your own. The main tool a stakeholder wields to enroll members of other groups into his or her group is rhetoric; using a variety of arguments, the stakeholder must convince members of other stakeholder groups that it is in their interest to adopt the technology which the original stakeholder wants.

Thus, when corporations felt it was in their interest to develop "push" technologies, they had to find a way to enroll Internet users, to get them to adopt it. The corporations promoted the technology as a means of diminishing information overload: simply sign up with a push service, tell it what you are interested in, and you will get the information you want delivered to your desktop. No more frustration surfing the Web for hours to find the one piece of information you need. The Internet users who did sign up for these services had been enrolled in the corporate framework for understanding the technology. However, the majority of Internet users did not sign up for it. They valued their ability to search for information themselves, and resented the intrusive nature of the technology. You could say that the technological frame through which they perceived the Internet could not be enrolled into the frame of the corporations.

So, in order to further their goals, the corporations needed to find a new stakeholder group which could be enrolled into their technological frame. Bruno Latour suggests that one way to enroll people in your frame is to invent new goals (1987, 114) and use them to develop new stakeholder groups. (ibid, 115) This process is beginning to happen around Web TV. Before its invention, people would watch television (more or less) passively. Web TV creates a whole new experience by making television interactive. A general dissatisfaction with television notwithstanding, there is little reason to believe that anybody actually wants or needs interactive television. Nonetheless, if the creators of Web TV can convince enough people to enroll in their technological frame, a whole new set of stakeholders will emerge: interactive television watchers (as opposed to more passive television watchers). If this happens, it will be because the switch in technological frame from passive to (somewhat more) active television viewer is not as great as the switch from active to passive computer user (which would have been necessary for push to succeed).

Latour also argues that it isn't sufficient to enroll people into your technological frame, since people are by nature unpredictable, and may use the technological artifact you have created in ways that you had not expected or intended. It becomes necessary, then, "to control their behaviour in order to make their actions predictable. [original emphasis]" (ibid, 108) Rhetoric is not necessarily the best way to accomplish this, since, once people have an artifact in their hand, they no longer have to listen to what its creator says about it. (How many people read the manual? Really.) The best way to control the behaviour of those who take up a technology, of course, is to design the artifact in such a way that it constrains the actions of its users, limiting them to using it the way the dominant stakeholder group intended. As we shall see, this occurred when radios for public consumption were designed without transmitters, a design which served the interests of the corporations that benefited from the development of commercial radio. This is also true of some of the more extreme methods of changing the Internet.

The resolution of conflicts between stakeholders has many dimensions. A stakeholder group that contains a small number of individuals is more likely to develop a united strategy for advancing its interests than a stakeholder group with a large number of individuals, whose interests are not likely to be as homogeneous, and whose efforts, therefore, are likely to be fragmented. In the present case, for instance, there are two broadly defined groups whose interests are largely in opposition: large entertainment conglomerates and individual Web page creators. The entertainment conglomerates have a common goal (to maximize profit for their shareholders), and, given a common corporate culture, can be assumed to come up with similar methods of achieving these ends with a given technology. As we have just seen, individuals have a much greater range of interests, and can be expected to follow different paths to achieve their various goals, some of which will be aligned with the interests of the corporations.

There is precedent for this type of analysis. As we shall see, the fate of radio was contested in a manner similar to that of the Internet. Two broad stakeholder groups were in conflict over the way the technology should develop: emerging entertainment corporations which felt that there was profit to be had in the new medium, and public rights groups (representing unions, religious organizations, educational organizations and the like) which felt that the airwaves should be employed for the larger public good. As McChesney (1993) showed in his exhaustive work on the subject, those who wanted the airwaves to continue to be publicly accessible felt that there were several different ways of assuring this, thus pulling the movement in a variety of directions. Had they formed a united front against the corporate interests, there is no guarantee they would have prevailed, of course; however, being fragmented made it more difficult for the group to achieve its goals.

One caveat to this general rule must be noted. The failure of push technologies shows that small, relatively homogenous groups do not always prevail. As it happened, the ethos of sharing information united the much larger computer network user community against push technologies. So, while it is generally easier for a small number of individual stakeholders to agree on a strategy to advance their interests than for a large number of individual stakeholders to do so, a simple comparison of the size of stakeholder groups is not a sufficient way of determining the relative effectiveness of the strategies of stakeholder groups.

In Chapter Three, I argued that entertainment conglomerates continued to develop technical methods of turning the Internet into a form of television after the demise of push technologies because digital television offered them the best hope of imposing a workable economic model on the Internet. In the current discussion, we can begin to see a complementary reason for them to continue to pursue this line of research. Approximately 50 million North Americans use the Internet; however, over 340 million people live in the United States and Canada. Thus, there are somewhere in the neighbourhood of 290 million people who do not use the Internet, a huge potential market. Many of these people are too poor to be able to afford computers and monthly Internet access fees, of course. Many have little experience with the technology, and are uncertain how it will benefit them. By making the Internet more like television (by, for instance, adding a set-top box which allows Web access through a person's TV set), entertainment conglomerates hope to make it comfortable for people who do not currently use it, rendering it analogous to a technology which they already use. This is in line with existing notions of how technology is adopted throughout society:

Early adopters are assumed to be different from late adopters in their willingness to try new things or make changes in their lifestyles. Previous research into the diffusion of new technologies shows that early adopters are different from early and late majority adopters of a technology in their appreciation of the technical aspects of innovation... This makes late adopters much more susceptible to the influence of brand-style marketing, where advertisers attempt to create positive personality or images with which consumers will associate their product [note omitted]. (McQuivey, 1997, 7)

In particular, late adopters are more likely to use borrowed expectations, knowledge of how existing media are structured and used, to describe new, less familiar media. (ibid, 5) To some extent, their existing expectations determine how they use the emerging technology rather than the new possibilities for communication which it creates.

In this way, the entertainment corporations hope to bypass current users entirely and sell their vision of the Internet (which remains in their control) to a stakeholder group who, because they have little experience of it, have no emotional or intellectual investment in keeping the Internet open as a medium for individual, two-way communications. Again, this is not dissimilar to the efforts of early radio entrepreneurs to enlist individual listeners to their cause by arguing that government regulation of the airwaves (which would be largely to the entrepreneurs' benefit) would benefit listeners because they would be able to hear clear signals. Most listeners were not interested in transmitting, and therefore didn't even know what the "convenience" of a regulated system would cause them to lose.

This suggests another general rule. A technological artifact makes a long journey from a gleam in an engineer's eye to something which is diffused throughout society: it may start as an academic's theory; prototypes must be designed, built, tested and redesigned, rebuilt, retested, and so on; once a workable model has been achieved, it must be mass produced; distribution networks must be opened; and demand for the artifact must be generated. Access to the early parts of this production stream gives stakeholders an advantage over those who only have access to the later parts. Those who have access to the laboratories have the power to create the artifact, whereas those who have access to artifacts after they are distributed only have the power to accept or reject them. The failure of push technologies suggests that acceptance or rejection can be a strong power; however, as we saw in Chapter Three, those who fund the research, and, therefore, control the research agenda, simply keep developing technologies which advance their interests.

One of the most important aspects of control of the earliest parts of the development stream of new technologies is the ability to direct research funds and efforts by defining problems with existing technologies. According to Hughes, "A salient is a protrusion in a geometric figure, a line of battle, or an expanding weather front. Reverse salients are components in the system that have fallen behind or are out of phase with the others." (1987, 73) Poorly designed motor systems became a reverse salient in automotive design when public environmental consciousness grew to the point that there were protests against the pollution caused by car exhausts. Research efforts tend to cluster around solving the problems posed by reverse salients, since they affect all manufacturers in an industry. Thus, all automobile manufacturers had to devote research resources to improving the designs of their engine systems to lessen polluting exhaust.

Reverse salients are most often seen as technical problems; however, as Hughes points out, "the defining and solving of critical problems is a voluntary action." (ibid, 74) We can go further than this and suggest that how reverse salients are defined and solved depends on the interests of the stakeholder doing the defining and solving, and that they can be negotiated between different stakeholders. Defining and solving reverse salients is a social act.

We can see this from the current study. The traditional phone system, on which the Internet has piggy-backed for most of its existence, gives equal bandwidth in both directions. As online applications such as streaming video become more bandwidth intensive, capacity becomes increasingly strained. Equal bandwidth into and out of the home is considered the reverse salient in this situation by major corporations since, in their vision of a digital future dominated by video-on-demand, many people will not use much of the bandwidth out of the home, which will be wasted. However, for individuals who desire to upload their own video to a server, equal bandwidth in and out of the home would be a necessity; for them, the reverse salient is the lack of bandwidth on the system adequate to everybody's needs. How one defines the problem determines the solutions one will seek: as we saw, some corporate researchers are looking at asymmetrical digital networks to solve the reverse salient as they have defined it; for individuals, increasing network capacity would be the solution to the reverse salient as it affects them.
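
The two definitions can be put in the simplest arithmetic terms (the particular split below is invented for illustration). Given a fixed line capacity C, a symmetric design allocates C/2 downstream and C/2 upstream; an asymmetric design might instead allocate 0.9C downstream and 0.1C upstream. From the corporate definition of the reverse salient, the symmetric design "wastes" most of its upstream C/2; from the individual uploader's definition, the problem is that C itself is too small, and no reallocation of it is a solution.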

In terms of who controls the research agenda, computer software is unique. With most hardware, huge research and development laboratories are required to create technological advances, limiting those who can develop new artifacts to those with a large amount of money; a new computer program, by way of contrast, can be created by a couple of people in their basement. The history of computer programming contains many stories of people who created a small program for their own benefit which was then taken up by the general computer-user community; in fact, some argue that this is the only way truly original computer software is developed (see, for example, Rushkoff, 1999). In this way, individuals have access to early parts of the production process in digital communication.

One other general rule which can be stated is that economics is coming to play an increasingly important role in the stabilization of the definition of technological artifacts. As previously mentioned, various stakeholder groups use rhetoric to convince other stakeholders that their vision of a technology is correct, and to enlist members of other stakeholder groups into their group. "We need others to help us transform a claim into a matter of fact. The first and easiest way to find people who will immediately believe the statement, invest in the project, or buy the prototype is to tailor the object in such a way that it caters to these people's explicit interests. [original emphasis]" (Latour, 1987, 108) Whereas debates over such issues may once have happened in scientific journals or at learned conferences, rhetorical persuasion favouring specific formations of technology now largely takes place through advertising. Thus, most people are learning about, say, Web TV from the television and print advertisements paid for by the corporations which are pushing the technology. This gives stakeholder groups with the financial resources to push their agendas a tremendous advantage over those without such resources.

Moreover, the rhetoric of the corporate vision of digital technology, when it appears in newspaper and magazine articles, is delivered by "experts," technology researchers or pundits who are assumed to have knowledge which is not available to "non-experts" (that is, members of the general public). "[I]t might be argued that...unorganized, uncoordinated members of the public, lacking in the advice of experts, are not in a strong position to forcefully express their views." (Elliott and Elliott, 1977, 21) In essence, formal research is privileged over individual experience; many people who might otherwise be satisfied with a technology will feel the need to obtain a new technology simply because experts tell them they will benefit from its use.

With previous technologies, this may have been enough to ensure widespread adoption, closing debates about the definition of the artifact. However, the Internet gives individuals and representatives of stakeholder groups concerned with how technology affects individuals a powerful means of presenting their case: chat rooms, personal email, etc. If they choose to use it, stakeholders who disagree with the corporate agenda have an important organizing tool with which to enlist others to their viewpoint about the technology. (This is another argument for the corporations to attempt to enlist people who are not currently on the Internet to their vision of what it can be: such people have no access to the rhetoric of individual stakeholders who have a different vision of the technology, except on those rare occasions when such arguments make it into traditional media.)

These three ideas (small, homogeneous stakeholder groups have an advantage over larger, more heterogeneous groups; stakeholders with access to early stages of technological development have an advantage over those who only have access to later stages; and stakeholders who can afford advertising have an advantage in disseminating their rhetoric over those who cannot) are mutually reinforcing. Groups with enough money for advertising also tend to be those who fund, or at least have access to, the research of large laboratories (and the funds to develop the discoveries of the laboratories into viable products). Since these things require the accumulation of large amounts of capital, there are few corporations which can accomplish them.

This may just be a case of pointing out the obvious: that the wealthy have means of advancing their interests which are not available to the rest of us. "The private citizen is greatly disadvantaged financially by comparison with private companies, public corporations, trade unions and, as in planning questions, the state itself." (Williams, 1977, 34) As Bijker rightly points out, "explanations in terms of power so easily result in begging what seem to be the most interesting questions." (1995, 11) But the obviousness of a truth does not make it unworthy of telling. What I have tried to do here is show some of the actual mechanisms by which wealth shapes public debate and, ultimately, the nature of technological change.

Bias?

There is a problem at the heart of social constructivism which needs to be addressed. Hands refers to it as the reflexivity problem: "If scientists make decisions on the basis of their individual or group interests, then that should also be the case for the social scientist who studies science." (1998, 716) Applying their standards to their own work, social scientists do not have a privileged vantage point from which they can trace unbiased histories of technological development. "If sociologists can really find out what is going on out there in the world of science (that it is socially determined), then it means that they have the power to discover (not just construct) the nature of the objects in their domain (the social actions and beliefs of scientists), but this is precisely ability [sic] that they deny to the scientists they study." (ibid, 717)

One method of dealing with this problem is to accept that all findings, including that of the social scientist, are relative, but nonetheless offer invaluable insights. "The argument is that there are no supralocal standards of rationality, truth, or anything else; there are only local and context-dependent standards of valuation. But while these standards are local, they are relevant, important, and binding on those agents in that particular local context; no universal standards does not mean no standards." (ibid, 718) This study, for instance, is limited by the nature of the analytical tools which existed when I wrote it, by the availability of information to me, etc. Moreover, the rapidity with which technology changes guarantees that the situation as I have written about it will not exist in exactly the same way when others read what I have written. Despite this, I obviously feel there is value in describing the state of technological development at this given moment in time.

A second, somewhat less academic way of dealing with this problem is for the author of a work to admit his or her biases, giving the reader the opportunity to assess how they may have affected the work. In this section, I would like to address this issue. I would like to start by pointing out that, while historical studies of technological development are necessary to help tease out theoretical structures, the enterprise of academic research must not stop there. What is the purpose of developing theoretical frameworks for understanding real world phenomena? I believe that it is to then take those theories and apply them to the world as it currently exists. Theory which exists for its own sake, with little relevance to (indeed, with decreasing reference to) the real world, is sterile; an enjoyable parlour game for academics, perhaps, but of little value outside the academy. In the famous Marxist dictum, "the philosophers have only interpreted the world...; the point, however, is to change it."

Given this, to the list of stakeholder groups in a new technology, we must add academics. Many will be uncomfortable with this role. However, for me, this is a most compelling argument for studying technologies which have yet to be fixed in the mind of the general public: the possibility of having a positive influence on the development of a technology, a possibility which is all but extinguished once the technology has achieved stability.

It should be clear from the dissertation, then, that I give primacy to the interests of individual writers who are currently using the Web as a means of personal expression. In the figures in Chapter One which show the various stakeholders in old and new publishing media, the writer is at the top of the chain; this reflects my belief that in all media (including collaborative media such as filmmaking), the writer who originates the material is the most important creative figure. Moreover, by examining the interests of writers first and at much greater length than other stakeholders (in Chapter Two), the interests of the other stakeholders are implicitly (when not explicitly) compared to and seen in the light of the interests of writers. If I had started the dissertation with a similar consideration of the interests of corporations, I would likely have ended up with a similar set of stakeholder relationships, but with a different emphasis which probably would have led to a different conclusion.

There are two reasons for this approach. The obvious one is that I am a writer who is currently using the Internet as a means of distributing my own work. The not so obvious one is that I am somewhat naive: I would like to believe the rhetoric of individual empowerment which surrounds the Internet; I really would like to believe that it can remain a personal communications medium, despite the powerful economic forces which would like to turn it into a digital version of television.

Unfortunately, outside of the business press, there is very little debate about the direction the technology is heading. The broad public is not discussing this issue, which will arguably have a great effect on society in the 21st century. This is a problem for social constructivism: how to account for stakeholders, people whose lives will be deeply affected by a technology, who do not have a say in its development. Since they are not active, their interests tend to be ignored. And yet...

Four or five years ago, I was sitting with a friend in a cafe on Yonge Street, watching the people pass by. I told my friend that most of those people were living in a world that no longer existed, by which I meant that they still thought in terms of a society structured around industrial technologies, but our society was already beginning the transition to digital technologies, which would open up the possibility of completely new social structures, many of which will inevitably be instituted. Like the monks busy at work in the scriptoria after the invention of the printing press, we are all living in a world that few of us have truly grasped. The people whose voices are the weakest in this debate have the most to gain (or lose) by its outcome.

To date, debate about the future course of digital communications has been largely missing from public discourse. One does not have to subscribe to paranoid fantasies to recognize that such a debate is unlikely to be carried on in traditional media, since the same corporations which dominate traditional media are heavily invested in specific formations of digitally networked communications. "One cannot expect that those who are sponsoring the development of a new technology will indulge in listing its undesirable social consequences, since an inherent feature of the promotion process is to minimise these consequences and to argue that they can be technologically overcome." (Hetman, 1977, 7) My hope in writing this dissertation is that it will contribute to a public discussion on these issues. The reader can decide whether this approach is valid, and how it affects what has been written.

This focus on the use of technology by individual people has led me to believe that one important quality a technology can have is that it enables individual user autonomy more than existing technologies. What this means is the subject of the next section.

Recommendations About Individual User Autonomy

Autonomy is, of course, the ability of an individual to choose how to act. A World Wide Web which maximizes individual autonomy would allow surfers to browse wherever they wanted to go, find whatever information they needed and choose whatever experiences they wanted; further, it would allow individuals to communicate with each other, as well as upload materials to the Web in all available formats, from plain text to full audio and video. The way some technologies diminish individual autonomy is obvious. Push technology, for instance, interferes with the individual's ability to determine his or her experience of the Web since it requires users to accept messages on their screen which are determined by others at times convenient for the others. Web TV decreases individual autonomy by making it impossible for individuals to produce their own material or upload it to the Web.

Some technological interference with individual autonomy is much more subtle. Internet Service Providers which promote their own material on their home pages interfere with individual access to information by making some information harder for a Web surfer to find. Some people may find that the convenience of having all their needs served by America Online, Compuserve or other such companies is worth giving up some autonomy (especially people with children who want to ensure that they surf in safe areas of the Internet). Those who are not so motivated, however, should find themselves ISPs which are not also content producers.

A similar subtle problem occurs with search engines which sell preferred spots at the top of user searches; this interferes with the ability of individuals to find exactly what they are looking for. Again, some people may prefer to have search engines funded in this way rather than by more direct advertising or, horror of horrors, by having to pay for searches themselves. Those who don't should organize to demand that search engines publish their policies on selling positions at the top of searches in a prominent place on their home pages, and should avoid using search engines whose policies make it harder to find, free of commercial bias, exactly the information for which they are looking.

Other technologies are on the horizon, and it is possible (indeed, it is likely) that there will be yet other technologies in the future which we do not see coming in the present. There is a general approach which individuals can take to determine if these technologies advance their interests. Confronted with a new technology, an individual should ask the following questions. What does the technology allow me to do? What does the technology make it harder (or impossible) for me to do? As we have seen, the answers to these questions are not always obvious; technologies which are touted as having great benefits may, in fact, have serious drawbacks for individuals. In any case, the balance between the answers to these two questions will determine whether an individual should accept or resist a new technology.

As digital networks have become more "user friendly," the number of people who use them has increased dramatically; these people are increasingly not programmers, and, therefore, do not share a common view of the technology. Those who would like to see the Internet continue as a two-way medium of communication, offering a maximum of user autonomy, must enroll those who do not see their stake in the technology in these terms.

In practical terms, this means presenting arguments favouring this use of the technology in appropriate chat rooms and other online fora. Of equal, if not greater, importance, is to discuss these issues with individuals as they sign up for online services and begin their experience of the online digital world; enrolling newbies is a crucial means by which individual stakeholders will increase the number of people who resist technologies which lessen their autonomy. I believe this also means that pundits of the digital age must step up their efforts to use traditional media to inform the general public about these issues. The battle for the allegiance of people who are not currently Internet users is likely to be an important determining factor in the shaping of digital communications networks.

While user autonomy is an obviously important principle for individuals, it also has some policy implications which governments should consider. All too often, government regulatory agencies are "captured" by the industry which they are supposed to regulate (for an example of how this worked in Canadian television, see Hardin, 1985). In such situations, the public good becomes conflated with corporate benefit. As I have argued in this dissertation, however, what is good for individual members of the public conflicts with what is good for the major entertainment-producing corporations involved in the Internet. Governments which are serious about their rhetoric of the Internet empowering individuals must take the interests of individuals seriously. Thus, they should embrace the concept of maintaining individual user autonomy on digital communications networks in their regulatory and other deliberations.

To be at all effective, any regulation must be international. On the face of it, regional regulation of an international communications network seems impossible; international cooperation is necessary. As regulation of the international telephone system shows, it is possible for governments to find common ground in the regulation of communications networks which transcend their borders. There are also international treaties governing transborder transmission of radio and television signals.

In order to best serve the interests of their citizens, governments should negotiate agreements which are in accord with the principles of individual autonomy stated above. Unfortunately, negotiations such as those of the World Trade Organization and the World Intellectual Property Organization are driven by the interests of transnational corporations, not individuals; because of this, governments cede much of their power to govern within their borders to distant bureaucracies which have no stake in local conditions. This is inherently undemocratic. For our present purposes, it is important to note that it also undermines the potential of the medium as a two-way communications system benefiting individuals. As a guide to their negotiations, governments must insist that any international treaty dealing with digital communications networks be based on the principles of user autonomy outlined above.

Whether or not the World Wide Web remains a means of two-way communications with the potential to build communities or becomes predominantly corporate-driven and commercial is a matter of design, not default. The principles by which we determine the public good will be an important factor in determining the shape technology takes.

Radio in the United States: A Cautionary Tale

It should be clear, from the discussion in Chapter Three, that, although they are having difficulty finding a way to make money from supplying information over digital networks, powerful economic players continue to look for ways to do so. This could profoundly change the nature of the Web, perhaps transforming it into something its current users would not recognize. If this seems farfetched, we only have to look at the history of radio in the United States, which in many ways parallels the current situation with the Web, to see how such a transformation was accomplished in the past.

In the beginning of broadcast radio in the first and second decades of the 20th century, many of the people who transmitted signals were "amateurs who didn't care much about radio's profit-making potential. They got involved with wireless because they were fascinated by the new technology. The amateurs were hackers, basically -- hobbyists, tinkerers, and techno-fetishists who huddled in their garages, attics, basements and woodsheds to experience the wondrous possibilities of the latest communications miracle." (Lappin, 1995, 177) As Lappin suggests, this is directly analogous to the early days of computers, when enthusiasts would build their own machines from kits, and hackers were always on the lookout for a more elegant way of doing things. Others have suggested that, "Radio began as a distributed, many-to-many, bottom-up medium, much like the early days of the World Wide Web..." (Johnson, 1997, 147) Although this second analogy isn't as exact as the first, since those using the Web as a two-way communications medium are, for the most part, not interested in tinkering with the hardware, it is worth keeping in mind that the earliest users of radio could be producers as well as consumers of programming.

Until well into the 1920s,

the new broadcasters were a colorful group. Several distinct categories emerged. First, there were the big manufacturing interests like Westinghouse, GE and RCA. Then there were department stores like Gimbels and Wanamakers which operated stations for self-promotion. Some hotels had stations. There were stations in laundries, chicken farms and a stockyard. In 1922, eleven American newspapers held broadcast licenses, mainly, one suspects, out of self-defense, just as newspapers today were quick to populate the World Wide Web. Churches and universities operated stations. And there were many so-called ego stations operated by wealthy individuals in the spirit of noblesse oblige, or just for the hell of it. (Rowland, 1997, 155)

As with the Web, anybody could transmit as well as receive messages: "When radio was invented, the expectation was that there would be as many transmitters as there were receivers. In the early 1920s, what is now called 'Ham radio' -- i.e., amateur radio -- [was] the dominant mode of interacting with radio." ("Web (vs) TV," 1997, 37) Radio stations did have to be licensed, but "the U.S. Radio Act of 1912 placed no restrictions on ownership of a license beyond American citizenship..." (Rowland, 1997, 155) so anybody who applied for one received it.

Throughout this period, major economic powers were curious about the potential profitability of the new medium.

...almost all research emphasizes the manner in which radio communication was dominated by a handful of enormous corporations, most notably RCA, which was established in 1919 under the auspices of the U. S. government. RCA was partially owned by General Electric (GE) and Westinghouse. By the early 1920s the radio industry -- indeed, the entire communications industry -- had been carefully divided through patent agreements among the large firms. RCA and Westinghouse each launched a handful of radio broadcasting stations in the early and mid-1920s, although the scholarship tends to emphasize the American Telephone and Telegraph (AT&T) Company's WEAF of New York because it was the first station to regularly sell airtime to commercial interests as a means of making itself self-sufficient. (McChesney, 1993, 5)

As we have seen, this mirrors the current situation with the Web. As Rick Salutin observes, "Technological changes in communications are always targets for the rich and mighty -- who want to own them to increase their own profits and power." (1997, A10)

There was a problem, though: it was not immediately apparent how money could be made from this new medium. Advertising was not seriously considered because it was felt the public would not stand for it. WEAF thought it could get around this antipathy by selling blocks of airtime to sponsors rather than individual commercials; in advertising for its new concept, "toll broadcasting," AT&T claimed that, "The American Telephone and Telegraph Company will provide no program of its own, but provide the channels through which anyone with whom it makes a contact can send out their own programs." (Lappin, 1995, 221) In essence, the company pioneered what have come to be known as infomercials. However, as McChesney notes, "AT&T's ability to sell its airtime was undermined by the willingness of the other stations, including those owned by RCA and Westinghouse, to give time away for free [note omitted]." (1993, 15) As we have seen, information available for free on the Internet likewise undermines the ability of producers to charge for their content.

Other economic models were considered during this period. The British model (where the government levied a tax on radio components to pay for the BBC) was suggested by some, but rejected by powerful forces (political as well as economic) as an unnecessary government intrusion into private speech and enterprise. Some suggested that wealthy patrons sponsor radio, as they had libraries and educational institutions, but this didn't go very far. For a long time, this problem seemed insoluble. "Thus a single question appears over and over on the pages of Radio Broadcast magazine throughout the first half of the 1920s: Who will pay for radio broadcasting?" (Lappin, 1995, 219) As much of Chapter Three demonstrated, this is directly analogous to the current situation with the Web: "The Big Question is exactly the same: Where will the money come from to pay for content?" (Rowland, 1997, 322)

WEAF, which pioneered one form of advertising, added a new wrinkle to broadcasting, one which would prove vital, when "AT&T realized that it could offer toll broadcasters access to an even larger listening audience (not to mention some impressive production economies of scale) by linking a few radio stations together with phone wires. AT&T called this innovation 'chain broadcasting,' and it was first tried successfully in the summer of 1923, when programming that originated from WEAF in New York was simultaneously broadcast by WJAR in Providence, Rhode Island, and WMAF in South Dartmouth, Massachusetts." (Lappin, 1995, 222) By the mid-1920s, then, a successful economic model (commercial network broadcasting) was in existence; it only required broadcasters to overcome their squeamishness about advertising to exploit it.

Oh, and to do something about all those non-commercial broadcasters clogging what would turn out to be immensely profitable airwaves.

There was a problem with the unregulated airwaves: signals from various stations would overlap or otherwise interfere with each other. This was annoying for listeners. Perhaps more importantly, it made advertising much harder, since radio stations could not guarantee that listeners tuned to their frequency would, in fact, hear their station from one block to the next. "When [radio] companies began to test the commercial potential of radio broadcasting, they increasingly clashed over the use of frequencies. Despite numerous industry meetings and considerable government prodding, commercial radio stations could not reach an agreement on how to allocate spectrum, assign channels, and generally police compliance in order to minimize interference." (Mosco, 1989, 187) No less a personage than then-Secretary of Commerce Herbert Hoover argued that legislation was necessary to make it "possible to clear up the chaos of interference and howls in radio reception [note omitted]." (McChesney, 1993, 17/18) Executives in the radio companies readily agreed.

The rhetoric of chaos is echoed in some current writing about the World Wide Web. One commentator rather apocalyptically claimed that

Throughout history, in times of war, society is chaotic. People are doing what they can to make sense of the environment, and rarely is there any sense of structural order. Then, when the war is over and the dust settles, when the custodial forces take over or are restored to power, the communal structure returns and a sense of order reigns. Ironically, this paradigm mirrors the current situation with the Internet and society. While not exactly a war zone, the Web is a chaotic environment, with people trying to make sense of information where no one organization is in power and everyone is attempting to take their advantage where they can. ("Opening the Gate: Increasing Content of the Internet," 1997, 151)

The problems with the Web are quite different from those of radio: pages are created and abandoned on an almost minute-by-minute basis; there are so many pages that it is increasingly difficult to find precisely what one wants; different kinds of software make pages inaccessible to some users; there is no central authority which controls entry; and so on. One writer suggested that, "If the DNS [domain name system] discord continues, the stage could be set for a reprise of the spectrum wars of the 1920s and '30s..." (Simpson, 1997, 92) Regardless of the differences between the two media, the rhetoric is eerily similar; the cover of a prominent computer magazine proclaims: "Reinventing the Web: XML and DHTML will bring order to the chaos." (1998)
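
The DNS is worth a moment's pause here, since it is the closest thing the Web has to the radio spectrum: a namespace in which each name is unique and which someone must administer. All the system does is map human-readable names onto the numeric addresses machines actually use. A minimal sketch in Python (the domain "example.com" is a placeholder, not a site discussed in this study) shows the mapping at work:

    # A minimal illustration of a domain name lookup: the resolver
    # translates a human-readable name into the numeric address
    # machines use. "example.com" is a placeholder domain.
    import socket

    address = socket.gethostbyname("example.com")
    print(address)  # prints the numeric (IP) address the name maps to

Whoever administers that mapping decides which names exist at all, which is why the question of who should run the DNS generates such discord.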

In the case of radio, the rhetoric of a communication system in chaos was used as a pretext for government regulation. "In 1924, the niggling problem of what to do about remaining two-way radio in the hands of citizens -- amateur radio -- was dealt with by an international convention on the allocation of radio spectrum. Amateurs were henceforth to be denied access to radio frequencies below two hundred meters, which meant, in terms of contemporary engineering knowledge, they no longer had access to the only part of the spectrum usable for long-distance radio communication." (Rowland, 1997, 148) This was only the first step in the process in the United States, though, since the majority of licenses for radio transmitters were not, at the time, in the hands of commercial broadcasters. More direct action was needed.

In 1927, Congress created the Federal Radio Commission (FRC) to deal with the new medium. The Radio Act which established the FRC divided the radio spectrum into "clear channels," each of which was reserved for a single station broadcasting at high power, and other channels which had to be shared by more than one lower-powered station. The little-known FRC General Order 40 of August 1928 mandated a reallocation of the frequencies on the radio spectrum. General Order 40 signaled the end of radio as a meaningful two-way communications medium and the beginning of corporate hegemony over it.

"[O]f the first twenty-five stations set aside for clear channels by the FRC, twenty-three had been licensed to broadcasters affiliated with NBC." (McChesney, 1993, 20) Non-profit broadcasters, including religious groups, unions, educational groups and individuals, were given frequencies with lower power, which they had to share, sometimes being able to broadcast only a few hours a day. This made it increasingly difficult for such stations to maintain any sort of financial stability, causing many to eventually fail. Thus, "without having to actually turn down the license renewal applications of very many broadcasters, there were 100 fewer stations on the air within a year of the implementation of General Order 40," (ibid, 26) the vast majority of which were non-commercial.

The FRC made this decision based on a vague phrase in its mandate, which required that broadcasters be licensed according to the "public interest, convenience or necessity." The FRC reasoned that the most financially stable broadcasters would best serve the public interest (a form of logic which continues to be applied to television to this day), and based its licensing decisions accordingly. There was a problem with this reasoning, though: in some cases, non-commercial stations had better equipment and more capital than commercial stations, which were only just beginning to realize the potential economic benefits of radio. So, a second rationale for licensing grew out of General Order 40: that radio stations should not be proponents of "propaganda," that is, a single point of view. Thus, labor, religious and educational groups were labeled propagandists and excluded from meaningful participation in broadcasting. Corporations, whose only motive was profit, were not.

The effects of these FRC decisions should come as no surprise.

Following the implementation of General Order 40, U. S. broadcasting rapidly crystallized as a system dominated by two nationwide chains supported by commercial advertising. Whereas NBC had twenty-eight affiliates and CBS had sixteen for a combined 6.4 percent of the broadcast stations in 1927, they combined to account for 30 percent of the stations within four years. This, alone, understates their emergence, as all but three of the forty clear channels were soon owned or affiliated with one of the two networks and approximately half of the remaining 70 percent of the stations were low-power independent broadcasters operating with limited hours on shared frequencies. (ibid, 29)

By 1934, James Rorty would write: "For all practical purposes radio in America is owned by big business, administered by big business, and censored by big business." (ibid, 93)

One final facet of this transition should be mentioned: although the technology to both transmit and receive signals existed, radio sets sold to the public were primarily receivers without the capacity to transmit. "Hardware practically flew off dealers' shelves as sales of radio receivers jumped sixfold, from $60 million in 1922 to $358 million in 1924 [my emphasis]." (Lappin, 1995, 219) Levinson argues that this occurred because the cost of radio production was far greater than that of reception: "People could easily afford to have radio receivers in their homes, but not radio transmitters." (1997, 118) While this may have been temporarily true, improvements in transmitting equipment would eventually have brought the price down to the point where it would have been practical for individuals to send as well as receive radio signals, if the technology had been conceived as a two-way medium. Implicit in the sale of radio receivers without transmitting capabilities is the idea that most users of the medium would be consumers of radio signals, but not producers. Moreover, as generations became acclimated to using radio as a one-way medium, it became difficult for them even to conceive of the possibility that it could have developed any other way.

Directing the way a medium is used by directing the development of the hardware is, as we have seen, one strategy for harnessing the economic potential of the World Wide Web.

Not long after the consolidation of corporate control of radio, ideologically driven rhetoric developed which naturalized this control. It was argued that the educators, who were among the true innovators of radio in its earliest phases and who battled (as it happened, ineffectually) against corporate control of the medium, were out of touch with radio audiences. "'People do not want to be educated,' [NBC President] Merlin Aylesworth fulminated in 1934, adding, 'they want entertainment.'" (McChesney, 1993, 117) We can begin to see such a rhetoric developing around the Web. "'Content' is a fighting word these days. Virtually every new-media pundit will tell you that content is king, though they're hard-pressed to define what it means or how it works." (Thompson, 1998, 59) Although it might not be intuitively clear what content on the Web means, some are beginning to argue that, "In the end, it may be entertainment -- not content, not community, not shopping, not any of the other ideas that have had their three hours in the sun -- [that] will be king." (Larsen, 1999, 95) This concept plays directly into the hands of corporations whose strength is the creation of entertainment, to the detriment of smaller players who might want to use the system to distribute other kinds of information (including individuals who might want to use it as a medium of personal communication).

Even more damaging was the concept, espoused by Broadcast magazine, that commercialization of the airwaves was inevitable because "progress cannot be stopped." (McChesney, 1993, 70) This is an early expression of a theory of media development which has come to be known as "technological determinism." The basic idea behind technological determinism is that technology has an existence of its own independent of human will, that it develops according to its own internal logic and that, once introduced into society, it changes social structures and relationships. This rhetoric of unstoppable progress can be found throughout current popular literature on the Internet. An important aspect of the rhetoric of the unstoppability of technology is that it masks the social battles which determine how technologies develop, and, in particular, takes attention away from the self-interest of those who most benefit from certain manifestations of technology and who, therefore, push the technology to develop in specific ways for their own ends. It is important to note that, as even this cursory outline of the early history of the medium indicates, there was no inevitability to the emergence of commercial radio broadcasting. Various social actors, each with its own stake in the future of radio, vied to determine how it would develop.

One implication of the rhetoric of the unstoppability of technological progress is that any government attempt to shape the nature of a communications medium is futile, practically unnatural. In the case of radio, "Suddenly, the right of the government to regulate broadcasting, which had been accepted, if not demanded, by the commercial broadcasting industry in the years before 1934, was being questioned, primarily due to its being a threat to the First Amendment rights of broadcasters and the general communication requirements of a democratic society." (ibid, 240) This point cannot be stressed enough: American commercial broadcasters only started arguing against government intervention in radio after government regulation had already established a profitable basis for their enterprise by effectively marginalizing non-profit competitors. As Brock Meeks put it: "[T]he folks on the commerce end, while they buck and spit about all the regulation, in a lot of cases they demand it, because they want to make sure the rules of the road are there, that have to be followed, they need level playing fields, and blah blah, woof woof." (1997, 285)

There is currently a very strident movement to ensure that national governments do not regulate the Internet, much of which uses the rhetoric of inevitability. Mark Stahlman refers to any effort to regulate the Internet as an "assault." (1994, 86) A typical sentiment comes from Craig Fields, former head of the government agency which developed the Internet: "We built [the Internet] to be Russian-proof," Fields told the New York Times, "but it turned out to be regulator-proof." (Rosenzweig, 1997, 23) In Chapter Four, I argued that the Internet has features which make regulation by governments difficult; however, I did not argue that these features make regulation impossible, nor did I argue that all attempts at regulation are illegitimate (as some cyberlibertarians do).

Having come this far, we must recognize that there are points at which the analogy can be accused of being inapt. For example, the American experience of broadcasting regulation was unique in the world: where the American government opted for a wholly private broadcasting system, most other governments chose a wholly public system (thus the creation of the British and Canadian broadcasting corporations), or a mixed system with an integral public component. Moreover, while the mandate of the FCC (the FRC's successor, created in 1934) was to regulate the smooth running of the broadcast marketplace, other national regulatory bodies, such as the Canadian Radio-television and Telecommunications Commission, were given broader mandates, including the promotion of Canadian culture. If the Internet were a solely American phenomenon, the analogy to the early days of radio would be more exact; since the Internet is an international phenomenon, governments with very different histories of technological development will be helping to shape its future.

Two trends in international communications tend to shore up the analogy. The first is that, in most developed nations, the public broadcasting sector is diminishing in importance; some would say it is becoming increasingly irrelevant. This is due to a complex set of factors, including increasing competition from private broadcasters, which erodes public broadcasters' audiences (especially as new technologies such as cable television and the Internet approach, and go beyond, a 500-channel universe), and government cutbacks to public broadcasters' operating budgets, which force them to compete with private broadcasters for audience-drawing programming (such as sports) and for advertising. As a result, in Canada, the CBC is a shadow of its former self, and no longer has the privileged position in people's lives that it had in the first few decades after its creation.

The second trend, as we have seen, is the movement towards international agreements on trade, intellectual property and related matters. Agreements which affect the arts are largely being driven by entertainment conglomerates based in the United States. To the extent that they bind signatories to a particular view of economic life, they can be seen as replicating the American system throughout the world. This suggests that the American experience of broadcasting may be highly relevant to the developing international digital communications system.

You never wade in the same river twice. To date, the rhetoric of chaos has not resulted in meaningful legislation to control the Web. Moreover, the number of people currently on the Internet dwarfs the number of people who were involved in the early days of radio; they arguably constitute a stronger potential lobby in political capitals. Perhaps most importantly, there is a fundamental difference between the two media: radio was based on the frequencies available in the atmosphere, which were severely limited. One justification for government regulation of radio was this scarcity of frequencies on which to transmit signals. Computer-mediated communication systems such as the Internet not only do not face this bandwidth constraint at present, but, as we have seen, are expected to develop even greater capacity in the future. These and other factors may make the way the Web develops substantially different from the way radio developed. Transforming computer-mediated communications networks into television or some other more passive medium is a much greater challenge than changing radio from a two-way to a one-way medium, since the two media are so different. Perhaps it will simply prove impossible to accomplish.
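
The scale of that scarcity is easy to illustrate. Assuming the standard broadcast band as the FRC defined it (roughly 550 to 1,500 kilohertz, with channels spaced 10 kilohertz apart), the entire national supply of broadcast channels was:

    (1,500 kHz - 550 kHz) / 10 kHz + 1 = 96 channels

Ninety-six channels, in other words, for every commercial, religious, educational, labor and amateur broadcaster in the country; it was from this pool that the FRC reserved its forty clear channels. No comparable ceiling governs a computer network, whose capacity grows as servers and lines are added.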

The experience of radio does suggest that a transformation of a medium from an active to a passive mode requires the confluence of three factors: 1) a corporate sector which sees the possibility of profit in the emerging medium; 2) the physical transformation of the medium which removes the possibility of meaningful interactivity; and 3) legislators sympathetic to the desires of the corporate sector who are, therefore, willing to regulate the medium to the benefit of major corporations. To a greater or lesser extent, I believe I have shown that these three factors currently exist in the development of the Internet. How they play out against the expectations of the individuals using the system should be fascinating to watch.

* * *

One of the major criticisms of technological determinism is that it breeds passivity. If technology is an entity with its own life outside human control, then there is nothing human beings can do to shape it. We can only wait and watch it shape us. Oddly, pure social constructivism is also a recipe for passivity. If we allow that all definitions of a possible technology are equally valid, we have no way of rationally choosing among them (although those of us who are direct stakeholders will, nonetheless, believe that our view of a technology is the "correct" one). In particular, public policy makers would be tempted to allow various stakeholder groups to fight it out among themselves, and to support whatever technology emerged.

Social constructivism can breed passivity in another way. Let us assume that the form technological development takes is always a matter of negotiation. In that case, it doesn't matter whether or not a technology becomes fixed (achieves stability or closure), since we can always reopen the debate and change the form into something which works better (by whatever criteria we choose to define a workable technology). In fact, our experience of the world tells us that this is not so. Technologies do stabilize. As Langdon Winner points out, "By far the greatest latitude of choice exists the very first time a particular instrument, system, or technique is introduced. Because choices tend to become strongly fixed in material equipment, economic investment, and social habit, the original flexibility vanishes for all practical purposes once the initial commitments are made." (1985, 30) As technological systems become more complex, they become more expensive, and the corporations which develop them become less inclined to make significant changes to them. Moreover, as individuals structure their lives around the use of a given technology, they become less inclined to adopt new technologies which would require them to substantially restructure the way they live.

This has an important ramification: it is imperative to generate public discussion and debate about a new technology as early as possible in its development and deployment. At the point where a technology has been stabilized in its corporate form and private use, public debate about its value becomes largely moot (at least until new technologies challenge the position of the existing technology, reopening the debate).

Despite the possibility of passivity in the face of technological change, I maintain that we, individual citizens, have choices which are non-trivial. If we choose to allow the Web to develop in one direction, it will subsequently reorder society in line with what the technology allows people to do, to the advantage of some and the disadvantage of others. If we choose to allow the Web to develop in a different direction, different social structures will emerge which advantage different groups. We have choices. "The important thing is to ensure that it [technical change] operates to the maximum extent possible in the public interest; to this end there can be no relaxation." (Williams, 1977, 35)