
Literature at lightspeed
Chapter Four: What Governments Can (And Cannot) Do




Decisions in a democracy are made badly when they are primarily made by and for the benefit of a few stake-holders (land-owners or content providers). (Boyle, 1997, unpaginated)

The thinking of government in the advanced industrial states remains by and large stuck in the worn-out groove of apportioning scarce resources, whether in terms of bandwidth allocation or licensing of content delivery. This defensive posture is inherently weak. The assertive approach would be to do everything possible to optimize the connectedness of the nation. This translates, first of all, into encouraging and supporting — financially, if need be — cable, telco, and even hydro joint ventures; second, combining these initiatives with educational programs that put the power of creation and idea development in the hands of the people, rather than exclusively under the control of established developers and information providers; third, maximizing access to copyright-free public domain material. (de Kerckhove, 1997, 175)

Carl Malamud: “Technology in itself is no guarantee of freedom of speech.” (Ginsburg, 1997, 131)


Introduction

In these neo-Conservative times, it is politically fashionable to deride government as “the problem” and call for cutting it to the bone, privatizing as many of its functions as possible. Those who call for drastic cuts to government programmes forget that government is an instrument of the people, created to do our bidding. Far from being an enemy, government is a vital means by which the collective goals of a people, goals which they could not reach through their individual efforts, can be accomplished. If the people find a particular government is not acting in their interests, they can change its policies by putting pressure on it, or simply by voting for a different government at the next election; the answer is not to cut government back so severely that it cannot adequately maintain its many agreed-upon worthwhile or necessary functions. Those who call for the privatization of most government functions forget a simple fact about markets: as we saw in Chapter Three, their purpose is the efficient allocation of resources. Period. Markets are not instruments through which socially just societies can be created; they have nothing to do with morality. Governments are the proper instruments for the exercise of society’s moral will.

Governments have a number of tools which will, in one way or another, affect art and artists working in online digital media. One is legislation which attempts to directly control expression. Content control (which includes licensing regulations as well as outright censorship) is a stick used by governments to ensure that their nation’s culture is adequately portrayed in their media. Culture is a loaded term, and I don’t intend to get into a discussion of all its nuances here; what is important to note is that governments feel it necessary to promote their culture, however each specific government may define it. “[T]he primary regulatory objective is to protect and promote cultural values.” (Johnstone, Johnstone and Handa, 1995, 113) Some governments already feel that their cultures are under siege by American cultural products:



Will Western-produced news releases and films promote attitudes and opinions contrary to, and incompatible with, their own cultural values and national policies? Will reliance on other countries impede the development of indigenous skills for educational and entertainment programming? Will the lure of Western commercialism undermine their local consumer industries and entice the movement of scarce funds abroad? Will they become unwilling receptors of propaganda warfare between the superpowers and victims of internal interference by other nations? The essential issue is one of uncertainty over whose ideas and ideals will be promoted to which audiences and for what purposes. (Janelle, 1991, 78)



Many governments feel that this problem will be exacerbated by the growth of digital communication networks. “While international in scope, the Net has been dominated so far by American voices and sites.” (Kinney, 1996, 143)

Governmental control of content takes two general forms. Quotas which make radio and television licenses dependent upon the amount of regional programmes they carry are a form of positive control, in the sense that they require producer/distributors of works to act in a specific way. Laws against pornography or hate literature are forms of negative control, which require producer/distributors not to act in a specific way. This chapter will start with a discussion of negative control focusing on the American government’s Communications Decency Act. (You will recall that state censorship was mentioned as a drawback to publishing on the Web by respondents to my 1996 survey. Although not mentioned by respondents to the 1998 survey, it nonetheless has serious potential effects on their ability to use the Web as a publishing medium, and is, therefore, relevant to the current study.)

The nature of digital networks militates against government control in a number of ways, however; we shall therefore have to consider the possibility that the international, boundaryless nature of the Internet makes control by local governments unfeasible. This will make up the next section of the chapter.

One other area in which government may be seen to have a legitimate role is in negotiating the interests of various members of society, enforcing contracts between parties where necessary. Perhaps the most important example of this for creators is the creation and enforcement of copyright law. As we saw in Chapter Two, some of the people who put their fiction on the World Wide Web are concerned with ensuring they get proper credit and, if possible, financial compensation for their work. The particular problems digital media create in regard to copyright will be the subject of a section later in this chapter, where I shall argue that current developments in the law are to the detriment of individual content creators.

This will be followed by a discussion of a second major problem with any attempt by governments to regulate or otherwise control content on the Internet: the amorphous nature of digital media makes it unlike any existing medium. Three regulatory regimes have arisen to deal with existing media: broadcast, common carrier and First Amendment/free speech protection. Each regime creates different opportunities for positive control of a medium. Perhaps more important for our purposes, each regime favours different stakeholders in the medium; adopting one model forecloses the possibility of people using the medium in ways suggested by another. A fourth model will then be introduced which bypasses the shortcomings of attempts to understand the Internet through existing models, and which suggests that governments intent on regulating this new medium in any form will require new thinking.

Although many hoped to make money from their Web publishing efforts, none of the writers and only one publisher mentioned state support as a potential source of funding. Nonetheless, many governments (including that of the United States) directly subsidize the work of artists through economic loan and grant programmes. For this reason, the chapter, which began with the stick of government regulation, ends with the carrot of government support. The purpose of government subsidization is to support the creation of worthwhile works of art which would not otherwise be supported by the marketplace. Such works are sometimes attacked for their lack of commercial viability, but those who attack them forget that this is part of the rationale for public support in the first place: if such works were commercially viable, they would not need government support. Inasmuch as society benefits from the widest range of works of art, if the marketplace will not support the creation of certain types of work, some other mechanism must be found. In the final section of this chapter, I will look at a few of the programmes in Canada which are intended to financially support the creation of digital artworks.

The Stick: Government Control Through Censorship

Government control over communication media is not new, of course.



Every communications advance in history has been seen by self-appointed moral guardians as something to be controlled and regulated. By 1558, a century after the invention of the printing press, a Papal index barred the works of more than 500 authors. In 1915, the same year that the D. W. Griffith film ‘Birth of a Nation’ changed the U.S. cultural landscape, the U.S. Supreme Court upheld the constitutionality of an Ohio state censorship board created two years earlier, thus exempting motion pictures from free speech protection on the grounds that their exhibition ‘is a business, pure and simple, originated and conducted for profit….’ (Human Rights Watch, 1996, unpaginated)



Many literary works which are now considered classics (everything from Women in Love to Huckleberry Finn) were banned in some jurisdictions because of their content (and the Papal Index of forbidden works was not formally abolished until 1966). Freedom for adults to read or view material intended specifically for adults was hard won. However, there seems to be a widespread unspoken assumption that electronic forms of speech should not enjoy the same protections that the printed and spoken word do.

This seems to be the reasoning behind the ill-fated 1995 American Communications Decency Act. As an exemplar of the government tendency to attempt to control the media, this is a good place to start an investigation of state censorship.

The Communications Decency Act



Everybody has a favourite cause these days. Mine is smut. I’m for it. Now, owing to the way the laws are written…this is a free speech issue. But we know what’s really going on here. Dirty books are fun. (Lehrer, 1965)



In 1994, Senator James Exon, a Democrat from Nebraska, introduced the Communications Decency Act (CDA) in the United States Senate. Congress stopped sitting before the Senate had time to consider Exon’s bill, so it quietly died. However, Exon reintroduced the bill in the next sitting of the Senate the following year.

The CDA amended the Communications Act of 1934 in order to accomplish two goals. The first was to make it a crime to use computer networks in order to harass another person. According to Exon, “Under my bill, those who use a telecommunications device such as a computer to make, transmit or otherwise make available obscene, lewd, indecent, filthy or harassing communications could be liable for a fine or imprisonment. That is the same language that covers use of the telephone in such a manner.” (undated, unpaginated)

The second goal of the CDA was to protect minors from coming across sexually explicit content online. The CDA made it a crime for anybody who “knowingly within the United States or in foreign communications with the United States by means of telecommunications device makes or makes available any indecent comment, request, suggestion, proposal, image to any person under 18 years of age regardless of whether the maker of such communication placed the call or initiated the communication…” (S.314, 1995, 47 U.S.C. 223 (e)(1)) Anybody violating the Act would be liable for a fine of up to $100,000 and a maximum sentence of two years imprisonment. (ibid)

Proponents of the CDA argued that it was an extension of existing protections for minors into the online world. For example, Cathleen A. Cleaver, director of legal studies at the Family Research Council, stated that “We have long embraced laws that protect children from exploitation by adults. We prohibit adults from selling porn magazines or renting X-rated videos to children. We also require adult bookstores to distance themselves from schools and playgrounds. Do these laws limit adults’ freedom? Of course they do. Are they reasonable and necessary anyway? Few would dispute it.” (1995, unpaginated) Exon himself claimed that “We based this on the law that has been in effect and been approved constitutional with regard to pornography on the telephones and pornography in the U.S. mail. We’re not out in no-man’s land. We’re running on the record of courts’ decisions that have said you can use community standards to protect especially kids on telephones and in the mails. We’re trying to expand that as best we can to the Internet.” (“Focus — Sex in Cyberspace,” 1995, unpaginated)

Cleaver used an interesting analogy to support her position: “[W]e know that pedophiles traditionally stalk kids in playgrounds. Well, we know that computers are the child’s playground of the 1990s. That is where children play these days increasingly. So it is a really toxic mix to have these playgrounds be a place where children are fair game to pedophiles. It is very disturbing.” (McPhee, 1996, unpaginated) Declan McCullagh, a free speech advocate who posted this to the Web, was sarcastic about Cleaver’s position. To be sure, the suggestion that children are fair game to pedophiles on the Net is inflammatory rather than enlightening.

However, Cleaver’s analogy should not be dismissed out of hand. When new technologies are introduced into a society, many people’s initial reaction is to compare them with existing technologies; this makes understanding and accepting them easier for a lot of people. Above, Exon claimed that the CDA drew on existing laws governing the telephone and the mail, making an explicit analogy between those media and the Internet. Cleaver compared pornography on the Internet to that found in adult bookstores. As we shall see, opponents of the CDA made different comparisons; in fact, the battle over the CDA can be seen, in part, as a duel between analogies for the Internet. (This may be true of differences of opinion on the nature of the Internet more generally.) Moreover, spatial metaphors abound on the Internet: for instance, Blithe House Quarterly, one of the ezines explored in Chapter Two, has a picture of the floor plan of a building on its contents page, with each story assigned to a room. Because the two phenomena being compared in any analogy are not identical, an analogy necessarily distorts the nature of what is being discussed. Some analogies, in fact, conceal more than they reveal. The test of a good analogy is how closely related the two phenomena being compared are, and how significant the differences between them are. Given all of this, there was (and is) merit in the analogy between the places children go online and playgrounds in the real world, and those who opposed legislation to control pornography on the Internet needed to address this issue.

It is also important to note, before we visit the controversy that the law created, that proponents of the CDA were not necessarily raving anti-free speech zealots, as they were sometimes portrayed by anti-CDA activists. There is a broad consensus in North American society that minors should not be exposed to sexually explicit pictures or stories. While there may be debate about the line at which the definition of minors should be drawn (are 16- or 17-year-olds knowledgeable enough to experience sexually explicit materials without harm?), it is generally accepted that pre-pubescent children are not yet sufficiently emotionally mature to deal with sexually mature subjects. With few exceptions, opponents of the CDA conceded this point. Thus, the CDA was an attempt to create a law which would accomplish a largely accepted social good; the only controversy was whether it was the best means to accomplish this goal.

The CDA passed the Senate by a vote of 84 to 16 on July 14, 1995. (Corcoran, unpaginated) “On June 30, 1995, Representatives Cox and Wyden introduced the Internet Freedom and Family Empowerment Act as an alternative to both the CDA and the Leahy study. The Act would prohibit content and financial regulation of computer based information service by the FCC. In addition, it eliminates any liability for subscribers, service providers or software developers who make a ‘good faith’ effort to restrict access to potentially ‘objectionable’ content.” (Evans and Stone, 1995, unpaginated) This Act was overwhelmingly passed by the House of Representatives. Negotiations between representatives of the two houses resulted in acceptance of Exon’s version of the bill. The CDA was attached to the Telecommunications Act of 1996; it was just a small part of a law whose major purpose was to change the telecommunications industry, allowing, for example, local and long distance telephone carriers to compete in each other’s jurisdictions. The Telecommunications Act of 1996 was signed by President Bill Clinton on February 8, 1996.

The passage of the CDA in Congress had little effect online. “[D]espite the new law, for the most part it was business as usual on the net, where a search under ‘XXX’ or ‘sex pictures’ produced quick cross-references to dozens of sites promising a variety of products and services.” (Reuters, 1995, unpaginated) Even the passage of the Telecommunications Act, which included the CDA, didn’t affect what was available online: “Pornographic sites still offer up obscene pictures and stories of incest and rape still wait to be read on the Internet bulletin board Usenet, where a new group was formed Thursday night — alt.(expletive).the.communications.decency.act.” (Associated Press, 1995, unpaginated)

While the Internet community didn’t visibly change its online behaviour because of the Telecommunications Bill, reaction to the bill offline was immediate. Minutes after Clinton signed the bill, “the American Civil Liberties Union (ACLU) filed suit challenging the law’s constitutionality. The CDA was on the books for one week and then was restrained by District Judge Ronald Buckwalter.” (“Communication Decency Act,” 1997, unpaginated) Nineteen groups, including the National Writers Union, the Journalism Education Association, the Planned Parenthood Federation of America and Human Rights Watch, joined the suit, which was heard by a panel of three judges in Philadelphia. (Associated Press, 1996, unpaginated)

A second challenge to the Act was undertaken at the same time. On the day that the Telecommunications Act was signed, an inflammatory editorial was published in an online newspaper called The American Reporter. It read, in part, “But if I called you [Congress] a bunch of goddam motherfucking cocksucking cunt-eating blue-balled bastards with the morals of muggers and the intelligence of pond scum, that would be nothing compared to this indictment, to wit: you have sold the First Amendment, your birthright and that of your children. The Founders turn in their graves. You have spit on the grave of every warrior who fought under the Stars and Stripes.” (Russell, 1996, unpaginated) Strong stuff, not typical of The American Reporter. However, as the editor put it in an editorial published in the same issue, “This morning, we are publishing as our lead article a startling piece of commentary by a brave Texas judge, Steve Russell, who is risking his position and his stature in the community to join us in a fight against the erosion of the First Amendment.” (Shea, 1996, unpaginated) An attorney for the publication filed for an injunction against the CDA in New York, where a second panel of three judges was asked to rule on it.

At the same time as these suits were pursued, a variety of protests against the CDA were organized to raise public awareness of the problems some people and groups had with it. Protesters included the Community Breast Health Project, Surf Watch, Sonoma State University, the Abortion Rights Activist Page, Internet on Ramp, authors, computer programmers and graphics designers. (Associated Press, 1996, unpaginated) As one form of online protest, many sites on the World Wide Web (mostly, but not exclusively, pornographic) turned their backgrounds black and added links to Web pages containing arguments against the CDA. Some suggested that this latter action mostly preached to the converted, to little effect: “This collective act of protest was greeted, at best, with a yawn in Washington and, at worst, with a collective ‘Who cares if their web pages are black? The fools.'” (O’Donnell, 1996, unpaginated) However, the protests seemed to galvanize the online community, giving its offline protests more coherence and weight.

Those opposed to the CDA argued against it on a variety of grounds. The CDA outlawed the transmission of obscene material over the Internet. “The First Amendment protects sexually explicit material from government interference until it is defined as obscene under the Supreme Court’s guidelines for analysis in Miller v. California… Once characterized as obscenity, such material has no First Amendment protection. None of the Communications Decency Act prohibitions of obscene materials violates the First Amendment.” [note omitted] (Evans and Stone, 1995, unpaginated) However, since obscene material was already illegal, the CDA was unnecessary. In a similar vein, “The Supreme Court has also held that child pornography is not protected by the First Amendment. In New York v. Ferber, the Court relied on the fact that child pornography is created by the exploitation of children, and that allowing traffic in child pornography provides economic incentive for such exploitation. The Court also found that such material possesses minimal value. Therefore, child pornography lies outside the protection of the First Amendment and can be prohibited.” (ibid) New laws in this area are only necessary when new media have aspects which make the applicability of existing laws unclear; in such cases, all the new law has to do is clarify how existing law will be applied to the new medium. Since the obscenity and child pornography rulings were not specific to a given medium of communications, they could be applied to the Internet, making the parts of the CDA which covered those issues redundant.

However, the CDA went much further, banning “lewd, indecent, filthy or harassing communications.” According to many critics, this was a completely different kettle of fish. “What is ‘indecent’ speech and what is its significance? In general, ‘indecent’ speech is nonobscene material that deals explicitly with sex or that uses profane language. The Supreme Court has repeatedly stated that such ‘indecency’ is Constitutionally protected. Further, the Court has stated that indecent speech cannot be banned altogether — not even in broadcasting, the single communications medium in which the federal government traditionally has held broad powers of content control.” (Electronic Frontier Foundation, undated (a), unpaginated)

Anti-CDA activists claimed that changing the definition of unallowable material from obscene, which was not Constitutionally protected, to indecent, which to that point had been, would have a chilling effect on speech online. The legal test for obscenity involves three qualifications, the final one being that the work in question “taken as a whole, lacks serious literary, artistic, political, or scientific value.” (Evans and Stone, 1995, unpaginated) Thus, even if they have explicit sexual content, works ranging from respected novels to academic papers cannot be considered obscene. However, because the legal test for indecency does not have this provision, those same works can be considered indecent. “Any discussion of Shakespeare or safe sex would not be allowable except in private areas, where someone can be paid for the task of rigidly screening participants.” (Oram, et al, 1995, unpaginated) Other examples of indecency “could include passages from John Updike or Erica Jong novels, certain rock lyrics, and Dr. Ruth Westheimer’s sexual-advice column.” (Electronic Frontier Foundation (a), unpaginated) Moreover, “As Human Rights Watch, a member group of the coalition [against the CDA], argued in an affidavit to the Supreme Court, the law’s prohibition of ‘indecent’ speech could be applied to its own human rights reporting that includes graphic accounts of rape and other forms of sexual abuse.” (Human Rights Watch, 1999, 31)

The legality of one Web page devoted to its creator’s favourite paintings came under question:



As nearly as I can tell, most of [the paintings] would qualify as being indecent under the Communication Decency Act. Were the Communication Decency Act to be broadly enforced, it would be illegal to maintain these images on a server located in the United States… Most of the pictures at this page are pre-Raphaelite — either painted by members of the pre-Raphaelite brotherhood itself, or by artists with similar inspirations. While it’s beyond the scope of this page to get into a detailed discussion of pre-Raphaelite art, I find it particularly significant that in their day, many of the pre-Raphaelite artists were decried as “indecent,” perhaps by people with the same narrow mindset as our contemporary politicians and law-makers. (Rimmer, 2000, unpaginated)



Perhaps most immediately relevant for our purposes is the fact that, as we saw, some of the stories written by the writers surveyed in Chapter Two and posted to the Web contained graphic descriptions of sexual acts or profane language. These stories would likely have been considered indecent and made illegal under the CDA. In Chapter Two, I tried to show how the graphic passages were not merely prurient, but part of the overall artistic intent of the writers. While this would be a defense against charges of obscenity in print, it was not a defense against charges of indecency under the CDA. Thus, many of the writers in the survey (the majority of whom, you will recall, were Americans) would have had to remove their work from the Internet or face criminal charges. It is also worth noting that, in quoting passages from such work, this dissertation would have been illegal under the CDA. While I am a Canadian, and the dissertation will be published on a server in Canada, if it were mirrored on a server in the US, the ISP would likely have been liable under the CDA.

If widely enforced, the CDA would have the effect of limiting communication on the Internet to what would be acceptable for children. In doing so, the Act essentially criminalized speech on the Internet which would be acceptable in other media. For example, The American Reporter editorial quoted above was written specifically to test the limits of the CDA; the author and publisher assumed it was illegal under the Act. However, “Recently, the editorial, shortened for space but with the same raw language, was reprinted in the May issue of Harper’s magazine. There is no possibility, however, that Harper’s publisher could face criminal sanction for distributing the commentary in print.” (Mendels, 1996, unpaginated) Analogies between online communication and existing communications forms abounded: “It’s as if the manager of a Barnes & Noble outlet could be sent to jail simply because children could wander the bookstore’s aisles and search for the racy passages in a Judith Krantz or Harold Robbins novel.” (Electronic Frontier Foundation, undated (a), unpaginated)

The National Writers Union summed up this argument when it resolved that “Electronic communication should have no less protection than print or any other form of speech.” (1995, unpaginated)

Another important objection to the CDA was that it cast its net too wide. Defending the Act, former Attorney-General Edwin Meese et al. argued that



It is not possible to make anything more than a dent in the serious problem of computer pornography if Congress is willing to hold liable only those who place such material on the Internet while at the same time giving legal exemptions or defenses to service or access providers who profit from and are instrumental to the distribution of such material. The Justice Department normally targest [sic] the major offenders of laws. In obscenity cases prosecuted to date, it has targeted large companies which have been responsible for the nationwide distribution of obscenity and who have made large profits by violating federal laws. (1995, unpaginated)



The CDA could be interpreted to hold Internet Service Providers (ISPs) liable for the content on their servers. There are many reasons to object to this. The first is that most ISPs do not screen content; it flows through them. Owing to the nature of the medium, ISPs could be prosecuted for material which they couldn’t possibly know was going through their systems. As Mike Godwin stated, “Internet nodes and the systems that connect to them, for example, may carry [prohibited] images unwittingly, either through unencoded mail or through uninspected Usenet newsgroups. The store-and-forward nature of message distribution on these systems means that such traffic may exist on a system at some point in time even though it did not originate there, and even though it won’t ultimately end up there.” (Evans and Stone, 1995, unpaginated)

The CDA would seem to require ISPs to substantially change the nature of their business. Some commentators argued that this could have a potentially devastating effect on the industry:



The CDA as passed by the Senate would put the burden of censorship directly on the service providers. Under this burden, the risk of litigation would literally put a vast number of service providers out of business. The result of which would be fewer service providers who will then charge higher access fees based on the shrinking ‘supply’ of access to these services. Service providers will also be required to ‘insure’ themselves from the potential litigation. In addition, the service providers will be required to invest in new technology to ‘censor’ the content provided to their subscribers as well as the information passing through their systems. There is no doubt that these costs will be passed along to individual subscribers by the service providers. (ibid, unpaginated)



It was also argued that the volume of traffic passing through the Internet would make it impossible for any ISP to monitor properly. We shall come back to this point later in the chapter.

Finally, it was pointed out that there were alternatives to government censorship of the Internet. One technical method for keeping minors away from adult content was filtering software. A typical filtering program, Surfwatch, “uses multiple approachs [sic], including keyword- and pattern matching algorithms; the company uses its blocked site list as a supplement to its core filtering technologies… ” (Godwin and Abelson, 1996, unpaginated) Most of the major commercial ISPs offered their own software for concerned parents:



Compuserve offers a kids’ version of WOW!, which lets parents screen their kids’ incoming e-mail, has no chat or shopping features, and restricts Web access to sites approved by WOW!’s staff. America Online provides filters that allow parents to restrict children to Kids Only areas that are supervised by adults, allows parents to block all chat rooms, selected chat rooms, instant messages (a sort of instant e-mail), and newsgroups. Prodigy lets users restrict children by limiting access to certain newsgroups, chat rooms, and the Web. Yahooligans! will permit access only to Internet areas rated “safe.” Microsoft Network’s service automatically restricts access to adult areas except to users who have submitted an electronic form requesting access; Microsoft then checks to see if the account is subscribed to someone over 18. [notes omitted] (Bernstein, 1996, unpaginated)



Legal precedent for American government regulation of speech requires “what the judiciary calls the ‘least restrictive means’ test for speech regulation.” (Electronic Frontier Foundation, undated (b), unpaginated) This means that, if there is a means of accomplishing the aim of government regulation without actually having the government put controls on speech, that means is preferable. Opponents of the CDA argued that, while imperfect, the Internet offered a variety of tools which parents could use to protect their children from indecent materials; if used, filtering mechanisms would protect minors without affecting speech which was legal for adults. [1]
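To make the filtering alternative concrete, here is a minimal sketch, in Python, of the kind of logic such parental-control software used: a maintained blocked-site list supplemented by crude keyword matching. The blocklist entries and keywords below are hypothetical illustrations, not SurfWatch’s (or any vendor’s) actual data or code.

```python
# A minimal sketch of client-side content filtering: a blocked-site list
# supplemented by keyword matching.  All entries below are hypothetical.
import re
from urllib.parse import urlparse

BLOCKED_SITES = {"adult-example.com", "xxx-example.net"}
BLOCKED_KEYWORDS = re.compile(r"\b(xxx|porn|sex)\b", re.IGNORECASE)

def allow_request(url: str, page_text: str = "") -> bool:
    """Return True if the filter would let a child's browser load this page."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_SITES:          # 1. the maintained blocklist
        return False
    if BLOCKED_KEYWORDS.search(url) or BLOCKED_KEYWORDS.search(page_text):
        return False                   # 2. crude keyword/pattern matching
    return True

if __name__ == "__main__":
    print(allow_request("http://xxx-example.net/pics"))            # False: listed site
    print(allow_request("http://example.org/safe-sex-education"))  # False: keyword match
    print(allow_request("http://example.org/gardening"))           # True
```

Even this toy version shows both sides of the argument: the filter operates entirely in the home, leaving adult speech on the network untouched, but its keyword matching cannot tell a pornographic page from a safe-sex education page, which is why such tools were presented as a less restrictive, though imperfect, alternative to legislation.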

The courts found the anti-CDA arguments compelling: “…on June 11, 1996, a panel convened in Philadelphia, consisting of Chief Judge Dolores Sloviter and Judges Ronald Buckwalter and Stewart Dalzell, enjoined the enforcement of the CDA, finding the statute to be unconstitutional on its face. On June 13, 1996, a panel convened in New York, consisting of Chief Judge Jose Cabranes and Judges Leonard Sand and Denise Cote, entered a similar injunction.” [notes omitted] (Bernstein, 1996, unpaginated)

The government appealed the ruling of the Philadelphia court to the Supreme Court. On June 26, 1997, in the case of Reno vs. the American Civil Liberties Union, the Supreme Court found that “the CDA’s ‘indecent transmission’ and ‘patently offensive display’ provisions abridge ‘the freedom of speech’ protected by the First Amendment.” (Wisenberg, 1997, unpaginated) The Court largely agreed with the reasoning of those who opposed the CDA. On the issue of indecency, for instance, the Court stated that “Although the Government has an interest in protecting children from potentially harmful materials…the CDA pursues that interest by suppressing a large amount of speech that adults have a constitutional right to send and receive…” (ibid) On the issue of filters, the Court stated that “The CDA’s burden on adult speech is unacceptable if less restrictive alternatives would be at least as effective in achieving the Act’s legitimate purposes… The Government has not proved otherwise.” (ibid) [2]
Thus, by a margin of 7-2, the Supreme Court struck down the Communications Decency Act.

During its lifetime, debate about the CDA was highly polarized and quite vituperative. Judge Russell’s profane article in The American Reporter was not the only provocation. Online journalist Brock Meeks, writing for HotWired, claimed that retaining the indecency standard “is akin to ramming a hot poker up the ass of the Internet.” (1995, unpaginated) Another commentator called the CDA “…Exon’s pillaging of freedoms in the online world.” (Corcoran, unpaginated) Still another opponent of the bill posted the following to the Net:



The [German] purity crusade now found a focus in the “Act for the Protection of Youth Against Trashy and Smutty Literature,” a national censorship bill proposed to the Reichstag late in 1926. This Schmutz und Schund (Smut and Trash) bill, as it was dubbed, aroused fears in German literary and intellectual circles, but the Minister of the Interior soothed the apprehensive with assurances that it “threatens in no way the freedom of literature, [the] arts, or [the] sciences,” having been designed solely for the “protection of the younger generations.”

It was aimed only at works which “undermine culture” and purvey “moral dirt,” he added, and had been devised “not by reactionaries, but by men holding liberal views…” On December 18, 1926, after a bitter debate, the Schmutz und Schund bill passed the Reichstag by a large majority. (Boyer, unpaginated)



The intention, of course, was to compare proponents of the CDA to those who paved the way for Nazi Germany. [3]

But what actually happened here? Congress enacted a law which would have curbed certain kinds of speech. The Supreme Court found it unconstitutional and struck it down. It is unreasonable to expect that every law Congress passes will be perfect; it is an imperfect institution populated by flawed human beings. That’s why there are three branches of government in the United States: they are meant to be a check on each other’s excesses. It seems to me that, tested though it was, the system worked: a bad law was not allowed to stand. Those who were most uncivil in their discussions of the CDA showed a lack of faith in the checks and balances which are supposed to be the great strength of the American system.

After the CDA

The Supreme Court striking down the Communications Decency Act did not end American government efforts to control the content of digital communications networks in the name of protecting children. In the House of Representatives, the Internet Freedom and Child Protection Act of 1997 was introduced in order “To amend the Communications Act of 1934 to restore freedom of speech to the Internet and to protect children from unsuitable online material.” (HR 774 IH, unpaginated) By combining “Internet freedom” with “child protection,” supporters of this bill hoped to make it clear that their efforts to keep material out of the hands of children would not interfere with the rights of adults to engage in protected speech (an important lesson of the defeat of the CDA). This bill added an interesting twist to the debate by mandating that “An Internet access provider shall, at the time of entering an agreement with a customer for the provision of Internet access services, offer such customer, either for a fee or at no charge, screening software that is designed to permit the customer to limit access to material that is unsuitable for children.” (ibid)

In addition, individual states have the ability to pass their own content control laws. “Legislation restricting speech on computer networks has been signed into law in Connecticut, Georgia, Maryland, Montana, Oklahoma, and Virginia; and additional legislation is pending and will very likely be signed into law in Alabama, California, Florida, Illinois, Maryland, New York, Oregon, Pennsylvania, and Washington.” (National Writers Union, 1995, unpaginated) At the time the CDA was being debated, the American Civil Liberties Union claimed to be monitoring bills being proposed in 13 states. (undated, unpaginated)

Moreover, the protection of children is not the only rationale behind legislative attempts to control the content of the Internet. One bill would have made it illegal to transmit information about the making of bombs (the bill, S. 735, and commentary on the bill (Center for Democracy and Technology, 1995) can be found on the Web). There have also been attempts to revive the Comstock Act, first enacted in 1873, which made it illegal to discuss any aspect of abortion, in order to outlaw speech about abortion on the Internet. (Schroeder, 1996, unpaginated) Since they were both efforts to ban speech which was protected by the First Amendment, and which could readily be found in other media, these laws would likely not have survived a court challenge had they been enacted.

Censorship in the Rest of the World

American government attempts to control online speech are noteworthy because the United States has a long history of supporting freedom of speech, and the country frequently holds itself up as a model for the rest of the world. However, there are many countries in which attempts at government control over digital communications networks are much more repressive than those in the United States. This section, which is not meant to be comprehensive, will look at some of the attempts to control speech on the Internet around the world.

In 1996, German authorities asked CompuServe, an international Internet Service Provider, to stop carrying 200 newsgroups which public prosecutors in that country had deemed illegal; the company complied. Unfortunately, “Since CompuServe’s software did not initially make it possible to differentiate between German subscribers and others for access to newsgroups, CompuServe suspended access to a number of newsgroups to all its subscribers world-wide…” (European Union Action, 1999, unpaginated) While the prosecutors claimed to be targeting pornography, the effect of their action was to stop CompuServe clients from accessing information on a wide variety of subjects. According to Anna Eshoo, who was, at the time, a Democratic member of the House of Representatives from California, “Among the items that CompuServe is being forced to hide from its four million users are serious discussions about Internet censorship legislation pending in Congress, thoughtful postings about human rights and marriage, and a support group for gay and lesbian youth.” (1996, unpaginated)

This effort was doomed for a variety of reasons. CompuServe clients outside Germany complained that they were no longer getting newsgroups which were perfectly legal in their countries. Moreover, “CompuServe users still of course had access to the Internet and could therefore connect to other host computers that carried the forbidden newsgroups.” (Human Rights Watch, 1996, unpaginated) Eventually, CompuServe improved its software such that it could keep specific newsgroups from the citizens of specific countries, and only blocked Germans from accessing the newsgroups the German prosecutors had asked to be blocked. For their part, the prosecutors relented and allowed all but five of the newsgroups to be reinstated.

You might assume that the lesson to be learned from this experience was that governments didn’t have as much power to control content on the Internet as they might like. In fact, representatives of the European Union, meeting in order to discuss how it should approach Internet regulation, came to a different conclusion: “This demonstrates that there is a need for co-operation between the authorities and Internet access providers in order to ensure that measures are effective and do not exceed what is required.” (European Union Action, 1999, unpaginated) As we have seen, many involved in the battle over the CDA argued that ISPs could not, for moral and practical reasons, be held responsible for the content on their servers which had been created by others. A discussion paper by and for members of the European Union suggests otherwise:



Because of the way in which Internet messages can be re-routed, control can really only occur at the entry and exit points to the Network (the server through which the user gains access or on the terminal used to read or download the information and the server on which the document is published)… [Therefore, if] the illegal content cannot be removed from the host server, for instance because the server is situated in a country where the authorities are not willing to co-operate, or because the content is not illegal in that country, an alternative might be to block access at the level of access providers. (ibid)



Nor is the European Union alone in this approach. Singapore, for example, treats the Internet like a broadcast medium, licensing service providers on the condition that they do not carry material unacceptable to the government. (Human Rights Watch, 1996, unpaginated) In South Korea, “Local computer networks will be asked to prohibit access by local subscribers to banned sites, according to the Information and Communications Ethics Committee of the Data and Communications Ministry.” (ibid) As Human Rights Watch points out, “Censorship efforts in the U.S. and Germany lend support to those in China, Singapore, and Iran…” (ibid)

Many governments use technical means to try to control what information their citizens can access. “Saudi Arabia, Yemen, and the United Arab Emirates impose censorship via proxy servers, devices that are interposed between the end-user and the Internet in order to filter and block specified content.” (Human Rights Watch, 1999, 1) To get around this, citizens of these countries can dial into servers in other countries which do not filter communications. However, international phone rates in these countries can be high enough to ensure that only the richest citizens will be able to pursue that option.
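The proxy arrangement described in that passage can be sketched in a few lines of code. What follows is a toy illustration, not a description of any country’s actual system: it assumes plain, unencrypted HTTP and a browser configured to send all of its requests through the proxy, and the blocked hostnames are hypothetical.

```python
# A toy sketch of a censoring proxy: every request from the user's browser
# passes through this server, which refuses requests for blacklisted hosts
# and relays everything else.  Handles only plain HTTP GET requests.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse
from urllib.request import urlopen

BLOCKED_HOSTS = {"banned-politics.example", "banned-news.example"}  # hypothetical

class CensoringProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # A browser configured to use an HTTP proxy sends the full URL here.
        host = urlparse(self.path).hostname or ""
        if host in BLOCKED_HOSTS:
            self.send_error(403, "Access to this site is not permitted")
            return
        # Otherwise fetch the page on the user's behalf and relay it.
        with urlopen(self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), CensoringProxy).serve_forever()
```

The sketch also makes the workaround described above obvious: the proxy only sees traffic that is routed through it, so a user who dials a modem directly to a foreign access provider bypasses the blocklist entirely, at the cost of an international phone call.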

Some countries have extended existing laws to the online world. For example, “Internet regulations in Tunisia explicitly extend criminal penalties for defamation and false information to online speech.” (ibid, 3) Other countries, while they have not developed laws or regulations specific to the Internet, apply existing laws to it: ” Jordan and Morocco…” for instance, “have laws that curb press freedom and those laws, such as the ones that prohibit defaming or disparaging the monarchy, narrow the boundaries of what can be expressed online.” (ibid) Finally, as this last example suggests, some countries attempt to control content on the Internet which is specifically political: “The governments of Tunisia, Bahrain, Iran and the United Arab Emirates are among those that block selected Web sites dealing with politics or human rights, thus preventing users in their respective countries from accessing them.” (ibid, 4)

Attempts by governments to censor material on the Internet have two effects on writers. As we saw in Chapter Two, writers from countries other than the United States were underrepresented in my survey. A contributing factor to this may be that stricter censorship laws in other countries inhibit the posting of certain kinds of information. The other effect is that, despite the feeling that some writers have that publishing online makes everybody in the world connected to the Internet part of their potential readership, the actual readership for their stories is much smaller for reasons that have nothing to do with the technical aspects of the medium and everything to do with politics.

* * *



Dirty books today
Are bold and getting bolder
For smut, I’m glad to say,
Is in the mind of the beholder
When correctly viewed
Everything is lewd
I could tell you things about Peter Pan
And the Wizard of Oz, there’s a dirty old man! (Lehrer, 1965)



As we saw at the beginning of the chapter, governments will always try to control new media. Sometimes enlightened legislators pull back from such efforts; at other times, the courts strike them down. With regard to the Internet, it has been argued that the international reach of the medium itself makes local and national government regulation difficult, if not impossible. During the battle over the Communications Decency Act in the US, for instance, Jerry Berman of the Center for Democracy and Technology argued, “I don’t know where Sen. Exon downloaded the materials that he found abhorrent, but if they’re downloaded from Sweden or they’re downloaded from Denmark, which looks exactly like any U.S. site, any law that he passes will not reach it.” (“Focus — Sex in Cyberspace,” 1995, unpaginated) Even a pro-CDA representative had to admit that “the internet is global. How could we regulate pornography when foreign countries are producing 30% of it?” (The person went on to answer his own question: “Well, America has always been the policeman of the world. It has many foreign policies tools to enforce such a law.” (Gensler, 1997, unpaginated)) Setting aside the question of whether or not the United States has the right to enforce its morality on other nations, Gensler acknowledges an important point: how does an international communications system such as the Internet affect the ability of nation-states to control what information their citizens can access? This is the subject of the next section.

Problems with Government Regulation 1: Jurisdictional Disputes

In 1993, Paul Bernardo was charged with the sexual abuse and murder of Kristen French and Leslie Mahaffy. His accomplice, Karla Homolka, cut a deal with the Crown: in exchange for a reduced sentence, she agreed to testify against Bernardo. In a case of really bad planning, Bernardo’s trial did not take place until over a year and a half after Homolka’s. Realizing that if the details of the Homolka trial were made public, the jury pool for the Bernardo trial could be poisoned, Mr. Justice Francis Kovacs of the Ontario Court’s General Division placed a ban on the publication in Canada of any of the details revealed in the Homolka trial. The ban was to last until the Bernardo trial.

At the time, I was learning about computer mediated communications networks, particularly the Internet. I had heard rumours that details of the Homolka trial could be found there. Curious about this possibility, I used Archie and anonymous ftp (the World Wide Web had yet to be given its convenient graphical interface) and found an American newspaper report of the trial in a computer at the University of Buffalo. The whole procedure took me approximately 30 seconds. (I had no interest in the trial itself, so I gave the file to a friend, who was outraged enough for the both of us.)

The belief at the time was that many people had used the Internet to obtain the forbidden information, and that they had distributed it to many more people. “Despite the publication ban [on information on the trial of Karla Homolka], various Internet newsgroups posted details of the case… the information was freely circulated in the U.S. and found its way back north of the border electronically.” (Johnstone, Johnstone and Handa, 1995, 151) Having information about the trial was not, in itself, a crime (only publishing such information was). However, people who got details of the crime over the Internet flouted the intent of the Court’s ruling, which was to ensure that enough people remained unaware of such details that an unbiased jury could be impaneled for Bernardo’s trial. Because it was so easily circumvented, the ruling came in for much scorn (as do traffic rules which are difficult to enforce), and exposed the entire justice system to ridicule.

Even if a legislative body passes laws to control content on the Internet which hold up in its country, it would still be faced with the problem of jurisdiction. The Internet is a communications system which spans the globe; since information flows more or less freely across borders, laws passed by individual nation-states can be easily circumvented. Worse, since laws passed in one country have no force in other countries, even if a national government can control what its citizens put on the Internet, it cannot control what the citizens of other nations put there.

One way in which computer networks may undermine governments is in the way they allow individuals to act in defiance of laws, making those laws difficult to enforce. The Homolka trial experience is one example of this.

Soon after the trial of Karla Homolka, a newsgroup was set up on the Internet which contained information on it, alt.fan.karla-homolka. “This newsgroup was started as something of a joke on June 14, 1993, by a University of Waterloo student called Justin Wells who, upon seeing a photograph of Mr. Bernardo’s estranged wife, decided ‘she’s a babe’ and that she needed a fan club. Soon, however, as the horror of the charges against the two became clear, the newsgroup took on a different tone.” (Kapica, 1995, A13) The newsgroup soon came to include “not only discussion of the case but also evidence presented at the trial in which Ms. Homolka was convicted of two counts of manslaughter in the sex slayings…” (Gooderham, 1993, A5) Soon after the trial, it was reported that, “96 different articles have been posted on the Homolka newsgroup, including discussion of the Canadian and U.S. judicial systems, sordid rumours about the case and the text of an article published last week in The Washington Post.” (ibid)

In order to comply with Judge Kovacs’ ban, some Internet Service Providers and universities blocked access to alt.fan.karla-homolka. This was sometimes regarded as an unwelcome attack on free speech. When, on legal advice, Mark Windrim, owner/operator of the MAGIC Bulletin Board Service banned discussion of the Homolka and Bernardo trials online, he earned “a flood of hate E-mail” from subscribers. (Clark, 1994, B28) When the University of Toronto blocked access to newsgroups with information on the trial, the student newspaper The Varsity published a step-by-step guide on how to circumvent the block. According to then-Varsity editor Simona Chiose, “We just wanted to show that despite the university’s effort to censor the information, it can still be obtained.” (Memon, 1994, A8)

For a number of reasons, attempts to block information on the Homolka trial were largely unsuccessful in stopping the banned information from circulating. Writing about the ban at the University of Toronto, for instance, Mary Gooderham pointed out that “the university brings in more than 4,200 other newsgroups, and some of those include the same information as the Homolka one. A newsgroup called alt.journalism, for instance, includes the Washington Post article.” (1993, A5) She went on to state that commercial services made the information available: “CompuServe, one of the largest private on-line computer services, offers the Washington Post article to its subscribers…” (ibid) Furthermore, even if access to the newsgroup at one server was blocked, “any Netsurfer with a little wit could find a ‘mirror site’ — a computer carrying the same newsgroup — in the United States, where the publication ban is not in effect.” (Kapica, 1995, A13) Simply renaming the newsgroup would have gotten around those who were attempting to contain the information; in addition, “users who have had it blocked also have the option of receiving all of the information by electronic mail.” (Gooderham, 1993: A5)

The result of these and other methods around the ban was, according to the Ottawa Citizen, that “26 per cent of those polled knew prohibited details of the Teale [Bernardo]-Homolka trial…” (Wood, 1994)

Coverage of the Homolka trial points out the difference between the Internet and traditional print media as disseminators of information. “A story on the case published this week in Newsweek magazine, titled ‘The Barbie-Ken Murders,’ which was not included in Canadian copies of Newsweek, appeared Sunday on the New York Times Special Features wire service.” (Gooderham, 1993, A5) Although the publishers of Newsweek voluntarily complied with the ban in their print publication, they could not control who could read their article when it was digitized. An even starker example of the difference occurred with a publication which, given that it considers itself part of the vanguard of the digital revolution, should have known to be more careful: “…a single sentence in ‘Paul and Karla Hit the Net’ — a 500-word article on Canadians tapping Internet for banned detail on the Karla Homolka trial — triggered removal of 20,000 Wired mags from retail racks nationwide…distributors in Victoria and across the country scurried to slap stickers over the offending passage in each copy before returning the periodical to the shelves.” (Wood, 1994)

Governments used to be able to control information in traditional print media because, in the worst case, they could seize physical copies of the information, punishing those who were distributing it. Because digital information has no physical form, it is much more difficult to contain, making rules about who can access it much harder to enforce. Although some governments may attempt to control digital information by controlling the physical infrastructure (ie: intervening at the level of service providers), the example of the ban on information on the Homolka trial suggests that this may not be as simple as it has been for previous media (for instance, television).

Another example of a national government being forced to come to terms with new media took place in Serbia, when the Milosevic government attempted to outlaw information which opposed its public line on dissident groups. According to one report, Milosevic was largely successful in controlling traditional media:



In cities now controlled by the opposition, more than 50 TV and radio stations have been closed by the Serbian police on the grounds that their licences were not in order, eliminating alternatives to the heavily controlled propaganda machine of state TV.

And the weekly magazine Nin, considered the most reliable and most serious publication in Serbia, has a new editor-in-chief, Milivoj Glisic, and now embraces a more Serbian, nationalistic editorial policy. A third of the journalists have left in protest. (Perlez, 1997, A9)



According to Dusko Tomasevic, the Milosevic government was not able to shut down news of the regime which made its way onto World Wide Web sites on the Net: “‘The police told students to shut it down, but they cannot,’ Tomasevic says, subdued, matter-of-fact. ‘We have mirror sites now in Europe and North America, and if they shut down the Belgrade server we can directly modem the information overseas. To stop that they will need to shut down every telephone in Serbia — which is impossible.'” (Bennahum, 1997, 168) As Tomasevic claims, anybody with access to the technology can transmit forbidden information from their computer directly to a computer outside their country (and, presumably, outside the control of the government of their nation). Moreover, once the information has been transmitted, it is rapidly disseminated to computers in various nations throughout the world, rendering subsequent control of the source moot.

One of the few remaining independent voices in the region, the radio station B92, had its signal repeatedly jammed before being completely shut down by the Milosevic regime. (Reuters, 1997, A11) In the past, this may have permanently silenced the radio station, but, as it happened, this was not the case:


On December 3 [1996], the Net briefly captured center stage in Belgrade when the Milosevic regime took Radio B92 off the air. B92, then Belgrade’s only radio station that wasn’t under state control, had for two weeks been broadcasting updates on the growing protests in the streets. When Milosevic unplugged B92, the broadcasts were rerouted via the Net using RealAudio. The Voice of America and the BBC also picked up the dispatches, resending them to Serbia via shortwave. Two days later, Milosevic allowed B92 to broadcast again, giving the opposition an important symbolic victory, and inspiring students to start calling their struggle ‘the Internet Revolution.’ (Bennahum, 1997, 168)



Because of their centralized nature, it was once possible for a government to physically seize television or radio transmitters which were used to disseminate information of which the government did not approve. The Internet, which is decentralized and has innumerable points of entry (not only telephone lines, but cable, satellite and, perhaps in time, even power lines), is far more difficult to police in this way. As we saw with print, methods of controlling the medium of radio which once worked are made highly problematic by new digital media.

Some countries are trying. China, for instance,



is in the midst of developing a large academic computing network to link more than one thousand educational institutions by the end of the century. There is only one twist to this network. Unlike American networks, with multiple electronic routes from campus to campus, all traffic in this Chinese network will have to run through Beijing’s Quinghua University. Poon Kee Ho thinks he knows why. The Chinese academic network will be technically unsound, but with a choke point at Quinghua University, government officials ‘can do what they want to monitor it or shut it down’… (Wresch, 1996, 147)



China seems, in fact, to want to return computer networks to the hub-and-spokes model of telephone connectivity in order to be able to exert control over them. Politically, there can be no doubt that the Chinese government has the will to carry this out: as the Tiananmen Square massacre indicates, it is more than willing to defy international opinion to achieve internal political ends. Moreover, the example of Singapore, which has extensive international economic ties despite having repressive laws on Internet use, suggests that governments can attempt to control information flow through computer networks with few repercussions to international relations. (Gibson, 1993)

That having been said, it must be pointed out that the technology works against such centralized control. For one thing, the volume of information passing through China’s system is likely to be huge, with perhaps millions of messages a day. The computing power necessary to monitor such traffic is mind-boggling. Moreover, sifting legitimate from illegitimate communication is a logistical nightmare. Simple programmes can be written which look for certain words (e.g., “democracy”) and tell the people running the system in which messages such words occur; but a huge bureaucracy would have to be created to sort through the flagged material to determine what was innocent communication and what was politically unacceptable.
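The flagging itself is trivial to implement; the following sketch (mine, not drawn from any source cited here) shows the kind of keyword-matching programme described above in a few lines of Python. The watch list, the sample messages and the function name are all invented for illustration.

BANNED_WORDS = {"democracy", "protest"}   # hypothetical watch list

def flag_messages(messages):
    """Return (index, matched words) for every message containing a watched word."""
    flagged = []
    for i, text in enumerate(messages):
        hits = {word for word in BANNED_WORDS if word in text.lower()}
        if hits:
            flagged.append((i, hits))
    return flagged

sample = [
    "Meeting to discuss the harvest schedule.",
    "Students call for democracy and an end to censorship.",
]
print(flag_messages(sample))   # flags only the second message; a human
                               # would still have to judge its intent

What the sketch cannot do is decide intent: at millions of messages a day, it is the human sorting of the flagged material, not the matching itself, that would require the bureaucracy described above.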

Even if such a system could be set up, Ho’s fear that the Chinese government will shut down the country’s connection to the Internet is probably misplaced. China is currently attempting to increase its educational and corporate connections with the outside world as part of a larger process of helping it to function within the modern world economy. To the extent that the technology is an important part of this process, shutting China’s Internet connection completely would seriously damage these efforts. In this way, unwanted information is somewhat protected by the nation’s need for certain kinds of information; or, as William Wresch eloquently puts it: “Links to the world are innocent. The highway doesn’t know if it is carrying salvation or slaughter.” (1996, 158) As information transfers become increasingly global, as well as increasingly important for international business, the possibility of shutting down local information networks for political reasons becomes increasingly remote.

Even where laws are difficult to enforce, some argue that they still have value. Taylor refers to symbolic legislation, whose purpose is “more ideological than instrumental.” (in press) One major characteristic of symbolic legislation is that “it should espouse a particular social message irrespective of the law’s likely ability to enforce that message.” (ibid) In this way, it can be argued that the ban on information on the Homolka trial, to take one example, should have stayed on the books, even after it became clear that it was difficult to enforce, so that the government would be seen to be upholding the ideal of a fair trial for the accused. (Ideological purposes need not be benign, however; the Milosevic government of Serbia may outlaw some forms of communication in order to seem to be maintaining control over information in its country even if there are holes in such a ban.)

The danger of symbolic legislation is that it can bring a government into disrepute by making its power to rule a subject of ridicule. This happened in Quebec when local computer company Micro-Bytes Logiciels ran afoul of the Office de la langue française, the government agency charged with protecting the French language in the province, resulting in the company removing most of its home page from the World Wide Web. (“Language Rules,” Montreal Gazette, 1997) According to the OLF, the company’s English-only Web site was a violation of Section 52 of Quebec’s French Language Charter, which reads: “Catalogues, brochures, fliers, commercial directories and all other publications of the same type must be produced in French.” (Beaudoin, 1997, B5)

Reaction to the move was largely negative, the following excerpt from a column in a Montreal alternative weekly being typical: “Where the rest of the world sees a multimedia free-for-all that transcends language, nationality and border, those, um, marvelously iconoclastic individuals at the OLF see printed brochures and neon signs over shoe stores. In NDG. It’s all print advertising to them, and as far as they’re concerned the people who put those print ads on a network that spans the entire globe only intended them to be read by Quebecers.” (Scowen, 1997, 6) The issue was even brought before the federal government when Liberal Member of Parliament Clifford Lincoln scoffed, “The Internet doesn’t belong to Quebec. This isn’t a television channel or a radio station. It’s a totally different entity…” (Contenta, 1997, A2) Beaudoin is not unaware of this: “It has been argued that Quebec cannot exercise jurisdiction over companies located outside its borders that put advertising on the Internet. This is true. But this is no reason, in our view, to abdicate our responsibility to protect Quebec consumers to the extent that we are able.” (1997, B5) This is a clear statement of the symbolic nature of the law. [4]

Two qualifications must be made here. The first is that the French language press seemed more favourably disposed to the ruling on Micro-Bytes Logiciels than the English language press, likely because it has more sympathy for the government’s goal of protecting the French language. The other is that the French language laws are frequently a source of ridicule in Montreal’s English language press. This underscores the point, though, that the more a government’s laws are ridiculed, the more its legitimacy tends to be undermined. It should also be pointed out that jurisdictional disputes are not limited to nation-states; smaller, more localized governments may also attempt to pass laws which affect the flow of information over digital networks despite how difficult they may be to enforce.

Examples of Internet related challenges to the authority of governments are multiplying. An American company wanting to sell home pregnancy kits over the Internet faces a problem because they cannot legally be sold in Canada. (“E-Commerce Problems,” Ottawa Citizen, 1998) In December, 1996, two organizations dedicated to protecting the French language in Europe sued the Georgia Institute of Technology’s branch in Lorraine because its Web site contained only English (they recently settled out of court). (“English-only Approved for Georgia Tech Web Site in France,” 1998) “During the 1991 attempted coup in Russia…programmers used their computers to keep in touch with the rest of the world, even though the insurgents controlled the centralized radio, television, and newspaper facilities. Messages traveled from Moscow to Vladivostock, to Berkeley, to London, and back, while the technologically illiterate old-timers were powerless to stop them. In the old days, it was easy for Moscow to prescribe what the entire country thought simply by controlling the central broadcasting stations. Not anymore.” (Rawlins, 1996, 80)

As a result of these and other events, scenarios illustrating the uncontrollable nature of electronic communications networks such as the Internet abound. Nicholas Negroponte, to cite one example, wrote,


If my server is in the British West Indies, are those the laws that apply to, say, my banking? The EU has implied that the answer is yes, while the US remains silent on this matter. What happens if I log in from San Antonio, sell some of my bits to a person in France, and accept digital cash from Germany, which I deposit in Japan? Today, the government of Texas believes I should be paying state taxes, as the transaction would take place (at the start) over wires crossing its jurisdiction. Yikes. As we see, the mind-set of taxes is rooted in concepts like atoms and place. With both of those more or less missing, the basics of taxation will have to change. (1998, 210)



This particular scenario may be overstated. As economist Paul Krugman points out, the movement of people is far from free, and as long as people are physically rooted to one area in one country, it remains possible to make them submit to paying taxes. (Kevin Kelly, 1998, 146)

There are other scenarios which point out the difficulty of local attempts to regulate an international communications system: “Questions that once had clearcut answers are now blurring into meaninglessness. Who should be involved in a computer chase? Who has jurisdiction? If you invade someone’s computer, is that burglary or trespass? Where should the search warrant be issued? And what for? What happens if someone living in country A commits a crime in countries B and C using computers in countries D, E, and F?” (Rawlins, 1996, 83) Or again: “To censor Internet filth at its origins, we would have to enlist the Joint Chiefs of Staff, who could start by invading Sweden and Holland.” (Noam, 1998, 19)

One obvious solution to this problem is for national governments to enter into agreements with each other to regulate international digital communications networks. The likelihood of the nations of the world, with their wildly disparate cultures, agreeing on policies for policing Internet content seems remote; even if it were possible, methods of circumventing such policies make their enforceability by no means certain. Finally, there will be a hidden cost to such efforts. “In order to combat communicative acts that are defined by one state as illegal, nations are being compelled to coordinate their laws, putting their vaunted ‘sovereignty’ in question.” (Poster, 1995, unpaginated) Ironically, attempts to protect national sovereignty by controlling Internet content may, in this way, lead to it being undermined.

Another problem with any sort of regulation of Internet content is, according to many observers, that the freedom to publish on the Internet is what makes it such a dynamic source of information. “To impose local norms on media that are inherently unlocal is to cripple the media themselves… With enough sufficiently different local norms brought into play, the network will be permitted to transmit nothing but mush.” (Noam, 1998, 46) The irony here is that efforts to “save” the Internet (from, say, purveyors of pornography) may reduce it to something not worth saving.

Some commentators suggest that computer networks will make nation states obsolete. Don Tapscott, for instance, writes: “There is evidence that the I-Way will result in geopolitical disintermediation, undermining the role of everything in the middle, including the nation-state. That is, broadband networks may accelerate polarization of activity toward both the global and the local…” (1996, 310) I am not suggesting this. As I argued previously, governments are instruments of the collective will of citizens and, as such, will continue to serve a legitimate purpose for the foreseeable future. In particular, we should expect them to continue to attempt to regulate communications media, including the Internet. However, any government attempt to regulate digital communication networks, as they are currently configured, must take into account the ways in which the technology constrains how governments can act.

Copyright

As we have seen, many of the writers surveyed in Chapter Two are concerned about whether or not traditional notions of copyright will apply to digital communications media. Some are worried that if they cannot exert copyright protections for their writing, they will not be able to make money from it once mechanisms for revenue generation are perfected. Others are concerned that, given the ease with which digital works can be copied and modified, they will not be able to control where or how their work appears without strong copyright protections.

On the other hand, those pushing loudest for stringent application of copyright to digital media are the transnational entertainment corporations that hope to reap vast profits from the Internet. The tension between these two interests, as well as other issues arising out of the application of traditional notions of copyright to the emerging medium of digital communication networks, is the subject of this section.

A Note About Terms: Are Expressions Property?

Before beginning a discussion of copyright, it is worth noting that a central term used in most such discussions is “intellectual property.” This term is largely a metaphor, comparing the right somebody has in intellectual expression to the right they may have in owning a car, a house, or any other physical commodity. I find the term misleading: it encourages the extension of concepts from the physical world to the purely ideational world, regardless of whether or not they actually fit our experience of that world.

The obvious difference between the two is their tangibility: property rights have traditionally been exerted over physical objects. Expressions of ideas, by way of contrast, may be embodied in a physical object (a book, say, or a videotape), but their essence is not physical; this is made clear by digital media, where messages are transported over vast physical networks but themselves take the form of fleeting impulses of light or electricity.

The most important aspect of the rights inherent in owning physical property is that its owner has absolute control over what is done with it. Thus, the owner of a car determines who can drive it; the owner of a house determines who can live in it and what activities are permissible within its boundaries, and so on. This is necessary because there is only the one object; if a dispute arises in which it becomes necessary to determine who has the right to decide what will be done with the object, the concept of who “owns” it, whose property it is, is invoked.

When it comes to the expression of ideas, we have seen (and will have cause to consider again) that this is not the case. When somebody buys information, the person who created it can retain a copy. Moreover, information proliferates: whether emailed to a thousand people on a list or talked about around an office water cooler, information is soon distributed beyond the ability of its creator to control. In the last chapter, we saw some proposed technical solutions to this problem in the digital world; in this chapter, we will look at a legal solution. It is unclear whether any of these solutions will work (indeed, some people believe they will be fruitless and might as well be abandoned); in fact, as we shall shortly see, extending such power too far may be to the detriment of society as a whole. I would suggest part of the reason control of ideas and the expression of ideas is so difficult is that the metaphor of property which is the basis of such attempts at control does not apply to information.

For this reason, although many of the thinkers I quote in this section use the term “intellectual property,” I do not.

What Is Copyright?



If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself but the moment it is divulged, it forces itself into the possession of everyone, and the receiver cannot dispossess himself of it…. He who receives an idea from me, receives instructions himself without lessening mine as he who lights his taper at mine, receives light without darkening me. That ideas should be spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature… (Thomas Jefferson, quoted in Samuelson, 1998, unpaginated)



Our commonsense understanding of the way literature works suggests that authors should be rewarded for their work. In fact, such an understanding took a long time to develop, and, even now, long after regulatory regimes were established to protect the interests of authors, their aims and effects remain poorly understood.

As described in Chapter One, Gutenberg’s press spread throughout Europe in the late 15th and 16th centuries; the books which were printed on it tended to be ancient texts, largely the Bible, but also many of the philosophical works of the Greeks. Because the authors of these works were long dead, compensating them was not seen to be important. However, the publishers of these works had a serious economic stake in them: it was not uncommon for a publisher to go to the trouble and expense of developing a volume, only to see it exactly reproduced at lower cost by another publisher. A group of publishers banded together and successfully lobbied the British government for protection, which it granted them; in 1557, the Stationers’ Company, as this group was called, was given exclusive control over all printing and book selling in England. (Gutstein, 1999, 129)

As publishing grew, individual authors were encouraged to write new works. However, because the Stationers’ Company had a monopoly on publishing, it was able to dictate the terms under which British authors would be compensated for their efforts. Most often, the author was given a lump sum payment for a work, and had no legal recourse to any other payment, even if the book went on to become a bestseller which made a lot of money for the publisher. Note this economic imbalance, which favoured the interests of publishers over authors and was supported by statute: it will appear again in modern times.

The Stationers’ monopoly on publishing lasted for 150 years, until the passage of the Statute of Anne in 1709. (ibid, 130) The rights enshrined in this Statute were to be reproduced in the American Constitution some 80 or so years later. According to this latter document, copyright is intended to “promote the progress of science and the useful arts,” by giving the creator of a work, for a limited period of time, the right to control the dissemination of, and thereby profit from, that work. Several things are noteworthy about this formulation of copyright. The most obvious is that publishers are not mentioned; the most important economic stake in an original work is now identified as the author’s. (In fact, an exception has arisen: “…generally the copyright in a work is owned by the individual who creates the work, except for full-time employees working within the scope of their employment and copyrights which are assigned in writing.” (Brinson and Radcliffe, 1991, unpaginated) However, in the cases in which we are most interested, particularly individuals who put their work on a Web page, this exception does not apply.)

Perhaps more important to note about this way of looking at copyright is the idea that it is granted to artists and other creators by society for society’s benefit. “Copyright — the right of a creator to impose a monopoly on the distribution of his or her work — was originally conceived [in Canada] as a privilege bestowed by Parliament on authors to encourage the creation of new ideas, which society needed to continue its development.” (Gutstein, 1999, 3) This privilege is, in fact, severely circumscribed. For example, “Under copyright law, only an author’s particular expression of an idea, and not the idea itself, is protectible.” (Jassin, 1998, 6) Thus, you cannot copyright the idea that the sun is shining in the sky, because this would make it illegal for any other author to write anything on this very common observation. However, if you are Samuel Beckett, you can copyright the unique expression which opens the novel Murphy: “The sun shone, having no alternative, on the nothing new.” (1976, 24)

The advantage to society of limiting the benefits creators may obtain from their work is most obvious in the sciences, where ideas in a given book or paper are debated in subsequent publications. As debates in the sciences advance through this type of give and take, the boundaries of human knowledge expand (as well as the technologies which arise out of this knowledge). If scientists were allowed too much control over their creations, scientific debate would be stifled, and the public interest would suffer.

In a similar, though less well understood process, the arts are advanced by building on existing work: “…new works borrow liberally from a common store of facts, information, and knowledge that exists in the public domain in reference books, libraries, schools, government documents, and the news media, as well as in society’s stories, myths and public talks.” (Gutstein, 1999, 138) Existing information is the common property of humanity from which new works of art are forged. Some go so far as to suggest that creativity, far from being the domain of the lone artist as suggested by modern myths of creation, is necessarily “a collective process. No one has totally original ideas: ideas are always built on the earlier contributions of others. Furthermore, contributions to culture — which makes ideas possible — are not just intellectual but also practical and material, including the rearing of families and construction of buildings. Intellectual property is theft, sometimes in part from an individual creator but always from society as a whole.” (Stutz, undated (b), unpaginated)

Consider Shakespeare. Most of his plays are based on historical characters and incidents or previously existing stories; had these stories been protected by strict copyright laws, many of the greatest plays in the English language might never have been written. Or consider it from the opposite point of view. Had Shakespeare’s heirs held a copyright to his work to this day, we might never have had Burton and Taylor’s The Taming of the Shrew or 10 Things I Hate About You, McKellen’s Richard III or Welles’ Chimes at Midnight, West Side Story or some of the greatest films of Olivier or Branagh. In this way, any artist is a nexus of existing and future stories. Copyright law is intended to find a balance between the needs of the artist in the present and the debt he or she owes the past and the future.

Because it is so little commented upon, the public interest in copyright cannot be stressed enough. As Richard Stallman comments, “Progress in music means new and varied music — a public good, not a private one. Copyright holders may benefit from copyright law, but that is not its purpose.” (1993, 48) Digital technologies pose a unique challenge to existing copyright regimes.

Does Copyright Apply to Digital Media?



The grant of an exclusive right to a creative work is “the creation of society — at odds with the inherent free nature of disclosed ideas — and is not to be freely given.” (Thomas Jefferson, quoted in Gutstein, 1999, 160)



Copyright was originally meant to apply to literary works, to books, magazines and other print forms of communication. As new media developed, copyright was applied to them, so that it is now possible to copyright photographs, television shows and movies. [5] The temptation to apply the existing regime to new media as they are created is now exerting itself over digital media. However, in some ways digital media distort our existing ideas of copyright when it is applied to them.

To begin with, copyright has traditionally been applied to a work only when it has taken on a “fixed form.” “The point at which this [copyright] franchise was imposed,” writes John Perry Barlow, “was that moment when the ‘word became flesh’ by departing the mind of its originator and entering some physical object, whether book or widget. The subsequent arrival of other commercial media besides books didn’t alter the legal importance of this moment. Law protected the expression and, with few (and recent) exceptions, to express was to make physical.” (1996, 149/150) This is a necessary practicality: when a dispute over the ownership of a work takes place in a court of law, it is impossible to prove that somebody had the expression of an idea in their head at a given date; tangible proof of the existence of the work is necessary to prove who thought it up first. This is the reason many authors register their writing with appropriate guilds or government agencies before they circulate it in an attempt to sell it.

Digital media do not have this fixed quality. When you access a World Wide Web page, for instance, all of the elements of the page are stored in temporary memory on your computer. You can set the amount of memory devoted to this task at a minimum, in which case each new page will erase elements of the page(s) which preceded it. Even if you set the amount of memory devoted to this task relatively high, you can periodically clean out your cache to free up memory. What you actually see on your screen, meanwhile, is a temporary arrangement of pixels which is constantly changing. The point is that at no stage is a Web page “fixed” on your computer.
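To make this transience concrete, here is a minimal sketch (mine, not drawn from any source cited here) that fetches a page into a Python variable; the copy exists only in volatile memory, much as a browser cache entry does, and vanishes when the program ends. The address is a placeholder.

from urllib.request import urlopen

with urlopen("http://example.com/") as response:   # placeholder address
    page = response.read()      # the page now exists only as bytes in RAM

print(len(page), "bytes held temporarily in memory")
# Nothing has been written to disk; when the script exits, this copy of
# the page is gone, just as a cleared cache erases the pages it held.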

Some might argue that the page is fixed on the server on which it resides. Experience with the Web would suggest otherwise: pages are constantly being updated, moved from one server to another (with a concomitant change in URL), or taken off the Internet when, for one reason or another, the sponsor of the page can no longer maintain it. The Web is in a constant state of flux.

Furthermore, digital information relies on specific hardware and software to be readable. Already, much of the information from the early days of computing has been lost because the machines which were in use at the time no longer exist. (Contrast this with older media such as the papyrus scroll, copies of which have survived for thousands of years.) If a copyright dispute involving information created on early computers were to be brought to court today, it would be virtually impossible to prove that such information existed, even if paper tape or other stored copies were available.

Finally, the interactive nature of digital media, where each computer user develops his or her own experience by the choices they make, means that there is no single fixed work. “An especially relevant I-way question is whether transitory combinations of data, such as the results of a database search conducted at the direction of the user, are sufficiently fixed for copyright… A related problem is raised by the I-way’s interactive capacity. We are witnessing the birth of ‘you program it’ interactive entertainment systems. If a user programs a selection of programming that suits her tastes, one wonders if the user has a copyright in that selection of programming.” (Johnstone, Johnstone and Handa, 1995, 174/175)

Digital media clearly do not pass the test of fixity required to be copyrightable. Despite this, legislatures have attempted to pass laws which would make digital information copyrightable. According to the World Intellectual Property Organization, “Some copyright laws provide that computer programs are to be protected as literary works. [note omitted]” (undated, unpaginated) This offends our commonsense idea of what a literary work is. More importantly, though, it does not deal with the problem of the essentially unfixed nature of digital media.

What, after all, is a computer program? It is a set of instructions to a machine. Copyright was never intended to include such things. “Copyright does not protect ideas, processes, procedures, systems or methods, only a specific embodiment of such things. (A book on embroidery could receive copyright but the process of embroidery could not.) Similarly, copyright cannot protect useful objects or inventions. If an object has an intrinsically utilitarian function, it cannot receive copyright.” (Nichols, 1988, 40) Traditionally, legal protection for processes was provided by patent law, not copyright. However, as Nichols points out, the distinction between the two legal regimes has been blurred by the courts:



The Software Act began the erosion of a basic distinction between copyright and patent by suggesting that useful objects were eligible for copyright. In judicial cases such as Diamond v Diehr (1981), the court held that ‘when a claim containing a mathematical formula implements or applies that formula in a structure or process which, when considered as a whole, is performing a function which the patent laws were designed to protect (for example, transforming or reducing an article to a different state of things), then the claim satisfies the requirements of [the copyright law].’

This finding ran against the grain of the long-standing White-Smith Music Publishing Co v Apollo Co decision of 1908 where the Supreme Court ruled that a player piano roll was ineligible for the copyright protection accorded to the sheet music it duplicated. The roll was considered part of a machine rather than the expression of an idea. The distinction was formulated according to the code of the visible: a copyrightable text must be visually perceptible to the human eye and must ‘give to every person seeing it the idea created by the original.’ (ibid)



The analogy of a computer program to a player piano seems apt, since both are basically sets of instructions for a machine. The 1981 court decision uses some tortured logic in order to essentially overturn the earlier court’s decision.

A different approach taken by American courts is to ignore the software altogether and concentrate, instead, on the outward manifestations of programs for digital games. “Referring to requirements that copyright is for ‘original works of authorship fixed in any tangible medium’, Federal District Courts have found that creativity directed to the end of presenting a video display constitutes recognisable authorship and ‘fixation’ occurs in the repetition of specific aspects of the visual scenes from one playing of a game to the next.” (ibid, 42) Thus, the courts decided that if the same characters, backgrounds and situations arise after repeated play, they are “fixed” for purposes of copyright. In such cases, the courts didn’t require the deposit of the algorithms behind the program in order to copyright it, just a videotape of one playing of the game. (ibid) Again, this seems to be stretching the idea of copyright to fit it over something which clearly shouldn’t receive its protection.

There are other problems with the application of copyright to digital media, problems which are especially pernicious because the technologies that create them appear, at first sight, to be protecting copyrights. For example, “Several technological solutions may help control intellectual property on the Web. One popular research scheme involves ‘software envelopes’ or ‘cryptolopes’ that contain encrypted versions of the material to be displayed. These envelopes are designed so that the user is automatically billed for viewing the contents of the envelope when it is ‘opened.’ Many people believe that this is the ultimate solution to IP control; support for cryptolope systems of this sort is built into proposed revisions in the intellectual property law. [footnotes omitted]” (Varian, 1997, 33/34) There are practical problems with this system. Perhaps most important is that encryption by itself is unlikely to stop the unpaid-for distribution of proprietary material: “When books are electronic, even if they are encrypted, at some point they must be decrypted for the user to read. At that point, they can be copied. Perfectly.” (Rawlins, 1996, 58)
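Rawlins’s objection can be sketched in a few lines of Python; this is a toy illustration of an encrypted “envelope” using the third-party cryptography package, not an actual cryptolope system, and the billing step is reduced to a comment.

from cryptography.fernet import Fernet

key = Fernet.generate_key()                 # held by the envelope vendor
envelope = Fernet(key).encrypt(b"The complete text of a protected work.")

# ... the envelope is distributed, and the user is billed when it is "opened" ...

plaintext = Fernet(key).decrypt(envelope)   # decryption is required for reading
pirated_copy = bytes(plaintext)             # and a decrypted work can be copied
assert pirated_copy == plaintext            # ... perfectly

However sophisticated the billing and key management around the envelope, the moment of reading is also the moment at which a perfect, unprotected copy can be made.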

A different form of protection is offered by digital watermarks. “The practice of watermarking documents dates back to the Middle Ages, when Italian papermakers marked their unique pieces of paper to prevent others from falsely claiming craftsmanship. Today, watermarks are still used to identify stationery and stock from bank checks. Like its analog analogue, digital watermarking carries information about the source along with the content.” (Wiggins, 1997, 41) Digital watermarks can be visible to the document user, but that makes them relatively easy to erase. Digital watermarks can also be woven into documents in ways which make them relatively difficult to detect and alter by users. Watermarks do not, in themselves, prevent copying, but they do make it possible for creators with sufficient resources to track illegitimate use of their material. In conjunction with something like encryption, watermarking can be an important tool for those who wish to enforce their control over their material.
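A crude sense of how such woven-in watermarks work can be given in a short Python sketch; it hides an identifying string in the least-significant bits of raw pixel bytes, a classic (and easily defeated) technique, and every name and value in it is invented for illustration.

def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    """Hide each bit of the mark in the lowest bit of successive pixel bytes."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    stamped = bytearray(pixels)
    for i, bit in enumerate(bits):
        stamped[i] = (stamped[i] & 0xFE) | bit   # visually negligible change
    return stamped

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read the hidden mark back out of the first length * 8 pixel bytes."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length))

image = bytearray(range(256)) * 16                 # stand-in for real pixel data
marked = embed_watermark(image, b"creator-0042")   # hypothetical identifier
assert extract_watermark(marked, 12) == b"creator-0042"

Nothing here prevents copying; the mark only lets a creator who later finds a stray copy show where it came from, which is exactly the limited role the text above assigns to watermarks.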

Even more powerful tools for controlling digital material are being developed.



Thingmaker has several options for protecting a file. The most stringent is locking a file to a given server, so if anyone tries to run it off a different server, it won’t work. If you want to allow re-use, you can lock the file to prevent editing or allow only specific attributes to be edited. For example, if you wanted an animation to point to a particular Web site when clicked, that particular feature could be fixed. ‘It’s not just about stopping people from stealing your content, it’s how you control the sharing of your content,’ says Steve Barlow, chief technical officer at Parable. (ibid, 43)



At first blush, this may seem an ideal way to compensate creators. However, that is only one side of the copyright coin: the other, you will recall, is that every member of society should have as wide an access as possible to as great an amount of information as possible. This is embodied in two related exceptions to copyright laws: first sale and fair use. According to the first sale doctrine, once I have bought a book, magazine or other publication, I have the right to do with it what I will. I can lend it to a friend after I have read it. I can give it away as a gift. I can quote passages from it to people I know. The creator of the work has no right to compensation for any of these uses.

The most important manifestation of the first sale doctrine is lending libraries. They buy a copy of a variety of publications, then lend them to the public at little or no charge. This may seem unfair to creators, but, in fact, there is no evidence to support the assertion that they lose significant income to libraries or other lenders; it is more likely that people who could not afford their publications would be given the opportunity to read them by borrowing them. In any case, first sale was seen as a necessary means by which the interests of society could be represented, since it helped in the wide dissemination of ideas.

The fair use doctrine allows excerpts from existing works to be used in new works. Fair use is most frequently invoked in academic works and journalistic articles; it is seen as crucial for the development of scientific and other ideas. Fair use is what allows me to build the arguments in this dissertation by quoting from existing sources: I not only quote from the arguments of others to either refute or support them, but I quote facts in support of my own original arguments. With tightly controlled copyrights applied to digital media, it becomes possible to track such quotation and to make the quoter pay for each use of even the smallest portion of somebody else’s work. The fear is that this ability will impede the development of knowledge, since many authors will not be able to afford to pay for all of the material which they reference (I know I wouldn’t). Society would suffer as a result.

It cannot be stressed enough that first sale and fair use are not, as some contend, simply artifacts of the analog age, allowances made to the fact that control of physical forms of expression such as books was necessarily imperfect. They are necessary balances to ensure that society’s interest in the widespread dissemination of ideas is maintained. Thus, we should be wary of statements like, “First Sale and Fair Use doctrines served us reasonably well in an industrial age economy. They simply will not extrapolate to the emerging world of the Net.” (Heterick, Jr., 1997, 20) While this may be true, given the direction large copyright holders are pushing the development of online technology, the conclusion to which most people jump — that copyright should continue without first sale and fair use provisions — is not. This would strengthen the role of the copyright holder at the expense of society as a whole, to the detriment of the development of the sciences and useful arts. “The existing law ensures producers of artistic material the right to profit from their creative works, but it does not allow a creator to control who looks at the material or prevent it from being lent or circulated to others.” (NYT Editorial Staff, unpaginated) Yet, extensions of copyright law, combined with the unique properties of digital media, may do just that. A different conclusion seems more reasonable: “It is not sufficient to simply modify copyright laws and treaties to ‘include’ the new technologies, for the new technologies work in an altogether different manner and do different things than print media.” (Solomon, 1999, 125) Thus, copyright is not the right mechanism for regulating digital media, and some other regime should be developed which finds the right balance between the legitimate interests of information creators and society.

Who Benefits Most From Copyright?

In the 1990s, freelance writers (those who worked on an article by article basis without a contract) in the United States and Canada found that their work was turning up in the darndest places: online databases. In many cases, the corporate owners of the magazine or newspaper for which they wrote took their work and published it online without compensating — or even notifying — the authors. The result was inevitable.

In the United States, “In 1993, ten freelance writers sued the New York Times and other publishers over the unauthorized publication of their work through online computer services.” (Brinson and Radcliffe, 1991, unpaginated) In Canada, “In September 1996, [freelance writer Heather] Robertson launched a landmark $100-million class-action lawsuit against Thomson on behalf of any freelance writers, artists, and photographers who had sold works to the company and wanted to retain control over their electronic-publishing rights. She sought $50 million in compensation, $50 million in punitive damages, and an injunction preventing unauthorized inclusion of freelancer’s works on electronic databases.” (Gutstein, 1999, 125)

The traditional freelance writer’s contract used to go something like this: the writer agreed to allow the publication a window in which it got exclusive rights to publish a work (six months was not uncommon). The contract specified where the work was to be published. After the initial period of publication, the writer could then sell the work to another publication (often at a reduced rate since it wasn’t the first publication of the work). By publishing the work of freelance writers in electronic databases without having previously agreed with the writer to do so, the media corporations seemed to have been breaking the terms of this contract.

The reactions of the courts to these challenges have been mixed. In the United States, “A federal judge in Manhattan has ruled against freelance journalists who argued that publishers should not be allowed to reproduce their work on CD ROMs or in electronic databases without their permission and without paying them beyond what they were paid for the original material. At issue was whether or not electronic reproduction of that sort is essentially equivalent to archival versions of print media on microfilm, which are a publisher’s right under the Copyright Act of 1976.” (“Freelancers Lose to Publishers Over Electronic Reproduction,” 1997, unpaginated) Common sense would suggest that reproduction in an electronic database is a form of republishing material, not merely archiving, especially since the corporations expected to make a lot of money from their databases, but the court seemed to disagree. (At least, so far: the decision is being appealed.) In Canada, by way of contrast, in 1999 “a judge in Ontario Court General Division gave her permission [to Robertson] to launch the class-action suit.” (Gutstein, 1999, 125) [6]

Whether or not these lawsuits are ultimately successful, the publishers may already have won the war. Contracts with freelance writers now give publishers the right to republish material in digital form without further compensating the writer (in one case, that of Thomson Corporation, the new contract covers the right to reproduce material not only in existing digital media, but in any medium which may, in the future, be developed). This is outrageous: creators of information are paid penurious rates while the distributors of the information reap substantial benefits, benefits which arguably used to accrue to the writer, whose market for reselling articles dries up because the first publisher now has the right to keep the material in circulation in perpetuity. What was the writers’ response? “[G]iven their low average annual income and dependence on a small number of publishers, most freelancers caved in to the unfair demands of the publishers and signed these new contracts.” (ibid, 128) Thus, we have returned to the condition of the Stationers’ Company, which, because of a fundamental inequality of economic power, was able to exert its will over writers, dictating the terms under which it would publish their work, to its tremendous economic benefit.

When we think of copyright protection, we usually think of it in reference to a lone writer slaving away in solitude perfecting her or his work of prose. Such writers are the lifeblood of the information society, to be sure. However, major corporations are increasingly using their economic power to undermine the individual author’s rights in a work. In this situation, “Such a discourse [of the romantic creator] is being used cynically to protect existing information monopolies.” (Boyle, 1992, unpaginated)

At the same time as entertainment conglomerates are asking for, and largely receiving, rights from authors and other information creators, they are also demanding increasing protection for their copyrights from governments. In the United States, for instance, the length of time a copyright could be held by a corporation was, until recently, 75 years. (In the original copyright laws, by way of contrast, the length of protection was only 14 years.) However, this was not enough for some corporations: “Disney’s crown jewels are its stable of film classics, which it repackages and reissues anew for each generation of young people. Disney was a key supporter of the 1998 Sonny Bono Copyright Term Extension Act, which extended copyright protection for an additional twenty years. Given that copyright protection for Mickey Mouse was due to expire in 2004, this law provided Disney with a bonanza of perhaps $1 billion.” (Gutstein, 1999, 134)

One of the rationales for extending copyright to 95 years was “to provide for at least the first generation of an author’s heirs, and…since people are living longer, a longer period of protection is needed.” (ibid, 160) However, it’s hard to justify this based on the original intention of copyright law: recall that the purpose was to give creators an economic incentive to create. Nowhere does it state that the heirs of creators have any right to be financially rewarded for their parents’ work; certainly, nobody argued that benefits for one’s heirs were an important spur for individuals to create original work. In this way, the larger public good is not served by extending copyright for their benefit.

Moreover, the public store of ideas is diminished by the extension of copyright. I was born in 1960. If I live 60 years, I will die in 2020. At no time, in the course of my life, will Mickey Mouse be in the public domain; so, even though the character has become a cultural icon, I will not be able to use it in my work. Far from encouraging creativity, the copyright extension will curtail the creation of new work, to the detriment of society. “‘If I could stop this bill by giving perpetual copyright to Mickey Mouse, I’d do it – not that they deserve it: Disney doesn’t pay royalties for Pocahontas and Snow White, so why shouldn’t Mickey Mouse go into the public domain?’ says Dennis Karjala, a law professor at Arizona State University. ‘I’m more worried about the vast run of the rest of American culture that is being tied up. There will be no additions to the public domain for 20 years if this passes. We’ll have another 20 flat years where everyone has to work with what is already in the public domain. The existing cultural base on which current authors can build simply can’t grow,’ he adds.” (Chaddock, 1998, unpaginated)

According to filmmaker John Greyson, whose film Uncut partially deals with copyright issues, a similar problem occurred in Canada. “Sheila Copps famously rewrote Canada’s copyright law and said it was all in the name of artists. In fact, it was all in the name of corporations who treat art as property. It’s really property law — it’s not law that recognizes the actual process of creation.” (Burnett, 1998, 16)

These corporations are increasingly taking their concerns to international trade fora, particularly the World Intellectual Property Organization (WIPO) and the General Agreement on Tariffs and Trade (GATT), in order to gain greater protection for their copyrights. These protections often go well beyond those they were receiving from national copyright laws.

Article 7 of the World Intellectual Property Organization Copyright Treaty, for example, “defines the right of creators to receive royalties whenever their copyrighted works are reproduced, directly or indirectly, ‘whether permanent or temporary, in any manner or form.’ By expanding the right of reproduction to include indirect reproduction, the article prohibits the creation of temporary copies, unless authorized.” (Gutstein, 1999, 148) This could be interpreted to mean that when you download a Web page to the cache in your computer’s memory so that you can look at it on your screen, you must pay the Web page designer or you will be violating his or her copyright. The requirement that a work must be in a fixed form to be eligible for copyright has been eliminated, although the implications of doing so (can I now copyright an expression of an idea I hold in my head?) have been ignored.

Another section of the WIPO Treaty, Article 10, “creates a new and additional copyright for any work that is simply made available to the public over the Internet. The article goes so far as to state that ‘any communication to the public’ has to be authorized. Even if someone puts a picture, article, or other work on their Web site, in other words, they have to do more than get the permission of the person who created the work. They have to pay royalties for making the work available to the public — even if no one picks it up.” (ibid, 150) One of the rationales for copyright was to give creators financial rewards for their efforts; anything which diminished their financial returns could be seen as in violation of their copyright. With this Article, creators demand control over their work whether or not its use by others affects their income. This could have a deleterious effect on fair use provisions of copyright.

Moreover, these international agreements take precedence over national laws. “In addition, the [WIPO] legislation would curtail the authority of [European Union] nations to enact or maintain fair or private-use privileges in their national laws.” (Samuelson, 1998, 102) Thus, at the same time as they are pushing for extensions for their own copyright interests, corporations are attempting to limit the protections national governments can give to the public interest in copyrights.

When considering GATT’s role in the international movement to apply strict copyright rules to digital communications networks, it is useful to remember that GATT is a body attempting to govern international trade; its concern for culture is, at best, minimal to non-existent. “On the international level we have seen the use of the GATT to turn intellectual property violations into trade violations, thus codifying a particular vision of intellectual property and sanctifying it with the label of ‘The Market.’ [note omitted]” (Boyle, 1997, unpaginated) Furthermore, while WIPO is a separate entity whose goal is to “modernize and render more efficient the administration of the Unions established in the fields of the protection of industrial property and the protection of literary and artistic works…” (WIPO, 1993, unpaginated), the organization supplies information on copyright to the World Trade Organization. (WIPO, 1995, unpaginated) As one commentator put it, “In a global economy of ideas, free speech is free trade, and vice versa.” (Browning, 1997, 188)

“Is copyright an aspect of culture or industrial development?” Gutstein relevantly asks. “Industry Canada’s job is to promote the latter, and it was Manley’s agency and not Sheila Copps’s heritage department that headed the Canadian delegation to the 1996 World Intellectual Property Organization in Geneva, where Internet copyright treaties were approved.” (1999, 78) If copyright is seen solely as an issue of trade, then governments will do well by the corporations within their borders to ensure that they get the maximum economic reward for their works. On the other hand, if copyright is seen as a means of developing local culture, then the right of copyright holders to benefit from their work has to be balanced against the right of society to have the widest possible set of ideas disseminated to the widest possible number of citizens. The economic model of copyright does not accommodate the greater public good.

This explains why my focus has been largely on American copyright history. Most countries have different histories when it comes to copyright, but, owing to hard negotiations in international fora, their laws are becoming “harmonized” (read: the same). Thus, what in the US is called “fair use” is in Canada called “fair dealing,” but the concept is essentially the same. Until the Sonny Bono extension, our terms of protection (life plus 50 years for individuals, 75 years for corporations) were the same. The concept of work for hire is the same. And so on. (Canadian Intellectual Property Office, 1998, unpaginated) Because the multinational corporations pushing for stricter copyright laws internationally are based in the United States, harmonization is frequently code for Americanization. To use an obvious example: most countries will now have to extend their copyright protections to 95 years, or risk the wrath of the artists in their countries whose works are now not as protected as those of the Americans. For this reason, it is important for everybody concerned about the issue to understand American copyright.

To be sure, some individual creators will benefit from the extension of copyrights, especially if they can successfully use the Internet to distribute their work themselves. However, many more will lose, partly because raw material which used to be available to them for their work will be owned and controlled by corporations; partly because corporations have far more resources than individuals to bring to bear to enforce stricter copyright laws; partly because corporations will always have a huge advantage in negotiating contracts. Entertainment conglomerates, not individual creators, benefit the most from current trends in copyright legislation. As director Greyson put it: “It’s pretty urgent in this digital age to grapple with what [copyright] law says versus what law does. Copyright law always pretends to be on the side of protecting artists and often does just the opposite…” (Burnett, 1998, 16)

Alternatives to Copyright



I am not an advocate for frequent changes in laws and constitutions. But laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths discovered and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy… (Thomas Jefferson, quoted in IITF Working Group on Intellectual Property Rights, 1995, unpaginated)



Given the control which major corporations have over the publishing industry, it should come as no surprise that “only a very few individuals make enough money from royalties to live on. Most of the rewards from intellectual property go to a few big companies.” (Martin, 1995, unpaginated) Many academics teach to subsidize their writing, for which they get little or no remuneration. All but the most successful writers have to have a day job or other source of income in order to survive while they write. This would seem to undermine one of the basic rationales for copyright law: that new work will only be created if authors are adequately compensated for their efforts. “Actually, most creators and innovators are motivated by their own intrinsic interest, not by rewards. There is a large body of evidence showing, contrary to popular opinion, that rewards actually reduce the quality of work. [note omitted]” (ibid) [7] It has been noted that this applies directly to the Internet: “So far, the idea of open access to these materials hasn’t slowed down the onslaught of new information flowing into the Net. But will information suppliers rebel against the status quo at some point?” (Rose, 1993, 112) It depends which information suppliers one is talking about, of course. A large part of the information flowing onto the Net which Rose talks about is coming from individuals, most of whom contribute despite little financial reward; on the other hand, corporate information suppliers are, as we have seen, trying to change the status quo to their advantage.

Given the poor compensation for creation, coupled with the domination by major corporations in the entertainment area, individuals seem poorly served by copyright. One might be tempted to put up material without a copyright notice, but this “permits proprietary modifications.” (“What Is Copyleft?” undated, unpaginated) Anybody can take your uncopyrighted work, make a few changes, and claim copyright on the result for themselves. Thus, not only is the information commons not protected by this approach, but it all but guarantees that the original creator will not be given any compensation for his or her work!

A different way of dealing with this dilemma, one which first developed in the computer programming community, is known as “copyleft” or “counter-copyright.” One of the earliest proponents of copyleft was the Free Software Foundation. The rationale for the original copyleft was that “The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software — to make sure the software is free for all its users.” (Free Software Foundation, 1998, unpaginated) The term “free” in this context does not necessarily mean without financial cost, but, rather, distributed without barriers.

Copyleft is not a replacement for copyright; instead, it modifies traditional copyright protections. “Copyleft contains the normal copyright statement, asserting ownership and identification of the author. However, it then gives away some of the other rights implicit in the normal copyright: it says that not only are you free to redistribute this work, but you are also free to change the work. However, you cannot claim to have written the original work, nor can you claim that these changes were created by someone else. Finally, all derivative works must also be placed under these terms.” (Stutz, undated (a), unpaginated) Copyleft thus moves in precisely the opposite direction from copyright, under which fanatical control of distribution and of derivative works (new works based on a copyrighted work) is the norm.
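To make these mechanics concrete, the sketch below shows how a copylefted computer program might announce its terms. The file name, author and wording are my own invented illustration rather than the Free Software Foundation’s actual licence text; a real copylefted program would attach the full GNU General Public License alongside a short notice of this kind.

    /*
     * hello.c -- a hypothetical copylefted program (illustration only).
     * Copyright (C) 1999  A. N. Author
     *
     * This program is free software: anyone may redistribute it and/or
     * modify it under the terms of the GNU General Public License as
     * published by the Free Software Foundation. It is distributed
     * WITHOUT ANY WARRANTY. Crucially, anyone who passes along a copy,
     * or a modified version of it, must do so under these same terms,
     * so the freedoms attached to the original travel with every copy.
     */
    #include <stdio.h>
    int main(void)
    {
        printf("This program may be copied, changed and passed along.\n");
        return 0;
    }

Note that the notice begins with an ordinary copyright line identifying the author, exactly as Stutz describes, and only then grants the additional freedoms and binds derivative works to the same terms.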

While originally intended for application to computer programs, some people now argue that “any work of any nature that can be copyrighted can be copylefted with the GNU GPL.” (ibid) The advantage of copyleft in computer programming should be obvious: anybody can take a program and improve upon it (fixing bugs, adding new features, and so on). In fact, it has been argued that superior software is created this way. (Thompson, 1999) Applied to artistic works, copyleft would better reflect the balance between the interests of society and the individual creator than the existing copyright regime: “As an alternative to the exclusivity of copyright, the counter-copyright invites others to use and build upon a creative work. By encouraging the widespread dissemination of such works, the counter-copyright campaign fosters a rich public domain.” (Berkman Center for Internet & Society, 1999, unpaginated)

Copyleft may seem to unfairly tip the balance away from creators, since there seems to be no mechanism to compensate them for their work. In fact, anybody who creates a copylefted document can charge for it, including people who create derivative works based on existing copylefted documents. (Free Software Foundation, 1998, unpaginated) What they relinquish is their power to control what somebody does with a work after they purchase it. (In fact, copyleft may simply return the right of first sale, which is being undermined by digital distribution of information, back to the position of importance it held when copyright was applied to analog media.) In some ways, this could undermine the ability of creators to charge for their work, since somebody who buys a copylefted work is legally entitled to send it to a million of her or his friends. Worse, they may create a derivative work which is more popular than the original, selling more copies (possibly at a cheaper price). This surely undermines a creator’s ability to be compensated for his or her work! However, the balance of advantages and disadvantages in copylefting may still be of greater benefit to individual writers than the balance in copyrighting their material.

On a practical level, illegitimate copying of a work is encouraged by the ease with which copies can be made and disseminated over digital communication networks, as well as by the lack of perceived financial cost associated with such copying. For the most part, I buy books rather than photocopying them whole from library copies because the cost of copying them is not much less than the cost of buying them outright. However, the cost of downloading articles off the Web is substantially less than that of buying the print journals, magazines or newspapers in which they appear. It becomes necessary, therefore, for content creators to “make legitimate access to your material as easy as possible at a price that will not encourage pilferage.” (Strong, 1994, unpaginated)

Recall from the last chapter that some people believe that the ease of copying digital materials will lead to a different method of compensating creators, where they are paid for secondary services rather than the primary works. Copyright maximizes the compensation for its holders by giving them a mechanism by which they can charge for the largest number of copies of a work. In this other economic system, by way of contrast, the work becomes an advertisement for the artist’s services; in this case, widest distribution of the work, whether compensated for or not, is to the artist’s benefit. Copyleft achieves this. As Esther Dyson, a proponent of this alternate form of economics, notes, “The issue isn’t that [traditional] intellectual property laws should (or will) disappear; rather, they will simply become less important in the scheme of things” as new economic models develop. (undated, unpaginated)

There is precedent for this view. The musical group The Grateful Dead used to encourage taping of their concerts; a section in front of the stage was usually kept free so that those who wanted to tape concerts could get a clear sound. Moreover, the band let it be known that tapes of their concerts could be copied and circulated among their fans; a trade in these tapes (which cannot properly be called “bootlegs,” since the band approved of them) flourished. Despite this, the Grateful Dead prospered. “Enough of the people who copy and listen to Grateful Dead tapes end up paying for hats, T-shirts and performance tickets. The ancillary market is the market.” (ibid)

A different way of looking at compensating creators would be that their greatest advantage is the timeliness of their work. “Anyone who has original material (or first publication rights) that can be metered out over the networks on a regular basis could demand any payment she might want from those to whom the information was first distributed. So the price of getting right up to the faucet for a periodically-published information stream could be set high enough to reward the author and reflect the value to those downstream from the recipients.” (Johnson, 1994, unpaginated) For “information” providers, timeliness is obviously important; by the time somebody has copied a copylefted article, new information could be on the originating site. For artists, it could mean serialization; a writer, for example, could periodically put a new chapter of a novel or a new short story on the Internet. Copies of previous chapters distributed through copyleft then become promotions for the most recent work.

Individual artists are in a difficult position with regard to copyright: they can keep the copyright for themselves, in which case they must find a means of distributing their work on their own; or they can try to get their work distributed by a major corporation, in which case they are increasingly expected to give up more and more of their rights. Moreover, the increasing benefits corporations are being given under new copyright regimes are having the destructive effect of narrowing public access to, and uses of, the intellectual commons. Copyleft, combined with the new information economics, may be a way of solving these problems.

If copyleft ultimately proves incapable of solving the current problems with copyright, some further creative thinking on this subject must be undertaken. If the primary goal of copyright is to ensure the social good of the widest dissemination of the largest amount of information, the way copyright law is currently developing will need to be rethought.

Problems with Government Regulation 2: Limited Instruments

In the twentieth century, governments have developed three regulatory approaches to communications media: common carrier legislation; broadcast legislation; and minimal to no legislation (often referred to as “First Amendment” or “free speech” protection). Which of these three approaches is applied to a medium is determined by the characteristics of the medium. The minimalist legislative approach, for instance, is traditionally applied to print publishing, since there are a relatively large number of publishers, the cost of entry into publishing is relatively low (compared to, say, the cost of developing a television network), paper to publish on is not scarce, etc. To understand which regulatory instrument to apply to digital communication networks, it is necessary to understand the models of different types of media.

One model is “one-to-one” communications. In this model, a single person uses the medium to communicate with another single person (see Fig. 4.1). The two most important examples of one-to-one communications are the telephone system and the post office. One-to-one media are regulated under common carrier legislation, which has two fundamental requirements: 1) the company which runs the medium must carry any message which a user of the medium wants to send over it; and 2) because the company cannot control the content sent over the medium, it cannot be held legally liable for that content. Thus, the phone company must carry obscene or harassing phone calls as well as its legal traffic, but, while the caller is legally responsible for them, the phone company is not.

Opposed to this is the “one-to-many” model of communications. With this model, a small number of producers use the medium to reach a large number of consumers (see Fig 4.2). It is rare for the number of producers to actually equal one; the important aspect of this model is that the number of producers is exceedingly small relative to the number of consumers. The one-to-many model applies to what are known as “broadcast” media, particularly radio and television. The number of signals that could be carried on the portion of the spectrum available to radio, the first broadcast medium, was severely limited, leading the American government to divide that spectrum up and allocate parts of it to those who held licenses to broadcast over them (a regime which was quickly taken up by the governments of other countries).


Figure 4.1: The One-to-One Communication Model. Example: a personal conversation, whether face to face or over a telephone.



The fact that the government licenses radio and television stations gives it a tremendous power over those media which it does not have over one-to-one media. Governments can, and do, require broadcast media to fulfill certain obligations in return for their licenses; if they do not meet these obligations, governments can take their licenses away and give them to somebody else (although this is more a theoretical possibility than a reality, given that virtually no North American broadcaster has ever lost a license for failing to comply with its licensing obligations). For example, for nearly four decades the American government had a policy known as the Fairness Doctrine, which required licensed broadcasters to cover controversial issues of public importance and to air contrasting points of view on them. (To get around this obligation, many news organizations started to avoid the most controversial subjects.) In Canada, broadcasters must agree to air a certain percentage of programmes produced by, written by and/or starring Canadians during prime time hours as a condition of their license.

There are many differences between one-to-one and one-to-many media, some less obvious than others. With one-to-one media, the consumer is also a producer (a telephone caller, for instance, is an information producer when talking and an information consumer when listening); with one-to-many media, producers and consumers are clearly separate. For this reason, it is sometimes suggested that one-to-one media are more participatory than one-to-many media, which are more passively consumed. According to Mitch Kapor, for instance, “Users may have indirect, or limited control over when, what, why, and from whom they get information and to whom they send it. That’s the broadcast model today, and it seems to breed consumerism, passivity, crassness, and mediocrity. Or, users may have decentralized, distributed, direct control over when, what, why, and with whom they exchange information. That’s the Internet model today, and it seems to breed critical thinking, activism, democracy, and quality.” (Poster, 1995, unpaginated) Another notable difference is that one-to-one communication — with the possible exception of telephone solicitation — usually takes place at the convenience of the communicators, while one-to-many communication usually takes place at the convenience of the broadcaster (at least, until the creation of tape recorders and VCRs, although many, perhaps most, of us still watch or listen to programs when and as they are broadcast).


Figure 4.2: The One-to-Many Communication Model. Examples: broadcast radio and television.



While it may seem that the choice of regulatory regime is a matter of determining which model a medium most closely resembles and regulating accordingly, it isn’t that simple. As we shall see in Chapter Six, in its earliest days, radio held out the possibility of one-to-one conversation as well as broadcasting; in fact, the decision by the government to regulate it as a broadcast medium was an important step in closing off other possibilities for it. There are other examples. In the early days of the telephone, for instance, some companies experimented with broadcasting over the medium; a large number of listeners would dial into a specific number at the same time and hear a symphony or a lecture.

Digital communications media complicate this picture.

It is possible to have one-to-one communications over the Internet: when you send an email to a single person, for instance, or when you use telephony software to talk over the Internet to somebody else. However, as mentioned in the last chapter, it is also possible to use the Internet for one-to-many communications, as with the attempts to popularize Web-TV. The Internet also supports a third model: “many-to-many” communications (see Fig 4.3). As the name suggests, here a lot of people alternate between consuming and producing information (it should be kept in mind that the many of many-to-many communications is much smaller than the many of one-to-many — the difference between, say, dozens or sometimes even hundreds and 10 or 20 million). Internet Relay Chat, where many individuals converse with each other in real time, and mailing lists, where many individuals converse asynchronously, are examples of many-to-many communications in digital media; a conference call is a form of many-to-many communications in an older, more established medium. Taking all of this into account, it should be obvious that “Internet communication is not a single medium sharing common time, distribution, and sensory characteristics, but a collection of media that differ in these variables.” (December, 1996, 26)

Digital communication plays havoc with our traditional media categories. Consider streaming video. On the one hand, there are a small number of producers relative to the potential number of consumers for streaming video, a condition of one-to-many media. On the other hand, there is no set schedule of programming; a user can go to a Web page with streaming video and watch it at any time of the day or night (any day or night that it is on the Web, that is), which goes against the conventions of one-to-many media. What is the most appropriate form of regulation?


Figure 4.3: The Many-to-Many Communication Model. Examples: Internet Relay Chat or a telephone conference call.


This is only the beginning. In a windowed computing environment, it is possible to have all of these forms of communication running at the same time, with the computer user moving back and forth from one to the other. In addition, some software allows the user to employ different media forms simultaneously: in IRC, for instance, while communicating with others in a group, a user can also be sending private messages to an individual. Thus, at any given point in time, a networked computer user may be involved in a variety of different types of communication. Digital communications networks can carry anything which can be digitized, which, in effect, means every kind of communication (audio, video, text) in every configuration (one-to-one, one-to-many, many-to-many). For this reason, I am suggesting that we consider the Internet a “variable-to-variable” form of communications (see Fig 4.4).

It should be clear, given all of this, that existing regulatory structures are inappropriate for this emerging medium. To attempt to apply one set of regulations to the Internet would favour that particular communication model over the others, with the possibility that it would limit the full development of the Internet, in all its possible configurations. As Mosco puts it, “Old regulatory approaches based on distinct technologies and discrete services and industries do not work in an era of integrated technologies, services and markets.” (1989, 99)

This was partially affirmed by the position taken by the Canadian Radio-television and Telecommunications Commission (CRTC), Canada’s media regulatory body. In 1998, it conducted hearings on whether it should regulate the Internet using the principal tool available to it: the Broadcasting Act. In 1999, it released its findings: applying the Broadcasting Act to the Internet would not be appropriate.

One of the CRTC’s arguments was that the content of much of the information available on the Internet is (as it has been since the medium was created) text. “The Commission notes that…much of the content available by way of the Internet, Canadian or otherwise, currently consists predominantly of alphanumeric text and is therefore excluded from the definition of ‘program’. This type of content, therefore, falls outside the scope of the Broadcasting Act.” (CRTC, 1999, p35) The Commission recognized that email and chat, textual forms of communication which lie at the heart of the Internet, should not be regulated by the means at its disposal.

Even video transmissions over the Internet cannot be assumed to be broadcast for purposes of regulation.



The Commission considers…that some Internet services involve a high degree of ‘customizable’ content. This allows end-users to have an individual one-on-one experience through the creation of their own uniquely tailored content. In the Commission’s view, this content, created by the end-user, would not be transmitted for reception by the public. The Commission therefore considers that content that is ‘customizable’ to a significant degree does not properly fall within the definition of ‘broadcasting’ set out in the Broadcasting Act. (ibid, p45)



A hypertext story, or, more to the point, an online video game or a hypermedia presentation, cannot, by definition, be a broadcast, even when millions of people experience it, because no two individuals will have exactly the same experience of it. In making this claim, the CRTC was careful to note that the interactive elements of a work must be substantive for it to be exempted from the Broadcasting Act:



…the ability to select, for example, camera angles or background lighting would not by itself remove programs transmitted by means of the Internet from the definition of ‘broadcasting’. The Commission notes that digital television can be expected to allow this more limited degree of customization. In these circumstances, where the experience of end-users with the program in question would be similar, if not the same, there is nonetheless a transmission of the program for reception by the public, and, therefore, such content would be ‘broadcasting’. (ibid, p46)



The point at which interactivity stops being superficial and starts affecting the nature of the mediated experience is, of course, highly debatable. Recall that, in Chapter Two, I introduced Brenda Laurel’s concept of “agency” to describe the effectiveness of works of hypertext fiction; it might be a good guideline for this debate. However, as agency is largely subjective, it doesn’t necessarily make the line definitively clear. This will be an important distinction to keep in mind as governments grapple with how best to regulate digital media.

Although generally applauded by those in various industries involved in new media, the decision did come in for some criticism. “How courageous!” one person wrote.



How forward-thinking! The single bureaucratic entity with the legal mandate to regulate harmful content on the Internet, concluding — for itself — that it will do nothing of the sort. Neo-Nazis, violent pornographers and pedophiles rejoice! You no longer need ‘worry,’ to use Ms Bertrand’s own word, about a pesky government agency sticking its nose into your business. You now have licence to do what you do best! (“CRTC’s Internet decision: dumb or dumber?”, 1999, 10)



In fact, the CRTC’s decision did not condone or “give license” to those who commit crimes using the Internet. As the CRTC itself argued, “The Commission acknowledges the expressions of concern about the dissemination of offensive and potentially illegal content over the Internet. It also acknowledges the views of the majority of parties who argued that Canadian laws of general application, coupled with self-regulatory initiatives, would be more appropriate for dealing with this type of content over the Internet than either the Broadcasting Act or Telecommunications Act.” (1999, p121)


Figure 4.4: The Variable-to-Variable Communication Model. Example: the Internet. Note that the consumer-producer at the centre of the model may be engaged in activities associated with any or all of the other three models at the same time.


Another argument against the use of the Broadcasting Act to regulate digital communications networks has to do with a fundamental difference between them and the media for which the legislation was created: bandwidth. The rationale for regulating radio was that scarce radio frequencies required an efficient means of allocation, and the government was the only organization capable of doing this. But the scarcity argument does not apply to the Internet. “Because there are a limited number of radio and television channels available, the government, which assigns them to broadcasters, is thought to have the right to monitor content for indecency. This is, of course, the wrong analogy for the Internet. Bandwidth on the Net is unlimited, and the government’s permission is not required to attach a server to it in the same way as a radio or television station.” (Wallace and Mangan, 1996, 175) Those like the anonymous author who disagreed with the CRTC decision assume that the broadcast regulations can be transferred to digital communications (as they were transferred wholesale from radio to television); however, the new medium is so different from the old media that such a transfer cannot be taken for granted. Those who believe that the Internet should be regulated as a broadcaster must articulate not only a logical rationale for doing so, but a plan for how it could be made to reasonably work.

There are also problems with applying common carrier legislation to the Internet (most likely to the Internet Service Providers who are most people’s access point to it). The obvious one is that there are some applications which are one-to-many (so-called “net broadcasts,” for example). As broadcasts, these would seem to fall under legislation such as the Broadcasting Act. However, there is a more fundamental problem which has to do with the nature of common carriers themselves.

Common carriers have had to be regulated because they were monopolies. Telephone companies, to use the obvious example, were given areas in which they could operate without fear of competition. Moreover, economists suggest that this must necessarily be the case. To be a common carrier, you must be able to carry every message somebody is willing to send through your system; exactly the same service which every other common carrier will, by definition, offer. Branding notwithstanding, because the services must be identical, the only way for common carriers to compete is on price. This tends to drive prices down, driving all but the most competitive out of the market; as the number of competitors dwindles, the market moves back towards a monopoly. (Besides, it is not unreasonable to suggest that a given area does not need several companies providing exactly the same service.) Thus, what seems to be a thriving ISP market (industry consolidation which is producing smaller numbers of larger ISPs notwithstanding) will, if treated as a common carrier industry, be reduced to local monopolies. (It is also worth noting that, despite the theory, telephone service, the best-known common carrier monopoly, has been increasingly opened to competition since the late 1980s, most particularly by the American Telecommunications Act of 1996.)

Given all of this, the default position of not regulating the Internet seems to be the one many governments are taking. “It may well turn out that the Net will be regulation-proof.” (Heterick, Jr., 1997, 20) However, this is a negative approach to the subject, and one which will satisfy few people. Rightly so. If governments decide not to regulate the Internet, they should have a positive rationale for doing so, something more than “It won’t be easy.” Such arguments (i.e., that the Internet is the most robust communications medium currently in existence, with easy entry for a wide variety of information producers in a large number of formats, and therefore requires little legislative interference) do exist. Moreover, legal situations are arising which require government attention; if a government decides to do nothing about them, that government has a responsibility to justify its decision.

One such issue is that of responsibility for posting on the Internet. Suppose a post appears in a newsgroup which libels a famous person. Who does the person sue to seek redress? The person who originally posted the message? The Internet Service Provider (ISP) from which the person posted the message? The ISP from which the person received the message? The systems operator (sysop) for either ISP? For both ISPs? How about the administrator for the portion of the backbone along which the packets of the message traveled?

This issue has been partially resolved by decisions in American courts. Sidney Blumenthal, an advisor to President Bill Clinton, sued online political commentator Matt Drudge for comments Drudge had made about him. Blumenthal also sued America Online, which carried the column in which the comments were made. U.S. District Judge Paul L. Friedman ruled that AOL and other Internet services, unlike traditional publishers, could not be sued in civil courts for content they received from others:



In recognition of the speed with which information may be disseminated and the near impossibility of regulating information content, Congress decided not to treat providers of interactive computer services like other information providers such as newspapers, magazines or television and radio stations, all of which may be held liable for publishing or distributing obscene or defamatory material written or prepared by others. (“Online Providers Not Responsible for Content from Others,” 1998, unpaginated)



In Zeran v. America Online, a similar case, the United States Court of Appeals for the Fourth Circuit reached the same conclusion, holding that federal law “plainly immunizes computer service providers like AOL from liability for information that originates with third parties”; the Supreme Court subsequently declined to review the decision. (“ISPs Not Liable for Actions of Subscribers,” 1998, unpaginated)

ISPs had been arguing for years that the volume of information which flowed through their systems, accounting for hundreds of thousands if not millions of messages every day, made it impossible for them to monitor all of the information they carried. Furthermore, the speed with which messages can travel through the Internet meant that even if an ISP could locate questionable material, by the time it had deleted it, the material could have already found a variety of other places, on its server as well as others, to continue to exist. The court rulings seemed to recognize these problems.

You might wonder why ISPs didn’t simply accept being regulated as common carriers. Most of the smaller ISPs, whose main business is access to the wider Internet, would fit comfortably in that category. However, the largest ISPs also offer original content; America Online, for instance, offers its subscribers exclusive online chats with famous people, as well as exclusive discussion fora. And, as we saw last chapter, the addition of Time Warner to its roster gives it control of a large amount of content which has yet to be seen online. Proprietary content is necessary for the largest ISPs to create recognizable brands; unlike traditional common carriers, this allows them to compete on services as well as price. Moreover, as we have seen, some ISPs, as a response to public concerns about child access to pornography on the Internet, seek to limit the amount of adult material on their servers in order to brand themselves as “child friendly.” America Online, for example, “bills itself as a family oriented service — it’s trying to attract a broad base of customers and wants to maintain [an] atmosphere acceptable to Middle America…” (Powell, Premiere Issue, 44) Thus, while arguing that controlling the material which flows through them is impossible, most ISPs have powerful incentives to maintain as much control over such material as possible. The recent court rulings accept this contradiction.

However, what is good for ISPs is not necessarily good for all of the stakeholders in this new medium. These rulings give ISPs the protection of common carrier rules without the corresponding obligation to carry, without prejudice, any material their users want to put on them. This gives ISPs which develop their own content a clear and obvious conflict of interest with the millions of subscribers who might want to upload their own content to their pages on the service. It also gives them the de facto right to censor anything on their service, which could have a chilling effect on some forms of speech. The best outcome, from the perspective of individual Internet users, would be for whatever they create to be protected by the First Amendment while the ISPs are regulated as common carriers, which would require them to carry everything individuals create without prejudice. This would require the ISPs to separate their content creation from their connectivity services, possibly by divesting themselves of one or the other of these functions. Given the rapid growth of ISPs into the content creation sector, either within their own companies or by cooperating or merging with existing content-creating companies, such a separation already seems highly unlikely, and, as the consolidation process continues, it will only become more difficult.

There is actually precedent for this. The Telecommunications Act of 1996 allowed Regional Bell Operating Companies (the “Baby Bells” created by the break-up of AT&T), which prior to that time had been limited to the common carrier status of telephone companies, to provide “electronic publishing” services through affiliated or jointly operated companies. This could lead to a conflict of interest: the RBOC would be tempted to give preferential rates to communications which it had created. To deal with this potential problem, the Act contained two pages of limitations which were meant to ensure that the RBOC and its affiliate were completely separate. Among other things, it was hoped that this would ensure that “A Bell operating company under common ownership or control with a separated affiliate or electronic publishing joint venture shall provide network access and interconnections for basic telephone service to electronic publishers at just and reasonable rates that are…not higher on a per-unit basis than those charged for such services to any other electronic publisher or any separated affiliate engaged in electronic publishing.” (Neuman, et al, 1998, 37/38) A similar initiative might be feasible to deal with the conflict ISPs have as content providers.

Given the limitations of current legislation in regard to the Internet, the courts are not the best place to decide these issues. Ultimately, legislatures will have to develop new regulatory regimes (even if they are mostly hands-off) to take into account the problems posed by this new medium of communication. In particular, they must stop using analogies to existing media, which, as I have shown, are inadequate, and start developing new regulatory strategies to cope with a radically different communications medium.

There is one additional difficulty to this, however: the changing nature of digital communications may require the issue of government regulation to be regularly revisited. An example may help to clarify this point. In the last chapter, I explored attempts to turn the Internet from a “pull” medium, where individuals went to information and got it for themselves, to a “push” medium, where information is sent to Internet users at the convenience of its creator. This move



could indeed undermine the claim that online censorship is unconstitutional. Precedent holds that indecency can be restricted in media that are pervasive and intrusive: ‘indecent material presented over the airwaves confronts the citizen,’ the [Supreme] Court said in Pacifica, the 1978 ‘seven dirty words’ case.

Meanwhile, CDA [Communications Decency Act] plaintiffs have relied heavily on characterizing the Net as a pull medium. So did the lower court that struck down the law, stating, ‘Communications over the Internet do not ‘invade’ an individual’s home or appear on one’s computer unbidden.’ Not yet. But the day when the Internet is as intrusive as TV or radio may not be far off. Have push media’s marketing-savvy boosters thought about its consequences for free speech? (Shapiro, 1997, 109)



Thus, as the Internet evolves, would-be government regulators will have to take into account its changing nature.

The Carrot: State Support

At the CRTC hearings into whether or not the Canadian government regulatory body should regulate the Internet, a number of participants argued that the government should in no way be involved, allowing the Internet to develop on its own. However, many other participants “favoured some form of support for the production and distribution of new media content, although the majority of these participants clearly preferred an incentive-based approach to one involving regulation.” (CRTC, 1999, p66) Participants in the hearings had a number of suggestions for how government could support the emerging digital media sector. “Among these were direct funding programs targeted specifically at Canadian new media content, various tax incentives to support the new media industry, content-specific industry development initiatives, and activities to stimulate consumer demand for new media content.” (ibid)

In fact, some governments in Canada have or are planning to have incentive programs in place. In its May 1998 budget, for instance, the Ontario government committed $10 million to the Interactive Digital Media Small Business Growth Fund. “Its purpose,” according to the government Web site, “is to invest in strategic initiatives and activities that will spur the growth of, and increase the number of jobs in, small IDM firms and the overall IDM industry in Ontario.” (“Interactive Digital Media Small Business Growth Fund,” 1999, unpaginated) Job creation is further emphasized in the IDM Growth Fund’s list of objectives: “encourage market-led job creation and growth; facilitate industry coordination, alliances and partnerships; coordinate marketing and promotion; increase investment promotion; increase export opportunities; and encourage innovation.” (ibid) An important objective of the IDM Growth Fund is to encourage small businesses to find larger partners who will hopefully help them expand. For this reason, individuals and individual companies are not eligible. (ibid)

The IDM Growth Fund is one example of a trend in government funding: to see the development of digital media as an engine of the economy. Another example is the federal Cultural Industries Development Fund, which “targets entrepreneurs working in book and magazine publishing, sound recording, film and video production and multimedia. The Fund is designed to foster the growth and prosperity of small cultural businesses by providing unconventional and flexible financing to address the challenges and opportunities specific to this creative and fast-moving sector.” (Business Development Bank of Canada, 1999, unpaginated) Note that the purpose of this Fund is to support business (as, perhaps, should be expected from funding by the Business Development Bank of Canada); what gets produced by these companies is irrelevant as long as they employ enough people.

In addition, there is the Multimedia Experimentation Support Fund, which “offers pre-startup support for entrepreneurs with multimedia projects.” (Canada Economic Development for Quebec Regions, 1999, unpaginated) The reference to entrepreneurs, as opposed to artists, makes sense when we find that the fund is “run by the CESAM Multimedia Consortium and receives financial support from the Government of Canada, through Canada Economic Development.” (ibid) Unlike the other programmes, however, individuals are eligible to apply for the Multimedia Experimentation Support Fund.

One major exception to this trend is the Canada Council, the federal government’s main arts funding agency. The Council offers Creative Development Grants, which “pay for expenses related to a program of work that advances individual creative expression and growth as a practitioner,” and Production Grants, which “pay the direct costs of production of an independent media artwork.” (Canada Council, 1999c, unpaginated) The emphasis for these programmes is on the work of individual artists: “The Canada Council considers independent productions to be those over which directors/artists maintain complete creative and editorial control. Only the director/artist who initiates and maintains complete creative and editorial control over the work may apply.” (ibid) All projects funded by the Council are chosen by peer assessment committees that “select recipients on the basis of artistic or scholarly merit and against the criteria for each program or prize.” (Canada Council, 1999a, unpaginated)

These are two grants programmes which previously existed, to which new media have been added. The production grants, for instance, “support the direct costs of production of a specific film, video, new media or audio project.” (Canada Council, 1999c, unpaginated) Another example of an existing programme which has added new media is the Media Arts Presentation, Distribution and Development Program, which “offers annual assistance to Canadian non-profit, artist-run media arts distribution organizations. Organizations must demonstrate a serious commitment to the distribution needs and interests of Canadian artists producing independent film, video, new media and audio artworks, by making their work accessible to the public and providing them with a financial return from the sale, rental and licensing of their work.” (Canada Council, 1999d, unpaginated) One problem with adding new media to existing programmes is that new media producers will be competing with old media producers for funding. The makeup of peer juries becomes crucial in this case: if nobody on a jury has knowledge of or experience with new media, the likelihood that such projects will be funded is lessened.

As we have seen, one of the main advantages of the World Wide Web over traditional media is that every individual is a potential information producer. By adding new media to existing funding programmes, the Canadian and Ontario governments have, I fear, revealed an inability to deal with the unique aspects of this new medium. Worse, by focusing on the importance of digital media as centers of economic growth and job creation (which they may well be), governments are directing resources towards corporations which might better be directed at individuals.

The Canada Council also has programmes which are specifically aimed at funding new media projects. For example, it gives financial support to help pay the production costs of an artist’s first independent Media Artworks. What are Media Artworks? The council defines them as “works that use multimedia, computers, or communications or information technologies for creative expression.” (Canada Council, 1999b, unpaginated) These may include: “creative Web sites, CD-ROM/multimedia productions, installations or performances using multimedia, interactive technology, networks and telecommunications.” (ibid)

As we have seen, government attempts at controlling content on the Internet are fraught with problems. A better strategy for promoting regional cultures, one which is likely to be more effective, would be to fund the widest possible range of local content providers; while this may include corporate content providers, the focus should be on individuals. Thousands of Canadians developing Web pages on whatever subjects interest them are likely to say more about what it means to be Canadian than a television network creating a featureless work to sell on international markets. “Against the enormously growing trend toward the universalization and standardization of aesthetic expression, particularly in the expanding telematic nets, the only strategies and tactics that will be of help are those that will strengthen local forms of expression and differentiation of artistic action, that will create vigourously heterogenous energy fields with individual and specific intentions, operations, and access in going beyond the limits that we term mediatization.” (Zielinski, 1997, 281/282)

To be sure, there are problems with direct financial support of individual artists. Existing peer review methods of allocating government funds for the arts, for instance, may be difficult since, this being a new medium, there aren’t a lot of people qualified to judge the merit of potential projects. Even if there are enough qualified people to make up juries for new media works, they may not be representative of the broader public: “Most would also be critical of the culture of dependency that arises when an arts group is primarily accountable to a funding body rather than to its audience. For many, too, much of the British public service tradition of broadcasting and the arts is little more than a disguise for the narrow and exclusive interests of the London-based intellectual wing of the ruling class.” (Mulgan, 1989, 244) In Canada, there has also been a critique of publicly funded art which decried it as elitist and unrepresentative of the interests of most citizens. Moreover, as art forms mature, juries tend to become conservative in their decisions about who to fund: they often continue to promote the work of artists who have already been successful rather than the work of lesser known artists.

Direct support for individual artists may prove not to be the best method of supporting Web creators. The development of online colonies of artists, the nodes around which strong community ties between artists can be built, is another possible route to take. Perhaps public money would be well spent on training programmes for individual artists. Likely, a combination of approaches will be necessary. The important thing is that governments must start taking seriously the idea that every individual consumer on the Web is also a potential producer of information, and consider ways of supporting them. To continue to subsidize corporations in traditional ways would be to seriously cripple the potential of the Internet as a personal medium of communication.

Let a thousand Web pages bloom!
