
Fahrenheit 451.2: Is Cyberspace Burning?

Document Date: March 17, 2002

Executive Summary

In the landmark case Reno v. ACLU, the Supreme Court overturned the Communications Decency Act, declaring that the Internet deserves the same high level of free speech protection afforded to books and other printed matter.

But today, all that we have achieved may now be lost, if not in the bright flames of censorship then in the dense smoke of the many ratings and blocking schemes promoted by some of the very people who fought for freedom.

The ACLU and others in the cyber-liberties community were genuinely alarmed by the tenor of a recent White House summit meeting on Internet censorship at which industry leaders pledged to create a variety of schemes to regulate and block controversial online speech.

But it was not any one proposal or announcement that caused our alarm; rather, it was the failure to examine the longer-term implications for the Internet of rating and blocking schemes.

The White House meeting was clearly the first step away from the principle that protection of the electronic word is analogous to protection of the printed word. Despite the Supreme Court’s strong rejection of a broadcast analogy for the Internet, government and industry leaders alike are now inching toward the dangerous and incorrect position that the Internet is like television, and should be rated and censored accordingly.

Is Cyberspace burning? Not yet, perhaps. But where there’s smoke, there’s fire.

“Any content-based regulation of the Internet, no matter how benign the purpose, could burn the global village to roast the pig.”

– U.S. Supreme Court majority decision, Reno v. ACLU (June 26, 1997)

Introduction

In his chilling (and prescient) novel about censorship, Fahrenheit 451, author Ray Bradbury describes a futuristic society where books are outlawed. “Fahrenheit 451” is, of course, the temperature at which books burn.

In Bradbury’s novel – and in the physical world – people censor the printed word by burning books. But in the virtual world, one can just as easily censor controversial speech by banishing it to the farthest corners of cyberspace using rating and blocking programs. Today, will Fahrenheit, version 451.2 – a new kind of virtual censorship – be the temperature at which cyberspace goes up in smoke?

The first flames of Internet censorship appeared two years ago, with the introduction of the Federal Communications Decency Act (CDA), outlawing “indecent” online speech. But in the landmark case Reno v. ACLU, the Supreme Court overturned the CDA, declaring that the Internet is entitled to the highest level of free speech protection. In other words, the Court said that online speech deserved the protection afforded to books and other printed matter.

Today, all that we have achieved may now be lost, if not in the bright flames of censorship then in the dense smoke of the many ratings and blocking schemes promoted by some of the very people who fought for freedom. And in the end, we may find that the censors have indeed succeeded in “burning down the house to roast the pig.”

Is Cyberspace Burning?

The ashes of the CDA were barely smoldering when the White House called a summit meeting to encourage Internet users to self-rate their speech and to urge industry leaders to develop and deploy the tools for blocking “inappropriate” speech. The meeting was “voluntary,” of course: the White House claimed it wasn’t holding anyone’s feet to the fire.

The ACLU and others in the cyber-liberties community were genuinely alarmed by the tenor of the White House summit and the unabashed enthusiasm for technological fixes that will make it easier to block or render invisible controversial speech. (Note: see appendix for detailed explanations of the various technologies.)

Industry leaders responded to the White House call with a barrage of announcements:

  • Netscape announced plans to join Microsoft – together the two giants have 90% or more of the web browser market – in adopting PICS (Platform for Internet Content Selection), the rating standard that establishes a consistent way to rate and block online content;
  • IBM announced it was making a $100,000 grant to RSAC (Recreational Software Advisory Council) to encourage the use of its RSACi rating system. Microsoft’s Internet Explorer already employs the RSACi ratings system, CompuServe encourages its use, and it is fast becoming the de facto industry standard rating system;
  • Four of the major search engines – the services which allow users to conduct searches of the Internet for relevant sites – announced a plan to cooperate in the promotion of “self-regulation” of the Internet. The president of one, Lycos, was quoted in a news account as having “thrown down the gauntlet” to the other three, challenging them to agree to exclude unrated sites from search results;
  • Following announcement of proposed legislation by Sen. Patty Murray (D-Wash.), which would impose civil and ultimately criminal penalties on those who mis-rate a site, the makers of the blocking program SafeSurf proposed similar legislation, the “Online Cooperative Publishing Act.”

But it was not any one proposal or announcement that caused our alarm; rather, it was the failure to examine the longer-term implications for the Internet of rating and blocking schemes.

What may be the result? The Internet will become bland and homogenized. The major commercial sites will still be readily available – they will have the resources and inclination to self-rate, and third-party rating services will be inclined to give them acceptable ratings. People who disseminate quirky and idiosyncratic speech, create individual home pages, or post to controversial news groups, will be among the first Internet users blocked by filters and made invisible by the search engines. Controversial speech will still exist, but will only be visible to those with the tools and know-how to penetrate the dense smokescreen of industry “self-regulation.”

As bad as this very real prospect is, it can get worse. Faced with the reality that, although harder to reach, sex, hate speech and other controversial matter is still available on the Internet, how long will it be before governments begin to make use of an Internet already configured to accommodate massive censorship?

If you look at these various proposals in a larger context, a very plausible scenario emerges. It is a scenario which in some respects has already been set in motion:

  • First, the use of PICS becomes universal, providing a uniform method for content rating.
  • Next, one or two rating systems dominate the market and become the de facto standard for the Internet.
  • PICS and the dominant rating(s) system are built into Internet software as an automatic default.
  • Unrated speech on the Internet is effectively blocked by these defaults.
  • Search engines refuse to report on the existence of unrated or “unacceptably” rated sites.
  • Governments frustrated by “indecency” still on the Internet make self-rating mandatory and mis-rating a crime.

The scenario is, for now, theoretical – but inevitable. It is clear that any scheme that allows access to unrated speech will fall afoul of the government-coerced push for a “family friendly” Internet. We are moving inexorably toward a system that blocks speech simply because it is unrated and makes criminals of those who mis-rate.

The White House meeting was clearly the first step in that direction and away from the principle that protection of the electronic word is analogous to protection of the printed word. Despite the Supreme Court’s strong rejection of a broadcast analogy for the Internet, government and industry leaders alike are now inching toward the dangerous and incorrect position that the Internet is like television, and should be rated and censored accordingly.

Is Cyberspace burning? Not yet, perhaps. But where there’s smoke, there’s fire.

Free Speech Online: A Victory Under Siege

On June 26, 1997, the Supreme Court held in Reno v. ACLU that the Communications Decency Act, which would have made it a crime to communicate anything “indecent” on the Internet, violated the First Amendment. It was the nature of the Internet itself, and the quality of speech on the Internet, that led the Court to declare that the Internet is entitled to the same broad free speech protections given to books, magazines, and casual conversation.

The ACLU argued, and the Supreme Court agreed, that the CDA was unconstitutional because, although aimed at protecting minors, it effectively banned speech among adults. Similarly, many of the rating and blocking proposals, though designed to limit minors’ access, will inevitably restrict the ability of adults to communicate on the Internet. In addition, such proposals will restrict the rights of older minors to gain access to material that clearly has value for them.

Rethinking the Rush to Rate

This paper examines the free speech implications of the various proposals for Internet blocking and rating. Individually, each of the proposals poses some threat to open and robust speech on the Internet; some pose a considerably greater threat than others.

Even more ominous is the fact that the various schemes for rating and blocking, taken together, could create a black cloud of private “voluntary” censorship that is every bit as threatening as the CDA itself to what the Supreme Court called “the most participatory form of mass speech yet developed.”

We call on industry leaders, Internet users, policy makers and parents groups to engage in a genuine debate about the free speech ramifications of the rating and blocking schemes being proposed.

To open the door to a meaningful discussion, we offer the following recommendations and principles:

Recommendations and Principles

  • Internet users know best. The primary responsibility for determining what speech to access should remain with the individual Internet user; parents should take primary responsibility for determining what their children should access.
  • Default setting on free speech. Industry should not develop products that require speakers to rate their own speech or be blocked by default.
  • Buyers beware. The producers of user-based software programs should make their lists of blocked speech available to consumers. The industry should develop products that provide maximum user control.
  • No government coercion or censorship. The First Amendment prevents the government from imposing, or from coercing industry into imposing, a mandatory Internet ratings scheme.
  • Libraries are free speech zones. The First Amendment prevents the government, including public libraries, from mandating the use of user-based blocking software.

Six Reasons Why Self-Rating Schemes Are Wrong for the Internet

To begin with, the notion that citizens should “self-rate” their speech is contrary to the entire history of free speech in America. A proposal that we rate our online speech is no less offensive to the First Amendment than a proposal that publishers of books and magazines rate each and every article or story, or a proposal that everyone engaged in a street corner conversation rate his or her comments. But that is exactly what will happen to books, magazines, and any kind of speech that appears online under a self-rating scheme.

To illustrate the very practical consequences of these schemes, consider the following six reasons, with accompanying examples, why the ACLU opposes self-rating:

Reason #1: Self-Rating Schemes Will Cause Controversial Speech To Be Censored.

Kiyoshi Kuromiya, founder and sole operator of the Critical Path AIDS Project, has a web site that includes safer sex information written in street language with explicit diagrams, in order to reach the widest possible audience. Kuromiya doesn’t want to apply the rating “crude” or “explicit” to his speech, but if he doesn’t, his site will be blocked as an unrated site. If he does rate, his speech will be lumped in with “pornography” and blocked from view. Under either choice, Kuromiya has been effectively blocked from reaching a large portion of his intended audience – teenage Internet users – as well as adults.

As this example shows, the consequences of rating are far from neutral. The ratings themselves are all pejorative by definition, and they result in certain speech being blocked.

The White House has compared Internet ratings to “food labels” – but that analogy is simply wrong. Food labels provide objective, scientifically verifiable information to help the consumer make choices about what to buy, e.g., the percentage of fat in a food product like milk. Internet ratings are subjective value judgments that result in certain speech being blocked to many viewers. Further, food labels are placed on products that are readily available to consumers – unlike Internet labels, which would place certain kinds of speech out of reach of Internet users.

What is most critical to this issue is that speech like Kuromiya’s is entitled to the highest degree of Constitutional protection. This is why ratings requirements have never been imposed on those who speak via the printed word. Kuromiya could distribute the same material in print form on any street corner or in any bookstore without worrying about having to rate it. In fact, a number of Supreme Court cases have established that the First Amendment does not allow government to compel speakers to say something they don’t want to say – and that includes pejorative ratings. There is simply no justification for treating the Internet any differently.

Reason #2: Self-Rating Is Burdensome, Unwieldy, and Costly.

Art on the Net is a large, non-profit web site that hosts online “studios” where hundreds of artists display their work. The vast majority of the artwork has no sexual content, although there’s an occasional Rubenesque painting. The ratings systems don’t make sense when applied to art. Yet Art on the Net would still have to review and apply a rating to the more than 26,000 pages on its site, which would require time and staff that they just don’t have. Or, they would have to require the artists themselves to self-rate, an option they find objectionable. If they decline to rate, they will be blocked as an unrated site even though most Internet users would hardly object to the art reaching minors, let alone adults.

As the Supreme Court noted in Reno v. ACLU, one of the virtues of the Internet is that it provides “relatively unlimited, low-cost capacity for communication of all kinds.” In striking down the CDA, the Court held that imposing age-verification costs on Internet speakers would be “prohibitively expensive for noncommercial – as well as some commercial – speakers.” Similarly, the burdensome requirement of self-rating thousands of pages of information would effectively shut most noncommercial speakers out of the Internet marketplace.

The technology of embedding the rating is also far from trivial. In a winning ACLU case that challenged a New York state online censorship statute, ALA v. Pataki, one long-time Internet expert testified that he tried to embed an RSACi label in his online newsletter site but finally gave up after several hours.

In addition, the ratings systems are simply unequipped to deal with the diversity of content now available on the Internet. There is perhaps nothing as subjective as a viewer’s reaction to art. As history has shown again and again, one woman’s masterpiece is another woman’s pornography. How can ratings such as “explicit” or “crude” be used to categorize art? Even ratings systems that try to take artistic value into account will be inherently subjective, especially when applied by artists themselves, who will naturally consider their own work to have merit.

The variety of news-related sites on the Web will be equally difficult to rate. Should explicit war footage be labeled “violent” and blocked from view to teenagers? If a long news article has one curse word, is the curse word rated individually, or is the entire story rated and then blocked?

Even those who propose that “legitimate” news organizations should not be required to rate their sites stumble over the question of who will decide what is legitimate news.

Reason #3: Conversation Can’t Be Rated.

You are in a chat room or a discussion group ­ one of the thousands of conversational areas of the Net. A victim of sexual abuse has posted a plea for help, and you want to respond. You’ve heard about a variety of ratings systems, but you’ve never used one. You read the RSACi web page, but you can’t figure out how to rate the discussion of sex and violence in your response. Aware of the penalties for mis-labeling, you decide not to send your message after all.

The burdens of self-rating really hit home when applied to the vibrant, conversational areas of the Internet. Most Internet users don’t run web pages, but millions of people around the world send messages, short and long, every day, to chat rooms, news groups and mailing lists. A rating requirement for these areas of the Internet would be analogous to requiring all of us to rate our telephone or streetcorner or dinner party or water cooler conversations.

The only other way to rate these areas of cyberspace would be to rate entire chatrooms or news groups rather than individual messages. But most discussion groups aren’t controlled by a specific person, so who would be responsible for rating them? In addition, discussion groups that contain some objectionable material would likely also have a wide variety of speech totally appropriate and valuable for minors – but the entire forum would be blocked from view for everyone.

Reason #4: Self-Rating Will Create “Fortress America” on the Internet.

You are a native of Papua New Guinea, and as an anthropologist you have published several papers about your native culture. You create a web site and post electronic versions of your papers, in order to share them with colleagues and other interested people around the world. You haven’t heard about the move in America to rate Internet content. You don’t know it, but since your site is unrated none of your colleagues in America will be able to access it.

People from all corners of the globe – people who might otherwise never connect because of their vast geographical differences – can now communicate on the Internet both easily and cheaply. One of the most dangerous aspects of ratings systems is their potential to build borders around American- and foreign-created speech. It is important to remember that today, nearly half of all Internet speech originates from outside the United States.

Even if powerful American industry leaders coerced other countries into adopting American ratings systems, how would these ratings make any sense to a New Guinean? Imagine that one of the anthropology papers explicitly describes a ritual in which teenage boys engage in self-mutilation as part of a rite of passage in achieving manhood. Would you look at it through the eyes of an American and rate it “torture,” or would you rate it “appropriate for minors” for the New Guinea audience?

Reason #5: Self-Ratings Will Only Encourage, Not Prevent, Government Regulation.

The webmaster for Betty’s Smut Shack, a web site that sells sexually explicit photos, learns that many people won’t get to his site if he either rates his site “sexually explicit” or fails to rate at all. He rates his entire web site “okay for minors.” A powerful Congressman from the Midwest learns that the site is now available to minors. He is outraged, and quickly introduces a bill imposing criminal penalties for mis-rated sites.

Without a penalty system for mis-rating, the entire concept of a self-ratings system breaks down. The Supreme Court that decided Reno v. ACLU would probably agree that the statute theorized above would violate the First Amendment, but as we saw with the CDA, that won’t necessarily prevent lawmakers from passing it.

In fact, as noted earlier, a senator from Washington state – home of industry giant Microsoft, among others – has already proposed a law that creates criminal penalties for mis-rating. Not to be outdone, the filtering software company SafeSurf has proposed the introduction of a virtually identical federal law, including a provision that allows parents to sue speakers for damages if they “negligently” mis-rate their speech.

The example above shows that, despite all good intentions, the application of ratings systems is likely to lead to heavy-handed government censorship. Moreover, the targets of that censorship are likely to be just the sort of relatively powerless and controversial speakers, like the groups Critical Path AIDS Project, Stop Prisoner Rape, Planned Parenthood, Human Rights Watch, and the various gay and lesbian organizations we represented in Reno v. ACLU.

Reason #6: Self-Ratings Schemes Will Turn the Internet into a Homogenized Medium Dominated by Commercial Speakers.

Huge entertainment conglomerates, such as the Disney Corporation or Time Warner, consult their platoons of lawyers who advise that their web sites must be rated to reach the widest possible audience. They then hire and train staff to rate all of their web pages. Everybody in the world will have access to their speech.

There is no question that there may be some speakers on the Internet for whom the ratings systems will impose only minimal burdens: the large, powerful corporate speakers with the money to hire legal counsel and staff to apply the necessary ratings. The commercial side of the Net continues to grow, but so far the democratic nature of the Internet has put commercial speakers on equal footing with all of the other non-commercial and individual speakers.

Today, it is just as easy to find the Critical Path AIDS web site as it is to find the Disney site. Both speakers are able to reach a worldwide audience. But mandatory Internet self-rating could easily turn the most participatory communications medium the world has yet seen into a bland, homogenized medium dominated by powerful American corporate speakers.

Is Third-Party Rating the Answer?

Third-party ratings systems, designed to work in tandem with PICS labeling, have been held out by some as the answer to the free speech problems posed by self-rating schemes. On the plus side, some argue, ratings by an independent third party could minimize the burden of self-rating on speakers and could reduce the inaccuracy and mis-rating problems of self-rating. In fact, one of the touted strengths of the original PICS proposal was that a variety of third-party ratings systems would develop and users could pick and choose from the system that best fit their values. But third-party ratings systems still pose serious free speech concerns.

First, a multiplicity of ratings systems has not yet emerged on the market, probably due to the difficulty of any one company or organization trying to rate over a million web sites, with hundreds of new sites – not to mention discussion groups and chat rooms – springing up daily.

Second, under third-party rating systems, unrated sites still may be blocked.

When choosing which sites to rate first, it is likely that third-party raters will rate the most popular web sites first, marginalizing individual and non-commercial sites. And like the self-rating systems, third-party ratings will apply subjective and value-laden ratings that could result in valuable material being blocked to adults and older minors. In addition, available third-party rating systems have no notification procedure, so speakers have no way of knowing whether their speech has received a negative rating.

The fewer the third-party ratings products available, the greater the potential for arbitrary censorship. Powerful industry forces may lead one product to dominate the marketplace. If, for example, virtually all households use Microsoft Internet Explorer and Netscape, and the browsers, in turn, use RSACi as their system, RSACi could become the default censorship system for the Internet. In addition, federal and state governments could pass laws mandating use of a particular ratings system in schools or libraries. Either of these scenarios could devastate the diversity of the Internet marketplace.

Pro-censorship groups have argued that a third-party rating system for the Internet is no different from the voluntary Motion Picture Association of America ratings for movies that we’ve all lived with for years. But there is an important distinction: only a finite number of movies are produced in a given year. In contrast, the amount of content on the Internet is infinite. Movies are a static, definable product created by a small number of producers; speech on the Internet is seamless, interactive, and conversational. MPAA ratings also don’t come with automatic blocking mechanisms.

The Problems With User-Based Blocking Software in the Home

With the explosive growth of the Internet, and in the wake of the recent censorship battles, the marketplace has responded with a wide variety of user-based blocking programs. Each company touts the speed and efficiency of its staff members in blocking speech that they have determined is inappropriate for minors. The programs also often block speech based on keywords. (This can result in sites such as www.middlesex.gov or www.SuperBowlXXX.com being blocked because they contain the keywords “sex” and “XXX”.)
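
To make the false positives concrete, here is a minimal sketch of keyword blocking in Python. The keyword list and matching rule are illustrative assumptions, not the internals of any actual product:

```python
# Illustrative keyword list; real products keep theirs secret.
BLOCKED_KEYWORDS = ["sex", "xxx"]

def is_blocked(url):
    """Block any URL whose text contains a forbidden substring."""
    lowered = url.lower()
    return any(keyword in lowered for keyword in BLOCKED_KEYWORDS)

# Both sites are innocuous, yet both match a forbidden substring.
print(is_blocked("www.middlesex.gov"))     # True: "middlesex" contains "sex"
print(is_blocked("www.SuperBowlXXX.com"))  # True: "xxx" matches the Roman numeral
```

Because the test is a bare substring match, a town name or a Roman numeral is indistinguishable from sexually explicit content.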

In Reno v. ACLU, the ACLU successfully argued that the CDA violated the First Amendment because it was not the least restrictive means of addressing the government’s asserted interest in protecting children from inappropriate material. In supporting this argument, we suggested that a less restrictive alternative was the availability of user-based blocking programs, e.g., Net Nanny, that parents could use in the home if they wished to limit their child’s Internet access.

While user-based blocking programs present troubling free speech concerns, we still believe today that they are far preferable to any statute that imposes criminal penalties on online speech. In contrast, many of the new ratings schemes pose far greater free speech concerns than do user-based software programs.

Each user installs the program on her home computer and turns the blocking mechanism on or off at will. The programs do not generally block sites that they haven’t rated, which means that they are not 100 percent effective.

Unlike the third-party ratings or self-rating schemes, these products usually do not work in concert with browsers and search engines, so the home user rather than an outside company sets the defaults. (However, it should be noted that this “standalone” feature could theoretically work against free speech principles, since here, too, it would be relatively easy to draft a law mandating the use of the products, under threat of criminal penalties.)

While the use of these products avoids some of the larger control issues with ratings systems, the blocking programs are far from problem-free. A number of products have been shown to block access to a wide variety of information that many would consider appropriate for minors. For example, some block access to safer sex information, although the Supreme Court has held that teenagers have the right to obtain access to such information even without their parents’ consent. Other products block access to information of interest to the gay and lesbian community. Some products even block speech simply because it criticizes their product.

Some products allow home users to add or subtract particular sites from a list of blocked sites. For example, a parent can decide to allow access to “playboy.com” by removing it from the blocked sites list, and can deny access to “powerrangers.com” by adding it to the list. However, most products consider their lists of blocked speech to be proprietary information which they will not disclose.
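
The mechanism just described can be pictured as the parent’s choices layered over the vendor’s secret list. The Python sketch below is a hypothetical illustration; the vendor list shown is a stand-in, since real products do not disclose theirs:

```python
# Hypothetical vendor block list; real vendors treat theirs as proprietary.
vendor_blocklist = {"playboy.com", "adult-site.example"}

# Per-household overrides, as described above.
user_allowed = {"playboy.com"}        # parent removed it from the blocked list
user_blocked = {"powerrangers.com"}   # parent added it to the blocked list

def is_blocked(site):
    if site in user_allowed:
        return False  # an explicit parental allow overrides everything
    return site in user_blocked or site in vendor_blocklist

print(is_blocked("playboy.com"))       # False: the parent opted to allow it
print(is_blocked("powerrangers.com"))  # True: the parent opted to block it
```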

Despite these problems, the use of blocking programs has been enthusiastically and uncritically endorsed by government and industry leaders alike. At the recent White House summit, Vice President Gore, along with industry and non-profit groups, announced the creation of www.netparents.org, a site that provides direct links to a variety of blocking programs.

The ACLU urges the producers of all of these products to put real power in users’ hands and provide full disclosure of their list of blocked speech and the criteria for blocking.

In addition, the ACLU urges the industry to develop products that provide maximum user control. For example, all users should be able to adjust the products to account for the varying maturity level of minors, and to adjust the list of blocked sites to reflect their own values.

It should go without saying that under no set of circumstances can governments constitutionally require anyone – whether individual users or Internet Service Providers – to run user-based blocking programs when accessing or providing access to the Internet.

Why Blocking Software Should Not Be Used by Public Libraries

The “never-ending, worldwide conversation” of the Internet, as one lower court judge called it, is a conversation in which all citizens should be entitled to participate – whether they access the Internet from the library or from the home. Just as government cannot require home users or Internet Service Providers (ISPs) to use blocking programs or self-rating programs, libraries should not require patrons to use blocking software when accessing the Internet at the library. The ACLU, like the American Library Association (ALA), opposes use of blocking software in public libraries.

Libraries have traditionally promoted free speech values by providing free books and information resources to people regardless of their age or income. Today, more than 20 percent of libraries in the United States offer free access to the Internet, and that number is growing daily. Libraries are critical to realizing the dream of universal access to the Internet, a dream that would be drastically altered if they were forced to become Internet censors.

In a recent announcement stating its policy, the ALA said:

Libraries are places of inclusion rather than exclusion. Current blocking/filtering software prevents not only access to what some may consider “objectionable” material, but also blocks information protected by the First Amendment. The result is that legal and useful material will inevitably be blocked.

Librarians have never been in the business of determining what their patrons should read or see, and the fact that the material is now found on the Internet is no different. By installing inaccurate and unreliable blocking programs on library Internet terminals, public libraries – which are almost always governmental entities – would inevitably censor speech that patrons are constitutionally entitled to access.

It has been suggested that a library’s decision to install blocking software is like other legitimate selection decisions that libraries routinely make when they add particular books to their collections. But in fact, blocking programs take selection decisions totally out of the hands of the librarian and place them in the hands of a company with no experience in library science. As the ALA noted, “[F]ilters can impose the producer’s viewpoint on the community.”

Because, as noted above, most filtering programs don’t provide a list of the sites they block, libraries won’t even know what resources are blocked. In addition, Internet speakers won’t know which libraries have blocked access to their speech and won’t be able to protest.

Installing blocking software in libraries to prevent adults as well as minors from accessing legally protected material raises severe First Amendment questions. Indeed, that principle – that governments can’t block adult access to speech in the name of protecting children – was one of the key reasons for the Supreme Court’s decision in Reno v. ACLU.

If adults are allowed full access, but minors are forced to use blocking programs, constitutional problems remain. Minors, especially older minors, have a constitutional right to access many of the resources that have been shown to be blocked by user-based blocking programs.

One of the virtues of the Internet is that it allows an isolated gay teenager in Des Moines, Iowa to talk to other teenagers around the globe who are also struggling with issues relating to their sexuality. It allows teens to find out how to avoid AIDS and other sexually transmitted diseases even if they are too embarrassed to ask an adult in person or even too embarrassed to check out a book.

When the ACLU made this argument in Reno v. ACLU, it was considered controversial, even among our allies. But the Supreme Court agreed that minors have rights too. Library blocking proposals that allow minors full access to the Internet only with parental permission are unacceptable.

Libraries can and should take other actions that are more protective of online free speech principles. First, libraries can publicize and provide links to particular sites that have been recommended for children. Second, to avoid unwanted viewing by passersby (and to protect the confidentiality of users), libraries can install Internet access terminals in ways that minimize public view. Third, libraries can impose “content-neutral” time limits on Internet use.

Conclusion

The ACLU has always favored providing Internet users, especially parents, with more information. We welcomed, for example, the American Library Association’s announcement at the White House summit of The Librarian’s Guide to Cyberspace for Parents and Kids, a “comprehensive brochure and Web site combining Internet terminology, safety tips, site selection advice and more than 50 of the most educational and entertaining sites available for children on the Internet.”

In Reno v. ACLU, we noted that federal and state governments are already vigorously enforcing existing obscenity, child pornography, and child solicitation laws on the Internet. In addition, Internet users must affirmatively seek out speech on the Internet; no one is caught by surprise.

In fact, many speakers on the Net provide preliminary information about the nature of their speech. The ACLU’s site on America Online, for example, has a message on its home page announcing that the site is a “free speech zone.” Many sites offering commercial transactions on the Net contain warnings concerning the security of Net information. Sites containing sexually explicit material often begin with a statement describing the adult nature of the material. Chat rooms and newsgroups have names that describe the subject being discussed. Even individual e-mail messages contain a subject line.

The preliminary information available on the Internet has several important components that distinguish it from all the ratings systems discussed above: (1) it is created and provided by the speaker; (2) it helps the user decide whether to read any further; (3) speakers who choose not to provide such information are not penalized; (4) it does not result in the automatic blocking of speech by an entity other than the speaker or reader before the speech has ever been viewed. Thus, the very nature of the Internet reveals why more speech is always a better solution than censorship for dealing with speech that someone may find objectionable.

It is not too late for the Internet community to slowly and carefully examine these proposals and to reject those that will transform the Internet from a true marketplace of ideas into just another mainstream, lifeless medium with content no more exciting or diverse than that of television.

Civil libertarians, human rights organizations, librarians and Internet users, speakers and providers all joined together to defeat the CDA. We achieved a stunning victory, establishing a legal framework that affords the Internet the highest constitutional protection. We put a quick end to a fire that was all too visible and threatening. The fire next time may be more difficult to detect – and extinguish.

Appendix: Internet Ratings Systems – How Do They Work?

The Technology: PICS, Browsers, Search Engines, and Ratings

The rating and blocking proposals discussed below all rely on a few key components of current Internet technology. While none of this technology will by itself censor speech, some of it may well enable censorship to occur.

PICS: The Platform for Internet Content Selection (PICS) is a rating standard that establishes a consistent way to rate and block online content. PICS was created by a large consortium of Internet industry leaders, and became operational last year. In theory, PICS does not incorporate or endorse any particular rating system – the technology is an empty vessel into which different rating systems can be poured. In reality, only three rating systems have been developed for PICS: SafeSurf, Net Shepherd, and the de facto industry standard RSACi.[1]
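
As a rough illustration, a PICS label is a short piece of structured text embedded in a web page. The Python sketch below shows what such a label looks like under the W3C PICS-1.1 format, using RSACi’s four categories of nudity (n), sex (s), violence (v), and language (l); the page URL and rating levels are invented for the example:

```python
import re

# An illustrative PICS label as it might appear in a page's HTML <head>.
# The rating levels shown (0-4 per category) are invented for this example.
PICS_META_TAG = (
    '<META http-equiv="PICS-Label" content=\''
    '(PICS-1.1 "http://www.rsac.org/ratingsv01.html" '
    'l for "http://www.example.com/" '
    'r (n 0 s 0 v 2 l 1))\'>'
)

def parse_ratings(label):
    """Pull the category/level pairs out of the label's 'r (...)' clause."""
    clause = re.search(r"r \(([^)]*)\)", label).group(1)
    tokens = clause.split()
    return {tokens[i]: int(tokens[i + 1]) for i in range(0, len(tokens), 2)}

print(parse_ratings(PICS_META_TAG))  # {'n': 0, 's': 0, 'v': 2, 'l': 1}
```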

Browsers: Browsers are the software tools that Internet users need in order to access information on the World Wide Web. Two products, Microsoft’s Internet Explorer and Netscape, currently control 90% of the browser market. Microsoft’s Internet Explorer is now compatible with PICS. That is, the Internet Explorer can now be configured to block speech that has been rated with PICS-compatible ratings. Netscape has announced that it will soon offer the same capability. When the blocking feature on the browser is activated, speech with negative ratings is blocked. In addition, because a vast majority of Internet sites remain unrated, the blocking feature can be configured to block all unrated sites.
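
A sketch of the blocking decision just described: the browser compares a page’s ratings against user-set ceilings, and a single switch decides the fate of every unrated page. The category names follow RSACi; the ceilings and test pages are invented for the example:

```python
# User-configured ceilings for the RSACi categories:
# nudity (n), sex (s), violence (v), language (l), each on a 0-4 scale.
MAX_ALLOWED = {"n": 0, "s": 0, "v": 1, "l": 1}

# The default at issue in this paper: what happens to unlabeled pages.
BLOCK_UNRATED = True

def should_block(ratings):
    """Decide whether to suppress a page, given its ratings as a dict of
    category -> level, or None if the page carries no label at all."""
    if ratings is None:
        return BLOCK_UNRATED
    return any(level > MAX_ALLOWED.get(category, 0)
               for category, level in ratings.items())

print(should_block({"n": 0, "s": 0, "v": 0, "l": 0}))  # False: within limits
print(should_block({"n": 0, "s": 3, "v": 0, "l": 0}))  # True: exceeds "s" ceiling
print(should_block(None))  # True: blocked merely for being unrated
```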

Search Engines: Search engines are software programs that allow Internet users to conduct searches for content on a particular subject, using a string of words or phrases. The search result typically provides a list of links to sites on the relevant topic. Four of the major search engines have announced a plan to cooperate in the move towards Internet ratings. For example, they may decide not to list sites that have negative ratings or that are unrated.
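
In code, the search-engine side of the plan reduces to a filter applied before results are shown. This sketch uses invented sites and ratings to show how unrated or negatively rated sites would simply vanish from the result list:

```python
# Invented search results: (url, carries a rating?, rating judged acceptable?).
results = [
    ("www.bigmedia.example", True, True),    # rated, and rated "acceptably"
    ("www.homepage.example", False, None),   # never rated at all
    ("www.activist.example", True, False),   # rated "unacceptably"
]

visible = [url for url, rated, acceptable in results if rated and acceptable]
print(visible)  # ['www.bigmedia.example'] -- the other two never appear
```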

Ratings Systems: There are a few PICS-compatible ratings systems already in use. The two self-rating systems are RSACi and SafeSurf. RSACi, developed by the same group that rates video games, attempts to rate certain kinds of speech, like sex and violence, according to objective criteria describing the content. For example, it rates levels of violence from “harmless conflict; some damage to objects” to “creatures injured or killed.” Levels of sexual content are rated from “passionate kissing” to “clothed sexual touching” to “explicit sexual activity; sex crimes.” The context in which the material is presented is not considered under the RSACi system; for example, it doesn’t distinguish educational materials from other materials.

SafeSurf applies a complicated ratings system to a variety of types of speech, from profanity to gambling. The ratings are more contextual, but they are also more subjective and value-laden. For example, SafeSurf rates sexual content from “artistic” to “erotic” to “explicit and crude pornographic.”

Net Shepherd, a third-party rating system that has rated 300,000 sites, rates only for “maturity” and “quality.”

Notes

[1] While PICS could be put to legitimate use with adequate free speech safeguards, there is a very real fear that governments, especially authoritarian governments, will use the technology to impose severe content controls.

Credits

The principal authors of this white paper are Ann Beeson and Chris Hansen of the ACLU Legal Department and ACLU Associate Director Barry Steinhardt. Additional editorial contributions were provided by Marjorie Heins of the Legal Department, and Emily Whitfield of the Public Education Department. This report was prepared by the ACLU Public Education Department: Loren Siegel, Director; Rozella Floranz Kennedy, Editorial Manager; Ronald Cianfaglione, Designer.
