Legal and Regulatory Issues in the Information Economy/Censorship or Content Regulation


How are governments approaching content regulation?

Many governments around the world have sought to address the problems posed by material on the Internet that is illegal under their offline laws, as well as material considered harmful to or unsuitable for minors. The nature of the material of principal concern has varied substantially, from political speech, to material promoting or inciting racial hatred, to pornographic material.

Government policies concerning censorship of the Internet may be grouped into four categories:

  1. Government policy to encourage Internet industry self-regulation and end-user voluntary use of filtering/blocking technologies. This approach is taken in the United Kingdom, Canada, and many Western European countries. It also appears to be the current approach in New Zealand, where the applicability of offline classification/censorship laws to Internet content is less than clear. In these countries, laws of general application apply to illegal Internet content such as child pornography and incitement to racial hatred. It is not illegal to make content “unsuitable for minors” available on the Internet, nor is there a requirement that access to such content be controlled by a restricted access system. Some governments encourage the voluntary use and ongoing development of technologies that enable Internet users to control their own, and their children’s, access to content on the Internet.
  2. Criminal law penalties (fines or jail terms) applicable to content providers who make content “unsuitable for minors” available online. This approach is taken in some Australian State jurisdictions and has been attempted in the USA. In these countries, in addition, laws of general application apply to content that is illegal for reasons other than its unsuitability for children, such as child pornography.
  3. Government-mandated blocking of access to content deemed unsuitable for adults. This approach is taken in Australian Commonwealth law (although it has not been enforced in this manner to date) and in China, Saudi Arabia, Singapore, the United Arab Emirates and Vietnam, among others. Some countries require Internet access providers to block material while others allow only restricted access to the Internet through a government-controlled access point.
  4. Government prohibition of public access to the Internet. A number of countries, like China, either prohibit general public access to the Internet, or require Internet users to be registered/licensed by a government authority before permitting them restricted access.

Do developed countries regulate Internet content?

Yes. The Internet censorship regime in Australia consists of legislation at both the Commonwealth and State/Territory Government levels. The Commonwealth regime is a complaints-based system and applies to content hosts, including ISPs, but not to content creators/providers. Content hosts are required to delete Australian-hosted content from their servers (Web, Usenet, FTP, etc.) that is deemed “objectionable” or “unsuitable for minors” on receipt of a take-down notice from the government regulator. The law does not require ISPs to block access to content hosted outside Australia. Instead, the Australian Broadcasting Authority (ABA) notifies filtering/blocking software providers of content hosted outside Australia to be added to their blacklists. Australian Internet users are not required by law to use blocking software. In addition, State and Territory criminal laws apply to content providers/creators. These laws enable prosecution of Internet users who make available material that is deemed “objectionable” or “unsuitable for minors”. The detail of the criminal offence provisions differs in each jurisdiction that has enacted or proposed laws of this nature.

Recent regulatory activity in France concerning illegal material on the Internet has focused on enforcing French laws prohibiting race hate material. In May 2000, a French judge ruled that the US-based Yahoo! Inc. must make it impossible for French users to access sites auctioning race hate memorabilia. Yahoo! argued that it was technically impossible to block Internet users in France from seeing Nazi-related content on its US Web site and that its French site complied with France’s laws prohibiting advertising Nazi memorabilia. In November 2001, a US District Court ruled that Yahoo! does not have to comply with the French court’s order concerning access to its US site. The Court ruled that the US First Amendment protects content generated in the US by American companies from being regulated by authorities in countries that have more restrictive laws on freedom of expression.

In the mid-1990s, German ISPs blocked access to some Internet content outside Germany containing material that is illegal under German laws of general application, particularly race hate propaganda and child pornography. In July 2000, it was reported that the German government had ceased trying to bar access to content outside Germany, but that police would continue to aim to stop illegal “homegrown” material. In 2001 and 2002, German authorities issued take-down notices to a number of Web hosts in the USA, which refused to comply. The Ministry for Families, Seniors, Women and Children continues to issue take-down notices to foreign Web hosts under the “Act on the Dissemination of Publications and Other Media Morally Harmful to Youth” in relation to offshore sites that contain material “harmful to youth”. The Ministry claims jurisdiction over Web sites worldwide that contain “pornographic, extreme violence, war-mongering, racist, fascist and/or anti-Semitic content”. The notices require the Web host (as opposed to the Web site owner or content provider) either to remove the material or to subject it to an age-verification system based on, for example, credit card checks.

What are the British and American approaches to Internet censorship?

The United Kingdom has not enacted censorship legislation specific to the Internet and appears to have no intention of doing so. In September 1996, a non-government organization named the UK Internet Watch Foundation (IWF) was established by ISP associations to implement proposals for dealing with illegal material on the Internet, with particular reference to child pornography. The IWF was established after the London Metropolitan Police sent a letter to all ISPs on August 9, 1996 requesting that they censor Usenet newsgroups, failing which the police would find it necessary to prosecute ISPs in relation to illegal material made available via their systems.

The IWF operates a hotline to enable members of the public to report child pornography or other illegal material on the Internet. When the IWF receives a report, it reviews the material and decides whether it is potentially illegal. It then tries to determine the origin of the material and notifies the UK police or appropriate overseas law enforcement agency. It also notifies UK ISPs that they should take the material down from their servers; if they do not, they risk prosecution.

In February 2002, the IWF announced it would henceforth also deal with “criminally racist content” and that the Home Office had provided the IWF with “an extended guide to the application of the [UK] law to racism on the Internet, ‘Racially Inflammatory Material on the Internet’”.

In 1996, the United States government began the push for Internet censorship when it passed into law the Communications Decency Act (CDA), which criminalized the sending of anything “indecent” over the Internet. In June 1996, a Philadelphia court struck down the CDA as unconstitutional because it violated the constitutional guarantee of free speech. The Court ruled that the Internet is a “free marketplace of ideas” and should not be treated like television. One of the judges wrote, “...the Internet may fairly be regarded as a never-ending worldwide conversation. The Government may not, through the CDA, interrupt that conversation. As the most participatory form of mass speech yet developed, the Internet deserves the highest protection from governmental intrusion.” [33]

Another piece of Internet content legislation that failed is the Children’s Internet Protection Act (CIPA), a US federal law passed in December 2000 that ties crucial library funding to the mandated use of blocking programs on Internet terminals used by both adults and minors in public libraries. A federal court decisively rejected the CIPA on the ground that blocking programs cannot effectively screen out only material deemed “harmful to minors”. The court called the software a “blunt instrument”, adding that “the problems faced by manufacturers and vendors of filtering software are legion”. [34]

The 9/11 attacks in New York and Washington, and the presumed use of the Internet by the terrorists to contact each other and prepare the operation, resulted in the imposition of tough security measures and strict regulation of the Internet. A few hours after the attacks, FBI agents visited the head offices of the country’s main Internet and e-mail service providers, including Hotmail, AOL and Earthlink, to seize records of possible e-mail messages between the terrorists. The monitoring of data on the Internet was legalized on October 24, 2001 with the enactment of the USA Patriot Act. This anti-terrorist measure confirmed the authority already given to the FBI to install the Carnivore program on an ISP’s equipment to monitor the flow of e-mail messages and store records of Web activity by people suspected of having contact with a foreign power. Such monitoring requires only the permission of a special court.

Which developing countries regulate Internet content?

In September 1996, China reportedly banned access to certain Web sites by using a filtering system to prevent delivery of offending information. The banned sites included Western news outlets, Taiwanese commentary sites, anti-China dissident sites, and sexually explicit sites. A study by the Harvard Law School found that China has the most extensive Internet censorship in the world, regularly denying users access to 19,000 Web sites that the government deems threatening. The study, which tested access from multiple points in China over six months, found that Beijing blocked thousands of the most popular news, political and religious sites, along with selected entertainment and educational destinations. Users are also prevented from connecting to major Western religious sites, and news media sites are often blocked; among the sites users had trouble reaching in the test period were those of National Public Radio, The Los Angeles Times, The Washington Post, and Time magazine.

In Saudi Arabia, public access to the Internet has been funnelled through a single government-controlled center since February 1999, when Internet access was first made available. From this center, the government blocks access to Internet content deemed unsuitable for the country’s citizens, such as information considered sensitive for political or religious reasons, pornographic sites, and the like. According to a report in The New York Times on November 19, 2001, over 7,000 sites are added to the blacklist monthly, and the control center receives more than 100 requests a day to remove specific sites from the blacklist, many because they have been wrongly characterized by the US commercial blocking software used.

The Singapore Broadcasting Authority (SBA) has regulated Internet content as a broadcasting service since July 1996. Under a Class Licence Scheme, Internet Content Providers and ISPs are deemed automatically licensed. Licensees are required to comply with the Class Licence Conditions and the Internet Code of Practice, which includes the definition of “prohibited material”. Briefly, “prohibited material” is that which is deemed “objectionable on the grounds of public interest, public morality, public order, public security, national harmony, or is otherwise prohibited by applicable Singapore laws.” The SBA has the power to impose sanctions, including fines, on licensees who contravene the Code of Practice.

The SBA takes a light-touch approach to regulating services on the Internet. For example, licensees found to be in breach of regulations are given a chance to rectify the breach before the Authority takes action. Users in Singapore have access to all material available on the Internet, with the exception of a few high-impact illegal Web sites, and Internet content is not pre-censored by the SBA; nor are ISPs required to monitor the Internet. The SBA is concerned primarily with pornography, violence, and incitement of racial or religious hatred. The SBA’s purview covers only the provision of material to the public. Private communications, such as email and Internet Relay Chat between two individuals or parties, are not covered.

Are there countries that do not regulate content?

In August 1998, the Canadian Radio-Television and Telecommunications Commission (CRTC) called for public discussion on what role, if any, it should have in regulating matters such as online pornography, hate speech, and “Canadian content” on the Web. Subsequently, in May 1999, the CRTC issued a media release titled “CRTC Won’t Regulate the Internet”. It states, among other things, that “[a]fter conducting an in-depth review, the CRTC has concluded that the new media on the Internet are achieving the goals of the Broadcasting Act and are vibrant, highly competitive and successful without regulation. The CRTC is concerned that any attempt to regulate Canadian new media might put the industry at a competitive disadvantage in the global marketplace.”

Likewise, as of this writing, Denmark has no law making it a criminal offense to make material unsuitable for minors available on the Internet. Nor is there any proposal to create such a law. Discussion regarding protection of minors is unfolding primarily around the issue of filtering at public libraries.

Similarly, the “new media” (Internet) in Norway are not regulated by law. Instead, efforts are directed toward informing the public about developments on the Internet through the Norwegian Board of Film Classification, which periodically publishes reports on technological advances and their social impact.

Is regulating the Net similar to regulating the telephone, radio or TV?

No. Government involvement in radio and television is based on the “scarcity” doctrine, which holds that government censorship of content is justified by the government’s role in assigning broadcast frequencies on a scarce spectrum. The Internet, on the other hand, is not a “scarce” resource as anyone can attach a computer to it without the government’s permission. Nor is it a government-licensed common carrier like a phone company. Moreover, the regulations that have been held constitutional for telephone, radio and TV merely seek to shift (“channel”) explicit speech to a time or place where children cannot access it, but not to ban such speech entirely.

Is censorship of the Internet the answer?

The Internet is the fastest growing and largest tool for mass communication and information distribution in the world. It can be used to distribute large amounts of information anywhere in the world at a minimal cost. The problem is that information may be “good” or “bad.” In the last 10 years, there has been increasing concern about damaging Internet content, including violence and sexual content, bomb-making instructions, terrorist activity, and child pornography.

What then? Should governments step in to filter information? Or should individuals be allowed to determine for themselves what is harmful? The question is not easily answered as it involves striking a delicate balance between the individual’s freedom of expression and information and a State’s right to prevent what it considers harmful to its subjects.

Table 2 sums up the two positions on censorship of the Internet.

Censorship: Despite the generally prevailing principle of freedom of speech in democratic countries, it is widely accepted that certain types of speech are not given protection as they are deemed to be of insufficient value compared to the harm they cause. Child pornography in the print or broadcast media, for instance, is never tolerated. The Internet should be no exception to these basic standards. Truly offensive material such as hardcore pornography and extreme racial hatred is no different simply because it is published on the World Wide Web as opposed to in a book or video.
No censorship: Censorship is generally an evil and should be avoided wherever possible. Child pornography is an extreme example, and there is already sufficient legislation to deal with those who attempt to produce, distribute or view such material. Other forms of speech may well be truly offensive, but the only way a society can deal with them is by being exposed to them and combating them. Otherwise, these groups will merely go underground.

Censorship: Censorship is tailored to the power of the medium. Accordingly, there is a higher level of censorship attached to television, films and video than to newspapers and books. This is due to the recognition that moving pictures and sound are more graphic and powerful than text and photographs or illustrations. There is also normally more regulation of videos than cinema films because the viewer of a video has the power to rewind, view again, and distribute more widely. The Internet, which increasingly uses video and sound, should be regarded as having the same power and regulated accordingly.
No censorship: The distinction between censorship of the print and broadcast media is becoming increasingly irrelevant. It is quite possible that in 10 years’ time people will be entirely reliant on the Internet for news and entertainment. The reason why the print media is comparatively unregulated is that it is the primary means of distributing information in society. For this reason, the Internet must be granted the same protection. When the founding fathers of the US constitution spoke of the freedom of the press, they were concerned about the primary and most powerful organ of the media at that time, the print press. Nowadays they would more likely be concerned with preventing censorship of the broadcast media and the Internet, which are our prime means of distributing information.

Censorship: That it is hard to censor the Internet does not mean we should not seek to do so. It is already extremely difficult to prevent the sale of snuff movies or hardcore pornography, but governments do so because it is deemed to be of societal importance. A more relevant difficulty is the anonymity provided by the Internet, which gives pornographers and criminals the opportunity to abuse the medium. Asian countries have experimented with requiring citizens to provide identification before posting content on the Internet. Such a system, if universally adopted, could be a relatively simple way of enforcing laws against truly offensive and harmful content.
No censorship: Even allowing for the extreme problems surrounding freedom of speech, Internet censorship would be more or less impossible. Governments can attempt to regulate what is produced in their own country, but it would be impossible to regulate material from abroad. What is the point of removing all domestic reference to hardcore pornography in the USA when it is possible to access such material from the United Kingdom or Sweden? It is also possible for citizens to produce material and store it in an overseas domain, further complicating the issue. True freedom of speech requires anonymity in some cases to protect the author. The governments that have introduced ID requirements for Internet use also deny many basic rights to their citizens. The Internet allows citizens to criticize their government and distribute news and information without reprisal from the State. Such a system clearly could not survive ID requirements.

Censorship: In many countries there are multiple liabilities for the production of slanderous material and material that incites racial hatred. Where the author or publisher cannot be traced or is insolvent, the printer can be sued or prosecuted in some circumstances. The relatively small number of ISPs should likewise be made liable if they assist in the provision of dangerous and harmful information such as bomb-making instructions, hardcore pornography, and the like.
No censorship: ISPs are certainly the wrong people to decide what can and cannot be placed on the Internet. There is already far too much control of this new technology by big business without also making them judge and jury of all Internet content. In any case, the sheer bulk of information ISPs allow to be published is such that vetting would be more or less impossible. Were there liability for allowing such material to be displayed, ISPs would inevitably err on the side of caution to protect their financial interests. This would result in a much more heavily censored Internet.

Censorship: The issues at stake in this debate, such as protection of children, terrorist activity, crime, and racial hatred, are all international problems. If a global solution is required, it can be achieved by international cooperation and treaties. It is acknowledged that it is justifiable to censor where harm is caused to others by the speech, words or art of an author. All the examples cited above clearly cause harm to various groups in society. By a combination of the initiatives listed above, it is possible to limit that harm.
No censorship: Many ISPs have shown themselves to be responsible in immediately removing truly offensive content when alerted to it. What is required is self-regulation by the industry, recognizing its responsibility to Internet users without imposing arbitrary and draconian restrictions on Internet use. It is already possible for parents to use “Net nanny” browsers that will edit out offensive and inappropriate material for younger users.
Source: Matt Butt, “Summary: Should governments censor material on the World Wide Web?” (November 3, 2000); available from IDEA Debatabase http://www.debatabase.org/debatabase/details.asp?topicID=83

What about self-regulation?

Self-regulation has advantages over traditional command-and-control regulation. First, command-and-control rules are unsuited to the rapid pace of technological change in the innovation age. Second, self-regulation is less costly, since authorities need not drastically expand their enforcement mechanisms. From the standpoint of participants in markets, whether industry or consumers, self-regulation might arise as a natural outgrowth of consumer demand. This “bottom-up” process is voluntary and likely to be highly decentralized.

How can self-regulation be made effective?

Codes of conduct should be adopted to ensure that Internet content and service providers act in accordance with principles of social responsibility. These codes should meet community concerns and operate as an accountability system that guarantees a high level of credibility and quality. For instance, as part of such codes, Internet providers hosting content have an obligation to remove illegal content when they are informed that such content exists. The notice and take-down procedure should be clearly specified.

A service provider may include in its contracts with users and content providers terms that allow it to comply with its legal obligations and protect it from liability. It is in the best interest of industry to take on such responsibility since it enhances consumer confidence and is ultimately good for business.

To be effective, codes of conduct must be the product of, and be enforced by, self-regulatory agencies. Such agencies must be broadly representative and accessible to all relevant parties. Subject to acceptance by public authorities, they should enjoy certain legal privileges that enhance their functions. Effective self-regulation requires active consumer and citizen consultation by such agencies. Without user involvement, a self-regulatory mechanism will not accurately reflect user needs, will not be effective in delivering the standards it promotes, and will fail to create confidence.

Is there a role for government under a regime of self-regulation?

Self-regulation cannot function without the support of public authorities. The support can be in the form of simply not interfering with the self-regulatory process, or endorsing or ratifying self-regulatory codes and giving support through enforcement.

There are clearly limits to what can be achieved by self-regulation. It alone cannot guarantee that child pornographers are caught and punished. However, self-regulatory mechanisms can help ensure that criminals do not use the Internet with impunity. Governments should, through education and public information, raise awareness among users about self-regulatory mechanisms such as the means to filter and block content and to communicate complaints about Internet content through hotlines.

For governments, the emphasis should be on achieving regulatory efficiency by allowing business to take on as much of the task as possible. After all, business has a strong interest in creating trust across the whole spectrum of users and providers of services.

But where should the dividing line between business self-regulation and government regulation be drawn? Clearly, governments must ensure that the law is respected in cyberspace, to protect intellectual property and stop criminal abuse, for example. Business accepts the key role of governments in establishing Internet policy and is no less determined that the Internet should not become a free-for-all. In general terms, business urges governments to leave untouched those areas where there is no clear evidence that business conduct will have a negative effect on society or on the fundamental rights of individuals.

What about empowering the end-users?

Filtering technology can empower users by allowing them to select the kinds of content they and their children are exposed to. Used wisely, this technology can help shift control of and responsibility for harmful content from governments, regulatory agencies, and supervisory bodies to individuals. Thus, there is a need for an improved architecture for rating and filtering Internet content, including an independent organization to provide a basic vocabulary for rating and to oversee periodic updates to the system.

A good filtering system realizes several important values: end user autonomy, respect for freedom of expression, ideological diversity, transparency, respect for privacy, interoperability and compatibility. Moreover, the system must feature a user-friendly interface that encourages actual use of its features and makes choice a real possibility for the vast majority of end users. Third parties should be encouraged to develop and provide free filters. Industry should promote the availability and use of filtering systems, educating consumers about how to filter and making it easy for parents, teachers, and other concerned adults to choose, install and adapt filters to their set of values. Regulatory requirements for service providers to screen or filter content should be avoided. Government or regulatory agencies may supply filters but should not mandate their use.
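
A minimal sketch of how such an end-user filter might work is given below. It assumes a hypothetical rating vocabulary (the category names and 0–4 severity levels are invented for the example) rather than any particular standard, although label-based schemes such as PICS worked along broadly similar lines: a page declares rating labels, and a locally chosen policy decides whether to allow it.

```python
# Illustrative label-based content filter. The vocabulary ("violence",
# "nudity", "hate") and the 0-4 severity levels are hypothetical.

from dataclasses import dataclass, field
from typing import Dict


@dataclass
class FilterPolicy:
    """A user-chosen policy: the maximum acceptable level per category."""
    max_levels: Dict[str, int] = field(default_factory=dict)
    block_unrated: bool = True  # how to treat pages that carry no rating

    def allows(self, labels: Dict[str, int]) -> bool:
        """Return True if a page's rating labels fall within the policy."""
        if not labels:
            return not self.block_unrated
        for category, level in labels.items():
            # Categories the user has not configured default to the
            # strictest setting (level 0).
            if level > self.max_levels.get(category, 0):
                return False
        return True


# Example: a policy chosen for a young user, applied to two hypothetical pages.
policy = FilterPolicy(max_levels={"violence": 1, "nudity": 0, "hate": 0})

print(policy.allows({"violence": 1}))  # True: mild content within the policy
print(policy.allows({}))               # False: unrated pages are blocked
```

Note that the choice to block unrated pages by default illustrates the trade-off discussed above between effectiveness and over-blocking; an end user could instead set block_unrated to False and accept the opposite risk.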

Likewise, there is a need for technical and organizational communication devices to ensure that users can respond to content on the Internet that they find to be of substantial concern. These “hotlines” ensure that, where necessary and appropriate, effective action can be taken to remedy such concerns. The task of evaluating the legality or illegality of specific data is difficult for Internet providers and should, therefore, be integrated into the work of hotlines. In order to function, hotlines need an environment and operational rules that honor their specific task of handling problematic, and perhaps illegal, content. Legislators should formulate minimum requirements regarding the organizational setup and procedures of hotlines and, in turn, shield them from criminal or civil liability incurred in the proper conduct of their business (“safe harbor”).

What should be considered when choosing a particular regulatory mechanism?

Whatever the approach to content regulation, the important consideration is that regulation must not stifle innovation. It would seem that a hybrid between a government-regulated regime and an industry-regulated regime may be the right combination when dealing with censorship in the information age.

Because the Internet is global, there is a need for an international network of hotlines governed by a framework agreement containing minimum standards on the handling of content concerns and stipulating mutual notification between hotlines. The hotline in the country where the content is located is asked to evaluate it and to take action. This mechanism results in content providers being acted against only if the material is illegal in the host country. The mechanism also overcomes difficulties in the complex diplomatic procedures necessary for cross-border cooperation of law enforcement authorities.
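
The mutual-notification mechanism described above can be pictured as a simple routing step: a complaint received by one hotline is forwarded to the hotline in the country where the content is hosted, which evaluates it under its own national law. The sketch below is purely illustrative; the hotline directory, field names and fallback behaviour are assumptions, not part of any actual framework agreement.

```python
# Illustrative routing of a content report between national hotlines.
# Hotline endpoints and the report schema are hypothetical.

from dataclasses import dataclass


@dataclass
class ContentReport:
    url: str                # location of the reported content
    reporting_country: str  # country of the hotline that received the complaint
    hosting_country: str    # country where the content is hosted
    description: str


# Directory of participating hotlines (example endpoints only).
HOTLINES = {
    "UK": "hotline-uk.example.org",
    "DE": "hotline-de.example.org",
    "AU": "hotline-au.example.org",
}


def route_report(report: ContentReport) -> str:
    """Forward a report to the hotline responsible for the hosting country.

    The receiving hotline evaluates the material under its own law, so the
    content provider is acted against only if the material is illegal where
    it is hosted.
    """
    destination = HOTLINES.get(report.hosting_country)
    if destination is None:
        # No partner hotline in that country: escalate through the reporting
        # country's own law-enforcement channel (outside this sketch).
        return f"no partner hotline for {report.hosting_country}; escalate locally"
    return f"forwarded {report.url} to {destination} for evaluation"


print(route_report(ContentReport(
    url="http://example.com/page",
    reporting_country="UK",
    hosting_country="DE",
    description="suspected illegal material",
)))
```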

In the final analysis, no regulatory mechanism can work independently of an education and awareness campaign. The Internet industry should have a continuous online and offline program to develop general awareness of self-regulatory mechanisms such as filtering systems and hotlines. Schools should provide the necessary skills for children to understand the benefits and limitations of online information and to exercise self-control over problematic Internet content. The Internet is itself a process, an enormous system for change and response, feedback and transformation. Like the Internet, the legal system and regulatory mechanisms around it must incorporate similar practices of learning and changing. [35]