Internet Governance/Issues and Actors

What are some of the issues involved in Internet governance?

As we have seen, Internet governance encompasses a range of issues and actors, and takes place at many layers. Throughout the network, there exist problems that need solutions, and, more importantly, potential that can be unleashed by better governance. It is not possible here to capture the full range of issues. This section, rather, seeks to provide a sampling. It describes the issues by layers, and it also discusses key actors for each layer. A more extensive description of actors and their responsibilities can be found in Appendix 1; Figure 1 also contains a representation of issues by layer.

Most importantly, this section attempts to make clear the real importance of Internet governance by drawing links between apparently technical decisions and their social, economic or political ramifications. Indeed, an important principle (and difficulty) of Internet governance is that the line between technical and policy decision-making is often blurred. Understanding the “real world” significance of even the most arcane technical decision is essential to understanding that decision and its processes, and to thinking of new ways to structure Internet governance.

What are some of the governance issues at the infrastructure layer?

The infrastructure layer can be considered the foundational layer of the Internet: It includes the copper and optical fibre cables (or “pipes”) and radio waves that carry data around the world and into users’ homes. It is upon this layer that the other two layers (logical and content) are built, and governance of the infrastructure layer is therefore critical to maintaining the seamlessness and viability of the entire network. Given the infrastructure layer’s importance, it makes sense that a wide range of issues requiring governance can be located at this level. Three, in particular, merit further discussion.

Interconnection

The Internet is a “network of networks”; it is composed of a multitude of smaller networks that must connect together (“interconnect”) in order for the global network to function seamlessly. In traditional telecommunications networks, interconnection is clearly regulated at the national level by State authorities, and at the international level (i.e., between national networks) by well-defined principles and agreements, some of which are supervised by the ITU. Interconnection between Internet networks, however, is not clearly governed by any entity, rules or laws. In recent years, this inherent ambiguity has become increasingly problematic: it has led to high access costs for remote and developing countries and is widely seen as requiring some kind of governance solution. Indeed, in its final report, the WGIG identified the ambiguity and uneven distribution of international interconnection costs as one of the key issues requiring a governance solution.15

On the Internet, access providers must interconnect with each other across international, national or local boundaries. Although not formalized, it is commonly said that there are three categories of access providers in this context: Tier 1 (large international backbone operators); Tier 2 (national or regional backbone operators); and Tier 3 (local ISPs). In most countries, there is some regulation of interconnection at the national and local levels (for Tier 2 and Tier 3 ISPs), which may dictate rates and other terms between these providers. Internationally, however, there is no regulation, and the terms of any interconnection agreement are generally determined on the basis of negotiation and bargaining. In theory, this allows the market to determine interconnection in an efficient manner. In practice, however, unequal market positions, and in particular the dominant position occupied by Tier 1 providers, mean that the larger providers are often able to dictate terms to the smaller ones, which in turn must bear the majority of the costs of connection.

This situation is particularly problematic for developing countries, which generally lack ownership of Tier 1 infrastructure and are often in a poor position to negotiate favourable access rates. By some accounts, ISPs in the Asia-Pacific region paid companies in the United States US$ 5 billion in “reverse subsidies” in 2000; in 2002, it was estimated that African ISPs were paying US$ 500 million a year. One commentator, writing on access in Africa, argues that “the existence of these reverse subsidies is the single largest factor contributing to high bandwidth costs”. 16

It should be noted that not everyone would agree with that statement, and that high international access costs are by no means the only reason for high local access costs. A related – indeed, in a sense, the underlying – problem is the general lack of good local content in many developing countries. It is this shortage of local content, stored on local servers, that leads to high international connectivity costs as users are forced to access sites and information stored outside the country. Moreover, as we shall discuss further later, the lack of adequate competition policies and inadequately developed national markets also play a significant role in raising access costs for end-users. Increasing connectivity within regions has eased some of the concerns about the cost of connecting to major backbones, as has the falling absolute cost of undersea optical cable services.

Actors Involved

This opaque setup – in effect, the absence of interconnection governance at the Tier 1 level – has created a certain amount of dissatisfaction, and some initial moves towards a governance solution. One of the first bodies to raise the issue of interconnection pricing was the Asia-Pacific Economic Cooperation Telecommunications and Information Working Group (APEC Tel), which, in 1998, questioned the existing system (or lack thereof) of International Charging Arrangements for Internet Services (ICAIS). In addition, Australia, whose ISPs pay very high international access charges due to remoteness and relative lack of competition, has also expressed unhappiness with the current arrangement.

Regional groups such as APEC Tel have played an important role in putting today’s shortcomings on the agenda. However, the main body actually dealing with the issue is ITU, where a study group has been discussing governance mechanisms that could alleviate the current situation. Three main proposals appear to be on the table; the chief disagreement is between larger industry players, who would prefer a market-driven solution, and smaller industry players and developing countries, who would prefer a system that resembles the settlement regime currently used in international telecommunications. Under this system, the amount of traffic carried by operators is measured in terms of call-minutes and reconciled using previously agreed-upon rates. In the case of inter-provider Internet connections, however, there is no such thing as a “call minute,” since all traffic flows by way of packets which are not identified with specific calls. While packets can be easily counted, it is not necessarily clear which party, the sender or receiver, should be charged for any particular packet, particularly when the source and destination of those packets may not reside on the networks of the individual providers who are exchanging traffic.17
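
To make the contrast concrete, the traditional minute-based settlement model can be illustrated with a toy calculation. The operator names, traffic volumes and rate below are invented; this is a minimal sketch of how such a settlement works, not a description of any actual agreement.

```python
# Hypothetical illustration of a minute-based settlement between two operators.
# Names, traffic volumes and the settlement rate are invented for the example.

minutes_a_to_b = 1_200_000   # call-minutes carried from Operator A into B's network
minutes_b_to_a = 800_000     # call-minutes carried from Operator B into A's network
settlement_rate = 0.10       # agreed rate per call-minute (in some currency)

# The operator originating the excess traffic compensates the other
# for the net imbalance, at the agreed rate.
net_minutes = minutes_a_to_b - minutes_b_to_a
payment = abs(net_minutes) * settlement_rate
payer = "Operator A" if net_minutes > 0 else "Operator B"

print(f"{payer} pays {payment:,.2f} to settle {abs(net_minutes):,} excess minutes")
# With IP traffic there is no equivalent "call-minute" unit, and it is unclear
# whether the sender or the receiver of a packet should bear the cost.
```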

An added difficulty is that the settlement system relies on negotiated and often protracted bilateral agreements, whereas the Internet seems to require a global, multilateral solution. Identifying the appropriate global forum, however, has proven difficult: the issue does not fall under ICANN’s remit, and progress at the ITU has been slow. Some have suggested that interconnection charges should be considered a trade-related matter that could be taken up under WTO. For the moment, the lack of a global forum to deal with this issue represents perhaps the most serious obstacle to its resolution.

As noted earlier, it should also be reemphasized that the lack of an international settlement regime is not the only reason for high access costs. Often, poor (or absent) competition policies within countries also contribute to the problem, leading to inefficient markets and inflated costs for ISPs. Thus, a holistic approach to the problem of interconnection and access costs would address both the international and the local dimensions of the problem. For example, some countries have taken positive steps in this regard by de-licensing or drastically reducing entry barriers for ISPs. We discuss these issues further later in Section IV.

Universal Access

Another key area of governance concerns access, and in particular the notion of universal access. This notion is somewhat hard to define; in fact, one of the important tasks for governance would be to choose among several competing definitions.18 Outstanding issues include whether universal access should cover:

  • access for every citizen on an individual or household basis, or for communities (e.g., villages and small towns) to ensure that all citizens are within reach of an access point;
  • access only to basic telephony (i.e., narrow-band), or access also to value-added services like the Internet and broadband; and
  • access only to infrastructure, or also to content, services and applications.

In addition, any adequate definition of universal access must also address the following questions:

  • How to define “universal”? Universal access is frequently taken to mean access across geographic areas, but it could equally refer to access across all gender, income, or age groups. In addition, the term is frequently used almost synonymously with the digital divide, to refer to the need for equitable access between rich and poor countries.
  • Should universal access include support services? Access to content or infrastructure is not very useful if users are unable to make use of that access because they are illiterate or uneducated. For this reason, it is sometimes argued that universal access policies must include a range of socio-economic support services.

Each of these goals, or some combination of them, is widely held to be desirable. However, the realization of universal access is complicated by the fact that there usually exist significant economic disincentives to connect traditionally underserved populations. For example, network providers argue that connecting geographically remote customers is financially unremunerative, and that the required investments to do so would make their businesses unsustainable. For this reason, one of the key governance decisions that must be made in any attempt to provide universal access is whether that goal should be left to market forces alone, or whether the State should provide some form of financial support to providers.

When (as is frequently the case) States decide to provide some form of public subsidy, then it is essential to determine the appropriate funding mechanism. Universal service funds, which allocate money to providers that connect underserved areas, are one possible mechanism. A more recent innovation has been the use of least-cost subsidy auctions, in which providers bid for a contract to connect underserved areas; the provider requiring the lowest subsidy is awarded the contract.
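
The selection rule in a least-cost subsidy auction is simple to illustrate. The bids below are invented; this is only a sketch of the mechanism, in which the provider asking for the smallest subsidy wins the contract.

```python
# Hypothetical least-cost subsidy auction: each provider bids the subsidy
# it would need to serve the underserved area; the lowest bid wins.
bids = {
    "Provider X": 4_500_000,   # subsidy requested (invented figures)
    "Provider Y": 3_200_000,
    "Provider Z": 5_100_000,
}

winner = min(bids, key=bids.get)
print(f"Contract awarded to {winner} for a subsidy of {bids[winner]:,}")
```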

In addition to funding, the governance of universal access also encompasses a range of other topics. For instance, definitions of universal access need to be regularly updated to reflect technological developments – recently, some observers have suggested that universal service obligations (traditionally imposed only on fixed-line telecommunications providers) should also be imposed on cellular phone companies, and possibly even on ISPs. Interconnection arrangements, rights-of-way, and licensing policies are other matters that are relevant to universal access. The range of issues suggests the complexity of an adequate governance structure – but it also suggests the importance of such a structure.

Actors Involved

Since traditional universal access regulation involves fixed-line telephony, national and international telecommunications regulators are usually the most actively involved in governance for universal access. At the international level, the ITU-D, the development wing of ITU, plays an important role in developing policies, supporting various small-scale experiments in countries around the world, and in providing training and capacity building to its member states.

Most of the policies concerning universal access, however, are set within individual countries, by national governments and domestic regulatory agencies. In India, for example, the Department of Telecommunications (DOT), in consultation with the Telecommunications Regulatory Authority of India (TRAI), administers a universal service fund that disburses subsidies to operators serving rural areas. The Nepalese government has extended telecommunication access in its eastern region through least-cost subsidy auctions that award licenses to the private operators that bid for the lowest government subsidy. And in Hong Kong, universal service costs are estimated and shared among telecommunications operators based on the volume of international minutes of traffic they carry.

In addition to these traditional telecommunications authorities, international institutions and aid groups have begun taking an increasing interest in the subject of universal access. Both the World Bank (WB) and the United Nations (UN), for example, devote significant resources to the issue, as do several nongovernmental organizations (NGOs). For example, WB is providing US$ 53 million to fund the eSri Lanka project that aims to bridge the access gap between the western province and the northern, eastern and southern provinces. The project will subsidize the building of a fibre-optic backbone and rural telecentres. Similarly, the International Development Research Centre (IDRC) has funded several projects that consider optimal approaches for achieving rural connectivity (e.g., through the use of Wireless Fidelity (WiFi), or by setting up village telecentres).

One important venue where governance (and other) issues related to universal access are being discussed is WSIS. At its inception, the representatives of WSIS declared their

common desire and commitment to build a people-centred, inclusive and development-oriented Information Society, where everyone can create, access, utilize and share information and knowledge, enabling individuals, communities and peoples to achieve their full potential in promoting their sustainable development and improving their quality of life, premised on the purposes and principles of the Charter of the United Nations.19

WSIS is likely to increase the interest of international aid agencies in the subject of universal access. In the future, as convergence becomes an increasing reality, multilateral bodies like the UN, ITU and others are likely to become more involved in developing appropriate governance mechanisms.

Next-Generation Pathways

Technology evolves at a rapid pace, and this evolution often brings great benefits to the Internet. However, the process of adopting new technologies can also be complicated, and is a further area requiring governance. Two issues, in particular, can benefit from governance.

The first concerns decisions on when to deploy new technologies. Many members of the technical community (and others) would argue that such decisions should simply be left to consumer choice. But governments often feel otherwise. For example, some governments have resisted the use of IP technology for phone calls, fearing the resulting loss of revenue to incumbent telecom operators. Likewise, many governments have yet to de-license the necessary spectrum for Wi-Fi networks, often citing security concerns. States may also choose to prioritize some technologies (e.g., narrowband connectivity) over others (e.g., more expensive broadband) in an effort to pursue social or developmental goals.

Such decisions are often met with scepticism, but the issue is not whether governments are right or wrong in resisting certain next-generation technologies. What matters is to understand that the decision on introducing new pathways is a governance decision: it is the product of active management by the State, and, ideally, by other involved stakeholders. Thus, a comprehensive approach to Internet governance would include mechanisms and steps to introduce next-generation pathways in a smooth and effective manner.

Next-generation technologies also require governance to ensure that they are deployed in a manner that is harmonious with pre-existing (or “legacy”) systems. Such coordination is essential at every layer of the network, but it is especially critical at the infrastructure layer. If new means of transmitting information cannot communicate with older systems, then that defeats the very purpose of deploying new systems. For example, much attention has been given in recent years to the promise of broadband wireless technologies like third generation (3G – for the cellular network) and Worldwide Interoperability for Microwave Access (WiMax – a wireless backbone technology that could potentially extend the reach of wireless Internet connectivity). Such network technologies are useful to the extent that they are compatible with existing communications networks. As with the decision on when to introduce new pathways, then, governance solutions are also required to decide how to introduce them, and in particular to ensure that standards and other technical specifications are compatible with existing networks.20

Actors Involved

As with the other topics discussed here, the governance of next-generation pathways is a broad-ranging process that involves a range of stakeholders. National governments are of course important, and play a determining role in deciding which technologies are adopted. This role, however, is often supplemented by advice from other groups. For example, ITU and other multilateral organizations play a key role in recommending the deployment of new technologies to national governments. In addition, industry groups sometimes play a role in lobbying for certain technologies over others. To balance their role, it is also important for governments to take into account the views of consumer groups and civil society.

Finally, standards bodies like IETF, the International Standards Organization (ISO) and others have an essential role to play, particularly in ensuring compatibility between new and legacy systems. In the case of standards based on open source, it is also possible for consumer and user groups to have a greater say over which technologies are adopted, and how they can promote social and other values.

What are some of the governance issues at the logical layer?

The logical layer sits on top of the infrastructure layer. Sometimes called the “code” layer, it consists of the software programs and protocols that “drive” the infrastructure, and that provide an interface to the user. At this layer, too, there exist various ways in which governance can address problems, enhance existing processes, and help to ensure that the Internet achieves its full potential.

Standards

Standards are among the most important issues addressed by Internet governance at any layer. As noted, the Internet is only able to function seamlessly over different networks, operating systems, browsers and devices because it sits on a bedrock of commonly agreed-upon technical standards. TCP/IP, discussed earlier, is perhaps the most important of these standards. Two other key standards are the HyperText Mark-up Language (HTML) and the HyperText Transfer Protocol (HTTP), developed by Tim Berners-Lee and his colleagues at CERN in Geneva to standardize the presentation and transport of web pages. Much as TCP/IP is the basis for the growth of the Internet’s infrastructure, so HTTP and HTML are the basis for the phenomenal growth of the World Wide Web. Other critical standards include Extensible Mark-up Language (XML), a standard for structuring and exchanging data on the web, and IPv6 (Internet Protocol, version 6, the successor to the current IPv4), used in Internet-addressing systems (see discussion below).
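
Because HTTP is a published standard, any client that speaks it can retrieve a page from any conforming server. The short sketch below uses Python's standard http.client module to issue a plain HTTP GET request and read back the HTML it returns; the host name is purely illustrative.

```python
# Minimal illustration of HTTP as a shared standard: any conforming client
# can talk to any conforming server using the same request/response format.
import http.client

conn = http.client.HTTPConnection("example.com")   # illustrative host
conn.request("GET", "/")                            # a standard HTTP GET request
response = conn.getresponse()

print(response.status, response.reason)             # e.g., 200 OK
html = response.read()                               # the HTML document itself
print(html[:200])                                    # first bytes of the mark-up
conn.close()
```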

The centrality of standards to the Internet means that discussions over the best mechanism to manage and implement them are as old as the network itself. Indeed, long before the current governance debate, standards were already the product of de facto governance, primarily by consensus-driven technical bodies. This need for governance arises because standards rely for their effectiveness on universal acceptance, which, in turn, relies on groups or bodies to decide upon and publish standard specifications. Without such control, the Internet would simply fragment into a Babel of competing technical specifications. Indeed, such a spectre haunts the future of HTML and XML, which over time have become increasingly fragmented due to competing versions and add-ons by private companies.

Another important issue concerns what some perceive as the gradual “privatization” of standards. While many standards on the Internet have traditionally been “open” (in the sense that their specifications have been available to all, often without a fee), there have been some indications of a move towards fee-based standards. For example, in 2001, the World Wide Web Consortium (W3C), a critical Internet standards body, raised the ire of the Internet community when it proposed endorsing patented standards for which users would have to make royalty payments; the proposal was later withdrawn, but it raised significant concerns that such moves could reduce the openness of the network.

Finally, standards also require governance so that they can be updated to accommodate new technologies or needs of the Internet community. For example, ongoing network security concerns (driven by the rise of viruses, spam, and other forms of unwanted content) have prompted some to call for new specifications for TCP/IP that would include more security mechanisms. Likewise, some feel that the spread of broadband, and the rise of applications relying on voice and rich media like movies, require the introduction of Quality of Service (QOS) standards to prioritize certain packets over others.

Currently, for example, the network does not differentiate between an email and a phone call, which is why Internet telephony remains somewhat unreliable (voice packets can be delayed or dropped along the way). However, the introduction of QOS standards, which could discriminate between packets, could mean a departure from the Internet’s cherished e2e architecture. This difficult dilemma – balancing openness against flexibility and manageability – is an example of the importance of adequate governance mechanisms that are able to reconcile competing values and goals.
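
A toy priority queue illustrates the general idea behind packet prioritization: packets carrying voice are forwarded before packets carrying email. This is only a sketch of the concept of QOS, with invented priority values, not a rendering of any particular standard.

```python
# Illustrative sketch of Quality of Service: a priority queue that forwards
# voice packets before email packets. Not based on any specific QOS standard.
import heapq

PRIORITY = {"voice": 0, "email": 2}   # lower number = forwarded first (invented)

queue = []
for seq, (kind, payload) in enumerate([
    ("email", "newsletter"),
    ("voice", "audio frame 1"),
    ("email", "invoice"),
    ("voice", "audio frame 2"),
]):
    # seq preserves arrival order as a tie-breaker among equal-priority packets
    heapq.heappush(queue, (PRIORITY[kind], seq, kind, payload))

while queue:
    _, _, kind, payload = heapq.heappop(queue)
    print(f"forwarding {kind} packet: {payload}")
# Voice frames leave the queue first; a strictly best-effort, end-to-end
# network would instead treat every packet identically.
```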

Actors Involved

Currently, Internet standards are determined in various places, including international multi-sectoral bodies, regional bodies, industry fora and consortiums, and professional organizations. This wide range of venues is emblematic not just of the variety of actors involved in Internet governance (broadly defined), but also of the range of issues and interests that must be accommodated. Industry representatives, for example, are often far more concerned with speed and efficiency in decision-making, while civil society representatives would sacrifice a certain amount of speed in the name of greater consultation and deliberation.

Amidst this variety of actors, three in particular are critical to the development of core Internet standards:

The Internet Engineering Task Force (IETF): IETF, a large body whose participation is open to all individuals and groups, is the primary standards body for the Internet. Through its various working groups, it sets standards for Internet security, packet handling, and routing, among other issues. Probably the most important standards that fall under IETF are the TCP and IP protocols.

In addition to being one of the most important groups, IETF has also been a quintessential Internet decision-making body. Its open participation, consultative decision-making processes, and relative lack of organizational hierarchy have made it a model for an inclusive, yet highly effective, system of governance that is unique to the online world. Until recently, most of IETF’s work took place informally, primarily via face-to-face meetings, mailing lists and other virtual tools. However, as the organization’s membership (and the Internet itself) grew in size and complexity, certain administrative changes were introduced to streamline – and, to a certain extent, centralize – decision-making. The Internet Activities Board (now the Internet Architecture Board (IAB)) made standards decisions until 1992, when this task became the responsibility of the Internet Engineering Steering Group (IESG). Please refer to Appendix 2, ‘Internet Standards’, for more information on IETF and other standards bodies.

International Telecommunication Union-Telecommunication Standardization Sector (ITU-T): ITU-T is the standards-setting arm of the ITU. It operates through study groups whose recommendations must subsequently be approved by member states. As one of the oldest standards bodies, and as part of a UN agency with an intergovernmental membership, the ITU-T’s standard recommendations traditionally carry considerable weight. However, during the 1990s, with the rise of the Internet, ITU-T (at that time known as the International Telegraph and Telephone Consultative Committee, or CCITT) found its relevance called into question due to the increasing importance of new bodies like IETF. Since that time, ITU-T has substantially revamped itself and is today a key player in the standards-setting community.

ITU-T and IETF do attempt to work together, but they have a somewhat contentious relationship. While the latter represents the open and free-wheeling culture of the Internet, the former has evolved from the more formal culture of telecommunications. IETF consists primarily of technical experts and practitioners, most of them from the developed world; ITU-T is made up of national governments, and as such can claim the membership (and the resulting legitimacy) of many developing country states. Their disparate cultures, yet similar importance, highlight the necessity (and the challenge) of different groups working together to ensure successful governance.

World Wide Web Consortium (W3C): W3C develops standards and protocols that exist on top of core Internet standards. It was created in 1994 as a body that would enhance the World Wide Web by developing new protocols while at the same time ensuring interoperability. Among other issues, it has developed standards to promote privacy (the P3P platform) and a protocol that allows users to filter content (PICS). W3C also works on standards to facilitate access for disabled people.

The consortium is headed by Tim Berners-Lee, sometimes referred to as the “inventor of the World Wide Web” for his pioneering work in developing key standards like HTML and HTTP. It is a fee-based organization, with a significant portion of its membership made up of industry representatives.

Management of the Domain Name System

The coordination and management of the DNS is another key area requiring governance at the logical layer. In recent years, the DNS has been the focus of some of the most heated (and most interesting) debates over governance, largely due to the central role played by ICANN.21

Understanding the DNS

In order to understand some of the governance issues surrounding the DNS, it is first necessary to understand what the DNS is, and how it functions. Operating as a lookup system, the DNS allows users to use memorable alphanumeric names to identify network services such as the World Wide Web and email servers. It is a system that maps names (e.g., www.undp.org) to IP addresses (e.g., 165.65.35.38), which are written as strings of four numbers separated by periods.
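
In practice, this lookup happens automatically every time a user types a name into a browser. The sketch below uses Python's standard socket module to resolve a host name into an IP address; the host name is taken from the example above, and the address returned today will not necessarily match the one quoted in the text, since DNS records change over time.

```python
# Minimal sketch of the name-to-number lookup the DNS performs.
import socket

hostname = "www.undp.org"                     # example name from the text
ip_address = socket.gethostbyname(hostname)   # queries the DNS resolver
print(f"{hostname} -> {ip_address}")
```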

Until 2000, the Internet had eight generic top-level domains, or gTLDs: .arpa, .com, .net, .org, .int, .edu, .gov and .mil. As the Internet grew, there were increasing calls for more top-level domain names to be added, and, in 2000, ICANN announced seven new gTLDs: .aero, .biz, .coop, .info, .museum, .name, and .pro. Another series of new gTLDs has also been announced recently, although not all of them are yet operational.

In addition to these gTLDs, the DNS also includes another set of top-level domains known as country code top-level domains, or ccTLDs. These were created to represent individual countries, and include two-letter codes such as .au (Australia), .fr (France), .gh (Ghana), and .in (India).

Actors Involved

ICANN, a non-profit corporation formed by the US government in 1998 to manage the DNS, is the main body responsible for governance of the DNS. At its founding, ICANN was upheld as a new model for governance on the Internet – one that would be international, democratic, and include a wide variety of stakeholders from all sectors.

Almost from the start, however, ICANN has proven to be controversial, and its many shortcomings (perceived or real) have led some observers to conclude that a more traditional system of governance, modeled after multilateral organizations like ITU or the UN, would be more appropriate for Internet governance. Indeed, although not always explicitly mentioned, one sub-text of the WSIS process is, precisely, such a rethink of governance models. Many developing countries, in particular, would like to see a greater role for national governments, and a distancing of core DNS functions from the US government, under whose aegis ICANN still functions.

ICANN’s missteps and mishaps cannot be fully documented here. They include a short-lived attempt to foster online democracy through direct elections to ICANN’s board (the effort was troubled from the start and the elections no longer take place). They also include a variety of decisions that have led many to question the organization’s legitimacy, accountability and representation. To be fair to ICANN, its many troubles are probably indications not so much of a single organization’s shortcomings, but rather of the challenges in developing new models of governance for the Internet.

ICANN’s troubles also shed light on the difficulty of distinguishing between technical decision-making and policy decisions which have political, social, and economic ramifications. ICANN’s original mandate is clear: technical management of the DNS. Esther Dyson, ICANN’s first chair, has argued that

ICANN does not “aspire to address” any Internet governance issues; in effect, it governs the plumbing, not the people. It has a very limited mandate to administer certain (largely technical) aspects of the Internet infrastructure in general and the Domain Name System in particular.22

Despite such claims, ICANN’s history shows that even the most narrowly defined technical decisions can have important policy ramifications. For example, the decision on which new top-level domain names to create was a highly charged process that appeared to give special status to some industries (e.g., the airline industry). In addition, ICANN’s decisions regarding ccTLDs have proven contentious, touching upon issues of national sovereignty and even the digital divide. Indeed, one of the more difficult issues confronting governance of the DNS is the question of how, and by whom, ccTLDs should be managed. Long before the creation of ICANN, management of many ccTLDs was originally granted by IANA to volunteer entities that were not located in, nor related to, the countries in question. As a result, some countries (e.g., South Africa) have resorted to legal action to reclaim their ccTLDs. To be fair, many of these assignments were undertaken at a time when few governments, let alone developing country governments, had any interest in, or awareness of, the Internet. The DNS was created in 1984, long before the creation of ICANN in 1998. ICANN’s relationship with country operators has also sometimes proven difficult due to the oversight of the organization by the US government.

This has led to a situation in which many country domain operators have created their own organizations to manage regional TLDs. For example, European domain operators have created their own regional entity, the Council of European National TLD Registries (CENTR). While not yet so dramatic a development, regional entities do raise the frightening prospect of Internet balkanization, in which a fragmented DNS is managed by competing entities and the Internet is no longer a global, seamless network. To remedy this problem, ICANN created a supporting organization of ccTLDs, the Country Code Names Supporting Organisation (ccNSO). All ccTLD operators have been invited to join this organization, including all the participants in CENTR. Significant progress has been made in achieving this objective.

IP Allocation and Numbering

As mentioned, IP addresses are composed of sets of four numbers (ranging from 0 to 255) separated by periods – this is just a representation of a 32-bit number that expresses an IP address in IPv4. In fact, every device on the network requires a number, and numbering decisions for IP addresses as well as for other devices are critical to the smooth functioning of the Internet.

Several governance steps have already been taken in the realm of numbering. One of the most important areas of governance concerns recent moves to address a perceived shortage of IP addresses. Under the current protocol, IPv4, there exist some 4.2 billion possible unique IP addresses. The number may appear large, but the proliferation of Internet-enabled devices like cell phones, digital organizers and home appliances – each of which is assigned a unique IP number – could in theory deplete the available addresses, thereby stunting the spread of the network.
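
The arithmetic behind this figure is straightforward: a 32-bit address space allows 2^32 distinct values, and a dotted-quad address is simply a readable notation for one such 32-bit number. The snippet below works this out using Python's standard ipaddress module; the address shown is the example quoted earlier in the text.

```python
# Address-space arithmetic for IPv4.
print(2 ** 32)    # 4294967296 -- roughly the 4.2 billion possible IPv4 addresses

# A dotted-quad IPv4 address is just a convenient notation for a 32-bit number.
import ipaddress
addr = ipaddress.IPv4Address("165.65.35.38")   # example address from the text
print(int(addr))                                # the same address as one integer
```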

In addition, as we shall see below, the shortage of IP space has been a particular concern for developing countries.

To address this potential shortage, two steps have been taken:

  • First, the technical community has developed a new protocol known as IPv6. This protocol, which would allow for some 340 undecillion (3.4 × 10^38) addresses, essentially solves the shortage problem. IPv6 also introduces a range of additional features not currently supported in IPv4, including better security, and the ability to differentiate between different streams of packets (e.g., voice and data).
  • Second, the technical community also introduced a process known as “Network Address Translation” (NAT), which allowed for the use of private addresses. Under NAT, individual computers on a single private network (for example, within a company or university) use non-unique private addresses that are translated into public, unique IP addresses as traffic leaves the private network boundary (a minimal sketch follows this list). Many Internet architects consider this a serious erosion of the Internet’s end-to-end principles.
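
The following is a minimal, highly simplified sketch of the translation step NAT performs. The addresses and ports are invented (the public address is drawn from a documentation range), and real NAT devices track protocols, timeouts and connection state, none of which is modelled here.

```python
# Highly simplified sketch of Network Address Translation (NAT).
# One public address is shared by several machines with private addresses;
# the NAT device rewrites outgoing packets and remembers the mapping so that
# replies can be delivered back to the right internal machine.

PUBLIC_ADDRESS = "203.0.113.10"        # documentation-range address, illustrative

translation_table = {}                  # public port -> (private address, private port)
next_public_port = 40000

def translate_outgoing(private_addr: str, private_port: int) -> tuple[str, int]:
    """Map a private source address/port to the shared public address/port."""
    global next_public_port
    public_port = next_public_port
    next_public_port += 1
    translation_table[public_port] = (private_addr, private_port)
    return PUBLIC_ADDRESS, public_port

def translate_incoming(public_port: int) -> tuple[str, int]:
    """Look up which private machine a reply on this public port belongs to."""
    return translation_table[public_port]

# Two private machines sharing one public address:
print(translate_outgoing("192.168.1.2", 5000))   # -> ('203.0.113.10', 40000)
print(translate_outgoing("192.168.1.3", 5000))   # -> ('203.0.113.10', 40001)
print(translate_incoming(40001))                  # -> ('192.168.1.3', 5000)
```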

An additional example of governance in the realm of numbering can be found in recent efforts to develop a shared platform for the Public Switched Telephone Network (PSTN) and the IP network. These efforts have been led by the IETF, which has developed a standard known as ENUM. Essentially, ENUM translates telephone numbers into Internet domain names, and as such “merges” the DNS with the existing telephone numbering system. Although not widely deployed yet, it offers potential in several areas. For example, it should allow telephone users, with access only to a 12-key keypad, to access Internet services; it should also make it significantly easier to place telephone calls between the PSTN and the Internet using VoIP.
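
The mapping ENUM performs is mechanical: the digits of the telephone number are reversed, separated by dots and placed under a dedicated domain, so that the ordinary DNS can then be used to look up services for that number. The sketch below follows that published convention; the telephone number itself is invented.

```python
# Sketch of the ENUM mapping: an E.164 telephone number becomes a domain name
# by reversing its digits, dotting them, and appending the e164.arpa suffix.

def enum_domain(e164_number: str) -> str:
    digits = [c for c in e164_number if c.isdigit()]   # drop '+', spaces, dashes
    return ".".join(reversed(digits)) + ".e164.arpa"

print(enum_domain("+44 20 7946 0148"))   # invented number
# -> 8.4.1.0.6.4.9.7.0.2.4.4.e164.arpa
# DNS records stored under that name can then point to email, VoIP or web
# services associated with the telephone number.
```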

Actors Involved

The main body involved in the distribution of IP numbers is IANA. IANA allocates blocks of IP address space to the Regional Internet Registries (RIRs),23 which allocate IP addresses and numbers to ISPs and large organizations. Currently, the issue of how numbers will be allocated under IPv6 has become a matter of some contention. In particular, it appears possible that national governments, which have not shown much interest in IP allocation thus far, may henceforth play a greater role. Unless this is done with great care, the basic need to constrain the growth of routing tables could be seriously compromised.

Although IANA distributes IP space, it was the IETF, in consultation with industry and technical groups, that played the leading role in developing IPv6. As noted, the IETF has also been at the forefront of ENUM development. However, given the bridging role played by ENUM between the Internet and existing telephone systems, organizations more traditionally involved in telecommunications governance have also claimed a role. In particular, as the international authority for telephone codes, the ITU has been involved in the application of ENUM; the IETF design specified a key role for the ITU in validating the topmost levels of delegation for ENUM domain names. In addition, as PSTN numbering still remains largely the domain of national regulators, it seems likely that any widespread deployment of ENUM will by necessity involve governments too.

What are some of the governance issues at the content layer?

For average users, the content layer is their only experience of the Internet. This is where the programs, services and applications they access on an everyday basis reside. This does not mean that governance at the content layer is the only area relevant to average users. As should be clear by now, the three layers are inter-dependent, and what happens at the content layer is very much contingent on what happens at the other layers. For example, without an effective mechanism for ensuring interconnection, it would be impossible – or at any rate fruitless – to use a web-browser at the content level.

Nonetheless, governance at this layer is a matter of critical (if not singular) importance for users. Here, we examine three issues of particular importance:

Internet Pollution

Pollution is the generalized term used to refer to a variety of harmful and illegal forms of content that clog (or pollute) the Internet. Although the best known examples of pollution are probably spam (unsolicited email) and viruses, the term also encompasses spyware, phishing attacks (in which an email or other message solicits and misuses sensitive information, e.g., bank account numbers), and pornography and other harmful content.

From a minor nuisance just a few years ago, Internet pollution has risen to epidemic proportions. By some estimates, 10 out of every 13 emails sent today are spam.24 Such messages clog often scarce bandwidth, diminish productivity, and impose an economic toll. According to one study, spam results in an annual loss of Euro 10 billion through lost bandwidth alone.25 Similarly, in the United States, the Federal Trade Commission (FTC) announced in 2003 that up to 27.3 million Americans had been victims of some form of identity theft within the previous five years, and that in 2002 alone, the cost to businesses of such theft amounted to nearly US$ 48 billion.26 It should be made clear that these losses are a consequence of the creation of new credit card accounts by the identity thieves, not necessarily the stealing of money from the individual victims of identity theft.

In addition to the economic damage, pollution also damages the Internet by reducing the amount of trust average users have in the network.27 Trust is critical to the Internet’s continued growth. If users begin fearing the openness that has thus far made the network such a success, this would slow the spread of the network, damage the prospects for e-commerce, and possibly result in a number of “walled gardens” where users hide from the wider Internet community.

Actors Involved

One of the reasons for the rapid growth of pollution is the great difficulty in combating it. Spam and viruses exploit the Internet’s e2e architecture and anonymity. With few exceptions, unsolicited mailers and those who spread viruses are extremely difficult to track down using conventional methods. In this sense, pollution represents a classic Internet challenge to traditional governance mechanisms: combating it requires new structures and tools, and new forms of collaboration between sectors.

A variety of actors and bodies are involved in trying to combat spam. They employ a diverse range of approaches, which can be broadly divided into two categories:

Technical approaches: Technical solutions to spam include the widely deployed junk mail filters we find in our email accounts, as well as a host of other detection and prevention mechanisms. Yahoo and Microsoft, for example, have discussed implementing an e-stamp solution that would charge senders a tiny amount for each email to be accepted into an inbox. Although such proposals are likely to encounter opposition, they address the underlying economic reality that spam is so prolific in part because it is so cheap (indeed, free) to send unwanted emails.
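
A toy keyword-based filter illustrates the general principle behind such tools, although production filters rely on far more sophisticated statistical and reputation-based techniques. The phrase list, messages and threshold below are all invented for the example.

```python
# Toy illustration of content-based spam filtering. Real filters use
# statistical models, sender reputation and many other signals.

SUSPICIOUS_PHRASES = ["free money", "act now", "winner", "no obligation"]  # invented list

def looks_like_spam(message: str, threshold: int = 2) -> bool:
    text = message.lower()
    hits = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    return hits >= threshold

inbox = [
    "Meeting moved to 3pm, see agenda attached.",
    "You are a WINNER! Claim your free money now, act now, no obligation!",
]

for msg in inbox:
    label = "spam" if looks_like_spam(msg) else "ham"
    print(f"[{label}] {msg}")
# The threshold illustrates the false-positive trade-off discussed below:
# set it too low and legitimate mail is discarded, too high and spam gets through.
```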

A number of industry and civil society groups also exist to develop technical solutions to pollution, and to spam in particular. For example, the Messaging Anti Abuse Working Group (MAAWG) is a coalition of leading ISPs, including Yahoo, Microsoft, Earthlink, America Online, and France Telecom. MAAWG has advocated a set of technical guidelines and best practices to stem the tide of spam. It has also evaluated methods to authenticate email senders using IP addresses and digital content signatures. Similarly, the Spamhaus Project is an international non-profit organization that works with law enforcement agencies to track down the Internet’s Spam Gangs, and lobbies governments for effective anti-spam legislation.

Legal and regulatory approaches: These joint industry efforts have led to some successes in combating spam. However, technical efforts to thwart spam must always confront the problem of false positives – i.e., the danger that valid emails will wrongly be classified as spam by the program or other technical tool.

For this and other reasons, technical solutions have increasingly been supplemented by legal approaches, at both the national and international levels. In the United States, the FTC has deemed spreading spyware a violation of federal law, and the CAN-SPAM Act of 2003 makes it a criminal offence to send misleading commercial email. Similarly, in 2002, South Korea addressed the problem of spam originating from its territory by implementing a new anti-spam law; results have been positive, with spam falling from 90 percent to 70 percent of South Korean commercial email within a three-month period.28

Given the global reach of the Internet, such national approaches clearly need to be supplemented by cross-border co-operation. As Viviane Reding, the EU Information Society Commissioner, recently put it: “[The EU] cannot act alone in the fight against spam, as it is essentially borderless.” In that context, two worldwide initiatives deserve mention: the Seoul-Melbourne Pact, signed by Australia, Korea and several other Asia-Pacific Internet economies; and the London Action Plan, an initiative of the US FTC and the British Department of Trade and Industry that acts as a global think-tank on spam, bringing together regulators, civil society and industry stakeholders from around the world.

At the multilateral level, the OECD Anti Spam Task Force is currently developing a comprehensive “toolkit” designed to combat spam, while ITU has also been active, organizing thematic meetings on spam and cyber-security and considering the possibility of establishing a global Memorandum of Understanding (MoU) on the issue (perhaps within the context of WSIS).

Ultimately, despite all the efforts to solve it, pollution remains a serious and growing menace. This does not mean that no effective governance solution exists. On the contrary, the issue is rare in that it unites a disparate group of stakeholders: businesses, individuals, governments, and civil society are all harmed by the proliferation of pollution. Far from being an example of the failure of governance, then, pollution could yet emerge as a model for a truly multi-sectoral collaboration that effectively deploys the range of legal and technical tools available.

Cybercrime

Cybercrime is intimately linked to the issue of online pollution. Indeed, many forms of pollution (e.g., phishing, pharming or unsolicited emails) can be considered examples of criminal activity. Cybercrime also encompasses a number of other actions, notably financial fraud, online pornography, hacking, and security attacks such as the injection of viruses, worms and Trojan horses, denial-of-service attacks, and a variety of other damaging practices. In addition, terrorism that is facilitated by the Internet has emerged as a major concern in recent years.

Cybercrime involves a range of issues, some of which are also evident in the offline world, and some of which are unique to the online environment. One issue that pertains to both environments concerns the need to balance checks on criminal activity with respect for human rights and civil liberties. Some observers have raised concerns that steps ostensibly taken by governments to limit crime may also facilitate censorship, surveillance and other forms of state repression. Numerous such instances were documented in a 2004 report, Internet Under Surveillance, by the civil society group Reporters without Borders.29

There are also issues that are unique to (or at any rate more pronounced in) the online world. The difficulty of securing evidence is one such issue: governments have struggled to frame laws that impose reasonable data retention requirements on ISPs and others without imposing undue burdens. In addition, the question of service providers’ liability has often proven difficult. While some countries hold providers responsible for criminal transactions conducted on their networks, many providers argue that they cannot possibly monitor all activity and should therefore not be held liable. This issue rose to the fore in late 2004, when the head of eBay India, Avnish Bajaj, was arrested over a pornographic video that had been sold on the company’s auction site (even though the posting of the video violated the company’s terms of service). Bajaj was subsequently released on bail, but the case led to a significant amount of discussion and debate around the world regarding the criminal liability of service providers.

Finally, the Internet poses new and unique challenges to international legal harmonization: on a global network, national jurisdictions sometimes come into conflict. When a company or provider located in one country offers customers in another country a service, it is not always clear which country’s laws should apply. We shall discuss this issue further later, in the section on globalization.

Actors Involved

As might be expected, national governments play a key role in controlling cybercrime. Many countries have now adopted fairly comprehensive legislation for a wide range of crimes. Often, though, laws adopted on a national basis only have limited effect, given the global nature of the network. For that reason, cybercrime is increasingly being addressed through multilateral treaties and agreements that involve several countries. Among the best known of such mechanisms is the Council of Europe (CoE) Convention on Cybercrime, which came into effect in July 2004. The convention, signed by 44 countries, including all EU nations, is considered the leading international instrument against cybercrime today. The G-8 nations, too, have issued a 10-point action plan for dealing with cybercrime.

Other non-State actors also have a role to play. Industry groups and private companies, for instance, can act by adopting codes of conduct and other self-regulatory mechanisms. Such mechanisms have become increasingly common as a way for ISPs to govern themselves and avoid what they perceive as heavy-handed state regulation. Sometimes, such efforts are also supplemented by the participation of civil society and consumer groups; this is seen as an effective way of ensuring that industry governs itself in a manner that is truly in the public interest.30

Intellectual Property Rights

Although it is hardly a new issue, Intellectual Property Rights (IPR) has recently risen to the top of the Internet governance agenda. This is, in large part, because the Internet has made it far easier to infringe copyright (and, to a lesser extent, other IPR) protections. From the simplest cut-and-paste operation to the more complex process of burning a CD, the ease of duplicating and disseminating information has led some to protest that the Internet is undercutting society’s incentive to innovate. Others argue that, on the contrary, new laws designed to tighten IPR protections on the Internet are, in fact, undermining the principle of “fair use”, which traditionally upholds the rights of consumers to use copyrighted works for non-commercial purposes (e.g., lending a CD to a friend).

IPR is a vast topic – too vast to cover comprehensively in a primer such as this one.31 A brief overview would identify three distinct (if inter-linked) areas where governance solutions for IPR are required:

Copyright and peer-to-peer networks: Copyright violations, particularly within the music industry, have emerged as perhaps the most important IPR issue on the Internet. The rise of peer-to-peer (P2P) networks like Napster and Kazaa, which connect individual users and allow them to share digital files on a massive scale, has threatened existing business models in the music (and, to a lesser extent, movie and video game) industries. In 2003, worldwide sales of music fell by 7.6 percent in value, a fall that industry representatives attributed primarily (and somewhat questionably) to music downloads and file-sharing.32

In response, the music industry, represented by the International Federation of the Phonographic Industry (IFPI) worldwide and the Recording Industry Association of America (RIAA), has begun to file lawsuits against file-sharing networks, and even against individual users of those networks. Between September 2003 and July 2005, the IFPI filed more than 14,000 lawsuits against file-sharers in 12 countries; in the US, the recording industry has sued thousands of users, and settled 600 cases for around US$ 3000 each.33 Many of the US cases have been filed under the US Digital Millennium Copyright Act (DMCA), which in 1998 strengthened copyright laws and extended their application to the Internet. In Asia, too, governments have enacted a number of statutes and provisions to strengthen IPR provisions. For example, under a recently revised copyright law in Singapore, local ISPs are required to remove websites if copyright owners report a violation on the site.

Such aggressive tactics appear to have had some success, and the downward trend in record sales has slowed somewhat. The apparent change in fortunes has been aided by the record industry’s recognition that it needed to change its business model to adapt to the new realities of the Internet. As a result, legal and paid download services have become increasingly popular among consumers. Apple’s online music store, iTunes, is perhaps the best known, but a number of other services also exist. The IFPI estimates that legal downloads in the first half of 2005 tripled over the same period in 2004; meanwhile, it also estimates that the number of illegal files available on file-sharing networks rose only 3 percent in the first half of 2005.34

Software and Open Source: IPR difficulties within the creative industries have received the most attention, but the related issues of software copyrights and piracy are equally important. They are particularly essential to address in the developing world, where the high prices of software are held responsible by some for perpetuating the digital divide.

This issue in fact pre-dates the rise of P2P networks and file-sharing, but has become more pressing recently for two reasons. First, costly de facto standard operating systems and software packages make it increasingly difficult for companies and other entities to survive in the online world. Second, as a growing number of countries seek to join WTO, and as their obligations under the Trade-Related Aspects of Intellectual Property Rights (TRIPS) agreement become effective, pressure has been growing on developing nations in particular to enforce IPR protections more stringently.

Unlike with medicines (which also fall under TRIPS), software companies have shown little inclination to lower the prices of their products. As a consequence, developing countries (and others) have increasingly turned to Free and Open Source Software (FOSS), which is available without licensing fees to programmers and users around the world. Linux, in particular, has emerged as a popular alternative operating system, and in some countries local governments have begun mandating (or at least encouraging) their departments to use FOSS. For developing countries in particular, using FOSS can help ensure that their vital technology functions do not grow dependent on expensive foreign software companies.

Domain Names and the Uniform Dispute Resolution Policy (UDRP): IPR protections for software, music and movies existed long before the Internet. But protection for domain names, which have emerged as a vital form of intellectual property in the online world, is a new area of IPR law and policy that has emerged specifically due to the Internet.

During the early and mid-1990s, when the commercial potential of the World Wide Web became apparent, the online world witnessed the birth of a trend known as cybersquatting – a practice by which domain names containing company names, trademarks or other forms of intellectual property were registered by speculators and, often, resold to the companies in question for exorbitant sums. Resolving disputes over domain names was difficult, partly due to the lack of legal precedent, and partly due to the international nature of the Internet (i.e., as we shall see later, determining the appropriate national jurisdiction is often difficult).

It was in response to such disputes that ICANN, with the help of WIPO, developed its UDRP,35 a series of guidelines that aimed to circumvent the often cumbersome, expensive and ineffective legal options available. UDRP contains instructions for domain name registrars on when and how to cancel or transfer ownership of domain names in dispute. Some critics have accused UDRP of favouring large corporations and commercial interests. On the whole, though, it has proven a relatively successful alternative to traditional, and predominantly legal, IPR protections.

Actors Involved

As the preceding discussion makes clear, a range of actors and institutions are involved in the IPR governance agenda. The groups involved include national governments, trade associations (like the Recording Industry Association of America – RIAA), multilateral institutions like WIPO and WTO, and international non-governmental organizations like ICANN. In addition, civil society groups like the Electronic Frontier Foundation (EFF) and the Creative Commons movement, both of which seek to increase the amount of non-copyrighted information available in the public domain, are also playing a growing role.

The involvement of these various actors points to the international, and multi-sectoral, nature of IPR governance on the Internet. Increasingly, it is clear that IPR governance cannot be successfully led solely by national governments and legal tools. A range of alternative bodies and mechanisms (e.g., the UDRP process mentioned earlier) will also be essential.

One trend worth highlighting in this context is the increasing reliance on technology in protecting IPRs. Digital rights management (DRM) software, in particular, has become common in recent years, and is used by many companies to control how songs, movies or other content can be accessed and disseminated. Such software has also received legal recognition, for example in the WIPO Copyright Treaty and in the DMCA, which made it a criminal offence to try to break DRM protections. Singapore’s recent intellectual property law also treats attempts to circumvent technical protections as a criminal offence. DRM tools are effective, but they are also somewhat controversial: critics argue that they undermine fair-use rights, and give record and other companies too much control in determining how consumers can use material they purchase. For example, DRM software can determine how many times a song can be played, or on how many different devices it can be stored.

These are complicated issues: existing legal and other protections are still in the process of adapting to the very new environment represented by the Internet. It remains to be seen whether the emergent processes of IPR governance can uphold the critical balance between an inventor’s incentive to innovate, and citizens’ right of ownership.