A Bit History of Internet/Printable version

A Bit History of Internet

The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at

Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 License.



The Internet is many things to many people. Some people use it for socializing, some for communicating, some for learning, some for remotely controlling equipment, while others just use it for fun. The Internet has served many purposes beyond its original intention of providing a reliable communication infrastructure in the face of a disaster such as a nuclear attack. Most users of the Internet are not technology savvy and cannot even differentiate between bits and bytes or between PCs and servers. Yet amazingly, without knowing a thing about how it works, they use the Internet to complete their tasks efficiently and effectively. It is our hope that by writing this book, we may shed some light on the history of the Internet in fun, intuitive, and informative ways.

The title chosen for this book is “A Bit History of Internet”. If you can imagine how many exabytes of information are available on the Internet (one exabyte is equivalent to 1,000,000,000,000,000,000 bytes), you'll see that this book covers only a small fraction of the history of the Internet. We, the users (or citizens) of the Internet, nevertheless try our best to contribute to information about the Internet as we've come to understand it from many online and offline sources. The term "collective intelligence" describes a major underpinning of the Internet phenomenon; now we are using the same technique to produce this book. By writing this book we hope that current and future generations of Internet users can appreciate the Internet as it is, use it to its maximum potential, and above all leverage it for the benefit of mankind.

Chapter 1 introduces the concept of the Internet and how it came into existence. We cover the key players and the separation of the Internet (as we know it today) from the original ARPANET.

Chapter 2 lays the foundation of circuit switching vs packet switching. It describes how the limits of circuit switching were apparent to the early Internet pioneers, and how the conventional circuit switching community originally dismissed the possibility of a wide scale deployment of the new packet switching technology.

Chapter 3 introduces the Internet edges. Unlike a dumb circuit switching node, the edge nodes are intelligent, performing most of the processing for running end users' specific tasks on the network.

Chapter 4 introduces the simple Internet core architecture that enables easy maintenance, and the Internet's highly reliable/efficient infrastructure which distributes the routing of information packets.

Chapter 5 introduces the fundamentals of networked nodes on the Internet. One of the key building blocks that form the basis of communication on the Internet is the client-server concept. Due to the proliferation of computing power to the end node (i.e. the PC), the PC can become a server while simultaneously acting as a client. This relatively new phenomenon, covered in Chapter 6, has been made popular by Napster and other peer-to-peer (P2P) file-sharing networks, and has taken the entertainment industry by surprise since it makes it easy for users to share their music collections. This issue remains unresolved to this day.

Chapter 7 introduces the new service model based on the Internet's global and high-speed characteristics. Accessing remote servers across the continent is as simple as accessing local resources. This has created a new cloud-computing industry that is championed by Amazon and Google. The argument for the cloud proponents is that it is easier and cheaper for a larger, specialized company to host the software and data on behalf of smaller businesses, rather than each end-user setting up and maintaining their own server farm.

Chapter 8 introduces the new concept of the Internet of Things. With the introduction of IPv6 addressing it is theoretically possible for every item on Earth to be connected to the Internet. Useful ideas have come out of this, like having a multitude of Internet-connected sensors that can monitor critical information such as air or sea pollution. Last but not least, the conclusion will provide some insights into the history of the Internet and what is awaiting us in the future, as all humans, and indeed all things, are connected to the global and universal Internet.

Chapter 1 : Introduction

The Internet is a worldwide system of interconnected computer networks. The computers and computer networks exchange information using TCP/IP (Transmission Control Protocol/Internet Protocol). The computers are connected via the telecommunications networks, and the Internet can be used for e-mailing, transferring files and accessing information on the World Wide Web.[1]

The World Wide Web is a system of Internet servers that use HTTP (Hypertext Transfer Protocol) to transfer documents formatted in HTML (Hypertext Mark-up Language). These are viewed by using software for web browsers such as Netscape, Safari, Google Chrome and Internet Explorer. Hypertext enables a document to be connected to other documents on the web through hyperlinks. It is possible to move from one document to another by using hyperlinked text found within web pages.[2]
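Hypertext links can be seen directly in a page's HTML markup. The sketch below, using Python's standard html.parser module, extracts the targets of `<a href="...">` hyperlinks from a small hypothetical page (the example.com URLs are placeholders invented for illustration):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the targets of <a href="..."> hyperlinks in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = """
<html><body>
  <p>See the <a href="http://example.com/history">history page</a>
  or the <a href="http://example.com/glossary">glossary</a>.</p>
</body></html>
"""

parser = LinkExtractor()
parser.feed(page)
print(parser.links)
# -> ['http://example.com/history', 'http://example.com/glossary']
```

This is essentially what early crawlers and browsers did: parse the HTML, find the hyperlinks, and follow them to move from one document to another.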

Nowadays, there are several ways to access the Internet. As technology keeps improving, the number of ways to access the Internet also increases. People can now access Internet services using their cell phones, laptops, and various other gadgets. The number of Internet service providers also keeps increasing; in Malaysia, for example, there are many Internet service providers such as TM Net, Maxis, Digi, Celcom, and U Mobile.[3]

Communication has become much easier than before due to the advent of the Internet. One of the conveniences is that messages, in the form of e-mail, can be sent to any corner of the world within a fraction of a second. Besides that, e-mail also facilitates mass communication, in which one sender reaches many receivers. Some of the services made available by the Internet include video conferencing, live telecasts, music, news, and e-commerce. These communication networks are widely used across the globe.

Internet History

The rapid development of the Internet started in the early 1960s, in parallel with the development of computers. Scientists and researchers began to pursue a great vision: a future in which everyone would be able to communicate using their computers. J.C.R. Licklider of MIT first proposed a global network of computers in 1962, followed by Leonard Kleinrock, also of MIT, who published the first paper on packet switching theory.[4]

ARPANET, the forerunner of the Internet, was a project launched by the U.S. Department of Defense. It was brought online on October 29, 1969 by Charley Kline at UCLA, when he attempted to perform a remote login from UCLA to SRI. To gain the public's attention, the first public demonstration of ARPANET was made at an international conference in October 1972.[5]

The initial ARPANET was a single, closed network. In order to communicate with an ARPANET host, one had to be attached to an ARPANET IMP (Interface Message Processor). The disadvantages of a single network were soon realized, and this led to the development of open-architecture networking and of internetworking protocols, which enabled multiple networks to be joined together. E-mail was adapted for ARPANET by Ray Tomlinson of BBN in 1972. The telnet protocol, enabling logging on to a remote computer, was published as a Request for Comments (RFC) in 1972. RFCs are a means of sharing developmental work throughout the community. The FTP protocol, enabling file transfers between Internet sites, was published as an RFC in 1973, and from then on RFCs were available electronically to anyone who could use the FTP protocol.[6]

Before the TCP/IP protocol was introduced by Bob Kahn, the networking protocol used for ARPANET was NCP, the Network Control Protocol. NCP did not have the ability to address networks further downstream than a destination IMP on ARPANET. By 1980, the Internet had reached a certain level of maturity and was increasingly exposed to public usage. Around the same time, France launched the Minitel project to bring data networking into everyone's home by giving a free terminal to each household that requested one.[7]

In the 1990s, the Internet's predecessor, ARPANET, finally came to an end and was replaced by NSFNET, which served as a backbone connecting regional networks in the USA. However, the most significant change to the Internet in the 1990s was the World Wide Web (WWW), the application that truly brought the Internet into our daily lives. It was followed by various technologies that are very familiar to us nowadays, such as VoIP, HTML, web browsers with graphical user interfaces, P2P file sharing, and instant messaging.[8]

Internet usage and benefits

In today's globalized and modernized world, the Internet has become more and more useful and has entered everyday life. From the early days of computers to now, communication between people has been the technology's most frequent use. People use the Internet to send and receive e-mail. Using e-mail leads people to spend more time online and encourages their use of the Internet for information and entertainment. All this can save time and money, because it is efficient and usually cheaper. As new Internet communication services arise, such as instant messaging, chat rooms, online games, auctions (eBay), and user groups, they become instantly popular.[9]

Information searching is the second basic Internet function. Many use the Internet to search for and download some of the free, useful software provided by developers on computers worldwide. The only major problem is finding what you need among the storehouses of data held in databases and libraries; without the information retrieval function, this would not be possible. There are also educational resources on the Internet, in various forms such as journals and databases on different types of knowledge. For example, people can access online journals or learn languages. There are also special homepages on topics or subjects of interest. These sites are very helpful to those doing academic research, and even to teachers; they go a long way toward improving their knowledge and making them more competitive and knowledgeable. There are now even virtual libraries and full degree programs, all available online.[10]

In fact, commerce on the Internet is an undeniable trend. Communication facilities have now rapidly become integrated into core business tools such as Internet marketing, banking, advertising, and so on; most business functions are communicative in nature. Users and consumers need not waste their time queuing up for transactions and services. The Internet also eases communication with customers, with business partners, and even with respondents to online surveys.[11]

To conclude, using the Internet in the right way gives many benefits.

Internet Applications

Network applications are among the most important parts of a computer network – if we couldn't create any useful applications, there would be no point in designing networking protocols to support them. Over the past half century, a variety of intelligent network applications have been created. These include the classic text-based applications that became popular in the 1970s and developed further in the 1980s: e-mail, remote access to computers, file transfers, newsgroups, and text chat. As time went by, more killer applications were introduced, such as the World Wide Web, encompassing Web surfing, search, and electronic commerce, in the mid-1990s. Applications like instant messaging with buddy lists and P2P file sharing were introduced at the end of the millennium.

Nowadays, besides updates to those existing applications, there are also successful audio and video applications, including Internet telephony, video sharing and streaming, social networking applications, Internet radio, and IP television (IPTV). In addition, the increased penetration of residential broadband access and the growing ubiquity of wireless access set the stage for an exciting future. At the core of network application development is writing programs that run on different end systems and communicate with each other through the network.[12]

Take the Web application as an example: there are two different programs that communicate with each other – the browser program running on the user's host (desktop, laptop, PDA, cell phone, and so on) and the Web server program running on the Web server host. As another example, in a P2P file-sharing system there is a program in each host that participates in the file-sharing community. In this case, the programs on the various hosts may be similar or identical. The basic design is the same – namely, confining application software to the end systems.[13]

Chapter Summaries

Chapter 2 is about packet switching vs. circuit switching. It begins with an introduction to packet switching. Then it delves into the history of packet switching, the advantages of packet switching, and the pros and cons of circuit switching, and ends with a comparison of the two data transfer methods.

Chapter 3 discusses the Internet edge. It begins with an introduction explaining the meaning of the Internet edge. It proceeds to explain the benefits of having intelligence at the edge. A brief explanation of shared access follows, and finally it discusses last-mile evolution and its problems.

Chapter 4 begins with an introduction to the Internet core. Following the introduction is a discussion of the advantages of the dumb core. It then explains packet-switched routers, virtual circuit routers (ATM), and the latest router technologies. It concludes with the benefits of Internet networking.

Chapter 5 discusses client-server computing. It begins with an introduction to the client-server model. A discussion of client and server programs follows. The chapter then continues with an explanation of both client/server evolution and server farms. It ends with some examples of client-server systems.

Chapter 6 explains Peer-to-Peer (P2P) file sharing for users on the Internet. P2P allows a group of computer users running the same networking program to access files directly from one another's hard drives. The chapter discusses the architecture and categories of P2P, namely pure P2P, hybrid P2P, and centralized P2P. It then shows how P2P techniques are applied in several aspects of daily life. It ends with a discussion of the advantages and disadvantages.

Chapter 7 is about cloud computing. In a cloud computing environment, you can store your files on the Internet in both private and public networks. This chapter covers the techniques and types of cloud computing, including the public cloud, private cloud, hybrid cloud, and community cloud. We will see the evolution of cloud computing through its architectures and technology. This is followed by applications of file storage in different sectors and the pros and cons of cloud computing.

Chapter 8 discusses the revolution in computing and communications whose development depends on the dynamic technical innovation of the Internet of Things (IoT). IoT allows us to connect things, sensors, actuators, and other smart technologies, thus enabling person-to-object and object-to-object communications. It begins with the architecture and explains IPv6, including the topics of addressing and routing, security, address auto-configuration, administrative workload, mobility (support for mobile devices), and multicast technologies.

References

  1. Ian Peter. The Beginnings of the Internet. Retrieved 21 November 2011, from http://www.nethistory.info/History%20of%20the%20Internet/beginnings.html
  2. Laura B. Cohen. Understanding the World Wide Web. Retrieved 21 November 2011, from http://www.internettutorials.net/www.asp
  3. Zyas. The Internet Concept. Retrieved 22 November 2011, from http://www.zyas.com/articles/The-Internet-Concept-24.html
  4. Walt Howe. A Brief History of the Internet. Retrieved 27 November 2011, from http://www.walthowe.com/navnet/history.html
  5. Gregory Gromov. Roads and Crossroads of the Internet History. Retrieved 27 November 2011, from http://www.netvalley.com/cgi-bin/intval/net_history.pl?chapter=1
  6. Kurose, Ross. History of Computer Networking and the Internet. In Computer Networking: A Top-Down Approach (pp. 61–64).
  7. Retrieved 27 November 2011, from http://www.livinginternet.com/i/ii_ncp.htm
  8. Histories of the Internet. Retrieved 27 November 2011, from http://www.isoc.org/internet/history/brief.shtml
  9. Terry Bernstein, Anish B. Bhimani, Eugene Schultz, Carol A. Siegel. Internet Security for Business. Retrieved 29 November 2011, from http://www.invir.com/int-bus-advantages.html
  10. Courtney Boyd Myers. How The Internet is Revolutionizing Education. Retrieved 30 November 2011, from http://chevyvolt.cm.fmpub.net/#http://thenextweb.com/insider/2011/05/14
  11. Emporion Plaza Ltd. Benefits of Internet Use. Retrieved 30 November 2011, from http://www.cyberethics.info/cyethics2/page.php?pageID=70&mpath=/86/88
  12. Kurose, Ross. Application Layer. In Computer Networking: A Top-Down Approach (pp. 111–112).
  13. Kurose, Ross. Application Layer. In Computer Networking: A Top-Down Approach (p. 113).

Chapter 2 : Circuit switching vs packet switching

Author/Editor: Heng Zheng Hann, Chong Yi Yong, Fareezul Asyraf, Farhana binti Mohamad, Fong Poh Yee

Introduction to Packet Switching

Packet switching is a digital networking communications method, similar to message switching but using short messages. All transmitted data are broken into suitably sized blocks (shorter units), regardless of their contents, types, or structures. These blocks are called packets. Each packet has a header attached before the packets are transmitted individually through the network. The path taken by each packet to reach its destination depends on the status of the links or on the algorithms used by the switching equipment. Switching is carried out by special nodes on the network which govern the flow of data; devices such as routers, switches, or bridges carry out this task.

Packet switching distinguishes itself from the other principal networking paradigm, circuit switching. Circuit switching is a method which sets up a limited number of dedicated connections with a constant bit rate and a constant delay between nodes for exclusive use during a communication session. In terms of network traffic fees, circuit switching is characterized by a fee per unit of connection time, even when no data is transferred, while packet switching is characterized by a fee per unit of information.

Many different strategies, algorithms, and protocols are used to maximize the throughput of packet switching networks. There are two major packet switching approaches in use today. The first is connectionless packet switching, also known as datagram switching; the second is connection-oriented packet switching, also known as virtual circuit switching. In datagram switching, each packet is treated individually and carries complete addressing and routing information; thus packets may be transmitted over different paths, taking whatever routes are available. In virtual circuit switching, packet switching behaves like circuit switching: connections between devices are established over dedicated nodes or routes for packet transmission, and each packet includes a connection identifier and is delivered in order.
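The contrast between the two approaches can be sketched in a few lines of code. The toy network, node names, and routing functions below are invented for illustration: the datagram router picks a next hop per packet, while the virtual-circuit router fixes one path at connection set-up and reuses it for every packet:

```python
import random

# Toy network: each node lists its possible next hops toward the right.
TOPOLOGY = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def datagram_route(src, dst):
    """Datagram switching: each packet is routed independently, hop by hop,
    so two packets between the same endpoints may take different paths."""
    path = [src]
    node = src
    while node != dst:
        node = random.choice(TOPOLOGY[node])  # per-hop, per-packet decision
        path.append(node)
    return path

def virtual_circuit_route(src, dst):
    """Virtual circuit switching: one path is chosen at connection set-up
    (here, always via B for the A->D circuit) and reused by every packet."""
    return [src, "B", dst]

# Datagram: packets may take A->B->D or A->C->D, in any mix.
for seq in range(4):
    print("datagram packet", seq, datagram_route("A", "D"))

# Virtual circuit: every packet follows the same pre-established path.
circuit = virtual_circuit_route("A", "D")
for seq in range(4):
    print("circuit packet", seq, circuit)
```

Because each datagram carries full addressing, the network needs no per-connection state; the virtual circuit trades that flexibility for in-order delivery along a fixed path.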

Packet mode communication may be used with or without intermediate forwarding nodes (packet switches or routers). In all packet mode communication, network resources are managed by statistical multiplexing or dynamic bandwidth allocation, in which a communication channel is efficiently divided into an arbitrary number of logical variable-bit-rate channels or data streams. Statistical multiplexing, packet switching, and other store-and-forward buffering introduce varying delay and throughput in the transmission. Each logical stream consists of a sequence of packets, which normally are forwarded by the multiplexers and intermediate network nodes asynchronously using first-in, first-out buffering. Alternatively, the packets may be forwarded according to some scheduling discipline for fair queuing, traffic shaping, or for differentiated or guaranteed quality of service, such as weighted fair queuing or the leaky bucket. In the case of a shared physical medium, the packets may be delivered according to some packet-mode multiple access scheme.
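As a concrete illustration of one of these disciplines, here is a minimal leaky-bucket sketch (the function and its parameters are hypothetical, not taken from any particular router): a burst of packets is buffered and drained at a fixed rate, smoothing the traffic that leaves the node:

```python
from collections import deque

def leaky_bucket(arrivals, rate):
    """Leaky-bucket shaper: however bursty the arrivals, packets leave
    at a fixed rate per time tick.

    arrivals: dict mapping arrival tick -> number of packets arriving then.
    rate: maximum packets forwarded per tick.
    Returns a list of (tick, packets_sent) pairs.
    """
    queue = deque()
    sent = []
    tick = 0
    last = max(arrivals)
    while tick <= last or queue:
        queue.extend([tick] * arrivals.get(tick, 0))  # buffer the burst
        out = 0
        while queue and out < rate:  # drain at the fixed rate
            queue.popleft()
            out += 1
        sent.append((tick, out))
        tick += 1
    return sent

# A burst of 5 packets arriving at tick 0 drains at 2 packets per tick.
print(leaky_bucket({0: 5}, rate=2))
# -> [(0, 2), (1, 2), (2, 1)]
```

The burst is absorbed by the buffer (introducing the varying delay mentioned above) while the output stays within the shaped rate.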

History of Packet Switching

The concept of switching small blocks of data was first explored by Paul Baran in the early 1960s. Independently, Donald Davies at the National Physical Laboratory (NPL) in the UK had developed the same ideas.

Baran developed the concept of message block switching during his research at the RAND Corporation for the US Air Force into survivable communications networks. It was first presented to the Air Force in the summer of 1961 as briefing B-265,[1] then published as RAND Paper P-2626 in 1962, and then included and expanded somewhat within a series of eleven papers titled On Distributed Communications in 1964. Baran's P-2626 paper described a general architecture for a large-scale, distributed, survivable communications network. The paper focuses on three key ideas: first, the use of a decentralized network with multiple paths between any two points; second, the division of complete user messages into what he called message blocks (later called packets); and third, the delivery of these messages by store-and-forward switching.

Baran's study made its way to Robert Taylor and J.C.R. Licklider at the Information Processing Technology Office, both wide-area network evangelists, and it helped influence Lawrence Roberts to adopt the technology when Taylor put him in charge of development of the ARPANET. Baran's work was similar to the research performed independently by Donald Davies at the National Physical Laboratory, UK. In 1965, Davies developed the concept of packet-switched networks and proposed development of a UK wide network. He gave a talk on the proposal in 1966, after which a person from the Ministry of Defence told him about Baran's work. A member of Davies' team met Lawrence Roberts at the 1967 ACM Symposium on Operating System Principles, bringing the two groups together.

Interestingly, Davies had chosen some of the same parameters for his original network design as Baran, such as a packet size of 1024 bits. In 1966 Davies proposed that a network should be built at the laboratory to serve the needs of NPL and prove the feasibility of packet switching. The NPL Data Communications Network entered service in 1970. Roberts and the ARPANET team took the name "packet switching" itself from Davies's work.

The first computer network and packet switching network deployed for computer resource sharing was the Octopus Network at the Lawrence Livermore National Laboratory that began connecting four Control Data 6600 computers to several shared storage devices and to several hundred Teletype Model 33 ASR terminals for time sharing use starting in 1968.

In 1973 Vint Cerf and Bob Kahn wrote the specifications for Transmission Control Protocol (TCP), an internetworking protocol for sharing resources using packet-switching among the nodes.

Packet Switching in Networks

Packet switching is used to optimize the use of the channel capacity available in digital telecommunication networks such as computer networks, to minimize the transmission latency, the time it takes for data to pass across the network. It is also used to increase robustness of communication.

There are seven layers in network communication for transmitting data from one node to another. The end user's computer buffers data of any length to be sent. If the data is too large, it is cut into smaller pieces (segmentation). Each segment is appended with a header which contains information about where the data is to be sent, along with other relevant information. The segment is then passed down to the next layer, which performs the same task on the segment (now regarded as its data). This process is repeated until the data reaches the final layer, where it is placed on the communication link and sent to the receiving node. The receiving node simply removes the headers from the data and puts all the segments back together in their initial form (reassembly).
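The segmentation and reassembly steps described above can be sketched as follows. This is a toy illustration; real protocol headers carry far more information than a sequence number and a length:

```python
def segment(data: bytes, mtu: int):
    """Sender side: split data into numbered packets, each with a small header."""
    packets = []
    for seq, start in enumerate(range(0, len(data), mtu)):
        payload = data[start:start + mtu]
        header = {"seq": seq, "length": len(payload)}  # toy header
        packets.append((header, payload))
    return packets

def reassemble(packets):
    """Receiver side: strip the headers and restore the original byte stream,
    using the sequence numbers to put segments back in order."""
    ordered = sorted(packets, key=lambda p: p[0]["seq"])
    return b"".join(payload for _, payload in ordered)

message = b"hello, packet-switched world"
packets = segment(message, mtu=8)          # 28 bytes -> 4 packets of up to 8 bytes
restored = reassemble(packets[::-1])       # even if packets arrive out of order...
print(restored == message)                 # -> True ...the message is restored
```

The sequence number in each header is what lets the receiver reorder datagrams that took different paths through the network.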

These layers are introduced to break down the complexity of communications. The top layer (layer 7) is the layer at the user level. As the layers go down, they become increasingly primitive. Layer 1 is the most primitive form, as it is just binary digits prepared to be transmitted to the end node.

The seven layers of the Open Systems Interconnection (OSI) model are:

7. Application Layer: Gives application software access to the open systems interconnection environment. This layer contains management functions.
6. Presentation Layer: Establishes ‘common ground’ for applications to communicate with the other layers. This layer formats data and provides syntaxes for applications.
5. Session Layer: Provides a service that controls the communication between applications running on end nodes.
4. Transport Layer: Provides a means of communicating between end nodes. Its functions may include sequencing, error detection and optimisation of communication.
3. Network Layer: Deals with communication of data on a network. Here, network information is gathered, including addresses, routing information, etc.
2. Data Link Layer: Deals with maintaining and optimising the actual connection to the network. It also performs error checking on the communicated data.
1. Physical Layer: Deals with the physical connection between nodes in a network. It tends to deal with a ‘bit stream’ rather than any single piece of information.

Advantages of Packet Switching

Packet switching offers several advantages over circuit switching:

• It is more robust than circuit-switched systems and more suitable for transmitting binary data (which is what packet switching was originally designed for).

• With the advances of technology nowadays, it is possible to encode voice and send it in packetised format with minimal problems. In the past it was generally accepted that delay-sensitive data (for example, voice) had to be handled through a circuit-switched network.

• A damaged packet can be resent; because only the damaged part is retransmitted, there is no need to resend an entire file.

• It allows multiplexing: different users, or different processes from the same user, can communicate at the same time.

• Might be more economical than using private lines if the amount of traffic between terminals does not warrant a dedicated circuit.

• Might be more economical than dialed data when the data communication sessions are shorter than a telephone call minimum chargeable time unit.

• Destination information is contained in each packet, so numerous messages can be sent very quickly to many different destinations. The rate depends on how fast the data terminal equipment (DTE) can transmit the packets.

• Computers at each node allow dynamic data routing. This inherent intelligence in the network picks the best possible route for a packet to take through the network at any particular time. So the throughput and efficiency might be maximized.

• The packet network's inherent intelligence also allows graceful degradation of the network in the event of a node or path (link) failure. Automatic rerouting of packets around the failed area causes more congestion in the surrounding areas, but the overall system remains operable.

• The ability to emulate a circuit-switched network. For example, X.25 and ATM use a method of communication called a virtual circuit. Virtual circuits perform in much the same way as circuit-switched circuits, but with one fundamental difference: virtual circuits allow other virtual circuits to occupy the same link. This means that communication can occur concurrently along a link between many nodes (rather than between only two nodes, which is what circuit switching provides).
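The billing difference noted earlier (a fee per unit of connection time versus a fee per unit of information) can be made concrete with a toy calculation. The rates below are invented for illustration, not real tariffs:

```python
def circuit_cost_cents(minutes, cents_per_minute):
    """Circuit switching: pay per unit of connection time,
    even when the reserved circuit is sitting idle."""
    return minutes * cents_per_minute

def packet_cost_cents(kilobytes, cents_per_kb):
    """Packet switching: pay per unit of information actually transferred."""
    return kilobytes * cents_per_kb

# A 30-minute session that transfers only 200 KB of bursty traffic:
print(circuit_cost_cents(30, 10))   # -> 300 (billed for the whole session)
print(packet_cost_cents(200, 1))    # -> 200 (billed only for data sent)
```

For short, bursty sessions the packet-switched tariff wins; for a sustained full-rate transfer the comparison can go the other way, which is exactly the trade-off the bullets above describe.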

Introduction to Circuit Switching

Circuit switching is most commonly used in dedicated telecommunication systems. It is a solid concept that developers use for communication requiring a reliable, consistent connection established between one node and another. Many people have the misconception that circuit switching is used only to establish analog voice circuits, but in fact circuit switching provides more than just a consistent connection across the network.

A simple analogy for circuit switching is reserving a lane on a highway. With circuit switching, we reserve the lane so that only our car (the data) can use it, reaching the destination quickly and consistently, without delay; no traffic jam can slow the car down. With packet switching, on the other hand, no lane is reserved, so all the packets (cars) travel through the shared highway (connection), which can produce congestion.

The telephony system is closely related to circuit switching. To make a connection, a circuit must be set up between two nodes, the sender and the receiver. Historically, a subscriber would ask an operator to connect them to another subscriber to make a call; in this situation, an electrical connection is established between the two subscribers. As a result, the connection is reserved and cannot be used by any other user. In the meantime, the copper wire carrying that call cannot carry any other calls made at the same time.

When the nodes are physically connected by an electrical circuit, the switching network can generate a communication route between the nodes before the users communicate. There are three switching methods: store-and-forward, cut-through, and fragment-free switching.

Advantages and Pitfalls of Circuit Switching

Both the advantages and pitfalls should be taken into consideration when talking about Circuit Switching. This is because although it may be useful in certain scenarios, it may not be convenient in others. For example, some may prefer to use Packet Switching over Circuit Switching, and some may prefer the opposite.

Below are the lists of some of the advantages in Circuit Switching:

1. Guaranteed bandwidth

The communication performance of Circuit Switching is predictable, unlike "best-effort" delivery, which offers no real guarantees.

2. Simple abstraction

Circuit Switching is a reliable communication channel between hosts and one would not have to worry about lost or out-of-order packets.

3. Simple forwarding

The forwarding in Circuit Switching is based on time slot or frequency and one would not need to inspect a packet header.

4. Low per-packet overhead

There will be no IP (and TCP/UDP) header on each packet in Circuit Switching.

Despite the advantages listed above, Circuit Switching has pitfalls or disadvantages, and these led to the invention of Packet Switching to overcome them. Below are some of the pitfalls of Circuit Switching:

1. Wasted bandwidth

Since most traffic occurs in bursts, Circuit Switching can leave the connection idle during silent periods. It is unable to achieve the gains of statistical multiplexing, which relies on identifying, predicting and allocating more capacity to the generally more active paths.

2. Blocked connections

When resources are insufficient, the connection attempt is refused; circuit switching is unable to offer even "okay" service to everybody.

3. Connection set-up delay

There is no communication until the connection is set up, so circuit switching cannot avoid extra latency for small data transfers.

4. Network state

Network nodes in circuit switching must store per-connection information; per-connection storage and state cannot be avoided.
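The wasted-bandwidth pitfall can be made concrete with a little arithmetic. The sketch below uses invented numbers (a 10 Mbps link, users bursting at 1 Mbps but active only 10% of the time) to show why reserving a circuit per user wastes capacity that statistical multiplexing could reclaim.

```python
# Illustrative comparison: circuit switching vs. statistical multiplexing
# for bursty senders. All numbers are invented for this example.

LINK_CAPACITY   = 10_000_000   # 10 Mbps link
PEAK_RATE       = 1_000_000    # each user bursts at 1 Mbps
ACTIVE_FRACTION = 0.1          # each user is active 10% of the time

# Circuit switching must reserve the peak rate for every user:
circuit_users = LINK_CAPACITY // PEAK_RATE          # only 10 users fit
avg_utilisation = circuit_users * PEAK_RATE * ACTIVE_FRACTION / LINK_CAPACITY

# Statistical multiplexing provisions for the *average* load instead:
stat_mux_users = int(LINK_CAPACITY / (PEAK_RATE * ACTIVE_FRACTION))

print(circuit_users, stat_mux_users, avg_utilisation)  # 10 100 0.1
```

With circuits, the link sits 90% idle on average; sharing the same link statistically supports ten times as many users.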

Circuit Switching vs Packet Switching

There are a few aspects to consider when comparing circuit switching and packet switching. The most common points of comparison are:

1. History of the techniques

Circuit switching is a technique developed long before packet switching came into view. Circuit switching dedicates a point-to-point connection for the duration of a call, and this method is used in the Public Switched Telephone Network. Packet switching, however, is a more modern way of sending data: it breaks the data down into pieces and transmits them based on the destination address in each packet, hence the name packet switching. Today, Voice over Internet Protocol (VoIP) is commonly associated with packet switching technology, which means voice or multimedia communication is sent over the Internet through Internet Protocol networks.

2. Cost

When circuit switching is used, the end-to-end communication path is reserved for the user. For example, when a call is made, the communication path (the telephone line) is considered 'rented', and charges apply for however long the user occupies the path. The cost thus falls on the single user who made the call.

However, with VoIP, many people can use the same communication path (there is no dedicated circuit to any user in particular e.g. the internet line) and the cost is shared.

3. Reliability

Most people would expect modern technology to fare better than older technology. On the contrary, in this case circuit switching is deemed the more reliable method of transferring data.

Since circuit switching reserves the whole communication path, it can almost be said that the information gets across with no loss. This makes circuit switching preferable for real-time communication such as conference calls and video streams.

Meanwhile, because packet switching breaks the information into small packets and transmits them over a congested medium such as the Internet, some packets are bound to be lost along the way. However, various protocols are built to prevent large data losses in a packet-switched network, so even though real-time data streams may pose a problem, packet-switched networks are still relatively reliable.


In the modern, fast-paced world we look for efficiency, low cost, and reliability, and packet-switched networks fulfill most of these criteria. It may only be a matter of time before circuit switching becomes a thing of the past.
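The packet-switching side of this comparison can be sketched in a few lines of Python: a message is broken into packets that each carry a destination address and sequence number, may arrive out of order, and are reassembled at the receiver. The field names and address are invented for illustration.

```python
# Toy sketch of packet switching: split a message into addressed packets,
# shuffle them to simulate out-of-order arrival, then reassemble by sequence.
import random

def packetize(message: bytes, dest: str, size: int = 8):
    """Split a message into packets, each carrying a small header."""
    return [{"dest": dest, "seq": i, "payload": message[i:i + size]}
            for i in range(0, len(message), size)]

def reassemble(packets):
    """Reorder by sequence number and join the payloads."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

msg = b"packets can take different paths"
packets = packetize(msg, dest="192.0.2.1")
random.shuffle(packets)           # packets may arrive in any order
assert reassemble(packets) == msg
```

This is exactly the bookkeeping that circuit switching avoids, at the price of a header on every packet.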

Chapter 3 : Internet Edge

Author/Editor: Loh Boon Pin/Lim Weng Kai/Lee Ming Yue/Lim Wai Chun/Lee Lih Horng

Internet Edge Introduction

What is the Internet edge? The Internet edge is the edge of a network: the electronic components we use in our daily lives, including computers, personal digital assistants (PDAs), cellphones, smartphones, tablets, and other devices. The computers and other electronic devices that connect to the Internet are often known as end systems because they are situated at the edge of the Internet.

The end-to-end principle is a classic design principle of computer networking that states that application-specific functions ought to reside in the end hosts of a network rather than in intermediary nodes, provided they can be implemented "completely and correctly" in the end hosts. First explicitly articulated in a 1981 conference paper by Saltzer, Reed, and Clark, it has inspired and informed many subsequent debates on the proper distribution of functions in the Internet and communication networks more generally.

The Internet's end systems include desktop computers (e.g., desktop personal computers (PCs), Macs, and Linux boxes), servers (e.g., Web and email servers), and mobile computers (e.g., portable computers, PDAs, and phones with wireless Internet connections). With the accelerating rate at which technology is advancing today, gaming consoles (such as the PlayStation 2/3 and XBOX 360) as well as digital cameras are also connected to the Internet as end systems. They allow users to interact directly with the Internet to send and receive data. Other end systems are not accessed directly by users but do facilitate Internet communications; these include servers for data such as email and web pages. Users connect with such end systems through their own computers, which contact the server to access and transfer information. So while the end user always interacts with an end system, the Internet's end systems also include computers with which the end user does not interact directly.

End systems are also referred to as hosts because they host application programs such as a Web browser. Hosts or end systems are usually further categorized into two groups: clients and servers. Clients tend to be desktop and mobile PCs, PDAs, and so on, whereas servers tend to be more powerful machines that store and distribute Web pages, stream videos, relay emails, and so on.

Advantages of intelligence at the edge

Nowadays, better throughput performance requires intelligence at the network's edge. Several advantages show the importance of intelligence at the network's edge.

The wireless controller becomes a bottleneck for throughput and security enforcement as throughput needs rise. With 802.11n access points, distributed intelligence allows more of the on-site data flow to be routed internally at the edge of the network, without sending that data to the wireless controller and back. The "spokes" gain the ability to communicate directly with one another along an optimal path, even prioritizing more critical data while providing full security and mobility services.

Furthermore, remote troubleshooting and advanced self-healing are greatly facilitated by the access points that distribute intelligence throughout the network. During outages, access points can also serve a bridging function to reduce latencies. Since security is as important to the organization as solid network coverage and availability, it is important to make sure that any distributed architecture has enough application awareness to be self-healing without dropping VoIP calls. It's also important to be able to deliver the same firewall capabilities as hub-and-spoke to avoid compromising quality of service (QoS). This helps maintain network services throughout outages, ensuring the organization and its assets continue to benefit from continued local QoS prioritization, authentication, security policies and direct routing as well as backhaul failover to 3G.

With distributed traffic management, a single controller can oversee up to eight times the number of access points. This frees up controllers to focus on large scale network and policy management as well as other services, resulting in a more efficient architecture. Access points with built-in sensors for security and troubleshooting can also eliminate extra installation and power costs that would come with a separate sensor network.

Furthermore, access edge switches are tasked with providing intelligence at the edges of your network. The modular design of these switches enables network administrators to first classify traffic, and then decide what action to take with that classified traffic. Traffic can be classified based on criteria such as: source/destination port, source/destination MAC address, Virtual Local Area Network (VLAN) ID, protocol, IP source/destination address, IP source/destination network, Type of Service (ToS)/Differentiated Services Code Point (DSCP), TCP/UDP source/destination port, or TCP flags.

Classifiers can also be associated with a QoS policy to allow per-port or per-flow traffic shaping. This lets classified traffic be shaped in 64 kbps increments based on individual users or applications. This is especially popular in university housing scenarios, where traffic shaping on a per-port basis allows differentiated service and/or better control of bandwidth requirements at the edge of the network.
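A common way to implement this kind of per-flow shaping is a token bucket. The following is a minimal sketch, with the rate expressed in 64 kbps increments as described above; the class name and all parameter values are invented for illustration.

```python
# Minimal token-bucket traffic shaper: the rate is configured in 64 kbps
# increments; a packet is sent only if enough "token bytes" are available.

class TokenBucket:
    def __init__(self, increments_64k: int, burst_bytes: int):
        self.rate = increments_64k * 64_000 / 8   # bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes                 # start with a full bucket

    def tick(self, seconds: float):
        """Refill tokens as time passes, capped at the burst size."""
        self.tokens = min(self.burst, self.tokens + self.rate * seconds)

    def try_send(self, packet_bytes: int) -> bool:
        """Send only if enough tokens remain; otherwise the packet is shaped."""
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

bucket = TokenBucket(increments_64k=2, burst_bytes=2000)   # a 128 kbps flow
assert bucket.try_send(1500)        # fits in the initial burst
assert not bucket.try_send(1500)    # only 500 token-bytes left, shaped
bucket.tick(0.1)                    # 0.1 s at 16,000 B/s refills the bucket
assert bucket.try_send(1500)        # now it goes through
```

A per-port shaper of this kind is what lets an operator cap each dormitory port at, say, 2 increments (128 kbps) while leaving other ports unshaped.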

Moreover, greater intelligence at the edge of the network can also make IT budgets go further, offering advantages in both capital and operational expenditures. Adding 802.11n access points to the network can be less expensive than adding more controllers, and can actually result in significant savings since access points with greater intelligence can reduce the number of controllers needed.

In short, intelligence at the network edge is clearly important: it provides better throughput to users and clients without compromising security or quality of service or driving up cost.

Shared Access

The access network refers to the physical links that connect an end system to the first router (the edge router). The local telco refers to the local wired telephone infrastructure provided by a local telephone provider, such as Telekom Malaysia and Digi in Malaysia. The Central Office (CO) is a building where the telco switch is located; each residence or customer links to its nearest telco switch.


Dial-up Internet access is set up by attaching the PC to a dial-up modem, which is in turn attached to the home's analog phone line. Users dial an ISP's phone number and make a traditional phone connection with the Internet Service Provider's (ISP's) modem. Dial-up Internet access has two major drawbacks: it is very slow, with a maximum rate of 56 kbps, and at any given instant the user must choose between surfing the Web and receiving or making an ordinary phone call over the phone line.


Digital Subscriber Line (DSL) internet access is obtained from the wired local phone access (i.e., the local telco). A telephone call and an Internet connection can share the DSL link at the same time because the single DSL link is separated into 3 channels (high-speed downstream, medium-speed upstream, ordinary two-way telephone). DSL has two major advantages over dial-up internet access. It can transmit and receive data at much higher rates. Besides that, users can simultaneously talk on the phone and access the Internet.


Cable Internet access makes use of the cable television company's existing cable television infrastructure. It is known as a hybrid fiber-coaxial (HFC) access network because both fiber and coaxial cable are used in the system. Cable Internet access is a shared broadcast medium, so the transmission rate drops when several users download large files simultaneously.


Some local telcos provide high-speed Internet access over optical fiber, known as Fiber-To-The-Home (FTTH) or Fiber-To-The-Premises (FTTP). Direct fiber, the simplest optical distribution network, runs one optical fiber link from the CO to each home, giving the user high bandwidth. In practice, each fiber leaving the central office is shared by many homes and is split into individual customer-specific fibers only when it gets relatively close to the homes.


Ethernet is a local area network (LAN) used to connect an end system to the edge router in corporate and university campuses.

Wide-area wireless access

Wide-area wireless access networks enable users to roam the Internet on the beach, on a bus, or in a car. For such wide-area access, a base station over the cellular phone infrastructure serves users within a few tens of kilometers.


WiFi is wireless LAN access based on IEEE 802.11 technology, installed almost everywhere: universities, business offices, cafés, airports, homes, and even airplanes. Wireless users must be within a few tens of meters of an access point, which in turn is connected to the wired Internet.


WiMAX, also known as IEEE 802.16, operates independently of the cellular network and provides speeds of 5 to 10 Mbps or higher over distances of tens of kilometers.

Last mile evolution and problems

The 'last mile' refers to the final connection reaching the user, the link that enables the user to actually connect their computer to the Internet. In 1989, the first commercial dial-up ISP, "The World", started business in the United States; at the time, the only means of Internet access for the general public was dial-up over a telephone line. In the early 2000s this was followed by ADSL connections, which allowed users to surf the Internet at higher speed. FTTH is the latest connection service provided by local ISPs. What are the differences between dial-up, ADSL, FTTH, and the other types of Internet connection currently available?

Wired connections


From the word "dial-up", users know they need to dial a number to make an Internet connection to the outside world. Modern dial-up modems normally transfer at 56 kbps unless the ISP uses compression techniques to exceed that limit. During the dial-up process we can hear a "handshake" noise; it occurs while the modem establishes the connection and exchanges information with the remote server. Dial-up connections have several problems: voice service on the phone line is disturbed during a connection; only one computer per phone line can connect to the Internet; and the connection is slow compared to the newer ADSL connections.


Asymmetric Digital Subscriber Line (ADSL) is a type of Digital Subscriber Line (DSL) technology. The term "asymmetric" means the upload and download bands have unequal bandwidth. Normally an ISP allocates more bandwidth to the download band than to the upload band, because clients usually browse web pages, download files, read email, and so on, which require the server to send data to the client. ADSL or DSL connection speeds range from 256 kbps to 20 Mbps. ADSL also uses the same infrastructure as dial-up: the unused bandwidth in the twisted-pair copper phone line can be used for data transmission (Internet) while voice or fax services run simultaneously on the same line. There are a few problems associated with ADSL. First, not every phone line is equipped for ADSL service, and it may not be available in some rural areas. Second, it requires a static IP address to send or receive data; this contributes to the exhaustion of IPv4 addresses, and static IPs carry a higher Internet security risk. Finally, ADSL is being replaced by optical fiber connections to overcome its bandwidth limitations.


Fiber to the Home (FTTH) has become popular in city areas, where it provides high-speed Internet access, IPTV, and phone call services. Some ISPs in Hong Kong, the US, and South Korea provide 1 Gbps connections on their networks. FTTH connections use optical fiber to replace the usual metal local loop for last-mile telecommunications. FTTH lets users experience the same upload and download bandwidths, which makes it well suited to certain network applications, especially P2P file sharing. To date, optical fiber remains the fastest point-to-point Internet connection, and will likely stay so unless a technology is discovered that can beat the speed of light.

Wireless connections

3G & Wimax 4G

3rd generation mobile telecommunication (3G) refers to mobile telecommunication services that fulfill the International Mobile Telecommunications-2000 (IMT-2000) standard. 3G and 4G WiMAX are evolutions of the 2G GSM network. Below are a few examples, from 2G networks through 4G networks:

2G – fully digital 2G networks replaced analog 1G networks in the early 1990s.

2.5G – known as General Packet Radio Service (GPRS), used for data transfer. Data transfer speeds up to 114 kbps.

2.75G – known as Enhanced Data GSM Environment (EDGE), used for data transfer such as picture sharing or Internet browsing. Data transfer speeds up to 384 kbps.

3G – speeds up to 2 Mbps for stationary or walking users and 384 kbps in a moving vehicle. Provides better security than 2G networks. Used for mobile TV, videoconferencing, and video on demand.

3.5G – known as High Speed Downlink Packet Access (HSDPA), an evolution of mobile telephone data transmission. Speeds up to 7.2 Mbps.

4G – known as Worldwide Interoperability for Microwave Access (WiMAX). Offers speeds of up to 75 Mbps, but in practice the bandwidth must be split among the users.
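As a rough illustration of what these rates mean in practice, the snippet below computes the time to download a 3 MB file at each generation's nominal peak rate. Real-world throughput is lower and shared among users, so these are best-case figures.

```python
# Illustrative arithmetic: download time for a 3 MB file at the nominal
# peak rates listed above.

FILE_BITS = 3 * 8_000_000   # 3 MB expressed in bits

rates_kbps = {
    "2.5G (GPRS)":  114,
    "2.75G (EDGE)": 384,
    "3G":           2_000,
    "3.5G (HSDPA)": 7_200,
    "4G (WiMAX)":   75_000,
}

for name, kbps in rates_kbps.items():
    seconds = FILE_BITS / (kbps * 1000)
    print(f"{name}: {seconds:.1f} s")
```

The same file that takes over three minutes on GPRS downloads in well under a second at the 4G peak rate.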

The biggest problem with the wireless channel is that bandwidth must be shared among the users, leading to competition for a scarce resource. More simultaneous users mean a smaller share of the bandwidth pie for everyone, and thus poorer service. Local ISPs usually try to contain the problem with fair-usage policies, which set (and thereby cap) a quota of data usage for each user. Such measures are unpopular, and some ISPs are trying to provide (and advertise) solutions without such caps.

Conclusion/Summary of Internet Edge

The Internet edge is the edge of a network. Desktop computers, servers, and mobile computers are among the Internet's end systems because they sit at the edge of the network. End systems run (host) application programs such as Web browsers, and are therefore also referred to as hosts. Hosts are divided into two categories: clients and servers. The client-server model of Internet applications is a distributed model: a client program runs on one end system (one computer) while the server program runs on another end system (another computer). Client components initiate requests for services, which server components provide to one or many clients.

There are several advantages to intelligence at the network's edge. With 802.11n access points, more of the on-site data flow stays at the edge, and more critical data can be prioritized while full security and mobility services are provided. The access points also greatly facilitate remote troubleshooting and advanced self-healing, and during outages they can serve a bridging function to reduce latencies. Furthermore, access edge switches enable network administrators first to classify traffic and then decide what action to take with that classified traffic; classified traffic can be shaped in 64 kbps increments based on individual users or applications. Traffic shaping on a per-port basis allows differentiated service and/or better control of bandwidth requirements at the edge of the network. Additionally, adding 802.11n access points to the network is less expensive than adding more controllers, and access points with greater intelligence can reduce the number of controllers needed. In short, intelligence at the network edge provides better throughput to users and clients without compromising security or quality of service or driving up cost.

The access network refers to the physical links that connect an end system to the first router (the edge router). The local telco refers to the local wired telephone infrastructure provided by a local telephone provider. The Central Office (CO) is a building where the telco switch is located; each residence or customer links to its nearest telco switch. There are several ways to share the access network: dial-up, which uses the phone line to connect to the network; DSL, which is obtained over the wired local phone access; cable, which uses both fiber and coaxial cable; FTTH, which provides high-speed Internet access over an optical distribution network; and Ethernet, the LAN used to connect an end system to the edge router. Other ways to access the network include WiFi, wireless LAN access based on IEEE 802.11 technology; wide-area wireless access networks, which enable users to roam the Internet outdoors; and WiMAX, which operates independently of the cellular network at higher speeds and distances.

In a dial-up connection, voice service on the phone line is disturbed; only one computer per phone line can connect to the Internet; and the connection speed is low compared to ADSL. For ADSL, not every phone line is equipped for the service, it may not be available in some rural areas, and it requires a static IP address to send or receive data. With FTTH, users experience the same upload and download bandwidths; it is the fastest point-to-point Internet connection, using optical fiber. The problem with the wireless channel is that bandwidth must be shared among the users: speed drops when more users use the network at the same time.

References: 1) "Pushing Intelligence to the Edge", Dec 1, 2005, by James Gompers; "Living at the Edge" by M. Thiyagarajan; "Intelligence at the Edge" by Allied Telesyn. 2) James F. Kurose and Keith W. Ross, "Computer Networking: A Top-Down Approach", 1.2 The Internet Edge, 5th ed., 2009, pp. 9–12.

Chapter 4 : Internet Core

Edited by Khairul Farhana, Kwa Meng Kwang, Lau Kien Lok, Kelvin Tan Kuan Ming, and Lee Kok Pun

Introduction to Internet Core

The Internet is a global network connecting millions of computers; more than 100 countries are linked into exchanges of data, news, and opinions. Unlike online services, which are centrally controlled, the Internet is decentralized by design. Each Internet computer, called a host, is independent: its operators can choose which Internet services to use and which local services to make available to the global Internet community. Remarkably, this anarchy by design works exceedingly well. There are a variety of ways to access the Internet, including through a commercial Internet Service Provider (ISP).

The terms core network and backbone network typically refer to the high-capacity communication facilities that connect primary nodes. The core/backbone network provides the path for the exchange of information between different sub-networks. In the enterprise world the term backbone is more common, while service providers more often use the term core network.

A core/backbone network usually has a mesh topology that provides any-to-any connections among devices on the network. Many of the world's main service providers have their own core/backbone networks, which are interconnected. Some large enterprises also have their own core/backbone networks, typically connected to the public networks.

The devices and facilities in core/backbone networks are switches and routers. The trend is to push intelligence and decision-making into access and edge devices and keep the core devices dumb and fast. As a result, switches are more and more often used in core/backbone network facilities. The technologies used in core and backbone facilities are data-link-layer and network-layer technologies such as SONET, DWDM, ATM, and IP. For enterprise backbone networks, Gigabit Ethernet or 10 Gigabit Ethernet technologies are also often used.

Advantages of a dummy core

A dummy core is a hardware model of an Internet core system. Its main role is to produce and accept packets; in this respect a dummy core closely resembles a real source.

There are a few advantages of dummy core as listed below:

  • Ample Infrastructure

A dummy core is considered very flexible because it can be modified into any form as long as it works in the system. For example, if the dummy core suffers congestion, extra connections or switching can be added to reduce packet congestion. Since dummy cores are low-cost and easy to set up, they can form reliable communication links between end users and servers. Moreover, dummy cores can be deployed in very large-scale network communications.

  • Simple Specification

Because a dummy core's design is simple (it just generates and receives packets), it has none of the extra features available in an intelligent core. The packets sent through the dummy core contain addresses, and the dummy core is only required to deliver the packets to their respective destinations. The dummy core is therefore easy to maintain.

  • User privilege

A dummy core gives users full control over their interaction. For example, if two users want to change their interaction, say by bringing a third party into it, they can simply make the change themselves, since they are in full control. This is more convenient for users. As another example, if an IP-connected user wants a special three-way connectivity service, he does not need to order the service from the networking company; he can simply install a program that sends packets to two different destinations and receives packets from both.

The dummy core is based solely on the KISS (Keep It Simple, Stupid) principle. It is like water in a river: the water just flows and follows the stream, much as bits move to where they should go without any added features or intelligence. Because of the advantages over an intelligent core described above, it is still in use today.

Packet-switched Router

A packet switch is a node in a network which uses the packet switching paradigm for data communication. A packet-switched network is an interconnected set of networks that are joined by routers or packet-switched routers.

In networks, packets are switched onto various network segments by routers located at various points throughout the network. Routers are specialized computers that forward packets along the best paths. When packets are sent through the network, they head off in different directions, each taking the least busy path at that instant. Packets contain header information that includes a destination address; routers read the address and forward packets along the most appropriate path to that destination. During the course of its journey, a packet will travel through many routers. Routers run routing protocols to discover neighbouring routers and the networks attached to them. These protocols let routers exchange information about the network topology as it changes due to new or failed links.
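The forwarding decision described above can be sketched as a longest-prefix-match lookup in a routing table. The toy example below (prefixes and next-hop names are invented) shows the idea; real routers use specialized data structures such as tries and TCAMs to do this at line rate.

```python
# Toy sketch of a router's forwarding step: look up the destination address
# in a table of CIDR prefixes and pick the most specific match.
import ipaddress

routing_table = {
    ipaddress.ip_network("0.0.0.0/0"):      "default-gw",   # default route
    ipaddress.ip_network("192.0.2.0/24"):   "router-A",
    ipaddress.ip_network("192.0.2.128/25"): "router-B",     # more specific
}

def forward(dest: str) -> str:
    """Return the next hop with the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

assert forward("192.0.2.200") == "router-B"   # the /25 wins over the /24
assert forward("192.0.2.5") == "router-A"
assert forward("198.51.100.1") == "default-gw"
```

When a routing protocol learns of a new or failed link, it simply updates entries in this table; the lookup logic stays the same.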


In a four-stage router pipeline, buffer write (BW) is the first stage. Route computation (RC) occurs in the second stage. Virtual channel allocation (VA) and switch allocation (SA) are performed in the third stage. In the fourth stage the flit traverses the switch (ST). Each pipeline stage takes one cycle, followed by one cycle for link traversal (LT) to the next router. When a new flit arrives at the router, it is decoded and buffered according to its specified input virtual channel (BW). Route computation (RC) is performed to determine the appropriate output port. In the virtual channel allocation stage (VA), the flit arbitrates for a virtual channel based on the output port determined in the previous stage. Once a virtual channel has been successfully allocated, the flit proceeds to the fourth stage, where it arbitrates for access to the switch (SA) based on its input-output port pair. Once the switch has been allocated to the flit, it proceeds to the switch traversal stage (ST) and crosses the router crossbar. The link traversal stage (LT) occurs outside the router and carries the flit to the next node on its path. Each packet is broken down into several flits: the header flit performs route computation and virtual channel allocation, while the body and tail flits reuse that computation and allocation.

Virtual Circuit Router

Virtual circuit is one of the switching techniques used to handle packets, besides datagram switching. A virtual circuit is actually a hybrid of packets and circuits: it tries to obtain the best of both worlds, taking the statistical multiplexing of packet switching on one hand and the traffic management and quality of service of circuit switching on the other. Despite their name, virtual circuits are essentially a connection-oriented version of packet switching; they forward information as packets (sometimes called cells), but keep connection state associated with each flow. In the virtual circuit approach, a preplanned route or connection is established between source and destination before any packets are transferred. Before the exchange of data can happen, a call set-up is required, in which a call-request packet is sent by the source and a call-accept packet is returned by the destination (a handshake). Once the route is established, all the packets between a pair of communicating parties follow this same path through the network and therefore arrive in sequence. Each packet contains a virtual circuit identifier or number instead of a destination address, so no routing decision is required for each packet, which means less router memory and bandwidth are required. In a virtual circuit, the router simply uses the circuit number to index into a table and find out where the packet goes.
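That per-packet forwarding step can be sketched in a few lines: the switch state is just a table, filled in at call set-up, mapping the incoming (port, VC number) pair to an outgoing pair. All table values below are invented for illustration.

```python
# Minimal sketch of virtual-circuit forwarding: a single table lookup per
# cell, with no destination-address parsing. Entries are installed at
# call set-up and removed at teardown.

# (in_port, in_vci) -> (out_port, out_vci)
vc_table = {
    (1, 12): (3, 22),
    (2, 63): (1, 18),
    (3, 7):  (2, 17),
}

def switch(in_port: int, in_vci: int, payload: bytes):
    """Forward a cell: index the table and rewrite the VC identifier."""
    out_port, out_vci = vc_table[(in_port, in_vci)]
    return out_port, out_vci, payload

assert switch(1, 12, b"data") == (3, 22, b"data")
```

Note that the VC identifier is rewritten at each hop, which is why it only needs local significance.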

There was a race between IP (Internet Protocol) and ATM (Asynchronous Transfer Mode) to dominate data networks in the early 1990s. In the end, IP routers won out over ATM switches, partly because the former were simpler, and thus faster to reach the market and easier to configure. MPLS (Multiprotocol Label Switching), by contrast, works below IP rather than competing with it, and is an attempt to do simple traffic engineering in the Internet. MPLS has only recently started getting sizeable deployment with some backbone carriers.

Due to the benefits of virtual circuit packet switching and the growing popularity of IP, many Internet service providers send IP packets over virtual circuits. Virtual circuit packet switching technologies that have been used in the Internet backbones are ATM and, more recently, MPLS.

ATM is a virtual circuit switching technology that was standardized starting in the late 1980s. It was an important technology of the 1980s and early 1990s, embraced by the telecommunications industry. ATM is a specific asynchronous, packet-oriented information multiplexing and switching transfer model standard, originally devised for digital voice and video transmission. ATM uses fixed-length payloads of 48 bytes plus a 5-byte header, yielding 53-byte ATM cells. Among the 40 header bits of a cell, 28 are reserved to identify the virtual circuit to which the cell belongs; the corresponding fields are called VPI/VCI (Virtual Path Identifier/Virtual Circuit Identifier). The VPI/VCI fields are updated at each switch.

The VPI is carried in the cell's header; a virtual channel cannot be established before its virtual path.

VCIs only have local significance: different virtual paths may reuse VCIs (but VCIs on the same path must be unique).

A first issue that arises when sending IP packets over ATM virtual circuits is the need to define an encapsulation of IP packets in ATM cells, that is, how to put IP data inside ATM cells. Encapsulation is performed by an ATM adaptation layer (AAL). Moreover, most IP packets are too large to fit in a 53-byte ATM cell: an IP header alone is at least 20 bytes long, and hosts cannot be coerced into sending IP packets of at most 53 bytes. Therefore, IP packets must be cut into smaller pieces in a process called segmentation before they can be encapsulated into ATM cells. The last router on the path of the IP packets must reassemble the fragments to reconstitute the original IP packets. Segmentation and Reassembly (SAR) is a complex and time-consuming process.
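The segmentation step can be illustrated as follows: an IP packet is cut into 48-byte payloads, each prefixed with a 5-byte header carrying the circuit identifier, to form 53-byte cells. The header layout below is deliberately simplified and is not the real ATM bit format.

```python
# Illustrative segmentation (the "S" in SAR): cut an IP packet into
# 48-byte payloads and prepend a simplified 5-byte header to each,
# yielding 53-byte cells.

CELL_PAYLOAD = 48
HEADER_LEN = 5

def segment(ip_packet: bytes, vpi_vci: int):
    cells = []
    for i in range(0, len(ip_packet), CELL_PAYLOAD):
        chunk = ip_packet[i:i + CELL_PAYLOAD]
        chunk = chunk.ljust(CELL_PAYLOAD, b"\x00")     # pad the last cell
        header = vpi_vci.to_bytes(4, "big") + b"\x00"  # simplified header
        cells.append(header + chunk)
    return cells

packet = bytes(120)                  # a 120-byte IP packet
cells = segment(packet, vpi_vci=42)
assert len(cells) == 3               # ceil(120 / 48) = 3 cells
assert all(len(c) == 53 for c in cells)
```

Reassembly at the far end must undo this, tracking partial packets per circuit, which is part of why SAR is costly.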

Latest Router Technology

A number of technologies have been developed to build better routers.

  • Control plane

The control plane maintains the routing table, which records which route should be used to forward each data packet. Entries can be learned dynamically or preconfigured as static routes.

  • Forwarding plane

The forwarding plane routes incoming and outgoing data packets according to their headers, using the routing information recorded by the control plane.

  • Flow control

Flow control deals with receiving and transmitting packets: managing the send and receive buffers, handling congestion, and enforcing fairness.

  • Error instructions

The router identifies errors in received packets and generates the necessary ICMP error messages.

  • Programmable ASIC

Programmable ASICs focus on implementing many functions on a single chip, with simple design, high reliability and low power consumption, while guaranteeing high performance.

  • Routing protocol

Common routing protocols are RIP/RIPv2, OSPF, IS-IS and BGP4. Their aim is to improve router performance in areas such as backplane capacity, throughput, packet loss rate, forwarding delay, routing table capacity and reliability.

  • Router queue management

Queue management is important so that a router can handle many data packets at the same time by processing them one by one. Several algorithms are available: packet scheduling algorithms based on time scales (start and finish time stamps), rotation scheduling algorithms in which each queue is given equal time (round-robin), and priority-based algorithms that use predetermined or user-specified priorities.

  • Router security

Router security combines the router with a firewall to protect data packets. For example, the router transfers data provided by the network or its users only if the access router confirms that the data packets are secure. Another example is restricting a router from reaching the entire network, so that privacy is ensured.

  • IPv6

IPv6 expands the network address from 32 bits (IPv4) to 128 bits, so that it can provide enough unique IP addresses for every networked device.

  • Cut-through switching

In cut-through switching, the switch starts forwarding a frame before the whole frame has arrived, normally as soon as the destination address has been processed. This speeds up forwarding but decreases reliability.

  • Store and forward

In store-and-forward switching, the switch starts forwarding a frame only when the whole frame has arrived, so the router has time to check the integrity of the data packets. It has high reliability and is therefore suitable for links with high error rates.

  • Adaptive switching

With adaptive switching, the router chooses between cut-through and store-and-forward based on current conditions. It normally operates in cut-through mode, but if the error rate suddenly jumps, it switches to store-and-forward mode.
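To make the queue-management algorithms above concrete, here is a minimal Python sketch of two of them: a priority-based scheduler and round-robin rotation scheduling. The class and function names are our own, and real router implementations are far more involved.

```python
import heapq
from collections import deque

class PriorityScheduler:
    """Toy priority-based packet scheduler: lower number = higher priority.
    A sequence counter keeps FIFO order among equal-priority packets."""
    def __init__(self):
        self._heap, self._seq = [], 0

    def enqueue(self, packet, priority):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

def round_robin(queues):
    """Serve one packet from each non-empty per-flow queue in turn."""
    sent = []
    pending = deque(queues)
    while pending:
        q = pending.popleft()
        if q:
            sent.append(q.popleft())
            pending.append(q)   # flow still has packets: keep rotating
    return sent

ps = PriorityScheduler()
ps.enqueue("bulk", 2); ps.enqueue("voice", 0); ps.enqueue("video", 1)
first = ps.dequeue()            # "voice" leaves first

flows = [deque(["a1", "a2"]), deque(["b1"])]
order = round_robin(flows)      # ["a1", "b1", "a2"]
```

The priority scheduler always drains urgent traffic first, while round-robin gives every flow an equal turn, which is the fairness property mentioned under flow control.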

Conclusions

Most of the benefits of Internet networking fall into two generic categories: connectivity and sharing. Networks allow computers, and hence their users, to be connected together. They also allow for the easy sharing of information and resources, and for cooperation between devices in other ways. Since modern business depends so much on the intelligent flow and management of information, this explains why networking is so valuable.

References

Chapter 5 : Client-Server

Client-Server Paradigm/Introduction

The client-server model is defined by the relationship between two computer programs that communicate with each other. Clients initiate the communication by sending service requests to servers; they are typically personal computers with network software applications installed, although a mobile device can also function as a client. Servers are usually devices that store files and databases, including complex applications like web sites. They have higher-powered central processors, larger disk drives and more memory than clients. The server fulfils the requests of clients by sharing its resources with them, while the client relies on the server for most of the heavy processing. The client-server model usually operates over a network of computers, although it can also operate within a single computer.

The client-server model can be found in functions such as email exchange, web access and database access. A web browser is actually a client: it runs software locally that processes information received from a web server. In a typical client-server model, one server is activated and waits for client requests. Multiple client programs share the services of the same server program, and both are often part of a larger application. The Internet's core protocol suite, TCP/IP (Transmission Control Protocol/Internet Protocol), is also used in a client-server fashion; for example, TCP/IP allows a user to make client requests to FTP (File Transfer Protocol) servers.[1]

The client-server architecture helps to reduce network traffic by providing query responses rather than total file transfer. There are two types of client-server architecture: the two-tier architecture and the three-tier architecture. In the two-tier architecture, the user interface is stored on the client while the data are stored on the server, and information processing is separated between the user interface environment and the data management server environment. In the three-tier architecture, middleware is used between the user interface environment and the data management server environment. This overcomes the drawbacks of the two-tier architecture: it improves performance when there are a large number of users and offers more flexibility than the two-tier approach. However, its development environment is more difficult to use than two-tier application development.[2]
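The request/response pattern described above can be sketched with a minimal TCP client and server in Python. The port, the message and the upper-casing "service" are arbitrary choices for illustration, not any real protocol.

```python
import socket
import threading

def run_server(host="127.0.0.1", port=0):
    """Minimal single-request TCP server: waits for one client, replies with
    an upper-cased echo, then exits. Returns the port it bound to."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port=0 lets the OS pick a free port
    srv.listen(1)
    bound_port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()          # block until a client connects
        request = conn.recv(1024)       # read the client's service request
        conn.sendall(request.upper())   # fulfil it with a response
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return bound_port

def run_client(port, message=b"hello server"):
    """Client side: initiates the connection and sends the service request."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(message)
        return sock.recv(1024)

port = run_server()
reply = run_client(port)   # the server's response to our request
```

Note how the roles match the text: the server is passive and always listening, while the client initiates contact and the server fulfils the request.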

Client and Server Programs/architectures

A client program is a program running on one end system (host) that requests and receives a service from a server program running on another end system. The client-server model includes the Web, e-mail, file transfer, remote login, and many other popular applications. Client-server Internet applications are, by definition, distributed applications, since the client program typically runs on one computer and the server program runs on another.

The client program and server program interact by sending messages to each other over the Internet. At this level of abstraction, the routers, links and other nuts and bolts of the Internet serve as a black box that transfers messages between the distributed pieces of an Internet application.

In client-server architecture, the server is an always-on host, while client hosts can be either always on or sometimes on. The Web is a classic example: web servers are always-on hosts. In the client-server architecture, clients do not communicate directly with each other; for instance, two browsers do not communicate directly. Moreover, because the server is an always-on host, a client can always contact it by sending a packet to the server's address.

A single server host may not be capable of keeping up with all the requests from clients; one or two servers handling all of the requests would quickly become overwhelmed. To solve this problem, data centers housing large numbers of hosts are often used to create a powerful virtual server. Applications based on the client-server architecture are often infrastructure-intensive: they require the service provider to purchase, install and maintain server farms, and to pay recurring interconnection and bandwidth costs for sending and receiving data. For example, a popular service like Facebook is infrastructure-intensive and costly to provide.

However, not all of the Internet consists of pure client programs interacting with pure server programs. Two predominant architectural paradigms are used in modern network applications: client-server and peer-to-peer (P2P). Unlike client-server, P2P relies on direct communication between pairs of intermittently connected hosts. Some examples include Internet calling (e.g. Skype), file distribution (e.g. BitTorrent), file sharing (e.g. eMule) and IPTV (e.g. PPLive).[3]

Client/Server Evolution

A long time ago, client-server computing meant mainframes connected to dumb terminals. Over the years, personal computers evolved and replaced these terminals, but the processing was still done on the mainframes. With improvements in computer technology, the processing demands began to split between personal computers and mainframes.

The term client-server refers to a software architecture model consisting of two parts: client systems and server systems. These two components interact to form a network that connects multiple users. Using this technology, PCs are able to communicate with each other on a network. Early networks were based on a file sharing architecture, where a PC downloads files from a corresponding file server and the application runs locally using the data received. However, such a system works well only when the shared usage and the volume of data to be transferred are low.

As networks grew, the limitations of file sharing architectures became obstacles in client-server systems. This problem was solved by replacing the file server with a database server. Instead of transmitting and saving a whole file to the client, the database server executes requests for data and returns only the result sets to the client. As a result, this architecture decreases network traffic and allows multiple users to update data at the same time.

Typically either Structured Query Language (SQL) or Remote Procedure Calls (RPCs) are used to communicate between the client and server.[4] There are several types of client-server architecture. One of them is the two-tier architecture, where a client is directly connected to a server. This architecture offers good application development speed and works well in homogeneous environments when the user population is small. Its problem is the distribution of application logic and processing: if the application logic is distributed to dozens of client systems, application maintenance becomes very difficult. To overcome the limitations of the two-tier architecture, the three-tier architecture was introduced. By introducing a middle tier, clients connect only to the application server instead of connecting directly to the data server; this removes the load of maintaining connections from the database server, which can then manage storage and retrieval well, while the application logic and processing are handled systematically in the application tier. The three-tier architecture can be extended to N tiers when the middle tier provides connections to various types of services, integrating and coupling them to the client and to each other. For example, adding a web server to the three-tier architecture yields a four-tier architecture, where the web server handles the connection between the application server and the client, so more users can be handled at the same time.[5]
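The three-tier split can be sketched in a few lines of Python, with an in-memory SQLite database standing in for the database server and a plain function standing in for the application server. The table, names and balances are invented purely for illustration.

```python
import sqlite3

# Data tier: an in-memory SQLite database standing in for the database server.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT, balance REAL)")
db.execute("INSERT INTO accounts VALUES ('alice', 120.0), ('bob', 35.5)")

# Middle tier: the "application server" owns the connection and the query
# logic, so clients never talk to the database directly.
def get_balance(name: str) -> float:
    row = db.execute(
        "SELECT balance FROM accounts WHERE name = ?", (name,)
    ).fetchone()
    if row is None:
        raise KeyError(name)
    return row[0]

# Presentation tier: the client only calls the middle tier's interface.
balance = get_balance("alice")
```

Because only the middle tier issues SQL, the database server's interface can change without touching any client, which is exactly the maintenance benefit the text attributes to the three-tier design.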

Server Farm

A server farm or server cluster is a group of computers acting as servers, kept in a single location. When demands on a server increase, it may be difficult or impossible to handle them with only one machine; hence, a server farm is used. With a server farm, the workload is distributed among multiple server components, providing for expedited computing processes. Server farms were once used mostly by academic and research institutions or big companies, but as computing has become more mainstream, server farms are now used in many companies regardless of their size or demand.[6]

A server farm performs services such as providing centralized access control, file access, printer sharing, and backup for workstation users. The servers may have individual operating systems or a shared operating system, and may also be set up to provide load balancing when there are many server requests.[7]

In a server farm, there is usually a primary and a backup server. These two servers may be given the same task to perform; hence, if one server fails, the other can act as a backup, maintaining server functionality and preventing the service from going offline if a problem occurs. Usually, a server farm also has network switches and routers, which allow different parts of the farm to communicate. All these computers, routers, power supplies and other components are typically mounted on 19-inch racks in a server room.[8]

The performance of a server farm is not evaluated by the performance of its processors alone; in practice it is limited by the server room's cooling system and the cost of electricity. Due to its large consumption of electricity, the design parameter for a server farm is usually performance per watt. For systems that run 24/7, power-saving features are important so that the system can shut down some parts according to demand, saving power without interrupting the service.[9]
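A minimal sketch of how a server farm dispatcher might spread requests and route around a failed server is shown below. The round-robin policy, the server names and the health-tracking mechanism are illustrative assumptions, not how any particular farm works.

```python
import itertools

class ServerFarm:
    """Toy dispatcher for a server farm: spreads requests round-robin across
    healthy servers and skips any that have been marked as failed."""
    def __init__(self, servers):
        self.servers = servers
        self.down = set()
        self._cycle = itertools.cycle(servers)

    def mark_down(self, server):
        """Record a failure, e.g. when the primary stops responding."""
        self.down.add(server)

    def dispatch(self):
        """Return the next healthy server to handle a request."""
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server not in self.down:
                return server
        raise RuntimeError("no healthy servers left")

farm = ServerFarm(["web1", "web2", "web3"])
first_three = [farm.dispatch() for _ in range(3)]   # each server gets one request
farm.mark_down("web2")
after_failure = [farm.dispatch() for _ in range(2)] # "web2" is silently skipped
```

This captures the two ideas from the text: load is balanced across members of the farm, and a failed member is bypassed so the service stays online.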

Examples

There are many examples of the client-server model. In this chapter, we discuss interfaces, the Internet, bank accounts and data processing systems.

Firstly, consider the interface. An interface is a concept whereby components interact with each other: a component can function independently while using interfaces to communicate with other components via an input/output system and an associated protocol.[10]

The Internet is also an example of client-server computing. On the Internet, we are the clients, and we send requests to servers. The server searches for the requested subject and provides us with options. Hence the web, also known as the World Wide Web, is actually a huge collection of servers that respond to various types of clients, and the information available on a web server can be accessed by many different clients.[11]

Checking an account balance in a bank using an online banking system is another example of a client-server program. The client program on the computer sends a service request to the bank's server. The program may also forward the request to its own database client program, which sends a request to the database server of another bank in order to get the information. The information is then sent back to the bank's database client and returned to the client program on the computer, which displays the account balance to the user.[12]

Data processing is another application that implements the client-server model. When copying a file from another computer, a client is used to access the file on the server. In other words, the server serves data to the client.

References

  1. Sullivan, J. (2000). Definition::Client/Server. In TechTarget. Retrieved November 10, 2011, from http://searchnetworking.techtarget.com/definition/client-server
  2. Types of Client Server Architecture? (2011, November 10). In Answer.com. Retrieved November 10, 2011, from http://wiki.answers.com/Q/Types_of_client_server_architecture
  3. Ross, K. (2010). Networking: A Top-down Approach, Fifth Edition
  4. Andykramek (2008, September 29). Introduction to Client Server Architecture. Retrieved November 15, 2011, from http://weblogs.foxite.com/andykramek/archive/2008/09/29/6935.aspx
  5. David Raney (2009). The Evolution of Client/Server Computing. Retrieved November 15, 2011, from http://cis.cuyamaca.net/draney/214/web_server/client.htm
  6. Server Farm (2011, November 23). In Server Farm Org. Retrieved November 23, 2011, from http://serverfarm.org/
  7. Definition:: Server Farm (2011, November 23). In TechTarget. Retrieved November 23, 2011, from http://searchcio-midmarket.techtarget.com/definition/server-farm
  8. What is a Server Farm? (2011, November 23). In WiseGeek. Retrieved November 23, 2011, from http://www.wisegeek.com/what-is-a-server-farm.htm
  9. Server Farm (2011, November 23). In Wikipedia, The Free Encyclopedia. Retrieved November 23, 2011, from http://en.wikipedia.org/wiki/Server_farm
  10. Mitchell, B. (2011). Client Server Model: What is Client Server Network Technology. In About.com. Retrieved November 23, 2011, from http://compnetworking.about.com/od/basicnetworkingfaqs/a/client-server.htm
  11. Greer, J. D. (1995-1996) Client/Server, the internet and WWW. In Robelle Solutions Technology Inc. Retrieved November 23, 2011, from http://www.robelle.com/www-paper/paper.html
  12. Client Server Model (2011, November 10). In Wikipedia, The Free Encyclopedia. Retrieved November 10, 2011, from http://en.wikipedia.org/wiki/Client%E2%80%93server_model

Chapter 6 : Peer-to-peer

Definition Of Peer To Peer

In a computer network, peer-to-peer (P2P) refers to the network and relationships formed by computers connecting directly to each other, which can be done over the Internet. A "peer" is a computer that can participate directly with others over that network, without the need for a central server. This requires a peer (computer) on a P2P network to function as a server as well as a client. P2P networks are not created exclusively for file sharing, but that is the predominant application of the technology.

The only requirements for a computer to join a peer-to-peer network are a network connection and P2P software. In a file sharing setup, a peer only needs to connect with another so that the user can search for files that the other peer is sharing. Common programs that use a P2P network are BitTorrent, LimeWire, Kazaa, BearShare and many other similar software.

Peer to Peer Architectures

P2P is a sub-field of distributed systems, mainly focused on file distribution and indexing but not limited to them. A P2P network is created by running identical software on different computers. The software instances communicate with each other, forming a P2P network, to complete the processing required for the distributed task.

Some of the more recent P2P implementations use an abstract overlay network, where the software is built to function at the application layer on top of the native or physical network topology. P2P networks are organized following specific criteria and algorithms, which lead to specific topologies and properties.

P2P architecture depends on placing both a network server and a network client on each computer, creating a peer. This allows users to access services from other computers running the software (or compatible implementations) as well as to announce local services. P2P architectures are actually more complicated than the client-server architecture: since there is no central coordinator, a bootstrap scheme must be implemented so that each peer can learn the network addresses of others, over a highly dynamic network, and thus participate in the P2P network.

Based on how the nodes are linked to each other, P2P networks can be classified as unstructured or structured. Unstructured P2P networks do not impose any particular structure on the overlay and have no single centralized system.

Three categories can then be seen:

  • Pure peer-to-peer, which consists of one routing layer, with no preferred nodes having any special infrastructure function.
  • Hybrid peer-to-peer, which allows such infrastructure nodes, often called supernodes, to exist.
  • Centralized peer-to-peer, where a central server is used for indexing functions and to bootstrap the entire system. This has similarities with the structured architecture, but the connections between peers are not determined by any algorithm.

Structured P2P networks use a protocol that ensures any node can efficiently route a search to the peer that has a desired file, even if the file is rare. The most common type of structured P2P network is the DHT (Distributed Hash Table), in which hashing is used to assign ownership of each file to a particular peer.

How It Works

P2P works over networks with several peers (computers) cooperating without a fixed central server, that is, one on which the network depends for its existence (in mixed models there can be multiple servers).

How do you get these millions of peers to communicate, cooperate and provide services to each other without a central server?

1. Several computers are all connected to the Internet and want to cooperate and communicate with each other. If they do not know each other and there is no central server that they can communicate through, they need some kind of mechanism that allows them to find each other.

2. All the peers are assigned a GUID (Globally Unique ID), by which they identify each other. After assigning a GUID to every peer in the network, the peers are organized into a virtual ring, ordered according to their GUIDs.

3. This organization is purely virtual or logical: it does not mean that peer 1 is geographically close to peer 2. It is just a virtual way of organizing the peers into a network. Once the peers are organized into this ring, each peer needs a set of references to other peers in the network so that it can communicate with them. It is not possible for a peer to hold a reference to every single other peer, so each peer keeps references to a subset of the peers. Thus, if there are millions of peers in the network, each peer might only have references to a few others. Peers are chosen for this subset based on the distance of their GUIDs from the peer's own GUID.

How do the peers get to know each other? You need a boot peer, the first peer in the network, which has to be known to any peer entering the network. A joining peer contacts the boot peer and asks to join the network; the boot peer responds with a GUID (in some networks, the joining peer may decide its own GUID). Now the network consists of two peers. The joining peer next asks the boot peer for a copy of its routing table; the boot peer sends the copy back, and the joining peer uses it to build its own routing table. It then uses this temporary routing table to search for its ideal peers in the network. When a connection has been established, the user can search for files. When a search is submitted, the peer connects to all nodes on its connection list; the results are then displayed and a connection is made.
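The GUID ring described above can be sketched in Python. Deriving GUIDs from SHA-1 hashes and using a 16-bit identifier space are our own simplifying assumptions; real systems differ in both the hash and the routing rules.

```python
import hashlib

def guid(name: str, bits: int = 16) -> int:
    """Derive a numeric GUID for a peer or key from its name (a stand-in for
    however a real network assigns identifiers)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

def successor(ring: list[int], key: int) -> int:
    """Find the first GUID on the sorted ring at or after `key`,
    wrapping around to the smallest GUID if none is larger."""
    for g in ring:
        if g >= key:
            return g
    return ring[0]   # wrap around the ring

# Four peers organized into a virtual ring, ordered by GUID.
peers = sorted(guid(p) for p in ["peer-a", "peer-b", "peer-c", "peer-d"])
owner = successor(peers, guid("some-file"))   # the peer responsible for this key
```

A real peer would not scan the whole ring; it would hold references to a small subset of peers (its routing table) and forward the lookup in a few hops, but the ownership rule is the same.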

P2P Application Areas:

• File sharing

• Educational search engine

• Video streaming (TV, movies, etc)

• Telecommunications

• Web portals

• Bioinformatics

P2P Searching Techniques:

• Flooding queries

• Centralized indexing

• Decentralized indexing

• Crawling

• Hashing

A few examples of P2P networks are Kazaa, eMule, Napster, Gnutella, BitTorrent, Ants, Ares, DirectConnect, Freenet, OpenFT, FastTrack, Kad Network, NeoNet, MUTE, eDonkey and SoulSeek.

Peer-to-Peer Application

The applications of peer-to-peer are very wide. One can connect to another user's machine while giving them the ability to connect to one's own machine, with the ability to discover, query, and share content with other peers.

Applications of the peer-to-peer network:

Content distribution

  1. Babelgum – uses a proprietary P2P protocol with an encrypted data stream to prevent piracy.
  2. Blinkx BBTV - fuses TV and other video from the web using hybrid peer-to-peer streaming which enables content providers to disperse the burden of content delivery through the blinkx network and peer-to-peer distribution.
  3. Jaman - provides pieces of the movie via their server and via P2P.
  4. Software publication and distribution (Linux, several games); via file sharing networks.
  5. Peercasting for multicasting streams.

Exchange of physical goods, services, or space.

  • Peer-to-Peer renting web platforms enable people to find and reserve goods, services, or space on the virtual platform, but carry out the actual P2P transaction in the physical world.


  1. Domain Name System, for Internet information retrieval.
  2. Cloud computing
  3. Dalesa a peer-to-peer web cache for LANs (based on IP multicasting).


  1. The sciencenet P2P search engine.
  2. BOINC - Open-source software for volunteer computing and grid computing.

Communications networks

  1. Skype, one of the most widely used Internet phone applications, uses P2P technology.
  2. VoIP (using application layer protocols such as SIP)
  3. Instant messaging and online chat.

File sharing

  1. Film
  2. Video
  3. Music and Audio
  4. Computer applications

Privacy protection

  • Tor – shields its user’s identities by sending their traffic through a network of relays set up by volunteers around the world.

Business

  • Collanus Workplace – a cross-platform peer-to-peer system for collaborative teams to work on projects and transmit their work among the work group.

Advantages and Disadvantages of P2P Networks

Advantages of P2P Networks

• It is simple and easy to set up; it requires only a hub or switches and RJ45 cables to connect all the computers together.

• Files on one computer can be easily accessed from another computer if they are placed in shared folders.

• It is cheaper than having to use a server; the only costs involved are hardware, cabling and maintenance.

• The architecture of the layout (how it connects) is simple.

• If one of the computers fails to work, the other connected computers will still be able to work.

• It does not need any full-time system administrator; every user is basically the administrator of their own computer.

Disadvantages of P2P Networks

• The system is not centralized, so administration is difficult; you cannot determine the accessibility settings of the other computers.

• The network is vulnerable to viruses, spyware, Trojans, etc.

• Data recovery is difficult and each computer should have its own back-up system.

• It is only suitable for a small network of about 2-8 computers where high-level security is not required; it is not suitable for a business network that handles sensitive data.

References

Chapter 7 : Cloud-computing

Author/Editor: Ong Yong Sheng (Introduction, overview and types), Nurul Syafiena (application), Fatin Ramli (), Ong Hock Yew (advantages and disadvantages), Nur Malina (Architecture)

Introduction

Cloud computing: most people cannot relate these two words to each other. How does "cloud" relate to "computing"? In fact, in modern ICT life we live with many applications of cloud computing. In simple terms, we could describe the cloud as the Internet, and cloud computing as large systems connected in public or private networks. For example, a company could have computers that connect to an application that allows workers to log in to a web-based service hosting all the programs the user needs for his or her job. With such a system, a company does not have to provide the right hardware and software for every employee hired; hence it reduces cost and makes data and applications more easily obtained and ubiquitously accessed.[1]

Overview

We could divide cloud computing into three areas:

  • Software as a Service (SaaS). In the SaaS layer, users do not need to install or manage the software, which is provided by the host; they just need to connect to the Internet. (e.g., Google Apps, Salesforce.com, WebEx)
  • Platform as a Service (PaaS). In the PaaS layer, black-box services are provided with which developers can build applications on top of the compute infrastructure. Developer tools offered as a service to build services, or data access and database services, are provided in this layer. (e.g., Coghead, Google App Engine)
  • Infrastructure as a Service (IaaS). Providing computational and storage infrastructure as a centralized, location-transparent service. (e.g., Amazon)

[2] [3]

Cloud Computing Architecture

Generally, cloud computing architecture is divided into two sections, connected to each other through a network. The first section, known as the front end, consists of the client computers; the second section, known as the back end, consists of data storage, application servers and some type of control node.

Front End

The front end is the section which the computer user or client can observe. It consists of the client's computer and the application required to access the cloud computing architecture. An application programming interface (API) is used as the communication medium between the hardware components related to the cloud computing architecture. Commonly, cloud computing systems are reached through web interfaces such as Microsoft's Internet Explorer or Firefox. Moreover, some cloud computing systems provide special client software designed for specific tasks.[4]

Back end

The back end is the "cloud" section of the system, referring to the physical machines. Three main components make up the back end architecture. The first component is data storage, where information is kept for fast retrieval. Data can be stored in the cloud either by clients or by the cloud application. Normally a cloud system keeps a large amount of data redundancy: the data storage component is usually designed to store more than one copy of each data set, to prevent the data from becoming damaged and inaccessible. The second component is the application servers connected to the cloud computing architecture. A system commonly involves a number of different application servers, each responsible for a different function. Each of these servers is usually designed to run one program or service, and many of them may be available to the client through the front end interface, for example video games or data processing. The third component is the control nodes, which maintain the whole system, monitor clients' demands and monitor traffic flow to ensure the system runs smoothly. A protocol is a set of rules that carries server information. Middleware is a special kind of software installed on each server in a cloud computing system; it creates the communication links between the computers connected in the network.[5]

Types of Cloud Computing

Public Cloud

A public cloud (also referred to as an 'external' cloud) describes the conventional meaning of cloud computing: the computing infrastructure is hosted at the vendor's premises, and users cannot see the location of that infrastructure. The computing infrastructure is shared between organizations.

Private Cloud

A private cloud (also referred to as a 'corporate' or 'internal' cloud) is a computing architecture dedicated to a single organization and not shared with any other users or organizations. It is more secure than a public cloud and can be hosted externally or on premises.

Hybrid Cloud

A hybrid cloud combines the two: critical, secure files or applications are hosted in a private cloud, while less critical applications or files are hosted in a public cloud.

Community Cloud

Organizations of the same community share the same cloud infrastructure.[6]

Applications and Layers

There are basically three layers in cloud computing, and companies use them differently depending on what they offer: application, platform, and infrastructure. They are usually presented as a pyramid, with infrastructure at the bottom, platform in the middle, and application at the top.

The bottom layer

The bottom layer, infrastructure, is also known as ‘infrastructure as a service’ (IaaS). This is where things start and where people begin to build: the layer where cloud hosting lives. Examples of companies that provide cloud infrastructure are Amazon Web Services, GoGrid, and the Rackspace Cloud. Cloud infrastructure delivers raw computing infrastructure, and most companies at this level operate their own. This allows them to offer more services and features, and gives them more control than the other layers of the cloud pyramid. The characteristics of cloud infrastructure have pros and cons: the pro is full access to, and control of, the company's infrastructure; the cons are that it often comes at a premium price and can be complex to build, maintain, and manage.


The middle layer

The middle layer, platform, is also known as ‘platform as a service’ (PaaS). Examples of cloud platform companies and products are Google App Engine, Heroku, Mosso (now the Rackspace Cloud Sites offering), Engine Yard, Joyent, and Force.com. In contrast to cloud applications, this is the layer where users gain increased flexibility and control, although it still limits what they can and cannot do. The cloud platform has strengths and weaknesses: its strength is that it offers more control than a cloud application and suits developers targeting a specific niche; its weakness is that it depends on the cloud infrastructure provider and is tied to the abilities of the platform.


The top layer

The top layer, application, is also known as ‘software as a service’ (SaaS). In this layer, users are limited to what the application can do. Companies involved here include the public email providers such as Gmail, Hotmail, and Yahoo Mail, and most companies use services in this cloud layer. Usually the user only gets a pre-defined function and cannot access more than that. Cloud applications have good and bad characteristics: the good is that they are often free, easy to use, and offer many different things; the bad is that users can only use the application as it appears, with no knowledge of or control over it.


Advantages and Disadvantages

Advantages

• Lower computer costs. You don't need a high-end computer to run cloud-based applications. Since applications run in the cloud rather than on the desktop PC, your PC doesn't need the processing power or hard disk space demanded by traditional desktop software. This also reduces storage costs, since you no longer have to pay extra for the storage you need.

• Improved performance. Since accessing the cloud requires few local programs and processes, your RAM has space for other programs, which improves the performance of your PC.

• No software costs. You can get most of what you need for free, instead of purchasing expensive software applications. Most cloud computing applications today, such as the Google Docs suite, are free.

• Almost limitless storage capacity. Cloud computing offers virtually unlimited storage: your computer's current 200-gigabyte hard drive is peanuts compared to the hundreds of petabytes (a petabyte is a million gigabytes) available in the cloud, where providers pool storage across vast numbers of machines.

• Increased data reliability. Unlike desktop computing, in which a hard disk crash can destroy all your valuable data, a computer crashing in the cloud shouldn't affect the storage of your data: it is stored in the cloud as well as on your hard disk, so if either one fails, you still have the other.


Disadvantages

• Requires a constant Internet connection. Cloud computing works online and is absolutely dependent on network connections; if the connection is slow or unavailable, you cannot work.

• Can be slow. Even on a fast connection, web-based applications can sometimes be slower than a similar program on your desktop PC because of latency. For example, if the cloud data centre is located offshore, the connection to your data may not be as fast as you hope.

• Stored data can be lost. In theory, data stored in the cloud is safe because it is replicated across multiple machines. But there is a chance your data could go missing from all of those machines, so keep a copy on your own disk just in case.


References

  1. Strickland, J. How Cloud Computing Works. In How Stuff Works. Retrieved from http://computer.howstuffworks.com/cloud-computing/cloud-computing.htm
  2. Harris, T. Cloud Computing – An Overview. Retrieved from http://www.thbs.com/pdfs/Cloud-Computing-Overview.pdf
  3. Creeger, M. (2009). Cloud Computing: An Overview. Association for Computing Machinery. Retrieved from http://queue.acm.org/detail.cfm?id=1554608
  4. Strickland, J. How Cloud Computing Works: Cloud Computing Architecture. Retrieved from http://computer.howstuffworks.com/cloud-computing/cloud-computing.htm
  5. Cloud Computing Architecture. Retrieved from http://www.cloudcomputingarchitecture.net
  6. ArchieIndian (2010). Types of Cloud Computing. MICROREVIEWS. Retrieved from http://microreviews.org/types-of-cloud-computing/
  7. The Cloud Pyramid: Cloud Computing Explained. Retrieved from http://pyramid.gogrid.com/
  8. Commedia, S. (2009). Advantages and Disadvantages of Cloud Computing. Retrieved November 2, 2011, from http://goarticles.com/article/Advantages-and-Disadvantages-of-Cloud-Computing/4780305/

Chapter 8 : Internet-of-Things

Author/Editor: Yong Tze Lin, Tan Yong Xiang, Teoh Chong Sheng, Wong Meng Huei, Woo Yi Wen, Woo Yit Wei

Introduction to the Internet of Things

The Internet of Things (IoT) is a technological revolution that represents the future of computing and communications. Its development depends on the dynamic technical innovation in a number of important fields, from wireless sensors to nanotechnology.[1]

The concept of the IoT comes from the Massachusetts Institute of Technology (MIT) Auto-ID Center in 1999.[2] The MIT Auto-ID Laboratory is dedicated to creating the IoT using Radio Frequency Identification (RFID) and wireless sensor networks. The IoT is a foundation for connecting things, sensors, actuators, and other smart technologies, thus enabling person-to-object and object-to-object communications.[3] A new dimension has been added to the world of Information and Communication Technologies (ICTs): anyone can access information ubiquitously and pervasively, from anywhere, on any device, at any time. Connections will multiply and create an entirely new dynamic network of networks, which forms the IoT.[1]

RFID techniques and related identification technologies will be the cornerstone of the upcoming IoT. While RFID was initially developed with retail and logistics applications in mind in order to replace the bar code, developments of active components will make this technology much more than a simple identification scheme. It is expected in the near future that a single numbering scheme, such as IPv6, will make every single object identifiable and addressable.[4] The technologies of the IoT provide many benefits to the world. For example, sensor technologies are being used to test the quality and purity of different products, such as coffee in Brazil and beef in Namibia.

However, the security and privacy issues need to be considered. Concerns over privacy and data protection are widespread, particularly as sensors and smart tags can track users’ movements, habits and ongoing preferences. To promote a more widespread adoption of the technologies underlying the IoT, principles of informed consent, data confidentiality and security must be safeguarded.[1]

Architecture of the Internet of Things

The IoT needs an open architecture to maximise interoperability among heterogeneous systems and distributed resources including providers and consumers of information and services, whether they be human beings, software, smart objects or devices. Architecture standards should consist of well-defined abstract data models, interfaces and protocols, together with concrete bindings to neutral technologies (such as XML, web services etc.) in order to support the widest possible variety of operating systems and programming languages.[5]

The architecture should have well-defined and granular layers, in order to foster a competitive marketplace of solutions without locking any users into a monolithic stack from a single solution provider. Like the Internet, the IoT architecture should be designed to be resilient to disruption of the physical network; it should also anticipate that many of the nodes will be mobile, may have only intermittent connectivity, and may use various communication protocols at different times to connect to the IoT.[6]

IoT nodes may need to form peer networks with other nodes dynamically and autonomously, locally or remotely. This calls for a decentralized, distributed approach to the architecture, with support for semantic search, discovery and peer networking. Anticipating the vast volumes of data that may be generated, it is important that the architecture also includes mechanisms for moving intelligence and capabilities for filtering, pattern recognition, machine learning and decision-making towards the very edges of the network, enabling distributed and decentralized processing of the information either close to where the data is generated or remotely in the cloud. The architectural design will also need to enable the processing, routing, storage and retrieval of events, as well as allow for disconnected operation (e.g., where network connectivity is only intermittent). Effective caching, pre-positioning and synchronization of requests, updates and data flows need to be an integral feature of the architecture. By developing and defining the architecture in terms of open standards, we can expect increased participation from solution providers of all sizes and a competitive marketplace that benefits end users. In summary, the following issues have to be addressed:

• Distributed open architecture with end to end characteristics, interoperability of heterogeneous systems, neutral access, clear layering and resilience to physical network disruption.

• Decentralized autonomic architectures based on peering of nodes.

• Architectures moving intelligence at the very edge of the networks, up to users’ terminals and things.

• Cloud computing technology, event-driven architectures, disconnected operations and synchronization.

• Use of market mechanisms for increased competition and participation.[5]

IPv6

Today’s dominant Internet Protocol, Internet Protocol version 4 (IPv4), has only about 4.3 billion addresses, not enough to satisfy the rising demand for IP addresses driven by the exponential growth in the number of Internet users.[7] This worsening address drought led to the introduction of Internet Protocol version 6 (IPv6),[8] which was developed to solve the shortage of Internet addresses. It is often referred to as the “next-generation Internet” because of its almost limitless supply of IP addresses (3.4×10^38 of them).[9] It serves the function of IPv4, but without IPv4's limitations. Besides the much larger address space, the differences between IPv6 and IPv4 fall into five major areas: addressing and routing, security, Network Address Translation (NAT), administrative workload, and mobility. IPv6 also offers remarkable capability in the area of multicast technologies.[10]
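
The two figures above can be checked directly: IPv4 uses 32-bit addresses and IPv6 uses 128-bit addresses, so the address counts are simply powers of two.

```python
# Address-space arithmetic: 32-bit IPv4 versus 128-bit IPv6.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128
print(f"{ipv4_addresses:,}")    # 4,294,967,296  (~4.3 billion)
print(f"{ipv6_addresses:.1e}")  # 3.4e+38
```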

Addressing and Routing

IPv6’s extremely large address space gives Internet Service Providers (ISPs) enough IP addresses to assign one to every end system, so that every IP device has a truly unique address. Another goal of this address-space expansion is to improve connectivity, reliability, and flexibility. The additional address space also helps at the core of the Internet by reducing the size and complexity of the global routing tables.[10]

Security

One of the goals of IPv6 is built-in support for Virtual Private Networks (VPNs). The Internet Protocol Security (IPSec) protocols, Encapsulating Security Payload (ESP) and Authentication Header (AH), are capabilities possessed by IPv6 that IPv4 does not require.[10] Indeed, IPv6 mandates that security be provided through information encryption and source authentication.

Address Auto-Configuration

The IPv6 auto-configuration feature reduces the total time spent configuring and managing systems. This ‘stateless’ auto-configuration means there is no longer any need to configure IP addresses for end systems, even via the Dynamic Host Configuration Protocol (DHCP).[10] It allows new equipment to communicate with the network as soon as it is detected, which means devices are ready to use on demand; in other words, plug-and-play.
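
The flavour of stateless auto-configuration can be sketched in a few lines of Python. Under the modified EUI-64 scheme (one of the ways a host can derive its interface identifier), the host combines the router-advertised /64 prefix with an identifier built from its own MAC address; the function names below are our own, and this is a sketch of the address arithmetic only, not of the full neighbour-discovery protocol.

```python
import ipaddress

def eui64_interface_id(mac: str) -> int:
    """Derive the 64-bit modified EUI-64 interface id from a 48-bit MAC."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                      # flip the universal/local bit
    eui = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])   # insert ff:fe in the middle
    return int.from_bytes(eui, "big")

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Combine a /64 prefix with the interface id derived from the MAC."""
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | eui64_interface_id(mac))

print(slaac_address("2001:db8::/64", "00:11:22:33:44:55"))
# → 2001:db8::211:22ff:fe33:4455
```

No server had to hand out that address: the host computed it for itself, which is exactly what makes the scheme ‘stateless’.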

Administrative Workload

IPv6 improves communication and eliminates the need for NAT, through its automated configuration capabilities.

Mobility (Support for Mobile Devices)

IPv6 hosts are not restricted by location. As its name suggests, Mobile IP allows a device to roam from one network to another without losing its established IP address.[8]

Multicast Technologies

IPv6 allows multiple addresses for hosts and networks and supports the transmission of a single datagram to multiple receivers. This optimizes media streaming applications and allows data to be delivered to millions of locations at once. Beyond unicast communication, IPv6 defines a new kind of service called “anycast”.[10] Anycast communication allows the same address to be placed on more than one device, so that traffic sent to an address of this kind is routed to the nearest host that shares it.[8]
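
Python's standard `ipaddress` module can classify IPv6 addresses without any network access; for instance, the whole ff00::/8 block is reserved for multicast, so the link-local “all nodes” group ff02::1 is recognised as a multicast address while an ordinary documentation address is not:

```python
import ipaddress

# ff00::/8 is the IPv6 multicast block; 2001:db8::/32 is for documentation.
all_nodes = ipaddress.IPv6Address("ff02::1")    # link-local "all nodes" group
unicast = ipaddress.IPv6Address("2001:db8::1")  # ordinary unicast address

print(all_nodes.is_multicast)  # True
print(unicast.is_multicast)    # False
```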

Sensor

The IoT is a network of objects connected by things like sensors, RFID tags, and IP addresses. In this respect, sensors have a special role in the IoT paradigm. According to the International Telecommunication Union (ITU Report 2005), the Internet of Things can be defined as a vision “... to connect everyday objects and devices to large databases and networks ... (using) a simple, unobtrusive and cost-effective system of item identification...”.

In the IoT, sensors are the edge of the electronics ecosystem.[11] Sensors allow the physical world to interact with computers, playing an important role in bridging the gap between the physical world and the virtual one. This allows a richer array of data, other than data available from keyboard and mouse inputs. Currently, the internet is full of information that has been input by someone at the keyboard. But the concept of Internet of Things will change that, because we are at an inflexion point where more Internet data originates from sensors rather than keyboard inputs.

A sensor is a device that measures a physical quantity and converts it into a signal that can be read by an instrument or an observer. In the Internet of Things, this ability to detect changes in the physical status of things is essential for recording changes in the environment.[12] Sensors collect data from the environment, such as vibrations, temperature, and pressure, and convert them into data that can be processed and analyzed. This allows the Internet of Things to record any change in the environment or in an object.

For example, by having sensors installed on a bridge, the data collected can be used to estimate the number of cars that travel on the bridge, the traffic on the bridge at different times of the day, and the speed of the cars travelling on the bridge. This data can then be used for navigation systems, to allow programs or software to determine the fastest route, depending on the time of day.

Also, the sensors installed on to a bridge can be used to determine the safety of the structure of the bridge. For example, the sensors can be made to detect the vibrations along each part of the bridge, to detect any impending failure or fault. By collecting such information, any problems such as damage to a structure can be detected early on and dealt with, before any problems arise.
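
A hypothetical sketch of this kind of monitoring, with invented sensor names, readings, and thresholds: flag any point on the bridge whose vibration reading drifts too far from its long-run baseline, so that damage can be investigated before it becomes a failure.

```python
# Toy structural-health check: all sensor ids, readings, and thresholds
# below are made up for illustration.
def flag_anomalies(readings, baseline, tolerance):
    """Return the sensor ids whose reading deviates beyond the tolerance."""
    return [sensor for sensor, value in readings.items()
            if abs(value - baseline) > tolerance]

readings = {"span-1": 0.9, "span-2": 1.1, "span-3": 4.7}  # vibration, mm/s
print(flag_anomalies(readings, baseline=1.0, tolerance=0.5))  # ['span-3']
```

In a real deployment this comparison would run at the network edge, so only the anomalous reading, not the full stream, needs to cross the network.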

Embedded intelligence in things themselves can further enhance the power of the network. This is possible because the information processing capabilities are devolved, or delegated, to the edges of the network. Embedded intelligence will distribute processing power to the edges of the network, and offers greater possibilities for data processing and increasing the resilience of the network. With embedded intelligence, the things or devices connected at the edge of the network can make independent decisions based on the input received at the sensors.

“Smart things” are difficult to define. However, the term implies a certain processing power and reaction to external stimuli. Advances in smart homes, smart vehicles and personal robotics are some of the leading areas. Research on wearable computing is swiftly progressing. Scientists are using their imagination to develop new devices and appliances, such as intelligent ovens that can be controlled through phones or the internet, online refrigerators and networked blinds. The Internet of Things will draw on the functionality offered by all of these technologies to realize the vision of a fully interactive and responsive network environment.

RFID

Radio Frequency Identification (RFID) is a system that transmits the data of an object or a person using radio waves for identifying or tracking the object or person. It is done by first attaching a tag, known as the RFID tag, to the object or person. This tag will then be read by the reader to determine its identification information.[13]

It works much like a barcode, where a scanner scans the code to obtain its information. However, a barcode requires a line of sight in order to be scanned, whereas RFID tags do not need a line of sight to be read, because they use radio waves. This means that RFID tags can be read even if the tag is inside a box or a container, or kept in a pocket; this is a huge advantage of RFID. Another advantage is that one type of RFID tag, known as a passive tag, does not require batteries to function: its power supply comes from the radio energy transmitted by the reader. Furthermore, hundreds of RFID tags can be read at a time, unlike barcodes, which can only be scanned one at a time.[13]

RFID is often seen as a prerequisite of the Internet of Things. The IoT is a network of objects connected together, and if all the everyday objects in the world are to be connected, we need a simple and cost-effective system to do it; RFID is the solution to this problem.[1] RFID tags are simple and small enough to be attached to everyday devices without being noticed. In terms of cost effectiveness, passive tags are said to cost from only US$0.05 each, cheap enough to attach to huge numbers of everyday objects. Moreover, as noted above, passive tags require no batteries and draw their power from the radio energy transmitted by the reader. This saves the cost of batteries and the hassle of checking and replacing them, and it gives the tags an effectively unlimited lifetime: because they depend entirely on the reader for power, a tag works as long as there is a reader. Another point is that hundreds of RFID tags can be read at a time: the RFID system was designed to distinguish the different tags within range of the reader, so the information each tag provides is not mistaken for, or jumbled up with, information from other tags. RFID tags can also be integrated with sensors, to send not only identification data but other valuable information.[14] A sensor monitors a change in physical status and converts it into a signal that is stored by the RFID tag; when a reader reads the tag, the sensor's information is sent to the reader along with the identity of the object.
This way, we can monitor changes in an object such as temperature, pressure or vibration, and avoid disasters or safety hazards. For example, suppose we tag the tyres of a vehicle with pressure sensors and a workshop has an RFID reader: every time the vehicle enters the workshop, the reader automatically reads the tags and obtains the pressure of each tyre. It can identify a specific tyre that has too much or too little pressure, so the pressure can be corrected before any mishap occurs.[1]
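
A toy model of one read cycle, with invented tag ids and sensor payloads (the radio protocol and anti-collision mechanism are abstracted away entirely): every tag in range answers the reader with its identity plus its sensor data, and the under-inflated tyre is singled out.

```python
class Tag:
    """A (very) simplified RFID tag: a unique id plus optional sensor data."""
    def __init__(self, tag_id, sensor_data=None):
        self.tag_id = tag_id
        self.sensor_data = sensor_data or {}

def read_all(tags):
    """One inventory round: every tag in range reports id + sensor data."""
    return {t.tag_id: t.sensor_data for t in tags}

tyres = [
    Tag("tyre-FL", {"pressure_kpa": 220}),
    Tag("tyre-FR", {"pressure_kpa": 148}),  # under-inflated
]
scan = read_all(tyres)
low = [tid for tid, data in scan.items() if data["pressure_kpa"] < 180]
print(low)  # ['tyre-FR']
```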

Security and Privacy

The trend toward including ever more objects in IT data flows and connecting ever more devices, moving toward mobile and decentralized computing, is evident; the Internet of Things has become a new era in this day and age. For any object identification system to be widely adopted, there must be a solution that guarantees the privacy and security of its customers.

In most cases security has been treated as an add-on feature, and the feeling is that public acceptance of the Internet of Things will come only when strong security solutions are in place. These could be hybrid security mechanisms, for example combining hardware security with key diversification, that deliver superior security and make attacks significantly more difficult or even impossible. The selection of security features and mechanisms will continue to be determined by their impact on business processes.

The security and privacy issues should be addressed by the forthcoming standards which must define different security features to provide confidentiality, integrity, or availability services.

These are some security and privacy requirements with descriptions:

• Resilience to attacks: The system has to avoid single points of failure and should adjust itself to node failures.

• Data authentication: As a principle, retrieved address and object information must be authenticated.[15]

• Access control: Information providers must be able to implement access control on the data provided.[16]

• Client privacy: Measures need to be taken so that only the information provider can draw inferences relating to a specific customer from observing the use of the lookup system.

The fulfillment of customer privacy requirements is quite difficult, and a number of technologies have been developed to achieve information privacy goals. These Privacy Enhancing Technologies (PETs) can be described briefly as follows:[17]

• Virtual Private Networks (VPNs) are extranets established by closed groups of business partners. As only partners have access, they promise confidentiality and integrity. However, this solution does not allow for dynamic global information exchange and is impractical with regard to third parties beyond the borders of the extranet.

• Transport Layer Security (TLS), based on an appropriate global trust structure, could also improve the confidentiality and integrity of the IoT. However, as each Object Naming Service (ONS) delegation step requires a new TLS connection, information lookups would be negatively affected by the many additional connections.

Conclusion

In conclusion, the Internet of Things is the concept of connecting the virtual world of information technology to the real world of things. IoT technologies such as RFID and sensors make our lives better and more comfortable.

References

  1. International Telecommunications Union (2005). ITU Internet Reports 2005: The Internet of Things. Retrieved from www.itu.int/internetofthings/
  2. Sweeney, P. J., II (2005). RFID for Dummies. Wiley Publishing, Inc.
  3. Uckelmann, D., Harrison, M., & Michahelles, F. (2011). Architecting the Internet of Things. Springer: Heidelberg, Dordrecht, London, New York.
  4. Internet of Things in 2020: Roadmap for the Future (May 2008). Retrieved from http://ec.europa.eu/information_society/policy/rfid/documents/iotprague2009.pdf
  5. Vision and Challenges for Realising the Internet of Things. European Union, 2010. ISBN 9789279150883.
  6. National Intelligence Council (April 2008). Disruptive Civil Technologies: Six Technologies with Potential Impacts on US Interests Out to 2025. Conference Report CR 2008-07. Online: www.dni.gov/nic/NIC_home.html
  7. What is IPv6? (2011, June 1). In Apple Inc. Retrieved November 10, 2011, from http://support.apple.com/kb/HT4669
  8. Lammle, T. (2007). CCNA: Cisco Certified Network Associate Study Guide, Sixth Edition.
  9. What is IPv6? (2000–2011). In What Is My IP Address.com. Retrieved November 15, 2011, from http://whatismyipaddress.com/ip-v6
  10. What is IPv6? (2000). In Opus One. Retrieved November 4, 2011, from http://www.opus1.com/ipv6/whatisipv6.html
  11. Sensors Empower the "Internet of Things" (May 2010). Retrieved from http://www.edn.com/article/509123-Sensors_empower_the_Internet_of_Things_.php
  12. International Telecommunications Union (2005). ITU Internet Reports 2005: The Internet of Things – Executive Summary. Retrieved from http://www.itu.int/osg/spu/publications/internetofthings/InternetofThings_summary.pdf
  13. Radio-frequency identification. In Wikipedia. Retrieved from http://en.wikipedia.org/wiki/Radio-frequency_identification
  14. The Internet of Things: 20th Tyrrhenian Workshop on Digital Communications.
  15. For RFID authentication see Juels, supra note 14, at 384 s; Rolf H. Weber & Annette Willi, IT-Sicherheit und Recht, Zurich 2006, at 284.
  16. See also Eberhard Grummt & Markus Müller, Fine-Grained Access Control for EPC Information Services, in: Floerkemeier/Langheinrich/Fleisch/Mattern/Sarma, supra note 4, at 35–49.
  17. Fabian, supra note 6, 61 s; Benjamin Fabian & Oliver Günther, Security Challenges of the EPCglobal Network, Communications of the ACM, Vol. 52, July 2009, 121–125, at 124 s.

Chapter 9 : Conclusions

Contributors: Ahmad Luqman, Airil Hafiiz Bin Hasri, Amir Hanifi Bin Maddiah, Abdul Rahim Bin Abdul Halim, Mohammed Ali Alqahtani.

Origins of the Internet

J.C.R. Licklider of MIT was the first to describe, in memos written in 1962, the social interaction that could be enabled through networking. His vision was that everyone could access data and programs from any of many sites, and he was the one who emphasized the importance of the networking concept.[1]

Then came Leonard Kleinrock, who published the first paper on packet switching theory in 1961 and a book on it in 1964. His approach to computer networking used packets rather than circuits, and his aim was to make computers communicate with each other. In 1965, a low-speed dial-up telephone line was used for the first time to create a wide-area computer network, connecting the TX-2 computer in Massachusetts to the Q-32 computer in California. It worked well for time-shared computers, retrieving data and running programs, but the circuit-switched phone system was not enough; Kleinrock's case for packet switching was confirmed. In 1966, the computer network concept and the plan for the “ARPANET” were developed at DARPA and published in 1967. Parallel projects had been carried out at MIT, RAND and NPL without any of the researchers knowing of the others' work until it was presented at a conference. At that point, the line speed proposed in the ARPANET design was upgraded from 2.4 kbps to 50 kbps.[2]

In 1968, DARPA developed the packet switches called Interface Message Processors (IMPs), one of the key components of the network.[3] They were released after the community had refined the specification and structure of the ARPANET. The first IMP was installed at UCLA, where the first host computer was connected in 1969. By the end of 1969, four host computers were connected together into the initial ARPANET,[2] and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network.

In 1970, S. Crocker and the Network Working Group (NWG) finished the initial ARPANET host-to-host protocol, called the Network Control Protocol (NCP). Once NCP had been implemented across the network during 1971–1972, the sites could finally begin to develop applications.[4]

In 1972, Kahn organized a very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). Electronic mail was also introduced that year: Ray Tomlinson wrote the basic email message send-and-read software, motivated by the ARPANET developers' need for an easy coordination mechanism.[5] Roberts then expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From here email took a big step toward becoming the largest network application, as it remained for decades.

The Initial Internet

In 1972, Kahn introduced the idea of open-architecture networking at DARPA.[3] It was originally part of the packet radio program but later became a program in its own right. The idea was to make packet radio work as a reliable end-to-end protocol that could maintain effective communication in the face of jamming and other radio interference, or withstand intermittent blackouts such as those caused by being in a tunnel or blocked by the local terrain. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid dealing with a multitude of different operating systems and would allow continuing to use NCP.

However, NCP could not address networks (and machines) further downstream of a destination IMP on the ARPANET, so changes to NCP were required. NCP relied on the ARPANET to provide end-to-end reliability; if any packets were lost, the protocol (and any applications it supported) would come to a grinding halt. NCP had no end-to-end host error control, since the ARPANET was to be the only network in existence,[2] and it would be so reliable that no error control was required on the part of the hosts. Kahn therefore decided to develop a new version of the protocol, which would eventually be called the Transmission Control Protocol/Internet Protocol (TCP/IP), to serve as the communications protocol in an open-architecture network environment.

So in 1973, Vint Cerf was asked to work with Kahn on the detailed design of the protocol. The resulting first version of the approach described a single protocol, called TCP, which provided all the transport and forwarding services in the Internet. Kahn had intended that TCP support a range of transport services, from the totally reliable sequenced delivery of data (the virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which might imply occasional lost, corrupted, or reordered packets.

However, the initial effort to implement TCP resulted in a version that only allowed for virtual circuits. This model worked fine for file transfer and remote login applications, but some of the early work on advanced network applications, in particular packet voice in the 1970s, made clear that in some cases packet losses should not be corrected by TCP, but should be left to the application to deal with. This led to a reorganization of the original TCP into two protocols, the simple IP which provided only for addressing and forwarding of individual packets, and the separate TCP, which was concerned with service features such as flow control and recovery from lost packets. For those applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP.
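The division of labor described above survives directly in the Berkeley sockets API that operating systems expose today. The sketch below, a minimal illustration in Python using only loopback addresses and helper names chosen for this example, contrasts a TCP exchange (reliable, ordered, connection-oriented) with a UDP exchange (bare datagrams handed straight to IP):

```python
# Contrast of the two transport services that emerged from splitting the
# original TCP: reliable virtual circuits (TCP) vs. raw datagrams (UDP).
import socket
import threading

def tcp_echo_once(host="127.0.0.1"):
    """TCP: the transport layer handles sequencing, flow control, and
    retransmission -- the 'virtual circuit' model."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, 0))            # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    def serve():
        conn, _ = server.accept()
        data = conn.recv(1024)
        conn.sendall(data)            # echo back; delivery is reliable/ordered
        conn.close()

    t = threading.Thread(target=serve)
    t.start()
    client = socket.create_connection((host, port))
    client.sendall(b"hello over tcp")
    reply = client.recv(1024)
    client.close()
    t.join()
    server.close()
    return reply

def udp_echo_once(host="127.0.0.1"):
    """UDP: no connection and no delivery guarantee -- losses are left
    to the application, exactly as the packet-voice work required."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind((host, 0))
    port = server.getsockname()[1]

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(b"hello over udp", (host, port))
    data, addr = server.recvfrom(1024)   # on loopback this reliably arrives
    server.sendto(data, addr)
    reply, _ = client.recvfrom(1024)
    client.close()
    server.close()
    return reply

if __name__ == "__main__":
    print(tcp_echo_once())   # b'hello over tcp'
    print(udp_echo_once())   # b'hello over udp'
```

On a real network the UDP reply could simply never arrive, and the application would have to retry or tolerate the loss; TCP hides the same losses behind retransmission and sequencing.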

A major initial motivation for both the ARPANET and the Internet was resource sharing - for example, allowing users on the packet radio networks to access the time-sharing systems attached to the ARPANET.[2] Connecting the two together was far more economical than duplicating these very expensive computers. However, while file transfer and remote login (Telnet) were very important applications, electronic mail has probably had the most significant impact of the innovations from that era. Email provided a new model of how people could communicate with each other, and changed the nature of collaboration, first in the building of the Internet itself (as is discussed below) and later for much of society.

The Future Of The Internet

The Internet is used daily and widely throughout the globe. It has had a huge impact on culture and commerce since the mid-1990s, including the rise of near-instant communication such as electronic mail, instant messaging, Voice over Internet Protocol (VoIP), two-way interactive video calls, and the World Wide Web. Having seen how the Internet revolutionized the way people around the world accessed information and communicated through the 1990s, the research and education community continues to strive for better speed, bandwidth, and functionality for the benefit of the next generation.

There are several major trends shaping the future of the Internet, along with extrapolated predictions. The first lies in bandwidth: growth in bandwidth availability shows little sign of flattening. Connections in the 10 Mbps range and up will continue to reach home users through cable, phone, and wireless networks. Cable modems and telephone-based DSL modems will continue to spread high-speed Internet throughout populated areas. High-resolution audio, video, and virtual reality will be increasingly available online and on demand, and the cost of all kinds of Internet connections will continue to decrease.

The second major trend, wireless, is in many ways the end game of Internet communications. Wireless frequencies have two great advantages: first, there are no infrastructure start-up or maintenance costs other than the base stations; second, they give users the ability to access the Internet almost anywhere, taking Internet use from one dimension to three. Wireless networks will offer increasingly faster services at vastly lower costs over wider distances, and may eventually push out physical transmission systems. The use of radio communications networks in the 1970s inspired the Internet's open TCP/IP design. The wireless technologies first tried in the 1990s were continually improved, and by the early 2000s several of them provided reliable, secure, high-bandwidth networking that worked in crowded city centers and on the move, offering nearly the same mobility for Internet communications as for cellular phones.

Last but not least is integration. The Internet's integration with an ever-growing number of other technologies is slowly but surely taking place. Phones, televisions, home appliances, portable digital assistants, and a range of other small hardware devices will become increasingly connected. Access will become much easier for users, who will be able to check the status of, and control, this connected infrastructure from anywhere with an Internet connection, and vice versa.

In addition, one of the leading efforts to define the next generation of the Internet is the Internet2 project, which grew out of the transition of the National Science Foundation Network (NSFNET) to the Very High Speed Backbone Network Service (vBNS, vbns.net). The vBNS, established in 1995 under a cooperative agreement between MCI and the National Science Foundation, supported very-high-bandwidth research applications. Internet2 is an advanced networking consortium led by the U.S. research and education community; it uses advanced IP and optical network technologies to enable services and achievements beyond the scope of individual institutions.[6]

The Role of Documentation

A key to the rapid growth of the Internet has been the free and open access to the basic documents, especially the specifications of the protocols.

The beginnings of the ARPANET and the Internet in the university research community promoted the academic tradition of open publication of ideas and results. However, the normal cycle of traditional academic publication was too formal and too slow for the dynamic exchange of ideas essential to creating networks.

In 1969 a key step was taken by S. Crocker (then at UCLA) in establishing the Request for Comments (or RFC) series of notes. These memos were intended to be an informal, fast way to distribute ideas among network researchers. At first the RFCs were printed on paper and distributed via snail mail. As the File Transfer Protocol (FTP) came into use, the RFCs were prepared as online files and accessed via FTP. Now, of course, the RFCs are easily accessed via the World Wide Web at dozens of sites around the world. SRI, in its role as Network Information Center, maintained the online directories. Jon Postel acted as RFC Editor as well as managing the centralized administration of required protocol number assignments, roles that he continued to play until his death on October 16, 1998.

The effect of the RFCs was to create a positive feedback loop, with ideas or proposals presented in one RFC triggering another RFC with additional ideas, and so on. When some consensus (or at least a consistent set of ideas) had come together, a specification document would be prepared. Such a specification would then be used as the base for implementations by the various research teams.

Over time, the RFCs have become more focused on protocol standards (the "official" specifications), though there are still informational RFCs that describe alternate approaches, or provide background information on protocols and engineering issues. The RFCs are now viewed as the "documents of record" in the Internet engineering and standards community.

The open access to the RFCs (for free, if you have any kind of a connection to the Internet) promotes the growth of the Internet because it allows the actual specifications to be used for examples in college classes and by entrepreneurs developing new systems.

Email has been a significant factor in all areas of the Internet, and that is certainly true in the development of protocol specifications, technical standards, and Internet engineering. The very early RFCs often presented a set of ideas developed by the researchers at one location to the rest of the community. After email came into use, the authorship pattern changed: RFCs came to be presented by joint authors with a common view, independent of their locations.

Specialized email mailing lists have long been used in the development of protocol specifications and continue to be an important tool. The IETF now has more than 75 working groups, each working on a different aspect of Internet engineering. Each working group has a mailing list to discuss one or more draft documents under development. When consensus is reached on a draft document, it may be distributed as an RFC.

As the current rapid expansion of the Internet is fueled by the realization of its capability to promote information sharing, we should understand that the network's first role in information sharing was sharing the information about its own design and operation through the RFC documents. This unique method for evolving new capabilities in the network will continue to be critical to future evolution of the Internet.

History of the future

On October 24, 1995, the FNC unanimously passed a resolution defining the term "Internet". The definition was developed in consultation with members of the internet and intellectual property rights communities. RESOLUTION: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term "Internet". "Internet" refers to the global information system that -- (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.[7]
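Clause (i) of the definition, the globally unique address space, can be made concrete with Python's standard ipaddress module. This is a small illustration only; the specific addresses below come from the documentation-only ranges and are chosen purely as examples:

```python
# Clause (i): a globally unique address space based on IP, plus its
# "subsequent extensions/follow-ons" (IPv6).
import ipaddress

# A 32-bit IPv4 address: one point in the original global address space.
v4 = ipaddress.ip_address("192.0.2.1")    # documentation range (RFC 5737)
assert v4.version == 4

# IPv6, the follow-on, widens the space from 32 to 128 bits.
v6 = ipaddress.ip_address("2001:db8::1")  # documentation range (RFC 3849)
assert v6.version == 6

# Size of each address space:
print(2 ** 32)    # 4294967296 IPv4 addresses
print(2 ** 128)   # about 3.4e38 IPv6 addresses
```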

In the two decades since the Internet came into existence, it has changed enormously. It was conceived in the era of time-sharing, but has survived into the era of personal computers, client-server and peer-to-peer computing, and the network computer. It was designed before LANs existed, but has accommodated that new network technology, as well as the more recent ATM and frame-switched services. It was envisioned as supporting a range of functions from file sharing and remote login to resource sharing and collaboration, and it has spawned electronic mail and, more recently, the World Wide Web. Most importantly, it started as the creation of a small band of committed researchers and has grown to be a commercial success with billions of dollars of annual investment.

The Internet, although a network in name and geography, is a creature of the computer, not of the traditional network of the telephone or television industry. It will, and must, continue to change at almost the same speed as the computer industry if it is to remain useful. It now provides services such as real-time transport, in order to support audio and video streams. The availability of pervasive networking, along with powerful computing and communications in portable form (laptop computers, PDAs, cellular phones, and the like), is making possible a new paradigm of nomadic computing and communications.

This development has given us new applications such as Internet telephony and, more recently, Internet television. It is evolving to allow more sophisticated forms of pricing and cost recovery, and it is changing to accommodate yet another generation of underlying network technologies with different characteristics and requirements, from broadband residential access to satellites.

According to [8], the structural design of the Internet has always been driven by a core group of designers, but the form of that group has changed as the number of interested parties has grown. With the success of the Internet has come a proliferation of stakeholders, stakeholders now with an economic as well as an intellectual investment in the network. The debates over control of the domain name space and the form of the next-generation IP addresses are part of the struggle to find the next social structure that will guide the Internet in the future. The form of that structure will be harder to find, given the large number of concerned stakeholders. At the same time, the industry struggles to find the economic rationale for the large investment needed for future growth, for example to upgrade residential access to a more suitable technology.[7]

References

  1. The Internet: A Historical Encyclopedia. p. 197.
  2. McQuillan, John (May 1975). "The Evolution of Message Processing Techniques in the ARPA Network". Network Systems and Software, Infotech State of the Art Report 24.
  3. Cole, Robert; Leiner, Barry; Mills, David; Postel, Jonathan B. (March 1985). The DARPA Internet Protocol Suite. IEEE INFOCOM 85, Washington, D.C.
  4. The Internet: the basics. p. 202.
  5. Encyclopedia of New Media: An Essential Reference to Communication and Technology. p. 489.
  6. "Internet2 Network".
  7. Leiner, Barry M.; Cerf, Vinton G.; Clark, David D.; Kahn, Robert E.; Kleinrock, Leonard; Lynch, Daniel C.; Postel, Jon; Roberts, Larry G.; Wolff, Stephen (2000). A Brief History of the Internet, version 3.31. Institute for Information Systems and Computer Media.