
University of Guyana • Faculty of Natural Sciences • Department of Computer Science

CSI225 Internet Computing (Lecture 1)

The Internet – An Introduction

What is the Internet?


In short, the Internet is a revolutionary global medium. More specifically, it is a worldwide, publicly accessible network of interconnected computer networks – a network of networks.

Most of the networks that interconnect to make up the Internet can be classified as academic, government, business, or domestic (home) networks.

Regardless of the specific purpose it serves, the Internet always relates, in one way or another, to the exchange of information across great distances.

How did the Internet come about?


The history of the Internet is quite a lengthy one; the significant events are therefore listed chronologically below to give you a quick idea of how the Internet came to be what it is today (i.e. as of 2007).

In 1957, the Soviet Union launched Sputnik I, the first artificial satellite. This made the US nervous (tensions were high, with the threat of nuclear war looming), so the US Department of Defense formed ARPA (the Advanced Research Projects Agency) to carry out research aimed at ensuring national security.
The IPTO (Information Processing Techniques Office), affiliated with ARPA and headed by J.C.R. Licklider, discussed the potential benefits of a country-wide communications network for securely sharing research information under ARPA.

Licklider suggested to his superiors that Lawrence Roberts would be the right person to develop such a network.

So, Roberts took up the offer and led the development of the
network based on Paul Baran’s and Donald Davies’ new
concept of packet switching.

A special computer called the Interface Message Processor (IMP) was developed to implement the design, and ARPAnet went “online” in October of 1969. The first communications were between a research centre at the University of California, Los Angeles (UCLA) and one at the Stanford Research Institute (SRI) in California.

Shortly afterwards, more institutions were added to the group of “interconnected nodes”, namely UCSB, the University of Utah, Harvard, NASA and others, bringing the total to 15 by 1971.

All this time ARPAnet had been using NCP (the Network Control Program) as the protocol for its network; however, in 1983 it was replaced by the TCP/IP protocol suite (the protocols we use today), developed by Robert Kahn, Vinton Cerf and others. This led to one of the first definitions of an “internet” as a connected set of networks.

By the early 1990s the Soviet Union was no more, and ARPAnet went into retirement; its role was taken over by NSFnet (the National Science Foundation Network). NSFnet was soon connected to CSnet (the Computer Science Network), which linked universities throughout North America, and then to EUnet (the European Network), which connected research facilities in Europe.

Shortly afterwards, the popularity of the Internet grew exponentially, along with the number of hosts, partly due to NSFnet’s management but mostly because of the advent of the World Wide Web in 1991. This persuaded the US Government to transfer management to independent organisations (listed below) in 1995, and that, in brief, is the history.
The organisations that administer the Internet are:
1. ICANN – Internet Corporation for Assigned Names &
Numbers
a. IANA – Internet Assigned Numbers Authority
b. ASO – Address Supporting Organisation
c. CCNSO – Country Code Names Supporting
Organisation
d. GNSO – Generic Names Supporting
Organisation
e. NSI – Network Solutions
f. Accredited Domain Name Registrars
2. ISOC – Internet Society
a. IAB – Internet Architecture Board
i. IETF / IESG – Internet Engineering Task Force / Internet Engineering Steering Group
ii. IRTF / IRSG – Internet Research Task Force / Internet Research Steering Group
3. Others


The Infrastructure of the Internet


The technical infrastructure of the Internet comprises a
hardware side and a software side.

Hardware Set-Up

The global network of networks that is the Internet is physically interconnected by cables, routers and servers, which are essentially used to form ISPs and wide area networks (WANs). At the endpoints of these interconnected networks lie the clients (end-user computers). Fig. 1 illustrates this arrangement.


Software Set-Up

The physical structure depicted in fig. 1 can be thought of as primarily comprising computers (clients and servers) that share all types of information.

The sharing of such information is facilitated by the data packet. For instance, a computer at one endpoint can transfer information to a computer at another endpoint by converting the information into packets and sending them across the chain of ISPs, backbones and so on to the intended destination.
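To make the idea concrete, here is a minimal Python sketch of splitting a message into numbered chunks and reassembling it at the destination. This is an illustration only, not the actual IP mechanism (real packets carry binary headers with addresses, checksums and so on); the chunk size and message are arbitrary.

```python
# Illustrative only: shows the split/number/reassemble idea behind packets.

def packetize(message: str, size: int = 16):
    """Split a message into numbered chunks ("packets")."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Rebuild the original message, even if packets arrive out of order."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("Hello from one endpoint of the Internet!")
packets.reverse()                # pretend the packets arrived out of order
print(reassemble(packets))       # the original message is reconstructed
```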

But a few technical questions arise:


1- How do these packets find their way across the vast
Internet to their destination?
2- How is the destination computer singled out from all
the other computers?
3- How is the communication between these computers
coordinated for an effective transfer?

The answer to question 1 relates to the presence of routers throughout the Internet. A router is the hardware element of the Internet that directs Internet traffic by reading the destination address on a data packet and routing the packet through the appropriate sub-networks so that it reaches its destination in the shortest time possible.
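As a rough illustration of that routing decision (a big simplification: real routers use hardware forwarding tables built by routing protocols such as OSPF and BGP), the following Python sketch picks a next hop by longest-prefix match on a packet's destination address. The networks and next-hop names here are made up for the example.

```python
import ipaddress

# A toy forwarding table: destination network -> next hop (made-up names).
forwarding_table = {
    ipaddress.ip_network("65.199.0.0/16"):   "link-to-ISP-A",
    ipaddress.ip_network("65.199.203.0/24"): "link-to-ISP-B",
    ipaddress.ip_network("0.0.0.0/0"):       "default-gateway",
}

def next_hop(destination: str) -> str:
    """Choose the most specific (longest-prefix) route that matches."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in forwarding_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(next_hop("65.199.203.58"))   # -> link-to-ISP-B (the /24 route wins)
print(next_hop("8.8.8.8"))         # -> default-gateway
```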

This brings us to question 2. Every computer that belongs to a network is given a unique address to identify it. This unique address is known as its IP address (e.g. 65.199.203.58) and is embedded in every packet sent, so that routers know to which computer on which network each packet should be directed.
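The example address above can be inspected with Python's standard ipaddress module; a small sketch follows (the address is the one from the text, and the /24 network placed around it is only an assumption for illustration).

```python
import ipaddress

addr = ipaddress.ip_address("65.199.203.58")   # the example address above
print(addr.version)      # 4     -> an IPv4 address
print(addr.is_private)   # False -> publicly routable
print(int(addr))         # the same address as a single 32-bit integer

# An address really means "host X on network Y"; assume a /24 for illustration.
net = ipaddress.ip_network("65.199.203.0/24")
print(addr in net)       # True
```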

Finally, we come to question 3. Since any information sent over the Internet is broken up into several discrete packets for transfer, there needs to be some common system of control to coordinate the transfer and ensure that all of the packets reach their destination safely, so that the original information can be reconstructed. This system of control, known as a protocol, is what makes effective communication between all elements of the Internet possible. As stated earlier, the primary protocol of the Internet is TCP/IP, which represents a stack of the protocols used over the Internet, such as FTP (File Transfer Protocol) and HTTP (Hypertext Transfer Protocol), the protocol of the Web.
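To see the layering in action, here is a short Python sketch that opens a TCP connection (TCP/IP providing the reliable, coordinated transfer) and speaks HTTP over it (the application protocol on top). It assumes outbound access to example.com on port 80; any public web server would serve equally well.

```python
import socket

# TCP/IP layer: open a reliable, ordered byte stream to the server.
with socket.create_connection(("example.com", 80), timeout=10) as sock:
    # HTTP layer: a plain-text request sent over that TCP connection.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    response = b""
    while chunk := sock.recv(4096):    # read until the server closes
        response += chunk

print(response.split(b"\r\n")[0].decode())   # e.g. "HTTP/1.1 200 OK"
```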

The Internet and the World Wide Web (WWW)


The success and popularity of the Internet were primarily brought on by the advent of the World Wide Web in 1991; however, the concepts behind the web originated decades earlier.

In the 1960s Ted Nelson coined the term hypertext and popularized the concept. Hypertext is a user-interface paradigm, employed today in hyperlinks throughout the web as well as in electronic documents, that overcomes the linearity of traditional documents. Douglas Engelbart, whose work paralleled Nelson’s research, developed the first mouse, GUI and hypertext system.


Many years later, in the 1980s, the web in its initial state was developed in Europe by Tim Berners-Lee and Robert Cailliau, and it subsequently gained worldwide popularity in the 1990s through the efforts of Marc Andreessen, who with the NCSA (National Center for Supercomputing Applications) developed the Mosaic browser and later went on to create Netscape.

The web, as we know it today, runs on the Internet and consists of:
1- Hyperlinked web pages.
2- Web browsers to view the hyperlinked web pages.
3- Web addresses, or Uniform Resource Identifiers (URIs) (e.g. http://www.yahoo.com), to locate the hyperlinked web pages and other file resources.
4- Web servers to host the hyperlinked collections of web pages known as websites, along with other resources.
5- Domain Name System (DNS) servers that allow web servers to be located by human-readable domain names rather than by cryptic IP addresses (see the sketch below).
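A minimal sketch of the lookup that DNS performs, using Python's standard library (the hostname is the example from the list above; the address returned will vary over time and location):

```python
import socket

# DNS in one call: translate a human-readable name into an IP address.
hostname = "www.yahoo.com"              # the example host from the list above
ip = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip}")   # the actual address varies
```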

The web, however, is coordinated and driven by three particular standards. They are:
i. The Uniform Resource Identifier (URI), which gives every resource on the web a unique address (illustrated below).
ii. The Hypertext Transfer Protocol (HTTP), which specifies how the browser and web server communicate with each other.
iii. The Hypertext Markup Language (HTML), used to define the structure and content of hypertext documents.
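As a small illustration of the first standard, Python's urllib.parse can split a URI into the parts a browser works with (scheme, host, path); the URI shown is just an example.

```python
from urllib.parse import urlparse

# Break an example URI into the components a browser works with.
uri = urlparse("http://www.yahoo.com/index.html")
print(uri.scheme)   # 'http'          -> which protocol to speak (HTTP)
print(uri.netloc)   # 'www.yahoo.com' -> which server to contact (via DNS)
print(uri.path)     # '/index.html'   -> which resource to request
```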



