
Agenda Module II (2 sessions)

• Internet Architecture
• The Life Cycle Approach
• Overview of different Phases
• The Network
• Information transfer
The Internet and the WWW
What is the Internet?

• A loosely configured global wide-area network.


• Includes more than 31,000 different networks in
over 100 different countries.
• Millions of people visit and contribute to the
Internet, through e-mail and the World Wide
Web.
• Began as a Department of Defense project.
• For detailed information about the history of the
Internet, see:
http://dir.yahoo.com/Computers_and_internet/Internet/History/
Early history of the Internet

• In the 1950s the U.S. Department of Defense
became concerned that a nuclear attack could
disable its computing (and thus planning and
coordinating) capabilities.
• By 1969 the Advanced Research Projects Agency
Network (ARPANet) had been constructed.
• The first computers to be connected were ones at
the University of California at Los Angeles, SRI
International, the University of California at
Santa Barbara, and the University of Utah.
The changing Internet

Early on researchers began to find new uses for the
Internet, beyond its original military communications
purpose.

These new applications included the following:
• Electronic mail
• File transfer protocol
• Telnet
• User’s News Network (Usenet)
The new uses

• In 1972 a researcher wrote a program that could
send and receive messages over the Internet.
E-mail was quickly adopted by Internet users.
• File transfer protocol (FTP) allowed researchers
using the Internet to transfer files easily across
great distances.
• Telnet allows users of the Internet to log into
their computer accounts from remote sites.
• All three of these applications are still widely
used. We will discuss them again later.
Usenet

• In 1979 a group of students and programmers at
Duke and the University of North Carolina
started Usenet, short for User News Network.
• Usenet allows anyone who connects to the
network to read and post articles on a variety
of subjects.
• Usenet survives today in what are called news-
groups.
Newsgroups

There are several thousand newsgroups covering a
highly varied group of subjects.
Examples:
– alt.cats 
– comp.databases 
– rec.climbing 
– soc.penpals 

The first part of the name of each group tells you
what type of group it is and the remaining parts
indicate the subject matter.
Terminology

• A hypertext server is a computer that stores files
written in hypertext markup language (HTML)
and lets other computers connect to it and read
those files. It is now called a Web server.
• A hyperlink is a special tag that contains a pointer
to another location in the same or in a different
HTML document.
• HTML is based on Standard Generalized Markup
Language (SGML), which organizations have
used for many years to manage large document
filing systems.
Early Web browsers

• A Web browser is a software interface that lets
users read (or browse) HTML documents.
• Early web browsers were text based.
• Although the Web caught on quickly in the
research community, broader acceptance was
slow to materialize.
• Part of the problem was that the early browsers
were difficult to use.
GUI Web browsers

• In 1993, Marc Andreessen led a team of researchers
and developed the first software with a graphical
user interface for viewing pages over the Web.
• This first GUI browser was named Mosaic.
• Mosaic widened the appeal of the Web by making
access easier and adding multimedia capabilities.

• Andreessen later went on to develop the Netscape
Navigator browser.
Control of the Internet
• No one organization currently controls the Internet.
• Several groups oversee aspects of the development
of the Internet.
– Internet Engineering Task Force (IETF)
Oversees the evolution of Internet protocols
– Internet Registries (InterNIC)
Maintain and allocate Internet domains
– World Wide Web Consortium (W3C)
Develops standards for the WWW
• See the Internet Standardization Organizations.
Internet 2

A project to develop another Internet, Internet2,
is being led by over 170 U.S. universities working
in partnership with industry and government.

This new network is designed to allow development
and deployment of advanced network applications
and technologies.

For more information see: http://www.internet2.edu/


A model for networking
• The world’s telephone companies were the early
models for networked computers because the
networks used leased telephone company lines.
• Telephone companies at the time established a
single connection between sender and receiver
for each telephone call.
• Once a connection was established, data traveled
along that path.
Circuit switching
• Telephone company switching equipment (both
mechanical and computerized) selected the
phone lines, or circuits, to connect in order to
create the path between caller and receiver.
• This centrally controlled, single connection
model is known as circuit switching.
• Using circuit switching does not work well for
sending data across a large network.
• Point-to-point connections for each sender/
receiver pair is expensive and hard to manage.
A different approach
• The Internet uses a less expensive and more easily
managed technique than circuit switching.
• Files and messages are broken down into packets
that are labeled with codes that indicate their
origin and destination.
• Packets travel from computer to computer along
the network until they reach their destination.
• The destination computer reassembles the data
from the packets it receives.
• This is called a packet switching network.
Packet switching
• In a packet-switched network, (some of) the
computers that an individual packet encounters
determine the best way to move the packet to its
destination.
• Computers performing this determination are
called routers.
• The programs that the computers use to determine
the path are called routing algorithms.
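Real routing algorithms (distance-vector, link-state) are beyond these slides, but the core idea, routers choosing a path toward a packet's destination, can be sketched as a fewest-hops search over a toy network (the topology below is invented for illustration):

```python
from collections import deque

# A toy network: each router lists its directly connected neighbours.
network = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def route(src, dst):
    """Breadth-first search: one simple routing strategy (fewest hops)."""
    paths = deque([[src]])
    seen = {src}
    while paths:
        path = paths.popleft()
        if path[-1] == dst:
            return path
        for hop in network[path[-1]]:
            if hop not in seen:
                seen.add(hop)
                paths.append(path + [hop])

assert route("A", "E") == ["A", "B", "D", "E"]
```

Production routers also weigh link cost, congestion, and policy, not just hop count.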
Benefits of packet switching
There are benefits to packet switching:
• Long streams of data can be broken down into
small manageable data chunks, allowing the
small packets to be distributed over a wide
number of possible paths to balance traffic.
• It is relatively inexpensive to replace damaged
data packets after they arrive, since if a data
packet is altered in transit only a single
packet must be retransmitted.
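The chunk-and-reassemble idea can be sketched in Python (a toy illustration, not an actual network protocol; the 8-byte packet size is arbitrary):

```python
import random

PACKET_SIZE = 8  # bytes of payload per packet (toy value)

def packetize(message: bytes):
    """Split a message into (sequence number, payload) packets."""
    return [(i, message[i:i + PACKET_SIZE])
            for i in range(0, len(message), PACKET_SIZE)]

def reassemble(packets):
    """Rebuild the message even if packets arrive out of order."""
    return b"".join(payload for _, payload in sorted(packets))

msg = b"Files and messages are broken down into packets."
packets = packetize(msg)
random.shuffle(packets)  # simulate packets taking different paths
assert reassemble(packets) == msg
```

The sequence numbers are what let the destination reorder and spot missing packets.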
Open architecture
When it was being developed, the people working
on ARPANet adhered to the following principles:
1. Independent networks should not require any
internal changes in order to be connected.
2. The router computers do not retain information
about the packets that they handle.
3. Packets that do not arrive at their destinations
must be retransmitted from their source network.
4. No global control exists over the network.
Most popular Internet
protocols
The most popular Internet protocols include:
• TCP/IP
• HTTP (Hypertext transfer protocol)
• E-mail protocols (SMTP, POP, IMAP)
• FTP (File transfer protocol)

Each protocol is used for a different purpose,
but all of them are important.
TCP/IP
• The protocols that underlie the basic operation of
the Internet are TCP (transmission control
protocol) and IP (Internet protocol).
• Developed by Internet pioneers Vinton Cerf and
Robert Kahn, these protocols establish rules
about how data are moved across networks and
how network connections are established and
broken.
• TCP/IP is often described as a four-layer
architecture; the five-layer Internet model
discussed below splits out a separate hardware layer.
Purposes of each protocol
• TCP controls the assembly of a message into
smaller packets before it is transmitted over
the network. It also controls the reassembly
of packets once they reach their destination.
• The IP protocol includes rules for routing
individual data packets from their source to
their destination. It also handles all addressing
details for each packet.
Network layers
The work done by communications software is
broken into multiple layers, each of which handles
a different set of tasks.

Each layer is responsible for a specific set of tasks
and works as one unit with the other layers when
delivering information over the Internet.

Each layer provides services for the layer above it.


TCP/IP architecture
There are five layers in the Internet model:
1. Application
2. Transport
3. Internet
4. Network interface
5. Hardware

The lowest layer is the hardware layer that handles
the individual pieces of equipment attached to the
network. The highest layer is the application layer
where various network applications run.
Positioning within the layers
A full discussion of the Internet model is beyond
the scope of this class.

It is, however, useful to know where each protocol
resides. TCP operates in the transport layer and IP
in the Internet layer. See Figure 2-2 on page 38.

Some of the application layer protocols include
HTTP, SMTP, POP, IMAP, and FTP. (Telnet
also operates in the application layer.)
Web System Architecture

[Diagram: Web clients connect over the Internet to a
Web server, an application server, and a database.]
Web System Architecture

Web browser: the client interface.
Web server: one of the main components of the service
system; it interacts with the web clients as well as
the back-end system.
Application server: hosts the e-commerce application
software.
HTTP
• HTTP (hypertext transfer protocol) is the protocol
responsible for transferring and displaying Web
pages.
• It has continued to evolve since being introduced.
• Like other Internet protocols, HTTP uses the client/
server model of computing. Thus, to understand
how HTTP works, we need to first discuss the
client/server model.
HTTP versions: HTTP/1.0 and HTTP/1.1
HTTP request methods
• GET
• HEAD
• POST
Client/server model
• In the client/server model there are two roles: the
client and the server.
• The client process makes requests of the server.
The client is only capable of sending a request
to the server and then waiting for the reply.
• The server satisfies the requests of the client. It
usually has access to a resource, such as data,
that the client wants. When the resource that
the client wants becomes available, it sends a
message to the client.
• This model simplifies communication.
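The request/wait/reply pattern can be sketched with Python sockets on the local machine (a minimal illustration, not production code):

```python
import socket
import threading

def server(listener):
    conn, _ = listener.accept()               # wait for a client to connect
    with conn:
        request = conn.recv(1024)             # receive the client's request
        conn.sendall(b"reply to " + request)  # satisfy the request

# The server listens on an ephemeral local port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,)).start()

# The client sends a request, then blocks until the reply arrives.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"data please")
    reply = client.recv(1024)

listener.close()
print(reply)
```

Note how the client does nothing between sending the request and receiving the reply: it simply waits, exactly as the model describes.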
HTTP and client/server
• With HTTP the client is the user’s Web browser
and the server is the Web server.
• To open a session, the browser sends a request
to the server that holds the desired web page.
• The server replies by sending back the page or an
error message if the page could not be found.
• After the client verifies that the response sent was
correct, the TCP/IP connection is closed and
the HTTP session ends.
• Each new page that is desired will result in a new
HTTP session and another TCP/IP connection.
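Both the browser's request and the server's reply are plain text. A sketch of an HTTP/1.0 exchange (the exact header set varies between browsers and servers):

```python
# What a browser might send to open a session (HTTP/1.0 style):
request = (
    "GET /index.html HTTP/1.0\r\n"
    "Host: www.cs.depaul.edu\r\n"
    "\r\n"  # blank line ends the request headers
)

# A success reply starts with a status line; 404 would signal "not found".
response = "HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n<html>...</html>"

status_line = response.split("\r\n", 1)[0]
version, code, reason = status_line.split(" ", 2)
assert code == "200" and reason == "OK"
```

The status code on the reply's first line is how the client "verifies that the response sent was correct" before closing the connection.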
One page, multiple requests
• If a Web page contains objects such as movies,
sound, or graphics, a client must make a
request for each object.
• For example, a Web page containing a background
sound and three graphics will result in
five separate server request messages to retrieve
the four objects plus the page itself.
Internet addresses
Internet addresses are represented in several ways,
but all the formats are translated to a 32-bit number
called an IP address.

The increased demand for IP addresses will soon
make 32-bit addresses too small, and they will be
replaced with 128-bit (IPv6) addresses in the near
future. See the links page for more information.

How does increasing the number of bits in the
address help with increasing demand?
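Each additional bit doubles the number of distinct addresses, so moving from 32 to 128 bits multiplies the address space by 2^96:

```python
ipv4_addresses = 2 ** 32    # 4,294,967,296 (about 4.3 billion)
ipv6_addresses = 2 ** 128   # about 3.4 * 10**38

assert ipv4_addresses == 4_294_967_296
assert ipv6_addresses // ipv4_addresses == 2 ** 96  # 96 extra bits
```

About 4.3 billion addresses is fewer than one per person on Earth, which is why the 32-bit space runs short.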
Dotted quads
• An IP number appears as a series of four numbers
separated by periods.
• Examples:
students.depaul.edu: 140.192.1.100
condor.depaul.edu: 140.192.1.6
facweb.cs.depaul.edu: 140.192.33.6
• Each of the four numbers can range from 0 to
255, so the possible IP addresses range from
0.0.0.0 to 255.255.255.255
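The dotted quad is just a readable rendering of the underlying 32-bit number. A sketch of the conversion with bit-shifts (Python's ipaddress module does this for real):

```python
def quad_to_int(dotted: str) -> int:
    """Pack the four 0-255 numbers into one 32-bit integer."""
    a, b, c, d = (int(part) for part in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def int_to_quad(n: int) -> str:
    """Unpack a 32-bit integer back into dotted-quad form."""
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

# students.depaul.edu from the slide:
n = quad_to_int("140.192.1.100")
assert int_to_quad(n) == "140.192.1.100"
assert quad_to_int("255.255.255.255") == 2 ** 32 - 1
```

Each of the four numbers occupies one byte (8 bits) of the 32-bit address, which is why each ranges from 0 to 255.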
Domain names
• Since IP numbers can be difficult for humans to
remember, domain names are associated with
each IP address.
• Examples:
students.depaul.edu: 140.192.1.100
facweb.cs.depaul.edu: 140.192.33.6
• A domain name server is responsible for the
mapping between domain names and IP
addresses.
Uniform resource locator
• People on the Web use a naming convention
called the uniform resource locator (URL).
• A URL consists of at least two and as many as
four parts.
• A simple two part URL contains the protocol
used to access the resource followed by the
location of the resource.
Example: http://www.cs.depaul.edu/
• A more complex URL may have a file name
and a path where the file can be found.
A URL deconstructed

http://facweb.cs.depaul.edu/asettle/ect250/section602/hw/assign2.htm

The URL breaks down as follows:
• Protocol: http:// (hypertext transfer protocol)
• Domain name: facweb.cs.depaul.edu
• Path: /asettle/ect250/section602/hw/ (indicates the
location of the document in the host's file system)
• Document: assign2.htm
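Python's standard library can split a URL into these parts:

```python
from urllib.parse import urlparse

url = "http://facweb.cs.depaul.edu/asettle/ect250/section602/hw/assign2.htm"
parts = urlparse(url)

assert parts.scheme == "http"                  # protocol
assert parts.netloc == "facweb.cs.depaul.edu"  # domain name
assert parts.path.endswith("/hw/assign2.htm")  # path plus document
```

A two-part URL like http://www.cs.depaul.edu/ parses the same way, just with an empty (or "/") path.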
Anatomy of an e-mail address

asettle@cs.depaul.edu

The address breaks down as follows:
• Handle: asettle
• Host/server: cs
• Domain: depaul
• Domain type: edu

Other hosts at depaul.edu: students, hawk, condor
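Splitting an address into these parts takes only a few lines:

```python
address = "asettle@cs.depaul.edu"

handle, domain = address.split("@")  # handle is everything before the @
labels = domain.split(".")           # ['cs', 'depaul', 'edu']

assert handle == "asettle"
assert labels[0] == "cs"     # host/server
assert labels[-1] == "edu"   # domain type
```

The same handle on a different host would give a different address, e.g. asettle@students.depaul.edu.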
Domain types
• edu: educational
• com: commercial
• net: originally for telecommunications
• org: organizations (non-profit)
• gov: U.S. government
• jp, uk, de, … : Nations other than the U.S.
• New additions: info, biz, name, pro, museum,
coop, aero, tv. See links page for a related
news story.
Internet utility programs
TCP/IP supports a variety of utility programs that
allow people to use the Internet more efficiently.

These utility programs include:
• Finger
• Ping
Architecture of A Web Based
E-Commerce System
E-Commerce hardware and
Software
Revisiting the Three Tier
Model
First Tier – Web Client

It provides a web-based GUI displayed through
a web browser on the client computer.
Second Tier – Server side
Applications
It consists of server-side applications that run
on a web server or a dedicated application
server. These applications implement the
business logic of the web system.
Major factors: efficiency, security, cost
effectiveness, and compatibility
CGI: Common Gateway Interface
ASP: Active Server Pages
Java Servlets
Third Tier – Database
Management System
It provides data storage and retrieval services for the
second tier so that dynamic web pages can be created.
It may consist of one database or a group of databases.
For this we need database connectivity.
One of the most popular methods is the JDBC-ODBC
bridge. Others are proprietary network protocol
drivers and native API drivers.
To communicate with a database, we use SQL.
Web servers
• The components of a web server are:
– Hardware
– Software
• When determining what sort of server hardware
and software to use you have to consider:
– Size of the site
– Purpose of the site
– Traffic on the site
• A small, noncommercial Web site will require
fewer resources than a large, commercial site.
The role of a web server
• Facilitates business
– Business to business transactions
– Business to customer transactions
• Hosts company applications
• Part of the communications infrastructure

Poor decisions about web server platforms can
have a negative impact on a company. This is
particularly true for purely online ("dot-com")
companies.
Hosting considerations
Will the site be hosted in-house or by a provider?
Factors to consider:
• The bandwidth and availability needed for the
expected size, traffic, and sales of the site
• Scalability: If the Web site needs to grow or has
a sudden increase in traffic, can the provider
still handle it?
• Personnel requirements or restraints
• Budget and cost effectiveness of the solution
• Target audience: Business-to-customer (B2C) or
business-to-business (B2B)
Types of Web sites
• Development sites: A test site; low-cost
• Intranets: Available internally only

• B2B and B2C commerce sites
• Content delivery sites

Each type of site has a different purpose,
requires different hardware and software,
and incurs varying costs.
Commerce sites
Commerce sites must be available 24 hours a day,
7 days a week. Requirements include:
• Reliable servers
• Backup servers for high availability
• Efficient and easily upgraded software
• Security software
• Database connectivity

B2B sites also require certificate servers to issue
and analyze electronic authentication information.
Content delivery site
• Examples:
 USA Today
 New York Times
 ZDNet
• Sell and deliver content: news, summaries,
histories, other digital information.
• Hardware requirements are similar to the
commerce sites.
• Database access must be efficient.
What is Web hosting?
Web hosts are Internet service providers who also
allow access to:
• E-commerce software
• Storage space
• E-commerce expertise

You can choose:
• Managed hosting: the service provider manages
the operation and oversight of all servers
• Unmanaged hosting: the customer must maintain
and oversee all servers
Benefits
• Cost effective for small companies or those without
in-house technical staff.
• May require less investment in hardware/software.
• Can eliminate the need to hire and oversee technical
personnel.
• Make sure that the site is scalable.
Services provided
• Access to hardware, software, personnel
• Domain name, IP address
• Disk storage
• Template pages to use for designing the site
• E-mail service
• Use of FTP to upload and download information
• Shopping cart software
• Multimedia extensions (sound, animation, movies)
• Secure credit card processing
Summary
• ISPs have Web hosting expertise that small or
medium-sized companies may not.
• Creating and maintaining a Web site using an
existing network can be difficult.
• With the exception of large companies with large
Web sites and in-house computer experts, it is
almost always cheaper to use outside Web
hosting services.
Examples
• EZ Webhost
• Interland
• HostPro
• HostIndex
 Managed hosting
 Other hosting options
• TopHosts.com
