
Networking Basics

This section outlines the basic concepts required to understand a
computer network and its fundamentals.

Definition of a network

Network. A connected set of devices and end systems (computers,
servers, etc.) that can communicate with each other:

Networks carry data.
Networks connect elements within the same location or
among different locations.

Networks can be found in different environments: homes, small
offices, corporate offices, etc.

(1) Corporate office/Main office. All employees are connected
through a network where the corporate information servers
usually sit.
(2) Remote Locations. Connect to the main office in a variety of
ways:
Branch offices. Smaller locations where employees use
local network resources to communicate among
themselves and also get information from the Main office
by means of the network connecting the Branch office
and the Main office.
Home office. Employees work from their home (home
office). They require connections to the Main/Branch
offices.
Mobile users. Employees need connectivity to the Main
office while travelling.

Sometimes small offices and home offices are grouped under the term
SOHO (Small Office, Home Office).

By means of a network, users have access to remotely located
information, data and applications, and can share resources like
printers, network storage devices or backup devices. By sharing
resources, networks provide efficiency and cost savings.
The four key elements of a network are:

The rules.
The messages.
The media. And,
The devices.

Characteristics of a Network.
The main characteristics of a network are as follows:

Cost. Amount of money paid for the elements of the network,
the installation and the maintenance.
Speed. How fast data is transmitted among the end points,
known as data rate. Measured in bps (bits per second) or
multiples of bps such as Mbps (Megabits per second).
Topology. Physical arrangement of the network and its
components (physical topology), or path followed by data
within the network (logical topology).
Availability. Probability that the network is operational. It is
usually calculated by dividing the minutes the network has been
available by the total minutes of the period of time being
considered and then multiplying by 100, given as a percentage
(see the worked example after this list).
Reliability. Dependability of network components and media,
measured as probability of failure or mean time between
failures (MTBF).
Scalability. Ability of the network to cope with growth in terms
of number of users or connectivity requirements.
Security. Protection of network elements and the data
contained on them or travelling through the network.
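As a quick illustration of the availability calculation above, here is a minimal Python sketch; the period length and downtime figures are made-up sample values:

# Availability as the percentage of time the network was operational.
minutes_in_period = 30 * 24 * 60      # a 30-day month (assumed observation period)
downtime_minutes = 43                 # hypothetical total outage time in the period
available_minutes = minutes_in_period - downtime_minutes

availability = (available_minutes / minutes_in_period) * 100
print(f"Availability: {availability:.3f}%")   # -> Availability: 99.900%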

Network Physical components


The main physical components of a network are:

1. Servers and Computers. Endpoints of the network. Transmit
and receive data.
2. Interconnections: Provide means for data to travel across the
network. There are three types of interconnections:
a. Type 1: NIC (Network Interface Card). Connects
computers and servers to the network media.
b. Type 2: Network Media (cable/wireless). Transports data
as electrical or electromagnetic signals.
c. Type 3: Connectors. Connection points for the media.
3. Switches. Provide local connectivity to end systems and move
data from one end system to another.
4. Routers. Interconnect networks by selecting the best path
among all the existing paths between the networks.

Resources Shared in a Network.


The major resources shared in a network are:

1. Data and Applications. Such as e-mail, web browsers, instant
messaging, databases, etc.
2. Physical resources. Such as servers, printers, switches,
routers, etc.
3. Storage devices. Used to make information available to all the
network users. There are different types of storage devices.
a. Direct Attached Storage (DAS). Storage units connected to
a server accessible to network users.
b. Network Attached Storage (NAS). Storage units directly
accessible through the network.
c. Storage Area Networks (SAN). Networks of storage units
accessible through the network.
4. Backup devices. Tapes, drives or storage units used to make a
recoverable copy of user information to be used in case of
disaster.
Network Topology.

The Network Topology defines how devices interconnect among
themselves. There are two types of topology: physical and logical.

1. Physical Topology refers to the way devices interconnect from
the physical perspective: cabling layout. There are three main
types of physical topology:

Bus. Devices are cabled together in a line sharing a single
cable.
Ring. Devices are cabled together in a line, but the last one
connects to the first in the line forming a ring. We can have a
single ring or a dual ring, for redundancy purposes.
Star. A central device connects all the other network
devices. We can have a single star or an extended star,
where some nodes act as centers of interconnection for
smaller stars.

2. Logical Topology refers to the way information flows in a
network.

Physical and Logical topologies in a network can be the same
or can be different.
A network could be connected in a star topology but work as a ring
(information travels from one station to another and, when reaching
the last one, moves to the first). By observing the physical
layout of a network, the logical topology cannot be predicted.

There are different types of Logical topologies:

Bus. In a bus topology devices are logically connected in a
single line. Information sent by one device is seen by all the
others.
Ring. In a ring topology devices are logically connected one to
another in the form of a ring or circle. The ring can be single or
dual.
Star. In a star topology devices are logically connected through
a central device.
Full Mesh. All devices are connected to each other for
redundancy.
Partial Mesh. Devices are connected to each other but not all
of them are interconnected.

Network Media

Media carries information in the form of electrical or electromagnetic
signals. The process of translating the information into voltages in a
cable or waves in a fiber is called encoding.
There are different types of media:

Copper. Information travels encoded as electrical voltages.
Copper cables can be classified as:
UTP (Unshielded Twisted Pair) - Copper pairs are not
wrapped in a shielding foil.
STP (Shielded Twisted Pair) - Copper pairs are
individually wrapped in a shielding foil.
Fiber Optics. Information travels as electromagnetic
waves (light) over plastic or glass cables.
Wireless. Information travels over the air in the form of
electromagnetic waves.

Each media type has different characteristics that need to be
considered during the selection of the appropriate media:

Cable Length. Copper cable spans up to 100 m while fiber spans
up to several kilometers.
Cost. Covers the installation and maintenance cost.
Bandwidth. Typically, fiber has higher bandwidth than copper,
although copper cables with different bandwidths exist.
Installation Requirements. Each media has a different
installation complexity. Fiber is usually more complex to install
than copper.
Electromagnetic Interference (EMI) or Radio Frequency
Interference (RFI). Fiber is not affected by interference (EMI
or RFI), while copper usually gets affected by other sources of
electromagnetic waves.

Ethernet Network Media.


The type of Ethernet technology used by a network is described by
means of an acronym that has the following format:

<SPEED>BASE-<MEDIA/CABLE TYPE>
<Speed>. Refers to the actual speed of the network in Mbps or
Gbps.
<Media/Cable Type>. Refers to the type of cable and media
used by the network.

Examples of Technology types are as follows:

100BASE-TX. 100 Mbps over Cat 5 UTP cable reaching a maximum
distance of 100 m.
Trick. In order to remember which fiber technology has the longest
reach, think about the following:
SX=Short = Multi-mode.
LX=Long = Mono-mode.
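As a quick illustration of the naming convention above, the following Python sketch splits a technology name into its speed and media parts; the helper name and the sample strings are only for illustration:

# Split an Ethernet technology name of the form <SPEED>BASE-<MEDIA/CABLE TYPE>.
def parse_ethernet_name(name):
    speed, media = name.split("BASE-")
    return speed, media

for tech in ("10BASE-T", "100BASE-TX", "1000BASE-LX"):
    speed, media = parse_ethernet_name(tech)
    print(f"{tech}: speed token = {speed}, media/cable type = {media}")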

Ethernet Collision Domain and Broadcast Domain

Ethernet has become the de-facto standard for LAN Technology.

In Ethernet networks, different devices connected to the same
network can transmit information at the same time. When this
happens, information gets mixed in the physical media and it is no
longer understandable by any of the connected devices. This is
known as a collision. After a collision happens, the end devices wait for
a random period of time before retrying the transmission, so that the
probability of another collision is reduced on retransmission.

All devices subject to collision (connected in a way collisions might
happen) belong to what is called a collision domain.

In Ethernet networks, information transiting the network is seen by
all the end devices connected to the media, even when information is
only addressed to a specific device within the network. There is also
information addressed to ALL devices in the network. This information
is known as broadcast information. When a device transmits
broadcast information, all the devices able to receive this information
belong to what is called a broadcast domain.

Sometimes a broadcast domain and a collision domain are the same.
On other occasions, a broadcast domain can contain multiple
collision domains depending on how devices get connected to the
network: by means of a bridge, a switch or a router.

Ethernet Collision Domain, Broadcast Domain and Connectivity devices.
Bridges and Switches aggregate devices into a single network.

A bridge serves as an extension of the physical media, but each
of its ports acts as a different collision domain, although all of
them belong to the same broadcast domain. Bridges were used in the
past to isolate collision domains, increasing the performance of the
networks. Those devices had a limited number of ports.
In other words, when two or more devices connected to different
ports of a bridge transmit at the same time, a collision won't happen.

All broadcast information sent by any of the devices connected to a
bridge will be seen by all the other connected devices.

A switch acts as an intelligent device by forwarding information
from one port to another and therefore avoiding collisions among
devices connected to different ports. All devices connected to a
switch are in the same broadcast domain, while each port represents
a different collision domain.

In other words, when two or more devices connected to a switch
transmit at the same time, the switch will move information from one
port to another avoiding collisions. Switching is performed by using a
switching table that maps each device to the port where the
device is connected. All broadcast information sent by any of the
devices connected to a switch will be seen by all the other connected
devices.
A router connects networks. Each port of a router is a separate
network and therefore a different collision and broadcast domain.

Broadcasts never get transmitted across a router.

Criteria to select devices to meet a network specification.


Switches.

Hubs and Switches are used to connect end devices to a LAN.

Hubs were used in small networks and where cost was an issue;
nowadays hubs are not deployed any more.
Switches allow connecting devices to a network while
segmenting collision domains and providing additional security.

There are different key factors to be considered when selecting a
switch:

Cost. Depends on number of ports and device capabilities
(hardware and software).
Interface Characteristics. Enough number of ports to cover
the requirements as well as ports using the same LAN
technology as the devices to be connected (physical media,
speed, connectors, etc.).
Hierarchical Network Layer. Switches at different positions of
the network hierarchy have different requirements (see next
topics).

Routers:
Routers are used to interconnect networks (LANs and WANs).
There are different key factors to be considered when selecting a
router:

Expandability. Capability to add additional interfaces to a
router so that new networks can be connected.
Media. Type of interfaces required to connect the different
networks.
Operating System Features. Such as Quality of Service,
Security and additional services support.

How to connect devices to a Network. Types of cables.

Devices are connected in pairs (Computer to Switch or Switch to
Router, for example). In order for the information (electrical signals) to
flow from one device to another, whatever is transmitted by one
device needs to be received by the other. This, practically speaking,
forces a complementary pin-out between devices that are
most commonly connected. By complementary we mean that
Transmit pins (TX) and Receive pins (RX) are arranged in a
way that a cable with no crossings (a Straight-Through Cable) can be
used.
Sometimes it is still required to connect devices with the same pin-out,
for example router to router. In this case, as the arrangement of the
pins would not be correct (TX would not be facing RX), a cable
performing the crossing is required and therefore a Crossover
Cable is used.

In general, the rules are as follows:

Connecting devices with different layouts of the pins (TX facing
RX and RX facing TX) is achieved through a straight-through cable:
computer to hub, computer to switch, etc.
Connecting devices with the same layout of the pins (TX facing
TX and RX facing RX) requires a crossover cable: router to
router, hub to hub, computer to router, etc.

Tip. If you were wondering why at home you use a straight-through
cable to connect to your broadband router instead of a
crossover, the answer is not so obvious. In reality, the ports used to
connect your home computers to broadband routers act as
switch ports and, therefore, they have the same pin-out as a switch.
As we have learnt, the straight-through cable is then the most appropriate.

Local Area Network versus Wide Area Network.


Local Area Network (LAN) is a network restricted to a limited
geographical area. LANs can vary in size from a big campus network
to a small branch office network. The key elements of a LAN are as
follows:

- Computers.
- Interconnections: Network Interface Cards (NICs) and the media.
- Networking devices (hubs, switches and routers).
- Protocols (Ethernet, IP, TCP, etc.).

LANs connect multiple types of devices: servers, printers, bridges,
switches, routers, firewalls, voice gateways, etc.

In order to build a Local Area Network (LAN) different technologies
can be used, such as Ethernet, Fast Ethernet (FE), Gigabit Ethernet
(GE), Token Ring, Fiber Distributed Data Interface (FDDI). Each LAN
technology works at a different speed and may use different physical
media and communication protocols.

Ethernet is the most commonly used LAN technology in any of
its flavors:

Ethernet: speed of 10 Mbps (Megabits per second).
Fast Ethernet: speed of 100 Mbps.
Gigabit Ethernet: 1000 Mbps = 1 Gbps (Gigabits per second).

Wide Area Network (WAN) is a network used to interconnect
different locations. In essence, a WAN connects different LANs
allowing inter-LAN communication. A set of connected LANs is called
an internetwork. The Internet is a public internetwork. Private
internetworks are usually referred to as intranets.

In order to build a Wide Area Network different WAN technologies can
be used, such as Asynchronous Transfer Mode (ATM), Frame Relay
(FR), Integrated Services Digital Network (ISDN), analog dialup, etc.
Each technology may use different interface types, allow connectivity
at different speeds, be always available or operate only on demand,
and use different communication protocols.

Connecting to a WAN can be done in any of the following ways:

RJ45 connection to a dialup or DSL modem.
Coaxial cable to a cable modem.
60-pin serial connector to a Channel Service Unit (CSU)/Data
Service Unit (DSU).
RJ45 T1 controller to a CSU/DSU.

An interface is the point of attachment into a network. If the interface
connects into a LAN it is called a LAN interface. If the point connects
into a WAN it is called a WAN interface.

WAN networks usually work at lower speeds than LAN networks.

WAN networks are usually provided through a Service Provider
infrastructure.

Teleworkers

Teleworkers is a term defining employees of a company working
remotely (not in an office). Teleworkers usually require access to the
corporate network and to the resources located on it.

This remote connectivity can be achieved in different ways:

Through a private WAN network (usually owned by a
service provider): Frame Relay (FR), ATM or dedicated
connection circuits (Leased Lines).
Through a router-to-router IPsec based tunnel, i.e. a
secure connection over an existing network.
Through a VPN access (tunnel) established over the
Internet.

For these connections to be established, the right devices and/or
software are needed at the home office location, but also at the
entry point into the corporate network.

At the home office location we usually have broadband routers, VPN
software clients, FR/ATM capable routers, etc.

At the corporate office we usually have VPN capable routers,
FR/ATM capable routers, security appliances, servers containing user
profiles required during authentication, etc.

Hierarchical Model Design.

Networks are designed using a hierarchical and modular
approach to facilitate scalability and performance. Each layer of the
model has a different function within the network.

Access Layer. Provides access to end users and devices and
also connects to other networks such as the Internet or WAN
connections. Users connecting through the Telephone Network
also get aggregated through the access layer.
Distribution Layer. Connects core and access layers in a
redundant fashion.
Core Layer. Acts as Network Backbone and is usually formed
by High Speed devices and connections.

Different types of network devices are usually deployed at the
different layers:

Access Layer.

Routers.
Telephone/VPN Concentrators.
Switches having the following characteristics: Port security,
VLAN support, Different speed Interfaces (Fast/Giga Ethernet),
Power over Ethernet (IP Phones), Link Aggregation, Quality of
Service (QoS).

Distribution Layer.

Switches having the following characteristics: Layer 3 support,
High forwarding rate, Different speed Interfaces (Giga/10 Giga
Ethernet), Redundant components, Link Aggregation, Quality of
Service (QoS) and Security Support.

Core Layer.

Switches having the following characteristics: Layer 3 support,
Extremely high forwarding rate, Different speed Interfaces
(Giga/10 Giga Ethernet), Redundant components, Link
Aggregation, Quality of Service (QoS) and Security Support.

Enterprise Architecture
Cisco has developed what is called an Enterprise Architecture. An
Enterprise Architecture is a generic representation of the most
common layout for enterprise networks showing the different layers
and the structured approach towards providing network connectivity
for companies.

The Enterprise Architecture has the following sub-architectures:

Enterprise Campus: Usually located at the headquarters of
a company, provides connectivity to local users following the
hierarchical model described before (access, distribution,
core). As part of this level, we can usually find the main
datacenter where the corporate servers sit.
Enterprise Data Center: Refers to the set of servers and
server farms where the corporate information is stored and
accessed.
Enterprise Edge: Acts as interconnection point
towards/from the different networks used by the company to
establish connectivity to other locations, the internet or the
company teleworkers.
Enterprise Branch: Refers to remote networks located at
the different company branch offices.
Enterprise Teleworker: Refers to the connectivity provided
to remote users working from home or with a mobile
scheme.
All these subsystems connect through different types of WAN
networks such as Frame Relay (FR), Asynchronous Transfer Mode
(ATM), Internet Service Providers (ISPs) or even through the Public
Switched Telephone Network (PSTN).

Section Summary

A network is a set of devices and end systems interconnected by
the use of a common set of rules, messages and protocols.
Information in a network travels through the media (wired or
wireless) in the form of electromagnetic signals.
If a network connects systems located in a limited geographical
area, it is known as Local Area Network. If the network spans
multiple geographical areas it is called a Wide Area Network.
Wide Area Networks connect local area networks forming what
is called an internetwork. Internet is a public internetwork.
The de facto standard for Local Area Networks is Ethernet, which
uses specific protocols and messages. There are different
flavors of Ethernet depending on the speed and the physical
media being used.
Each network has a different physical topology (way elements
are interconnected) and a logical topology (way information
travels through the network). Among the most important
topologies we can find the bus, the ring, the star and the mesh.
Devices and Systems are shown in network diagrams by using
specific icons referring to them.
Networks are structured in a hierarchical model where three
main layers are considered: Access, Distribution and Core. Each
layer has a function and different connectivity devices are used
in each area.
Cisco has defined an "Enterprise Architecture"
describing the way corporate networks are built. The
Enterprise Architecture has different sub-architectures: Campus,
Data Center, Edge, Branch and Teleworker.
Layered Architectures
Modern networks are based on a layered architecture that splits the
functions involved in an end to end communication among different
layers. Networking layered architectures were born as part of the
normalization effort performed by the ISO.

Definition of protocol, encapsulation.

In order for human beings to communicate, they must speak the
same language. The exact same concept applies to computers and
networks. Computers can only communicate through a network when
they follow the same set of rules and format the messages in a way
they can be understood by the receiving party. In other words, they
use the same protocol.

Protocols are sets of rules defining how systems can communicate
among themselves. Protocols define the syntax of the messages, the
sequencing of messages, the means to recover from communication
errors, etc.

When two systems are able to communicate because they use the
same set of protocols we usually say they can interoperate.
Protocols add additional information to the user data in a way
the delivery of the information from the source to the destination can
be accomplished. This additional information is protocol specific (for
example message sequence number) and it is called overhead.
There are two types of overhead:

Header: Refers to overhead added before the user data


Trailer: Refers to overhead added after the user data

In a layered architecture (when multiple levels of protocols are used)
the user data gets multiple headers/trailers before it is transmitted
into the physical media.

Encapsulation is the process of wrapping data (adding headers and
trailers) with the required protocol information that makes the
communication from source to destination possible.

Encapsulation is like putting data into an envelope. This envelope
contains information not relevant to the data being transmitted itself,
but information that will allow the user data to reach its
destination.

OSI Model. Layered Architecture.


The OSI model was released in 1984 as an attempt to standardize the
protocols being used for internetworking. OSI stands for Open
Systems Interconnection and is an International Standards
Organization (ISO) standard.

The term open is the opposite of proprietary. In the initial stages of
networking, each company developed a different set of protocols
to allow systems to communicate and interchange information. These
proprietary protocols did not allow systems from different vendors to
interoperate. Therefore, there was no open communications
standard.

ISO wanted the OSI model to become the open standard that
everyone would follow in a world where all computers and systems
could communicate. Finally TCP/IP became the de facto standard
that everyone follows, but still OSI Model is used to illustrate how
protocols work in a layered architecture.

A layered architecture is a set of protocols, acting one after another,
used to format the information sent to the physical media. At
each layer overhead information is added in the format of
headers/trailers to the data coming from the previous layer. This
overhead information is control information used to deliver the user
data to the destination.
At each of the levels the information containing the user data as well
as all the overhead is called Protocol Data Unit (PDU). This generic
name applies to all levels. In addition, at certain levels, the PDUs have
specific names. The PDU at Transport level is called segment. The
PDU at Network level is called packet and the PDU at Data Link level
is called Frame. Finally, the PDU at Physical level is a set of bits that
will be transmitted through the physical media.

The seven layers of the OSI model are as follows:

- Application (7): Provides the interface between the user
application and the networking services. Deals with
authentication and availability of communication partners.
- Presentation (6): Defines the format and organization of the
data being transmitted in a way it can be understood by the
destination. Also deals with encryption. Presentation layer can
also translate between the formats being used by source and
destination to make the communication possible.
- Session (5): Establishes, manages and terminates sessions
between two communicating systems. Synchronizes dialogue
between the presentation layers of the two communicating
systems and manages their data exchange.
- Transport (4): Deals with fragmentation and reassembly of
data when required for transmission over the physical media.
Provides a data transport service to the upper layers dealing
with reliability, flow control, error detection and recovery.
- Network (3): Provides connectivity and path selection between
two hosts in the network. Also deals with host unique
addressing in a network/internetwork.
- Data Link (2): Formats data to be transmitted into the physical
media, defines how access to the physical media is gained in a
shared media environment and provides error recovery mechanisms.
- Physical (1): Defines the electrical, mechanical, procedural
and functional aspects of the transmission over the physical
media. Aspects such as voltage levels, pin-outs, types of
connectors, etc. are defined at this layer.

There are certain benefits of defining a layered communications
architecture. The most relevant are as follows:

- Reduced complexity: Complex communications functions
are divided into simpler parts.
- Modular design approach: Each of the parts of the model
can be designed independently.
- Standard Interfaces: By following the same architecture a set
of standard interfaces are defined that will facilitate
interoperability.
- Supports interoperability: Adherence to a standard
architecture makes vendors interoperable, i.e. systems from
different vendors can communicate to each other.
- Accelerates development of hardware/software: Modular
design and reduced complexity translate into a faster
development cycle for hardware and software.
- Makes it easier to understand the way communication
among systems works. OSI model facilitates the understanding
of how telecom systems work.

Data Communications Process

Information gets originated at a source and is sent to a destination.

The OSI application layer communicates with the end user
application while the Physical Layer communicates with the physical
media. User data traverses the entire OSI stack and each layer adds
specific information to the data received from the upper layer in the
form of headers or trailers that will only be understood by the peer
layer at the destination side.

The process of passing data down the stack and adding headers and
trailers is called encapsulation.
By means of encapsulation peer layers (layers at the same OSI level
at source and destination) interchange control information and critical
parameters for the communication.

The entire process works as illustrated in the diagram:

1. User data is sent by an application to the application layer.


2. Application layer adds the application layer header (AH) and
passes it down to the presentation layer.
3. Presentation layer adds the presentation layer (PH) header and
passes it down to the session layer.
4. Session layer adds the session layer header (SH) and passes it
down to the transport layer.
5. Transport layer adds the transport layer header (TH) and data
becomes a segment. The segment is passed down to the
network layer.
6. Network layer adds the network layer header (NH) and data
becomes a packet. The packet is passed down to the data link
layer.
7. Data link layer adds the data link layer header (DH) and a trailer
called Frame Check Sequence (FCS) used to detect whether the
data travelled error free through the physical media. The data
becomes a frame. The frame is passed down to the physical
layer.
8. Physical layer transmits the bits into the physical media.

For simplicity we have not illustrated any trailer except the FCS.
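The following Python sketch mimics this encapsulation and de-encapsulation flow. The layer abbreviations follow the steps above, but the text "headers", the FCS tag and the helper names are purely illustrative, not a real protocol implementation:

# Illustrative only: headers are plain text tags, not real protocol headers.
LAYERS = ["AH", "PH", "SH", "TH", "NH", "DH"]  # application ... data link headers

def encapsulate(user_data):
    # Wrap the user data with one header per layer, then add an FCS-like trailer.
    pdu = user_data
    for header in LAYERS:
        pdu = f"{header}|{pdu}"
    return f"{pdu}|FCS"

def de_encapsulate(frame):
    # At the receiver: strip the trailer, then the headers in reverse order.
    pdu, _fcs = frame.rsplit("|FCS", 1)
    for header in reversed(LAYERS):
        assert pdu.startswith(header + "|"), f"missing {header} header"
        pdu = pdu[len(header) + 1:]
    return pdu

frame = encapsulate("hello")
print(frame)                  # -> DH|NH|TH|SH|PH|AH|hello|FCS
print(de_encapsulate(frame))  # -> hello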

As described before, each OSI layer has different functions and,
therefore, the information contained in the header of a layer is related
to the specific function of this layer.

For example, if the transport layer fragments data to accommodate it
to the underlying technology being used for transmission, the
transport layer header (TH) will contain information allowing the
unique identification of the fragments in a way that reassembly can
be done at the destination. Equally, if the presentation layer is in charge
of the data format, enough information for the receiver to interpret
the data will be included in the presentation layer header (PH).

At the receiver side, the opposite process takes place. The receiver
de-encapsulates the information received through the media:

1. Physical layer receives the bits from the physical media.


2. Data link layer checks the FCS to see if any errors may have
happened during the transit of the bits over the media. If errors
are found, the recovery procedures are triggered (for example
requesting a data link level retransmission), otherwise strips the
data link layer header (DH) and passes the remaining data up to
the network layer.
3. The process gets repeated for each of the layers (stripping
headers and passing data up) until it reaches the application
layer that sends the user data into the application.

Peer to Peer Communication

Each layer of the OSI model at the source communicates with its peer
layer at the destination in order for the data to travel across the
network. This form of communication is called peer-to-peer
communication.

When data travels down the OSI model stack, each layer encapsulates
the data received from the upper layer, building a Protocol Data Unit
(PDU) by adding headers/trailers whose information is only relevant to
its peer layer at the destination side.

Each layer offers services to the upper layer that are achieved
by means of the headers/trailers being added to the data received
from the upper layer.
For example, the transport layer takes data from the session layer and
fragments this data if required. Once the data is fragmented, the
corresponding header is added where each fragment is identified. In
this example, the transport layer offers the fragmentation service to
the session layer and adds a header that will only be understood and
used by the peer transport layer at the destination side. The transport
layer at the destination side will use the information contained in the
header to identify each fragment and send the complete data upwards
to the receiving session layer.

As each of the headers being added follows a set of predefined rules
governing the communication between the peer layers, we can say
that peer layers communicate using specific layer protocols.

Hubs work only at layer 1 (physical); switches work at layer 2
(data link).

TCP/IP Model. Layered Architecture

The TCP/IP suite is a layered architecture similar to the OSI reference
model although it was created by a different organization. The name
TCP/IP refers to the two main protocols being used by the model:
Internet Protocol (IP) and Transmission Control Protocol (TCP).

The TCP/IP model defines four different layers performing the following
functions:
Application Layer: This layer handles high level protocols
and deals with data representation, encoding and the control
of the dialog between the parties.
Transport Layer: This layer deals with fragmentation,
reliability, flow control and error correction.
Internet Layer: This layer is in charge of routing packets
through the network from source to destination. It also deals
with host addressing.
Network Access Layer: This layer handles the media and
media specific protocols (LAN and WAN). Aspects such as
media access control, error recovery, etc. are part of the
network access layer functions. It includes the functionality
of the OSI data link and physical layers.

Each layer uses a number of protocols (same as in the OSI model) to
perform its function. Some of the most relevant TCP/IP protocols are
listed below:

Application Layer:

Telnet: Used to remotely connect into a host.


Hyper Text Transfer Protocol (HTTP): Used to transfer
information visible through a web client (internet navigation).
File Transfer Protocol (FTP): Allows transference of files
between host systems.
Simple Mail Transfer Protocol (SMTP) / Post Office
Protocol (POP3): Used to send and receive mails to/from a
mail server.

Transport Layer:

Transmission Control Protocol (TCP): Allows the
establishment of reliable connections between host systems.
This protocol has built-in mechanisms to recover from errors
during the transit of the information across the network.
User Datagram Protocol (UDP): Delivers data from source
to destination in a non-reliable way (no recovery
mechanisms built in). Used when data is time sensitive and
recovering from errors does not make sense.
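A minimal Python sketch of the connectionless nature of UDP, using the standard socket module; the destination address and port below are arbitrary examples:

import socket

# Send a single UDP datagram. UDP is connectionless and unreliable:
# there is no handshake and no built-in recovery if the datagram is lost,
# which matches the description above.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 5005))   # hypothetical local receiver
sender.close()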

Network Layer:

Internet Protocol (IP): Allows unique identification of each
host in a network by means of a globally significant identifier
called the IP Address. Once each host has a unique ID, hosts can
be addressed and the path to them over the network can be
discovered (route discovery).
Address Resolution Protocol (ARP): Used to translate
from IP addresses to host hardware addresses (also called
Medium Access Control addresses or MAC addresses).
Internet Control Message Protocol (ICMP): Used to send
network layer control messages. Also used to discover
whether a host can be reached or not over the network
based on its IP address.

Network Access Layer:

Ethernet: De facto standard for LAN technology. Defines the
physical and logical aspects required to use the media
(copper or fiber).
Frame Relay: WAN technology Standard used to
communicate networks over a Service Provider
Infrastructure.

OSI Model vs TCP/IP Model

OSI Model and TCP/IP Model are two different layered architectures
defining how hosts deliver data end to end through a set of protocols
providing different functions required for the communication to take
place.
Both are based on the concept of encapsulation, i.e. data travelling
down the stack and headers/trailers being added on each layer to
support layer specific communication related functions.

In terms of differences we could highlight:

TCP/IP model has an application layer covering the functions
being performed by the session, presentation and application
layers of the OSI model.
TCP/IP model combines OSI data link layer and physical layer
into a single Network Access layer.
TCP/IP model is the de-facto standard for modern networks
while OSI is more at a theoretical level. OSI is not fully
implemented in any system although is a good model to
illustrate the way networks work and the overall
encapsulation process.

In terms of similarities we could highlight:

TCP/IP model and OSI model have application layers, although
in TCP/IP the application layer includes functions from the
OSI session and presentation layers.
TCP/IP model and OSI model have very similar transport and
network layers.
TCP/IP model and OSI model are based on packet-switching
as opposed to circuit switching.

Circuit switching is a communication technique where the physical
media is reserved for a single information flow and carries only
information belonging to this flow (for example, a telephone call
where the line carries only a single conversation and is not released
until the conversation is over). Packet switching is a communication
technique where the physical media carries information from multiple
flows and each flow uses the channel only for the duration of the
transmission of the packets generated by this flow. In that sense, the
physical media is shared among different flows.

Modern LAN and WAN technologies are packet-switching based, where
the media is shared by different flows; hence the need for a
data link or network access layer determining how the media is
accessed and shared.

Section Summary

Devices in a network communicate using a common set of
messages and rules called protocols.
In order for two devices to be able to communicate (inter-
operate), they must use the same protocol.
When messages are transmitted in a network they are
encapsulated in a virtual envelop containing headers and
trailers used by the layers to perform different functions such as
sequencing, error correction, etc.
The information contained in the headers/trailers and the way it
is used defines each layer protocol, which is the set of rules and
messages interchanged among the layers to facilitate the
transit of information through the network.
Information sent in a network goes through a different set of
layers each of them adding specific headers/trailers used to
facilitate the communication process. Each layer header/trailer
information is only relevant to the same layer at the receiving
end. In other words, headers and trailers allow layers to
interchange information in what is called a peer to peer layer
communication.
There are two main relevant layered architectures:
o OSI. The original layered architecture, released in 1984, is
used as a reference but is not usually implemented in its
entirety.
o TCP/IP. De facto-standard for modern networks.
OSI layers are as follows: Application, Presentation, Session,
Transport, Network, Data Link and Physical.
TCP/IP Layers are as follows: Application, Transport, Network
and Network Access.
Each layer has a function which is achieved by the use of
specific layer protocols.
TCP/IP Protocol
This section outlines the main layers of the TCP/IP stack and
focuses on all that is required to understand and manage real
networks running TCP/IP.

TCP/IP Network Layer

The most relevant TCP/IP layer is the network layer. The TCP/IP
network layer has been critical to the success of the stack and, what
is more important, to the success and growth of the Internet.

TCP/IP Stack

The TCP/IP suite is a layered architecture similar to the OSI reference
model although it was created by a different organization, the IETF
(Internet Engineering Task Force), with the objective of standardizing a
previous network concept created by the US Defense Advanced
Research Projects Agency (DARPA). DARPA created what was called
the ARPANET, a high-availability network connecting research
centres, starting in 1969 and based on the same operation principles
that will be described in this chapter. The name TCP/IP refers to the
two main protocols being used by the model: Internet Protocol (IP)
and Transmission Control Protocol (TCP).
TCP/IP model defines four different layers performing the following
functions:

Application Layer: This layer handles high level protocols


and deals with data representation, encoding and the control
of the dialog between the parties.
Transport Layer: This layer deals with fragmentation,
reliability, flow control and error correction.
Internet Layer: This layer is in charge of routing packets
through the network from source to destination. It also deals
with host addressing.
Network Access Layer: This layer handles the media and
the media specific protocols (LAN and WAN). Aspects such as
media access control, error recovery, etc. are part of the
network access layer functions. It includes the functionality
of the OSI data link and physical layers.

Each layer uses a number of protocols (same as in the OSI model) to
perform its function.

The following sections will expand on the functionality of the different
layers and their associated concepts.
TCP/IP Internet Layer

The IP layer or Internet layer provides an unreliable connectionless
packet delivery service to the upper layers:

Unreliable because it tries to send the packets from the
source to the destination but, in case it is not possible, there is
no recovery mechanism built in. The Internet layer will only
try to inform the source that the packet was not delivered by
means of a protocol called Internet Control Message Protocol
(ICMP).
Connectionless because as opposed to other layers, at the
internet layer there is no explicit connection established
between peer internet layers. Packets are sent to the
destination without any connection establishment. Therefore
each packet is handled individually along the network.

The main functions of the Internet Layer are as follows:

Logical Addressing: Each host has an individual address,
unique across the entire network, called the IP Address.
This IP address identifies the host/system.
Routing: Refers to the move of packets across the network
to reach the destination. This move is based on the
knowledge of the network topology and the location of the
different hosts based on their IP addresses.
Encapsulation: Internet layer encapsulates information
received from the transport layer into packets by adding the
corresponding Internet Layer header.
Fragmentation and Reassembly: In case the data
received from the transport layer does not fit into the
network access layer frame, the IP layer fragments the data
and reassembles it at the destination. This is required
because some Network Access layer protocols (such as
Ethernet) have limitations in terms of packet size.
Error Diagnostics: As described before, the Internet layer
does not recover from errors in the packet delivery but has
protocols to notify delivery errors and to exchange
information about possible routing issues.

IP Layer header

As described before, one of the functions of the IP layer is to
encapsulate data coming from the transport layer into a packet. Each
packet has an IP layer header containing, among multiple fields
(shown in the diagram), the IP addresses of the source and the
destination systems.
An IP address is a 32 bit long identifier having a hierarchical
structure where the high order bits (leftmost) represent the network
address portion (Network ID) and the low order bits (rightmost)
represent the host address portion (Host ID).
Within the same internetwork (collection of networks) each network is
identified by its unique Network ID, while within the same network
each host/system is identified by its unique Host ID.

Two hosts on the same network share the Network ID part of the IP
Address.
The address hierarchy is as follows:

Internetwork (collection of networks) -> Network (unique Network ID
within the internetwork) -> Host (unique Host ID within the network)

Each packet carries within the IP header the Source IP Address
(identifier of the host sending the packet) and the Destination IP
Address (identifier of the host that should receive the packet and the
data contained on it). Based on the Destination IP Address, packets
are routed across the network.

IP Address Format

An IP address is a 32 bit long identifier having a hierarchical
structure where the high order bits (leftmost) represent the network
address portion (Network ID) and the low order bits (rightmost)
represent the host address portion (Host ID).

In order to facilitate readability, an IP address is represented in
decimal dotted format. Each octet of the binary IP Address is
converted into decimal and separated by a dot.

In the example shown in the diagram, the Host IP represented in
binary format is as follows:

11000000101010000000000100000001

Taking each octet and converting it into decimal:


11000000 = 192
10101000 = 168
00000001 = 1
00000001 = 1
Resulting into: 192.168.1.1

This way of representing IP addresses is called decimal dotted
notation.

Where the Network ID = 192.168.1 and the Host ID = 1.

The length of the Network ID field (number of bits) depends on the IP
Class.
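A minimal Python sketch of this binary-to-dotted-decimal conversion, using the 32-bit string from the example above:

binary_ip = "11000000101010000000000100000001"   # 192.168.1.1 from the example

# Split the 32-bit string into four octets and convert each one to decimal.
octets = [binary_ip[i:i + 8] for i in range(0, 32, 8)]
dotted = ".".join(str(int(octet, 2)) for octet in octets)
print(dotted)   # -> 192.168.1.1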

Network and Broadcast Addresses


There are two reserved IP addresses on each network:

Network Address: The network address has all the Host ID
bits set to 0. This IP address represents the entire network and
it is used to build routing tables. This address is not
used in the IP headers.

In the example, the address 192.168.1.0 represents the entire
group of hosts included in the network: from 192.168.1.1 to
192.168.1.254.

Broadcast Address: The broadcast address has all the Host ID
bits set to 1. This IP Address is used to send packets to all
the hosts in a given network. When a packet is sent to the
broadcast address, it will be received by all the hosts.

In the example, the address 192.168.1.255 is used in the
IP header when the packet should be received by all the hosts in
the network.

Broadcast packets are never forwarded beyond a single network. In
other words, routers never forward broadcasts and therefore keep
the broadcast local to the network.
In particular, the IP address 255.255.255.255 (all ones) is called
the global broadcast address. Packets sent to this address are
addressing all hosts in all networks.
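The Python standard library's ipaddress module can reproduce these two reserved addresses; a minimal sketch using the example network above:

import ipaddress

# The classful network from the example: 192.168.1.0 with a /24 (class C) prefix.
net = ipaddress.ip_network("192.168.1.0/24")

print(net.network_address)    # -> 192.168.1.0   (all Host ID bits set to 0)
print(net.broadcast_address)  # -> 192.168.1.255 (all Host ID bits set to 1)
print(net.num_addresses - 2)  # -> 254 usable host addresses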

IP Address Classes (1)

The full IP range space has been divided into what is called classes.
Classes are used to accommodate for different network sizes by:

Defining the length (number of bits) of the Network ID part of
the IP Address.
Defining specific ranges of IP Addresses included in each
class.

There are 5 defined classes:

Class A
1 octet (byte) used for the Network ID
First bit of the network ID set to 0 (the range of the Network ID
varies from 00000000 (0) to 01111111 (127)).

The network IDs 0 and 127 are reserved and can not be used.
Therefore there is a total of 126 available networks: Network IDs (1-
126).
The total number of hosts that can be part of a network is 2^24 - 2,
because there are 24 bits available for the Host ID and we need to
take out the 2 reserved IP Addresses per network (Network Address
and Broadcast Address).

Network 127.0.0.0 is called the loopback network and it is used as
the destination IP address when a host wants to send packets to itself.

Class B
2 octets (bytes) used for the Network ID
First bits of the network ID set to 10 (the range of the first octet
varies from 10000000 (128) to 10111111 (191)).

The total number of hosts that can be part of a network is 2^16 - 2,
because there are 16 bits available for the Host ID and we need to
take out the 2 reserved IP Addresses per network (Network Address
and Broadcast Address).

Class C
3 octets (bytes) used for the Network ID
First bits of the network ID set to 110 (the range of the first
octet varies from 11000000 (192) to 11011111 (223)).

The total number of hosts that can be part of a network is 2^8 - 2,
because there are 8 bits available for the Host ID and we need to take
out the 2 reserved IP Addresses.

Class D

Multicast addresses. In this class, the first octet has a range from
11100000 (224) to 11101111 (239) and the remaining bits specify the
multicast group.

Multicast is used to send packets to multiple hosts at the same time.
Each host is able to receive packets addressed to the multicast groups
it is subscribed to.

Class E

Experimental addresses. In this class the first octet has a range
from 11110000 (240) to 11111111 (255). The range of addresses
included in this class is reserved for experimentation and can not be
used in a normal network environment.

Following this class structure, whenever an IP address is provided,
the class type and the number of bits included in the Network ID can
be found just by looking at the first octet of this IP, as the sketch
below shows.
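A minimal sketch, assuming dotted-decimal input, that derives the class letter and the classful Network ID length from the first octet; the function name is just an illustration:

def classful_info(ip):
    # Return (class, Network ID length in bits) based on the first octet.
    first_octet = int(ip.split(".")[0])
    if first_octet <= 127:
        return "A", 8
    if first_octet <= 191:
        return "B", 16
    if first_octet <= 223:
        return "C", 24
    if first_octet <= 239:
        return "D (multicast)", None
    return "E (experimental)", None

print(classful_info("10.0.0.1"))      # -> ('A', 8)
print(classful_info("170.15.0.1"))    # -> ('B', 16)
print(classful_info("192.168.1.1"))   # -> ('C', 24)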
Classful Addressing

This approach, where IP addresses are divided in classes and the
number of bits in the Network ID is determined by the class type, is
called classful addressing.

IP Address Classes (2)

The table below summarizes the different IP classes characteristics.

Class | Leading bits | Network ID bits | Host ID bits | Number of networks | Addresses per network | Start address | End address
A     | 0            | 8               | 24           | 128 (2^7)          | 16,777,216 (2^24)     | 0.0.0.0       | 127.255.255.255
B     | 10           | 16              | 16           | 16,384 (2^14)      | 65,536 (2^16)         | 128.0.0.0     | 191.255.255.255
C     | 110          | 24              | 8            | 2,097,152 (2^21)   | 256 (2^8)             | 192.0.0.0     | 223.255.255.255

Networks and IP Addressing (all together)


The figure shows an internetwork (connection of networks)
comprising three different networks.

Each system/host may have one or multiple interfaces. Each
interface is always attached to a single network. Each interface has a
unique IP Address. One system/host may have multiple IP addresses
(as many as interfaces).

In particular, and looking at the diagram, we can identify the following
networks:

Network A:

Network A has two interfaces attached to it: two router interfaces.

IP Addresses on this network have 10 as first octet and, therefore,
they are class A addresses. As in any other class A address, the
Network ID spans the first octet of the address (10) and the Host ID
is built based on the last 3 octets.

There are 16,777,214 possible Host IDs (if we exclude the reserved
network and broadcast addresses)

The Network Address is constructed by setting all the Host ID bits
to 0:
Net ID: 10 + Host ID: 00000000.00000000.00000000 = 10.0.0.0
The Broadcast Address is constructed by setting all the Host ID bits
to 1:
Net ID: 10 + Host ID: 11111111.11111111.11111111 = 10.255.255.255

Network B:

Network B has three interfaces attached to it: two host interfaces and
one router interface.

IP Addresses on this network have 170 as first octet and, therefore,
they are class B addresses. As in any other class B address, the
Network ID spans the first two octets of the address (170.15) and
the Host ID is built based on the last 2 octets.

There are 65,534 possible Host IDs (if we exclude the reserved
network and broadcast addresses)

The Network Address is constructed by setting all the Host ID bits
to 0:
Net ID: 170.15 + Host ID: 00000000.00000000 = 170.15.0.0
The Broadcast Address is constructed by setting all the Host ID bits
to 1:
Net ID: 170.15 + Host ID: 11111111.11111111 = 170.15.255.255

Network C:

Network C has three interfaces attached to it: two host interfaces and
one router interface.

IP Addresses on this network have 192 as first octet and, therefore,
they are class C addresses. As in any other class C address, the
Network ID spans the first 3 octets of the address (192.168.1) and
the Host ID is built based on the last octet.

There are 254 possible Host IDs (if we exclude the reserved network
and broadcast addresses)

The Network Address is constructed by setting all the Host ID bits
to 0:
Net ID: 192.168.1 + Host ID: 00000000 = 192.168.1.0
The Broadcast Address is constructed by setting all the Host ID bits
to 1:
Net ID: 192.168.1 + Host ID: 11111111 = 192.168.1.255

We are following a classful approach where the length (number of
bits) of the Network ID is defined only by the class to which the IP
Address belongs.
There is one major drawback of the classful approach: the waste of
IP addresses.
If we take Network C as an example, we can see that there are 254
possible host addresses but, given the number of hosts in the system,
only 3 addresses are required. Due to the fact that Network IDs must
be unique within the same internetwork, the unused IPs are wasted.

Public vs Private IP Addresses

IP addresses allow inter-system communication because their use is
based on a number of basic assumptions:

There are no duplicates, i.e. the IP Address assigned to an
interface is unique within the entire internetwork.
Interfaces are grouped around networks in a way that
knowing the location of a network implies knowing the location
of all the interfaces within the network. In other words,
interfaces belonging to the same network are not scattered
along the internetwork.
Two networks could potentially be interconnected.

If any pair of networks could be interconnected, and due to the
uniqueness requirement of IP Addresses within the same
internetwork, each host interface must have a different IP address.
This is especially true in the case of the most important internetwork
(the Internet), where each host should be able to communicate with
any other host.

It is also true that there are networks that will always be isolated
(never connected to other networks). Why can't we use, on those
isolated networks, IP addresses that are used in other locations? Why
can't we reuse IP addresses on isolated networks?
In order to allow reuse of IP addresses (duplication), the IP space has
been divided into two different groups of addresses:

Public IPs: These IP addresses must never be reused. They
are assigned to a single interface across all the
internetworks.
Private IPs: These IP addresses may be reused, i.e. used
multiple times in networks that will never interconnect to
other networks.

Private IP space is defined in RFC 1918. The following ranges have
been defined for private use:

Class A: 10.0.0.0 - 10.255.255.255 (1 network)
Class B: 172.16.0.0 - 172.31.255.255 (16 networks)
Class C: 192.168.0.0 - 192.168.255.255 (256 networks)

Each host in the Internet has a unique public IP address.
Public networks and IP addresses are assigned to entities
(companies, governments, etc.) by IANA (the Internet Assigned
Numbers Authority).

Given the increasing number of hosts connected to the Internet, the
pool of public IP addresses has been exhausted. In other words,
there are not enough public addresses to uniquely identify the
hosts/systems requiring universal connectivity.

In case we need to connect a network using private
addresses to other networks, we can use a mechanism called Network
Address Translation (NAT) that translates (changes) the IP
headers on all the packets leaving the network, replacing the
private IP addresses with public addresses and performing the opposite
conversion on the way back (see diagram).
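As a quick way to check whether an address falls in the RFC 1918 private ranges, the Python standard library can be used; the sample addresses below are just for illustration:

import ipaddress

# Two RFC 1918 private addresses and one public address.
for ip in ("10.1.2.3", "192.168.1.5", "8.8.8.8"):
    kind = "private" if ipaddress.ip_address(ip).is_private else "public"
    print(ip, "is", kind)
# -> 10.1.2.3 is private
# -> 192.168.1.5 is private
# -> 8.8.8.8 is public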

Classless Inter Domain Routing (CIDR)


The way we have defined the IP addresses and their division into
classes is called classful routing.

In classful routing, the following statements apply:

The first octet of an IP address identifies the network class.
Once the network class has been determined, there is a fixed
length for the Network ID and the Host ID fields of the IP
address.

As we already described, operating in classful mode leads to waste in
IP address space because:

Class A and B networks are too big in terms of number of
available hosts.
Class C networks are too small in terms of number of available
hosts.
There are not enough networks to cope with the
current needs in terms of grouping hosts into networks,
especially when it comes to public IPs.

A different approach has been taken to overcome these limitations.
This approach is known as:
Classless Inter Domain Routing (CIDR)
Variable Length Subnet Masking (VLSM)
In this mode of operation, the length of the Network ID and Host
ID is not fixed but variable. Every network number is followed by a
/ sign and a number indicating the length of the Network ID in
number of bits.

The use of CIDR allows us to:

Split a network into smaller chunks called subnets.
Group several networks into a bigger network called a supernet.
The process of grouping networks into a bigger net is also called
summarization.

We will later describe in detail the summarization process.

In CIDR, the network 192.168.1.0/24 represents a network where the
first three octets (8 bits x 3 = 24) are used for the Network ID. Equally,
the network 192.168.1.0/25 represents a network where the first 25
bits (3 octets and 1 additional bit) are used for the Network ID.

There is another important concept associated with CIDR called the
subnet mask.

A subnet mask is a 32-bit vector in which as many leading (top-left) bits are set to
one as there are bits in the Network ID.

In the example above, 192.168.1.0/25 indicates that the Network ID is 25 bits long and,
therefore, the subnet mask is represented by the following vector:

11111111.11111111.11111111.10000000 (25 bits set to one)

As with IP addresses, and for simplicity and readability, the subnet mask is also
translated into decimal notation octet by octet.

In our example, /25 translates into the following dotted decimal subnet mask:
255.255.255.128.

The subnet mask is important to determine whether two hosts/systems belong to the same
subnet. Routers and hosts perform a logical binary AND operation between the subnet
mask and the IP Address to extract the Network ID, which can then be compared with that
of other IPs (two hosts/systems within the same subnet always have the same Network ID).

Imagine we want to determine whether two hosts (192.168.1.5 and 192.168.1.36) belong to
the same subnet when using a /25 subnet mask.

In order to extract the Network ID, we perform the AND operation between the IP Address
and the subnet mask:

192.168.1.5 = 11000000.10101000.00000001.00000101
AND
255.255.255.128 = 11111111.11111111.11111111.10000000
Equals 11000000.10101000.00000001.00000000
=192.168.1.0

192.168.1.0 is the Network ID for this network. As the Network ID is 25 bits long,
there are 7 bits remaining for Host IDs, giving a total of 2^7 - 2 = 126 possible hosts
once we exclude the reserved network and broadcast addresses.

For this subnet the Network address is built by setting all the Host ID
bits to 0:
11000000.10101000.00000001.00000000 =192.168.1.0
For this subnet the Broadcast address is built by setting all the Host ID
bits to 1:
11000000.10101000.00000001.01111111 =192.168.1.127

Summarizing, our subnet is as follows:


Network Address: 192.168.1.0/25
Subnet mask: 255.255.255.128
Broadcast Address: 192.168.1.127
Host Addresses: 192.168.1.1 - 192.168.1.126

Coming back to the original question: does 192.168.1.36 belong to the same subnet?

Let us perform the AND operation between the IP Address and the
subnet mask:

192.168.1.36 = 11000000.10101000.00000001.00100100
AND
255.255.255.128 = 11111111.11111111.11111111.10000000
Equals 11000000.10101000.00000001.00000000
=192.168.1.0

As a result of the operation we obtain the same Network ID and, therefore, 192.168.1.5
and 192.168.1.36 belong to the same subnet.

We already knew this, because based on the given /25 subnet mask we had calculated the
valid host address range within the subnet (192.168.1.1 - 192.168.1.126).
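
The same check can be reproduced with Python's standard ipaddress module. This is a
minimal sketch using the addresses from the worked example; the module performs the
mask/AND comparison internally, and the manual AND at the end mirrors the binary
calculation above.

import ipaddress

subnet = ipaddress.ip_network("192.168.1.0/25")
for host in ("192.168.1.5", "192.168.1.36"):
    print(host, "in", subnet, "->", ipaddress.ip_address(host) in subnet)

# The same test done by hand with a bitwise AND, as in the text:
mask = int(ipaddress.ip_network("192.168.1.0/25").netmask)      # 255.255.255.128
for host in ("192.168.1.5", "192.168.1.36"):
    network_id = int(ipaddress.ip_address(host)) & mask
    print(host, "->", ipaddress.ip_address(network_id))          # both give 192.168.1.0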

Subnetting and Summarization


The use of variable length subnet masks (VLSM), i.e. variable length Network IDs,
allows us to perform what is called subnetting and summarization:

By subnetting we refer to splitting a network into a number of smaller subnets (subnets
containing fewer hosts). Subnetting is achieved by increasing the length of the Network
ID, in other words, by using a longer subnet mask.

By summarizing (supernetting) we refer to grouping a number of networks into a bigger
network called a supernet (containing a larger number of hosts). Summarization is
achieved by reducing the length of the Network ID, in other words, by using a shorter
subnet mask. This is possible because every network with a shorter Network ID includes,
by definition, all the subnets with longer Network IDs.

Subnetting:

Let us take the classful network 192.168.1.0/24. /24 is sometimes called the natural
mask, meaning the mask that reflects the classful definition of a network. In this case
192.168.1.0 is a class C network and, therefore, the Network ID is 24 bits long, which
matches the /24.

By adding bits to the Network ID length we can further divide a network into subnets.
Each subnet will have a number of valid hosts determined by the usual rule (2^n - 2),
where n is the number of bits left for the Host ID after the mask extension.

Each subnet will also have a network address and a broadcast address, built using the
usual rule: network address, all Host ID bits set to 0; broadcast address, all Host ID
bits set to 1.

The number of subnets created by a given mask is calculated using the following
formula: 2^x, where x is the number of bits added to the natural mask length.

Let us use an example to illustrate the concept of subnetting:


Take the classful network 192.168.1.0/24

If we add one additional bit to the natural mask (/25), we split the network into two
subnetworks (one having bit 25 set to 0 and one having bit 25 set to 1).
As we have added 1 bit to the natural mask, we have a total of 2^1 = 2 subnets
available after the split.

In this case, there are 7 bits left for the Host ID, meaning each of the subnets will
have a total of 2^7 - 2 = 126 valid hosts.

The two subnets are as follows:


Original Network Address: 192.168.1.0/24

Subnet 1:
Network ID: 192.168.1.00000000=192.168.1.0 (bit 25=0)
Subnet Mask: 255.255.255.128 (/25)
Network Address: 192.168.1.00000000=192.168.1.0 (all Host ID
bits = 0)
Broadcast Address: 192.168.1.01111111=192.168.1.127 (all
Host ID bits = 1)
Valid range of host addresses: 192.168.1.1 - 192.168.1.126 (126 hosts)

Subnet 2:
Network ID: 192.168.1.10000000=192.168.1.128 (bit 25=1)
Subnet Mask: 255.255.255.128 (/25)
Network Address: 192.168.1.10000000=192.168.1.128 (all Host
ID bits = 0)
Broadcast Address: 192.168.1.11111111=192.168.1.255 (all
Host ID bits = 1)
Valid range of host addresses: 192.168.1.129 - 192.168.1.254 (126 hosts)
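
The split above can be verified with a short Python sketch based on the standard
ipaddress module; the network and subnets are the ones from the example.

import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
for sub in net.subnets(prefixlen_diff=1):          # adding 1 bit -> 2**1 = 2 subnets
    hosts = list(sub.hosts())                      # excludes network and broadcast addresses
    print(sub, "mask", sub.netmask,
          "broadcast", sub.broadcast_address,
          "hosts", hosts[0], "-", hosts[-1], "(%d hosts)" % len(hosts))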

By subnetting we can contribute to a better use of IP addresses, as we build networks
of the required size in terms of number of hosts, releasing the remaining IPs to be
used as part of other networks.

Summarizing
Summarizing is the opposite of subnetting. By summarizing, the network mask length is
reduced and a number of networks are grouped under the umbrella of a bigger network
called a summary or supernet.

The summary network groups all the subnets having a longer network mask, as it includes
all the addresses contained in them.

A summary can be used to represent a large number of networks.

The diagram shows the aggregation of networks into a summary network by reducing the
network mask length.

In a sense, we could say that a Network address is also a Summary address, as it
summarizes (includes) all the possible hosts or subnets that could be built with the IP
space it contains.

In the example, 192.168.1.0/24 can be used to represent all the hosts included in the
network without subnetting, or it can also be used to summarize two subnets:
192.168.1.0/25 and 192.168.1.128/25.
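
The same summarization can be checked with a short Python sketch using the standard
ipaddress module; the two /25 subnets are the ones from the example.

import ipaddress

subnets = [ipaddress.ip_network("192.168.1.0/25"),
           ipaddress.ip_network("192.168.1.128/25")]

# Grouping the two adjacent /25 subnets yields the /24 supernet (the summary).
print(list(ipaddress.collapse_addresses(subnets)))     # [IPv4Network('192.168.1.0/24')]

# Equivalently, shortening the mask of either subnet by one bit gives its supernet.
print(subnets[0].supernet(prefixlen_diff=1))           # 192.168.1.0/24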

Routing
As described before, one of the functions of the TCP/IP Network layer is routing
packets, which refers to moving packets across the network until they reach the
destination network/host.

This movement relies on knowledge of the network topology and of the location of the
different hosts, given by their IP addresses.

The devices performing the routing function are called routers.


Routers work at layer 3.

Routers have knowledge of the topology of the network (the location of each
net/subnet) and make decisions based on this knowledge.

Each router has what is called a routing table, where each known network has an entry
containing information about that network, such as the cost to reach it, the next
router interface to visit in order to reach it (the next hop), the local interface to
be used, etc. By means of the routing table, packets are forwarded from router to
router until they reach the destination systems/hosts.

Let us imagine that the host 192.168.1.15 (source) sends a packet to


the destination host 172.15.0.25. The packet flows through the
network as follows:
1) Source host sends the packet to the network and reaches
Router A
2) Router A performs a routing table lookup using the Network
ID of the destination host. The routing table entry shows that, in
order to reach 172.15.0.0 network, E1 interface has to be used
and the packet sent to the next hop interface 10.125.1.2.
3) The packet gets forwarded through interface E1 of router A
reaching Router B.
4) Router B performs a routing table lookup using the Network
ID of the destination host. The routing table entry shows that, in
order to reach 172.15.0.0 network, E0 interface has to be used
and this host is directly connected to the router (no other router
has to be involved)
5) The packet gets forwarded through interface E0 reaching the
destination host.
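
A routing table lookup of this kind boils down to a longest-prefix match: among all
entries that contain the destination, the most specific one wins. The Python sketch
below illustrates the idea; the table contents (networks, next hop 10.125.1.2,
interface E1) are loosely based on the Router A entry described above and are
illustrative, not a real router configuration.

import ipaddress

routing_table = [
    (ipaddress.ip_network("172.15.0.0/16"), "10.125.1.2", "E1"),
    (ipaddress.ip_network("192.168.1.0/24"), "directly connected", "E0"),
    (ipaddress.ip_network("0.0.0.0/0"), "10.125.1.2", "E1"),     # default route
]

def lookup(destination):
    dest = ipaddress.ip_address(destination)
    matches = [entry for entry in routing_table if dest in entry[0]]
    return max(matches, key=lambda entry: entry[0].prefixlen)    # longest prefix wins

print(lookup("172.15.0.25"))     # -> (IPv4Network('172.15.0.0/16'), '10.125.1.2', 'E1')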

Section Summary

TCP/IP Layered Architecture has four main layers: Application,


Transport, Network and Network Access. Each Layer uses its
own protocols to facilitate the transit of information through the
network.
As in any other layered architecture, information to be
transmitted using TCP/IP flows through the different layers
which add the relevant headers (H)/trailers (T). In TCP/IP the
information to be transmitted is known as DATA at application
layer. DATA gets the Transport H/T and becomes a segment.
Segments get the Network H/T becoming a packet. Packets get
the Network Access Layer H/T becoming a frame.
TCP/IP Network Layer provides an unreliable and connectionless
delivery service for segments (delivers segments end to end
without establishing a connection) and performs the following
functions: Local Addressing, Routing, Encapsulation,
Fragmentation/Reassembly and Error Diagnostics.
Each device in a Network has a Network Layer identifier known
as IP Address. An IP address is a 32 bit number built in a
hierarchical scheme and containing two parts. A network part
identifying the network the device belongs to and a host part
identifying the specific device within the network.
There are two main ways of using IP addresses. Using a classful
scheme, IP address network part and host part have predefined
lengths for a given network. Using a Classless Inter Domain
Routing (CIDR) Scheme, where IP Address network and host part
have variable lengths for a given network. CIDR is also known
as VLSM (Variable Length Subnet Masking).
Network mask is a 32 bit number used to indicate the
network/host parts of an IP Address. Bits set to 1 in network
mask belong to the network portion of the IP address. Bits set to
0 in a network mask belong to the host portion of the IP
address.
When using classful IP address allocation, different ranges of IP addresses known as
Classes are defined. Each class has a specific number of bits belonging to the network
and host sections of the IP address. The number of bits belonging to the network
portion determines how many networks each class can contain, and the number of bits
belonging to the host portion determines the maximum number of hosts within each of
those networks.
Network masks used in classful IP address allocations are known
as natural masks.
A network (represented by a network number and specific
number of bits within the network mask) can be further divided
in a set of subnets. This process is known as subnetting and is
achieved by extending the length of the network portion within
the network mask.
A number of networks (represented by different network
numbers and having specific number of bits within their
network masks) can be summarized. This process is known as
summarization and is achieved by reducing the length of the
network portion within the network mask. The network
summarizing multiple networks is called a supernet.
Routing is the process of delivering packets from one end of the
network to the destination using the IP addresses of the
different hosts belonging to the network. Routing is performed
by layer 3 devices known as routers, which in essence have
knowledge of the network topology (which host, identified by
an IP address, is located where).
Network TCP/IP Layer performs connectionless routing (no
connection is required between the hosts for the packets to be
sent) and unreliable delivery (there is no recovery mechanism
implemented at this layer in case a packet cannot reach the
destination host).
IP Space is further divided into public IP space and Private IP
space depending on whether IP addresses may be reused.
End to End Communication Process
This section outlines the end to end flow of information between two computers that
communicate with each other.

End to End Data Communications Process

This section outlines the end to end data communication process between two hosts.

The network topology comprises the following elements:

Three networks interconnected by means of two routers.
Two of the networks are Ethernet, while the routers are connected through a WAN
network based on Frame Relay technology.
The network addresses (all /24) are depicted in the diagram.
Two hosts establishing a data session: one on each of the
Ethernet networks. Host A acts as a web client, while Host B
is a web server.

Based on the Network Access Layer rules (you can only communicate directly with
hosts/systems within your own link/network), any communication between A and B will
involve several hops through the two routers.
Looking at the layered architecture, we can draw the following conclusions:

Application Layers communicate directly (process to


process communication)
Transport Layers communicate directly (host to host
communication)
Network and Network Access Layers follow a hop by hop
approach where packets and frames need to be examined by
each of the routers at layer 2 first (destination MAC
identification) and at layer 3 later (destination IP
identification).

Our end to end communication process will follow a web session established between the
web client (Host A) and the web server (Host B).

End to End Data Communications Process: Tables and IDs

The diagram shows the ARP Tables and Routing Tables on the different network elements:
- ARP Tables: Contain the mapping between the IP addresses
and the MAC/Physical addresses on each of the network
elements. In the case of Frame Relay networks, the ARP table looks a little different;
we will discuss this in detail later, but the table contains the Frame Relay circuit
required to reach each of the destination networks.
- Routing Tables: Contain the next hop IP and designated
interface to reach each of the known networks. They also contain
the default route, i.e. the next hop interface to be used in
case the destination network is not listed within the routing
table.
Other elements to pay attention to within the diagram are:
- MAC addresses of all network interfaces.
- Interface names of all network interfaces: Ethernet interfaces
are called En (n being the interface number) and Frame Relay
interfaces are called Sn (n being the interface number)
We will now follow the end to end communication process where a web session is
established and maintained between the web client (Host A) and the web server (Host B).

Application and Transport Layers

We will examine, at the application and transport layers, the processes involved in web
browsing from the web client (Host A) to the web server (Host B).

1. The user opens the web browser application and types the link
of the web page he wants to visit. This web client
application/process needs to communicate with the web server
containing the page described by the link.
2. In order to establish the connection to the web server, the IP
address must be obtained. This is achieved by means of the
DNS application layer protocol.
3. Once the destination IP is known, we are ready to establish a
connection at transport level. Web browsing uses the HTTP
application protocol which runs on top of the TCP
transport protocol. TCP is a reliable connection oriented protocol
requiring initial session establishment through the three
way handshake process (SYN, SYN-ACK, ACK). TCP
segments sent from A to B will use a random source TCP
port (1124) and a well-known destination port (80) in order for
Host B to direct the traffic to the correct process (the web server).
Using ports we allow traffic to be multiplexed.
4. After establishing the connection the application will request
the web page from the server and the server will find the
page in its repository and will deliver it to the client. All the
request and delivery of the web page is achieved
through the HTTP protocol. HTTP messages are exchanged
on top of the TCP session. Throughout the exchange, the
error recovery and flow control mechanisms at the TCP level
make sure the information is exchanged in a reliable manner.
5. Once the content of the web page is received, it will get
displayed by the browser application.
6. Finally, once the page has been received, the connection will
be released using the TCP four way connection release
mechanism.
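
A minimal Python sketch of steps 2 to 6, using the standard socket library: the DNS
lookup, the TCP connection (the three-way handshake happens inside the connect call),
an HTTP request and response, and the connection release when the socket is closed.
The host name www.example.com is only an illustrative choice.

import socket

server_ip = socket.gethostbyname("www.example.com")        # step 2: DNS resolution
with socket.create_connection((server_ip, 80)) as tcp:     # step 3: SYN, SYN-ACK, ACK
    tcp.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while True:                                             # step 4: HTTP over TCP
        chunk = tcp.recv(4096)
        if not chunk:
            break
        response += chunk
print(response.split(b"\r\n")[0])                           # e.g. b'HTTP/1.1 200 OK'
# step 6: leaving the 'with' block closes the socket, triggering the TCP connection release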
In the following sections we will examine how a packet is built and
sent on all the interfaces of the networks it will traverse until reaching
its final IP destination (Host B).

Packet Sent by Host A


Host A's network layer gets data from the TCP transport layer that needs to be
delivered to the final IP destination which, in this case, is Host B (172.1.40.25). The
following steps take place within Host A in order to construct the packet and the frame
that will be sent over the network:

1. Host A checks that the final IP destination (172.1.40.25) does not belong to its own
subnet.
2. Host A cannot send the packet directly to Host B and needs to consult its routing
table to decide which is the next hop interface towards Host B.
3. Host A performs a lookup within its routing table and does not
find the destination network listed but finds a default route
pointing to the next hop interface 192.168.1.1
4. Initially the MAC address corresponding to this IP won't be listed within the ARP
table, so an ARP query will be sent to the network in order to get the correct MAC
address for the next hop device. This MAC address will be 0000:000A:0001.
5. Once the destination IP and destination MAC are known, Host A
can build the layer 3 packet and the layer 2 frame using the
following information:
1. Layer 3 source IP: 192.168.1.17
2. Layer 3 destination IP: 172.1.40.25
3. Layer 2 source MAC: 0000:000C:1AED
4. Layer 2 destination MAC: 0000:000A:0001 (MAC address of the next hop interface,
interface E0 of Router X)
5. The frame containing the IP packet containing the TCP
segment is sent to the physical media.
Router X recognizes the MAC address of one of its interfaces and retrieves the frame
from the physical media.
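
Steps 1 to 3 (the same-subnet check and the fall-back to the default route) can be
sketched in a few lines of Python with the standard ipaddress module. The addresses
come from the diagram; the /24 mask on Host A's interface is an assumption made for the
sketch.

import ipaddress

my_interface = ipaddress.ip_interface("192.168.1.17/24")    # Host A's IP and (assumed) mask
default_gateway = ipaddress.ip_address("192.168.1.1")       # from the default route

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    if dest in my_interface.network:       # same subnet: deliver directly (ARP for the host)
        return dest
    return default_gateway                 # different subnet: send to the default gateway

print(next_hop("172.1.40.25"))             # -> 192.168.1.1 (interface E0 of Router X)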

Packet Sent by Router X

The following processes take place within Router X


1. Router X recognizes the MAC address of one of its interfaces on
the frame sent by Host A and retrieves the packet from the
Ethernet physical media.
2. Router X de-encapsulates the packet out of the frame and
checks the destination IP address. The packet is not addressed
to Router X and therefore decides to forward it.
3. The packet destination IP is not in any of the networks where
Router X is attached. Therefore, Router X cannot send the packet directly to Host B and
needs to consult its routing table to decide which is the best next hop interface
towards Host B.
4. Router X performs a lookup within its routing table and does not
find the destination network listed but finds a default route
pointing to the next hop interface 10.1.2.254
5. Router X checks the ARP table and finds the right Frame Relay
Circuit Number (DLCI) to be used when sending packets to
10.1.2.254.
6. Router X builds a Frame Relay frame and encapsulates the IP packet in it, leaving
the packet untouched.
7. Once the destination IP and destination Frame Relay circuit are known, Router X can
build the layer 3 packet and the layer 2 frame using the following information:
1. Layer 3 source IP: 192.168.1.17 (same as before)
2. Layer 3 destination IP: 172.1.40.25 (same as before)
3. Layer 2 DLCI (circuit): 1001
4. The Frame Relay frame containing the IP packet
containing the TCP segment is sent to the physical media.
8. Router Y receives the frame over circuit number 1001 and
retrieves it from the physical media.

Packet Sent by Router Y

The following processes take place within Router Y

1. Router Y receives the frame over circuit number 1001 and


retrieves it from the physical media.
2. Router Y de-encapsulates the packet out of the frame and
checks the destination IP address. The packet is not addressed
to Router Y and therefore decides to forward it.
3. The packet destination IP is in one of the networks where Router
Y is attached to. Therefore, Router Y can send the packet
directly to Host B.
4. Initially the MAC address corresponding to this IP won't be listed within the ARP
table, so an ARP query will be sent to the network in order to get the correct MAC
address for Host B. This MAC address will be 0000:000C:2FAC.
5. Once the destination IP and destination MAC are known, Router
Y can build the layer 3 packet and the layer 2 frame using the
following information:
1. Layer 3 source IP: 192.168.1.17 (same as before)
2. Layer 3 destination IP: 172.1.40.25 (same as before)
3. Layer 2 source MAC: 0000:000A:0002
4. Layer 2 destination MAC: 0000:000C:2FAC (MAC address
of Host B)
6. The frame containing the IP packet containing the TCP segment
is sent to the physical media.

Host B recognizes the MAC address of one of its interfaces and retrieves the frame from
the physical media.

Packet Received by Host B

The following processes take place within Host B

1. Host B recognizes the MAC address of one of its interfaces on


the frame sent by Router Y and retrieves the frame from the
physical media.
2. Host B de-encapsulates the packet out of the frame and checks the destination IP
address. The packet is addressed to it and, therefore, Host B passes the transport data
contained in it up to the upper layer.
3. The transport layer de-encapsulates the application data and
checks the destination port for it.
4. As destination port is 80, the application data gets delivered to
the web server process.
5. The web server process gets the data and acts accordingly.

Section Summary

When performing an end to end communication between two hosts, all layers of the TCP/IP
architecture get involved.
Layers 1 and 2 (Network Access) perform the delivery of frames
at each individual network segment. They use MAC addresses to
reach specific devices always within the same physical or
logical layer 2 segment.
Layer 3 (Network) performs the end to end delivery of packets
through the network. By means of IP addresses each host can
be uniquely identified within the network and routing tables can
be built at the different router devices allowing the network to
network communication flow through them.
Layer 4 (Transport) transports the specific application data in a reliable or
unreliable mode (as required), separating traffic destined to different applications at
the receiving end and multiplexing traffic sourced from different applications at the
originating end.
Each of the layers uses its own protocols and performs the
required actions (connection establishment, error recovery,
frame forwarding, etc.) so that the end to end communication works.
Each device in the network keeps record of the required
information about itself or about other network elements in the
form of tables such as the ARP table or the routing table.

Ethernet Technology. Switching


Ethernet technology is, by far, the most widely deployed Local Area Network technology.
This section describes the fundamentals of this technology, as well as some advanced
concepts that allow Ethernet networks to be extended, providing more functionality and
increasing the number of stations that can be attached to them.

Ethernet Local Area Networks


Ethernet has become the de-facto standard for Local Area Networks.
Ethernet technology has evolved over time to accommodate for
higher data speeds and new media requirements.

Ethernet standards define the behavior of the two lower layers of the
OSI Model: data link and physical.

The data link layer in Ethernet is further divided into two sublayers:

Logical Link Control (LLC): Defined by the IEEE 802.2


standard, LLC sublayer acts as an intermediate entity between
the network layer and the MAC sublayer providing
connectionless and connection oriented services to it.
Media Access Control (MAC): Defined by the IEEE 802.3
standard, MAC sublayer performs two main functions:
Data Encapsulation: Assembly/De-assembly of the
frames, Ethernet addressing and error detection.
Media Access Control: Control of the procedures used
to access the physical media which in Ethernet is shared
among all the stations/hosts connected to the same
network segment. The protocol being used to govern the
media access is called CSMA/CD.

The physical layer specification defines all the mechanical, electrical and procedural
interfaces to the transmission media. In the case of Ethernet, although the original
medium was coaxial cable, the technology has evolved towards the use of twisted pair
cable (shielded or unshielded) and fiber. Physical layer specifications are also part
of the IEEE 802.3 standard and its different sub-specifications (see the diagram).
Initially, Ethernet ran on a logical and physical bus topology where all stations were
connected to a single cable linking them. This type of cable, to which multiple hosts
are connected, is called a network segment.

The maximum length of a network segment is defined by the media type (copper or fiber)
and the specific flavor of Ethernet being used: 10BASE-T, 10BASE-FL, etc.

Other legacy Local Area Network technologies different from


Ethernet include:

FDDI (Fiber Distributed Data Interface): A standard from the American National
Standards Institute (X3T9.5, now X3T12) using fiber media in a dual ring configuration.
Token Ring: Based on the IEEE 802.5 standard, it also uses a ring topology where
information travels from one station to another and a special frame called the token
circulates around the ring. Possession of the token grants the station holding it
permission to transmit.

FDDI and Token Ring are no longer used due to the lower cost and higher performance of
Ethernet.

CSMA/CD
CSMA/CD stands for Carrier Sense Multiple Access/Collision Detection
and is the protocol used by Ethernet Medium Access Control (MAC)
sublayer to gain access and transmit frames on a shared media.

Each station/host willing to RECEIVE frames from the media will listen to the
information flow (frames on the cable) and will read and process only those frames
containing its unicast Ethernet address, a multicast Ethernet address representing a
group the station is subscribed to, or the Ethernet broadcast address.

A station/host willing to TRANSMIT frames into the media will follow


the following steps:

1. Carrier Sense: Listen to the media and determine whether someone else is
transmitting on it. If no one is transmitting, the station is allowed to transmit.

2. Send information to the media: The station sends the frame to the shared media, i.e.
transmits the information using the specific physical specifications of the media being
used (copper, fiber, etc.).

At this point, although the station sensed the media in order to use it only when free,
it could still be the case that multiple stations/hosts transmit at the same time
because all of them found the media unused.
If multiple stations/hosts transmit at the same time, the signals get mixed in the
media, making the frames unrecognizable, in the same way multiple conversations get
mixed up when several people speak loudly at once. This phenomenon is known as a
collision. When a collision happens, all the frames being transmitted need to be
discarded and resent into the media. This is an important circumstance that has to be
monitored by all the transmitting stations (Collision Detect).

3. The station listens to the media in order to detect a possible


collision.

4. If a collision is detected, the frame being transmitted is considered invalid and
requires retransmission. Furthermore, if a station/host detects a collision while
transmitting, it stops sending the frame and generates a jamming signal into the media
to make the detection of the collision by the other stations easier.

5. After the collision, the station/host sending the frame waits a random time before
trying to transmit the frame again. This mode of operation is called the backoff
algorithm (a sketch of this rule follows the list below).

6. If no collision occurred, the station/host considers the transmission of the frame
successful.
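
The random wait mentioned in step 5 is, in standard Ethernet, a truncated binary
exponential backoff: after the n-th consecutive collision the station waits a random
number of slot times between 0 and 2^k - 1, with k capped at 10, and gives up after 16
attempts. The Python sketch below models only that rule; it is a simplified
illustration, not a full CSMA/CD implementation.

import random

def backoff_slots(collision_count):
    # Each slot is 512 bit times on 10/100 Mbps Ethernet.
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(collision_count, 10)
    return random.randint(0, 2**k - 1)

for attempt in range(1, 4):
    print("after collision", attempt, "wait", backoff_slots(attempt), "slot(s)")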

All the stations belonging to the same network segment, where a collision can occur,
are defined as belonging to the same collision domain. In other words, a collision
domain is the set of stations/hosts subject to an Ethernet collision if several of them
transmit at the same time.

The operation of CSMA/CD assumes the Network Interface Card (NIC) is operating in half
duplex mode (HDX), i.e., the stations cannot transmit and receive at the same time.

Ethernet Frame Format


When a station/host wants to transmit, it organizes the information into a frame that
encapsulates the network layer packet. The frame acts as a container for the packet.
The Ethernet frame has a predefined format. This format includes
header and trailer overhead used among other functions for station
identification (header) and error detection (trailer).

The figure shows the typical formats of an Ethernet frame under two
different standards:

The initial Ethernet standard, also known as Ethernet II or DIX, as it was developed by
DEC, Intel and Xerox.
The IEEE 802.3 standard, developed as an evolution of Ethernet II.

Both frame formats are very similar. The main difference is as follows:
Ethernet II uses a field called Type to indicate the network layer
protocol being carried; In the case of IEEE 802.3, this field does not
exist and there is another field (with the same size) called Length
indicating the size in bytes of the network layer packet being
carried. In addition, when IEEE 802.3 format is used, an additional
LLC sublayer header (IEEE 802.2) is added before the payload to
perform specific functions related to the operation of the LLC protocol.

In 1997, the IEEE 802.3 standard was reviewed to include the original
Ethernet format in such a way that networks using IEEE 802.3
standard could indistinctly use the Type field or the combination of
Length + LLC header (IEEE 802.2).

The definitions of the different fields used in an Ethernet frame are as follows:

Preamble: A sequence of alternating 1s and 0s used to synchronize the receiving
stations.
SOF: Start Of Frame delimiter used to indicate the actual frame
is going to start.
Destination Address: Medium Access Control Address (MAC
Address or Layer 2 Address) of the station which should receive
the frame. This address can be unicast, multicast or broadcast.
Source Address: Medium Access Control Address (MAC
Address or Layer 2 Address) of the station sending the frame.
This MAC address is unicast.
Type / Length: In Ethernet II this field indicates the layer 3
protocol being carried (for example 0x800 for IP). In 802.3
format the length of the network layer packet or the length of
the network layer packet + LLC header is specified.
Data: The data being carried by an Ethernet frame includes
either the network layer packet or the combination of an LLC
header + the network layer packet, depending on the format
being used. The minimum length of the data field is 46 bytes
and the maximum is 1500 bytes. If the data to be carried by the
frame is smaller than 46 bytes, padding is added.
FCS (Frame Check Sequence): This field is a 32-bit Cyclic Redundancy Check (CRC) used
for error detection.
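
As an illustration of the header layout, the Python sketch below builds the 14-byte
Ethernet II header (destination MAC, source MAC, Type) and shows how a receiver
distinguishes a Type value from a Length value, as explained in the note that follows.
The MAC addresses are illustrative.

import struct

def mac_bytes(mac):
    return bytes(int(part, 16) for part in mac.split(":"))

dst = mac_bytes("00:00:00:0A:00:01")
src = mac_bytes("00:00:00:0C:1A:ED")
header = dst + src + struct.pack("!H", 0x0800)      # 0x0800 = IPv4 EtherType; 6+6+2 = 14 bytes
print(header.hex(), len(header))

# Receiver-side rule: values up to 1500 (0x05DC) are a Length (802.3 + LLC),
# larger values are interpreted as an EtherType (Ethernet II).
field = struct.unpack("!H", header[12:14])[0]
print("type" if field > 0x05DC else "length", hex(field))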

ADVANCED NOTE:
As the maximum payload size of an Ethernet frame is 1500 bytes, the Length field can
have a maximum value of 1500, i.e. 0x5DC in hexadecimal. In order for the receiving
station to easily find out whether the field carries a Length or a Type, protocol Type
values are always above 0x5DC (EtherType values start at 0x0600, i.e. 1536): the
receiving station considers the field a Length when the value is less than or equal to
0x5DC and a protocol Type when the value is above it. For a full list of protocol
values refer to the following link:
http://www.iana.org/assignments/ethernet-numbers
Ethernet Addressing

Each station on an Ethernet network has a unique identifier called


Ethernet address, MAC address or Layer 2 Address. MAC addresses
are used to refer to individual stations/hosts but also to refer to a
group of stations/hosts. There are three different types of MAC
addresses:

Unicast addresses: Unicast addresses are used as source


addresses identifying the station sending the information or as
destination addresses identifying the unique station that should
receive the information being sent.
Multicast addresses: Multicast addresses are used as
destination addresses when the information is sent to a group of
stations. All stations receiving the data should have subscribed
to a multicast group designated by the multicast address.
Broadcast address: Broadcast address is used as destination
address when the information is sent to ALL stations within the
local area network that are reachable at level 2. The group of
stations reachable at layer 2 by a broadcast frame is called
broadcast domain.

A MAC address is 48 bits long and is divided into the fields depicted in the figure.
The first 24 bits are known as the Organizationally Unique Identifier (OUI), while the
remaining 24 bits are called the vendor-assigned station address.
The Organizationally Unique Identifier (OUI) is further divided into
the following:

Broadcast or Multicast bit: Set to 1 in case the address is


broadcast or multicast
Locally administered bit: Set to 1 if the MAC addresses are
not the ones provided by the vendor for the interface but they
have been assigned by the network administrator (very rare
case).
Vendor ID: The remaining 22 bits uniquely identify the vendor
of the hardware being used to access the network over
Ethernet.

The vendor-assigned station address is a 24-bit identifier assigned by the vendor of
the Ethernet hardware, and it is different for each Ethernet interface. In other words,
two Network Interface Cards (NICs) from the same vendor will never have the same
station address.

Therefore, the combination of vendor ID + vendor-assigned station ID is unique and
serves as an identifier of an Ethernet interface within a network station/host, router,
etc., and in general is unique for every device attached to an Ethernet network.
MAC addresses are 48 bits long but they are usually represented in
hexadecimal format separating each octet by a colon (:).

There is one Ethernet broadcast address. Its value is FF:FF:FF:FF:FF:FF (all bits set
to 1).
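
The bits described above can be inspected with a short Python sketch (the
group/broadcast bit and the locally administered bit are the two least significant bits
of the first octet as written in the usual notation); the MAC addresses used are
illustrative.

def describe_mac(mac):
    octets = bytes(int(part, 16) for part in mac.split(":"))
    return {
        "broadcast/multicast bit": bool(octets[0] & 0x01),
        "locally administered bit": bool(octets[0] & 0x02),
        "OUI (first 3 octets)": mac.upper()[:8],
        "vendor-assigned part (last 3 octets)": mac.upper()[9:],
    }

print(describe_mac("00:1A:2B:3C:4D:5E"))     # ordinary unicast, vendor-assigned address
print(describe_mac("FF:FF:FF:FF:FF:FF"))     # broadcast: group bit set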

Ethernet Physical Media (Physical Layer)


Frames created at the data link layer (MAC+LLC) are transmitted into
the media by the physical layer. The physical layer Ethernet
specifications deal with aspects such as:

Physical media used


Connectors required for the attachment to the media
Characteristics of the signals being transmitted over the
media

When it comes to physical media, Ethernet runs on top of copper (coaxial and twisted
pair), fiber (multimode and monomode/single-mode) and over the air in a wireless
configuration (IEEE 802.11). Coaxial cable is not in use nowadays, although it is still
possible to find old networks running on this media. Monomode (single-mode) fiber has a
longer reach than multimode fiber given the technology used for its construction and
the way the light is inserted into the media.

In terms of connectors, these differ depending on the type of media being used. In
twisted pair environments the RJ45 connector is used, while in fiber environments
different types of connectors are used, such as SC, ST or LC.
Connectors are sometimes grouped into a swappable
Input/Output unit called Gigabit Interface Converter (GBIC). The
GBIC modules are usually inserted into switches and routers.

The media used determines the maximum speed as well as the maximum reach of a single
network segment, a network segment being a continuous (unbroken) piece of cable to
which stations/hosts are connected.

The type of Ethernet technology used (including physical media) is


described by means of an acronym that has the following format:

<SPEED>BASE-<MEDIA/CABLE TYPE>
<Speed>: Refers to the speed of the network in Mbps
<Media/Cable Type>: Refers to the type of media and
connector being used

Example:
100BASE-TX: 100Mbps over twisted pair cable with a maximum reach
of 100m

Ethernet Physical Media UTP (Physical Layer)

Ethernet's most common physical medium is copper in the form of twisted pair. A twisted
pair cable is made of 8 copper conductors grouped in pairs that are twisted along the
length of the cable. Those pairs may or may not be wrapped in a shielding film, leading
to the two types of twisted pair cables:

UTP (Unshielded Twisted Pair): Pairs of copper not wrapped in a shielding film.
STP (Shielded Twisted Pair): Pairs of copper individually wrapped in a shielding film.

Out of those types, UTP is the most commonly used.

The assignment of functions to the different conductors (the pin-out) is governed by
the EIA/TIA T568 standard. Out of the eight conductors, only 4 are used (in 10BASE-T
and 100BASE-TX). Two carry the transmitted signal from the device (TX) and the other
two carry the received signal into the device (RX). Each used conductor is labeled as +
or -, leading to the following naming convention: TX+, TX-, RX+, RX-.

The diagram shows the layout of the UTP cable. The different
conductors are colored in a predefined fashion, allowing them
to be distinguished without ambiguity at both ends of the cable:

Pair 1: conductor 1, white/blue; conductor 2, blue
Pair 2: conductor 1, white/orange; conductor 2, orange
Pair 3: conductor 1, white/green; conductor 2, green
Pair 4: conductor 1, white/brown; conductor 2, brown

There are two ways of assigning the TX and RX signals into the
conductors: T568A and T568B. The difference is that the TX and RX signals are swapped
between the T568A and T568B standards, meaning the conductors acting as TX in T568A are
RX in T568B and vice versa.

This allows devices to be connected in such a way that the information transmitted at
one end (TX) is received at the other end (RX). A clear example is the way a host
connects to a switch: a host uses the T568A pin-out while a switch uses T568B, so the
TX conductors on one side face the RX conductors on the other and vice versa. This
leads to two different types of connecting cables:

Straight-through: either T568A or T568B at both ends.
Cross-over: T568A at one end and T568B at the other.

Following the logic described before, the use of a straight-through or cross-over cable
is determined by the combination of conductors that makes the TX signal on one end face
the RX signal on the other, given the standard pin-out of the devices being connected.
The list below shows the standard pin-out of the different networking devices. When
connecting devices with the same pin-out a cross-over cable must be used, while when
connecting devices with different pin-outs a straight-through cable is the right choice
(a small sketch of this rule follows the list):

Host: T568A
Router: T568A
Hub, Bridge or Switch: T568B
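
A minimal Python sketch of the rule just described, using the pin-out list above:

PINOUT = {"host": "T568A", "router": "T568A",
          "hub": "T568B", "bridge": "T568B", "switch": "T568B"}

def cable_for(device_a, device_b):
    # Same pin-out on both ends -> cross-over; different pin-outs -> straight-through.
    return "cross-over" if PINOUT[device_a] == PINOUT[device_b] else "straight-through"

print(cable_for("host", "switch"))       # straight-through
print(cable_for("switch", "switch"))     # cross-over
print(cable_for("host", "router"))       # cross-over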

UTP cable has an impedance of 100 ohms and a gauge of 22 or 24.


UTP cable can be used with the majority of network technologies. The
quality of the UTP Cable is defined by its Category. The following
categories exist in the market:

Category 1: Used for telephony. Not adequate for data transmission.
Category 2: Capable of transmitting up to 4 Mbps.
Category 3: Capable of transmitting up to 10 Mbps. Used for 10BASE-T.
Category 4: Capable of transmitting up to 16 Mbps. Used by Token Ring.
Category 5: Capable of transmitting up to 100 Mbps. Used by 100BASE-T.
Category 5e: Capable of transmitting up to 1000 Mbps (1 Gbps).
Category 6: Capable of transmitting up to 1000 Mbps (1 Gbps); uses a gauge of 24.

Extending the Ethernet


In the past, when connecting devices to an Ethernet network, we used to have a single
physical cable in a bus topology (Ethernet over coaxial cable). This physical cable was
known as a network segment. As seen in the physical media discussion, in this approach
the maximum physical length of the network is limited by the maximum reach of the media.

Extending the Ethernet (1). Collision and Broadcast Domain.

Modern networks extend their reach by using networking devices


acting as Ethernet repeaters and using a slightly different physical
topology from the one used in the past by the coaxial cable networks.
These devices act as multiport concentration devices.

Each host/station attached to the network connects to a different


physical port of the networking concentration device. As electrical
signals reaching every port are replicated on the others (under certain
configuration restrictions) the logical topology is still a bus topology
where CSMA/CD can be used with no limitation.

Each cable connected to the concentration networking device is by


nature a physical network segment where the media properties
(speed and reach) apply.

Two main types of devices can be used to extend Ethernet networks:


Hubs and Switches. They are very different devices, but before going into their
definitions we should revisit the concepts of collision and broadcast so we can set the
ground for understanding hub and switch operation.

Extending the Ethernet (2). Collision and Broadcast Domain.


In Ethernet networks, access to the media is governed by CSMA/CD. With this mechanism,
different devices connected to the same network may still end up transmitting at the
same time. When this happens, the information gets mixed in the physical media and is
no longer understandable by any of the connected devices. This is known as a collision.
After a collision happens, the devices willing to transmit wait for a random period of
time before retrying (backoff). As this random period is different for each of the
devices, the probability of a new collision is minimized.

All devices subject to collisions (connected in a way that collisions might happen)
belong to what is called a collision domain. Devices are within the same collision
domain when physically connected to the same cable (natural collision domain) or when
connected through a device acting as a simple Ethernet repeater, such as a hub.

Hubs replicate the electrical signal received in one of its ports into the
others and, therefore, devices connected to them are virtually within
the same cable, i.e. collision domain.

In Ethernet networks, when a device transmits a broadcast frame, this frame is seen by
all devices connected to the same media (natural broadcast domain) or by those
connected through a networking element which does not block broadcast frames, such as a
hub, a bridge or a switch.
All the devices which can receive Ethernet broadcast frames sent by
other stations within their same network belong to what is called a
broadcast domain.

Sometimes a broadcast domain and a collision domain contain the same elements. On other
occasions, the broadcast domain contains multiple collision domains. This basically
depends on the networking element used to extend the Ethernet network (hub, switch or
router).

Extending the Ethernet (3): Networking devices

As we have seen, bridges and switches aggregate devices into a single logical network
made of multiple network segments, but the way this aggregation takes place is
completely different for each type of device.

A bridge serves as an extension of the physical media, but each of its ports acts as a
different collision domain, although all of them belong to the same broadcast domain.
Bridges were used in the past to isolate collision domains, increasing the performance
of the networks. Those devices had a limited number of ports.

A switch acts as an intelligent device. It forwards information


from one port to another based on the destination MAC
address contained within the Ethernet frames. It does not act as a
simple repeater but as a selective forwarder.
When two or more devices connected to a switch transmit at the
same time, the switch will forward frames from one port to another
avoiding collisions. Switching is performed using a switching table
that maps each device with the physical port where the device is
connected based on its MAC address.

Each port of the switch is a separated collision domain.

Broadcast frames sent by any of the devices connected to a switch port will be
forwarded to all devices connected to the other ports (if the configuration allows it).
Therefore, all the ports of a switch belong to the same broadcast domain.
Nowadays switches are the most common solution deployed to
extend Ethernet networks.
Finally, a router connects networks. Each port of a router is a
separate network and therefore a separate collision and broadcast
domain.

Broadcasts never get transmitted across a router.

Each of the individual collision domains of a network is known as a network segment.

Switches and Switching Operation

A switch is a network element that expands the Ethernet network by selectively
forwarding frames from one port to another. The main characteristics of a switch are as
follows:

It has multiple ports, each of them acting as a single collision domain, i.e. as a
single network segment.
Each port of a switch has dedicated bandwidth.
Forwarding is done in hardware, increasing the network performance.
It allows Ethernet full duplex operation (devices connected to a switch can transmit
and receive frames at the same time, as collisions are prevented by the switch).
Switch operation:

When a frame is received within one of the switch ports, the switch
must decide among one of the following options:

Forward the frame to a destination port: Unicast frames


Forward the frame to all ports of the switch: Broadcast
frames
Filter the frame: Depends on the switch configuration

In order to forward the frame to a given port, the switch uses a MAC address table
stored in its RAM. This table maps each connected station to the port where the station
is connected. The switch identifies each station based on its MAC address.

In order to build the MAC address Table, the switch listens to the
Ethernet traffic passing through. When a frame is received in one of
the ports the switch extracts the SOURCE MAC address of the
transmitting station. If a given MAC address is used as source MAC in
a port, it basically means the station identified by this MAC is
connected to the port. Following this logic, the switch builds a MAC to
port table (MAC address table) that is used to make forwarding
decisions. This technique is known as Dynamic MAC Learning.

When MAC addresses are dynamically learnt, they stay in the MAC address table for a
configurable period of time known as the aging time.
Once the aging time has expired, the MAC address is removed from
the MAC address table.

Switches can also be configured for Manual MAC address to port


mapping.

Once the MAC address table is complete, the switch uses the
following forwarding logic:

Unicast frame received: Extract the DESTINATION MAC. If the destination MAC is in the
MAC table, forward the frame to the corresponding port. If the destination MAC is not
in the MAC table, the MAC-to-port mapping is not known for this specific station and,
therefore, the frame is forwarded to all the switch ports.
Broadcast frame received: Forward the frame to all the
switch ports.

In addition, a switch may perform frame filtering based on its configuration.

A switch never forwards a frame to the same port where the frame
was received.
When multiple switches are connected, they build a loop-free Layer 2 topology by means
of STP (the Spanning Tree Protocol), preventing frames from being forwarded multiple
times into the same network segment.

The MAC table is also known as switching table, bridging table or CAM
(Content Addressable Memory) table.
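
The learning and forwarding behaviour described above can be condensed into a short
Python sketch. Frames are modelled as simple dictionaries, the 300-second aging time is
a commonly used default rather than a value taken from this text, and the MAC addresses
are illustrative.

import time

AGING_TIME = 300                         # seconds (assumed default)
BROADCAST = "FF:FF:FF:FF:FF:FF"
mac_table = {}                           # MAC address -> (port, last time seen)

def receive(frame, in_port, all_ports):
    now = time.time()
    mac_table[frame["src"]] = (in_port, now)          # dynamic MAC learning
    for mac, (port, seen) in list(mac_table.items()): # drop aged-out entries
        if now - seen > AGING_TIME:
            del mac_table[mac]
    dst = frame["dst"]
    if dst != BROADCAST and dst in mac_table:
        out_ports = [mac_table[dst][0]]               # known unicast: one port
    else:
        out_ports = list(all_ports)                   # broadcast or unknown unicast: flood
    return [p for p in out_ports if p != in_port]     # never back out of the receiving port

print(receive({"src": "00:00:00:0C:1A:ED", "dst": BROADCAST}, 1, [1, 2, 3]))   # -> [2, 3]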

Let us examine the operation of a switch in action:

Example 1:
1.1: A sends a frame to C
1.2: The frame reaches port number 1
1.3: The switch analyses the Ethernet header:
Looks at the source MAC address to find out whether the
source station is already within the MAC table. In this case
there is a matching entry for the MAC address and the
port.
Looks at the destination MAC address to find out
whether the destination station is contained within the
MAC table. In this case there is an entry pointing to port 3,
where the frame will be forwarded.
1.4: The frame gets forwarded to port 3 as per the MAC address table.
1.5: C receives the frame

Example 2:
2.1: B sends a broadcast frame to all the stations within its broadcast
domain
2.2: The frame reaches port 2
2.3: The switch analyses the frame and discovers this is a broadcast
frame and, therefore, gets forwarded to all the stations belonging to
the broadcast domain
2.4: The frame reaches all the stations within the broadcast domain: A
and C

Advanced Switching Concepts

Forwarding Methods: When a frame is received in a switch it is


processed according to one of the following forwarding methods:

Store and Forward: The frame is stored in memory in its


entirety and checked for integrity based on the CRC field
included within the Ethernet trailer. If the frame contains no
errors it is forwarded to the required port based on the
information included in the CAM Table.
Cut Through: The frame is not stored in memory and gets
forwarded without integrity checking as soon as the destination
MAC is known. The only part of the frame stored by the switch,
in this case, is the initial part of the header up to the point
where the destination MAC can be obtained.

Memory Buffering: When storing frames a switch can use one of the
following strategies:
Shared Memory: The switch uses a common memory
repository to store all the frames no matter which port they
were received into.
Port Memory: Each of the switch ports has its own memory
repository for frame storage.

Switching Symmetry: Based on whether or not the ports of a switch have the same speed,
switching can be classified as:

Symmetric Switching: Performed between two ports


operating at the same speed.
Asymmetric Switching: Performed between two ports
operating at different speeds. Asymmetric switching requires the use of buffering
techniques, especially when forwarding frames from a faster port to a slower port.

Layer 2 and Layer 3 Switching

A switch is a device that performs selective forwarding of frames and implements the
forwarding mechanisms in hardware, thus improving performance and delivering what is
known as fast forwarding.

A layer 2 switch performs fast forwarding based on the Layer 2


information contained within the frame, i.e. based on the destination
MAC address.
A layer 3 switch performs fast forwarding based on the Layer 3
information contained within the frame, i.e. looks at the IP addresses
within the packet carried by the frame and makes forwarding
decisions based on those.

Virtual Local Area Networks (VLANs)

Switches allow a Local Area Network to be segmented into different collision domains;
each port of a switch is a collision domain.
Switches also allow a Local Area Network to be segmented into a set of isolated
networks corresponding to different broadcast domains. Each of these separated
broadcast domains is known as a Virtual LAN (VLAN):

Virtual because all the devices belonging to the LANs are


connected to the same switch but logically (virtually) isolated
by the switch configuration.
LAN because each of the isolated broadcast domains acts as a
different Local Area Network where Ethernet mode of operation
applies.

VLANs can be created based on a number of criteria such as port number, MAC address,
etc., but the most common criterion for deciding which stations belong to a VLAN is the
switch port to which the stations are connected.
Using VLANs has a number of benefits such as:

Security and Traffic Isolation: By splitting a network into a
set of VLANs, traffic generated by one VLAN is not seen by the
others.
Performance Improvement: Limiting the extension of a
broadcast domain improves the network performance from
different perspectives:
Switch Performance: Forwarding effort is reduced as well
as consumption of switch resources such as memory or
CPU.
Network Performance: By reducing the broadcast domain,
bandwidth can be used in a more optimized way as each
VLAN uses its own separate share of bandwidth.
Resiliency: Problems in one of the VLANs (such as a broadcast
storm) will stay localized and won't affect the other VLANs
configured in the switch.
Easier management and troubleshooting: By reducing the
number of stations connected to a LAN, the management and
troubleshooting tasks are simplified as they can be tackled VLAN
by VLAN.
Cost savings: The implementation of multiple networks
does not require multiple switches.

When VLANs are implemented, forwarding decisions within each VLAN
are based on that VLAN's own MAC Address Table, which is built and
managed independently, as the sketch below illustrates.
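The following minimal sketch illustrates per-VLAN MAC learning and forwarding. The class and method names are made up for illustration; a real switch implements this in hardware (CAM/TCAM tables), not in software.

from collections import defaultdict

class VlanAwareMacTable:
    def __init__(self):
        # One independent MAC-to-port mapping per VLAN.
        self.tables = defaultdict(dict)   # {vlan_id: {src_mac: port}}

    def learn(self, vlan_id: int, src_mac: str, port: int) -> None:
        """Record which port the source MAC was seen on, within its VLAN."""
        self.tables[vlan_id][src_mac] = port

    def forward(self, vlan_id: int, dst_mac: str):
        """Return the egress port, or 'flood' if the MAC is unknown or broadcast."""
        if dst_mac == "ff:ff:ff:ff:ff:ff":
            return "flood"                # broadcasts stay within the VLAN
        return self.tables[vlan_id].get(dst_mac, "flood")

# Example: the same MAC learned in VLAN 10 is invisible to VLAN 20.
table = VlanAwareMacTable()
table.learn(10, "aa:bb:cc:dd:ee:01", port=3)
print(table.forward(10, "aa:bb:cc:dd:ee:01"))   # -> 3
print(table.forward(20, "aa:bb:cc:dd:ee:01"))   # -> 'flood'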

VLAN Trunks
VLANs can span multiple switches, in such a way that two stations
connected to different switches can belong to the same VLAN. These
VLANs require layer 2 connectivity (across switches), including the
forwarding of broadcasts to all the ports belonging to the VLAN no
matter their physical location.

In order for a VLAN to be present in multiple switches, those need to
be connected by an Ethernet link. Usually this Ethernet link carries
frames for all the VLANs shared among the switches and is
known as a Trunk.

Frames from different VLANs are carried on the trunk link. Switches at
both ends of the trunk need to identify which frames belong to
each of the VLANs in order to perform the right forwarding actions
as well as to keep the inter-VLAN isolation. VLAN separation on a
trunk is achieved by means of a VLAN Tag added into the existing L2
header of each of the frames traversing the trunk.

There are two protocols describing the format and use of the VLAN
Tag:

Inter Switch Link (ISL): Cisco proprietary protocol used for
VLAN Tagging.
802.1Q: IEEE standard for VLAN Tagging. Describes the format
of the tag and refers to 802.1p when it comes to the format and
use of the priority bits included within the tag.
A VLAN Tag is added after the Source MAC field of the frame. It is
a 32-bit field identifying the VLAN number of the frame as well as
other information relevant to the trunk. The VLAN Tag format as per
the 802.1Q standard is composed of the following parts:

EtherType: 16-bit field always set to hex 0x8100. When a
switch receives a frame with this EtherType value it knows that
the frame carries a VLAN Tag.
Tag Control Information: Made of the following fields:
Priority: 3 bits identifying the priority of the frame across
the trunk on a scale of 8 values (0 to 7), 7 being the top
priority. Use of this field is described in the 802.1p
specification.
CFI (Canonical Format Indicator): 1 bit used when carrying
Token Ring traffic.
VLAN ID: 12 bits identifying the VLAN.

Based on the size of the VLAN ID field, we can deduce that only 4096
VLAN values exist. VLAN numbers 0 and 4095 are reserved, leaving 4094
usable VLANs. The sketch below shows how such a tag could be built.
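The sketch below builds an 802.1Q tag following the field layout described above and inserts it after the source MAC of an untagged frame. The helper names and the dummy frame are illustrative assumptions; a real switch would also recompute the frame FCS after tagging.

import struct

def build_dot1q_tag(vlan_id: int, priority: int = 0, cfi: int = 0) -> bytes:
    assert 0 < vlan_id < 4095, "VLAN IDs 0 and 4095 are reserved"
    tci = (priority << 13) | (cfi << 12) | vlan_id   # Tag Control Information
    return struct.pack("!HH", 0x8100, tci)           # TPID/EtherType + TCI, 4 bytes

def tag_frame(untagged_frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    # Destination MAC (6 bytes) + source MAC (6 bytes) come first; the tag is
    # inserted right after them, before the original EtherType/length field.
    return untagged_frame[:12] + build_dot1q_tag(vlan_id, priority) + untagged_frame[12:]

# Example: tag a dummy frame for VLAN 200 with priority 5.
dummy_frame = bytes(12) + b"\x08\x00" + b"payload"
tagged = tag_frame(dummy_frame, vlan_id=200, priority=5)
print(tagged[12:16].hex())   # -> '8100a0c8'  (TPID 0x8100, then PCP=5, VID=200)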

Although most of the traffic traversing a trunk is tagged by means of
a VLAN Tag, it is possible to send untagged traffic over the
trunk if this traffic belongs to what is called the Native VLAN of the
trunk. All untagged traffic received on a trunk Ethernet link is
delivered to the stations belonging to the native VLAN.

VLAN Trunking Protocol (VTP)


When VLANs span multiple switches, the assigned VLAN numbers and
names need to be consistent across the entire switching domain. This
can be achieved either manually or by means of a protocol called
VLAN Trunking Protocol (VTP) that keeps track of the changes in
VLAN configuration and propagates them along the entire switching
infrastructure.

A VTP domain is a set of interconnected switches acting as a single
environment where VLANs are being shared.

Each switch member of a VTP domain can act in one of the
following roles:
Server: This is the default mode. VLANs can be configured and
updated on a server switch. Changes are propagated to all
members of the VTP domain.
Transparent: VLANs can be configured and updated on a
transparent switch. Changes are not propagated to the VTP
domain and have only local impact. A VTP transparent switch
does not sync its VLAN database with any other switch.
Client: VLANs cannot be configured or updated on a client
switch. A VTP client switch syncs its VLAN database with
other servers and clients.


VLAN Trunking Protocol (VTP) Operation

Every time the VLAN configuration of a VTP domain is changed, a new
version of the VLAN database is created. Each database version has a
revision number; the one with the highest revision number is the most
up to date. In a Transparent VTP switch, the VLAN database revision
number is always zero.

VTP configured switches send periodic updates over all their trunk
interfaces using VLAN 1 (known as the default VLAN) at predefined
intervals (5 minutes by default). Server switches send, in addition,
asynchronous updates whenever their VLAN configuration has been
modified. Updates use a multicast frame and contain the revision
number of the database operating on the switch sending the update.

Upon reception of an update (periodic or asynchronous) the
members of the VTP domain should act as follows (a small sketch of
this logic follows the list):

Server: If the revision number contained in the update is higher
than that of the database operating in the switch, accept the
changes and update the local database and revision number.
Forward the advertisement over the other trunk links.
Client: If the revision number contained in the update is higher
than that of the database operating in the switch, accept the
changes and update the local database and revision number.
Forward the advertisement over the other trunk links.
Transparent: Forward the advertisement over the other trunk links.
Do not update the local VLAN configuration.
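The following sketch captures the revision-number logic described above for each role. The data structures are illustrative only and do not reflect the real VTP packet format; the handling of stale (lower-revision) updates is simplified.

def handle_vtp_update(role, local_revision, local_vlans, update):
    """Return (new_revision, new_vlans, forward_update)."""
    if role == "transparent":
        # Transparent switches never touch their own VLAN database,
        # they simply relay the advertisement over their other trunks.
        return local_revision, local_vlans, True
    if role in ("server", "client"):
        if update["revision"] > local_revision:
            # Newer database: accept the changes and adopt its revision number.
            return update["revision"], update["vlans"], True
        # Older or equal revision: keep the local database (stale-update
        # behaviour is simplified here).
        return local_revision, local_vlans, True
    raise ValueError(f"unknown VTP role: {role}")

# Example: a client at revision 7 receives an update carrying revision 9.
rev, vlans, fwd = handle_vtp_update(
    "client", 7, {1: "default"}, {"revision": 9, "vlans": {1: "default", 4: "lab"}})
print(rev, vlans, fwd)   # -> 9 {1: 'default', 4: 'lab'} True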

The operation of VTP can be optimized by enabling VTP pruning.
This feature does not send VLAN updates over trunks which are not
relevant.

Imagine a change in VLAN 2 happens on a Server switch. This change
will be propagated only to those switches where VLAN 2 is defined
(ports are assigned to this VLAN), thereby reducing the traffic on
the non-relevant trunks (trunks not carrying VLAN 2).

There are three different types of VTP messages:

Summary advertisements: Sent periodically, contain the
revision number but no VLAN information.
Subset advertisements: Sent to indicate a change in the
VLAN configuration, contain a new revision number and VLAN
information.
Advertisement requests: Sent whenever a switch comes into
operation to get an update from the neighbouring switches.
In the example depicted, a new VLAN is created in one of the Server
switches of a VTP domain. The following steps take place:

1. VLAN 4 (a previously non-existing VLAN) is added to the domain.
2. The Server switch increments the revision number of its own
database.
3. The Server switch sends an update over its trunk interfaces
announcing a new revision number and a VLAN change.
4. The update reaches a transparent switch. The transparent
switch does not update its own database but forwards the
update over its trunks.
5. The update reaches client switch 1. As the update contains a
higher revision number, the client updates its own database and
revision number.
6. The client switch forwards the update.
7. The update reaches client switch 2. As the update contains a
higher revision number, the client updates its own database and
revision number.
8. The update reaches client switch 3. As the update contains a
higher revision number, the client updates its own database and
revision number.

NOTE: The diagram depicts a simplified version of the update
process. In reality more than one message is sent to propagate the
update.
The circles with a VLAN number indicate the VLANs that have ports
assigned to them in a given switch. If a change on VLAN 2 is performed
(such as updating the VLAN name), the update will be propagated along
the trunks but will never be propagated through the trunk reaching
Client 2. This information is not relevant to this client as there is no
port on VLAN 2 belonging to it. We can say that VLAN 2 information is
pruned from the trunk connecting to Client 2. This is an example of
VTP pruning in operation.
Dynamic Trunking Protocol (DTP)

Dynamic Trunking Protocol (DTP) is a Cisco proprietary protocol
used to create trunks across inter-switch links. DTP is used to
negotiate the encapsulation method of the trunk (ISL or 802.1Q).

Each trunk port can be configured in one of the different possible
modes allowed by the protocol, and this will result in a different
configuration of the trunk once the ports at each side have finished
the negotiation process. The existing port modes are as follows (a
small sketch of the resulting outcomes follows the list):

Auto: Causes the port to passively be willing to convert to a
trunk. The port will not trunk unless the neighbor is set to on or
desirable. This is the default mode. Note that auto-auto (both
ends at the default) links will not become trunks.
ON: Forces the link into permanent trunking, even if the
neighbor doesn't agree.
OFF: Forces the link into permanent non-trunking, even if the
neighbor doesn't agree.
Desirable: Causes the port to actively attempt to become a
trunk, subject to neighbor agreement (neighbor set
to on, desirable, or auto).
Nonegotiate: Forces the port to permanently trunk but not
send DTP frames. For use when the DTP frames confuse the
neighboring (non-Cisco) 802.1Q switch. The neighboring switch
must be set to trunking manually.
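The sketch below derives the trunk/no-trunk outcome for a pair of DTP modes, following the descriptions above. It is a simplification: encapsulation negotiation (ISL vs 802.1Q) and the non-Cisco-neighbor caveats of nonegotiate are ignored.

ACTIVE = {"on", "desirable"}           # modes that actively try to trunk
WILLING = {"on", "desirable", "auto"}  # modes that accept a trunk if asked

def forms_trunk(local_mode: str, remote_mode: str) -> bool:
    if "off" in (local_mode, remote_mode):
        return False                                   # off never trunks
    if "nonegotiate" in (local_mode, remote_mode):
        # Trunks only if the other end is trunking unconditionally as well.
        return {local_mode, remote_mode} <= {"nonegotiate", "on"}
    # Otherwise at least one side must initiate and the other must be willing.
    return (local_mode in ACTIVE and remote_mode in WILLING) or \
           (remote_mode in ACTIVE and local_mode in WILLING)

# auto + auto (both defaults) stays an access link, as noted above.
print(forms_trunk("auto", "auto"))        # -> False
print(forms_trunk("desirable", "auto"))   # -> True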
Spanning Tree Protocol (STP)

Redundant layer 2 networks can be built by connecting switches
over multiple Ethernet links in a way that several physical paths exist
between any two stations connected to the network. In the event of a
link failure, the redundant layer 2 infrastructure will provide an
alternate path among stations to keep the network operational.

Layer 2 redundancy is key when delivering resilient networks, but
due to the Ethernet mode of operation, a number of problems
may be seen in a redundant layer 2 network:

Broadcast Storms: Refers to the endless forwarding
of broadcast frames in a non loop-free redundant Ethernet
network.

Frame Retransmissions: Refers to the delivery of multiple
copies of a unicast frame to a single destination.

MAC database instabilities: Refers to the reception of the
same frame by a switch over multiple interfaces. As the MAC
address table is based on the detection of source MAC
addresses contained within frames, if the same frame is
received over multiple interfaces, the MAC to port mapping
cannot be built in a stable manner.
In order to avoid all the issues described above, and given that
Ethernet itself has no loop-prevention mechanism, a protocol called
Spanning Tree Protocol (STP) is used to build loop-free layer 2
topologies within a redundant Ethernet network.

Spanning Tree Protocol (STP) activates and deactivates layer 2 links to
build a dynamic loop-free tree starting at a switch designated as root
and spanning the entire switched network. The tree is dynamic in the
sense that it gets readjusted in case of link failures, always with the
principle of keeping the layer 2 infrastructure loop-free.

In the diagram we can see there are multiple physical paths between
Host A and Host B. Two of them are highlighted (green path and
purple path). In the event of failure of the link connecting switch 2 to
switch 3, the green path won't be available any more, but the network
topology offers a number of alternate paths, the purple path being one
of those.

Unless a special mechanism such as STP is implemented, all the existing
paths between Host A and Host B will be permanently available and
the issues described above will be visible and will impact the network
functionality.

Spanning Tree Protocol (STP). Broadcast storms.

One of the issues existing on a redundant layer 2 Ethernet network is
called a broadcast storm. Let us describe the issue:

1. Host A sends a broadcast frame to the network.
2. Switch 1 sends the broadcast to all its interfaces including those
connecting to Switch 2 and Switch 3.
3. Switch 2 and Switch 3 receive the frame and send the broadcast
to all their connected interfaces (except the one where the frame
came from, i.e. the one facing Switch 1): Switch 2 sends it to
Switch 3 and Switch 3 sends it to Switch 2.

At this point in time we can already see that Switch 2 and Switch 3
have each received the broadcast frame twice due to the way
broadcasts are flooded in Ethernet networks.

4. Switch 2 receives the frame from Switch 3 and forwards it
through all its interfaces (except the one where the frame
came from), including the interface connecting back to Switch 1.
5. Switch 1 receives the frame and forwards it to Switch 3.
6. Switch 3 receives the frame from Switch 2 and forwards it back
to Switch 1.
7. Switch 1 receives the broadcast frame back from Switch 2 and
Switch 3 and floods it into all its interfaces, starting the overall
process again.

At this point in time two additional issues become visible:

1. The broadcast frame gets transmitted again and again in
an endless loop.
2. As the switches are getting the same frame through multiple
interfaces, the same source MAC address is seen on all those
interfaces, forcing continuous updates of the MAC switching
table, which becomes unstable.

This endless forwarding process is called a broadcast storm and
heavily impacts the switches' memory and CPU utilization as they
cannot cope with so many copies of the same broadcast frame. In the
end this type of broadcast storm can bring the switch down.

Spanning Tree Protocol (STP) Operation (1)


In a redundant layer 2 network, some issues may happen due to the
mode of operation of Ethernet (broadcast storms, MAC database
instability, duplicate frames, etc.). Those issues could heavily impact
the performance of a switched network up to the point where it is no
longer operational. In order to avoid them, Spanning Tree Protocol
(STP) is used as described in the 802.1D standard.

Spanning Tree Protocol (STP) builds a loop-free layer 2 logical
topology (tree) over a redundant switching infrastructure:

A root node for the logical tree is selected.
All the trunk ports are taken to one of the following
states:
Forwarding (F): The port receives and sends
traffic.
Blocked (B): The port does not receive or send
traffic (with some exceptions to be discussed).
A trunk link connecting two switches will be active only if
both ports are in Forwarding state.
In the event of a link/port failure, the logical tree gets
re-calculated to provide connectivity redundancy.

In diagram A, we can see that once the STP protocol has converged:
A spanning tree rooted at S1 has been built. Only the purple
links are active while the remaining links are not utilized.
All the trunk ports are set as (F) Forwarding or (B)
Blocked. Only the links where both ports are in (F) state will
be active at a given point in time.

Imagine the link connecting S2 and S4 goes down. Initially, this event
results in switch S4 being isolated from the switching domain, as the
only active link (S2-S4) failed. STP will automatically recalculate a
new spanning tree by changing the state of the trunk ports in order to
activate or deactivate links.

Spanning Tree Protocol (STP) Operation (2)

We will describe the process used by STP to create a loop-free
spanning tree in a redundant switched network environment. Let's
define some important terms first:

Designated port (DP): The port selected to be in
Forwarding (F) state on a given network segment; there is one
DP per segment.
Root switch: The switch used as root for the logical tree
topology built by the Spanning Tree Protocol. All ports in the
root switch are designated ports, i.e. all ports are in Forwarding
(F) state.
Root port (RP): In a non-root switch, the port with
the lowest path cost (distance) towards the root switch.
Bridge ID: A switch identifier. The Bridge ID is composed of two
elements:
Priority (2 bytes): Integer number in the range from 0 to
65535, 32768 being the default value. The priority number
is used during the root bridge selection. The switch with
the lowest priority will become root switch.
Switch MAC Address (6 bytes): The MAC address
representing the entire switch (it is not an individual
interface MAC).
Port priority: Configurable per-port parameter used as a tie-
breaker during the Root Port (RP) selection process.
Possible values are 0 to 63.

Port Cost: Cost of sending a frame out of a port. The cost
depends on the speed of the link the port is connected
to. The 802.1D standard defines the default cost values for
different link types:
Ethernet 10Mbps: 100
Ethernet 100Mbps: 19
Ethernet 1Gbps: 4
Etc.
Port ID: 2-byte identifier of a port. Used when a switch
has multiple ports connected to the same segment.

Path-Cost: Is the cumulative cost of all the links traversed by a
path between two switches belonging to the same switching
domain. It is calculated as the sum of the port costs of all the
outgoing ports used to visit all the links of the path from the
source switch to the destination switch.

STP Operation:
1) Every switch in the topology sends a multicast frame called
Bridge Protocol Data Unit (BPDU) at regular intervals (2 sec.
by default). The BPDU frame contains the Bridge ID of the
switch.
2) Switches receiving a BPDU from other switches flood it to all
their connected interfaces, leaving the original Bridge ID
contained in the frame untouched.
3) Switches in the network follow certain steps in order to build
the loop-free spanning tree:
1) Switches select the Root Switch: As BPDU frames
(carrying Bridge IDs) are forwarded by all the switches
into their connected interfaces, after a period of time
every switch in the layer 2 domain is aware of all the
existing switches. If a switch has not seen any other
switch with a lower Bridge ID, it elects itself as root
switch. In case of two switches having the same priority,
the one with the lowest MAC address becomes root.

The root switch will transition all its ports into Forwarding state, i.e.
all its ports will be Designated Ports (DPs).

2) Switches select the Root Ports: Every switch in the
topology selects the port with the lowest cost (distance)
to the root switch and designates it as Root Port (RP),
taking it into Forwarding (F) state.
Every time a BPDU is sent over an outgoing interface, a BPDU field
called COST is increased by the cost of the outgoing port. As a result
of the BPDU forwarding process, a switch will receive BPDUs from the
root switch over all the redundant paths. Each BPDU will have a
different COST field representing the cumulative cost of reaching the
Root switch over the path. Switches will select as Root Port the
port with the lowest cumulative cost.

In case of symmetric networks, where the cumulative cost received
over different interfaces is the same, the port priority will be used to
select the Root Port (RP). The port having the lowest priority value
will take precedence.

3) Switches select the Designated Ports (DP): Once the
Root Ports (RPs) have been selected, the switches will
select the Designated Port (DP) for each of the network
segments. In a given segment, the DP will be selected
from the switch with the lowest cost path (distance)
towards the root switch. DPs will go into Forwarding (F)
state. All the other ports will become Non-Designated
Ports (NDPs) and will go into Blocking (B) state.

In case of symmetric networks, where the cumulative cost over
different paths is the same, the Bridge ID will be used as tie-breaker.
The Designated Port (DP) will belong to the switch with the lowest
Bridge ID.

Once the STP protocol has converged, the network will have:
A loop-free tree topology
A single Root Switch
All the ports in the Root Switch configured as DPs in Forwarding
(F) State
Each of the remaining switches selecting a single RP, the one
closest to the Root Switch
A single DP on each of the network segments (excluding the
root ports)
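The following sketch runs the first two steps (root election and root-port selection) on a toy topology. The topology, cost table and names are assumptions made up for illustration, and the port-priority and Bridge ID tie-breakers are omitted.

import heapq

PORT_COST = {10: 100, 100: 19, 1000: 4}       # 802.1D default costs by Mbps

# links: (switch_a, switch_b, link_speed_mbps) -- an assumed toy topology
links = [("S1", "S2", 100), ("S1", "S3", 100), ("S2", "S3", 1000),
         ("S2", "S4", 100), ("S3", "S4", 100), ("S2", "S5", 100),
         ("S3", "S5", 100)]
# Bridge ID = (priority, MAC); the lowest tuple wins the root election.
bridges = {"S1": (32768, "00:00:00:00:00:01"), "S2": (32768, "00:00:00:00:00:02"),
           "S3": (32768, "00:00:00:00:00:03"), "S4": (65535, "00:00:00:00:00:04"),
           "S5": (4096,  "00:00:00:00:00:05")}

root = min(bridges, key=lambda sw: bridges[sw])          # lowest Bridge ID
print("Root switch:", root)                              # -> S5

# Cumulative path cost from the root to every switch (Dijkstra over link costs),
# mimicking how the COST field grows as BPDUs are relayed away from the root.
neighbors = {}
for a, b, speed in links:
    neighbors.setdefault(a, []).append((b, PORT_COST[speed]))
    neighbors.setdefault(b, []).append((a, PORT_COST[speed]))

best_cost, heap = {root: 0}, [(0, root)]
root_port_towards = {}                                   # switch -> upstream neighbor
while heap:
    cost, sw = heapq.heappop(heap)
    if cost > best_cost.get(sw, float("inf")):
        continue                                         # stale heap entry
    for nbr, link_cost in neighbors[sw]:
        if cost + link_cost < best_cost.get(nbr, float("inf")):
            best_cost[nbr] = cost + link_cost
            root_port_towards[nbr] = sw                  # the RP faces this neighbor
            heapq.heappush(heap, (best_cost[nbr], nbr))

for sw in sorted(root_port_towards):
    print(f"{sw}: root path cost {best_cost[sw]}, root port towards {root_port_towards[sw]}")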

Let us look at the depicted topology and follow the Spanning Tree
Algorithm process:
1) Selection of root switch:
In this switching domain S5 has the lowest switch priority
and therefore will become Root switch.
All the ports of the Root switch are designated ports
and configured in Forwarding (F) state.
2) Selection of Root Ports:
All the non-root switches select as root port the one
having the lowest path distance towards the root switch
based on the link costs (LC) defined by the standard. In
the majority of cases there is a single port connecting
to the lowest cost path.
In the case of S1, the root switch can be reached through
two different paths having the same cost (S1-S2-S5 and
S1-S3-S5). The Port Priority will be used as tie breaker. In
this case Port 1 becomes the root port as it is the one
having the lowest priority value.
3) Selection of designated ports on non-root switches:
On each of the segments the selected designated port
belongs to the switch closest to the root switch:
Link S1-S3: Closest switch is S3, so S3 Port 1 becomes
the DP.
Link S1-S2: Closest switch is S2, so S2 Port 1 becomes
the DP.
Link S2-S4: Closest switch is S4, so S4 Port 1 becomes
the DP.
Link S2-S3: Closest switch is S3, so S3 Port 2 becomes
the DP.
Link S4-S3: S4 and S3 are at the same distance, so the
tie breaker is the Bridge ID. S3 Port 3 becomes the DP as
S3 has the lower priority (32768 < 65535).

Once the DPs have been selected on each of the segments, all ports
not selected as DP or RP will go into Blocking state.

Spanning Tree Protocol (STP) Operation (3)


The STP protocol builds a loop-free logical topology in a redundant
switched network. Once all the ports have reached the blocking or
forwarding state, the protocol has converged.

Convergence Time is the time required for the STP protocol to reach
a stable loop-free logical topology where all the ports in the topology
are in the forwarding or blocking state. Convergence Time in 802.1D is
30 to 50 seconds.

Once the network has converged, link or port failures will result in a
topology re-calculation, i.e. in another convergence cycle.

Each port where STP is active will go through a number of states
before reaching the final forwarding or blocking state. Those states
are as follows:

1) Disabled: The port is down or the link connected to the port is down.
2) Blocking: The port and the link connected to it come up. The port
is able to receive BPDUs but does not update the MAC address
table. A port will stay in this state for the period of time indicated
by the Max Age timer.
3) Listening: The port sends and receives BPDUs. It does not
update the MAC address table. A port will stay in this state for
the period of time indicated by the Forward Delay timer.
During the Listening state the port is actively involved in the
creation of the logical topology following a three-step process:
1. Selection of Root Switch
2. Selection of Root Ports
3. Selection of Designated Ports
4) Learning: The port sends and receives BPDUs. It updates
the MAC address table. A port will stay in this state for the
period of time indicated by the Forward Delay timer.

5) Forwarding: The port sends and receives BPDUs. The port
sends and receives user traffic. It updates the MAC address
table.

The complete logic for state transitions is depicted in the diagram and
works as follows:

1. After the switch's initialization or startup, all the ports
immediately go to a blocking state.
2. After the configured Max Age has been reached, the switch
ports transition from the blocking state to the listening state.
3. After the configured Forward Delay time has been reached,
the port enters the learning state.
4. After the configured Forward Delay has been reached in the
learning state, the port either transitions into Forwarding (F)
mode or back to blocking (B) mode.
5. If STP has decided the port will be a forwarding port, the port is
placed in forwarding mode; but if the port is a higher-cost
redundant link, the port is placed in blocking mode again.
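The following minimal sketch reproduces that timeline with the default timers; the selected_to_forward flag is a stand-in for the outcome of the STP election, not a real API.

def port_state_timeline(max_age=20, forward_delay=15, selected_to_forward=True):
    """Yield (elapsed_seconds, state) pairs for a port that has just come up."""
    yield (0, "blocking")                          # receives BPDUs only
    yield (max_age, "listening")                   # root/designated ports are elected
    yield (max_age + forward_delay, "learning")    # MAC address table is populated
    final_state = "forwarding" if selected_to_forward else "blocking"
    yield (max_age + 2 * forward_delay, final_state)

for t, state in port_state_timeline():
    print(f"t={t:>2}s  {state}")   # reaches forwarding at t=50s with the default timers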

The whole process is governed by a number of timers:

- Hello Timer: Represents the interval between two consecutive
BPDUs sent by a port (2 sec. default).
- Max Age: Represents the interval a port will wait for a
periodic BPDU from the other ports running STP. In other words,
it represents the validity time of a received BPDU (20 sec.
default).
- Forward Delay: Represents the time a port spends in each of
the listening and learning states before it is ready for
Forwarding (15 sec. default).

Convergence Time can be defined in terms of the STP timers.
Before a port can converge, it must go through a number of states
lasting:
Max Age + 2 * Forward Delay = Convergence Time
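With the default timers this gives 20 s + 2 * 15 s = 50 s, the upper bound
quoted earlier. When the Max Age wait can be skipped (for example after a
directly detected link failure), convergence takes roughly 2 * 15 s = 30 s,
which is where the 30-second lower bound comes from.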
Multi VLAN STP Protocol Extensions

In multi-VLAN environments, two flavors of
STP can be configured:

Common Spanning Tree (CST): STP is run once for all the
VLANs. All the VLANs share the same logical loop-free topology.
CST does not allow root switch load sharing but reduces
CPU and memory use on the switches as they require a single
running instance of STP.
Per-VLAN Spanning Tree + (PVST+): STP runs as many times
as there are VLANs configured in the switching environment. Every
VLAN has a different loop-free topology and the root switch
function can be shared by different switches for different VLANs.
Switches use more CPU and memory to hold the multi-STP
information.

In the case of PVST+, the Bridge ID is composed of the following
elements:

Bridge Priority: 2 bytes.
Extended System ID: Contains the VLAN ID of the instance. Every
time a BPDU is sent for a different VLAN STP instance, the
System ID is changed to reflect the VLAN ID corresponding to the
instance of STP being executed.
MAC Address: 6 bytes. MAC address representing the switch.
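As a rough sketch only, the snippet below composes a per-VLAN Bridge ID assuming the 802.1t-style extended system ID encoding, in which the 2-byte priority field is split into a 4-bit priority (a multiple of 4096) and the 12-bit VLAN ID; the function name is made up for illustration.

def pvst_bridge_id(priority: int, vlan_id: int, mac: str):
    assert priority % 4096 == 0 and 0 <= priority <= 61440, "4-bit priority, multiples of 4096"
    assert 0 < vlan_id < 4095, "VLAN IDs 0 and 4095 are reserved"
    priority_field = priority + vlan_id           # 16 bits: 4-bit priority | 12-bit VLAN ID
    return (priority_field, mac)                  # the lowest tuple wins the root election

# The same switch advertises a different Bridge ID in each VLAN's STP instance.
print(pvst_bridge_id(32768, 10, "00:11:22:33:44:55"))   # -> (32778, '00:11:22:33:44:55')
print(pvst_bridge_id(32768, 20, "00:11:22:33:44:55"))   # -> (32788, '00:11:22:33:44:55')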
There is an additional extension of STP for multi-VLAN environments
called Multiple Spanning Tree Protocol (MSTP). MSTP allows multiple
instances of STP to run in a single switched environment, but allows
each STP instance to be shared by a number of VLANs, saving CPU and
memory as well as enabling root switch load sharing.
MSTP was initially specified as 802.1s but was later incorporated into
802.1Q.

Rapid Spanning Tree Protocol (RSTP)

Rapid Spanning Tree Protocol (RSTP) is specified as 802.1w and is a
new version of STP which supersedes 802.1D while remaining backwards
compatible. RSTP converges much faster than STP.

In essence, RSTP behaves the same as STP and follows the same
convergence process:

Elects the root switch of the topology
Elects root ports on all non-root switches
Elects designated ports in each of the LAN segments
Places each physical port in Forwarding or Blocking state. In
RSTP, the Blocking state is called the Discarding state.

The main differences between STP and RSTP are as follows:

Ports can be configured to directly transition into
Forwarding State: This is very useful especially on edge
ports directly connected to stations. Running STP on those
ports does not make sense as they need to always be active
and they are not part of any redundant layer 2 link.
In order to enable a faster convergence in the event of
failure, two different loop-free topologies are built:
Active and Alternate. Ports can be part of the active
topology (RP and DP) or be part of the alternate topology
(Alternate and Backup ports).

In order for the different topologies to be elected, new port roles are
defined (Root, Designated, Alternate, Backup, Disabled) and new port
states are also used (Discarding, Learning and Forwarding).

The corresponding multi-VLAN versions of RSTP do exist: PVRST+
and MRSTP.
