Asynchronous Transfer Mode

Asynchronous Transfer Mode (ATM) represents a relatively recently developed
communications technology designed to overcome the constraints associated with
traditional, and for the most part separate, voice and data networks. ATM has its
roots in the work of a CCITT (now known as ITU-T) study group formed to
develop broadband ISDN standards during the mid-1980s. In 1988, a cell
switching technology was chosen as the foundation for broadband ISDN, and in
1991, the ATM Forum was founded.
The ATM Forum represents an international consortium of public and private
equipment vendors, data communications and telecommunications service
providers, consultants, and end users established to promote the implementation of
ATM. To accomplish this goal, the ATM Forum develops standards in cooperation
with the ITU-T and other standards organizations.
The first ATM Forum standard was released in 1992. Various ATM Forum
working groups are busy defining additional standards required to enable ATM to
provide a communications capability for the wide range of LAN and WAN
transmission schemes it is designed to support. This standardization effort will
probably continue for a considerable period because of the comprehensive design
goal of the technology, which was developed to support voice, data, and video on
both local and wide area networks.
Comparing Network Features
Feature                   Data Communications                     Telecommunications    ATM
Traffic support           Data                                    Voice                 Data, voice, video
Transmission unit         Packet                                  Frame                 Cell
Transmission length       Variable                                Fixed                 Fixed
Switching type            Packet                                  Circuit               Cell
Connection type           Connectionless or connection-oriented   Connection-oriented   Connection-oriented
Time sensitivity          None to some                            All                   Adaptive
Delivery                  Best effort                             Guaranteed            Defined class or guaranteed
Media and operating rate  Defined by protocol                     Defined by class      Scalable
Media access              Shared or dedicated                     Dedicated             Dedicated

Thus, ATM provides a mechanism for merging voice, data, and video onto LANs
and WANs. You can gain an appreciation for how ATM accomplishes this by
learning about its architecture.

Architecture
ATM is based on the switching of 53-byte cells, in which each cell consists of a 5-
byte header and a payload of 48 bytes of information. Figure 14.1 illustrates the
format of the ATM cell, including an expanded view of its 5-byte header that
identifies the fields carried in the header.

Figure 14.1: The 53-byte ATM cell.


The 4-bit Generic Flow Control (GFC) field is used as a mechanism to regulate the
flow of traffic in an ATM network between the network and the user. The use of
this field is currently under development. As we will shortly note, ATM supports
two major types of interfaces: the User-to-Network Interface (UNI) and the
Network-to-Network Interface (NNI). When a cell flows from the user to the
network or from the network to the user, it carries a GFC value. However, when a
cell flows within a network or between networks, the GFC field is not used. Instead
of being wasted, its space can be used to expand the length of the Virtual Path
Identifier field.
The 8-bit Virtual Path Identifier (VPI) field represents one half of a two-part
connection identifier used by ATM. This field identifies a virtual path that can
represent a group of virtual circuits transported along the same route. Although the
VPI is eight bits long in a UNI cell, the field expands to 12 bits in an NNI cell by
absorbing the space otherwise occupied by the Generic Flow Control field. The VPI
is described in more detail later in this chapter.
The Virtual Channel Identifier (VCI) is the second half of the two-part connection
identifier carried in the ATM header. The 16-bit VCI field identifies a connection
between two ATM stations communicating with one another for a specific type of
application. Multiple virtual channels (VCs) can be transported within one virtual
path. For example, one VC could be used to transport a disk backup operation,
while a second VC is used to transport a TCP/IP-based application. The virtual
channel represents a one-way cell transport facility. Thus, for each of the
previously described operations, another series of VCIs is established from the
opposite direction. You can view a virtual channel as an individual one-way end-
to-end circuit, whereas a virtual path that can represent a collection of virtual
channels can be viewed as a network trunk line. Once data is within an ATM
network, the VPI is used to route a common group of virtual channels between
switches; each switch need only examine the value of the VPI. Later in this
chapter, you will examine the use of the VCI.
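
As a rough illustration of the connection-oriented forwarding just described, the
sketch below models a switch's translation table as a Python dictionary. The table
entries, port numbers, and VPI/VCI values are hypothetical; a real switch populates
such a table during connection setup.

```python
# Minimal sketch of connection-oriented cell forwarding in an ATM switch.
# The entries below are hypothetical; a real switch builds this table during
# connection (signaling) setup.

# Key:   (input port, incoming VPI, incoming VCI)
# Value: (output port, outgoing VPI, outgoing VCI)
forwarding_table = {
    (1, 5, 32): (3, 7, 48),   # one virtual channel within virtual path 5
    (1, 5, 33): (3, 7, 49),   # a second channel following the same path
}

def forward_cell(in_port: int, vpi: int, vci: int) -> tuple:
    """Look up the outgoing port and rewrite the VPI/VCI for one cell."""
    return forwarding_table[(in_port, vpi, vci)]

print(forward_cell(1, 5, 32))   # -> (3, 7, 48)
```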
The Payload Type Identifier (PTI) field indicates the type of information carried in
the 48-byte data portion of the ATM cell. Currently, this 3-bit field indicates
whether payload data represents management information or user data. Additional
PTI field designators have been reserved for future use.
The 1-bit Cell Loss Priority (CLP) field indicates the relative importance of the
cell. If this bit is set to 1, the cell can be discarded by a switch experiencing
congestion; cells that should not be discarded carry a CLP bit value of 0.
The last field in the ATM cell header is the 8-bit Header Error Control (HEC) field.
This field holds the result of an 8-bit Cyclic Redundancy Check (CRC) code
computed over the first four bytes of the cell header. The HEC allows a receiver to
detect all single-bit errors and certain multiple-bit errors that occur in the 40-bit
ATM cell header.
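
The following sketch packs the header fields described above into the 5-byte UNI
format and computes the HEC. The bit widths follow the text; the CRC-8 generator
x^8 + x^2 + x + 1 and the final XOR with 0x55 follow the commonly cited ITU-T
I.432 convention and should be verified against the standard before relying on them.

```python
# Rough sketch of building a 5-byte UNI cell header from the fields above:
# GFC 4 bits, VPI 8 bits, VCI 16 bits, PTI 3 bits, CLP 1 bit, HEC 8 bits.
# The CRC-8 generator x^8 + x^2 + x + 1 and the final XOR with 0x55 follow
# the commonly cited ITU-T I.432 convention; verify before relying on it.

def hec(first_four_bytes: bytes) -> int:
    """CRC-8 (polynomial 0x07) over the first four header bytes, XOR 0x55."""
    crc = 0
    for byte in first_four_bytes:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

def build_uni_header(gfc: int, vpi: int, vci: int, pti: int, clp: int) -> bytes:
    """Pack the five fields into four bytes, then append the computed HEC."""
    word = ((gfc & 0xF) << 28) | ((vpi & 0xFF) << 20) | ((vci & 0xFFFF) << 4) \
         | ((pti & 0x7) << 1) | (clp & 0x1)
    first_four = word.to_bytes(4, "big")
    return first_four + bytes([hec(first_four)])

header = build_uni_header(gfc=0, vpi=5, vci=32, pti=0, clp=0)
print(header.hex())   # five bytes: four header bytes plus the HEC byte
```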
Advantages of the Technology
The use of cell-switching technology in a LAN environment provides some distinct
advantages over the shared-medium technology employed by Ethernet, token-ring,
and FDDI networks. Two of those advantages are obtaining full bandwidth access
to ATM switches for individual workstations and enabling attached devices to
operate at different rates. Those advantages are illustrated in Figure 14.2,
which shows an ATM switch that could be used to support three distinct operating
rates. Workstations could be connected to the switch at 25 Mbps, a local server
could be connected at 155 Mbps, and a third rate could be used to connect to other
switches, either to form a larger local network or to reach a communications
carrier's network.
The selection of a 53-byte cell length results in minimal latency in comparison to
the frames of traditional LANs, such as Ethernet, which can
have a maximum 1526-byte frame length. Because the ATM cell is always 53
bytes in length, cells transporting voice, data, and video can be intermixed without
the latency of one cell adversely affecting other cells. Because the length of each
cell is fixed and the position of information in each header is known, ATM
switching can be accomplished via the use of hardware. In comparison, on
traditional LANs, bridging and routing functions are normally performed by
software or firmware, which executes more slowly than hardware-based switching.
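
A quick back-of-the-envelope calculation illustrates the latency argument. Assuming
a 155.52 Mbps link (the nominal OC-3/STM-1 line rate, used here purely for
illustration), a cell that must wait behind one maximum-length Ethernet frame is
delayed by roughly 78 microseconds, versus about 2.7 microseconds behind another cell.

```python
# Back-of-the-envelope serialization-delay comparison: how long one 53-byte
# cell versus one maximum-length 1526-byte Ethernet frame occupies the link.
# The 155.52 Mbps figure is the nominal OC-3/STM-1 line rate, chosen only
# for illustration.

LINK_RATE_BPS = 155.52e6

def serialization_delay_us(length_bytes: int, rate_bps: float) -> float:
    """Time to clock the given number of bytes onto the link, in microseconds."""
    return length_bytes * 8 / rate_bps * 1e6

print(f"ATM cell:       {serialization_delay_us(53, LINK_RATE_BPS):.2f} us")
print(f"Ethernet frame: {serialization_delay_us(1526, LINK_RATE_BPS):.2f} us")
```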

ATM is based on the switching of 53-byte cells.

Two additional features of ATM that warrant discussion are its asynchronous
operation and its connection-oriented operation. ATM cells are intermixed via
multiplexing, and cells from individual connections are forwarded from switch to
switch via a single-cell flow. However, the multiplexing of ATM cells occurs via
asynchronous transfer, in which cells are transmitted only when data is present
to send. In comparison, in conventional time division multiplexing, keep-alive or
synchronization bytes are transmitted when there is no data to be sent. Concerning
the connection-oriented technology used by ATM, this means that a connection
between the ATM stations must be established before data transfer occurs. The
connection process results in the specification of a transmission path between
ATM switches and end stations, enabling the header in ATM cells to be used to
route the cells on the required path through an ATM network.
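
The sketch below contrasts the two multiplexing styles in a simplified way. The
sources and their activity patterns are invented for illustration; the point is only
that the TDM stream must fill idle slots, whereas the ATM-style stream emits cells
only when a source has data to send.

```python
# Simplified comparison of time division multiplexing (TDM) with the
# asynchronous multiplexing used by ATM. Sources and activity patterns are
# invented; 1 means the source has data ready in that time slot.

sources = {
    "voice": [1, 0, 1, 0, 0, 1],
    "data":  [0, 1, 0, 0, 1, 0],
    "video": [1, 1, 1, 0, 0, 0],
}

tdm_stream, atm_stream = [], []
for slot in range(6):
    for name, activity in sources.items():
        if activity[slot]:
            tdm_stream.append(name)        # slot carries real data
            atm_stream.append(name)        # a cell is sent only when data exists
        else:
            tdm_stream.append("idle")      # TDM must still fill the slot

print("TDM slots used:", len(tdm_stream))  # 18 slots, many of them idle
print("ATM cells sent:", len(atm_stream))  # 8 cells, one per slot with data
```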
The ATM Protocol Reference Model
Three layers in the ATM architecture form the basis for the ATM Protocol
Reference model, illustrated in Figure 14.5. Those layers are the Physical layer, the
ATM layer, and the ATM Adaptation layer.
ATM CLASS OF SERVICE (COS) AND QUALITY OF SERVICE (QOS)

ATM networks employ Classes of Service (CoS) for optimizing network
performance and supporting applications with specified bandwidth or throughput
requirements. ATM service classes resolve congestion problems and traffic
management issues in order to ensure seamless transmission in multivendor
environments. A Class of Service (CoS) refers to a category of ATM connections
that features identical traffic patterns and resource requirements. Each class
provisions a distinct level of service and associated QoS guarantees. Depending
upon the QoS requested, the ATM network defines a series of
CoS categories. The Variable Bit Rate (VBR) Class of Service covers
applications with specific requirements for delays and throughputs such as
packetized voice and data applications. The real-time Variable Bit Rate (VBR-rt)
Class of Service provides real-time support for applications such as
video-on-demand (VOD) and voice-over-IP (VoIP). VBR-rt bandwidth
requirements vary over time. However, delay and delay variance limits are clearly
established.

The non-real-time Variable Bit Rate (VBR-nrt) Class of Service serves applications
with minimal service requirements that do not need real-time delivery guarantees,
such as multimedia e-mail, bulk file transfers, and business and educational
database transactions. Bandwidth for VBR-nrt applications varies within a
specified range. However, delay and delay variance requirements are not fully
defined. The Available Bit Rate (ABR) Class of Service requires the use of flow
control mechanisms for ensuring allocation of bandwidth on-demand for non-real-
time, mission-critical applications. With ABR applications, guaranteed minimum
transmission rates are specified for the duration of the connection. In addition,
ABR also establishes peak transmission rates for data bursts when bandwidth is
available. As a consequence, the ABR service class tolerates delay variations.
Applications grouped into this category allow priority traffic to consume
bandwidth first. ABR applications include LAN emulation (LANE), file and data
distribution, and LAN interconnections.
The Unspecified Bit Rate (UBR) Class of Service is equivalent to best-effort
delivery in IP networks. Delay-tolerant UBR applications include Web browsing
and IP transmissions. Because UBR applications require minimal network support,
QoS guarantees and pre-established throughput levels are not defined. The
Constant Bit Rate (CBR) Class of Service (CoS) requires utilization of a virtual
channel with constant bandwidth for seamlessly transporting applications in
accordance with pre-defined response time requirements. CBR applications include
videoconferencing, telephony services, and television broadcasts.
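
As a non-normative summary of the categories above, the following sketch collects
each class's key attributes in a small lookup structure. The attribute names and
example uses are drawn from the text; the precise parameter sets are defined in the
ATM Forum traffic management specifications.

```python
# Non-normative summary of the service categories described above. The
# attribute choices mirror the text; exact parameters are defined by the
# ATM Forum traffic management specifications.

from dataclasses import dataclass

@dataclass
class ServiceCategory:
    name: str
    real_time: bool            # delay and delay-variation bounds apply
    bandwidth_commitment: str  # how bandwidth is committed
    example_use: str

SERVICE_CATEGORIES = [
    ServiceCategory("CBR",     True,  "constant reserved rate",               "telephony, broadcast video"),
    ServiceCategory("VBR-rt",  True,  "variable within agreed bounds",        "video-on-demand, VoIP"),
    ServiceCategory("VBR-nrt", False, "variable within agreed bounds",        "multimedia e-mail, bulk transfer"),
    ServiceCategory("ABR",     False, "minimum rate plus available capacity", "LAN emulation, file distribution"),
    ServiceCategory("UBR",     False, "none (best effort)",                   "Web browsing, IP transport"),
]

for category in SERVICE_CATEGORIES:
    print(f"{category.name:8} real-time={category.real_time}  bandwidth={category.bandwidth_commitment}")
```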

In conjunction with establishing a CoS, ATM networks define cell rates and burst
size to facilitate seamless network performance. For example, Peak Cell Rate
(PCR) indicates the maximum rate at which cells transit the network for brief time
periods. Sustainable Cell Rate (SCR) refers to the average cell rate that a connection
can sustain over a longer period of time. Maximum Burst Size (MBS) defines the
maximum number of back-to-back cells that can transit the network at the peak cell
rate.
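
These parameters are typically enforced with the Generic Cell Rate Algorithm
(GCRA), sketched below in its virtual-scheduling form. The example contract values,
time units, and the burst-tolerance formula (MBS - 1)(1/SCR - 1/PCR) are
illustrative and should be checked against the ATM Forum traffic management
specification.

```python
# Simplified sketch of how PCR, SCR, and MBS translate into policing, using
# the Generic Cell Rate Algorithm (GCRA) in its virtual-scheduling form.
# Units (seconds, cells/second) and the example parameters are illustrative.

class GCRA:
    def __init__(self, increment: float, limit: float):
        self.T = increment      # expected inter-cell spacing (1 / cell rate)
        self.tau = limit        # tolerance (burst allowance)
        self.tat = 0.0          # theoretical arrival time of the next cell

    def conforming(self, arrival_time: float) -> bool:
        """Return True if a cell arriving now conforms to the traffic contract."""
        if arrival_time < self.tat - self.tau:
            return False        # cell arrived too early: non-conforming
        self.tat = max(arrival_time, self.tat) + self.T
        return True

# Example contract: PCR = 1000 cells/s, SCR = 200 cells/s, MBS = 20 cells.
pcr, scr, mbs = 1000.0, 200.0, 20
peak_policer = GCRA(increment=1 / pcr, limit=0.0)
# Burst tolerance for the sustainable-rate bucket, derived from MBS:
sustain_policer = GCRA(increment=1 / scr, limit=(mbs - 1) * (1 / scr - 1 / pcr))

t = 0.0
ok = peak_policer.conforming(t) and sustain_policer.conforming(t)
print("first cell conforming:", ok)
```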
ATM APPLICATIONS
ATM is a connection-oriented virtual network transmission and switching
technology that combines the low-delay of circuit-switched networks with the
bandwidth flexibility and high-speed of packet-switched networks. ATM is an
enabler of basic and advanced applications such as remote sensing, 3-D (three-
dimensional) interactive simulations, tele-instruction, biological teleresearch, and
medical teleconsultations. Edge devices at the boundary of an ATM network
convert non-ATM traffic streams into standard ATM cells.
ATM technology is implemented in backbone, enterprise, and edge switches as
well as hubs, routers, bridges, multiplexers, servers, server farms, and NICs
(Network Interface Cards) in high-end Internet appliances. The ATM Data
Exchange Interface (DXI) enables fast access to public network services. A
flexible and extendible networking solution, ATM technology supports network
configurations that include DANs (Desk Area Networks), LANs, MANs
(Metropolitan Area Networks), WANs (Wide Area Networks), and GANs (Global
Area Networks).

Switching techniques
Circuit switching is defined as a mechanism applied in telecommunications
(mainly in the PSTN) whereby the user is allocated the full use of the
communication channel for the duration of the call.
That is, if two parties wish to communicate, the calling party first dials the
number of the called party. Once the number is dialed, the originating
exchange finds a path to the terminating exchange, which in turn finds the
called party.

After the circuit or channel has been set up, communication takes place; once the
parties are through, the channel is cleared. This mechanism is referred to as being
connection-oriented.

Advantages of Circuit Switching:

* Once the circuit has been set up, communication is fast and without error.

* It is highly reliable.

Disadvantages:

* Involves a lot of overhead during channel setup.

* Wastes a lot of bandwidth, especially in speech, where a user is sometimes
listening and not talking.

* Channel setup may take a long time.

To overcome the disadvantages of circuit switching, packet switching was
introduced; instead of dedicating a channel to only two parties for the duration
of the call, it routes packets individually as they become available. This
mechanism is referred to as being connectionless.

Packet switching
Packet switching refers to protocols in which messages are broken up into small
packets before they are sent. Each packet is transmitted individually across the
network and may even follow a different route to the destination. Thus, each packet
carries header information about the source, destination, packet numbering, and so
on. At the destination, the packets are reassembled into the original message. Most
modern wide area network (WAN) protocols, such as TCP/IP, X.25, and Frame Relay,
are based on packet-switching technologies.
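
A toy sketch of the packetization and reassembly just described appears below. The
header fields (source, destination, sequence number) are invented for illustration;
real protocols such as IP define their own header layouts.

```python
# Toy sketch of packetization and reassembly. The header format (source,
# destination, sequence number) is invented for illustration only.

def packetize(message: bytes, src: str, dst: str, payload_size: int = 4):
    """Split a message into numbered packets, each carrying a small header."""
    return [
        {"src": src, "dst": dst, "seq": i, "data": message[offset:offset + payload_size]}
        for i, offset in enumerate(range(0, len(message), payload_size))
    ]

def reassemble(packets) -> bytes:
    """Rebuild the original message, even if packets arrived out of order."""
    return b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize(b"hello, packet switching", src="A", dst="B")
packets.reverse()                     # simulate out-of-order arrival
print(reassemble(packets))            # b'hello, packet switching'
```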
Packet switching's main difference from Circuit Switching is that the communication
lines are not dedicated to passing messages from the source to the destination. In
Packet Switching, different messages (and even different packets) can pass through
different routes, and when there is "dead time" in the communication between the
source and the destination, the lines can be used by other traffic.

Circuit Switching is ideal when data must be transmitted quickly, must arrive in
sequence, and must arrive at a constant rate. Thus, Circuit Switching networks are
used when transmitting real-time data, such as audio and video. Packet Switching is
more efficient and robust for data that is bursty in its nature and can withstand
delays in transmission, such as e-mail messages and Web pages.

Message switching
Message switching, often known as store-and-forward switching, accepts data
messages, stores them, and then forwards each message to the next
destination or node when the circuits become available. Message switching is a
message-passing system in which the complete message is received before it is
passed on to the next node. This means that each message uses at most one link at
any given time but may require more storage buffers on intermediate nodes. Telex
and “torn-tape” systems are good examples of message switching. Message switching
does not operate in real time.
Switching Method     Advantages                            Disadvantages
Message Switching    Cost effective for leased service     Problematic survivability
                     of low volume                         Needs large storage buffers
                     Very efficient trunk utilization      Delivery delay
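
To round out the comparison, the sketch below mimics store-and-forward behavior:
each node must hold the complete message before relaying it toward the destination.
The node names and route are hypothetical.

```python
# Toy sketch of store-and-forward operation: each intermediate node receives
# and stores the complete message before relaying it. Node names and the
# route are hypothetical.

def store_and_forward(message: bytes, route: list) -> None:
    """Relay a complete message hop by hop along the given route."""
    for sender, receiver in zip(route, route[1:]):
        stored = message                           # whole message buffered at the node
        print(f"{sender} -> {receiver}: forwarding {len(stored)} bytes")

store_and_forward(b"TELEX MESSAGE", route=["A", "B", "C", "D"])
```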
