
GSM band allocations

There is a total of fourteen different recognised GSM frequency bands. These are defined in 3GPP
TS 45.005.
BAND      UPLINK (MHZ)        DOWNLINK (MHZ)      COMMENTS
380       380.2 - 389.8       390.2 - 399.8
410       410.2 - 419.8       420.2 - 429.8
450       450.4 - 457.6       460.4 - 467.6
480       478.8 - 486.0       488.8 - 496.0
710       698.0 - 716.0       728.0 - 746.0
750       747.0 - 762.0       777.0 - 792.0
810       806.0 - 821.0       851.0 - 866.0
850       824.0 - 849.0       869.0 - 894.0
900       890.0 - 915.0       935.0 - 960.0       P-GSM, i.e. Primary or standard GSM allocation
900       880.0 - 915.0       925.0 - 960.0       E-GSM, i.e. Extended GSM allocation
900       876.0 - 915.0       921.0 - 960.0       R-GSM, i.e. Railway GSM allocation
900       870.4 - 876.0       915.4 - 921.0       T-GSM
1800      1710.0 - 1785.0     1805.0 - 1880.0
1900      1850.0 - 1910.0     1930.0 - 1990.0

GSM frequency band usage


The usage of the different frequency bands varies around the globe although there is a large
degree of standardisation. The GSM frequencies available depend upon the regulatory
requirements for the particular country and the ITU (International Telecommunications Union)
region in which the country is located.
As a rough guide Europe tends to use the GSM 900 and 1800 bands as standard. These bands are
also generally used in the Middle East, Africa, Asia and Oceania.
In North America the USA uses both the 850 and 1900 MHz bands; the actual band used depends upon the area and is determined by the regulatory authorities. In Canada the 1900 MHz band is the primary one, particularly in urban areas, with 850 MHz used as a backup in rural areas.
For Central and South America, the GSM 850 and 1900 MHz frequency bands are the most widely
used although there are some areas where other frequencies are used.

GSM multiband phones


In order that cell phone users are able to take advantage of the roaming facilities offered by GSM, it is necessary that the cellphones are able to cover the bands of the countries being visited. Today most phones support operation on multiple bands and are known as multi-band phones. Typically standard phones are dual-band: for Europe, the Middle East, Asia and Oceania these operate on the GSM 900 and 1800 bands, while for North America dual-band phones operate on the GSM 850 and 1900 frequency bands.
To provide better roaming coverage, tri-band and quad-band phones are also available. European tri-band phones typically cover the GSM 900, 1800 and 1900 bands, giving good coverage in Europe as well as moderate coverage in North America. Similarly, North American tri-band phones use the 850, 1800 and 1900 GSM frequencies. Quad-band phones are also available covering the 850, 900, 1800 and 1900 MHz GSM frequency bands, i.e. the four major bands, thereby allowing global use.

GSM Power class


Not all mobiles have the same maximum power output level. So that the base station knows the highest power level number it can instruct a mobile to use, it must know the maximum power that the mobile can transmit. This is achieved by allocating a GSM power class number to each mobile. The GSM power class number indicates to the base station the maximum power the mobile can transmit, and hence the maximum power level number the base station can instruct it to use.
The GSM power classes vary according to the band in use.
GSM POWER       GSM 900                     GSM 1800                    GSM 1900
CLASS NUMBER    POWER LEVEL / MAX OUTPUT    POWER LEVEL / MAX OUTPUT    POWER LEVEL / MAX OUTPUT
1               -                           PL0 / 30 dBm (1 W)          PL0 / 30 dBm (1 W)
2               PL2 / 39 dBm (8 W)          PL3 / 24 dBm (250 mW)       PL3 / 24 dBm (250 mW)
3               PL3 / 37 dBm (5 W)          PL29 / 36 dBm (4 W)         PL30 / 33 dBm (2 W)
4               PL4 / 33 dBm (2 W)          -                           -
5               PL5 / 29 dBm (800 mW)       -                           -
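The dBm and watt figures in the table are related by the standard conversion P(W) = 10^((dBm - 30) / 10). The following minimal Python sketch (the helper name is illustrative, and the values are simply taken from the table above) can be used to spot-check the entries:

    # Convert a transmit power in dBm to watts: P(W) = 10 ** ((dBm - 30) / 10)
    def dbm_to_watts(dbm: float) -> float:
        return 10 ** ((dbm - 30) / 10)

    # Spot-check some table entries: 39 dBm -> ~8 W, 33 dBm -> ~2 W, 24 dBm -> ~250 mW
    for dbm in (39, 37, 33, 30, 29, 24):
        print(f"{dbm} dBm = {dbm_to_watts(dbm):.3f} W")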

GSM power amplifier design considerations


One of the main considerations for the RF power amplifier design in any mobile phone is its
efficiency. The RF power amplifier is one of the major current consumption areas. Accordingly, to
ensure long battery life it should be as efficient as possible.
It is also worth remembering that as a mobile may only transmit for one eighth of the time, i.e. in its allocated slot which is one of eight, the average power is an eighth of the maximum.
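As a rough illustration of this one-in-eight duty cycle, the short Python sketch below (an approximation that ignores power control and discontinuous transmission) estimates the average power for a hypothetical GSM 900 class 4 mobile:

    # Average transmit power with a 1-in-8 TDMA duty cycle
    # (approximation: ignores power control and discontinuous transmission)
    peak_w = 2.0                # class 4 GSM 900 mobile: 33 dBm / 2 W peak
    average_w = peak_w / 8      # the mobile transmits in one slot out of eight
    print(f"Average power: {average_w * 1000:.0f} mW")   # -> 250 mW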

GSM system overview


The GSM system was designed as a second generation (2G) cellular phone technology. One of the
basic aims was to provide a system that would enable greater capacity to be achieved than the
previous first generation analogue systems. GSM achieved this by using a digital TDMA (time division multiple access) approach. By adopting this technique more users could be accommodated
within the available bandwidth. In addition to this, ciphering of the digitally encoded speech was
adopted to retain privacy. Using the earlier analogue cellular technologies it was possible for
anyone with a scanner receiver to listen to calls and a number of famous personalities had been
"eavesdropped" with embarrassing consequences.

GSM services
Speech or voice calls are obviously the primary function for the GSM cellular system. To achieve
this the speech is digitally encoded and later decoded using a vocoder. A variety of vocoders are
available for use, being aimed at different scenarios.
In addition to the voice services, GSM cellular technology supports a variety of other data services.
Although their performance is nowhere near the level of those provided by 3G, they are
nevertheless still important and useful. A variety of data services are supported with user data
rates up to 9.6 kbps. Services including Group 3 facsimile, videotex and teletex can be supported.
One service that has grown enormously is the short message service. Developed as part of the
GSM specification, it has also been incorporated into other cellular technologies. It can be thought
of as being similar to the paging service but is far more comprehensive, allowing bi-directional messaging and store and forward delivery, and it also allows alphanumeric messages of a reasonable length. This service has become particularly popular, initially with the young, as it provided a simple service at a low fixed cost.
GSM basics
The GSM cellular technology had a number of design aims when the development started:

• It should offer good subjective speech quality
• It should have a low phone or terminal cost
• Terminals should be able to be handheld
• The system should support international roaming
• It should offer good spectral efficiency
• The system should offer ISDN compatibility

The resulting GSM cellular technology that was developed provided for all of these. The overall system definition for GSM describes not only the air interface but also the network or infrastructure technology. By adopting this approach it is possible to define the operation of the whole network, enabling international roaming as well as allowing network elements from different manufacturers to operate alongside each other, although this interoperability is not complete, especially with older items.
GSM cellular technology uses 200 kHz RF channels. These are time division multiplexed to enable
up to eight users to access each carrier. In this way it is a TDMA / FDMA system.
The base transceiver stations (BTS) are organised into small groups, controlled by a base station
controller (BSC) which is typically co-located with one of the BTSs. The BSC with its associated
BTSs is termed the base station subsystem (BSS).
Further into the core network is the main switching area. This is known as the mobile switching
centre (MSC). Associated with it are the location registers, namely the home location register (HLR) and the visitor location register (VLR), which track the location of mobiles and enable calls to be routed to them. Additionally there are the Authentication Centre (AuC) and the Equipment Identity Register (EIR), which are used in authenticating the mobile before it is allowed onto the network and for billing. The operation of these is explained in the following pages.
Last but not least is the mobile itself. Often termed the ME or mobile equipment, this is the item
that the end user sees. One important feature that was first implemented on GSM was the use of a
Subscriber Identity Module (SIM). This card carried with it the user's identity and other information, allowing the user to upgrade a phone very easily while retaining the same identity on the network. It was also used to store other information such as the "phone book". This item alone
has allowed people to change phones very easily, and this has fuelled the phone manufacturing
industry and enabled new phones with additional features to be launched. This has allowed mobile
operators to increase their average revenue per user (ARPU) by ensuring that users are able to
access any new features that may be launched on the network requiring more sophisticated
phones.

GSM system overview


The table below summarises the main points of the GSM system specification, showing some of the
highlight features of technical interest.
SPECIFICATION SUMMARY FOR GSM CELLULAR SYSTEM
Multiple access technology            FDMA / TDMA
Duplex technique                      FDD
Uplink frequency band                 890 - 915 MHz (basic 900 MHz band only)
Downlink frequency band               935 - 960 MHz (basic 900 MHz band only)
Channel spacing                       200 kHz
Modulation                            GMSK
Speech coding                         Various - original was RPE-LTP/13
Speech channels per RF channel        8
Channel data rate                     270.833 kbps
Frame duration                        4.615 ms
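The channel data rate and frame duration in the table are directly related: a slot is 156.25 bit periods long and a frame contains eight slots, as detailed later in this tutorial. A small sketch of the arithmetic, using the exact gross bit rate of 1 625 000 / 6 bps:

    # Derive the GSM frame duration from the gross bit rate and slot length
    bit_rate = 1_625_000 / 6            # gross channel rate ~ 270.833 kbps
    bits_per_slot = 156.25              # one slot = 156.25 bit periods
    bits_per_frame = 8 * bits_per_slot  # eight slots per TDMA frame = 1250 bits
    frame_ms = bits_per_frame / bit_rate * 1000
    print(f"{bit_rate / 1000:.3f} kbps, frame = {frame_ms:.3f} ms")
    # -> 270.833 kbps, frame = 4.615 ms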
GSM summary
The GSM system is the most successful cellular telecommunications system to date. With
subscriber numbers running into billions and still increasing, it has been proved to have met its
requirements. Further pages of this GSM tutorial or overview detail many of the GSM basics from
the air interface, frame and slot structures to the logical and physical channels as well as details
about the GSM network.
Today the GSM cell or mobile phone system is the most popular in the world. GSM handsets are
widely available at good prices and the networks are robust and reliable. The GSM system is also
feature-rich with applications such as SMS text messaging, international roaming, SIM cards and
the like. It is also being enhanced with technologies including GPRS and EDGE. To achieve this
level of success has taken many years and is the result of both technical development and
international cooperation. The GSM history can be seen to be a story of cooperation across Europe,
and one that nobody thought would lead to the success that GSM is today.
The first cell phone systems that were developed were analogue systems. Typically they used
frequency-modulated carriers for the voice channels and data was carried on a separate shared
control channel. When compared to the systems employed today these systems were
comparatively straightforward and as a result a vast number of systems appeared. Two of the
major systems that were in existence were the AMPS (Advanced Mobile Phone System) that was
used in the USA and many other countries and TACS (Total Access Communications System) that
was used in the UK as well as many other countries around the world.
Another system that was employed, and was in fact the first system to be commercially deployed
was the Nordic Mobile Telephone system (NMT). This was developed by a consortium of companies
in Scandinavia and proved that international cooperation was possible.
The success of these systems proved to be their downfall. The use of all the systems installed
around the globe increased dramatically and the effects of the limited frequency allocations were
soon noticed. To overcome these a number of actions were taken. A system known as E-TACS or
Extended-TACS was introduced giving the TACS system further channels. In the USA another
system known as Narrowband AMPS (NAMPS) was developed.

New approaches
Neither of these approaches proved to be the long-term solution as cellular technology needed to
be more efficient. With the experience gained from the NMT system, showing that it was possible
to develop a system across national boundaries, and with the political situation in Europe lending
itself to international cooperation it was decided to develop a new Pan-European System.
Furthermore it was realized that economies of scale would bring significant benefits. This was the
beginnings of the GSM system.
To achieve the basic definition of a new system a meeting was held in 1982 under the auspices of the Conference of European Posts and Telegraphs (CEPT). This formed a study group called the Groupe Special Mobile (GSM) to study and develop a pan-European public land mobile system. Several basic criteria that the new cellular technology would have to meet were set down. These included: good subjective speech quality, low terminal and service cost, support for international roaming, ability to support handheld terminals, support for a range of new services and facilities, spectral efficiency, and finally ISDN compatibility.
With the levels of under-capacity being projected for the analogue systems, this gave a real sense
of urgency to the GSM development. Although decisions about the exact nature of the cellular
technology were not taken at an early stage, all parties involved had been working toward a digital
system. This decision was finally made in February 1987. This gave a variety of advantages.
Greater levels of spectral efficiency could be gained, and in addition to this the use of digital
circuitry would allow for higher levels of integration in the circuitry. This in turn would result in
cheaper handsets with more features. Nevertheless significant hurdles still needed to be
overcome. For example, many of the methods for encoding the speech within a sufficiently narrow
bandwidth needed to be developed, and this posed a significant risk to the project. Nevertheless
the GSM system had been started.

GSM launch dates


Work continued and a launch date of 1991 was set for an initial service using the new cellular technology with limited coverage and capability, to be followed by a complete roll out of the service in major European cities by 1993 and linking of the areas by 1995.
Meanwhile technical development was taking place. Initial trials had shown that time division multiple access techniques offered the best performance with the technology that would be available. This approach had the support of the major manufacturing companies, which helped ensure that sufficient equipment in terms of handsets, base stations and network infrastructure for GSM would be available.
Further impetus was given to the GSM project when in 1989 the responsibility was passed to the
newly formed European Telecommunications Standards Institute (ETSI). Under the auspices of
ETSI the specification took place. It provided functional and interface descriptions for each of the
functional entities defined in the system. The aim was to provide sufficient guidance for
manufacturers that equipment from different manufacturers would be interoperable, while not
stopping innovation. The result of the specification work was a set of documents extending to
more than 6000 pages. Nevertheless the resultant phone system provided a robust, feature-rich
system. The first roaming agreement was signed between Telecom Finland and Vodafone in the
UK. Thus the vision of a pan-European network was fast becoming a reality. However this took
place before any networks went live.
The aim to launch GSM by 1991 proved to be a target that was too tough to meet. Terminals
started to become available in mid 1992 and the real launch took place in the latter part of that
year. With such a new service many were sceptical as the analogue systems were still in
widespread use. Nevertheless by the end of 1993 GSM had attracted over a million subscribers
and there were 25 roaming agreements in place. The growth continued and the next million
subscribers were soon attracted.

Global GSM usage


Originally GSM had been planned as a European system. However the first indication that the success of GSM was spreading further afield occurred when the Australian network provider, Telstra, signed the GSM Memorandum of Understanding.

Frequencies
Originally it had been intended that GSM would operate on frequencies in the 900 MHz cellular band. In September 1993, the British operator Mercury One-to-One launched a network. Termed DCS 1800, it operated at frequencies in a new 1800 MHz band. By adopting new frequencies, new operators and further competition were introduced into the market, in addition to allowing additional spectrum to be used and further increasing the overall capacity. This trend was followed in many countries, and soon the term DCS 1800 was dropped in favour of calling it GSM, as it was purely the same cellular technology but operating in a different frequency band. In view of the higher frequency used, the distances the signals travelled were slightly shorter, but this was compensated for by additional base stations.
In the USA, a portion of spectrum at 1900 MHz was also allocated for cellular usage in 1994. The licensing body, the FCC, did not legislate which technology should be used, and accordingly this enabled GSM to gain a foothold in the US market. This system was known as PCS 1900 (Personal Communication System).

GSM success
With GSM being used in many countries outside Europe this reflected the true nature of the name
which had been changed from Groupe Special Mobile to Global System for Mobile communications.
The number of subscribers grew rapidly and by the beginning of 2004 the total number of GSM
subscribers reached 1 billion. Attaining this figure was celebrated at the Cannes 3GSM conference
held that year. Figures continued to rise, reaching and then well exceeding the 3 billion mark. In
this way the history of GSM has shown it to be a great success.
The GSM technical specifications define the different elements within the GSM network architecture, along with the ways in which they interact to enable the overall network operation to be maintained.
The GSM network architecture is now well established and with the other later cellular systems
now established and other new ones being deployed, the basic GSM network architecture has been
updated to interface to the network elements required by these systems. Despite the
developments of the newer systems, the basic GSM network architecture has been maintained,
and the elements described below perform the same functions as they did when the original GSM
system was launched in the early 1990s.
GSM network architecture elements
The GSM network architecture as defined in the GSM specifications can be grouped into four main
areas:

• Mobile station (MS)
• Base station subsystem (BSS)
• Network and Switching Subsystem (NSS)
• Operation and Support Subsystem (OSS)

Simplified GSM Network Architecture

Mobile station
Mobile stations (MS), mobile equipment (ME) or as they are most widely known, cell or mobile
phones are the section of a GSM cellular network that the user sees and operates. In recent years
their size has fallen dramatically while the level of functionality has greatly increased. A further
advantage is that the time between charges has significantly increased.
There are a number of elements to the cell phone, although the two main elements are the main
hardware and the SIM.
The hardware itself contains the main elements of the mobile phone including the display, case, battery, and the electronics used to generate the signal and to process the data received and to be transmitted. It also contains a number known as the International Mobile Equipment Identity (IMEI). This is installed in the phone at manufacture and "cannot" be changed. It is accessed by the network during registration to check whether the equipment has been reported as stolen.
The SIM or Subscriber Identity Module contains the information that provides the identity of the user to the network. It contains a variety of information including a number known as the International Mobile Subscriber Identity (IMSI).

Base Station Subsystem (BSS)


The Base Station Subsystem (BSS) is the section of the GSM network architecture that is fundamentally associated with communicating with the mobiles on the network. It consists of two elements:
• Base Transceiver Station (BTS): The BTS used in a GSM network comprises the radio transmitters and receivers, and their associated antennas, that communicate directly with the mobiles. The BTS is the defining element for each cell. The BTS communicates with the mobiles, and the interface between the two is known as the Um interface, with its associated protocols.
• Base Station Controller (BSC): The BSC forms the next stage back into the GSM
network. It controls a group of BTSs, and is often co-located with one of the BTSs in its
group. It manages the radio resources and controls items such as handover within the
group of BTSs, allocates channels and the like. It communicates with the BTSs over what
is termed the Abis interface.

Network Switching Subsystem (NSS)


The GSM network subsystem contains a variety of different elements, and is often termed the core
network. It provides the main control and interfacing for the whole mobile network. The major
elements within the core network include:

• Mobile Switching services Centre (MSC): The main element within the core network
area of the overall GSM network architecture is the Mobile switching Services Centre
(MSC). The MSC acts like a normal switching node within a PSTN or ISDN, but also
provides additional functionality to enable the requirements of a mobile user to be
supported. These include registration, authentication, call location, inter-MSC handovers
and call routing to a mobile subscriber. It also provides an interface to the PSTN so that
calls can be routed from the mobile network to a phone connected to a landline. Interfaces
to other MSCs are provided to enable calls to be made to mobiles on different networks.
• Home Location Register (HLR): This database contains all the administrative
information about each subscriber along with their last known location. In this way, the
GSM network is able to route calls to the relevant base station for the MS. When a user
switches on their phone, the phone registers with the network and from this it is possible
to determine which BTS it communicates with so that incoming calls can be routed
appropriately. Even when the phone is not active (but switched on) it re-registers
periodically to ensure that the network (HLR) is aware of its latest position. There is one
HLR per network, although it may be distributed across various sub-centres for operational reasons.
• Visitor Location Register (VLR): This contains selected information from the HLR that
enables the selected services for the individual subscriber to be provided. The VLR can be
implemented as a separate entity, but it is commonly realised as an integral part of the
MSC, rather than a separate entity. In this way access is made faster and more
convenient.
• Equipment Identity Register (EIR): The EIR is the entity that decides whether a given
mobile equipment may be allowed onto the network. Each mobile equipment has a number
known as the International Mobile Equipment Identity. This number, as mentioned above,
is installed in the equipment and is checked by the network during registration. Dependent
upon the information held in the EIR, the mobile may be allocated one of three states -
allowed onto the network, barred access, or monitored in case of problems.
• Authentication Centre (AuC): The AuC is a protected database that contains the secret
key also contained in the user's SIM card. It is used for authentication and for ciphering on
the radio channel.
• Gateway Mobile Switching Centre (GMSC): The GMSC is the point to which an ME terminating call is initially routed, without any knowledge of the MS's location. The GMSC
is thus in charge of obtaining the MSRN (Mobile Station Roaming Number) from the HLR
based on the MSISDN (Mobile Station ISDN number, the "directory number" of a MS) and
routing the call to the correct visited MSC. The "MSC" part of the term GMSC is misleading,
since the gateway operation does not require any linking to an MSC.
• SMS Gateway (SMS-G): The SMS-G or SMS gateway is the term that is used to
collectively describe the two Short Message Services Gateways defined in the GSM
standards. The two gateways handle messages directed in different directions. The SMS-
GMSC (Short Message Service Gateway Mobile Switching Centre) is for short messages
being sent to an ME. The SMS-IWMSC (Short Message Service Inter-Working Mobile Switching Centre) is used for short messages originated by a mobile on that network.
The SMS-GMSC role is similar to that of the GMSC, whereas the SMS-IWMSC provides a
fixed access point to the Short Message Service Centre.

Operation and Support Subsystem (OSS)


The OSS or Operation and Support Subsystem is an element within the overall GSM network architecture that is connected to components of the NSS and the BSC. It is used to control and monitor the overall GSM network and it is also used to control the traffic load of the BSS. It should be noted that as the number of base stations increases with the scaling of the subscriber population, some of the maintenance tasks are transferred to the BTS, allowing savings in the cost of ownership of the system.
The network structure is defined within the GSM standards. Additionally each interface between the different elements of the GSM network is also defined. This ensures that the necessary information interchanges can take place. It also enables, to a large degree, network elements from different manufacturers to be used. However as many of these interfaces were not fully defined until after many networks had been deployed, the level of standardisation may not be quite as high as many people might like.

1. Um interface The "air" or radio interface standard that is used for exchanges between a
mobile (ME) and a base station (BTS / BSC). For signalling, a modified version of the ISDN
LAPD, known as LAPDm is used.
2. Abis interface This is a BSS internal interface linking the BSC and a BTS, and it has not
been totally standardised. The Abis interface allows control of the radio equipment and
radio frequency allocation in the BTS.
3. A interface The A interface is used to provide communication between the BSS and the
MSC. The interface carries information to enable the channels, timeslots and the like to be
allocated to the mobile equipments being serviced by the BSSs. The messaging required
within the network to enable handover etc to be undertaken is carried over the interface.
4. B interface The B interface exists between the MSC and the VLR . It uses a protocol
known as the MAP/B protocol. As most VLRs are collocated with an MSC, this makes the
interface purely an "internal" interface. The interface is used whenever the MSC needs
access to data regarding a MS located in its area.
5. C interface The C interface is located between the HLR and a GMSC or a SMS-G. When a call originates from outside the network, i.e. from the PSTN or another mobile network, it has to pass through the gateway so that the routing information required to complete the call may be gained. The protocol used for communication is MAP/C, the letter "C" indicating that the protocol is used for the "C" interface. In addition to this, the MSC may optionally forward billing information to the HLR after the call is completed and cleared down.
6. D interface The D interface is situated between the VLR and HLR. It uses the MAP/D
protocol to exchange the data related to the location of the ME and to the management of
the subscriber.
7. E interface The E interface provides communication between two MSCs. The E interface
exchanges data related to handover between the anchor and relay MSCs using the MAP/E
protocol.
8. F interface The F interface is used between an MSC and EIR. It uses the MAP/F protocol.
The communications along this interface are used to confirm the status of the IMEI of the
ME gaining access to the network.
9. G interface The G interface interconnects two VLRs of different MSCs and uses the
MAP/G protocol to transfer subscriber information, during e.g. a location update procedure.
10. H interface The H interface exists between the MSC the SMS-G. It transfers short
messages and uses the MAP/H protocol.
11. I interface The I interface can be found between the MSC and the ME. Messages
exchanged over the I interface are relayed transparently through the BSS.

Although the interfaces for the GSM cellular system may not be as rigorously defined as many might like, they do at least provide a large element of the definition required, enabling the functionality of GSM network entities to be defined sufficiently.
One of the key elements of the development of the GSM, Global System for Mobile
Communications was the development of the GSM air interface. There were many requirements
that were placed on the system, and many of these had a direct impact on the air interface.
Elements including the modulation, GSM slot structure, burst structure and the like were all
devised to provide the optimum performance.
During the development of the GSM standard very careful attention was paid to aspects including the modulation format and the way in which the system is time division multiplexed; these had a considerable impact on the performance of the system as a whole. For example, the modulation format for the GSM air interface had a direct impact on battery life, and the time division format adopted enabled the cellphone handset costs to be considerably reduced, as detailed later.

GSM signal and GMSK modulation characteristics


The core of any radio based system is the format of the radio signal itself. The carrier is modulated using a form of phase shift keying known as Gaussian Minimum Shift Keying (GMSK). GMSK was used for the GSM system for a variety of reasons:

• It is resilient to noise when compared to many other forms of modulation.
• Radiation outside the accepted bandwidth is lower than other forms of phase shift keying.
• It has a constant power level which allows higher efficiency RF power amplifiers to be used in the handset, thereby reducing current consumption and conserving battery life.

Note on GMSK:
GMSK, Gaussian Minimum Shift Keying is a form of phase modulation that is used in a number of portable radio and
wireless applications. It has advantages in terms of spectral efficiency as well as having an almost constant amplitude
which allows for the use of more efficient transmitter power amplifiers, thereby saving on current consumption, a
critical issue for battery power equipment.

The nominal bandwidth for the GSM signal using GMSK is 200 kHz, i.e. the channel bandwidth and
spacing is 200 kHz. As GMSK modulation has been used, the unwanted or spurious emissions
outside the nominal bandwidth are sufficiently low to enable adjacent channels to be used from the
same base station. Typically each base station will be allocated a number of carriers to enable it to
achieve the required capacity.
The data transported by the carrier serves up to eight different users under the basic system by splitting the carrier into eight time slots. The basic carrier is able to support a data throughput of approximately 270 kbps, but as some of this is taken up by the management overhead, the data rate allotted to each time slot is only 24.8 kbps. In addition to this, error correction is required to overcome the problems of interference, fading and general data errors that may occur. This means that the available data rate for transporting the digitally encoded speech is 13 kbps for the basic vocoders.

GSM slot structure and multiple access scheme


GSM uses a combination of both TDMA and FDMA techniques. The FDMA element involves the
division by frequency of the (maximum) 25 MHz bandwidth into 124 carrier frequencies spaced
200 kHz apart as already described.
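For the basic P-GSM 900 band, each of these 124 carriers is identified by an ARFCN (Absolute Radio Frequency Channel Number), with the uplink at 890 MHz + 0.2 MHz x ARFCN and the downlink 45 MHz above it. A minimal sketch of this mapping (P-GSM only; other bands use different offsets, and the function name is illustrative):

    # Map a P-GSM 900 ARFCN (1..124) to its uplink / downlink frequencies in MHz
    def pgsm_frequencies(arfcn: int) -> tuple[float, float]:
        if not 1 <= arfcn <= 124:
            raise ValueError("P-GSM ARFCN must be in the range 1..124")
        uplink = 890.0 + 0.2 * arfcn       # 200 kHz channel spacing
        return uplink, uplink + 45.0       # 45 MHz duplex spacing

    print(pgsm_frequencies(1))      # (890.2, 935.2)
    print(pgsm_frequencies(124))    # (914.8, 959.8)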
The carriers are then divided in time, using a TDMA scheme. This enables the different users of the single radio frequency channel to be allocated different time slots. They are then able to use the same RF channel without mutual interference. The slot is the time that is allocated to the particular user, and the GSM burst is the transmission that is made in this time.
Each GSM slot, and hence each GSM burst, lasts for 0.577 ms (15/26 ms). Eight of these burst periods are grouped into what is known as a TDMA frame. This lasts for approximately 4.615 ms (i.e. 120/26 ms) and it forms the basic unit for the definition of logical channels. One physical channel is one burst period allocated in each TDMA frame.
There are different types of frame that are transmitted to carry different data, and also the frames
are organised into what are termed multiframes and superframes to provide overall
synchronisation.

GSM slot structure


The GSM slot is the smallest individual time period that is available to each mobile. It has a defined format because a variety of different types of data are required to be transmitted. Although there are shortened transmission bursts, the slot is normally used for transmitting 148 bits of information. This data can be used for carrying voice data, or control and synchronisation data.

GSM slots showing offset between transmit and receive


It can be seen from the GSM slot structure that the timing of the slots in the uplink and the downlink is not simultaneous; there is a time offset between transmit and receive. This offset in the GSM slot timing is deliberate and it means that a mobile that is allocated the same slot in both directions does not transmit and receive at the same time. This considerably reduces the need for expensive filters to isolate the transmitter from the receiver. It also provides a space saving.

GSM burst
The GSM burst, or transmission can fulfil a variety of functions. Some GSM bursts are used for
carrying data while others are used for control information. As a result of this a number of
different types of GSM burst are defined.

• Normal burst uplink and downlink


• Synchronisation burst downlink
• Frequency correction burst downlink
• Random Access (Shortened Burst) uplink

GSM normal burst


This GSM burst is used for the standard communications between the base station and the mobile, and typically transfers the digitised voice data.
The structure of the normal GSM burst is exactly defined and follows a common format. It contains
data that provides a number of different functions:

1. 3 tail bits: These tail bits at the start of the GSM burst give time for the transmitter to ramp up its power.
2. 57 data bits: This block of data is used to carry information, and most often contains the digitised voice data, although on occasions it may be replaced with signalling information in the form of the Fast Associated Control CHannel (FACCH). The type of data is indicated by the flag that follows the data field.
3. 1 bit flag: This bit within the GSM burst indicates the type of data in the previous field.
4. 26 bits training sequence: This training sequence is used as a timing reference and for
equalisation. There is a total of eight different bit sequences that may be used, each 26
bits long. The same sequence is used in each GSM slot, but nearby base stations using the
same radio frequency channels will use different ones, and this enables the mobile to
differentiate between the various cells using the same frequency.
5. 1 bit flag: Again this flag indicates the type of data in the data field.
6. 57 data bits: Again, this block of data within the GSM burst is used for carrying data.
7. 3 tail bits: These final bits within the GSM burst are used to enable the transmitter power to ramp down. They are often called final tail bits, or just tail bits.
8. 8.25 bits guard time: At the end of the GSM burst there is a guard period. This is introduced to prevent bursts transmitted by different mobiles, at differing distances from the base station, from overlapping. The field lengths are totalled in the sketch below.
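As a quick consistency check, the field lengths listed above should total exactly one slot, i.e. 156.25 bit periods:

    # The normal burst fields should sum to one GSM slot of 156.25 bit periods
    normal_burst = {
        "tail": 3, "data_1": 57, "flag_1": 1, "training": 26,
        "flag_2": 1, "data_2": 57, "tail_final": 3, "guard": 8.25,
    }
    total = sum(normal_burst.values())
    assert total == 156.25
    print(f"Normal burst: {total} bit periods")   # -> 156.25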

GSM Normal Burst

GSM synchronisation burst


The purpose of this form of GSM burst is to provide synchronisation for the mobiles on the
network.

1. 3 tail bits: Again, these tail bits at the start of the GSM burst give time for the transmitter to ramp up its power.
2. 39 information bits.
3. 64 bit long training sequence.
4. 39 information bits.
5. 3 tail bits: Again these are to enable the transmitter power to ramp down.
6. 8.25 bits guard time: To act as a guard interval.

GSM Synchronisation Burst

GSM frequency correction burst


With the information in the burst all set to zeros, the burst essentially consists of a constant
frequency carrier with no phase alteration.

1. 3 tail bits: Again, these tail bits at the start of the GSM burst give time for the transmitter to ramp up its power.
2. 142 bits all set to zero.
3. 3 tail bits: Again these are to enable the transmitter power to ramp down.
4. 8.25 bits guard time: To act as a guard interval.

GSM Frequency Correction Burst


GSM random access burst
This form of GSM burst is used when accessing the network, and it is shortened in terms of the data carried, having a much longer guard period. This GSM burst structure is used to ensure that the burst fits in the time slot regardless of any severe timing problems that may exist. Once the mobile has accessed the network and the timing has been aligned, there is no requirement for the long guard period.

1. 7 tail bits: The increased number of tail bits is included to provide additional margin when accessing the network.
2. 41 training bits.
3. 36 data bits.
4. 3 tail bits: Again these are to enable the transmitter power to ramp down.
5. 69.25 bits guard time: The additional guard time, filling the remaining time of the GSM burst, provides for large timing differences; its effect on range is estimated in the sketch below.
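The sketch below totals these fields and gives a rough feel for why the long guard period matters: at one bit period of 48/13 us, 69.25 bit periods of guard correspond to roughly 256 us, which, treated as a round-trip delay at the speed of light, covers a one-way distance of nearly 40 km, comfortably more than a nominal GSM cell radius. The propagation estimate is an illustrative approximation only:

    # Access burst fields and a rough range estimate from the guard time
    fields = {"tail": 7, "training": 41, "data": 36, "tail_final": 3, "guard": 69.25}
    assert sum(fields.values()) == 156.25        # still one full slot

    bit_us = 48 / 13                             # one bit period in microseconds
    guard_us = fields["guard"] * bit_us          # ~255.7 us of guard time
    range_km = 3e8 * guard_us * 1e-6 / 2 / 1000  # round trip -> one-way distance
    print(f"Guard ~ {guard_us:.1f} us, max one-way range ~ {range_km:.0f} km")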

GSM Random Access Burst

GSM discontinuous transmission (DTx)


A further power saving and interference reducing facility is the discontinuous transmission (DTx) capability that is incorporated within the specification. It is particularly useful because there are long pauses in speech, for example when the person using the mobile is listening, and during these periods there is no need to transmit a signal. In fact it is found that a person speaks for less than 40% of the time during normal telephone conversations. The most important element of DTx is the Voice Activity Detector. It must correctly distinguish between voice and noise inputs, a task that is not trivial. If a voice signal is misinterpreted as noise, the transmitter is turned off and an effect known as clipping results; this is particularly annoying to the person listening to the speech. However if noise is misinterpreted as a voice signal too often, the efficiency of DTx is dramatically decreased.
It is also necessary for the system to add background or comfort noise when the transmitter is
turned off because complete silence can be very disconcerting for the listener. Accordingly this is
added as appropriate. The noise is controlled by the SID (silence indication descriptor).
The GSM system has a defined GSM frame structure to enable the orderly passage of information.
The GSM frame structure establishes schedules for the predetermined use of timeslots.
By establishing these schedules by the use of a frame structure, both the mobile and the base
station are able to communicate not only the voice data, but also signalling information without
the various types of data becoming intermixed and both ends of the transmission knowing exactly
what types of information are being transmitted.
The GSM frame structure provides the basis for the various physical channels used within GSM,
and accordingly it is at the heart of the overall system.

Basic GSM frame structure


The basic element in the GSM frame structure is the frame itself. This comprises eight slots, each used by a different user within the TDMA system. As mentioned in another page of the tutorial, the slots for transmission and reception for a given mobile are offset in time so that the mobile does not transmit and receive at the same time.
GSM frame consisting of eight slots
The basic GSM frame defines the structure upon which all the timing and structure of the GSM
messaging and signalling is based. The fundamental unit of time is called a burst period and it
lasts for approximately 0.577 ms (15/26 ms). Eight of these burst periods are grouped into what is
known as a TDMA frame. This lasts for approximately 4.615 ms (i.e.120/26 ms) and it forms the
basic unit for the definition of logical channels. One physical channel is one burst period allocated
in each TDMA frame.
In simplified terms the base station transmits two types of channel, namely traffic and control.
Accordingly the channel structure is organised into two different types of frame, one for the traffic
on the main traffic carrier frequency, and the other for the control on the beacon frequency.
GSM multiframe
The GSM frames are grouped together to form multiframes and in this way it is possible to
establish a time schedule for their operation and the network can be synchronised.
There are several GSM multiframe structures:

• Traffic multiframe: The Traffic Channel frames are organised into multiframes consisting of 26 frames and taking 120 ms. In a traffic multiframe, 24 frames are used for traffic. These are numbered 0 to 11 and 13 to 24. One of the remaining frames is then used to accommodate the SACCH, with the remaining frame left free. The actual position used alternates between positions 12 and 25.
• Control multiframe: The Control Channel multiframe comprises 51 frames and occupies 235.4 ms. This always occurs on the beacon frequency in time slot zero, and it may also occur within slots 2, 4 and 6 of the beacon frequency as well. This multiframe is subdivided into logical channels which are time-scheduled. These logical channels and functions include the following:
o Frequency correction burst
o Synchronisation burst
o Broadcast channel (BCH)
o Paging and Access Grant Channel (PAGCH)
o Stand Alone Dedicated Control Channel (SDCCH)

GSM Superframe
Multiframes are then constructed into superframes taking 6.12 seconds. These consist of 51 traffic multiframes or 26 control multiframes. As the traffic multiframes are 26 frames long and the control multiframes are 51 frames long, the two types of multiframe come back into line within the superframe, each occupying exactly the same interval.

GSM Hyperframe
Above this 2048 superframes (i.e. 2 to the power 11) are grouped to form one hyperframe which
repeats every 3 hours 28 minutes 53.76 seconds. It is the largest time interval within the GSM
frame structure.
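The durations quoted for the multiframe, superframe and hyperframe all follow from the basic 120/26 ms frame, as the short sketch below confirms:

    # Build up the GSM frame hierarchy from the basic TDMA frame duration
    frame_s = 0.120 / 26                 # one TDMA frame: 120/26 ms ~ 4.615 ms
    superframe_s = 26 * 51 * frame_s     # 1326 frames -> 6.12 s
    hyperframe_s = 2048 * superframe_s   # 2**11 superframes
    hours, rem = divmod(hyperframe_s, 3600)
    minutes, seconds = divmod(rem, 60)
    print(f"superframe = {superframe_s:.2f} s")
    print(f"hyperframe = {int(hours)} h {int(minutes)} min {seconds:.2f} s")
    # -> superframe = 6.12 s, hyperframe = 3 h 28 min 53.76 s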
Within the GSM hyperframe there is a counter, and every time slot has a unique sequential number comprising the frame number and time slot number. This is used to maintain synchronisation of the different scheduled operations within the GSM frame structure. These include functions such as:

• Frequency hopping: Frequency hopping is a feature that is optional within the GSM
system. It can help reduce interference and fading issues, but for it to work, the
transmitter and receiver must be synchronised so they hop to the same frequencies at the
same time.
• Encryption: The encryption process is synchronised over the GSM hyperframe period
where a counter is used and the encryption process will repeat with each hyperframe.
However, it is unlikely that the cellphone conversation will be over 3 hours and accordingly
it is unlikely that security will be compromised as a result.


GSM frame structure summary


By structuring the GSM signalling into frames, multiframes, superframes and hyperframes, the
timing and organisation is set into an orderly format that enables both the GSM mobile and base
station to communicate in a reliable and efficient manner. The GSM frame structure forms the
basis onto which the other forms of frame and hence the various GSM channels are built.
GSM uses a variety of channels in which the data is carried. In GSM, these channels are separated
into physical channels and logical channels. The Physical channels are determined by the timeslot,
whereas the logical channels are determined by the information carried within the physical
channel. It can be further summarised by saying that several recurring timeslots on a carrier
constitute a physical channel. These are then used by different logical channels to transfer
information. These channels may either be used for user data (payload) or signalling to enable the
system to operate correctly.

Common and dedicated channels


The channels may also be divided into common and dedicated channels. The forward common
channels are used for paging to inform a mobile of an incoming call, responding to channel
requests, and broadcasting bulletin board information. The return common channel is a random
access channel used by the mobile to request channel resources before timing information is
conveyed by the BSS.
The dedicated channels are of two main types: those used for signalling, and those used for traffic.
The signalling channels are used for maintenance of the call and for enabling call set up, providing
facilities such as handover when the call is in progress, and finally terminating the call. The traffic
channels handle the actual payload.
The following logical channels are defined in GSM:
TCH/F - Full rate traffic channel.
TCH/H - Half rate traffic channel.
BCCH - Broadcast network information, e.g. for describing the current control channel structure. The BCCH is a point-to-multipoint channel (BSS-to-MS).
SCH - Synchronisation of the MSs.
FCCH - MS frequency correction.
AGCH - Acknowledge channel requests from the MS and allocate an SDCCH.
PCH - MS terminating call announcement.
RACH - MS access requests, response to call announcement, location update, etc.
FACCH/T - For time critical signalling over the TCH (e.g. for handover signalling). A traffic burst is stolen for a full signalling burst.
SACCH/T - TCH in-band signalling, e.g. for link monitoring.
SDCCH - For signalling exchanges, e.g. during call setup, registration / location updates.
FACCH/S - FACCH for the SDCCH. The SDCCH burst is stolen for a full signalling burst. Function not clear in the present version of GSM (could be used e.g. for handover of an eighth-rate channel, i.e. using an "SDCCH-like" channel for purposes other than signalling).
SACCH/S - SDCCH in-band signalling, e.g. for link monitoring.
Audio codecs or vocoders are universally used within the GSM system. They reduce the bit rate of speech that has been converted from its analogue form into a digital format, to enable it to be carried within the available bandwidth for the channel. Without the use of a speech codec, the digitised speech would occupy a much wider bandwidth than would be available. Accordingly GSM codecs are a particularly important element in the overall system.
A variety of different forms of audio codec or vocoder are available for general use, and the GSM system supports a number of specific audio codecs. These include the RPE-LPC, half rate, and AMR codecs. The performance of each voice codec is different and they may be used under different conditions, although the AMR codec is now the most widely used. Also the newer AMR wideband (AMR-WB) codec is being introduced into many areas, including GSM.
Voice codec technology has advanced considerably in recent years as a result of the increasing processing power available. This has meant that the voice codecs used in the GSM system have seen large improvements since the first GSM phones were introduced.

Vocoder / codec basics


Vocoders or speech codecs are used within many areas of voice communications. Obviously the
focus here is on GSM audio codecs or vocoders, but the same principles apply to any form of
codec.
If speech were digitised in a linear fashion it would require a high data rate that would occupy a
very wide bandwidth. As bandwidth is normally limited in any communications system, it is
necessary to compress the data to send it through the available channel. Once through the
channel it can then be expanded to regenerate the audio in a fashion that is as close to the original
as possible.
To meet the requirements of the codec system, the speech must be captured at a high enough
sample rate and resolution to allow clear reproduction of the original sound. It must then be
compressed in such a way as to maintain the fidelity of the audio over a limited bit rate, error-
prone wireless transmission channel.
Audio codecs or vocoders can use a variety of techniques, but many modern audio codecs use a technique known as linear prediction. In many ways this can be likened to a mathematical modelling of the human vocal tract. To achieve this, the spectral envelope of the signal is estimated using a filter technique. Even for signals with many non-harmonically related components it is possible for voice codecs to give very high levels of compression.
A variety of different codec methodologies are used for GSM codecs:

• CELP: The CELP or Code Excited Linear Prediction codec is a vocoder algorithm that was
originally proposed in 1985 and gave a significant improvement over other voice codecs of
the day. The basic principle of the CELP codec has been developed and used as the basis
of other voice codecs including ACELP, RCELP, VSELP, etc. As such the CELP codec
methodology is now the most widely used speech coding algorithm. Accordingly CELP is
now used as a generic term for a particular class of vocoders or speech codecs and not a
particular codec.

The main principle behind the CELP codec is that it uses a technique known as "Analysis by Synthesis". In this process, the encoding is performed by perceptually optimising the decoded signal in a closed loop system. One way in which this could be achieved is to compare a variety of generated bit streams and choose the one that produces the best sounding signal.
• ACELP codec: The ACELP or Algebraic Code Excited Linear Prediction codec. The ACELP
codec or vocoder algorithm is a development of the CELP model. However the ACELP codec
codebooks have a specific algebraic structure as indicated by the name.
• VSELP codec: The VSELP or Vector Sum Excited Linear Prediction codec. One of the major drawbacks of the VSELP codec is its limited ability to code non-speech sounds, which means that it performs poorly in the presence of noise. As a result this voice codec is no longer widely used, newer speech codecs offering far superior performance being preferred.

GSM audio codecs / vocoders


A variety of GSM audio codecs / vocoders are supported. These have been introduced at different times and have different levels of performance. Although some of the early audio codecs are not as widely used these days, they are still described here as they form part of the GSM system.

CODEC NAME     BIT RATE (KBPS)     COMPRESSION TECHNOLOGY
Full rate      13                  RPE-LPC
EFR            12.2                ACELP
Half rate      5.6                 VSELP
AMR            12.2 - 4.75         ACELP
AMR-WB         23.85 - 6.60        ACELP

GSM Full Rate / RPE-LPC codec


The RPE-LPC or Regular Pulse Excited - Linear Predictive Coder was the first speech codec used with GSM, and it was chosen after tests were undertaken to compare it with other codec schemes of the day. The speech codec is based upon regular pulse excitation LPC with long term prediction. The basic scheme is related to two previous speech codecs: RELP, Residual Excited Linear Prediction, and MPE-LPC, Multi Pulse Excited LPC. The advantage of RELP is the relatively low complexity resulting from the use of baseband coding, but its performance is limited by the tonal noise produced by the system. The MPE-LPC is more complex but provides a better level of performance. The RPE-LPC codec provided a compromise between the two, balancing performance and complexity for the technology of the time.
Despite the work that was undertaken to provide the optimum performance, as technology
developed further, the RPE-LPC codec was viewed as offering a poor level of voice quality. As other
full rate audio codecs became available, these were incorporated into the system.

GSM EFR - Enhanced Full Rate codec


Later another vocoder called the Enhanced Full Rate (EFR) vocoder was added in response to the poor quality perceived by the users of the original RPE-LPC codec. This new codec gave much better sound quality and was adopted by GSM. Using ACELP compression technology it gave a significant improvement in quality over the original RPE-LPC encoder. It became possible as the processing power available in mobile phones increased, combined with lower current consumption.
GSM Half Rate codec
The GSM standard allows the splitting of a single full rate voice channel into two sub-channels that
can maintain separate calls. By doing this, network operators can double the number of voice calls
that can be handled by the network with very little additional investment.
To enable this facility to be used a half rate codec must be used. The half rate codec was
introduced in the early years of GSM but gave a much inferior voice quality when compared to
other speech codecs. However it gave advantages when demand was high and network capacity
was at a premium.
The GSM Half Rate codec uses a VSELP codec algorithm. It codes the data in 20 ms frames, each carrying 112 bits, to give a data rate of 5.6 kbps. This includes a 100 bps data rate for a mode indicator which details whether the system believes the frames contain voice data or not. This allows the speech codec to operate in a manner that provides the optimum quality.
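The quoted 5.6 kbps rate follows directly from the frame size:

    # Half rate codec: 112 bits delivered every 20 ms
    bits_per_frame = 112
    frame_s = 0.020
    print(f"{bits_per_frame / frame_s:.0f} bps")   # -> 5600 bps = 5.6 kbps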
The Half Rate codec system was introduced in the 1990s, but in view of the perceived poor quality,
it was not widely used.

GSM AMR Codec


The AMR, Adaptive Multi-Rate codec is now the most widely used GSM codec. The AMR codec was adopted by 3GPP in October 1998 and it is used for both GSM and circuit switched UMTS / WCDMA voice calls.
The AMR codec provides a choice of eight different bit rates as described in the table below. The bit rates are based on frames that are 20 milliseconds long and contain 160 samples. The AMR codec uses a variety of different techniques to provide the data compression.
The ACELP codec is used as the basis of the overall speech codec, but other techniques are used in addition to this. Discontinuous transmission is employed so that when there is no speech activity the transmission is cut, and Voice Activity Detection (VAD) is used to indicate when there is only background noise and no speech. To provide feedback for the user that the connection is still present, a Comfort Noise Generator (CNG) is used to provide some background noise even when no speech data is being transmitted. This is added locally at the receiver.
The use of the AMR codec also requires that optimised link adaptation is used so that the optimum data rate is selected to meet the requirements of the current radio channel conditions, including its signal to noise ratio and capacity. This is achieved by reducing the source coding and increasing the channel coding. Although there is a reduction in voice clarity, the network connection is more robust and the link is maintained without dropout. Improvements of between 4 and 6 dB may be experienced. Network operators are able to prioritise each station for either quality or capacity.
The AMR codec has a total of eight bit rates: all eight are available at full rate (FR), while six are also available at half rate (HR), giving fourteen different modes in all.

MODE         BIT RATE (KBPS)     FULL RATE (FR) / HALF RATE (HR)
AMR 12.2     12.2                FR
AMR 10.2     10.2                FR
AMR 7.95     7.95                FR / HR
AMR 7.40     7.40                FR / HR
AMR 6.70     6.70                FR / HR
AMR 5.90     5.90                FR / HR
AMR 5.15     5.15                FR / HR
AMR 4.75     4.75                FR / HR
AMR codec data rates
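Since every AMR mode delivers its bits in 20 ms frames, the number of bits per frame for each mode is simply the bit rate multiplied by the frame duration, as this brief sketch shows:

    # Bits per 20 ms frame for each AMR mode: rate (bps) * 0.020 s
    amr_modes_kbps = [12.2, 10.2, 7.95, 7.40, 6.70, 5.90, 5.15, 4.75]
    for kbps in amr_modes_kbps:
        bits = kbps * 1000 * 0.020
        print(f"AMR {kbps}: {bits:.0f} bits per frame")   # e.g. AMR 12.2 -> 244 bits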

AMR-WB codec
Adaptive Multi-Rate Wideband, AMR-WB codec, also known under its ITU designation of G.722.2, is
based on the earlier popular Adaptive Multi-Rate, AMR codec. AMR-WB also uses an ACELP basis
for its operation, but it has been further developed and AMR-WB provides improved speech quality
as a result of the wider speech bandwidth that it encodes. AMR-WB has a bandwidth extending
from 50 - 7000 Hz which is significantly wider than the 300 - 3400 Hz bandwidths used by
standard telephones. This comes at the cost of additional processing, but with advances in IC technology in recent years this is perfectly acceptable.
The AMR-WB codec contains a number of functional areas: it primarily includes a set of fixed rate
speech and channel codec modes. It also includes other codec functions including: a Voice Activity
Detector (VAD); Discontinuous Transmission (DTX) functionality for GSM; and Source Controlled
Rate (SCR) functionality for UMTS applications. Further functionality includes in-band signaling for
codec mode transmission, and link adaptation for control of the mode selection.
The AMR-WB codec has a 16 kHz sampling rate and the coding is performed in blocks of 20 ms.
There are two frequency bands that are used: 50-6400 Hz and 6400-7000 Hz. These are coded
separately to reduce the codec complexity. This split also serves to focus the bit allocation into the
subjectively most important frequency range.
The lower frequency band uses an ACELP codec algorithm, although a number of additional
features have been included to improve the subjective quality of the audio. Linear prediction
analysis is performed once per 20 ms frame. Also, fixed and adaptive excitation codebooks are
searched every 5 ms for optimal codec parameter values.
The higher frequency band adds some of the naturalness and personality features to the voice.
The audio is reconstructed using the parameters from the lower band as well as random
excitation. As the level of power in this band is less than that of the lower band, the gain is
adjusted relative to the lower band based on voicing information. The signal content of the
higher band is reconstructed using a linear predictive filter whose parameters are derived from
the lower band filter.
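These figures can be tied together with a little arithmetic: at a 16 kHz sampling rate, each 20 ms
frame contains 320 samples and each 5 ms codebook search covers 80 samples. A minimal Python
sketch:

    # AMR-WB timing: 16 kHz sampling, 20 ms frames, 5 ms codebook searches.
    SAMPLE_RATE_HZ = 16000
    FRAME_MS, SUBFRAME_MS = 20, 5

    samples_per_frame = SAMPLE_RATE_HZ * FRAME_MS // 1000        # 320 samples
    samples_per_subframe = SAMPLE_RATE_HZ * SUBFRAME_MS // 1000  # 80 samples
    searches_per_frame = FRAME_MS // SUBFRAME_MS                 # 4 per frame

    print(samples_per_frame, samples_per_subframe, searches_per_frame)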

BIT RATE (KBPS)   NOTES
6.60 This is the lowest rate for AMR-WB. It is used for circuit switched connections for GSM and UMTS
and is intended to be used only temporarily during severe radio channel conditions or during network
congestion.
8.85 This gives improved quality over the 6.6 kbps rate, but again its use is only recommended
during periods of congestion or severe radio channel conditions.
12.65 This is the main bit rate used for circuit switched GSM and UMTS, offering superior performance to
the original AMR codec.
14.25 Higher bit rate used to give cleaner speech and is particularly useful when ambient audio noise levels
are high.
15.85 Higher bit rate used to give cleaner speech and is particularly useful when ambient audio noise levels
are high.
18.25 Higher bit rate used to give cleaner speech and is particularly useful when ambient audio noise levels
are high.
19.85 Higher bit rate used to give cleaner speech and is particularly useful when ambient audio noise levels
are high.
23.05 Not suggested for full rate GSM channels.
23.85 Not suggested for full rate GSM channels, and provides speech quality similar to that of G.722 at 64
kbps.
Not all phones equipped with AMR-WB will be able to access all the data rates - for example, the
different functions on the phone may not require all of them to be active. As a result, it is
necessary to inform the network about which rates are available and thereby simplify the
negotiation between the handset and the network. To achieve this there are three different
AMR-WB configurations available:

• Configuration A: 6.6, 8.85, and 12.65 kbit/s
• Configuration B: 6.6, 8.85, 12.65, and 15.85 kbit/s
• Configuration C: 6.6, 8.85, 12.65, and 23.85 kbit/s
It can be seen that only the 23.85, 15.85, 12.65, 8.85 and 6.60 kbit/s modes are used. Based on
listening tests, it was considered that these five modes were sufficient for a high quality speech
telephony service. The other data rates were retained and can be used for other purposes
including multimedia messaging, streaming audio, etc.
GSM codecs summary
There has been a considerable improvement in the GSM audio codecs over the years. Starting with
the original RPE-LPC speech codec, the technology moved through the Enhanced Full Rate, EFR
codec and the GSM half rate codec to the AMR codec, which is now the most widely used and
provides a variable rate that can be tailored to the prevailing conditions. The newer AMR-WB codec
will also see increasing use. Newer technologies such as LTE, Long Term Evolution, use an all IP
based system, but codecs are still needed to provide data compression and improved spectral
efficiency; the idea of a codec will therefore remain in use, although some of the GSM codecs in
use today will be superseded.
One of the key elements of a mobile phone or cellular telecommunications system, is that the
system is split into many small cells to provide good frequency re-use and coverage. However as
the mobile moves out of one cell to another it must be possible to retain the connection. The
process by which this occurs is known as handover or handoff. The term handover is more widely
used within Europe, whereas handoff tends to be used more in North America. Either way,
handover and handoff refer to the same process.

Requirements for GSM handover
The process of handover or handoff within any cellular system is of great importance. It is a critical
process and if performed incorrectly handover can result in the loss of the call. Dropped calls are
particularly annoying to users and if the number of dropped calls rises, customer dissatisfaction
increases and they are likely to change to another network. Accordingly GSM handover was an
area to which particular attention was paid when developing the standard.

Types of GSM handover
Within the GSM system there are four types of handover that can be performed for GSM only
systems:

• Intra-BTS handover: This form of GSM handover occurs if it is required to change the
frequency or slot being used by a mobile because of interference, or other reasons. In this
form of GSM handover, the mobile remains attached to the same base station transceiver,
but changes the channel or slot.
• Inter-BTS Intra-BSC handover: This form of GSM handover or GSM handoff occurs when
the mobile moves out of the coverage area of one BTS and into another controlled by the
same BSC. In this instance the BSC is able to perform the handover: it assigns a new
channel and slot to the mobile before releasing the old BTS from communicating with the
mobile.
• Inter-BSC handover: When the mobile moves out of the range of cells controlled by
one BSC, a more involved form of handover has to be performed, handing over not only
from one BTS to another but one BSC to another. For this the handover is controlled by
the MSC.
• Inter-MSC handover: This form of handover occurs when the mobile moves between areas
controlled by different MSCs. The two MSCs involved negotiate to control the handover.

GSM handover process
Although there are several forms of GSM handover as detailed above, as far as the mobile is
concerned, they are effectively seen as very similar. There are a number of stages involved in
undertaking a GSM handover from one cell or base station to another.
In GSM, which uses TDMA techniques, the transmitter only transmits for one slot in eight, and
similarly the receiver only receives for one slot in eight. It might therefore be expected that the RF
section of the mobile would be idle for six of the eight slots. This is not the case, because during
the slots in which it is not communicating with the BTS, the mobile scans the other radio channels
looking for beacon frequencies that may be stronger or more suitable. To assist this, the BTS with
which the mobile is communicating sends out a list of the radio channels of the beacon frequencies
of neighbouring BTSs via the Broadcast Channel (BCCH). The mobile scans these and reports back
the quality of the link to the BTS. In this way the mobile
assists in the handover decision and as a result this form of GSM handover is known as Mobile
Assisted Hand Over (MAHO).
The network knows the quality of the link between the mobile and the BTS as well as the strength
of local BTSs as reported back by the mobile. It also knows the availability of channels in the
nearby cells. As a result it has all the information it needs to be able to make a decision about
whether it needs to hand the mobile over from one BTS to another.
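As an illustration of the kind of decision the network makes, the sketch below picks a handover
target from the mobile's measurement reports. The 4 dB hysteresis margin, the signal levels and
the function name are illustrative assumptions, not values taken from the GSM specifications:

    # Illustrative handover decision based on mobile measurement reports.
    HYSTERESIS_DB = 4  # assumed margin used to avoid ping-pong handovers

    def select_handover_target(serving_rxlev, neighbour_rxlevs, free_channels):
        """serving_rxlev in dBm; neighbour_rxlevs maps cell -> dBm;
        free_channels maps cell -> number of idle channels."""
        best_cell = None
        best_rxlev = serving_rxlev + HYSTERESIS_DB
        for cell, rxlev in neighbour_rxlevs.items():
            # A candidate must beat the serving cell by the margin
            # and have a channel available in the target cell.
            if rxlev > best_rxlev and free_channels.get(cell, 0) > 0:
                best_cell, best_rxlev = cell, rxlev
        return best_cell  # None means remain on the serving cell

    # Neighbour "B" is 6 dB stronger and has free capacity, so hand over to it.
    print(select_handover_target(-95, {"A": -97, "B": -89}, {"A": 2, "B": 5}))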
If the network decides that it is necessary for the mobile to hand over, it assigns a new channel
and time slot to the mobile. It informs the BTS and the mobile of the change. The mobile then
retunes during the period it is not transmitting or receiving, i.e. in an idle period.
A key element of the GSM handover is timing and synchronisation. There are a number of possible
scenarios that may occur dependent upon the level of synchronisation.

• Old and new BTSs synchronised: In this case the mobile is given details of the new
physical channel in the neighbouring cell and handed directly over. The mobile may
optionally transmit four access bursts. These are shorter than the standard bursts and
thereby any effects of poor synchronisation do not cause overlap with other bursts.
However in this instance where synchronisation is already good, these bursts are only used
to provide a fine adjustment.
• Time offset between synchronised old and new BTS: In some instances there may
be a time offset between the old and new BTS. In this case, the time offset is provided so
that the mobile can make the adjustment. The GSM handover then takes place as a
standard synchronised handover.
• Non-synchronised handover: When a non-synchronised cell handover takes place, the
mobile transmits 64 access bursts on the new channel. This enables the base station to
determine and adjust the timing for the mobile so that it can suitably access the new BTS.
This enables the mobile to re-establish the connection through the new BTS with the
correct timing.

Inter-system handover
With the evolution of standards and the migration beyond GSM to newer technologies, including
3G UMTS / WCDMA as well as HSPA and then LTE, there is a need to hand over from one
technology to another. Often the 2G GSM coverage will be better than that of the newer systems,
and GSM is often used as the fallback. Handovers of this nature are considerably more complicated
than a straightforward GSM-only handover because they require two technically very different
systems to handle the handover.
These handovers may be called intersystem handovers or inter-RAT handovers as the handover
occurs between different radio access technologies.
The most common form of intersystem handover is between GSM and UMTS / WCDMA. Here there
are two different types:

• UMTS / WCDMA to GSM handover: There are two further divisions of this category of
handover:
o Blind handover: This form of handover occurs when the network hands the mobile
off by passing it the details of the new cell without linking to it and setting up the
timing, etc., of the mobile for the new cell. In this mode, the network selects what
it believes to be the optimum GSM base station. The mobile first locates the
broadcast channel of the new cell, gains timing synchronisation and then carries
out a non-synchronised intercell handover.
o Compressed mode handover: Using this form of handover the mobile uses the
gaps in transmission that occur to analyse the reception of local GSM base
stations, using the neighbour list to select suitable candidate base stations. Having
selected a suitable base station the handover takes place, again without any time
synchronisation having occurred.
• Handover from GSM to UMTS / WCDMA: This form of handover is supported within
GSM, and a "neighbour list" was established to enable it to occur easily. As the GSM / 2G
network is normally more extensive than the 3G network, this type of handover does not
normally occur when the mobile leaves a coverage area and must quickly find a new base
station to maintain contact. Instead, the handover from GSM to UMTS occurs to provide an
improvement in performance, and it can normally take place only when the conditions are
right. The neighbour list informs the mobile when this may happen.

Summary
GSM handover is one of the major elements in performance that users will notice. As a result it is
normally one of the Key Performance Indicators (KPIs) used by operators to monitor performance.
Poor handover or handoff performance will normally result in dropped calls, and users find this
particularly annoying. Accordingly network operators develop and maintain their networks to
ensure that an acceptable performance is achieved. In this way they can reduce what is called
"churn" where users change from one network to another.

ATM
The Asynchronous Transfer Mode (ATM) was developed to enable a single data networking
standard to be used for both synchronous channel networking and packet-based networking.
Asynchronous transfer mode also supports multiple levels of quality of service for packet traffic.
In this way, asynchronous transfer mode can be thought of as supporting both circuit-switched
networks and packet-switched networks by mapping both bitstreams and packet-streams. It
achieves this by sending data in a series or stream of fixed length cells, each of which has its own
identifier. These data cells are typically sent on demand within a synchronous time-slot pattern in
a synchronous bit-stream. Although this may not appear to be asynchronous, the asynchronous
element of the "Asynchronous Transfer Mode", comes from the fact that the sending of the cells
themselves is asynchronous and not from the synchronous low-level bitstream that carries them.
One of the original aims of Asynchronous Transfer Mode was that it should provide a basis for the
Broadband Integrated Services Digital Network (B-ISDN) to replace the existing PSTN (Public
Switched Telephone Network). As a result, the standards for Asynchronous Transfer Mode include
not only the definitions for the physical transmission techniques (Layer 1), but also layers 2 and 3.
In addition to this, the development of Asynchronous Transfer Mode was focussed heavily on the
requirements of telecommunications providers rather than local data networking, and as a result it
is more suited to large area telecommunications applications than to smaller local area data
network solutions or general computer networking.
While Asynchronous Transfer Mode is widely deployed, it is now generally used only for the
transport of IP traffic. It has not become the single integrated technology standard for LANs,
public networks and user services that was originally envisaged.

Basic asynchronous transfer mode system
There are two basic elements to an ATM system, and any system can be made up of a number of
each of these elements:

• ATM switch: This accepts the incoming cells or information "packets" from another ATM
entity, which may be either another switch or an end point. It reads and updates the cell
header information and switches the cell towards its destination.
• ATM end point: This element contains the ATM network interface adaptor that enables
data entering or leaving the ATM network to interface with the external world. Examples of
these end points include workstations, LAN switches, video codecs and many more items.

ATM networks can be configured in many ways. The overall network will comprise a set of ATM
switches interconnected by point-to-point ATM links or interfaces. Within the network there are
two types of interface and these are both supported by the switches. The first is the UNI
(User-Network Interface), which is used to connect ATM end systems (such as hosts and routers)
to an ATM switch. The second type of interface is the NNI (Network-Network Interface), which
connects two ATM switches.
ATM operation
In ATM the information is formatted into fixed length cells consisting of 48 bytes (each 8 bits long)
of payload data. In addition to this there is a cell header that consists of 5 bytes, giving a total cell
length of 53 bytes. This format was chosen so that time critical data such as voice is not held up
behind very long packets. The header carries payload type information as well as what are termed
virtual path / virtual channel identifiers and header error check data.
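A sketch of the 53-byte cell layout may help to fix the format. The header error check (HEC) shown
here is the CRC-8 defined for ATM (polynomial x^8 + x^2 + x + 1, with the result conventionally
XORed with 0x55 per ITU-T I.432), and the field packing follows the UNI header layout of GFC,
VPI, VCI, PTI and CLP; the field values in the example are illustrative:

    # Sketch of a 53-byte ATM UNI cell: 5-byte header + 48-byte payload.
    def crc8_atm(data: bytes) -> int:
        """CRC-8 over the first four header bytes, polynomial 0x07, XOR 0x55."""
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc ^ 0x55

    def atm_cell(gfc: int, vpi: int, vci: int, pti: int, clp: int, payload: bytes) -> bytes:
        assert len(payload) == 48, "an ATM payload is always 48 bytes"
        word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pti << 1) | clp
        header = word.to_bytes(4, "big")
        return header + bytes([crc8_atm(header)]) + payload

    cell = atm_cell(gfc=0, vpi=1, vci=42, pti=0, clp=0, payload=bytes(48))
    print(len(cell))  # 53 bytes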
ATM is what is termed connection oriented. This has the advantage that the user can define the
requirements needed to support the calls, which in turn allows the network to allocate the required
resources. By adopting this approach, several calls can be multiplexed efficiently while ensuring
that the required resources are allocated.
There are two types of connection that are specified for asynchronous transfer mode:

• Virtual Channel Connections - this is the basic connection unit or entity. It carries a
single stream of data cells from the originator to the end user.
• Virtual Path Connections - this is formed from a collection of virtual channel
connections. A virtual path is an end to end connection created across an ATM
(asynchronous transfer mode) network. For a virtual path connection, the network routes
all cells from the virtual path across the network in the same way without regard for the
individual virtual circuit connection. This results in faster transfer.

The idea of virtual path connections is also used within the ATM network itself to route
traffic between switches

E-Carrier, E1
The E-carrier system was created by the European Conference of Postal and Telecommunications
Administrations (CEPT) as a digital telecommunications carrier scheme for carrying multiple links.
It enables several (multiplexed) voice/data channels to be transmitted simultaneously on the same
transmission facility. Of the various levels of the E-carrier system, the E1 and E3 levels are the
only ones in common use.
More specifically, E1 has an overall data rate of 2048 kbps and provides 32 channels, each
supporting a data rate of 64 kbps. The lines are mainly used to connect a PABX (Private Automatic
Branch eXchange) to the CO (Central Office) or main exchange.
The E1 standard defines the physical characteristics of a transmission path, and as such it
corresponds to the physical layer (layer 1) in the OSI model. Technologies such as ATM and others
which form layer 2 are able to pass over E1 lines, making E1 one of the fundamental technologies
used within telecommunications.
A similar standard to E1, known as T1 has similar characteristics, but it is widely used in North
America. Often equipment used for these technologies, e.g. test equipment may be used for both,
and the abbreviation E1/T1 may be seen.

E1 beginnings
The life of the standards started back in the early 1960s when Bell Laboratories, where the
transistor had been invented some years earlier, developed a voice multiplexing system to enable
better use to be made of the lines that were required and to improve on the performance of the
analogue techniques then in use. The first step of the process converted each voice signal into a
digital format with a 64 kbps data stream. The next stage was to assemble twenty four of these
data streams into a framed data stream with an overall data rate of 1.544 Mbps. This structured
signal was called DS1, but it is almost universally referred to as T1.
In Europe, the basic scheme was taken by what was then the CCITT and developed to fit the
European requirements better. This resulted in the scheme known as E1, which has provision for
30 voice channels and runs at an overall data rate of 2.048 Mbps. In Europe E1 refers to both the
formatted version and the raw data rate.

E1 Applications and standards
The E-carrier standards form part of the overall Plesiochronous Digital Hierarchy (PDH) scheme.
This allows groups of E1 circuits, each containing 30 voice circuits, to be combined to produce
higher capacity links. E1 to E5 are defined as carriers in increasing multiples of the E1 format,
although in reality only E3 is also widely used; it can carry 480 circuits and has an overall capacity
of 34.368 Mbps.
Physically E1 is transmitted as 32 timeslots and E3 has 512 timeslots. Unlike Internet data
services which are IP based, E-carrier systems are circuit switched and permanently allocate
capacity for a voice call for its entire duration. This ensures high call quality because the
transmission arrives with the same short delay (Latency) and capacity at all times. Nevertheless it
does not allow the same flexibility and efficiency to be obtained as that of an IP based system.
In view of the different capacities of E1 and E3 links they are used for different applications. E1
circuits are widely used to connect to medium and large companies, to telephone exchanges. They
may also be used to provide links between some exchanges. E3 lines are used where higher
capacity is needed. They are often installed between exchanges, and to provide connectivity
between countries.

E1 basics
An E1 link runs over two sets of wires, normally twisted pair or coaxial cable, and the signal itself
is a nominal 2.4 volt signal. The data rate is 2.048 Mbps full duplex, i.e. the full rate is available in
both directions.
For E1, the signal is split into 32 channels, each of 8 bits, which have their own time division
multiplexed slots. These are transmitted sequentially and the complete transmission of the 32
slots makes up a frame. The time slots are designated TS0 to TS31 and they are allocated to
different purposes:

• TS0 is used for synchronisation, alarms and messages
• TS1 - TS15 are used for user data
• TS16 is used for signalling, but it may also carry user data
• TS17 - TS31 are used for carrying user data
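The E1 figures all follow from this frame structure: 32 timeslots of 8 bits repeated 8000 times a
second. A quick check in Python:

    # E1 frame arithmetic: 32 timeslots of 8 bits, 8000 frames per second.
    SLOTS, BITS_PER_SLOT, FRAMES_PER_SECOND = 32, 8, 8000

    bits_per_frame = SLOTS * BITS_PER_SLOT              # 256 bits per frame
    line_rate_bps = bits_per_frame * FRAMES_PER_SECOND  # 2,048,000 bps = 2.048 Mbps
    slot_rate_bps = BITS_PER_SLOT * FRAMES_PER_SECOND   # 64,000 bps per timeslot

    print(bits_per_frame, line_rate_bps, slot_rate_bps)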

Time slot 0 is reserved for framing purposes: it alternately transmits a fixed pattern, allowing the
receiver to lock onto the start of each frame and match up each channel in turn. The standards
also allow for a Cyclic Redundancy Check (CRC-4) to be performed across all the bits transmitted
in each frame.
E1 signalling data is carried in TS16, which is reserved for signalling, including control, call setup
and teardown. These functions are accomplished using standard protocols, including Channel
Associated Signalling (CAS), where a set of bits is used to replicate opening and closing the circuit.
Tone signalling may also be used, passed through on the voice circuits themselves. More recent
systems use Common Channel Signalling (CCS) such as ISDN or Signalling System 7 (SS7), which
sends short encoded messages containing call information such as the caller ID.
Several options are specified in the original CEPT standard for the physical transmission of data.
However an option or standard known as HDB3 (High-Density Bipolar-3 zeros) is used almost
exclusively.
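A minimal sketch of the HDB3 substitution rules is shown below: every run of four zeros is
replaced by 000V or B00V so that successive violation (V) pulses alternate in polarity and no DC
component builds up on the line. The starting polarity and the handling of the first substitution are
conventions assumed for the sketch:

    def hdb3_encode(bits):
        """Encode a list of bits into HDB3 line symbols (+1, -1 or 0)."""
        out = []
        last = -1          # polarity of the most recent pulse sent
        marks_since_v = 0  # pulses transmitted since the last violation
        zeros = 0          # length of the current run of zeros
        for bit in bits:
            if bit:
                out.extend([0] * zeros)  # flush a run shorter than four zeros
                zeros = 0
                last = -last             # normal AMI mark alternates polarity
                out.append(last)
                marks_since_v += 1
            else:
                zeros += 1
                if zeros == 4:                 # substitute the four-zero run
                    if marks_since_v % 2:      # odd number of marks: 000V
                        out.extend([0, 0, 0, last])
                    else:                      # even number of marks: B00V
                        last = -last           # B is a normal alternating mark
                        out.extend([last, 0, 0, last])
                    marks_since_v = 0
                    zeros = 0
        out.extend([0] * zeros)
        return out

    # 1 0000 1 0000 encodes as +1 000V(+1) -1 000V(-1): the V pulses alternate.
    print(hdb3_encode([1, 0, 0, 0, 0, 1, 0, 0, 0, 0]))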

Future
E1 and T1 are well established for telecommunications use. However, with new technologies such
as ADSL and other IP based systems now being widely deployed, these will eventually spell the
end of E1 and T1. Nevertheless they have given good service over many years and, as a result of
this wide deployment, they will remain in use for some years to come.

What is an Erlang and Erlang B
The Erlang is widely used in telecommunications technology. It is a statistical measure of the voice
traffic density in a telecommunications system, and it is widely used because, for any element in a
telecommunications system, whether it is a landline or uses cellular technology, it is necessary to
be able to understand the traffic volume. As a result it helps to have a definition of
telecommunications traffic so that the volume can be quantified in a standard way and calculations
can be made. Telecommunications network designers make great use of the Erlang to understand
traffic patterns within a voice network, and they use the figures to determine the capacity required
in any area of the network.

Who was Erlang?
The Erlang is named after the Danish telephone engineer A. K. Erlang (Agner Krarup Erlang). He
was born on 1st January 1878 and, although he trained as a mathematician, he was the first
person to investigate traffic and queuing theory in telephone circuits.
After receiving his MA, Erlang worked in a number of schools. However, Erlang was a member of
the Danish Mathematician's Association (TBMI) and it was through this organization that Erlang
met the Chief Engineer of the Copenhagen Telephone Company (CTC) and as a result, he went to
work for them from 1908 for almost 20 years.
While he was at CTC, Erlang studied the loading on telephone circuits, looking at how many lines
were required to provide an acceptable service without installing too much over-capacity that
would cost the company money. There was a trade-off between cost and service level.
Erlang developed his theories over a number of years and published several papers. He expressed
his findings in mathematical form so that they could be used to calculate the required level of
capacity, and today the same basic equations are in widespread use.
In view of his groundbreaking work, the International Consultative Committee on Telephones and
Telegraphs (CCITT) honoured him in 1946 by adopting the name "Erlang" for the basic unit of
telephone traffic.
Erlang died on 3rd February 1929 after an unsuccessful abdominal operation.

Erlang basics
The Erlang is the basic unit of telecommunications traffic intensity, representing continuous use of
one circuit, and it is given the symbol "E". It is effectively call intensity in call minutes per sixty
minutes. In general the period of an hour is used, but it is actually a dimensionless unit because
the dimensions cancel out (i.e. minutes per minute).
The number of Erlangs is easy to deduce in a simple case. If a resource carries one call
continuously over the period of an hour, this is one Erlang. Alternatively, if two calls were each in
progress for fifty percent of the time, then this would also equal one Erlang (1E), and a radio
channel that is occupied for fifty percent of the time carries a traffic level of half an Erlang (0.5E).
From this it can be seen that an Erlang, E, may be thought of as a use multiplier where 100% use
is 1E, 200% is 2E, 50% use is 0.5E and so forth.
Interestingly, for many years AT&T and Bell Canada measured traffic in another unit called the
CCS, 100 call seconds. If figures in CCS are encountered, the conversion is simple: divide the
figure in CCS by 36 to obtain the figure in Erlangs, since an hour contains 36 blocks of 100
seconds.

Erlang function or Erlang formula and symbol
The traffic in Erlangs can be expressed as a simple function or formula:
A = λ × h
Where:
λ = the mean arrival rate of new calls
h = the mean call length or holding time
A = the traffic in Erlangs.
Using this simple Erlang function or Erlang formula, the traffic can easily be calculated.
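For example, with purely illustrative figures, an exchange receiving 120 calls in the busy hour with
a mean holding time of three minutes carries 120 × (3/60) = 6 E of traffic:

    # Traffic in Erlangs: A = lambda * h (illustrative figures).
    calls_per_hour = 120         # mean arrival rate of new calls
    holding_time_hours = 3 / 60  # mean call length of three minutes

    traffic_erlangs = calls_per_hour * holding_time_hours
    print(traffic_erlangs)  # 6.0 E: six circuits continuously occupied on average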

Erlang-B and Erlang-C
Erlang calculations are further broken down as follows:

• Erlang B: The Erlang B model is used to work out how many lines are required from a
knowledge of the traffic figure during the busiest hour. It assumes that any blocked calls
are cleared immediately, and it is the most commonly used figure in telecommunications
capacity calculations.
• Extended Erlang B: The Extended Erlang B is similar to Erlang B, but it can be used to
factor in the number of calls that are blocked and immediately tried again.
• Erlang C: The Erlang C model assumes that not all calls may be handled immediately
and some calls are queued until they can be handled.

These different models are described in further detail below.

Erlang B
It is particularly important to understand the traffic volumes at peak times of the day.
Telecommunications traffic, like many other commodities, varies over the course of the day, and
also the week. It is therefore necessary to understand the telecommunications traffic at the peak
times of the day and to be able to determine the acceptable level of service required. The Erlang B
figure is designed to handle the peak or busy periods and to determine the level of service
required in these periods.
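The Erlang B blocking probability is usually evaluated with the standard recurrence B(0) = 1,
B(n) = A·B(n-1) / (n + A·B(n-1)), where A is the offered traffic and n the number of lines. A small
sketch, using an illustrative 2% grade of service:

    # Erlang B blocking probability via the standard recurrence.
    def erlang_b(traffic_erlangs: float, lines: int) -> float:
        b = 1.0
        for n in range(1, lines + 1):
            b = traffic_erlangs * b / (n + traffic_erlangs * b)
        return b

    def lines_required(traffic_erlangs: float, max_blocking: float = 0.02) -> int:
        """Smallest number of lines meeting the target grade of service."""
        n = 1
        while erlang_b(traffic_erlangs, n) > max_blocking:
            n += 1
        return n

    # Example: 6 E of busy-hour traffic at a 2% grade of service needs 12 lines.
    print(lines_required(6.0))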

Erlang C
The Erlang C model is used by call centres to determine how many staff or call stations are
needed, based on the number of calls per hour, the average duration of call and the length of time
calls are left in the queue. The Erlang C figure is somewhat more difficult to determine because
there are more interdependent variables. It is nevertheless very important to determine if a call
centre is to be set up, as callers do not like being kept waiting interminably, as so often happens.

Erlang summary
The Erlang formulas and the concepts put forward by Erlang are still an essential part of
telecommunications network planning these days. As a result, telecommunications engineers
should have a good understanding of the Erlang and the associated formulae.
Despite the widespread use of the Erlang concepts and formulae, it is necessary to remember that
they have limitations, because the formulae make assumptions. Erlang B assumes that callers who
receive a busy tone will not immediately try again, while Erlang C assumes that queued callers will
wait indefinitely, which real callers will not. It is also worth remembering that the Erlang formulae
are based on statistics that strictly hold only for an infinite number of traffic sources; however, in
most cases a total of ten sources is adequate to give sufficiently accurate results.
The Erlang is a particularly important element of telecommunications theory, and it is a
cornerstone of many areas of telecommunications technology today. However one must be aware
of its limitations and apply the findings of any work using Erlangs, the Erlang B and Erlang C
formulas or functions with a certain amount of practical knowledge.

Ethernet IEEE 802.3 tutorial
This Ethernet, IEEE 802.3 tutorial is split into several pages each of which addresses different
aspects of Ethernet, IEEE 802.3 operation and technology:
[1] Ethernet IEEE 802.3 tutorial [2] Ethernet IEEE 802.3 standards [3] Ethernet IEEE 802.3 data
frames structure [4] 100 Mbps Ethernet inc 100 Base-T [5] Gigabit Ethernet 1GE [6] Ethernet
cables [7] Power over Ethernet, 802.3af and 802.3at
Ethernet, defined under IEEE 802.3, is one of today's most widely used data communications
standards, and it finds its major use in Local Area Network (LAN) applications. With versions
including 10Base-T, 100Base-T and now Gigabit Ethernet, it offers a wide variety of choices of
speed and capability. Ethernet is also cheap and easy to install. Additionally, Ethernet IEEE 802.3
offers a considerable degree of flexibility in terms of the network topologies that are allowed.
Furthermore, as it is in widespread use in LANs, it has been developed into a robust system that
meets a wide range of networking requirements.

Ethernet, IEEE 802.3 history
The Ethernet standard was first developed by the Xerox Corporation as an experimental coaxial
cable based system in the 1970s. Using a Carrier Sense Multiple Access / Collision Detect
(CSMA/CD) protocol to allow multiple users it was intended for use with LANs that were likely to
experience sporadic use with occasional heavy use.
The success of the original Ethernet project led to a joint development of a 10 Mbps standard in
1980. This time three companies were involved: Digital Equipment Corporation, Intel and Xerox.
The Ethernet Version 1 specification that arose from this development formed the basis for the
first IEEE 802.3 standard that was approved in 1983, and finally published as an official standard
in 1985. Since these first standards were written and approved, a number of revisions have been
undertaken to update the Ethernet standard and keep it in line with the latest technologies that
are becoming available.

Ethernet network elements
The Ethernet IEEE 802.3 LAN can be considered to consist of two main elements:

• Interconnecting media: The media through which the signals propagate is of great
importance within the Ethernet network system. It governs the majority of the properties
that determine the speed at which the data may be transmitted. There are a number of
options that may be used:
o Coaxial cable: This was one of the first types of interconnecting media to be used
for Ethernet. Typically a characteristic impedance of 50 ohms was specified, and
dedicated grades of cable were used rather than general purpose radio frequency
cables.
o Twisted pair cables: Two types of twisted pair may be used: Unshielded Twisted
Pair (UTP) or Shielded Twisted Pair (STP). Generally the shielded types are better
as they limit stray pickup and therefore reduce data errors.
o Fibre optic cable: Fibre optic cable is being used increasingly as it provides very
high immunity to pickup and radiation as well as allowing very high data rates to
be communicated.
• Network nodes The network nodes are the points to and from which the communication
takes place. The network nodes also fall into categories:
o Data Terminal Equipment - DTE: These devices are either the source or
destination of the data being sent. Devices such as PCs, file servers, print servers
and the like fall into this category.
o Data Communications Equipment - DCE: Devices that fall into this category
receive and forward the data frames across the network, and they may often be
referred to as 'Intermediate Network Devices' or Intermediate Nodes. They include
items such as repeaters, routers, switches or even modems and other
communications interface units.

Ethernet network topologies
There are several network topologies that can be used for Ethernet communications. The actual
form used will depend upon the requirements.

• Point to point: This is the simplest configuration as only two network units are used. It
may be a DTE to DTE, DTE to DCE, or even a DCE to DCE. In this simple structure the
cable is known as the network link. Links of this nature are used to transport data from
one place to another and where it is convenient to use Ethernet as the transport
mechanism.
• Coaxial bus: This type of Ethernet network is rarely used these days. The systems used
a coaxial cable where the network units were located along the length of the cable. The
segment lengths were limited to a maximum of 500 metres, and it was possible to place
up to 1024 DTEs along its length. Although this form of network topology is no longer
installed, a few legacy systems might still be in use.
• Star network: This type of Ethernet network has been the dominant topology since the
early 1990s. It consists of a central network unit, which may be what is termed a multi-
port repeater or hub, or a network switch. All the connections to other nodes radiate out
from this and are point to point links.
Summary
Despite the fact that Ethernet has been in use for many years, it is still a growing standard and it
is likely to be used for many years to come. During its life, the speed of Ethernet systems has
been increased, and now new optical fibre based Ethernet systems are being introduced. As the
Ethernet standard is being kept up to date, the standard is likely to remain in use for many years
to come.

Ethernet IEEE 802.3 Standards
Ethernet, 802.3 is defined under a number of IEEE standards, each reflecting a different flavour of
Ethernet. One of the successes of Ethernet has been the way in which it has been updated so that
it can keep pace with improving technology and the growing needs of the users.
As a result of this the IEEE standards committee for Ethernet has introduced new standards to
define higher performance variants. Each of the Ethernet IEEE 802.3 standards is given a different
reference so that it can be uniquely identified.
In addition to this the different IEEE 802.3 standards may be known by other references that
reflect the different levels of performance. These are also defined below.

IEEE 802.3 standards
All the IEEE 802.3 standard references include the IEEE 802.3 nomenclature. Different releases
and variants of the standard are then designated by letters appended to the 802.3 reference, i.e.
IEEE 802.3*. These are defined in the table below.

STANDARD SUPPLEMENT   YEAR   DESCRIPTION
802.3a 1985 10Base-2 (thin Ethernet)
802.3c 1986 10 Mb/s repeater specifications (clause 9)
802.3d 1987 FOIRL (fiber link)
802.3i 1990 10Base-T (twisted pair)
802.3j 1993 10Base-F (fiber optic)
802.3u 1995 100Base-T (Fast Ethernet and auto-negotiation)
802.3x 1997 Full duplex
802.3z 1998 1000Base-X (Gigabit Ethernet)
802.3ab 1999 1000Base-T (Gigabit Ethernet over twisted pair)
802.3ac 1998 VLAN tag (frame size extension to 1522 bytes)
802.3ad 2000 Parallel links (link aggregation)
802.3ae 2002 10-Gigabit Ethernet
802.3as 2006 Frame expansion
802.3at 2009 Power over Ethernet Plus
Ethernet standards supplements and releases
New technologies are being added to the list of IEEE 802.3 standards to keep pace with
technology.

Ethernet terminology
There is a convention for describing the different forms of Ethernet; for example, 10Base-T and
100Base-T are widely seen in technical articles and literature. The designator consists of three
parts:
• The first number (typically one of 10, 100, or 1000) indicates the transmission speed in
megabits per second.
• The second term indicates transmission type: BASE = baseband; BROAD = broadband.
• The last number indicates segment length. A 5 means a 500-meter (500-m) segment
length from original Thicknet. In the more recent versions of the IEEE 802.3 standard,
letters replace numbers. For example, in 10BASE-T, the T means unshielded twisted-pair
cables. Further numbers indicate the number of twisted pairs available. For example in
100BASE-T4, the T4 indicates four twisted pairs.
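The convention lends itself to a simple parser; the sketch below is illustrative and only handles the
common designator shapes described above:

    # Illustrative parser for Ethernet designators such as "10BASE5",
    # "10BROAD36" or "100BASE-T4", following the convention described above.
    import re

    def parse_designator(name: str) -> dict:
        m = re.fullmatch(r"(\d+)(BASE|BROAD)-?(\w+)", name.upper())
        if not m:
            raise ValueError(f"not an Ethernet designator: {name}")
        speed, signalling, medium = m.groups()
        return {
            "speed_mbps": int(speed),
            "signalling": "baseband" if signalling == "BASE" else "broadband",
            # A trailing number is a segment length in hundreds of metres;
            # otherwise it is a letter code such as T (twisted pair).
            "medium": f"{int(medium) * 100} m segment" if medium.isdigit() else medium,
        }

    print(parse_designator("10BASE5"))     # 10 Mbps, baseband, 500 m segment
    print(parse_designator("100BASE-T4"))  # 100 Mbps, baseband, medium T4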

Summary
The Ethernet IEEE 802.3 standards are continually being updated to ensure that the generic
standard keeps pace with constant advance of technology and the growing needs of the users. As
a result, IEEE 802.3, Ethernet is still at the forefront of network communications technology, and it
appears it will retain this position of dominance for many years to come. In addition to the
different IEEE 802.3 standards, the terminology used to define the different flavours is also widely
used for defining which Ethernet variant is used.

Ethernet IEEE 802.3 Frame Format / Structure
Ethernet, IEEE 802.3 defines the frame formats or frame structures that are developed within the
MAC layer of the protocol stack.
Essentially the same frame structure is used for the different variants of Ethernet, although there
are some changes to the frame structure to extend the performance of the system should this be
needed. With the high speeds and variety of media used, this basic format sometimes needs to be
adapted to meet the individual requirements of the transmission system, but this is still specified
within the amendment / update for that given Ethernet variant.

10 / 100 Mbps Ethernet MAC data frame format
The basic MAC data frame format for Ethernet, IEEE 802.3 used within the 10 and 100 Mbps
systems is given below:

Basic Ethernet MAC Data Frame Format
The basic frame consists of seven elements split between three main areas:

• Header
o Preamble (PRE) - This is seven bytes long and it consists of a pattern of alternating
ones and zeros, and this informs the receiving stations that a frame is starting as
well as enabling synchronisation. (10 Mbps Ethernet)
o Start Of Frame delimiter (SOF) - This consists of one byte and contains an
alternating pattern of ones and zeros but ending in two ones.
o Destination Address (DA) - This field contains the address of the station for which
the data is intended. The left-most bit indicates whether the destination is an
individual address or a group address: an individual address is denoted by a zero,
while a one indicates a group address. The next bit indicates whether the address
is globally or locally administered: the bit is a zero if it is globally administered and
a one if it is locally administered. The remaining 46 bits are used for the
destination address itself.
o Source Address (SA) - The source address consists of six bytes, and it is used to
identify the sending station. As it is always an individual address the left most bit
is always a zero.
o Length / Type - This field is two bytes in length. It indicates the number of bytes
of client data contained in the data field of the frame (IEEE 802.3 only), or
alternatively the frame type ID if the frame is assembled using the optional type
format.
• Payload
o Data - This block contains the payload data and it may be up to 1500 bytes long. If
the length of the field is less than 46 bytes, then padding data is added to bring its
length up to the required minimum of 46 bytes.
• Trailer
o Frame Check Sequence (FCS) - This field is four bytes long. It contains a 32 bit
Cyclic Redundancy Check (CRC) which is generated over the DA, SA, Length / Type
and Data fields.
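The frame layout can be made concrete with a short sketch. The FCS here is the standard CRC-32,
which zlib's implementation matches, appended least significant byte first; the addresses and
EtherType used are illustrative values:

    # Sketch of assembling a basic Ethernet MAC frame (the preamble and SOF
    # are added by the hardware and are omitted here).
    import struct
    import zlib

    def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
        assert len(dst) == 6 and len(src) == 6
        if len(payload) < 46:                   # pad short payloads to 46 bytes
            payload = payload + bytes(46 - len(payload))
        frame = dst + src + struct.pack("!H", ethertype) + payload
        fcs = zlib.crc32(frame) & 0xFFFFFFFF    # CRC over DA, SA, Length/Type, Data
        return frame + struct.pack("<I", fcs)

    frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", 0x0800, b"hello")
    print(len(frame))  # 6 + 6 + 2 + 46 + 4 = 64 bytes, the Ethernet minimum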

1000 Mbps Ethernet MAC data frame format
The basic MAC data frame format is modified slightly for 1GE, IEEE 802.3z systems. When using
the 1000Base-X standard there is a minimum frame size of 416 bytes, and for 1000Base-T there is
a minimum frame size of 520 bytes. To accommodate this, a non-data variable extension field is
appended to any frames that are shorter than the minimum required length.

1GE / 1000 Mbps Ethernet MAC Data Frame Format

Half-duplex transmission
This access method involves the use of CSMA/CD and it was developed to enable several stations
to share the same transport medium without the need for switching, network controllers or
assigned time slots. Each station is able to determine when it is able to transmit and the network
is self organising.
The CSMA/CD protocol used for Ethernet and a variety of other applications falls into three
categories. The first is Carrier Sense. Here each station listens on the network for traffic and it can
detect when the network is quiet. The second is the Multiple Access aspect where the stations are
able to determine for themselves whether they should transmit. The final element is the Collision
Detect element. Even though stations may find the network free, it is still possible that two
stations will start to transmit at virtually the same time. If this happens then the two sets of data
being transmitted will collide. If this occurs then the stations can detect this and they will stop
transmitting. They then back off a random amount of time before attempting a retransmission.
The random delay is important as it prevents the two stations starting to transmit together a
second time.
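The random back-off follows the truncated binary exponential backoff algorithm: after the n-th
successive collision a station waits a random number of slot times r, with 0 <= r < 2^min(n, 10),
and abandons the frame after 16 attempts. A minimal sketch for a 10 Mbps network:

    # Truncated binary exponential backoff as used by CSMA/CD.
    import random

    SLOT_TIME_US = 51.2  # 512 bit times at 10 Mbps

    def backoff_delay_us(collision_count: int) -> float:
        if collision_count > 16:
            raise RuntimeError("excessive collisions: frame abandoned")
        max_slots = 2 ** min(collision_count, 10)  # window doubles, capped at 1024
        return random.randrange(max_slots) * SLOT_TIME_US

    # The delay window doubles with each successive collision.
    for n in (1, 2, 3, 10):
        print(n, backoff_delay_us(n))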
Note: According to section 3.3 of the IEEE 802.3 standard, each octet of the Ethernet frame, with
the exception of the FCS, is transmitted low-order bit first.

Full duplex
Another option allowed by the Ethernet MAC is full duplex, with transmission in both directions
simultaneously. This is only allowable on point-to-point links, and it is much simpler to implement
than the CSMA/CD approach as well as providing much higher transmission throughput when the
network is being used. Not only is there no need to schedule transmissions around other stations,
as there are only two stations in the link, but by using a full duplex link, full rate transmissions can
be undertaken in both directions, thereby doubling the effective bandwidth.

Ethernet addresses
Every Ethernet network interface card (NIC) is given a unique identifier called a MAC address. This
is assigned by the manufacturer of the card and each manufacturer that complies with IEEE
standards can apply to the IEEE Registration Authority for a range of numbers for use in its
products.
The MAC address comprises a 48-bit number. Within the number, the first 24 bits identify the
manufacturer; this part is known as the manufacturer ID or Organizational Unique Identifier (OUI)
and is assigned by the registration authority. The second half of the address is assigned by the
manufacturer and is known as the extension or board ID.
The MAC address is usually programmed into the hardware so that it cannot be changed. Because
the MAC address is assigned to the NIC, it moves with the computer. Even if the interface card
moves to another location across the world, the user can be reached because the message is sent
to the particular MAC address.
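Splitting an address into these two halves is straightforward; the sketch below also checks the
individual/group (I/G) and universally/locally administered (U/L) bits carried in the first octet:

    # Split a MAC address into its OUI and extension and inspect the
    # I/G (least significant) and U/L bits of the first octet.
    def inspect_mac(mac: str) -> dict:
        octets = bytes(int(part, 16) for part in mac.split(":"))
        assert len(octets) == 6
        return {
            "oui": octets[:3].hex(":"),        # manufacturer identifier
            "extension": octets[3:].hex(":"),  # board ID assigned by the maker
            "group_address": bool(octets[0] & 0x01),
            "locally_administered": bool(octets[0] & 0x02),
        }

    print(inspect_mac("00:1a:2b:3c:4d:5e"))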

100 Mbps Ethernet / IEEE 802.3u including 100Base-T
100Base-T Ethernet was originally known as "Fast Ethernet" when the IEEE 802.3u standard that
defines it was released in 1995. At that time it was the fastest version of Ethernet available,
offering a speed of 100 Mbps (12.5 MByte/s excluding 4B/5B overhead). 100Base-T has since been
overtaken by 1 Gigabit and more recently 10 Gigabit Ethernet, offering speeds of 10 and 100 times
that of 100Base-T. Nevertheless 100Base-T remains widely used, as it offers a performance that is
more than acceptable for many networking applications.

100Base-T overview
100Base-T Ethernet, also known as Fast Ethernet, is defined under the 802.3 family of standards
as 802.3u. Like other flavours of Ethernet, 100Base-T is a shared media LAN: all the nodes within
the network share the 100 Mbps bandwidth. Additionally it conforms to the same basic operational
techniques used by other flavours of Ethernet, in particular the CSMA/CD access method, although
there are some minor differences in the way the overall system operates.
The designation 100Base-T is derived from the standard format for Ethernet connections: the first
figure designates the speed in Mbps, "Base" indicates that the system operates at baseband, and
the following letters indicate the cable or transfer medium.


There are a number of cabling versions available:

• 100Base-TX: uses two pairs of Category 5 UTP *
• 100Base-T4: uses four pairs of Category 3 (now obsolete) *
• 100Base-T2: uses two pairs of Category 3 (now obsolete) *
• 100Base-FX: It uses two strands of multi-mode optical fibre for receive and transmit.
Maximum length is 400 metres for half-duplex connections (to ensure collisions are
detected) or 2 kilometres for full-duplex and is primarily intended for backbone use
• 100Base-SX: uses two strands of multi-mode optical fibre for receive and transmit. It
is a lower cost alternative to 100Base-FX, because it uses short wavelength optics
which are significantly less expensive than the long wavelength optics used in 100Base-FX.
100Base-SX can operate at distances up to 300 metres.
• 100Base-BX: is a version of Fast Ethernet over a single strand of optical fibre (unlike
100Base-FX, which uses a pair of fibres). Single-mode fibre is used, along with a special
multiplexer which splits the signal into transmit and receive wavelengths.
* The segment length for a 100Base-T cable is limited to 100 metres.

Fast Ethernet data frame format
Although the frame format for sending data over an Ethernet link does not vary considerably,
there are some changes that are needed to accommodate the different physical requirements of
the various flavours. The format adopted for Fast Ethernet, 802.3u is given below:

Fast Ethernet (802.3u) Data Frame Format

It can be seen from the diagram above that the data can be split into several elements:
PRE This is the Preamble and it is seven bytes long and it consists of a series of alternating
ones and zeros. This warns the receivers that a data frame is coming and it allows them to
synchronise to it.
SOF This is the Start Of Frame delimiter. This is only one byte long and comprises a pattern of
alternating ones and zeros ending with two bits set to logical "one". This indicates that the next bit
in the frame will be the destination address.
DA This is the Destination Address and it is six bytes in length. This identifies the receiver that
should receive the data. The left-most bit in the left-most byte of the destination address
immediately follows the SOF.
SA This is the Source Address and again it is six bytes in length. As the name implies it
identifies the source address.
Length / Type This two byte field indicates the payload data length. It may also provide the
frame ID if the frame is assembled using an alternative format.
Data This section has a variable length according to the amount of data in the payload. It may
be anywhere between 46 and 1500 bytes. If the length of data is below 46 bytes, then dummy
data is transmitted to pad it out to reach the minimum length.
FCS This is the Frame Check Sequence which is four bytes long. This contains a 32 bit cyclic
redundancy check (CRC) that is used for error checking.

Data transmission speed
Although the theoretical maximum bit rate of the system is 100 Mbps, the rate at which the
payload is transferred on real networks is far less than this. This is because the additional data in
the form of the header and trailer (addressing and error-detection bits) on every packet, along
with the occasional corrupted packet needing to be re-sent, slows the transmission. In addition,
time is lost waiting after each transmitted packet for other devices on the network to finish
transmitting.

Fast Ethernet using Cat 5 cable
Fast Ethernet can be transmitted over a variety of media, but 100Base-T carried over Cat 5 cable
is the most common form. These cables have four twisted pairs of which only two are used for
10Base-T or 100Base-T: one pair carries the transmitted data (TD) and another the received data
(RD), as shown below. The data is carried differentially over the wires so that the "+" and "-"
wires carry equal and opposite signals, and as a result any radiation is cancelled out.

PIN   WIRE COLOUR      FUNCTION
1     White + Green    +TD
2     Green            -TD
3     White + Orange   +RD
4     Blue             Not used
5     White + Blue     Not used
6     Orange           -RD
7     White + Brown    Not used
8     Brown            Not used
Wiring for Cat 5 cable used for 100Base-T Ethernet

Fast Ethernet Applications
Fast Ethernet in the form of 100Base-T, IEEE 802.3u has become one of the most widely used
forms of Ethernet. It became almost universally used for LAN applications in view of its ease of use
and the fact that systems could sense whether 10Base-T or 100Base-T speeds should be used. In
this way 100Base-T systems could be introduced steadily and mixed with existing 10Base-T
equipment, with the higher specification standard used once both communicating elements were
100Base-T capable. The fibre based version is also used but, because Cat 5 cable is so cheap and
easy to use, the wired version is more common. The fibre version, however, has the advantage of
being able to communicate over greater distances.

Gigabit Ethernet, 1GE including 1000Base-T
Gigabit Ethernet, 1GE, is the next development of the Ethernet standard beyond the popular
100Base-T version. As the name suggests, Gigabit Ethernet, 1GE, allows the transfer of data at
speeds of 1000 Mbps or 1Gbps. It is particularly easy to install because the 1000Base-T variant is
designed to run over Cat 5 UTP (unshielded twisted pair) that is widely and cheaply available.
Initially Gigabit Ethernet, 1GE was only used for applications such as backbone links within large
networks, but as the technology has become more affordable it is being used more widely, and the
1000Base-T variant is often incorporated within PCs themselves. Even 1 Gigabit Ethernet is now
being superseded as 10 Gigabit Ethernet becomes widely used; despite this, the 1 Gigabit version
will still be designed into new products for many years to come.

Gigabit Ethernet, 1GE development
The success of the Ethernet standard has been its ability to evolve and move forward in such a
way that it can keep up with or even ahead of the networking requirements for local area
networks. The original development of Ethernet took place in the 1970s at the Xerox Corporation.
Since it was launched onto the market it has steadily evolved, with versions including 10Base-T
and later 100Base-T becoming networking standards.
With its success, the Ethernet standard was taken over by the IEEE under their standard IEEE
802.3. IEEE 802.3ab, which defines Gigabit Ethernet over twisted pair, was ratified in 1999 and
became known as 1000Base-T.

Gigabit Ethernet basics
Although the 1000Base-T version of Gigabit Ethernet is probably the most widely used, the Gigabit
Ethernet specifications also detail versions that can operate over other media (the fibre and short
haul copper variants below are defined under IEEE 802.3z):

• 1000Base-CX This was intended for connections over short distances up to 25 metres
per segment and using a balanced shielded twisted pair copper cable. However it was
succeeded by 1000Base-T.
• 1000Base-LX This is a fiber optic version that uses a long wavelength
• 1000Base-SX This is a fiber optic version of the standard that operates over multi-
mode fiber using a 850 nanometer, near infrared (NIR) light wavelength
• 1000Base-T Also known as IEEE 802.3ab, this is a standard for Gigabit Ethernet over
copper wiring, but requires Category 5 (Cat 5) cable as a minimum.

The specification for Gigabit Ethernet provides for a number of requirements to be met. These can
be summarised as the points below:

• Provide for half and full duplex operation at speeds of 1000 Mbps.
• Use the 802.3 Ethernet frame formats.
• Use the CSMA/CD access method with support for one repeater per collision domain.
• Provide backward compatibility with 10BASE-T and 100BASE-T technologies.

Note on CSMA/CD:
The CSMA/CD protocol used for Ethernet and a variety of other applications falls into three categories. The first
is Carrier Sense. Here each station listens on the network for traffic and it can detect when the network is quiet. The
second is the Multiple Access aspect where the stations are able to determine for themselves whether they should
transmit. The final element is the Collision Detect element. Even though stations may find the network free, it is still
possible that two stations will start to transmit at virtually the same time. If this happens then the two sets of data
being transmitted will collide. If this occurs then the stations can detect this and they will stop transmitting. They then
back off a random amount of time before attempting a retransmission. The random delay is important as it prevents
the two stations starting to transmit together a second time.
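The random back-off described above can be illustrated with a short sketch. Classic Ethernet uses truncated binary exponential back-off: after the n-th successive collision a station waits a random number of slot times between 0 and 2^min(n,10) - 1. The Python below is purely illustrative of that calculation.

    import random

    SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet (512 bit times)

    def backoff_delay(collision_count):
        """Truncated binary exponential back-off delay in microseconds."""
        k = min(collision_count, 10)
        slots = random.randint(0, 2**k - 1)
        return slots * SLOT_TIME_US

    # Two stations that have just collided will rarely choose the same
    # delay again, so their retransmissions are unlikely to collide:
    print([backoff_delay(1) for _ in range(4)])
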

Like its predecessors 10Base-T and 100Base-T, Gigabit Ethernet is a physical layer (PHY) and media access control (MAC) technology, covering Layer 1 and the lower portion of the Layer 2 data link layer of the OSI protocol model. It complements upper-layer protocols TCP and IP, which specify the Layer 4 transport and Layer 3 network portions and enable communications between applications.

Gigabit transport mechanism for 1000Base-T


In order to enable Gigabit Ethernet, 1000Base-T to operate over standard Cat 5 or Cat 5e cable,
the transmission techniques employed operate in a slightly different way to that employed by
either 10Base-T or 100Base-T. While it accomplishes this it still retains backward compatibility with
the older systems.
Cat 5 cables have four twisted pairs, of which only two are used for 10Base-T or 100Base-T. 1000Base-T Ethernet makes full use of the additional pairs.
To see how this operates it is necessary to look at the wiring and how it is used. For 10Base-T and 100Base-T one pair of wires is used for the transmitted data and another for the received data as shown below:

PIN   WIRE COLOUR      FUNCTION
1     White / Green    +TD
2     Green            -TD
3     White / Orange   +RD
4     Blue             Not used
5     White / Blue     Not used
6     Orange           -RD
7     White / Brown    Not used
8     Brown            Not used
Wiring for Cat 5 cable used for 10 and 100 Base-T
The data is transmitted along the twisted pair wires. One wire is used for the positive and one for
the negative side of the waveform, i.e. send and return. As the two signals are the inverse of each
other any radiation is cancelled out. From the table the lines are labelled RD for received data and
TD for transmitted data.
The Cat 5 cable used for 100Base-T Ethernet actually carries a symbol rate of 125 Mbaud. The reason for this is that the data is coded so that each 4 bits are carried in a 5 bit code group in a scheme known as 4B/5B. Thus to transmit at 100 Mbps a clock rate of 125 MHz is needed. This factor can also be used to advantage by 1000Base-T, Gigabit Ethernet.
To achieve the rate of 1000 Mbps, Gigabit Ethernet, 1000Base-T uses a variety of techniques to retain the maximum clock rate of 125 MHz while increasing the data transfer rate to a Gigabit per second. In this way the standard Cat 5 cable can be used as Gigabit Ethernet cable.
The first technique is that rather than transmitting on a single pair, all four twisted pairs are used simultaneously. The second is that each pair carries two bits per symbol rather than one: a five-level amplitude scheme, PAM-5, is used in which four of the levels encode the data combinations "00", "01", "10" and "11", and the fifth level is used for the error-correcting code. Finally, each twisted pair is used for simultaneous transmission and reception, i.e. each pair is bi-directional, so full duplex operation requires no extra pairs.
This method of transmission is known as 4D-PAM5, and the maximum data rate is 125 Mbaud x 2 bits x 4 pairs = 1000 Mbps.
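The arithmetic behind 4D-PAM5 can be set out in a few lines of Python. This is a sketch only: the real 1000Base-T PHY adds scrambling and trellis coding, and the level mapping shown is hypothetical, but the throughput calculation is as described above.

    # Four of the five PAM-5 levels carry data; this mapping is
    # illustrative, not the mapping defined in the standard.
    DATA_LEVELS = {"00": -2, "01": -1, "10": +1, "11": +2}

    SYMBOL_RATE = 125e6   # 125 Mbaud, the same symbol rate as 100Base-TX
    BITS_PER_SYMBOL = 2   # two bits per PAM-5 symbol
    PAIRS = 4             # all four twisted pairs carry data

    bit_rate = SYMBOL_RATE * BITS_PER_SYMBOL * PAIRS
    print(f"{bit_rate / 1e9:.0f} Gbps")   # -> 1 Gbps
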
Although the same cables are used for Gigabit Ethernet, the designations for the individual lines in the Gigabit Ethernet cable are changed to map the way in which the data is carried. The letters "BI" indicate the data is bi-directional and the letters DA, DB, etc indicate Data A, Data B, and so on.

PIN   WIRE COLOUR      FUNCTION
1     White / Green    +BI-DA
2     Green            -BI-DA
3     White / Orange   +BI-DB
4     Blue             +BI-DC
5     White / Blue     -BI-DC
6     Orange           -BI-DB
7     White / Brown    +BI-DD
8     Brown            -BI-DD
Line designations for Cat 5 Gigabit Ethernet cable
Gigabit Ethernet is rapidly becoming an accepted standard, not just for high speed links within networks, but also for standard links between PCs and the relevant servers. Many PCs have Gigabit Ethernet fitted as standard, and this in turn means that networks need Gigabit Ethernet switches, routers, etc. The fact that standard Cat 5 cable can be used for the 1000Base-T variant means that Gigabit Ethernet is taking over rapidly from the previous variants of Ethernet, allowing speeds to be steadily increased.

Practical aspects
Gigabit Ethernet, 1GE has been developed with the idea of using ordinary Cat 5 cables. However
several companies recommend the use of higher spec Cat 5e cables when Gigabit Ethernet
applications are envisaged. Although slightly more expensive, these Cat 5e cables offer improved
crosstalk and return loss performance. This means that they are less susceptible to noise. When data is being passed at very high rates, there is always the possibility that electrical noise can cause problems. The use of Cat 5e cables may therefore improve performance, particularly in electrically noisy environments or over longer runs.

Ethernet cable summary


This page provides a summary of the different types of cable used for the different types of Ethernet including 10Base-T and 100Base-T, etc. A description of Cat 5 and other category cables is provided, as well as of Cat-5 crossover cables. This type of cable uses RJ45 connectors and accordingly is often referred to as RJ45 network cable, or RJ45 patch cable.
The Ethernet standard is well established. It is used in a variety of different environments and
accordingly there is a variety of different types of cable over which it operates. It is possible not
only for Ethernet to operate at different speeds, but there are different varieties of cable that can
be used within the same speed category. In order to ensure that Ethernet operates correctly, the
types of cable, their electrical conditions and the maximum lengths over which they may operate
are specified.
For many applications, ready made Ethernet cables may be purchased, and a knowledge of the construction of the Ethernet cables is not required. However for other applications it is necessary to know how the cables are constructed. As a result, advertisements for different types of cable, Cat-5, Cat-5e and Cat-6, are widely seen. These cables may be used for different applications.
A summary of Ethernet cables and their maximum operating lengths is given below:

SPECIFICATION   CABLE TYPE                MAXIMUM LENGTH
10Base-T        Unshielded twisted pair   100 metres
10Base2         Thin coaxial cable        185 metres
10Base5         Thick coaxial cable       500 metres
10BaseF         Fibre optic cable         2000 metres
100Base-T       Unshielded twisted pair   100 metres
100Base-TX      Unshielded twisted pair   100 metres
Ethernet cable type summary
Lengths provided are those accepted as the maximum.
These lengths are not necessarily included in the IEEE standard.

Categories for Ethernet cables


A variety of different cables are available for Ethernet and other telecommunications and networking applications. These cables are described by their different categories, e.g. Cat 5 cables, Cat-6 cables, etc, which are recognised by the TIA (Telecommunications Industry Association), and they are summarised below:

• Cat-1: This is not recognised by the TIA/EIA. It is the form of wiring used for standard telephone (POTS) wiring, or for ISDN.
• Cat-2: This is not recognised by the TIA/EIA. It was the form of wiring used for 4 Mbit/s token ring networks.
• Cat-3: This cable is defined in TIA/EIA-568-B. It is used for data networks employing frequencies up to 16 MHz. It was popular for use with 10 Mbps Ethernet networks (10Base-T), but has now been superseded by Cat-5 cable.
• Cat-4: This cable is not recognised by the TIA/EIA. However it can be used for networks carrying frequencies up to 20 MHz. It was often used on 16 Mbps token ring networks.
• Cat-5: This is no longer recognised by the TIA/EIA. It is the cable that is widely used for 100Base-T and 1000Base-T networks, as its performance supports the 125 Mbaud signalling rate that both employ.
• Cat-5e: This form of cable is recognised by the TIA/EIA and is defined in TIA/EIA-568-B. It has a slightly tighter specification than Cat-5 cable, particularly for crosstalk and return loss. It can be used for 100Base-T and 1000Base-T (Gigabit Ethernet).
• Cat-6: This cable is defined in TIA/EIA-568-B. It provides more than double the bandwidth of Cat-5 and Cat-5e cables, being specified up to 250 MHz.
• Cat-7: This is an informal name for ISO/IEC 11801 Class F cabling. It comprises four individually shielded pairs inside an overall shield. It is aimed at applications where transmission of frequencies up to 600 MHz is required.

Further descriptions of Cat-5 and Cat-5e cables are given below as these are widely used for Ethernet networking applications today.
Ethernet Cat 5 cable
Cat 5 cable, or to give it its full name, Category 5 cable, is the current preferred cable type for LAN network and telephone wiring where twisted pair cabling is required. Cat 5 cables consist of an unshielded cable comprising four twisted pairs, typically of 24 gauge wire. The terminating connector is an RJ-45 jack. In view of this these Cat 5 network cables are often referred to as RJ45 network cables or RJ45 patch cables. Certified Cat-5 cables will have the wording "Cat-5", along with the EIA/TIA standard to which they conform, written on the outer sheath. It is always best to use the appropriate network cables when setting up a network, as faulty or out-of-specification cables can cause problems that may be difficult to identify and trace.
Cat 5 network cable is now the standard form of twisted pair cable and supersedes Cat 3. Cat 5 cables support the 125 Mbaud signalling rate used by 100Base-T, which has a maximum data speed of 100 Mbps, whereas Cat-3 cable was only designed to be compatible with 10Base-T. Cat 5 cable is able to support working up to lengths of 100 metres at the full data rate.
Where it is necessary to operate at higher speeds, as in the case of Gigabit Ethernet, an enhanced version of Cat 5 cable known as Cat 5e is often recommended, although Cat 5 itself is specified to operate with Gigabit Ethernet, 1000Base-T. Alternatively Cat 5e can be used with 100Base-T, where some suppliers claim that greater lengths (up to 350 metres) can be achieved.
The wires and connections within the Cat 5 or Cat 5e cable vary according to the applications. A
summary of the signals carried and the relevant wires and connections is given in the table below:

PIN  COLOUR           TELEPHONE  10BASE-T  100BASE-T  1000BASE-T  POE MODE A   POE MODE B
1    White / Green               +TX       +TX        +BI_DA      48 V out
2    Green                       -TX       -TX        -BI_DA      48 V out
3    White / Orange              +RX       +RX        +BI_DB      48 V return
4    Blue             Ring                            +BI_DC                   48 V out
5    Blue / White     Tip                             -BI_DC                   48 V out
6    Orange                      -RX       -RX        -BI_DB      48 V return
7    White / Brown                                    +BI_DD                   48 V return
8    Brown                                            -BI_DD                   48 V return
RJ-45 / Cat 5 / Cat 5e Wiring
In the table, TX is transmitted data and RX is received data. BI_Dn is bi-directional data pair n, i.e. A, B, C and D.

Ethernet Cat 5 crossover cables


There are a number of different configurations of cable that may be employed according to the equipment and the requirement. The most common type is the straight-through cable, which is wired in a 1-to-1 configuration. However Cat-5 crossover cables are also required on occasions. Typically a Cat-5 cable used to connect a computer (PC) to a switch will be a straight-through cable. However if two computers or two switches are connected together then a Cat 5 crossover cable is used.
Many Ethernet interfaces in use today are able to detect the type of cable, whether it is a straight
through or crossover cable, and they are able to adapt to the required format. This means that the
requirement for Cat-5 crossover cables is less than it might otherwise be.
Cat-5 Ethernet crossover cables are not normally marked as such. Accordingly it is often wise to label them to avoid confusion later.
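The difference between the two cable types comes down to a simple pin mapping. For 10/100 Mbps operation a crossover cable connects the transmit pair at one end (pins 1 and 2) to the receive pair at the other (pins 3 and 6); a Gigabit crossover additionally crosses pins 4/5 with 7/8. A small sketch of the 10/100 case:

    # Crossover mapping for 10/100 Mbps: TX pair (1, 2) <-> RX pair (3, 6).
    CROSSOVER_10_100 = {1: 3, 2: 6, 3: 1, 6: 2}

    def far_end_pin(near_pin, crossover):
        """Return the pin a conductor reaches at the far connector."""
        if crossover:
            return CROSSOVER_10_100.get(near_pin, near_pin)
        return near_pin  # a straight-through cable is wired 1-to-1

    assert far_end_pin(1, crossover=True) == 3
    assert far_end_pin(1, crossover=False) == 1
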

Ethernet Cat 5e cables


In order to improve the performance of the cabling used for Ethernet and other applications, the Cat 5 cable was upgraded to Cat 5e. This cable provides reduced crosstalk and improved return loss. This is achieved through tighter manufacturing tolerances rather than screening: like standard Cat 5, Cat 5e cable is normally unshielded.
Summary
Cat 5 network cable is now the standard for networking. Using the cost effective RJ45 connector, these cables are often referred to as RJ45 network cables or RJ45 patch cables, as they are able to link or patch different Ethernet items together very easily. With the introduction of Cat 5e, these enhanced Ethernet cables are also becoming more widespread in their use.
Although this is not a complete summary of all the types of Ethernet cable that may be found, it gives a guide to some of the most common. 10Base-T and 100Base-T are possibly the most widely used forms of Ethernet, although higher speeds are now becoming commonplace. In addition, the variety of cables including Cat-5 cable and all its versions, including crossover Cat 5 cables, may be obtained from a variety of suppliers.

Power over Ethernet, PoE, IEEE 802.3af / 802.3at


Powering network devices can sometimes present problems, especially if they are located remotely. One convenient solution is to supply the power over an Ethernet LAN cable. Power over Ethernet is defined under two IEEE standards, namely IEEE 802.3af and the later IEEE 802.3at, which defined a number of enhancements.
Power over Ethernet, PoE, is now being used for a wide variety of applications including powering IP telephones, wireless LAN access points, webcams, Ethernet hubs and switches and many more devices. It is convenient to use and as a result Power over Ethernet is widely deployed and many products are available.

PoE Development
With Ethernet now an established standard, one of the limitations of Ethernet related equipment
was that it required power and this was not always easily available. As a result some
manufacturers started to offer solutions whereby power could be supplied over the Ethernet cables
themselves. To prevent a variety of incompatible Power over Ethernet, PoE, solutions appearing on
the market, and the resulting confusion, the IEEE began their standardisation process in 1999.
A variety of companies were involved in the development of the IEEE standard. The result was the IEEE 802.3af standard that was approved for release on 12 June 2003. Although some products were released before this date and may not fully conform to the standard, most products available today will conform to it, especially if they quote compliance with 802.3af.
A further standard, designated IEEE 802.3at was released in 2009 and this provided for several
enhancements to the original IEEE 802.3af specification.

PoE overview
The standard allows for a nominal supply of 48 volts with a maximum continuous current of 350 milliamps to be provided over two of the four pairs of a Cat 3 or Cat 5 cable. While this sounds very useful, with a maximum power of 15.4 watts at the source, the losses in the cabling normally reduce the power guaranteed at the powered device to just under 13 watts.
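The worst-case arithmetic behind the figure of just under 13 watts can be sketched as follows; the values used (a minimum source voltage of 44 V and an assumed worst-case loop resistance of 20 ohms) are those commonly quoted for 802.3af and should be treated as illustrative.

    V_MIN = 44.0    # minimum PSE output voltage, volts (assumed)
    I_MAX = 0.35    # maximum continuous current, amps
    R_LOOP = 20.0   # assumed worst-case cable loop resistance, ohms

    p_source = V_MIN * I_MAX        # 15.4 W leaving the power source
    p_loss = I_MAX**2 * R_LOOP      # about 2.45 W dissipated in the cable
    p_device = p_source - p_loss    # about 12.95 W left at the device

    print(f"{p_source:.2f} W - {p_loss:.2f} W = {p_device:.2f} W")
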
A standard Cat 5 cable has four twisted pairs, of which only two are used for data in 10Base-T and 100Base-T systems. The standard therefore allows for two options for Power over Ethernet: one uses the spare twisted pairs (often termed Mode B), while the second option uses the pairs carrying the data (Mode A). Only one option may be used on a given link, not both.
When using the spare twisted pairs for the supply, the pair on pins 4 and 5 is connected together and normally used for the positive supply, while the pair on pins 7 and 8 is connected together for the negative supply. While this is the standard polarity, the specification actually allows for either polarity to be used.
When the pairs carrying the data are employed, it is possible to apply DC power to the centre taps of the isolation transformers that terminate the data wires without disrupting the data transfer. In this mode of operation the pair on pins 3 and 6 and the pair on pins 1 and 2 can be of either polarity.
As the supply reaching the powered device can be of either polarity a full wave rectifier (bridge
rectifier) is used to ensure that the device consuming the power receives the correct polarity
power.
Within the 802.3af standard two types of device are described:
• Power Sourcing Equipment, PSE This is the equipment that supplies power to the
Ethernet cable.
• Powered Devices, PD This is equipment that interfaces to the Ethernet cable and is powered by the supply on the cable. These devices may range from switches and hubs to other items including webcams, etc.

Power Sourcing Equipment, PSE


This needs to provide a number of functions apart from simply supplying the power over the
Ethernet system. The PSE obviously needs to ensure that no damage is possible to any equipment
that may be present on the Ethernet system. The PSE first looks for devices that comply with the
IEEE 802.3af specification. This is achieved by applying a small current-limited voltage to the
cable. The PSE then checks for the presence of a 25k ohm resistor in the remote device. If this
load or resistor is detected, then the 48V is applied to the cable, but it is still current-limited to
prevent damage to cables and equipment under fault conditions.
The PSE will continue to supply power until the Powered Device (PD) is removed, or the PD stops
drawing its minimum current.
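The detection logic can be summarised in a short sketch. The PSE applies two small probe voltages, measures the resulting currents and computes the slope resistance; a valid powered device presents the 25 kilohm signature. The acceptance window used below (roughly 19 to 26.5 kilohms) is the commonly quoted range, and the function names are illustrative.

    VALID_SIGNATURE_OHMS = (19_000, 26_500)  # commonly quoted window

    def signature_resistance(v1, i1, v2, i2):
        """Slope resistance from two probe points (volts, amps)."""
        return (v2 - v1) / (i2 - i1)

    def pd_detected(v1, i1, v2, i2):
        r = signature_resistance(v1, i1, v2, i2)
        return VALID_SIGNATURE_OHMS[0] <= r <= VALID_SIGNATURE_OHMS[1]

    # A device presenting the 25 kilohm signature resistor is accepted:
    print(pd_detected(4.0, 4.0 / 25_000, 8.0, 8.0 / 25_000))  # True
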

Powered Device, PD
The powered device must be able to operate within the confines of the Power over Ethernet specification. It receives a nominal 48 volts from the cable, and must be able to accept power delivered by either option, i.e. over either the spare or the data pairs. Additionally, the 48 volts supplied is too high to operate the electronics directly, and accordingly an isolated DC-DC converter is used to transform the 48 V to a lower voltage. This also enables the 1500 V isolation required for safety reasons to be provided.

PoE Summary
Power over Ethernet, PoE, defined in IEEE 802.3af together with the enhancements of IEEE 802.3at, provides a particularly valuable means of remotely supplying power to equipment connected to an Ethernet network or system. PoE enables units to be powered in situations where it may not be convenient to run in a new power supply for the unit. While there are limitations to the power that can be supplied, the intention is that only small units are likely to need powering in this way. Larger units can be powered using more conventional means.

Fibre optic communications tutorial


Fibre optic communication has revolutionised the telecommunications industry, and it has also made its presence widely felt within the data networking community. Using fibre optic cable, optical communications have enabled telecommunications links to be made over much greater distances and with much lower levels of loss in the transmission medium and, possibly most important of all, fibre optic communications has enabled much higher data rates to be accommodated.
As a result of these advantages, fibre optic communications systems are widely employed for
applications ranging from major telecommunications backbone infrastructure to Ethernet systems,
broadband distribution, and general data networking.

Development of fibre optics


Since the earliest days of telecommunications there has been an ever increasing need to transmit more data even faster. Initially single line wires were used. These gave way to coaxial cables that enabled several channels to be transmitted over the same cable. However these systems were limited in bandwidth and optical systems were investigated.
Optical communications became a possibility after the first lasers were developed in the 1960s.
The next piece of the jigsaw fell into place when the first optical fibers with a sufficiently low loss
for communications purposes were developed in the 1970s. Then, during the late 1970s a
considerable amount of research was undertaken. This resulted in the installation of the first
optical fibre telecommunications system. It ran over a distance of 45 km, used a wavelength of 0.85 µm and had a data rate of just 45 Mbps - a fraction of what is possible today.
Since then, considerable improvements have been made in the technology. Data rates have improved and in addition the performance of the optical fibre has been improved to enable much greater distances to be achieved between repeaters. As an indication of this, the speeds that can now be achieved through a fibre optic system exceed 10 Tbps.
When the first fibre optic transmission systems were being developed, it was thought that the fibre
optic cabling and technology would be prohibitively expensive. However, this has not been the
case and costs have fallen to the extent that fibre optics now provides the only viable option for
many telecommunications applications. In addition to this it is also used in many local area
networks where speed is a major requirement.

Advantages of fibre optics


There are a number of compelling reasons that lead to the widespread adoption of fibre optic
cabling for telecommunications applications:

• Much lower levels of signal attenuation
• Fibre optic cabling provides a much higher bandwidth allowing more data to be delivered
• Fibre optic cables are much lighter than the coaxial cables that might otherwise be used
• Fibre optics do not suffer from the stray interference pickup that occurs with coaxial cabling

Fibre optic transmission system


Any fibre optic data transmission system will comprise a number of different elements. There are
three major elements (marked in bold), and a further one that is vital for practical systems:

• Transmitter (light source)
• Fibre optic cable
• Optical repeater
• Receiver (detector)

The different elements of the system will vary according to the application. Systems used for lower
capacity links, possibly for local area networks will employ somewhat different techniques and
components to those used by network providers that provide extremely high data rates over long
distances. Nevertheless the basic principles are the same whatever the system.
In the system the transmitter, or light source, generates a light stream modulated to enable it to carry the data. Conventionally a pulse of light indicates a "1" and the absence of light indicates a "0". This light is transmitted down a very thin fibre of glass or other suitable material to be presented at the receiver or detector. The detector converts the pulses of light into equivalent electrical pulses. In this way the data can be transmitted as light over great distances.
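This simple on-off keying can be expressed in a few lines; the sketch below is purely illustrative, with hypothetical function names.

    def modulate(bits):
        """Map a bit string to light levels: 1 = pulse of light, 0 = none."""
        return [1 if b == "1" else 0 for b in bits]

    def demodulate(levels, threshold=0.5):
        """Receiver decision: compare detected power with a threshold."""
        return "".join("1" if p > threshold else "0" for p in levels)

    data = "10110001"
    assert demodulate(modulate(data)) == data
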

Fibre optic transmitter


Although the original telecommunications fibre optic systems would have used large lasers, today
a variety of semiconductor devices can be used. The most commonly used devices are light
emitting diodes, LEDs, and semiconductor laser diodes.
The simplest transmitter device is the LED. Its main advantage is that it is cheap, and this makes
it ideal for low cost applications where only short runs are needed. However they have a number
of drawbacks. The first is that they offer a very low level of efficiency. Only about 1% of the input
power enters the optical fibre, and this means that high power drivers would be needed to provide
sufficient light to enable long distance transmissions to be made. The other disadvantage of LEDs
is that they produce what is termed incoherent light that covers a relatively wide spectrum.
Typically the spectral width is between 30 and 60 nm. This means that any chromatic dispersion in
the fibre will limit the bandwidth of the system.
In view of their performance, LEDs are used mainly in local-area-network applications where the
data rates are typically in the range 10-100 Mb/s and transmission distances are a few kilometres.
Where higher levels of performance are required, i.e. it is necessary that the fibre optic link can
operate over greater distances and with higher data rates, then lasers are used. Although more
costly, they offer some significant advantages. In the first instance they are able to provide a
higher output level, and in addition to this the light output is directional and this enables a much
higher level of efficiency in the transfer of the light into the fibre optic cable. Typically the coupling
efficiency into a single mode fibre may be as high as 50%. A further advantage is that lasers have a very narrow spectral bandwidth as a result of the fact that they produce coherent light. This narrow spectral width enables the lasers to transmit data at much higher rates because chromatic dispersion is less apparent. Another advantage is that semiconductor lasers can be modulated directly at high frequencies because of the short recombination time for the carriers within the semiconductor material.
Laser diodes are often directly modulated. This provides a very simple and effective method of transferring the data onto the optical signal, achieved by controlling the current applied directly to the device. This in turn varies the light output from the laser. However for very high data rates
or very long distance links, it is more effective to run the laser at a constant output level
(continuous wave). The light is then modulated using an external device. The advantage of using
an external means of modulation is that it increases the maximum link distance because an effect
known as laser chirp is eliminated. This chirp broadens the spectrum of the light signal and this
increases the chromatic dispersion in the fibre optic cable.

Fibre optic cable


The full details and description of fibre optic cabling are found in a separate article / tutorial on this
area of the website. In essence a fibre optic cable consists of core, around which is another layer
referred to as the cladding. Outside of this there is a protective outer coating.
The fibre optic cables operate because their cladding has a refractive index that is slightly lower
than that of the core. This means that light passing down the core undergoes total internal
reflection when it reaches the core / cladding boundary, and it is thereby contained within the core
of the optical fibre.

Repeaters and amplifiers


There is a maximum distance over which signals may be transmitted over fibre optic cabling. This
is limited not only by the attenuation of the cable, but also the distortion of the light signal along
the cable. In order to overcome these effects and transmit the signals over longer distances,
repeaters and amplifiers are used.
Opto-electric repeaters may be used. These devices convert the optical signal into an electrical format where it can be processed to ensure that the signal is not distorted, and it is then converted back into the optical format. It may then be transmitted along the next stage of the fibre optic cable.
An alternative approach is to use an optical amplifier. These amplifiers directly amplify the optical signal without the need to convert it back into an electrical format. The amplifiers consist of a length of fibre optic cable that is doped with the rare earth element erbium. The treated fibre is then illuminated or pumped with light of a shorter wavelength from another laser and this serves to amplify the signal that is being carried.
In view of the much reduced cost of fibre optic amplifiers over repeaters, amplifiers are far more
widely used. Most repeaters have been replaced, and amplifiers are used in virtually all new
installations these days.

Receivers
Light travelling along a fibre optic cable needs to be converted into an electrical signal so that it
can be processed and the data that is carried can be extracted. The component that is at the heart
of the receiver is a photo-detector. This is normally a semiconductor device and may be a p-n
junction, a p-i-n photo-diode or an avalanche photo-diode. Photo-transistors are not used because
they do not have sufficient speed.
Once the optical signal from the fibre optic cable has been applied to the photo-detector and converted into an electrical format, it can be processed to recover the data, which can then be passed to its final destination.

Summary
Fibre optic transmission of data is generally used for long distance telecommunications network
links and for high speed local area networks. Currently fibre optics is not widely used for the delivery of services to homes, although this is a long term aim for many telcos. By using optical fibre cabling here, the available bandwidth for new services would be considerably higher and the possibility of greater revenues would increase. Currently the cost of this is not viable, although it is likely to happen in the medium term.

Optical fibre cable tutorial


In recent years the cost of optical fibres and fibre optic cabling has fallen, bringing it within the economic reach of many more telecommunications and data networking applications. As a result fibre optics are now in widespread use, and form the backbone of most telecommunications networks and many local area data networks.
While there are many components used in building up a fibre optic link, the fibre optic cabling is
obviously the key element.

Optical fibre construction


Fibre optic technology relies on the fact that it is possible to send a light beam along a suitably constructed thin fibre. A fibre optic cable consists of a glass or silica core. The core of the optical fibre is surrounded by a similar material, i.e. glass or silica, called the cladding, that has a refractive index slightly lower than that of the core. Because the cladding has the lower refractive index, light passing down the core undergoes total internal reflection when it meets the core / cladding boundary, and it is thereby contained within the core of the optical fibre.
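The condition for total internal reflection follows directly from the two refractive indices. A short sketch, using typical silica values (assumptions for illustration), computes the critical angle and the numerical aperture, the sine of the largest acceptance half-angle at the fibre end face:

    import math

    n_core, n_clad = 1.48, 1.46   # typical silica values, for illustration

    # Light meeting the boundary at more than the critical angle
    # (measured from the normal) is totally internally reflected.
    critical_angle = math.degrees(math.asin(n_clad / n_core))

    # Numerical aperture of the fibre end face.
    numerical_aperture = math.sqrt(n_core**2 - n_clad**2)

    print(f"critical angle {critical_angle:.1f} deg, NA {numerical_aperture:.3f}")
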
Outside the cladding a plastic jacket is placed. This is used to provide protection to the optical fibre itself. In addition, optical fibres are usually grouped together in bundles and these are protected by an overall outer sheath. This not only provides further protection but also serves to keep the optical fibres together.

Optical fibre types


There is a variety of different types of fibre optic cable that can be used, and there are a number
of ways in which types may be differentiated. There are two major categories:

• Step index fibre optic cabling
• Graded index fibre optic cabling

Step index cable refers to cable in which there is a step change in the refractive index between the core and the cladding; this type is the more commonly used. In the other type, as the name indicates, the refractive index changes gradually across the diameter of the fibre. With this type of cable the light is refracted back towards the centre of the cable.
Optical fibres or optical fibers can also be split into single mode fibre, and multimode fibre.
Mention of both single mode fiber and multi-mode fiber is often seen in the literature.
Single mode fibre This form of optical fibre is the type that is virtually exclusively used these days. It is found that if the diameter of the optical fibre is reduced to a few wavelengths of light, then the light can only propagate in a straight line and does not bounce from side to side of the fibre. As the light can only travel in this single mode, this type of cable is called single mode fibre. Typically single mode fibre cores are around eight to ten microns in diameter, much smaller than a hair.
Single mode fibre does not suffer from multi-modal dispersion and this means that it has a much wider bandwidth. The main limitation to the bandwidth is what is termed chromatic dispersion, where different colours, i.e. wavelengths, propagate at different speeds. Chromatic dispersion occurs within the core of the fibre itself. It is found to be negative for short wavelengths, changing to positive at longer wavelengths. As a result there is a wavelength for single mode fibre where the dispersion is zero. This generally occurs at a wavelength of around 1310 nm, and this is one reason why this wavelength is widely used.
The disadvantage of single mode fibre is that it requires tight manufacturing tolerances, and this increases its cost. Against this, the superior performance it offers, especially for long runs, means that much development of single mode fibre has been undertaken to reduce the costs.
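The effect of chromatic dispersion on a pulse can be estimated with the usual rule delta_t = D x L x delta_lambda. The coefficient below (around 17 ps/(nm km), typical for standard single mode fibre at 1550 nm, and falling through zero near 1310 nm) is an assumption for illustration:

    D_PS_PER_NM_KM = 17.0    # assumed dispersion coefficient at 1550 nm
    LENGTH_KM = 100.0        # link length
    SPECTRAL_WIDTH_NM = 0.1  # source spectral width for a narrow laser

    pulse_spread_ps = D_PS_PER_NM_KM * LENGTH_KM * SPECTRAL_WIDTH_NM
    print(f"pulse spreading ~{pulse_spread_ps:.0f} ps over {LENGTH_KM:.0f} km")
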
Multimode fibre This form of fibre has a greater diameter than single mode fibre, being typically around 50 microns in diameter, and this makes it easier to manufacture than the single mode fibres.
Multimode optical fibre has a number of advantages. As it has a wider diameter than single mode fibre it can capture light from the light source and pass it to the receiver with a high level of efficiency. As a result it can be used with low cost light emitting diodes. In addition, the greater diameter means that high precision connectors are not required. However this form of optical fibre cabling suffers from a higher level of loss than single mode fibre, and in view of this its use is more costly than might be expected at first sight. It also suffers from modal dispersion and this severely limits the usable bandwidth. As a result it has not been widely used for long distance links since the mid 1980s; single mode fibre cable is the preferred type.

Attenuation within an optical fibre


Although fibre optic cables offer a far superior performance to that which can be achieved with
other forms of cable, they nevertheless suffer from some levels of attenuation. This is caused by
several effects:

• Loss associated with the impurities There will always be some level of impurity in
the core of the optical fibre. This will cause some absorption of the light within the fibre.
One major impurity is water that remains in the fibre.
• Loss associated with the cladding When light reflects off the interface between the cladding and the core, the light actually travels a small distance into the cladding before being reflected back. This process causes a small but significant level of loss and is one of the main contributors to the overall attenuation of a signal along a fibre optic cable.
• Loss associated with the wavelength It is found that the level of signal attenuation in the optical fibre depends on the wavelength used. The level increases at certain wavelengths as a result of certain impurities.

Despite the fact that attenuation is an issue, it is nevertheless possible to transmit data along
single mode fibres for considerable distances. Lines carrying data rates up to 50 Gbps are able to
cover distances of 100 km without the need for amplification.
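A simple loss-limited link budget illustrates why such distances are possible. All figures below are assumptions for illustration: a fibre loss of 0.2 dB/km (typical for single mode fibre at 1550 nm), 0 dBm launched power, a -28 dBm receiver sensitivity and a 3 dB margin for splices and ageing.

    TX_POWER_DBM = 0.0            # 1 mW launched into the fibre
    RX_SENSITIVITY_DBM = -28.0    # assumed receiver sensitivity
    FIBRE_LOSS_DB_PER_KM = 0.2    # typical single mode loss at 1550 nm
    MARGIN_DB = 3.0               # allowance for splices and ageing

    max_span_km = (TX_POWER_DBM - RX_SENSITIVITY_DBM - MARGIN_DB) \
                  / FIBRE_LOSS_DB_PER_KM
    print(f"loss-limited span ~{max_span_km:.0f} km")   # -> ~125 km
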

Materials used for optical fibres


There are two main types of material used for optical fibres. These are glass and plastic. They offer
widely different characteristics and therefore fibres made from the two different substances find
uses in very different applications.

Optical fibre sizes


One of the major ways of specifying optical fibre cables is by the diameters of the inner core and
the external cladding. As may be expected there are industry standards for these and this helps in
reducing the variety of fittings needed for connectors, splices and the tools needed for fitting.
The standard for most optical fibres is 125 microns (µm) for the cladding and 245 microns (µm) for the outer protective coating. Multimode optical fibres have core sizes of either 50 or 62.5 microns, whereas the standard for single mode fibres is approximately 8 to 10 microns.
When specifying optical fibre cables, the diameters usually form the major part of the cable
specification. A multimode fibre with a core diameter of 50 microns and a cladding diameter of 125
microns would be referred to as a 50/125 fibre.
In addition to the specification of the diameter, other parameters such as the loss, etc are also
required, but these elements do not form part of the cable type in the same way as the diameter.
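Reading a designation such as 50/125 is straightforward, as the sketch below shows; the core-size threshold used here to label a fibre single mode is a rough heuristic, not part of any standard.

    def parse_fibre_spec(spec):
        """Split a 'core/cladding' designation such as '50/125' (microns)."""
        core, cladding = (float(x) for x in spec.split("/"))
        kind = "single mode" if core <= 10 else "multimode"  # rough heuristic
        return {"core_um": core, "cladding_um": cladding, "type": kind}

    print(parse_fibre_spec("50/125"))   # multimode
    print(parse_fibre_spec("9/125"))    # single mode
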

Fibre optic connectors


There are many occasions when it is necessary to connect a fibre optic cable to another item. It may be that the fibre optic cable needs to be connected to another cable, or to an electronic interface device where the optical signal is converted to an electrical signal, or to a light source. In each case the fibre optic cable must be correctly interfaced so that the minimum amount of light is lost, and to achieve this the correct form of fibre optic connector must be used.
While fibre optic connectors offer a very convenient method of connecting fibre optic cables, they
should only be used where necessary. They introduce a loss at each connection. Typically the
value is between 10 and 20 percent. Against this they make reconfiguring systems very much
easier.
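Optical losses are normally quoted in decibels, and the 10 to 20 percent figure above converts as in the sketch below: a 10% loss is about 0.46 dB and a 20% loss is about 0.97 dB.

    import math

    def fractional_loss_to_db(loss_fraction):
        """Convert a fractional power loss (0.1 = 10%) to decibels."""
        return -10 * math.log10(1 - loss_fraction)

    for loss in (0.10, 0.20):
        print(f"{loss:.0%} -> {fractional_loss_to_db(loss):.2f} dB")
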

Connector basics
The fibre optic connector basically consists of a rigid cylindrical barrel surrounded by a sleeve. The barrel provides the mechanical means by which the connector is held in place with the mating half. A variety of methods are used to ensure the connector is held in place, ranging from screw fits to latch arrangements. The main requirement is that the end of the fibre optic cable is held accurately in place so that the maximum light transfer occurs.
As it is imperative that the optical fibre is held securely and accurately in place, connectors will normally be designed so that the fibre is glued in place, and in addition strain relief is also provided.
Fibre ends may also be polished. For single mode fibre, the ends may be polished with a slight
convex curvature so that the centres of the cables from the two connectors achieve physical
contact. This approach reduces the back reflections, although the level of loss may be slightly
higher.

Fibre optic connector types


Fibre optic connectors (fiber optic connectors) come in a variety of formats. These different fibre
optic connectors may be used in slightly different applications or under different circumstances, as
each type has its own capabilities.
When choosing a fibre optic connector, it is necessary to ensure that its properties meet the needs
of the particular application in question. Some fibre optic connectors may be suitable for different
optical fibres, and this needs to be taken into consideration.
There is a wide variety of different fiber optic connectors available. A selection of some is given
below:

• FC/PC This form of fibre optic connector is used for single-mode fiber optic cable. It
provides very accurate positioning of the single-mode fiber optic cable with respect to
transmitter (optical source) or the receiver (optical detector).
• SC This form of connector is mainly used with single-mode fibre optic cables. The connector is simple, low cost and reliable. Location and alignment are provided by a ceramic ferrule. It also has a locking tab to enable it to be mated and removed without fear of it accidentally falling loose.
• Plastic fiber optic cable connectors As the name implies, these fibre optic cable
connectors are only used with plastic fibre optic cabling.

Fibre optic splicing


Rather than using optical fibre connectors, it is possible to splice two optical fibres together. A fibre optic splice is defined by the fact that it gives a permanent or relatively permanent connection between two fibre optic cables. That said, some manufacturers do offer fibre optic splices that can be disconnected, but nevertheless these are not intended for repeated connection and disconnection.
There are many occasions when fibre optic splices are needed. One of the most common occurs
when a fibre optic cable that is available is not sufficiently long for the required run. In this case it
is possible to splice together two cables to make a permanent connection. As fibre optic cables are
generally only manufactured in lengths up to about 5 km, when lengths of 10 km are required, for
example, then it is necessary to splice two lengths together.
Fibre optic splices can be undertaken in two ways:

• Mechanical splices
• Fusion splices

Mechanical splices are normally used when splices need to be made quickly and easily. To undertake a mechanical fibre optic splice it is necessary to strip back the outer protective layer on the fibre optic cable, clean it and then perform a precision cleave or cut. When cleaving (cutting) the fibre optic cable it is necessary to obtain a very clean cut, and one in which the cut face of the fibre is exactly at right angles to the axis of the fibre.
Once cut the ends of the fibres to be spliced are placed into a precision made sleeve. They are
accurately aligned to maximise the level of light transmission and then they are clamped in place.
A clear, index matching gel may sometimes be used to enhance the light transmission across the
joint.
Mechanical fibre optic splices can take as little as five minutes to make, although the level of light loss is around ten percent. However this level is still better than that which can be obtained using a connector.
Fusion splices form the other type of fibre optic splice that can be made. This type of connection is made by fusing or melting the two ends together, using an electric arc to weld the two fibre optic cables together, and it requires specialised equipment to perform the splice.
The protective coating is first removed from the ends of the fibres to be spliced. The ends of the fibre optic cable are then cut, or to give the correct term, cleaved with a precision cleaver to ensure that the cuts are exactly perpendicular. The next stage involves placing the two optical fibres into a holder in the fibre optic splicer. First the ends of the cable are inspected using a magnifying viewer. Then the ends of the fibre are automatically aligned within the fibre optic splicer, and the area to be spliced is cleaned of any dust, often by a process using small electrical sparks. Once complete, the fibre optic splicer uses a much larger spark to raise the temperature of the glass in the optical fibre above its melting point, thereby allowing the two ends to fuse together. The location of the spark and the energy it contains are very closely controlled so that the molten core and cladding do not mix, ensuring that any light loss in the fibre optic splice is minimised.
Once the fibre optic splice has been made, an estimate of the loss is made by the fibre optic
splicer. This is achieved by directing light through the cladding on one side and measuring the light
leaking from the cladding on the other side of the splice.
The equipment that performs these splices provides computer controlled alignment of the optical fibres and is able to achieve very low levels of loss, possibly a quarter of the levels of mechanical splices. However this comes at a price, as fusion splicers are very expensive.

Mechanical and fusion splices


The two types of fibre optic splices are used in different applications. The mechanical ones are
used for applications where splices need to be made very quickly and where the expensive
equipment for fusion splices may not be available. Some of the sleeves for mechanical fibre optic
splices are advertised as allowing connection and disconnection. In this way a mechanical splice
may be used in applications where the splice may be less permanent.
Fusion splices offer a lower level of loss and a high degree of permanence. However they require the use of the expensive fusion splicing equipment. In view of this they tend to be used more for long, high data rate lines that are unlikely to be changed once installed.

Fibre optic transmitters


In order that data can be carried along a fibre optic cable, it is necessary to have a light source or
optical transmitter. This fibre optic transmitter is one of the key elements of any fibre optic
communications system and the choice of the correct one will depend upon the particular
application that is envisaged.
Fibre optic transmitter choices
There is a variety of different aspects to any fibre optic transmitter. For any application, the
different specifications need to be examined to ensure that the particular fibre optic transmitter
will meet the requirements.
One of the major aspects of any fibre optic transmitter is its power level. It is obvious that the fibre optic transmitter should have a sufficiently high level of light output for the light to be transmitted along the fibre optic cable to the far end. Some fibre optic cable runs may only be a few metres or tens of metres long, whereas others may extend for many kilometres. In the case of the long lengths, the power of the fibre optic transmitter is of great importance.
The type of light produced is also important. Light can be split into two categories, namely
coherent and incoherent light. Essentially, coherent light has a single frequency, whereas
incoherent light contains a wide variety of light packets all containing different frequencies, i.e.
there is no single frequency present. While some emitters may appear to emit a single colour, they
can still be incoherent because the light output is centred around a given frequency or wavelength.
The frequency or wavelength of the light can also be important, as fibre optic systems are normally designed to operate around a given wavelength, and this is typically quoted in the specification.
It is also necessary to consider the rate at which the transmitter can be modulated as this affects
the data rate for the overall transmission. In some instances low rate systems may only need to
carry data at a rate of a few Mbps, whereas main telecommunications links need to transmit data
at many Gbps.

Types of fibre optic transmitter


There are two main types of fibre optic transmitter that are in use today. Both of them are based
around semiconductor technology:

• Light emitting diodes (LEDs)
• Laser diodes

Semiconductor optical transmitters have many advantages. They are small, convenient, and
reliable. However, the two different types of fibre optic transmitter have very different properties
and they tend to be used in widely different applications.
LED transmitters These fibre optic transmitters are cheap and reliable. They emit only incoherent light with a relatively wide spectrum as a result of the fact that the light is generated by a method known as spontaneous emission. A typical LED used for optical communications may have a spectral width in the range 30 - 60 nm. In view of this the signal will be subject to chromatic dispersion, and this will limit the distances over which data can be transmitted.
It is also found that the light emitted from an LED is not particularly directional and this means that it is only possible to couple it to multimode fibre, and even then the overall efficiency is low because not all the light can be coupled into the fibre optic cable.
LEDs have significant advantages as fibre optic transmitters in terms of cost, lifetime, and
availability. They are widely produced and the technology to manufacture them is straightforward
and as a result costs are low.
Laser diode transmitters These fibre optic transmitters are more expensive and tend to be
used for telecommunications links where the cost sensitivity is nowhere near as great.
The output from a laser diode is generally higher than that available from a LED, although the
power of LEDs is increasing. Often the light output from a laser diode can be in the region of 100
mW. The light generation arises from what is termed stimulated emission and this generates
coherent light. In addition the output is more directional than that of an LED and this enables much greater levels of coupling efficiency into the fibre optic cable. This also allows the use of single mode fibre, which enables much greater transmission distances to be achieved. A further advantage of the coherent output of a laser is that the light is nominally on a single frequency, so chromatic dispersion is considerably less.
Lasers also have the advantage that they can be directly modulated at high data rates. Although LEDs can be modulated directly, their maximum modulation rate is much lower.
Nevertheless laser diode fibre optic transmitters have some drawbacks. They are much more
expensive than LEDs. Furthermore they are quite sensitive to temperature and to obtain the
optimum performance they need to be in a stable environment. They also do not offer the same
life as LEDs, although as much research has been undertaken into laser diode technology, this is
much less of an issue than previously.
Fibre optic transmitter summary
In view of the different characteristics that LED and laser diode fibre optic transmitters possess, they are used in different applications. The table below summarises some of the chief characteristics of the two devices.

CHARACTERISTIC           LED              LASER DIODE
Cost                     Low              High
Data rate                Low              High
Distance                 Short            Long
Fibre type               Multimode fibre  Multimode and single mode fibre
Lifetime                 High             Low
Temperature sensitivity  Minor            Significant
LEDs tend to be used for the more cost sensitive applications and ones where lower data rates and
shorter distances are required. Local area networks with speeds up to a maximum of 100 Mbps
and distances up to a kilometre or so represent the upper limits. Long distance
telecommunications fibre optic links with Gbps data rates require the use of the more expensive
laser diode fibre optic transmitters.

Fibre optic receiver


Once data has been transmitted across a fibre optic cable, it is necessary for it to be received and converted into electrical signals so that it can be processed and distributed to its final destination. The fibre optic receiver is the essential component in this process as it performs the actual reception of the optical signal and converts it into electrical pulses. Within the fibre optic receiver, the photo-detector is the key element.
The photo-detector is normally a semiconductor device in the form of a photo-diode: a p-n photodiode, a p-i-n photodiode, or an avalanche photodiode. Metal-semiconductor-metal (MSM) photodetectors are also used in fibre optic receivers on occasions.

Overall receiver
Although the photo-detector is the major element in the fibre optic receiver, there are other elements in the whole unit. Once the light has been received by the fibre optic receiver and converted into electronic pulses, the signals are processed by the electronics in the receiver. Typically these will include various forms of amplification, including a limiting amplifier. These serve to generate a suitable square wave that can then be processed in any logic circuitry that may be required.
Once in a suitable digital format the received signal may undergo further signal processing in the form of clock recovery, etc. This will be undertaken before the data from the fibre optic receiver is passed on.

Diode performance
One of the keys to the performance of the overall fibre optic receiver is the photodiode itself. The
response times of the diodes govern the speed of the data that can be recovered. Although
avalanche diodes provide high speed they are also more noisy and require a sufficiently high level
of signal to overcome this.
The most common type of diode used is the p-i-n diode. This type of diode gives a greater level of
conversion than a straight p-n diode as the light is converted into carriers in the region at the
junction, i.e. between the p and n regions. The presence of the intrinsic region increases this area
and hence the area in which light is converted.

IMS, IP Multimedia Subsystem


IMS, or IP Multimedia Subsystem, is having a major impact on the telecommunications industry, both wired and wireless. Although IMS was originally created for mobile applications by 3GPP and 3GPP2, its use is more widespread as fixed line providers are also being forced to find ways of integrating mobile or mobile associated technologies into their portfolios. As a result the use of IMS, IP Multimedia Subsystem, is crossing the frontiers of mobile, wireless and fixed line technologies. Indeed there is very little within IMS that is wireless or mobile specific, and as a result there are no barriers to its use in any telecommunications environment.
IMS, IP multimedia subsystem, itself is not a technology, but rather it is an architecture. It is
based on Internet standards which are currently the major way to deliver services on new
networks. However one of the key enablers for the architecture is the Session Initiation Protocol
(SIP), a protocol that has been devised for establishing, managing and terminating sessions on IP
networks. The overall IMS architecture uses a number of components to enable multimedia based
sessions between two or more end devices.
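To give a flavour of the signalling, the sketch below assembles a minimal SIP INVITE of the kind used to establish a session; the addresses, tags and helper function are hypothetical and many mandatory details are omitted.

    def build_invite(caller, callee, call_id):
        """Assemble a bare-bones SIP INVITE request (illustrative only)."""
        return "\r\n".join([
            f"INVITE sip:{callee} SIP/2.0",
            "Via: SIP/2.0/UDP host.example.com;branch=z9hG4bK776asdhds",
            "Max-Forwards: 70",
            f"To: <sip:{callee}>",
            f"From: <sip:{caller}>;tag=1928301774",
            f"Call-ID: {call_id}",
            "CSeq: 314159 INVITE",
            "Content-Length: 0",
            "",
            "",
        ])

    print(build_invite("alice@example.com", "bob@example.com", "a84b4c76e66710"))
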
One of the elements is a presence server that handles the user status, and this is a key element
for applications such as Push to talk over Cellular (PoC) where the presence, or user status is key
to enabling one user to be able to talk to another.
With users now needing to activate many sessions using different applications and often
concurrently, IMS provides a common IP interface so that signalling, traffic, and application
development are greatly simplified. In addition to this an IMS architecture means that subscribers
can connect to a network using multiple mobile and fixed devices and technologies. With a variety
of new applications from Push to talk over Cellular (PoC), gaming, video and more becoming
available, it will be necessary to be able to integrate them seamlessly for users to be able to gain
the most from these new applications.
IMS also has advantages for operators. Apart from enabling them to maximise their revenues, functions including billing and "access approval" can be unified across the applications on the network, thereby considerably simplifying this area.

IMS, IP Multimedia Subsystem architecture


IMS provides a unified architecture which can be divided into three layers:

• Transport and Endpoint Layer
• Session Control Layer
• Application Server Layer

IMS Transport and Endpoint Layer


This layer initiates and terminates the SIP signalling, setting up sessions and providing bearer
services including the conversion from analogue or digital formats to packets. It also provides the
media gateways for converting the VoIP data to the PSTN TDM format.

IMS Session Control Layer


This layer contains what is termed the Call Session Control Function (CSCF), which provides the endpoints for registration and the routing of the SIP signalling messages, enabling them to be routed to the correct application servers. The CSCF also enables QoS to be guaranteed. It achieves this by communicating with the transport and endpoint layer.
The layer also includes other elements such as the Home Subscriber Server (HSS), which maintains the user profiles including their registration details as well as preferences and the like. It includes the presence server essential to many interactive applications such as PoC. A further element of the Session Control Layer is the Media Gateway Control Function.

Application Server Layer


The control of the end services required by the user is undertaken by the Application Server Layer. The IMS architecture and SIP signalling have been designed to be flexible, and in this way it is possible to support a variety of telephony and non-telephony servers concurrently. A wide variety of different servers is supported within this layer, including a Telephony Application Server (TAS), IP Multimedia - Service Switching Function (IM-SSF), Supplemental Telephony Application Server, Non-Telephony Application Server, Open Service Access - Gateway (OSA-GW), etc.
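To make the three-layer split more concrete, the sketch below models it with plain Python classes. All class and method names here are invented for illustration; they do not correspond to any IMS product or 3GPP-defined interface:

# Toy model of the three IMS layers; names are invented for this sketch.

class TransportLayer:
    """Terminates SIP signalling and converts media to and from packets."""
    def send(self, sip_message, session_layer):
        session_layer.route(sip_message)

class SessionControlLayer:
    """Plays the CSCF role: routes SIP messages to application servers."""
    def __init__(self, app_servers):
        self.app_servers = app_servers
    def route(self, sip_message):
        for server in self.app_servers:
            if server.handles(sip_message):
                return server.execute(sip_message)
        raise ValueError("no application server for this request")

class TelephonyAppServer:
    """Application server layer: implements the end service itself."""
    def handles(self, sip_message):
        return sip_message.startswith("INVITE")
    def execute(self, sip_message):
        print("setting up call to:", sip_message.split()[1])

session = SessionControlLayer([TelephonyAppServer()])
TransportLayer().send("INVITE sip:bob@example.com SIP/2.0", session)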

Summary
The telecommunications industry as a whole is turning to IP based transport of data along with the introduction of new multimedia services. To enable this, telecommunications networks, whether fixed, cellular or wireless, will need to be far more flexible, and implementing IMS, the IP Multimedia Subsystem, is a key step towards achieving that flexibility.

ISDN Tutorial
ISDN or Integrated Services Digital Network is an international standard for end to end digital transmission of voice, data and signalling. It operates over the ordinary copper pairs of the telecommunications network, providing higher data speeds and better quality than analogue transmission. The ISDN specifications provide a set of protocols that enable the set up, maintenance and completion of calls.
ISDN, Integrated Services Digital Network, provides a number of significant advantages over analogue systems:

• In its basic form it enables two telephone calls to be made simultaneously over the same line.
• Faster call connection: it typically takes around a second to make a connection rather than the much longer delays experienced using purely analogue based systems.
• Data can be sent faster and more reliably than with analogue systems.
• Noise, distortion, echoes and crosstalk are virtually eliminated.
• The digital stream can carry any form of data, from voice and faxes to internet web pages and data files - this gives rise to the name 'integrated services'.

ISDN Usage
ISDN is in use around the world, but with the introduction of ADSL it is facing strong competition. The technology never gained much market share in the USA, although it is used in other countries. In Japan it became reasonably popular in the late 1990s, although it is now in decline with the advent of ADSL. The system was also introduced in Europe, where providers such as BT, France Telecom and Deutsche Telekom introduced services.

ISDN Configurations
There are two types of channel found within ISDN: the 'B' and 'D' channels. The B or 'bearer' channels are used to carry the payload data, which may be voice and / or data, and the D or 'delta' channel is intended for signalling and control, although it may also be used for data under some circumstances.
Additionally there are two levels of ISDN access that may be provided. These are known as BRI
and PRI.
BRI (Basic Rate Interface) - This consists of two B channels, each of which provides a bandwidth of 64 kbps under most circumstances. One D channel with a bandwidth of 16 kbps is also provided. Together this configuration is often referred to as 2B+D.
The basic rate lines connect to the network using a standard twisted pair of copper wires. The data can then be transmitted simultaneously in both directions to provide full duplex operation. The data stream is carried as the two B channels mentioned above, each of which carries 64 kbps (8 kbytes per second). This data is interleaved with the D channel data, which is used for call management: setting up and clearing down calls, plus some additional data to maintain synchronisation and monitoring of the line.
The network end of the line is referred to as the 'Line Termination' (LT), while the user end acts as a termination for the network and is referred to as the 'Network Termination' (NT). Within Europe and Australia the NT physically exists as a small connection box, usually attached to a wall, and it converts the two wire line (U interface) coming in from the network to four wires (S/T interface or S bus). The S/T interface allows up to eight items of 'terminal equipment' to be connected, although only two may be used at any time. The terminal equipments may be telephones, computers, etc, and they are connected in what is termed a multipoint configuration. In Europe the ISDN line provides up to about 1 watt of power that enables the NT to be run, and also enables a basic ISDN phone to be used for emergency calls. In North America a slightly different approach may be adopted in that the terminal equipment may be directly connected to the network in a point to point configuration; this saves the cost of a network termination unit, but it restricts the flexibility. Additionally power is not normally provided.
PRI (Primary Rate Interface) - This configuration carries a greater number of channels than the Basic Rate Interface and has a D channel with a bandwidth of 64 kbps. The number of B channels varies according to the location. Within Europe and Australia a configuration of 30B+D has been adopted, providing an aggregate data rate of 2.048 Mbps (E1). For North America and Japan a configuration of 23B+D has been adopted, providing an aggregate data rate of 1.544 Mbps (T1).
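These aggregate figures can be reproduced directly from the channel structure, as the short worked example below shows (the framing allowances used are the standard E1 and T1 figures):

# Worked example: ISDN aggregate data rates from the channel structure.

B = 64_000    # one bearer channel, bits per second
D16 = 16_000  # basic rate D channel
D64 = 64_000  # primary rate D channel

bri = 2 * B + D16                # 2B+D = 144 kbps on the subscriber loop
e1_pri = 30 * B + D64 + 64_000   # 30B+D plus 64 kbps framing = 2.048 Mbps
t1_pri = 23 * B + D64 + 8_000    # 23B+D plus 8 kbps framing = 1.544 Mbps

print(bri, e1_pri, t1_pri)       # 144000 2048000 1544000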
The primary rate connections utilise four wires - a pair for each direction. They are normally 120
ohm balanced lines using twisted pair cable. Primary rate connections always use a point to point
configuration.
Primary rate lines are widely used to connect to Private Branch eXchanges (PBX) in an office, etc. Typically this may be used to provide a number of POTS (Plain Old Telephone System) or basic rate ISDN lines to the users.

Summary
Although ISDN has been overtaken by technologies such as ADSL, it is nevertheless still widely used in many areas, particularly where existing services need to be maintained or where compatibility needs to be guaranteed. As such it is still an important technology that will be encountered for many years to come.

Mobile IP tutorial
Mobile IP is becoming increasingly important. Mobile IP is required because high speed data and mobility are two key factors for today's wireless and telecommunications industry.
While high speed data is one issue, mobility is equally important. People need to take laptop computers with them and use them anywhere as if they were working from their home network. While it is possible to make connections reasonably easily, improvements are being put in place to ensure full mobility and ease of use. Accordingly Mobile IP is a key element in enabling this facility to become more robust and easier to use.
As infrastructures and standards are already in place for data transfer, it is necessary to adapt them to take account of mobility and introduce Mobile IP via an existing route rather than introducing completely new techniques. The most common services are the data services using the Internet Protocol (IP). When using this, a user, which may be any form of node or computer, is normally connected to a particular network or sub-network. Moving the computer from one network or sub-network to another creates problems because routing tables need to be updated to enable the data to reach the user at the new location.

Home operation
When connected to the base network, users are attached to their home network and all the routing tables needed to send the data to the required destination are set up for the computer in this location. Using their home network IP address they can move anywhere within this particular network with no problem.

Mobile IP foreign agent and foreign networks


It is becoming increasingly common for computers to need to operate in networks other than their home network, a trend driven by the mobility of laptops. The network a computer connects to will often not be its home network, but instead what is termed a foreign network. Under these circumstances it needs a method of connecting back to the home network so that data packets sent to the home network can be forwarded to the new location and vice versa.
Mobile IP achieves this using what is called a Foreign Agent (FA). Each network has its own foreign agent to enable mobile data operation. The foreign agent advertises its presence and services on the network, looking for any foreign users that may have attached to it. Once a foreign user is found it communicates with them to establish the information required to link to the home network.
Similarly on the home network there is an equivalent agent and this is naturally called the "Home
Agent" (HA). This Home Agent acts as what is termed a "proxy" for the mobile user. In other
words it takes the place of the home IP location and routes data to the foreign agent, allowing
communication with what is termed the "correspondent node" (CN).
In operation the foreign agent connects to the home agent once authentication is complete, and it uses what is termed an IP tunnel for communication. In this tunnel, IP packets are encapsulated within other IP packets to carry the data. In this way the computer is able to move around freely using Mobile IP, with data packets being routed via the home network.
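The encapsulation step can be pictured with a small toy model. The addresses and the dictionary packet representation below are invented purely for illustration; real Mobile IP tunnelling follows the IP-in-IP encapsulation rules defined in the relevant RFCs:

# Toy illustration of Mobile IP IP-in-IP tunnelling; all values invented.

HOME_ADDRESS = "192.0.2.10"        # mobile node's permanent address
CARE_OF_ADDRESS = "198.51.100.7"   # address at the foreign agent

def home_agent_encapsulate(inner_packet):
    # The home agent wraps the original packet in an outer IP header
    # addressed to the care-of address (the tunnel exit point).
    return {"src": "192.0.2.1", "dst": CARE_OF_ADDRESS, "payload": inner_packet}

def foreign_agent_decapsulate(outer_packet):
    # The foreign agent strips the outer header and delivers the
    # original packet to the mobile node on its current network.
    return outer_packet["payload"]

# A correspondent node sends to the home address as normal...
original = {"src": "203.0.113.5", "dst": HOME_ADDRESS, "payload": b"hello"}
# ...and the tunnel forwards it to wherever the mobile currently is.
delivered = foreign_agent_decapsulate(home_agent_encapsulate(original))
assert delivered == original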

Cell phone applications


With more data being transmitted over cellular networks there is a similar need for mobility within this arena as well, and accordingly mobile networks are also starting to employ Mobile IP. Work is well advanced on the CDMA2000 system used widely in North America, Asia and a number of other parts of the world. For UMTS, other areas are currently receiving the main focus of development, and Mobile IP work is expected to follow on and be included in later releases of the standard.
The way in which IP is used on a cellular system is very similar to that employed using a dial up
phone connection where a computer is to connect to the Internet. Here the user makes a
connection using what is termed the Point to Point Protocol (PPP). As the connection is established
the service provider assigns an IP address to the user. Once this has occurred the data packets have an address to which they can be routed. While the connection is maintained, all packets of data are routed to this IP address, and packets sent by the user are sourced from it.
The same happens when a mobile phone connects to the internet. A connection is established and
an IP address is assigned to the phone or laptop. This works well while the phone is connected to
the same base station or local switching centre. However a problem arises when the mobile needs to move away, because each switching centre acts as a different sub-network. Thus when a mobile moves from one switching centre to the next, the connection needs to be broken and a new one established using a new IP address. For CDMA2000 networks this is known as Simple IP. This is clearly not an efficient method of operating and it considerably reduces the performance of the system because it breaks all the IP based connections made by applications running on the mobile node.
Accordingly the mobile phone system is treated as a network in the same way as a wired LAN, and each switching centre has a foreign agent. This operates in the same way that it does for a wired LAN system: it communicates messages from a mobile that is operating away from its home switching centre, and in this way the IP connection is not broken.
By adopting this approach the foreign agents serving different switching centres are used, and the
information updated with the home agent as the mobile moves from one switching centre to the
next. Although this complicates the handover process, it enables a continuous connection to be
maintained, despite the mobile moving its location and requiring to be served by different
switching centres.

Summary
With the telecommunications scene changing rapidly, moving from a voice centred service to a
data centred service and hybrid approaches being offered to provide the optimum service, Mobile
IP is an important technique to be used to enable seamless transition from one area to the next,
and one technology to the next.

SIP - Session Initiation Protocol


SIP, the Session Initiation Protocol, is used in many applications and has been adopted as the signalling protocol for use with Voice over IP (VoIP). SIP is a signalling protocol that is used for establishing sessions on an IP network. The presence of SIP enables sessions to be set up in a way that enables a host of new services to be made available, thereby allowing far greater flexibility to be achieved.
SIP, Session Initiation Protocol, is focussed purely on establishing, modifying and terminating sessions, and has no interest in the content of the sessions. In view of this focus, SIP provides a level of simplicity that enables it to be extensible, and to sit easily within different deployment architectures and scenarios.
SIP is an RFC standard - RFC 3261 from the Internet Engineering Task Force (IETF). This is the
organization that is responsible for administering and developing the mechanisms that support the
Internet. While other protocols have been used in the past, SIP has now become the protocol of
choice as a result of its flexibility and ability to be updated.
Key functions
There are a number of key functions that SIP provides. It is able to provide name translation and
user location, it negotiates the features that will be available in a session and it manages the
participants in a session.

• User location and name translation - this function enables data to reach a party regardless
of location. To achieve this SIP, Session Initiation Protocol addresses are used. These are
very similar in format to email addresses, having elements such as a domain name and a
user name or phone number. Also because of their structure, they are easy to associate
with email addresses.
• Feature negotiation - as different parties may have different features that are supported, it is necessary that both ends communicate in a way that both can support. For example it would be no use a video enabled phone trying to send video to a voice only phone. Thus when a link is set up all participants negotiate to agree the features that are supported. Also when one user leaves a session, the remaining ones may renegotiate to determine whether any new features may be supported.
• Participant management - sessions need to be managed to enable users to enter or leave
sessions. SIP provides this capability.

SIP elements
SIP comprises two basic elements, namely the SIP User Agent and the SIP Network Server:

• The SIP User Agent - This is the component of the protocol that resides with the user. In turn it consists of two parts: the User Agent Client (UAC), which initiates calls, and the User Agent Server (UAS), which answers calls. Between them they allow calls to be made using a client server protocol operating in a peer to peer fashion.
• SIP network server - This element contains three basic parts: the SIP Stateful Server, the SIP Stateless Server, and the SIP Redirect Server. These servers act to provide the location of the user and accordingly direct data to the user, and they also provide name resolution in a similar way to email addresses and domain names on the Internet, as it is unlikely that users will remember IP addresses.

SIP also provides its own reliability mechanism, which is independent of the underlying transport. This enables it to perform reliably even over protocols such as UDP - a particularly useful feature under some circumstances.
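To give a feel for the protocol itself, the sketch below builds a minimal SIP INVITE request as plain text. The addresses, tags, branch and Call-ID values are invented for illustration, and a real INVITE would normally carry an SDP body describing the media:

# Minimal sketch of a SIP INVITE start-line and headers (RFC 3261 style).
# All addresses and identifiers here are invented examples.

def build_invite(caller, callee, call_id):
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP client.example.com;branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        f"To: <sip:{callee}>",
        f"From: <sip:{caller}>;tag=1928301774",
        f"Call-ID: {call_id}",
        "CSeq: 314159 INVITE",
        "Content-Length: 0",   # a real INVITE would carry an SDP body
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

print(build_invite("alice@example.com", "bob@example.net", "a84b4c76e66710"))

Note how the SIP addresses in the To and From headers follow the email-like format described above.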

USB tutorial
USB, or the Universal Serial Bus interface, is now well established as an interface for computer communications. In many areas it has completely overtaken RS232 and the parallel or Centronics interface for printers, and it is also widely used for memory sticks, computer mice, keyboards and many other functions. One of the advantages of USB is its flexibility; another is the speed that USB provides.
USB provides a sufficiently fast serial data transfer mechanism for many data communications applications. It is also possible to draw power through the connector, and this has further added to the popularity of USB, as many small computer peripherals may be powered via the interface. From memory sticks and disk drives to other applications such as small fans and coffee cup warmers, the USB port on computers can be used for a variety of tasks.

USB evolution
The USB interface was developed as a result of the need for a communications interface that was
convenient to use and one that would support the higher data rates being required within the
computer and peripherals industries.
The first release of a USB specification was Version 0.7 in November 1994. This was followed in January 1996 by USB 1.0. USB 1.0 was widely adopted, becoming standard on many PCs and printers. In addition a variety of other peripherals adopted the USB interface, with small memory sticks starting to appear as a convenient way of transferring or temporarily storing data.
With USB 1.0 well established, faster data transfer rates were required, and accordingly a new specification, USB 2, was released. With the importance of USB already established, it did not take long for the new standard to be adopted.
With USB having defined its place in the market, other developments of the standard were investigated. With the need for mobility in many areas of the electronics industry taking off, the next obvious move for USB was to adopt a wireless interface. In doing this, wireless USB would need to retain the same flexible approach that provided the success of the wired interface. In addition, the wireless USB interface needs to be able to transfer data at rates higher than those currently attainable with wired USB 2 connections. To achieve this, ultra-wideband (UWB) technology is used.

USB capabilities
The basic concept of USB was for an interface that would be able to connect a variety of computer peripheral devices, such as keyboards and mice, to PCs. However, since its introduction, the applications for USB have widened and it has been used for many other purposes, including measurement and automation.
In terms of performance, USB 1.1 enabled a maximum throughput of 12 Mbps, but with the
introduction of USB 2.0 the maximum speed is 480 Mbps.
In operation, the USB host automatically detects when a new device has been added. It then
requests identification from the device and appropriately configures the drivers. The bus topology
allows up to 127 devices to run concurrently on one port. Conversely, the classic serial port
supports a single device on each port. By adding hubs, more ports can be added to a USB host,
creating connections for more peripherals.
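As an illustration of what the host sees after enumeration, the sketch below lists the vendor and product IDs of attached devices. It assumes the third-party pyusb package and a working libusb backend, which are not part of the standard library:

# Minimal sketch: list USB devices visible to the host, assuming the
# third-party "pyusb" package (pip install pyusb) and a libusb backend.

import usb.core

for dev in usb.core.find(find_all=True):
    # idVendor / idProduct identify the device; the host uses these
    # during enumeration to select and configure the right driver.
    print(f"vendor={dev.idVendor:#06x} product={dev.idProduct:#06x}")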

USB Standards
USB is a standard that continues to be updated. Since its first introduction, the standard has been improved to meet the increasing needs of the user community. As a result there are a number of different USB standards, but fortunately these are backwards compatible.

1. USB 1.1: This was the first widely deployed version of USB, the Universal Serial Bus, and was released in September 1998 after a few problems with the USB 1.0 specification released in January 1996 had been resolved. It provided a Master / Slave interface and a tiered star topology which was capable of supporting up to 127 devices and a maximum of six tiers or hubs. The master or "Host" device is normally a PC, with the slaves or "Devices" linked via the cable.

One of the aims of the USB standard was to minimise the complexity within the Device by
enabling the Host to perform the processing. This meant that devices would be cheap and
readily accessible.

The data transfer rates of USB 1.1 are defined as:


o Low speed: 1.5 Mbps
o Full speed: 12 Mbps

The cable length for USB 1.1 is limited to 5 metres, and the power consumption
specification allows each device to take up to 500mA, although this is limited to 100mA
during start-up.

USB 1.1 does not allow extension cables or the inclusion of pass-through monitors (due to
timing and power limitations).

2. USB 2.0: The USB 2.0 standard, released in April 2000, is a development of USB 1.1. The main difference compared to USB 1.1 was the increase in data transfer speed up to a "High Speed" rate of 480 Mbps. However it should be noted that even though devices are labelled USB 2.0, they may not be able to meet the full transfer speed.
3. USB 3.0: This improved USB standard was first demonstrated at the Intel Developer Forum in September 2007. The major feature is what is termed the SuperSpeed bus, which provides a fourth transfer mode giving data transfer rates of 4.8 Gbit/s. Although this equates to a raw throughput of around 4 Gbit/s once the line encoding overhead is removed, data transfer rates of about 3.2 Gbit/s (0.4 GByte/s) after protocol overhead are deemed acceptable within the standard. The standard is also backwards compatible with USB 2.0.
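The relationship between the headline signalling rate and the usable rate can be seen with a short calculation. The sketch below assumes the common 8b/10b line code and a purely illustrative protocol efficiency figure, so the output is indicative only:

# Worked example: from raw signalling rate to usable throughput.
# The 8b/10b line code carries 8 data bits in every 10 symbol bits;
# the protocol efficiency figure is an assumed, illustrative value.

signalling_rate = 4.8e9          # bits/s on the wire (SuperSpeed)
line_code_efficiency = 8 / 10    # 8b/10b encoding
protocol_efficiency = 0.8        # assumed packet/framing overhead

raw = signalling_rate * line_code_efficiency     # around 3.8 Gbit/s
usable = raw * protocol_efficiency               # around 3.1 Gbit/s
print(f"raw {raw / 1e9:.1f} Gbit/s, usable {usable / 1e9:.1f} Gbit/s, "
      f"{usable / 8 / 1e9:.2f} GByte/s")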

USB connections and cables


The USB connector is remarkably simple, having just four main connections for the data and power. The maximum allowable length for an individual cable is 5 metres (3 metres for slow devices), which allows a USB device to be located some distance from the computer, and although extension cables exist they fall outside the specification as noted above.
A USB cable has two forms of connector. These are designated the "A" and "B" connectors. The connections to the connectors are given below:

PIN    FUNCTION
1      Vbus (4.75 - 5.25 V)
2      Data -
3      Data +
4      Ground
Shell  Screen
USB cable pin assignments
The connectors used for USB are designed so that the power and ground connections are made first, applying power to the device before the signal lines are connected.

USB tutorial summary


With USB in almost universal use in new computers, and a host of peripherals using the USB standard, its use is set to continue for many years to come. With the USB standard being updated to enable it to keep pace with technology, its story could run like that of Ethernet: in use for many years, but still at the forefront of technology.

VoIP Tutorial
Voice over Internet Protocol, also called Voice over IP or just VoIP, is a technology having a major impact on the telecommunications industry. VoIP technology provides advantages for both the user and the provider, allowing calls to be made more cheaply, as well as enabling data and voice to be carried over the same network efficiently. In view of the way VoIP technology is being adopted, telecommunications providers are having to adopt the new technology. Already it has caused some impact on major businesses, and there will be more to come.
Until recently voice traffic was carried using a circuit switched approach. Here a dedicated circuit
was switched to provide a call for a user. Now with new data and Internet style technology used
for VoIP, packet data and Internet Protocol (IP) is used to enable a much more efficient use of the
available capacity.

What is VoIP?
The concept of Voice over Internet Protocol, Voice over IP, or VoIP, is quite straightforward. A VoIP system basically consists of a number of endpoints, which may be VoIP phones or computers, and an IP network.
In a VoIP system, the phone or computer acting as an endpoint consists of a few blocks. It includes a vocoder (voice encoder / decoder) which converts the audio between analogue and digital formats. It also compresses the encoded audio, and in the reverse direction it decompresses the reconstituted audio. The data generated is split into packets in the required format by the network interface card, which sends them with the relevant protocol into the outside world. Signalling and call control are also applied through this card so that calls may be set up, cleared down, and other actions undertaken.
The IP network accepts the packets and provides the medium over which they can be forwarded,
routing them to their final destination. As complete circuits are not dedicated to a given user, at
times when no data needs to be sent, for example during quiet periods in speech, etc, the capacity
can be used by other users. This makes a significant difference to the efficiency of a system, and
allows significant savings to be made.
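The numbers behind a single voice stream are easy to reproduce. The sketch below encodes a 20 ms frame of audio with the G.711 mu-law vocoder using Python's audioop module (removed in very recent Python versions, so treat it purely as an illustration of the principle):

# Sketch: G.711 mu-law encoding and 20 ms packetization of a tone.

import audioop, math, struct

SAMPLE_RATE = 8000                 # G.711 operates on 8 kHz audio
FRAME_MS = 20                      # a common VoIP packet interval
SAMPLES_PER_FRAME = SAMPLE_RATE * FRAME_MS // 1000    # 160 samples

# Generate 160 samples of a 440 Hz tone as 16-bit linear PCM...
pcm = b"".join(
    struct.pack("<h", int(10000 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)))
    for n in range(SAMPLES_PER_FRAME)
)
# ...and compress to one byte per sample with the G.711 mu-law vocoder.
ulaw = audioop.lin2ulaw(pcm, 2)    # width = 2 bytes per input sample
print(len(pcm), "PCM bytes ->", len(ulaw), "G.711 bytes per 20 ms packet")
# 320 PCM bytes -> 160 G.711 bytes, i.e. 64 kbps of media payload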

VoIP Protocols
In order to be able to communicate using a VoIP system, there are two types of protocol that must
be used. One is a signalling protocol, and the other is a protocol to facilitate the data exchange.
The signalling protocol is used to control and manage the call. It includes elements such as call set up, clear down, call forwarding and the like. The first protocol to be widely used for VoIP was H.323. However this is not a particularly rigorous definition, and as a result other variants have been developed: one, known as "Skinny", is a Cisco proprietary protocol, while another, from Nortel, is called Unistim. In view of this there are often interfacing problems. As a result a new protocol termed SIP (Session Initiation Protocol) is now being widely adopted as the main standard.
The second type of protocol is used to manage the data exchange for the VoIP traffic. The one used is termed RTP (Real-time Transport Protocol) and this can handle both audio and video. RTP handles the data exchange, but in addition a codec is required. Where voice is used this is a vocoder (a codec can be used for any form of data including audio, video, etc). The most widely used VoIP vocoder is G.711, although there is a variety of others in use with varying data rates, providing different levels of voice quality.

Service quality
Quality of Service, QoS, for the data link has a major impact on the perceived VoIP sound quality. The data exchange must take place in real time, and any delays in the system cause significant disruption to the traffic. Delayed packets may arrive out of order, or with varying gaps between them, resulting in garbled speech. Packets may even disappear, resulting in lost information.
For any packet passing through an IP network it is possible to define the class of service required.
It is important that packets that need to be transferred in real time are given a higher quality of
service than those that can be transferred as the network permits. This is particularly important
for services like VoIP that are termed delay sensitive applications.
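One commonly quoted measure of the variation in packet arrival times is the interarrival jitter estimator from the RTP specification (RFC 3550). A minimal sketch, using illustrative transit times in milliseconds rather than the RTP timestamp units a real receiver would use:

# Minimal sketch of the RFC 3550 interarrival jitter estimator.

def update_jitter(jitter, transit_prev, transit_now):
    # J = J + (|D| - J) / 16, where D is the change in transit time.
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16

# Transit times (receive time minus send time) for a few packets, in ms:
transits = [40.0, 42.5, 41.0, 55.0, 43.0]
jitter = 0.0
for prev, now in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, now)
print(f"estimated jitter: {jitter:.2f} ms")

The 1/16 smoothing factor means a single late packet nudges the estimate rather than dominating it, which suits a continuously running receiver.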

Advantages
Voice over IP, VoIP technology provides a number of significant advantages to operators and to users. For the user one of the main advantages is flexibility. Phones are software based, sometimes being attached to computers, so it is possible to move a phone around and, by enabling the system to recognise the individual phone, route the data to it automatically. In addition, ideas such as mobile IP could enable the user to be located away from the home network and still receive calls.
A further advantage is that wireless network technologies such as 802.11 can carry the calls, as voice is simply another form of application. This gives further flexibility as the phone does not have to be physically wired to a network. Again Quality of Service is a major factor, and this is being addressed under 802.11e.
For the operator some of the advantages are different. One of the major drivers towards the use of VoIP is cost. Previously digital traffic was handled using time division techniques. This had the disadvantage that when a particular time slot allocated to a user was dormant, it could not be used by others. Using IP techniques much higher levels of efficiency can be attained. Although the system required to carry packet data is more complicated, the returns far outweigh the additional costs.
Summary
As with all technologies there are disadvantages. The main one with VoIP is voice quality. This results from the use of a vocoder to digitise and compress the audio. Quality is comparable with that from a mobile phone, but with vocoder standards improving rapidly there are likely to be significant improvements in this area in the future.
In the long term VoIP is the way the market is moving, and now with increasing speed. Offering not only great improvements in flexibility but also major cost savings, albeit with the requirement for significant levels of investment, this is the direction in which the telecommunications market is heading, and to remain competitive it will be necessary to adopt the new VoIP technology.

Voice over IP, VoIP protocols


Voice over Internet Protocol, VoIP, has seen an enormous level of growth in recent years and it is likely that this will continue, or even increase, in the foreseeable future. The reason for this level of growth in the use of VoIP results from the cost savings it provides, the increase in flexibility, and the fact that the same network can be used for voice and data.
The first VoIP networks started to appear as early as 1995, and these used proprietary protocols to enable the IP data to be exchanged. However with the rapid growth, and the need for VoIP traffic to be routed globally, the need arose for established standards to control the exchange of data. In view of the fact that there were a number of different requirements, several standards were developed and are in use today.
Although it may be surprising at first sight that there are several VoIP protocols, these different
protocols are required for different reasons and they have been deployed in many systems. Even
though the concept of one unified VoIP protocol or set of protocols would seem ideal, this is
unlikely to happen in view of the extent to which the different protocols have already been
deployed. Additionally different companies and organizations will use the VoIP protocol which best
meets their requirements and accordingly, they will want to maintain what they are currently
using. This will mean that there will be resistance to any change, especially as interfaces have
been devised to enable interoperability.

VoIP protocols overview


Although working together, there are a number of different organizations and bodies that are
mentioned when referring to VoIP protocols:

• IETF This is the Internet Engineering Task Force. It is a community of engineers that
defines some of the prominent standards used on the Internet (including VoIP protocols)
and seeks to spread understanding of how they work.
• ITU The International Telecommunication Union. This is an international organization within the United Nations system where governments and private sector companies coordinate and standardize telecommunications networks, services and standards on a global basis.

In addition to the organizations involved, there is also a variety of different VoIP protocols and
standards.

• H.248 H.248 is an ITU Recommendation that defines "Gateway Control Protocol" and it
is also referred to as IETF RFC 2885 (Megaco). It defines a centralized architecture for
creating multimedia applications and it extends MGCP. H.248 is the result of a joint
collaboration between the ITU and the IETF and it is another VoIP protocol.
• H.323 This is an ITU Recommendation that defines "packet-based multimedia communications systems." H.323 defines a distributed architecture for multimedia applications, and it is thus a VoIP protocol.
• Megaco This is also known as IETF RFC 2885 and ITU Recommendation H.248. H.248
defines a centralized architecture for creating multimedia applications.
• Media Gateway Control Protocol (MGCP) This is also known as IETF RFC 2705. It
defines a centralized architecture for creating multimedia applications, and it is therefore a
VoIP protocol.
• Real-Time Transport Protocol (RTP) This VoIP protocol is defined under IETF RFC
1889 and it details a transport protocol for real-time applications. RTP provides the
transport mechanism to carry the audio/media portion of VoIP communication and is used
for all VoIP communications.
• Session Initiation Protocol (SIP) This is also known as IETF RFC 2543 and it defines
a distributed architecture for creating multimedia applications.
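Since RTP carries the media for all of these signalling options, it is worth seeing how light its framing is. The sketch below packs the fixed 12-byte RTP header laid out in the RFC; the field values are illustrative rather than taken from a real session:

# Sketch: packing the fixed 12-byte RTP header (RFC 1889/3550 layout).

import struct

def rtp_header(seq, timestamp, ssrc, payload_type=0, marker=0):
    version = 2                           # current RTP version
    byte0 = version << 6                  # no padding, no extension, CC = 0
    byte1 = (marker << 7) | payload_type  # PT 0 = G.711 mu-law (PCMU)
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

hdr = rtp_header(seq=1, timestamp=160, ssrc=0x12345678)
print(len(hdr), hdr.hex())                # 12 bytes of header per packet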

Centralised and distributed architectures


One of the advantages of VoIP is that it does not dictate the architecture of the network that carries the data. Early telecommunications networks used a centralised structure where all the intelligence was contained at the switching station or exchange. With the advent of packet technology, the routing and intelligence can be distributed to wherever it is most convenient to locate them. This may be by having a distributed architecture, or a centralised one.
While both architectures can be employed with VoIP, the type of architecture does have an impact on the optimum VoIP protocols to use. This is one of the reasons why a number of VoIP protocols are used, and will remain in use.

VoIP testing
IP networks carrying VoIP traffic are very complicated. They carry both voice and data traffic, and the variety of traffic with different requirements presents many challenges. Ensuring that all the requirements are met and that the network operates at its maximum efficiency is far from straightforward. Obviously the design must be correct, but once implemented the network must be tested to ensure that it operates correctly when installed, and then maintained correctly so that its performance continues to meet the needs of the network provider and the user. For VoIP, testing is therefore an essential element of any network, and specialised VoIP testing techniques are required.

VoIP network architecture


The structure of a VoIP network comprises many entities and this means that VoIP testing is
essential to ensure that the network is operating satisfactorily. A typical VoIP network will include
many different entities:

• Signalling gateways
• Media gateways
• Gatekeepers
• Class 5 switches
• SS7 network
• Network management system
• Billing system

The variety of different entities within the VoIP network all communicate with each other using a variety of protocols. For the network to perform correctly it is necessary to ensure that they communicate efficiently and that no bottlenecks are created. Analysing the performance of a VoIP network is not always easy. However it can be achieved, and significant improvements in performance can be gained if the VoIP testing scenarios are carefully chosen and planned, and the data analysed to reveal any problems.

VoIP testing fundamentals


When considering VoIP testing, it is first necessary to categorise the different applications or
different types of VoIP testing that may be employed. There are three areas into which VoIP
testing (and other types of testing) may be categorised:
• VoIP functionality or pre-deployment testing
• VoIP standards compliance testing
• VoIP performance testing

Each element within the overall VoIP testing regime is important, as it helps to ensure that the network is able to perform properly. A problem in any one element will result in the whole network not performing properly. These tests can take the form of functional tests in the laboratory before deployment. VoIP testing of the individual elements is essential to make sure that problems do not manifest themselves during deployment: isolating and fixing a problem at that later stage is considerably more costly.

20mA current loop serial data communications


The current loop interface for serial data communications is used in a number of applications. As the name implies, the current loop serial data communications scheme uses the level of current rather than voltage to provide the signalling. The scheme is often called a 20mA current loop interface in view of the level of current that is commonly used.
The 20 mA current loop scheme has been used for many years for sending digital data. Although not a formal standard, it is a de-facto standard that was widely used for many serial data communications applications. It was incorporated into old teleprinters or teletypes for sending data between two pieces of equipment, along with being used in a variety of other applications. In fact many older machines (prior to the 1960s) used a 60 mA current loop system, although later machines adopted a 20mA current loop standard - the first being the Model 33 teletype.
With the advent of RS-422 (first introduced in 1978) and RS-485 (first introduced in 1983), the popularity of 20mA current loop waned, and nowadays it is only used in niche areas where its particular advantages still count.

Advantages and disadvantages of current loop


The 20mA current loop format for serial data communications has a number of advantages that
mean it is still used in a variety of applications. Some of the main advantages are:

• Line losses are not usually significant: The fact that a current source is used means
that voltage losses caused by line resistance are unlikely to cause a problem.
• Can be used for long distances: As voltage losses are not normally significant, 20mA current loop systems can be used for carrying data over long distances, sometimes up to several kilometres (see the sketch after this list).
• Can be isolated from ground: By using opto-isolators it is possible to isolate the
signalling system from ground.
• Provides a simple form of networking: As the system uses a current loop, it is possible to run several teleprinters receiving data from one source by placing each teleprinter in the loop. This meant that 20 mA current loop provided an early form of networking.
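The first two advantages follow directly from the physics of a loop: the same current flows through every element, so line resistance only eats into the source's compliance voltage rather than attenuating the signal. A short worked example with illustrative values:

# Worked example: why line resistance matters little in a current loop.

loop_current = 0.020       # 20 mA drive current
line_resistance = 200.0    # ohms of copper, e.g. a long two-wire run
supply_voltage = 24.0      # compliance voltage of the current source

# The same 20 mA flows everywhere in the loop, so the receiver still
# sees the full signalling current; the line merely drops some voltage.
line_drop = loop_current * line_resistance    # 4.0 V
headroom = supply_voltage - line_drop         # voltage left for devices
print(f"line drop {line_drop:.1f} V, headroom {headroom:.1f} V")
# The loop works as long as headroom stays positive, which sets the
# maximum distance rather than the signal level at the receiver.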

While 20mA current loop provides a number of advantages, it also has some disadvantages that
must be considered.

• No official standard: No recognised standards body has ever published a standard for the 20mA current loop system. This has meant that there are areas of uncertainty. For example it is necessary to know some technical details of the interface circuits to ensure that they interface correctly.
• Slow speed: The speed at which current loop systems are able to transmit data is generally much less than voltage based systems. For short distances speeds up to 19.2k baud are possible, although for longer distances it will be necessary to reduce the speed, possibly to as low as 300 baud.
• Convenience: The circuits used for voltage based signalling systems are generally more
convenient than those used for 20mA current loop.
• Signalling: Many voltage based systems use multiple lines for handshaking, which speeds the operation. 20 mA current loop traditionally only uses two lines, and therefore any handshaking signals need to be carried within the messaging, making it less flexible.
Analogue current loop
While the current loop system described here focuses on a digital format used for data signalling,
other systems use an analogue approach. These schemes generally use a 4 - 20 mA current loop
system and can be used to control transducers.
Although a little crude by today's standards, analogue 4-20mA current loop systems allow control over a single pair of wires, and again the resistive losses are less significant, enabling more accurate control than that provided by a voltage based system over a distance.
Additionally, analogue systems such as this are easier to troubleshoot than any digital systems that could be used. However they may be much less flexible, as only one parameter can be controlled at any one time.
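The standing 4 mA "live zero" also makes the scaling simple and lets a broken loop (0 mA) be distinguished from a genuine minimum reading. A small sketch of the usual conversion, with an assumed 0-100 unit transducer span:

# Sketch: converting a 4-20 mA analogue loop current to a reading.
# The 0-100 unit span is an assumed, illustrative transducer range.

SPAN_LOW, SPAN_HIGH = 0.0, 100.0    # engineering units at 4 mA and 20 mA

def reading_from_current(milliamps):
    if milliamps < 3.8:             # below the live zero: loop fault
        raise ValueError("open loop or failed transmitter")
    fraction = (milliamps - 4.0) / (20.0 - 4.0)
    return SPAN_LOW + fraction * (SPAN_HIGH - SPAN_LOW)

print(reading_from_current(12.0))   # mid-scale current -> 50.0 units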

Current loop summary


While the 20mA current loop system is not as widely used as it once was, it is still found in some niche areas, offering advantages in terms of the distances that can be covered and the noise immunity it provides. However it was never adopted as a formal standard, and this means that when using equipment with 20mA current loop, it is necessary to check the specifications of the interfaces of both transmitter and receiver.

Careers

Application Engineer-I / 3+ yrs / IN,VAS/Technical/Product Support

Location: Pune, India

Job Code: RKI/PO/GTAC/579

Positions: 1

Description

About Redknee

Redknee is a leading global provider of innovative communication software products, solutions and services. Redknee's award-winning solutions enable wireless and wireline operators to monetize the value of each subscriber transaction while personalizing the subscriber experience to meet mainstream, niche and individual market segment requirements. Redknee's revenue generating solutions provide advanced convergent billing, rating, charging and policy for voice, messaging and next generation data services to over 70 network operators in over 50 countries. Established in 1999, Redknee Solutions Inc. (TSX: RKN) is the parent of the wholly-owned operating subsidiary Redknee Inc. and its various subsidiaries. References to Redknee refer to the combined operations of those entities.
For more information, please visit us at: www.redknee.com.

Application Support Engineer I - GTAC

Description

You are a born problem solver. You live to troubleshoot and resolve customer
technical issues. You enjoy the challenge of working with leading-edge
telecommunications technology. You expertly investigate and analyze customer
problems to ensure that products and solutions are accomplishing the customer’s
objectives. You autonomously execute special assignments and creatively
prioritize deadlines to get the job done.

Roles & Responsibilities

You will:

• Provide support for Redknee products deployed on customer sites across the globe;
• Work to resolve production issues and meet the laid-down SLAs;
• Escalate trouble tickets to the appropriate person if necessary;
• Prepare for and perform on-site and/or remote installations/upgrades of Redknee application software and relevant third party products;
• Support new release activities;
• Prepare incident reports and Methods of Procedure for software upgrades and emergency patches;
• Test the releases in lab nodes;
• Act as a central point of contact in resolving and mitigating issues that arise in the production environment;
• Be flexible to work in shifts.


Technical Skills:

The ideal candidate will have 3+ years of relevant experience in technical/product support and hands-on experience in the majority of the areas listed below:

• Very good working knowledge of Oracle (SQL, PL/SQL);
• UNIX knowledge and expertise;
• Good knowledge/experience of Telecom, IN, VAS and related technologies (such as SMS, MMS, VMS, GPRS, IVR, USSD, Mediation, Billing, Service Provisioning);
• Promotion configuration and implementation;
• Good knowledge of SS7, SIGTRAN, VoIP, MGCP, ISUP, MAP, TCAP, INAP and CAMEL protocols;
• Experience with shell and Perl scripting is advantageous;
• Knowledge of any telecom billing system is advantageous.
Soft Skills:

• Detail-oriented, with good analytical and problem-solving skills
• Excellent communication skills, written and oral
• Highly motivated, goal-oriented and well-organized - a "to win" orientation
• Exemplary team player

Education and qualifications

Graduate or Post Graduate Degree in Engineering/Science
