There is a total of fourteen different recognised GSM frequency bands. These are defined in 3GPP
TS 45.005.
BAND   UPLINK (MHz)       DOWNLINK (MHz)     COMMENTS
380    380.2 - 389.8      390.2 - 399.8
410    410.2 - 419.8      420.2 - 429.8
450    450.4 - 457.6      460.4 - 467.6
480    478.8 - 486.0      488.8 - 496.0
710    698.0 - 716.0      728.0 - 746.0
750    747.0 - 762.0      777.0 - 792.0
810    806.0 - 821.0      851.0 - 866.0
850    824.0 - 849.0      869.0 - 894.0
900    890.0 - 915.0      935.0 - 960.0      P-GSM, i.e. Primary or standard GSM allocation
900    880.0 - 915.0      925.0 - 960.0      E-GSM, i.e. Extended GSM allocation
900    876.0 - 915.0      921.0 - 960.0      R-GSM, i.e. Railway GSM allocation
900    870.4 - 876.0      915.4 - 921.0      T-GSM
1800   1710.0 - 1785.0    1805.0 - 1880.0
1900   1850.0 - 1910.0    1930.0 - 1990.0
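For the common bands, the carrier frequencies can be derived directly from the channel number (ARFCN). The sketch below applies the standard formulas from 3GPP TS 45.005 for P-GSM 900 and DCS 1800 only; the other bands in the table use analogous formulas with different offsets.

```python
def gsm_arfcn_to_freq(arfcn):
    """Map an ARFCN to (uplink, downlink) carrier frequencies in MHz.

    Covers P-GSM 900 and DCS 1800 only; formulas per 3GPP TS 45.005.
    """
    if 1 <= arfcn <= 124:            # P-GSM 900
        uplink = 890.0 + 0.2 * arfcn
        duplex = 45.0                # 45 MHz uplink/downlink spacing
    elif 512 <= arfcn <= 885:        # DCS 1800
        uplink = 1710.2 + 0.2 * (arfcn - 512)
        duplex = 95.0                # 95 MHz spacing at 1800 MHz
    else:
        raise ValueError("ARFCN outside P-GSM / DCS 1800 ranges")
    return round(uplink, 1), round(uplink + duplex, 1)

print(gsm_arfcn_to_freq(1))    # (890.2, 935.2) - first P-GSM carrier
print(gsm_arfcn_to_freq(124))  # (914.8, 959.8) - last P-GSM carrier
```

Note that the carrier centres sit 200 kHz apart, starting 200 kHz inside the band edge shown in the table.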
GSM services
Speech or voice calls are obviously the primary function for the GSM cellular system. To achieve
this the speech is digitally encoded and later decoded using a vocoder. A variety of vocoders are
available for use, being aimed at different scenarios.
In addition to the voice services, GSM cellular technology supports a variety of other data services.
Although their performance is nowhere near the level of those provided by 3G, they are
nevertheless still important and useful. A variety of data services are supported with user data
rates up to 9.6 kbps. Services including Group 3 facsimile, videotex and teletex can be supported.
One service that has grown enormously is the short message service. Developed as part of the
GSM specification, it has also been incorporated into other cellular technologies. It can be thought
of as being similar to the paging service but is far more comprehensive allowing bi-directional
messaging, store and forward delivery, and it also allows alphanumeric messages of a reasonable
length. This service has become particularly popular, initially with the young, as it provided a simple means of messaging at a low fixed cost.
GSM basics
The GSM cellular technology had a number of design aims when development started, and the resulting system provided for all of them. The overall
system definition for GSM describes not only the air interface but also the network or
infrastructure technology. By adopting this approach it is possible to define the operation of the
whole network to enable international roaming as well as enabling network elements from different manufacturers to operate alongside each other, although in practice this interoperability is not complete, especially with older equipment.
GSM cellular technology uses 200 kHz RF channels. These are time division multiplexed to enable
up to eight users to access each carrier. In this way it is a TDMA / FDMA system.
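As a rough illustration of the combined TDMA / FDMA capacity, the original 25 MHz paired P-GSM allocation divides as follows (a back-of-envelope sketch that ignores the guard channels operators typically leave unused):

```python
# Rough capacity sketch for the P-GSM 900 allocation.
band_width_khz    = 25_000   # 890 - 915 MHz uplink allocation
channel_khz       = 200      # GSM RF channel spacing
slots_per_carrier = 8        # TDMA time slots per carrier

carriers = band_width_khz // channel_khz
print(carriers)                      # 125 raw 200 kHz channels (124 usable ARFCNs)
print(carriers * slots_per_carrier)  # up to 1000 simultaneous time slots
```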
The base transceiver stations (BTS) are organised into small groups, controlled by a base station
controller (BSC) which is typically co-located with one of the BTSs. The BSC with its associated
BTSs is termed the base station subsystem (BSS).
Further into the core network is the main switching area. This is known as the mobile switching
centre (MSC). Associated with it are the location registers, namely the home location register (HLR) and the visitor location register (VLR), which track the location of mobiles and enable calls to be routed to them. Additionally there are the Authentication Centre (AuC) and the Equipment Identity Register (EIR), which are used in authenticating the mobile before it is allowed onto the network and for billing. The operation of these is explained in the following pages.
Last but not least is the mobile itself. Often termed the ME or mobile equipment, this is the item
that the end user sees. One important feature that was first implemented on GSM was the use of a
Subscriber Identity Module (SIM). This card carried with it the user's identity and other information to allow the user to upgrade a phone very easily, while retaining the same identity on the network. It was also used to store other information such as the "phone book". This item alone
has allowed people to change phones very easily, and this has fuelled the phone manufacturing
industry and enabled new phones with additional features to be launched. This has allowed mobile operators to increase their average revenue per user (ARPU) by ensuring that users can access any new features launched on the network that require more sophisticated phones.
New approaches
Neither of these approaches proved to be the long-term solution as cellular technology needed to
be more efficient. With the experience gained from the NMT system, showing that it was possible
to develop a system across national boundaries, and with the political situation in Europe lending
itself to international cooperation it was decided to develop a new Pan-European System.
Furthermore it was realized that economies of scale would bring significant benefits. This was the beginning of the GSM system.
To achieve the basic definition of a new system, a meeting was held in 1982 under the auspices of the European Conference of Postal and Telecommunications Administrations (CEPT). This formed a study group called the Groupe Spécial Mobile (GSM) to study and develop a pan-European public land mobile system.
Several basic criteria were set down that the new GSM system would have to meet. These included: good subjective speech quality, low terminal and service cost, support for international roaming, ability to support handheld terminals, support for a range of new services and facilities, spectral efficiency, and finally ISDN compatibility.
With capacity shortfalls being projected for the analogue systems, there was a real sense of urgency to the GSM development. Although decisions about the exact nature of the cellular
technology were not taken at an early stage, all parties involved had been working toward a digital
system. This decision was finally made in February 1987. This gave a variety of advantages.
Greater levels of spectral efficiency could be gained, and in addition to this the use of digital
circuitry would allow for higher levels of integration in the circuitry. This in turn would result in
cheaper handsets with more features. Nevertheless significant hurdles still needed to be
overcome. For example, many of the methods for encoding the speech within a sufficiently narrow
bandwidth needed to be developed, and this posed a significant risk to the project. Nevertheless
the GSM system had been started.
Frequencies
Originally it had been intended that GSM would operate on frequencies in the 900 MHz cellular
band. In September 1993, the British operator Mercury One-2-One launched a network. Termed DCS 1800, it operated at frequencies in a new 1800 MHz band. Adopting new frequencies brought new operators and further competition into the market, as well as allowing additional spectrum to be used and further increasing the overall capacity. This trend was followed in many countries, and soon the term DCS 1800 was dropped in favour of calling it GSM, as it was exactly the same cellular technology operating on a different frequency band. In view of the higher frequency used, the distances the signals travelled were slightly shorter, but this was compensated for by additional base stations.
In the USA, a portion of spectrum at 1900 MHz was also allocated for cellular usage in 1994. The licensing body, the FCC, did not legislate which technology should be used, and accordingly this
enabled GSM to gain a foothold in the US market. This system was known as PCS 1900 (Personal
Communication System).
GSM success
With GSM being used in many countries outside Europe, the name was changed from Groupe Spécial Mobile to Global System for Mobile communications to reflect this wider reach.
The number of subscribers grew rapidly and by the beginning of 2004 the total number of GSM
subscribers reached 1 billion. Attaining this figure was celebrated at the Cannes 3GSM conference
held that year. Figures continued to rise, reaching and then well exceeding the 3 billion mark. In
this way the history of GSM has shown it to be a great success.
The GSM technical specifications define the different elements within the GSM network architecture and the ways in which they interact to enable overall network operation to be maintained.
The GSM network architecture is now well established and with the other later cellular systems
now established and other new ones being deployed, the basic GSM network architecture has been
updated to interface to the network elements required by these systems. Despite the
developments of the newer systems, the basic GSM network architecture has been maintained,
and the elements described below perform the same functions as they did when the original GSM
system was launched in the early 1990s.
GSM network architecture elements
The GSM network architecture as defined in the GSM specifications can be grouped into four main areas: the mobile station (MS), the base station subsystem (BSS), the network and switching subsystem (NSS), and the operation and support subsystem (OSS).
Mobile station
Mobile stations (MS), mobile equipment (ME) or as they are most widely known, cell or mobile
phones are the section of a GSM cellular network that the user sees and operates. In recent years
their size has fallen dramatically while the level of functionality has greatly increased. A further
advantage is that the time between charges has significantly increased.
There are a number of elements to the cell phone, although the two main ones are the hardware itself and the SIM.
The hardware itself contains the main elements of the mobile phone including the display, case, battery, and the electronics used to generate the signal and to process the data received and to be transmitted. It also contains a number known as the International Mobile Equipment Identity (IMEI). This is installed in the phone at manufacture and cannot, in principle, be changed. It is accessed by the network during registration to check whether the equipment has been reported as stolen.
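The 15-digit IMEI ends in a check digit computed with the Luhn algorithm over the first 14 digits (per 3GPP TS 23.003), which lets equipment registers catch mistyped or corrupted identities. A minimal sketch:

```python
def imei_check_digit(first14: str) -> int:
    """Luhn check digit over the first 14 IMEI digits: double every second
    digit from the left and sum the digits of the results."""
    total = 0
    for i, ch in enumerate(first14):
        d = int(ch)
        if i % 2 == 1:       # double alternate digits
            d *= 2
            if d > 9:
                d -= 9       # equivalent to summing the two digits
        total += d
    return (10 - total % 10) % 10

def imei_is_valid(imei: str) -> bool:
    return (len(imei) == 15 and imei.isdigit()
            and imei_check_digit(imei[:14]) == int(imei[14]))

print(imei_is_valid("490154203237518"))  # a commonly cited test IMEI -> True
```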
The SIM or Subscriber Identity Module contains the information that provides the identity of the
user to the network. It contains a variety of information including a number known as the
International Mobile Subscriber Identity (IMSI).
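The IMSI itself is structured: a 3-digit mobile country code (MCC), a 2- or 3-digit mobile network code (MNC), and the subscriber number (MSIN), up to 15 digits in total. A small sketch of splitting it; the MNC length depends on the country's numbering plan and cannot be inferred from the IMSI alone, so it is passed in, and the example IMSI below is purely illustrative:

```python
def split_imsi(imsi: str, mnc_len: int = 2):
    """Split an IMSI into (MCC, MNC, MSIN).

    mnc_len is 2 or 3 depending on the national numbering plan.
    """
    assert len(imsi) <= 15 and imsi.isdigit()
    return imsi[:3], imsi[3:3 + mnc_len], imsi[3 + mnc_len:]

mcc, mnc, msin = split_imsi("234150123456789")  # illustrative IMSI
print(mcc, mnc, msin)   # 234 15 0123456789
```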
• Mobile Switching Centre (MSC): The main element within the core network
area of the overall GSM network architecture is the Mobile Switching Centre
(MSC). The MSC acts like a normal switching node within a PSTN or ISDN, but also
provides additional functionality to enable the requirements of a mobile user to be
supported. These include registration, authentication, call location, inter-MSC handovers
and call routing to a mobile subscriber. It also provides an interface to the PSTN so that
calls can be routed from the mobile network to a phone connected to a landline. Interfaces
to other MSCs are provided to enable calls to be made to mobiles on different networks.
• Home Location Register (HLR): This database contains all the administrative
information about each subscriber along with their last known location. In this way, the
GSM network is able to route calls to the relevant base station for the MS. When a user
switches on their phone, the phone registers with the network and from this it is possible
to determine which BTS it communicates with so that incoming calls can be routed
appropriately. Even when the phone is not active (but switched on) it re-registers
periodically to ensure that the network (HLR) is aware of its latest position. There is one
HLR per network, although it may be distributed across various sub-centres for
operational reasons.
• Visitor Location Register (VLR): This contains selected information from the HLR that
enables the selected services for the individual subscriber to be provided. The VLR can be
implemented as a separate entity, but it is commonly realised as an integral part of the
MSC. In this way access is made faster and more convenient.
• Equipment Identity Register (EIR): The EIR is the entity that decides whether a given
mobile equipment may be allowed onto the network. Each mobile equipment has a number
known as the International Mobile Equipment Identity. This number, as mentioned above,
is installed in the equipment and is checked by the network during registration. Dependent
upon the information held in the EIR, the mobile may be allocated one of three states -
allowed onto the network, barred access, or monitored in case of problems.
• Authentication Centre (AuC): The AuC is a protected database that contains the secret
key also contained in the user's SIM card. It is used for authentication and for ciphering on
the radio channel.
• Gateway Mobile Switching Centre (GMSC): The GMSC is the point to which a mobile-
terminating call is initially routed, without any knowledge of the MS's location. The GMSC
is thus in charge of obtaining the MSRN (Mobile Station Roaming Number) from the HLR
based on the MSISDN (Mobile Station ISDN number, the "directory number" of a MS) and
routing the call to the correct visited MSC. The "MSC" part of the term GMSC is misleading,
since the gateway operation does not require any linking to an MSC.
• SMS Gateway (SMS-G): The SMS-G or SMS gateway is the term that is used to
collectively describe the two Short Message Services Gateways defined in the GSM
standards. The two gateways handle messages directed in different directions. The SMS-
GMSC (Short Message Service Gateway Mobile Switching Centre) is for short messages
being sent to an ME. The SMS-IWMSC (Short Message Service Inter-Working Mobile
Switching Centre) is used for short messages originated with a mobile on that network.
The SMS-GMSC role is similar to that of the GMSC, whereas the SMS-IWMSC provides a
fixed access point to the Short Message Service Centre.
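The use of the secret key shared between the SIM and the AuC, mentioned above, follows a challenge-response pattern: the network sends a random challenge, both sides compute a signed response from it, and access is granted if they match. A minimal sketch of the flow; the real A3 algorithm is operator-specific (e.g. COMP128), so truncated HMAC-SHA256 stands in here purely for illustration:

```python
import hmac, hashlib, os

def a3_stand_in(ki: bytes, rand: bytes) -> bytes:
    """Illustrative stand-in for the operator's A3 algorithm: 32-bit SRES."""
    return hmac.new(ki, rand, hashlib.sha256).digest()[:4]

ki = os.urandom(16)        # secret key, held only in the SIM and the AuC
rand = os.urandom(16)      # random challenge sent to the mobile

sres_sim = a3_stand_in(ki, rand)   # computed inside the SIM
sres_auc = a3_stand_in(ki, rand)   # computed by the AuC for the network
print(sres_sim == sres_auc)        # True - the mobile is authenticated
```

A companion algorithm (A8 in GSM) derives the cipher key for the radio channel from the same Ki and RAND.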
1. Um interface The "air" or radio interface standard that is used for exchanges between a
mobile (ME) and a base station (BTS / BSC). For signalling, a modified version of the ISDN
LAPD, known as LAPDm is used.
2. Abis interface This is a BSS internal interface linking the BSC and a BTS, and it has not
been totally standardised. The Abis interface allows control of the radio equipment and
radio frequency allocation in the BTS.
3. A interface The A interface is used to provide communication between the BSS and the
MSC. The interface carries information to enable the channels, timeslots and the like to be
allocated to the mobile equipment being serviced by the BSSs. The messaging required
within the network to enable handover etc to be undertaken is carried over the interface.
4. B interface The B interface exists between the MSC and the VLR . It uses a protocol
known as the MAP/B protocol. As most VLRs are collocated with an MSC, this makes the
interface purely an "internal" interface. The interface is used whenever the MSC needs
access to data regarding a MS located in its area.
5. C interface The C interface is located between the HLR and a GMSC or a SMS-G. When a
call originates from outside the network, i.e. from the PSTN or another mobile network, it
has to pass through the gateway so that the routing information required to complete the call
may be gained. The protocol used for communication is MAP/C, the letter "C" indicating
that the protocol is used for the "C" interface. In addition to this, the MSC may optionally
forward billing information to the HLR after the call is completed and cleared down.
6. D interface The D interface is situated between the VLR and HLR. It uses the MAP/D
protocol to exchange the data related to the location of the ME and to the management of
the subscriber.
7. E interface The E interface provides communication between two MSCs. The E interface
exchanges data related to handover between the anchor and relay MSCs using the MAP/E
protocol.
8. F interface The F interface is used between an MSC and EIR. It uses the MAP/F protocol.
The communications along this interface are used to confirm the status of the IMEI of the
ME gaining access to the network.
9. G interface The G interface interconnects two VLRs of different MSCs and uses the
MAP/G protocol to transfer subscriber information, during e.g. a location update procedure.
10. H interface The H interface exists between the MSC and the SMS-G. It transfers short
messages and uses the MAP/H protocol.
11. I interface The I interface can be found between the MSC and the ME. Messages
exchanged over the I interface are relayed transparently through the BSS.
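The interfaces above can be summarised as a quick-reference table. The sketch below restates what the list says; the note that the A interface uses BSSAP is an addition from the wider GSM specifications rather than from the text above:

```python
# GSM network interfaces: endpoints and signalling protocol.
GSM_INTERFACES = {
    "Um":   ("MS - BTS",          "LAPDm signalling over the radio interface"),
    "Abis": ("BTS - BSC",         "not fully standardised"),
    "A":    ("BSS - MSC",         "BSSAP"),
    "B":    ("MSC - VLR",         "MAP/B"),
    "C":    ("HLR - GMSC/SMS-G",  "MAP/C"),
    "D":    ("VLR - HLR",         "MAP/D"),
    "E":    ("MSC - MSC",         "MAP/E"),
    "F":    ("MSC - EIR",         "MAP/F"),
    "G":    ("VLR - VLR",         "MAP/G"),
    "H":    ("MSC - SMS-G",       "MAP/H"),
}

for name, (endpoints, protocol) in GSM_INTERFACES.items():
    print(f"{name:5s} {endpoints:20s} {protocol}")
```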
Although the interfaces for the GSM cellular system may not be as rigorously defined as many
might like, they do at least provide a large element of the definition required, enabling the
functionality of GSM network entities to be defined sufficiently.
One of the key elements of the development of the GSM, Global System for Mobile
Communications was the development of the GSM air interface. There were many requirements
that were placed on the system, and many of these had a direct impact on the air interface.
Elements including the modulation, GSM slot structure, burst structure and the like were all
devised to provide the optimum performance.
During the development of the GSM standard very careful attention was paid to aspects including
the modulation format and the way in which the system is time division multiplexed, all of which
had a considerable impact on the performance of the system as a whole. For example, the modulation
format for the GSM air interface had a direct impact on battery life and the time division format
adopted enabled the cellphone handset costs to be considerably reduced as detailed later.
Note on GMSK:
GMSK, Gaussian Minimum Shift Keying is a form of phase modulation that is used in a number of portable radio and
wireless applications. It has advantages in terms of spectral efficiency as well as having an almost constant amplitude
which allows for the use of more efficient transmitter power amplifiers, thereby saving on current consumption, a
critical issue for battery powered equipment.
The nominal bandwidth for the GSM signal using GMSK is 200 kHz, i.e. the channel bandwidth and
spacing is 200 kHz. As GMSK modulation has been used, the unwanted or spurious emissions
outside the nominal bandwidth are sufficiently low to enable adjacent channels to be used from the
same base station. Typically each base station will be allocated a number of carriers to enable it to
achieve the required capacity.
The data transported by the carrier serves up to eight different users under the basic system by
splitting the carrier into eight time slots. The basic carrier is able to support a data throughput of
approximately 270 kbps, but as some of this supports the management overhead, the data rate
allotted to each time slot is only 24.8 kbps. In addition to this error correction is required to
overcome the problems of interference, fading and general data errors that may occur. This means
that the available data rate for transporting the digitally encoded speech is 13 kbps for the basic
vocoders.
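These figures can be checked with a little arithmetic: one burst of 156.25 bits is sent per slot per 4.615 ms TDMA frame, giving the carrier rate quoted above.

```python
# Back-of-envelope check of the carrier bit-rate figures quoted above.
BITS_PER_BURST = 156.25      # normal burst, including the guard period
FRAME_MS       = 120 / 26    # one TDMA frame (8 slots) lasts ~4.615 ms

slot_rate_kbps    = BITS_PER_BURST / FRAME_MS   # one slot, gross
carrier_rate_kbps = 8 * slot_rate_kbps          # whole carrier

print(round(carrier_rate_kbps, 1))   # 270.8 - the ~270 kbps quoted above
print(round(slot_rate_kbps, 2))      # 33.85 gross per slot; framing overhead
                                     # reduces this to the ~24.8 kbps quoted,
                                     # and FEC leaves 13 kbps for the vocoder
```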
GSM burst
The GSM burst, or transmission can fulfil a variety of functions. Some GSM bursts are used for
carrying data while others are used for control information. As a result of this a number of
different types of GSM burst are defined.
The normal burst, used to carry traffic and most control channels, has the following format:
1. 3 tail bits: These tail bits at the start of the GSM burst give time for the transmitter to
ramp up its power.
2. 57 data bits: This block of data is used to carry information, and most often contains
the digitised voice data although on occasions it may be replaced with signalling
information in the form of the Fast Associated Control CHannel (FACCH). The type of data
is indicated by the flag that follows the data field
3. 1 bit flag: This bit within the GSM burst indicates the type of data in the previous field.
4. 26 bits training sequence: This training sequence is used as a timing reference and for
equalisation. There is a total of eight different bit sequences that may be used, each 26
bits long. The same sequence is used in each GSM slot, but nearby base stations using the
same radio frequency channels will use different ones, and this enables the mobile to
differentiate between the various cells using the same frequency.
5. 1 bit flag Again this flag indicates the type of data in the data field.
6. 57 data bits Again, this block of data within the GSM burst is used for carrying data.
7. 3 tail bits These final bits within the GSM burst are used to enable the transmitter power
to ramp down. They are often called final tail bits, or just tail bits.
8. 8.25 bits guard time At the end of the GSM burst there is a guard period. This is
introduced to prevent transmitted bursts from different mobiles overlapping as a result of
their differing distances from the base station.
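The normal burst fields above can be tallied to confirm that they fill exactly one slot's worth of bits at the carrier rate:

```python
# The normal burst fields in order; they must total 156.25 bits,
# one slot at the 270.833 kbps carrier rate.
normal_burst = [
    ("tail",      3),
    ("data",     57),
    ("flag",      1),
    ("training", 26),
    ("flag",      1),
    ("data",     57),
    ("tail",      3),
    ("guard",  8.25),
]

total = sum(bits for _, bits in normal_burst)
payload = sum(bits for name, bits in normal_burst if name == "data")
print(total)    # 156.25
print(payload)  # 114 payload bits per burst
```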
The synchronisation burst has the following format:
1. 3 tail bits: Again, these tail bits at the start of the GSM burst give time for the
transmitter to ramp up its power.
2. 39 bits of information: Coded synchronisation data, carrying the base station identity
code (BSIC) and the reduced frame number.
3. 64 bits of a Long Training Sequence: An extended training sequence enabling the mobile
to synchronise accurately.
4. 39 bits of information: The second half of the coded synchronisation data.
5. 3 tail bits Again these are to enable the transmitter power to ramp down.
6. 8.25 bits guard time: to act as a guard interval.
The frequency correction burst has the following format:
1. 3 tail bits: Again, these tail bits at the start of the GSM burst give time for the
transmitter to ramp up its power.
2. 142 bits all set to zero: With GMSK modulation this produces a pure frequency-shifted
carrier that the mobile uses as a frequency reference.
3. 3 tail bits Again these are to enable the transmitter power to ramp down.
4. 8.25 bits guard time: to act as a guard interval.
The access burst, used when first accessing the network, has the following format:
1. 8 tail bits: The increased number of tail bits is included to provide additional margin
when accessing the network.
2. 41 training bits: A longer synchronisation sequence.
3. 36 data bits: The access request information.
4. 3 tail bits Again these are to enable the transmitter power to ramp down.
5. 68.25 bits guard time: The extended guard time, filling the remaining time of the GSM
burst, provides for large timing differences, as the mobile's distance from the base station
is not yet known.
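The extended guard time sets the maximum cell size: it must absorb the round-trip propagation delay before the network has told the mobile its timing advance. A sketch of the arithmetic, assuming the ~68.25-bit extended guard period of 3GPP TS 45.002 (sources quote slightly different tail/guard splits that sum to the same burst length):

```python
C = 299_792_458          # speed of light, m/s
BIT_US = 48 / 13         # one GSM bit period ~= 3.69 us

guard_us = 68.25 * BIT_US                          # extended guard period
max_range_km = C * (guard_us * 1e-6) / 2 / 1000    # round trip -> one-way

print(round(guard_us))         # 252 us of timing slack
print(round(max_range_km, 1))  # 37.8 km; in practice GSM caps cells at
                               # 35 km via the timing advance (0..63 steps
                               # of one bit period, ~553.5 m each)
```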
• Traffic multiframe: The Traffic Channel frames are organised into multiframes
consisting of 26 bursts and taking 120 ms. In a traffic multiframe, 24 bursts are used for
traffic. These are numbered 0 to 11 and 13 to 24. One of the remaining bursts is used to
accommodate the SACCH, while the other remains free. The actual position used alternates
between positions 12 and 25.
• Control multiframe: The Control Channel multiframe comprises 51 bursts and
occupies 235.4 ms. This always occurs on the beacon frequency in time slot zero and it
may also occur within slots 2, 4 and 6 of the beacon frequency as well. This multiframe is
subdivided into logical channels which are time-scheduled. These logical channels and
functions include the following:
o Frequency correction burst
o Synchronisation burst
o Broadcast channel (BCH)
o Paging Channel (PCH) and Access Grant Channel (AGCH)
o Stand Alone Dedicated Control Channel (SDCCH)
GSM Superframe
Multiframes are then constructed into superframes taking 6.12 seconds. These consist of 51 traffic
multiframes or 26 control multiframes. As the traffic multiframes are 26 bursts long and the
control multiframes are 51 bursts long, the differing numbers of traffic and control multiframes
within the superframe bring the two structures back into line, both occupying exactly the same interval.
GSM Hyperframe
Above this 2048 superframes (i.e. 2 to the power 11) are grouped to form one hyperframe which
repeats every 3 hours 28 minutes 53.76 seconds. It is the largest time interval within the GSM
frame structure.
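The durations in the frame hierarchy above follow directly from the 4.615 ms TDMA frame; the arithmetic can be verified as follows:

```python
FRAME_MS = 120 / 26                         # one TDMA frame ~= 4.615 ms

traffic_mf_ms = 26 * FRAME_MS               # 26-frame traffic multiframe
control_mf_ms = 51 * FRAME_MS               # 51-frame control multiframe
superframe_s  = 51 * 26 * FRAME_MS / 1000   # 51 traffic or 26 control MFs
hyperframe_s  = 2048 * superframe_s         # 2^11 superframes

print(round(traffic_mf_ms, 1))   # 120.0 ms
print(round(control_mf_ms, 1))   # 235.4 ms
print(round(superframe_s, 2))    # 6.12 s
h, rem = divmod(hyperframe_s, 3600)
m, s = divmod(rem, 60)
print(int(h), int(m), round(s, 2))   # 3 h 28 min 53.76 s
```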
Within the GSM hyperframe there is a counter and every time slot has a unique sequential number
comprising the frame number and time slot number. This is used to maintain synchronisation of
the different scheduled operations with the GSM frame structure. These include functions such as:
• Frequency hopping: Frequency hopping is a feature that is optional within the GSM
system. It can help reduce interference and fading issues, but for it to work, the
transmitter and receiver must be synchronised so they hop to the same frequencies at the
same time.
• Encryption: The encryption process is synchronised over the GSM hyperframe period
where a counter is used and the encryption process will repeat with each hyperframe.
However, it is unlikely that the cellphone conversation will be over 3 hours and accordingly
it is unlikely that security will be compromised as a result.
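The frame number counter that schedules these functions is decomposed in the specifications (3GPP TS 45.002) into three fields, T1/T2/T3, which are broadcast on the synchronisation channel and fed into the cipher; a sketch:

```python
FRAMES_PER_SUPERFRAME = 26 * 51               # 1326 TDMA frames
HYPERFRAME = 2048 * FRAMES_PER_SUPERFRAME     # 2 715 648 frames

def decompose_fn(fn: int):
    """Split a TDMA frame number into the T1/T2/T3 fields of 3GPP TS 45.002."""
    fn %= HYPERFRAME                     # the counter wraps every hyperframe
    t1 = fn // FRAMES_PER_SUPERFRAME     # superframe index, 0..2047 (11 bits)
    t2 = fn % 26                         # position in the traffic multiframe
    t3 = fn % 51                         # position in the control multiframe
    return t1, t2, t3

print(HYPERFRAME)               # 2715648
print(decompose_fn(0))          # (0, 0, 0)
print(decompose_fn(HYPERFRAME)) # (0, 0, 0) - the cipher input repeats here
```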
• CELP: The CELP or Code Excited Linear Prediction codec is a vocoder algorithm that was
originally proposed in 1985 and gave a significant improvement over other voice codecs of
the day. The basic principle of the CELP codec has been developed and used as the basis
of other voice codecs including ACELP, RCELP, VSELP, etc. As such the CELP codec
methodology is now the most widely used speech coding algorithm. Accordingly CELP is
now used as a generic term for a particular class of vocoders or speech codecs and not a
particular codec.
The main principle behind the CELP codec is that it uses a principle known as "Analysis by
Synthesis". In this process, the encoding is performed by perceptually optimising the
decoded signal in a closed loop system. One way in which this could be achieved is to
compare a variety of generated bit streams and choose the one that produces the best
sounding signal.
• ACELP codec: The ACELP or Algebraic Code Excited Linear Prediction codec. The ACELP
codec or vocoder algorithm is a development of the CELP model. However the ACELP codec
codebooks have a specific algebraic structure as indicated by the name.
• VSELP codec: The VSELP or Vector Sum Excited Linear Prediction codec. One of the
major drawbacks of the VSELP codec is its limited ability to code non-speech sounds, which
means that it performs poorly in the presence of noise. As a result this voice codec is no
longer widely used, with newer speech codecs offering far superior performance being
preferred.
AMR-WB codec
Adaptive Multi-Rate Wideband, AMR-WB codec, also known under its ITU designation of G.722.2, is
based on the earlier popular Adaptive Multi-Rate, AMR codec. AMR-WB also uses an ACELP basis
for its operation, but it has been further developed and AMR-WB provides improved speech quality
as a result of the wider speech bandwidth that it encodes. AMR-WB has a bandwidth extending
from 50 - 7000 Hz which is significantly wider than the 300 - 3400 Hz bandwidths used by
standard telephones. However this comes at the cost of additional processing, but with advances
in IC technology in recent years, this is perfectly acceptable.
The AMR-WB codec contains a number of functional areas: it primarily includes a set of fixed rate
speech and channel codec modes. It also includes other codec functions including: a Voice Activity
Detector (VAD); Discontinuous Transmission (DTX) functionality for GSM; and Source Controlled
Rate (SCR) functionality for UMTS applications. Further functionality includes in-band signaling for
codec mode transmission, and link adaptation for control of the mode selection.
The AMR-WB codec has a 16 kHz sampling rate and the coding is performed in blocks of 20 ms.
There are two frequency bands that are used: 50-6400 Hz and 6400-7000 Hz. These are coded
separately to reduce the codec complexity. This split also serves to focus the bit allocation into the
subjectively most important frequency range.
The lower frequency band uses an ACELP codec algorithm, although a number of additional
features have been included to improve the subjective quality of the audio. Linear prediction
analysis is performed once per 20 ms frame. Also, fixed and adaptive excitation codebooks are
searched every 5 ms for optimal codec parameter values.
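The frame and subframe sizes implied by these figures are simple to derive: 16 kHz sampling over a 20 ms frame, with a codebook search every 5 ms.

```python
# Frame/subframe sizes implied by the AMR-WB figures above.
SAMPLE_RATE_HZ = 16_000
FRAME_MS, SUBFRAME_MS = 20, 5

samples_per_frame = SAMPLE_RATE_HZ * FRAME_MS // 1000
subframes = FRAME_MS // SUBFRAME_MS
print(samples_per_frame)                           # 320 samples per frame
print(subframes, samples_per_frame // subframes)   # 4 subframes of 80 samples
```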
The higher frequency band adds some of the naturalness and personality features to the voice.
The audio is reconstructed using the parameters from the lower band as well as using random
excitation. As the level of power in this band is less than that of the lower band, the gain is
adjusted relative to the lower band, but based on voicing information. The signal content of the
higher band is reconstructed by using a linear predictive filter which generates information from
the lower band filter.
BIT RATE (KBPS)   NOTES
6.60    The lowest rate for AMR-WB. It is used for circuit switched connections for GSM and UMTS and is intended to be used only temporarily during severe radio channel conditions or during network congestion.
8.85    Gives improved quality over the 6.60 kbps rate, but again its use is only recommended during periods of congestion or severe radio channel conditions.
12.65   The main bit rate used for circuit switched GSM and UMTS, offering superior performance to the original AMR codec.
14.25   Higher bit rate used to give cleaner speech; particularly useful when ambient audio noise levels are high.
15.85   Higher bit rate used to give cleaner speech; particularly useful when ambient audio noise levels are high.
18.25   Higher bit rate used to give cleaner speech; particularly useful when ambient audio noise levels are high.
19.85   Higher bit rate used to give cleaner speech; particularly useful when ambient audio noise levels are high.
23.05   Not suggested for full rate GSM channels.
23.85   Not suggested for full rate GSM channels; provides speech quality similar to that of G.722 at 64 kbps.
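Since coding is performed in 20 ms blocks, each mode's bit rate maps directly to a fixed number of bits per speech frame:

```python
# Bits carried per 20 ms speech frame for each AMR-WB mode in the table
# above: rate (kbps) x 20 ms.
FRAME_MS = 20
rates_kbps = [6.60, 8.85, 12.65, 14.25, 15.85, 18.25, 19.85, 23.05, 23.85]

bits_per_frame = {r: round(r * FRAME_MS) for r in rates_kbps}
print(bits_per_frame[12.65])   # 253 bits per frame at the main GSM/UMTS rate
print(bits_per_frame[23.85])   # 477 bits per frame at the highest rate
```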
Not all phones equipped with AMR-WB will be able to access all the data rates - the different
functions on the phone may not require all to be active for example. As a result, it is necessary to
inform the network about which rates are available and thereby simplify the negotiation between
the handset and the network. To achieve this there are three different AMR-WB configurations
that are available:
It can be seen that only the 23.85, 15.85, 12.65, 8.85 and 6.60 kbit/s modes are used. Based on
listening tests, it was considered that these five modes were sufficient for a high quality speech
telephony service. The other data rates were retained and can be used for other purposes
including multimedia messaging, streaming audio, etc.
GSM codecs summary
There has been a considerable improvement in the GSM audio codecs that have been in use.
Starting with the original RPE-LPC speech codec, development moved through the Enhanced Full
Rate, EFR codec and the GSM half rate codec to the AMR codec, which is now the most widely used
and provides a variable rate that can be tailored to the individual conditions. The newer AMR-WB
codec will also see increasing use. Even with newer technologies such as LTE, Long Term
Evolution, which uses an all-IP based system, codecs are still used to provide data compression
and improved spectral efficiency, although some of the GSM codecs in use today will be
superseded.
One of the key elements of a mobile phone or cellular telecommunications system, is that the
system is split into many small cells to provide good frequency re-use and coverage. However as
the mobile moves out of one cell to another it must be possible to retain the connection. The
process by which this occurs is known as handover or handoff. The term handover is more widely
used within Europe, whereas handoff tends to be used more in North America. Either way, handover
and handoff are the same process.
• Intra-BTS handover: This form of GSM handover occurs if it is required to change the
frequency or slot being used by a mobile because of interference, or other reasons. In this
form of GSM handover, the mobile remains attached to the same base station transceiver,
but changes the channel or slot.
• Inter-BTS Intra-BSC handover: This form of GSM handover or GSM handoff occurs when
the mobile moves out of the coverage area of one BTS but into another controlled by the
same BSC. In this instance the BSC is able to perform the handover and it assigns a new
channel and slot to the mobile, before releasing the old BTS from communicating with the
mobile.
• Inter-BSC handover: When the mobile moves out of the range of cells controlled by
one BSC, a more involved form of handover has to be performed, handing over not only
from one BTS to another but one BSC to another. For this the handover is controlled by
the MSC.
• Inter-MSC handover: This form of handover occurs when changing between networks.
The two MSCs involved negotiate to control the handover.
• Old and new BTSs synchronised: In this case the mobile is given details of the new
physical channel in the neighbouring cell and handed directly over. The mobile may
optionally transmit four access bursts. These are shorter than the standard bursts and
thereby any effects of poor synchronisation do not cause overlap with other bursts.
However in this instance where synchronisation is already good, these bursts are only used
to provide a fine adjustment.
• Time offset between synchronised old and new BTS: In some instances there may
be a time offset between the old and new BTS. In this case, the time offset is provided so
that the mobile can make the adjustment. The GSM handover then takes place as a
standard synchronised handover.
• Non-synchronised handover: When a non-synchronised cell handover takes place, the
mobile transmits 64 access bursts on the new channel. This enables the base station to
determine and adjust the timing for the mobile so that it can suitably access the new BTS.
This enables the mobile to re-establish the connection through the new BTS with the
correct timing.
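The handover categories described above can be summarised as a decision based on which serving network elements change between the old and new cell. The sketch below illustrates this with hypothetical (bts, bsc, msc) identifier tuples; the labels are purely illustrative and not part of any GSM specification.

```python
def handover_type(old, new):
    """Classify a GSM handover by comparing the serving network elements.

    `old` and `new` are (bts, bsc, msc) identifier tuples -- hypothetical
    labels used purely for illustration.
    """
    old_bts, old_bsc, old_msc = old
    new_bts, new_bsc, new_msc = new
    if old_msc != new_msc:
        return "inter-MSC"            # negotiated between the two MSCs
    if old_bsc != new_bsc:
        return "inter-BSC"            # controlled by the MSC
    if old_bts != new_bts:
        return "inter-BTS intra-BSC"  # controlled by the BSC
    return "intra-BTS"                # same BTS, new channel or slot
```

The ordering of the checks matters: the most "expensive" handover (inter-MSC) is tested first, since a change of MSC implies that the BSC and BTS have changed as well.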
Inter-system handover
With the evolution of standards and the migration from GSM to other technologies including 3G UMTS / WCDMA as well as HSPA and then LTE, there is a need to hand over from one technology to another. Often the 2G GSM coverage will be better than that of the others, and GSM is therefore often used as the fallback. Handovers of this nature are considerably more complicated than a straightforward GSM-only handover because they require two technically very different systems to handle the handover.
These handovers may be called intersystem handovers or inter-RAT handovers as the handover
occurs between different radio access technologies.
The most common form of intersystem handover is between GSM and UMTS / WCDMA. Here there
are two different types:
• UMTS / WCDMA to GSM handover: There are two further divisions of this category of
handover:
o Blind handover: This form of handover occurs when the base station hands off the mobile by passing it the details of the new cell without linking to it and setting the timing, etc., of the mobile for the new cell. In this mode the network selects what it believes to be the optimum GSM base station. The mobile first locates the broadcast channel of the new cell, gains timing synchronisation and then carries out a non-synchronised intercell handover.
o Compressed mode handover: Using this form of handover the mobile uses the gaps in transmission that occur to analyse the reception of local GSM base stations, using the neighbour list to select suitable candidate base stations. Having selected a suitable base station the handover takes place, again without any time synchronisation having occurred.
• Handover from GSM to UMTS / WCDMA: This form of handover is supported within GSM, and a "neighbour list" was established to enable it to occur easily. As the GSM / 2G network is normally more extensive than the 3G network, this type of handover does not normally occur because the mobile is leaving a coverage area and must quickly find a new base station to maintain contact. Instead, the handover from GSM to UMTS occurs to provide an improvement in performance, and it can normally take place only when the conditions are right. The neighbour list informs the mobile when this may happen.
Summary
GSM handover is one of the major elements in performance that users will notice. As a result it is
normally one of the Key Performance Indicators (KPIs) used by operators to monitor performance.
Poor handover or handoff performance will normally result in dropped calls, and users find this
particularly annoying. Accordingly network operators develop and maintain their networks to
ensure that an acceptable performance is achieved. In this way they can reduce what is called
"churn" where users change from one network to another.
ATM
The Asynchronous Transfer Mode (ATM) was developed to enable a single data networking
standard to be used for both synchronous channel networking and packet-based networking.
Asynchronous transfer mode also supports multiple levels of quality of service for packet traffic.
In this way, asynchronous transfer mode can be thought of as supporting both circuit-switched
networks and packet-switched networks by mapping both bitstreams and packet-streams. It
achieves this by sending data in a series or stream of fixed length cells, each of which has its own
identifier. These data cells are typically sent on demand within a synchronous time-slot pattern in
a synchronous bit-stream. Although this may not appear to be asynchronous, the asynchronous
element of the "Asynchronous Transfer Mode", comes from the fact that the sending of the cells
themselves is asynchronous and not from the synchronous low-level bitstream that carries them.
One of the original aims of Asynchronous Transfer Mode was that it should provide a basis for the Broadband Integrated Services Digital Network (B-ISDN) to replace the existing PSTN (Public Switched Telephone Network). As a result, the standards for Asynchronous Transfer Mode include not only the definitions for the physical transmission techniques (Layer 1), but also layers 2 and 3.
In addition to this, the development of Asynchronous Transfer Mode was focussed heavily on the requirements of telecommunications providers rather than local data networking requirements. As a result it is more suited to large area telecommunications applications than to smaller local area data network solutions or general computer networking.
While Asynchronous Transfer Mode has been widely deployed, it is now generally used only for the transport of IP traffic. It has not become the single standard for providing one integrated technology for LANs, public networks, and user services.
• ATM switch: This accepts the incoming cells or information "packets" from another ATM entity, which may be either another switch or an end point. It reads and updates the cell header information and switches the cell towards its destination.
• ATM end point: This element contains the ATM network interface adaptor to enable data entering or leaving the ATM network to interface to the external world. Examples of these end points include workstations, LAN switches, video codecs and many more items.
ATM networks can be configured in many ways. The overall network will comprise a set of ATM
switches interconnected by point-to-point ATM links or interfaces. Within the network there are
two types of interface and these are both supported by the switches. The first is the UNI (User-Network Interface), which is used to connect ATM end systems (such as hosts and routers) to an ATM switch. The second type of interface is known as the NNI (Network-Network Interface); this connects two ATM switches.
ATM operation
In ATM the information is formatted into fixed length cells consisting of 48 bytes (each 8 bits long)
of payload data. In addition to this there is a cell header that consists of 5 bytes, giving a total cell
length of 53 bytes. This format was chosen so that time-critical data such as voice packets is not held up behind very long packets. The data carried in the header comprises payload-type information as well as what are termed virtual-circuit identifiers and header error check data.
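The header fields can be made concrete with a small parser. The sketch below unpacks the 5-byte ATM cell header as defined for the UNI: GFC (4 bits), VPI (8 bits), VCI (16 bits), payload type (3 bits), CLP (1 bit) and HEC (8 bits). This is an illustrative decode only, not a complete ATM implementation.

```python
def parse_atm_uni_header(header: bytes) -> dict:
    """Unpack the 5-byte ATM UNI cell header into its fields.

    UNI layout: GFC (4 bits), VPI (8), VCI (16), PT (3), CLP (1), HEC (8).
    """
    if len(header) != 5:
        raise ValueError("an ATM cell header is exactly 5 bytes")
    gfc = header[0] >> 4
    vpi = ((header[0] & 0x0F) << 4) | (header[1] >> 4)
    vci = ((header[1] & 0x0F) << 12) | (header[2] << 4) | (header[3] >> 4)
    pt = (header[3] >> 1) & 0x07   # payload type
    clp = header[3] & 0x01         # cell loss priority
    hec = header[4]                # header error control
    return {"gfc": gfc, "vpi": vpi, "vci": vci, "pt": pt, "clp": clp, "hec": hec}
```

For example, a header of bytes 00 10 00 10 00 decodes to VPI 1, VCI 1, with PT and CLP both zero.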
ATM is what is termed connection orientated. This has the advantage that the user can define the requirements needed to support the calls, which in turn allows the network to allocate the required resources. By adopting this approach, several calls can be multiplexed efficiently while ensuring that the required resources can be allocated.
There are two types of connection that are specified for asynchronous transfer mode:
• Virtual Channel Connections - this is the basic connection unit or entity. It carries a
single stream of data cells from the originator to the end user.
• Virtual Path Connections - this is formed from a collection of virtual channel
connections. A virtual path is an end to end connection created across an ATM
(asynchronous transfer mode) network. For a virtual path connection, the network routes
all cells from the virtual path across the network in the same way without regard for the
individual virtual circuit connection. This results in faster transfer.
The idea of virtual path connections is also used within the ATM network itself to route traffic between switches.
E-Carrier, E1
The E-carrier system was created by the European Conference of Postal and Telecommunications Administrations (CEPT) as a digital telecommunications carrier scheme for carrying multiple links. The E-carrier system enables the transmission of several (multiplexed) voice/data channels simultaneously on the same transmission facility. Of the various levels of the E-carrier system, the E1 and E3 levels are the only ones in widespread use.
More specifically E1 has an overall bandwidth of 2048 kbps and provides 32 channels each
supporting a data rate of 64 kbps. The lines are mainly used to connect between the PABX (Private
Automatic Branch eXchange), and the CO (Central Office) or main exchange.
The E1 standard defines the physical characteristics of a transmission path, and as such it
corresponds to the physical layer (layer 1) in the OSI model. Technologies such as ATM and others
which form layer 2 are able to pass over E1 lines, making E1 one of the fundamental technologies
used within telecommunications.
A similar standard to E1, known as T1 has similar characteristics, but it is widely used in North
America. Often equipment used for these technologies, e.g. test equipment may be used for both,
and the abbreviation E1/T1 may be seen.
E1 beginnings
The life of these standards started back in the early 1960s, when Bell Laboratories, where the transistor had been invented some years earlier, developed a voice multiplexing system to make better use of the lines that were required and to improve on the performance of the analogue techniques then in use. The first step of the process converted each signal into a digital format comprising a 64 kbps data stream. The next stage was to assemble twenty-four of these data streams into a framed data stream with an overall data rate of 1.544 Mbps. This structured signal was called DS1, but it is almost universally referred to as T1.
In Europe, the basic scheme was taken by what was then the CCITT and developed to fit European requirements better. This resulted in the development of the scheme known as E1. This has provision for 30 voice channels and runs at an overall data rate of 2.048 Mbps. In Europe, E1 refers to both the formatted version and the raw data rate.
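The T1 and E1 rates both follow directly from 8 kHz voice sampling: each channel contributes one 8-bit slot per 125 microsecond frame. The arithmetic can be written out as:

```python
# Both carriers send 8000 frames per second (one frame per 125 us,
# set by the 8 kHz voice sampling rate), with 8 bits per channel slot.
FRAMES_PER_SECOND = 8000
BITS_PER_SLOT = 8

# Each channel: 8 bits x 8000 frames/s = 64 kbps
per_channel = BITS_PER_SLOT * FRAMES_PER_SECOND

# E1: 32 slots of 8 bits per frame
e1_rate = 32 * BITS_PER_SLOT * FRAMES_PER_SECOND

# T1/DS1: 24 slots of 8 bits plus 1 framing bit per frame
t1_rate = (24 * BITS_PER_SLOT + 1) * FRAMES_PER_SECOND
```

This reproduces the figures in the text: 64 kbps per channel, 2.048 Mbps for E1 and 1.544 Mbps for T1.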
E1 basics
An E1 link runs over two pairs of conductors, normally coaxial cable, and uses a nominal 2.4 volt signal. The signalling data rate is 2.048 Mbps full duplex, providing the full data rate in both directions.
For E1, the signal is split into 32 channels, each of 8 bits. These channels have their own time division multiplexed slots, which are transmitted sequentially; the complete transmission of the 32 slots makes up a frame. These time slots are designated TS0 to TS31 and they are allocated to different purposes:
Time slot 0 is reserved for framing purposes, and alternately transmits a fixed pattern. This allows
the receiver to lock onto the start of each frame and match up each channel in turn. The standards
allow for a full Cyclic Redundancy Check to be performed across all bits transmitted in each frame.
TS16 is reserved for signalling data, including control, call setup and teardown. These functions are accomplished using standard protocols, including Channel Associated Signalling (CAS), where a set of bits is used to replicate opening and closing the circuit. Tone signalling may also be used, and this is passed through on the voice circuits themselves. More recent systems use Common Channel Signalling (CCS) such as ISDN or Signalling System 7 (SS7), which sends short encoded messages containing call information such as the caller ID.
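The timeslot allocation described above can be summarised in a few lines of code. This is a sketch of the conventional channelised-voice E1 layout only; unstructured or CCS-only E1 links may assign the slots differently.

```python
def e1_slot_purpose(ts: int) -> str:
    """Return the conventional use of a timeslot on a channelised voice E1."""
    if not 0 <= ts <= 31:
        raise ValueError("E1 has timeslots TS0..TS31")
    if ts == 0:
        return "framing"      # frame alignment and alarm bits
    if ts == 16:
        return "signalling"   # CAS, or CCS such as SS7 / ISDN
    return "voice/data"       # one 64 kbps channel

# With TS0 and TS16 reserved, 30 usable 64 kbps channels remain.
voice_channels = sum(e1_slot_purpose(ts) == "voice/data" for ts in range(32))
```

This is why an E1, although it carries 32 timeslots, is described as providing 30 voice channels.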
Several options are specified in the original CEPT standard for the physical transmission of data.
However an option or standard known as HDB3 (High-Density Bipolar-3 zeros) is used almost
exclusively.
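The HDB3 scheme is a variant of AMI (alternate mark inversion) in which any run of four zeros is replaced by a substitution pattern so that the line never loses clock content. The sketch below shows the core rule: a run of four zeros becomes 000V or B00V, where V is a "violation" pulse repeating the previous pulse polarity, and the B00V form is chosen when an even number of pulses has occurred since the last violation so that successive violations alternate in polarity and preserve DC balance. This is an illustrative encoder, not a tested line-coding implementation.

```python
def hdb3_encode(bits):
    """Sketch of HDB3 line coding: AMI with runs of four zeros substituted."""
    out = []
    last = -1        # polarity of the most recent pulse (B or V)
    since_v = 0      # pulses transmitted since the last violation
    zeros = 0        # current run length of zeros
    for b in bits:
        if b:
            last = -last              # ordinary AMI pulse alternates polarity
            out.append(last)
            since_v += 1
            zeros = 0
        else:
            out.append(0)
            zeros += 1
            if zeros == 4:
                if since_v % 2 == 0:
                    last = -last      # B00V: balancing pulse B ...
                    out[-4] = last    # ... at the start of the run
                out[-1] = last        # violation V repeats the last polarity
                since_v = 0
                zeros = 0
    return out
```

For example, a one followed by four zeros encodes as +1 0 0 0 +1: the trailing pulse has the same polarity as the preceding one, which an AMI receiver recognises as a deliberate violation marking a zero substitution.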
Future
E1 and also T1 are well established for telecommunications use. However with new technologies
such as ADSL, DSL, and the other IP based systems that are now being widely deployed, these will
spell the end of E1 and T1. Nevertheless they have given good service over many years, and they
will remain in use as a result of this wide deployment for some years to come.
Erlang basics
The Erlang is the basic unit of telecommunications traffic intensity, representing continuous use of one circuit, and it is given the symbol "E". It is effectively call intensity in call minutes per sixty minutes. In general a period of one hour is used, but the Erlang is actually a dimensionless unit because the dimensions cancel out (i.e. minutes per minute).
The number of Erlangs is easy to deduce in a simple case. If a resource carries one Erlang, this is equivalent to one continuous call over the period of an hour. Alternatively, if two calls were each in progress for fifty percent of the time, this would also equal one Erlang (1E); similarly, a radio channel that is used for fifty percent of the time carries a traffic level of half an Erlang (0.5E). From this it can be seen that an Erlang, E, may be thought of as a use multiplier, where 100% use is 1E, 200% is 2E, 50% use is 0.5E and so forth.
Interestingly, for many years AT&T and Bell Canada measured traffic in another unit called the CCS, or 100 call-seconds. If figures in CCS are encountered, it is a simple conversion to change CCS to Erlangs: simply divide the figure in CCS by 36 to obtain the figure in Erlangs.
• Erlang B: The Erlang B figure is used to work out how many lines are required from a knowledge of the traffic figure during the busiest hour. It assumes that any blocked calls are cleared immediately, and it is the figure most commonly used in telecommunications capacity calculations.
• Extended Erlang B: The Extended Erlang B is similar to Erlang B, but it can be used to
factor in the number of calls that are blocked and immediately tried again.
• Erlang C: The Erlang C model assumes that not all calls may be handled immediately
and some calls are queued until they can be handled.
Erlang B
It is particularly important to understand the traffic volumes at peak times of the day.
Telecommunications traffic, like many other commodities, varies over the course of the day, and
also the week. It is therefore necessary to understand the telecommunications traffic at the peak
times of the day and to be able to determine the acceptable level of service required. The Erlang B
figure is designed to handle the peak or busy periods and to determine the level of service
required in these periods.
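The Erlang B blocking probability has a well-known numerically stable recurrence, B(0) = 1 and B(n) = A·B(n-1) / (n + A·B(n-1)), which avoids computing the large factorials in the closed-form expression directly. A minimal sketch, including the CCS conversion mentioned earlier:

```python
def erlang_b(traffic_erlangs: float, lines: int) -> float:
    """Blocking probability from the Erlang B formula.

    Uses the standard recurrence B(0) = 1,
    B(n) = A*B(n-1) / (n + A*B(n-1)), which is numerically stable
    and avoids overflowing factorials for large line counts.
    """
    b = 1.0
    for n in range(1, lines + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

def ccs_to_erlangs(ccs: float) -> float:
    """1 Erlang = 3600 call-seconds = 36 CCS."""
    return ccs / 36.0
```

For example, offering 2 E of busy-hour traffic to only 2 lines gives a blocking probability of 0.4, i.e. forty percent of call attempts would be blocked; a dimensioning exercise would add lines until the blocking falls below the target grade of service.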
Erlang C
The Erlang C model is used by call centres to determine how many staff or call stations are
needed, based on the number of calls per hour, the average duration of call and the length of time
calls are left in the queue. The Erlang C figure is somewhat more difficult to determine because
there are more interdependent variables. The Erlang C figure, is nevertheless very important to
determine if a call centre is to be set up, as callers do not like being kept waiting interminably, as
so often happens.
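The Erlang C queueing probability can be derived from the Erlang B result, which keeps the calculation simple: C = N·B / (N - A·(1 - B)), valid only while the offered traffic A is below the number of agents N. A minimal sketch:

```python
def erlang_b(a: float, n: int) -> float:
    """Erlang B blocking probability via the stable recurrence."""
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

def erlang_c(a: float, n: int) -> float:
    """Probability that an arriving call must queue (Erlang C).

    Derived from Erlang B as C = n*B / (n - a*(1 - B)).
    Only valid for a < n: offered traffic below the number of agents,
    otherwise the queue grows without bound.
    """
    if a >= n:
        raise ValueError("offered traffic must be below the number of agents")
    b = erlang_b(a, n)
    return n * b / (n - a * (1.0 - b))
```

For example, 2 E of offered traffic handled by 3 agents gives a queueing probability of 4/9, i.e. roughly 44% of callers would have to wait.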
Erlang summary
The Erlang formulas and the concepts put forward by Erlang are still an essential part of
telecommunications network planning these days. As a result, telecommunications engineers
should have a good understanding of the Erlang and the associated formulae.
Despite the widespread use of the Erlang concepts and formulae, it is necessary to remember that there are limitations to their use, because the Erlang formulas make assumptions. Erlang B assumes that callers who receive a busy tone will not immediately try again, and Erlang C assumes that callers will not hold on indefinitely. It is also worth remembering that the Erlang formulas are based on statistics that strictly hold only for an infinite number of traffic sources. However, for most cases a total of ten sources is adequate to give sufficiently accurate results.
The Erlang is a particularly important element of telecommunications theory, and it is a
cornerstone of many areas of telecommunications technology today. However one must be aware
of its limitations and apply the findings of any work using Erlangs, the Erlang B and Erlang C
formulas or functions with a certain amount of practical knowledge.
• Interconnecting media: The media through which the signals propagate is of great
importance within the Ethernet network system. It governs the majority of the properties
that determine the speed at which the data may be transmitted. There are a number of
options that may be used:
o Coaxial cable: This was one of the first types of interconnecting media to be used
for Ethernet. Typically the characteristic impedance was around 110 ohms and
therefore the cables normally used for radio frequency applications were not
applicable.
o Twisted Pair Cables: Two types of twisted pair may be used: Unshielded Twisted Pair (UTP) or Shielded Twisted Pair (STP). Generally the shielded types are better as they limit stray pickup more and therefore data errors are reduced.
o Fibre optic cable: Fibre optic cable is being used increasingly as it provides very
high immunity to pickup and radiation as well as allowing very high data rates to
be communicated.
• Network nodes: The network nodes are the points to and from which the communication takes place. The network nodes also fall into categories:
o Data Terminal Equipment - DTE: These devices are either the source or
destination of the data being sent. Devices such as PCs, file servers, print servers
and the like fall into this category.
o Data Communications Equipment - DCE: Devices that fall into this category
receive and forward the data frames across the network, and they may often be
referred to as 'Intermediate Network Devices' or Intermediate Nodes. They include
items such as repeaters, routers, switches or even modems and other
communications interface units.
• Point to point: This is the simplest configuration as only two network units are used. It
may be a DTE to DTE, DTE to DCE, or even a DCE to DCE. In this simple structure the
cable is known as the network link. Links of this nature are used to transport data from
one place to another and where it is convenient to use Ethernet as the transport
mechanism.
• Coaxial bus: This type of Ethernet network is rarely used these days. The systems used
a coaxial cable where the network units were located along the length of the cable. The
segment lengths were limited to a maximum of 500 metres, and it was possible to place
up to 1024 DTEs along its length. Although this form of network topology is not installed
these days, a very few legacy systems might just still be in use.
• Star network: This type of Ethernet network has been the dominant topology since the
early 1990s. It consists of a central network unit, which may be what is termed a multi-
port repeater or hub, or a network switch. All the connections to other nodes radiate out
from this and are point to point links.
Summary
Despite the fact that Ethernet has been in use for many years, it is still a growing standard. During its life, the speed of Ethernet systems has been increased, and new optical fibre based Ethernet systems are now being introduced. As the standard is being kept up to date, Ethernet is likely to remain in use for many years to come.
Ethernet terminology
There is a convention for describing the different forms of Ethernet. For example, 10Base-T and 100Base-T are widely seen in technical articles and literature. The designator consists of three parts:
• The first number (typically one of 10, 100, or 1000) indicates the transmission speed in
megabits per second.
• The second term indicates transmission type: BASE = baseband; BROAD = broadband.
• The last number indicates segment length. A 5 means a 500-meter (500-m) segment
length from original Thicknet. In the more recent versions of the IEEE 802.3 standard,
letters replace numbers. For example, in 10BASE-T, the T means unshielded twisted-pair
cables. Further numbers indicate the number of twisted pairs available. For example in
100BASE-T4, the T4 indicates four twisted pairs.
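The three-part convention lends itself to a simple parser. The sketch below splits a designator into speed, transmission type and segment/medium code; the function name and return format are illustrative only.

```python
import re

def parse_ethernet_designator(name: str):
    """Split a designator such as '10BASE-T' or '100BASE-T4' into its
    three parts: speed in Mbps, transmission type (BASE/BROAD) and the
    segment-length or medium code."""
    m = re.fullmatch(r"(\d+)(BASE|BROAD)-?(\w+)", name.upper())
    if m is None:
        raise ValueError(f"not an Ethernet designator: {name!r}")
    speed, kind, medium = m.groups()
    return int(speed), kind, medium
```

So "100Base-T4" parses as speed 100 Mbps, baseband transmission, medium code T4 (four twisted pairs), while the older "10BASE5" yields medium code 5, the 500-metre Thicknet segment length.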
Summary
The Ethernet IEEE 802.3 standards are continually being updated to ensure that the generic
standard keeps pace with constant advance of technology and the growing needs of the users. As
a result, IEEE 802.3, Ethernet is still at the forefront of network communications technology, and it
appears it will retain this position of dominance for many years to come. In addition to the
different IEEE 802.3 standards, the terminology used to define the different flavours is also widely
used for defining which Ethernet variant is used.
• Header
o Preamble (PRE) - This is seven bytes long and it consists of a pattern of alternating
ones and zeros, and this informs the receiving stations that a frame is starting as
well as enabling synchronisation. (10 Mbps Ethernet)
o Start Of Frame delimiter (SOF) - This consists of one byte and contains an
alternating pattern of ones and zeros but ending in two ones.
o Destination Address (DA) - This field is six bytes long and contains the address of the station for which the data is intended. The left-most bit indicates whether the destination is an individual address or a group address: an individual address is denoted by a zero, while a one indicates a group address. The next bit into the DA indicates whether the address is globally administered or local: the bit is a zero if the address is globally administered and a one if it is locally administered. The remaining 46 bits are used for the destination address itself.
o Source Address (SA) - The source address consists of six bytes, and it is used to
identify the sending station. As it is always an individual address the left most bit
is always a zero.
o Length / Type - This field is two bytes in length. It provides MAC information and indicates the number of client data bytes contained in the data field of the frame. It may also indicate the frame ID type if the frame is assembled using an optional format. (IEEE 802.3 only).
• Payload
o Data - This block contains the payload data and it may be up to 1500 bytes long. If
the length of the field is less than 46 bytes, then padding data is added to bring its
length up to the required minimum of 46 bytes.
• Trailer
o Frame Check Sequence (FCS) - This field is four bytes long. It contains a 32 bit
Cyclic Redundancy Check (CRC) which is generated over the DA, SA, Length / Type
and Data fields.
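The field layout above, together with the 46-byte padding rule, can be sketched as a small frame builder. The preamble and SOF are omitted here since they are a physical-layer synchronisation pattern rather than part of the addressed frame; `zlib.crc32` is used because it implements the same CRC-32 polynomial as the Ethernet FCS. This is an illustrative sketch, not a production frame encoder.

```python
import zlib

def build_frame(dest: bytes, src: bytes, payload: bytes) -> bytes:
    """Assemble DA + SA + Length + padded data + FCS (preamble/SOF omitted)."""
    assert len(dest) == 6 and len(src) == 6
    length = len(payload).to_bytes(2, "big")   # IEEE 802.3 length field
    data = payload.ljust(46, b"\x00")          # pad short payloads to 46 bytes
    body = dest + src + length + data
    # CRC-32 over DA, SA, Length and Data; transmitted least significant
    # byte first.
    fcs = zlib.crc32(body).to_bytes(4, "little")
    return body + fcs
```

A two-byte payload still produces a 64-byte frame (6 + 6 + 2 + 46 + 4), which is the familiar Ethernet minimum frame size.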
Half-duplex transmission
This access method involves the use of CSMA/CD and it was developed to enable several stations
to share the same transport medium without the need for switching, network controllers or
assigned time slots. Each station is able to determine when it is able to transmit and the network
is self organising.
The CSMA/CD protocol used for Ethernet and a variety of other applications falls into three
categories. The first is Carrier Sense. Here each station listens on the network for traffic and it can
detect when the network is quiet. The second is the Multiple Access aspect where the stations are
able to determine for themselves whether they should transmit. The final element is the Collision
Detect element. Even though stations may find the network free, it is still possible that two
stations will start to transmit at virtually the same time. If this happens then the two sets of data
being transmitted will collide. If this occurs then the stations can detect this and they will stop
transmitting. They then back off a random amount of time before attempting a retransmission.
The random delay is important as it prevents the two stations starting to transmit together a
second time.
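The random delay after a collision follows the truncated binary exponential backoff rule: after the n-th successive collision a station waits a random whole number of slot times drawn from 0 to 2^min(n,10) - 1. A minimal sketch of that rule:

```python
import random

SLOT_TIME_BITS = 512  # one slot time = 512 bit times for 10/100 Mbps Ethernet

def backoff_slots(attempt: int) -> int:
    """Truncated binary exponential backoff: after the n-th successive
    collision, wait a random number of slot times in [0, 2**min(n, 10) - 1]."""
    return random.randrange(2 ** min(attempt, 10))
```

Because each colliding station draws its delay independently, the chance that both pick the same slot again halves with every retry, which is what breaks the tie between them.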
Note: According to section 3.3 of the IEEE 802.3 standard, each octet of the Ethernet frame, with
the exception of the FCS, is transmitted low-order bit first.
Full duplex
Another option that is allowed by the Ethernet MAC is full duplex with transmission in both
directions. This is only allowable on point-to-point links, and it is much simpler to implement than
using the CSMA/CD approach as well as providing much higher transmission throughput rates
when the network is being used. Not only is there no need to schedule transmissions when no
other transmissions are underway, as there are only two stations in the link, but by using a full
duplex link, full rate transmissions can be undertaken in both directions, thereby doubling the
effective bandwidth.
Ethernet addresses
Every Ethernet network interface card (NIC) is given a unique identifier called a MAC address. This
is assigned by the manufacturer of the card and each manufacturer that complies with IEEE
standards can apply to the IEEE Registration Authority for a range of numbers for use in its
products.
The MAC address comprises a 48-bit number. Within the number, the first 24 bits identify the manufacturer; this is known as the manufacturer ID or Organizationally Unique Identifier (OUI) and is assigned by the registration authority. The second half of the address is assigned by the manufacturer and is known as the extension or board ID.
The MAC address is usually programmed into the hardware so that it cannot be changed. Because
the MAC address is assigned to the NIC, it moves with the computer. Even if the interface card
moves to another location across the world, the user can be reached because the message is sent
to the particular MAC address.
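The OUI/extension split, together with the individual/group and global/local address bits described in the frame format section, can be illustrated with a short decoder. The function name and returned keys are illustrative only.

```python
def decode_mac(mac: str) -> dict:
    """Split a MAC address into its OUI and extension and decode the two
    flag bits carried in the first octet."""
    octets = bytes(int(part, 16) for part in mac.split(":"))
    if len(octets) != 6:
        raise ValueError("a MAC address is exactly 6 octets")
    return {
        "oui": octets[:3].hex(":"),        # manufacturer ID (first 24 bits)
        "extension": octets[3:].hex(":"),  # board ID assigned by the maker
        "multicast": bool(octets[0] & 0x01),             # I/G bit
        "locally_administered": bool(octets[0] & 0x02),  # U/L bit
    }
```

For example, 00:1A:2B:3C:4D:5E decodes as OUI 00:1a:2b with a globally administered, individual (unicast) address.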
100Base-T overview
100Base-T Ethernet, also known as Fast Ethernet, is defined within the 802.3 family of standards as 802.3u. Like other flavours of Ethernet, 100Base-T (Fast Ethernet) is a shared media LAN. All
the nodes within the network share the 100 Mbps bandwidth. Additionally it conforms to the same
basic operational techniques as used by other flavours of Ethernet. In particular it uses the
CSMA/CD access method, but there are some minor differences in the way the overall system
operates.
The designation for 100Base-T is derived from the standard format for Ethernet connections. The first figure designates the speed in Mbps, "Base" indicates that the system operates at baseband, and the following letters indicate the cable or transfer medium.
• 1000Base-CX This was intended for connections over short distances up to 25 metres
per segment and using a balanced shielded twisted pair copper cable. However it was
succeeded by 1000Base-T.
• 1000Base-LX This is a fiber optic version of the standard that uses a long wavelength.
• 1000Base-SX This is a fiber optic version of the standard that operates over multi-mode fiber using a 850 nanometer, near infrared (NIR) light wavelength.
• 1000Base-T Also known as IEEE 802.3ab, this is a standard for Gigabit Ethernet over
copper wiring, but requires Category 5 (Cat 5) cable as a minimum.
The specification for Gigabit Ethernet provides for a number of requirements to be met. These can
be summarised as the points below:
• Provide for half and full duplex operation at speeds of 1000 Mbps.
• Use the 802.3 Ethernet frame formats.
• Use the CSMA/CD access method with support for one repeater per collision domain.
• Provide backward compatibility with 10BASE-T and 100BASE-T technologies.
Like 10Base-T and 100Base-T, the predecessors of Gigabit Ethernet, the system is a physical
(PHY) and media access control (MAC) layer technology, specifying the Layer 2 data link layer of
the OSI protocol model. It complements upper-layer protocols TCP and IP, which specify the Layer
4 transport and Layer 3 network portions and enable communications between applications.
Practical aspects
Gigabit Ethernet, 1GE, has been developed with the idea of using ordinary Cat 5 cables. However, several companies recommend the use of higher-spec Cat 5e cables where Gigabit Ethernet applications are envisaged. Although slightly more expensive, these Cat 5e cables offer improved crosstalk and return loss performance, meaning they are less susceptible to noise. When data is being passed at very high rates, there is always the possibility that electrical noise can cause problems. The use of Cat 5e cables may improve performance, particularly in a noisier electrical environment or over longer runs.
• Cat-1: This is not recognised by the TIA/EIA. It is the form of wiring that is used for standard telephone (POTS) wiring, or for ISDN.
• Cat-2: This is not recognised by the TIA/EIA. It was the form of wiring that was used for 4 Mbit/s token ring networks.
• Cat-3: This cable is defined in TIA/EIA-568-B. It is used for data networks employing frequencies up to 16 MHz. It was popular for use with 10 Mbps Ethernet networks (10Base-T), but has now been superseded by Cat-5 cable.
• Cat-4: This cable is not recognised by the TIA/EIA. However it can be used for networks carrying frequencies up to 20 MHz. It was often used on 16 Mbps token ring networks.
• Cat-5: This is not recognised by the TIA/EIA. It is the cable that is widely used for 100Base-T and 1000Base-T networks, as it provides the performance to allow data at 100 Mbps and slightly more (125 MHz for 1000Base-T).
• Cat-5e: This form of cable is recognised by the TIA/EIA and is defined in TIA/EIA-568-B. It has a slightly higher frequency specification than Cat-5 cable, as the performance extends up to 125 MHz. It can be used for 100Base-T and 1000Base-T (Gigabit Ethernet).
• Cat-6: This cable is defined in TIA/EIA-568-B. It provides more than double the performance of Cat-5 and Cat-5e cables, supporting frequencies up to 250 MHz.
• Cat-7: This is an informal number for ISO/IEC 11801 Class F cabling. It comprises four individually shielded pairs inside an overall shield. It is aimed at applications where transmission of frequencies up to 600 MHz is required.
Further descriptions of Cat-5 and Cat-5e cables are given below as these are widely used for
Ethernet networking applications today.
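The categories above can be summarised programmatically. The sketch below is a simple Python lookup, with the maximum frequency figures taken from the list above, showing how one might check whether a given cable grade meets a bandwidth requirement:

```python
# Maximum specified transmission frequency in MHz for each
# recognised cable category (figures as quoted above).
CABLE_CATEGORIES = {
    "Cat-3": 16,
    "Cat-4": 20,
    "Cat-5": 100,
    "Cat-5e": 100,
    "Cat-6": 250,
    "Cat-7": 600,
}

def supports(category: str, required_mhz: float) -> bool:
    """True if the category is rated at or above the required frequency."""
    return CABLE_CATEGORIES.get(category, 0) >= required_mhz

print(supports("Cat-5e", 100))   # suitable for 1000Base-T
print(supports("Cat-3", 100))    # Cat-3 tops out at 16 MHz
```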
Ethernet Cat 5 cable
Cat 5 cable, or to give it its full name, Category 5 cable, is the current preferred cable type for
LAN network and telephone wiring where twisted pair cabling is required. Cat 5 cables consist of
an unshielded cable comprising four twisted pairs, typically of 24 gauge wire. The terminating
connector is an RJ-45 jack, and in view of this these Cat 5 network cables are often referred to as
RJ45 network cables or RJ45 patch cables. Certified Cat-5 cables will have the wording "Cat-5"
written on the side; as they conform to EIA/TIA 568A-5, this is written on the outer sheath. It is
always best to use the appropriate network cables when setting up a network, as faulty or
non-standard cables can cause problems that may be difficult to identify and trace.
Cat 5 network cable is now the standard form of twisted pair cable and supersedes Cat 3. Cat 5
cables can carry signalling rates up to 125 MBaud, thereby enabling them to support 100Base-T,
which has a maximum data speed of 100 Mbps, whereas Cat-3 cable was only designed to be
compatible with 10Base-T. Cat 5 cable is able to support working up to lengths of 100 metres
at the full data rate.
Where it is necessary to operate at higher speeds, as in the case of Gigabit Ethernet, an enhanced
version of Cat 5 cable known as Cat 5e is often recommended, although Cat 5 is specified to
operate with Gigabit Ethernet, 1000Base-T. Alternatively Cat 5e can be used with 100Base-T to
enable greater lengths (up to 350 metres) to be achieved.
The wires and connections within the Cat 5 or Cat 5e cable vary according to the application. A
summary of the signals carried and the relevant wires and connections is given in the table below:
PoE Development
With Ethernet now an established standard, one of the limitations of Ethernet related equipment
was that it required power and this was not always easily available. As a result some
manufacturers started to offer solutions whereby power could be supplied over the Ethernet cables
themselves. To prevent a variety of incompatible Power over Ethernet, PoE, solutions appearing on
the market, and the resulting confusion, the IEEE began their standardisation process in 1999.
A variety of companies were involved in the development of the IEEE standard. The result was the
IEEE802.3af standard that was approved for release on 12 June 2003. Although some products
were released before this date and may not fully conform to the standard, most products available
today will conform to it, especially if they quote compliance with 802.3af.
A further standard, designated IEEE 802.3at was released in 2009 and this provided for several
enhancements to the original IEEE 802.3af specification.
PoE overview
The standard allows for a supply of 48 volts with a maximum current of 400 milliamps to be
provided over two of the available four pairs used on Cat 3 or Cat 5 cable. While this sounds very
useful with a maximum available power of 19.2 watts, the losses in the system normally reduce
this to just under 13 watts.
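The figures quoted above can be checked directly. The 12.95 W value used below is the power that 802.3af guarantees at the powered device, which accounts for the "just under 13 watts":

```python
# Worked version of the figures above: 48 V at up to 400 mA gives
# 19.2 W at the source; cable and conversion losses bring the power
# guaranteed at the powered device down to just under 13 W.
voltage = 48.0        # volts
max_current = 0.400   # amps

source_power = voltage * max_current
print(source_power)           # 19.2 W available at the source

pd_power = 12.95              # W guaranteed at the powered device (802.3af)
loss = source_power - pd_power
print(round(loss, 2))         # power budget consumed by cable and conversion losses
```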
The standard Cat 5 cable contains four twisted pairs, of which only two are used by 10Base-T and
100Base-T systems. The standard allows for two options for Power over Ethernet: one uses the
spare twisted pairs, while the second option uses the pairs carrying the data. Only one option may
be used at a time.
When using the spare twisted pairs for the supply, the pair on pins 4 and 5 is connected together
and normally used for the positive supply. The pair connected to pins 7 and 8 of the connector is
used for the negative supply. While this is the standard polarity, the specification actually
allows for either polarity to be used.
When the pairs used for carrying the data are employed, it is possible to apply DC power to the
centre taps of the isolation transformers that terminate the data wires without disrupting
the data transfer. In this mode of operation the pair on pins 3 and 6 and the pair on pins 1 and 2
can be of either polarity.
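The pin allocations for the two powering options described above can be summarised as follows (an illustrative Python mapping; the polarities shown for the spare-pair option are the normal ones, though the standard permits either):

```python
# Pin usage for the two 802.3af powering options described above.
SPARE_PAIR_OPTION = {           # power over the unused pairs
    "positive": (4, 5),         # normal polarity; either is permitted
    "negative": (7, 8),
}

DATA_PAIR_OPTION = {            # power over the data-carrying pairs
    "pair_a": (1, 2),           # polarity not fixed by the standard
    "pair_b": (3, 6),
}

# A powered device therefore rectifies whatever it receives:
print(SPARE_PAIR_OPTION["positive"])   # pins carrying the positive supply
```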
As the supply reaching the powered device can be of either polarity a full wave rectifier (bridge
rectifier) is used to ensure that the device consuming the power receives the correct polarity
power.
Within the 802.3af standard two types of device are described:
• Power Sourcing Equipment, PSE This is the equipment that supplies power to the
Ethernet cable.
• Powered Devices, PD This is equipment that interfaces to the Ethernet cable and is
powered by the supply on the cable. These devices may range from switches and hubs to
other items including webcams, etc.
Powered Device, PD
The powered device must be able to operate within the confines of the Power over Ethernet
specification. It receives a nominal 48 volts from the cable, and must be able to accept power from
either option, i.e. over either the spare or the data pairs. Additionally the 48 volts supplied is too
high to drive the electronics directly, and accordingly an isolated DC-DC converter is used to
transform the 48 V to a lower voltage. This also enables 1500 V isolation to be provided for
safety reasons.
PoE Summary
Power over Ethernet, PoE, defined in IEEE 802.3af and enhanced under IEEE 802.3at,
provides a particularly valuable means of remotely supplying and controlling equipment that may
be connected to an Ethernet network or system. PoE enables units to be powered in situations
where it may not be convenient to run a new power supply to the unit. While there are
limitations to the power that can be supplied, the intention is that only small units are likely to
need powering in this way. Larger units can be powered using more conventional means.
The different elements of the system will vary according to the application. Systems used for lower
capacity links, possibly for local area networks will employ somewhat different techniques and
components to those used by network providers that provide extremely high data rates over long
distances. Nevertheless the basic principles are the same whatever the system.
In the system the transmitter or light source generates a light stream modulated to enable it to
carry the data. Conventionally a pulse of light indicates a "1" and the absence of light indicates a
"0". This light is transmitted down a very thin fibre of glass or other suitable material to be
presented at the receiver or detector. The detector converts the pulses of light into equivalent
electrical pulses. In this way the data can be transmitted as light over great distances.
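The on-off scheme described above can be sketched very simply, with light power represented as 1.0 for a pulse and 0.0 for darkness:

```python
def encode(bits):
    """Transmitter side: map data bits to light power levels
    (1.0 = pulse of light for a "1", 0.0 = no light for a "0")."""
    return [1.0 if b else 0.0 for b in bits]

def decode(levels, threshold=0.5):
    """Receiver side: convert detected optical power back into bits."""
    return [1 if p > threshold else 0 for p in levels]

data = [1, 0, 1, 1, 0]
received = decode(encode(data))
print(received)   # the original bit pattern is recovered
```

Real receivers compare the detected power against a threshold in just this way, which is why attenuation along the fibre matters: the pulses must still be distinguishable from the dark level at the far end.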
Receivers
Light travelling along a fibre optic cable needs to be converted into an electrical signal so that it
can be processed and the data that is carried can be extracted. The component that is at the heart
of the receiver is a photo-detector. This is normally a semiconductor device and may be a p-n
junction, a p-i-n photo-diode or an avalanche photo-diode. Photo-transistors are not used because
they do not have sufficient speed.
Once the optical signal from the fibre optic cable has been applied to the photo-detector and
converted into an electrical format it can be processed to recover the data which can then be
passed to its final destination.
Summary
Fibre optic transmission of data is generally used for long distance telecommunications network
links and for high speed local area networks. Currently fibre optics is not used for the delivery of
services to homes, although this is a long term aim for many telcos. By using optical fibre cabling
here, the available bandwidth for new services would be considerably higher and the possibility of
greater revenues would increase. Currently the cost of this is not viable, although it is likely to
happen in the medium term.
Step index cable refers to cable in which there is a step change in the refractive index between
the core and the cladding. This type is the more commonly used. In the other type, graded index
cable, the refractive index, as indicated by the name, changes more gradually over the diameter
of the fibre. Using this type of cable, the light is refracted towards the centre of the cable.
Optical fibres or optical fibers can also be split into single mode fibre, and multimode fibre.
Mention of both single mode fiber and multi-mode fiber is often seen in the literature.
Single mode fiber This form of optical fibre is the type that is virtually exclusively used these
days. It is found that if the diameter of the optical fibre is reduced to a few wavelengths of light,
then the light can only propagate in a straight line and does not bounce from side to side of the
fibre. As the light can only travel in this single mode, this type of cable is called a single mode
fibre. Typically single mode fibre cores are around eight to ten microns in diameter, much smaller
than a hair.
Single mode fiber does not suffer from multi-modal dispersion and this means that it has a much
wider bandwidth. The main limitation to the bandwidth is what is termed chromatic dispersion,
where different colours, i.e. wavelengths, propagate at different speeds. Chromatic dispersion of
the optical fibre cable occurs within the centre of the fibre itself. It is found that it is negative for
short wavelengths and changes to become positive at longer wavelengths. As a result there is a
wavelength for single mode fiber where the dispersion is zero. This generally occurs at a
wavelength of around 1310 nm and this is the reason why this wavelength is widely used.
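This behaviour can be illustrated with the standard single mode fibre dispersion approximation D(λ) = (S0/4)(λ − λ0⁴/λ³). The values S0 = 0.092 ps/(nm²·km) and λ0 = 1310 nm used below are typical assumed figures for conventional single mode fibre, not taken from the text:

```python
S0 = 0.092        # dispersion slope, ps/(nm^2 * km)  (assumed typical value)
LAMBDA0 = 1310.0  # zero-dispersion wavelength, nm

def dispersion(wavelength_nm: float) -> float:
    """Chromatic dispersion in ps/(nm*km) for conventional
    single mode fibre, using the standard approximation."""
    return (S0 / 4.0) * (wavelength_nm - LAMBDA0**4 / wavelength_nm**3)

print(round(dispersion(1310), 3))  # ~0: the zero-dispersion wavelength
print(round(dispersion(1270), 1))  # negative at shorter wavelengths
print(round(dispersion(1550), 1))  # positive at longer wavelengths
```

The sign change around 1310 nm matches the description above: dispersion is negative below the zero-dispersion wavelength and positive above it.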
The disadvantage of single mode fibre is that it requires high manufacturing tolerances and
this increases its cost. Against this, the fact that it offers superior performance, especially for long
runs, means that much development of single mode fiber has been undertaken to reduce the costs.
Multimode fiber This form of fibre has a greater diameter than single mode fibre, being
typically around 50 microns in diameter, and this makes it easier to manufacture than
single mode fibres.
Multimode optical fiber has a number of advantages. As it has a wider diameter than single mode
fibre it can capture light from the light source and pass it to the receiver with a high level of
efficiency. As a result it can be used with low cost light emitting diodes. In addition to this the
greater diameter means that high precision connectors are not required. However this form of
optical fibre cabling suffers from a higher level of loss than single mode fibre and in view of this its
use is more costly than might be expected at first sight. It also suffers from modal
dispersion and this severely limits the usable bandwidth. As a result it has not been widely used
since the mid 1980s; single mode fiber cable is the preferred type.
• Loss associated with the impurities There will always be some level of impurity in
the core of the optical fibre. This will cause some absorption of the light within the fibre.
One major impurity is water that remains in the fibre.
• Loss associated with the cladding When light reflects off the interface between the
cladding and the core, the light will actually travel into the core a small distance before
being reflected back. This process causes a small but significant level of loss and is one of
the main contributors to the overall attenuation of a signal along a fibre optic cable.
• Loss associated with the wavelength It is found that the level of signal attenuation
in the optical fibre depends on the wavelength used. The level increases at certain
wavelengths as a result of certain impurities.
Despite the fact that attenuation is an issue, it is nevertheless possible to transmit data along
single mode fibres for considerable distances. Lines carrying data rates up to 50 Gbps are able to
cover distances of 100 km without the need for amplification.
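As a rough illustration of such a link, assuming a typical single mode attenuation of 0.2 dB/km (an assumed figure, not quoted above):

```python
attenuation_db_per_km = 0.2   # assumed typical single mode figure at 1550 nm
distance_km = 100             # the unamplified span quoted above

total_loss_db = attenuation_db_per_km * distance_km
print(total_loss_db)          # total loss over the span, in dB

# Fraction of the launched optical power reaching the receiver:
fraction = 10 ** (-total_loss_db / 10)
print(fraction)               # only 1% of the light survives, yet the
                              # receiver can still recover the data
```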
Connector basics
The fibre optic connector basically consists of a rigid cylindrical barrel surrounded by a sleeve. The
barrel provides the mechanical means by which the connector is held in place with the mating half.
A variety of methods are used to ensure the connector is held in place, ranging from screw fittings
to latch arrangements. The main requirement is that the end of the fibre optic cable is held
accurately in place so that the maximum light transfer occurs.
As it is imperative that the optical fibre is held securely and accurately in place, connectors will
normally be designed so that the fibre is glued in place, and in addition to this strain relief is also
provided.
Fibre ends may also be polished. For single mode fibre, the ends may be polished with a slight
convex curvature so that the centres of the cables from the two connectors achieve physical
contact. This approach reduces the back reflections, although the level of loss may be slightly
higher.
• FC/PC This form of fibre optic connector is used for single-mode fiber optic cable. It
provides very accurate positioning of the single-mode fiber optic cable with respect to
transmitter (optical source) or the receiver (optical detector).
• SC This form of connector is mainly used with single-mode fiber optic cables. The
connector is simple low cost and reliable. The location and alignment is provided using a
ceramic ferrule. It also has a locking tab to enable it to be mated and removed without
fear of it accidentally falling loose.
• Plastic fiber optic cable connectors As the name implies, these fibre optic cable
connectors are only used with plastic fibre optic cabling.
• Mechanical splices
• Fusion splices
The mechanical splices are normally used when splices need to be made quickly and easily. To
undertake a mechanical fibre optic splice it is necessary to strip back the outer protective layer
on the fibre optic cable, clean it and then perform a precision cleave or cut. When cleaving
(cutting) the fibre optic cable it is necessary to obtain a very clean cut, and one in which the cut
on the fibre is exactly at right angles to the axis of the fibre.
Once cut the ends of the fibres to be spliced are placed into a precision made sleeve. They are
accurately aligned to maximise the level of light transmission and then they are clamped in place.
A clear, index matching gel may sometimes be used to enhance the light transmission across the
joint.
Mechanical fibre optic splices can take as little as five minutes to make, although the level of light
loss is around ten percent. However this level of loss is better than that which can be obtained
using a connector.
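Expressed in the decibel terms normally used for optical link budgets, the ten percent figure quoted above corresponds to roughly half a decibel:

```python
import math

# 10% of the light is lost at the splice, so 90% is transmitted.
transmitted_fraction = 0.9

# Convert the transmitted fraction to an insertion loss in dB.
loss_db = -10 * math.log10(transmitted_fraction)
print(round(loss_db, 2))   # about 0.46 dB per mechanical splice
```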
Fusion splices form the other type of fibre optic splice that can be made. This type of connection is
made by fusing or melting the two ends together. This type of splice uses an electric arc to weld
two fibre optic cables together and it requires specialised equipment to perform the splice. The
protective coating from the fibres to be spliced is removed from the ends of the fibres. The ends of
the fibre optic cable are then cut, or to give the correct term they are cleaved with a precision
cleaver to ensure that the cuts are exactly perpendicular. The next stage involves placing the two
optical fibres into a holder in the fibre optic splicer. First the ends of the cable are inspected using a
magnifying viewer. Then the ends of the fibre are automatically aligned within the fibre optic
splicer. Then the area to be spliced is cleaned of any dust often by a process using small electrical
sparks. Once complete the fibre optic splicer then uses a much larger spark to enable the
temperature of the glass in the optical fibre to be raised above its melting point and thereby
allowing the two ends to fuse together. The location of the spark and the energy it contains are
very closely controlled so that the molten core and cladding do not mix, to ensure that any light
loss in the fibre optic splice is minimised.
Once the fibre optic splice has been made, an estimate of the loss is made by the fibre optic
splicer. This is achieved by directing light through the cladding on one side and measuring the light
leaking from the cladding on the other side of the splice.
The equipment that performs these splices provides computer controlled alignment of the optical
fibres and it is able to achieve very low levels of loss, possibly a quarter of the levels of mechanical
splices. However this comes at a price, as fusion welders for fibre optic splices are very
expensive.
Semiconductor optical transmitters have many advantages. They are small, convenient, and
reliable. However, the two different types of fibre optic transmitter have very different properties
and they tend to be used in widely different applications.
LED transmitters These fibre optic transmitters are cheap and reliable. They emit only
incoherent light with a relatively wide spectrum as a result of the fact that the light is generated by
a method known as spontaneous emission. A typical LED used for optical communications may
have a spectral width in the range 30 - 60 nm. In view of this the signal will be subject to chromatic
dispersion, and this will limit the distances over which data can be transmitted.
It is also found that the light emitted from a LED is not particularly directional and this means that
it is only possible to couple it to multimode fibre, and even then the overall efficiency is low
because not all the light can be coupled into the fibre optic cable.
LEDs have significant advantages as fibre optic transmitters in terms of cost, lifetime, and
availability. They are widely produced and the technology to manufacture them is straightforward
and as a result costs are low.
Laser diode transmitters These fibre optic transmitters are more expensive and tend to be
used for telecommunications links where the cost sensitivity is nowhere near as great.
The output from a laser diode is generally higher than that available from a LED, although the
power of LEDs is increasing. Often the light output from a laser diode can be in the region of 100
mW. The light generation arises from what is termed stimulated emission and this generates
coherent light. In addition to this the output is more directional than that of a LED and this enables
much greater levels of coupling efficiency into the fibre optic cable. This also allows the use of
single mode fibre which enables much greater transmission distances to be achieved. A further
advantage of using a laser is that it has a coherent light output, meaning the light is nominally
on a single wavelength and chromatic dispersion is considerably less.
A further advantage of lasers is that they can be directly modulated with high data rates. Although
LEDs can be modulated directly, there is a lower limit to the modulation rate.
Nevertheless laser diode fibre optic transmitters have some drawbacks. They are much more
expensive than LEDs. Furthermore they are quite sensitive to temperature and to obtain the
optimum performance they need to be in a stable environment. They also do not offer the same
life as LEDs, although as much research has been undertaken into laser diode technology, this is
much less of an issue than previously.
Fibre optic transmitter summary
In view of the different characteristics that LEDs and laser diode fibre optic transmitters possess,
they are used in different applications. The table below summarises some of the chief
characteristics of the two devices.
Overall receiver
Although the photo-detector is the major element in the fibre optic receiver, there are other
elements to the whole unit. Once the light has been received by the fibre optic receiver and
converted into electronic pulses, the signals are processed by the electronics in the receiver.
Typically these will include various forms of amplification including a limiting amplifier. These serve
to generate a suitable square wave that can then be processed in any logic circuitry that may be
required.
Once in a suitable digital format the received signal may undergo further signal processing in the
form of clock recovery, etc. This will be undertaken before the data from the fibre optic receiver is
passed on.
Diode performance
One of the keys to the performance of the overall fibre optic receiver is the photodiode itself. The
response times of the diodes govern the speed of the data that can be recovered. Although
avalanche diodes provide high speed they are also more noisy and require a sufficiently high level
of signal to overcome this.
The most common type of diode used is the p-i-n diode. This type of diode gives a greater level of
conversion than a straight p-n diode as the light is converted into carriers in the region at the
junction, i.e. between the p and n regions. The presence of the intrinsic region increases this area
and hence the area in which light is converted.
Summary
The telecommunications industry as a whole is turning to IP based transport of data along with the
introduction of new multimedia services. To enable this to be achieved telecommunications
networks, whether fixed, cellular or wireless will need to be far more flexible and to achieve this it
will be necessary to implement IMS, IP Multimedia Subsystem.
ISDN Tutorial
ISDN or Integrated Services Digital Network is an international standard for end to end digital
transmission of voice, data and signalling. It allows the transmission of digital data over the
telecommunications networks, typically ordinary copper based systems, providing higher data
speeds and better quality than analogue transmission. The ISDN specifications provide a set of
protocols that enable the set up, maintenance and completion of calls.
ISDN, Integrated Services Digital Network, provides a number of significant advantages over
analogue systems:
• In its basic form it enables two simultaneous telephone calls to be made over the same
line.
• Faster call connection: it typically takes a second to make a connection rather than the
much longer delays experienced using purely analogue based systems.
• Data can be sent more reliably and faster than with analogue systems.
• Noise, distortion, echoes and crosstalk are virtually eliminated.
• The digital stream can carry any form of data, from voice and faxes to internet web pages
and data files - this gives rise to the name 'integrated services'.
ISDN Usage
ISDN is in use around the world, but with the introduction of ADSL it is facing strong competition.
The technology never gained much market share in the USA, although it is used in other countries.
In Japan it became reasonably popular in the late 1990s although it is now in decline with the
advent of ADSL. The system was also introduced in Europe where providers such as BT, France
Telecom and Deutsche Telekom introduced services.
ISDN Configurations
There are two types of channel that are found within ISDN. These are the 'B' and 'D' channels. The
B or 'bearer' channels are used to carry the payload data which may be voice and / or data, and
the D or 'delta' channel is intended for signalling and control, although it may also be used for data
under some circumstances.
Additionally there are two levels of ISDN access that may be provided. These are known as BRI
and PRI.
BRI (Basic Rate Interface) - This consists of two B channels, each of which provides a bandwidth
of 64 kbps under most circumstances. One D channel with a bandwidth of 16 kbps is also
provided. Together this configuration is often referred to as 2B+D.
The basic rate lines connect to the network using a standard twisted pair of copper wires. The data
can then be transmitted simultaneously in both directions to provide full duplex operation. The
data stream is carried as two B channesl as mentioned above, each of which carry 64 kbps (8 k
bytes per second). This data is interleaved with the D channel data and this is used for call
management: setting up, clearing down of calls, and some additional data to maintain
synchronisation and monitoring of the line.
The network end of the line is referred to as the 'Line Termination' (LT) while the user end acts as
a termination for the network and is referred to as the 'Network Termination' (NT). Within Europe
and Australia, the NT physically exists as a small connection box usually attached to a wall etc,
and it converts the two wire line (U interface) coming in from the network to four wires (S/T
interface or S bus). The S/T interface allows up to eight items or 'terminal equipments' to be
connected, although only two may be used at any time. The terminal equipments may be
telephones, computers, etc, and they are connected in what is termed a point to multipoint
configuration. In Europe the ISDN line provides up to about 1 watt of power that enables the NT to
be run, and also enables a basic ISDN phone to be used for emergency calls. In North America a
slightly different approach may be adopted in that the terminal equipment may be directly
connected to the network in a point to point configuration as this saves the cost of a network
termination unit, but it restricts the flexibility. Additionally power is not normally provided.
PRI (Primary Rate Interface) - This configuration carries a greater number of channels than the
Basic Rate Interface and has a D channel with a bandwidth of 64 kbps. The number of B channels
varies according to the location. Within Europe and Australia a configuration of 30B+D has been
adopted providing an aggregate data rate of 2.048 Mbps (E1). For North America and Japan, a
configuration of 23B+D has been adopted. This provides an aggregate data rate of 1.544 Mbps
(T1).
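The aggregate rates quoted above follow directly from the channel structure. The framing overheads used below (64 kbps for the E1 and 8 kbps for the T1) are standard figures for those line types, not stated in the text:

```python
B = 64   # bearer channel bandwidth, kbps

# Basic Rate Interface: 2B + D(16 kbps)
bri = 2 * B + 16
print(bri)          # 144 kbps of user-accessible bandwidth

# European/Australian PRI: 30B + D(64 kbps) + 64 kbps framing = E1
e1 = 30 * B + 64 + 64
print(e1 / 1000)    # 2.048 Mbps aggregate

# North American/Japanese PRI: 23B + D(64 kbps) + 8 kbps framing = T1
t1 = 23 * B + 64 + 8
print(t1 / 1000)    # 1.544 Mbps aggregate
```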
The primary rate connections utilise four wires - a pair for each direction. They are normally 120
ohm balanced lines using twisted pair cable. Primary rate connections always use a point to point
configuration.
Primary rate lines are widely used to connect to Private Branch eXchanges (PBX) in an office etc.
Typically this may be used to provide a number of POTS (Plain Old Telephone System) or basic
rate ISDN lines to the users.
Summary
Although ISDN has been overtaken by technologies such as ADSL it is nevertheless still widely
used in many areas, particularly where existing services need to be maintained, or where
compatibility needs to be guaranteed. As such it is still an important technology that will be
encountered for many years to come.
Mobile IP tutorial
Mobile IP is becoming increasingly important. Mobile IP is required because high speed data and
mobility are two key factors for today's wireless and telecommunications industry.
While high speed data is one issue, mobility is equally important. People need to take laptop
computers with them and use them anywhere as if they were working from their home network.
While it is possible to make connections reasonably easily, improvements are being put in place to
ensure full mobility and ease of use. Accordingly Mobile IP is a key element enabling this facility to
become more robust and easier to use.
As infrastructures and standards are already in place for data transfer it is necessary to adapt
them to take account of mobility and introduce Mobile IP via an existing route rather than
introducing completely new techniques. The most common services are the data services using the
Internet Protocol (IP). When using this, a user, which may be any form of node or computer, is
normally connected to a particular network or sub-network. Moving the computer from one
network or sub-network to another creates problems because routing tables need to be updated to
enable the data to reach the user at the new location.
Home operation
When connected to the base network, users are attached to their home network and all the routing
tables needed to send the data to the required destination are set up for the computer in this
location. Using their home network IP address they can move anywhere within this particular
network with no problem.
Summary
With the telecommunications scene changing rapidly, moving from a voice centred service to a
data centred service and hybrid approaches being offered to provide the optimum service, Mobile
IP is an important technique to be used to enable seamless transition from one area to the next,
and one technology to the next.
• User location and name translation - this function enables data to reach a party regardless
of location. To achieve this SIP, Session Initiation Protocol addresses are used. These are
very similar in format to email addresses, having elements such as a domain name and a
user name or phone number. Also because of their structure, they are easy to associate
with email addresses.
• Feature negotiation - as different parties may have different features that are supported it
is necessary that both ends communicate in a way that both can support. For example it
would be no use for a video enabled phone to try to send video to a voice only phone. Thus
when a link is set up all participants negotiate to agree the features that are supported.
Also when one user leaves a session, the remaining ones may renegotiate to determine
whether any new features may be supported.
• Participant management - sessions need to be managed to enable users to enter or leave
sessions. SIP provides this capability.
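A SIP address has the email-like shape described above, for example sip:alice@example.com (a hypothetical address). A minimal, illustrative parser of that shape, deliberately not RFC-complete:

```python
def parse_sip_uri(uri: str):
    """Split a simple SIP URI of the form sip:user@domain into its
    user and domain parts. Illustrative only; real SIP URIs may also
    carry ports and parameters."""
    scheme, _, rest = uri.partition(":")
    if scheme != "sip":
        raise ValueError("not a SIP URI")
    user, _, domain = rest.partition("@")
    return user, domain

print(parse_sip_uri("sip:alice@example.com"))
```

The user part may equally well be a phone number, which is what makes these addresses easy to associate with both email addresses and telephony.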
SIP elements
SIP comprises two basic elements, namely the SIP User Agent and the SIP Network Server:
• The SIP User Agent This is the component of the protocol that resides with the user. In
turn it consists of two parts: the User Agent Client (UAC) which initiates the calls and the
User Agent Server (UAS) which answers calls. It allows calls to be made using a peer to
peer client server protocol.
• SIP network server This element contains three basic parts: the SIP Stateful Server, the
SIP Stateless Server, and thirdly the SIP Redirect Server. These servers act to provide the
location of the user and accordingly direct data to the user, and they also provide name
resolution in a similar way that email addresses and domain names do on the Internet as it
is unlikely that users will remember IP addresses.
SIP also provides its own transfer mechanism which is independent of the packet layer. This
enables it to perform reliably over protocols such as UDP - a particularly useful feature under some
circumstances.
USB tutorial
USB, or the Universal Serial Bus, is now well established as an interface for computer
communications. In many areas it has completely overtaken RS232 and the parallel or Centronics
interface for printers, and it is also widely used for memory sticks, computer mice, keyboards and
for many other functions. One of the advantages of USB is its flexibility: another is the speed that
USB provides.
USB provides a sufficiently fast serial data transfer mechanism for data communications. However,
it is also possible to obtain power through the connector and this has further added to the
popularity of USB as many small computer peripherals may be powered via this. From memory
and disk drives to other applications such as small fans and coffee cup warmers, the USB port on
computers can be used for a variety of tasks.
USB evolution
The USB interface was developed as a result of the need for a communications interface that was
convenient to use and one that would support the higher data rates being required within the
computer and peripherals industries.
The first proper release of a USB specification was Version 0.7 of the specification. This occurred in
November 1994. This was followed in January 1996 by USB 1.0. USB 1.0 was widely adopted and
became the standard on many PCs as well as many printers using the standard. In addition to this
a variety of other peripherals adopted the USB interface, with small memory sticks starting to
appear as a convenient way for transferring or temporarily storing data.
With USB 1.0 well established, faster data transfer rates were required, and accordingly a new
specification, USB 2.0, was released. With the importance of USB already established, it did not
take long for the new standard to be adopted.
With USB defining its place in the market, other developments of the standard were investigated.
With the need for mobility in many areas of the electronics industry taking off, the next obvious
move for USB was to adopt a wireless interface. In doing this, wireless USB would need to retain
the same flexible approach that underpinned the success of the wired interface. In addition, the
wireless USB interface needs to be able to transfer data at rates higher than those currently
attainable with wired USB 2.0 connections. To achieve this, ultra-wideband (UWB) technology is
used.
USB capabilities
The basic concept of USB was for an interface that would be able to connect a variety of computer
peripheral devices, such as keyboards and mice, to PCs. However, since its introduction, the
applications for USB have widened and it has been used for many other purposes, including
measurement and automation.
In terms of performance, USB 1.1 enabled a maximum throughput of 12 Mbps, but with the
introduction of USB 2.0 the maximum speed is 480 Mbps.
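The practical difference between these rates is easy to quantify. The sketch below is a rough calculation that ignores protocol overhead, so real-world transfers are somewhat slower:

```python
# Rough comparison of raw USB transfer times (protocol overhead ignored).

RATES_MBPS = {"USB 1.1 full speed": 12, "USB 2.0 high speed": 480}

def transfer_seconds(size_megabytes: float, rate_mbps: float) -> float:
    """Time to move size_megabytes at rate_mbps, treating 1 MB = 8 Mbit."""
    return size_megabytes * 8 / rate_mbps

for name, rate in RATES_MBPS.items():
    # 100 MB takes about 66.7 s at 12 Mbps but only about 1.7 s at 480 Mbps
    print(f"{name}: {transfer_seconds(100, rate):.1f} s for 100 MB")
```

The 40x ratio between the two figures is why USB 2.0 was adopted so quickly for storage devices.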
In operation, the USB host automatically detects when a new device has been added. It then
requests identification from the device and configures the appropriate drivers. The bus topology
allows up to 127 devices to run concurrently on a single host controller, whereas the classic
serial port supports a single device per port. By adding hubs, more ports can be made available,
creating connections for more peripherals.
USB Standards
USB is a standard that is being updated. Since its first introduction, the standard has been
improved to meet the increasing needs of the user community. As a result there are a number of
different USB standards, but fortunately these are backwards compatible.
1. USB 1.1: This was the original widely deployed version of USB, the Universal Serial Bus,
and was released in September 1998 after a few problems with the USB 1.0 specification
released in January 1996 had been resolved. It provided a Master / Slave interface and a
tiered star topology which was capable of supporting up to 127 devices and a maximum of
six tiers of hubs. The master or "Host" device is normally a PC, with the slaves or
"Devices" linked via the cable.
One of the aims of the USB standard was to minimise the complexity within the Device by
enabling the Host to perform the processing. This meant that devices would be cheap and
readily accessible.
The cable length for USB 1.1 is limited to 5 metres, and the power consumption
specification allows each device to draw up to 500 mA, although this is limited to 100 mA
during start-up.
USB 1.1 does not allow extension cables or the inclusion of pass-through monitors (due to
timing and power limitations).
2. USB 2.0: The USB 2.0 standard is a development of USB 1.1 which was released in April
2000. The main difference compared to USB 1.1 was the increase in data transfer speed to
a "High Speed" rate of 480 Mbps. However it should be noted that even though devices
are labelled USB 2.0, they may not be able to meet the full transfer speed.
3. USB 3.0: This is an improved USB standard which was first demonstrated at the Intel
Developer Forum in September 2007. The major feature is what is termed the SuperSpeed
bus, which provides a fourth transfer mode with a signalling rate of 4.8 Gbit/s. Although
the raw throughput is around 4 Gbit/s, data transfer rates of 3.2 Gbit/s (i.e. 0.4 GByte/s)
or more after protocol overhead are deemed acceptable within the standard. The standard
is also backwards compatible with USB 2.0.
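The relationship between these throughput figures can be checked with a little arithmetic:

```python
# Checking the USB 3.0 SuperSpeed throughput figures quoted above.

signalling_gbps = 4.8                   # SuperSpeed signalling rate
usable_gbps = 3.2                       # acceptable throughput after protocol overhead
usable_gbyte_per_s = usable_gbps / 8    # 8 bits per byte

# How the usable rate compares with USB 2.0's 480 Mbps raw rate:
speedup_over_usb2 = (usable_gbps * 1000) / 480

print(usable_gbyte_per_s)               # 0.4 (GByte/s, matching the text)
print(round(speedup_over_usb2, 1))      # 6.7 (roughly 6.7x USB 2.0)
```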
PIN    FUNCTION
1      Vbus (4.75 - 5.25 V)
2      Data -
3      Data +
4      Ground
Shell  Screen
USB cable pin assignments
The connectors used for USB are designed to enable the power and ground connections to be
made first, applying power to the device before the signal lines are connected.
VoIP Tutorial
This Voice over Internet Protocol, VoIP tutorial is split into several pages each of which addresses
different aspects of Voice over IP operation and technology:
[1] Voice over IP, VoIP technology tutorial [2] VoIP protocols [3] VoIP testing and
monitoring
Voice over Internet Protocol, also called Voice over IP or simply VoIP, is having a major
impact on the telecommunications industry. VoIP technology provides advantages for both the
user and also the provider, allowing calls to be made more cheaply, as well as enabling data and
voice to be carried over the same network efficiently. In view of the way VoIP technology is being
adopted, telecommunications providers are having to adopt the new technology. Already it has
caused some impact on major businesses, and there will be more to come.
Until recently voice traffic was carried using a circuit switched approach. Here a dedicated circuit
was switched to provide a call for a user. Now with new data and Internet style technology used
for VoIP, packet data and Internet Protocol (IP) is used to enable a much more efficient use of the
available capacity.
What is VoIP?
The concept of Voice over Internet Protocol, Voice over IP, or VoIP, is quite straightforward. A VoIP
system basically consists of a number of endpoints which may be VoIP phones or computers and
an IP network.
In a VoIP system, the phone or computer acting as an endpoint consists of a few blocks. It
includes a vocoder (voice encoder / decoder) which converts the audio to and from the analogue
format into a digital format. It also compresses the encoded audio, and in the reverse direction it
decompresses the reconstituted audio. The data generated is split into packets in the required
format by the network interface card which sends them with the relevant protocol into the outside
world. Signalling and call control is also applied through this card so that calls may be set up,
pulled down, and other actions may be undertaken.
The IP network accepts the packets and provides the medium over which they can be forwarded,
routing them to their final destination. As complete circuits are not dedicated to a given user, at
times when no data needs to be sent, for example during quiet periods in speech, etc, the capacity
can be used by other users. This makes a significant difference to the efficiency of a system, and
allows significant savings to be made.
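The packetisation step performed by the endpoint can be sketched in a few lines. Assuming G.711 at 8 kHz (8000 bytes of encoded audio per second) and a 20 ms packetisation interval, a common choice, each payload is 160 bytes:

```python
# Sketch: splitting a G.711 audio stream into packet payloads.
# G.711 at 8 kHz produces 8000 bytes of audio per second, so a 20 ms
# packetisation interval gives 160-byte payloads.

BYTES_PER_SECOND = 8000
FRAME_MS = 20
FRAME_BYTES = BYTES_PER_SECOND * FRAME_MS // 1000   # 160 bytes per packet

def packetise(audio: bytes, frame_bytes: int = FRAME_BYTES):
    """Yield fixed-size payloads ready to be handed to the transport."""
    for offset in range(0, len(audio), frame_bytes):
        yield audio[offset:offset + frame_bytes]

one_second = bytes(BYTES_PER_SECOND)       # 1 s of silence as a stand-in
payloads = list(packetise(one_second))
print(len(payloads), len(payloads[0]))     # 50 160
```

In a real endpoint each payload would then be wrapped in RTP, UDP and IP headers before transmission, which is where the protocol overhead comes from.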
VoIP Protocols
In order to be able to communicate using a VoIP system, there are two types of protocol that must
be used. One is a signalling protocol, and the other is a protocol to facilitate the data exchange.
The signalling protocol is used to control and manage the call. It includes elements such as call set
up, clear down, call forwarding and the like. The first protocol to be widely used for VoIP was
H.323. However this is not a particularly rigorous definition, and as a result proprietary variants
have been developed: one, known as "Skinny", is a Cisco proprietary protocol, while another,
from Nortel, is called UNIStim. In view of this there are often interfacing problems, and as a
result a new protocol termed SIP (Session Initiation Protocol) is now being widely adopted as the
main standard.
The second type of protocol is used to manage the data exchange for the VoIP traffic. The one
used is termed RTP (Real-time Transport Protocol) and this can handle both audio and video. RTP
handles the data exchange, but in addition a codec is required. Where voice is used, a vocoder is
employed (a codec can be used for any form of data, including audio, video, etc). The most widely
used VoIP vocoder is G.711, although a variety of others are in use with varying data rates,
providing different levels of voice quality.
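RTP's fixed 12-byte header can be packed directly from its field layout. The sketch below uses payload type 0 (PCMU, i.e. G.711 mu-law); the sequence number, timestamp and SSRC values are arbitrary examples:

```python
# Sketch: packing the fixed 12-byte RTP header.
# Payload type 0 is PCMU (G.711 mu-law audio); the SSRC here is an
# arbitrary example value, not from any real implementation.
import struct

def rtp_header(seq: int, timestamp: int, ssrc: int, payload_type: int = 0) -> bytes:
    version = 2                          # RTP version 2
    first_byte = version << 6            # padding/extension/CSRC-count bits all zero
    second_byte = payload_type & 0x7F    # marker bit clear
    return struct.pack("!BBHII", first_byte, second_byte,
                       seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

# 160 is the timestamp step for one 20 ms G.711 packet at 8 kHz
hdr = rtp_header(seq=1, timestamp=160, ssrc=0x12345678)
print(len(hdr), hdr[0] >> 6)   # 12 2
```

The sequence number lets the receiver detect loss and reordering, while the timestamp drives playout timing, both of which matter for the quality-of-service issues discussed next.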
Service quality
Quality of Service, QoS, for the data link has a major impact on perceived VoIP sound quality. The
data exchange must take place in real time, and any delays in the system cause significant
disruption to the traffic. Delayed packets may arrive out of order, or with varying gaps between
them, resulting in garbled speech. Packets may even disappear entirely, resulting in lost
information.
For any packet passing through an IP network it is possible to define the class of service required.
It is important that packets that need to be transferred in real time are given a higher quality of
service than those that can be transferred as the network permits. This is particularly important
for services like VoIP that are termed delay sensitive applications.
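Receivers typically compensate for out-of-order arrival with a jitter buffer. A minimal sketch of the reordering step, using hypothetical packet data:

```python
# Sketch: reordering delayed packets by sequence number, as a simple
# jitter buffer might, before handing the audio to the decoder.

def reorder(packets):
    """packets is a list of (sequence_number, payload); return payloads in order."""
    return [payload for _, payload in sorted(packets)]

arrived = [(3, b"c"), (1, b"a"), (2, b"b")]   # packet 1 was delayed in the network
print(reorder(arrived))   # [b'a', b'b', b'c']
```

A real jitter buffer also trades latency against loss: it can only wait a few tens of milliseconds for a late packet before it must play silence or conceal the gap, which is exactly why delay-sensitive traffic needs a higher class of service.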
Advantages
Voice over IP, VoIP technology provides a number of significant advantages to operators and to
users. For the user one of the main advantages is the flexibility. Phones are software based,
sometimes being attached to computers. As a result a considerable degree of flexibility is afforded
to the user. It is possible to move the phone around and by enabling the system to recognise the
individual phone it is possible to route the data to it automatically. In addition to this ideas such as
mobile IP could enable the user to be located away from the home network and still receive calls.
A further advantage is that the wireless network technologies such as 802.11 can carry the calls as
voice is simply another form of application. This gives further flexibility as the phone does not have
to be physically wired to a network. Again Quality of Service is a major factor, and this is being
addressed under 802.11e.
For the operator some of the advantages are different. One of the major drivers towards the use of
VoIP is cost. Previously digital traffic was handled using time division techniques. This had the
disadvantage that when a particular time slot allocated to a user was dormant, it could not be
used. Using IP techniques much higher levels of efficiency can be attained. Although the system
required to carry packet data is more complicated, the returns far outweigh the additional costs.
Summary
As with all technologies there are disadvantages. The main one with VoIP is voice quality. This
results from the use of a vocoder to digitise and compress the audio. Quality is comparable with
that from a mobile phone, but for the future with rapidly improving standards of vocoders there
are likely to be significant improvements in this area.
In the long term VoIP is the way the market is moving, and now with increasing speed. It offers
not only great improvements in flexibility but also major cost savings, albeit with the requirement
for large levels of investment. To remain competitive, telecommunications providers will find it
necessary to adopt the new VoIP technology.
• IETF This is the Internet Engineering Task Force. It is a community of engineers that
defines some of the prominent standards used on the Internet (including VoIP protocols)
and seeks to spread understanding of how they work.
• ITU This is the International Telecommunication Union, an international organization
within the United Nations system through which governments and private sector
companies coordinate and standardize telecommunications networks and services on a
global basis.
In addition to the organizations involved, there is also a variety of different VoIP protocols and
standards.
• H.248 H.248 is an ITU Recommendation that defines "Gateway Control Protocol" and it
is also referred to as IETF RFC 2885 (Megaco). It defines a centralized architecture for
creating multimedia applications and it extends MGCP. H.248 is the result of a joint
collaboration between the ITU and the IETF and it is another VoIP protocol.
• H.323 This is ITU Recommendation that defines "packet-based multimedia
communications systems." H.323 defines a distributed architecture for multimedia
applications, and it is thus a VoIP protocol.
• Megaco This is also known as IETF RFC 2885 and ITU Recommendation H.248. H.248
defines a centralized architecture for creating multimedia applications.
• Media Gateway Control Protocol (MGCP) This is also known as IETF RFC 2705. It
defines a centralized architecture for creating multimedia applications, and it is therefore a
VoIP protocol.
• Real-Time Transport Protocol (RTP) This VoIP protocol is defined under IETF RFC
1889 and it details a transport protocol for real-time applications. RTP provides the
transport mechanism to carry the audio/media portion of VoIP communication and is used
for all VoIP communications.
• Session Initiation Protocol (SIP) This is also known as IETF RFC 2543 and it defines
a distributed architecture for creating multimedia applications.
VoIP testing
IP networks carrying VoIP traffic are very complicated. They carry both voice and data, and this
results in a variety of traffic with different requirements travelling over the same network.
Ensuring that all these requirements are met and that the network operates at maximum
efficiency can present many challenges. Obviously the design must be correct, but once
implemented the network must be tested to ensure that it operates correctly when installed, and
then maintained so that its performance continues to meet the needs of both the network
provider and the user. For VoIP, testing is therefore an essential element of any network, and
specialised VoIP testing techniques are required. A typical VoIP network contains a variety of
elements, including:
• Signalling gateways
• Media gateways
• Gatekeepers
• Class 5 switches
• SS7 network
• Network management system
• Billing system
This variety of different entities within the VoIP network all communicate with each other using a
variety of protocols. To perform correctly it is necessary to ensure that they communicate
efficiently and that no bottlenecks are created. Analysing the performance of a VoIP network is not
always easy. However it can be achieved and significant improvements in performance can be
achieved if the VoIP testing scenarios are carefully chosen and planned, and the data analysed to
reveal any problems.
Each element within the overall VoIP testing regime is important, as it helps ensure that the
network is able to perform properly. A problem in any one element will result in the whole network
not performing properly. These tests can take the form of functional tests in the laboratory
before deployment. VoIP testing of the individual elements is essential to make sure that
problems do not manifest themselves during deployment: isolating and fixing a problem at that
late stage is considerably more costly.
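One of the simplest VoIP test metrics, packet loss, can be derived from the sequence numbers seen at the receiver. A minimal sketch (ignoring sequence-number wrap-around, which a real tool would have to handle):

```python
# Sketch: estimating packet loss from the sequence numbers observed
# at a receiver during a VoIP test run.

def loss_ratio(received_seqs):
    """Fraction of packets missing between the first and last sequence seen."""
    expected = max(received_seqs) - min(received_seqs) + 1
    return 1 - len(set(received_seqs)) / expected

# 2 of the 7 expected packets (3 and 6) never arrived: ratio is about 0.286
print(round(loss_ratio([1, 2, 4, 5, 7]), 3))
```

Measurements like this, taken per network element, help pinpoint where in the chain packets are being dropped.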
• Line losses are not usually significant: The fact that a current source is used means
that voltage losses caused by line resistance are unlikely to cause a problem.
• Can be used for long distances: As voltage losses are not normally significant, 20 mA
current loop systems can be used for carrying data over long distances, sometimes up to
several kilometres.
• Can be isolated from ground: By using opto-isolators it is possible to isolate the
signalling system from ground.
• Provides a simple form of networking: As the system uses a current loop, it is
possible to run several teleprinters receiving data from one source by placing each
teleprinter in the loop. This meant that 20 mA current loop provided an early form of
networking.
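The first advantage above, insensitivity to line resistance, follows directly from Ohm's law: with a constant 20 mA flowing, resistance costs only voltage headroom, not signal current. A small illustration (the 300 ohm line resistance is a purely illustrative figure):

```python
# Sketch: why line resistance matters little to a 20 mA current loop.
# A current source keeps the loop at 20 mA regardless of resistance;
# the line resistance simply consumes voltage headroom (V = I * R).

LOOP_CURRENT_A = 0.020   # the 20 mA signalling current

def voltage_drop(line_resistance_ohms: float) -> float:
    """Voltage headroom consumed by the line at the fixed loop current."""
    return LOOP_CURRENT_A * line_resistance_ohms

# Several km of cable might present a few hundred ohms (illustrative):
print(voltage_drop(300))   # 6.0 V consumed; the 20 mA current is unaffected
```

As long as the driver has enough compliance voltage to cover this drop, the receiver still sees the full 20 mA, which is what makes multi-kilometre runs practical.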
While 20mA current loop provides a number of advantages, it also has some disadvantages that
must be considered.