Contents
1.0 - Purpose
2.0 - Disclaimer
3.0 - Introduction
4.0 - Physical location of the data centre
5.0 - Sizing and capability audit
6.0 - The hot aisle/cold aisle design concept
7.0 - Specifying a raised floor
8.0 - Equipment racks and cabinets
9.0 - Heating, Ventilation and Air Conditioning (HVAC) within the data centre
10.0 - Electrical systems to and within the data centre
11.0 - Earthing, bonding and the Signal Reference Grid
12.0 - Fire detection, alarm and suppression within the data centre
13.0 - Communications cabling and containment
14.0 - Security, access control and CCTV
15.0 - Building Management Systems, from rack to room level
16.0 - Tiering, H&S and other project management issues
2.0 - Disclaimer
This document is intended for the use of persons qualified in the electrical, mechanical and construction requirements of a data
centre. This document quotes figures and extracts from international standards, but this does not absolve the user from full
knowledge and usage of the original standards themselves. Every effort has been made to supply a complete and up-to-date
technical précis of the current international, European and British standards and regulations concerned, but the fitness-for-purpose and final design remain the responsibility of the document user.
Except where other documents have been quoted, this document remains the copyright of Engineering Education Ltd and its
reproduction is forbidden under the Copyright, Designs and Patents Act 1988. Licences may be obtained from licenses@engineeringeducation.co.uk.
3.0 - Introduction
A data centre is;
"A building or portion of a building whose primary function is to house a computer
room and its support areas." (TIA 942)
This design guide is based upon the requirements of TIA 942 Telecommunications Infrastructure Standard for Data Centers,
April 2005.
Although this is an American standard invoking other American standards and codes, it is far more substantive than the
equivalent CENELEC EN 50173-5 data centre standard, which is still at draft stage. However, this document expands upon
the TIA 942 standard and incorporates all the requirements of European and British standards, Directives and Regulations.
These include EN 50173, EN 50174, EN 50310, BS 5839, BS 6701, BS 7671, the UK Building Regulations, the Disability
Discrimination Act and many others. They are all detailed in Appendix 1.
Many diverse areas need to be addressed to fully design and specify a data centre. It is essential to agree at the start
of the project exactly who is responsible for every item, or else the final build will be severely compromised if a vital design
element has been overlooked or is incompatible with other services.
A data centre design project can be split into the following sections;
1. Location.
2. Construction.
3. Definition of the spaces and size available.
4. Planning the layout of the computer room floor.
5. Designing the raised floor.
6. Calculating day one and future IT requirements.
7. Calculating the day one and future air conditioning requirements.
8. Deciding upon the type and location of the air conditioning units.
9. Calculating day one and future power supply requirements.
10. Sizing and location of UPS and standby generators.
11. Designing the earth bonding and signal reference grid.
12. Designing the power distribution system within the computer room and within the equipment racks.
13. Lighting, emergency lighting and signage.
14. Access control, security and CCTV requirements.
15. Fire detection, alarm and suppression system, including hand-held fire extinguishers.
16. Specifying and designing the structured cabling system and its containment system.
17. Organising connections to external telecommunications providers and the Entrance room.
18. Integration of Building Management Systems with other command and monitoring networks and their appearance at a control room.
19. Project management issues, health & safety and ongoing operational and maintenance issues.
Data centre projects are either green field new-build projects or conversion/renovation projects. In either case it is advisable to
undertake a complete audit of what exists already or on the proposed designs.
Apart from meeting the day one designs and proposed expansion plans, it is also necessary to decide upon the level of
backup or redundancy to be built into the finished location. For data centres these levels are now designated as Tier
1, 2, 3 or 4, with Tier 4 being the highest level of redundancy.
The Tiering level is described in great detail in the TIA 942 standard, which in turn has taken much of its philosophy from the
Uptime Institute. A very brief summary is given in the table below. In the terminology of redundant systems, N means just enough
equipment to do the job, N+1 means one additional unit to act as a redundant supply, whereas 2(N+1) means two
complete, independent N+1 paths.
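The downtime figures in the table below follow directly from the availability percentages: annual downtime = (1 - availability) x 8,760 hours. A quick sketch to check them:

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_downtime_hours(availability_pct):
    """Expected hours of downtime per year for a given site availability."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

# Tier availability figures from the table below
for tier, avail in [(1, 99.671), (2, 99.749), (3, 99.982), (4, 99.995)]:
    print(f"Tier {tier}: {annual_downtime_hours(avail):.1f} h/yr")
# Tier 1: 28.8 h/yr ... Tier 4: 0.4 h/yr
```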
                                  Tier 1        Tier 2        Tier 3           Tier 4
Site availability                 99.671%       99.749%       99.982%          99.995%
Downtime (hours/yr)               28.8          22.0          1.6              0.4
Operations Centre                 Not required  Not required  Required         Required
Redundancy for power, cooling     N             N+1           N+1              2(N+1)
Gaseous fire suppression system   Not required  Not required  Approved system  Approved system
Redundant backbone pathways       Not required  Not required  Required         Required

[Figure: data centre space adjacencies - Computer Room, Telecommunications Room(s) serving data centre spaces, Operations Centre, Data Centre Support Staff Offices, and Telecommunications & Equipment Rooms serving spaces outside the data centre]
[Table 4.1-4.19: physical location audit checklist with Parameter, Recommendation and Ref. columns, covering items such as whether the building and rooms exist or building works are required, and whether connection to mains/telecoms services is available]

[Table 5.1-5.12: sizing and capability audit checklist with Parameter, Recommendation and Ref. columns (references to TIA 942 and BS 5266), starting with the dimensions of the data centre]
[Figure: the hot aisle/cold aisle concept - rows of equipment racks arranged front-to-front and back-to-back, forming alternating cold and hot aisles, with air conditioning units feeding the cold aisles]
8.2 - Ventilation
This is a key area of differentiation between standard equipment racks and server racks. A server rack must cope with the
ventilation demands of many kilowatts' worth of electrical equipment, whereas a standard glass-fronted rack with a horizontal
fan tray fitted can only cope with a heat load of less than a kilowatt.
It would appear that a suitably ventilated rack, supplied with adequate chilled air through a standard floor tile, can cope with
about two kilowatts of heat dissipation, where the motive force through the rack is only provided by the fans within the server
units themselves.
The amount of ventilation required is stated by several sources and is expressed as a ratio of open space to overall door
area, e.g.;
"...servers require that the front and back cabinet doors be at least 63% open for adequate airflow." (Sun)
"One method of ensuring proper cooling is to specify rack doors that provide over 830 in2 (0.53 m2) of ventilation
area or doors that have a perforation pattern that is at least 63% open." (APC)
"Racks (cabinets) are a critical part of the overall cooling infrastructure. HP enterprise-class cabinets provide 65
percent open ventilation using perforated front and rear door assemblies. To support the newer high-performance
equipment, glass doors must be removed from older HP racks and from any third-party racks." (HP)
"...the cabinet should either have no doors or, if required for security, doors with a minimum 60% open mesh for
maximum airflow, and is best not equipped with top mounted fan kits." (Chatsworth)
"Ventilation through slots or perforations of front and rear doors to provide a minimum of 50% open space. Increasing
the size and area of ventilation openings can increase the level of ventilation." (TIA 942)
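These door-ventilation figures are simple to check: the effective open area is just the door area multiplied by the perforation fraction. A minimal sketch, where the door dimensions are illustrative assumptions only:

```python
def ventilation_area_m2(door_height_m, door_width_m, open_fraction):
    """Effective ventilation area of a perforated rack door."""
    return door_height_m * door_width_m * open_fraction

# Illustrative 42U door with roughly 1.99 m x 0.42 m of perforated area at 63% open;
# this lands close to APC's 830 in2 (0.53 m2) figure.
print(f"{ventilation_area_m2(1.99, 0.42, 0.63):.2f} m2")  # 0.53 m2
```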
When the heat load goes above about 2 kW (about 5 average servers) then an escalation policy is required, which can take
the form of;
Increasing floor tile vent size up to 75% open area.
Replacing floor tiles with fan assisted grate tiles.
Adding specialised fan units to the top and/or bottom of the rack.
Using cabinets where the entire rear door is a fan unit.
The above solutions will take the heat dissipation capability up to about 6 kW per rack. Above that, more specialised racks
need to be used, where the whole rack is fed by a chilled water supply. These designs can cope with loads in excess of 20
kW. New designs using liquid carbon dioxide claim cooling capacities of over 30 kW per rack.
It is also important that the front to back cooling scheme adopted in such racks is not compromised by gaps in the rack
allowing cooled air to mix with hot air drawn back through the gaps (Thermal Guidelines for Data Processing Environments
ASHRAE). For this reason all gaps in the rack must be filled in with blanking plates. Also excessive gaps for cabling at the
side of the racks should be sealed with an air dam kit and any cable entry points at the bottom of the rack should also be
sealed with a brush strip.
8.3 - Power
The rack needs to be powered and in Europe this would generally be provided by a 16 or 32 amp, 230 V single phase feed
through an IEC 60309 connector. At least two feeds are required for redundancy and backup purposes so a dual 32 amp feed
would be counted as supplying 32 x 230 = 7.36 kVA (remember that useful power is measured in watts, which is amps x volts
x power factor).
For loads above 7 kVA then either more 32 amp feeds are supplied or a three-phase supply is provided which would
normally deliver at least 22 kW through a five-pin version of the IEC 60309 connector. For three-phase supply Regulation 51410-01 of BS 7671 requires a warning notice to be secured in such a position that the warning is seen before access is gained
to live
parts.
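The feed arithmetic above can be sketched as follows (the 400 V three-phase line voltage is the usual European figure, assumed here):

```python
import math

def single_phase_kva(amps, volts=230):
    """Apparent power of a single-phase feed."""
    return amps * volts / 1000

def three_phase_kva(amps, line_volts=400):
    """Apparent power of a three-phase feed: sqrt(3) x line volts x amps."""
    return math.sqrt(3) * line_volts * amps / 1000

def useful_kw(kva, power_factor):
    """Useful power = apparent power x power factor."""
    return kva * power_factor

print(single_phase_kva(32))           # 7.36 kVA per 32 A feed
print(round(three_phase_kva(32), 1))  # 22.2 kVA for a 32 A three-phase feed
```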
Within the rack the power is distributed by what is widely known as a power distribution unit, or PDU. There does not seem
to be a widely accepted definition of a PDU and at its simplest it is just a power strip of sockets that distributes the incoming
electricity to the rack equipment. However more functionality is available in the form of;
Sequential start up.
Automatic crossover switch between two supplies.
Power line conditioning.
Reporting function about status and power usage. This in turn may be a simple LED readout on the unit or part of an
IP addressable managed system.
8.4 - Control and monitoring
A data centre server rack must be secure and be able to monitor and report its environmental status back to some central
control point. The monitoring system may be part of a building-wide Building Management System (BMS), an add-on localised
monitoring scheme or a built-in rack-monitoring scheme designed and dedicated to the task. TIA 942 states, "A Building
Management System (BMS) should monitor all mechanical, electrical, and other facilities equipment and systems."
The rack sensor system should be able to detect the following;
Temperature.
Smoke.
Water.
Humidity.
Access.
Vibration.
Airflow.
Particles in the incoming airflow.
And respond with one or more of the following;
Visual alarm on top of cabinet.
Audible alarm.
Networked alarm.
CCTV.
[Figure: 7-tile pitch floor layout - facing cabinet rows with their fronts on a cold aisle two complete (liftable) tiles wide and their rears on hot aisles, the pattern repeating every seven tiles]
The 7-tile pitch requires that the front edges of the two facing cabinets are placed in line with the edge of a floor tile, and two
complete floor tiles, i.e. 1.2 m, separates the two facing cabinets, thus forming the cold aisle. The depth of the rack will cover
about one and a half floor tiles and so a complete floor tile is needed in the hot aisle for access. This arrangement means that
the set will repeat itself every seven tiles, or 4.2 metres.
Apart from the 7-tile arrangement, TIA 942 also requires a minimum of 1 m of front clearance for installation of
equipment and a minimum of 0.6 m of rear clearance for service access, although a rear clearance of 1 m (3 ft) is
preferable. Some racks have split rear doors to facilitate rear clearance.
IEEE 1100, referenced in TIA 942, suggests a clearance of two metres from building structural steel in case of lightning
flashovers.
Up until the early part of this century the average heat load developed in a rack was only around 1 kW, and cooling did not
need to be a closely controlled activity, as simple whole-room cooling would suffice. But now, with 1U servers and blade servers,
the potential heat generation is enormous. The average server has a running load of about 400 watts, meaning that a 2 kW
cooling capacity equates to only five servers per rack. Putting 42 of these servers in a rack, just because they fit, would
develop over 16 kW of heat, and blade servers would generate over 20 kW.
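The arithmetic behind these rack loads:

```python
def rack_heat_kw(servers, watts_per_server=400):
    """Heat load of a rack of average (~400 W) 1U servers."""
    return servers * watts_per_server / 1000

print(rack_heat_kw(5))   # 2.0 kW - the limit for a standard vented floor tile
print(rack_heat_kw(42))  # 16.8 kW - a fully populated 42U rack
```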
Underfloor plenum cooling can supply about six kW of cooling capacity by the use of one or more of the following upgrade
methods;
Use a larger floor tile, up to 75% open area.
Use a fan assisted floor grate.
Use specialised blowers in the rack to bring more airflow into the rack and distribute it across the front face of the
equipment.
Use rear doors on the racks that are full length blower units.
Beyond about six kW, underfloor plenum cooling of racks becomes impractical and the next stage is water-cooling of the entire
rack.
Water is much more effective at removing heat than air. A water-cooled rack can dissipate in excess of 20 kW of heat. These
racks need to be plumbed into an existing chilled water generation and distribution system that would need to be placed
outside of the equipment room. Liquid carbon dioxide cooling plants are also available now. CO2 is even more efficient than
water and can remove in excess of 30 kW of heat from a rack.
Directly cooled racks are thus much more efficient in terms of floor space used but they are more expensive to buy, need
plumbing in, and an external chiller plant still needs to be built.
For air conditioning applications for more than a medium sized rectangular computer room, it is advisable to use a
computational fluid dynamics software program to model the airflow and cooling capacity of an HVAC design.
[Figure: Computer Room Air Conditioning (CRAC) unit pumping chilled air into the underfloor plenum; air rises through floor vents into the cold aisle, passes through the racks and returns to the CRAC via the hot aisle]
The diagram above shows the CRAC unit as the source of the chilled air and pumping it into the underfloor plenum space. Air
escapes into the cold aisle through the floor vents, passes through the racks, cooling them on the way, and appears in the hot
aisle, where it rises. It then returns back to the CRAC unit to repeat the process. The CRAC units are located at the end of the
hot aisles to facilitate the shortest return path back to the CRAC. Once the room goes over a certain size it is advisable to
improve the return path by adding a ceiling plenum, with fans, to scavenge the hot air and direct it back to the CRAC units. It
has been suggested that this would be beneficial once the floor area extends beyond 400 m2, although a dedicated return
plenum would benefit any size computer room.
Another item to take into account is locating the floor vents at the correct distance from the CRAC unit. Too close and the air
velocity will cause a negative pressure at the vent relative to the air in the room above and suck in hot air instead of blowing
cold air out. The minimum distance is about two metres before effective cooling takes place. The maximum distance from the
CRAC unit again depends upon factors such as air volume from the CRAC unit, floor depth, obstructions, number and size of
floor vents etc., but a figure of ten metres seems to be commonly accepted.
Some items, particularly communications equipment, are not designed for front-to-rear cooling but side-to-side cooling, or
even both at the same time!
Side-to-side items may be cooled by;
Placing in a low density environment on a two post frame with chilled air generally supplied from a floor vent.
Placing in a standard server rack with a front-to-side cooling converter fan fitted.
Chilled water cooling matrices placed at the sides of the open frames that will allow chilled air to be directed in a
side-to-side direction.
APC, a major supplier of IT air conditioning, offers the following estimating tool to help calculate the cooling capacity required
for a computer room. Note that the usual running load should be used for the IT equipment, not the nameplate rating, which is
usually one third higher than the normal running load.
The battery/UPS calculation is only required if the battery/UPS system is in the same computer room. TIA 942 recommends
that UPS systems greater than 100 kVA be placed in another room.
Note that allowance should also be made for future expansion and redundancy in air conditioning calculations.
Item                 Data required                      Heat output
IT equipment         Total IT load power in Watts       Watts
UPS with battery     Power system rated power in Watts  Watts
Power Distribution   Power system rated power in Watts  Watts
Lighting             Floor area in sq m                 Watts
People               Max No. of people                  Watts
Total                                                   Watts
Fresh air
Even with air conditioning, the computer room needs to be ventilated. Air should be changed at least ten times per hour. British
building regulations also require an air supply of ten litres per second per person, doubling if printers or photocopiers are in
use.
Incoming air must be filtered with airborne particulate levels maintained within the limits of Federal Standard 209E, Airborne
Particulate Cleanliness Classes in Cleanrooms and Clean Zones, Class 100,000.
Air from sources outside the building should be filtered using High Efficiency Particulate Air (HEPA) filtration rated at 99.97%
efficiency (DOP Efficiency MIL-STD-282) or greater.
As the external temperature at British latitudes is below 22 °C for about 70% of the year, some of the huge electricity bills
associated with cooling data centres can be mitigated by taking in even larger volumes of outside air during the autumn, winter
and spring months, with the minimum ventilation rate maintained for the summer months.
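The two fresh-air requirements above can be compared numerically. A minimal sketch, assuming the room must satisfy whichever requirement is larger (the room dimensions and occupancy are illustrative):

```python
def air_change_l_per_s(room_volume_m3, changes_per_hour=10):
    """Airflow needed for the minimum air-change rate, converted to litres/second."""
    return room_volume_m3 * changes_per_hour * 1000 / 3600

def occupant_l_per_s(occupants, printers_or_copiers=False):
    """Building Regulations fresh air: 10 L/s per person, doubled for printers/copiers."""
    return occupants * 10 * (2 if printers_or_copiers else 1)

# Illustrative 200 m2 room with a 3 m ceiling and 4 occupants
volume = 200 * 3
required = max(air_change_l_per_s(volume), occupant_l_per_s(4))
print(f"{required:.0f} L/s")  # the air-change rate dominates here
```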
1. Add up the nameplate ratings of all the equipment and multiply by 0.67; this is the day 1 running load.
2. Multiply this by whatever expansion factor is expected to apply to the data centre.
3. Add 50% to the above to allow for inrush current.
4. Add 32% to allow for the UPS inefficiency and battery charging requirement.
5. Add 21.5 W per square metre of floor space to allow for lighting.
6. Double the amount reached so far to allow for air conditioning power requirements.
7. Multiply the total so far by 1.25 to provide a further overrating factor, so that cables aren't expected to work at their
full safe load.
8. Add a figure, say 5%, for power factor correction*. Modern I.T. equipment is usually power factor corrected, but there
will be some power factor loss.
The figure thus arrived at is the amount of power that needs to be available in the data centre, even though it is unlikely to
need this full amount under normal conditions. This figure also leads to correct choice of the standby generator.
Let's take the example of a 200 square metre computer room with a day one nameplate load of 100 kW and a required
expansion capacity of 100%.
Day one running load                               = 100 x 0.67 = 67 kW
Long term load, after expansion                    = 67 x 2 = 134 kW
Add 50% for peak load factor                       = 134 x 1.5 = 201 kW
Add 32% for UPS inefficiency and battery charging  = 201 x 1.32 = 265 kW
Add 21.5 W/m2 for lighting                         = 265 + (200 x 0.0215) = 269 kW
Double this amount for power to run the air con    = 269 x 2 = 538 kW
Multiply by 1.25 for the overrating factor         = 538 x 1.25 = 672 kW
Add 5% for power factor correction                 = 672 x 1.05 = 706 kW
So we can see that the power supply to be designed in is more than ten times the day-one running load.
*Power factor. Remember that current times voltage equals volt-amperes, usually expressed as kVA. Useful work, or power, is
measured in watts, and volts x amps x power factor = watts. The power factor is the cosine of the phase difference between
the voltage and the current in an alternating current circuit. This phase separation is caused by a reactive, i.e. capacitive or
inductive, load. UPS systems are always measured in kVA output, as they do not know the power factor of the load they will
be connected to, and hence the real power, in watts, deliverable.
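The eight-step sizing procedure can be sketched as a single function (note that the worked example rounds to whole kW at each step, so the continuous calculation lands a couple of kW higher):

```python
def supply_power_kw(nameplate_kw, floor_area_m2, expansion_factor=2.0):
    """Data centre supply sizing following the eight steps above."""
    load = nameplate_kw * 0.67            # 1. day-one running load
    load *= expansion_factor              # 2. expansion
    load *= 1.5                           # 3. inrush / peak load
    load *= 1.32                          # 4. UPS inefficiency and battery charging
    load += floor_area_m2 * 21.5 / 1000   # 5. lighting at 21.5 W/m2
    load *= 2                             # 6. air conditioning power
    load *= 1.25                          # 7. cable overrating factor
    load *= 1.05                          # 8. power factor correction
    return load

# 200 m2 room, 100 kW day-one nameplate load, 100% expansion:
print(round(supply_power_kw(100, 200)))  # 708, vs 706 kW with per-step rounding
```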
10.2 - UPS and backup requirements
Having understood the sizing implications the next step is to consider the methods of back-up and redundancy and how this
fits in with the Tiering philosophy of TIA 942.
TIA 942 summary

                          Tier 1       Tier 2       Tier 3                  Tier 4
No. of delivery paths     1            1            1 active and 1 passive  2 active
Utility entrance          Single feed  Single feed  Dual feed               -
Equipment power cords     -            -            -                       -
Generator fuel capacity   8 hours*     -            -                       -
Redundancy                N            N+1          N+1                     2N

* but no generator required if UPS backup time is more than 8 minutes
Remember that;
N means only enough items to do the task at hand. Any one point of failure will stop the system.
N+1 means one more item than is necessary, thus allowing for one point of failure.
2N means two complete independent paths.
Going to 2N, or even better 2(N+1), will give the resilience that a data centre needs, but obviously at some major cost;
not surprisingly, 2N costs at least twice as much as the provision of the minimum required service.
An uninterruptible power supply system (UPS) needs to be defined to back up the power supply system. This is usually based
on batteries and a double-conversion on-line UPS. In this method the incoming AC is rectified and permanently charges a
battery pack, which is also connected in parallel back into an inverter to make the mains-voltage AC available again. This is a
very reliable method and also isolates the I.T. load from sags, surges, spikes and most harmonics coming in from the mains
supply. The downside of this method is that it is very inefficient, with up to 12% of the input power wasted in the
rectification-inversion cycle.
Other kinds of UPS are available and one is based on the kinetic energy of a large rotating mass connected to a device which
acts as a motor when input power is available and a generator when the AC input fails. The kinetic energy stored in the
rotating flywheel will then produce electricity for a short time. Kinetic energy devices are smaller and cheaper and have less
maintenance associated with them but usually have back-up times measured in tens of seconds rather than the minutes
offered by a battery system.
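As an illustration of why flywheel back-up times are so short, the stored energy is E = ½Iω², drawn down at the load power. A sketch with purely illustrative flywheel parameters (the inertia, speed, load and usable-energy fraction are all assumptions, not figures from any particular product):

```python
import math

def flywheel_ride_through_s(inertia_kg_m2, rpm, load_kw, usable_fraction=0.85):
    """Ride-through time of a kinetic UPS: stored energy E = 0.5 * I * w^2,
    discharged into the load. Real units can only use the energy above a
    minimum operating speed, approximated here by usable_fraction."""
    omega = rpm * 2 * math.pi / 60          # rotational speed in rad/s
    energy_j = 0.5 * inertia_kg_m2 * omega ** 2
    return energy_j * usable_fraction / (load_kw * 1000)

# e.g. a 30 kg.m2 rotor at 7700 rpm backing a 250 kW load
print(f"{flywheel_ride_through_s(30, 7700, 250):.0f} s")  # a few tens of seconds
```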
UPS design options and requirements;
1. Size the electrical power required, in kVA.
2. Decide what is the critical load that needs to be backed up with a UPS. Some people include the air conditioning and
some don't, expecting that the back-up generator will be online before the equipment overheats. Backing up the
aircon with the UPS will double the size of the UPS.
3. Decide upon the length of time the battery pack needs to back up the system. Battery packs are expensive, heavy and
take a lot of space; recommendations are;
a. TIA 942, 5-30 minutes.
b. SUN, 15 minutes.
c. Note that TIA 942 also specifies that a Tier 1 system does not need a generator if the battery system can back up
the load for at least 8 minutes.
4. Decide upon the level of redundancy desired/affordable, e.g. N, N+1, 2N or 2(N+1).
5. Decide upon the location of the UPS and battery equipment. It should be close to the IT equipment and main power
feed to reduce cable losses. TIA 942 recommends that UPS systems larger than 100 kVA should be located in their
own separate room.
6. Decide upon size and location of the standby generator. It must be in a secure position, and in an area where noise
and fumes will not be disruptive. It should also be close to the UPS system and switchgear to minimise cable losses.
10.3 - Electrical distribution around the computer room
The electrical cabling, of adequate size to meet current and future design, must feed each equipment rack location and
planned location. For Tier 2 and above there must be duplicate, redundant feeds to each location.
Cabling may be fed into the top or bottom of racks, or both. Cabling run in the underfloor plenum space should be laid in the
cold aisle at low level. Cabling entering through the bottom of the rack should be sealed with a brush strip to prevent entry of
chilled air in an uncontrolled manner.
Cable should be terminated and presented on IEC 60309 connectors, of appropriate size for the current and suitable for
single or three-phase connection as appropriate. Usual ratings are 16 or 32 amp. The higher power ratings of today's servers
would suggest that two 32 amp feeds are required, giving around 7 kW each. Higher power ratings would require a three-phase
connection, providing around 22 kW.
EN 50174-2
ISO 11801:2002
ANSI/TIA/EIA-J-STD-607

[Figure: fire detection and alarm system - call points and sounders connected to a control panel]
The cables must be fire survivable as described in BS 5839-1:2002 Clause 26.2 (d & e), which invokes, amongst others;
BS EN 60702-1:2002   Mineral insulated cables and their terminations with a rated voltage not exceeding 750 V.
BS 6387:1994         Performance requirements for cables required to maintain circuit integrity under fire
                     conditions.
Fire detectors come in a number of guises such as ionising smoke detectors, optical detectors, flame and heat detectors etc,
but the smoke detection system recommended for computer rooms is a highly sensitive system that gives very early warning
and is known as Aspirating Smoke Detection, ASD.
BS 6266:2002 recommends, for critical equipment areas such as centralised computer facilities, "a dedicated smoke detection
system interfaced with the main building system" and aspirating smoke detection to monitor return air flows.
BS 5839 describes many different types of smoke and flame detectors and most importantly, where they should be sited. The
siting of aspirating smoke detector inlets follows exactly the same rules as more conventional smoke detectors.
ASD is a high-sensitivity, aspirating-type, laser-based optical smoke detection system that continually draws air from the
protected area through a network of pipes and passes it through a calibrated detection chamber. It is capable of
providing very early warning of fire conditions, thereby providing invaluable time to investigate and respond to a potential threat
of fire. ASD is very often referred to by a brand name, VESDA (Very Early Smoke Detection Apparatus). VESDA is a trademark
of Vision Products Pty Ltd of Australia.
A VESDA system can detect a fire within 70 seconds and activate a fire suppression response in under two minutes. A
sprinkler system would take four to six minutes under the same circumstances.
Conclusion
Various standards, such as TIA 942 and BS 6266, recommend aspirating smoke detectors for data processing applications
such as data centres because of their quick reaction time. The detection system should be able to give various levels of alarm
and needs to be optimised for the different areas encountered within a data centre. A data centre should have two levels of
fire detection and suppression: an aspirating smoke detector linked to a gaseous fire suppression system as the first response,
and a pre-action sprinkler system as the last resort.
Automatic sprinklers
Detection and pre-action sprinkler
Detection and water sprays (mist)
Detection and total flood CO2
Foam
High sensitivity smoke detection aspirating systems
Detection and dry powder
Detection and manual intervention
Detection and inert gas
Detection and fine particulate aerosol
Detection and halocarbon gas
Designation   Gas Blend
IG-100        Nitrogen
IG-01         Argon
IG-55         Nitrogen/Argon mixture
IG-541        Nitrogen/Argon/Carbon dioxide mixture
Halocarbon gases

Trade Name   Designation   Chemical Formula   Chemical Name
FE-13        HFC 23        CHF3               Trifluoromethane
FE-125       HFC 125       CF3CHF2            Pentafluoroethane
FM-200       HFC 227ea     CF3CHFCF3          Heptafluoropropane
FE-36        HFC 236fa     CF3CH2CF3          Hexafluoropropane
CEA-308      PFC-2-1-8     C3F8               Perfluoropropane
CEA-410      PFC-3-1-10    C4F10              Perfluorobutane
In general, inert gas systems appear to take up more space and be slightly more expensive than the halocarbon alternatives.
Manual means of fire suppression system discharge should also be installed. These should take the form of manual pull
stations at strategic points in the room. In areas where gas suppression systems are used, there is normally also a means of
manual abort for the suppression system.
See also;
BS 6266:2002
BS ISO 14520-1:2000
[Figure: TIA 942 cabling topology - the Entrance Room (access providers) linked by backbone cabling to the Telecom Room (routers, backbone, LAN/SAN switches, PBX, M13 muxes) serving offices, operations center and support rooms, and to the Computer Room, with horizontal cabling from the LAN/SAN/KVM switches down to the racks/cabinets]

EN 50173-5 uses the following terminology;
ENI (external network interface)
MD (main distributor)
ZD (zone distributor)
LDP (local distribution point)
EO (equipment outlet)
with the network access cabling, main distribution cabling and zone distribution cabling subsystems linking them, aligned with the distributor terminology of EN 50173-1.

[Figure: EN 50173-5 topology overlaid on the TIA 942 model - the ENI at the Entrance Room connects through the network access cabling subsystem to the MD in the Computer Room; the main distribution cabling subsystem feeds the ZDs, and the zone distribution cabling subsystem feeds the LDPs and EOs at the racks/cabinets]
IEC 60332-3C
The best performing cable in a fire situation is the plenum style meeting NFPA 262:2002, Standard Method of Test for
Flame Travel and Smoke of Wires and Cables for Use in Air-Handling Spaces.
Optical fibre
ISO 11801 and EN 50173 now classify optical fibres as OM1, OM2, OM3 and OS1. OM means multimode fibre
and OS means singlemode fibre.
OM3 is a very high bandwidth fibre optimised for ten gigabit operation and is the obvious choice for new data
centre installations.
Singlemode fibre, OS1, is not needed within the data centre but it may be needed to connect to the outside world
of telecommunications and should be put in place to allow for direct high speed communications from routers
and SAN devices.
Optical connectors must also be specified. There are many Standards-recognised types to choose from; the
market leader for high speed data communications is now the LC connector.
The quality of the terminations can also be improved by allowing sophisticated Category 6 copper and optical fibre terminations
to be made in a clean factory environment by skilled people. Each cable assembly can be 100% checked in the factory, so
whatever is sent to site is known to be of the highest quality.
There are no particular disadvantages to preconnectorised cabling, and it should be cost-neutral to the end user; however,
accurate surveys need to be carried out to ensure correct cable lengths are made up and installed.
[Figure: cable and port labelling example - Panel 01 ports A0101-A0112 served by Cables A, B and C: a panel-to-panel link to Panel 02 via Cable A, a panel-to-floor link to Floor 01 floor box ports BF011-BF014 via Cable B, and a desk pod (Desk 01) with ports CD011-CD014 via Cable C]
EN 50174-1, ISO/IEC 14763-1, TIA 942 and BS 6701:2004 all require
that all cables and components be suitably marked to uniquely identify them. The durability of all labelling must
also be suitable for the rigours of the environment in which they are placed and the expected timescale of
the installation, usually in excess of ten years.
The cables need to be contained, protected and separated from other services. For example, EN 50174-2 requires a
separation of at least 200 mm between unscreened data and unscreened power cables, although distances can come down
if any of the cables are screened. BS 6701 requires a 50 mm separation at all times between cables unless there is a
non-metallic divider separating the two groups. In the UK, the BS 6701 and EN 50174-2 requirements need to be overlaid and the
worst-case separation distances used for a correct installation.
BS 6701 and EN 50174-2 overlaid - worst-case separation distances

Type of installation                          Without a   With a non-       Aluminium   Steel
                                              divider     metallic divider  divider     divider
Unscreened power and unscreened data cables   200 mm      200 mm            100 mm      50 mm
Unscreened power and screened data cables     50 mm       50 mm             50 mm       50 mm
Screened power and unscreened data cables     50 mm       30 mm             50 mm       50 mm
Screened power and screened data cables       50 mm       0 mm              50 mm       50 mm
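The overlaid worst-case rule can be sketched as a simple lookup. The values encode the BS 6701 / EN 50174-2 overlay described above; the function and dictionary names are illustrative:

```python
# Worst-case separation (mm) between power and data cables, per the
# BS 6701 / EN 50174-2 overlay described above.
# Keyed by (power cable screened?, data cable screened?), then divider type.
EN_BS_OVERLAY_MM = {
    (False, False): {"none": 200, "non-metallic": 200, "aluminium": 100, "steel": 50},
    (False, True):  {"none": 50,  "non-metallic": 50,  "aluminium": 50,  "steel": 50},
    (True, False):  {"none": 50,  "non-metallic": 30,  "aluminium": 50,  "steel": 50},
    (True, True):   {"none": 50,  "non-metallic": 0,   "aluminium": 50,  "steel": 50},
}

def required_separation_mm(power_screened: bool, data_screened: bool,
                           divider: str = "none") -> int:
    """Worst-case separation for a UK installation under the overlay."""
    return EN_BS_OVERLAY_MM[(power_screened, data_screened)][divider]
```

For instance, unscreened power run alongside unscreened data with no divider requires the full 200 mm, while fully screened groups separated by a non-metallic divider require no minimum gap.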
[Figure: example modular cabling cluster. Servers 1-6, plus WAN and SAN connections, are served by grouped cable
assemblies: two runs of 5 x 12 C6 copper, 1 x 8F OM3 fibre, 6 x 8F OM3 fibre and 1 x 8F singlemode fibre.]
Modular designs and cluster concepts are bound to become more popular as the rate of change in data centres increases. The
cluster concept incorporates the air conditioning as well, by rating each rack at a minimum 2 kW load dissipation with a
planned upgrade path up to 20 kW per rack.
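As a rough planning aid, the air-conditioning capacity for such a cluster can be estimated from the per-rack ratings. The 50 kW CRAC unit size and N+1 redundancy below are illustrative assumptions, not figures from the standard:

```python
import math

def crac_units_needed(racks_kw, unit_capacity_kw=50.0, redundant_units=1):
    """Estimate CRAC units for a cluster: enough capacity to remove the
    total rack heat load, plus standby units for redundancy (N+1 by default).
    unit_capacity_kw and redundant_units are illustrative assumptions."""
    total_load_kw = sum(racks_kw)  # IT electrical load ~ heat load to remove
    return math.ceil(total_load_kw / unit_capacity_kw) + redundant_units

# Eight racks at the planned 20 kW upgrade ceiling:
# crac_units_needed([20.0] * 8) -> 5 units (4 duty + 1 standby)
```

Running the same calculation at the 2 kW starting load shows why the upgrade path matters: the duty cooling requirement grows tenfold between commissioning and full build-out.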
[Table: access control and monitoring requirements by tier. The provisions range from industrial grade locks at every
tier, through intrusion detection, off-site monitoring and card access at intermediate tiers, up to card or biometric
access, delayed egress and single-person interlocks at the highest tiers.]
CCTV requirements

CCTV monitoring                     Tier 1          Tier 2          Tier 3        Tier 4
Building perimeter and parking      No requirement  No requirement  Yes           Yes
Generators                          N/a             N/a             Yes           Yes
Access controlled doors             No requirement  Yes             Yes           Yes
Computer room floors                No requirement  No requirement  Yes           Yes
UPS, telephone and MEP rooms        No requirement  No requirement  Yes           Yes
CCTV recording on all cameras       No requirement  No requirement  Yes; digital  Yes; digital
Recording rate, frames per second   N/a             N/a             20 f/s min    20 f/s min
[Figure: Building Management System integration. CCTV, access control and monitoring, fire alarms, BMS HVAC and
lighting, and environmental monitoring operate at building, room and rack level, carried over common IP cabling or
dedicated cabling, with local and remote alarm/control.]
Access.
Vibration.
Air flow.
Particles in the incoming air flow.
                                  Tier 1        Tier 2        Tier 3           Tier 4
Site availability                 99.671%       99.749%       99.982%          99.995%
Downtime (hours/yr)               28.8          22.0          1.6              0.4
Operations Center                 Not required  Not required  Required         Required
Redundancy for power, cooling     N             N+1           N+1              2(N+1)
Gaseous fire suppression system   Not required  Not required  Approved system  Approved system
Redundant backbone pathways       Not required  Not required  Required         Required
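The downtime figures follow directly from the availability percentages over an 8,760-hour year, as a quick check shows:

```python
def annual_downtime_hours(availability_percent: float) -> float:
    """Hours of downtime per year implied by an availability percentage,
    assuming a non-leap year of 8,760 hours."""
    return (1 - availability_percent / 100) * 8760

# Tier 1: annual_downtime_hours(99.671) -> 28.8 hours/yr (rounded)
# Tier 4: annual_downtime_hours(99.995) -> 0.4 hours/yr (rounded)
```

Each extra "nine" of availability is roughly a tenfold reduction in allowable downtime, which is why the jump from Tier 2 to Tier 3 (22 hours down to 1.6 hours per year) demands concurrent maintainability rather than just more plant.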
For the last point, it is worth noting that permitted sound levels at work in Europe were reduced in February 2006. The EC Noise
at Work Directive 2003/10/EC was made on 6th February 2003 and repeals and replaces Directive 86/188/EEC with effect
(mainly) from 15th February 2006.
Where is the money likely to go in a data centre? An American example:

Electrical, UPS and generator      40%
HVAC                               15%
Management fees and insurance      11%
Other building works                6%
Raised floor                        5%
Architects' fees                    4%
Data cabling                        4%
Sprinkler and FM200 suppression     4%
Control room                        3%
Facilities inc. cabling design      3%
Electrical design                   2%
BMS system                          1%
Security system                     1%
Plumbing                            1%
Appendix I
Some standards referenced in this document:
ANSI/TIA/EIA-568-B Commercial Building Telecommunications Cabling Standard.
ANSI/TIA/EIA-606-A Administration Standard for the Telecommunications Infrastructure of Commercial Buildings.
ANSI/TIA/EIA-J-STD-607 Commercial building grounding and bonding requirements for telecommunications.
ASHRAE Thermal Guidelines for Data Processing Environments.
BS EN 54 Fire detection and fire alarm systems.
BS 5499-4:2000 Safety signs, including fire safety signs. Code of practice for escape route signing.
BS 5266-1 Code of practice for emergency lighting.
BS 5839-1:2002 Fire detection and fire alarm systems for buildings. Code of practice for system design, installation,
commissioning and maintenance.
BS EN 60702-1:2002 Mineral insulated cables and their terminations with a rated voltage not exceeding 750 V.
BS 6387:1994 Performance requirements for cables required to maintain circuit integrity under fire conditions.
BS 6266:2002 Code of practice for fire protection for electronic equipment installations.
BS 6701 Telecommunication cabling and equipment installations.
BS 7671 Requirements for electrical installations: IEE wiring regulations 16th Edition.
BS ISO 14520-1:2000 Gaseous fire-extinguishing systems - Physical properties and system design - Part 1: General requirements.
BS 8300:2001 Design of buildings and their approaches to meet the needs of disabled people - Code of practice; and
Building Regulations 2000 Part M, Access and facilities for disabled people.
DETR Advice on Alternatives and Guidelines for Users of Fire Fighting and Explosion Protection Systems.
EN 50310 Application of equipotential bonding and earthing in buildings with information technology equipment.
EN 50173-1 Information technology - Generic cabling systems - Part 1: General requirements and office areas.
EN 50174-1 Information technology - Cabling installation - Part 1: Specification and quality assurance.
EN 50174-2 Information technology - Cabling installation - Part 2: Installation planning and practices inside buildings.
EN 50346 Information technology - Cabling installation - Testing of installed cabling.
EN 12825 Raised access floors.
ETS 300 253 Equipment engineering; earthing and bonding of telecommunications equipment in telecommunication centres.
Federal Standard 209E, Airborne Particulate Cleanliness Classes in Cleanrooms and Clean Zones, Class 100,000.
IEC 60309 Plugs, socket-outlets and couplers for industrial purposes - Part 1: General requirements.
IEC 60320 Appliance couplers for household and similar general purposes - Part 1: General requirements.
IEC 60332-3C Tests on electric cables under fire conditions - Part 3-10: Test for vertical flame spread of vertically-mounted
bunched wires or cables.
IEC 60364 Electrical installations of buildings, various parts, including Part 5-548: Earthing arrangements and equipotential bonding for information technology equipment.
IEEE STD 1100-1999 Powering and Grounding Sensitive Electronic Equipment.
NFPA 262: Standard Method of Test for Flame Travel and Smoke of Wires and Cables for use in Air-Handling Spaces:2002.
ISO/IEC 14763-1 Information technology - Implementation and operation of customer premises cabling - Part 1: Administration.
ISO/IEC 11801:2002 Information technology - Generic cabling for customer premises.
ITU-T K.27 Bonding configurations and earthing inside a telecommunications building.
ITU-T K.31 Bonding configurations and earthing of telecommunication installations inside a subscriber's building.
The Property Services Agency (PSA) Method of Building Performance Specification 'Platform Floors (Raised Access Floors)',
MOB PF2 PS.
TIA 942 Telecommunications Infrastructure Standard for Data Centers, April 2005.
VDI 2054, Air conditioning systems for computer areas.