
Data centre design

Contents
1.0 - Purpose
2.0 - Disclaimer
3.0 - Introduction
4.0 - Physical location of the data centre
5.0 - Sizing and capability audit
6.0 - The hot aisle/cold aisle design concept
7.0 - Specifying a raised floor
8.0 - Equipment racks and cabinets
9.0 - Heating, Ventilation and Air Conditioning (HVAC) within the data centre
10.0 - Electrical systems to and within the data centre
11.0 - Earthing, bonding and the Signal Reference Grid
12.0 - Fire detection, alarm and suppression within the data centre
13.0 - Communications cabling and containment
14.0 - Security, access control and CCTV
15.0 - Building Management Systems, from rack to room level
16.0 - Tiering, H&S and other project management issues

Appendix 1 - Standards referenced


1.0 - Purpose
This document is a design tool to assist designers in identifying all the processes and activities required to fully define the
requirements of a data centre to industry standards and best practice parameters. It will allow a preliminary design stage to be
reached, with a client feedback loop, enabling fully costed design proposals to be undertaken.

2.0 - Disclaimer
This document is intended for the use of persons qualified in the electrical, mechanical and construction requirements of a data
centre. This document quotes figures and extracts from international standards but this does not absolve the user from full
knowledge and usage of the original standards themselves. Every effort has been made to supply a complete and up-to-date technical précis of the current international, European and British standards and regulations concerned, but fitness-for-purpose and the final design remain the responsibility of the document user.


Except where other documents have been quoted, this document remains the copyright of Engineering Education Ltd and its
reproduction is forbidden under the Copyright, Designs and Patents Act 1988. Licences may be obtained from licenses@engineeringeducation.co.uk.

3.0 - Introduction
A data centre is "a building or portion of a building whose primary function is to house a computer
room and its support areas", according to TIA 942.
This design guide is based upon the requirements of TIA 942 Telecommunications Infrastructure Standard for Data Centers,
April 2005.
Although this is an American standard invoking other American standards and codes it is far more substantive than the
equivalent CENELEC EN 50173-5 Data centre standard, which is still at draft stage. However this document expands upon
the TIA 942 standard and incorporates all the requirements of European and British standards, Directives and Regulations.
These include EN 50173, EN 50174, EN 50310, BS 5839, BS 6701, BS 7671, the UK Building Regulations, the Disability
Discrimination Act and many others. They are all detailed in Appendix 1.
Many diverse areas need to be addressed to fully design and specify a data centre. It is essential to agree at the start
of the project exactly who is responsible for every item; otherwise the final build will be severely compromised if a vital design
element has been overlooked or is incompatible with other services.
A data centre design project can be split into the following sections;
1. Location.
2. Construction.
3. Definition of the spaces and size available.
4. Planning the layout of the computer room floor.
5. Designing the raised floor.
6. Calculating day one and future IT requirements.
7. Calculating the day one and future air conditioning requirements.
8. Deciding upon the type and location of the air conditioning units.
9. Calculating day one and future power supply requirements.
10. Sizing and location of UPS and standby generators.
11. Designing the earth bonding and signal reference grid.
12. Designing the power distribution system within the computer room and within the equipment racks.
13. Lighting, emergency lighting and signage.
14. Access control, security and CCTV requirements.
15. Fire detection, alarm and suppression system, including hand-held fire extinguishers.
16. Specifying and designing the structured cabling system and its containment system.
17. Organising connections to external telecommunications providers and the Entrance room.
18. Integration of Building Management Systems with other command and monitoring networks and their appearance at a control room.
19. Project management issues, health & safety and ongoing operational and maintenance issues.
Data centre projects are either green field new-build projects or conversion/renovation projects. In either case it is advisable to
undertake a complete audit of what exists already or on the proposed designs.
Apart from meeting the day one designs and proposed expansion plans, it is also necessary to decide which level of
backup or redundancy will be built into the finished location. For data centres these levels are now designated as Tier
1, 2, 3 or 4, with Tier 4 being the highest level of redundancy.


The Tiering level is described in great detail in the TIA 942 standard, which in turn has taken much of its philosophy from the
Uptime Institute. A very brief summary is given in the table below. In the terminology of redundant systems, N means enough
equipment to do the job, N+1 means one additional unit to act as a redundant supply, whereas 2(N+1) means two complete
independent paths, each with its own redundant unit.

Site availability: Tier 1 - 99.671%; Tier 2 - 99.749%; Tier 3 - 99.982%; Tier 4 - 99.995%.
Downtime (hours/yr): Tier 1 - 28.8; Tier 2 - 22.0; Tier 3 - 1.6; Tier 4 - 0.4.
Operations Centre: Tiers 1 and 2 - not required; Tiers 3 and 4 - required.
Redundancy for power, cooling: Tier 1 - N; Tiers 2 and 3 - N+1; Tier 4 - 2(N+1).
Gaseous fire suppression system: Tiers 1 and 2 - not required; Tiers 3 and 4 - approved system.
Redundant backbone pathways: Tiers 1 and 2 - not required; Tiers 3 and 4 - required.

[Diagram: the relationship of the spaces within a data centre - the building shell contains general office space, telecommunications and equipment rooms serving spaces outside the data centre, and the data centre itself, which contains support staff offices, entrance room(s), electrical and mechanical rooms, an operations centre, telecommunications room(s) serving data centre spaces, storage rooms and loading docks, and the computer room.]
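As a quick cross-check of the availability figures above, the annual downtime follows directly from the availability percentage. The short sketch below is illustrative only and uses the nominal 8,760-hour year; it reproduces the downtime row of the table.

    # Convert TIA 942 tier availability percentages into hours of downtime per year.
    HOURS_PER_YEAR = 8760

    tier_availability = {1: 99.671, 2: 99.749, 3: 99.982, 4: 99.995}  # percent

    for tier, availability in tier_availability.items():
        downtime_hours = (1 - availability / 100) * HOURS_PER_YEAR
        print(f"Tier {tier}: {downtime_hours:.1f} hours downtime per year")
    # Prints 28.8, 22.0, 1.6 and 0.4 hours respectively, matching the table.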


4.0 - Physical location of the data centre


Physical location and architectural audit

4.1 - Does the building and rooms exist, or are building works required?
4.2 - Any known seismic problems?
4.3 - Any known subsidence problems?
4.4 - Any known flooding problems? Recommendation: not on a 100-year flood plain. Ref: TIA 942 (F.6).
4.5 - Any known security/criminal problems likely with this area?
4.6 - Is connection to mains/telecoms services available?
4.7 - Is there very close proximity to main roads, railway lines, airports, oil or chemical storage or works? Recommendation: should be 0.8 km away from a major highway and 0.4 km away from chemical plants, dams etc. Ref: TIA 942 (F.6).
4.8 - Is there easy access to the site?
4.9 - Are there lifts/goods lifts available if not on the ground floor?
4.10 - Are there any excessive external noise sources?
4.11 - Will this unit be a cause of noise or disturbance to adjoining offices?
4.12 - Any potential EMC problems, e.g. mobile phone masts, lift motors on the other side of a wall etc? Recommendation: any interfering fields should be less than 3 V/m. Ref: TIA 942 (F.2).
4.13 - Is there access to a suitable external site for the air con heat exchangers?
4.14 - Any other known safety or location issues that need to be recorded, such as the presence of asbestos?
4.15 - Is the building or room susceptible to lightning strikes?
4.16 - Is out-of-hours access to the site possible?
4.17 - Are there any issues concerning planning permission, conservation zones or building listing?
4.18 - Are there separate office, storage or parking areas available for contractors?
4.19 - Has the room or design been audited to comply with the Disability Discrimination Act? Recommendation: the requirements of the Disability Discrimination Act may be taken from BS 8300:2001 Design of buildings and their approaches to meet the needs of disabled people. Code of practice, and Building Regulations 2000 Part M Access and facilities for disabled people.


5.0 - Sizing and capability audit

5.1 - What are the dimensions of the data centre?
5.2 - What are the dimensions of the computer room?
5.3 - What other areas have been allocated, e.g. office area, entrance room etc?
5.4 - What is the height of the computer room? Recommendation: a minimum of 2.6 m from the finished floor. Ref: TIA 942.
5.5 - Is the floor load capacity acceptable? Recommendation: the minimum distributed floor loading capacity shall be 7.2 kPa; the recommended distributed floor loading capacity is 12 kPa. Ref: TIA 942.
5.6 - Where are the doors and what are their sizes? Recommendation: doors shall be a minimum of 1 m wide and 2.13 m high, without doorsills, hinged to open outward (code permitting) or slide side-to-side, or be removable. Doors shall be fitted with locks and have either no centre posts or removable centre posts to facilitate access for large equipment. Exit requirements for the computer room shall meet any other local requirements. Ref: TIA 942.
5.7 - Is there lighting in place and is it adequate? Recommendation: lighting shall be a minimum of 500 lux in the horizontal plane and 200 lux in the vertical plane, measured 1 m above the finished floor in the middle of all aisles between cabinets. Ref: TIA 942.
5.8 - Is emergency lighting and signage fitted/planned? Recommendation: emergency lighting is principally described in BS 5266-1, The Code of Practice for Emergency Lighting, amongst others. Exit signage is principally described in BS 5499-4:2000 Safety signs, including fire safety signs. Code of practice for escape route signing. Ref: BS 5266-1, BS 5499-4.
5.9 - Will the emergency lighting require its own battery back-up supply? Recommendation: to BS 5266. Ref: BS 5266.
5.10 - Is the basic décor acceptable? Recommendation: décor to be finished in a light colour with minimal glare and dust generation.
5.11 - Does equipment not related to the support of the computer room (e.g. piping, ductwork etc.) pass through, or enter, the computer room? Recommendation: there should be no other services passing through the computer room. Ref: TIA 942.
5.12 - Is there a fresh water supply and drainage network available? Ref: TIA 942.
Data centre design

6.0 - The hot aisle/cold aisle concept


In trying to design a standardised, modular and upgradeable space for I.T. and communications equipment much thought
needs to be given to rack location and the method of supplying power, communications and refrigerated air to it.
The standard model has been defined by TIA 942, ASHRAE and other authoritative sources as a front-to-back
cooling regime based on rows of racks facing each other. Cold air is supplied to the front of these racks through air vents
placed in the raised floor in front of them. The chilled air is fed to these vents from air conditioning units blowing into the plenum
space formed by the raised floor. The vented aisle is thus known as the cold aisle, and the cold air is drawn through the
equipment racks by the I.T. equipment's own fans and expelled out of the back into what is now the hot aisle. The rising hot
air from this aisle finds its way back to the air conditioning unit to be chilled again and repeat the cycle.
The fronts of the two facing racks are two whole floor tiles apart and when the depth of the rack and the necessary access
clearance space behind it is taken into account we can see that the minimum realistic pitch before the process repeats itself
is seven tiles.
Feeding cold air through standard 25% open floor vents into a rack with no additional cooling methods normally limits the heat
dispersion to about 2kW per rack, or about five average servers. Other upgrade paths are available to get more air through
the rack and this will be explained in more detail later.
A lot of communications equipment is designed for side-to-side cooling, so additional consideration needs to be given to
cope with this variation, but the hot aisle/cold aisle, 7-tile pitch system is considered to be the base model
by the relevant standards and industry sources.
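To put the layout figures above into numbers, the short sketch below is illustrative only; it assumes 600 mm floor tiles, the roughly 2 kW per 25% open vent quoted above, and an example room depth.

    # Rough layout arithmetic for the hot aisle/cold aisle model (assumed figures).
    TILE = 0.6             # m, standard raised-floor tile
    AISLE_PITCH_TILES = 7  # cold aisle + two racks + hot aisle, per the 7-tile pitch
    KW_PER_VENT = 2.0      # approximate cooling per standard 25% open vent (see text)

    pitch_m = AISLE_PITCH_TILES * TILE      # 4.2 m per repeating pair of rack rows
    room_depth_m = 16.8                     # example room dimension (assumed)
    row_pairs = int(room_depth_m // pitch_m)

    print(f"Aisle pitch: {pitch_m} m; facing row pairs in {room_depth_m} m: {row_pairs}")
    print(f"Cooling per rack from one standard vent: about {KW_PER_VENT} kW")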

[Diagram: plan and section of the hot aisle/cold aisle layout at a 7-tile aisle pitch - an air conditioning unit blows chilled air into the plenum space under the raised floor, the air rises through vents into the cold aisles in front of the racks, and returns to the unit from the hot aisles behind them.]


7.0 - Specifying a raised floor


The raised floor will be based on 600 x 600 mm floor tiles with an anti-static finish to IEC 61000-4-2 and will be not less than
300 mm in height. A guide to floor heights, when the floor is used as an air distribution plenum, comes from VDI 2054, Air conditioning
systems for computer areas.
Floor area: height of raised floor according to VDI 2054
200 - 500 m²: approx 400 mm
500 - 1000 m²: approx 700 mm
1000 - 2000 m²: approx 800 mm
>2000 m²: >800 mm

Other guidance for floor height comes from;
450 to 600 mm, IBM
450 mm minimum, 600 mm ideal, SUN
300 to 600 mm, BS EN 12825
300 mm minimum, TIA 569
The Property Services Agency (PSA) Method of Building Performance Specification 'Platform Floors (Raised Access Floors)',
MOB PF2 PS, became the de facto industry standard in the UK for about 20 years until the recent arrival of the BS EN
12825:2001 specification.
In July 2001 a European Standard EN 12825 Raised access floors, was approved by CEN as a voluntary specification for
private projects and mandatory for public projects.
For the floor strength, the minimum distributed floor-loading capacity shall be 7.2 kPa. The recommended distributed floor
loading capacity is 12 kPa (TIA 942). From MOB PF2 PS and BS EN 12825 this means specifying Heavy Duty or preferably
Extra Heavy Duty floor grade.
The plenum area formed under the raised floor must be clean, dust free, fitted with a vapour barrier and sealed to an
air permeability of no more than 3 m³/h/m² at 50 Pa (Building Regs Part F).
The reasons for pressure sealing the plenum area are;
Chilled, conditioned air will be able to escape through poorly finished floor tiles and service penetrations,
leading to;
- More electricity consumed to replace that air.
- An inability to deliver the volume of chilled air required at the floor vents.
- A variation in air pressure across the floor, leading to an inability to deliver chilled air at the air vents.
Unsealed service penetrations (cables/pipes etc) into the plenum area are a fire risk and will allow the spread of fire
and smoke into or out of the computer room (Building Regs, part B).
Gaseous fire suppression systems rely on lowering the level of oxygen available to the fire and depend upon a sealed
area to prevent oxygen from re-supplying it. BS ISO 14520 P1: 2000(E), Gaseous fire-extinguishing
systems. Physical properties and system design. General requirements, requires a pressure test every twelve months.
An aspirating (early warning) smoke detection system shall be placed in the plenum zone (TIA 942). Where a need for a fire
suppression system in a sub floor space is deemed appropriate, consideration should be given to clean agent systems as a
means to accomplish this protection (TIA 942).
The under floor area must not be used for any purpose other than the supply of air and the distribution of cables. Cables
must be fire rated according to the local jurisdiction and must be placed so as not to impede airflow. All redundant cables must
be removed (National Electrical Code 2002).


8.0 - Equipment racks and cabinets


Computing and communications equipment has been located in racks, usually 19-inch based, for at least the last thirty years.
Racks, or frames, come in all shapes and sizes; from a few hundred millimetres high to over two metres high; 600 or 800 mm
wide and from 600 to 1200 mm deep. The internal fittings are usually based on a 23-inch pitch for telecommunications and
19-inch for everything else. A handful of EIA, IEC and ETSI standards cover the physical dimensions of the rack, such as EIA-310-D. The vertical spacings for the installed equipment are based on Rack Units, or just U, where one U is 44.45 mm (1.75 inches).
The main frame of the rack can be based on a four-post construction, i.e. to make a rectangular frame, or the space-saving
two-post system which is essentially two pieces of vertically placed metal spaced 19-inches apart (apologies for mixing
metric and imperial units here but that is the common practice!). A server rack needs to be a four-post enclosed unit.
The purpose of the rack is;
To hold and securely locate electronic equipment.
To provide an organised routing for power and communications cabling.
To assist in the airflow and cooling of the equipment.
To provide the above in an aesthetically pleasing construction.
8.1 - Size
Racks/cabinets are usually 600 mm wide, with a usable internal space of 42U for 19-inch rack-mounted equipment. This
gives a rack height of just over two metres. Slightly larger (and of course smaller) versions are available, but 42U seems a
popular choice. Depth is at least 800 mm but may be up to 1.2 m. A one-metre depth allowance seems average.
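As a rough check on the height figure, the sketch below is illustrative only; the frame allowance is an assumed value, not a manufacturer's figure.

    # Convert rack units into an overall cabinet height (assumed frame allowance).
    U_MM = 44.45                 # one rack unit in millimetres (1.75 in)
    usable_u = 42
    frame_allowance_mm = 150     # assumed top/bottom frame and plinth allowance

    internal_mm = usable_u * U_MM               # about 1867 mm of mounting space
    overall_mm = internal_mm + frame_allowance_mm
    print(f"Internal space: {internal_mm:.0f} mm, overall height: about {overall_mm:.0f} mm")
    # A 42U cabinet therefore comes out at just over two metres, as stated above.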
TIA 942 states: "Refer to ANSI T1.336 for additional specifications for cabinets and racks. In addition to the requirements specified in T1.336,
cabinet and rack heights up to 2.4 m and cabinet depths up to 1.1 m may be used in data centers (although 2.1 m is
recommended)."
Cabinets should have adjustable front and rear rails. The rails should provide 42 or more rack units (RUs) of mounting space.
Rails may optionally have markings at rack unit boundaries to simplify positioning of equipment. Active equipment and
connecting hardware should be mounted on the rails on rack unit boundaries to most efficiently utilize cabinet space.
If patch panels are to be installed on the front of cabinets, the front rails should be recessed at least 100 mm (4 in) to provide
room for cable management between the patch panels and doors.


8.2 - Ventilation
This is a key area of differentiation between standard equipment racks and server racks. A server rack must cope with the
ventilation demands of many kilowatts worth of electrical equipment. A standard glass-fronted rack with horizontal fan tray
fitted can only cope with the cooling demands of less than a kilowatt.
It would appear that a suitably ventilated rack, supplied with adequate chilled air through a standard floor tile, can cope with
about two kilowatts of heat dissipation, where the motive force through the rack is only provided by the fans within the server
units themselves.
The amount of ventilation required is stated by several sources and is expressed as a ratio of open space to overall door
area, e.g.;
...servers require that the front and back cabinet doors to be at least 63% open for adequate airflow. SUN
One method of ensuring proper cooling is to specify rack doors that provide over 830 in2 (0.53 m2) of ventilation
area or doors that have a perforation pattern that is at least 63% open. APC
Racks (cabinets) are a critical part of the overall cooling infrastructure. HP enterprise-class cabinets provide 65
percent open ventilation using perforated front and rear door assemblies. To support the newer high-performance
equipment, glass doors must be removed from older HP racks and from any third-party racks. HP
the cabinet should either have no doors or, if required for security, doors with a minimum 60% open mesh for
maximum airflow and is best not equipped with top mounted fan kits. Chatsworth
Ventilation through slots or perforations of front and rear doors to provide a minimum of 50% open space. Increasing
the size and area of ventilation openings can increase the level of ventilation. TIA 942
When the heat load goes above about 2 kW (about 5 average servers) then an escalation policy is required, which can take
the form of;
Increasing floor tile vent size up to 75% open area.
Replacing floor tiles with fan assisted grate tiles.
Adding specialised fan units to the top and/or bottom of the rack.
Using cabinets where the entire rear door is a fan unit.
The above solutions will take the heat dissipation capability up to about 6 kW per rack. Above that, more specialised racks
need to be used, where the whole rack is fed by a chilled water supply. These designs can cope with loads in excess of 20
kW. New designs using liquid carbon dioxide claim cooling capacities of over 30 kW per rack.
It is also important that the front to back cooling scheme adopted in such racks is not compromised by gaps in the rack
allowing cooled air to mix with hot air drawn back through the gaps (Thermal Guidelines for Data Processing Environments
ASHRAE). For this reason all gaps in the rack must be filled in with blanking plates. Also excessive gaps for cabling at the
side of the racks should be sealed with an air dam kit and any cable entry points at the bottom of the rack should also be
sealed with a brush strip.


8.3 - Power
The rack needs to be powered, and in Europe this would generally be provided by a 16 or 32 amp, 230 V single-phase feed
through an IEC 60309 connector. At least two feeds are required for redundancy and backup purposes, so a dual 32 amp feed
would be counted as supplying 32 x 230 = 7.36 kVA (remember that useful power is measured in watts, which is amps x volts
x power factor).
For loads above 7 kVA, either more 32 amp feeds are supplied or a three-phase supply is provided, which would
normally deliver at least 22 kW through a five-pin version of the IEC 60309 connector. For a three-phase supply, Regulation 514-10-01 of BS 7671 requires a warning notice to be secured in such a position that the warning is seen before access is gained to live
parts.
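A short sketch of the supply arithmetic quoted above; illustrative only, with 230 V single phase and 400 V line-to-line three phase taken as assumed nominal values.

    # Rack feed capacity for single-phase and three-phase IEC 60309 supplies.
    import math

    def single_phase_kva(amps, volts=230):
        return amps * volts / 1000.0

    def three_phase_kva(amps, line_volts=400):
        return math.sqrt(3) * line_volts * amps / 1000.0

    print(f"32 A single phase: {single_phase_kva(32):.2f} kVA")  # 7.36 kVA, as above
    print(f"32 A three phase: {three_phase_kva(32):.1f} kVA")    # about 22 kVA, in line with the roughly 22 kW quoted
    # Useful power in kW is the kVA figure multiplied by the load's power factor.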
Within the rack the power is distributed by what is widely known as a power distribution unit, or PDU. There does not seem
to be a widely accepted definition of a PDU and at its simplest it is just a power strip of sockets that distributes the incoming
electricity to the rack equipment. However more functionality is available in the form of;
Sequential start up.
Automatic crossover switch between two supplies.
Power line conditioning.
Reporting function about status and power usage. This in turn may be a simple LED readout on the unit or part of an
IP addressable managed system.
8.4 - Control and monitoring
A data centre server rack must be secure and be able to monitor and report its environmental status back to some central
control point. The monitoring system may be part of a building-wide Building Management System (BMS), an add-on localised
monitoring scheme or a built-in rack-monitoring scheme designed and dedicated to the task. TIA 942 states: "A Building
Management System (BMS) should monitor all mechanical, electrical, and other facilities equipment and systems."
The rack sensor system should be able to detect the following;
Temperature.
Smoke.
Water.
Humidity.
Access.
Vibration.
Airflow.
Particles in the incoming airflow.
And respond with one or more of the following;
Visual alarm on top of cabinet.
Audible alarm.
Networked alarm.
CCTV.


8.5 - Rack location


The standard model described in TIA 942 and elsewhere depends upon the hot-aisle/cold-aisle concept described in section
6 of this document. In this model, chilled air is pumped into the plenum/raised floor area beneath the racks and made
available through vented floor tiles placed in front of the racks. This also requires the 7-tile pitch approach.
[Diagram: 7-tile pitch rack location - the fronts and rears of the cabinets are aligned with the edges of the floor tiles, two complete tiles form the cold aisle between the facing cabinet fronts, and the rows of tiles in the cold aisle and hot aisle remain liftable for access.]

The 7-tile pitch requires that the front edges of the two facing cabinets are placed in line with the edge of a floor tile, and that two
complete floor tiles, i.e. 1.2 m, separate the two facing cabinets, thus forming the cold aisle. The depth of the rack will cover
about one and a half floor tiles, so a complete floor tile is needed in the hot aisle for access. This arrangement means that
the set will repeat itself every seven tiles, or 4.2 metres.
Apart from the 7-tile arrangement, TIA 942 also requires a minimum of 1 m of front clearance for installation of
equipment and a minimum of 0.6 m of rear clearance for service access, although a rear clearance of 1 m (3 ft) is
preferable. Some racks have split rear doors to facilitate rear clearance.
IEEE 1100, referenced in TIA 942, suggests a clearance of two metres from building structural steel in case of lightning
flashovers.


8.6 - Cable management


Cables may enter the rack from the top, the bottom, or both. If coming up from the bottom, a cable brush seal is required
to prevent chilled air from entering and confusing the front-to-rear airflow scheme.
All cables shall be neatly dressed and secured, with minimum bend radii protected according to the standards or the
manufacturer's instructions. All cables must be adequately labelled as described in TIA 942, TIA 606 and elsewhere.
A vertical cable manager shall be installed between each pair of racks and at both ends of every row of racks. The vertical
cable managers shall be not less than 83 mm in width. Where single racks are installed, the vertical cable managers should
be at least 150 mm wide. The cable managers should extend from the floor to the top of the racks.
Horizontal cable management panels should be installed above and below each patch panel. The preferred ratio of horizontal
cable management to patch panels is 1:1.
8.7 - Health and Safety
Equipment racks can be very heavy, over 500 kg. It is essential that;
1. The concrete floor beneath the raised floor is strong enough, and finished flat.
2. The raised floor is strong enough, the pedestals are securely fixed and the floor is finished flat.
3. Equipment racks are levelled on the raised floor.
4. Local seismic regulations for fixing are obeyed.
5. Heaviest equipment is placed at the bottom.
6. Extendible stabilisers are used when sliding heavy equipment out of a rack.
7. Racks are bayed together.
8. All racks, including doors, are earthed according to local regulations.
9. Any removed floor tile positions are surrounded by warning signs.


9.0 - Heating, ventilation and air conditioning


TIA 942 recommends that the following conditions be maintained in the computer room;
Relative humidity: 40 to 50%.
Dry bulb temperature: 20°C to 25°C.
Max dew point: 21°C.
Max rate of change: 5°C per hour.
A positive pressure will be maintained with respect to surrounding areas.
The precision air conditioning facility must be available 24 hours a day, 365 days per year and connected to the standby
generator in the event of a mains failure.
The ambient temperature and humidity shall be measured after the equipment is in operation. Measurements shall be done at
a distance of 1.5 m above the floor level every three to six metres along the center line of the cold aisles and at any location
at the air intake of operating equipment. Temperature measurements should be taken at several locations of the air intake of
any equipment with potential cooling problems. Details are contained in Thermal Guidelines for Data Processing Environments.
Air conditioning may be achieved by either;
Direct expansion Computer Room Air Conditioning (CRAC) units in the computer room.
Centralised chiller units supplying chilled water to heat exchange units within the computer room.
Chilled water supplied directly to heat exchange units built into equipment racks.
Or any combination of the above.


Small to medium sized data centres tend to go for direct expansion (DX) CRAC units placed in the computer room. Larger
facilities tend to go towards the centralised chiller and cold water distribution. Directly cooled racks have so far tended to be
an upgrade path when conventional room cooling runs out of capacity, but there is no reason why they couldn't be designed
in from the start, especially when floor space is at a premium.
The mathematics of air conditioning shows that to remove one kilowatt of heat and cool an item by around 11°C,
approximately 160 cfm (cubic feet per minute), or 74 litres/second, of air needs to flow through that equipment.
The literature suggests that in practice an adequately constructed and sealed raised floor, supplied with adequate chilled air,
can supply about 320 cfm of air through a standard 25% floor vent, which implies that one floor vent, in these circumstances,
can cool around 2 kW of equipment if placed in front of an equipment rack. There are many variables in this equation, e.g.
Are the CRAC units supplying a sufficient volume of air at the correct temperature?
Is the underfloor plenum area deep enough and clutter free to allow free airflow?
Is the underfloor plenum sealed enough to maintain the correct excess air pressure?
Is the excess pressure evenly distributed around the floor area? This in turn depends upon the above factors, plus
depth of floor void and number, size and location of other floor vents.
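A short sketch of the airflow arithmetic described above, illustrative only and using the per-kilowatt and per-vent figures quoted in the text:

    # Airflow needed to remove heat at roughly an 11 degree C rise (figures from the text).
    CFM_PER_KW = 160       # cubic feet per minute per kilowatt removed
    LPS_PER_KW = 74        # litres per second per kilowatt (same figure in metric)
    VENT_CFM = 320         # approximate delivery of a standard 25% open floor vent

    rack_load_kw = 2.0
    required_cfm = rack_load_kw * CFM_PER_KW
    vents_needed = required_cfm / VENT_CFM
    print(f"{rack_load_kw} kW rack needs about {required_cfm:.0f} cfm "
          f"({rack_load_kw * LPS_PER_KW:.0f} l/s), i.e. {vents_needed:.1f} standard vents")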
A successful design thus depends upon;
Designing an appropriate raised floor.
Sizing the air conditioning requirements in light of day-one load, future expansion and redundancy requirements.
Correctly positioning the CRAC units and air return path.
Correct location of the equipment racks in the hot-aisle/cold aisle format with the 7-tile pitch layout.
Correct location of the floor vents to deliver the chilled air directly to the rack.
Correct construction and loading of the rack to maintain the desired airflow.
Correct location of the external air handling/chiller units, which need a strong mounting plinth, a secure area and
electrical and plumbing connections.


Up until the early part of this century, the average heat load developed in a rack was only around 1 kW and cooling did not
need to be a closely controlled activity, as simple whole-room cooling would suffice. But now, with 1U servers and blade servers,
the potential heat generation is enormous. The average server has a running load of about 400 watts, meaning that a 2 kW
cooling capacity equates to only five servers per rack. Putting 42 of these servers in a rack, just because they fit, would
develop over 16 kW of heat, and blade servers would generate over 20 kW.
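The per-rack arithmetic in the paragraph above can be sketched as follows; illustrative only, using the 400 W average running load quoted in the text:

    # How quickly per-rack heat load escalates with 1U servers (figures from the text).
    WATTS_PER_SERVER = 400        # average running load of one server
    COOLING_PER_VENT_KW = 2.0     # what one standard 25% open floor vent supports

    servers_supported = COOLING_PER_VENT_KW * 1000 / WATTS_PER_SERVER  # 5 servers
    full_rack_kw = 42 * WATTS_PER_SERVER / 1000                        # 16.8 kW
    print(f"One vent supports about {servers_supported:.0f} servers")
    print(f"A fully loaded 42U rack of 1U servers develops about {full_rack_kw:.1f} kW")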
Underfloor plenum cooling can supply about six kW of cooling capacity by the use of one or more of the following upgrade
methods;
Use a larger floor tile, up to 75% open area.
Use a fan assisted floor grate.
Use specialised blowers in the rack to bring more airflow into the rack and distribute it across the front face of the
equipment.
Use rear doors on the racks that are full length blower units.
Beyond about six kW, underfloor plenum cooling of racks becomes impractical and the next stage is water-cooling of the entire
rack.
Water is much more effective at removing heat than air. A water-cooled rack can dissipate in excess of 20 kW of heat. These
racks need to be plumbed into an existing chilled water generation and distribution system that would need to be placed
outside of the equipment room. Liquid carbon dioxide cooling plants are also available now. CO2 is even more efficient than
water and can remove in excess of 30 kW of heat from a rack.
Directly cooled racks are thus much more efficient in terms of floor space used but they are more expensive to buy, need
plumbing in, and an external chiller plant still needs to be built.
For anything larger than a medium-sized rectangular computer room, it is advisable to use a
computational fluid dynamics software program to model the airflow and cooling capacity of an HVAC design.

[Diagram: airflow in a standard hot-aisle/cold-aisle model - the Computer Room Air Conditioning (CRAC) unit blows chilled air into the underfloor plenum, the air rises through the floor vents into the cold aisles, passes through the racks and returns to the CRAC unit as hot air from the hot aisles.]


The diagram above shows the CRAC unit as the source of the chilled air, pumping it into the underfloor plenum space. Air
escapes into the cold aisle through the floor vents, passes through the racks, cooling them on the way, and appears in the hot
aisle, where it rises. It then returns back to the CRAC unit to repeat the process. The CRAC units are located at the end of the
hot aisles to facilitate the shortest return path back to the CRAC. Once the room goes over a certain size it is advisable to
improve the return path by adding a ceiling plenum, with fans, to scavenge the hot air and direct it back to the CRAC units. It
has been suggested that this would be beneficial once the floor area extends beyond 400 m², although a dedicated return
plenum would benefit any size computer room.
Another item to take into account is locating the floor vents at the correct distance from the CRAC unit. Too close and the air
velocity will cause a negative pressure at the vent relative to the air in the room above and suck in hot air instead of blowing
cold air out. The minimum distance is about two metres before effective cooling takes place. The maximum distance from the
CRAC unit again depends upon factors such as air volume from the CRAC unit, floor depth, obstructions, number and size of
floor vents etc., but a figure of ten metres seems to be commonly accepted.
Some items, particularly communications equipment, are not designed for front-to-rear cooling but side-to-side cooling, or
even both at the same time!
Side-to-side items may be cooled by;
Placing in a low density environment on a two post frame with chilled air generally supplied from a floor vent.
Placing in a standard server rack with a front-to-side cooling converter fan fitted.
Chilled water cooling matrices placed at the sides of the open frames that will allow chilled air to be directed in a
side-to-side direction.
APC, a major supplier of IT air conditioning, offers the following estimating tool to help calculate the cooling capacity required
of a computer room. Note the usual running load should be used for the IT equipment, not the nameplate rating, which is
usually one third higher than the normal running load.
The battery/UPS calculation is only required if the battery/UPS system is in the same computer room. TIA 942 recommends
that UPS systems greater than 100 kVA be placed in another room.
Note that allowance should also be made for future expansion and redundancy in air conditioning calculations.
IT equipment - data required: total IT load power in watts; heat output calculation: total IT running load, not nameplate values.
UPS with battery - data required: power system rated power in watts; heat output calculation: (0.04 x power system rating) + (0.06 x total IT load power).
Power distribution - data required: power system rated power in watts; heat output calculation: (0.02 x power system rating) + (0.02 x total IT load power).
Lighting - data required: floor area in square metres; heat output calculation: 21.5 W per square metre.
People - data required: maximum number of people; heat output calculation: 100 W per person.
Total - the sum of the above heat output subtotals, in watts.
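A minimal sketch of the estimating tool above; the coefficients come from the table, while the input values are assumed example figures rather than recommendations.

    # APC-style computer room heat output estimate (coefficients from the table above).
    def room_heat_load_watts(it_load_w, power_system_rating_w, floor_area_m2, people):
        it = it_load_w                                          # running load, not nameplate
        ups = 0.04 * power_system_rating_w + 0.06 * it_load_w   # UPS with battery
        pdu = 0.02 * power_system_rating_w + 0.02 * it_load_w   # power distribution
        lighting = 21.5 * floor_area_m2
        persons = 100 * people
        return it + ups + pdu + lighting + persons

    # Example (assumed figures): 60 kW IT load, 80 kW rated power system, 200 m2, 4 people.
    total = room_heat_load_watts(60_000, 80_000, 200, 4)
    print(f"Estimated heat load: {total / 1000:.1f} kW")   # about 74.3 kW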


Fresh air
Even with air conditioning, the computer room needs to be ventilated. Air should be changed at least ten times per hour. British
building regulations also require an air supply of ten litres per second per person, doubling if printers or photocopiers are in
use.
Incoming air must be filtered with airborne particulate levels maintained within the limits of Federal Standard 209E, Airborne
Particulate Cleanliness Classes in Cleanrooms and Clean Zones, Class 100,000.
Air from sources outside the building should be filtered using High Efficiency Particulate Air (HEPA) filtration rated at 99.97%
efficiency (DOP Efficiency MIL-STD-282) or greater.
As the external temperature at British latitudes is below 22°C for about 70% of the year, some of the huge electricity bills
associated with cooling data centres can be mitigated by taking even larger volumes of outside air during the autumn, winter
and spring months, with the minimum ventilation rate maintained for the summer months.
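A short sketch of the fresh air figures above; illustrative only, with the room volume and occupancy as assumed example values, and taking whichever requirement governs.

    # Fresh air requirement: at least 10 air changes per hour, plus 10 l/s per person
    # (doubling where printers or photocopiers are in use), per the text above.
    room_volume_m3 = 200 * 2.6            # assumed 200 m2 room with a 2.6 m ceiling
    air_changes_per_hour = 10
    people, printers_present = 4, False

    ach_lps = room_volume_m3 * air_changes_per_hour * 1000 / 3600   # convert m3/h to l/s
    per_person_lps = people * (20 if printers_present else 10)
    print(f"Air-change requirement: {ach_lps:.0f} l/s; occupancy requirement: {per_person_lps} l/s")
    print(f"Design fresh air rate: {max(ach_lps, per_person_lps):.0f} l/s")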


10.0 - Electrical systems


10.1 - Sizing the power requirement
The data centre must be supplied with sufficient electrical power to cope with day-1 demands and foreseeable expansion
plans. Suitable back-up equipment, e.g. Uninterruptible Power Supplies, UPS, and standby generators must also be
considered.
The design goals are to specify;
The main electricity feed from the utility company.
The distribution system around the data centre.
Power distribution within the equipment racks.
The UPS/Standby power system.
The power distribution system also needs to be planned in accordance with the Tier 1 to 4 requirements of TIA 942.
The first step is to understand the quantity of power required, at day one and when expansion plans are taken into account.
Some general rules;
Nameplate values can be derated by 33% for normal running power.
UPS efficiency is typically 88%, i.e. 12% of the input power is consumed.
Recharging UPS batteries needs 20% of rated power.
Lighting: allow 21.5 W per square metre.
Air conditioning can take 100% of its rated cooling capacity.
The normal I.T. running load is therefore the sum of all the nameplate ratings of all the equipment, multiplied by about 0.67.
To size the power supply requirements, however, a number of conservative assumptions are made, such as allowing for the
inrush current when the equipment starts and general overrating factors as a margin of safety.
1. Add up all the nameplate ratings of all the equipment and multiply by 0.67; this is the day 1 running load.
2. Multiply this by whatever expansion factor is expected to apply to the data centre.
3. Add 50% to the above to allow for inrush current.
4. Add 32% to allow for the UPS inefficiency and battery charging requirement.
5. Add 21.5 W per square metre of floor space to allow for lighting.
6. Double the amount reached so far to allow for air conditioning power requirements.
7. Multiply the total so far by 1.25 to provide a further overrating factor, so that cables aren't expected to work at their full safe load.
8. Add a figure, say 5%, for power factor correction*. Modern I.T. equipment is usually power factor corrected, but there will be some power factor loss.
The figure thus arrived at is the amount of power that needs to be available in the data centre, even though it is unlikely to
need this full amount under normal conditions. This figure also leads to the correct choice of the standby generator.


Let's take the example of a 200 square metre computer room with a day one nameplate load of 100 kW and a required
expansion capacity of 100%.
Day one running load = 100 x 0.67 = 67 kW
Long term load, after expansion = 67 x 2 = 134 kW
Add 50% for peak load factor = 134 x 1.5 = 201 kW
Add 32% for UPS inefficiency and battery charging = 201 x 1.32 = 265 kW
Add 21.5 W/m² for lighting = 265 + (200 x 0.0215) = 269 kW
Double this amount for power to run the air con = 269 x 2 = 538 kW
Multiply by 1.25 for the overrating factor = 538 x 1.25 = 672 kW
Add 5% for power factor correction = 672 x 1.05 = 706 kW
*Power factor. Remember that current times voltage equals volt-amperes, usually expressed as kVA. Useful work, or power, is
measured in watts, and volts x amps x power factor = watts. The power factor is the cosine of the phase difference between
the voltage and the current in an alternating current circuit. This phase separation is caused by a reactive, i.e. capacitive or
inductive, load. UPS systems are always measured in kVA output, as they do not know the power factor of the load they will
be connected to, and hence the real power, in watts, deliverable.
10.2 - UPS and backup requirements
Having understood the sizing implications the next step is to consider the methods of back-up and redundancy and how this
fits in with the Tiering philosophy of TIA 942.
TIA 942 summary

No. of delivery paths: Tier 1 - 1; Tier 2 - 1; Tier 3 - 1 active and 1 passive; Tier 4 - 2 active.
Utility entrance: Tier 1 - single feed; Tier 2 - single feed; Tier 3 - dual feed; Tier 4 - dual feed from different substations.
Equipment power cords: Tier 1 - single cord with 100% capacity; Tiers 2, 3 and 4 - dual cord with 100% capacity on each cord.
Generator fuel capacity: Tier 1 - 8 hours, but no generator required if UPS backup time is more than 8 minutes; Tier 2 - 24 hours; Tier 3 - 72 hours; Tier 4 - 96 hours.
Redundancy: Tier 1 - N; Tier 2 - N+1; Tier 3 - N+1; Tier 4 - 2N.

Remember that;
N means only enough items to do the task at hand. Any one point of failure will stop the system.
N+1 means one more item than is necessary, thus allowing for one point of failure.
2N means two complete independent paths.
Going to 2N, or even better 2N+1, will give the required resilience that a data centre needs, but obviously at some major cost;
not surprisingly, 2N costs at least twice as much as the provision of the minimum required service.


An uninterruptible power supply system (UPS) needs to be defined to back up the power supply system. This is usually based
on batteries and a double conversion on-line UPS. In this method the incoming AC is rectified and permanently charges a
battery pack which is also connected in parallel back into an inverter, to make available the mains voltage AC again. This is a
very reliable method and also isolates the I.T. load from sags, surges, spikes and most harmonics coming in from the mains
supply. The downside of this method is that it is very inefficient, with up to 12% of the input power wasted in the rectification-inversion cycle.
Other kinds of UPS are available and one is based on the kinetic energy of a large rotating mass connected to a device which
acts as a motor when input power is available and a generator when the AC input fails. The kinetic energy stored in the
rotating flywheel will then produce electricity for a short time. Kinetic energy devices are smaller and cheaper and have less
maintenance associated with them but usually have back-up times measured in tens of seconds rather than the minutes
offered by a battery system.
UPS design options and requirements;
1. Size the electrical power required, in kVA.
2. Decide what is the critical load that needs to be backed up with a UPS. Some people include the air conditioning, and
some don't, expecting that the back-up generator will be online before the equipment overheats. Backing up the
aircon with the UPS will double the size of the UPS.
3. Decide upon the length of time the battery pack needs to backup the system. Battery packs are expensive, heavy and
take a lot of space, recommendations are;
a. TIA 942, 5-30 minutes.
b. SUN, 15 minutes.
c. Note that TIA 942 also specifies that a Tier 1 system does not need a generator if the battery system can back up
the load for at least 8 minutes.
4. Decide upon the level of redundancy desired/affordable, e.g. N, N+1, 2N or 2(N+1).
5. Decide upon the location of the UPS and battery equipment. It should be close to the IT equipment and main power
feed to reduce cable losses. TIA 942 recommends that UPS systems larger than 100 kVA should be located in their
own separate room.
6. Decide upon size and location of the standby generator. It must be in a secure position, and in an area where noise
and fumes will not be disruptive. It should also be close to the UPS system and switchgear to minimise cable losses.
10.3 - Electrical distribution around the computer room
The electrical cabling, of adequate size to meet current and future design, must feed each equipment rack location and
planned location. For Tier 2 and above there must be duplicate, redundant feeds to each location.
Cabling may be fed into the top or bottom of racks, or both. Cabling run in the underfloor plenum space should be laid in the
cold aisle at low level. Cabling entering through the bottom of the rack should be sealed with a brush strip to prevent entry of
chilled air in an uncontrolled manner.
Cable should be terminated and presented on IEC 60309 connectors, of appropriate size for the current and suitable for
single or three phase connection as appropriate. Usual ratings are 16 or 32 amp. The higher power ratings of today's servers
would suggest that two 32 amp feeds would be required, giving around 7 kW. Higher power ratings would require a three-phase
connection, providing around 22 kW.


10.4 - Electrical distribution within the rack


At its simplest the IEC 60309 connector is connected to a power strip which distributes the electricity to a number of standard
sockets, which in the UK would be a standard 13 amp BS 1363 socket or an IEC 60320 socket. In America the plugs and
sockets would be defined in the NEMA series.
The power distribution units can extend beyond simple distribution of the power and may offer;
Sequential start up to lower inrush current.
Simple filtering.
Monitoring of current with an LED readout.
Automatic switching between feeds.
Network reporting and remote control ability through a TCP/IP connection.
Other systems take in mains voltage and distribute 48 V DC around the rack to remove the need for each item of
equipment to have its own dedicated power supply.


11.0 - Earthing, bonding and the Signal Reference Grid


Earthing is required for three reasons;
Safety from electrical hazards.
Reliable signal reference within the entire information technology installation.
Satisfactory electromagnetic performance of the entire information technology installation.
Correct earthing is required by law and described in various standards such as;
BS 6701 - Telecommunication cabling and equipment installations.
BS 7671 - Requirements for electrical installations: IEE Wiring Regulations, 16th Edition.
Across Europe there is also;
EN 50310 - Application of equipotential bonding and earthing in buildings with information technology equipment.
EN 50174-2 - Information technology. Cabling installation. Part 2: installation and planning practices inside buildings.
Across the world we have;
IEC 60364-1 - Electrical installations of buildings, various sections including Part 5-548: Earthing arrangements and equipotential bonding for information technology equipment.
ISO 11801:2002 - Information technology cabling for customer premises.
ANSI/TIA/EIA-J-STD-607 - Commercial building grounding and bonding requirements for telecommunications.
And from the world of telecommunications there is;
ETS 300 253 - Equipment engineering; earthing and bonding of telecommunications equipment in telecommunication centres.
ITU-T K.27 - Bonding configurations and earthing inside a telecommunications building.
ITU-T K.31 - Bonding configurations and earthing of telecommunications installations inside a subscriber's building.
And a particular standard referenced by TIA 942 is;
IEEE STD 1100-1999 - Powering and Grounding Sensitive Electronic Equipment.
It is essential that all metallic elements are correctly earthed according to the most relevant standard above. This includes all
equipment racks, cable containment and the metallic sheaths and armour of communications cables.
Note that whereas earthing means "the connection of the exposed conductive parts of an installation to the main earthing
terminal of that installation" (BS 7671), bonding means "the electrical connection putting various exposed conductive parts
and extraneous conductive parts at a substantially equal potential" (EN 50174-2). Thus the connection for bonding must
offer a low enough impedance that a potential difference of not more than 1 volt rms can be maintained across
the frequency range of interest.
This leads on to the requirement for the Signal Reference Grid, SRG, or a System Reference Potential Plane, SRPP, as it is
referred to in CENELEC standards.
The SRG is there to offer a suitable low-impedance path to ground for high frequency interference signals that cannot be
achieved by simple earthing.


No standard mandates an SRG but everybody seems to recommend one, e.g.;


TIA 942
Consideration should be given to installing a common bonding network (CBN) such as a
signal reference structure as described in IEEE Standard 1100 for the bonding of
telecommunications and computer equipment.
HP/Dell
Site preparation guide. If the system is on raised flooring, use a 2-foot x 2-foot (61-cm x
61-cm) grounding grid.
EN 50310
A system reference potential plane (SRPP) conductive solid plane, as an ideal goal in
potential equalising, is approached in practice by horizontal or vertical meshes. The mesh
width thereof is adapted to the frequency range to be considered. Horizontal and vertical
meshes may be interconnected to form a grid structure approximating to a Faraday
cage.
SUN
A signal reference grid should be designed for the computer room. This provides an
equal potential plane of reference over a broad band of frequencies through the use of a
network of low-impedance conductors installed throughout the facility.
The SRG should therefore be constructed on the floor below the IT equipment, from copper tapes
approximately 50 mm wide. The dimensions of the grid have typically been 24 x 24 inches (610 x 610 mm); however, this only
gives effective protection up to around 30 MHz.
With gigabit Ethernet operating at up to 100 MHz this needs to be reduced to 200 mm to be effective, whereas ten gigabit
Ethernet, operating at 500 MHz, would ideally need an almost complete surface. When using 50 mm copper tape, a grid
spacing of about 100 mm is the practical limit.
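The mesh sizes quoted above are broadly consistent with a rule of thumb that the mesh opening should be a small fraction of the wavelength at the highest frequency of interest. The sketch below assumes a factor of roughly one fifteenth; this factor is an assumption for illustration, not a figure taken from any of the standards listed.

    # Rough signal reference grid mesh size from the highest frequency of interest.
    C = 3.0e8                  # speed of light, m/s
    MESH_FRACTION = 1 / 15.0   # assumed fraction of a wavelength for effective bonding

    for f_mhz in (30, 100, 500):
        wavelength_m = C / (f_mhz * 1e6)
        mesh_mm = wavelength_m * MESH_FRACTION * 1000
        print(f"{f_mhz} MHz: wavelength {wavelength_m:.1f} m, mesh of roughly {mesh_mm:.0f} mm")
    # 30 MHz -> about 670 mm, 100 MHz -> 200 mm, 500 MHz -> 40 mm, in line with the text.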
The SRG must be effectively bonded to the building steel and the main electrical and telecommunications grounding busbar,
and all items on top or crossing the SRG must be connected to it.


12.0 - Fire detection, alarm and suppression


A fire design policy operates over a number of areas, all of which are related.
Design the building with materials and designs that minimise fire risk.
Operate the building with practices that reduce fire risk.
Detect fire and smoke with suitable apparatus.
Sound an alarm if fire is detected to evacuate a building, summon the fire brigade and set off fire extinguishants.
Suppress the fire with automatic fire extinguishants.
The principal fire safety legislation in the UK is the Fire Precautions (Workplace) Regulations 1997/1999. This is obviously a
major area and one governed by laws and building regulations. TIA 942 Telecommunications Infrastructure Standard for Data
Centers, April 2005, requires the following for a data centre;
12.1 - Detection
The recommended smoke detection system for critical data centers where high airflow is present is one that will provide early
warning via continuous air sampling and particle counting and have a range up to that of conventional smoke detectors.
the system has four levels of alarm that range from detecting smoke in the invisible range up to that detected by
conventional detectors. The system at its highest alarm level would be the means to activate the pre-action system valve.
One system would be at the ceiling level of the computer room, entrance facilities, electrical rooms, and mechanical rooms as
well as at the intake to the computer room air-handling units.
A second system would cover the area under the access floor in the computer room, entrance facilities, electrical rooms, and
mechanical rooms.
A third system is also recommended for the operations center and printer room to provide a consistent level of detection for
these areas.
A fire alarm system consists of
1. Detectors.
a) Smoke, heat, flame etc.
2. Manual call points.
3. Alarms.
a) Bells, sirens, voice recording, visual etc.
4. Approved fire survival cable to link it all together.
5. A central control box to link it all together and to connect to other services.
In the UK fire detection is governed by;
BS 5839-1:2002
Fire detection and fire alarm systems for buildings. Code of practice for system design, installation,
commissioning and maintenance.
And specifically for computer rooms and other electronic installations;
BS 6266:2002
Code of practice for fire protection for electronic equipment installations.
Fire alarm and detection components are generally covered by;
BS EN 54
Fire detection and fire alarm systems.


[Diagram: typical fire detection and alarm loop - a control panel on a detection loop serving call points, detectors and sounders, with interface units to sprinklers, BMS etc. and a supervisor link for remote monitoring.]

The cables must be fire survivable as described in BS 5839-1:2002 Clause 26.2 (d & e), which invokes, amongst others;
BS 60702-1:2002 - Mineral insulated cables and their terminations with a rated voltage not exceeding 750 V.
BS 6387:1994 - Performance requirements for cables required to maintain circuit integrity under fire conditions.
Fire detectors come in a number of guises such as ionising smoke detectors, optical detectors, flame and heat detectors etc,
but the smoke detection system recommended for computer rooms is a highly sensitive system that gives very early warning
and is known as Aspirating Smoke Detection, ASD.
BS 6266:2002 recommends "a dedicated smoke detection system interfaced with the main building system, and
aspirating smoke detection to monitor return air flows" for critical equipment areas such as centralised computer facilities.
BS 5839 describes many different types of smoke and flame detectors and most importantly, where they should be sited. The
siting of aspirating smoke detector inlets follows exactly the same rules as more conventional smoke detectors.
ASD is a high sensitivity, aspirating type laser-based optical smoke detection system that continually draws air within the
protected area through a network of pipes where it is passed through a calibrated detection chamber. It is capable of
providing very early warning of fire conditions thereby providing invaluable time to investigate and respond to a potential threat
of fire. ASD is very often referred to by a brand name, VESDA, Very Early Smoke Detection Apparatus. VESDA is a trademark
of Vision Products Pty Ltd of Australia.
A VESDA system can detect a fire within 70 seconds and activate a fire suppression response in under two minutes. A
sprinkler system would take four to six minutes under the same circumstances.
Conclusion
Various standards, such as TIA 942 and BS 6266 recommend aspirating smoke detectors for data processing applications
such as data centres because of their quick reaction time. The detection system should be able to give various levels of alarm
and needs to be optimised for the different areas encountered within a data centre. A data centre should have two levels of
fire detection and suppression: an aspirating smoke detection system linked to a gaseous fire suppression system as the first
response, and a pre-action sprinkler system as the last resort.


12.2 - Fire suppression


According to the SUN Data Centre Guide the ideal system would incorporate both a gas system and a pre-action water
sprinkler system in the ambient space.
According to the Fire Safety Advice Centre (http://www.firesafe.org.uk/advicent.htm), the following methods are considered for
computer rooms; suitability for telecom/computer/control rooms is shown against each.

Automatic sprinklers - yes.
Detection and pre-action sprinkler - yes.
Detection and water sprays (mist) - no.
Detection and total flood CO2 - yes.
Foam - yes (under floor).
High sensitivity smoke detection aspirating systems - yes.
Detection and dry powder - no.
Detection and manual intervention - yes.
Detection and inert gas - yes.
Detection and fine particulate aerosol - no.
Detection and halocarbon gas - yes.

12.3 - Gas suppression


EC Regulation 2037/2000 prohibits the sale and use of halons, including material that has been recovered or recycled, from
31st December 2002. Furthermore, with the exception of equipment deemed critical under the Regulation, all fire-fighting
equipment in the EU containing halons must be decommissioned before 31st December 2003.
The halon replacement market for clean agent gaseous suppression systems splits into inert gases and halocarbon gases.
Inert Gases
Inert gas agents are electrically non-conductive clean fire suppressants that are used in design concentrations of 35-50% by
volume to reduce ambient oxygen concentration to between 14 and 10%. Oxygen concentrations below 14% will not support
the combustion of most fuels (and human exposure must be limited).
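The link between agent concentration and residual oxygen is simple dilution arithmetic. The short Python sketch below is an illustrative calculation only (it assumes normal air at 20.9% oxygen, complete mixing and no leakage, and is not the flooding quantity method given in BS ISO 14520):

# Illustrative dilution arithmetic for inert gas flooding (not the ISO 14520 method).
# Assumes normal air at 20.9% O2, complete mixing and no leakage from the room.

AIR_O2_PERCENT = 20.9

def residual_oxygen(agent_concentration_percent: float) -> float:
    """Oxygen left in the room air once the agent makes up the given % by volume."""
    return AIR_O2_PERCENT * (1.0 - agent_concentration_percent / 100.0)

def agent_for_target_oxygen(target_o2_percent: float) -> float:
    """Agent concentration (% by volume) needed to dilute oxygen to the target level."""
    return 100.0 * (1.0 - target_o2_percent / AIR_O2_PERCENT)

if __name__ == "__main__":
    for c in (35, 40, 50):
        print(f"{c}% agent -> {residual_oxygen(c):.1f}% O2")   # 13.6%, 12.5%, 10.5%
    print(f"14% O2 needs about {agent_for_target_oxygen(14):.0f}% agent")  # roughly 33%

This reproduces the figures quoted above: design concentrations of 35-50% by volume bring the oxygen level down to roughly 14-10%.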
Halocarbon Gas Systems
A number of fire extinguishing halocarbon gases with zero ozone depletion potential (ODP) have been developed. These
include both HFCs (hydrofluorocarbons) and PFCs (perfluorocarbons).
The DETR has published a document to give guidance on halon replacements, Advice on Alternatives and Guidelines for Users
of Fire Fighting and Explosion Protection Systems, although products are not officially approved or recognised by this route.
Inert gases

Trade Name   Designation   Gas Blend
NN100        IG-100        Nitrogen
Argotec      IG-01         Argon
Argonite     IG-55         Nitrogen/Argon mixture
Inergen      IG-541        Nitrogen/Argon/Carbon dioxide mixture
Halocarbon gases

Trade Name   Designation   Chemical Formula   Chemical Name
FE-13        HFC 23        CHF3               Trifluoromethane
FE-125       HFC 125       CF3CHF2            Pentafluoroethane
FM-200       HFC 227ea     CF3CHFCF3          Heptafluoropropane
FE-36        HFC 236fa     CF3CH2CF3          Hexafluoropropane
CEA-308      PFC-2-1-8     C3F8               Perfluoropropane
CEA-410      PFC-3-1-10    C4F10              Perfluorobutane

In general, inert gas systems appear to take up more space and be slightly more expensive than the halocarbon alternatives.
Manual means of fire suppression system discharge should also be installed. These should take the form of manual pull
stations at strategic points in the room. In areas where gas suppression systems are used, there is normally also a means of
manual abort for the suppression system.
See also;
BS 6266:2002 Code of practice for fire protection for electronic equipment installations.
BS ISO 14520-1:2000 Gaseous fire-extinguishing systems. Physical properties and system design. General requirements.

12.4 - Pre-action sprinkler systems


The gaseous fire suppression is seen as the first line of defence. After that comes the sprinkler system, which must be of the
pre-action type. This means that the pipes are normally dry and cannot therefore drip onto the equipment.
The smoke detection system can trigger the first stage of the sprinkler system by letting water enter the piping, but it still needs
the additional heat of the fire to set off the sprinkler heads themselves. This is sometimes known as a double-knock system.
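The double-knock interlock amounts to two independent conditions that must both be satisfied before water is discharged. The minimal Python sketch below illustrates that logic only; the function and signal names are invented for the example and do not come from any product or standard.

# Illustrative "double-knock" pre-action sprinkler logic (names invented for the example).

def preaction_state(smoke_detected: bool, sprinkler_head_fused: bool) -> str:
    """Return the state of a pre-action sprinkler zone.

    Knock 1: smoke detection opens the pre-action valve and charges the dry pipework.
    Knock 2: heat fuses an individual sprinkler head, releasing water at that head only.
    """
    if not smoke_detected and not sprinkler_head_fused:
        return "pipes dry, no discharge"
    if smoke_detected and not sprinkler_head_fused:
        return "pipes charged with water, no discharge"
    if smoke_detected and sprinkler_head_fused:
        return "water discharged at the fused head"
    # A fused head with no detection signal cannot discharge because the pipes are still dry.
    return "pipes dry, head open, no discharge - supervisory alarm"

print(preaction_state(smoke_detected=True, sprinkler_head_fused=False))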
12.5 - Portable fire extinguishers
Portable fire extinguishers should also be placed strategically throughout the room. These should be unobstructed, and should
be clearly marked. Labels should be visible above the tall computer equipment from across the room. Appropriate tile lifters
should be located at each extinguisher station to allow access to the subfloor void for inspection, or to address a fire. A torch
should also be located with the tile lifter.
Conclusion
The fire safety plan is a multilayered approach that requires a coordinated plan for;
Designing for low flammability and fire risk.
Operating with low risk.
Emergency exits.
Emergency lighting.
Emergency exit signage.
Fire detection, appropriate to the area covered.
Fire alarm.
Multi-level automatic fire suppression.
Manual fire alarm and portable fire extinguishers.
Staff training and fire drills in place.
Maintenance plan for all equipment involved.

13.0 - Communications cabling and containment


Cabling is required to connect all the communications and control devices within the data centre and the world beyond. Correct
choice and installation of the cabling is essential to guarantee error-free transmission of data.
The communications protocols within the data centre nowadays revolve mostly around Ethernet and Fibre Channel.
Communications speeds of at least 1 Gb/s should be designed for, and ten gigabit speeds now need to be considered. Design
issues revolve around the selection of copper and/or optical fibre, grades of copper and fibre to be used, screened or
unscreened copper cabling, and levels of redundancy and resilience to be built in to the cabling model.
13.1 - Spaces and hierarchy
The TIA 942 model shows the Spaces that need to be accommodated and the cabling interconnection hierarchy between
and within them.
Figure: the TIA 942 data centre cabling topology. Access providers enter via the Entrance Room (carrier equipment and demarcation), which connects to the Main Distribution Area in the Computer Room (routers, backbone LAN/SAN switches, PBX, M13 muxes) and to the Telecom Room serving the offices, operations centre and support rooms. Backbone cabling runs from the Main Distribution Area to the Horizontal Distribution Areas (LAN/SAN/KVM switches); horizontal cabling runs from each Horizontal Distribution Area, optionally via a Zone Distribution Area, to the Equipment Distribution Areas (racks and cabinets).
EN 50173-5 (Draft) is very similar but uses slightly different terminology.


TIA-942 term  -  EN 50173-5 equivalent
Cross connect in the entrance room  -  ENI (external network interface)
Main cross-connect in the MDA (main distribution area)  -  MD (main distributor)
Horizontal cross-connect in the MDA or HDA (horizontal distribution area)  -  ZD (zone distributor)
Zone outlet or consolidation point in the ZDA (zone distribution area)  -  LDP (local distribution point)
Outlet in the EDA (equipment distribution area)  -  EO (equipment outlet)
Horizontal cabling  -  Zone distribution cabling
Backbone cabling (between MDA and HDAs)  -  Main distribution cabling
Backbone cabling (from MDA to entrance room or from MDA to telecom room)  -  Network access cabling
Telecommunications room  -  Distributor

Alignment of terminology (figure): the EN 50173 distributor names are overlaid on the TIA 942 topology, with the ENI at the entrance room, the MD in the Main Distribution Area, ZDs in the Horizontal Distribution Areas, LDPs in the Zone Distribution Areas and EOs in the Equipment Distribution Areas, linked by the network access, main distribution and zone distribution cabling subsystems.
13.2 - Cable selection


TIA 942 recognises;
100-ohm twisted-pair cable (ANSI/TIA/EIA-568-B.2), with Category 6 UTP or ScTP recommended (ANSI/TIA/EIA-568-B.2-1);
Multimode optical fibre cable, either 62.5/125 micron or 50/125 micron (ANSI/TIA/EIA-568-B.3), with 50/125 micron 850 nm laser optimised multimode fibre recommended (ANSI/TIA-568-B.3-1);
Single-mode optical fibre cable (ANSI/TIA/EIA-568-B.3);
Coaxial media, 75-ohm (734 and 735 type, Telcordia Technologies GR-139-CORE), with coaxial connectors to ANSI T1.404.
EN 50173-5 recognises any of the cabling media addressed in EN 50173, e.g. Cat 5, Cat 6, Cat 7 etc, but Class E/Cat 6 is
recommended for the main distribution and zone distribution cabling.
It would seem that within the Data Centre/Computer Room, a cable of less than Category 6 performance should not be used.
Note that the American standards do not recognise Category 7/Class F.
None of the standards discuss 10GBASE-T or the forthcoming Augmented Category 6 standard as this has not yet been
published, or even finalised at the time of writing, but is expected later in 2006.
Products claiming Cat6A performance are already on sale but whether unscreened (UTP) products can meet the Alien
Crosstalk requirements and EMC regulations when operating at the 500 MHz frequencies invoked by 10GBASE-T is still a
matter of debate within the industry. Certainly a screened Cat 6 or Cat6A system is going to cope much better with the EMC
and Alien Crosstalk issues.

Cable selection issues


Copper cable
At least Category 6. Consider Cat6A or Cat 7 for higher bandwidth performance.
Consider unscreened or screened. Unscreened is cheapest and seems to cope with gigabit Ethernet speeds. Consider screened for severe EMC problems or upgrade to 10GBASE-T operation.
Consider the fire performance of the cable. Unlike the USA there are no rules requiring very low flammability cabling in Europe. As a minimum, request zero halogen/low flammability cable to IEC 60332-3C. The best performing cable in a fire situation is the plenum style meeting NFPA 262: Standard Method of Test for Flame Travel and Smoke of Wires and Cables for use in Air-Handling Spaces: 2002, or its higher performing companion known as Limited Combustible Plenum cable.

Optical fibre
ISO 11801 and EN 50173 now classify optical fibres as OM1, OM2, OM3 and OS1. OM means multimode fibre
and OS means singlemode fibre.
OM3 is a very high bandwidth fibre optimised for ten gigabit operation and is the obvious choice for new data
centre installations.
Singlemode fibre, OS1, is not needed within the data centre but it may be needed to connect to the outside world
of telecommunications and should be put in place to allow for direct high speed communications from routers
and SAN devices.
Optical connectors must also be specified. There are many Standards-recognised types to choose from. The
market leader for high speed data communications is now the LC connector.

13.3 - Preconnectorised cabling


Cabling is traditionally installed as bulk cable, which is then terminated on site into patch panels, outlets and other connectors. There
is a big time advantage to be gained by terminating the cables off-site and installing the ready-made assemblies into the data
centre.
Preconnectorised cabling is most popular when time on site is at an absolute premium. This may be in a new build, such as
a data centre, where time scales are critical and many different trades are vying for the right to work on any particular bit of
floor space at any time.
Other time-critical areas are live sites that need additional cabling but where the costs and implications of downtime are
horrendous, such as a trading floor or call centre. Such a facility may want to have all its cabling upgraded or extended in one
overnight operation.
Busy city centre facilities will also suffer from a lack of parking and loading bays, on-site storage restrictions and security
worries associated with cable installers needing weeks of access time to the site.
Preconnectorised cabling should reduce time needed on site by around 75% compared to traditional installation.

Quality of the terminations should also be improved by allowing sophisticated Category 6 copper and optical fibre terminations
to be made in a clean factory environment by skilled people. Each cable assembly can be 100% checked in the factory and
whatever is sent to site is known to be of the highest quality.
There are no particular disadvantages to preconnectorised cabling, and it should be cost-neutral to the end user; however,
accurate surveys need to be carried out to ensure correct cable lengths are made up and installed.

Preconnectorised copper cabling (figure): factory-terminated multi-way assemblies (Cable A, Cable B, Cable C) arrive with every outlet pre-labelled (A0101 to A0112 and so on) and plug straight into patch panels, providing panel-to-panel links, panel-to-floor links to floor boxes, and links out to desk pods.

13.4 - Cable containment


The cable containment must protect the cables and maintain the bend radius requirements of the cables. Containment may take
the form of basket, tray, conduit, trunking etc. If it is metallic, then all of the containment must be correctly earthed.
All cabling, patch panels, earthing and containment systems must be adequately labelled and marked, and records kept. This
aspect of cabling is described in the following;
ANSI/TIA/EIA-606-A Administration Standard for the Telecommunications Infrastructure of Commercial Buildings.
EN 50174-1 Information technology cabling installation. Part 1: Specification and quality assurance.
ISO/IEC 14763-1 Information Technology. Implementation and operation of customer premises cabling. Part 1: Administration.
TIA 942 Telecommunications Infrastructure Standard for Data Centers.
BS 6701:2004 Telecommunications equipment and telecommunications cabling. Specification for installation, operation and maintenance.

All of these require that all cables and components be suitably marked to uniquely identify them. The durability of all labelling must
also be suitable for the rigours of the environment in which they are placed and the expected timescale of
the installation, usually in excess of ten years.
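As an illustration of the kind of record keeping these standards call for, the Python sketch below builds a simple cable administration schedule. The identifier scheme (rack/panel/port) and field names are invented for the example; the actual labelling format should be agreed in line with ANSI/TIA/EIA-606-A or ISO/IEC 14763-1.

# Hypothetical cable administration records; the ID scheme (rack/panel/port) is invented
# for illustration and is not mandated by ANSI/TIA/EIA-606-A or ISO/IEC 14763-1.
from dataclasses import dataclass, asdict
import csv

@dataclass
class CableRecord:
    cable_id: str        # e.g. "MDA01-PP03-24"
    from_location: str   # rack / panel / port at the A end
    to_location: str     # rack / panel / port at the B end
    cable_type: str      # e.g. "Cat 6 U/UTP", "OM3 8-fibre"
    installed: str       # installation date
    test_result: str     # e.g. "Pass, 2006-03-14, tester serial 1234"

def export_records(records: list[CableRecord], path: str) -> None:
    """Write the cable schedule to CSV so it can be kept with the O&M documentation."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(records[0]).keys()))
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)

records = [CableRecord("MDA01-PP03-24", "MDA01/PP03/24", "HDA02/PP01/24",
                       "Cat 6 U/UTP", "2006-03-10", "Pass")]
export_records(records, "cable_schedule.csv")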
The cables need to be contained and protected and separated from other services. For example EN 50174-2 requires a
separation of at least 200 mm between unscreened data and unscreened power cables, although distances can come down
if any of the cables are screened. BS 6701 requires a 50 mm separation at all times between cables unless there is a non-metallic divider separating the two groups. In the UK, BS 6701 and EN 50174-2 requirements need to be overlaid and the
worst-case separation distances used for a correct installation.
BS 6701 and EN 50174-2 overlaid - worst-case separation distances

Type of Installation                              Without a divider   Non-metallic divider   Aluminium divider   Steel divider
Unscreened power cable and unscreened IT cable    200 mm              200 mm                 100 mm              50 mm
Unscreened power cable and screened IT cable      50 mm               50 mm                  50 mm               50 mm
Screened power cable and unscreened IT cable      50 mm               30 mm                  50 mm               50 mm
Screened power cable and screened IT cable        50 mm               0 mm                   50 mm               50 mm
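The overlaid requirements lend themselves to a simple lookup. The Python sketch below encodes the table above; as with the table itself, the figures should be checked against the current editions of BS 6701 and EN 50174-2 before being relied upon.

# Worst-case power/data separation distances (mm), encoded from the overlaid
# BS 6701 / EN 50174-2 table above; verify against the standards before relying on it.
SEPARATION_MM = {
    # (power screened?, IT screened?): {divider type: distance in mm}
    (False, False): {"none": 200, "non-metallic": 200, "aluminium": 100, "steel": 50},
    (False, True):  {"none": 50,  "non-metallic": 50,  "aluminium": 50,  "steel": 50},
    (True,  False): {"none": 50,  "non-metallic": 30,  "aluminium": 50,  "steel": 50},
    (True,  True):  {"none": 50,  "non-metallic": 0,   "aluminium": 50,  "steel": 50},
}

def required_separation(power_screened: bool, it_screened: bool, divider: str = "none") -> int:
    """Return the minimum separation in mm for the given cable screening and divider type."""
    return SEPARATION_MM[(power_screened, it_screened)][divider]

print(required_separation(power_screened=False, it_screened=False))                         # 200
print(required_separation(power_screened=True, it_screened=True, divider="non-metallic"))   # 0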

13.5 - Cabling standards summary


At present EN 50173 defines the cabling design. Soon the more specific EN 50173-5 standard will more precisely define data
centre cabling requirements. On a wider basis, ISO 11801 and ANSI/TIA/EIA-568-B also define cable system design.
TIA 942 defines the cabling hierarchy for data centres and states the permissible range of cables. TIA 942 only invokes other
American standards such as ANSI/TIA/EIA-568-B.
EN 50174 parts 1,2 and 3 describe installation and quality assurance techniques.
EN 50310 describes the equipotential bonding system for information technology installations.
EN 50346 describes the testing methodology to prove compliance of the installed cabling.


13.6 - Modular designs


Data Centre users rarely know exactly what the format of the I.T. equipment will be when the Data Centre goes live and
certainly don't know what will be expected of it next year. For this reason many people like to design a generic centre based
on flexible modular units like the Capitoline Cluster Concept (www.capitoline.co.uk). In the example shown below a cluster
consists of five server racks with half of one rack dedicated to cabling interconnection. Each cluster takes 60 Cat 6 cables and
one OM3 8-fibre cable back to the Main Distribution Frame. One cluster is dedicated to Wide Area
Networking/Router/Telecoms applications. It too has 60 Cat 6 cables but more optical fibre and also a singlemode link back
to the MDF to allow for direct high-speed connection into the outside world. The Storage Area Network, SAN, cluster is
identically cabled. The MDF mirrors the server, WAN and SAN zones and also has a dedicated area to connect to the Telecoms
Room and ENI. For additional resilience each Server cluster has Cat 6 cables wired directly to the WAN and SAN clusters.
Figure: the cluster concept. Server clusters (Server 1 to Server 6), a WAN cluster and a SAN cluster each connect back to the Main Distribution Frame with 5 x 12 Cat 6 and 8-fibre OM3 links (plus singlemode fibre for the WAN); the MDF links on to the Telecoms Room and External Network Interface, and 4 Cat 6 cables run from every server cluster directly to the SAN and the WAN cluster.

Modular designs and cluster concepts are bound to be more popular as the rate of change in Data Centres increases. The
cluster concept incorporates the air conditioning as well by rating each rack with a minimum 2 kW load dissipation and a
planned upgrade path up to 20 kW per rack.
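A quick sanity check of the cabling quantities implied by the cluster example can be scripted. The Python sketch below simply multiplies out the counts quoted above (60 Cat 6 and one 8-fibre OM3 per cluster to the MDF, plus 4 Cat 6 from each server cluster to both the WAN and SAN clusters) and is only as good as those assumptions.

# Rough cabling bill-of-quantities for the cluster example described above.
# Counts per cluster are taken from the example; adjust to the actual design.

def cluster_cabling(server_clusters: int = 6) -> dict:
    cat6_to_mdf = 60 * (server_clusters + 2)          # server clusters plus WAN and SAN clusters
    om3_links_to_mdf = server_clusters + 2            # one 8-fibre OM3 link per cluster
    cat6_server_to_wan_san = 4 * 2 * server_clusters  # 4 to the WAN and 4 to the SAN per server cluster
    return {
        "Cat 6 to MDF": cat6_to_mdf,
        "8-fibre OM3 links to MDF": om3_links_to_mdf,
        "Direct Cat 6 server-to-WAN/SAN": cat6_server_to_wan_san,
    }

print(cluster_cabling(6))
# {'Cat 6 to MDF': 480, '8-fibre OM3 links to MDF': 8, 'Direct Cat 6 server-to-WAN/SAN': 48}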

14.0 - Security, Access Control and CCTV


TIA 942 requires that the Data Centre be secure.
TIA 942 tabulates, tier by tier, the security access control and monitoring required at: generators; UPS, telephone and MEP rooms; fibre vaults; emergency exit doors; accessible exterior windows; the security operations centre; doors into computer rooms; perimeter building doors; and doors from the lobby to the floors. The measures escalate from industrial grade locks and off-site monitoring at Tier 1, through intrusion detection and card access at Tiers 2 and 3, to card or biometric access, delayed egress doors and single person interlocks at Tier 4.

CCTV requirements.

CCTV monitoring at:                 Tier 1           Tier 2           Tier 3         Tier 4
Building perimeter and parking      No requirement   No requirement   Yes            Yes
Generators                          N/a              N/a              Yes            Yes
Access controlled doors             No requirement   Yes              Yes            Yes
Computer room floors                No requirement   No requirement   Yes            Yes
UPS, telephone and MEP rooms        No requirement   No requirement   Yes            Yes

CCTV recording on all cameras       No requirement   No requirement   Yes; digital   Yes; digital
Recording rate, frames per second   N/a              N/a              20 f/s min     20 f/s min

15.0 - Building Management Systems


Building Management Systems, or BMS, can cover a range of technologies that control and optimise space heating, air
conditioning, hot water service and lighting in buildings.
TIA 942 makes the following statement;
A Building Management System (BMS) should monitor all mechanical, electrical, and other facilities equipment and systems.
The system should be capable of local and remote monitoring and operation.
Individual systems should remain in operation upon failure of the central Building Management System (BMS) or head end.
Consideration should be given to systems capable of controlling (not just monitoring) building systems as well as historical
trending. 24-hour monitoring of the Building Management System (BMS) should be provided by facilities personnel, security
personnel, paging systems, or a combination of these. Emergency plans should be developed to enable quick response to
alarm conditions.
We can consider a Data Centre as being in three layers for the BMS requirement;
Incorporation into a larger and pre-existing site BMS.
A BMS dedicated to the Data Centre facility.
Rack level monitoring and control.
With IP-based networks, more and more of these systems come together over one common cabling system. The exception
is the fire detection loop cabling, which must be dedicated and fire survival grade. Many of the control systems rely on
automation protocols such as LonWorks and BACnet to communicate with and control the end equipment, but the higher levels
of communication between controllers now rely upon TCP/IP and Ethernet.

Figure: the BMS-related systems (CCTV, access control and monitoring, fire alarms, BMS HVAC and lighting, environmental monitoring) mapped against building, room and rack level, indicating which run over common IP cabling, which need dedicated cabling, and which provide local or remote alarm and control.

The environmental monitoring parameters are;
Temperature.
Smoke.
Water.
Humidity.
Access.
Vibration.
Air flow.
Particles in the incoming air flow.
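At rack level these parameters are typically polled and compared against alarm thresholds. The Python sketch below is a generic illustration only: the threshold values are placeholders and a real installation would use the rack monitoring or BMS vendor's own interface (SNMP, Modbus, BACnet etc.) rather than the simple dictionary of readings assumed here.

# Generic rack-level environmental alarm check; thresholds and the readings dictionary are
# placeholders, not values or interfaces from any particular BMS product or standard.

THRESHOLDS = {
    "temperature_c": (18.0, 27.0),        # acceptable band, low/high
    "relative_humidity_pc": (40.0, 60.0),
    "airflow_m_per_s": (0.5, None),       # minimum only
}

def check(readings: dict) -> list[str]:
    """Return a list of alarm strings for any reading outside its threshold band."""
    alarms = []
    for name, value in readings.items():
        low, high = THRESHOLDS.get(name, (None, None))
        if low is not None and value < low:
            alarms.append(f"{name} low: {value}")
        if high is not None and value > high:
            alarms.append(f"{name} high: {value}")
    return alarms

print(check({"temperature_c": 29.5, "relative_humidity_pc": 45.0, "airflow_m_per_s": 0.8}))
# ['temperature_c high: 29.5']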


16.0 - Project Management and other issues


So far in this document we have considered the various Tiering levels defined in TIA 942 and by the Uptime Institute. A data
centre does not need to be on the same Tier for every facility. It is quite acceptable for the installation to be
Tier 2 for air conditioning and Tier 4 for power supply, for example. It all depends upon what the customer wants and can
afford.

                                   Tier 1          Tier 2          Tier 3            Tier 4
Site availability                  99.671%         99.749%         99.982%           99.995%
Downtime (hours/yr)                28.8            22.0            1.6               0.4
Operations Center                  Not required    Not required    Required          Required
Redundancy for power, cooling      N               N+1             N+1               2(N+1)
Gaseous fire suppression system    Not required    Not required    Approved system   Approved system
Redundant backbone pathways        Not required    Not required    Required          Required
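The downtime figures follow directly from the availability percentages: annual downtime is (1 - availability) x 8,760 hours. For example, Tier 3 at 99.982% gives 0.00018 x 8,760, approximately 1.6 hours per year, and Tier 4 at 99.995% gives 0.00005 x 8,760, approximately 0.4 hours per year.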

We can take further definitions from TIA 942.


N - Base requirement
System meets base requirements and has no redundancy.
N+1 redundancy
N+1 redundancy provides one additional unit, module, path, or system in addition to the minimum required to satisfy the base
requirement. The failure or maintenance of any single unit, module, or path will not disrupt operations.
2N redundancy
2N redundancy provides two complete units, modules, paths, or systems for every one required for a base system. Failure or
maintenance of one entire unit, module, path, or system will not disrupt operations.
2(N+1) redundancy
2(N+1) redundancy provides two complete (N+1) units, modules, paths, or systems. Even in the event of failure or maintenance
of one unit, module, path, or system, some redundancy will remain and operations will not be disrupted.
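To make these definitions concrete, the Python sketch below counts the modules needed under each scheme for an arbitrary example: a 400 kW critical load served by 100 kW UPS modules, figures chosen purely for illustration.

import math

# Illustrative module counts for the TIA 942 redundancy schemes.
# Load and module sizes are arbitrary example figures, not recommendations.
load_kw = 400.0
module_kw = 100.0

n = math.ceil(load_kw / module_kw)   # base requirement, N = 4 modules

schemes = {
    "N": n,                  # 4 modules, no redundancy
    "N+1": n + 1,            # 5 modules, any one can fail or be maintained
    "2N": 2 * n,             # 8 modules in two complete systems
    "2(N+1)": 2 * (n + 1),   # 10 modules in two complete N+1 systems
}

for name, modules in schemes.items():
    print(f"{name}: {modules} x {module_kw:.0f} kW modules")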
Tier 1 Data Center: Basic
A Tier 1 data centre is susceptible to disruptions from both planned and unplanned activity. It has computer power
distribution and cooling, but it may or may not have a raised floor, a UPS, or an engine generator. If it does have UPS or
generators, they are single-module systems and have many single points of failure. The infrastructure should be completely
shut down on an annual basis to perform preventive maintenance and repair work. Urgent situations may require more
frequent shutdowns. Operation errors or spontaneous failures of site infrastructure components will cause a data center
disruption.
Tier 2 Data Centre: Redundant Components
Tier 2 facilities with redundant components are slightly less susceptible to disruptions from both planned and unplanned
activity than a basic data centre. They have a raised floor, UPS, and engine generators, but their capacity design is Need plus
One (N+1), which has a single threaded distribution path throughout. Maintenance of the critical power path and other parts
of the site infrastructure will require a processing shutdown.


Tier 3 Data Centre: Concurrently Maintainable


Tier 3 level capability allows for any planned site infrastructure activity without disrupting the computer hardware operation in
any way. Planned activities include preventive and programmable maintenance, repair and replacement of components,
addition or removal of capacity components, testing of components and systems, and more. For large sites using chilled water,
this means two independent sets of pipes. Sufficient capacity and distribution must be available to simultaneously carry the
load on one path while performing maintenance or testing on the other path. Unplanned activities such as errors in operation
or spontaneous failures of facility infrastructure components will still cause a data centre disruption. Tier 3 sites are often
designed to be upgraded to Tier 4 when the client's business case justifies the cost of additional protection.
Tier 4 Data Centre: Fault Tolerant
Tier 4 provides site infrastructure capacity and capability to permit any planned activity without disruption to the critical load.
Fault-tolerant functionality also provides the ability of the site infrastructure to sustain at least one worst-case unplanned
failure or event with no critical load impact. This requires simultaneously active distribution paths, typically in a System+System
configuration. Electrically, this means two separate UPS systems in which each system has N+1 redundancy. Because of fire
and electrical safety codes, there will still be downtime exposure due to fire alarms or people initiating an Emergency Power
Off (EPO). Tier 4 requires all computer hardware to have dual power inputs as defined by the Institute's Fault-Tolerant Power
Compliance Specification.
Safety Audit
The installation must be audited for safety both at design stage, project handover and routine inspection. The requirements of
the fire safety programme are already outlined in section 12. Additional safety audit points are;

Raised Floors (especially lifting tiles or tripping or falling).


Lifting Hazards.
Electrical Shock Hazards.
Static Discharge Hazards.
Cutting Hazards.
Pinching / Amputation Hazards.
Fire Hazards.
Accidental triggering of a gaseous fire suppression system dump.
Accidental unplugging of network cables or power from servers.
Infra red laser hazard.
Excessive noise.

For the last point it is worth noting that permitted sound exposure levels at work in Europe were reduced in February 2006. The EC Noise at Work
Directive 2003/10/EC was made on 6th February 2003 and repeals and replaces 86/188/EEC as from (mainly) 15th February
2006.
Where is the money likely to go in a Data centre?
An American example.

Electrical, UPS and generator - 40%
HVAC - 15%
Management Fees and Insurance - 11%
Other Building Works - 6%
Raised Floor - 5%
Architects Fees - 4%
Data Cabling - 4%
Sprinkler and FM200 Suppression - 4%
Control Room - 3%
Facilities inc. Cabling Design - 3%
Electrical Design - 2%
BMS System - 1%
Security System - 1%
Plumbing - 1%


Appendix I
Some Standards referenced in this document;
ANSI/TIA/EIA-568-B Commercial Building Telecommunications Cabling Standard.
ANSI/TIA/EIA-606-A Administration Standard for the Telecommunications Infrastructure of Commercial Buildings.
ANSI/TIA/EIA-J-STD-607 Commercial building grounding and bonding requirements for telecommunications.
ASHRAE Thermal Guidelines for Data Processing Environments.
BS EN 54 Fire detection and fire alarm systems.
BS 5499-4:2000 Safety signs, including fire safety signs. Code of practice for escape route signing.
BS 5266-1 The Code of Practice For Emergency lighting.
BS 5839-1:2002 Fire detection and fire alarm systems for buildings. Code of practice for system design, installation,
commissioning and maintenance.
BS 60702-1: 2002 Mineral insulated cables and their terminations with a rated voltage not exceeding 750V.
BS 6387:1994 Performance requirements for cables required to maintain circuit integrity under fire conditions.
BS 6266:2002 Code of practice for fire protection for electronic equipment installations.
BS 6701 Telecommunication cabling and equipment installations.
BS 7671 Requirements for electrical installations: IEE wiring regulations 16th Edition.
BS ISO 14520 P1: 2000(E) Gaseous fire-extinguishing systems. Physical properties and system design. General requirements.
BS 8300:2001 Design of buildings and their approaches to meet the needs of disabled people Code of practice, and
Building Regulations 2000 Part M Access and facilities for disabled people.
DETR Advice on Alternatives and Guidelines for Users of Fire Fighting and Explosion Protection Systems.
EN 50310 Application of equipotential bonding and earthing in buildings with information technology equipment.
EN 50173 Information technology - Generic cabling systems - Part 1: General requirements and office areas.
EN 50174-1 Information technology cabling installation Part 1:Specification and quality assurance.
EN 50174-2 Information technology Cabling installation Part 2 installation and planning practices inside buildings.
EN 50346 Information technology - Cabling installation - Testing of installed cabling.
EN 12825 Raised access floors.
ETS 300 253 Equipment engineering earthing and bonding of telecommunications equipment in telecommunication centres
Federal Standard 209E, Airborne Particulate Cleanliness Classes in Cleanrooms and Clean Zones, Class 100,000.
IEC 60309 Plugs, socket-outlets and couplers for industrial purposes - Part 1: General requirements.
IEC 60320 Appliance couplers for household and similar general purposes - Part 1: General requirements.
IEC 60332-3C Tests on electric cables under fire conditions - Part 3-10: Test for vertical flame spread of vertically-mounted
bunched wires or cables.
IEC 60364-1 Electrical installations of buildings, various sections including; Part 5-548: Earthing arrangements and equipotential bonding for information technology equipment.
IEEE STD 1100-1999 Powering and Grounding Sensitive Electronic Equipment.
NFPA 262: Standard Method of Test for Flame Travel and Smoke of Wires and Cables for use in Air-Handling Spaces:2002.
ISO/IEC 14763-1: Information Technology Implementation and operation of customer premises cabling Part
1:Administration.
ISO 11801:2002 Information technology cabling for customer premises.
ITU-T K.27 Bonding configurations and earthing inside a telecommunications building.
ITU-T K.31 Bonding configurations and earthing of telecommunications installations inside a subscribers building.
The Property Services Agency (PSA) Method of Building Performance Specification 'Platform Floors (Raised Access Floors)',
MOB PF2 PS.
TIA 942 Telecommunications Infrastructure Standard for Data Centers, April 2005.
VDI 2054, Air conditioning systems for computer areas.
