
Operations & Production

Management

Submitted To:
Prof. Zia ur Rehman

Submitted By:
Yasir Dogar 911
Falak Zaib 2152
Kashif Sohail 925
Zeeshan 940
Waseem 918
Farasat Abbas 941
Section E (Afternoon)
8th Semester

Topic: Nestle Milk Pack

Hailey College of Commerce

University of the Punjab


Plant Location
Introduction
In the previous unit you learnt how the entrepreneur conducts a detailed
analysis, comprising technical, financial, economic and market studies, before
laying down a comprehensive business plan. For implementation of this plan, he
has to take various crucial decisions, namely the location of the business, its
layout (the arrangement of physical facilities), product design, production
planning and control, and maintaining good product quality. This lesson deals
with various aspects of plant location and layout. Investment in analyzing
plant location and the appropriate plant layout can help an entrepreneur achieve
economic efficiency in business operations. These decisions lay the foundation
of the small entrepreneur's business.

Plant Location
Every entrepreneur is faced with the problem of deciding the best site for location
of his plant or factory.
What is plant location?
Plant location refers to the choice of region and the selection of a particular site
for setting up a business or factory.
But the choice is made only after considering the costs and benefits of
different alternative sites. It is a strategic decision: once taken, it cannot
be changed except at considerable loss. The location should therefore be
selected according to the enterprise's own requirements and circumstances; each
individual plant is a case in itself. The businessman should attempt to find
the optimum or ideal location.
What is an ideal location?
An ideal location is one where the cost of the product is kept to a minimum,
with a large market share, the least risk and the maximum social gain. It is
the place of maximum net advantage, the one that gives the lowest unit cost of
production and distribution. For achieving this objective, the small-scale
entrepreneur can make use of locational analysis.

Locational Analysis
Locational analysis is a dynamic process in which the entrepreneur analyses and
compares the appropriateness or otherwise of alternative sites, with the aim of
selecting the best site for a given enterprise. It consists of the following:
• Demographic Analysis: It involves a study of the population in the area in
terms of total population, age composition, per capita income, educational
level, occupational structure, etc.
• Trade Area Analysis: It is an analysis of the geographic area that provides
continued clientele to the firm. The entrepreneur would also assess the
feasibility of accessing the trade area from alternative sites.
• Competitive Analysis: It helps to judge the nature, location, size and quality
of competition in a given trade area.
• Traffic analysis: Alternative sites are judged in terms of the pedestrian and
vehicular traffic passing each site, to get a rough idea of the number of
potential customers passing the proposed site during the working hours of the
shop.
• Site economics: Alternative sites are evaluated in terms of establishment
costs and operational costs. Establishment costs are the costs incurred on
permanent physical facilities, while operational costs (also called running
costs) are incurred in running the business day to day. A small cost-comparison
sketch follows this list.
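
To make site economics concrete, here is a minimal sketch in Python of how
alternative sites might be compared. The sites, cost figures, and planning
horizon are hypothetical assumptions, not data from the text.

```python
# Hypothetical site-economics comparison: each alternative site is scored by
# its establishment cost spread over an assumed planning horizon plus its
# yearly operational (running) cost; the lowest-cost site wins.

sites = {
    # site name: (establishment cost, operational cost per year), in rupees
    "Site A": (1_000_000, 250_000),
    "Site B": (750_000, 310_000),
    "Site C": (1_200_000, 200_000),
}

HORIZON_YEARS = 10  # assumed planning horizon

def yearly_cost(establishment: float, operational: float) -> float:
    """Annualized establishment cost plus yearly running cost."""
    return establishment / HORIZON_YEARS + operational

for name, (est, op) in sites.items():
    print(f"{name}: Rs. {yearly_cost(est, op):,.0f} per year")

best = min(sites, key=lambda s: yearly_cost(*sites[s]))
print("Lowest-cost site:", best)
```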

Selection Criteria
The important considerations for selecting a suitable location are given as
follows:
• Natural or climatic conditions.
• Availability and nearness to the sources of raw material.
• Transport costs, both in obtaining raw material and in distributing or
marketing finished products to the ultimate users.
• Access to market: small businesses in retail or wholesale or services should be located
within the vicinity of densely populated areas.
• Availability of Infrastructural facilities such as developed industrial sheds or sites, link
roads, nearness to railway stations, airports or sea ports, availability of electricity, water,
public utilities, civil amenities and means of communication are important, especially for
small scale businesses.
• Availability of skilled and non-skilled labour and technically qualified and trained
managers.
• Nearness of banking and financial institutions.
• Locations with links to developed industrial areas or business centers result
in savings and cost reductions in transport overheads and miscellaneous expenses.
• Strategic considerations of safety and security should be given due importance.
• Government influences: Both positive and negative incentives are used to
motivate an entrepreneur to choose a particular location. Positive incentives
include cheap overhead facilities like electricity, banking and transport, tax
relief, subsidies and liberalization. Negative incentives take the form of
restrictions on setting up industries in urban areas, for reasons of pollution
control and decentralization of industries.
• Residence: small business entrepreneurs tend to set up near their homelands.
• One study of locational considerations from small-scale units revealed that
the native place or homeland of the entrepreneur was the most important factor.
The heavy preference for the homeland suggests that small-scale enterprise is
not freely mobile, while the low preference for Government incentives suggests
that concessions and incentives cannot compensate for poor infrastructure.
Significance
From the discussion above, we have learnt that the location of a plant is an
important entrepreneurial decision because it strongly influences the cost of
production and distribution. In some cases, location may account for as much as
10% of the cost of manufacturing and marketing. An appropriate location is
therefore essential to the efficient and economical working of a plant. A firm
may fail due to bad location, or its growth and efficiency may be restricted by it.

CHECK YOUR PROGRESS


1. The factor least important to consider when selecting a location for a new
furniture store is
a. The weather of the community
b. The future of the community
c. The other businesses in the community
d. The age distribution of the population in the community
2. When selecting a site for a business it is important to
a. Purchase the property when possible
b. Lease the property to avoid the problem of mortgage payments
c. Rent or buy the property, whichever must be done in order to obtain the
specific site
d. Make comparisons between the rentals of neighboring stores and property for sale

Plant Layout
Plant layout refers to the arrangement of machines, departments, workstations, storage areas,
aisles, and common areas within an existing or proposed facility. Layouts have far-reaching
implications for the quality, productivity, and competitiveness of a firm. Layout decisions
significantly affect how efficiently workers can do their jobs, how fast goods can be produced,
how difficult it is to automate a system, and how responsive the system can be to changes in
product or service design, product mix, and demand volume.

The basic objective of the layout decision is to ensure a smooth flow of work, material, people,
and information through the system. Effective layouts also:
• Minimize material handling costs;
• Utilize space efficiently;
• Utilize labor efficiently;
• Eliminate bottlenecks;
• Facilitate communication and interaction between workers, between workers and their
supervisors, or between workers and customers;
• Reduce manufacturing cycle time and customer service time;
• Eliminate wasted or redundant movement;
• Facilitate the entry, exit, and placement of material, products, and people;
• Incorporate safety and security measures;
• Promote product and service quality;
• Encourage proper maintenance activities;
• Provide a visual control of operations or activities;
• Provide flexibility to adapt to changing conditions.

Basic Layouts
There are three basic types of layouts: process, product, and fixed-position; and three hybrid
layouts: cellular layouts, flexible manufacturing systems, and mixed-model assembly lines. We
discuss basic layouts in this section and hybrid layouts later in the chapter.
Process Layouts
Process layouts, also known as functional layouts, group similar activities together in
departments or work centers according to the process or function they perform. For example, in a
machine shop, all drills would be located in one work center, lathes in another work center, and
milling machines in still another work center. In a department store, women's clothes, men's
clothes, children's clothes, cosmetics, and shoes are located in separate departments. A process
layout is characteristic of intermittent operations, service shops, job shops, or batch production,
which serve different customers with different needs. The volume of each customer's order is
low, and the sequence of operations required to complete a customer's order can vary
considerably.

The equipment in a process layout is general purpose, and the workers are skilled at operating
the equipment in their particular department. The advantage of this layout is flexibility. The
disadvantage is inefficiency. Jobs or customers do not flow through the system in an orderly
manner, backtracking is common, movement from department to department can take a
considerable amount of time, and queues tend to develop. In addition, each new arrival may
require that an operation be set up differently for its particular processing requirements. Although
workers can operate a number of machines or perform a number of different tasks in a single
department, their workload often fluctuates--from queues of jobs or customers waiting to be
processed to idle time between jobs or customers.

Material storage and movement are directly affected by the type of layout. Storage space in a
process layout is large to accommodate the large amount of in-process inventory. The factory
may look like a warehouse, with work centers strewn between storage aisles. In-process
inventory is high because material moves from work center to work center in batches waiting to
be processed. Finished goods inventory, on the other hand, is low because the goods are being
made for a particular customer and are shipped out to that customer upon completion.

Process layouts in manufacturing firms require flexible material handling equipment (such as
forklifts) that can follow multiple paths, move in any direction, and carry large loads of in-
process goods. A forklift moving pallets of material from work center to work center needs wide
aisles to accommodate heavy loads and two-way movement. Scheduling of forklifts is typically
controlled by radio dispatch and varies from day to day and hour to hour. Routes have to be
determined and priorities given to different loads competing for pickup.

Process layouts in service firms require large aisles for customers to move back and forth and
ample display space to accommodate different customer preferences.

The major layout concern for a process layout is where to locate the departments or machine
centers in relation to each other. Although each job or customer potentially has a different route
through the facility, some paths will be more common than others. Past information on customer
orders and projections of customer orders can be used to develop patterns of flow through the
shop.
Product Layouts
Product layouts, better known as assembly lines, arrange activities in a line according to the
sequence of operations that need to be performed to assemble a particular product. Each product
or service has its own "line" specifically designed to meet its requirements. The flow of work is orderly
and efficient, moving from one workstation to another down the assembly line until a finished
product comes off the end of the line. Since the line is set up for one type of product or service,
special machines can be purchased to match a product's specific processing requirements.
Product layouts are suitable for mass production or repetitive operations in which demand is
stable and volume is high. The product or service is a standard one made for a general market,
not for a particular customer. Because of the high level of demand, product layouts are more
automated than process layouts, and the role of the worker is different. Workers perform
narrowly defined assembly tasks that do not demand as high a wage rate as those of the more
versatile workers in a process layout.

The advantage of the product layout is its efficiency and ease of use. The disadvantage is its
inflexibility. Significant changes in product design may require that a new assembly line be built
and new equipment be purchased. This is what happened to U.S. automakers when demand
shifted to smaller cars. The factories that could efficiently produce six-cylinder engines could not
be adapted to produce four-cylinder engines. A similar inflexibility occurs when demand volume
slows. The fixed cost of a product layout (mostly for equipment), allocated over fewer units, can
send the price of a product soaring.

The major concern in a product layout is balancing the assembly line so that no one workstation
becomes a bottleneck and holds up the flow of work through the line. The orderly, single-path
flow of work in a product layout contrasts sharply with the variable, meandering flow of products
through a process layout.
A product layout needs material moved in one direction along the assembly line and always in
the same pattern. Conveyors are the most common material handling equipment for product
layouts. Conveyors can be paced (automatically set to control the speed of work) or unpaced
(stopped and started by the workers according to their pace). Assembly work can be performed
online (i.e., on the conveyor) or offline (at a workstation serviced by the conveyor).

Aisles are narrow because material is moved only one way, it is not moved very far, and the
conveyor is an integral part of the assembly process, usually with workstations on either side.
Scheduling of the conveyors, once they are installed, is simple--the only variable is how fast they
should operate.

Storage space along an assembly line is quite small because in-process inventory is consumed in
the assembly of the product as it moves down the assembly line. Finished goods, however, may
require a separate warehouse for storage before they are shipped to dealers or stores to be sold.

Product and process layouts look different, use different material handling methods, and have
different layout concerns; the preceding paragraphs summarize these differences.
Fixed-Position Layouts
Fixed-position layouts are typical of projects in which the product produced is too fragile,
bulky, or heavy to move. Ships, houses, and aircraft are examples. In this layout, the product
remains stationary for the entire manufacturing cycle. Equipment, workers, materials, and other
resources are brought to the production site. Equipment utilization is low because it is often less
costly to leave equipment idle at a location where it will be needed again in a few days, than to
move it back and forth. Frequently, the equipment is leased or subcontracted, because it is used
for limited periods of time. The workers called to the work site are highly skilled at performing
the special tasks they are requested to do. For instance, pipefitters may be needed at one stage of
production, and electricians or plumbers at another. The wage rate for these workers is much
higher than minimum wage. Thus, if we were to look at the cost breakdown for fixed-position
layouts, the fixed cost would be relatively low (equipment may not be owned by the company),
whereas the variable costs would be high (due to high labor rates and the cost of leasing and
moving equipment).

Because the fixed-position layout is specialized, we concentrate on the product and process
layouts and their variations for the remainder of this chapter. In the sections that follow, we
examine some quantitative approaches for designing product and process layouts.

Designing Process Layouts


In designing a process layout, we want to minimize material handling costs, which are a function
of the amount of material moved times the distance it is moved. This implies that departments
that incur the most interdepartmental movement should be located closest to each other, and
those that do not interact should be located further away. Two techniques used to design process
layouts, block diagramming and relationship diagramming, are based on logic and the visual
representation of data.
Block Diagramming
We begin with data on historical or predicted movement of material between departments in the
existing or proposed facility. This information is typically provided in the form of a from/to
chart, or load summary chart. The chart gives the average number of unit loads transported
between the departments over a given period of time. A unit load can be a single unit, a pallet of
material, a bin of material, or a crate of material--however material is normally moved from
location to location. In automobile manufacturing, a single car represents a unit load. For a ball-
bearing producer, a unit load might consist of a bin of 100 or 1,000 ball bearings, depending on
their size.

The next step in designing the layout is to calculate the composite movements between
departments and rank them from most movement to least movement. Composite movement,
represented by a two-headed arrow, refers to the back-and-forth movement between each pair of
departments.

Finally, trial layouts are placed on a grid that graphically represents the relative distances
between departments in the form of uniform blocks. The objective is to assign each department
to a block on the grid so that nonadjacent loads are minimized. The term nonadjacent is defined
as a distance farther than the next block, either horizontally, vertically, or diagonally. The trial
layouts are scored on the basis of the number of nonadjacent loads. Ideally, the optimum layout
would have zero nonadjacent loads. In practice, this is rarely possible, and the process of trying
different layout configurations to reduce the number of nonadjacent loads continues until an
acceptable layout is found.
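
As a rough illustration of the block diagramming steps just described, the
following Python sketch computes composite movements from a hypothetical
from/to chart and scores a trial grid layout by its nonadjacent loads. All
department names, flows, and grid positions are invented for illustration.

```python
# Hypothetical from/to chart and trial layout to illustrate block diagramming:
# composite (two-way) movement is computed for each department pair, and the
# trial grid is scored by the loads moved between nonadjacent blocks.

from itertools import combinations

# load_chart[(i, j)] = unit loads moved from department i to department j
load_chart = {
    ("A", "B"): 100, ("B", "A"): 50,
    ("A", "C"): 20,  ("C", "B"): 60,
    ("B", "D"): 80,  ("D", "C"): 30,
}
departments = ["A", "B", "C", "D"]

# Composite movement: flow in both directions between each pair
composite = {
    (d1, d2): load_chart.get((d1, d2), 0) + load_chart.get((d2, d1), 0)
    for d1, d2 in combinations(departments, 2)
}

# Trial layout: department -> (row, column) block on the grid
layout = {"A": (0, 0), "B": (0, 1), "C": (0, 2), "D": (1, 0)}

def adjacent(p, q):
    """Adjacent means within one block horizontally, vertically, or diagonally."""
    return abs(p[0] - q[0]) <= 1 and abs(p[1] - q[1]) <= 1

# Rank pairs from most to least movement, then score the trial layout
for pair, flow in sorted(composite.items(), key=lambda kv: -kv[1]):
    print(pair, flow)

nonadjacent_loads = sum(
    flow for (d1, d2), flow in composite.items()
    if not adjacent(layout[d1], layout[d2])
)
print("Nonadjacent loads for this trial layout:", nonadjacent_loads)  # 50 here
```

Trying alternative assignments of departments to blocks and keeping the one
with the lowest score mirrors the manual trial-and-error process described above.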

A layout solution arrived at in this way represents only the relative positions of the
departments. The next step in the layout design is to add information about the space required
for each department.
Recommendations for workspace around machines can be requested from equipment vendors or
found in safety regulations or operating manuals. In some cases, vendors provide templates of
equipment layouts, with work areas included. Workspace allocations for workers can be specified
as part of job design, recommended by professional groups, or agreed upon through union
negotiations. A block diagram can be created by blocking in the work areas around the
departments on the grid. The final block diagram adjusts the block diagram for the desired or
proposed shape of the building. Standard building shapes include rectangles, L shapes, T shapes,
and U shapes.

Relationship Diagramming
The preceding solution procedure is appropriate for designing process layouts when quantitative
data are available. However, in situations for which quantitative data are difficult to obtain or do
not adequately address the layout problem, the load summary chart can be replaced with
subjective input from analysts or managers. Richard Muther developed a format for displaying
manager preferences for departmental locations, known as Muther's grid. The preference
information is coded into six categories associated with the five vowels, A, E, I, O, and U, plus
the letter X; each vowel matches the first letter of the closeness rating for locating two
departments next to each other (A = absolutely necessary, E = especially important,
I = important, O = okay, U = unimportant, X = undesirable). The diamond-shaped grid is read in
much the same way as the mileage chart on a road map. For example, reading down one row of
such a grid, it may be okay if the offices are located next to production, absolutely necessary
that the stockroom be located next to production, important that shipping and receiving be
located next to production, especially important that the locker room be located next to
production, and absolutely necessary that the tool room be located next to production.

The information from Muther's grid can be used to construct a relationship diagram that
evaluates existing or proposed layouts.
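
Below is a minimal sketch of how Muther-style preferences might be scored
against a trial layout. The departments echo the example above, but the
numeric weights and the adjacency data are assumptions for illustration;
published weighting schemes vary.

```python
# Hypothetical closeness ratings in the style of Muther's grid, scored
# against a trial layout: the weight for a rating is earned whenever the two
# departments carrying that rating end up adjacent (X penalizes adjacency).

WEIGHTS = {"A": 4, "E": 3, "I": 2, "O": 1, "U": 0, "X": -4}  # assumed weights

# Pair of departments -> closeness rating (echoing the example above)
preferences = {
    ("Production", "Stockroom"): "A",
    ("Production", "Tool room"): "A",
    ("Production", "Locker room"): "E",
    ("Production", "Shipping"): "I",
    ("Production", "Offices"): "O",
    ("Offices", "Stockroom"): "U",
}

# Which departments share a boundary in the trial layout under evaluation
adjacency = {
    frozenset(pair) for pair in [
        ("Production", "Stockroom"),
        ("Production", "Tool room"),
        ("Production", "Shipping"),
        ("Offices", "Stockroom"),
    ]
}

score = sum(
    WEIGHTS[rating]
    for pair, rating in preferences.items()
    if frozenset(pair) in adjacency
)
print("Relationship score for this trial layout:", score)
```

A higher score means more of the important closeness preferences are satisfied;
comparing scores across trial layouts plays the same role as counting
nonadjacent loads in the quantitative approach.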
Computerized Layout Solutions
The diagrams just discussed help formulate ideas for the arrangement of departments in a process
layout, but they can be cumbersome for large problems. Fortunately, several computer packages
are available for designing process layouts. The best known are CRAFT (Computerized Relative
Allocation of Facilities Technique) and CORELAP (Computerized Relationship Layout
Planning). CRAFT takes a load summary chart and block diagram as input and then makes
pairwise exchanges of departments until no improvements in cost or nonadjacency score can be
found. The output is a revised block diagram after each iteration for a rectangular-shaped
building, which may or may not be optimal. CRAFT is sensitive to the initial block diagram
used; that is, different block diagrams as input will result in different layouts as outputs. For this
reason, CRAFT is often used to improve upon existing layouts or to enhance the best manual
attempts at designing a layout.

CORELAP uses nonquantitative input and relationship diagramming to produce a feasible layout
for up to forty-five departments and different building shapes. It attempts to create an acceptable
layout from the beginning by locating department pairs with A ratings first, then those with E
ratings, and so on.

Simulation software for layout analysis, such as PROMODEL and EXTEND, provides visual
feedback and allows the user to quickly test a variety of scenarios. 3-D modeling and CAD-
integrated layout analysis are available in VisFactory and similar software. All these computer
packages are basically trial-and-error approaches to layout design that provide good, but not
necessarily optimal, process layouts.

Service Layouts
Most service organizations use process layouts. This makes sense because of the variability in
customer requests for service. Service layouts are designed in much the same way as process
layouts in manufacturing firms, but the objectives may differ. For example, instead of
minimizing the flow of materials through the system, services may seek to minimize the flow of
customers or the flow of paperwork. In retail establishments, the objective is usually related to
maximizing profit per unit of display space. If sales vary directly with customer exposure, then
an effective layout would expose the customer to as many goods as possible. This means instead
of minimizing a customer's flow, it would be more beneficial to maximize it (to a certain point).
Grocery stores take this approach when they locate milk on one end of the store and bread on the
other, forcing the customer to travel through aisles of merchandise that might prompt additional
purchases.

Another aspect of service layout is the allocation of shelf space to various products. Industry-
specific recommendations are available for layout and display decisions. Computerized versions,
such as SLIM (Store Labor and Inventory Management) and COSMOS (Computerized
Optimization and Simulation Modeling for Operating Supermarkets), consider shelf space,
demand rates, profitability, and stockout probabilities in layout design. Finally, service layouts
are often visible to the customer, so they must be aesthetically pleasing as well as functional.

Designing Product Layouts


A product layout arranges machines or workers in a line according to the operations that need to
be performed to assemble a particular product. From this description, it would seem the layout
could be determined simply by following the order of assembly as contained in the bill of
material for the product. To some extent, this is true. Precedence requirements, specifying which
operations must precede others, which can be done concurrently, and which must wait until
later, are an important input to the product layout decision. But there are other factors that make
the decision more complicated.

Product layouts or assembly lines are used for high-volume production. To attain the required
output rate as efficiently as possible, jobs are broken down into their smallest indivisible
portions, called work elements. Work elements are so small that they cannot be performed by
more than one worker or at more than one workstation. But it is common for one worker to
perform several work elements as the product passes through his or her workstation. Part of the
layout decision is concerned with grouping these work elements into workstations so products
flow through the assembly line smoothly. A workstation is any area along the assembly line that
requires at least one worker or one machine. If each workstation on the assembly line takes the
same amount of time to perform the work elements that have been assigned, then products will
move successively from workstation to workstation with no need for a product to wait or a
worker to be idle. The process of equalizing the amount of work at each workstation is called
line balancing.
Line Balancing
Assembly line balancing operates under two constraints, precedence requirements and cycle time
restrictions.
Precedence requirements are physical restrictions on the order in which operations are
performed on the assembly line. For example, we would not ask a worker to package a product
before all the components were attached, even if he or she had the time to do so before passing
the product to the next worker on the line. To facilitate line balancing, precedence requirements
are often expressed in the form of a precedence diagram. The precedence diagram is a network,
with work elements represented by circles or nodes and precedence relationships represented by
directed line segments connecting the nodes.
Cycle time, the other restriction on line balancing, refers to the maximum amount of time the
product is allowed to spend at each workstation if the targeted production rate is to be reached.
Desired cycle time is calculated by dividing the time available for production by the number of
units scheduled to be produced:

Cd = (production time available) / (desired units of output)

For example, with 480 minutes of production time available per day and 120 units scheduled
per day, the desired cycle time is 480 / 120 = 4 minutes per unit.

The line balancing process can be summarized as follows:


1. Draw and label a precedence diagram.
2. Calculate the desired cycle time required for the line.
3. Calculate the theoretical minimum number of workstations.
4. Group elements into workstations, recognizing cycle time and precedence constraints.
5. Calculate the efficiency of the line.
6. Determine if the theoretical minimum number of workstations or an acceptable efficiency
level has been reached. If not, go back to step 4.
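
The following Python sketch walks through these six steps on hypothetical
data, using the longest-operation-time rule (one of the heuristics mentioned
in the next subsection) to pick among eligible elements. The task times,
precedences, and demand figures are invented.

```python
# Hypothetical work elements to walk through the six steps above. Elements
# are grouped into stations in precedence order; among the elements that fit
# in the current station, the longest-operation-time rule picks the next one.

import math

# work element -> (task time in minutes, immediate predecessors)
elements = {
    "a": (2.0, []),
    "b": (1.5, ["a"]),
    "c": (3.0, ["a"]),
    "d": (1.0, ["b", "c"]),
    "e": (2.5, ["d"]),
}

available_time = 480.0  # minutes of production per day (assumed)
demand = 120            # units scheduled per day (assumed)

cycle_time = available_time / demand                    # step 2: 4.0 minutes
total_work = sum(t for t, _ in elements.values())
min_stations = math.ceil(total_work / cycle_time)       # step 3

stations, current, used, done = [], [], 0.0, set()      # step 4
remaining = dict(elements)
while remaining:
    # elements whose predecessors are all assigned and that fit this station
    fit = [e for e, (t, preds) in remaining.items()
           if set(preds) <= done and used + t <= cycle_time]
    if not fit:
        if not current:
            raise ValueError("an element's time exceeds the cycle time")
        stations.append(current)        # close this station, open a new one
        current, used = [], 0.0
        continue
    e = max(fit, key=lambda e: remaining[e][0])  # longest-operation-time rule
    current.append(e)
    used += remaining.pop(e)[0]
    done.add(e)
stations.append(current)

efficiency = total_work / (len(stations) * cycle_time)  # step 5
print("Cycle time:", cycle_time, "min; minimum stations:", min_stations)
print("Stations:", stations, "| efficiency:", round(efficiency, 2))
```

On these data the sketch yields three stations, [a, b], [c, d], and [e], with
an efficiency of about 83%; since the theoretical minimum of three stations is
met, the balance would be accepted at step 6.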

Computerized Line Balancing


Line balancing by hand becomes unwieldy as the problems grow in size. Fortunately, there are
software packages that will balance large lines quickly. IBM's COMSOAL (Computer Method
for Sequencing Operations for Assembly Lines) and GE's ASYBL (Assembly Line Configuration
Program) can assign hundreds of work elements to workstations on an assembly line. These
programs, and most that are commercially available, do not guarantee optimal solutions. They
use various heuristics, or rules, to balance the line at an acceptable level of efficiency. The POM
for Windows software lets the user select from five different heuristics: ranked positional weight,
longest operation time, shortest operation time, most number of following tasks, and least
number of following tasks. These heuristics specify the order in which work elements are
considered for allocation to workstations. Elements are assigned to workstations in the order
given until the cycle time is reached or until all tasks have been assigned.
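
To make the ranked-positional-weight heuristic concrete, here is a small
sketch computing positional weights for the same hypothetical element set used
in the earlier balancing sketch; the ordering it produces would then drive the
assignment loop shown there. The data are invented.

```python
# Hypothetical illustration of the ranked-positional-weight heuristic: an
# element's positional weight is its own time plus the times of everything
# that follows it; elements are considered for assignment in that order.

elements = {
    "a": (2.0, []),
    "b": (1.5, ["a"]),
    "c": (3.0, ["a"]),
    "d": (1.0, ["b", "c"]),
    "e": (2.5, ["d"]),
}

# followers[x] = all elements that come after x, directly or indirectly
followers = {e: set() for e in elements}
for e, (_, preds) in elements.items():
    stack = list(preds)
    while stack:
        p = stack.pop()
        followers[p].add(e)
        stack.extend(elements[p][1])

rpw = {e: elements[e][0] + sum(elements[f][0] for f in followers[e])
       for e in elements}

for e in sorted(rpw, key=rpw.get, reverse=True):
    print(e, rpw[e])   # a 10.0, c 6.5, b 5.0, d 3.5, e 2.5
```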

Hybrid Layouts
Hybrid layouts modify and/or combine some aspects of product and process layouts. We discuss
three hybrid layouts: cellular layouts, flexible manufacturing systems, and mixed-model
assembly lines.
Cellular Layouts
Cellular layouts attempt to combine the flexibility of a process layout with the efficiency of a
product layout. Based on the concept of group technology (GT), dissimilar machines are grouped
into work centers, called cells, to process parts with similar shapes or processing requirements.
The cells are arranged in relation to each other so that material movement is minimized. Large
machines that cannot be split among cells are located near the cells that use them, that is, at
their point of use.

The layout of machines within each cell resembles a small assembly line. Thus, line-balancing
procedures, with some adjustment, can be used to arrange the machines within the cell. The
layout between cells is a process layout. Therefore, computer programs such as CRAFT can be
used to locate cells and any leftover equipment in the facility.

Consider a process layout in which machines are grouped by function into four distinct
departments. Component parts manufactured in the process layout section of the factory are
later assembled into a finished product on the assembly line. The parts follow different flow
paths through the shop. Tracing a few representative routings, say for parts A, B, and C, would
show the distance that each part must travel before completion and the irregularity of the part
routings. A considerable amount of "paperwork" is needed to direct the flow of each individual
part and to confirm that the right operation has been performed. Workers are skilled at operating
the types of machines within a single department and typically can operate more than one
machine at a time.
A complete part routing matrix for, say, eight parts processed through such a facility would
show, in its original form, no apparent pattern to the routings. Production flow analysis (PFA) is
a group technology technique that reorders part routing matrices to identify families of parts
with similar processing requirements. The reordering process can be as simple as listing which
parts have four machines in common, then which have three in common, two in common, and
the like, or as sophisticated as pattern-recognition algorithms from the field of artificial
intelligence. A small sketch of one such reordering follows.
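
The sketch below illustrates the reordering idea behind PFA on a small
invented routing matrix, using a simple rank-order-clustering pass. This is
one of many possible reordering schemes, an assumption here rather than the
specific technique described above.

```python
# Hypothetical routing matrix to illustrate the reordering idea behind PFA:
# machines (rows) and parts (columns) are repeatedly sorted by the binary
# value of their entries until part families and machine cells line up along
# the diagonal of the matrix.

# routing[machine] = set of parts that visit this machine
routing = {
    "M1": {"P1", "P4"},
    "M2": {"P2", "P3"},
    "M3": {"P1", "P4", "P5"},
    "M4": {"P2", "P3"},
    "M5": {"P1", "P5"},
}
machines = sorted(routing)
parts = sorted({p for ps in routing.values() for p in ps})

def row(m, part_order):
    """Binary row vector of machine m over the current part ordering."""
    return [1 if p in routing[m] else 0 for p in part_order]

for _ in range(10):  # a few passes suffice for small matrices
    new_machines = sorted(machines, key=lambda m: row(m, parts), reverse=True)
    new_parts = sorted(
        parts,
        key=lambda p: [1 if p in routing[m] else 0 for m in new_machines],
        reverse=True,
    )
    if new_machines == machines and new_parts == parts:
        break  # ordering is stable: clusters have emerged
    machines, parts = new_machines, new_parts

print("Part order:", parts)
for m in machines:
    print(m, row(m, parts))
```

On these data two blocks emerge along the diagonal: one cell for machines M1,
M3, and M5 processing the part family P1, P4, P5, and another for M2 and M4
processing P2 and P3.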

The advantages of cellular layouts are as follows:


• Reduced material handling and transit time. Material movement is more direct. Less
distance is traveled between operations. Material does not accumulate or wait long
periods of time to be moved. Within a cell, the worker is more likely to carry a partially
finished item from machine to machine than wait for material handling equipment, as is
characteristic of process layouts, where larger loads must be moved farther distances.
• Reduced setup time. Since similar parts are processed together, the adjustments required
to set up a machine should not be that different from item to item. If it does not take that
long to change over from one item to another, then the changeover can occur more
frequently, and items can be produced and transferred in very small batches or lot sizes.
• Reduced work-in-process inventory. In a work cell, as with assembly lines, the flow of
work is balanced so that no bottlenecks or significant buildup of material occurs between
stations or machines. Less space is required for storage of in-process inventory between
machines, and machines can be moved closer together, thereby saving transit time and
increasing communication.
• Better use of human resources. Typically, a cell contains a small number of workers
responsible for producing a completed part or product. The workers act as a self-managed
team, in most cases more satisfied with the work that they do and more particular about
the quality of their work. Labor in cellular manufacturing is a flexible resource. Workers
in each cell are multifunctional and can be assigned to different routes within a cell or
between cells as demand volume changes.
• Easier to control. Items in the same part family are processed in a similar manner through
the work cell. There is a significant reduction in the paperwork necessary to document
material travel, such as where an item should be routed next, if the right operation has
been performed, and the current status of a job. With fewer jobs processed through a cell,
smaller batch sizes, and less distance to travel between operations, the progress of a job
can be verified visually rather than by mounds of paperwork.
• Easier to automate. Automation is expensive. Rarely can a company afford to automate
an entire factory all at once. Cellular layouts can be automated one cell at a time; a
typical automated cell has one robot in the center to load and unload material from
several CNC machines, served by an incoming and an outgoing conveyor. Automating a
few workstations on an assembly line will make it difficult to balance the line and
achieve the increases in productivity expected. Introducing automated equipment in a job
shop has similar results, because the "islands of automation" speed up only certain
processes and are not integrated into the complete processing of a part or product.

Several disadvantages of cellular layouts must also be considered:


• Inadequate part families. There must be enough similarity in the types of items processed
to form distinct part families. Cellular manufacturing is appropriate for medium levels of
product variety and volume. The formation of part families and the allocation of
machines to cells is not always an easy task. Part families identified for design purposes
may not be appropriate for manufacturing purposes.
• Poorly balanced cells. It is more difficult to balance the flow of work through a cell than
a single-product assembly line, because items may follow different sequences through the
cell that require different machines or processing times. The sequence in which parts
enter the cell can thus affect the length of time a worker or machine spends at a certain
stage of processing. Poorly balanced cells can be very inefficient. It is also important to
balance the workload among cells in the system, so that one cell is not overloaded while
others are idle. This may be taken care of in the initial cellular layout, only to become a
problem as changes occur in product designs or product mix. Severe imbalances may
require the reformation of cells around different part families, with the cost and disruption
that implies.
• Expanded training and scheduling of workers. Training workers to do different tasks is
expensive and time-consuming and requires the workers' consent. Initial union reaction to
multifunctional workers was not positive. Today, many unions have agreed to participate
in the flexible assignment of workers in exchange for greater job security. Although
flexibility in worker assignment is one of the advantages of cellular layouts, the task of
determining and adjusting worker paths within or between cells can be quite complex.
• Increased capital investment. In cellular manufacturing, multiple smaller machines are
preferable to single large machines. Implementing a cellular layout can be economical if
new machines are being purchased for a new facility, but it can be quite expensive and
disruptive in existing production facilities where new layouts are required. Existing
equipment may be too large to fit into cells or may be underutilized when placed in a
single cell. Additional machines of the same type may have to be purchased for different
cells. The cost and downtime required to move machines can also be high.

Cellular layouts have become popular in the past decade as the backbone of modern factories.
Cells can differ considerably in size, in automation, and in the variety of parts processed. As
small, interconnected layout units, cells are common in services, as well as manufacturing.

Flexible Manufacturing Systems


The idea of a flexible manufacturing system (FMS) was proposed in England in the 1960s with
System 24, a system intended to operate without human operators 24 hours a day under computer
control. The emphasis from the beginning was on automation rather than the reorganization of work flow.
Early FMSs were large and complex, consisting of dozens of CNC machines and sophisticated
material handling systems.
The systems were very automated, very expensive, and controlled by incredibly complex
software. The FMS control computer operated the material handling system, maintained the
library of CNC programs and downloaded them to the machines, scheduled the FMS, kept track
of tool use and maintenance, and reported on the performance of the system.

There are not many industries that can afford the investment required for a traditional FMS as
described. Fewer than 400 FMSs are in operation around the world today. Currently, the trend in
flexible manufacturing is toward smaller versions of the traditional FMS, sometimes called
flexible manufacturing cells. It is not unusual in today's terminology for two or more CNC
machines to be considered a flexible cell and two or more cells, an FMS.

FMS layouts differ based on the variety of parts that the system can process, the size of the parts
processed, and the average processing time required for part completion. There are four types of
FMS layouts:
• Progressive layout: All parts follow the same progression through the machining stations.
This layout is appropriate for processing a family of parts and is the most similar to an
automated group technology cell.
• Closed-loop layout: Arranged in the general order of processing for a much larger variety
of parts. Parts can easily skip stations or can move around the loop to visit stations in an
alternate order. Progressive and closed-loop systems are used for part sizes that are
relatively large and that require longer processing times.
• Ladder layout: So named because the machine tools appear to be located on the steps of a
ladder, allowing two machines to work on one item at a time. Programming the machines
may be based on similarity concepts from group technology, but the types of parts
processed are not limited to particular part families. Parts can be routed to any machine in
any sequence.
• Open-field layout: The most complex and flexible FMS layout. It allows material to
move among the machine centers in any order and typically includes several support
stations such as tool interchange stations, pallet or fixture build stations, inspection
stations, and chip/coolant collection systems.
Mixed-Model Assembly Lines
Traditional assembly lines, designed to process a single model or type of product, can be used to
process more than one type of product, but not efficiently. Models of the same type are produced
in long production runs, sometimes lasting for months, and then the line is shut down and
changed over for the next model. The next model is also run for an extended time, producing
perhaps half a year to a year's supply; then the line is shut down again and changed over for yet
another model; and so on. The problem with this arrangement is the difficulty in responding to
changes in customer demand. If a certain model is selling well and customers want more of it,
they have to wait until the next batch of that model is scheduled to be produced. On the other
hand, if demand is disappointing for models that have already been produced, the manufacturer
is stuck with unwanted inventory.

Recognizing that this mismatch of production and demand is a problem, some manufacturers
concentrated on devising more sophisticated forecasting techniques. Others changed the manner
in which the assembly line was laid out and operated so that it really became a mixed-model
assembly line. First, they reduced the time needed to change over the line to produce different
models. Then they trained their workers to perform a variety of tasks and allowed them to work
at more than one workstation on the line, as needed. Finally, they changed the way in which the
line was arranged and scheduled. The following factors are important in the design and operation
of mixed-model assembly lines:
• Line balancing: In a mixed-model line, the time to complete a task can vary from model
to model. Instead of using the completion times from one model to balance the line, a
distribution of possible completion times from the array of models must be considered. In
most cases, the expected value, or average, times are used in the balancing procedure.
Otherwise, mixed-model lines are balanced in much the same way as single-model lines.
• U-shaped lines. To compensate for the different work requirements of assembling
different models, it is necessary to have a flexible workforce and to arrange the line so
that workers can assist one another as needed.
• Flexible workforce. Although worker paths are predetermined to fit within a set cycle
time, the use of average time values in mixed-model lines will produce variations in
worker performance. Hence, the lines are not run at a set speed. Items move through the
line at the pace of the slowest operation. This is not to say that production quotas are not
important. If the desired cycle time is exceeded at any station on the line, other workers
are notified by flashing lights or sounding alarms so that they can come to the aid of the
troubled station. The assembly line is slowed or stopped until the work at the errant
workstation is completed. This flexibility of workers helping other workers makes a
tremendous difference in the ability of the line to adapt to the varied length of tasks
inherent in a mixed-model line.
• Model sequencing. Since different models are produced on the same line, mixed-model
scheduling involves an additional decision--the order, or sequence, of models to be run
through the line. From a logical standpoint, it would be unwise to sequence two models
back-to-back that require extra long processing times. It would make more sense to mix
the assembling of models so that a short model (requiring less than the average time)
followed a long one (requiring more than the average time). With this pattern, workers
could "catch up" from one model to the next.
Another objective in model sequencing is to spread out the production of different models as
evenly as possible throughout the time period scheduled.
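
As a minimal illustration of spreading models evenly, the following sketch
assigns each slot in the sequence to the model furthest behind its ideal
cumulative share of output. The demand figures are hypothetical, and this is
one simple spreading rule among several used in practice.

```python
# Hypothetical demand figures to illustrate even model sequencing: each next
# slot in the sequence goes to the model furthest behind its ideal cumulative
# share of output so far.

demand = {"X": 4, "Y": 2, "Z": 2}  # units per repeating cycle (assumed)
total = sum(demand.values())

produced = {m: 0 for m in demand}
sequence = []
for n in range(1, total + 1):
    # ideal cumulative output of model m after n slots is n * demand[m]/total
    lagging = max(demand, key=lambda m: n * demand[m] / total - produced[m])
    produced[lagging] += 1
    sequence.append(lagging)

print(" ".join(sequence))  # X Y Z X X Y Z X: models interleaved, not batched
```

Each model's production is spread across the cycle rather than run in long
batches, which is the stated objective of model sequencing.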

Nestle Milk Pack Pakistan


Swiss dairy giant Nestlé has made Pakistan the home of the world's largest ever milk production
plant. The 2-million-litre-a-day Punjab-based milk processing facility will rise to over three
million litres in the coming years. Pakistan is the world's fourth-largest milk producer, and Asia's
second-largest behind India, so the location of Nestlé's latest investment is fitting. Since Nestlé
started investing in Pakistan 18 years ago, the company has established the country's largest milk
collection network. Today, Nestlé collects milk from 140,000 farmers over an area of 100,000
square kilometres in Punjab who, as a result, receive over CHF 120 million per year directly from
the company. The Nestlé investment says much about the extraordinary rate of development of
this commodity and the mutually beneficial relationship that Nestlé and Pakistan's milk
processing industry enjoy. The company has five production facilities in different parts of
Pakistan: two multi-product factories, in Sheikhupura and Kabirwala respectively, and three
bottled water plants, one in Islamabad and two in Karachi.

Plant Location
The plant location of Nestlé Milk Pack shows that the company has gone through a proper
process of research in selecting the site for its plant, which is situated in the centre of Punjab.

It is an ideal location: one where the cost of the product is kept to a minimum, with access to a
large market share, the least risk and the maximum social gain; the place of maximum net
advantage, giving the lowest unit cost of production and distribution.

Location Analysis
Nestle locational analysis is a dynamic process which is analyzed and compares the
appropriateness or otherwise of alternative sites with the aim of selecting the best site for Nestle
milk producing enterprise. It consists the following:
• Demographic Analysis:
It involves a study of the population in the area in terms of total
population, age composition, per capita income, educational level,
occupational structure, etc.
The result: since the population was limited in the rural area away from
the cities, employment opportunities were created for the locals, and
income levels have risen by 30%. Most of the skilled labour required has
had to be trained up to the required job level.

• Trade Area Analysis:


It is an analysis of the geographic area that provides continued
clientele to the firm, including the feasibility of accessing the trade
area from alternative sites.
• Competitive Analysis:
It helps to judge the nature, location, size and quality of
competition in a given trade area.
• Site economics:
Alternative sites are evaluated in terms of establishment costs and
operational costs. Establishment costs are the costs incurred on
permanent physical facilities, while operational costs (also called
running costs) are incurred in running the business day to day.

Purchase Policy
The Board approved the following Purchase Policy, Rules and Procedure. It also desired that the
same be reviewed after one year.

PURCHASE POLICY, RULES AND PROCEDURE


These rules for the purchase of equipment/consumables for Departments/Sponsored/Consultancy
Projects have been framed in order to provide a conducive working environment for teachers and
students and to promote the excellence expected from institutions like PEC, so that the
procurement of needed equipment/stores is done in time and without procedural wrangles,
permitting laboratory and research work to be pursued with greater vigour.

DIRECT PURCHASE
A buyer may purchase goods up to a value of Rs. 15,000/- on each occasion after ensuring the
reasonability of prices. The purchase may be effected either through a permanent imprest held in
the name of the HOD or his nominee/Principal Investigator, through a temporary advance of up
to Rs. 15,000/- that may be specifically drawn for the purchase in the name of a buyer, or
through credit after obtaining the approval of the Competent Financial Authority. A certificate
in the following format must be recorded:
“I, _________________________, am personally satisfied that the goods
purchased are of the requisite quality and specifications and have been
purchased from a reliable supplier at a reasonable price.”

PURCHASE BY PURCHASE COMMITTEE THROUGH SPOT QUOTATIONS


Goods up to a value of Rs. 1,00,000/- may be purchased on the recommendation of a local
purchase committee. The committee for such purchases shall consist of at least three faculty
members/Group A officers and one representative of the Finance Section. In order to ensure the
reasonability of prices, the committee shall obtain a minimum of three quotations from reliable
suppliers. The Committee will jointly record a certificate in the following format:
“Certified that we, members of the purchase committee, are jointly and individually satisfied that
the goods recommended for purchase are of the requisite quality and specifications, priced at the
prevailing market rate, and that the supplier recommended is reliable and competent to supply
the goods in question.”
If necessary, the committee may make cash purchases by drawing an advance of up to Rs. 25,000/-.
Note: Large purchases must not be split into smaller lots so as to qualify under direct purchase.

PURCHASE THROUGH QUOTATION/TENDER


The following procedure for obtaining tenders should be followed, as far as possible, for the
purchase of goods/equipment valued at more than Rs. 1,00,000/-. Tenders should be obtained by:
(i) Direct invitation to a limited number of firms (limited tender)
(ii) Advertisement (open tender)
(iii) Invitation to one firm only (single tender)
The limited tender system should ordinarily be adopted for all orders whose estimated value is
less than Rs. 25,00,000/-.
The open tender system, that is, invitation to tender by public advertisement, should be adopted
in all cases in which the estimated value of the demand is Rs. 25,00,000/- or above.
The single tender system must be adopted in the case of articles that are specifically certified as
being of a proprietary nature, giving full justification on record. In the case of purchase on the
basis of a single tender/single bidder, the following certificate must be obtained from the vendor:
“I/We have not supplied the quoted stores at a rate less than the instant quote within the current
financial year.”
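
Purely as an illustration, the monetary thresholds above can be read as a
small decision procedure. The sketch below summarizes them; it is not part of
the policy, and it leaves aside the single tender route, which depends on a
proprietary-nature certification rather than on value.

```python
# Illustrative only: the monetary thresholds above, read as a decision
# procedure. The single tender route is omitted because it depends on a
# proprietary-nature certification, not on value. Amounts are in rupees.

def purchase_route(estimated_value: int) -> str:
    """Map an estimated purchase value to the procedure these rules prescribe."""
    if estimated_value <= 15_000:
        return "Direct purchase (with certificate of quality and price)"
    if estimated_value <= 1_00_000:
        return "Purchase committee with minimum three spot quotations"
    if estimated_value < 25_00_000:
        return "Limited tender (direct invitation to a limited number of firms)"
    return "Open tender (public advertisement)"

for value in (10_000, 60_000, 8_00_000, 30_00_000):
    print(f"Rs. {value:,}: {purchase_route(value)}")
```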

PROCESSING OF QUOTATIONS
Quotations may be invited or received through post/ courier service/ press by the department or
Store Purchase Section from the firms listed on the approved panel of suppliers. The quotation
letters should be signed by the HOD of the concerned department, and by the DDO in the case of
central purchase. The notice inviting quotations may be sent by raising an indent or by using a
blank NIQ format. A panel of approved vendors for various items shall be maintained by the
Store Purchase Section. The buyer may also recommend names of firms for inclusion in the
approved panel. Thereafter, on the due date & time the individual quotations shall be
opened in the presence of a Committee of at least three members including one member from the
Finance Section and the buyer and an official of the store purchase section or the officer/ official
who initially invited the quotations. All the quotations will be signed by the officials present at
the time of opening. A comparative statement shall be prepared either by the buyer or the store
purchase section as the case may be. The comparative statement along with the quotations will
be submitted to the purchase committee for necessary recommendation. The accepted quotations
will be circled on the original quotations and on the comparative statement. A justification for a
particular choice, i.e. being the lowest quotation or on technical grounds, should also be
recorded on the comparative statement. Normally the purchase shall be approved on the basis of
at least three quotations; however, the Director or his nominee can relax these conditions on
sufficient grounds, on the recommendation of the purchase committee. Proprietary items may be
procured from the proprietary source on the basis of a single quotation, after certification of the
proprietary nature of the item by the supplier/seller. In such cases, wherever possible, the
purchase price of a similar item paid previously may be used as a benchmark to ensure
reasonability of price.
The Store Purchase Section will prepare the supply order and send the file to audit. Audit shall
pre-audit the supply order; since it also maintains the budgetary record of recurring/non-recurring
expenditure of all departments, it shall also certify the availability of funds. Thereafter, approval
for the purchase shall be obtained from the college purchase committee, if applicable, and the
supply order, duly checked, shall be sent to the Store Purchase Section for issue to the vendor.

INTERNATIONAL PURCHASE
For procurement of items from outside India against the open general import licence, or
otherwise in foreign currency, all the rules and procedures laid down earlier shall apply.
However, the role of the various purchase committees will be to recommend the purchase rather
than make purchases. The quotation should be obtained directly from the foreign supplier or,
alternatively, from the sole selling agent. All further processing, including pre-audit and
placement of the order, shall be through the Store Purchase Section irrespective of the value of
the purchase. The procedure for processing subsequent to receipt of goods shall be the same as
that for the purchase of indigenous stores.
DISCREPANCY IN SUPPLY
Where stores supplied are found not acceptable, due to damage in transit or wrong supply, and
are consequently rejected, the department concerned or the Store Purchase Section (depending
upon who initiated the purchase) shall immediately notify the supplier of such rejection,
specifying the grounds on which it has been made, and take necessary action to obtain the items
as per the specification of the Supply Order.

MAINTENANCE OF RECORDS, DISPOSAL/WRITE-OFF STORES,


TRANSFER OF STORES
This section describes the records pertaining to stores that must be maintained by the Store
Purchase Section and departments. It also describes the procedure for stock verification and the
procedures for write-off, disposal, transfer of stores from one department to another, and
upgradation, as well as the processing of documents. The following records need to be
maintained by the departments and the Store Purchase Section:
i) Existing asset register, one each for college and projects.
ii) Existing stock register for consumables/non-consumables and assets.
iii) Existing inventories of officials.

WRITE OFF AND DISPOSAL


The HOD shall constitute a stores survey and disposal committee of not less than three members,
at least two of whom shall be Class 'A' officers. This committee shall survey the non-consumable
stores and recommend write-off of items which are no longer usable or serviceable, recording the
reasons for recommending write-off. The HOD shall forward the report to the Store Purchase
Section for obtaining the approval of the competent financial authority and deletion from the records.

TRANSFER OF STORES
Transfer of stores within the College from one department to another and from one official to another is permitted. A transfer voucher will be filled in by an official of the department and sent to the Store Purchase Section for entry in the records.

GENERAL PROCEDURE FOR PROCESS


1. Every HOD who has been delegated powers for the purchase of consumable and non-consumable items is expected to exercise the same vigilance in respect of expenditure incurred from public moneys as a person of ordinary prudence would exercise in respect of expenditure from his own pocket.
2. The expenditure should not be prima facie more than the occasion demands.
3. No authority should exercise its power of sanctioning expenditure to pass an order that will be directly or indirectly to its own advantage.
4. The responsibility and accountability of every HOD delegated with financial powers to procure any item or service is total and indivisible. The College expects that the head of the department will have the public interest in mind when making a procurement decision. This responsibility is not discharged merely by the selection of the cheapest offer but must conform to the following yardsticks of financial propriety:
a) Whether the offers have been invited in accordance with governing rules and after following a fair and reasonable procedure in the prevailing circumstances.
b) Whether the authority is satisfied that the selected offer will adequately meet the requirement for which it is being procured.
c) Whether the price on offer is reasonable and consistent with the quality required.
d) Above all, whether the offer being accepted is the most appropriate one, taking all the relevant factors into account and in keeping with the standards of financial propriety.
5. All purchase orders above a total value of Rs. 25,000/- will be sent to the D.D.O for getting the same pre-audited/vetted before placing the order with the agency.
6. All purchase cases of value more than Rs. 5,00,000/- will be placed before the college purchase committee (CPC) by the Store Purchase Section for recommendation.
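
Rules 5 and 6 above amount to a value-based routing of each purchase file. Purely as an illustration – the function and step names below are hypothetical, not part of the college rules – the routing can be sketched like this:

    # Hypothetical sketch of the routing implied by rules 5 and 6 above.
    # Step names and threshold handling are illustrative assumptions.
    def approval_steps(order_value_rs: int) -> list[str]:
        """Return the approval steps a purchase file would pass through."""
        steps = ["store purchase section prepares supply order"]
        if order_value_rs > 25_000:
            steps.append("D.D.O pre-audit/vetting before placing the order")
        if order_value_rs > 500_000:
            steps.append("college purchase committee (CPC) recommendation")
        steps.append("supply order issued to the vendor")
        return steps

    print(approval_steps(40_000))  # includes the D.D.O pre-audit step only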

STOCK VERIFICATION
The HOD shall appoint a committee of at least three faculty members to conduct bi-annual stock
verification of all items of various stock registers of the department.

IMPLEMENTATION OF THE RULES


The College shall lay down guidelines specifying the normal time for each of the processing functions under these rules so that all actions are completed expeditiously.

INTERPRETATION OF THE RULES


Wherever difficulties arise in interpreting these rules or relaxations are required for smooth
functioning of research and teaching work, the Director shall be the Competent Authority for
approval on behalf of the Board of Governors.

Standardization
Roadmap for this paper
Standardization’s historical record of economic success speaks for itself and needs no further analysis here. The use of standardization in NCPI, however, requires careful attention because its critical value to the IT landscape is not yet widely understood. Here is how the major sections of this paper will present standardization as a new business strategy for NCPI:
Standardization vs. Uniqueness
Both of these have their proper place in business and in life, but infrastructure of any kind is a clear candidate for standardization, not uniqueness. The contrary trend in NCPI has been toward one-time unique engineering, which has led to systems that are difficult to design, deploy, maintain, and manage.
Fundamental Characteristics of Standardized NCPI
Standardizing NCPI introduces two simple but powerful fundamental characteristics: modular building-block architecture and increased human learning. Their intrinsic value is intuitive – most adults can remember the limitless ways of configuring children’s blocks, and no one questions the benefits of learning. Their combined influence on NCPI is profound. From these two fundamental characteristics come an array of benefits that spread throughout the infrastructure and touch nearly every aspect of it.
How Standardization Drives NCPI Business Value
The clincher for modular standardization is its multi-faceted, point-by-point contribution to NCPI “business value” – benefit received per dollar spent. The benefits that flow from modular architecture and increased human learning contribute in multiple ways to every one of the three major components of NCPI business value: availability, agility, and total cost of ownership (TCO). (For more about this NCPI business value equation and why it is an appropriate metric, see APC White Paper #117, “Network-Critical Physical Infrastructure: Optimizing Business Value.”)
[Figure: benefits flowing from standardization contribute to the three components of NCPI business value – availability, agility, and TCO.]
Standardization vs. Uniqueness
Standardization and uniqueness are familiar opposites. It is not difficult to recognize the crucial, but very distinct, roles played by the two; everyday experience is filled with examples of how each has its proper place in the effective delivery of a product or process.
Uniqueness is not for infrastructure
Uniqueness can be a wonderful thing. A striking building, Mom’s peach pie, a piano sonata, art of every kind – no one would argue that standardization has any place in experiences valued for their sensory qualities or other interesting characteristics. Certain things are intended to be unique, and they are the better for it.
Infrastructure is different. Infrastructure consists of system underpinnings that support and deliver the part of the system we are actually interested in. In each of the above examples there are elements that can be considered “infrastructure”: the building’s construction materials, Mom’s measuring spoons, the piano keys, the canvas that holds the paint. The job of infrastructure is to be functional and reliable – it is just supposed to work.
The time-tested characteristic that makes infrastructure effective, reliable, predictable, and worry-free is the opposite of uniqueness; it is standardization. Because of standardization, the infrastructure of our day-to-day pursuits has become part of the woodwork of modern life – so commonplace and commonsense that we rarely think about it. One would expect data center infrastructure to follow the same paradigm, but until now there has been little movement in that direction. Nearly 40 years after its birth, IT physical infrastructure is still, in many ways, a craft industry: disparate components from different vendors are typically custom engineered into one large infrastructure system that is unique to the facility.
Unique NCPI means unique problems
One-time engineering of an entire NCPI results in a unique system, with unique problems that require unique diagnosis and repair – a process that is not only expensive and time-consuming, but also provides little learning that can be applied to further unique problems in the future, or to problems at other data centers in the organization. Standardization eliminates the need for one-time engineering and eliminates the overhead of dealing with unique problems in the infrastructure, freeing up resources for developing the data processing functionality of the IT layer supported by the infrastructure, which is the real mission of the data center.
The goal of NCPI standardization is to drive out the inefficiencies and error-prone complexity of one-time unique engineering – to transparently manage the routine business of IT physical infrastructure and create that same signature quality expected of any infrastructure: it just works.
Configurable solutions using standardized building blocks
Customization of connections and components simply to get things to work (the Rube Goldberg effect) adds no real value; it merely introduces complexity and increases opportunities for human error. However, the ability to configure – and reconfigure – NCPI size or functionality to fit rapidly changing business needs is critical to the effectiveness and value of NCPI.
How can standardization be used to advantage when a critical IT requirement is flexibility? As this paper will show, the key to harnessing the power of standardization in a changeable environment is modularity – pre-engineered, standardized building blocks that can be configured as the user wishes (Figure 1). The ability to quickly assemble standardized components into a logical and understandable configuration to respond to changing functional and financial requirements is one of the primary benefits of NCPI standardization – it is called agility.
One step further: Standardized data centers
NCPI designed this way – configured from standardized modular elements – provides significant benefits in the deployment and operation of a data center, as described throughout this paper. For broader IT operations that span multiple data centers, the benefits of standardization can be extended even further by deploying the same, or similar, NCPI at all installations – incorporating standardization not only within a data center but also across data centers. Data centers that are the same in as many respects as possible – from the same floor plan to the same labels on circuit breakers – take full advantage of standardization’s enormous potential for efficiencies in design, installation, operation, maintenance, error avoidance, and cost. Most of the benefits described in this paper are magnified significantly when standardized NCPI is deployed in multiple data centers.
Figure 1 – Unique engineering vs. standardized modular building blocks. [Left panel: unique one-time engineering – good for art, bad for infrastructure. Right panel: standardized modular components – changeable, scalable, repeatable, understandable, integrated.]
[Figure: standardized NCPI makes things modular and understandable. Building-block architecture adds value to equipment (scalable, changeable, portable, swappable); increased human learning adds value to people (avoid errors, anticipate problems, share knowledge, increase productivity).]
Fundamental Characteristics of Standardized NCPI
The benefits of standardization in NCPI affect every dimension: the way it occupies physical space, its functionality, and its evolution over time – from initial design and installation to reconfiguration at each refresh cycle. These benefits take a variety of forms and occur in many places throughout NCPI.

Modularity: Divide and standardize


The cornerstone of standardization in NCPI is modularity. Modularity is achieved by dividing up a complete product or process into smaller chunks – modules – of similar size or functionality that can be assembled as needed to create variations of the original product/process. Flashlight batteries are a familiar example: batteries (modules) are combined in different numbers to obtain varying amounts of power. Blade servers and RAID arrays are examples of modularity in IT equipment – multiple units combined to create varying amounts of server or storage capacity. Modules needn’t be identical: Lego™ bricks are modular, but they are in some ways the same and in some ways different – color, size, and shape are different, but sizes and connections are standardized so that the bricks (modules) can work together as an integrated system.
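
To make the building-block idea concrete, here is a minimal sketch – ours, not APC’s, with an assumed module size – of how one standardized module is combined in different numbers to serve different loads, in the spirit of the flashlight-battery example:

    # Illustrative sketch only: the 10 kW module size and the loads are assumptions.
    import math

    MODULE_KW = 10  # assumed capacity of one standardized UPS power module

    def modules_needed(load_kw: float, spare_modules: int = 1) -> int:
        """N+x sizing: enough identical modules for the load, plus spares for redundancy."""
        return math.ceil(load_kw / MODULE_KW) + spare_modules

    for load in (18, 42, 95):
        print(f"{load} kW load -> {modules_needed(load)} modules (N+1)")

The same standardized block serves very different loads; only the count changes.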
Different modular systems incorporate different amounts of sameness and difference – that is, varying levels of standardization – into their modules, depending upon the desired goal in dividing up functionality.
Flashlight batteries, blade servers, and RAID arrays are examples of very basic modularity, with little or no variation in the units that make up a complete system. A more complex system with multiple functions to be integrated – such as NCPI – requires careful engineering by the manufacturer in order to modularize in ways that optimize the balance between level of standardization and amount of flexibility to users. NCPI provides opportunities for effective modular design at a variety of levels. Some examples:
• Interchangeable UPS power and battery modules. Enables scalability of power, redundancy, and runtime, and can be hot-swapped for repair without system shutdown.
• Standardized modular wiring distribution. Breaks down room wiring into row-level or rack-level modules. Eliminates confusing and mistake-prone wiring tangles, and simplifies and speeds the process of unplug-rearrange-reconnect. Modular power distribution can range from rack-sized units that serve an entire row to power strips that serve a single rack.
• Rack-level air distribution. Breaks down room airflow into local control at the racks for precise cooling of hot spots.
• High-density clusters. Integration of racks, power distribution, and cooling into a self-contained, enclosed “room” to isolate and cool heat-intensive IT equipment. (In this case, a “module” is the whole integrated cluster.)
Modular components with standardized structure and connections make everything easier, faster, and cheaper – from manufacture and inventory at the vendor, through design and engineering at the planning table, to installation and operation at the customer site. Modular design is the source of one critically important component of NCPI business value (agility, the ability to respond to changing or unexpected business opportunities) and a major contributor to the other two (availability and total cost of ownership).
• Modular systems are scalable. Modular NCPI can be deployed at a level that meets current IT needs, with the ability to add more later. This ability to “rightsize” can provide a significant reduction in total cost of ownership.
• Modular systems are changeable. Modular design provides great flexibility in reconfiguring NCPI to meet changing IT requirements.
• Modular systems are portable. Self-contained components, standard interfaces, and understandable structure save time and money when modular systems are installed, upgraded, reconfigured, or moved.
• Modular components are swappable. Modules that fail can be easily swapped out for upgrades or repair – often without system shutdown.
The portable and swappable nature of modular components allows work to be done at the factory, both before delivery (such as pre-wiring of power distribution units) and after (such as the repair of power modules). In-factory work has, statistically, a far lower rate of defects than work done on site – for example, factory-repaired UPS power modules are 500-2000 times less likely to cause outages, introduce new defects, or inhibit return to fully operational status compared to field-repaired modules. The ability to perform factory repair is a significant reliability advantage.2
For larger IT operations that occupy multiple facilities, modular architecture facilitates keeping as much as possible the same between installations (see earlier paragraph, One step further: Standardized data centers). Selected elements of a master NCPI design can be modified, added, or eliminated to accommodate differences in size or function between data centers without affecting other parts of the design, thereby maximizing the extent of infrastructure the data centers have in common.

Human learning: The power of understanding


Modularity enhances the effectiveness of equipment. Understandability enhances the effectiveness of people. Standardization is, by its nature, a simplifying process; a standardized system facilitates learning at every level. Increased knowledge and understanding enables people to work more efficiently and with fewer mistakes, helps them to teach others, and empowers them to participate in problem-solving. In a standardized environment, things are not only more understandable but also more predictable and repeatable, making problems less likely to occur and easier to recognize when they do.
When things are easier to understand and more predictable, they are easier to explain, to document, to operate, to troubleshoot, and to fix. As these effects build upon each other, they enable staff to:
• Avoid errors. The most significant human-learning effect of standardization is reduced human error in the data center. Studies have shown that human error is the cause of 50-60% of data center downtime,3 and the potential to reduce it represents the single largest user entitlement to increased availability. Reducing human error is a classic benefit of standardization – from fewer errors in a standardized assembly process to fewer errors in diagnosing trouble in a standardized system. Standardized systems make documentation and training easier and more effective, resulting in more skilled staff who are less likely to make mistakes. Standardized controls, interfaces, and connections provide additional protection by making correct operation more self-evident. If documentation itself is standardized, error-avoidance is further enhanced by having information easily accessible in expected places and formats.
• Anticipate problems. Understanding how things work, combined with standardized procedures for such things as equipment monitoring and predictive maintenance, is a powerful defense against what might otherwise be considered “unexpected.”
• Share knowledge. Having structure and function “make sense” fosters ongoing learning by encouraging sharing of information – when people understand things, they are more likely to engage in conversation, collaborate on analysis and problem-solving, and learn from each other. This enhanced climate of knowledge and insight permeates everything that needs to be done with, or understood about, NCPI.
• Increase productivity. As these learning effects interact and proliferate, there is an overall increase in productivity. A more knowledgeable staff means that time spent on NCPI-related matters is used more efficiently. With equipment and procedures easier to understand, less time is spent training and being trained. With reduced human error, less time is spent recovering from human-caused problems and less help desk time is spent responding to calls related to such problems. All these economies of time free up human resources for the functional business of the data center – the work of the IT equipment that is powered, cooled, and protected by NCPI – rather than for management of the NCPI layer itself.

How Standardization Drives NCPI Business Value


As shown in the previous section, modular structure and increased human learning – two fundamental and empowering characteristics of standardized NCPI – provide a wide range of direct and commonsense benefits. This section will look at standardization more closely, and from a different viewpoint – a bottom-line viewpoint – to demonstrate, point by point, the value of standardization to the enterprise. Modularity and increased human learning spawn benefits in three critical areas of performance which, taken together, constitute the business value of NCPI.
The NCPI business value “equation”
What gives Network-Critical Physical Infrastructure high business value? Since its primary function is to keep the IT operation up and running, availability is the first component of NCPI business value. The ability to respond quickly to changing IT needs is also critical to success, making agility another important component. The total cost of buying and operating NCPI over its lifetime – total cost of ownership, or TCO – is the third major component of business value (Figure 3). (For more about NCPI business value, see APC White Paper #117, “Network-Critical Physical Infrastructure: Optimizing Business Value.”)

Reliability of equipment. Standardized modular components can be mass produced in greater volume than non-modularized systems, which reduces production defects. Modular components can be returned to the manufacturer for factory service, which greatly improves the quality of repairs. (For more about these two advantages, see earlier section, Fundamental Characteristics of Standardized NCPI.) In addition, modular systems with standardized hookups can be configured at the factory the same way they will be configured on site, allowing for factory pre-testing to discover defects. Standardized modular components facilitate internal redundancy (no downtime at the time of component failure) and hot-swap replacement (no downtime during swap-out of a failed component). Standardized equipment monitoring systems enable easy-to-understand management tools that encourage predictive maintenance to identify problems before they escalate from trouble to major expense, and to reduce reliance on scheduled preventative maintenance, which creates additional exposure to human error.
Mean time to recover (MTTR). A failed modular component can be quickly swapped out for replacement, so recovery isn’t delayed while waiting for repair. Standardization makes things easier to understand and operate, making diagnosis of problems faster and increasing the potential for diagnosis and correction by the user.
Human error. Of all the ways to increase availability, reducing human error offers by far the greatest opportunity. With standardized equipment and procedures, functionality is more transparent, routines are simplified and easier to learn, and things operate as expected – all reducing the likelihood of everything from typing the wrong command to pulling the wrong plug.

Speed of deployment. With modular components, planning and design is faster because the system’s structure can be configured in a logical way that aligns with design objectives, both in the physical arrangement of units and by using only the number and type of units needed to meet the current IT requirements. Deployment does not have to wait while management tries to justify the expense of an oversized data center design that attempts to predict the future ten years out. Special NCPI requirements don’t adversely affect planning time because flexibility of design is built into modular architecture. Delivery is faster because standardized, mass-produced units can be inventoried and ordered “off the shelf.” On-site configuration and hookup is faster not only because connections are standardized and simplified, but also because there is less equipment to install when using only the number of building blocks needed. Commissioning is faster because standardized modules can be connected up at the factory just as they will be on site, allowing for factory pre-test. Compared to traditional “legacy” all-in-one-piece infrastructure with static custom design and one-time engineering, these efficiencies combine to cut concept-to-commissioning time from months to weeks, and reconfiguration time from weeks to days.
In addition, the time taken at all stages of deployment is further shortened by the next attribute – the ability to scale the design to meet only current IT requirements, thereby deploying a smaller infrastructure with less equipment than in typical legacy systems.
Ability to scale. With modular building-block architecture, functionality is available in bite-sized pieces that can be optimally configured for IT spaces of any size, from wiring closets to large data centers. Of even greater significance is the ability to design the infrastructure to support only the IT requirements needed at startup. Then, as IT requirements increase, more building blocks can be added without re-engineering the whole system and without the need for shutdown of critical equipment. This strategy of “rightsizing” can result in significant cost savings over the life of the data center. (See APC White Paper #37, “Avoiding Costs From Oversizing Data Center and Network Room Infrastructure.”)
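
A small worked comparison illustrates the rightsizing effect. All figures below are invented for illustration (they are not taken from White Paper #37): a forecast-sized one-time build is compared with 100 kW modular blocks added only as the actual load grows:

    # Illustrative rightsizing sketch; capacities and unit cost are assumed numbers.
    COST_PER_KW = 2000                    # assumed installed capital cost per kW
    forecast_max_kw = 500                 # 10-year projection used for a one-time build
    actual_load_kw = [50, 100, 150, 200, 250, 300, 300, 300, 300, 300]  # load plateaus

    legacy_cost = forecast_max_kw * COST_PER_KW   # build everything up front

    blocks, modular_cost = 0, 0
    for load in actual_load_kw:                   # add 100 kW blocks only when needed
        while blocks * 100 < load:
            blocks += 1
            modular_cost += 100 * COST_PER_KW

    print(f"one-time build-out: {legacy_cost:,}")   # 1,000,000 - pays for unused capacity
    print(f"rightsized build:   {modular_cost:,}")  # 600,000 - only what was needed

If the forecast overshoots, the one-time build pays for capacity that is never used; the modular build stops where the real load stops, and even the capacity that is eventually needed is paid for later rather than up front.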
Ability to reconfigure. With typical IT refresh cycles of two years, the ability to reconfigure, upgrade, or move is a significant component of NCPI agility. Modular elements can be unplugged, rearranged, and reconnected. Beyond reconfiguration driven by business need, there is also the steady increase in power density of IT equipment resulting from shrinking physical size – blade servers – which will periodically require reconfiguration of racks, power, and cooling. Modular hot-swappable components also provide the ability to reconfigure for different levels of redundancy, different voltages, or different plug types. Not only does modular structure simplify the physical process of disconnecting, moving, and reconnecting, but the manufacturer’s careful design of the equipment’s modularity can minimize the need for redesign and maximize the ability to reuse equipment in a new configuration.

Capital cost. Standardized modular architecture reduces capital cost in two major ways: (1) it enables the infrastructure size to be scaled to align more closely with present IT requirements, rather than building out initial capacity to support the maximum projected requirements – you only buy what you need – and (2) its straightforward and understandable structure simplifies every step of the deployment process, from planning to installation. That simplification means less time spent in each stage, and often means a reduced need to bring in outside help. For example, standardized modular power distribution at the rack level provides cost savings from both scalability and simplicity: power and cabling can be deployed for only the racks installed, reducing the need for electrical contract work to wire the room. Similarly, standardized modular rack units with integrated cabling and airflow provide infrastructure scalability and simplified design and installation that minimizes the need for design consulting and custom installation services. (For more about the substantial cost savings that can be obtained from properly scaling infrastructure size – “rightsizing” – see APC White Paper #37, “Avoiding Costs From Oversizing Data Center and Network Room Infrastructure.”)
Non-energy operating cost. Simplified, easy-to-learn design means training is faster and more effective, and operation/maintenance procedures are more efficient and less prone to mistakes. Standardized, understandable equipment and procedures mean more maintenance can be done by IT staff, reducing the need for vendor-supplied maintenance. Standardized equipment monitoring systems enable easy-to-understand management tools that encourage predictive maintenance to identify problems before they escalate from trouble to major expense. Standardized modular components enable swapping out of modules for factory service, which is more reliable and less expensive than on-site repair. Fewer help-desk resources are needed to support downtime-related issues, because of the overall improvement in availability (see earlier section, How Standardization Increases AVAILABILITY).
Energy cost. Electricity cost over the lifetime of the data center is the single largest component of TCO. Scaling the infrastructure to meet present IT needs, with the ability to add on incrementally as IT needs grow, means you only power and cool what you need. The resulting savings in electricity are substantial over the life of the data center. Modular internal UPS design enables UPS sizing more closely matched to load requirement, resulting in better UPS operating efficiency and reducing the size of the UPS modules needed to achieve redundancy. Modular cooling design, such as rack-level air distribution units, enables more accurate airflow for increased cooling efficiency, so less energy is consumed by cooling equipment.
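
The electricity effect can be roughed out the same way. The loss model, tariff and loads below are our assumptions for illustration, not APC figures; the point is simply that an oversized UPS carries a fixed loss proportional to its capacity:

    # Rough energy-cost sketch; the loss model and tariff are illustrative assumptions.
    HOURS_PER_YEAR = 8760
    TARIFF = 0.10  # assumed cost per kWh

    def ups_loss_kw(capacity_kw: float, load_kw: float) -> float:
        """Assumed loss model: 2% of capacity as fixed loss plus 5% of the load."""
        return 0.02 * capacity_kw + 0.05 * load_kw

    load_kw = 100
    for capacity in (500, 120):  # oversized vs. rightsized UPS for the same load
        annual = ups_loss_kw(capacity, load_kw) * HOURS_PER_YEAR * TARIFF
        print(f"{capacity} kW UPS carrying {load_kw} kW: ~{annual:,.0f} per year in losses")

Under these assumed figures, the oversized unit wastes roughly twice as much energy every year for the same load.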

Induction Process
10 things to think about when implementing an employee induction process

1. Identify the business objectives and desired benefits

Effective induction can have many benefits including reducing turnover costs, engaging and
motivating new and existing employees, contributing to the implementation of good systems and
processes and gaining feedback and ideas from new hires looking at an organisation through
“fresh eyes”. Thinking about how a new or improved induction process could benefit your
organisation will help you determine the focus and shape of the programme. If you are keen to
help new hires build internal networks for example, a programme which brings all new hires
together may be important. If your key business driver is to ensure consistent standards and
messages across a multi-site organisation, an e-learning solution may be most appropriate.

2. Secure early commitment

Don’t underestimate the powerful effect that induction can have in developing commitment to a
new organisation. A good induction process shows that the company cares and is committed to
setting people up for success. It can also help to identify problems or barriers at an early stage
and allow the appropriate action to be taken. Conversely, a poor induction experience could make some new entrants doubt their decision to join your organisation, representing a risk in terms of future retention and reputation.

3. Agree roles and responsibilities of different players in the process

Clearly identify the roles and responsibilities of the different players in the induction process.
These may include the HR/ L&D functions, the line manager, the administration function,
mentors or buddies and of course the individual themselves. This is perhaps best achieved via a
detailed induction checklist which allocates specific responsibilities and timelines to the various
stakeholders.
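
Such a checklist is essentially a table of tasks, owners and deadlines, which also makes it easy to track. A minimal sketch follows; the roles, tasks and day offsets are invented examples, not a recommended template:

    # Minimal induction-checklist sketch; tasks, owners and day offsets are examples.
    from dataclasses import dataclass

    @dataclass
    class ChecklistItem:
        task: str
        owner: str     # e.g. HR/L&D, line manager, administration, mentor, new hire
        due_day: int   # working days relative to the start date (negative = pre-arrival)
        done: bool = False

    checklist = [
        ChecklistItem("Send induction plan and joining pack", "HR/L&D", -5),
        ChecklistItem("Set up IT accounts and workstation", "administration", -2),
        ChecklistItem("Security, housekeeping and site tour", "line manager", 1),
        ChecklistItem("Introduce mentor/buddy", "line manager", 3),
        ChecklistItem("Three-month review meeting", "line manager", 60),
    ]

    for item in sorted(checklist, key=lambda i: i.due_day):
        print(f"day {item.due_day:>3}: {item.task} ({item.owner})")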

4. Think of induction as a journey

Thinking about your induction process as a journey rather than a one-off event is essential. It
may be useful to consider the induction journey in terms of the first 3 days, first 3 weeks and first
3 months. This approach might include a mini induction during the first 3 days with an
immediate supervisor covering essentials such as security, housekeeping, organisation charts,
initial objectives and introductions to key personnel. A more comprehensive induction training
session may follow during the first 3 weeks and then a review meeting after 3 months to check
that everything is on track. Giving consideration to what post-programme support may be
needed is also important. This may include additional training, quick reference guides, key
contact lists or personal support which could be provided by mentors or buddies.

5. Engage staff prior to joining

A good induction process should start from the moment an employee accepts an offer with your
organisation. Develop a comprehensive induction checklist and also give thought to what could
be covered pre-arrival to prepare someone for life within your organisation. This may include a
pre-joining visit, regular phone and email contact or access to the company intranet site.
Ensuring that all the relevant administrative and IT arrangements are in place will also be a big factor in getting a new employee up and running as soon as possible and creating a good first impression.

6. Have clear learning objectives for training sessions

When designing content for induction training, it is important to start by identifying the desired
outcomes of the training. Michael Meighan advises thinking in terms of what a new entrant
“must know”, “should know” and “could know”. The “must knows” will include key policies
and procedures, regulatory, health and safety and personnel matters essential for a person to do
their particular job. “Should knows” may be things that the person ought to know in order to fit in within the organisation, and “could knows” may be of interest but would not be essential for a new entrant to do their job, e.g. organisational history. When designing the training also ensure
that training sessions and induction materials take account of different learning preferences and
where possible include a variety of delivery styles.

7. Respect the induction needs of different audiences

One size does not necessarily fit all and recognising that different groups of new employees may
have varying induction needs is essential. Within the same organisation, the induction needs of a
senior director, a school leaver and indeed a returning expatriate are likely to be quite different.
Whilst the fundamentals of the induction process may remain the same, ensuring that the content
of induction training sessions is appropriately tailored and relevant to the needs of different
audiences will be vital in securing engagement.

8. Ensure a quality experience

For most people, the induction programme will be their first experience with the Learning and
Development function within the organisation - and all too often this can be less than positive. It
is important to remember that this is a unique opportunity for L&D to “set out its stall” with new
hires. Developing carefully tailored content and choosing competent trainers who motivate and
engage their audiences will be key ingredients in delivering a high quality experience.

9. Keep induction material up to date

All too often organisations will make a significant investment in designing a new induction
process and then fail to keep key content up to date. It is vital that at the outset an owner for the
process is identified and it is agreed how induction content will be updated by key stakeholders
on an on-going basis. Using e-based induction materials can be one way to ensure that content can be
easily maintained and updated. Whilst this may mean a more significant up-front investment, e-
based induction materials may also help reduce expenditure on classroom based training and the
associated travel and delivery costs particularly in multi-site organisations.

10. Evaluation

Finally, as with any new process it is important to continuously evaluate the success of your
induction process and make appropriate changes as required. Some measures which may be
helpful in assessing the success of your approach could include:
1) Feedback from new hires who have gone through the process – this could take the form of
course evaluation sheets if you are delivering an induction training session or could be achieved
via 1:1 interviews with a selected group of new entrants after their first 3 months with the
organisation.
2) Retention rates for new entrants – monitoring these will be particularly important for
organisations who implemented a new process in an attempt to reduce attrition levels amongst
new joiners.
3) Exit Interviews – data from individuals choosing to leave the organisation can provide
valuable information about the success of an induction process.
4) Monitoring common queries – where your organisation has an HR Service Centre it may also be useful to monitor the types of common queries coming from new joiners to review whether additional information should be included in the induction process.
5) Employee Engagement Survey – where your organisation has a regular employee engagement
survey, this could prove valuable in measuring changes in levels of commitment and engagement
following the introduction of a new induction process.
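
Several of these measures reduce to simple cohort arithmetic. For example, measure 2 (retention of new entrants) can be computed as below; the cohort data and the 90-day window are invented for illustration:

    # Illustrative retention sketch; names, dates and the 90-day window are invented.
    from datetime import date

    new_starters = [
        ("A", date(2023, 1, 9), None),               # still employed
        ("B", date(2023, 1, 9), date(2023, 2, 20)),  # left during induction period
        ("C", date(2023, 2, 6), None),
    ]

    def retention_rate(cohort, as_of: date, window_days: int = 90) -> float:
        """Share of joiners who stayed past the induction window (first ~3 months)."""
        eligible = [(joined, left) for _, joined, left in cohort
                    if (as_of - joined).days >= window_days]
        if not eligible:
            return float("nan")
        stayed = sum(1 for joined, left in eligible
                     if left is None or (left - joined).days >= window_days)
        return stayed / len(eligible)

    print(f"{retention_rate(new_starters, date(2023, 6, 30)):.0%}")  # 67%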

Positive outcomes of a good induction process


• High levels of motivation and commitment amongst new employees.
• High retention rates for new joiners within the organisation.
• Positive influence on existing staff involved in the induction process – who are reminded of the positive attributes of their organisation and motivated by their involvement in the process.
• Organisation is perceived externally as a good employer, who cares and works hard to integrate
new staff – likely to act as a positive attraction tool for new hires.
• Positive impact on the implementation of processes and procedures within the organisation.

induction training and induction checklist


induction training design guide and free induction training checklist
Induction Training is absolutely vital for new starters. Good induction training ensures new
starters are retained, and then settled in quickly and happily to a productive role. Induction
training is more than skills training. It's about the basics that seasoned employees all take for
granted: what the shifts are; where the notice-board is; what's the routine for holidays, sickness;
where's the canteen; what's the dress code; where the toilets are. New employees also need to
understand the organisation's mission, goals, values and philosophy; personnel practices, health
and safety rules, and of course the job they're required to do, with clear methods, timescales and
expectations.

On the point of values and philosophy, induction training offers a wonderful early opportunity to
establish clear foundations and expectations in terms of ethics, integrity, corporate social
responsibility, and all the other converging concepts in this area that are the bedrock of all good
modern responsible organisations. See also love and spirituality in organisations: trainers and
new starters - anyone - can bring compassion and humanity to work. The starting point is
actually putting these fundamental life-forces on the workplace agenda.

Professionally organized and delivered induction training is your new employees' first proper
impression of you and your organization, so it's also an excellent opportunity to reinforce their
decision to come and work for you.

Proper induction training is increasingly a legal requirement. Employers have a formal duty to
provide new employees with all relevant information and training relating to health and safety
particularly.

As a manager for new employees it's your responsibility to ensure that induction training is
properly planned. Even if head office or another 'centre' handles induction training - you must
make sure it's planned and organised properly for your new starter. An induction training plan
must be issued to each new employee, before the new employee starts, and copied to everyone in
the organisation who's involved in providing the training, so the new starter and everyone else
involved can see what's happening and that everything is included. Creating and issuing a
suitable induction plan for each new starter will help them do their job better and quicker, and
with less dependence on your time in the future. Employees who are not properly inducted need
a lot more looking after, so failing to provide good induction training is an utterly false economy.

As with other types of training, the learning and development can be achieved through very
many different methods - use as many as you need to and which suit the individuals and the
group, but remember that induction training by its nature requires a lot more hand-holding than
other types of training. Err on the side of caution - ensure people are looked after properly and
not left on their own to work things out unless you have a very specific purpose for doing so, or
if the position is a senior one.

As with other forms of training there are alternatives to 'chalk and talk' classroom-style training.
Participation and 'GAAFOFY' methods (Go Away And Find Out For Yourself) can be effective,
particularly for groups and roles which require a good level of initiative. Here are some examples
of training methods which can be used to augment the basics normally covered in classroom
format:

• on the job coaching

• mentoring

• delegated tasks and projects

• reading assignments

• presentation assignments

• attending internal briefings and presentations, eg 'lunch and learn' format

• special responsibilities which require obtaining new skills or knowledge or exposure

• video

• internet and e-learning

• customer and supplier visits

• attachment to project or other teams

• job-swap

• shadowing (shadowing another employee to see how they do it and what's involved).
Be creative as far as is realistic and practicable. Necessarily induction training will have to
include some fairly dry subjects, so anything you can do to inject interest, variety, different
formats and experiences will greatly improve the overall induction process. There are lots of
ideas for illustrating concepts and theories relating to induction training on the acronyms
page (warning: contains adult content), and also the stories page.

Induction training must include the following elements:

• General training relating to the organisation, including values and philosophy as well as
structure and history, etc.

• Mandatory training relating to health and safety and other essential or legal areas.

• Job training relating to the role that the new starter will be performing.

• Training evaluation, entailing confirmation of understanding, and feedback about the quality
and response to the training.

And while not strictly part of the induction training stage, it's also helpful to refer to and discuss
personal strengths and personal development wishes and aspirations, so that people see
they are valued as individuals with their own unique potential, rather than just being a name
and a function. This is part of making the job more meaningful for people - making people feel
special and valued - and the sooner this can be done the better.

For example the following question/positioning statement is a way to introduce this concept of
'whole-person' development and value:

"You've obviously been recruited as a (job title), but we recognise right from the start that you'll
probably have lots of other talents, skills, experiences (life and work), strengths, personal aims
and wishes, that your job role might not necessarily enable you to use and pursue. So please give
some thought to your own special skills and unique potential that you'd like to develop (outside
of your job function), and if there's a way for us to help with this, especially if we see that there'll
be benefits for the organisation too (which there often are), then we'll try to do so..."

Obviously the organisation needs to have a process and capacity for encouraging and assisting
'whole person development' before such a statement can be made during induction, but if and
when such support exists then it makes good sense to promote it and get the ball rolling as early
as possible. Demonstrating a true investment in people - as people, not just employees -
greatly increases feelings of comfort and satisfaction among new-starters. It's human nature -
each of us feels happier when someone takes a genuine interest in us as an individual.

Including a learning styles self-assessment questionnaire or a multiple intelligences self-assessment questionnaire within the induction process also helps to 'draw out' strengths and
preferences among new starters, and will additionally help build a platform for meaningful work
and positive relations between staff and employer. Ensure that new starters are given control of
these self-tests - it is more important that they see the results than the employer, although it's fine
and helpful for the employer to keep a copy provided permission is sought and given by the staff
members to do so. Line-managers will find it easier to manage new starters if they know their
strengths and styles and preferences. Conducting a learning styles assessment also helps the
induction trainer to deliver induction training according to people's preferred learning
styles.

So much of conventional induction training necessarily involves 'putting in' to people (knowledge, policies, standards, skills, etc); so if the employer can spend a little time 'drawing
out' of people (aims, wishes, unique personal potential, etc) - even if it's just to set the scene for
'whole person development' in the future - this will be a big breath of fresh air for most new
starters.

Use a feedback form of some sort to check the effectiveness and response to induction training -
induction training should be a continuously evolving and improving process. Free examples of
training feedback forms and induction training feedback forms are available on the free
resources section.

Take the opportunity to involve your existing staff in the induction process. Have them create
and deliver sessions, do demonstrations, accompany, and mentor the new starters wherever
possible. This can be helpful and enjoyable for the existing staff members too, and many will
find it rewarding and developmental for themselves. When involving others ensure delivery and
coverage is managed and monitored properly.

Good induction training plans should feature a large element of contact with other staff for the
new person. Relationships and contacts are the means by which organisations function, get things
done, solve problems, provide excellent service, handle change and continually develop. Meeting
and getting to know other people are essential aspects of the induction process. This is especially
important for very senior people - don't assume they'll take care of this for themselves - help
them to plan how to meet and get to know all the relevant people inside and outside the
organisation as soon as possible. Certain job roles are likely to be filled by passive introverted
people (Quality, Technical, Production, Finance - not always, but often). These people often need
help in getting out and about making contacts and introductions. Don't assume that a director will
automatically find their way to meet everyone - they may not - so design an induction plan that
will help them to do it.

induction training checklist


Here is a simple checklist in three sections, to help you design an induction plan to suit your
particular situation(s).
See also the free induction training checklist working tool with suggested training items (which
is an MS Excel working file version of this page).

Whilst the order of items is something that you must decide locally, there is some attempt below
to reflect a logical sequence and priority for induction training subjects. Consider this an
induction checklist - not an agenda. This checklist assumes the induction of an operational or
junior management person into a job within a typical production or service environment. (See
the training planner and training/lesson plan calculator tool, which are templates for planning and
organising these induction training points, and particularly for planning and organising the
delivery of job skills training and processes, and transfer of knowledge and policy etc.)
general organisational induction training checklist
• Essential 'visitor level' safety and emergency procedures

• Washrooms

• Food and drink

• Smoking areas and policy

• Timings and induction training overview

• Organisational history and background overview

• Ethics and philosophy

• Mission statement(s)

• Organisation overview and structure

• Local structure if applicable

• Departmental structure and interfaces

• Who's who (names, roles, responsibilities)

• Site layout

• Other sites and locations

• Dress codes

• Basic communications overview

• Facilities and amenities

• Pay

• Absenteeism and lateness


• Holidays

• Sickness

• Health insurance

• Pension

• Trades Unions

• Rights and legal issues

• Personnel systems and records overview

• Access to personal data

• Time and attendance system

• Security

• Transport and parking

• Creche and childcare

• Grievance procedures

• Discipline procedures

• Career paths

• Training and development

• Learning Styles Self-Assessment

• Multiple Intelligences Self-Assessment

• Appraisals

• Mentoring

• Awards and Incentives

• Health and Safety, and hazard reporting

• Physical examinations, eye test etc.

• Emergency procedures, fire drill, first aid

• Accident reporting

• Personal Protective Equipment


• Use, care, and issue of tools and equipment

• Other housekeeping issues

• General administration

• Restricted areas, access, passes


job and departmental induction training checklist
The induction training process also offers the best opportunity to help the new person more
quickly integrate into the work environment - particularly to become known among other staff
members. Hence the departmental tours and personal introductions are an absolutely vital part of
induction. Organisations depend on their people being able to work together, to liaise and cooperate
- these capabilities in turn depend on contacts and relationships. Well-planned induction training
can greatly accelerate the development of this crucial organisational capability.

• Local departmental amenities, catering, washrooms, etc.

• Local security, time and attendance, sickness, absenteeism, holidays, etc.

• Local emergency procedures

• Local departmental structure

• Department tour

• Departmental functions and aims

• Team and management

• People and personalities overview (extremely helpful, but be careful to avoid sensitive or
judgemental issues)

• Related departments and functions

• How the department actually works and relates to others

• Politics, protocols, unwritten rules (extremely helpful, but be careful to avoid sensitive or
judgemental issues)

• The work-flow - what are we actually here to do?

• Customer service standards and service flow

• How the job role fits into the service or production process

• Reporting, communications and management structures

• Terminology, jargon, glossary, definitions of local terms


• Use and care of issued equipment

• Work space or workstation

• Local housekeeping

• Stationery and supplies

• Job description - duties, authority, scope, area/coverage/territory

• Expectations, standards, current priorities

• Use of job specific equipment, tools, etc.

• Use of job specific materials, substances, consumables

• Handling and storage

• Technical training - sub-categories as appropriate

• Product training - sub-categories as appropriate

• Services training - sub-categories as appropriate

• Job specific health and safety training

• Job-specific administration, processing, etc.

• Performance reporting

• Performance evaluation

• Training needs analysis method and next steps

• Initial training plans after induction

• Training support, assistance, mentor support

• Where to go, who to call, who to ask for help and advice

• Start of one-to-one coaching

• Training review times and dates

• Development of personal objectives and goals

• Opportunities for self-driven development

• Virtual teams, groups, projects open to job role

• Social activities and clubs, etc.


• Initial induction de-brief and feedback

• Confirmation of next training actions

• Wider site and amenities tour


other induction training activities for managerial, executive, field-based or international
roles
Here are some typical activities to include in the induction training plans for higher level people.
The aim is to give them exposure to a wide variety of experiences and contacts, before the
pressures of the job impact and limit their freedom. As with all roles, induction also serves the
purpose of integrating the new person into the work environment - getting them known.
Induction training is not restricted to simply training the person; induction is also about
establishing the new person among the existing staff as quickly as possible. This aspect of
induction is particularly important for technical personalities and job roles, who often are slower
to develop relationships and contacts within the organisation.

• Site tours and visits

• Field accompaniment visits with similar and related job roles

• Customer visits

• Supplier and manufacturer visits

• Visits and tours of other relevant locations, sites and partners

• Attendance of meetings and project groups

• Shop-floor and 'hands-on' experiences (especially for very senior people)

• Attendance at interesting functions, dinners, presentations, etc.

• Exhibition visits and stand-manning

• Overseas visits - customers, suppliers, sister companies, etc.

structuring the induction training plan


You should strive to organise the induction plan and give it to the new starter before they join
you. This means things need to be planned well in advance because the plan will necessarily
involve other people's time and availability.
Develop a suitable template, into which you can slot the arranged activities. Depending on the
needs of the situation the induction training plan may extend over a number of weeks,
progressively reducing the pre-arranged induction content, as the person settles into their job.

Here's an example of how a week's induction might be shown using a template planner. A
schedule is also a useful method for circulating and thereby confirming awareness and
commitment among staff who will be involved with the induction of the new starter.

Seeing a professionally produced induction plan like this is also very reassuring to the new
starter, and helps make a very positive impression about their new place of work. Adding a notes
and actions section helps the new starter to keep organised during a time that for most people can
be quite pressurised and stressful. Anything you can do to make their lives easier will greatly
help them to settle in, get up to speed, and become a productive member of the team as quickly
as possible.
induction training plan example

induction training plan (name, date, organisation, etc)

                 | mon                 | tues                | wed                 | thurs
-----------------+---------------------+---------------------+---------------------+---------------------
am               | times               | times               | times               | times
                 | activities/subjects | activities/subjects | activities/subjects | activities/subjects
                 | with whom           | with whom           | with whom           | with whom
                 | location            | location            | location            | location
notes & actions  |                     |                     |                     |
lunch            | times               | times               | times               | times
                 | with whom           | with whom           | with whom           | with whom
                 | location            | location            | location            | location
pm               | times               | times               | times               | times
                 | activities/subjects | activities/subjects | activities/subjects | activities/subjects
                 | with whom           | with whom           | with whom           | with whom
                 | location            | location            | location            | location
notes & actions  |                     |                     |                     |

induction training review and feedback


As with any type of training, it is vital to review and seek feedback after induction training.
Different induction feedback templates and sample forms are available on the free
resources section.

It is particularly important to conduct exit interviews with any new starters who leave the
organisation during or soon after completing their induction training.

Large organisations need to analyse overall feedback results from new starters, to be able to
identify improvements and continuously develop induction training planning.

Seek feedback also from staff who help to provide the induction training for new starters, and
always give your own positive feedback, constructive suggestions, and thanks, to all those
involved in this vital process.

Quality
PRACTICAL CONSIDERATIONS IN DEVELOPING QA/QC SYSTEMS
Implementing QA/QC procedures requires resources, expertise and time. In developing any QA/QC system, it is expected that judgements will need to be made on the following:
• Resources allocated to QC for different source categories and the compilation process;
• Time allocated to conduct the checks and reviews of emissions estimates;
• Availability and access to information on activity data and emission factors, including data quality;
• Procedures to ensure confidentiality of inventory and source category information, when required;
• Requirements for archiving information;
• Frequency of QA/QC checks on different parts of the inventory;
• The level of QC appropriate for each source category;
• Whether increased effort on QC will result in improved emissions estimates and reduced uncertainties;
• Whether sufficient expertise is available to conduct the checks and reviews.
In practice, the QA/QC system is only part of the inventory development process and inventory agencies do not have unlimited resources. Quality control requirements, improved accuracy and reduced uncertainty need to be balanced against requirements for timeliness and cost effectiveness. A good practice system seeks to achieve that balance and to enable continuous improvement of inventory estimates.
Within the QA/QC system, good practice provides for greater effort for key source categories, and for those source categories where data and methodological changes have recently occurred, than for other source categories. It is unlikely that inventory agencies will have sufficient resources to conduct all the QA/QC procedures outlined in this chapter on all source categories. In addition, it is not necessary to conduct all of these procedures every year. For example, data collection processes conducted by national statistical agencies are not likely to change significantly from one year to the next. Once the inventory agency has identified what quality controls are in place, assessed the uncertainty of that data, and documented the details for future inventory reference, it is unnecessary to revisit this aspect of the QC procedure every year. However, it is good practice to check the validity of this information periodically, as changes in sample size, methods of collection, or frequency of data collection may occur. The optimal frequency of such checks will depend on national circumstances.
While focusing QA/QC activities on key source categories will lead to the most significant improvements
in the
overall inventory estimates, it is good practice to plan to conduct at least the general procedures outlined
in
Section 8.6, General QC Procedures (Tier 1), on all parts of the inventory over a period of time. Some
source
categories may require more frequent QA/QC than others because of their significance to the total
inventory
estimates, contribution to trends in emissions over time or changes in data or characteristics of the source
category, including the level of uncertainty. For example, if technological advancements occur in an
industrial
source category, it is good practice to conduct a thorough QC check of the data sources and the
compilation
process to ensure that the inventory methods remain appropriate.
It is recognised that resource requirements will be higher in the initial stages of implementing any QA/QC
system than in later years. As capacity to conduct QA/QC procedures develops in the inventory agency
and in
other associated organisations, improvements in efficiency should be expected.
General QC procedures outlined in Table 8.1, Tier 1 General Inventory Level QC Procedures, and a peer
review
of the inventory estimates are considered minimal QA/QC activities for all inventory compilations. The
general
procedures require no expertise beyond that needed to develop the estimates and compile
the
inventory and should be performed on estimates developed using Tier 1 or higher tier methods for source
categories. A review of the final inventory report by a person not involved in the compilation is also good
practice, even if the inventory were compiled using only Tier 1 methods. More extensive QC and more
rigorous
review processes are encouraged if higher tier methods have been used. Availability of appropriate
expertise
may limit the degree of independence of expert reviews in some cases. The QA/QC process is intended to
ensure
transparency and quality.
There may be some inventory items that involve confidential information, as discussed in Chapters 2 to 5.
The
inventory agency should have procedures in place during a review process to ensure that reviewers
respect that
confidentiality.
8.3 ELEMENTS OF A QA/QC SYSTEM
The following are the major elements to be considered in the development of a QA/QC system to be
implemented in tracking inventory compilation:
• An inventory agency responsible for coordinating QA/QC activities;
• A QA/QC plan;
• General QC procedures (Tier 1);
• Source category-specific QC procedures (Tier 2);
• QA review procedures;
• Reporting, documentation, and archiving procedures.
For purposes of the QA/QC system, the Tier 2 QC approach includes all procedures in Tier 1 plus
additional
source category-specific activities.
8.4 INVENTORY AGENCY
The inventory agency is responsible for coordinating QA/QC activities for the national inventory. The
inventory
agency may designate responsibilities for implementing and documenting these QA/QC procedures to
other
agencies or organisations. The inventory agency should ensure that other organisations involved in the
preparation of the inventory are following applicable QA/QC procedures.
The inventory agency is also responsible for ensuring that the QA/QC plan is developed and
implemented. It is
good practice for the inventory agency to designate a QA/QC coordinator, who would be responsible for
ensuring that the objectives of the QA/QC programme are implemented.
8.5 QA/QC PLAN
A QA/QC plan is a fundamental element of a QA/QC system, and it is good practice to develop one. The
plan
should, in general, outline QA/QC activities that will be implemented, and include a scheduled time
frame that
follows inventory preparation from its initial development through to final reporting in any year. It should
contain an outline of the processes and schedule to review all source categories.
The QA/QC plan is an internal document to organise, plan, and implement QA/QC activities. Once
developed, it
can be referenced and used in subsequent inventory preparation, or modified as appropriate (i.e. when
changes in
processes occur or on advice of independent reviewers). This plan should be available for external review.
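As a rough illustration only, the Python sketch below shows one way an inventory agency might record its QA/QC plan as structured data, so that scheduled activities can be sorted and tracked through the inventory cycle. The source categories, responsibilities and months are invented assumptions, not values prescribed by this guidance.

from dataclasses import dataclass

@dataclass
class PlannedActivity:
    source_category: str  # e.g. an IPCC source category (illustrative names below)
    activity: str         # QC check, peer review, audit, ...
    tier: int             # 1 = general QC, 2 = source category-specific QC
    responsible: str      # person or organisation responsible (assumed)
    due_month: int        # month of the inventory cycle (1-12), assumed scheduling unit

qa_qc_plan = [
    PlannedActivity("Energy - fuel combustion", "Tier 1 general QC checks", 1, "Inventory agency", 4),
    PlannedActivity("Industrial processes - nitric acid", "Tier 2 emission data QC", 2, "Sector compiler", 6),
    PlannedActivity("Energy - fuel combustion", "Expert peer review", 1, "External reviewer", 9),
]

# List the plan in schedule order, from initial development to final reporting.
for item in sorted(qa_qc_plan, key=lambda a: a.due_month):
    print(f"Month {item.due_month:2d}: {item.activity} for "
          f"'{item.source_category}' ({item.responsible})")

A structure of this kind also makes it straightforward to produce the outline of processes and review schedule that the plan should contain.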
In developing and implementing the QA/QC plan, it may be useful to refer to the standards and guidelines
published by the International Organization for Standardization (ISO), including the ISO 9000 series (see
Box
8.2). Although ISO 9000 standards are not specifically designed for emissions inventories, they have been
applied by some countries to help organise QA/QC activities.
BOX 8.2
ISO AS A DATA QUALITY MANAGEMENT SYSTEM
The International Organization for Standardization (ISO) series programme provides standards for
data documentation and audits as part of a quality management system. Though the ISO series is
not designed explicitly for emissions data development, many of the principles may be applied to
ensure the production of a quality inventory. Inventory agencies may find these documents useful
source material for developing QA/QC plans for greenhouse gas inventories. Some countries (e.g.
the United Kingdom and the Netherlands) have already applied some elements of the ISO
standards for their inventory development process and data management.
The following standards and guidelines published under the ISO series may supplement source
category-specific QA/QC procedures for inventory development and provide practical guidance
for ensuring data quality and a transparent reporting system.
ISO 9004-1: General quality guidelines to implement a quality system.
ISO 9004-4: Guidelines for implementing continuous quality improvement within the
organisation, using tools and techniques based on data collection and analysis.
ISO 10005: Guidance on how to prepare quality plans for the control of specific projects.
ISO 10011-1: Guidelines for auditing a quality system.
ISO 10011-2: Guidance on the qualification criteria for quality systems auditors.
ISO 10011-3: Guidelines for managing quality system audit programmes.
ISO 10012: Guidelines on calibration systems and statistical controls to ensure that
measurements are made with the intended accuracy.
ISO 10013: Guidelines for developing quality manuals to meet specific needs.
Source: http://www.iso.ch/
8.6 GENERAL QC PROCEDURES (TIER 1)
The focus of general QC techniques is on the processing, handling, documenting, archiving and reporting
procedures that are common to all the inventory source categories. Table 8.1, Tier 1 General Inventory
Level QC
Procedures, lists the general QC checks that the inventory agency should use routinely throughout the
preparation of the annual inventory. Most of the checks shown in Table 8.1 could be performed by cross-
checks,
recalculation, or through visual inspections. The results of these QC activities and procedures should be
documented as set out in Section 8.10.1, Internal Documentation and Archiving, below. If checks are
performed
electronically, these systems should be periodically reviewed to ensure the integrity of the checking
function.
It will not be possible to check all aspects of inventory input data, parameters and calculations every year.
Checks may be performed on selected sets of data and processes, such that identified key source
categories are
considered every year. Checks on other source categories may be conducted less frequently. However, a
sample
of data and calculations from every sector should be included in the QC process each year to ensure that
all
sectors are addressed on an ongoing basis. In establishing criteria and processes for selecting the sample
data sets
and processes, it is good practice for the inventory agency to plan to undertake QC checks on all parts of
the
inventory over an appropriate period of time.
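One way to make that sampling repeatable is sketched below in Python: key source categories are selected every year, while the remaining categories rotate so that each one is checked at least once within a fixed cycle. The category names and the three-year cycle length are illustrative assumptions.

def categories_to_check(year, key_categories, other_categories, cycle_years=3):
    """Select source categories for QC checks in a given inventory year."""
    selected = list(key_categories)  # key source categories are checked every year
    # Rotate deterministically through the remaining categories by year, so
    # that all of them are covered within `cycle_years` years.
    for i, category in enumerate(sorted(other_categories)):
        if i % cycle_years == year % cycle_years:
            selected.append(category)
    return selected

key = ["1A Fuel combustion", "4A Enteric fermentation"]  # assumed key categories
other = ["2B Nitric acid", "2C Aluminium", "4B Manure management", "6A Solid waste"]

for y in (2000, 2001, 2002):
    print(y, categories_to_check(y, key, other))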
TABLE 8.1
TIER 1 GENERAL INVENTORY LEVEL QC PROCEDURES

QC Activity: Check that assumptions and criteria for the selection of activity data and emission factors are documented.
Procedures:
• Cross-check descriptions of activity data and emission factors with information on source categories and ensure that these are properly recorded and archived.

QC Activity: Check for transcription errors in data input and references.
Procedures:
• Confirm that bibliographical data references are properly cited in the internal documentation.
• Cross-check a sample of input data from each source category (either measurements or parameters used in calculations) for transcription errors.

QC Activity: Check that emissions are calculated correctly.
Procedures:
• Reproduce a representative sample of emissions calculations.
• Selectively mimic complex model calculations with abbreviated calculations to judge relative accuracy.

QC Activity: Check that parameter and emission units are correctly recorded and that appropriate conversion factors are used.
Procedures:
• Check that units are properly labelled in calculation sheets.
• Check that units are correctly carried through from beginning to end of calculations.
• Check that conversion factors are correct.
• Check that temporal and spatial adjustment factors are used correctly.

QC Activity: Check the integrity of database files.
Procedures:
• Confirm that the appropriate data processing steps are correctly represented in the database.
• Confirm that data relationships are correctly represented in the database.
• Ensure that data fields are properly labelled and have the correct design specifications.
• Ensure that adequate documentation of database and model structure and operation are archived.

QC Activity: Check for consistency in data between source categories.
Procedures:
• Identify parameters (e.g. activity data, constants) that are common to multiple source categories and confirm that there is consistency in the values used for these parameters in the emissions calculations.

QC Activity: Check that the movement of inventory data among processing steps is correct.
Procedures:
• Check that emissions data are correctly aggregated from lower reporting levels to higher reporting levels when preparing summaries.
• Check that emissions data are correctly transcribed between different intermediate products.

QC Activity: Check that uncertainties in emissions and removals are estimated or calculated correctly.
Procedures:
• Check that qualifications of individuals providing expert judgement for uncertainty estimates are appropriate.
• Check that qualifications, assumptions and expert judgements are recorded.
• Check that calculated uncertainties are complete and calculated correctly.
• If necessary, duplicate error calculations or a small sample of the probability distributions used by Monte Carlo analyses.

QC Activity: Undertake review of internal documentation.
Procedures:
• Check that there is detailed internal documentation to support the estimates and enable duplication of the emission and uncertainty estimates.
• Check that inventory data, supporting data, and inventory records are archived and stored to facilitate detailed review.
• Check integrity of any data archiving arrangements of outside organisations involved in inventory preparation.

QC Activity: Check methodological and data changes resulting in recalculations.
Procedures:
• Check for temporal consistency in time series input data for each source category.
• Check for consistency in the algorithm/method used for calculations throughout the time series.

QC Activity: Undertake completeness checks.
Procedures:
• Confirm that estimates are reported for all source categories and for all years from the appropriate base year to the period of the current inventory.
• Check that known data gaps that result in incomplete source category emissions estimates are documented.

QC Activity: Compare estimates to previous estimates.
Procedures:
• For each source category, current inventory estimates should be compared to previous estimates. If there are significant changes or departures from expected trends, re-check estimates and explain any difference.
The checks in Table 8.1 should be applied irrespective of the type of data used to develop the inventory
estimates and are equally applicable to source categories where default values or national data are used as
the
basis for the estimates.
In some cases, emissions estimates are prepared for the inventory agency by outside consultants or
agencies. The
inventory agency should ensure that the QC checks listed in Table 8.1, Tier 1 General Inventory Level QC
Procedures, are communicated to the consultants/agencies. This will assist in making sure that QC
procedures are
performed and recorded by the consultant or outside agency. The inventory agency should review these
QA/QC
activities. In cases where official national statistics are relied upon – primarily for activity data – QC
procedures
may already have been implemented on these national data. However, it is good practice for the inventory
agency to confirm that national statistical agencies have implemented adequate QC procedures equivalent
to
those in Table 8.1.
Due to the quantity of data that needs to be checked for some source categories, automated checks are
encouraged where possible. For example, one of the most common QC activities involves checking that
data
keyed into a computer database are correct. A QC procedure could be set up to use an automated range
check
(based on the range of expected values of the input data from the original reference) for the input values
as
recorded in the database. A combination of manual and automated checks may constitute the most
effective
procedures in checking large quantities of input data.
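As a concrete illustration, the Python sketch below implements such an automated range check; the parameter names, expected ranges and data rows are invented, and in practice the ranges would come from the original references.

EXPECTED_RANGES = {
    # parameter: (minimum plausible value, maximum plausible value) - assumed figures
    "coal_net_calorific_value_TJ_per_kt": (15.0, 32.0),
    "cattle_population_thousands": (0.0, 500_000.0),
}

def range_check(records):
    """Yield (record_id, parameter, value) for every out-of-range entry."""
    for record_id, parameter, value in records:
        low, high = EXPECTED_RANGES[parameter]
        if not (low <= value <= high):
            yield record_id, parameter, value

data_entries = [
    ("row-001", "coal_net_calorific_value_TJ_per_kt", 26.7),
    ("row-002", "coal_net_calorific_value_TJ_per_kt", 267.0),  # probable keying error
    ("row-003", "cattle_population_thousands", 12_400.0),
]

for flagged in range_check(data_entries):
    print("Flag for manual review:", flagged)

Checks like this catch keying errors automatically, while a manual review of the flagged rows decides whether a value is genuinely wrong.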
8.7 SOURCE CATEGORY-SPECIFIC QC PROCEDURES (TIER 2)
In contrast to general inventory QC techniques, source category-specific QC procedures are directed at
specific
types of data used in the methods for individual source categories and require knowledge of the emission
source
category, the types of data available and the parameters associated with emissions.
It is important to note that Tier 2 source category-specific QC activities are in addition to the general QC
conducted as part of Tier 1 (i.e. include QC checks listed in Table 8.1). The source category-specific
measures
are applied on a case-by-case basis focusing on key source categories (see Chapter 7, Methodological
Choice
and Recalculation) and on source categories where significant methodological and data revisions have
taken
place. It is good practice that inventory agencies applying higher tier methods in compiling national
inventories
utilise Tier 2 QC procedures. Specific applications of source category-specific Tier 2 QC procedures are
provided in the energy, agriculture, industrial processes and waste chapters of this report (Chapters 2 to
5).
Source category-specific QC activities include the following:
• Emission data QC;
• Activity data QC;
• QC of uncertainty estimates.
The first two activities relate to the types of data used to prepare the emissions estimates for a given
source
category. QC of uncertainty estimates covers activities associated with determining uncertainties in
emissions
estimates (for more information on the determination of these uncertainties, see Chapter 6, Quantifying
Uncertainties in Practice).
The actual QC procedures that need to be implemented by the inventory agency will depend on the
method used
to estimate the emissions for a given source category. If estimates are developed by outside agencies, the
inventory agency may, upon review, reference the QC activities of the outside agency as part of the
QA/QC plan.
There is no need to duplicate QC activities if the inventory agency is satisfied that the QC activities
performed
by the outside agency meet the minimum requirements of the QA/QC plan.
8.7.1 Emissions data QC
The following sections describe QC checks on IPCC default factors, country-specific emission factors,
and direct
emission measurements from individual sites (used either as the basis for a site-specific emission factor or
directly for an emissions estimate). Emission comparison procedures are described in Section 8.7.1.4,
Emission
Comparisons. Inventory agencies should take into account the practical considerations discussed in
Section 8.2,
Practical Considerations in Developing QA/QC Systems, when determining what level of QC activities to
undertake.
8.7.1.1 IPCC DEFAULT EMISSION FACTORS
Where IPCC default emission factors are used, it is good practice for the inventory agency to assess the
applicability of these factors to national circumstances. This assessment may include an evaluation of
national
conditions compared to the context of the studies upon which the IPCC default factors were based. If
there is
insufficient information on the context of the IPCC default factors, the inventory agency should take
account of
this in assessing the uncertainty of the national emissions estimates based on the IPCC default emission
factors.
For key source categories, inventory agencies should consider options for obtaining emission factors that
are
known to be representative of national circumstances. The results of this assessment should be
documented.
If possible, IPCC default emission factor checks could be supplemented by comparisons with national site
or
plant-level factors to determine their representativeness relative to actual sources in the country. This
supplementary check is good practice even if data are only available for a small percentage of sites or
plants.
8.7.1.2 COUNTRY-SPECIFIC EMISSION FACTORS
Country-specific emission factors may be developed at a national or other aggregated level within the
country
based on prevailing technology, science, local characteristics and other criteria. These factors are not
necessarily
site-specific, but are used to represent a source category or sub-source category. Two steps are necessary
to
ensure good practice emission factor QC for country-specific factors.
The first is to perform QC checks on the data used to develop the emission factors. The adequacy of the
emission
factors and the QA/QC performed during their development should be assessed. If emission factors were
developed based on site-specific or source-level testing, then the inventory agency should check if the
measurement programme included appropriate QC procedures.
Frequently, country-specific emission factors will be based on secondary data sources, such as published
studies
or other literature.1 In these cases, the inventory agency could attempt to determine whether the QC
activities
conducted during the original preparation of the data are consistent with the applicable QC procedures
outlined
in Table 8.1 and whether any limitations of the secondary data have been identified and documented. The
inventory agency could also attempt to establish whether the secondary data have undergone peer review
and
record the scope of such a review.
If it is determined that the QA/QC associated with the secondary data is adequate, then the inventory
agency can
simply reference the data source for QC documentation and document the applicability of the data for use
in
emissions estimates.
If it is determined that the QA/QC associated with the secondary data is inadequate, then the inventory
agency
should attempt to have QA/QC checks on the secondary data established. It should also reassess the
uncertainty
of any emissions estimates derived from the secondary data. The inventory agency may also reconsider
how the
data are used and whether any alternative data (including IPCC default values) may provide a better estimate of
estimate of
emissions from this source category.
Second, country-specific factors and circumstances should be compared with relevant IPCC default
factors and
the characteristics of the studies on which the default factors are based. The intent of this comparison is to
determine whether country-specific factors are reasonable, given similarities or differences between the
national
source category and the ‘average’ source category represented by the defaults. Large differences between
country-specific factors and default factors should be explained and documented.
A supplementary step is to compare the country-specific factors with site-specific or plant-level factors if
these
are available. For example, if there are emission factors available for a few plants (but not enough to
support a
bottom-up approach) these plant-specific factors could be compared with the aggregated factor used in
the
inventory. This type of comparison provides an indication of both the reasonableness of the country-specific factor and its representativeness.
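A minimal sketch of these comparisons, in Python, is given below; the factor values and the 30% flagging threshold are invented assumptions used only to show the mechanics.

def relative_difference(a, b):
    """Relative difference of a with respect to b."""
    return abs(a - b) / b

country_factor = 9.0              # hypothetical country-specific factor (kg/t product)
ipcc_default = 8.0                # hypothetical IPCC default for the same category
plant_factors = [8.6, 9.3, 10.1]  # hypothetical plant-level factors
THRESHOLD = 0.30                  # assumed cut-off above which differences are flagged

if relative_difference(country_factor, ipcc_default) > THRESHOLD:
    print("Country-specific factor differs substantially from the IPCC default:"
          " explain and document the reasons.")

for i, plant_factor in enumerate(plant_factors, start=1):
    diff = relative_difference(plant_factor, country_factor)
    status = "FLAG" if diff > THRESHOLD else "ok"
    print(f"Plant {i}: factor={plant_factor}, difference vs country factor={diff:.0%} [{status}]")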
8.7.1.3 DIRECT EMISSION MEASUREMENTS
Emissions from a source category may be estimated using direct measurements in the following ways:
• Sample emissions measurements from a facility may be used to develop a representative emission factor
for
that individual site, or for the entire category (i.e. for development of a national level emission factor);
• Continuous emissions monitoring (CEM) data may be used to compile an annual estimate of emissions
for a
particular process. In theory, CEM can provide a complete set of quantified emissions data across the
inventory period for an individual facility process, and does not have to be correlated back to a process
parameter or input variable like an emission factor.
Regardless of how direct measurement data are being used, the inventory agency should review the
processes
and check the measurements as part of the QC activities.
Use of standard measurement methods improves the consistency of resulting data and knowledge of the
statistical properties of the data. If standard reference methods for measuring specific greenhouse gas
emissions
(and removals) are available, inventory agencies should encourage plants to use these. If specific standard
methods are not available, the inventory agency should confirm whether nationally or internationally
recognised
standard methods such as ISO 10012 are used for measurements and whether the measurement equipment
is
calibrated and maintained properly.
For example, ISO has published standards that specify procedures to quantify some of the performance characteristics of all air quality measurement methods such as bias, calibration, instability, lower detection limits, sensitivity, and upper limits of measurement (ISO, 1994). While these standards are not associated with a reference method for a specific greenhouse gas source category, they have direct application to QC activities associated with estimations based on measured emission values.

1 Secondary data sources refer to reference sources for inventory data that are not designed for the express purpose of inventory development. Secondary data sources typically include national statistical databases, scientific literature, and other studies produced by agencies or organisations not associated with the inventory development.
Where direct measurement data from individual sites are in question, discussions with site managers can
be
useful to encourage improvement of the QA/QC practices at the sites. Also, supplementary QC activities
are
encouraged for bottom-up methods based on site-specific emission factors where significant uncertainty
remains
in the estimates. Site-specific factors can be compared between sites and also to IPCC or national level
defaults.
Significant differences between sites or between a particular site and the IPCC defaults should elicit
further
review and checks on calculations. Large differences should be explained and documented.
8.7.1.4 EMISSION COMPARISONS
It is standard QC practice to compare emissions from each source category with emissions previously
provided
from the same source category or against historical trends and reference calculations as described below.
The
objective of these comparisons (often referred to as ‘reality checks’) is to ensure that the emission values are not wildly improbable and that they fall within a range that is considered reasonable. If the estimates seem
unreasonable, emission checks can lead to a re-evaluation of emission factors and activity data before the
inventory process has advanced to its final stages.
The first step of an emissions comparison is a consistency and completeness check using available
historical
inventory data for multiple years. The emission levels of most source categories do not abruptly change
from
year to year, as changes in both activity data and emission factors are generally gradual. In most
circumstances,
the change in emissions will be less than 10% per year. Thus, significant changes in emissions from
previous
years may indicate possible input or calculation errors. After calculating differences, the larger percentage
differences (in any direction) should be flagged, by visual inspection of the list, by visual inspection of
the
graphical presentation of differences (e.g. in a spreadsheet) or by using a dedicated software programme
that
puts flags and rankings in the list of differences.
It is good practice to also check the annual increase or decrease in emission levels in significant sub-source categories of some source categories. Sub-source categories may show greater percentage changes than
the aggregated source categories. For example, total emissions from petrol cars are not likely to change
substantially on an annual basis, but emissions from sub-source categories, such as catalyst-equipped
petrol cars,
may show substantial changes if the market share is not in equilibrium or if the technology is changing
and
rapidly being adopted in the marketplace.
It is good practice to check the emissions estimates for all source categories or sub-source categories that show greater than 10% change in a year compared to the previous year’s inventory. Source categories and sub-source categories should be ranked according to the percentage difference in emissions from the previous year.
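The Python sketch below shows how this check might be automated: the percentage change per source category is computed against the previous year’s inventory, categories are ranked by the absolute difference, and changes above 10% are flagged. All emission figures are invented.

# Emissions by source category (e.g. Gg CO2-equivalent) - illustrative values.
previous = {"1A Energy": 410_000.0, "2B Nitric acid": 8_200.0, "6A Solid waste": 31_000.0}
current  = {"1A Energy": 418_000.0, "2B Nitric acid": 10_900.0, "6A Solid waste": 30_400.0}

changes = []
for category, previous_value in previous.items():
    change = (current[category] - previous_value) / previous_value
    changes.append((category, change))

# Rank by absolute percentage difference, largest first, flagging changes > 10%.
for category, change in sorted(changes, key=lambda c: abs(c[1]), reverse=True):
    flag = "CHECK" if abs(change) > 0.10 else ""
    print(f"{category}: {change:+.1%} {flag}")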
Supplementary emission comparisons can also be performed, if appropriate, including order-of-magnitude
checks and reference calculations.
ORDER-OF-MAGNITUDE CHECKS
Order-of-magnitude checks look for major calculation errors and exclusion of major source categories or sub-source categories. Method-based comparisons may be made depending on whether the emissions for the source
category were determined using a top-down or bottom-up approach. For example, if N2O estimates for
nitric acid
production were determined using a bottom-up approach (i.e. emissions estimates were determined for
each
individual production plant based on plant-specific data), the emissions check would consist of comparing
the
sum of the individual plant-level emissions to a top-down emission estimate based on national nitric acid
production figures and IPCC default Tier 1 factors. If significant differences are found in the comparison,
further
investigation using the source category-specific QC techniques described in Section 8.7, Source
Category-
Specific QC Procedures (Tier 2), would be necessary to answer the following questions:
• Are there inaccuracies associated with any of the individual plant estimates (e.g. an extreme outlier may
be
accounting for an unreasonable quantity of emissions)?
• Are the plant-specific emission factors significantly different from each other?
• Are the plant-specific production rates consistent with published national level production rates?
• Is there any other explanation for a significant difference, such as the effect of controls, the manner in
which
production is reported or possibly undocumented assumptions?
This is an example of how the result of a relatively simple emission check can lead to a more intensive investigation of the representativeness of the emissions data. Knowledge of the source category is required to isolate the parameter that is causing the difference in emissions estimates and to understand the reasons for the difference.
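The Python sketch below illustrates the nitric acid example above: the sum of plant-level estimates is compared with a top-down estimate derived from national production and a default factor. All figures, including the default factor and the 20% tolerance, are invented assumptions.

plant_emissions_kt = {"Plant A": 2.1, "Plant B": 3.4, "Plant C": 2.8}  # kt N2O, assumed
national_production_kt = 950.0   # kt nitric acid, hypothetical national statistic
default_factor_kg_per_t = 9.0    # hypothetical default N2O emission factor (kg/t)

bottom_up = sum(plant_emissions_kt.values())                           # kt N2O
top_down = national_production_kt * default_factor_kg_per_t / 1000.0   # kt N2O

difference = abs(bottom_up - top_down) / top_down
print(f"Bottom-up: {bottom_up:.2f} kt, top-down: {top_down:.2f} kt, "
      f"difference: {difference:.0%}")
if difference > 0.20:  # assumed tolerance before further investigation
    print("Significant difference: apply the Tier 2 source category-specific QC checks.")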
REFERENCE CALCULATIONS
Another emission comparison may be used for source categories that rely on empirical formulas for the
calculation of emissions. Where such formulas are used, final calculated emission levels should follow
stoichiometric ratios and conserve energy and mass. In a number of cases where emissions are calculated
as the
sum of sectoral activities based on the consumption of a specific commodity (e.g. fuels or products like
HFCs,
PFCs or SF6), the emissions could alternatively be estimated using apparent consumption figures:
national total
production + import – export ± stock changes. For CO2 from fossil fuel combustion, a reference
calculation
based on apparent fuel consumption per fuel type is mandatory according to the IPCC Guidelines.
Another
example is estimating emissions from manure management. The total quantity of methane produced
should not
exceed the quantity that could be expected based on the carbon content of the volatile solids in the
manure.
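A worked illustration of such a reference calculation, in Python, is shown below for a hypothetical fluorinated gas; every quantity is invented.

# Apparent consumption: production + imports - exports +/- stock changes (kt).
production = 12.0
imports = 4.5
exports = 3.0
stock_increase = 0.8   # net addition to stocks during the year

apparent_consumption = production + imports - exports - stock_increase

# Bottom-up total: the sum of sectoral consumption estimates (kt), assumed values.
sectoral_consumption = {"refrigeration": 7.9, "foams": 3.2, "aerosols": 1.4}
bottom_up_total = sum(sectoral_consumption.values())

discrepancy = bottom_up_total - apparent_consumption
print(f"Apparent consumption: {apparent_consumption:.1f} kt")
print(f"Sum of sectoral activities: {bottom_up_total:.1f} kt")
print(f"Discrepancy: {discrepancy:+.1f} kt")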
Discrepancies between inventory data and reference calculations do not necessarily imply that the
inventory data
are in error. It is important to consider that there may be large uncertainties associated with the reference
calculations themselves when analysing discrepancies.
8.7.2 Activity data QC
The estimation methods for many source categories rely on the use of activity data and associated input
variables
that are not directly prepared by the inventory agency. Activity data is normally collated at a national
level using
secondary data sources or from site-specific data prepared by site or plant personnel from their own
measurements. Inventory agencies should take into account the practical considerations discussed above
when
determining the level of QC activities to undertake.
8.7.2.1 NATIONAL LEVEL ACTIVITY DATA
Where national activity data from secondary data sources are used in the inventory, it is good practice for
the
inventory agency or its designees to evaluate and document the associated QA/QC activities. This is
particularly
important with regard to activity data, since most activity data are originally prepared for purposes other
than as
input to estimates of greenhouse gas emissions. Though not always readily available, many statistical
organisations, for example, have their own procedures for assessing the quality of the data independently
of what
the end use of the data may be. If it is determined that these procedures satisfy minimum activities listed
in the
QA/QC plan, the inventory agency can simply reference the QA/QC activities conducted by the statistical
organisation.
It is good practice for the inventory agency to determine if the level of QC associated with secondary
activity
data includes those QC procedures listed in Table 8.1. In addition, the inventory agency may establish
whether
the secondary data have been peer reviewed and record the scope of this review. If it is determined that
the
QA/QC associated with the secondary data is adequate, then the inventory agency can simply reference
the data
source and document the applicability of the data for use in its emissions estimates.
If it is determined that the QC associated with the secondary data is inadequate, then the inventory agency
should
attempt to have QA/QC checks on the secondary data established. It should also reassess the uncertainty
of
emissions estimates in light of the findings from its assessment of the QA/QC associated with secondary
data.
The inventory agency should also reconsider how the data are used and whether any alternative data,
including
IPCC default values and international data sets, may provide for a better estimate of emissions. If no
alternative
data sources are available, the inventory agency should document the inadequacies associated with the
secondary
data QC as part of its summary report on QA/QC (see Section 8.10.2, Reporting, for reporting guidance).
For example, in the transportation category, countries typically use either fuel usage or kilometer (km)
statistics
to develop emissions estimates. The national statistics on fuel usage and kms travelled by vehicles are
usually
prepared by a different agency from the inventory agency. However, it is the responsibility of the
inventory
agency to determine what QA/QC activities were implemented by the agency that prepared the original
fuel
usage and km statistics for vehicles. Questions that may be asked in this context are:
• Does the statistical agency have a QA/QC plan that covers the preparation of the data?
• What sampling protocol was used to estimate fuel usage or kms travelled?
• How recently was the sampling protocol reviewed?
• Has any potential bias in the data been identified by the statistical agency?
• Has the statistical agency identified and documented uncertainties in the data?
• Has the statistical agency identified and documented errors in the data?
National level activity data should be compared with the previous year’s data for the source category being
evaluated. Activity data for most source categories tend to exhibit relatively consistent changes from year
to year
without sharp increases or decreases. If the national activity data for any year diverge greatly from the
historical
trend, the activity data should be checked for errors. If the general mathematical checks do not reveal
errors, the
characteristics of the source category could be investigated and any change identified and documented.
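One simple way to operationalise this trend check is sketched below in Python: a linear trend is fitted to earlier years and the newly reported value is flagged if it deviates from the extrapolation by more than a tolerance. The data series, the linear-trend approach and the 15% tolerance are all invented assumptions, since the appropriate test depends on the source category.

def linear_trend(xs, ys):
    """Least-squares slope and intercept for a short in-memory series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

years = [1995, 1996, 1997, 1998, 1999]
activity = [100.0, 103.0, 106.5, 109.0, 112.0]   # e.g. PJ of fuel use, assumed values

slope, intercept = linear_trend(years, activity)
expected_2000 = slope * 2000 + intercept
reported_2000 = 134.0                            # hypothetical newly reported value

deviation = abs(reported_2000 - expected_2000) / expected_2000
print(f"Expected about {expected_2000:.1f}, reported {reported_2000}, "
      f"deviation {deviation:.0%}")
if deviation > 0.15:  # assumed tolerance for diverging greatly from the trend
    print("Check the activity data for errors; if none, document the real change.")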
Where possible, a comparison check of activity data from multiple reference sources should be
undertaken. This
is important for source categories that have a high level of uncertainty associated with their estimates. For
example, many of the agricultural source-categories rely on government statistics for activity data such as
livestock populations, areas under cultivation, and the extent of prescribed burning. Similar statistics may
be
prepared by industry, universities, or other organisations and can be used to compare with standard
reference
sources. As part of the QC check, the inventory agency should ascertain whether independent data have
been
used to derive alternative activity data sets. In some cases, the same data are treated differently by
different
agencies to meet varying needs. Comparisons may need to be made at a regional level or with a subset of
the
national data since many alternative references for such activity data have limited scope and do not cover
the
entire nation.
8.7.2.2 SITE-SPECIFIC ACTIVITY DATA
Some methods rely on the use of site-specific activity data used in conjunction with IPCC default or country-specific emission factors. Site or plant personnel typically prepare these estimates of activity, often for purposes
other than as inputs to emissions inventories. QC checks should focus on inconsistencies between sites to
establish whether these reflect errors, different measurement techniques, or real differences in emissions,
operating conditions or technology.
A variety of QC checks can be used to identify errors in site-level activity data. The inventory agency
should
establish whether recognised national or international standards were used in measuring activity data at
the
individual sites. If measurements were made according to recognised national or international standards
and a
QA/QC process is in place, the inventory agency should satisfy itself that the QA/QC process at the site is
acceptable under the inventory QA/QC plan and at least includes Tier 1 activities. Acceptable QC
procedures in
use at the site may be directly referenced. If the measurements were not made using standard methods and
QA/QC is not of an acceptable standard, then the use of these activity data should be carefully evaluated,
uncertainty estimates reconsidered, and qualifications documented.
Comparisons of activity data from different reference sources may also be used to expand the activity data
QC.
For example, in estimating PFC emissions from primary aluminium smelting, many inventory agencies
use
smelter-specific activity data to prepare the inventory estimates. A QC check of the aggregated activity
data from
all aluminium smelters can be made against national production statistics for the industry. Also,
production data
can be compared across different sites, possibly with adjustments made for plant capacities, to evaluate
the
reasonableness of the production data. Similar comparisons of activity data can be made for other
manufacturing-based source categories where there are published data on national production. If outliers
are
identified, they should be investigated to determine if the difference can be explained by the unique
characteristics of the site or there is an error in the reported activity.
Site-specific activity data checks may also be applied to methods based on product usage. For example,
one
method for estimating SF6 emissions from use in electrical equipment relies on an account balance of gas
purchases, gas sales for recycling, the amount of gas stored on site (outside of equipment), handling
losses,
refills for maintenance, and the total holding capacity of the equipment system. This account balance
system
should be used at each facility where the equipment is in place. A QC check of overall national activity
could be
made by performing the same kind of account balancing procedure on a national basis. This national
account
balancing would consider national sales of SF6 for use in electrical equipment, the nation-wide increase
in the
total handling capacity of the equipment (that may be obtained from equipment manufacturers), and the
quantity
of SF6 destroyed in the country. The results of the bottom-up and top-down account balancing analyses
should
agree or large differences should be explained. Similar accounting techniques can be used as QC checks
on other
categories based on gas usage (e.g. substitutes for ozone-depleting substances) to check consumption and
emissions.
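The Python sketch below shows a simplified version of that national account balance; it keeps only three of the terms named above (sales into equipment, the increase in total holding capacity, and destruction), and all quantities are invented, so it is a sketch of the idea rather than a complete balance.

# National-level account balance for SF6 in electrical equipment (tonnes), assumed values.
national_sales_to_equipment = 120.0   # SF6 sold for use in electrical equipment
capacity_increase = 95.0              # nation-wide increase in equipment holding capacity
destroyed = 10.0                      # SF6 destroyed in the country

# Gas sold but neither banked in new capacity nor destroyed is presumed emitted
# (handling losses, leaks, maintenance releases) in this simplified balance.
top_down_emissions = national_sales_to_equipment - capacity_increase - destroyed

facility_balances = [3.1, 4.4, 2.9, 3.8]  # per-facility emission balances (t), assumed
bottom_up_emissions = sum(facility_balances)

print(f"Top-down estimate: {top_down_emissions:.1f} t")
print(f"Bottom-up estimate: {bottom_up_emissions:.1f} t")
if abs(top_down_emissions - bottom_up_emissions) / top_down_emissions > 0.20:
    print("Large difference: investigate and document the explanation.")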
8.7.3 QC of uncertainty estimates
QC should also be undertaken on calculations or estimates of uncertainty associated with emissions
estimates.
Good practice for estimating inventory uncertainties is described in Chapter 6, Quantifying Uncertainties
in
Practice, and relies on calculations of uncertainty at the source category level that are then combined to
summary
levels for the entire inventory. Some of the methods rely on the use of measured data associated with the
emission factors or activity data to develop probability density functions from which uncertainty
estimates can
be made. In the absence of measured data, many uncertainty estimates will rely on expert judgement.
It is good practice for QC procedures to be applied to the uncertainty estimations to confirm that
calculations are
correct and that there is sufficient documentation to duplicate them. The assumptions on which
uncertainty
estimations have been based should be documented for each source category. Calculations of source category-specific and aggregated uncertainty estimates should be checked and any errors addressed. For uncertainty
estimates involving expert judgement, the qualifications of experts should also be checked and
documented, as
should the process of eliciting expert judgement, including information on the data considered, literature
references, assumptions made and scenarios considered. Chapter 6 contains advice on how to document
expert
judgements on uncertainties.
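As an illustration of re-checking an aggregated figure, the Python sketch below recomputes a combined percentage uncertainty using a simple error-propagation rule of the kind described in Chapter 6 (the emission-weighted quadrature sum of source category uncertainties) and compares it with the value reported in the inventory. The emissions, uncertainties and reported value are all invented.

from math import sqrt

# (annual emissions, fractional uncertainty) per source category - assumed values.
source_categories = {
    "1A Energy": (410_000.0, 0.05),
    "2B Nitric acid": (10_900.0, 0.30),
    "4A Enteric fermentation": (95_000.0, 0.40),
}

total_emissions = sum(e for e, _ in source_categories.values())
combined = (sqrt(sum((e * u) ** 2 for e, u in source_categories.values()))
            / total_emissions)

reported_combined_uncertainty = 0.084   # hypothetical value from the compiled inventory
print(f"Recalculated combined uncertainty: {combined:.3f}")
if abs(combined - reported_combined_uncertainty) > 0.005:  # assumed agreement tolerance
    print("Mismatch with the reported value: check the uncertainty calculations.")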
8.8 QA PROCEDURES
Good practice for QA procedures requires an objective review to assess the quality of the inventory, and
also to
identify areas where improvements could be made. The inventory may be reviewed as a whole or in parts.
QA
procedures are utilised in addition to the Tier 1 and Tier 2 QC. The objective in QA implementation is to
involve
reviewers that can conduct an unbiased review of the inventory. It is good practice to use QA reviewers
that
have not been involved in preparing the inventory. Preferably these reviewers would be independent
experts
from other agencies or a national or international expert or group not closely connected with national
inventory
compilation. Where third party reviewers outside the inventory agency are not available, staff from
another part
of the inventory agency not involved in the portion of the inventory being reviewed can also fulfil QA
roles.
It is good practice for inventory agencies to conduct a basic expert peer review (Tier 1 QA) prior to
inventory
submission in order to identify potential problems and make corrections where possible. It is also good
practice
to apply this review to all source categories in the inventory. However, this will not always be practical
due to
timing and resource constraints. Key source categories should be given priority as well as source
categories
where significant changes in methods or data have been made. Inventory agencies may also choose to
perform
more extensive peer reviews or audits or both as additional (Tier 2) QA procedures within the available
resources.
More specific information on QA procedures related to individual source categories is provided in the
source
category-specific QA/QC sections in Chapters 2 to 5.
EXPERT PEER REVIEW
Expert peer review consists of a review of calculations or assumptions by experts in relevant technical
fields.
This procedure is generally accomplished by reviewing the documentation associated with the methods
and
results, but usually does not include rigorous certification of data or references such as might be
undertaken in an
audit. The objective of the expert peer review is to ensure that the inventory’s results, assumptions, and
methods
are reasonable as judged by those knowledgeable in the specific field. Expert review processes may
involve
technical experts and, where a country has formal stakeholder and public review mechanisms in place,
these
reviews can supplement but not replace expert peer review.
There are no standard tools or mechanisms for expert peer review, and its use should be considered on a case-by-case basis. If there is a high level of uncertainty associated with an emission estimate for a source category,
expert peer review may provide information to improve the estimate, or at least to better quantify the
uncertainty.
Expert reviews may be conducted on all parts of a source category. For example, if the activity data
estimates
from oil and natural gas production are to be reviewed but not the emission factors, experts in the oil and
gas
industry could be involved in the review to provide industry expertise even if they do not have direct
experience
in greenhouse gas emissions estimation. Effective peer reviews often involve identifying and contacting
key
industrial trade organisations associated with specific source categories. It is preferable for this expert
input to be
sought early in the inventory development process so that the experts can participate from the start. It is
good
practice to involve relevant experts in development and review of methods and data acquisition.
The results of expert peer review, and the response of the inventory agency to those findings, may be
important
to widespread acceptance of the final inventory. All expert peer reviews should be well documented,
preferably
in a report or checklist format that shows the findings and recommendations for improvement.
AUDITS
For the purpose of good practice in inventory preparation, audits may be used to evaluate how effectively
the
inventory agency complies with the minimum QC specifications outlined in the QC plan. It is important
that the
auditor be independent of the inventory agency as much as possible so as to be able to provide an
objective
assessment of the processes and data evaluated. Audits may be conducted during the preparation of an
inventory,
following inventory preparation, or on a previous inventory. Audits are especially useful when new
emission
estimation methods are adopted, or when there are substantial changes to existing methods. It is desirable
for the
inventory agency to develop a schedule of audits at strategic points in the inventory development. For
example,
audits related to initial data collection, measurement work, transcription, calculation and documentation
may be
conducted. Audits can be used to verify that the QC steps identified in Table 8.1 have been implemented
and that
source category-specific QC procedures have been implemented according to the QC plan.
8.9 VERIFICATION OF EMISSIONS DATA
Options for inventory verification processes are described in Annex 2, Verification. Verification
techniques can
be applied during inventory development as well as after the inventory is compiled.
Comparisons with other independently compiled, national emissions data (if available) are a quick option
to
evaluate completeness, approximate emission levels and correct source category allocations. These
comparisons
can be made for different greenhouse gases at national, sectoral, source category, and sub-source category
levels,
insofar as differences in definitions allow.
Although the inventory agency is ultimately responsible for the compilation and submission of the
national
greenhouse gas inventory, other independent publications on this subject may be available (e.g. from
scientific
literature or other institutes or agencies). These documents may provide the means for comparisons with
other
national estimates.
The verification process can help evaluate the uncertainty in emissions estimates, taking into account the
quality
and context of both the original inventory data and data used for verification purposes. Where verification
techniques are used, they should be reflected in the QA/QC plan. Improvements resulting from
verification
should be documented, as should detailed results of the verification process.
8.10 DOCUMENTATION, ARCHIVING AND REPORTING
8.10.1 Internal documentation and archiving
As part of general QC procedures, it is good practice to document and archive all information required to
produce the national emissions inventory estimates. This includes:
• Assumptions and criteria for selection of activity data and emission factors;
• Emission factors used, including references to the IPCC document for default factors or to published
references or other documentation for emission factors used in higher tier methods;
• Activity data or sufficient information to enable activity data to be traced to the referenced source;
• Information on the uncertainty associated with activity data and emission factors;
• Rationale for choice of methods;
• Methods used, including those used to estimate uncertainty;
• Changes in data inputs or methods from previous years;
• Identification of individuals providing expert judgement for uncertainty estimates and their
qualifications to
do so;
• Details of electronic databases or software used in production of the inventory, including versions,
operating
manuals, hardware requirements and any other information required to enable their later use;
• Worksheets and interim calculations for source category estimates and aggregated estimates and any
recalculations
of previous estimates;
• Final inventory report and any analysis of trends from previous years;
• QA/QC plans and outcomes of QA/QC procedures.
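To show how these items might be held together in practice, the Python sketch below records them for one source category estimate as a single structured record that can be serialised and archived; all field contents are invented examples, not prescribed fields.

import json

archive_record = {
    "source_category": "2B2 Nitric acid production",      # illustrative category
    "inventory_year": 1999,
    "method": "Tier 2, plant-level emission factors",
    "method_rationale": "Key source category; plant data available",
    "emission_factors": {
        "description": "plant-specific factors, see archived worksheets",
        "reference": "Revised 1996 IPCC Guidelines plus plant measurement reports",
    },
    "activity_data": {
        "source": "national production statistics (hypothetical reference)",
        "uncertainty": "+/-5%, as assessed by the statistical agency",
    },
    "changes_from_previous_year": "two plants installed abatement equipment",
    "expert_judgements": [
        {"name": "N.N.", "qualification": "process engineer", "topic": "uncertainty range"},
    ],
    "software": {"tool": "spreadsheet model v3.2", "archive_path": "archive/2B2/1999/"},
    "qa_qc_outcomes": "Tier 1 checks completed; peer review comments addressed",
}

# Serialise the record (e.g. to JSON) for archiving alongside worksheets and reports.
print(json.dumps(archive_record, indent=2))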
It is good practice for inventory agencies to maintain this documentation for every annual inventory
produced
and to provide it for review. It is good practice to maintain and archive this documentation in such a way
that
every inventory estimate can be fully documented and reproduced if necessary. Inventory agencies should
ensure
that records are unambiguous; for example, a reference to ‘IPCC default factor’ is not sufficient. A full
reference
to the particular document (e.g. Revised 1996 IPCC Guidelines for National Greenhouse Gas Inventories)
is
necessary in order to identify the source of the emission factor because there may have been several
updates of
default factors as new information has become available.
Records of QA/QC procedures are important information to enable continuous improvement to inventory
estimates. It is good practice for records of QA/QC activities to include the checks/audits/reviews that
were
performed, when they were performed, who performed them, and corrections and modifications to the
inventory
resulting from the QA/QC activity.
8.10.2 Reporting
It is good practice to report a summary of implemented QA/QC activities and key findings as a
supplement to
each country’s national inventory. However, it is not practical or necessary to report all the internal
documentation that is retained by the inventory agency. The summary should describe which activities
were
performed internally and what external reviews were conducted for each source category and on the
entire inventory in accordance with the QA/QC plan. The key findings should describe major issues
regarding quality
of input data, methods, processing, or archiving and show how they were addressed or are planned to be addressed in the future.
