Published online 17 February 2015 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/qre.1769
1. Introduction
In this paper, we deal with the strategic part of the maintenance management definition and process (as in EN 13306:2010¹), which is related to the determination of maintenance objectives or priorities and the determination of strategies. A second, more operational part of the definition refers to strategy implementation (maintenance planning, maintenance control, supervision, and continuous improvement).
Part of this strategy-setting process that we refer to is devoted to the determination of the maintenance strategies that will be
followed for the different types of engineering assets (i.e., specific physical assets such as production processes, manufacturing
facilities, plants, infrastructure, and support systems). In fact, maintenance management can also be considered as ‘…the
management of all assets owned by a company, based on maximizing the return on investment in the asset’,2 as well as their safety
and their respect for the environment. Within this context, criticality analysis is a process providing a systematic basis for deciding
what assets should have priority within a maintenance management program3 and has become a clear business need in order to
maximize availability during assets’ operational phase.
This type of asset criticality analysis, performed during the operational phase and for maintenance purposes, is therefore different
from the criticality analysis carried out during asset design. At that point, the objective (more closely linked to asset reliability assessment)
is to identify critical areas so that different design alternatives to achieve a specified availability target can be optimized and
compared.4 Descriptions of specific techniques for criticality analysis during design can be found in MIL-STD-1629A,5 as well as in the works
of Palaez and Bowles6 and Pillay and Wang.7 These techniques use, for instance, the Risk Priority Number (RPN) method, fuzzy logic, or
approximate reasoning to prioritize failure modes (not assets).
When prioritizing assets for maintenance purposes and during their operational phase, a large number of quantitative and
qualitative techniques can be found in the literature.3 On some occasions, there are no hard data about historical failure rates, but
the maintenance organization may require a certain gross assessment of asset priority to be carried out. In these cases, qualitative
methods may be used, and an initial asset assessment, as a way to start building maintenance operations effectiveness, may be
obtained.8
Escuela Técnica Superior de Ingenieros de la Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville, Spain
*Correspondence to: Adolfo Crespo Márquez, Escuela Técnica Superior de Ingenieros de la Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville, Spain.
†E-mail: adolfo@esi.us.es
Copyright © 2015 John Wiley & Sons, Ltd. Qual. Reliab. Engng. Int. 2016, 32 519–533
A. CRESPO MÁRQUEZ ET AL.
This paper, however, is about the process to follow on other, increasingly frequent occasions, when the maintenance organization has substantial amounts of data for complex in-service assets for which a certain maintenance strategy has previously been developed and implemented. We therefore have evidence of asset behavior under the asset's current operational conditions, and we launch the criticality analysis with the purpose of adjusting asset maintenance strategies to business needs over time.
Most current quantitative techniques for asset criticality analysis use weighted scoring methods defined as a variation of the RPN method used in design.9 These weighted scoring methods might appear simple, but in order to reach acceptable results, a precise procedure should be followed when determining factors and scores and when combining them through processes or algorithms.4 The analysis involves another important issue, the level of detail required, which trades off objective effectiveness (the risk of missing the focus of subsequent maintenance efforts) against data collection effort.
The criticality number to obtain (C), as a measure of the risk associated with an asset, is derived by attaching a numerical value to the probability of failure (function loss) of the asset (the higher the probability, the higher the value) and attaching another value to the severity of the different categories of asset functional loss consequences (the more serious the consequences for each category, the higher the value). The criteria and the relative weighting used to assess severity and probability may vary widely across companies according to their maintenance objectives and key performance indicators. The two numbers are multiplied to give a third, the criticality number (C). Of course, assets with a higher C will be recognized as the more critical assets and will deserve special attention from the maintenance (sometimes now called asset) management organization.
The reader may notice how the ‘detectability’ factor, used as part of the RPN equation in design, is not considered in C during operations. This is because detectability is not an attribute of the assets we are ranking, but of the failure modes for which design alternatives were explored.10
Inspection and maintenance activities will be prioritized on the basis of the quantified risk caused by failure of the assets.11 High-risk assets are usually inspected and maintained with greater frequency and thoroughness in order to meet tolerable risk criteria.12
Although this technique is becoming popular, some authors note11 that most risk analysis approaches are deficient in uncertainty and sensitivity analysis. This may prevent proper results from being obtained, and decisions based on misleading results may generate nonessential maintenance efforts spent in less important areas. To avoid this, risk analysis for asset criticality should be carried out in a well-planned manner, ensuring that significant sources of risk are reduced or eliminated.13
In this paper, we propose a criticality analysis by taking into account the following process design requirements:
1. The process must be applicable to a large number of in-service systems within a plant or plants. A reason for this is that the preventive maintenance (PM) programs to be evaluated using this criticality analysis are set per plant equipment item (placed in a technical location in the plant), and we are therefore forced to deal with this large number of items in the analysis.
2. The scope of the analysis should be, as mentioned in the previous point, the same for which the current PM program is
developed and implemented.
3. The analysis should support regular changes in the scale adopted for the severity effects of the functional losses of the assets
(this is a must to align maintenance strategy in dynamic business environments).
4. The process must allow easy identification of new maintenance needs for assets facing new operating conditions.
5. General guidelines for designing possible maintenance strategies to apply to different types of assets according to the results of the analysis (criticality and its sources) should be provided.
6. Connection with the enterprise asset management system of the company should be possible to automatically reproduce the
analysis, with a certain cadence, over time.
7. The process should be tested in industry, showing good practical results.
The paper is organized as follows. Section 2 first briefly describes the proposed criticality analysis process. Then, in its subsections, the notation of the mathematical model supporting the process is introduced, together with every step of the process to follow, including model equations. Throughout this section, a practical example illustrates the process and model implementation. Section 3 turns the previous process and model into a practical management tool: it explains how to handle data requirements properly and how to benefit from model outputs and results using suitable graphical representations. Section 4 is devoted to the interpretation of possible results, offering clear guidelines to ensure maximum benefit from the analysis. The conclusions of the paper are presented in Section 5.
in Table I).
In the model's mathematical formulation, if af_z is the average frequency of functional loss for frequency level z, then the frequency factor vector is defined as follows:

ff_z = af_z / af_1,   for z = 1, …, l levels of functional loss frequency
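As a minimal sketch, this vector can be computed directly; the average-frequency values af_z below are hypothetical, chosen so that the resulting factors match those appearing later in the paper's example (1, 3.3, 5.4, 6.7):

```python
# Frequency factor vector ff_z = af_z / af_1.
# The af_z values are hypothetical (chosen to reproduce the example's factors).
af = [0.5, 1.65, 2.7, 3.35]  # average functional-loss frequency per level z = 1..4

ff = [af_z / af[0] for af_z in af]
print(ff)  # [1.0, 3.3, 5.4, 6.7]; ff_1 = 1 by construction
```

By construction the lowest frequency level always receives a factor of 1, so the other factors express how many times more frequent each level is.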
2.2. Criteria and criteria effect levels to assess functional loss severity
This part of the analysis should reflect the business drivers recognized by management and shareholders.4 For the severity
classification, this study focuses the attention on both safety and cost criteria (similarly to Arunraj and Maiti11). For the safety severity
categories, similar hazard severity categories to the ones used in MIL-STD-882C are adopted. This standard proposes four effect
categories that can now be reframed as follows:
• catastrophic, which could result in multiple fatalities;
• critical, resulting in personal injuries or even one single fatality;
• marginal; and
• negligible.
As cost factors, we may select different criteria for which the functional loss effect can be classified in different levels that can, at
the same time, be converted into cost using a certain contract or standard that the company must honor.
In the example for this paper, the following criteria are selected (assuming that we are dealing with the criticality of a collective passenger transportation fleet):
• Operational reliability: measuring the potential impact of a functional loss to the system where the asset is installed. The effects
could be classified into different levels like having no effect, stopping the system less than x minutes (S < x), stopping the system
more than x minutes (S > x), or leaving the system out of order. Each of these effect levels can be later translated into cost of the
functional loss, and the corresponding factors could be obtained.
• Comfort: evaluating whether the functional loss of the element may have no effect on comfort or may affect a passenger, a car, or the whole train. Again, each of these effect levels can later be translated into a cost of the functional loss, and the corresponding factors can be obtained.
• The ‘corrective maintenance cost’ could be selected as another cost-related criterion. Effects could be classified into very high,
high, medium, and low corrective maintenance cost, and we could proceed similarly to what has been presented in Table I,
classifying the elements’ costs and finding average costs and the corresponding factors for each effect classification level.
Table II. Table presenting nonadmissible effect categories (represented in shaded cells)
NA, having no effect; OO, leaving the system out of order; S < x, stopping the system less than x minutes; S > x, stopping the system more than x minutes.
and [M_i] = [3, 3, 4, 4] as the maximum levels of admissible effects for each criterion; finally, MS = 100 will be considered.
where v_ij = e_ij / e_ik, with k = M_i and j ≤ M_i, and v_ij = 1 for j = M_i and ∀i, where e_ij is the effect j of severity criterion i and v_ij is the fractional value of effect j for severity criterion i.
In the example we are following, the effects matrix is included (last four rows) in Table III, where relative values for the different effects for each criterion are presented. Units for these relative values are based on cost (for i = 2, 3, 4) or on a dimensionless rule
of proportionality of the effect (i = 1). The interpretation of Table III is as follows: a comfort functional loss, for instance, presented in
Table II as having an effect on the entire train, may have a potential cost of $4500; or, as another example, an admissible effect on
operational reliability, stopping the system more than x minutes (S > x), may cause a potential cost of $10 000.
At this point, it is important to understand that, for a given functional loss, these are maximum possible effects per criterion, but
not actual observed effects (later, real observed effects of functional losses will be considered in the analysis, which are in fact
conditional probabilities to reach a certain effect once a functional loss takes place). In the example that is presented, the
corresponding effects severity matrix (according to Equation (1) and for MS = 100) is included in the last four rows of Table IV.
The interpretation of Table IV is as follows: a comfort functional loss impacting the whole train (maximum effect) will count for 20 points of severity (out of 100), while a comfort functional loss impacting only one car will count for 13.3 points of severity (calculated proportionally to the functional loss potential cost values). Notice how a nonadmissible effect of a functional loss will count for 100 (the maximum value) regardless of the effects in any other criteria.
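The cost-proportional severity scoring just described can be sketched as follows; the $4500 whole-train figure is from the text, while the other cost levels and the 20-point comfort weight are assumptions for illustration:

```python
MS = 100  # maximum severity, assigned to any nonadmissible effect

# Potential costs per comfort effect level (no effect, passenger, car, train).
# Only the $4500 whole-train figure is from the text; the rest are assumed.
costs = [0.0, 500.0, 3000.0, 4500.0]
w_comfort = 20.0  # severity points of the worst admissible comfort effect (assumed)

# Severity points grow proportionally to the potential cost of the effect.
S_comfort = [round(w_comfort * c / costs[-1], 1) for c in costs]
print(S_comfort)  # [0.0, 2.2, 13.3, 20.0]
```

Note how the one-car level scores 13.3 points, matching the proportional value quoted in the text; a nonadmissible effect would bypass this proportional rule and receive MS directly.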
[fe_bz] = [0, 0, 1, 0]
The frequency factor to apply to this element would be the result of the following scalar product:
f_r = Σ_{z=1}^{l} ff_z · fe_rz    (2)
In our example, this yields 5.4, and therefore 5.4 would be the frequency factor to consider for the element when finally calculating its criticality.
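The scalar product of Equation (2) is straightforward to reproduce; a sketch using the frequency factor vector assumed from the example and the element's frequency-level indicator vector:

```python
# Frequency factor f_r from Equation (2): f_r = sum over z of ff_z * fe_rz.
ff = [1.0, 3.3, 5.4, 6.7]  # frequency factor vector (assumed from the example)
fe = [0, 0, 1, 0]          # [fe_z]: the element's observed frequency falls in level 3

f_r = sum(f * e for f, e in zip(ff, fe))
print(f_r)  # 5.4
```

Because the indicator vector has a single 1, the scalar product simply selects the factor of the element's frequency level.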
Assume, as an example, that for the effects severity matrix expressed in Table IV, the criticality analysis review team
retrieves potential effects of a functional loss of an element r; this could be represented with the following potential effects
matrix:
[pe_rij] =
[1  0  0  0]
[0  1  1  0]
[0  0  0  1]
[0  0  0  0]
Then, we can model the severity of the functional loss of element r as follows:
S_r = Min( MS, Σ_{i=1}^{n} Σ_{j=1}^{m} pe_rij · S_ij )    (3)
This, in fact, represents a weighted-average type of algorithm, where the weights are introduced through the values of the different criteria effects, as calculated in Equation (1). In this way, consistency in the severity calculation of one element with respect to another is ensured. Experience shows that, by giving maximum severity to nonadmissible effects, as in our previous example, the different roles of the actors represented in the review team are safeguarded (e.g., the safety department people in the review team of our example), discussions in the meetings are reduced, and consensus is more easily reached.
Notice how, when data integrity for the frequency and functional loss effects of the elements under analysis is good, the review team can and must concentrate its efforts on the selection of the severity criteria and on establishing proper weights according to business needs.
C_r = f_r · S_r    (4)
C_b = 1 × 100 = 100
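Equations (3) and (4) can be sketched together; the severity matrix S_ij below is hypothetical (Table IV is not reproduced here), while the potential effects matrix is the one from the example:

```python
MS = 100

# Hypothetical severity matrix S_ij (rows: criteria i; columns: effect levels j).
# A nonadmissible effect would carry the maximum value MS.
S = [
    [0.0,  5.0, 15.0, 100.0],  # safety (level 4 nonadmissible -> MS)
    [0.0, 10.0, 19.5,  30.0],  # operational reliability
    [0.0,  6.7, 13.3,  20.0],  # comfort
    [1.2,  3.0,  6.3,  12.0],  # corrective maintenance cost
]
pe = [  # potential effects matrix [pe_rij] from the example
    [1, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]

# Equation (3): potential severity, capped at MS.
S_r = min(MS, sum(pe[i][j] * S[i][j] for i in range(4) for j in range(4)))
# Equation (4): criticality = frequency factor times severity.
f_r = 1.0  # frequency factor of the element (assumed)
C_r = f_r * S_r
print(S_r, C_r)  # 49.5 49.5
```

With these hypothetical severities no nonadmissible effect is triggered, so the cap at MS is not reached; a 1 in a nonadmissible cell of pe would push S_r to 100 immediately, as in the C_b = 100 example.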
re_rij = current probability of effect j of criterion i for the functional loss of element r, with

Σ_{j=1}^{m} re_rij = 1
Assume, as an example, that for the effects severity matrix expressed in Table IV, the criticality analysis review team could retrieve
data concerning real element functional loss effects for asset r; this could be represented with the following real effects matrix:
[re_rij] =
[0    0    0    0  ]
[0    0.5  0.1  0  ]
[0.2  0.3  0.8  0.9]
[0.8  0.2  0.1  0.1]
Then, we can model the severity of the functional loss of element r as follows:
S′_r = Min( MS, Σ_{i=1}^{n} Σ_{j=1}^{m} re_rij · S_ij )    (5)
C′_r = f_r · S′_r    (6)
C′_b = 1 × 31.1 = 31.1
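Equations (5) and (6) differ from (3) and (4) only in using observed conditional probabilities re_rij instead of 0/1 potential effects; a sketch with a hypothetical probability matrix whose rows satisfy the Σ_j re_rij = 1 constraint and the same hypothetical severity matrix as in the previous sketch:

```python
MS = 100
S = [  # hypothetical severity matrix S_ij, as in the previous sketch
    [0.0,  5.0, 15.0, 100.0],
    [0.0, 10.0, 19.5,  30.0],
    [0.0,  6.7, 13.3,  20.0],
    [1.2,  3.0,  6.3,  12.0],
]
re = [  # hypothetical observed effect probabilities re_rij; each row sums to 1
    [1.0, 0.0, 0.0,  0.0 ],
    [0.4, 0.5, 0.1,  0.0 ],
    [0.2, 0.3, 0.4,  0.1 ],
    [0.8, 0.1, 0.05, 0.05],
]
assert all(abs(sum(row) - 1.0) < 1e-9 for row in re)  # probability constraint per criterion

# Equation (5): current (observed) severity; Equation (6): current criticality.
S_cur = min(MS, sum(re[i][j] * S[i][j] for i in range(4) for j in range(4)))
C_cur = 1.0 * S_cur  # frequency factor f_r = 1 (assumed)
print(round(S_cur, 3), round(C_cur, 3))
```

Because the probabilities spread mass over the milder effect levels, the current severity is naturally lower than the potential severity computed from the 0/1 matrix, which is exactly the gap the potential-versus-current comparison in Section 3 exploits.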
Assets  FE   PE1 (safety)  PE2 (operational reliability)  PE3 (comfort)  PE4 (CM cost)
a       L    L             M                              VH             H
b       L    VH            H                              H              M
c       M    L             M                              L              H
d       H    H             H                              H              M
e       M    L             M                              M              L
f       L    L             L                              H              M
g       L    M             VH                             M              L
h       H    H             L                              L              L
i       VH   L             VH                             M              L
j       L    L             L                              L              L
CM, corrective maintenance; FE, frequency effects; H, high; L, low; M, medium; PE, potential effects; VH, very high.
2. Convert these qualitative data into quantitative data (Table VI), considering the weights of each criterion and the model explained in the previous paragraphs. It is very important to keep the qualitative and quantitative datasets separate: this makes it possible to test the sensitivity of the criticality assessment to changes in the criteria weights, modifying the final ranking of the assets without changing the data retrieved in step 1.
3. Set the quantitative criteria for assigning the category of low, mid, or high criticality to an asset, as in Table VII. This decision may condition the organizational effort to be dedicated later to the management of the different categories of assets. This is a business issue, and consensus should be reached within the review team and the management team before any further process development.
4. Populate the potential criticality matrix representation with assets (Figure 1). Notice that this matrix considers current (observed) frequencies and potential severities for each equipment functional loss. Notice that shaded areas in the matrices in Figures 1 and 2 indicate the minimum criticality level of all elements in that area.
5. Retrieve quantitative data, using a unique matrix, for current observed frequency and severity effects per element (Table VIII).
6. Populate, with assets, the current criticality matrix representation (Figure 2). Notice that this matrix considers current (observed)
frequencies and severities, for each equipment functional loss.
7. To make further analysis easy, populate, with assets having both potential and observed functional loss severity, a criticality
matrix representation (Figure 3). In the matrix in Figure 3, we can compare results obtained in two previous criticality matrices:
potential and current.
8. Automatically generate the criticality report, listing the asset ranking per criticality levels and, within each level, classifying the
assets by common frequencies and severities. A rationale for this is related to the type of strategy that will be required to
manage them. Realize that, for managers, the most important outputs from this analysis are these lists of assets falling under
different criticality categories and subcategories.
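Steps 3 and 8 above can be sketched together: given the criticality values from the example's results table, assets are assigned to categories using thresholds (the values below are hypothetical; the actual ones would come from Table VII) and listed per category in descending criticality order:

```python
# Criticality values C' per asset, taken from the example's results table.
criticality = {"a": 19.4, "b": 31.1, "c": 32.1, "d": 159.6, "e": 37.3,
               "f": 3.3, "g": 79.3, "h": 15.4, "i": 230.5, "j": 1.2}

def category(c, low=30.0, high=100.0):
    """Assign a criticality category; the thresholds low/high are hypothetical."""
    if c >= high:
        return "high"
    return "mid" if c >= low else "low"

# Step 8: rank assets by criticality and group them per criticality level.
report = {}
for asset, c in sorted(criticality.items(), key=lambda kv: -kv[1]):
    report.setdefault(category(c), []).append(asset)
print(report)  # {'high': ['i', 'd'], 'mid': ['g', 'e', 'c', 'b'], 'low': ['a', 'h', 'f', 'j']}
```

Keeping the raw qualitative data separate from this quantitative step (as step 2 requires) means the thresholds and weights can be varied for sensitivity analysis without touching the retrieved data.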
r fe L M H VH L M H VH L M H VH L M H VH L M H VH S′r Criticality
a 1 1.1 0.1 0.7 0.5 0.5 0.0 13.5 2.13 3.75 19.4 19.4
b 1 0.8 0.2 0.2 0.3 0.5 0.1 0.8 0.1 0.1 0.9 1.32 19.5 4.53 5.79 31.1 31.1
c 3.3 1.0 0.5 0.5 0.8 0.2 0.0 7.5 0.0 2.22 9.7 32.1
d 5.4 0.8 0.1 0.1 0.3 0.2 0.5 0.1 0.9 0.0 1.0 1.66 18.0 3.6 6.3 29.6 159.6
e 3.3 1.0 1.0 0.1 0.2 0.7 1.0 0.0 0.0 10.1 1.2 11.3 37.3
f 1 1.0 1.0 0.7 0.2 0.1 1.0 0.0 0.0 2.13 1.2 3.3 3.3
g 3.3 0.8 0.2 0.2 0.3 0.5 0.5 0.5 1.0 1.32 19.5 2.0 1.2 24.0 79.3
h 5.4 0.8 0.1 0.1 1.0 1.0 1.0 1.66 0.0 0.0 1.2 2.9 15.4
i 6.7 1.0 0 0 1.0 0.2 0.8 1.0 0.0 30.0 3.2 1.2 34.4 230.5
j 1 1.0 1.0 1.0 1.0 0.0 0.0 0.0 1.2 1.2 1.2
H, high; L, low; M, medium; VH, very high.
year-old power plant. The purpose of the study was to audit current maintenance management. For this particular case, after a first
Figure 7. Case study: priority for 9921 systems—power plant—result 1. ERP, enterprise resource planning; TL, technical location
Figure 8. Current criticality matrix representation. MP, maintenance plans; PM, preventive maintenance; TL, technical location
round of analysis (prior to the method application), the review team decided to rank only 4816 of those assets, now considered as technical locations in the company's enterprise resource planning system. The only reason for this was that the remaining assets (5015, out of the total of 9921) were considered auxiliary plant equipment not relevant for the suggested maintenance study.
After 2 months, the criticality analysis process was finished; the team realized that 2123 technical locations of the plant (Figure 7) had assets with preventive maintenance plans (MPs) assigned despite having null or very low severity of consequences, and these probably deserved less maintenance effort. It was striking to see (Figure 8) how preventive maintenance effort on low-criticality assets, after 4 years of operation of the plant and considering a planning horizon of five more years, would be almost 40% higher than that on medium-criticality and high-criticality items. For this case study, around 70% of the 41,387.7 h was discarded by directly changing the hours assigned to them in the MPs introduced in the enterprise resource planning system, and 10% of those hours was dedicated to preventive MPs for high-criticality equipment. After 5 years of operation, in 2013, the same plant performance was reached with 30% savings in overall maintenance cost (direct and indirect cost), only by redirecting and aligning maintenance efforts to business needs using this criticality analysis. Similar results have been obtained when dealing with this issue in other scenarios, even for companies
6. Conclusions
This paper presents the design of a process and model for criticality analysis for maintenance purposes, under specific design constraints. The methodology ensures consistency of the analysis with business needs and with the existing data of complex in-service engineering assets. At the same time, an effort is made to describe how to turn this process into a practical management tool. Issues arising from extensive data handling and the need for easy result representation are addressed. Finally, guidelines for result interpretation are offered. The authors believe that this type of analysis will become a must for the review and redesign of maintenance strategies for complex in-service assets. Further research can use this methodology for the improvement of specific operational scenarios or to refine the different steps of the process presented in this work.
Acknowledgements
This research is funded by the Spanish Ministry of Science and Innovation, Project EMAINSYS (DPI2011-22806) ‘Sistemas Inteligentes de Mantenimiento. Procesos Emergentes de E-maintenance para la Sostenibilidad de los Sistemas de Producción’, together with FEDER funds.
References
1. EN 13306:2010. Maintenance terminology. European Standard. CEN (European Committee for Standardization), Brussels, 2010.
2. Wireman T. Developing Performance Indicators for Managing Maintenance. Industrial Press: New York, 1998.
3. Crespo Márquez A. The Maintenance Management Framework. Models and Methods for Complex Systems Maintenance. Springer Verlag: London,
UK, 2007.
4. Moss TR, Woodhouse J. Criticality analysis revisited. Quality and Reliability Engineering International 1999; 15:117–121.
5. US Department of Defense. Procedures for performing a failure mode and effects analysis, MIL-STD-1629A, 1977.
6. Palaez CE, Bowles JB. Using fuzzy logic for system criticality analysis. Proceedings of the Reliability and Maintainability Symposium, Anaheim, CA, January 1994; 449–455.
7. Pillay A, Wang J. Modified failure mode and effects analysis using approximate reasoning. Reliability Engineering & System Safety 2003; 79:69–85.
8. NORSOK Standards. Criticality analysis for maintenance purposes. NORSOK standard Z-008 Rev. 2, Nov., 2001.
9. Duffuaa SO, Raouf A, Campbell JD. Planning and Control of Maintenance Systems. John Wiley and Sons: Indianapolis, 2000.
10. Taghipour S, Banjevic D, Jardine AKS. Prioritization of medical equipment for maintenance decisions. Journal of the Operational Research Society
2010; 62(9):1666–1687.
11. Arunraj NS, Maiti J. Risk-based maintenance—techniques and applications. Journal of Hazardous Materials 2007; 142:653–661.
12. Brown SJ, May IL. Risk-based hazardous protection and prevention by inspection and maintenance. Journal of Pressure Vessel Technology 2000;
122:362–367.
13. Khan FI, Haddara M. Risk-based maintenance (RBM): a new approach for process plant inspection and maintenance. Process Safety Progress 2003; 23(4):252–265.
14. Moubray J. Reliability-Centered Maintenance (2nd edn). Industrial Press Inc.: New York, 1999.
15. Bevilacqua M, Braglia M. The analytic hierarchy process applied to maintenance strategy selection. Reliability Engineering and System Safety 2000;
70:71–83.
16. Molenaers A, Baets H, Pintelon L, Waeyenbergh G. Criticality classification of spare parts: a case study. International Journal of Production Economics
2012; 140:570–578.
17. Moubray J. Maintenance management: a new paradigm. ReliabilityWeb.com, 2014; http://reliabilityweb.com/index.php/articles/maintenance_management_a_new_paradigm/
18. Resnikoff H. Mathematical Aspects of Reliability-centered Maintenance. Dolby Access Press: Los Altos, CA, 1978.
Authors' biographies
Adolfo Crespo Márquez is currently a Full Professor at the School of Engineering of the University of Seville and Head of the
Department of Industrial Management. He holds a PhD in Industrial Engineering from this same university. His research works have
been published in journals such as the International Journal of Production Research, International Journal of Production Economics,
European Journal of Operations Research, Journal of Purchasing and Supply Management, International Journal of Agile Manufacturing,
Omega, Journal of Quality in Maintenance Engineering, Decision Support Systems, Computers in Industry, Reliability Engineering and System Safety, and International Journal of Simulation and Process Modeling, among others. Professor Crespo is the author of five books, the last three published with Springer-Verlag in 2007, 2010, and 2012 on maintenance and supply chain management. He leads the Spanish Research Network on Dependability Management and led the Spanish Committee for Maintenance Standardization (1995–2003). He also leads a research team on maintenance and dependability management, currently with five PhD students and four researchers. He has participated extensively in engineering and consulting projects for different companies, for the Spanish Departments of Defense, Science, and Education, and for the European Commission (Institute for Prospective Technological Studies). He has been the President of INGEMAN (a National Association for the Development of Maintenance Engineering in Spain) since 2002.
Pedro Moreu de León is an expert in maintenance engineering and maintenance management. He holds a PhD in Industrial
Management from the University of Seville. Assistant Professor Moreu is the author of several books and publications on these topics,
is Chairman of the Technical Committee for Standardization of Maintenance CTN AEN-151 ‘Maintenance’ of the Asociación Española de Normalización (AENOR; Spanish Association for Standardization) (since June 2002), and is an Expert Spanish Officer (AENOR) and Secretary
of the Spanish Delegation to the European Committee for Standardization CEN/TC 319 ‘Maintenance’ and Convenor of the Working Group for the Maintenance Management standard (WG7, ‘Maintenance Management’) of that Committee.
Antonio Sola Rosique is a civil engineer currently acting as Vice President of INGEMAN (a National Association for the Development
of Maintenance Engineering in Spain). His professional experience (34 years) is very much related to the field of dependability and
maintenance engineering in different types of power generation plants for Iberdrola Generación (nuclear, thermal, hydro, and combined-cycle gas/steam plants). Antonio has coordinated various collaborative projects between Iberdrola and the University of Seville
and is an active member in the board of European Safety, Reliability and Data Association, Asociación Española de Mantenimiento,
Asociación Española de la Calidad, and AENOR. At present, Antonio is about to finish a PhD within the field of Maintenance and Risk
Management in the University of Seville.
Juan Francisco Gómez Fernández is currently part of the Spanish Research and Development Group in Industrial Management of
the University of Seville. He obtained his PhD in Industrial Management from this university and was awarded the Best Master Thesis in Maintenance in Europe by the European Federation of National Maintenance Societies. He has authored publications in journals such as Computers in Industry, Reliability Engineering and System Safety, International Journal of Simulation and Process Modeling, Journal of Quality in Maintenance Engineering, and Journal of Management Mathematics, among others. Juan is the author
of one book with Springer-Verlag in 2012 about maintenance management in network utilities. In relation to practical application
and experience, he has managed network maintenance and deployment departments in various national distribution network
companies, from both the private and public sectors. He has conducted and participated in engineering and consulting projects for
different companies, related to information and communications technologies, maintenance management, and outsourcing services
in distribution network companies.