
Technically co-sponsored by the IEEE Changwon Section

IEEE Catalog Number: CFP16XAD-ART
ISBN: 978-1-5090-1671-6
2016 International Conference on Industrial Engineering, Management Science and
Application (ICIMSA)

Copyright © 2016 by the Institute of Electrical and Electronics Engineers, Inc.


All rights reserved.

Copyright and Reprint Permissions


Abstracting is permitted with credit to the source. Libraries are permitted to photocopy
beyond the limit of U.S. copyright law for private use of patrons those articles in this
volume that carry a code at the bottom of the first page, provided the per-copy fee
indicated in the code is paid through Copyright Clearance Center, 222 Rosewood Drive,
Danvers, MA 01923.

For other copying, reprint or republication permission, write to IEEE Copyrights


Manager, IEEE Service Center, 445 Hoes Lane, Piscataway, NJ 08854. All rights
reserved.

IEEE Catalog Number: CFP16XAD-ART


ISBN: 978-1-5090-1671-6

Printed copies of this publication are available from:


Curran Associates, Inc.
57 Morehouse Lane
Red Hook, NY 12571 USA
Phone: (845) 758-0400
Fax: (845) 758-2633
E-mail: curran@proceedings.com

Produced by IEEE eXpress Conference Publishing


For information on producing a conference proceedings and receiving an estimate, contact conferencepublishing@ieee.org
http://www.ieee.org/conferencepublishing
Welcome Message from our General Chairs
Welcome to the International Conference on Industrial Engineering, Management Science and
Applications 2016 in Jeju. On behalf of the conference committee, we would like to convey our
appreciation to all authors for participating and contributing their valuable work to make this conference
great. It is our great honor to welcome you, and we wish you both professional success at ICIMSA and a
great time in the beautiful city of Jeju.

ICIMSA 2016 aimed to discover new progressive technologies by enhancing previous technologies
and to solve the technical problems that may occur in the process of merging technologies in
various fields of industry.

At ICIMSA 2016, IT experts, researchers and practitioners from each field have been invited to share
ideas and research technologies. Moreover, they will be encouraged to cooperate with each other to
overcome the technical problems they confront. As a result, this conference will become a place of
knowledge, with a variety of effects created in every field related to industrial engineering, management
science and applications.

The conference seeks contributions presenting novel research results in all aspects of industrial
engineering, management science and applications. Submitted papers have been reviewed by our Program
Committee Members and Reviewers. The accepted papers will be published by IEEE and will be
submitted for indexing by EI, ISI and Scopus. We would like to thank all organizers, supporters, and
organizing committee members who are listed in the following pages. Each reviewer has reviewed at
least four papers, while each paper has been reviewed by at least three international reviewers. Our
success would not have been possible without their support and contribution, as we strongly believe their
collaboration and support were invaluable in making this International Conference fruitful and insightful.
Lastly, we would be greatly honored and pleased to have your continued contributions to future
ICIMSA conferences.

With warm regards,

Sangmin Park, Inchon National University


Xiaoxia Huang, University of Science and Technology Beijing, China
Nikolai Joukov, New York University and modelizeIT Inc., USA

Organizing Committee

General Chairs
Sangmin Park, Inchon National University
Xiaoxia Huang, University of Science and Technology Beijing, China
Nikolai Joukov, New York University and modelizeIT Inc., USA
Steering Committee
Nikolai Joukov, New York University and modelizeIT Inc., USA
Borko Furht, Florida Atlantic University, USA
Bezalel Gavish, Southern Methodist University, USA
Kin Fun Li, University of Victoria, Canada
Kuinam J. Kim, Kyonggi University, Republic of Korea
Naruemon Wattanapongsakorn, King Mongkut’s University of Technology Thonburi, Thailand
Xiaoxia Huang, University of Science and Technology Beijing, China
Dato’ Ahmad Mujahid Ahmad Zaidi, National Defence University of Malaysia, Malaysia
Publicity Chairs
Dan (Dong-Seong) Kim, University of Canterbury, New Zealand
Changchao Gu, Sinopec Management Institute, China
Suresh Thanakodi, National Defence University of Malaysia, Malaysia
Workshop Chairs
Donghwi Lee, University of Colorado, USA
Financial Chairs
Kyoungho Choi, Institute of Creative Advanced Technologies, Science and Engineering, Republic of Korea
Program Chairs
Kuinam J. Kim, Kyonggi University, Korea

Organizers & Supporters


The Korean Federation of Science and Technology Societies (KOFST)
Institute of Creative Advanced Technologies, Science and Engineering (iCatse)
Chinese Management Science Society (CMSS)
Korean Industry Security Forum (KISF)
Korea Information Assurance Society (KIAS)
Kyonggi University, Republic of Korea
Chonnam National University, Republic of Korea
University of Science and Technology Beijing, China
King Mongkut’s University of Technology Thonburi, Thailand
River Publishers, Netherlands
IEEE Changwon Section

Program Committee
Ana Paula Ferreira Barroso, UNIDEMI, Portugal
Andreas Dewald, University of Erlangen, Germany
Andrew Kusiak, Department of Mechanical and Industrial Engineering, USA
Ankur Gupta, Model Institute of Engineering and Technology, India
António Carlos da Silva Abelha, University of Minho, Portugal
António Grilo, UNIDEMI, Portugal
Baghdadi, Youcef, Sultan Qaboos University, Oman
Camacho, Sonia, McMaster University, Canada
Carlos Alberto Lourenço dos Santos, University of Aveiro, Portugal
Catalina Lucía Alberto, Universidad Nacional de Córdoba, Argentina
Chin-Yuan Fan, National Applied Research Laboratories, Science & Technology Policy Research and
Information Center, Taiwan
Cláudio Alves, Universidade de Minho, Portugal
Cruz-Benito, Juan, Instituto Universitario de Ciencias de la Educación, España
DANG Thanh Tin, Ho Chi Minh city University of Technology, Viet Nam
El-Houssaine Aghezzaf, Ghent University, Belgium
Elias, Micheline, Ecole Centrale Paris, SAP Academic Chair on Business Intelligence New York, USA
Fatemeh Almasi, Amirkabir University of Technology, Iran
Haeil Ahn, Seokyeong University, Republic of Korea
Hailiang YANG, The University of Hong Kong, Hong Kong
HAO Gang, University of Hong Kong, Hong Kong
Hardeep Singh, Ferozepur College of Engineering Technology (FCET) / Punjab Technical University, India
Hefu Liu, University of Science and Technology of China, China
HEMA ZULAIKA HASHIM, Universiti Teknologi MARA, Malaysia
Ilias Santouridis, Technological Educational Institute (TEI) of Thessaly, Greece
Isabel Maria Correia Cruz, Universidade de Coimbra, Portugal
Jae Kyung (JK) Woo, University of Hong Kong, Hong Kong
José Machado, University of Minho, Portugal
Justin Varghese, King Khalid University Abha, KSA
Kaba, Abdoulaye, Al Ain University of Science and Technology, UAE
Kadry, Seifedine, American University of the Middle East, Kuwait
Kamran Behdinan, University of Toronto, Canada
Leszek A. Maciaszek, Wroclaw University of Economics, Poland and Macquarie University, Sydney
Australia
M. Nasiruddin Munshi, University of Dhaka, Bangladesh
Manuel Filipe Vieira Torres dos Santos, Algoritmi Research Centre / University of Minho, Portugal
Marco Aiello, University of Groningen, The Netherlands
Marco Anisetti, Università degli Studi di Milano, Italy
Meng-Shan Tsai, National Kaohsiung Normal University, Taiwan
Mikhail M. Komarov, National Research University Higher School of Economics, Russia
Mohd. Shoki Md. Ariff, Universiti Teknologi Malaysia, Malaysia
Mohsen Abdelnaein Hassan Mohamed, Assiut University, Egypt

Paweł Błaszczyk, University of Silesia, Poland
Ramayah Thurasamy, Universiti Sains Malaysia, Malaysia
Reggie Davidrajuh, University of Stavanger, Norway
Rika Ampuh Hadiguna, Andalas University, Indonesia
Rionel Belen Caldo, Lyceum of the Philippines University – Laguna (LPU-L)/De La Salle University
(DLSU)
Rui Kang, Beihang University, China
Rui Pedro Figueiredo Marques, University of Aveiro, Portugal
Rusmadiah Bin Anwar, Universiti Teknologi Malaysia, Malaysia
S.Sarifah Radiah Shariff, Universiti Teknologi MARA, Shah Alam Selangor, Malaysia
Sabin-Corneliu Buraga, Alexandru Ioan Cuza’ University of Iasi, Romania
Sandeep Grover, YMCA University of Science and Technology, India
Seren Ozmehmet Tasan, Dokuz Eylul University, Turkey
Seyedmohsen Hosseini, University of Oklahoma, USA
Sharul Kamal Abdul Rahim, Universiti Teknologi Malaysia, Malaysia
Sheroz Khan, International Islamic University Malaysia, Malaysia
Shimpei Matsumoto, Hiroshima Institute of Technology, Japan
Siana Halim, Petra Christian University Surabaya, Indonesia
Sidi WU, Waseda University, Japan
Sim-Hui Tee, Multimedia University, Malaysia
Simon Collart-Dutilleul, Ifsttar, France
Siti Noor Hajjar Md Latip, Universiti Teknologi MARA, Malaysia
Siti Zulaiha Ahmad, MARA University of Technology Perlis, Malaysia
Supachart Iamratanakul, Kasetsart University, Thailand
Suprakash Gupta, Indian Institute of Technology (BHU), India
Svetlana Maltseva, National Research University, Russia
T. Błaszczyk, University of Economics, Katowice, Poland
Tahira Awan, International Islamic University, Pakistan
Tantikorn Pichpibul, Panyapiwat Institute of Management, Thailand
Xianda Kong, Tokyo Metropolitan University, Japan
Yoshinobu Tamura, Yamaguchi University, Japan
Zety Sharizat Bt Hamidi, Universiti Teknologi MARA, Malaysia

Table of Contents

A Hybrid Fuzzy Multi-Criteria Decision-Making Approach for Mitigating Air Traffic Congestion .................. 1
Miriam F. Bongo and Lanndon A. Ocampo

Constrained Maximization of Social Welfare with Fiscal Transfer Scheme .......................................................... 6


Shinji Suzuki and Takehiro Inohara

Modelling Commercial Framework of the Strategic Elementary Design Concept.............................................. 11


Mohd Shaleh Mujir, Norashikin Hasan and Oskar Hasdinor Hassan

A Service-Oriented Cloud Application for a Collaborative Tool Management System ...................................... 15


Marcus Röschinger, Orthodoxos Kipouridis and Willibald A. Günthner

Automated Documentation for Rapid Prototyping ................................................................................................ 20


Napajorn Wattanagul and Yachai Limpiyakorn

Generation of Images of WND Equivalent from HTML Prototypes .................................................................... 24


Ekarat Prompila and Yachai Limpiyakorn

Voice Recognition Using k Nearest Neighbor and Double Distance Method ....................................................... 29
Ranny

Intelligent Robot's Behavior Based on Fuzzy Control System .............................................................................. 34


Eva Volna, Martin Kotyrba and Michal Jaluvka

Effects of Required Coefficient of Friction for Female in Different Gait ............................................................. 39


Yu Ting Chen, Kai Way Li and Yi Yang Chen

An Application of Statistical Modeling for Classification of Human Motor Skill Level ..................................... 43
Wenqi Ma and David B. Kaber

Analysis of Vehicle Crash Injury-Severity in a Superhighway: A Markovian Approach .................................. 48


John Carlo F. Marquez, Darwin Joseph B. Ronquillo, Noime B. Fernandez and Venusmar C. Quevedo

Assessing Participation in Decision-Making among Employees in the Manufacturing Industry....................... 53


Shiau Wei Chan, Abdul Razak Omar, R. Ramlan, Izzuddin Zaman, Siti Sarah Omar and Khan Horng Lim

Estimation of Response Surface Based on Central Composite Design Using Spatial Prediction ....................... 58
Haeil Ahn

Reducing Operational Downtime in Service Processes: A Six Sigma Case Study ............................................... 63
Raid Al-Aomar, Saeed Aljeneibi and Shereen Almazroui

Ownership Structure, Internal Control and Real Activity Earnings Management............................................. 68


Xiao Wang, Jie Gao, Chunling Shang and Qiang Lu

Sustained Quality Award Status in Developing Country: A Study on the Dubai Quality Award Recipients ... 72
M. Doulatabadi and S. M. Yusof

Software Reliability Model Selection Based on Deep Learning............................................................................. 77
Yoshinobu Tamura, Mitsuho Matsumoto and Shigeru Yamada

A Comprehensive Analysis Method for System Resilience Considering Plasticity.............................................. 82


Yanbo Zhang, Rui Kang, Ruiying Li and Chenxuan Yang

Reliability and Failure Behavior Model of Optoelectronic Devices ...................................................................... 86


Ning Tang, Ying Chen and ZengHui Yuan

Optimal Replacement Policy Based on Cumulative Damage for a Two-Unit System ......................................... 91
Shey-Huei Sheu, Zhe-George Zhang, Tzu-Hsin Liu and Hsin-Nan Tsai

Method Construction for Risk Factors Identification of Public Hospital Operation .......................................... 94
Shirui Peng, Guofeng Su, Jianguo Chen and Zhiru Wang

A Study on Device Security in IoT Convergence .................................................................................................... 99


Hyun-Jin Kim, Hyun-Soo Chang, Jeong-Jun Suh and Tae-shik Shon

Design of Arc Fault Pressure and Temperature Detectors in Low Voltage Switchboard ................................. 103
Lee Choo Kuan and Ahmad Azri Sa’adon

Performance Evaluation of Discrete-Time Hedging Strategies for European Contingent Claims .................. 107
Easwar Subramanian, Vijaysekhar Chellaboina and Arihant Jain

Explicit Solutions of Discrete-Time Hedging Strategies for Multi-Asset Options ............................................. 112
Easwar Subramanian, Vijaysekhar Chellaboina and Arihant Jain

Optimal Static Hedging of Uncertain Future Foreign Currency Cash Flows Using FX Forwards ................. 117
Anil Bhatia and Sanjay P. Bhat

A Neural Network Approach to the Operational Strategic Determinants of Market Value in


High-Tech Oriented SMEs...................................................................................................................................... 122
Jooh Lee and He-Boong Kwon

Green Banking: A Proposed Model for Green Housing Loan............................................................................. 126


Alexander V. Gutierrez

A Feasibility Study and Design of Biogas Plant via Improvement of Waste Management and
Treatment in Ateneo de Manila University ........................................................................................................... 130
S. Granada, R. Preto, T. Perez, C. Oppus, G. Tangonan and P. M. Cabacungan

Effective Data Collection and Analysis of Solar Radio Burst Type II Event Using Automated
CALLISTO Network System.................................................................................................................................. 133
N. H. Zainol, S. N. U. Sabri, Z. S. Hamidi, M. O. Ali, N. N. M. Shariff, Nurulhazwani Husien,
C. Monstein and M. S. Faid

The Dependence of Log Periodic Dipole Antenna (LPDA) and e-CALLISTO Software to
Determine the Type of Solar Radio Burst (I -V) ................................................................................................... 138
S. N. U. Sabri, N. H. Zainol, M. O. Ali, N. N. M. Shariff, NurulHazwani Hussien, M. S. Faid, Z. S. Hamidi and
C. Monstein

Monitoring the Level of Light Pollution and Its Impact on Astronomical Bodies Naked-Eye Visibility
Range in Selected Areas in Malaysia Using the Sky Quality Meter .................................................................... 143
M. S. Faid, Nurulhazwani Husien, N. N. M. Shariff, M. O. Ali, Z. S. Hamidi, N. H. Zainol and S. N. U. Sabri

Solar Radio Bursts Detected by CALLISTO System and Their Related Events ............................................... 148
Nurulhazwani Husien, N. H. Zainol, Z. S. Hamidi, S. N. U. Sabri, N. N. M. Shariff, M. S. Faid, M. O. Ali and
C. Monstein

Simulation Modeling of Sewing Process for Evaluation of Production Schedule in Smart Factory ................ 153
Sooyoung Moon, Sungjoo Kang, Jaeho Jeon and Ingeol Chun

A Knowledge Management Framework for Studying the Child Obesity ........................................................... 156
Raslapat Suteeca and Prompong Sugunnasil

Hybrid Clustering System Applied in Patent Quality Management - Take Intelligent Car
Industry for Example .............................................................................................................................................. 160
Chin-Yuan Fan and Shu-Hao Chang

An Integrated QFD and Kano's Model to Determine the Optimal Target Specification .................................. 165
Dian Retno Sari Dewi and Dini Endah Setyo Rahaju

A Simultaneous Integrated Model with Multiobjective for Continuous Berth Allocation and
Quay Crane Scheduling Problem ........................................................................................................................... 170
Nurhidayu Idris and Zaitul Marlizawati Zainuddin

A Multi Objective Mixed Integer Programming Approach for Sustainable Production System..................... 175
Ma. Teodora E. Gutierrez

The Determining Factors in Prescribing Anti-Hypertensive Drugs to First-Ever Ischemic Stroke Patients .. 178
Rasvini Rajendran and Zaitul Marlizawati Zainuddin

Location Routing Inventory Problem with Transshipment Points Using p-Center .......................................... 183
S. Sarifah Radiah Shariff, Mohd Omar and Noor Hasnah Moin

Optimal Worker Assignment under Limited-Cycled Model with Multiple Periods - Consecutive
Delay Times is Limited ............................................................................................................................................ 188
Xianda Kong, Hisashi Yamamoto and Shiro Masuda

Method for an Energy-Cost-Oriented Manufacturing Control to Reduce Energy Costs: Energy Cost
Reduction by Using a New Sequencing Method.................................................................................................... 193
Stefan Willeke, Georg Ullmann and Peter Nyhuis

Economic Production Quantity with Imperfect Quality, Imperfect Inspections, Sales Return, and
Allowable Shortages ................................................................................................................................................ 198
Muhammad Al-Salamah

Analysis and Prediction Cost of Manufacturing Process Based on Process Mining.......................................... 206
Thi Bich Hong Tu and Minseok Song

Application of TPM in Production Process of Aluminium Stranded Conductors ............................................. 211


Orapadee Joochim and Jumnong Meekaew

'Creating Awareness on Light Pollution' (CALP) Project: Essential Requirement for
School-University Collaboration ............................................................................................................................ 216
N. N. M. Shariff, M. R. Osman, M. S. Faid, Z. S. Hamidi, S. Sabri, N. H. Zainol, M. O. Ali and N. Husien

Signal Detection of the Solar Radio Burst Type III Based on the CALLISTO System
Project Management ............................................................................................................................................... 220
Z. S. Hamidi, N. H. Zainol, M. O. Ali, S. N. U. Sabri, N. N. M. Shariff, M. S. Faid, Nurulhazwani Husien and
C. Monstein

A Risk Management Approach for Collaborative NPD Project ......................................................................... 224


Ioana Filipas Deniaud, François Marmier, Didier Gourc and Sophie Bougaret

e-CALLISTO Network System and the Observation of Structure of Solar Radio Burst Type III .................. 229
M. O. Ali, S. N. U. Sabri, Z. S. Hamidi, Nurulhazwani Husien, N. N. M. Shariff, N. H. Zainol, M. S. Faid and
C. Monstein

Use of Earned Value Management in the UAE Construction Industry .............................................................. 234
Mohamed Morad and Sameh M. El-Sayegh

Significant Factors Affecting the Size and Structure of Project Organizations ................................................. 238
Sameh M. El-Sayegh, Mustafa Kashif, Mohammed Al Sharqawi, Nilli Nikoula and Mei Alhimairee

Real-World Software Projects as Tools for the Improvement of Student Motivation and
University-Industry Collaboration......................................................................................................................... 243
Zsolt Csaba Johanyák

The Impact of Information Technology and the Alignment between Business and Service
Innovation Strategy on Service Innovation Performance .................................................................................... 247
Fotis Kitsios and Maria Kamariotou

Optimal Strategies for Escaping from the Middle Income Trap: Automotive Supply Chain in Thailand...... 252
Kanda Boonsothonsatit and Orapadee Joochim

The Design of Models for Coconut Oil Supply Chain System Performance Measurement .............................. 256
Meilizar, Lisa Nesti and Putranesia Thaha

Application of Conjoint Analysis in Establishing Aviation Fuel Attributes for Air Travel Industries ............ 262
Karla Ishra S. Bassig and Hazeline A. Silverio

Internet of Things-Enabled Supply Chain Performance Measurement Model ................................................. 270


Abdallah Jamal Dweekat and Jinwoo Park

Forecasting Method under the Introduction of a Day of the Week Index to the Daily Shipping Data
of Sanitary Materials ............................................................................................................................................... 273
Koumei Suzuki, Kazuhiro Takeyasu and Hirotake Yamashita

Text Mining Analysis on the Questionnaire Investigation for High School Teachers' Work Load ................. 278
Kazuhiro Takeyasu, Tatsuya Oyanagi, Yasuo Ishii and Daisuke Takeyasu

The Analysis to the Questionnaire Investigation on the Rare Sugars ................................................................. 283
Yuki Higuchi, Kazuhiro Takeyasu and Hiromasa Takeyasu

Predicting Customer Lifetime Value through Data Mining Technique in a Direct Selling Company............. 288
Arsie P. Mauricio, John Michael M. Payawal, Maida A. Dela Cueva and Venusmar C. Quevedo

Knowledge Sharing and the Innovation Capability of Chinese Firms: The Role of Guanxi ............................ 293
Oswaldo Jose Jimenez Torres and Dapeng Liang

Two-Level Hierarchical Routing Based on Road Connectivity in VANETs ...................................................... 298


Zubair Amjad, Wang-Cheol Song and Khi-Jung Ahn

An Overview of Wax Crystallization, Deposition Mechanism and Effect of Temperature & Shear ............... 303
Azwan Harun, Nik Khairul Irfan Nik Ab Lah, Zulkafli Hassan and Hazlina Hussin

Countercyclical Buffer of Basel III and Cyclical Behavior of Palestinian Banks' Capital Resource ............... 308
Ahmed N. K. Alfarra, Xiaofeng Hui, Ehsan Chitsaz and Jaleel Ahmed

Author Index ............................................................................................................................................................ 313


A Hybrid Fuzzy Multi-Criteria Decision-Making Approach for Mitigating Air Traffic Congestion

Miriam F. Bongo
Department of Industrial Engineering, School of Engineering
University of San Carlos, Cebu City, Philippines
miriam.bongo@yahoo.com

Lanndon A. Ocampo*
Department of Mechanical and Manufacturing Engineering, School of Engineering
University of San Carlos, Cebu City, Philippines
don_leafriser@yahoo.com
*corresponding author

Abstract — This paper presents a novel hybrid MCDM approach based on concepts of fuzzy DEMATEL, ANP, and fuzzy TOPSIS to address air traffic congestion in the Philippines. The proposed approach enables the decision-makers under the Air Traffic Services (ATS) to elicit judgment on which ATFM action (i.e., ground holding, airborne holding, rerouting, and speed controlling) is most suitable to be applied in the occurrence of air traffic congestion. A case study at Mactan Cebu International Airport (MCIA) is conducted to illustrate the application of the proposed framework based on pairwise comparison among criteria and evaluation of ATFM actions with respect to the previously given set of criteria. The results revealed that the decision-makers are most concerned with the criterion on safety. Correspondingly, ground holding is perceived to be the most suitable ATFM action to be applied during air traffic congestion, as its degree of safety once implemented is highest.

Keywords — air traffic congestion, ANP, fuzzy DEMATEL, fuzzy TOPSIS, MCDM

I. INTRODUCTION

Flight delays have become the most prominent issue affecting not only the on-time schedule reliability of airlines but also the operational reputation of an airport and the quality of customer service experienced by air passengers. As a consequence, the order and flow of scheduled flights is seriously disrupted and a significant impact on aviation safety is at stake. In the same way, airports and airlines are confronted with huge losses and low credit [1]. In the Philippines, demand for air travel is continuously growing by an average of 11% annually, which has prompted its major airports to constantly address it given the limited airport and airspace resources. With the growth of air passengers comes the consequence of delays even under normal weather conditions. According to an official record disclosed by Air Traffic Services (ATS), which functions directly under the Civil Aviation Authority of the Philippines (CAAP), on an average daily basis, 14% of flights are delayed. These statistics further reveal that flight delays are indeed considered a major threat to the airlines management, airport management, and even the ATS providers, as their effects ripple and accumulate from one flight leg to succeeding flight legs. In reality, the major decision-makers in the commercial aviation industry include the airlines management, airport management, and ATS providers. Furthermore, it is imperative to understand that under the umbrella of ATS, there are four major divisions that function as one and elicit judgment regarding the most suitable air traffic flow management (ATFM) action to be applied in case of air traffic congestion. Also, working under the principle that ATS, unlike airlines management and airport management, is an independent entity with no biases on profit and reputation, it is significant for this paper to look into how a final judgment is made in this entity alone.

The causes of delays mainly point to both adverse weather and air traffic congestion [2]. Adverse weather is considered uncontrollable since it cannot be manipulated or even predicted. In the Philippines, the leading cause of delays is air traffic congestion, which accounts for 40% of total delays. When air traffic congestion is eradicated, or at least held to a minimum, related delays and resource congestion can be potentially minimized. Due to the fact that the physical resources defining air traffic congestion are limited, such as airports [3], there is a further need to minimize accumulated delays by implementing any of the available ATFM actions such as ground holding, airborne holding, rerouting, and speed controlling.

In order to deal with air traffic congestion, prior outstanding studies engaged in optimization models and learning algorithms framed within the context of directly implementing one or a combination of ATFM actions based on a single criterion alone (e.g., cost minimization). However, such techniques are not inclusive and fail to integrate the multi-criteria characteristic of the problem, which involves relationships inherent in the decision structure. A multi-criteria decision-making (MCDM) approach can illustrate a decision problem and present solutions according to a given set of interrelated or even conflicting criteria.

In the literature, MCDM is widely applied to air transportation systems, such as selecting the preferred alternative/candidate airport for building a new runway [4], evaluating airline service quality [5] and further improving it [6], evaluating airport service quality [7], and assessing potential multi-airport systems [8]. Further, results from the application of MCDM revealed a promising, significant impact on the full generality of the issues in air transportation systems.

978-1-5090-1671-6/16/$31.00 ©2016 IEEE



Despite the viability of solving decision management problems in the context of air transportation systems using MCDM, no paper has yet investigated which ATFM action is most suitable to be applied in the case of air traffic congestion. This is a significant problem to be solved, as its impact is widespread relative to the entire air transportation system, especially in a developing country like the Philippines.

The aim of this paper is to develop a multi-criteria decision support system that will guide the major decision-makers under ATS in selecting an ATFM action to be implemented during air traffic congestion. To achieve this objective, an integrated decision-making trial and evaluation laboratory (DEMATEL), analytic network process (ANP), and technique for order of preference by similarity to ideal solution (TOPSIS) is employed along with the concepts of fuzzy set theory. Such MCDM methods have the property of extracting the relationship of each criterion and ultimately arriving at an order of preference [4]. Therefore, the gap that is advanced in this paper is the application of an MCDM approach in the context of mitigating air traffic congestion. Finally, the contribution of this paper is twofold: (1) this is a novel paper that attempts to address air traffic congestion using hybrid MCDM methods; and (2) this paper takes into account the preferences of the decision-makers under the umbrella of ATS when it comes to deciding which ATFM action should be applied during air traffic congestion.

II. MCDM APPROACH BASED ON FUZZY DEMATEL, ANP, AND FUZZY TOPSIS

The proposed hybrid MCDM methodology closely resembles the hybrid methods developed in the literature that address decision problems in air transportation systems. However, since the issue of air traffic congestion has not yet been addressed using this approach, the application of the specific MCDM methods in the context of this paper is justified in the succeeding sub-sections.

Since a number of criteria are significant in the selection of different ATFM actions under different resource congestion conditions, and these criteria have inherent interrelationships, DEMATEL is used. With this method, the interrelationships present in the criteria set are determined, which provides insight into which specific criterion must be prioritized. It is also believed to be an effective procedure for analyzing structure and relationships among criteria [9]. That is, criteria with higher impact on others are given higher priority, while criteria with lower impact are considered to have lower priority. Once the relationship among criteria is known in terms of its impact, ANP can be integrated to compute the weights of each criterion. Saaty [10] proposed ANP to deal with dependence and feedback in decision-making; this method is widely applied to various decision management problems. Finally, for the prioritization of alternatives (i.e., ATFM actions), TOPSIS is used as it identifies rankings based on positive ideal solutions and negative ideal solutions simultaneously. Since the decision-makers are more comfortable eliciting judgment using linguistic variables, which can be represented by fuzzy numbers, fuzzy set theory is thus used. With fuzzy set theory, the uncertainty of human decision-making processes can be well handled. Further, it is emphasized by Kuo [6] that fuzzy numbers are best used when assessing qualitative data because they incorporate the vagueness, or technically the fuzziness, of human perception in decision-making. Considering that there may be qualitative data and criteria involved in the selection of the best alternative in this paper, the use of fuzzy numbers is deemed appropriate.

A. Evaluation criteria for mitigating air traffic congestion

This section highlights the criteria that may be associated with the mitigation of air traffic congestion. Since the application of MCDM in air traffic management is a new approach and has not yet been widely defined, the authors attempted to extract variables that can be taken as criteria in the context of this paper. These criteria are denoted as C1 through C12 and are described as follows:

• C1: Cost of using the flight routes
• C2: Landing/take-off fee
• C3: Fuel cost
• C4: Crew cost
• C5: Passenger cost
• C6: Customer goodwill
• C7: Safety
• C8: Equitable treatment of competing air carriers
• C9: Utilization of runway and terminal
• C10: Environmental value
• C11: Economic value
• C12: Social value

B. Alternatives involved

One general approach applied in the case of resource congestion is ATFM. Ozgur and Cavcar [11] comprehensively described ATFM as a service achieved by balancing demand and capacity on the elements of airspace by means of ground holding, airborne holding, rerouting, and speed controlling. ATFM further supports tactical interventions on the flow of aircraft at a given airport or in a network of airports [12]. These ATFM actions are denoted as A1 through A4 and are further defined below:

• A1: Ground holding
• A2: Airborne holding
• A3: Rerouting
• A4: Speed controlling

III. IMPLEMENTATION OF MCDM METHODS

A. The Case of Mactan Cebu International Airport (MCIA)

In the event of air traffic congestion at MCIA, major decision-makers communicate with one another and arrive at an ATFM action to be imposed to mitigate the condition. The decision is expected to be made in a short amount of time considering the impact it brings with respect to various criteria. For this paper, there are 12 criteria and 4 alternatives taken into account, as summarized in Section II-A and Section II-B, respectively. MCDM methods based on fuzzy DEMATEL, ANP, and fuzzy TOPSIS are used in this case, as described in the following section.

B. A step-by-step methodology of the hybrid approach

The initial step involves the participation of four decision-makers (i.e., the area control center head, the ATFM assistant, the aerodrome controller, and the terminal radar approach controller), whose pairwise comparisons in terms of influence among the criteria are made using a survey questionnaire.


TABLE 1. DESCRIPTION OF THE LINGUISTIC EXPRESSIONS FOR EVALUATING THE CRITERIA


Linguistic expression Description Triangular fuzzy number
No influence (NI) Base criterion has no influence to the other criterion (0.0, 0.1, 0.3)
Very low influence (VLI) Base criterion has very low influence compared to the other criterion (0.1, 0.3, 0.5)
Low influence (LI) Base criterion has low influence compared to the other criterion (0.3, 0.5, 0.7)
High influence (HI) Base criterion has high influence compared to the other criterion (0.5, 0.7, 0.9)
Very high influence (VHI) Base criterion has very high influence compared to the other criterion (0.7, 0.9, 1.0)

TABLE 2. DESCRIPTION OF THE LINGUISTIC EXPRESSIONS FOR EVALUATING ALTERNATIVES WITH RESPECT TO THE CRITERIA
Linguistic expression Description Triangular fuzzy number
Very good (VG) Performance of such alternative has very huge impact to the criterion (7, 9, 10)
Good (G) Performance of such alternative has huge impact to the criterion (5, 7, 9)
Fair (F) Performance of such alternative has fair impact to the criterion (3, 5, 7)
Poor (P) Performance of such alternative has slight impact to the criterion (1, 3, 5)
Very poor (VP) Performance of such alternative has no impact at all to the criterion (0, 1, 3)
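For readers who wish to reproduce the computation, the two linguistic scales above translate directly into lookup tables. The following minimal Python sketch is our own illustration; the variable names are not from the paper:

```python
# Triangular fuzzy numbers (l, m, u) for the two linguistic scales.

# Table 1: influence scale for pairwise comparisons among criteria.
INFLUENCE_SCALE = {
    "NI":  (0.0, 0.1, 0.3),   # no influence
    "VLI": (0.1, 0.3, 0.5),   # very low influence
    "LI":  (0.3, 0.5, 0.7),   # low influence
    "HI":  (0.5, 0.7, 0.9),   # high influence
    "VHI": (0.7, 0.9, 1.0),   # very high influence
}

# Table 2: performance scale for rating alternatives against criteria.
PERFORMANCE_SCALE = {
    "VP": (0, 1, 3),    # very poor
    "P":  (1, 3, 5),    # poor
    "F":  (3, 5, 7),    # fair
    "G":  (5, 7, 9),    # good
    "VG": (7, 9, 10),   # very good
}
```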

These decision-makers' expertise is considered essential in eliciting judgment in order to mitigate air traffic congestion. The set of linguistic expressions used in making pairwise comparisons among criteria, along with their corresponding triangular fuzzy numbers, is shown in Table 1. The fuzzy DEMATEL and ANP methods are described below.

Step 1: Aggregate linguistic values from the decision-makers' evaluation. This step is done in order to get the aggregated fuzzy linguistic values of the pairwise comparisons among criteria $C_j$. The $K$ decision-makers' judgments are aggregated as in (1) through (3):

$a_{ij}^{l} = \min_{k} \{ a_{ijk}^{l} \}$   (1)

$a_{ij}^{m} = \left( \prod_{k=1}^{K} a_{ijk}^{m} \right)^{1/K}$   (2)

$a_{ij}^{u} = \max_{k} \{ a_{ijk}^{u} \}$   (3)

Step 2: Defuzzify the corresponding linguistic values by means of the signed distance method. Based on the results of the survey, the triangular fuzzy numbers need to be defuzzified first in order to get the crisp value of each decision-maker's preference. For this paper, the signed distance of a triangular fuzzy number $\tilde{A} = (l, m, u)$ is employed as the defuzzification method, as shown in (4). Based on the maximum membership grade principle, this approach is proven better than the most common centroid defuzzification method [13]. Once defuzzified, these values serve as input to the succeeding steps.

$d(\tilde{A}, 0) = \dfrac{l + 2m + u}{4}$   (4)

Step 3: Apply the DEMATEL and ANP methods according to Tzeng et al. [14] by calculating the direct-influence matrix by scores. An assessment of the mutual influence relationship between each pair of criteria is made according to the opinions of the decision-makers, using the linguistic rating scale shown in Table 1, with scores represented in natural language: 'no influence' (NI), 'very low influence' (VLI), 'low influence' (LI), 'high influence' (HI), and 'very high influence' (VHI). The decision-makers are required to indicate the direct influence by pairwise comparison, and if they believe that criterion $i$ has an effect and influence on criterion $j$, they indicate this by $g_{ij}$. Thus, the matrix $G = [g_{ij}]_{n \times n}$ of direct relationships can be obtained.

Step 4: Normalize the direct-influence matrix $G$. The normalized matrix $X$ is acquired by using (5). Its diagonal is zero, and the maximum sum of rows or columns is one.

$X = vG$   (5)

where

$v = \min_{i,j} \left\{ \dfrac{1}{\max_i \sum_{j=1}^{n} g_{ij}},\ \dfrac{1}{\max_j \sum_{i=1}^{n} g_{ij}} \right\}, \quad i, j \in \{1, 2, \ldots, n\}$

Step 5: Attain the total-influence matrix $T_c$. When the normalized direct-influence matrix $X$ is obtained, the total-influence matrix $T_c$ can be generated from (6), in which $I$ denotes the identity matrix:

$T_c = X(I - X)^{-1}$, where $\lim_{\ell \to \infty} X^{\ell} = [0]_{n \times n}$   (6)

Step 6: Analyze the results. The matrix components are separately expressed as vectors $r$ and $s$, respectively, using (7) and (8). When $(r_i - s_i)$ is positive, the criterion is part of the net cause cluster; otherwise, it is part of the net effect cluster. An influential network relations map can be created by plotting the data set $(r_i + s_i,\ r_i - s_i)$.

$T_c = [t_c^{ij}]_{n \times n}, \quad i, j \in \{1, 2, \ldots, n\}$

$r = \left[ \sum_{j=1}^{n} t_c^{ij} \right]_{n \times 1} = (r_1, \ldots, r_i, \ldots, r_n)'$   (7)

$s = \left[ \sum_{i=1}^{n} t_c^{ij} \right]'_{1 \times n} = (s_1, \ldots, s_i, \ldots, s_n)'$   (8)

where vectors $r$ and $s$ express the sum of the rows and the sum of the columns of the total-influence matrix $T_c = [t_c^{ij}]_{n \times n}$, respectively, and the prime denotes the transpose.

Step 7: Find the normalized total-influence matrix $T_c^{nor}$. The total-influence matrix can be normalized and presented as $W = [w_{ij}]_{n \times n}$, where $w_{ij} = t^{ij} / t^{j}$ and $t^{j} = \sum_{i=1}^{n} t^{ij}$. For the case of this paper, the normalized total-influence matrix $T_c^{nor}$ also represents the weighted supermatrix $W_C^{*}$.
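Steps 1 through 7, together with the limiting operation of Step 8 described next, reduce to a few lines of linear algebra. The sketch below is our own illustration, not the authors' code; it assumes the survey judgments have already been mapped to triangular fuzzy numbers via the scales above:

```python
import numpy as np

def aggregate_tfn(ratings):
    """Eqs. (1)-(3): aggregate K decision-makers' TFNs (l, m, u) by the min
    of lower bounds, geometric mean of modes, and max of upper bounds."""
    ls, ms, us = zip(*ratings)
    K = len(ratings)
    return (min(ls), float(np.prod(ms)) ** (1.0 / K), max(us))

def signed_distance(tfn):
    """Eq. (4): defuzzify a TFN by its signed distance, (l + 2m + u) / 4."""
    l, m, u = tfn
    return (l + 2.0 * m + u) / 4.0

def dematel_anp_weights(G, power=100):
    """Steps 4-8 on the crisp direct-influence matrix G (n x n, zero
    diagonal). Returns the row/column sums r, s and the criterion weights."""
    G = np.asarray(G, dtype=float)
    v = 1.0 / max(G.sum(axis=1).max(), G.sum(axis=0).max())  # Eq. (5)
    X = v * G                                                # normalized matrix
    T = X @ np.linalg.inv(np.eye(len(G)) - X)                # Eq. (6)
    r, s = T.sum(axis=1), T.sum(axis=0)                      # Eqs. (7)-(8)
    W = T / T.sum(axis=0, keepdims=True)                     # Step 7: column-normalize
    w = np.linalg.matrix_power(W, power)[:, 0]               # Step 8: limit supermatrix
    return r, s, w
```

Applied to the defuzzified 12 x 12 direct-influence matrix, the outputs of dematel_anp_weights would correspond to the r and s sums and the weight column reported later in Table 3.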


Step 8: Obtain the DEMATEL-ANP supermatrix. Limit the weighted supermatrix by raising it to a sufficiently large power $\varphi$ until it converges and becomes a long-term stable supermatrix, to obtain the global priority vector, which defines the influential weights $w = (w_1, \ldots, w_j, \ldots, w_n)$ of the criteria from $\lim_{\varphi \to \infty} (W_C^{*})^{\varphi}$.

Before we proceed with the next step, it is important to generate the evaluation from the same roster of decision-makers using another set of survey questionnaires. The performance of each alternative with respect to each criterion is rated using the linguistic expressions 'very good' (VG), 'good' (G), 'fair' (F), 'poor' (P), and 'very poor' (VP). Table 2 provides the description of these linguistic expressions and their corresponding triangular fuzzy numbers.

The method of fuzzy TOPSIS is presented in the following steps, in continuation of the previously accomplished fuzzy DEMATEL and ANP methods:

Step 9: Find the aggregated fuzzy weight [9]. Aggregate the weights of criteria to get the aggregated fuzzy weight $\tilde{w}_j$ of criterion $C_j$, and pool the decision-makers' opinions to get the aggregated fuzzy rating $\tilde{x}_{ij}$ of alternative $A_i$ under criterion $C_j$. Recall that the weights of criteria are obtained as scalar quantities from the application of fuzzy DEMATEL and ANP. The equations used in aggregating the decisions are shown in (1) through (3).

Step 10: Construct the fuzzy decision matrix and the normalized fuzzy decision matrix. Normalizing fuzzy numbers is accomplished by using the linear scale transformation represented by (9) and (10) to convert the different units into a comparable unit.

$\tilde{r}_{ij} = \left( \dfrac{l_{ij}}{u_j^{+}},\ \dfrac{m_{ij}}{u_j^{+}},\ \dfrac{u_{ij}}{u_j^{+}} \right); \quad u_j^{+} = \max_i u_{ij}$ (benefit criteria)   (9)

$\tilde{r}_{ij} = \left( \dfrac{l_j^{-}}{u_{ij}},\ \dfrac{l_j^{-}}{m_{ij}},\ \dfrac{l_j^{-}}{l_{ij}} \right); \quad l_j^{-} = \min_i l_{ij}$ (cost criteria)   (10)

As mentioned, $l$, $m$, and $u$ are the smallest possible value, the most promising value, and the largest possible value, respectively. For benefit criteria, the larger $\tilde{r}_{ij}$ is, the greater the preference; for cost criteria, the smaller $\tilde{r}_{ij}$ is, the greater the preference.

To continue, we calculate the normalized fuzzy decision matrix by using (11):

$\tilde{R} = [\tilde{r}_{ij}]_{n \times m}$   (11)

where $\tilde{r}_{ij}$ is the normalized value of $\tilde{x}_{ij} = (l_{ij}, m_{ij}, u_{ij})$.

Step 11: Construct the weighted normalized fuzzy decision matrix. Calculate the weighted normalized value $\tilde{v}_{ij}$ by multiplying the weights $w_j$ of the criteria with the normalized fuzzy decision matrix $\tilde{r}_{ij}$. The weighted normalized decision matrix $\tilde{V}$ for each criterion is calculated through the following relation:

$\tilde{V} = [w_j \tilde{r}_{ij}] = [\tilde{v}_{ij}], \quad i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n$   (12)

In this matrix, each element $\tilde{v}_{ij}$ is a fuzzy normalized number which ranges within the closed interval [0, 1].

Step 12: Determine the fuzzy positive ideal solution and the fuzzy negative ideal solution. Obtain the fuzzy positive ideal solution $A^{+}$ and fuzzy negative ideal solution $A^{-}$ as in:

$A^{+} = (\tilde{v}_1^{+}, \tilde{v}_2^{+}, \tilde{v}_3^{+}, \ldots, \tilde{v}_n^{+}) = \left\{ \max_i v_{ij} \mid i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n \right\}$   (13)

$A^{-} = (\tilde{v}_1^{-}, \tilde{v}_2^{-}, \tilde{v}_3^{-}, \ldots, \tilde{v}_n^{-}) = \left\{ \min_i v_{ij} \mid i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n \right\}$   (14)

Step 13: Calculate the distance of each alternative from the fuzzy positive ideal solution and the fuzzy negative ideal solution, respectively. Acquire the distance of each alternative from $A^{+}$ and $A^{-}$ by using (15) and (16):

$d_i^{+} = \sum_{j=1}^{n} d_v(\tilde{v}_{ij}, \tilde{v}_j^{+})$   (15)

$d_i^{-} = \sum_{j=1}^{n} d_v(\tilde{v}_{ij}, \tilde{v}_j^{-})$   (16)

where $d_i^{+}$ and $d_i^{-}$ are the primary and secondary distance measures, respectively. The distance between two triangular fuzzy numbers $(l_1, m_1, u_1)$ and $(l_2, m_2, u_2)$ can be calculated by the vertex method as follows:

$d_v(\tilde{m}, \tilde{n}) = \sqrt{ \tfrac{1}{3} \left[ (l_1 - l_2)^2 + (m_1 - m_2)^2 + (u_1 - u_2)^2 \right] }$   (17)

Step 14: Calculate the closeness coefficient $C_c$ of each alternative and rank the alternatives. The $C_c$ takes into account $d_i^{+}$ and $d_i^{-}$ simultaneously. The relative $C_c$ index of each alternative with respect to the fuzzy positive ideal solution is obtained as:

$C_c = \dfrac{d_i^{-}}{d_i^{+} + d_i^{-}}$   (18)

Rank the alternatives using the $C_c$ index in decreasing order. The larger the index value, the better the performance of the alternative with respect to each criterion.

IV. RESULTS AND DISCUSSIONS

In the application of the MCDM approach based on fuzzy DEMATEL, ANP, and fuzzy TOPSIS, the following key results are obtained using Microsoft Excel 2016. For the first part of the methodology, involving fuzzy DEMATEL and ANP, Table 3 presents the matrix components expressed as vectors $r$ and $s$, respectively. Further evaluation of the vector sums shows that C7 through C12 (i.e., safety, equitable treatment of competing air carriers, utilization of runway and terminal, environmental value, economic value, and social value, respectively) are part of the net cause cluster; this implies that these criteria influence the priorities given to all other criteria. The remaining criteria, C1 through C6 (i.e., cost of using the flight routes, landing/take-off fee, fuel cost, crew cost, passenger cost, and customer goodwill, respectively), belong to the net effect cluster. Note that the criteria that fall under this cluster pertain dominantly to the tangible and intangible costs associated with air traffic congestion, which are remarkably of main interest to the airlines management rather than to ATS. In principle, the results make sense relative to the low influence perceived by the ATS providers for these criteria.
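The fuzzy TOPSIS chain of Steps 9-14 (Eqs. (9)-(18)) condenses similarly. The following is a hedged sketch under our own naming, not the authors' implementation; it assumes the alternative ratings have already been aggregated with Eqs. (1)-(3):

```python
import numpy as np

def fuzzy_topsis(ratings, weights, benefit):
    """Steps 9-14 on aggregated TFN ratings.
    ratings: (m alternatives x n criteria x 3) array of TFNs (l, m, u)
    weights: length-n crisp weights from fuzzy DEMATEL-ANP
    benefit: length-n booleans; True = benefit criterion, False = cost."""
    R = np.asarray(ratings, dtype=float)
    m, n, _ = R.shape
    V = np.empty_like(R)
    for j in range(n):
        if benefit[j]:                       # Eq. (9): divide by max upper bound
            V[:, j, :] = R[:, j, :] / R[:, j, 2].max()
        else:                                # Eq. (10): min lower bound over (u, m, l)
            V[:, j, :] = R[:, j, 0].min() / R[:, j, ::-1]
        V[:, j, :] *= weights[j]             # Eq. (12): weighted normalized TFNs
    # Eqs. (13)-(14): element-wise fuzzy positive/negative ideal solutions
    A_pos, A_neg = V.max(axis=0), V.min(axis=0)
    def vertex(a, b):                        # Eq. (17): vertex distance of two TFNs
        return np.sqrt(((a - b) ** 2).sum(axis=-1) / 3.0)
    d_pos = vertex(V, A_pos).sum(axis=1)     # Eq. (15)
    d_neg = vertex(V, A_neg).sum(axis=1)     # Eq. (16)
    return d_neg / (d_pos + d_neg)           # Eq. (18): closeness coefficients
```

As a quick sanity check against Table 4 below: for alternative A1, Eq. (18) gives 0.62 / (0.35 + 0.62) ≈ 0.64, which matches the reported closeness coefficient.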


TABLE 3. CHARACTERISTICS OF CRITERIA AND THEIR CORRESPONDING WEIGHTS

Criteria   Vector r (sum)   Vector s (sum)   (r_i − s_i)   Weight
C1         3.41             3.95             −0.53         0.073
C2         4.21             4.70             −0.49         0.090
C3         3.19             3.46             −0.27         0.068
C4         3.21             3.72             −0.51         0.068
C5         3.11             4.03             −0.92         0.066
C6         3.87             4.29             −0.42         0.081
C7         4.78             3.93             0.85          0.101
C8         4.41             3.34             1.07          0.093
C9         4.46             4.04             0.42          0.094
C10        3.76             3.08             0.68          0.079
C11        4.50             4.44             0.07          0.095
C12        4.36             4.30             0.06          0.092

In the integration of the results from fuzzy DEMATEL and ANP, a final set of weights for each criterion is obtained, as shown in the rightmost column of Table 3. The results revealed that ATS providers are most concerned with the aspect of safety (C7), which follows from the main function of the entity: expediting and maintaining an orderly flow of air traffic. Correspondingly, flight route cost (C1), fuel cost (C3), crew cost (C4), and passenger cost (C5) are given the least priority, as these criteria are not charged to their entity at all but to the airlines as required. These weights, once applied in fuzzy TOPSIS, generate the key results summarized in Table 4. Note that the distance of each alternative from the fuzzy positive ideal solution ($d_i^{+}$) and the fuzzy negative ideal solution ($d_i^{-}$), along with its corresponding closeness coefficient ($C_c$), determines the ranking of each alternative. The results showed that ground holding (A1) is the most suitable ATFM action to be applied during air traffic congestion. With respect to the ATS's views on safety implications, as numerically justified using the weights from the fuzzy DEMATEL-ANP method, this alternative is also believed to be the safest to perform. This preference is then followed by the possible implementation of speed controlling (A4) and rerouting (A3). In contrast, airborne holding (A2) ranks last among all alternatives, with the belief that this ATFM action is the least safe to implement.

TABLE 4. RANKING OF ALTERNATIVES USING THE CLOSENESS COEFFICIENT

Alternative   d_i^+   d_i^-   C_c    Rank
A1            0.35    0.62    0.64   1
A2            0.53    0.28    0.35   4
A3            0.42    0.42    0.50   3
A4            0.36    0.47    0.56   2

V. CONCLUSIONS

This paper presented the application of an MCDM approach based on fuzzy DEMATEL, ANP, and fuzzy TOPSIS for mitigating air traffic congestion at Mactan Cebu International Airport (MCIA). Since the application of MCDM in the context of mitigating air traffic congestion is novel, the criteria and alternatives involved were carefully extracted from the literature and were further evaluated by expert decision-makers from ATS. The results of the hybrid methodology revealed that ATS providers give the most importance to the safety aspect of air transportation, thus listing ground holding as the topmost preferred ATFM action to be implemented during air traffic congestion. As an extension of this paper, the authors suggest broadening the generality of the scope. While the output is sensitive only to the judgment made by ATS, other decision-makers, consisting of the airlines management and airport management, who may also influence the elicited decision during air traffic congestion, may be included in the roster of entities involved. Through this, a collaborative decision-making approach can be achieved, as it is practically applied in reality.

ACKNOWLEDGMENT

The first author wishes to thank the Engineering Research and Development for Technology (ERDT) of the Philippine Department of Science and Technology (DOST) for the financial support provided through the author's full graduate scholarship grant.

REFERENCES

[1] Gao, M., Chi, H., Hu, Y., & Xu, B. (2012). Models responding to large-area flight delays in aviation production engineering. Systems Engineering Procedia, Vol. 5, pp. 68–73.
[2] Jungai, T., & Hongjun, X. (2012). Optimizing arrival flight delay scheduling based on simulated annealing algorithm. Physics Procedia, Vol. 33, pp. 348–353.
[3] Barnhart, C., & Vaze, V. (2012). Modeling airline frequency competition for airport congestion mitigation. Transportation Science, Vol. 46, No. 4, pp. 512–535.
[4] Janic, M. (2015). A multi-criteria evaluation of solutions and alternatives for matching capacity to demand in an airport system: the case of London. Transportation Planning and Technology, Vol. 38, No. 7, pp. 709–737.
[5] Tsaur, S.-H., Chang, T.-Y., & Yen, C.-H. (2002). The evaluation of airline service quality by fuzzy MCDM. Tourism Management, Vol. 23, pp. 107–115.
[6] Kuo, M. S. (2011). A novel interval-valued fuzzy MCDM method for improving airlines' service quality in Chinese cross-strait airlines. Transportation Research Part E, Vol. 47, pp. 1177–1193.
[7] Kuo, M., & Liang, G. (2011). Combining VIKOR with GRA techniques to evaluate service quality of airports under fuzzy environment. Expert Systems with Applications, Vol. 38, No. 3, pp. 1304–1312.
[8] Zietsman, D., & Vanderschuren, M. (2014). Analytic Hierarchy Process assessment for potential multi-airport systems: The case of Cape Town. Journal of Air Transport Management, Vol. 36, pp. 41–49.
[9] Chen, C. (2000). Extensions of the TOPSIS for group decision-making under fuzzy environment. Fuzzy Sets and Systems, Vol. 114, pp. 1–9.
[10] Saaty, T. L. (1996). Decision Making with Dependence and Feedback: The Analytic Network Process. Pittsburgh: RWS Publications.
[11] Ozgur, M., & Cavcar, A. (2014). 0–1 integer programming model for procedural separation of aircraft by ground holding in ATFM. Aerospace Science and Technology, Vol. 33, pp. 1–8.
[12] Barnhart, C., Bertsimas, D., Caramanis, C., & Fearing, D. (2012). Equitable and efficient coordination in traffic flow management. Transportation Science, Vol. 46, No. 2, pp. 262–280.
[13] Lin, L., & Lee, H.-M. (2010). Fuzzy assessment for sampling survey defuzzification by signed distance method. Expert Systems with Applications, Vol. 37, pp. 7852–7857.
[14] Chiu, W.-Y., Tzeng, G.-H., & Li, H.-L. (2013). A new hybrid MCDM model combining DANP with VIKOR to improve e-store business. Knowledge-Based Systems, Vol. 37, pp. 48–61.


Constrained Maximization of Social Welfare with Fiscal Transfer Scheme

Shinji Suzuki and Takehiro Inohara
Graduate School of Decision Science and Technology, Tokyo Institute of Technology
2-12-1 Ookayama, Meguro-ku, Tokyo, Japan
suzuki.s.cd@m.titech.ac.jp, inohara.t.aa@m.titech.ac.jp

Abstract—This study presents a fiscal transfer scheme in a two-tier governmental system in a nation consisting of a national government and municipalities. A benevolent national government tries to maximize Benthamite social welfare considering the voluntary reorganization of municipalities. To achieve this purpose, the national government offers a different amount of fiscal transfer to each combination of municipal merger. We show that Pareto optimality is achievable through the reorganization of municipalities.

Index Terms—Fiscal Transfer; Public Goods; Municipal Merger; Coalition Formation

I. INTRODUCTION

Social welfare is not easily maximized when public good provision is decided by majority voting. As most countries have a number of local governments besides the national government, maximized social welfare is hardly achieved in the whole nation. In this study, we explore an incentive scheme of fiscal transfer from a national government to local governments which achieves social welfare maximization and Pareto optimality. We model the fiscal relationship in a two-tier governmental system in a nation consisting of a national government and municipalities. We consider a game in which a benevolent national government tries to maximize social welfare by controlling both municipal taxation and municipal mergers. More concretely, the national government simultaneously announces the national tax rate and offers fiscal transfers to all the possible combinations of municipal merger. That is, each combination of merger (and each municipality which does not participate in any municipal merger) is given a different amount of fiscal transfer by the national government.

Considering municipal merger besides taxation and fiscal transfer has realistic validity because the promotion of municipal mergers (or the reorganization of local governments) has been conducted in the Scandinavian countries, Japan, and elsewhere as a method of promoting administrative efficiency. A number of precedent studies in public economics, including [5], analyze fiscal transfer between upper-tier and lower-tier governments. Nevertheless, most of them do not consider the possibility of voluntary integrations among lower-tier governments. [7] and [10] are exceptions that consider the effect of transfer together with the possibility of integration (or disintegration) of local government. [7] show a political economy model of a country whose citizens have heterogeneous preferences over national policy and in which some regions consider secession from the country. They demonstrate that there is a critical degree of polarization, above which a breakup of an efficient country cannot be prevented without transfers, but can be avoided using them. [10] considered an optimal incentive scheme for municipal merger between two municipalities, in which the national government offers transfers to municipalities whose amount depends on whether they chose merger or not. His model implies that the national government rationally desires to offer an incentive to a municipality when it will change its decision, resulting in an achievement of merger.

On the other hand, our study offers an incentive scheme in a generalized situation where the number of local governments is $n$ and municipal decisions regarding public good provision and the choice of merger combination are conducted by voting. Therefore, our study considers the possibility of municipal mergers through a real political process; approvals of the related municipalities are necessary for achieving merger. We present "a politically achievable optimal solution" of a policy vector consisting of the national tax rate, local tax rates, a partition of municipalities, and fiscal transfers.

The structure of this paper is as follows. In Section 2, we present the fundamental structure of the model. In Section 3, we show the game of municipal reorganization and analyze the result of the game. In Section 4, concluding remarks are given.

II. FUNDAMENTAL STRUCTURE

In this section, we explain the fundamental structure of the model. Firstly, we define the individual utility function, consisting of private consumption, the national public good, and the local public good. Then, we explain the ordinary fiscal process in which the national government does not consider the promotion of municipal merger.

A. Individual Utility

We assume a two-tier administrative system in a nation: the national government (hereafter, NG) and municipalities. There are $n$ municipalities in the nation, and the set of municipalities is denoted by $N$. The objective of NG is to maximize social welfare. On the other hand, at the municipal level, we assume that the provision of the local public good is decided


by majority voting and that the median incomer is the decisive voter regarding public good provision. Thus, hereafter we call the median incomer in i "the median voter in municipality i". Regarding the validity of applying the median voter theorem formalized by [4], see [9]. With respect to the social welfare function, we adopt the Benthamite social welfare function. Thus, we define individual utility in advance of the explanation of the game, because the national government's utility consists of the sum of the utilities of the individuals living in the nation and each municipal utility coincides with that of its median voter.

An individual j in a municipality i is denoted by j_i and she has the following utility

$$U_{ij}(c_{ij}, G, g_i) \equiv \alpha_1 c_{ij}^{\beta} + \alpha_2 G^{\beta} + \alpha_3 g_i^{\beta} \qquad (1)$$

where (i) α1, α2, α3 ∈ (0, 1) with α1 + α2 + α3 = 1 are parameters which signify the weighted preference between the private good, public good 1 and public good 2, (ii) β ∈ (0, 1) is a parameter which signifies that marginal utility is declining, (iii) c_ij denotes private consumption, (iv) G denotes consumption of public good 1, and (v) g_i denotes consumption of public good 2.

The provided quantities of public good 1 and public good 2 are each consumed completely. Public good 1 benefits the whole area of the nation; thus, its production and provision are conducted by NG. On the other hand, public good 2 is produced by each municipality and is provided only to the individuals living in that municipality. Public good 1 is financed by taxing every individual in the whole nation at the income-proportional tax rate T. Public good 2 is produced by using i's own revenue source and the fiscal transfer offered by NG. Regarding i's own revenue source, it is financed by taxing every inhabitant in i at the income-proportional tax rate t_i.

Here, we explain the derivation of G. First of all, let Q be the production of public good 1. Q can be written as a production function which signifies the relation between a monetary input I and the fixed cost γ1 for producing a unit of the public good. That is, Q ≡ I/γ1. We assume that an additional cost P^δ is generated when providing the public good equally to all the inhabitants in the nation, where P is the population of the nation and δ ∈ (0, 1) is a parameter. We use P^δ as a proxy variable for the size of the land area of a government, because the land area of a government generally increases as its population increases, and the expansion of the area generally increases the per-unit cost of public good provision; however, introducing the size of the area as a variable would make the analysis more complicated.

Let y be the national average personal income. Fiscal transfer to the municipalities by NG is financed by using a part of the national revenue source. Let F({i}) ≥ 0 be the quantity of fiscal transfer to municipality i. Then, we have I = TPy − Σ_{i∈N} F({i}). As a consequence, we can express G as follows:

$$G = \frac{Q}{P^{\delta}} = \frac{TPy - \sum_{i \in N} F(\{i\})}{\gamma_1 P^{\delta}} \qquad (2)$$

Similarly, let us explain the derivation of g_i. First of all, let q_i be the production of public good 2 in municipality i. Let I_i be the monetary input for producing public good 2 and let γ2 with 0 < γ2 < γ1 be the fixed cost for producing a unit of public good 2. Then, q_i can be written as a production function q_i ≡ I_i/γ2. As for public good 1, we assume that an additional cost p_i^δ̂ is generated when providing the public good equally to all the inhabitants in the municipality, where p_i is the population of i and δ̂ ∈ (0, 1) such that δ̂ > δ is a parameter. Let y_i be the average personal income in i. Then, as each i is given the fiscal transfer F({i}) from the national government, we can express g_i as

$$g_i = \frac{q_i}{p_i^{\hat{\delta}}} = \frac{t_i p_i y_i + F(\{i\})}{\gamma_2\, p_i^{\hat{\delta}}} \qquad (3)$$

B. Ordinary Political Process

If we consider the political process for the provision of public goods 1 and 2 without municipal mergers, it is adequate for us to express it as follows.

1) NG decides the national tax rate T.
2) Each municipality decides its own local tax rate t_i.
3) NG decides the amount of fiscal transfer F({i}) to each municipality.

After the third stage has finished, G is provided to all individuals in the nation and g_i is provided to all individuals in i. It is adequate for us to set that the amount of F({i}) is decided in the third stage because this prevents a moral hazard of municipalities under which setting t_i = 0 would be optimal for all i ∈ N.

Let Λ_i be the set of individuals living in municipality i. For all i ∈ N and for all j ∈ Λ_i, j_i's private budget constraint is expressed by c_ij ≤ (1 − T − t_i)y_ij. As every individual tries to maximize her utility, c_ij = (1 − T − t_i)y_ij obviously holds. Let m_i denote the median voter in i and let t*_im be the tax rate preferred by m_i. Then, t*_im coincides with the municipal tax rate in i.

Let U_ij[T, i, t*_im, F({i})] denote the utility of j_i when T, t*_im and F({i}) are all achieved in the pre-merger situation. We have

$$U_{ij}\left[T, i, t_{im}^{*}, F(\{i\})\right] = \alpha_1\left[(1 - T - t_{im}^{*})\,y_{ij}\right]^{\beta} + \alpha_2\left[\frac{TPy - \sum_{i \in N} F(\{i\})}{\gamma_1 P^{\delta}}\right]^{\beta} + \alpha_3\left[\frac{t_{im}^{*}\, p_i y_i + F(\{i\})}{\gamma_2\, p_i^{\hat{\delta}}}\right]^{\beta} \qquad (4)$$
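To make the model's accounting concrete, the following sketch (ours, not the authors'; the parameter values are purely hypothetical and chosen only to respect the stated restrictions α1 + α2 + α3 = 1, β ∈ (0, 1), 0 < γ2 < γ1 and δ̂ > δ) evaluates eqs. (1)-(3) for a two-municipality nation:

    # Illustrative sketch: evaluating the utility components of eqs. (1)-(3)
    # for hypothetical parameter values (not taken from the paper).

    def G(T, P, y, transfers, gamma1, delta):
        """Public good 1, eq. (2): national revenue net of all transfers,
        divided by the unit cost gamma1 and the congestion proxy P**delta."""
        return (T * P * y - sum(transfers.values())) / (gamma1 * P**delta)

    def g(t_i, p_i, y_i, F_i, gamma2, delta_hat):
        """Public good 2 in municipality i, eq. (3)."""
        return (t_i * p_i * y_i + F_i) / (gamma2 * p_i**delta_hat)

    def U(c, G_val, g_val, a1, a2, a3, beta):
        """Individual utility, eq. (1)."""
        return a1 * c**beta + a2 * G_val**beta + a3 * g_val**beta

    P, y = 1000, 1.0                      # national population and average income
    pops, incomes = {1: 600, 2: 400}, {1: 1.1, 2: 0.85}
    T, rates = 0.10, {1: 0.05, 2: 0.08}   # national and local tax rates
    transfers = {1: 5.0, 2: 8.0}          # F({i}) for each municipality
    G_val = G(T, P, y, transfers, gamma1=2.0, delta=0.5)
    for i in pops:
        g_val = g(rates[i], pops[i], incomes[i], transfers[i],
                  gamma2=1.6, delta_hat=0.6)   # gamma2 < gamma1, delta_hat > delta
        c = (1 - T - rates[i]) * incomes[i]    # binding budget constraint
        print(i, U(c, G_val, g_val, a1=0.4, a2=0.3, a3=0.3, beta=0.5))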


III. THE GAME

In this section, we firstly explain the structure of the game in which municipal reorganization is induced by a fiscal transfer scheme offered by NG. Then, we analyze the result of the game.

A. Set Up of the Game

Firstly, we explain the individual utility after a municipal reorganization is realized. S ⊆ N can be regarded as a municipal coalition: S can be a municipality newly established by merger, or a municipality which remains independent because it did not participate in any merger. The transfer that NG offers to each S is denoted by F(S). We regard each F(S) as non-negative because introducing a negative fiscal transfer would not be supported by the municipalities. Moreover, for all S ⊆ N, F(S) ≤ TPy must hold. If S = {i}, F(S) = F({i}). Thus, we can regard F as a function 2^N → ℜ₊. Next, let Θ_k be a partition of N indexed by k and let Θ be the set of all Θ_k. Moreover, let us define Θ_k ≡ {S_kl}_{l∈L}, where L denotes the index set of l, and let K be the index set of k. NG's budget constraint requires that, for all k ∈ K, Σ_{l∈L} F(S_kl) ∈ [0, TPy] must hold. The provision (consumption) of public good 1 when the reorganization of municipalities resulting in Θ_k is realized can be denoted by

$$G = \frac{TPy - \sum_{l \in L} F(S_{kl})}{\gamma_1 P^{\delta}}.$$
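Because every m_i picks a merger combination simultaneously, the possible outcomes of the second stage described below are exactly the partitions Θ_k of N. A minimal sketch (ours) enumerating them for a three-municipality nation N = {A, B, C}:

    # Illustrative sketch (ours): enumerating every partition Theta_k of N,
    # i.e. every municipal structure that reorganization can produce.

    def partitions(elements):
        """Yield all partitions of a list as lists of blocks S_kl."""
        if not elements:
            yield []
            return
        first, rest = elements[0], elements[1:]
        for smaller in partitions(rest):
            # put `first` into each existing block in turn ...
            for idx, block in enumerate(smaller):
                yield smaller[:idx] + [[first] + block] + smaller[idx + 1:]
            # ... or leave it as the singleton {i} (no merger for i)
            yield [[first]] + smaller

    for theta in partitions(["A", "B", "C"]):
        print(theta)
    # Five partitions for n = 3; the count grows as the Bell number B_n,
    # while the transfer scheme must price all 2**n - 1 coalitions S.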
Finally, let us consider the municipal public good consumption after the reorganization of municipalities. Let t_kl be the post-reorganization tax rate, p_kl the post-reorganization population, and y_kl the post-reorganization average income in some S_kl. For each i ∈ S_kl, if |S_kl| ≥ 2, the post-reorganization values mean the post-merger values. On the other hand, if S_kl = {i}, then p_kl = p_i and y_kl = y_i, because these values are not influenced by the fiscal transfer scheme and the municipal reorganization.

Then, the provision (consumption) of public good 2 when S_kl is realized can be denoted by

$$g_{kl} = \frac{t_{kl}\, p_{kl}\, y_{kl} + F(S_{kl})}{\gamma_2\, p_{kl}^{\hat{\delta}}}.$$

As a consequence, the indirect utility of j_i such that i ∈ S_kl when S_kl ∈ Θ_k is realized can be written as

$$U_{ij}\left[T, S_{kl}, t_{kl}, F(S_{kl})\right] = \alpha_1\left[(1 - T - t_{kl})\,y_{ij}\right]^{\beta} + \alpha_2\left[\frac{TPy - \sum_{l \in L} F(S_{kl})}{\gamma_1 P^{\delta}}\right]^{\beta} + \alpha_3\left[\frac{t_{kl}\, p_{kl}\, y_{kl} + F(S_{kl})}{\gamma_2\, p_{kl}^{\hat{\delta}}}\right]^{\beta} \qquad (5)$$

B. The Structure of the Game

Next, let us explain the game of the political process for municipal reorganization. Here, we consider that the amount of fiscal transfer to each S ⊆ N is decided in the first stage simultaneously with the national tax rate, because this set-up offers an incentive for municipal merger to the municipalities. This is the difference from the ordinary budgetary process.

We regard each m_i as the decisive voter in i regarding the proposals for g_i, t_i and municipal merger and separation, as well as the decision regarding local public good provision. With respect to the validity of this setting, see [6] and [8].

Let Ŝ denote a coalition such that Ŝ ⊆ N and |Ŝ| ≥ 2. Thus, Ŝ expresses a combination of merger, or a newly-established government. Each m_i with i ∈ N seeks the best merger combination for her, from which there is no incentive to deviate. It is adequate for us to assume that the administrative plan of the newly-established municipality Ŝ is devised in advance in meetings among the related municipalities. We regard these meetings as organized by an inter-municipal committee for merger among the municipalities included in Ŝ (hereafter COM_Ŝ). Thus, let O be the set of players. O consists of NG, all m_i with i ∈ N and all possible COM_Ŝ.

As a consequence, the political process can be written as the following four-stage dynamic game.

1) NG decides the national tax rate T and offers a fiscal transfer scheme Υ = (F(S))_{S⊆N}.
2) Each m_i with i ∈ N chooses a combination of municipal merger in which i can participate, or decides not to participate in any merger. If a coalition Ŝ is chosen by all the m_i with i ∈ Ŝ, COM_Ŝ is established.
3) Each established COM_Ŝ proposes the post-merger tax rate t_Ŝ. The COM_Ŝ which are not established have only one action, "do nothing".
4) Regarding each Ŝ such that COM_Ŝ is established, each m_i with i ∈ Ŝ decides whether to approve the merger or not. If all m_i with i ∈ Ŝ approve it, the merger Ŝ is achieved; however, if one of them disapproves, the merger Ŝ fails.

After the reorganization of municipalities is completed, F(S) is automatically distributed to each existing S ⊆ N.

From the above, the national government can choose in the first stage the combination (T, Υ) such that T ∈ [0, 1] and, for all k ∈ K, Σ_{l∈L} F(S_kl) ∈ [0, TPy]. Each m_i can choose in the second stage a combination of merger Ŝ ⊆ N such that i ∈ Ŝ and |Ŝ| ≥ 2, or choose {i}, which signifies that i does not participate in any merger. For example, assume that N = {A, B, C}. Then m_A can choose an action among {A, B, C}, {A, B}, {A, C} and {A}. Moreover, each m_i can choose an action between Yes and No in the fourth stage if a COM_Ŝ such that i ∈ Ŝ is established, with respect to the policy vector (Ŝ, t_Ŝ) proposed by the established COM_Ŝ in the third stage. Each COM_Ŝ can choose t_Ŝ ∈ [0, 1 − T] if it is established; otherwise it has only one action, "do nothing".

A strategy profile of the players is denoted by ω ∈ Ω. In the game G ≡ [{Ω_o}_{o∈O}, {U_o}_{o∈O}], U_o in a strategy profile ω ∈ Ω is defined as follows. Let Γ_k ≡ (t_kl)_{l∈L} be the vector of municipal tax rates in Θ_k and let Υ = (F(S))_{S⊆N} be a vector showing the transfers offered to each S ⊆ N by NG. The utility of NG when a strategy profile ω realizes [T, Θ_k, Γ_k, Υ] is

$$U_{\mathrm{NG}}(\omega) = \sum_{l \in L} \sum_{j \in \Lambda_{kl}} U_{klj}\left[T, S_{kl}, t_{kl}, F(S_{kl})\right] \qquad (6)$$

where Λ_kl denotes the set of individuals living in municipality S_kl and U_klj denotes the utility of an arbitrary individual j in municipality S_kl.

Note that for each i ∈ N which does not participate in any merger, S_kl = {i}. Then,

$$U_{klj}\left[T, S_{kl}, t_{kl}, F(S_{kl})\right] = U_{ij}\left[T, i, t_{im}^{*}, F(\{i\})\right].$$

The payoff of each m_i is

$$U_{im}(\omega) = \begin{cases} U_{im}\left[T, \hat{S}, t_{\hat{S}}, F(\hat{S})\right] & \text{if the case occurs} \\ U_{im}\left[T, i, t_{im}^{*}, F(\{i\})\right] & \text{otherwise,} \end{cases}$$

where the case signifies a situation in which ω achieves a merger Ŝ ∈ Θ_k such that i ∈ Ŝ.

The payoff of each COM_Ŝ is

$$U_{\mathrm{COM}_{\hat{S}}}(\omega) = \begin{cases} U_{\hat{S}m}\left[T, \hat{S}, t_{\hat{S}}, F(\hat{S})\right] & \text{if the case occurs} \\ 0 & \text{otherwise,} \end{cases}$$

where U_Ŝm[T, Ŝ, t_Ŝ, F(Ŝ)] denotes the utility of the median incomer (voter) in Ŝ when all of T, Ŝ, t_Ŝ and F(Ŝ) are achieved, and the case signifies a situation in which ω achieves the merger Ŝ ∈ Θ_k. We assume that COM_Ŝ tries to maximize the utility of the post-merger median incomer, because such an assumption assures the stability of the coalition Ŝ; [9] explains the validity in detail.

C. Solution of the Game

In this section, we investigate the solution of the game. We define the policy vector of the solution of the game as

$$\Phi^* \equiv \left[T^*, \Theta^*, \Gamma^*, \Upsilon^*\right]$$

where Θ* ≡ {S*_l}_{l∈L} is the realized partition of N, Γ* ≡ (t*_l)_{l∈L} is the vector of realized tax rates in Θ*, and Υ* = (F*(S))_{S⊆N} is the vector of transfers offered to each S ⊆ N.

Φ* should be realized in an SPNE (sub-game perfect Nash equilibrium) strategy profile. Therefore, it is adequate for us to analyze the condition under which Φ* is achieved in an SPNE using backward induction. However, in the second stage each municipal median voter simultaneously chooses a combination of merger. As a result, plural patterns of municipal partition can be rationally selected if we use simple Nash equilibrium as the solution concept in the second stage. Therefore, an equilibrium refinement is necessary for the second stage. We adopt the solution concept shown in [9], in which the stability of coalitions regarding the combination of mergers is satisfied. This solution has similarity with strong Nash equilibrium (SNE) due to [2], although it is not identical with SNE itself.¹ As a consequence, we regard that Φ* is realized in a refined SPNE considering the stability of coalition formation in the second stage. The following shows the conditions.

Condition 1
Let us consider the third and fourth stages of the game. Regarding each S*_l, t*_l must be the solution to the following maximization problem:

$$\max_{t}\; U_{*l}^{m}\left[T^*, S_l^*, t, F^*(S_l^*)\right] \qquad (7)$$

subject to, for all i ∈ S*_l,

$$U_{im}\left[T^*, S_l^*, t, F^*(S_l^*)\right] \geq U_{im}\left[T^*, i, t_{im}^{*}, F^*(\{i\})\right], \qquad (8)$$

t ∈ [0, 1], and t + T* ∈ [0, 1], where U^m_*l is the utility of the median incomer in S*_l, t*_im is the tax rate preferred by m_i with i ∈ S*_l, and F*({i}) is the transfer offered to each i ∈ S*_l in the first stage of this game.
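Condition 1 is, for each realized coalition, a one-dimensional constrained maximization and can therefore be checked numerically. A minimal sketch (ours), assuming the utilities of Section II are available as Python callables: U_med for the coalition's median incomer, U_im for member i's median voter, and outside[i] holding the stand-alone utility U_im[T*, i, t*_im, F*({i})]:

    # Illustrative sketch (ours): grid search for the post-merger tax rate
    # t*_l of Condition 1. All callables and values are assumed inputs.

    def condition1_tax(T_star, S, F_S, U_med, U_im, outside, steps=1000):
        """Maximize the median incomer's utility, eq. (7), subject to each
        member's participation constraint, eq. (8), and t + T* <= 1.
        Returns None if no feasible t exists (the merger would fail)."""
        best_t, best_u = None, float("-inf")
        for k in range(steps + 1):
            t = (1.0 - T_star) * k / steps           # t in [0, 1 - T*]
            if any(U_im(i, T_star, S, t, F_S) < outside[i] for i in S):
                continue                             # some median voter vetoes
            u = U_med(T_star, S, t, F_S)
            if u > best_u:
                best_t, best_u = t, u
        return best_t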
Condition 2
Next, let us consider the second stage. Let ω[T, S, t, F(S)] denote a strategy profile in which all of T, S, t and F(S) are realized. Then, S*_l must satisfy the following condition besides Condition 1.

There is no S′ ⊆ N and no ω[T*, S′, t′, F*(S′)] ∈ Ω such that, for all i ∈ S′,

$$U_{im}\left[\omega\left[T^*, S', t', F^*(S')\right]\right] > U_{im}\left[\omega\left[T^*, S_i^*, t_i^*, F^*(S_i^*)\right]\right] \qquad (9)$$

where
∙ t′ is the tax rate realized in S′. That is, t′ is the solution to the following maximization problem:

$$\max_{t}\; U'^{m}\left[T^*, S', t, F^*(S')\right] \qquad (10)$$

subject to, for all i ∈ S′,

$$U_{im}\left[T^*, S', t, F^*(S')\right] \geq U_{im}\left[T^*, i, t_{im}^{*}, F^*(\{i\})\right], \qquad (11)$$

t ∈ [0, 1], and t + T* ∈ [0, 1], where
– U′^m is the utility of the median incomer in S′,
– t*_im is the tax rate preferred by each m_i such that i ∈ S′,
– F*({i}) is the transfer offered to i ∈ S′ in the first stage of the game.
∙ ω[T*, S′, t′, F*(S′)] is a strategy profile in which all of T*, S′, t′ and F*(S′) are realized.
∙ F*(S′) is the transfer offered to S′ in the first stage of the game.
∙ S*_i is the S*_l such that i is an element of S*_l.
∙ t*_i is the tax rate realized in S*_i.
∙ F*(S*_i) is the transfer offered to S*_i in the first stage of the game.
∙ ω[T*, S*_i, t*_i, F*(S*_i)] is a strategy profile in which all of T*, S*_i, t*_i and F*(S*_i) are realized.

Condition 3
Finally, let us consider the first stage. NG maximizes social welfare by choosing the national tax rate T and the amount of transfer F(S) to each S ⊆ N:

$$\left(T^*, \Upsilon^*\right) = \operatorname*{argmax}_{T,\,F(S)} \sum_{i \in N} \sum_{j \in \Lambda_i} U_{ij}\left[T, S, t_S, F(S)\right] \qquad (12)$$

¹ Of course, we can define a condition based on coalition-proof Nash equilibrium (CPNE), which is a weaker stability concept defined by [3]; [1] used CPNE as the solution concept for their game.


subject to
1) for all k ∈ K, Σ_{l∈L} F(S_kl) ∈ [0, TPy];
2) T ∈ [0, 1] and, for all t_kl, t_kl + T ∈ [0, 1];
3) for all i ∈ N, S*_i ∈ Θ* satisfies Condition 1 and Condition 2.

Condition 2 needs additional explanation. Note that, regarding the municipal median voter's utility in Inequalities (9) and (11), we use U_im[ω[T, S, t, F(S)]] instead of U_im[T, S, t, F(S)], which denotes the utility of m_i with i ∈ S when all of T, S, t_S and F(S) are achieved: for m_i such that i ∉ S, the achievement of S itself does not determine the value of U_im; it is determined in a strategy profile ω[T, S, t, F(S)].

Now, we have the following proposition.

Proposition
Φ* achieves Pareto optimality among the citizens in the nation.

Proof
If Φ* is not Pareto optimal, there exists another policy vector Φ′ which achieves a Pareto improvement. Assume that Φ′ is realized in a different partition Θ′. Then, by Inequality (9), there exists a median voter who is worse off in Θ′ compared with Θ*. This contradicts that Φ′ achieves a Pareto improvement.

Next, assume that Φ′ is realized in the same partition Θ* (as in Φ*), and that another transfer scheme Υ′ = (F′(S))_{S⊆N} is realized in Φ′ whereas the national tax rate and all municipal tax rates are equal to those in Φ*. Then, every individual should have equal or higher utility under Υ′ compared with Υ*, and there exists an individual who is better off under Υ′. Thus, let Λ*_l be the set of inhabitants in S*_l and U_*lj be the utility of an arbitrary inhabitant j ∈ Λ*_l. We have

$$\sum_{l \in L} \sum_{j \in \Lambda_l^*} U_{*lj}\left[T^*, S_l^*, t_l^*, F'(S_l^*)\right] > \sum_{l \in L} \sum_{j \in \Lambda_l^*} U_{*lj}\left[T^*, S_l^*, t_l^*, F^*(S_l^*)\right]. \qquad (13)$$

However, this contradicts that Φ* maximizes Benthamite social welfare, which is a contradiction. Applying this inference to all combinations of partitions of governments, national tax rates, municipal tax rates and transfers, we can prove that Φ* achieves Pareto optimality. Q.E.D.

As Φ* satisfies the incentive compatibility of each municipal median voter regarding the choice of municipal coalition and the post-reorganization tax rate, we call Φ* the optimal policy vector satisfying incentive compatibility (OPVIC).

IV. CONCLUDING REMARK

In this study, we presented a fiscal transfer scheme of a national government whose objective is to maximize Benthamite social welfare by inducing the reorganization of municipalities. We showed that the OPVIC resulting from the game achieves Pareto optimality among individuals.

As we considered the decisions of municipalities without restricting the number of municipalities, our model can be regarded as a generalized model of vertical intergovernmental relations, which is a theoretical contribution. However, as our model has a complicated structure, it is not easy to show the properties of the OPVIC without restricting the number of municipalities. It will be adequate for us to start the analysis of these properties with n = 3, which is an extension of [10].

It should be mentioned here, with respect to the validity of our assumption, that the national government and the municipalities have different objectives: the national government's objective is to maximize social welfare whereas each municipality tries to maximize the utility of its median voter. This assumption is due to the impossibility of applying the median voter theorem to policy choice at the national level in our model. It would be better if the median voter theorem were applicable at both the national and local levels, which would enable us to derive a solution in a game where both national and local governments try to maximize the utilities of their own median voters; then, we could compare it with the solution in a game where the national government is benevolent. Investigating the model where the median voter theorem is applicable at both national and municipal levels is our future task.

However, note that there are some situations where it is rational for the national government to maximize social welfare even though its objective is to accomplish the policy decided under majority voting. Suppose that the national government has to address fiscal reconstruction due to a financial deficit caused by a "soft budget program" which induces excessive fiscal transfers to local governments. Then, a Pareto optimal fiscal transfer scheme will be supported by the majority in the whole nation and it might maximize social welfare.

REFERENCES
[1] A. Alesina and E. Spolaore, "War, peace, and the size of countries," Journal of Public Economics, vol. 89, 2005, pp. 1333-1354.
[2] R. J. Aumann, "Acceptable points in general cooperative n-person games," in A. W. Tucker and R. D. Luce (eds.), Contributions to the Theory of Games IV, Princeton: Princeton University Press, 1959, pp. 287-324.
[3] B. D. Bernheim, B. Peleg and M. D. Whinston, "Coalition-proof Nash equilibria I. Concepts," Journal of Economic Theory, vol. 42, 1987, pp. 1-12.
[4] D. Black, "On the rationale of group decision-making," Journal of Political Economy, vol. 56, 1948, pp. 23-34.
[5] R. Boadway, P. Pestieau and D. Wildasin, "Tax-transfer policies and the voluntary provision of public goods," Journal of Public Economics, vol. 39, 1989, pp. 157-176.
[6] P. Bolton and G. Roland, "The breakup of nations: a political economy analysis," Quarterly Journal of Economics, vol. 112, 1997, pp. 1057-1090.
[7] O. Haimanko, M. Le Breton and S. Weber, "Transfers in a polarized country: bridging the gap between efficiency and stability," Journal of Public Economics, vol. 89, 2005, pp. 1277-1303.
[8] S. Suzuki, "Consolidation of local municipalities and income distribution," Hogaku Seijigaku Ronkyu, vol. 63, 2004, pp. 425-452.
[9] S. Suzuki and T. Inohara, "Political integration and the number of governments," IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC 2015), Oct. 2015, pp. 586-591.
[10] E. Weese, "Political mergers as coalition formation: an analysis of the Heisei municipal amalgamations," Quantitative Economics, vol. 6, 2015, pp. 257-307.


MODELLING COMMERCIAL FRAMEWORK OF THE STRATEGIC ELEMENTARY DESIGN CONCEPT

Mohd Shaleh Mujir 1, Norashikin Hasan 2 and Oskar Hasdinor Hasan 3
1 Industrial Design Department, FSSR, UiTM, Melaka, Malaysia / 2 Business Studies & Management Department, Faculty of Business Management, UiTM, Malaysia / 3 Industrial Ceramic Design Department, FSSR, UiTM, Selangor, Malaysia

iideshaleh@yahoo.co.uk

ABSTRACT

This paper discusses the Strategic Elementary design development framework and process through an integrated industrial design approach for commercial bus and coach fabrication and production. The research applies visual inconsistency analysis to identify significant features in comparative design studies between bus manufacturers and an international design benchmark. This research explores the importance of the elementary design approach as the fundamental industrial design characteristics formation practice of Malaysian Small and Medium Enterprise companies and Art and Design schools, and how it formulates and contributes towards commercial design and manufacturing strategy. In the Malaysian design scenario, aesthetics and subject-matter styling-led design direction theory is interpreted as a significant Malaysian design identity.

Keywords: Industrial Design Element, Visual Analysis, Commercial Production

INTRODUCTION

This research explores the importance of the elementary design approach as fundamental three-dimensional (3D) design characteristic formation in the industrial design development process. In the Malaysian design scenario, aesthetics and subject-matter styling-led design direction theory, explored radically mostly by design schools and design firms, is interpreted as a significant Malaysian design identity. The aesthetics and subject-matter design base is strongly associated with Malay cultural symbolism, belief and metaphorical meaning, or 'Perlambangan', which also becomes the basis for the form and shape development of an object or product.

However, this approach is very subjective with regard to the success of the final products, and it differs systematically from the design approach of international art and design schools in the United Kingdom, European countries, the United States and a few universities in Japan and South Korea, an approach continually adopted in established design firms and leading to the successful design development of certain brand and product champions. This research investigates this theoretical relation and how the teaching of the 'ideation design process' and its influence could change the design perception and value in Malaysian product design, which currently, in reality, is not globally competitive.

RESEARCH BACKGROUND

Designers are generally considered creative experts in developing ideas, shapes and 3D forms. Designers produce important conceptual designs for new products and form the backbone of R&D for some industries. This creative understanding is limited to established production and corporate leadership teams, where design success should continue into commercial success and profitable value.

Malaysian industrial designers recruited from established institutions are commonly familiar with Malay design philosophical belief and practice. A primary market analysis suggests that only a few local design R&D outputs of Malaysian origin have become global product champions for their companies, although they were developed with advanced and niche technology. Malaysian-made design products contribute only


minimum commercial impact and have failed to be competitive in the global market.

The research investigates the local design factors behind ignoring design trends, new material exploration, manufacturing technologies and capabilities in relation to corporate branding, which contribute to low commercialization value and sustainability. The main question of the research is directed at how far existing designer creativity leads to success in developing innovative commercial product concepts. The focused studies also investigate how 3D design form exploration and direction practice can be developed without ignoring commercial and corporate value and the succeeding of competitive products.

PROBLEM STATEMENT

Malaysian industrial designers recruited and graduated from established local higher institutions are commonly familiar with Malay design philosophical belief and subject-matter aesthetic simplification practice while developing the 3D shape, form and appearance of a new product, and they ignore the importance of industrial design best-practice priorities. However, only a few new products and industrial design firms have achieved global product champions from this practice, while Malaysian-made design products contribute only minimum commercial impact and have failed to be competitive in the global market. Thus, how do this design teaching and philosophical design belief practice, and their influence in the industry, continually contribute innovatively to producing competitive products and creating profitable long-term investment?

The ambiguous creativity of aesthetic design styling based on subject matter among designers during design ideation processes, ignoring the design elements associated with industrial design considerations and priorities, leads to a subjective 3D form output that is scientifically difficult to measure and justify, fails to convince of the importance of function and practicality, and is open to argument by marketing, technical experts and production engineers. This phenomenon, practised in design teaching at local design schools, influences graduate designers with a local philosophical design belief that creates a negative impact on product performance, quality and commercial value. Hence a new integration formula, qualified against innovation matrices, structured design elements and systematic creative direction, with the importance of design economy in industrial design best practice, should be formulated and modelled with the advance of design and technology.

RESEARCH KEY QUESTIONS

RQ1. The ambiguous creativity of design styling among designers during the design sketching processes of Malaysian design practice leads to a subjective 3D form output. This phenomenon in practice becomes an important creativity feeder for design and manufacturing. Thus, how do designers qualify and assess innovative design direction through their design and sketching assignments with respect to monetized commercial manufacturing and corporate value?

RQ2. Designers develop design elements of styling subject matter into sketches and 3D rather than uniformly transforming corporate branding and manufacturing considerations. We refer to this phenomenon as "expertise creativity." Thus, we are interested in understanding what types of innovative design direction elements could also be valued and measured by production teams and corporate branding. What are the characteristics of these elements?

RQ3. Designers may sketch and develop new products through their creative intuition and their understanding of 3D design considerations. How, then, are these elements integrated with the overall corporate styling and design value strategy?

The research also aims at the improvement of design education policy, theory, process, method and practice in Malaysian higher institutions, which should be improved, monitored, advised, guided and strategically integrated to be aligned and relevant with industrial development, especially the new National Economic Model and NKRA direction.


LITERATURE REVIEW

Design is often seen to be central to competitiveness and delivering value to customers in the 'new economy'. It is understood as including a range of activities that firms perform in the development, branding and marketing of new and improved products, services and processes. Thus design, in its broadest sense, is where the intellectual content for value-added in production processes is created.

Designing a product requires thought, at least in most cases. This thought is a thinking process of transforming an idea into reality, and thinking while designing is a heterogeneous process composed of very different elements. The success of design often involves the integration of many different complementary intellectual assets. Design is understood as a 'knowledge agent' that can contribute to innovation processes. Bertola and Teixeira (2003) stated that design contributes to innovation, both in product and/or process, acting as a knowledge agent by collecting, analyzing and synthesizing the knowledge contained in the domains of users' community knowledge, organizational knowledge and network knowledge.

There are three stages of the thinking process, divergence, transformation and convergence, and each of these stages comprises methods that make designing more manageable. Methods appropriate at this stage involve both rational and intuitive actions, and a common error of newcomers to design methodology is to be far too speculative at this stage and to fail to see the point of fact-finding before taking any critical decisions and before discovering what it is they are looking for.

Exploration analysis to investigate design innovation, as suggested by Andrea (2013), acknowledges the importance of qualified human resources in the design field, where universities and education systems should be aware of the increasing importance of design in the coming years as a source of competitiveness for firms. Jamaludin (2013) found that it is important to understand the relationship between basic form and product character so that the designer can achieve the goals of design and target their user by focusing on semantic elements and syntactic analysis.

This research will be a new thinking platform for the local design expertise of Malaysian product design, which plays an important role in elevating performance and contribution in Malaysian industries, and it will provide a new direction for design education policy in Malaysian higher institutions, which could be improved strategically to align with industrial development, especially the New Economic Model and NKRA direction.

METHODS

The role of the local designer is an important factor in elevating impressive performance and contribution in Malaysian industries. Therefore, the focused studies investigate the 'creativity' factor and the design development theory and philosophical practice among local designers in selected industries. Thus, the objectives of this study are: (i) to investigate designer styling ideation and the direction-led approach based on common Malaysian design practice; (ii) to assess the designers' integration value of the form development design process that qualifies for an advanced design value strategy; (iii) to investigate the influence of design innovation integration and commercial design direction; and (iv) to make recommendations based on the characteristics analysis of an innovative form and styling framework in the context of syntactics that can be strategized into a valuable commercialization formula for Malaysian design.

Various design institutions and firms will be targeted as focus respondent groups, where design workshops with video ethnography observation will be conducted to fulfil the research data projection. The design elementary analysis will also utilise 3D digital modelling for form and shape formulation and evaluation.

Mixed methods of qualitative and quantitative inquiry will be employed to uncover the theory. A video observation based on protocol analysis of designer sketching and form development activities at several design academies and practitioners in Malaysia will be conducted and


strengthened by semi-structured interviews with experts on the evaluation of selected sketches in the analysis of representation in relation to the design innovation of 3D form and shape formation syntactics in design.

Figure 2 shows the research activities process from the literature review stage to the final report writing stage.

CONCLUSION

The new formula will challenge and improve the existing, conventional methods and mindsets for developing design, especially in design schools and industry. The findings will justify the need for a new, positive and systematic critical-thinking solution among design academicians and designers, benefiting industry towards profitable commercial products, which in the long term will increase design economic value and strategically champion the new Malaysian design identity globally.

ACKNOWLEDGEMENT

The authors would like to acknowledge Universiti Teknologi MARA (UiTM) for the financial support under the Excellent Fund Scheme and the Ministry of Education for support under the Knowledge Transfer Programme Grant Scheme. The authors would also like to thank Masdef (M) Sdn Bhd for their technical assistance.

REFERENCES

Abidin, S.Z., Warell, A., & Liem, A. (2011). The significance of form elements: A study of representational content in design sketches. Proceedings of DESIRE'11, 2nd International Conference on Creativity and Innovation in Design, Eindhoven, ACM, 21-30.

Liem, A., Abidin, S.Z., & Warell, A. (2009). Designers' perceptions of typical characteristics of form treatment in automobile styling. Proceedings of the 5th International Workshop on Design & Semantics of Form & Movement (DeSForM 2009), Taipei, 144-155.

Abidin, S.Z., Bjelland, H.V., & Øritsland, T.A. (2008). The embodied mind in relation to thinking about form development. Proceedings of NordDesign 2008 Conference, Tallinn, DS50, 265-274.

Jamaludin, Z.A.S. (2013). The characteristic of form in relation to product emotion. E&PDE 2013, UiTM Malaysia, 716-721. ISBN 9781904670421.

Mujir, M.S., Hassan, O.H., Shaharudin, M.Y., Al Balkhis, M.A.H., & Yusuf, W.Z. (2012). Design research and development process of single deck bus for commercial production. Proceedings of the 2012 IEEE International Symposium on Business, Engineering and Industrial Applications.

Shaharudin, M.Y., Al Balkhis, M.A.H., Mujir, M.S., & Hassan, O.H. (2012). Conceptual framework of design study on façade of hi-deck bus using FRP composite. Proceedings of the 2012 IEEE Business, Engineering & Industrial Applications Colloquium.

Yusof, W.M., & Mujir, M.S. (2012). Ontology understanding in enhancing car styling ideation. SHUSER 2012, IEEE Symposium on Humanities, Science & Engineering, Kuala Lumpur, 24-27 June 2012, 227.

Macmillan, S., Steele, J., Austin, S., et al. (2001). Development and verification of a generic framework for conceptual design. Design Studies, 22(2), 169-191.

Tang, H.H., Lee, Y.Y., & Gero, J.S. (2010). Comparing collaborative co-located and distributed design processes in digital and traditional sketching environments: A protocol study using the function-behaviour-structure coding scheme. Design Studies.

Tovey, M., & Owen, J. (2000). Sketching and direct CAD modelling in automotive design. Design Studies, 21(6), 569-588.

Shah, J.J., Smith, S.M., & Vargas-Hernandez, N. (2003). Metrics for measuring ideation effectiveness. Design Studies, 24(2), 111-134.


A service-oriented cloud application for a collaborative tool management system

Marcus Röschinger, Orthodoxos Kipouridis, Willibald A. Günthner
Institute for Materials Handling, Material Flow, Logistics
Technical University of Munich
Munich, Germany
kontakt@fml.mw.tum.de

Current efforts in manufacturing and logistics research focus on developing innovative systems that enable enterprises to respond faster and more effectively to changing market conditions, thus paving the way for the fourth industrial revolution. Amongst others, an intensified collaboration within supply chains on the basis of virtualized processes is agreed to be a promising approach. Especially in the area of industrial tool management, there are numerous challenges regarding horizontal integration. In this paper, a service-oriented and cloud based concept is presented that focuses on providing manufacturing oriented services based on commonly accessible tool related data. It is illustrated how companies within a machining tool supply chain can benefit from highly available tool data and deployed services along a tool's life cycle.

Keywords: tool management; service orientation; cloud application; collaboration; information systems; industry 4.0; supply chain management; life cycle management; process automation

I. INTRODUCTION

Nowadays, increasingly shorter product life cycles, a growing number of variants as well as the coordination of complex international value and supply chains pose significant challenges for manufacturing companies. Current industry and research efforts towards more flexible and adaptive manufacturing concepts are often grouped under the key phrases industrial internet and industry 4.0. Following the advances in telecommunications and computer science that allow for much larger connectivity and computing power, great potential is attributed to the use of web-based services within manufacturing and logistics processes in order to increase their transparency, efficiency and flexibility. Moreover, another field of research focus is to intensify the collaboration between companies within supply chains. In this context, integrated and digital information flows are an essential premise. For this purpose, several information and communication technologies are available, for which the cloud computing paradigm is held to be predestined [1, 2].

Although the positive effect of intensive collaboration is widely agreed upon and examined [3], there is often a lack of use cases and applicability studies based on concrete scenarios. Furthermore, in the area of product life cycle management, various information systems are being used, which causes not only interface problems and interrupted information flows but also complicates the realization of collaborative services [4]. These conclusions also apply to the field of industrial tool management (TM). However, TM systems are essential for stable and efficient production processes and require significant monetary, time and human resources [5]. Against this background, in this paper a concept of a cloud based TM system that provides services with regard to a machining tool's life cycle is presented. For this, a service-oriented approach is employed as it enables the integration of multiple stakeholders within and among organizations.

II. RELEVANT ASPECTS OF TOOL MANAGEMENT

In this chapter, initially the fundamental processes in TM with regard to the developed concept are introduced. Afterwards, related approaches for TM systems are presented in order to illustrate the demand for a collaborative and service-oriented TM system.

A. Fundamental processes in tool management

The tasks of TM can be classified into two activity categories, planning and logistics. On the one hand, the tools that are required for a manufacturing process as well as their operational use must be planned. On the other hand, the tool inventory and demand must be managed, after which, if applicable, procurement actions need to be initiated so that it is assured that a tool is available at a certain machine on time [6]. However, apart from the physical tool flow, the assigned information flow must also be coordinated, as each task along a tool's life cycle depends on specific data and information deduced from it. In the following paragraphs, this aspect shall be described in detail on the basis of a case study carried out in a machining tool supply chain.

As shown in Fig. 1, a tool's life cycle begins with its construction and production, whereby it is either custom-made on a per-order basis, or it is "generic" and destined for general distribution. Especially in the first case, the tool manufacturer must intensively exchange several data with the user company to make sure the complex and costly product serves its purpose. Hence, the tool manufacturer has to determine, respectively measure, a lot of data for each tool, containing geometrical (nominal and actual values), operational (e.g. maximum rotation speed), organizational (e.g. customer and payment) as well as logistical (e.g. storage and shipping) data.

The presented results originated from the research and development project "ToolCloud", funded by the German Federal Ministry of Education and Research (BMBF) within the Framework Concept "Research for Tomorrow's Production" (fund number 02PJ2731) and managed by the Project Management Agency Forschungszentrum Karlsruhe, Production and Manufacturing Technologies Division (PTKA-PFT).


[Fig. 1. Machining tool's life cycle and assigned supply chain: engineering & production, tool logistics, tool planning, and utilization & monitoring, connecting the tool manufacturer, machine manufacturer, tool user and maintenance service around tool management.]

In the tool's utilization phase, the user company primarily requires the operational and geometrical data for the set-up of machines. In most cases, however, it is necessary to first assemble the tool into a holder, whereby the geometry changes and a new measurement is required. Furthermore, additional data is generated and updated by the user, e.g. condition, location and inventory, as well as correction data according to the utilization on specific machines. After a certain time of utilization, the tool needs to be maintained, whereby the corresponding processes of re-sharpening and measuring can take place within the user company or at appropriate service providers. When a tool's limited number of possible maintenance cycles is reached, its life cycle ends and it must be substituted.

The supply chain of machining tools, in which only the tool manufacturer, the end user and the maintenance service provider are involved, ought to be extended in order to include companies that act as providers of manufacturing, grinding and measuring equipment. These companies also need to take tool data into consideration, e.g. when constructing and putting new machines into operation. As each of the companies applies several methods, quite often even paper documents, to store and exchange tool data, the information flow in and between the companies, as well as within each of them, is often problematic and hardly automated. Due to the above, there is a need to address the challenges that arise in TM by making transparent tool data available across companies as well as utilizing a tool without manual data transfer.

B. Related approaches for tool management systems

During the past years, various approaches for implementing effective TM systems were proposed; however, most approaches target specific areas and tasks in TM, which has led to several individual solutions that, amongst others, fuel incompatibility issues [7]. Whereas older approaches focus on the automated tool supply and data exchange with machines [6, 8] and are widely used today, more current projects also concentrate on the automation of the information flow. Regarding this, the usage of RFID in TM is an often pursued tactic [9, 10]. The benefit of the RFID technology is that it allows for making the tool data locally available on objects and offers an automated readout, which is why there are numerous industrial applications. However, it comes with a price, as the tool data must be transferred between RFID transponders and information systems to make it available for higher-level planning and control activities independently of the tool's location. In this data transfer, risks of inconsistent and wrongly assigned data arise. Furthermore, RFID transponders can hardly be integrated into every tool without a negative impact on its utilization, as the tool product spectrum is too heterogeneous with respect to geometry and size. Nevertheless, not only tools but also related processes in TM lack standardization. In this context, [5] proposes a basic model for life cycle oriented TM focusing on the optimization of a tool's utilization and for this purpose claiming broader standardization. Further approaches target the area of production scheduling and therefore concentrate on the efficient and cost effective assignment of tools to machines and work orders [11-14].

In conclusion, even though diverse approaches and solutions address specific issues in TM, there is a lack of an integrated concept for uninterrupted information flows with regard to a tool's whole life cycle which also engages all parties involved. A promising approach in this area is cloud based services that can individually be adapted to processes and requirements. Moreover, as the services fall back on an always up-to-date cloud database, they are not only available time- and location-independently but also provide updated and consistent data at all times. The potential of the system is presented in the following chapter, where the service-oriented application for TM is introduced.

III. SERVICE-ORIENTATION IN TOOL MANAGEMENT

As highlighted in the previous paragraphs, TM is an essential part of modern manufacturing systems. The growing demand for information sharing among applications in TM has brought forward the need to ensure an effective collaboration between the parties involved. From a technological perspective, modern advances in the area of communication networks and computing enable the storage of large amounts of data as well as high-speed, reliable transfer. Based on such business and technological drivers, a new paradigm has been brought forward that focuses on offering enterprises the means to achieve a higher level of flexibility in manufacturing processes by providing shared services at enterprise and shop-floor level. In the following, a brief description of the service-oriented enterprise system approach is given before it is adopted to present the developed concept for a service-oriented cloud based TM system.

A. Service-oriented enterprise system approach

Service-orientation often refers to service-oriented software engineering as a design paradigm using services as fundamental elements for developing new applications [15]. In the context of enterprise information systems, services are reusable logical objects that offer a specific functionality, individually or in combination with other services. They are offered from a service provider to a service consumer by means of clearly defined interfaces. These interfaces are an agreed-upon way of exchanging information and specify how


the provider and the consumer interact with each other. The usage of services for providing technological resources that support business operations is a main feature of the Service-Oriented Enterprise (SOE) concept, which has emerged as a promising way to empower enterprises to be more responsive to customer needs by loosely coupling their systems [16].

An important concept that offers the foundations for realizing SOE is the Service-Oriented Architecture (SOA). SOA is an architectural style that enables an on-demand provision of resources and for this purpose describes a model of interaction between three parties: the service provider, the service consumer and the service broker. The first two were introduced in the previous paragraph; the latter is responsible for exposing and maintaining a list of available services, which is called the service registry, and provides the means to publish, remove and discover a service. A major characteristic of SOA is that it features loosely coupled components that can be accessed through technology-independent interfaces, as the service registry enables a decoupling of services from the systems on the consumer side. Further characteristics of SOA include standards-based interfaces and a modular structure [17]. However, SOA should not be confused with Service-Oriented Computing (SOC), which refers to the technical implementation on the operational level. Thus, it defines how service-orientation should be applied to the system architecture, as well as which technologies come into play. Such technologies can be web services and cloud computing, which is a programming model for enabling ubiquitous, on-demand network access to a shared pool of computing resources such as networks, storage, applications and services [18].

B. Concept for a service-oriented tool management system

Based on the described mechanics that SOA offers, a concept of a cloud based service-oriented TM was developed. It is centered around capturing and storing all relevant tool data in remote storage locations utilizing cloud computing. Thereby, the stored data can be both static data, which persists throughout a tool's life cycle, and dynamic data, which changes over time. For this, a data model was developed based on existing standards for categorizing tools and their data, which is why the management of a broad spectrum of current and prospective tools is possible. On the basis of the cloud database, the goal of supporting an efficient collaboration between all parties involved is realized through the deployment of services. These operate on the data and provide the means to access, update, aggregate and visualize it in a way that is meaningful for the respective processes in a manufacturing system.

In order to achieve a reliable mapping of every object to its stored data, a unique and standardized identification scheme is required (see Fig. 2). Against this background, to every tool, holder and machine, unique serialized numbers showing a homogeneous structure are assigned. However, for generic tools, for which no individual but only tool class data is available, class-level identification is sufficient. Regarding the labelling of tools, the developed concept does not depend on a specific technology and therefore allows high flexibility as well as the use of already applied identification methods. Nevertheless, the use of optical identification technologies, e.g. Data Matrix Codes engraved by laser, was examined and, amongst others, offers the advantages of very little required space and robustness in the manufacturing environment. Besides objects, locations also need to be clearly identified to realize the service-oriented TM. For this purpose, the use of unique and standardized numbers, either implemented in fixed installed identification equipment or encoded in optical codes for the utilization of movable identification equipment, is proposed.

[Fig. 2. Automated tool identification to address digital data sets.]
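As an illustration of such an identification scheme, the sketch below (ours; the field names and ID layout are hypothetical, not the project's actual data model or numbering standard) separates static from dynamic tool data and keys every record by a serialized identifier that could equally be carried by a Data Matrix Code or an RFID transponder:

    # Illustrative sketch (ours): a minimal record structure keyed by a
    # serialized identifier; not the ToolCloud data model.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass(frozen=True)
    class ToolId:
        maker: str                    # issuing company code
        tool_class: str               # class-level part (enough for generic tools)
        serial: Optional[int] = None  # individual serial; None for generic tools

        def encode(self) -> str:
            """Homogeneous string form, e.g. for a laser-engraved code."""
            tail = "" if self.serial is None else f"-{self.serial:08d}"
            return f"{self.maker}-{self.tool_class}{tail}"

    @dataclass
    class ToolRecord:
        ident: ToolId
        static: dict = field(default_factory=dict)   # nominal geometry, max rpm, ...
        dynamic: dict = field(default_factory=dict)  # condition, location, corrections

    tool = ToolRecord(ToolId("ACME", "DRL16", 4711),
                      static={"max_rpm": 12000},
                      dynamic={"location": "WH-02"})
    print(tool.ident.encode())   # ACME-DRL16-00004711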
On the basis of identifiable objects and locations, various services, which can be modularized, adapted and therefore applied according to particular use cases, were developed. For the provision of these services, a layered architecture, shown in Fig. 3, is proposed. It follows the template presented in [19] and is modified to adapt to the needs of a modern manufacturing enterprise. On the left side, the required components for the service provider are shown. Based on cloud technology, resources such as data storage and application hosting can be virtualized and outsourced to one or more providers that offer persistent storage of the tool data.

The overlying service layer has access to the stored data and exposes the functionality realized for the TM system. In this connection, two basic kinds of services, namely core business and value added ones, can be distinguished. Core business services focus on generating, calling and updating tool relevant data, as well as on its visualization on various end devices. Another essential core business service is the automatic set-up of machines by loading the respective set-up parameters based on the identified tool. Whereas core business services center on the shop-floor level, value added services address more strategic enterprise levels and comprise additional and more complex functionality. In this case, data is not only transferred but also processed and aggregated in order to provide reliable and projectable information. In other words, value added services leverage the access of third-party applications to tool data to provide integrated functionality like tracking and tracing, inventory, condition and maintenance management as well as the communication with other information and control systems. In addition, the proposed concept provides the option to apply analytic methods that operate on the tool data to realize services like tool usage prognosis, fault prediction and service planning.
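A minimal sketch of the core business service just described, the automatic machine set-up: a machine control resolves an identified tool to its stored set-up parameters through a service interface. Everything here, the endpoint path, the fields and the in-memory store standing in for the cloud database, is our assumption, not the system's actual API:

    # Illustrative sketch (ours): a web service resolving an identified tool
    # to machine set-up parameters; endpoint and fields are hypothetical.
    from flask import Flask, abort, jsonify

    app = Flask(__name__)

    # Stand-in for the cloud database of tool records.
    TOOLS = {"ACME-DRL16-00004711": {"max_rpm": 12000, "length_mm": 115.42,
                                     "holder": "HSK63-0021"}}

    @app.route("/tools/<tool_id>/setup")
    def setup_parameters(tool_id):
        """Return set-up data for the tool identified at the machine, so the
        machine control can configure itself without manual input."""
        record = TOOLS.get(tool_id)
        if record is None:
            abort(404)
        return jsonify(record)

    # A machine control would call, e.g., GET /tools/ACME-DRL16-00004711/setup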

[Fig. 3. System architecture of the developed tool management system. Provider side: virtual resources such as servers, storage and processing; assets such as machines, tools, holders and sensors with RFID/optical IDs; a service layer, e.g. set-up of machines, track & trace and monitoring, with a service directory; service integration (ESB); business process choreography; security and Quality of Service (QoS); and an access layer offering Web, SOAP and REST interfaces towards B2B partner companies and systems such as ERP, PLM, SCM and CAM.]

The specific business processes and use cases that arise in a tool supply chain are supported through the choreography layer. It provides orchestration functions that group services and enable them to perform as unified applications. In order to make use of the services, interfaces are exported as service descriptors as well as indexed and advertised by a service broker. It is important to note that, due to the flexibility of SOA in separating the interface implementations from the binding protocols, the choice of a service provider can be deferred, offering greater business agility. The top layer, namely the access layer, specifies these interfaces and defines the means to interact with services. A very important role in the concept (one that has also led to the wide adoption of SOA) is played by the leveraging of web services, as web service technologies offer an intersystem communication standard. Moreover, they are based on simple, text-based, open standard technologies, such as XML and HTTP, and are supported by most of the IT industry.

In order to integrate with the proposed TM system, the IT systems of the service consumers (right part in Fig. 3) should comply with some basic SOA requirements. So, starting on the lower layer, objects are identified using an appropriate identification technology. Manufacturing equipment is also assigned to this level as it can be seen as a consumer of services, above all since machines are already connected in networks for remote monitoring and maintenance. The layer above contains the enterprise-scale components that implement the main business functions, as well as operational systems such as ERP, PLM and CAM systems. Finally, regarding B2B interaction, one of the benefits of the concept is that it enables the same communication mechanisms using services within a single application, between applications as well as between business partners.

To accomplish a seamless integration in a heterogeneous manufacturing environment, the layer of the Enterprise Service Bus (ESB) comes into play. The ESB is a logical architectural component that provides an integration consistent with the principles of SOA [20]. Services interact via the ESB, which facilitates mediated interactions. Furthermore, system aspects like security, availability, reliability and data integrity are major parts of the system and are realized in the Quality-of-Service layer through existing technologies like VPN and HTTPS. A detailed description of the technologies used is not in the scope of this work. However, it should be noted that issues regarding accountability arise when using distributed tool data that can be accessed and edited by more than one user in safety-critical manufacturing operations. This aspect is being taken into account by the design of the TM system through user management, monitoring and discrete role definition.

C. Application and potentials of services in tool management

With a view to illustrating the use and potentials of the above-introduced service-oriented TM system, the impact on processes along a tool's life cycle, as introduced in chapter II, is described and exemplarily shown in Fig. 4. Hence, the tool manufacturer generates the dataset for a new tool and assigns a unique identifier as well as access rights to it. Similarly, other relevant data, e.g. for machines or tool holders, can be generated by other companies. Once a new tool's dataset is captured in the cloud, it can be queried within the supply chain to support different processes at an early stage. Additional useful services for the tool manufacturer contain stock and distribution management functionalities, such as updating the location and status of an order, whereby these are also comprehensible for the assigned customer. Consequently, a collaborative transparency can be achieved within the whole tool supply chain, as the services allow a reliable tracking and tracing of tools, machines and related machining equipment.

Considering the tool user, an important task within TM is the handling of tool related aggregations, e.g. between tools, holders and machines. This aspect is reflected in a deployed service recording which tool was assembled into which holder, at which place and time, and for which duration. The same principle applies to the utilization of tools and holders in machines, whereby a service can also identify if a specific combination of tool and machine is permitted. Furthermore, the deployment of the proposed TM system can contribute to an increase of the quality of end products. This is possible through the provision of valuable information regarding the quality of end products that were manufactured using specific tools. By capturing and analyzing the correlation between utilized tools, materials and the quality of the end product, it is possible to identify which tools performed better in a specific task.
services, above all since machines are already connected in
networks for remote monitoring and maintenance. The layer As previously mentioned, one of the benefits of the TM
above contains the enterprise scale components that implement system is the direct involvement of machines in a tool’s life
main business functions, as well as operational systems such as cycle. The machine’s control software communicates through
ERP, PLM and CAM systems. Finally, regarding B2B interfaces with the respected service and accesses the stored
interaction, one of the benefits of the concept is that it enables data allowing for an automatic set-up. In doing so, the correct
the same communication mechanisms using services within a assignment of a tool, its data and its position in the machining
single application, between applications as well as between system is ensured due to automatic identification, avoiding
business partners. errors and increasing the speed of the process. Afterwards,
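By way of illustration only (this sketch is not part of the original paper), a service consumer could query a tool's dataset over such a web-service interface; the host, route and field names below are assumptions:

// Hypothetical example: querying a tool's dataset from the cloud TM system.
// Endpoint, route and response fields are assumed for illustration.
const https = require('https');

function getToolData(toolId) {
  return new Promise((resolve, reject) => {
    https.get(`https://tm-cloud.example.com/api/tools/${toolId}`, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => resolve(JSON.parse(body)));
    }).on('error', reject);
  });
}

// A machine controller could use the same interface during set-up:
getToolData('T-4711').then((tool) => {
  console.log(tool.geometry, tool.runningMeter, tool.holderId);
});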
To accomplish a seamless integration in a heterogeneous manufacturing environment, the Enterprise Service Bus (ESB) layer comes into play. An ESB is a logical architectural component that provides an integration consistent with the principles of SOA [20]. Services interact via the ESB, which facilitates mediated interactions. Furthermore, system aspects like security, availability, reliability and data integrity are major parts of the system and are realized in the Quality-of-Service layer through existing technologies like VPN and HTTPS. A detailed description of the technologies used is not in the scope of this work. However, it should be noted that issues regarding accountability arise when distributed tool data can be accessed and edited by more than one user in safety-critical manufacturing operations. This aspect is taken into account by the design of the TM system through user management, monitoring and discrete role definition.

C. Application and potentials of services in tool management

To illustrate the use and potentials of the above-introduced service-oriented TM system, its impact on the processes along a tool's life cycle, as introduced in chapter II, is described and exemplarily shown in Fig. 4. The tool manufacturer generates the dataset for a new tool and assigns a unique identifier as well as access rights to it. Similarly, other relevant data, e.g. for machines or tool holders, can be generated by other companies. Once a new tool's dataset is captured in the cloud, it can be queried within the supply chain to support different processes at an early stage. Additional useful services for the tool manufacturer comprise stock and distribution management functionalities, such as updating the location and status of an order in a way that is also comprehensible for the assigned customer. Consequently, collaborative transparency can be achieved within the whole tool supply chain, as the services allow a reliable tracking and tracing of tools, machines and related machining equipment.

Considering the tool user, an important task within TM is the handling of tool-related aggregations, e.g. between tools, holders and machines. This aspect is reflected in a deployed service recording which tool was assembled into which holder, at which place and time, and for which duration. The same principle applies to the utilization of tools and holders in machines, where a service can also identify whether a specific combination of tool and machine is permitted. Furthermore, the deployment of the proposed TM system can contribute to an increase in the quality of end products. This is possible through the provision of valuable information regarding the quality of end products that were manufactured using specific tools. By capturing and analyzing the correlation between utilized tools, materials and the quality of the end product, it is possible to identify which tools performed better in a specific task.

As previously mentioned, one of the benefits of the TM system is the direct involvement of machines in a tool's life cycle. The machine's control software communicates through interfaces with the respective service and accesses the stored data, allowing for an automatic set-up. In doing so, the correct assignment of a tool, its data and its position in the machining system is ensured by automatic identification, avoiding errors and increasing the speed of the process. Afterwards, during its utilization, a tool's condition can be monitored by deploying another cloud service which captures the tool's running meter. This states how much material a specific tool has already machined and enables predictions regarding its replacement and maintenance. From this, not only several parties within the user company, such as operations scheduling and procurement, can benefit, but also external maintenance service providers.

Consequently, service providers can predict arriving tools early and schedule the required capacities for sharpening and measuring. In this connection, the TM system offers additional benefits by reducing the manual effort for the set-up of grinding and measuring machines, as tool-specific machining programs can be stored in the cloud database too. Moreover, the tool data is updated automatically and directly from the measuring machine, which reduces paper documents and avoids incorrect data communication to the tool user, all at once.
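As an illustration of the condition-monitoring idea (a minimal sketch, not from the paper; the endpoint and payload fields are assumptions), a measuring machine could push a tool's updated running meter to such a cloud service:

// Hypothetical example: a machine reports a tool's running meter
// to the cloud TM service after machining. Route and fields are assumed.
const https = require('https');

function reportRunningMeter(toolId, metersMachined) {
  const payload = JSON.stringify({ runningMeter: metersMachined, timestamp: Date.now() });
  const req = https.request({
    hostname: 'tm-cloud.example.com',
    path: `/api/tools/${toolId}/condition`,
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'Content-Length': Buffer.byteLength(payload) },
  }, (res) => console.log(`condition update: HTTP ${res.statusCode}`));
  req.on('error', console.error);
  req.end(payload);
}

reportRunningMeter('T-4711', 1250.5);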

Fig. 4. Exemplary services for the tool manufacturer and the tool user. (The figure depicts web interfaces for capture and query along the supply chain: on the manufacturer side, generation or correction of tool data and order status services such as "shipped" and "received"; on the user side, tool/machine aggregation, tool data for machine set-up, condition monitoring, and analysis of tool utilization. The services rest on transparent, up-to-date and consistent data, time- and location-independent access, and seamless integration and connection of new users.)

In line with the exemplary services described above, the developed TM system addresses the requirements of the entire tool supply chain and increases the transparency and efficiency of the underlying processes, while at the same time enabling a modular implementation according to specific use-case requirements. As an example, craft producers and smaller companies might, in contrast to larger industrial enterprises, not need tracking and tracing functionalities but be highly interested in digitalizing the machine set-up. Furthermore, a developed mobile application is a promising compromise if no internet connection is available at machine level. In this fashion, a tool can be identified by means of optical codes using a smartphone, whereupon its data can be queried, visualized and updated. The processes in TM are then indeed not fully automated, but paper documents become unnecessary, and up-to-date, transparent as well as consistent data can be guaranteed where needed, especially also on the shop floor.

IV. CONCLUSIONS

Currently, the processes in the area of TM are often poorly automated, which can be explained by a lack of uninterrupted, digital information flows. Consequently, a tool's up-to-date data is hardly accessible and comprehensible, especially when considering its whole life cycle and the associated tool supply chain. In this context, a service-oriented approach for a cloud-based TM was proposed in this paper. It aims to integrate all relevant data along a tool's life cycle into a central and consistent system in order to provide up-to-date data in the right place, i.e. the respective process step, at the right time. Through this, the collaboration within the tool supply chain can be intensified, and all parties involved can benefit from more transparent and efficient processes. To this end, the paper also illustrated how one of the main challenges regarding the deployment of a TM system across enterprises, namely the problematic horizontal integration of heterogeneous IT infrastructures, can be countered by using principles of SOA and cloud computing. In the context of this work, the need for a new, suitable business model for similar service-oriented, cloud-based systems was identified. This is necessary because current business models focus neither on an intensive collaboration between the parties involved nor on the provision and utilization of cloud-based services. Therefore, the development of such business models is essential and the subject of further research.

REFERENCES
[1] M. Annunziata, S. Biller, "The industrial internet and the future of work," Mechanical Engineering, vol. 137, pp. 30-35, September 2015.
[2] H. Kagermann, W. Wahlster, J. Helbig, "Recommendations for implementing the strategic initiative INDUSTRIE 4.0," Frankfurt am Main, Germany: acatech, 2013.
[3] S. Qrunfleh, M. Tarafdar, "Supply chain information systems strategy: Impacts on supply chain performance and firm performance," Int. J. Production Economics, vol. 147B, pp. 340-350, January 2014.
[4] S. Rachuri et al., "Information sharing and exchange in the context of product lifecycle management: Role of standards," Computer-Aided Design, vol. 40, pp. 789-800, July 2008.
[5] D. Heeschen, F. Klocke, K. Arntz, "Life cycle oriented milling tool management in small scale production," in Proc. CIRP 29, pp. 293-298, 2015.
[6] J. Mayer, "Werkzeugorganisation für flexible Fertigungszellen und -systeme" (Tool organization for flexible manufacturing cells and systems), dissertation, Universität Stuttgart, Stuttgart, Germany, 1988.
[7] W. Eversheim et al., "Tool Management: The Present and the Future," CIRP Annals, vol. 40, pp. 631-639, 1991.
[8] H. A. ElMaraghy, "Automated Tool Management in Flexible Manufacturing," Manufacturing Systems, vol. 4, pp. 1-13, 1985.
[9] J. C. Aurich, M. Faltin, F. A. Gómez Kempf, "RFID leads to an efficient tool management," ZWF, vol. 104, pp. 642-647, August 2009.
[10] G. Wang, H. Nakajima, Y. Yan, X. Zhang, L. Wang, "A Methodology of Tool Lifecycle Management and Control Based on RFID," in Proc. IEEE IEEM, pp. 1920-1924, 2009.
[11] A. Matta, T. Tolio, F. Tontini, "Tool management in flexible manufacturing systems with network part program," Production Research, vol. 42, pp. 3707-3730, September 2004.
[12] M. S. Akturk, S. Onen, "Dynamic lot sizing and tool management in automated manufacturing systems," Computers & Operations Research, vol. 29, pp. 1059-1079, November 2002.
[13] P. Udhayakumar, S. Kumanan, "Sequencing and scheduling of job and tool in a flexible manufacturing system using ant colony optimization algorithm," Advanced Manufacturing Technology, vol. 50, pp. 1075-1084, October 2010.
[14] B. Denkena, M. Krüger, J. Schmidt, "Condition-based tool management for small batch production," Advanced Manufacturing Technology, vol. 74, pp. 471-480, September 2014.
[15] H. Breivold and M. Larsson, "Component-Based and Service-Oriented Software Engineering: Key Concepts and Principles," Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Germany, August 2007.
[16] W. H. Young, "Network-Centric Service-Oriented Enterprise," p. 10, Springer, 2010.
[17] Network-Centric Operations Industry Forum (NCOIF), "Industry Best Practices in Achieving Service Oriented Architecture (SOA)," Virginia, USA, April 2005.
[18] P. Mell and T. Grance, "The NIST Definition of Cloud Computing," National Institute of Standards and Technology, U.S. Department of Commerce, September 2011.
[19] A. Arsanjani, "Service-oriented modeling and architecture: How to identify, specify and realize services for your SOA," IBM, November 2004.
[20] M. Keen et al., "Patterns: How to implement an SOA with the Enterprise Service Bus," IBM Redbook, August 2004.


Automated Documentation for Rapid Prototyping


Napajorn Wattanagul, Department of Computer Engineering, Chulalongkorn University, Bangkok 10330, Thailand (Napajorn.w@gmail.com)
Yachai Limpiyakorn, Department of Computer Engineering, Chulalongkorn University, Bangkok 10330, Thailand (Yachai.L@chula.ac.th)

Abstract—Prototyping is used as a technique during the evolutionary development process to understand the customer's requirements and reduce the risk of poorly defined requirements. Making changes early in the development lifecycle is extremely cost-effective. The document accompanying the final prototype will serve as the basis for further development. However, documentation is tedious and resource-consuming. This paper thus presents a methodology to automate documenting Rapid prototypes, whose strength is the ability to quickly construct interfaces against which users can test their expectations. The implementation of the proposed approach would contribute to software process improvement.

Keywords—rapid prototyping; markup language; information processing; documentation; process improvement

I. INTRODUCTION

Prototyping is used as a technique during the evolutionary development process to understand the customer's requirements and, hence, develop a better requirements definition for the system. The technique is commonly used to clarify the user requirements so that they can be refined at a very early stage of the development. This, in turn, leads to the accurate specification of requirements and the subsequent construction of a valid and usable system.

There are two major types of prototyping: Throwaway Prototyping and Evolutionary Prototyping. An Evolutionary prototype is continually refined and rebuilt into the final functional system. Evolutionary prototypes form the heart of the new system, and they may be used on an interim basis until the final system is delivered. On the other hand, a Throwaway prototype is quickly created to get feedback on user requirements. This simple working model will eventually be discarded rather than becoming part of the final delivered software system, which will be formally developed based on the identified requirements.

The focus of this paper is on Throwaway prototyping, also called Rapid prototyping, whose strength is the ability to rapidly construct interfaces the users can test. A GUI builder is usually used to create a click-dummy system that looks like the goal system but does not provide any functionality. Several commercial and open-source software tools exist to facilitate the construction of Rapid prototypes.

Among the variety of commercial software, Microsoft Visio [1] is one of the widely used tools that offers a broad graphics palette and fine control of layouts. Visio provides several libraries of shape templates, several of which include user interface controls for different presentation models such as Windows and HTML. These greatly facilitate the creation of images of desired forms and web pages [2]. Another strength of Microsoft Visio is its easy integration with Microsoft Office software.

In general, the process of Rapid prototyping involves: 1) gathering user requirements, 2) analyzing the collected requirements, 3) developing an initial prototype, 4) examining the prototype and providing feedback on additions or changes, and 5) improving the prototype with the user feedback. Iterations over steps 4 and 5 are conducted if changes are introduced. Once the final prototype has been achieved, the specification is delivered to the development team, serving as the basis for further development. However, the documentation of Rapid prototypes is resource-consuming. This paper thus presents an approach to generating the documents associated with Rapid prototypes created with Microsoft Visio. The prototype description document is useful for software design, software pricing, and communication within the development team.

The remainder of this paper is organized as follows. Section II presents the concept of data exchange via the markup language XML and the technology used for the implementation. Section III describes the proposed methodology. Finally, Section IV concludes the paper.

II. BACKGROUND

A. XML (eXtensible Markup Language)

XML describes a class of data objects called XML documents and partially describes the behavior of computer programs which process them. XML is an application profile, or restricted form, of the Standard Generalized Markup Language (SGML; ISO 8879:1986) [3]. By construction, XML documents are conforming SGML documents. XML documents are made up of storage units called entities, which contain either parsed or unparsed data. Parsed data are made up of characters, some of which form character data and some of which form markup. Markup encodes a description of the document's storage layout and logical structure. XML provides a mechanism to impose constraints on the storage layout and logical structure [4].

B. Node.js

Node.js is open-source software. It can run on Linux, Mac and Windows. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient [5].

It is one of the more interesting recent developments gaining popularity in the server-side JavaScript space [6]. It helps in developing fast web applications because plenty of packages are available for free.

C. AngularJS

AngularJS is considered one of the popular JavaScript frameworks in front-end development, developed by Google. AngularJS is a toolset for building the framework most suited to individual application development [7]. Many companies use AngularJS for front-end web applications.

III. METHODOLOGY

Throwaway or Rapid prototypes can be constructed fast, and this is the most obvious reason for using them to get quick feedback on user requirements, so that these can be refined early in the development process. It is assumed that Microsoft Visio is used for developing the prototypes in this work. Once the final prototype has been committed, it is exported in two file formats: XML and PNG. The output PNG file contains the screen layouts, while the XML file provides the information on screen elements such as textboxes, tables, labels, buttons, images and hyperlinks. Since the details of these screen elements are not sufficient, a web application was developed to support the data preparation for automating the documentation. Developers are also allowed to input additional attributes with reference to the final prototype. Fig. 1 illustrates the overview of the described procedure.

Fig. 1. Procedure to automate documenting Rapid prototypes. (The flow is: export the Rapid prototype to an image and an XML file, upload both to the web application, extract the contents and configure rules, and generate the prototype description document from the prototype description template.)

In this work, the prototypes are constructed with Microsoft Visio using the stencil named "Sketch GUI Shapes". Table I shows some examples of the GUI elements provided by the stencil. An example of a Rapid prototype created for a web application is shown in Fig. 2.

TABLE I. EXAMPLE SKETCH GUI SHAPES OF STENCIL TOOL
Sketch GUI Shape   Description
Textbox            Used to receive requirements from the user
Button             Used to respond to the user, e.g. confirm
Combo Box          Shows the list menu
Image              Demonstrates an illustration
Link               Used to connect to the next page(s)
Menu               Shows the system menu

Fig. 2. Example interface created with the Visio Stencil tool. (The mock web page shows a menu, a data table with "No.", "Name" and "Create Date" columns, text inputs, placeholder text and images, and "Add"/"Cancel" buttons.)

A. Export Rapid Prototype

Rapid prototypes are iteratively developed based on the process described earlier. After iteratively gathering user requirements, analyzing them and creating interim prototypes, the final prototype is obtained. Two output files are then exported to create the prototype description document. The screen images are stored in the screen.png file. The structured information of the screen elements is contained in the data.xml file, as shown in Fig. 3. For example, <Pages> refers to the individual screens, and a <Page> consists of the different <Shape> elements used in one screen.

Based on the proposed method, there is a constraint imposed on the type of stencils selected, which is limited to Sketch GUI. The tool implemented in this work merely supports the generation of the prototype description document using the specific syntax of the configuration file in data.xml. The reason why Sketch GUI was selected is that it provides a variety of shapes sufficient for developing web applications and wireframes.

<?xml version="1.0" encoding="utf-8"?>
<VisioDocument>
  <DocumentProperties>
    <Title></Title>
    <HyperlinkBase href=""></HyperlinkBase>
  </DocumentProperties>
  <Pages>
    <Page ID="0" Name="Page-1" NameU="Page-1">
      <PageProps>
        <PageScale Unit="IN_F">1.000000000000000</PageScale>
        <DrawingScale Unit="IN_F">1.000000000000000</DrawingScale>
      </PageProps>
      <Shape ID="33" UniqueID="{FABF30B5-77DB-4EB0-B557-055070B15FD6}" Name="Menu item" NameU="Menu item" Master="5">
        <Text>Menu item</Text>
        <XForm>
          <PinX Unit="IN_F">1.047736220472441</PinX>
          <PinY Unit="IN_F">9.375000000000000</PinY>
        </XForm>
      </Shape>
    </Page>
  </Pages>
</VisioDocument>

Fig. 3. Example of data.xml structure.
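For illustration, a minimal Node.js sketch (not part of the paper's implementation; it only assumes the data.xml structure of Fig. 3) could walk this structure and collect the shapes of each page. A real implementation would use a proper XML parser; a regular-expression scan is used here only to keep the example dependency-free:

// Simplified sketch: collect page and shape information from data.xml.
const fs = require('fs');

function extractShapes(xmlPath) {
  const xml = fs.readFileSync(xmlPath, 'utf8');
  const pages = [];
  const pageRe = /<Page [^>]*Name="([^"]*)"[^>]*>([\s\S]*?)<\/Page>/g;
  const shapeRe = /<Shape ID="([^"]*)"[^>]*Name="([^"]*)"/g;
  let p;
  while ((p = pageRe.exec(xml)) !== null) {
    const shapes = [];
    let s;
    while ((s = shapeRe.exec(p[2])) !== null) {
      shapes.push({ id: s[1], name: s[2] });
    }
    pages.push({ page: p[1], shapes });
  }
  return pages;
}

console.log(JSON.stringify(extractShapes('data.xml'), null, 2));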

B. Generate Prototype Description

In this work, the tool is implemented as a web application to facilitate the document generation. The system architecture is shown in Fig. 4. The prototype description generator is implemented with AngularJS and Node.js for the development of the front end and back end, respectively.

AngularJS works with the browser to upload the data.xml and screen.png files from the user. Next, the contents are transferred to Node.js, which works as the web server. Node.js then uses an XML parser to locate and extract data from data.xml, inserts it into the database, and sends the extracted data back to the user as a response.

Fig. 4. Architecture of the prototype description generator. (The browser side consists of HTML/CSS, JavaScript and AngularJS templates; the server side runs Node.js with server-side JavaScript, a database and other services; requests and responses are exchanged as JSON/XML.)

According to the standard HTML input attributes, developers are required to add some important attributes with reference to w3schools [8], as shown in Table II. Fig. 5 shows the screenshot with the textbox attributes associated with the Sketch GUI shape "Text field", as listed in Table III.

TABLE II. EXAMPLE HTML STANDARD INPUT ATTRIBUTES
Attribute     Description
value         Specifies the initial value for an input field
readonly      Specifies that the input field is read-only (cannot be changed)
disabled      Specifies that the input field is disabled; a disabled element is unusable and unclickable, and disabled elements will not be submitted
maxlength     Specifies the maximum allowed length for the input field
size          Specifies the size (in characters) for the input field
required      Specifies that the user must fill in a value before submitting the form
placeholder   Specifies a hint that is displayed in the input field before the user enters a value

TABLE III. EXAMPLE OF TEXTBOX ATTRIBUTES REQUIRED FOR USER FILL-IN
Sketch GUI Shape   Attributes
Text field         value, readonly, disabled, maxlength, size, required, placeholder

Fig. 5. Screenshot for filling in the required attributes of a text box. (For the textbox "txtUsername", the developer configures value, size, max length, source and event, the read-only/disabled/required flags, and the description requirements, then saves or cancels.)
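As a minimal sketch of this round trip (hypothetical code, not the paper's implementation; the route name is an assumption and persistence is omitted), the Node.js back end could accept the uploaded data.xml and answer with the extracted structure:

// Hypothetical back end: receive data.xml via POST and reply with JSON.
const http = require('http');

// Same extraction idea as the previous sketch, applied to the uploaded string.
function shapeNames(xml) {
  const names = [];
  const re = /<Shape [^>]*Name="([^"]*)"/g;
  let m;
  while ((m = re.exec(xml)) !== null) names.push(m[1]);
  return names;
}

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/upload') {
    let body = '';
    req.on('data', (chunk) => { body += chunk; });
    req.on('end', () => {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ shapes: shapeNames(body) }));
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(3000, () => console.log('listening on :3000'));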


Once all the information has been collected, the system prepares the data and sends it back to Node.js to generate the description document associated with the input prototype. Fig. 6 illustrates the prototype description template, which defines the position and content of each screen element as marked up by braces {…}. The templates are instantiated with the details extracted from the final prototype contained in data.xml and screen.png, combined with those filled in by developers as shown in Fig. 5.

Fig. 6. Prototype description template. (The template contains fields such as {DATE / TIME}, {SCREEN NAME}, {Screen Image}, {ELEMENT NAME}, {ELEMENT ID}, {TYPE}, {VALUE}, {SIZE}, {MAXLENGTH}, {REQUIRED}, {SOURCE}, {EVENT}, a list of {ATTRIBUTE} entries, {DESCRIPTION REQUIREMENTS}, and an approval/design/date sign-off block.)
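By way of illustration (a hypothetical sketch, not the paper's code; the field names merely mirror Fig. 6), instantiating such a template can be as simple as replacing each {FIELD} marker with the corresponding extracted or user-supplied value:

// Fill a {FIELD}-style template with values collected for one screen element.
function fillTemplate(template, values) {
  return template.replace(/\{([A-Z _\/]+)\}/g, (match, field) =>
    field.trim() in values ? String(values[field.trim()]) : match);
}

const template = '{SCREEN NAME}: element {ELEMENT NAME} ({ELEMENT ID}), type {TYPE}, required: {REQUIRED}';
console.log(fillTemplate(template, {
  'SCREEN NAME': 'Login',
  'ELEMENT NAME': 'Username',
  'ELEMENT ID': 'txtUsername',
  'TYPE': 'textbox',
  'REQUIRED': true,
}));
// -> "Login: element Username (txtUsername), type textbox, required: true"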
approach to automating the creation of prototype description
document by managing data exchange via XML. A web
application is also implemented using AngularJS and Node.js
TABLE IV. DESCRIPTION OF COMPONENTS IN THE PROTOTYPE DESCRIPTION TEMPLATE
Element                    Description
Screen Name                Describes the screen's name for communication
Screen Image               Shows the overall screen consisting of its elements
Element Name               Defines the element for clear understanding
Element ID                 Unique ID for reference; developers use it to access the element
Type                       Type of the element
Value                      Defines the default value
Size                       Defines the size of the element
Source                     Defines the source of the data
Event                      Type of event
Description Requirements   Describes details about the requirements, the steps producing an action or event, and the source

The file format of the prototype description document is .docx. It is part of the committed Software Requirements Specification (SRS), meaning the user requirements have been validated and agreed upon. The prototype description document can be used to illustrate the events occurring in each screen, the inputs, and any referencing, if it exists. Additionally, the document can be used for software pricing based on the number of screens, inputs and outputs and their complexity. The document can also be used for communications among the development team members, sometimes called a story.

IV. CONCLUSION

Rapid prototyping concentrates on experimenting with the customer requirements that are poorly understood [9]. With the feedback from users, iterative refinement yields the final prototype, serving as the basis for further development. However, the documentation chore of Rapid prototypes is resource-consuming. This paper thus presents an approach to automating the creation of the prototype description document by managing data exchange via XML. A web application is implemented using AngularJS and Node.js, working with the browser and server, respectively. The system is designed to support prototypes created with Microsoft Visio using Sketch GUI. The presented approach of automated documentation would increase the productivity of the project and contribute to process improvement.

REFERENCES
[1] Microsoft Visio Website. [Online]. 2016. Available: https://products.office.com/en/Visio. [2016, January 28]
[2] Prototyping. [Online]. 2016. Available: www.teach-ict.com/as_a2_ict_new/ocr/A2_G063/331_systems_cycle/prototyping_RAD/miniweb/pg4.htm. [2016, January 16]
[3] ISO 8879:1986 Information processing -- Text and office systems -- Standard Generalized Markup Language (SGML). [Online]. 2016. Available: https://www.iso.org/obp/ui/#iso:std:iso:8879:ed-1:v1:en
[4] T. Bray et al., Extensible Markup Language (XML) 1.1 (Second Edition), W3C Recommendation, 16 August 2006, pp. 1-45.
[5] Node.js. [Online]. 2016. Available: https://nodejs.org/. [2016, January 22]
[6] S. Tilkov and S. Vinoski, "Node.js: Using JavaScript to build high-performance network programs," IEEE Internet Computing, vol. 14, no. 6, pp. 80-83, 2010.
[7] AngularJS. [Online]. 2016. Available: https://angularjs.org/. [2016, January 22]
[8] W3Schools HTML Reference. [Online]. 2016. Available: http://www.w3schools.com/ [2016, January 28]
[9] I. Sommerville, Software Engineering, 8th ed. England: Pearson Education Limited, 2007.


Generation of Images of WND Equivalent from HTML Prototypes
Ekarat Prompila, Department of Computer Engineering, Chulalongkorn University, Bangkok 10330, Thailand (Ekarat.P@student.chula.ac.th)
Yachai Limpiyakorn, Department of Computer Engineering, Chulalongkorn University, Bangkok 10330, Thailand (Yachai.L@chula.ac.th)

Abstract—HTML prototypes are regarded as a kind of Evolutionary prototype. During evolutionary development, the prototype is refined and rebuilt. The creation of a prototype description document is also required as a supplement. A Windows Navigation Diagram (WND) is usually part of the prototype description document, serving as the design model of the user interfaces. Since the document preparation demands extra project resources, this paper presents an approach to generating images of windows-navigation-equivalent diagrams from the HTML prototype, which would contribute to process improvement. The graph description language DOT is selected to aid the construction. It is expected that the implementation of the proposed method would help reduce the resources spent on document preparation, improve the integrity of deliverables, and increase project productivity.

Keywords—evolutionary prototype; windows navigation diagram; markup language; information processing

I. INTRODUCTION

Prototyping is a common technique suggested for requirements validation during the early phase of a software project. In general, prototypes are mainly classified into two categories: 1) Throwaway prototypes and 2) Evolutionary prototypes. Throwaway or Rapid prototypes are created to understand the user requirements. They are speedily constructed, usually with a GUI builder, and sometimes this is called paper prototyping. Throwaway prototypes can be regarded as a simple working model to visually show the users what their requirements may look like. The users can examine their expectations and give feedback to the developers for refinement. The process is carried out iteratively until the clarified requirements are achieved. The prototype model is then thrown away. Contrary to Throwaway prototypes, Evolutionary prototypes are functional systems constructed during evolutionary development. They are continually refined and rebuilt into the final operable systems.

The prototype description document, if it exists, is usually part of the committed Software Requirements Specification (SRS) that serves as the basis for further development. In general, the document contains a WND (Windows Navigation Diagram), which is one of the UML diagrams. It represents a design model for the user interfaces. The purpose of a windows navigation diagram is to show how the users may traverse from one window to another along major, application-meaningful paths [1].

Documentation chores are tedious and resource-consuming. This paper thus presents a method to generate images of windows-navigation-equivalent diagrams from HTML prototypes. The method is a supplement to the approach presented in former work [2]. GraphViz [3], accompanied by its graph description language DOT [4], is selected as the development aid in this work.

The remainder of this paper is organized as follows. Section II presents the background knowledge and technology applied in this research. Section III describes the proposed methodology. Finally, Section IV concludes the paper.

II. BACKGROUND

A. HTML Prototype

The approaches to prototyping can generally be categorized into two types:

1) Throwaway prototyping approach: sees the prototype's value solely in the testing. After a prototype has been tested and the lessons learned are noted, the prototype itself loses its value and can be thrown away [5].

2) Evolutionary prototyping approach: can reuse the prototype itself after the testing. Here, the new prototype is created from an altered version of the old one [5].

HTML prototypes are regarded as Evolutionary prototypes. The events or screen linkages are usually added using HTML tags. Sometimes, JavaScript can be attached for practical usages.

The HTML prototype is an HTML document that contains three main parts: 1) the HTML version information, 2) a declarative header section, and 3) a body which contains the contents. The body part of the HTML document is the focus of this work as the source of the details extracted for output generation.

B. Windows Navigation Diagram (WND)

A WND provides a high-level abstraction of how all the screens, forms, and reports are related. In the diagram, each state of the user interface is represented as a box. A box typically corresponds to a user interface component, such as a window, form, button, or report [6]. It can explain the system relations on two levels:

1) Overall Level: a high-level abstraction of how all the screens, forms, and reports are related, and which event is the trigger for each of these relations. Events without a form linkage are not shown on this level, because the diagram might otherwise become chaotic and hard to understand.

2) Form Level: a high-level abstraction of how one screen, form, or report of interest, called the "Main Form", is related to the others. All events on the main form are shown, even when there is no form linkage.

C. DOT

DOT is the text file format of the GraphViz suite. It has a human-readable syntax that describes network data, including subgraphs and element appearances, i.e. color, width, label [7]. Drawing graphs with DOT can set many attributes, such as shape (shape type, shape color) and label (font color, font name, font size, alignment).

DOT is a plain-text graph description language. It contains all the passes for the graph drawing algorithm. These contributions are [8]:

1) rank: an efficient way of ranking the nodes using a network simplex algorithm. The nodes are placed in discrete ranks.
2) ordering: improved heuristics to reduce edge crossings by setting the order of nodes within ranks.
3) position: a method for computing the node coordinates as a rank assignment problem; it sets the actual layout coordinates of the nodes.
4) make splines: a method for setting spline control points for edges.

Fig. 1 shows the four passes composing the graph drawing procedure of DOT.

Fig. 1. Graph drawing algorithm of DOT [8].

D. GraphViz Java API

Graphviz is open-source graph visualization software. Graph visualization is a means of representing structural information as diagrams of abstract graphs and networks. It has been widely used for applications in networking, bioinformatics, software engineering, database and web design, machine learning, and in visual interfaces for other technical domains [3].

An API (Application Programming Interface) is the collection of all the public methods and fields that belong to a set of classes, including its interface types. The API defines the way developers can use the classes in their own Java programs [9].

The GraphViz Java API provides a Java class that can simply call DOT from Java programs. This facilitates the usage of GraphViz in Java programs [10].

III. RESEARCH METHODOLOGY

The generation of a windows navigation diagram from an HTML prototype consists of three main processes, as illustrated in Fig. 2. DOT, the graph description language, is selected as a tool to facilitate the creation of the WND in this work. DOT embraces three kinds of items: graphs, nodes, and edges. A main graph contains subgraphs, each of which defines a subset of nodes and edges. The main (outermost) graph can be a graph (undirected) or a digraph (directed) [4]. A WND is a directed graph. The image of a windows-navigation-equivalent diagram for each HTML screen is created as the final output of the implemented system. The scope of this work includes the generation of WND-like images from HTML pages which contain the events or screen linkages implemented with JavaScript.

Fig. 2. Method of generation of the WND from HTML screens. (The flow is: 1. extract the associated contents of the HTML prototypes into an XML file, looping over the prototypes; 2. write the WND as a DOT file; 3. execute DOT to produce the image file of the WND.)

A. Extract Contents from HTML Prototypes

All HTML prototypes are put together within the same folder, which may contain other file types, such as style sheet files (.css). Starting with reading an individual prototype and filtering only the HTML files, the required contents are then extracted.

The required contents for creating a windows navigation diagram can be located from the tag <a href>, which defines a hyperlink, and from the event attributes of HTML listed in Table I. The scope of work also includes the JavaScript executed after an event is triggered, where "window.location" is the source code for a linkage. An example of a link script on an HTML event attribute is shown in Fig. 3. All the extracted contents are saved in a structured document in XML format. An example is shown in Fig. 4.
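As a rough illustration of this extraction step (a hypothetical Node.js sketch, not the paper's Java-based implementation), the hyperlink targets and "window.location" linkages of one HTML file could be collected like this:

// Collect candidate screen linkages from one HTML prototype:
// <a href="..."> targets and window.location assignments in scripts.
const fs = require('fs');

function extractLinkages(htmlPath) {
  const html = fs.readFileSync(htmlPath, 'utf8');
  const events = [];
  const hrefRe = /<a\s+href="([^"]+)"[^>]*>([^<]*)<\/a>/g;
  const locRe = /window\.location(?:\.href)?\s*=\s*'([^']+)'/g;
  let m;
  while ((m = hrefRe.exec(html)) !== null) {
    events.push({ type: 'hyperlink', label: m[2].trim(), nextTo: m[1] });
  }
  while ((m = locRe.exec(html)) !== null) {
    events.push({ type: 'button', label: '', nextTo: m[1] });
  }
  return events;
}

console.log(extractLinkages('login.html'));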

TABLE I. EVENT ATTRIBUTES OF HTML [11]
Attribute     Value    Description
onload        script   Fires after the page has finished loading
onresize      script   Fires when the browser window is resized
onunload      script   Fires once a page has unloaded (or the browser window has been closed)
onblur        script   Fires the moment the element loses focus
onchange      script   Fires the moment the value of the element is changed
onfocus       script   Fires the moment the element gets focus
onreset       script   Fires when the Reset button in a form is clicked
onsearch      script   Fires when the user writes something in a search field (for <input type="search">)
onselect      script   Fires after some text has been selected in an element
onsubmit      script   Fires when a form is submitted
onkeydown     script   Fires when a user is pressing a key
onkeypress    script   Fires when a user presses a key
onkeyup       script   Fires when a user releases a key
onclick       script   Fires on a mouse click on the element
ondblclick    script   Fires on a mouse double-click on the element
onmousedown   script   Fires when a mouse button is pressed down on an element
onmousemove   script   Fires when the mouse pointer is moving while it is over an element
onmouseout    script   Fires when the mouse pointer moves out of an element
onmouseover   script   Fires when the mouse pointer moves over an element
onmouseup     script   Fires when a mouse button is released over an element
onwheel       script   Fires when the mouse wheel rolls up or down over an element
oncopy        script   Fires when the user copies the content of an element
oncut         script   Fires when the user cuts the content of an element
onpaste       script   Fires when the user pastes some content in an element

<p class="login button">
  <input name="login" type="button" onClick="window.location.href='login.html'" value="Login"/>
  <input name="help" type="button" onClick="goToHelp()" value="Help"/>
</p>
<p class="change_link">
  <a href="forgetpassword.html">forget password</a>
</p>
</form>
<script type="text/javascript">
  function goToHelp() {
    window.location = 'help.html';
  }
</script>

Fig. 3. Example of a link script on an HTML event attribute.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<RADS>
  <Screen ID="1">
    <Code>TT-CHU-01</Code>
    <Name>Login</Name>
    <WND stereoType="window">
      <Event ID="1">
        <EventLabel>Forget Password</EventLabel>
        <EventType>hyperlink</EventType>
        <EventNextTo>forgot_password.html</EventNextTo>
      </Event>
      <Event ID="2">
        <EventLabel>Help</EventLabel>
        <EventType>button</EventType>
        <EventNextTo>Help.html</EventNextTo>
      </Event>
      <Event ID="3">
        <EventLabel>Login</EventLabel>
        <EventType>button</EventType>
        <EventNextTo>Change_Role.html</EventNextTo>
      </Event>
    </WND>
  </Screen>
  <Screen ID="2">
    <Code>TT-CHU-02</Code>
    <Name>Forget Password</Name>
    <WND stereoType="window">
      <Event ID="1">
        <EventLabel>confirm</EventLabel>
        <EventType>button</EventType>
        <EventNextTo>Login.html</EventNextTo>
      </Event>
    </WND>
  </Screen>
</RADS>

Fig. 4. Example XML file containing the data extracted from an HTML prototype.

B. Write the WND with DOT

A WND created with DOT can set shapes, graphic styles, drawing size, spacing, node/edge placement, clusters (or subgraphs), and concentrators. In this work, each form or screen is represented by a cluster, and a linkage between two forms is represented by an edge that directly connects one cluster to another.

The first step in describing the WND of a form is to create all related screens, represented by clusters or subgraphs. All labels and events are defined within each screen. The stereotype associated with an event is also defined concurrently. A stereotype is one of the three types of extensibility mechanisms in the Unified Modeling Language (UML), which define new types of modeling elements extending the semantics of existing types or classes in the UML meta-model. The notation consists of the name of the stereotype written in double angle brackets << >>, attached to the extended model element [12]. The only stereotype used in this work is "<<window>>".
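Before turning to the remaining steps, the following minimal sketch (hypothetical Node.js code, not the paper's implementation; it assumes events already extracted in the form of Fig. 4) indicates how such a cluster-per-screen DOT description could be emitted programmatically. Edges between clusters, created in the final step described below, would be appended in the same way:

// Emit a DOT description: one cluster per screen, one filled node per event.
// 'screens' mirrors the structure of the XML in Fig. 4.
function toDot(screens) {
  const lines = ['digraph G {', '  node [shape=record fontsize=9];'];
  screens.forEach((screen, i) => {
    lines.push(`  subgraph cluster_${i} {`);
    lines.push('    node [style=filled];');
    screen.events.forEach((e) => lines.push(`    "<<${e.type}>>\\n ${e.label}";`));
    lines.push(`    label = "<<window>>\\n ${screen.name}";`);
    lines.push('  }');
  });
  lines.push('}');
  return lines.join('\n');
}

console.log(toDot([
  { name: 'Login', events: [
    { type: 'hyperlink', label: 'Forgot Password' },
    { type: 'button', label: 'Help' },
    { type: 'button', label: 'Login' },
  ]},
  { name: 'Forgot Password', events: [{ type: 'button', label: 'Confirm' }] },
]));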
The second step is to create the events, each of which is represented by a node with the style "filled". In this work, the events are classified into three types, as described below:

1) Button: events of type "button", which are usually applied with onClick and onDblClick.
2) Hyperlink: events for linkage, which are usually used with the tag <a href>.
3) OnEvent: the other events, such as onChange, which are usually used with the event attributes of HTML listed in Table I.

The final step is to build a connection between an event and the related form. A connection is represented by an edge attribute. The syntax or notation of an edge is "->", which requires the identification of the source event and the target screen. For example, X -> Y denotes that the source node X is connected with the destination node Y, or that event X links to screen Y. Fig. 5 shows example DOT source code for creating a WND equivalent.
as “c:\\output\\graph.gif”
digraph G {
  graph [fontsize=10 fontname="Verdana" compound=true];
  node [shape=record fontsize=9 fontname="Verdana"];
  subgraph cluster_0 { // Main Form
    node [style=filled];
    "<<hyperlink>>\n Forgot Password" "<<button>>\n Help" "<<button>>\n Login";
    label = "<<window>>\n login";
    color=blue;
  }
  subgraph cluster_1 { // Forgot Password Form
    node [style=filled];
    "<<button>>\n Confirm";
    label = "<<window>>\n Forgot Password";
  }
  subgraph cluster_2 { // Help Form
    node [style=filled];
    "<<button>>\n back";
    label = "<<window>>\n Help";
  }
  subgraph cluster_3 { // Change Role Form
    node [style=filled];
    "<<button>>\n Driver" "<<button>>\n Passenger";
    label = "<<window>>\n Change Role";
  }
  // Edges that directly connect one cluster to another
  "<<hyperlink>>\n Forgot Password" -> "<<button>>\n Confirm" [lhead=cluster_1][label = "Click"];
  "<<button>>\n Help" -> "<<button>>\n back" [lhead=cluster_2][label = "Click"];
  "<<button>>\n Login" -> "<<button>>\n Driver" [lhead=cluster_3][label = "Click"];
}

Fig. 5. Example DOT source code for creating a WND equivalent.

C. Execute DOT to an Image File

The GraphViz Java API is applied for executing DOT files to obtain the image files. The API applied in this work refers to the program structure described in [10]. The command line used for execution is shown below:

Dot.exe -T<output format> -K<input format> -Gdpi[=value] <input path> -o <output path>

To execute the command line, the GraphViz Java API calls "Runtime.exec(arguments[])", which requires passing in the values of seven arguments, as described below:

1) DOT executable file path: the executable that compiles a dot file [3], such as "c:/Program Files (x86)/Graphviz 2.38/bin/dot.exe".
2) Output format: the command line option -T followed by the output format, such as "-Tgif"; -T sets the output language to one of the supported formats [3]. The supported formats are gif, dot, fig, pdf, ps, svg, png and plain.
3) Input format: the command line option -K followed by the input format, such as "-Kdot"; -K specifies which default layout algorithm to use [3]. The only format supported in this research is dot.
4) Scale of the image: the command line option -G followed by "dpi=" and the dpi value, such as "-Gdpi=106"; the -G option sets a graph attribute, which in this research scales the image in DPI (dots per inch).
5) Input path: path and filename of the input file to be executed or converted, such as "c:\\input\\graph.dot".
6) -o: the command line option for setting the output file.
7) Output path: path and filename of the output file, such as "c:\\output\\graph.gif".

An example of the generated output WND-like image is shown in Fig. 6.

Fig. 6. Example of an image of a windows-navigation-equivalent diagram.

IV. CONCLUSION

This paper presents a method to automate the generation of WND-like images from HTML prototypes. The required contents are extracted from the target HTML markup tags and then stored in an XML file. Next, the graph description file is created from the contents in XML format using DOT. The GraphViz Java API is then applied to convert the dot file into the image file.

The proposed method does not support the new tags of HTML5, and it may not support some JavaScript screen linkages.

The generated output images of WND equivalents can be documented as part of the Software Requirements Specification, serving as the basis for further development. The implemented system is expected to help reduce the cost of the documentation task and increase the integrity and consistency of the project deliverables.

REFERENCES
[1] M. Page-Jones and L. L. Constantine, "The window-navigation diagram," in Fundamentals of Object-oriented Design in UML, Boston: AWL, 2000, p. 198.
[2] E. Prompila and Y. Limpiyakorn, "Automating Documentation of HTML Prototype," in ICITCS, Kuala Lumpur, 2015, pp. 1-5.
[3] Gephi. (2008-2016). Graphviz - Graph Visualization Software [Online]. Available: http://www.graphviz.org/
[4] E. Koutsofios and S. North, "Drawing graphs with dot," AT&T Bell Laboratories, Murray Hill, NJ, January 5, 2015.
[5] C. Gengnagel, E. Nagy and R. Stark, "Evolutionary Versus Throwaway Approaches," in Rethink! Prototyping, vol. 1, 2016, p. 131.
[6] A. Dennis, B. H. Wixom, and D. Tegarden, "Human-Computer Interaction Layer Design," in Systems Analysis and Design: An Object Oriented Approach with UML, 5th ed., Hoboken: Wiley, 2015, pp. 375-376.
[7] Gephi. (2008-2016). GraphViz DOT Format [Online]. Available: http://www.gephi.org
[8] E. R. Gansner, E. Koutsofios, S. C. North and K. Vo, "A Technique for Drawing Directed Graphs," IEEE Trans. Software Eng., vol. 19, no. 3, pp. 214-230, May 1993.

[9] P. Shaw, "What's the difference between an interface and an API?," in Java Interface Design FAQ, 1.0 Mobi format edition, 2010.
[10] L. Szathmary. (2003-2015). GraphViz Java API [Online]. Available: http://www.github.com
[11] Refsnes Data. (1999-2016). HTML <html> Tag [Online]. Available: http://www.w3schools.com
[12] J. Jürjens, "UML Extension Mechanisms," in Secure Systems Development with UML, 2005, p. 32.


Voice Recognition using k Nearest Neighbor and Double Distance Method

Ranny
Department of Computer Science
University of Multimedia Nusantara
Indonesia
ranny@umn.ac.id

Abstract—The voice recognition process starts with voice feature extraction using Mel Frequency Cepstrum Coefficients (MFCC). The purpose of the MFCC method is to obtain signal features that correlate with the human voice. The MFCC method requires the signal to be converted from analog to digital. The digital signal is in the time domain, which makes the analysis harder, so the time domain is converted to the frequency domain to make the analysis more accurate. After obtaining the features, the recognition step uses the k Nearest Neighbor (kNN) method with k equal to one. The Euclidean distance is used to measure the similarity between the training data and the testing data. Previous research shows that kNN has high accuracy on normal data but is less accurate on outlier data. Based on this problem, this research develops a new method to handle outlier data using kNN and a double distance measurement. The double distance method notes the distance of each datum to the center of its class, and this distance is used in the recognition step. The accuracy of the method was tested experimentally, using 11 subjects for the training and testing data, with each subject's voice recorded three times. The accuracy of the kNN method with one class center is 84.85%, while the accuracy using the double distance measurement is 96.97%. The result shows that the double distance method increases the accuracy of voice recognition.

Keywords—voice recognition; Mel Frequency Cepstrum Coefficients; k Nearest Neighbor; Euclidean distance

I. INTRODUCTION

Voice recognition is one of the systems developed based on pattern recognition. Voice recognition in computer science has been developed using many kinds of methods, for example Dynamic Time Warping (DTW), Learning Vector Quantization (LVQ), and Artificial Neural Networks (ANN) [3][4][7]. Each method has advantages and disadvantages regarding processing time and recognition accuracy. This research discusses voice recognition using kNN and a double distance measurement. The method is evaluated using training data and testing data to obtain its accuracy.

The framework of the voice recognition consists of two steps: a training step and a testing step. The training step is the voice feature extraction using Mel Frequency Cepstrum Coefficients (MFCC). The MFCC method converts the signal from the time domain to the frequency domain. The time domain signal is more difficult to process and analyze due to the amount and complexity of the data. The frequency domain signal is simpler to analyze because the pattern of the signal is obtained from the data. Besides, the MFCC method can extract the features from the whole voice data, and these represent the voice of each subject. There are many kinds of MFCC, based on the number of filters used to capture the human signal pattern [2]. The MFCC also has different types based on the number of resulting coefficients [2]. This research uses 24 triangle filters, and the number of coefficients is 13 for each frame of voice data.

The next step is the testing process. The testing step is based on previous research that uses the kNN method as the recognition method. The determination of the number k and of the distance measure are common research topics. Those works concentrate on the method itself but cannot handle outlier data. The purpose of this research is to develop a new method that increases the accuracy on outlier data. The developed method notes the distance of each training datum to the center of its class.

Extending the method by recording this distance can keep the correlation between the data of each class, especially for outlier data. The method developed in this research is called the double distance method. The purpose of the testing step is to perform the recognition process using the kNN method with k = 1, where the center of a class is represented by the average of its data. The kNN method is combined with the double distance method. A simple algorithm is used so that the accuracy contribution of the method itself can be assessed.

The accuracies of the two methods are compared using the training data and non-training data. The framework of the research is shown in Figure 1. The diagram shows that the system starts with recording the audio input. The next step is the training step, which performs the feature extraction. The feature extraction method is MFCC. The features of the audio input are saved in a database and used in the testing step. The testing is divided into two parts: the first part uses the kNN method, and the second uses the double distance method.

The experiment uses voice data recorded from 11 subjects; each subject pronounces the word "computer" in Bahasa Indonesia. The voice data is recorded three times by each subject. The recording was done under minimum-noise conditions. The data is divided into two groups: the first

group is used for the training step and the second group is used for the testing step. The result of the experiment is used to analyze the accuracy of the method.

Fig 1. Framework of Voice Recognition. (Audio input is recorded; MFCC feature extraction and database storage form the training step; recognition with kNN and with the double distance method forms the testing step.)

II. METHODOLOGY

A. Mel Frequency Cepstrum Coefficients

The human voice is captured as an analog signal; to process it, the signal needs to be converted to a digital signal at some level of sampling rate, e.g. 14400 Hz, 16000 Hz, or 8000 Hz [2][7]. The level of the sampling rate influences the shape of the digital signal. The digital signal is saved in various formats, such as wav, mp3 or mp4. The difference between the formats lies in the data compression: if a format compresses the data more strongly, the quality of the voice is lower.

The MFCC method is one of the voice feature extraction methods and aims to obtain the voice features from the discrete signal [2][3][6]. The time-based discrete signal is changed to the frequency domain with the aim of making the analysis process easier. The MFCC method was developed based on a psychophysical study which states that the perception of the human voice is not linear [2]. This finding is used for filtering on the mel scale. The mel scale is linear for frequencies less than 1000 Hz and logarithmic above 1000 Hz [2][4]. The next step reconverts the log mel spectrum to the time domain using the discrete cosine transform (DCT), and the result is called the mel frequency cepstrum coefficients. The result of the MFCC method is the voice feature, which becomes the input for the next process. Figure 2 shows the steps of the MFCC process.

Fig 2. Steps of MFCC.

The purpose of the frame blocking process is to segment the voice signal [2][4]. The pronunciation speed differs between subjects, so the segmentation process makes the length of the voice data more consistent. The result of the segmentation process is overlapping data. The overlapping preserves the continuity of the data between frames.

Fig 3. Frame blocking. (Each frame has size M, and consecutive frames overlap by N samples.)

Fig 3 shows the overlapping of data in the frame blocking step: the size of each frame is M and the length of the overlapping data is N. The purpose of the windowing step is to keep the data of each frame in the same range of sampling numbers. The common windowing method is the Hamming window [2][6]. The next step in MFCC is the Fast Fourier Transform (FFT); this step changes the time domain to the frequency domain. The change of domain makes the data easier to analyze [2][4][6].

The sampled data still covers a random range of frequencies. The mel frequency wrapping obtains the frequency range of the human voice, which makes the data clearer and less noisy. The mel frequency wrapping also obtains the features of the sampled data [2][4]. The last step of MFCC is the cepstrum. The domain of the sampled data is changed back to the time domain in the cepstrum step. The result of the cepstrum is used as the MFCC coefficients and declared the extracted features.

The size of the extracted feature data is n x m, where n is the number of desired coefficients (as input to the cepstrum step) and m is the number of frames obtained from frame blocking when the data was changed from analog to digital. The result of the MFCC becomes the input for the training and recognition steps.
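As a small illustration of the frame blocking described above (a hypothetical sketch, not the paper's code; M and N are the quantities from Fig 3):

// Split a signal into frames of size M that overlap by N samples.
function frameBlocking(signal, M, N) {
  const hop = M - N; // step between frame starts
  const frames = [];
  for (let start = 0; start + M <= signal.length; start += hop) {
    frames.push(signal.slice(start, start + M));
  }
  return frames;
}

// Example: 8 samples, frame size 4, overlap 2 -> frames [1..4], [3..6], [5..8]
console.log(frameBlocking([1, 2, 3, 4, 5, 6, 7, 8], 4, 2));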
B. k Nearest Neighbor

The kNN method is based on supervised learning. The measurement of the similarity distance between data items is the basis of kNN for data classification. The algorithms of kNN are as follows [5][10]:

1-NN algorithm:
1. Calculate the distance between the testing datum and each training datum.
2. Determine the one data label that has the minimum distance.
3. Classify the testing datum into that label (based on step 2).

k-NN algorithm:
1. Determine the value of k.
2. Calculate the distance of each training datum to the data label.
3. Determine the data label that has the minimum distance.
4. Classify the training datum into the data label from step 3.
5. Repeat steps 2 to 4 until the number of data in each class is k.
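For concreteness, a minimal sketch of the 1-NN rule against class centers with the Euclidean distance (hypothetical JavaScript, not from the paper; the data is made up):

// Euclidean distance between two feature vectors.
function euclidean(a, b) {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

// 1-NN against class centers: pick the label whose center is closest.
function nearestCenter(centers, z) {
  let best = null;
  let bestDist = Infinity;
  for (const [label, center] of Object.entries(centers)) {
    const d = euclidean(center, z);
    if (d < bestDist) { bestDist = d; best = label; }
  }
  return { label: best, distance: bestDist };
}

const centers = { A: [110, 12], B: [38, 9] };
console.log(nearestCenter(centers, [105, 11])); // -> label 'A'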

Based on these two algorithms, this research uses the 1-NN algorithm, because it suits the research problem, which needs only one single input datum to be recognized. The Euclidean distance [9] is used in the algorithm.

III. IMPROVEMENT AND EXPERIMENT

A. Proposed Method

The voice recognition system needs a large number of training data to improve the accuracy level. Commonly, the 1-NN algorithm calculates the average of the training data and uses the average value to represent the class or label. The recognition result is obtained from the shortest distance between the testing datum and the average value of each class. This makes the recognition process spend more time; hence, a method with high speed and good accuracy is needed.

The voice recognition system also needs a method that can handle outliers in the data to increase the accuracy. The 1-NN method is not strong enough to handle them. Here is a simulation of the outlier data. The simulation uses two data series as the training data in the database. As can be seen in Fig 4, one of the members of Series1 is an outlier, and the outlier value is close to the Series2 area. This phenomenon gives the system a lower accuracy. The purpose of this research is to handle outlier data by making the distance of the outlier data closer to the correct class.

Fig 4. Simulation of outlier data. (Series1 contains the values 110, 109 and 115, with average 111.33, plus an outlier at 38.75 that lies close to the Series2 region; Series2 contains the values 12, 14 and 9, with average 11.67.)

The main idea of the method is to modify the calculation of the Euclidean distance. A variable that notes the average distance between the data of a class and its center is added to the Euclidean distance calculation. This innovation is explained in the simulation below.

Let the training data of the two classes be the sets

    X = \{x_1, x_2, \dots, x_n\} and Y = \{y_1, y_2, \dots, y_n\}.

The center \bar{x} = (\bar{x}_1, \bar{x}_2, \dots, \bar{x}_m) is the average of the training data X,

    \bar{x} = \frac{x_1 + x_2 + \dots + x_n}{n},

and \bar{y}, the average of the training data Y, is computed in the same way. Calculating the value of p, the average distance of the members of a class to its own center, for each training set gives

    p_X = \frac{1}{n} \sum_{i=1}^{n} d(x_i, \bar{x}) and p_Y = \frac{1}{n} \sum_{i=1}^{n} d(y_i, \bar{y}),

where d denotes the Euclidean distance. So, if we have a testing datum Z with voice features

    Z = \{z_1, z_2, \dots, z_m\},

the similarity distances to the two class centers are

    d_X = \sqrt{(z_1 - \bar{x}_1)^2 + \dots + (z_m - \bar{x}_m)^2} and d_Y = \sqrt{(z_1 - \bar{y}_1)^2 + \dots + (z_m - \bar{y}_m)^2},

and the double distance measure combines each of them with the recorded average intra-class distance,

    D_X = \lvert d_X - p_X \rvert and D_Y = \lvert d_Y - p_Y \rvert.

So, the result of the recognition is the class with the minimum of D_X and D_Y.

B. Experiment

The purpose of the experiment is to obtain the accuracy of the kNN and double distance methods. The data of the experiment was obtained from 11 (eleven) voice-recording subjects. Each subject's voice was recorded three times, saying "computer". The recording took place under quiet conditions, free from noise. The speed of pronunciation was constant for each subject and consistent, without intonation variation. The sampling rate used in the experiment is 8000 Hz. The Audacity program was used to record the voice data, which was saved in .wav format. Fig 5 shows that the frame rate used is 8000 Hz, in mono.

Fig 5. Experiment data file in .wav format (8000 Hz frame rate, mono).
The experiment consist of two parts; the first part is
experiment using kNN method with k = 1. The testing voice
data is compared to the average of each subject’s voice

31
Back to Contents

The minimum distance is taken as the recognition result.

The double distance method is used in the second experiment, on the same data as the first experiment. The recognition result is again obtained from the minimum distance.

The total number of positive recognition results divided by the total number of experiments and multiplied by 100 is used as the accuracy percentage. The two experiments are compared to reach the conclusion of the research.

IV. RESULT AND DISCUSSION

The experiment using the kNN method with k = 1 calculates the similarity between the testing data and the center of each label's data, where the center is the average of each subject's voice features. The testing was performed three times for each subject, so we got 33 experiment results; they are shown in Table 1. Based on the testing results, we got 28 positive recognitions. The A, B, and K subjects were each recognized twice, and the G subject was recognized only once; the C, D, E, F, H, I, and J subjects were each recognized three times. Based on this testing, the accuracy level is 84.85%, obtained from 28 divided by 33 and multiplied by 100%.

Table 1 Testing result with kNN average method

No | Subject | Total of positive result | Minimum Euclidean distance
1  | A | 2 | 15.9144
2  | B | 2 | 28.9086
3  | C | 3 | 20.5702
4  | D | 3 | 20.7815
5  | E | 3 | 33.5138
6  | F | 3 | 15.3762
7  | G | 1 | 21.863
8  | H | 3 | 37.4621
9  | I | 3 | 22.7772
10 | J | 3 | 22.7772
11 | K | 2 | 31.685

The double distance method was tested in the second experiment, using the data from the first experiment. The results of the second testing are shown in Table 2. The total number of positive results is 32 voices; only the B subject was recognized less than three times (twice). So, the accuracy of the second experiment is 96.97%.

Table 2 Testing result with double distance method

No | Subject | Total of positive result | Minimum Euclidean distance
1  | A | 3 | 8.265
2  | B | 2 | 0
3  | C | 3 | 11.0303
4  | D | 3 | 14.3835
5  | E | 3 | 19.2088
6  | F | 3 | 9.0439
7  | G | 3 | 8.2326
8  | H | 3 | 21.8908
9  | I | 3 | 11.1467
10 | J | 3 | 11.7415
11 | K | 3 | 16.2913

V. CONCLUSION AND FUTURE WORKS

This paper has described an improvement of the kNN method that handles outlier data by using the double distance method. The testing experiments show that the double distance method improves the recognition accuracy: the accuracy of the double distance method is higher than that of the average kNN method, especially for outlier data. Different types of data can be the subject of future work; the data can be images or other formats. Besides that, a comparison of the double distance method with other machine learning methods, such as Hidden Markov Models, Neural Networks, Linear Predictive Coding, etc., can be the next research topic.

ACKNOWLEDGMENT

This research is supported by the University of Multimedia Nusantara (www.umn.ac.id). The publication of this paper is supported by the Ministry of Research, Technology and Higher Education of Indonesia. The writers would like to thank the Pattern Recognition Laboratory of the University of Tarumanagara, which supported the data collection process. Thanks also to the colleagues of the Machine Learning 2011 class in Computer Science, University of Indonesia, who supported the idea and experiment.

REFERENCES

[1] Fomby, T. "K-Nearest Neighbors Algorithm: Prediction and Classification." (2008).
[2] Ganchev, T.D. Speaker Recognition. Patras, 2005.
[3] Gilke, Mandar, Rohit Kothalikar and Varum Pius Rodrigues. "MFCC-based Vocal Emotion Recognition Using ANN." Singapore: IACSIT Press, 2012.
[4] H. Hermansky and N. Morgan. "RASTA Processing of Speech." IEEE Trans. on Speech and Audio Processing, 2 (1994): 578-589.
[5] Kaghyan, Sahak and Hakob Sarukhanyan. "Activity Recognition Using K-Nearest Neighbor Algorithm on Smartphone with Tri-Axial Accelerometer." 1 (2012).
[6] Md. Rashidul Hasan, Mustafa Jamil, Md. Golam Rabbani, Md. Saifur Rahman. "Speaker identification using mel frequency cepstral coefficients." ICECE 2004, 28-30 December 2004, Dhaka, Bangladesh.
[7] Muda, Lindasalwa, Mumtaj Begam and I. Elamvazuthi. "Voice Recognition Algorithms using Mel Frequency Cepstral Coefficient (MFCC) and Dynamic Time Warping (DTW) Techniques." Journal of Computing (2010): 138.
[8] Shrawankar, Urmila and Vilas Thakare. "A Hybrid Method for Automatic Speech Recognition Performance Improvement in Real World Noisy Environment." Journal of Computer Science (2013): 94-104.
[9] Thakur, Akanksha Singh and Namrata Sahayam. "Speech Recognition Using Euclidean Distance." 3.3 (2013).
[10] Teknomo, K. "K-Nearest Neighbor Tutorial." http://people.revoledu.com/kardi/tutorial/KNN/index.html (2006).
Intelligent Robot's Behavior Based on Fuzzy Control System

Eva Volna, Martin Kotyrba, Michal Jalůvka
Department of Informatics and Computers
University of Ostrava
Ostrava, Czech Republic
{eva.volna, martin.kotyrba}@osu.cz, Jaluvka1@seznam.cz
Abstract—The article presents a design of behavior logic for an autonomous robot performing 2D mapping of an area. The robot's decisions are based on fuzzy set theory and fuzzy logic, which enable it to deduce conclusions from an imprecise description of the given situation using linguistically formulated fuzzy IF-THEN rules. These rules are integrated into the robot's programmable device. The robot construction is based on the robotic construction set LEGO MINDSTORMS EV3. The robot functionality comprises control of the EV3 components, area mapping, saving of the gained data, and map creation of the given area. The article presents the developed program which realizes these robot functionalities. In the conclusion, the mapping results are evaluated and compared with the real situation.

Keywords—Linguistic Fuzzy Logic Controller (LFLC); intelligent system; robot behavior

I. FUZZY LOGIC CONTROLLER

Linguistic Fuzzy Logic Controller (LFLC) 2000 is a complex tool for designing linguistic descriptions and fuzzy control based on these descriptions. The methodology and theoretical results upon which LFLC 2000 is based are described in [2]. The theory elaborates the part of the semantics which consists of the so-called evaluating and conditional linguistic expressions. The former are expressions such as 'small, roughly medium, very big', etc. The latter are the well-known fuzzy IF-THEN rules. These are usually gathered into sets called linguistic descriptions, which take the form [7]

ℜ1 := IF X is A1 THEN Y is B1
⋮      (1)
ℜm := IF X is Am THEN Y is Bm

where Ai, Bi are the mentioned evaluating linguistic expressions. They characterize properties of some features of objects, e.g. size, volume, force, strength, etc. Since we are not usually interested in concrete objects and their features, we replace them with real numbers, which are then represented by the variables X and Y. Thus, the values of X and Y represent, e.g., values of temperature, pressure, price, etc. A linguistic expression of the form 'X is A' is called an evaluating linguistic predication. Fuzzy IF-THEN rules serve as a basis for approximate reasoning, which is a method for finding a conclusion on the basis of imprecise initial information concentrated in the form of a linguistic description and some new information. There are two fundamental approximate reasoning methods:

• Linguistically based fuzzy logical deduction, i.e. finding a formal conclusion when fuzzy IF-THEN rules are treated as linguistically characterized logical implications.

• Fuzzy approximation of a function, i.e. finding a function which approximates some only imprecisely known function, whose course is estimated using the linguistic description.

The interpretation of the linguistic description significantly depends on the selected method. The most specific feature of LFLC is the possibility to realize fuzzy logical deduction when the rules are interpreted as linguistically characterized logical implications. In the concept of LFLC, we deal with evaluating linguistic expressions (possibly with signs) which have the general form [2]

〈linguistic modifier〉〈atomic term〉      (2)

where 〈atomic term〉 is one of the words 'small, medium, big', or 'zero' (possibly also an arbitrary symmetric fuzzy number), and 〈linguistic modifier〉 is an intensifying adverb, such as 'very', 'roughly', etc.

The meaning of each linguistic expression A has two constituents: the intension Int(A) and the extension Ext_w(A) in some model w (this is often called the possible world). The intension of a linguistic expression is a formal characterization of the property it denotes on the level of formal syntax. It can be interpreted as a fuzzy set of special formulas [5]. However, it is a rather abstract concept, which in a concrete situation (context) determines a fuzzy set of elements. Mathematically, this means that a model w is given whose support is some set U (taken usually as a closed interval of real numbers). Then the extension of A is a fuzzy set of elements Ext_w(A) ⊂ U, which is determined by its intension Int(A). Note that for each concrete situation, a different model should be considered; the intension, however, is still the same. Unlike fuzzy approximation, where we deal with fuzzy sets in a model (i.e. on the level of semantics), logical deduction must proceed on syntax. Let us consider a linguistic description consisting of two rules:
ℜ1 := IF X is small AND Y is small THEN Z is big
ℜ2 := IF X is big AND Y is big THEN Z is small      (3)

These rules are assigned the intensions Int(ℜ1), Int(ℜ2), which can schematically be written as

Int(ℜ1) = Sm_X ∧ Sm_Y ⟹ Bi_Z
Int(ℜ2) = Bi_X ∧ Bi_Y ⟹ Sm_Z      (4)

Furthermore, let X, Y, Z be interpreted in a model which consists of three sets U = V = W = [0, 1]. Then small values are values around 0.3 (and smaller) and big ones around 0.7 (and bigger). The value 0.3 is represented in the formal system by a certain intension Sm'_X and, similarly, the value 0.25 is represented by Sm'_Y. Then the inference rule of modus ponens is applied to Sm'_X, Sm'_Y and the implication (4). The result is the intension Bi'_Z. The latter is to be interpreted as a fuzzy set B' ⊂ W. To obtain one concrete value, the resulting fuzzy set B' should further be defuzzified [2]. The software was developed at the Institute for Research and Applications of Fuzzy Modeling (IRAFM) at the University of Ostrava. Its demo version LFLC 2000 (with a limited number of rules) is freely available on the website http://irafm.osu.cz/en/c100_software.

II. 2D MAPPING OF SPACE USING LFLC

A. The Proposed Rule Base

The base of rules is designed so that the robot can decide how to obtain the coordinates of obstacles. Ideally, the sensor scanning for obstacles behaves like a radar which points forward relative to the driving direction of the robot and is rotated by 180° right or left (Fig. 1). Scanning by the sensor is based on the assumption that its location is [0, 0] (marked with a purple dot). The distance of obstacle and the angle are the important input variables for the decision. A rule base covering the full circle would be too extensive, so it is sufficient to propose a rule base scanning the first quadrant, i.e. angles between 0° - 90°. A further rule base then determines in which quadrant the obstacle is scanned. The resulting location (square region) has the dimensions 15 × 15 cm.

Fig. 1. Full view of the scanned area

The input variable distance of obstacle is expressed in mm, ranges over 0 - 1000 mm, and comprises a set of terms, each representing the space between two adjacent circles. This distance is, in fact, 75 mm. In LFLC, the linguistic description is defined using the available linguistic expressions. For the variable distance of obstacle, we used the following: extremely small (ex sm), significantly small (si sm), very small (ve sm), roughly small (ro sm), small (sm), more or less medium (ml me), roughly medium (ro me), medium (me), significantly medium (si me), big (bi), roughly big (ro bi), very big (ve bi), significantly big (si bi), extremely big (ex bi). Each of these terms has a trapezoidal membership function. The terms are shown in the above-mentioned order in Fig. 2; the membership function ex sm is highlighted in red.

Fig. 2. Fuzzification of the input variable distance of obstacle

The input variable angle expresses the rotation of the sensor, which is in the range 0° - 90°. It is composed of a set of terms which have triangular membership functions, see Fig. 3. For the variable angle, we used the following linguistic expressions: very small (ve sm), small (sm), more or less medium (ml me), medium (me), big (bi), very big (ve bi), significantly big (si bi). In Fig. 3, the membership function ve sm is highlighted in red.

Fig. 3. Fuzzification of the input variable angle
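For concreteness, here is a small sketch (not part of LFLC 2000) of trapezoidal and triangular membership functions of the kind used for the distance of obstacle and angle variables; the breakpoint values below are invented for illustration and are not the paper's actual term definitions.

```python
def trapezoid(x, a, b, c, d):
    # Trapezoidal membership: 1 on [b, c], 0 outside [a, d], linear slopes
    if b <= x <= c:
        return 1.0
    if x < a or x > d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def triangle(x, a, b, c):
    # Triangular membership as a degenerate trapezoid with a single peak b
    return trapezoid(x, a, b, b, c)

# Illustrative terms (breakpoints are assumptions, not the paper's values):
ex_sm = lambda dist: trapezoid(dist, 0, 0, 40, 75)  # "extremely small" distance
ve_sm = lambda ang: triangle(ang, 0, 0, 15)         # "very small" angle

print(ex_sm(30.0), ve_sm(10.0))  # membership degrees in [0, 1]
```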
The output variable coordinate depends on the input variables distance of obstacle and angle and represents the x-coordinate, i.e. the order of the square in the horizontal direction from [0, 0], see Fig. 4. The variable coordinate is composed of a set of terms which have triangular membership functions. Here we used the following linguistic expressions: very small (ve sm), small (sm), more or less medium (ml me), medium (me), big (bi), very big (ve bi), significantly big (si bi). The base of IF-THEN rules is composed according to eq. (3) as follows: if the measured length falls between neighboring circles (the input variable distance of obstacle) at an appropriate angle (the input variable angle) and it is located entirely within one square area, the obstacle is located precisely in this square. If the measured length overlaps two square areas, we decide that the intersecting area closest to the sensor is the resultant square. The proposed rule base includes 98 IF-THEN rules:

ℜ1 := IF X is A1 AND Y is B1 THEN Z is C1
⋯      (5)
ℜ98 := IF X is A98 AND Y is B98 THEN Z is C98

The base of IF-THEN rules representing the quadrant in which the obstacle is located is composed as follows.
This rule base has the input variable angle and the output variable sign. The input variable angle takes the values of the rotated angle and the rotated robot; the values can range from -359° to 359°. Using these rules, we get the real obstacle position, i.e. the quadrant where it is located (Fig. 1). To obtain the y coordinate, we gradually increase the angle by 90°. It is important to ensure that the angle value stays within the defined range of the interval for this variable.

Fig. 4. The output variable - coordinates

B. The Test Robot

The test robot is assembled from LEGO MINDSTORMS EV3 [1]. The robot has a programmable EV3 brick with the Linux-based leJOS operating system [6]. Furthermore, it uses two large motors for movement, one medium motor to rotate the sensor, and one ultrasonic sensor. It is designed so that the belt length, by means of which it moves, and the distance between the two belts are the same as a sensed square region. We consider the size of the chassis with belts (13.5 × 13.5 cm) identical to an area of size 15 × 15 cm. The medium motor allows the sensor to rotate from -105° to 105°; scanning by the sensor is defined in the range from -90° to 90°. The top view of the robot is shown in Fig. 5.

Fig. 5. The test robot

C. Implementation

The proposed approach, based on fuzzy logic, implements the behavior of the robot for mapping the space. It is divided into five parts (layers) representing particular functionalities, see Fig. 6 [4].

Fig. 6. The use-case diagram - robot behavior

The LFLC program allows generating a CSV file [3] in which each row holds a combination of input variable values (distance of obstacle and angle) together with the final output value obtained after performing inference. Thanks to this file, it is easy to find the output value from the row whose input values approximately match the values entering the defined base of rules.

The class IOVariableClass, created in the inference layer, is adapted for obtaining output values according to the respective input values received from the CSV file. An ordered n-tuple of input variables is stored here, where n is the number of input linguistic variables, including the appropriate output value. We use two instances of IOVariableClass, because we use two bases of rules. Each instance stores data from which we receive output values according to given input values; this is done in real time during the runtime.
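A sketch of the kind of lookup IOVariableClass performs according to the description above: each CSV row holds a (distance of obstacle, angle) input combination plus the inferred output, and for a new measurement the row whose inputs approximately match is selected. The file name and column layout are assumptions for illustration; the original implementation is in Java.

```python
import csv

def load_rule_table(path):
    # Each row: distance_mm, angle_deg, output (column layout assumed)
    with open(path, newline="") as f:
        return [(float(d), float(a), float(out)) for d, a, out in csv.reader(f)]

def nearest_output(table, distance, angle):
    # Pick the row whose (distance, angle) pair best matches the measured
    # inputs, i.e. minimizes the squared difference to them.
    row = min(table, key=lambda r: (r[0] - distance) ** 2 + (r[1] - angle) ** 2)
    return row[2]

# Hypothetical usage with a hypothetical file name:
# table = load_rule_table("coordinate_rules.csv")
# print(nearest_output(table, 430.0, 30.0))
```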
The class EV3Controller, created in the EV3 brick control layer, controls particular parts of the robot (motors, sensors). The objective of the class is to define instances of the motors (left and right large motor, medium motor) and the ultrasonic sensor. The important parameters for controlling these devices are the following: the maximal measurable distance of the sensor (set to 1000 mm), the angle of rotation of the sensor (set to 15°), and the traveled distance of the motors, where movement occurs square by square (set to 15 cm). Additional parameters for the synchronization of the two large motors are the following: the diameter of the drive wheel (30 mm) and the distance between the centers of the two drive wheels (104.5 mm). Parameters defined in this way ensure rotation as well as displacement of the robot with the desired accuracy. GetDistance is an important method in which five samples are scanned and subsequently averaged; it is also used to check that the values do not exceed the maximal measurable distance. Thanks to this instance, we have secured the movement of the robot, its rotation, and the obtaining of input values for further processing.

There are two classes, Quadrant and Memory, created in the memory management layer. Quadrant represents a square area whose parameters are Boolean. This class is the basic cell for storing the scanned space with indications of obstacles in the area. Memory includes a two-dimensional array of the class Quadrant, which is by default set to 101 × 101.
The reason for introducing this class is the transfer of coordinates from the coordinate system into the memory as a two-dimensional array that represents a 2D map of the scanned area. The X-axis and Y-axis are converted to a row index and a column index, respectively, in the array. Here we enter the final coordinates representing the locations of obstacles on the map.

The class Main, created in the scanning process layer, manages the whole process of the robot. It decides in which direction the robot will move by calculating the number of areas in each direction that have been explored from the sensor coordinates (it is also ensured that the sensing distance is not exceeded). Furthermore, it sets the angle of the robot and the direction the robot should take, as well as how many areas are available. The creation of the output image is also realized in this layer. The coordinates of the robot and the sensor are initially set to [0;0] and [0;1]. These coordinates do not have the same values because the sensor is about one square area farther than the body of the robot, while the mapping of space is oriented according to the coordinates of the sensor. This does not apply to the movement and rotation of the robot, which depend on the coordinates of the robot's body, so it is necessary to continuously recalculate the coordinates of the sensor according to the coordinates of the body (Fig. 5). The files for the rule bases are also set in this layer. The process of scanning the space is set to n iterations (in our experiments, n = 15). Each iteration consists of the following sub-activities (Fig. 7):

• scanning of the space
• recording coordinates into the memory
• determining the direction of the travel
• updating the location and direction of the travel of the robot

The output image is created after all iterations, and the program is then terminated.

Fig. 7. The test robot

The class ImageBuilder, created in the output image layer, creates an output image of the map after the scanning process finishes; the image is drawn from the Memory instance (i.e. the two-dimensional array of dimension 101 × 101). The ImageBuilder instance defines the colors of the pixels for the scanned area (which is divided into squares) as follows (Figs. 8-11):

• white color: the explored-square indication is false and the obstacle indication is false
• black color: the explored-square indication is true and the obstacle indication is true
• gray color: the explored-square indication is true and the obstacle indication is false
• blue color: represents the initial location of the robot's body
• red color: represents the initial placement of the sensor

After initialization, the instance creates an image of the map with a resolution of 101 × 101 pixels, which is saved in a BMP file. Here, one pixel represents a space of dimension 15 × 15 cm. Thanks to the miniature image consisting of elementary pixels, we are able to store more than one output image on the EV3 device.
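The following Python sketch mirrors the Memory/ImageBuilder behavior described above (the original classes are Java/leJOS, so this is only an illustration): a 101 × 101 grid of (explored, obstacle) flags is converted to pixel colors using the rules just listed, with a coordinate shift that maps map coordinates around [0, 0] to array indices. The blue and red markers for the initial robot and sensor positions are omitted for brevity.

```python
SIZE = 101
OFFSET = SIZE // 2  # map coordinate 0 maps to the middle row/column

WHITE, BLACK, GRAY = (255, 255, 255), (0, 0, 0), (128, 128, 128)

def color(explored, obstacle):
    # white: unexplored; black: explored obstacle; gray: explored free space
    if not explored:
        return WHITE
    return BLACK if obstacle else GRAY

def to_index(x, y):
    # X-axis -> row index, Y-axis -> column index, as described in the text
    return x + OFFSET, y + OFFSET

# grid[row][col] holds (explored, obstacle) flags for one 15 x 15 cm square
grid = [[(False, False)] * SIZE for _ in range(SIZE)]
r, c = to_index(3, -2)
grid[r][c] = (True, True)  # record an obstacle at map coordinate [3, -2]

pixels = [[color(*cell) for cell in row] for row in grid]
# pixels could now be written to a 101 x 101 BMP, e.g. with Pillow.
```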
III. EXPERIMENTAL STUDY

Consider a specified space where we place the test robot, which scans this space and creates its map. The scanned area is covered with a grid whose cells have dimensions of 5 × 5 cm. This grid makes it easier to determine the locations of obstacles and of the test robot, and it also facilitates the orientation of the robot in the space. During the experiment, the robot senses its surroundings and records coordinates, with the corresponding colors of pixels, into its memory. After scanning, it creates the output image, i.e. a map of the scanned area. The resulting map was compared with a map of the real space, i.e. with what an ideal final map created by the robot should look like.

The location of the robot and its positioning on the starting position were chosen differently in each experiment. They are successively displayed in Figures 8 - 10: (a) the scanned area, (b) the map of the area, (c) the map created by the robot. Black areas represent obstacles, blue areas represent the body of the robot, and the red area represents the sensor.

Figure 11 shows the continuous scanning of the area during experiment 1 (15 iterations). The figure is of poor quality due to the miniature image consisting of 101 × 101 pixels, where one pixel represents a space of dimension 15 × 15 cm.
Fig. 8. Experiment 1 (a) The scanned area. (b) The map of the area. (c) The map created by the robot.

Fig. 9. Experiment 2 (a) The scanned area. (b) The map of the area. (c) The map created by the robot.

Fig. 10. Experiment 3 (a) The scanned area. (b) The map of the area. (c) The map created by the robot.

IV. CONCLUSION

The main purpose of the LFLC software system is the design, testing and learning of linguistic descriptions. We used these descriptions to control the robot's behavior. The resulting maps of all executed experiments almost coincide with the map of the real space. We analyzed the test results and reached the following conclusions explaining the inaccuracies of the scanning, whose elimination will be the focus of our future work.

• The robot does not move the precise distance and does not rotate by exactly the angle we want to achieve. This is due to the deviation of the track and leads to rewriting the parameters of the scanned square area.

• The sensor does not have an appropriate location. This causes poor evaluation of the direction for the next iteration of scanning; e.g. the body of the robot stands next to an obstacle, but the adjacent sensor sees a clear path. This results in rotating the robot, causing a collision of the sensor with the obstacle. In any case, there has been continual rotation left and right.

• During scanning, the locations of obstacles are overwritten. It happens that the sensor does not notice an obstacle in front of it at a certain angle. This causes a previously recorded obstacle to be evaluated as an area without obstacles, so the black point is overwritten with a gray point in the resulting map (Fig. 11).

• Recording areas outside the scanned area is caused by the fact that an obstacle is made of a material through which ultrasound waves pass.

ACKNOWLEDGMENT

This work was supported by the SGS grant of the University of Ostrava.

REFERENCES

[1] Bagnall, B. Maximum LEGO EV3: Building Robots with Java Brains. Variant Press, 2014.
[2] Dvořák, A., Habiballa, H., Novák, V., Pavliska, V. The concept of LFLC 2000 - its specificity, realization and power of applications. Computers in Industry, 2003, 51(3), 269-280.
[3] Habiballa, H., Volna, E., Janosek, M., Kotyrba, M. Modelling and reasoning with fuzzy logic redundant knowledge bases. In Proceedings of the 27th European Conference on Modelling and Simulation, ECMS 2013, Norway, pp. 361-366.
[4] Jalůvka, M. Using fuzzy logic for autonomous robot control - 2D mapping of space (in Czech). Ostrava, 2015. Bachelor thesis. Faculty of Science, University of Ostrava.
[5] Novák, V., Perfilieva, I. and Močkoř, J. Mathematical Principles of Fuzzy Logic. Kluwer Academic Publishers, Boston, 1999.
[6] Solorzano, J., Bagnall, B., Stuber, J., and Andrews, P. leJOS, Java for LEGO Mindstorms, 2012.
[7] Zadeh, L. A. Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets and Systems, 1997, 90(2), 111-127.

Fig. 11. Experiment 1 - continuous scanning. The figure is of poor quality due to the miniature image consisting of 101 × 101 px, where one pixel represents a space of dimension 15 × 15 cm. The 1st, 5th, 10th and 15th iterations are shown.
Effects of Required Coefficient of Friction for Female in Different Gait

Yu Ting Chen, Department of Industrial Management, Chung Hua University, Hsin-Chu, Taiwan, p2f3020@gmail.com
Kai Way Li, Department of Industrial Management, Chung Hua University, Hsin-Chu, Taiwan, kai@chu.edu.tw
Yi Yang Chen, Department of Physical Education, National Hsinchu University of Education, Hsin-Chu, Taiwan, hc7022709@gmail.com
Abstract—The aim of this study was to investigate the effects of level walking and stair climbing (ascending and descending) with different gaits on the required coefficient of friction (RCOF). We recruited eight healthy female adult subjects for our experiment. The RCOF of the subjects walking on a level walkway and on a staircase, barefoot and wearing lab shoes, in full stride and half stride, was analyzed using a Bertec® force plate. Descriptive statistics, two-way ANOVA and correlation analyses were adopted to analyze the data. The results showed that the RCOF of stair climbing was higher than that of the level walking condition, whether the subjects were barefoot or shod. The RCOFs of the full stride were higher than those of the half stride.

Keywords—gait; required coefficient of friction; stairs climbing; slips and falls.

I. INTRODUCTION

Slip and fall incidents in workplaces are very serious issues of occupational safety and health [1]. The literature has shown that falls easily result in fractures, especially for women who are overweight or in menopause. The losses of labor productivity, medical payments and injury compensation due to falls at work are a heavy burden [2] for our society. When walking, sliding of the foot on the floor occurs due to a lack of friction between the floor and the shoe sole or foot. This contributes to the risk of falls [3]. However, there are many kinds of floors in the workplace, such as flat floors, lumpy floors, or stairs. The literature has shown that falls on stairs are a common industrial injury for both young and older workers, and approximately 10% of fall-related deaths occur on stairs [4]. How to prevent fall accidents during stair climbing is therefore an important issue. For falls, the frictional demands and the ground reaction force of the foot on the floor during walking are of considerable relevance [5].

The ground reaction force (GRF) and the required coefficient of friction (RCOF) can be measured with a force plate, and many studies have determined the required coefficient of friction in this way. One study investigated the effects on RCOF of three gait speeds [6]. Another study used three force plates to measure the difference in RCOF between the right foot and the left foot [7]. The RCOF is obtained by dividing the measured horizontal ground reaction forces by the vertical ground reaction force at the same instant [8, 9]. Fig. 1 shows the first and second peak values of RCOF in the typical gait cycle. The first peak usually occurs when the foot steps on the ground, and the second peak occurs before the foot pushes off. Those values are the basis for preventing slipping or falling [10, 11].

Fig. 1. Required coefficient of friction (RCOF) during walking on a flat floor, with the first peak and the second peak [8].

Based on the literature reviewed, the gait process, floors of different materials, friction, reaction force and individual capability are all factors affecting the risk of slips and falls. The reaction force and required coefficient of friction are normally collected using a force plate. The purpose of this study was to investigate the variability of the required coefficient of friction of the lower limbs in different gait conditions.

II. METHOD

A. Participants

In this experiment, eight female adults with no history of lower limb disease were recruited as human subjects. Participants signed an informed consent prior to joining the experiment. Their body weight, height, foot length, and thigh and calf lengths were measured and recorded. The basic data of the subjects are shown in Table I.
TABLE I. SUBJECTS BASIC INFORMATION (N = 8)

      | Age (year) | Height (cm) | Weight (kg) | Thigh length (cm) | Calf length (cm) | Foot length (cm)
Mean  | 21         | 162         | 55          | 41                | 39               | 20
Std.  | 2          | 6.8         | 12.4        | 4.5               | 3.6              | 1.6

B. Stride Length

In this study, there were two stride length conditions: full stride and half stride. Before the experiment, the subject walked on a three-meter walkway, and the gait was filmed using a high-speed camera. The first and the last gait cycles were used to determine the subject's average stride length. In the half stride condition, the subject was requested to walk at her average stride length; in the full stride condition, she was requested to walk at double her half stride.

C. Apparatus

Both a walkway and a staircase were used in the experiment. The walkway was approximately 5 m long and 1.2 m wide, and a Bertec® force plate was installed in it. The staircase included four steps, each with a run of 32 cm and a rise of 18 cm, and was 90 cm wide; the force plate was placed on the ground as the first step (see Figs. 2 and 3). The sampling rate of the force plate was 1000 Hz.

Fig. 2. The staircase (left) and the walkway (right) in the experiment.

D. Experimental Procedure

There were three factors: walkway, gait, and footwear. The walkway factor included walking on a level walkway, stair ascending and stair descending; this comprised a 3-level condition. The gait pattern included walking at full or half stride. The footwear condition included barefoot and shod conditions. There were a total of 12 experimental conditions, and each subject performed two trials in each condition. When the subjects walked on the walkway, a start line based on the subject's stride length was marked on the floor in front of the force plate. The subject needed to walk twice for full stride and half stride each, barefoot and in footwear. Then, the subject needed to walk up and down the stairs in the full stride and half stride conditions. The foot stepping on the force plate had to be the subject's preferred foot. The force plate collected the ground reaction force for each of the experimental conditions (see Fig. 4 and Fig. 5).

Fig. 3. Bertec® Force Plate.

Fig. 4. The staircase with the start lines marked and the force plate placed.

Fig. 5. The walkway with the start lines marked and the force plate placed.

E. Data Analysis

The RCOF data collected were analyzed using descriptive statistics, two-way ANOVA and correlation analyses. Microsoft® Excel 2010 and SAS® 8.0 were adopted for data processing and statistical analyses.
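To illustrate how an RCOF trace and its two peaks can be extracted from force plate samples (a sketch under assumed column meanings and thresholds, not the authors' actual processing pipeline), the horizontal shear force can be divided by the vertical force sample by sample:

```python
import numpy as np

def rcof_trace(fx, fy, fz, fz_threshold=50.0):
    # RCOF(t) = sqrt(Fx^2 + Fy^2) / Fz, computed only while the foot
    # loads the plate (Fz above a small threshold; 50 N is an assumption).
    fx, fy, fz = map(np.asarray, (fx, fy, fz))
    stance = fz > fz_threshold
    rcof = np.zeros(len(fz), dtype=float)
    rcof[stance] = np.hypot(fx[stance], fy[stance]) / fz[stance]
    return rcof

def first_and_second_peak(rcof):
    # Crude split of the stance phase in half: the first peak occurs just
    # after heel contact, the second shortly before push-off.
    active = np.flatnonzero(rcof)
    mid = active[len(active) // 2]
    return rcof[:mid].max(), rcof[mid:].max()

# Hypothetical usage with 1000 Hz force plate channels fx, fy, fz:
# rcof1, rcof2 = first_and_second_peak(rcof_trace(fx, fy, fz))
```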

III. RESULTS

Fig. 6 shows the first peak value of RCOF (RCOF1) under the barefoot condition. The descending RCOF1 of the full stride and the half stride were at approximately the same level. For level walking and ascending, the full stride had a higher RCOF1 than the half stride. Fig. 7 shows the first peak value of RCOF under the shod condition. For level walking, the RCOF1 values for the half stride and full stride were approximately the same. For the ascending condition, the RCOF1 of the full stride was higher than that of the half stride condition. For descending, however, the RCOF1 of the half stride condition was higher than that of the full stride.

Fig. 6. First peak value of RCOF in different gait under barefoot condition.

Fig. 7. First peak value of RCOF in different gait under shod condition.

Fig. 8 shows the second peak value of RCOF (RCOF2) under the barefoot condition. The RCOF2 of the full stride was higher than that of the half stride condition in descending and slightly higher in the ascending condition. However, the RCOF2 of the full stride condition was slightly lower than that of the half stride condition in level walking. Fig. 9 shows the second peak value of RCOF under the shod condition. The RCOF2 values of the full stride conditions were higher than those of the half stride condition in both ascending and descending; in level walking, however, the RCOF2 values of the full stride and half stride were approximately the same.

Fig. 8. Second peak value of RCOF in different gait under barefoot condition.

Fig. 9. Second peak value of RCOF in different gait under shod condition.

The two-way ANOVA results for the RCOF1 of the eight subjects and 12 walking conditions are shown in Table II. The factors mode (walking on the straight walkway, stair ascending and descending) and step (full stride and half stride) were both statistically significant (p < 0.0001). Fig. 10 shows that the interaction of mode and step also reached statistical significance (p < 0.0001).

TABLE II. RCOF1 OF TWO-WAY ANOVA

Source          | DF | SS   | MS   | F Value | Pr > F
Mode            | 2  | 0.04 | 0.02 | 35.88   | 0.0001
Step            | 1  | 0.01 | 0.01 | 18.38   | 0.0001
Mode*Step       | 2  | 0.01 | 0.01 | 11.46   | 0.0001
Shod            | 1  | 0.00 | 0.00 | 2.09    | 0.15
Mode*Footwear   | 2  | 0.00 | 0.00 | 0.13    | 0.88
Step*Shod       | 1  | 0.00 | 0.00 | 0.3     | 0.59
Mode*Step*Shod  | 2  | 0.00 | 0.00 | 2.21    | 0.11
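A sketch of how a comparable two-way ANOVA with an interaction term can be reproduced with standard Python tooling; the paper's analysis was run in SAS 8.0, and the small data frame below is synthetic, so the numbers will not match Tables II and III.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: one RCOF1 value per trial, balanced
# over the 3 modes x 2 steps (two replications of each combination)
df = pd.DataFrame({
    "rcof1": [0.18, 0.20, 0.22, 0.21, 0.25, 0.27,
              0.19, 0.21, 0.23, 0.22, 0.26, 0.28],
    "mode":  ["level", "level", "ascend", "ascend",
              "descend", "descend"] * 2,
    "step":  ["full", "half"] * 6,
})

# Main effects of mode and step plus their interaction, as in Table II
model = ols("rcof1 ~ C(mode) * C(step)", data=df).fit()
print(anova_lm(model, typ=2))
```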


Fig. 10. The interaction of different mode and step on RCOF1.

The ANOVA results for RCOF2 are shown in Table III. The factors mode (walking on the straight walkway, stair ascending and descending) and step (full stride and half stride) both reached statistical significance (p < 0.0001). Fig. 11 shows that the interaction of mode and step also reached statistical significance (p < 0.0001).

TABLE III. RCOF2 OF TWO-WAY ANOVA

Source          | DF | SS   | MS   | F Value | Pr > F
Mode            | 2  | 1.04 | 0.52 | 59.56   | 0.0001
Step            | 1  | 0.35 | 0.35 | 39.72   | 0.0001
Mode*Step       | 2  | 0.43 | 0.22 | 24.75   | 0.0001
Shod            | 1  | 0.01 | 0.01 | 1.12    | 0.29
Mode*Footwear   | 2  | 0.01 | 0.01 | 0.71    | 0.49
Step*Shod       | 1  | 0.01 | 0.01 | 1.13    | 0.29
Mode*Step*Shod  | 2  | 0.02 | 0.01 | 1.1     | 0.33

Fig. 11. The interaction of different mode and step on RCOF2.

IV. DISCUSSION AND CONCLUSION

Slips and falls cause injuries and fatalities in the workplace. Understanding the RCOF for both level walking and stair climbing is important for developing fall prevention strategies. However, when we walk, both the ground reaction force and the stride length affect the RCOF value. In this study, the RCOF value of the stair climbing condition was higher than that of the flat floor condition, and the RCOF value of the full stride condition was significantly higher than that of the half stride condition. This implies that level walking and a half stride gait pattern will result in less risk of slipping and falling under our experimental scenario.

REFERENCES

[1] Courtney TK, Sorock G, Manning DP, Collins JW, Holbein-Jenny MA. (2001). Occupational slip, trip, and fall-related injuries - can the contribution of slipperiness be isolated? Ergonomics, 44, 1118-37.
[2] Cherry N, Parker G, McNamee R, Wall S, Chen Y, Robinson J. (2005). Falls and fractures in women at work. Occupational Medicine, 55, 292-7.
[3] Perkins PJ. Measurement of slip between the shoe and ground during walking. Walkway surfaces: measurement of slip resistance. In: Anderson C, Senne J (editors). ASTM STP 649. American Society for Testing and Materials, Philadelphia, PA; 1978, p. 71-87.
[4] Startzell, J., Owens, D., Mulfinger, L., & Cavanagh, P. (2000). Stair negotiation in older people: A review. Journal of the American Geriatrics Society, 48(5), 567-580.
[5] Yoon HY, Lockhart TE. Nonfatal occupational injuries associated with slips and falls in the United States. International Journal of Industrial Ergonomics 2006; 36:83-92.
[6] Chang WR, Chang CC, Matz S, Lesch MF (2008). A methodology to quantify the stochastic distribution of friction coefficient required for level walking. Applied Ergonomics, 39, 766-771.
[7] Fino P, Lockhart TE. (2014). Required coefficient of friction during turning at self-selected slow, normal, and fast walking speeds. Journal of Biomechanics, 47, 1395-1400.
[8] Perkins PJ, Wilson MP. Slip resistance testing of shoes - new developments. Ergonomics 1983; 26:73-82.
[9] Chang WR, Matz S, Chang CC. (2012). A comparison of required coefficient of friction for both feet in level walking. Safety Science, 50, 240-243.
[10] Buczek FL, Cavanagh PR, Kulakowski BT, Pradhan P. Slip resistance needs of the mobility disabled during level and grade walking. In: Gray BE, editor. Slips, Stumbles and Falls: Pedestrian Footwear and Surfaces, ASTM STP 1103. Philadelphia: American Society for Testing and Materials, 1990: 39-54.
[11] Christina KA, Cavanagh PR. (2002). Ground reaction forces and frictional demands during stair descent: effects of age and illumination. Gait and Posture, 15, 153-158.

An Application of Statistical Modeling for Classification of Human Motor Skill Level

Wenqi Ma, Strategic Research Group, Yanfeng Automotive Interiors, Holland, MI, US, wenqi.ma@yfai.com
David B. Kaber, Edwards P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC, US, dbkaber@ncsu.edu
Abstract—Automation technology has expanded dramatically in recent years; however, human manual work is still required in many domains. To facilitate appropriate design of manual skill training and to reduce individual differences in manual performance, there is a need to assess individual skill level in advance of training exercises. Unfortunately, current motor skill assessment approaches do not provide direct indicators of human operator skill levels (e.g., high, medium, or low). Therefore, there is also a need to identify appropriate methods by which to classify novice operators based on their initial motor performance. In the present study, a statistical model was developed to classify participant motor ability level using a set of features based on kinematic parameters generated through a computerized motor test. The final model achieved a classification accuracy of ~98% using a 75/25 cross validation approach. Results verified the reliability of the motor-control test and validated the quantitative motor skill classification algorithm. Based on this work, it is expected that the algorithm and the test could be applied to the design of novel manual skill training approaches to compensate for performance gaps among novice operators.

Keywords—motor skill assessment; motor skill classification; psychomotor testing; manual performance; statistical modeling

I. INTRODUCTION

Due to advances in automation and information technology, reliance on the human workforce has significantly decreased. However, human work is still required in many domains, such as industrial assembly operations. In order to promote uniformity of operator performance from one production unit to the next, proper training methods should be designed. Related to this, prior research [1] has revealed differences among operators in motor performance characteristics and baseline skill levels. Huegel [1] used a simple virtual reality (VR)-based target-hitting task to classify participants into three types of learners (i.e., high, low, or transitional) based on their task performance. Consequently, rather than treating a group of novice operators as comparable in skill, it would be more effective and efficient to identify individual skill levels before assigning them to training conditions or specific types of work tasks.

Numerous clinical-based techniques have been developed and applied to assess human motor skill level [2, 3, 4]. All these methods require subjective decisions to be made by experienced clinicians, and measurements are usually presented in the form of completion times, scores, or ratings. In order to generate objective measures of motor skill in an efficient manner, several other studies have made use of computer-based virtual realities (VRs) and different types of control input devices (e.g., haptic controls). For example, in [5] a VR-based simulation of the Nine-Hole-Peg-Test (NHPT) [6] was developed. Experimental results demonstrated the system to provide all the advantages of the classical NHPT (i.e., easy to administer, standardized methodology, and validated results) as well as the benefits of use of a virtual environment (e.g., a controlled test scenario, adjustable parameters, quantitative and objective measurements). Another study [7] collected movement and motor control measures using a SensAble Technologies Phantom haptic interface with a virtual environment. The responses were related to measures collected from a set of clinical assessment tests, including the NHPT, the Purdue Pegboard Test (PPT) [8], and others. Results demonstrated a high reliability and validity of measurements obtained in the virtual tasks for assessing arm and hand motor function.

Although validated as accurate and reliable, the quantitative and objective measures collected from the virtual tasks developed in the above studies do not directly indicate a specific skill level (e.g., high, medium, or low) for an individual operator. In order to generate such information from raw motor response data, a classification model must be developed. Several prior studies [9, 10] have demonstrated the use of simple linear regression models for this purpose. However, in many cases, the data collected on human performance did not satisfy the strict parametric assumptions of linear regression analysis [11]. For example, human motor response time has been identified as a key measure for evaluating motor skill; however, computer science research has demonstrated such response times to follow a lognormal distribution [12]. Consequently, such data pose violations of the assumptions of linear regression, and many results and conclusions of prior skill classification studies are actually invalid. To achieve more accurate results, Support Vector Machines (SVMs) have been applied for solving many classification problems with human performance data sets [14, 15].

SVMs are a common machine learning algorithm with the advantages of a flexible model form, robust computation and unique solution delivery [13].

On this basis, as part of an earlier study, we developed a simple VR-based block manipulation task [16]. Experimental results generated with this task indicated its potential for use as a method to evaluate participant motor ability level. However, the reliability of the task was not validated through cross-validation with other standardized assessment methods. In the present study, human motor performance was measured in another simple VR-based task, and a standardized psychomotor test was used to identify the specific skill level of participants. Both linear regression and an SVM algorithm were applied in order to develop an accurate classification model on the basis of performance in the simple task. The overarching objective of the study was to verify the capability of the basic VR simulation for assessing human motor skill level and to identify the most accurate algorithm for a skill classification model.

II. METHOD

A. Participants

A group of 21 right-handed participants (11 male, 10 female, average age = 30.2, SD = 9.2) was recruited for this experiment. All participants were required to have 20/20 or corrected vision. (If vision was corrected, participants were required to wear the corresponding prosthetic to reach 20/20 acuity during the experiments.) Any participant with current or chronic wrist disorders was excluded. All requirements were confirmed through an online questionnaire before the beginning of the experiment.

B. Apparatus

The VR simulation was presented to participants using a PC integrated with a stereoscopic display and an NVIDIA® 3D VisionTM Kit, including 3D goggles and an emitter (see Fig. 1). Stereoscopic rendering of the task simulation was facilitated by an OpenGL quad-buffered stereo, high-performance video card (NVIDIA® QuadroTM). A SensAble Technologies Phantom Omni® Haptic Device was used as the haptic control interface. The Omni included a boom-mounted stylus that supported 6 degrees-of-freedom (DOFs) of movement and 3 DOFs of force-feedback. All data on participant performance with the Omni were recorded automatically by the simulation software.

Fig. 1. Virtual reality work station

C. Tasks and procedures

All participants were asked to complete 40 trials of a computer-based dice manipulation task and three trials of the PPT. The dice manipulation task, developed in [17], involved manipulation of a virtual die using the haptic-VR workstation described above. The training virtual environment included a single virtual die placed near the left side of a virtual work surface and a square near the right side of the surface (see Fig. 2). A 2D image of a single side of a die (stimulus) was presented at the top of the screen in the display area. The goal of the task was to move the die as quickly and accurately as possible to the target square with the top surface of the die matching the stimulus.

Fig. 2. Dice manipulation task

The three trials of the standardized PPT were presented to participants after the 40 trials of the dice manipulation task. In each trial, participants were required to insert as many small pins as possible into a column of holes on a task board within a 30-second time limit. The PPT was only used to test right-hand motor skill in order to be consistent with the dice manipulation task.

D. Data processing

During the 40 trials of the dice manipulation task, kinematic response data were automatically captured with the haptic device and recorded in the form of time series with a frequency of 10 Hz. The raw data included the elapsed task time, and stylus displacement and orientation in the Cartesian coordinate frame. Based on the raw kinematic data, a set of 29 task performance parameters was computed for each trial, including time spent in different phases (reaction, target approach, manipulation), speed and acceleration in moving the virtual die along the x-, y- and z-axes during target approach, speed and acceleration in rotating the die in yaw, roll and pitch during the manipulation phase, and deviation distance and angle at the target position. These particular task parameters were selected as bases for motor skill assessment based on the results of prior research [17, 18, 19, 20] investigating performance in virtual object manipulation tasks.

After computing the 29 parameters for each of the 40 trials of the task, a 40 × 29 matrix was obtained. For each parameter, four statistical features (mean, standard deviation (SD), minimum, and maximum) were generated to describe participant performance across trials. In addition, the 40 data points corresponding to one parameter were fitted using a learning model in order to compute the task learning percentage (k-value), according to the following formula [21]:

Yn = Y1 × n^b      (1)

where Yn is task performance in the nth trial. Taking the natural log of both sides, the formula becomes:

ln(Yn) = ln(Y1) + b × ln(n)      (2)

After solving for the coefficient b, the learning parameter, by using linear regression, the learning percentage (k) can be calculated as:

k = 2^b      (3)
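A sketch of the k-value computation from Eqs. (1)-(3): the coefficient b is fitted by linear regression in log-log space over the trials, and the learning percentage follows as k = 2^b. The trial data below are synthetic.

```python
import numpy as np

def learning_percentage(y):
    # y[i] is performance (e.g., completion time) in trial i + 1.
    n = np.arange(1, len(y) + 1)
    # Eq. (2): ln(Yn) = ln(Y1) + b * ln(n); fit the slope b by least squares
    b, _ = np.polyfit(np.log(n), np.log(y), 1)
    # Eq. (3): learning percentage
    return 2.0 ** b

# Synthetic power-law learning data for 40 trials (Y1 = 12 s, b = -0.3)
rng = np.random.default_rng(0)
trials = 12.0 * np.arange(1, 41) ** -0.3 * rng.lognormal(0, 0.05, 40)
print(learning_percentage(trials))  # roughly 2**-0.3, i.e. about 0.81
```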
New dataset 1: Class attribute of “low” was converted to 0
Therefore, five features corresponded to each of the 29 and all others to 1.
parameters, yielding a set of 145 (i.e., 29 × 5) features for each New dataset 2: Class attribute of “high” was converted to 1
participant. These features were considered as potential and all others to 0.
predictors of motor skill in the classification models.
Subsequently, a binary SVM algorithm was applied to the
Based on group performance in the PPT (mean score = two new datasets separately, generating classification models
14.52, SD = 1.42), each individual was assigned to one of three with probabilities of Pr(Target > low) (i.e., the likelihood new
motor performance levels, including high, low and medium. participant performance would be higher than a low performer)
High performers were identified as those achieving an average and Pr(Target > medium) (i.e., the likelihood new participant
score at least one SD higher than the overall average PPT score performance would be higher than a medium performer). The
(higher than 15.94). Low performers were identified as those probabilities that a new participant belonged to each of the
persons achieving an average score at least one standard three groups were then computed as:
deviation lower than the overall average PPT scores (lower
than 13.10). Medium performers were identified as those Pr(Target is low) = 1 – Pr(Target > low)
persons generating an average score in between the high and Pr(Target is medium) = Pr(Target > low) – Pr(Target >
low performer levels (between 13.11 and 15.93). (The specific medium)
PPT score criteria are identified here in case the reader seeks to Pr(Target is high) = Pr(Target > medium)
make comparison with the extensive normative data on the The class with the highest estimated probability was assigned
PPT task for various study populations.) Following our to the new participant.
method, the 21 participants were labeled with three levels,
including seven participants in each level. The level To evaluate the classification accuracy, a 75/25 cross
information for each individual served as an a priori label for validation was applied for the linear regression model and
developing the classification models. SVM models. The entire experimental data set was randomly
partitioned such that 75% (16 response data points) was used to
E. Model construction train the model while the remaining 25% (5 data points) was
A linear regression model was initially developed using the used to test the model accuracy. To reduce bias, the process
above described statistical and learning features as predictors was repeated 500 times such that all data points were used as
of motor skill level and average PPT scores as responses. All both training and model verification observations. The average
145 features were available to serve as predictors; therefore, rate of error was computed as an evaluation criterion for each
the potential model factor set was much larger than the number replication in order to compare model accuracy.
of standardized test responses (i.e., 21 average PPT scores).
Thus, in order to guarantee the validity of the linear model, a F. Hypotheses
forward selection algorithm was applied. For selection of There were three hypotheses for this study. The first
features to fit the model to the data set, the adjusted R-squared hypothesis (H1) was that the computer-based VR task would
value was used as a criterion. prove to be a reliable tool for evaluating human motor skill. It
was expected that features generated from the VR task
As mentioned above, SVM has a flexible model form performance data would be highly related to results obtained
allowing for use of different kernel functions. Considering the on the standardized PPT tasks. The second hypothesis (H2)
influence of data characteristics on classification results, it was that the SVM algorithm would generate higher motor skill
would be difficult to predict the best-performing kernel classification accuracy than the linear regression model. This
function. Therefore, three different kernel functions, including hypothesis was formulated based on a review of results of prior
the radial basis function (RBF) [22], linear function, and a studies [13, 14, 15, 22]. The third hypothesis (H3) was based
polynomial function, were tested to identify the model with the on the results presented in [23] and posited that ordinal SVM
highest classification accuracy. Again, a forward selection models would yield higher prediction accuracies than original
algorithm was applied to determine the most influential feature SVM models.
set for inclusion in the SVM model.
In this study, the motor performance level determined for III. RESULTS
each participant was used as an a priori label and it contained
ordering information (i.e., high > medium > low). However, Through application of the forward selection algorithm and
the original form of the SVM algorithm does not utilize such use of the adjusted R-squared criterion, a total of 15 features
information for developing classification models. Although this were selected as predictors for inclusion in the linear
classification model. For each of the 15 intermediate linear

45
Back to Contents

models, error rate was computed using the 75/25 cross- disease diagnosis [26]. However, for the specific data collected
validation method. The changing trend of the adjusted R- in this study, a polynomial kernel function did not perform as
squared value and rate of error, along with the increase in good as it might with other data sets.
number of predictors during the selection process, is presented
in Fig. 3. For the first 11 predictors, the model accuracy To further improve the predictive model accuracy, ordinal
improved as more predictors were added to the model. SVM models were developed to exploit ordinal information
However, beginning with the model with 12 predictors, the rate contained in the participant skill level labels. The ordinal SVM
of error started to rebound with the addition of more predictors. model produced a decreased error rate as compared to the
In this case, the greater number of predictors actually degraded original SVM model with the same kernel function, which
the model classification accuracy. Therefore, the final linear supported the third hypothesis (H3). However, this
model was chosen as the one with the first 11 predictors, which improvement in accuracy was not significantly greater (as
had a prediction error rate of 8.88% and adjusted R-squared summarized in Table 2), which was likely influenced by the
value of 0.9966. relatively small number of classification groups. These results
Both original and ordinal SVM models using the RBF, linear, and polynomial kernel functions were developed. The error rates for these non-linear models are summarized in Table I, along with the selected number of features for each model. By comparing the prediction error rates generated by the various models, it was found that the ordinal SVM with the linear kernel function produced the lowest error rate, i.e., the highest prediction accuracy. Two-sample t-tests were conducted to compare classification accuracy between the original and ordinal SVM models. Results are summarized in Table II.

IV. DISCUSSION

The first hypothesis (H1), stating that human performance in the haptic-VR dice manipulation task would prove to be a reliable and efficient indicator of motor skill, as compared with standardized psychomotor tasks, was supported. The dice manipulation task performance records were used to generate a set of statistical features. Linear regression analysis revealed subsets of these features to have utility for predicting average PPT scores. The resulting linear regression model produced an adjusted R-squared value of 0.9966, indicating a good fit of performance in the dice manipulation task to the standardized psychomotor test (PPT) results. Therefore, the computer-based dice manipulation task proved to be a valid and reliable method for assessing motor skill. Most importantly, the dice manipulation task can be administered with efficiency as compared to other physical psychomotor tests (e.g., PPT, NHPT, etc.), and use of a haptic control interface with the VR simulation allows spatial and temporal data on participant motions to be captured automatically, which supports rich kinematic parameter calculations.

The second hypothesis (H2) stated that a non-linear classification model would achieve higher classification accuracy than a linear model. Results also supported this hypothesis. The ordinal SVM with the linear kernel function produced the lowest error rate and was significantly more accurate than the linear regression model (p < 0.001) in skill classification. The original SVM algorithms applying the RBF and linear kernel functions both achieved reductions in the prediction error rate relative to that obtained with the linear regression modeling approach. However, the SVM model using a polynomial kernel function degraded classification accuracy, with a notably high error rate of 26.24%. Previous research demonstrated the capability of polynomial kernel functions in studies concerning natural language processing [24], facial expression pattern recognition [25], and disease diagnosis [26]. However, for the specific data collected in this study, a polynomial kernel function did not perform as well as it might with other data sets.

To further improve predictive model accuracy, ordinal SVM models were developed to exploit the ordinal information contained in the participant skill level labels. The ordinal SVM model produced a decreased error rate as compared to the original SVM model with the same kernel function, which supported the third hypothesis (H3). However, this improvement in accuracy was not statistically significant (as summarized in Table II), which was likely influenced by the relatively small number of classification groups. These results were consistent with the conclusion of [23], which indicated that the degree of improvement of ordinal model performance over the original SVM model would increase with the number of classes. Therefore, while model prediction accuracy was not substantially improved in this study as a result of applying ordinal SVM, it is possible that an ordinal model would show superior performance to non-ordinal SVM in addressing similar motor skill classification problems with more levels of classification.

Fig. 3. Adjusted R2 values and error rates for linear models with various predictor counts

TABLE I. SUMMARY OF CLASSIFICATION ACCURACY FOR VARIOUS SVM ALGORITHMS AND KERNEL FUNCTIONS

Algorithm      Kernel function   Rate of error (%)   Number of features
Original SVM   RBF               4.56                10
Original SVM   Linear            2.44                15
Original SVM   Polynomial        26.24               4
Ordinal SVM    RBF               3.61                9
Ordinal SVM    Linear            1.96                14
Ordinal SVM    Polynomial        23.44               5

TABLE II. COMPARISON OF CLASSIFICATION MODEL ACCURACY

Comparison                                        Two-sample t-test       Statistical power (1 – β)
RBF: Original vs. Ordinal SVM                     t = 1.13, p = 0.129     0.1930
Linear: Original vs. Ordinal SVM                  t = 1.30, p = 0.096     0.2321
Polynomial: Original vs. Ordinal SVM              t = 1.19, p = 0.117     0.2074
Ordinal SVM with linear kernel vs. Linear model   t = 11.24, p < 0.001    1.0
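For reference, a two-sample t-test of the kind reported in Table II can be run as in the minimal sketch below, assuming scipy. The per-run error rates are invented placeholders, since the paper reports only the resulting t and p values.

  from scipy import stats

  # Hypothetical per-run error rates (%) for two models; the study's
  # underlying fold-level results are not reported, so these are placeholders.
  original_linear = [2.1, 2.6, 2.8, 2.3, 2.4]
  ordinal_linear = [1.8, 2.0, 2.2, 1.9, 1.9]

  t, p = stats.ttest_ind(original_linear, ordinal_linear)
  print(t, p)  # two-sided p by default; halve it for a one-sided comparison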

V. CONCLUSIONS

The objective of the present research was to verify the capability of a custom VR-based simulation for evaluating motor performance and to develop a classification model to accurately predict individual motor skill level. Performance measures collected on a simple dice manipulation task were related to results obtained in a standardized psychomotor test using linear regression. Moreover, various SVM models were developed and compared in terms of motor skill classification accuracy. The simple motor task was verified as a reliable and efficient means for assessing motor performance. The most accurate classification algorithm incorporated a non-linear model exploiting ordinal information contained in the training and verification data sets. This model was validated and proved superior in classification accuracy as compared to the linear model.

Although the classification model validated in this study produced high motor skill classification accuracy, the model was only verified using results generated from the same group of participants. In addition, the model validation was limited to use of a computational cross-validation method. To overcome these limitations in future work, it would be worthwhile to apply the model to a new group of participants and verify the classified results against performance in other motor tests. Moreover, the simple VR-based dice manipulation task could also be evaluated for predicting motor skill in other types of visual-spatial and fine-motor control tasks. It is possible that performance in the fundamental motions that are part of dice orientation and discrete movement might also be predictive of skill in graphomotor (drawing) production or tasks involving constructional praxis (3D object manipulation). Such validation could support use of the dice manipulation task for predicting performance in high-level skilled physical work.

REFERENCES
[1] J. Huegel, “Progressive haptic guidance for a dynamic task in a virtual training environment,” Doctoral Dissertation, Rice University, Houston, TX, 2009.
[2] R.C. Lyle, “A performance test for assessment of upper limb function in physical rehabilitation treatment and research,” International Journal of Rehabilitation Research, 4, pp. 483–492, 1981.
[3] E. Taub, N.E. Miller, T.A. Novack, E.W. Cook, 3rd, W.C. Fleming, C.S. Nepomuceno, J.S. Connell, and J.E. Crago, “Technique to improve chronic motor deficit after stroke,” Archives of Physical Medicine and Rehabilitation, 74(4), pp. 347–354, 1993.
[4] C.M. Light, P.H. Chappell, and P.J. Kyberd, “Establishing a standardized clinical assessment tool of pathologic and prosthetic hand function: normative data, reliability, and validity,” Archives of Physical Medicine and Rehabilitation, 83(6), pp. 776–783, 2002.
[5] C. Emery, E. Samur, O. Lambercy, H. Bleuler, and R. Gassert, “Haptic/VR assessment tool for fine motor control,” Haptics: Generating and Perceiving Tangible Sensations, pp. 186–193, 2010.
[6] V. Mathiowetz, K. Weber, N. Kashman, and G. Volland, “Adult norms for the Nine Hole Peg Test of finger dexterity,” The Occupational Therapy Journal of Research, 5, pp. 24–33, 1985.
[7] P. Feys, G. Alders, D. Gijbels, J. De Boeck, T. De Weyer, K. Coninx, C. Raymaekers, V. Truyens, P. Groenen, K. Meijer, H. Saveberg, and O.B. Eijnde, “Arm training in Multiple Sclerosis using Phantom: clinical relevance of robotic outcome measures,” 2009 IEEE International Conference on Rehabilitation Robotics, pp. 576–581.
[8] J. Tiffin, Purdue Pegboard Examiner Manual, Chicago, IL, 1968.
[9] J.D. Chase and S.P. Casali, “Introducing a methodology for selecting cursor-control devices for computer users with physical disabilities,” Proceedings of the 3rd Annual Mid-Atlantic Human Factors Conference, pp. 115–120, 1995.
[10] M. Jipp, C. Bartolein, and E. Badreddin, “Predictive validity of wheelchair driving behavior for fine motor abilities: definition of input variables for an adaptive wheelchair system,” Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, pp. 39–44, 2009.
[11] G. McLachlan, Discriminant Analysis and Statistical Pattern Recognition, 1st ed., John Wiley & Sons, Inc., Hoboken, New Jersey, 1992.
[12] M.V. Bonto-Kane, “Statistical modeling of human response times for task modeling in HCI,” Doctoral Dissertation, North Carolina State University, Raleigh, NC, 2009.
[13] L. Auria and R.A. Moro, “Support Vector Machines (SVM) as a technique for solvency analysis,” DIW Berlin Discussion Paper No. 811, 2008.
[14] M. Dose, C. Gruber, A. Grunz, C. Hook, J. Kempf, G. Scharfenberg, and B. Sick, “Towards an automated analysis of neuroleptics’ impact on human hand motor skills,” 2007 4th Symposium on Computational Intelligence in Bioinformatics and Computational Biology (IEEE Cat. No. 07EX1576), pp. 494–501.
[15] T. Gruber, B. Meixner, J. Prosser, and B. Sick, “Handedness tests for preschool children: a novel approach based on graphics tablets and support vector machines,” Applied Soft Computing Journal, 12(4), pp. 1390–1398, 2012.
[16] M.P. Clamann, W. Ma, and D.B. Kaber, “Evaluation of a virtual reality and haptic simulation of a block design test,” Proc. of the 2013 IEEE International Conference on Systems, Man, and Cybernetics.
[17] W. Jeon, M. Clamann, B. Zhu, G-H. Gil, and D.B. Kaber, “Usability evaluation of a virtual reality system for motor rehabilitation,” Proc. of the 2012 Applied Human Factors & Ergonomics Conference.
[18] F. Amirabdollahian, G.T. Gomes, and G.R. Johnson, “The Peg-In-Hole: a VR-based haptic assessment for quantifying upper limb performance and skills,” 9th International Conference on Rehabilitation Robotics, pp. 422–425, 2005.
[19] C. Emery, E. Samur, O. Lambercy, H. Bleuler, and R. Gassert, “Haptic/VR assessment tool for fine motor control,” Haptics: Generating and Perceiving Tangible Sensations, pp. 186–193, 2010.
[20] L. Zollo, L. Rossini, M. Bravi, G. Magrone, S. Sterzi, and E. Guglielmelli, “Quantitative evaluation of upper-limb motor control in robot-aided rehabilitation,” Medical & Biological Engineering & Computing, 49(10), pp. 1–14, 2011.
[21] S. Konz and S. Johnson, Work Design: Occupational Ergonomics, 6th ed., Scottsdale, AZ: Holcomb Hathaway, 2004.
[22] C. Staelin, “Parameter selection for support vector machines,” HP Laboratories Israel, 2002.
[23] E. Frank and M. Hall, “A simple approach to ordinal classification,” EMCL ’01 Proceedings of the 12th European Conference on Machine Learning, pp. 145–156, 2001.
[24] Y. Goldberg and M. Elhadad, “SplitSVM: fast, space-efficient, non-heuristic, polynomial kernel computation for NLP applications,” Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 237–240, 2008.
[25] F. Wang, K. He, Y. Liu, L. Li, and X. Hu, “Research on the selection of kernel function in SVM based facial expression recognition,” 2013 IEEE 8th Conference on Industrial Electronics and Applications (ICIEA), (2), pp. 1404–1408, 2013.
[26] Y. Mo and S. Xu, “Application of SVM based on hybrid kernel function in heart disease diagnoses,” 2010 International Conference on Intelligent Computing and Cognitive Informatics, pp. 462–465.

Analysis of Vehicle Crash Injury-Severity in a Superhighway: A Markovian Approach
John Carlo F. Marquez1, Darwin Joseph B. Ronquillo2, Noime B. Fernandez3, Venusmar C. Quevedo4
Industrial Engineering Department
Adamson University
Manila, Philippines 1000
jcmarquez000@gmail.com1, darwinjosephronquillo@gmail.com2, fernandez.noime@yahoo.com3, vcquevedo@gmail.com4

Abstract— According to the World Health Organization (WHO), road traffic accidents are one of the top causes of death worldwide, claiming roughly 1.3 million lives annually. In the Philippines, Philippine National Police-Highway Patrol Group (PNP-HPG) data showed that there were 15,572 road accidents nationwide for the whole of 2014, with 1,252 persons killed and 9,347 others injured. A comprehensive study was conducted which aims to determine which road traffic accident factors are significant with respect to the severity of road traffic accidents, determine which factors are significant at each level of severity, and determine the human factors associated with the occurrence of road traffic accidents that are significant with respect to the levels of severity. The Markov Chain Switching approach makes it possible to determine the probability of occurrence of road traffic accidents and the significant factors that affect the severity of a road traffic accident, since it takes into consideration the heterogeneity of the variables and the different time-varying and time-dependent constraints which are not considered by other regression techniques used in the road traffic accident severity literature.

Keywords—Markov Chain Switching Model; Road Traffic Accidents; Injury Severity; Commonwealth Avenue

I. INTRODUCTION

Road traffic accidents are among the most common causes of death worldwide. During 2013, an estimated 1.24 million people died from road traffic accidents, which is one reason they claimed the ninth spot on the World Health Organization’s (WHO) list of the Top Ten Causes of Death. Road traffic accidents are the only cause of death on that list not associated with any human disease, and in 2012, road traffic accidents claimed nearly 3,500 lives each day.

In the Philippines, a report from the Metropolitan Manila Development Authority (MMDA) states that in 2012 there were a total of 82,757 road accidents recorded in Metro Manila alone, an average of 227 accidents per day, of which 412 were fatal, with the victims mostly pedestrians. According to the status report published by the Department of Health (DOH), road accidents ranked fourth among the top causes of death in the country during 2013, the only cause of death that is not a health-related issue. Ranking fourth among the top causes of death is very alarming, since there are many campaigns, ordinances, and other efforts made by the Metropolitan Manila Development Authority (MMDA), Land Transportation Office (LTO), and Local Government Units (LGUs) to increase awareness of road accidents and of preventive measures to minimize their occurrence; yet instead of decreasing, the frequency of accidents is still gradually increasing every year.

This study was conducted so that the proponents could determine the significant factors that affect the severity of crash injury accidents on Commonwealth Avenue, the significant factors at each level of severity of road traffic accidents on Commonwealth Avenue, and the critical driver error(s) related to the level of severity of a crash, along with other factors affecting the casualties of road accidents.

II. REVIEW OF RELATED LITERATURE AND STUDIES

A. Road Traffic Accidents

With the sudden increase in the frequency of road traffic accidents all over the world, they are now considered a growing problem that threatens the lives of many people across the globe. According to the World Health Organization, road traffic accidents cause the death of more than 1.2 million people and the injury of between 20 and 50 million people annually worldwide, with more than 90% of deaths occurring in low- and middle-income countries (Ismail & Abdelmageed, 2010). Traffic accidents are dependent events. In general, accidents are described by a series of variables that help to analyze a traffic accident and to identify significant factors that affect injury severity (de Oña, López, Mujalli, & Calvo, 2013).

Despite continuous improvements in vehicle technology and road engineering, road accidents are still one of the major accidental causes of death and injury (WHO, 2004). Traffic accidents are an important issue in the world; both the location and frequency of traffic accidents are subject to change as time passes, and these accidents can be treated as discrete random events with a low occurrence probability (Soler-Flores, 2013).

The concern with road accidents has become a global and important social issue. According to a study made in Britain, there were 7,000 deaths and injuries recorded each year, usually occurring among cyclists and pedestrians (Bartrip, 2010). A report of the National Highway Traffic Safety Administration of the United States of America states that the fatality rate per 100 million vehicle miles traveled reached its lowest point of 1.10 in 2010, while the injury rate per 100 million


VMT was the same as that of 2009 (“Research Note: 2010 Motor Vehicle Crashes: Overview,” 2012). In China, an average of one person died in every four reported traffic crashes. Urban roads in China also had alarmingly higher crash rates; they are situated in urban areas where traffic volume is high and road traffic conditions are particularly complicated (Loo, Cheung, & Yao, 2011).

B. Accident Factors

Analysis of crash data that are collected through police reports and integrated with road inventory data reveals important relationships that can help focus on high-risk situations and on developing safety countermeasures (Dong, Clarke, Yan, Khattak, & Huang, 2014). The factors contributing to roadside accidents are age, alcohol intake, gender, vehicle speed, whether seatbelts were used, whether the driver was ejected from the vehicle, whether the crash was a head-on collision, whether an airbag was deployed, and whether one of the vehicles was following too close behind another vehicle (Zhang, 2010). A study in the USA shows that drivers’ adaptation to weather conditions influences changes in roadway-surface conditions and results in injury severity that is influenced by many factors, including age and gender (Morgan & Mannering, 2011).

The human factor remains the leading cause of road traffic accidents, through behaviors such as over-speeding, carelessness, and fatigue (Khan, Al Asiri, & Iqbal, 2010). One study concluded that there is a significant potential human behavioral contribution to pedestrian injury at the study sites (Cinnamon, Schuurman, & Hameed, 2011). Road transport is definitely a huge benefit to our society, but it also brings problems such as delays caused by traffic congestion and transport accidents (Quddus, 2008).

C. Markov Chain Switching Approach

Markov-switching models of severity have the advantage of capturing unobserved heterogeneity in accident data that could relate to different weather conditions and other factors that may not be known to the analyst (Malyshkina & Mannering, 2009). Other studies show how incidents are notorious for the delays they impose on road users. Heterogeneity models such as Markov switching random parameters models provide a promising method to decompose unobserved heterogeneity in ordered or unordered discrete outcome data analysis (Xiong, Tobias, & Mannering, 2014).

There are other approaches for determining road accident frequency. A Bayesian network has been used to estimate frequency and is applicable in any situation and with any set of available data (Soler-Flores, 2013). Another model used in determining crash frequency is the multivariate model. It is a useful method for analysis since it accounts for the correlation among specific crash types (Dong, Richards, Clarke, Zhou, & Ma, 2014).

A study was conducted in Arkansas to evaluate intersection accidents using traffic safety analysis methods. In 2004, Arkansas was among the top three states with the highest traffic fatalities. A statistical approach was applied using Poisson, Negative Binomial, and Logistic Regression models. The study considered significant contributors, namely road width, number of lanes, pavement condition, and the horizontal and vertical curvature of the road design. The results of the study highlighted potential issues with the driver behaviors and roadway characteristics that affect road traffic safety (Nam & Song, 2008).

III. METHODOLOGY

The data used were provided by the Metropolitan Manila Development Authority (MMDA) and comprised five years of historical data on all road traffic accidents that occurred along the stretch of Commonwealth Avenue located in Quezon City. The factors considered were limited to the statistics available in the MMDA database. These factors were then transmuted into numerical values as shown in Table I.

TABLE I. VARIABLE DESCRIPTION OF FACTORS

Variable                  Description
Injury Severity           1. Fatal; 2. Non-Fatal; 3. No Injury
Time                      1. Night 0:01-06:00; 2. Morning 06:01-12:00; 3. Afternoon 12:01-18:00; 4. Evening 18:01-0:00
Month                     1. January; 2. February; 3. March; 4. April; 5. May; 6. June; 7. July; 8. August; 9. September; 10. October; 11. November; 12. December
Type of Junction          1. U-turn slot; 2. T-Junction; 3. Not at Junction; 4. Rotunda; 5. Cross Roads; 6. Y-Junction; 7. Others
Type of Junction Control  1. Traffic Light; 2. School Zone; 3. None; 4. Others
Darkness Indication       1. Dark; 2. Light
Weather                   1. Dry; 2. Fair; 3. Wet
Type of Collision         1. Angle Impact; 2. Hit and Run; 3. Hit Object; 4. Hit Pedestrian; 5. No Collision; 6. Rear-end; 7. Self-Accident; 8. Slide Swipe; 9. Head-on; 10. Others
Accident Factor           1. Human Error; 2. Machine Error; 3. No Error; 4. Other
Age                       1. Below Legal Age; 2. Legal Age

From the statistics collected from the MMDA, samples were chosen through purposive or judgmental sampling, a non-probability sampling technique wherein the researchers select units to be sampled based on their knowledge and professional judgment (Castillo, 2009). The collected data then underwent an evaluation and validation process wherein each data entry was evaluated for compliance with the requirements set by the proponents, such as the completeness of the data entry.

IV. DATA AND RESULTS

The Markov Switching Approach analysis was used to determine the different factors that are significant with respect to the severity of road traffic accidents along Commonwealth Avenue. Multinomial Logistic Regression was then used to analyze the factors found significant with respect to severity. From the factors found significant with the levels of severity, the factors were analyzed for their relevance to each level of severity. The critical driver error(s) in the occurrence of road traffic accidents along Commonwealth Avenue were also analyzed to determine which

among the factors are relevant at each level of severity of the accident.

A total of 11,894 road traffic accidents were recorded from 2009 up to March of 2013. Through the data validation process, this was trimmed down to 2,029 road traffic accident records. The data were grouped into two categories: records with complete details and records with incomplete details. The historical data with complete details were used in determining the significant factors, while the historical data with incomplete details were compiled into one spreadsheet for future reference.

A. Markov Chain Switching Approach

The Markov Chain Switching model allows testing the significance of a variable to the severity level with respect to different time-varying constraints. Through this method, the significant variables for the severity of road traffic accidents are extracted. The probability of occurrence and expected duration for the significant predictor variables are also identified and interpreted. The test for significance is performed at a 95% level of significance. Shown in Table I are the 10 variables available for testing. These variables are demographic and accident-related in nature. From the results gathered by computing the Markov Switching Approach on the different factors, the factors that have a significant effect on the severity (Fatal, Non-Fatal, No Injury) of road accidents on Commonwealth Avenue could be distinguished by their behavior with respect to their values in both regime 1 and regime 2.

For a variable to be considered a significant factor, its p-value should be less than 0.05 (< 0.05) for both regimes 1 and 2. The factors for which the probabilities indicate that the level of severity is dependent are: Accident Factor, Age Indicator, Collision Type, Month, Gender, Junction Control, Junction Type, and Weather; while for the factors Darkness Indication and Time, the probabilities indicate that the level of severity is independent. The results are summarized in Table II.

TABLE II. MARKOV CHAIN SWITCHING APPROACH TEST FOR SIGNIFICANCE (CI=95%)

Variable            Probability of Occurrence    Expected Duration (Months)    Standard Error
                    Regime 1     Regime 2        Regime 1     Regime 2         Regime 1     Regime 2
Accident Factor     66.6016%     99.9014%        2.99         1013.98          0.002706     0.000063
Age Indicator       95.1478%     99.9013%        20.61        1013.59          0.003937     0.000116
Collision Type      76.2777%     99.9271%        4.22         1372.02          0.003871     0.002274
Darkness Indicator  0.0063%      99.8518%        1.00         674.58           0.00539      0.1678
Month               71.3337%     99.6336%        3.49         272.93           0.001855     0.0262
Gender              97.7917%     79.3084%        45.28        4.83             0.001286     0.026085
Junction Control    98.0600%     99.9013%        51.55        1013.08          0.00286      0.000979
Junction Type       62.9393%     50.7012%        2.70         2.03             0.019        0.0259
Time                0.0625%      99.8520%        1.00         675.73           0.005423     0.4171
Weather             99.9014%     0.0002%         1013.87      1.00             0.0286       0.007286
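As a hedged illustration only: a two-regime Markov switching model of the general kind used here can be estimated with statsmodels, as in the sketch below. The monthly series is simulated and is not the MMDA data, and the paper does not state which software produced its regime estimates.

  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(0)
  # Simulated monthly severity-related series that alternates between
  # two regimes with different means (a placeholder for the MMDA series).
  y = np.concatenate([rng.normal(1.5, 0.2, 30), rng.normal(2.5, 0.2, 24)])

  model = sm.tsa.MarkovRegression(y, k_regimes=2)
  res = model.fit()
  print(res.summary())           # regime means, transition probabilities, p-values
  print(res.expected_durations)  # expected duration of each regime, in months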
B. Multinomial Logistic Regression Analysis for Testing the Significant Factors per Level of Severity

Multinomial Logistic Regression makes it possible to test the significance of the factors identified by the Markov Chain Switching approach with respect to the three severity levels (Fatal, Non-Fatal, and No Injury) observed in every road traffic accident. The tables below show the results. The regression analysis was computed at a 95% level of significance.

Illustrated in Table III are the results gathered from the IBM SPSS Statistics software with Multinomial Logistic Regression, using both the B coefficient and the odds ratio (Exp(B)). For a factor to be considered significant with respect to the level of severity, Exp(B) should be greater than 1 (>1) and the B coefficient should be greater than zero (>0). The factors found significant for road traffic accidents resulting in fatal injuries are: Accident Factor, Month, Gender, Type of Junction, and Type of Junction Control.

TABLE III. MULTINOMIAL LOGISTIC REGRESSION ANALYSIS TEST FOR SIGNIFICANCE FOR ACCIDENTS WITH FATAL INJURIES (CI=95%)

Severity          B         Exp(B)          95% C.I. for Exp(B)
                                            Lower Bound     Upper Bound
Intercept         17.849
Accident_Factor   .547      1.727           .364            8.199
Age_Indicator     -13.915   9.053E-07       4.361E-07       1.879E-06
Collision_Type    -.423     .655            .356            1.207
Gender            13.836    1020575.781     0.000           2943024.102
Junction_Control  .267      1.307           .070            24.404
Junction_Type     .454      1.574           .741            3.343
Month             .092      1.096           .639            1.881
Weather           -2.371    .093            .004            2.304

Illustrated in Table IV are the results gathered from the IBM SPSS Statistics software using Multinomial Logistic Regression, which used both the B coefficient and the odds ratio, Exp(B), as a basis to determine the factors found significant for non-fatal injuries. The factors found significant are: Accident Factor, Month, Gender, Type of Junction, and Type of Junction Control.

TABLE IV. MULTINOMIAL LOGISTIC REGRESSION ANALYSIS TEST FOR SIGNIFICANCE FOR ACCIDENTS WITH NON-FATAL INJURIES (CI=95%)

Severity          B         Exp(B)          95% C.I. for Exp(B)
                                            Lower Bound     Upper Bound
Intercept         19.575
Accident_Factor   .425      1.529           .331            7.069
Age_Indicator     -13.715   1.106E-06       1.106E-06       1.106E-06
Collision_Type    -.271     .763            .418            1.392
Junction_Control  .455      1.576           .089            27.898
Junction_Type     .517      1.677           .798            3.523
Month             .132      1.141           .669            1.946
Weather           -2.111    .121            .005            2.818
Gender            14.215    1491077.823     0.000           1735352.149
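The SPSS output style of Tables III-V can be approximated in open-source form. The sketch below, offered only as a hedged illustration, fits a multinomial logit with statsmodels and recovers Exp(B) as the exponential of the coefficients; the data frame and its columns are randomly generated placeholders, not the MMDA records.

  import numpy as np
  import pandas as pd
  import statsmodels.api as sm

  rng = np.random.default_rng(1)
  # Placeholder coding following Table I: severity 1=Fatal, 2=Non-Fatal, 3=No Injury.
  df = pd.DataFrame({
      "severity": rng.integers(1, 4, 500),
      "accident_factor": rng.integers(1, 5, 500),
      "month": rng.integers(1, 13, 500),
  })

  X = sm.add_constant(df[["accident_factor", "month"]])
  fit = sm.MNLogit(df["severity"], X).fit(disp=False)

  print(fit.params)           # B coefficients, one column per severity level
  print(np.exp(fit.params))   # odds ratios, i.e., the Exp(B) reported by SPSS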

Table V shows the results gathered after applying Multinomial Logistic Regression, which used both the B coefficient and the odds ratio, Exp(B), as a basis to determine which factors were significant for road traffic accidents resulting in no injuries. It was found that Collision Type, Age Indicator, and Weather are significant for road traffic accidents resulting in no injuries.

TABLE V. MULTINOMIAL LOGISTIC REGRESSION ANALYSIS TEST FOR SIGNIFICANCE FOR ACCIDENTS WITH NO INJURIES (CI=95%)

Severity          B         Exp(B)          95% C.I. for Exp(B)
                                            Lower Bound     Upper Bound
Intercept         -20.848
Accident_Factor   -.547     .579            .122            2.748
Age_Indicator     16.915    22175601.732    22175601.732    22175601.732
Collision_Type    .423      1.526           .828            2.810
Gender            -16.835   4.883E-08       0.000           3.124
Junction_Control  -.267     .765            .041            14.296
Junction_Type     -.454     .635            .299            1.349
Month             -.092     .912            .532            1.564
Weather           2.371     10.709          .434            264.295

C. Multinomial Logistic Regression Analysis for Testing the Significant Factors per Critical Driver Error

Human error has been one of the major contributors to the occurrence of an accident. The proponents were able to draw 14 driver errors, based on the most common errors committed that can be observed along the entire road stretch and on the data from the Metropolitan Manila Development Authority (MMDA). These commonly committed human errors are: Alcohol Suspected; Avoided Hitting Other Vehicle, Buildings, Etc.; Avoided Hitting Pedestrian; Lost Control; Moving Backward/Backing Inattentively; Sudden Stop; Slept; Bad Turn; Driver Error; Miscalculated Movement; Lost Balance; Inattentive/Too Fast; Bad Overtaking; and Disobey Traffic Lights/Signs.

The results of the statistical test are illustrated in Table VI, in which the B coefficient and odds ratio, Exp(B), were used as a basis to determine which critical driver errors are significant for road traffic accidents with fatal injuries; these are: Alcohol Suspected; Avoided Hitting Other Vehicle, Buildings, Etc.; Lost Control; Sudden Stop; Driver Error; Lost Balance; and Inattentive/Too Fast.

TABLE VI. MULTINOMIAL LOGISTIC REGRESSION ANALYSIS TEST FOR SIGNIFICANCE FOR HUMAN FACTORS RELATIVE TO FATAL INJURIES (CI=95%)

Injury_Severity                                  B         df    Exp(B)
Intercept                                        -18.254   1
Alcohol Suspected                                15.952    1     8466730.613
Avoided Hitting Other Vehicle, Buildings, Etc.   .385      1     1.469
Avoided Hitting Pedestrian                       -.121     1     .886
Lost Control                                     15.615    1     6047664.724
Moving Backward/Backing Inattentively            -.155     1     .856
Sudden Stop                                      14.557    1     2099189.408
Slept                                            -.380     1     .684
Bad Turn                                         -.210     1     .810
Driver Error                                     .388      1     1.473
Miscalculated Movement                           -.380     1     .684
Lost Balance                                     14.557    1     2099189.408
Inattentive/Too Fast                             15.158    1     3829174.649
Bad Overtaking                                   -.199     1     .820
Disobey Traffic Lights/Signs                     -0.456    1     0.84

Table VII illustrates the results gathered after applying Multinomial Logistic Regression in the IBM SPSS Statistics software, using both the B coefficient and odds ratio, Exp(B), as a basis to determine the critical driver errors found significant for road traffic accidents with non-fatal injuries, which are: Alcohol Suspected; Avoided Hitting Other Vehicle, Buildings, Etc.; Lost Control; Sudden Stop; Driver Error; Lost Balance; and Inattentive/Too Fast.

TABLE VII. MULTINOMIAL LOGISTIC REGRESSION ANALYSIS TEST FOR SIGNIFICANCE FOR HUMAN FACTORS RELATIVE TO NON-FATAL INJURIES (CI=95%)

Injury_Severity                                  B         df    Exp(B)
Intercept                                        -1.792    1
Alcohol Suspected                                1.281     1     3.600
Avoided Hitting Other Vehicle, Buildings, Etc.   1.099     1     3.000
Avoided Hitting Pedestrian                       -.511     1     .600
Lost Control                                     1.827     1     6.214
Moving Backward/Backing Inattentively            -.693     1     .500
Sudden Stop                                      17.078    1     26110021.061
Slept                                            -13.478   1     1.402E-06
Bad Turn                                         -1.041    1     .353
Driver Error                                     1.106     1     3.021
Miscalculated Movement                           -13.478   1     1.402E-06
Lost Balance                                     17.078    1     26110021.061
Inattentive/Too Fast                             1.910     1     6.754
Bad Overtaking                                   -.960     1     .383
Disobey Traffic Lights/Signs                     -.856     1     .215

Shown in Table VIII are the results after applying Multinomial Logistic Regression to determine which critical driver error(s) are significant for road traffic accidents with no injuries. The factors Avoided Hitting Pedestrian; Moving Backward/Backing Inattentively; Slept; Bad Turn; Miscalculated Movement; Bad Overtaking; and Disobey Traffic Lights/Signs were found significant.

TABLE VIII. MULTINOMIAL LOGISTIC REGRESSION ANALYSIS TEST FOR SIGNIFICANCE FOR HUMAN FACTORS RELATIVE TO NO INJURIES (CI=95%)

Injury_Severity                                  B         df    Exp(B)
Intercept                                        18.254    1
Alcohol Suspected                                -15.952   1     1.181E-07
Avoided Hitting Other Vehicle, Buildings, Etc.   -.385     1     .681
Avoided Hitting Pedestrian                       .121      1     1.129
Lost Control                                     -15.615   1     1.654E-07
Moving Backward/Backing Inattentively            .155      1     1.168
Sudden Stop                                      -14.557   1     4.764E-07
Slept                                            .380      1     1.462
Bad Turn                                         .210      1     1.234
Driver Error                                     -.388     1     .679
Miscalculated Movement                           .380      1     1.462
Lost Balance                                     -14.557   1     4.764E-07
Inattentive/Too Fast                             -15.158   1     2.612E-07
Bad Overtaking                                   .199      1     1.220
Disobey Traffic Lights/Signs                     0.158     1     1.112

V. CONCLUSION

The proponents were able to draw 10 independent variables (Time, Month, Type of Junction, Type of Junction Control, Darkness Indication, Weather, Type of Collision, Accident Factor, Age, and Gender) from the road traffic accidents that occurred along Commonwealth Avenue. The dependent variable is the Level of Severity, which is classified into three (3) levels: Fatal Injury, Non-Fatal Injury, and No Injury.

The results obtained through the Markov Chain approach were interesting. The Markov Chain Switching Approach determined that 8 out of the 10 variables have a significant effect on the severity of road traffic accidents. The eight significant variables, Accident Factor, Age Indicator, Collision Type, Month, Gender, Junction Control, Junction Type, and Weather, all obtained a p-value of less than 0.05 when testing at a 5% level of significance. This implies that the null hypothesis, that the factor is not significant to the severity of a crash injury accident, is rejected. Therefore, these variables are considered significant with respect to the severity of road traffic accidents.

The eight (8) factors found to significantly affect the severity of road traffic accidents were analyzed further to determine which among them are relevant to each of the three levels of severity (Fatal, Non-Fatal, and No Injury). Multinomial Logistic Regression was the most appropriate statistical test for analyzing relevance with respect to the level of severity, since the said factors are of categorical type.

Among the eight (8) significant factors, five (5) were found relevant to the occurrence of road traffic accidents that could result in fatal and non-fatal injuries: Accident Factor, Gender, Month, Type of Junction, and Type of Junction Control. For road traffic accidents that could result in no injury, the factors found significant are: Age, Type of Collision, and Weather.

Multinomial Logistic Regression was also used to identify which among the human errors are significant with respect to the three levels of severity of injury (Fatal, Non-Fatal, and No Injury). This statistical test was used since all of the variables are in categorical form, making it the most applicable test for determining the relationship of the variables.

There were seven (7) human errors significant with respect to the probability of road traffic accidents with fatal and non-fatal injuries: Alcohol Suspected; Avoided Hitting Other Vehicle, Buildings, Etc.; Lost Control; Sudden Stop; Driver Error; Lost Balance; and Inattentive/Too Fast. There were also seven (7) factors found significant with respect to the probability of road traffic accidents with no injury: Avoided Hitting Pedestrian; Moving Backward/Backing Inattentively; Miscalculated Movement; Slept; Bad Turn; Bad Overtaking; and Disobey Traffic Lights/Signs.

References
[1] Bazzi, M., Blasques, F., Koopman, S. J., & Lucas, A. (2014). Time varying transition probabilities for Markov regime switching models.
[2] Dong, C., Clarke, D. B., Yan, X., Khattak, A., & Huang, B. (2014). Multivariate random-parameters zero-inflated negative binomial regression model: an application to estimate crash frequencies at intersections. Accident Analysis and Prevention, 70, 320–329. doi:10.1016/j.aap.2014.04.018
[3] Dong, C., Richards, S. H., Clarke, D. B., Zhou, X., & Ma, Z. (2014). Examining signalized intersection crash frequency using multivariate zero-inflated Poisson regression. Safety Science, 70, 63–69. doi:10.1016/j.ssci.2014.05.006
[4] Jung, S., Jang, K., Yoon, Y., & Kang, S. (2014). Contributing factors to vehicle to vehicle crash frequency and severity under rainfall. Journal of Safety Research, 50, 1–10. doi:10.1016/j.jsr.2014.01.001
[5] Libres, G. T. E. D. C., Galvez, M. A. L. I., Cordero, C. J. N., & Program, B. S. C. E. (2008). Analysis of relationship between driver characteristic and road accidents along Commonwealth Avenue, (March), 1–5.
[6] Maata, H. E. (2013). Assessment of vehicle speeds and traffic safety along Commonwealth Avenue, (October).
[7] Malyshkina, N. V., & Mannering, F. L. (2009). Markov switching multinomial logit model: An application to accident-injury severities. Accident Analysis and Prevention, 41(4), 829–838. doi:10.1016/j.aap.2009.04.006
[8] Malyshkina, N. V., & Mannering, F. L. (2010b). Zero-state Markov switching count-data models: an empirical assessment. Accident Analysis and Prevention, 42(1), 122–130. doi:10.1016/j.aap.2009.07.012
[9] Malyshkina, N. V., Mannering, F. L., & Tarko, A. P. (2009). Markov switching negative binomial models: an application to vehicle accident frequencies. Accident Analysis and Prevention, 41(2), 217–226. doi:10.1016/j.aap.2008.11.001
[10] Morgan, A., & Mannering, F. L. (2011). The effects of road-surface conditions, age, and gender on driver-injury severities. Accident Analysis and Prevention, 43(5), 1852–1863. doi:10.1016/j.aap.2011.04.024
[11] Soler-Flores, F. (2013). Expert system for road accidents frequency estimation based in Naïve-Poisson, 646–651.
[12] Soler-Flores, F. (2013). Naïve-Poisson, a mathematical model for road accidents frequency estimation, 384–391. Retrieved from http://www.ictic.sk/archive/?vid=1&aid=2&kid=50201-59
[13] WHO. (2004). Global status report on road safety.
[14] Xiong, Y., Tobias, J. L., & Mannering, F. L. (2014). The analysis of vehicle crash injury-severity data: A Markov switching approach with road-segment heterogeneity. Transportation Research Part B: Methodological, 67, 109–128. doi:10.1016/j.trb.2014.04.007

Assessing Participation in Decision-Making among Employees in the Manufacturing Industry

Shiau Wei Chan*, Abdul Razak Omar, Ramlan, R., Siti Sarah Omar, Khan Horng Lim
Faculty of Technology Management and Business
Universiti Tun Hussein Onn Malaysia
Batu Pahat, Johor, Malaysia
swchan@uthm.edu.my

Izzuddin Zaman
Faculty of Mechanical and Manufacturing Engineering
Universiti Tun Hussein Onn Malaysia
Batu Pahat, Johor, Malaysia
izzuddin@uthm.edu.my

Abstract— Participation in decision-making (PDM) results in a positive attitude among workers toward their job and an increase in productivity and efficiency. Thus, the main objective of this paper is to determine the implementation of PDM among employees of the manufacturing industry. The second objective is to determine their PDM in terms of age, gender, qualification, and length of employment. In this study, a total of 160 manufacturing employees were chosen randomly from manufacturing industries in Batu Pahat, Malaysia. Questionnaires were administered to the employees, and the data generated were analyzed statistically using descriptive and inferential analysis. The results revealed that the implementation of PDM remains in the infancy stage. Besides that, employee PDM in the manufacturing industry does not significantly differ in terms of gender, age, qualifications, or length of employment. This present research is vital to the manufacturing industry in order to gain insight on PDM among employees in the manufacturing field.

Keywords— Participation in decision-making (PDM), manufacturing industry

I. INTRODUCTION

A lack of employee PDM has been identified as a motivation to leave a profession and a source of job stress. This is because a lack of involvement in decision-making can lead to a lack of self-worth, the feeling of being a mere employee, decreasing self-esteem and job satisfaction, despair, sadness, anger, discouragement, and lack of motivation (1). For instance, the strike actions of staff of the Bank of Ghana, Ghana Railways Company, and Barclays Bank, among others, occurred because the boards of the various organizations did not allow employees to become involved in decision-making on issues related to their welfare and sustainability.

PDM is very important for the manufacturing industry, as it can reduce turnover and increase effectiveness. It can have good effects on workers and increase their self-esteem. They not only will work more efficiently, but will also have a more positive attitude toward the organization (2). Subordinates can provide suggestions and ideas to help their company achieve its goals by participating in decision-making. As a result, development and necessary changes occur fruitfully in (a) setting goals; (b) making decisions; (c) solving problems; and (d) designing and implementing organizational changes (3). Employees will be committed to the desired goals of the organization if they have the chance to engage in decision-making by exercising their self-directed and self-control activities.

To date, many studies on employees’ PDM have been conducted in other countries; however, little research has been conducted in Malaysia (4). In previous studies, the majority of PDM studies focused on the service sector rather than the manufacturing sector. Therefore, this study aims to determine the implementation of PDM among employees, particularly in the manufacturing industry, as well as to examine PDM in terms of age, gender, qualifications, and length of employment.

II. LITERATURE REVIEW

Participation in decision-making (PDM) is a process which enables employees to have some positive impact on their work. Shlomo (5) mentioned that employee involvement in decision-making can support employers’ and employees’ co-determination rights. According to Sager & Gastil (6), the PDM style covers all of the decisions that subordinates receive from a superior; in their study, PDM required high involvement of employees at the administrative and policy levels. Also, Steinheider, Bayerl, & Wuestewald (7) pointed out five facets of PDM, which are the level of participation, decision domain, structure, rationale for the process, and target of participation. In addition, Cotton, Vollrath, Froggatt, Lengnick-Hall & Jennings (8) mentioned there are five basic attributes related to PDM, which are (a) indirect versus direct participative deciding; (b) informal versus formal participative deciding; (c) the degree of employee influence in the decision process; (d) short-term versus long-term participative deciding; and (e) the content of decisions. Researchers categorized these attributes into six dimensions of participation: participation in consultation, duration of participation, participation in working decisions, representative participation, informal participation, and employee ownership.

PDM has various advantages for an organization. Rice (9) stated that implementation of PDM is very useful for generating ideas in a short time. Besides, PDM cultivates employees to maximize productivity toward achieving the company goal. In addition, employees who have been involved in organizational decisions will be satisfied with their hard work (10). PDM has become a guideline to help managers identify the employees who are willing to give ideas and to allow them to voice those ideas. The company can take action on the ideas


given by employees, such as enhancing the maintenance system, restoring safety equipment, and always listening to employees’ views. This may reduce turnover and avoid unnecessary costs such as accidents and the purchase of new equipment. PDM may generate opportunities for diverse workforces in organizational planning, and it also encourages a diverse workforce in the company. It helps to achieve company goals in terms of profit maximization and good motivation, as well as having an awakening effect on a diverse workforce in their jobs, because employees will be happy to be involved in company plans, which may reduce absence rates and mistakes in the company (11).

There have been some studies done on PDM in Malaysia, but only a few researchers focus on employee PDM in the manufacturing industry; researchers have mostly focused on target populations of citizens, teachers, youth, nurses, and so forth. Ting (4) performed research on the level and effects of PDM on employee groups in the manufacturing and service sectors in Malaysia; the targeted respondents were employees who worked in either the manufacturing or service sectors. This researcher concluded that both organizational commitment and job satisfaction are strongly related to employees’ PDM and that PDM is one of the elements by which companies can reduce the employee turnover rate.

On the other hand, earlier studies have also explored PDM in terms of four elements, i.e., gender, age, qualification, and length of employment. Gill, Stockard, Johnson & Williams (12) pointed out that women are more affected by the environment; they look for more information and dedicate more time to the decision process. On the contrary, Wood (13) mentioned that men are more dominant, assertive, objective, and realistic. Furthermore, María & María (14) conducted a study of factors that affect decision-making in terms of gender and age differences; the findings of the study were that different age ranges of people will produce different outcomes in terms of experience, emotion, and knowledge. Johnson (15) reported that although young and elderly participants complete their tasks in similar amounts of time, young participants consider more options in that time frame and therefore make higher quality decisions. However, the results of a study by Dodi (16) indicated that there is not a significant influence of age on PDM.

Knowledge is one of the factors that may be of great value for an organization in order to increase employees’ PDM in their work (17). Yang, Kao & Huang (18) found no significant differences between the level of education and PDM. A higher education background does not mean that employees have more opportunities to be involved in decision-making, based on the findings of Bruiyan (19), because those findings showed that most respondents who were involved in decision-making were qualified only at a middle level of education. Based on Dodi (16), respondent age and education were negatively correlated with all aspects of PDM in his study. On the other hand, Sophia, Kostas, & Cosmas (20) indicated that the opportunity to be involved in decision-making is affected by an employee's length of employment. Bruiyan (19) found higher participation in decision-making among employees with more service experience, while less working experience resulted in the lowest involvement in decision-making.

III. METHODOLOGY

A. Research Design

Quantitative research involves numerical measurement and analysis methods to address the research objective (21). This study applies a quantitative research approach to determine the implementation of PDM in the manufacturing industry as well as to identify employees' PDM in terms of age, gender, qualification, and length of employment. The researchers used a survey as the source of data. A pre-test was conducted to confirm the validity and reliability of the questionnaire. After that, the questionnaires were distributed to 160 employees from the manufacturing industry. The obtained data were analysed quantitatively using SPSS software version 22.0. The types of analysis employed were descriptive and inferential statistics, including the independent T-test and analysis of variance (ANOVA).

B. Participants

The targeted population was employees working in the manufacturing industry at Batu Pahat, Malaysia. The population size in the manufacturing industry is about 300 employees. Based on Krejcie & Morgan's (22) table, the sample size for 300 employees is 169. They were selected randomly from the manufacturing industry. Among the participants, there are 113 male respondents (70%) and 47 female respondents (30%). In addition, age was coded as 0 for below 19 years old, 1 for 25-35 years old, 2 for 36-45 years old, 3 for 46-55 years, and 4 for age 56 and above. There are 79 respondents (49.4%) within 25 to 35 years old, 51 respondents (31.9%) within 36 to 45 years old, 15 respondents within 46 to 55 years of age, and 15 respondents older than 55 years. Moreover, there is only 1 respondent with a Doctorate degree (0.6%), 13 with a Master's Degree (8.1%), 47 (29.4%) with a Bachelor's Degree, and 63 respondents with Diploma certificates (39.4%), while 36 respondents completed high school (22.5%). Furthermore, there are 66 respondents (41.3%) working for 0-4 years, 60 respondents working for 5-10 years (37.5%), 21 respondents working for 10-20 years (13.1%), and 13 respondents (8.1%) working over 20 years in the organization.

C. Instrumentation

In this study, the researchers utilized a questionnaire to collect the data. There were two sections in the questionnaire: Section A covered demographic information about the participants, while Section B was the PDM questionnaire with 30 items. The questionnaire was modified from the studies of Rumpf (23) and Faduma (24). It is based on a five-point Likert scale, ranging from 1 = strongly disagree to 5 = strongly agree. The researchers conducted a pre-test by distributing 15 sets of this questionnaire to employees working in the manufacturing industry at Batu Pahat, Malaysia. The Cronbach Alpha value for this instrument was 0.867, which was deemed good based on George and Mallery (25).
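For illustration only, the internal consistency of a Likert instrument like this one can be computed as in the minimal sketch below; numpy is assumed, and the 160 x 30 score matrix is randomly generated, not the study's survey data.

  import numpy as np

  def cronbach_alpha(scores):
      # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
      scores = np.asarray(scores, dtype=float)
      k = scores.shape[1]
      item_variances = scores.var(axis=0, ddof=1).sum()
      total_variance = scores.sum(axis=1).var(ddof=1)
      return (k / (k - 1)) * (1 - item_variances / total_variance)

  rng = np.random.default_rng(0)
  scores = rng.integers(1, 6, size=(160, 30))  # 160 respondents, 30 PDM items
  print(cronbach_alpha(scores))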

IV. FINDINGS

A. Descriptive Statistics Analysis

Descriptive statistics show the mean and standard deviation of each item under PDM. Item Q30, “This organization encourages differences of opinion,” had the lowest mean of 2.79, with a standard deviation of 1.041. Item Q24, “I can tell what impact my work makes on the product or service,” had the highest mean of 3.74, with a standard deviation of 0.906. Judged against Wiersma's (26) mean measurement levels in Table I, the average mean of all the items is 3.4275 and the standard deviation is 0.497. This result indicates a moderate agreement level of the mean. It may be concluded that the companies in this present research had implemented PDM at a moderate level.

Table I. Level of Mean Measurement

Mean Range    Central Tendency Level
3.68-5.00     High
2.34-3.67     Moderate
1.00-2.33     Low

B. Inferential Statistics Analysis

Inferential statistics are commonly used to compare the average performance of two or more groups on a single measure to see if there is a difference. Whenever comparisons are made between the average performance of two or more groups, statistical techniques like the t-test or ANOVA should be considered in order to obtain presentable results (4). An independent-sample T-test was conducted to identify any significant difference between males and females with respect to PDM, as shown in Table II. Levene's Test for equality of variances is a test that determines whether two conditions have about the same or different amounts of variability between scores. The F value of Levene's Test is 0.880, and the value in the Sig. column is 0.350. A significance value greater than 0.05 means that the variability between males and females is about the same; that is, the variability between the two gender groups of employees is not significantly different in terms of PDM in the manufacturing industry. Besides that, the result of the t-test for equality of means indicates a significance value of 0.823, which is greater than 0.05. It can be concluded that there is no statistically significant difference between males and females in terms of PDM.

Table II. T-Test for Gender Group

Table III reveals the output of the ANOVA analysis of whether there is a significant difference in PDM across employees' education qualifications. Based on the ANOVA table, the significance level is 0.297 (p = .297), which is greater than 0.05. Therefore, there is no significant difference in the highest education qualification with respect to PDM.

Table III. ANOVA test for employee's education qualification

                Sum of Squares    df     Mean Square    F        Sig.
Between Group   1.217             4      .304           1.238    .297
Within Group    38.097            155    .246
Total           39.315            159

Table IV shows the ANOVA analysis of whether there is a statistically significant difference between the age-group means. Based on the ANOVA table, the significance level is 0.783 (p = .783), which is greater than 0.05; therefore, there is not a significant difference between age and PDM.

Table IV. ANOVA table for age group of employees

                Sum of Squares    df     Mean Square    F        Sig.
Between Group   .269              3      .090           .359     .783
Within Group    39.045            156    .250
Total           39.315            159

Table V shows the results of the ANOVA analysis to determine whether there is a significant difference across employees' length of employment in terms of PDM. Based on the ANOVA table, the significance level is 0.148 (p = .148), which is greater than 0.05; therefore, there is not a significant difference between working experience and PDM.

Table V. ANOVA table for length of employment of employees

                Sum of Squares    df     Mean Square    F        Sig.
Between Group   1.323             3      .441           1.810    .148
Within Group    37.992            156    .244
Total           39.315            159
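As a hedged sketch of the tests behind Tables II-V, the snippet below runs Levene's test, the independent-samples t-test, and a one-way ANOVA with scipy; the group score vectors and group sizes are randomly generated placeholders, not the survey data.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  # Placeholder per-respondent mean PDM scores by gender (113 males, 47 females).
  male = rng.normal(3.4, 0.5, 113)
  female = rng.normal(3.4, 0.5, 47)

  lev_F, lev_p = stats.levene(male, female)  # equality of variances
  t, p = stats.ttest_ind(male, female, equal_var=lev_p > 0.05)

  # One-way ANOVA across education-level groups, as in Table III
  # (placeholder group sizes).
  groups = [rng.normal(3.4, 0.5, n) for n in (30, 60, 45, 15, 10)]
  F, p_anova = stats.f_oneway(*groups)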
V. DISCUSSION AND CONCLUSION

The first objective of this research study was to determine the implementation of PDM among employees in the manufacturing industry in Batu Pahat. In order to achieve this objective, the research determined the average mean across all items and concluded that the average mean of all items is 3.42. This is considered a moderate level of central tendency based on the agreement-level mean measurement table. Therefore, the companies in this present research had implemented PDM, but it remains at the infancy stage due to the moderate level of the mean.

The second objective in this research study is to determine the employees' PDM in terms of gender, age, length of employment and education qualification in the manufacturing industry at Batu Pahat. The first component to be determined is the gender group. There are only two gender groups, male and female; thus, the independent-samples t-test was used to examine gender and PDM. The independent-samples t-test returned a significance value of 0.350, so there is no significant difference in PDM between genders in the manufacturing industry at Batu Pahat. The present result is supported by the outcomes of Ting (4), Burke (27) and Cortis & Cassar (28) that males and females do not differ significantly in terms of PDM. This finding conflicts with the finding of Gill, Stockard, Johnson & Williams (12) that women dedicate more time to decision-making, and it is also opposite to Wood's findings (13) that men are more dominant in decision-making. In addition, the finding of Maria & Maria (14) of a significant difference between men and women is also opposite to this research finding.

The second tested component in objective two is employees' education qualification concerning PDM in the manufacturing industry in Batu Pahat. The one-way ANOVA test was applied, and the result for this component shows a 0.297 significance value, which means there is no significant difference in PDM across education qualification groups in the manufacturing industry of Batu Pahat. Therefore, it can be concluded that more highly educated employees do not have more opportunities for decision-making than less educated employees. These results oppose the study by Dodi (16), in which respondent age and education were negatively correlated with all aspects of PDM.

The third tested component in objective two is employees' age group in terms of PDM in the manufacturing industry in Batu Pahat. The one-way ANOVA result for this component shows a 0.783 significance value, which means there is no significant difference in PDM across age groups in the manufacturing industry of Batu Pahat. This result is consistent with the study by Dodi (16), in which age had no significant influence on PDM. It conflicts with María & María (14), in whose study different age ranges led to different outcomes in terms of experience, emotion and knowledge. These results are also opposed to those of Johnson (15), who found that young participants consider more options in a given time frame and therefore make higher-quality decisions.

The fourth tested component in objective two is employees' length of employment in terms of PDM in the manufacturing industry in Batu Pahat. The one-way ANOVA result for this component was a 0.148 significance value, which means there is no significant difference in PDM across lengths of employment in manufacturing at Batu Pahat. The present results are contrary to those of Sophia, Kostas & Cosmas (20), which indicated that the opportunity to be involved in decision-making is affected by an employee's length of employment. They are also opposite to the findings of Bruiyan (19) of a higher participation rate in decision-making among those with more service experience, with the least experienced workers having the lowest involvement in decision-making.
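These tests are straightforward to replay. The sketch below uses hypothetical Likert-scale PDM scores (the study's 160 questionnaires are not published), so it mirrors only the procedure, not the paper's numbers:

```python
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert PDM scores, for illustration only
pdm_male   = np.array([3.5, 3.2, 3.8, 3.4, 3.6, 3.1, 3.7])
pdm_female = np.array([3.3, 3.7, 3.1, 3.5, 3.4, 3.6, 3.2])

# Two groups -> independent-samples t-test (gender vs. PDM)
t_stat, p_gender = stats.ttest_ind(pdm_male, pdm_female)

# More than two groups -> one-way ANOVA (e.g., age, education, tenure)
age_20s = np.array([3.4, 3.6, 3.2, 3.5])
age_30s = np.array([3.5, 3.3, 3.8])
age_40s = np.array([3.7, 3.4, 3.3])
f_stat, p_age = stats.f_oneway(age_20s, age_30s, age_40s)

print(p_gender, p_age)   # p > 0.05 would mean no significant difference
```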
The sample size of this research is relatively small: only 160 questionnaires could be used in the data analysis, and research with more respondents would normally be more precise. Several recommendations follow. First, future researchers may extend the target scope to all sectors of organizations in Malaysia for comparison with the present results. Second, future researchers should widen the respondents' scope, for example by accessing different departments, which would produce more accurate outcomes on a similar research topic. Lastly, future researchers may examine more human resource variables such as job motivation, employee performance and job satisfaction.

In conclusion, the researcher found that the companies in the present research had implemented PDM in manufacturing at a moderate level. In addition, employee age, gender, education qualification and length of employment did not have a significant effect on PDM in the manufacturing industry at Batu Pahat, Johor.

ACKNOWLEDGMENT

This work was supported by Universiti Tun Hussein Onn Malaysia under Grant No. U431.
REFERENCES

[1] Yoder, K. & Wise, P. S. (1999). Leading and Managing in Nursing. 2nd Edition. St. Louis: Mosby Company, pp. 24.
[2] Subbulakshmi, S., Nagarajan, S. K. & Felix, W. J. (2014). A study on participation in decision making among members of quality circle in manufacturing companies. Journal of Business and Management, 16(4), pp. 25-31.
[3] Galbraith, J. R. & Lawler, E. E. (1993). Organization for the future: The new logic for managing complex organizations. San Francisco: Jossey-Bass.
[4] Ting, K. S. (2012). The level and effects of participation in decision making (PDM) on employee group for the manufacturing and servicing sectors in Malaysia. Master's project report, University Tunku Abdul Rahman.
[5] Shlomo, M. (2002). Workers' participation in decision-making processes and firm stability. British Journal of Industrial Relations, 40(4), pp. 687-707.
[6] Sager, K. L. & Gastil, J. (1999). Reaching consensus on consensus: A study of the relationships between individual decision-making styles and the use of the consensus decision rule. Communication Quarterly, 47(1), pp. 67-79.
[7] Steinheider, B., Bayerl, P. S. & Wuestewald, T. (2006). The effects of participative management on employee commitment, productivity, and community satisfaction in a police agency. Paper presented at the annual meeting of the International Communication Association, Dresden International Congress Centre, Dresden, Germany. Retrieved from http://www.allacademic.com/meta/p93097_index.html
[8] Cotton, J. L., Vollrath, D. A., Froggatt, K. L., Lengnick-Hall, M. L. & Jennings, K. R. (1988). Employee participation: Diverse forms and different outcomes. Academy of Management Review, 13(1), pp. 8-22.
[9] Rice, K. (1987). Empowering teachers: A search for professional autonomy. Master's thesis, Dominican College of San Rafael.
[10] Helms, M. M. (2006). "Theory X and Theory Y". Encyclopedia of Management Education. Retrieved November 1, 2008 from http://www.enotes.com/management-encyclopedia/theory-x-theory-y
[11] Newstrom, J. W. & Davis, K. (2004). Organizational behavior: Human behavior at work (11th Edition). New Delhi: McGraw-Hill, pp. 187-200.
[12] Gill, S., Stockard, J., Johnson, M. & Williams, S. (1987). Measuring gender differences: The expressive dimension and critique of androgyny scales. Sex Roles, 17(7), pp. 375-400.
[13] Wood, J. T. (1990). Gendered lives: Communication, gender, and culture. Belmont, CA: Wadsworth.
[14] Maria, L. S. & Maria, T. S. A. (2007). Factors that affect decision making: Gender and age differences. International Journal of Psychology and Psychological Therapy, pp. 381-391.
[15] Johnson, M. (1990). Age differences in decision making: A process methodology for examining strategic information processing. The Journal of Gerontology, 45(2), pp. 75-78.
[16] Dodi, W. I. (2015). Employee participation in decision-making: Evidence from a state-owned enterprise in Indonesia. Management, 20, pp. 159-172.
[17] Bassy, M. (2002). Motivation and work: Investigation and analysis of motivation factors at work. Sweden: Linkoping University.
[18] Yang, H.-L., Kao, Y.-H. & Huang, Y.-C. (2006). The job self-efficacy and job involvement of clinical nursing teachers. Journal of Nursing Research, 14(3), pp. 237-249.
[19] Bruiyan, H. A. (2010). Employee participation in decision making in RMG sector of Bangladesh: Correlation with motivation and performance. Journal of Business and Technology (Dhaka), pp. 1-11.
[20] Sophia, A., Kostas, K. & Cosmas, N. (2014). Participation in decision making, productivity and job satisfaction among managers of fish farms in Greece. International Business Research, 7(12).
[21] Zikmund, W. G., Babin, B. J., Carr, J. C. & Griffin, M. (2013). Business research methods. 9th Edition. Canada: Cengage Learning.
[22] Krejcie, R. V. & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30, pp. 607-610.
[23] Rumpf, P. (1996). Participation in employee involvement programs. Master's thesis, Victoria University of Technology.
[24] Faduma, H. A. (2014). Utilizing ISO 10018:2012, quality management: Guidelines on people involvement and competence principles to enhance quality in a clinical laboratory setting. Master's thesis, California State University, Dominguez Hills.
[25] George, D. & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference, 11.0 update (4th ed.). Boston: Allyn & Bacon.
[26] Wiersma, W. (1995). Research methods in education: An introduction (6th ed.). Boston: Allyn and Bacon.
[27] Burke, R. J. (2002). Organizational values, job experiences and satisfactions among managerial and professional women and men: Advantage men. Women in Management Review, 17(5), pp. 228-236.
[28] Cortis, R. & Cassar, V. (2005). Perceptions of and about women as managers: Investigating job involvement, self-esteem and attitudes. Women in Management Review, 20(3), pp. 149-164.
Estimation of Response Surface based on Central Composite Design using Spatial Prediction

Haeil Ahn
Department of Industrial Engineering
Seokyeong University
Seoul, Korea 136-704
hiahn@skuniv.ac.kr
Abstract—An attempt is made to employ spatial prediction (SP) models such as kriging and splines in the estimation of the response surface in central composite design (CCD), with a view to enhancing the reproducibility of actual response surfaces by combining CCD and SP. A brief introduction to CCD and SP is presented in the paper, and a numerical example is examined to discuss the merits of the method. In the modeling process, a computer program was developed to carry out the spatial prediction, and plotting software is utilized to represent the response surfaces graphically.

Keywords—central composite design; polynomial regression; spatial prediction; kriging

I. INTRODUCTION

When it comes to designing an experiment of ongoing processes, multivariable polynomial regression models such as central composite design (CCD), Box-Behnken design (BBD) and mixture design (MD) are in extensive use, as in Myers and Montgomery [7]. In a usual modeling process, the multivariable polynomials are fit to the observed data. A low-order polynomial may not reflect the actual responses of the involved experiments, which is often called lack-of-fit. For this reason, an experimenter might be in trouble unless he or she designs the experiment with some additional design points or replicates to measure lack-of-fit and fit higher-order terms.

As the number of design points increases, a higher-order model is conceivable. However, the higher-order model might entail some problems, among them what is called over-fitting. To avoid over-fitting, one must refrain from employing high-order polynomials. Usually, polynomial models higher than second order are not considered, because a higher-order model shows irregular fluctuations, or 'over-fitting', as in CCD [1].

In the literature, spatial prediction methods such as kriging and splines are available, as in Cressie [3]. These methods might be able to help experimenters overcome the problems that can arise in fitting a polynomial model to the data.

In the SP model, the multivariable polynomial is employed to represent the large-scale variation and is usually fit to scattered data to reproduce irregular surfaces. For this reason, the SP model lacks the concept of orthogonality or rotatability of the model, whereas the orthogonality concept plays a key role in central composite design (CCD).

In this paper, we address the problem of employing spatial prediction methods in estimating the response surfaces of experimental designs based on central composite design (CCD), in order to take advantage of the two methods by combining them and thus to reproduce more plausible response surfaces of the experiment.

II. CENTRAL COMPOSITE DESIGN

A. Background

Normally, CCD consists of three types of design points: factorial points, axial points and the design center. In the literature, there are several types of CCD [1]. In this study, the CCD of second type (CCD2) is considered. Let $k$ be the number of variables and $F = 2^k$ be the number of factorial points. The design points of CCD2 are shown in Figure 1.

Figure 1 Design points of CCD2 where k = 2 and k = 3

The multivariable polynomial model in relation to CCD2 can be identified as follows:

$$y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i=1}^{k-1} \sum_{j=i+1}^{k} \beta_{ij} x_i x_j + \varepsilon \quad (1)$$
For the design points, an index set $U$ is conceivable. For any index $u \in U$, the centered model can be rewritten as follows:

$$y_u = \beta_0' + \sum_{i=1}^{k} \beta_i x_{iu} + \sum_{i=1}^{k} \beta_{ii}\,(x_{iu}^2 - \overline{x_i^2}) + \sum_{i=1}^{k-1} \sum_{j=i+1}^{k} \beta_{ij} x_{iu} x_{ju} + e_u \quad (2)$$

Here, the intercept of the centered model can be identified as $\beta_0' = \beta_0 + \sum_{i=1}^{k} \beta_{ii}\,\overline{x_i^2}$. For convenience's sake, let $m_i = \overline{x_i^2}$ for $i = 1, 2, \ldots, k$. Note that $m_1 = m_2 = \cdots = m_k = m$. That is,

$$m = \overline{x_i^2} = \sum_{u=1}^{N} x_{iu}^2 / N \quad \text{for } i = 1, 2, \ldots, k \quad (3)$$

The centered square columns of $(x_{iu}^2 - m)$ for $i = 1, 2, \ldots, k$, whose column labels are $x_1^2 - m,\ x_2^2 - m,\ \ldots,\ x_k^2 - m$, can be given as follows:

$$\begin{pmatrix}
1-m & 1-m & \cdots & 1-m \\
\vdots & \vdots & & \vdots \\
1-m & 1-m & \cdots & 1-m \\
-m & -m & \cdots & -m \\
\vdots & \vdots & & \vdots \\
-m & -m & \cdots & -m \\
\alpha_1^2 - m & -m & \cdots & -m \\
\alpha_1^2 - m & -m & \cdots & -m \\
\vdots & \vdots & & \vdots \\
-m & -m & \cdots & \alpha_1^2 - m \\
\alpha_2^2 - m & -m & \cdots & -m \\
\alpha_2^2 - m & -m & \cdots & -m \\
\vdots & \vdots & & \vdots \\
-m & -m & \cdots & \alpha_2^2 - m
\end{pmatrix} \quad (4)$$

where the first $F$ rows correspond to the factorial points, the next $n_C$ rows of $-m$ to the center points, and the remaining $4k$ rows to the axial points at $\pm\alpha_1$ and $\pm\alpha_2$. The $m$ can be identified as follows:

$$m = (F + 2\alpha_1^2 + 2\alpha_2^2)/N \quad (5)$$

The centered model can be rewritten as

$$y_u = \beta_0' + \sum_{i=1}^{k} \beta_i x_{iu} + \sum_{i=1}^{k} \beta_{ii} (x_{iu}^2 - m) + \sum_{i=1}^{k-1} \sum_{j=i+1}^{k} \beta_{ij} x_{iu} x_{ju} + e_u \quad (6)$$

for any $u \in U$. Since the centered square columns are orthogonal to the column of ones, it can be shown that

$$\sum_{u=1}^{N} (x_{iu}^2 - m) = 0 \quad (7)$$

In addition, it is not difficult to observe that the centered square columns are also orthogonal to the linear and interaction columns. The condition that the centered square columns are orthogonal to each other is also required. The following is a necessary condition:

$$\sum_{u=1}^{N} (x_{iu}^2 - m)(x_{ju}^2 - m) = 0 \quad (8)$$

for $i \neq j$, $i, j = 1, 2, \ldots, k$. The condition for the centered square columns can be determined as follows:

$$m^2 N = \left( \frac{F + 2\alpha_1^2 + 2\alpha_2^2}{N} \right)^2 N = \frac{(F + 2\alpha_1^2 + 2\alpha_2^2)^2}{N} = F \quad (9)$$

For example, the regression matrix $X$ of CCD2 with $k = 2$ is given as follows, with $F$ factorial rows, $n_C$ center rows and $4k$ axial rows:

$$X = \begin{pmatrix}
1 & x_1 & x_2 & x_1^2 & x_2^2 & x_1 x_2 \\
1 & -1 & -1 & 1 & 1 & 1 \\
1 & -1 & 1 & 1 & 1 & -1 \\
1 & 1 & -1 & 1 & 1 & -1 \\
1 & 1 & 1 & 1 & 1 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
1 & 0 & 0 & 0 & 0 & 0 \\
1 & -\alpha_1 & 0 & \alpha_1^2 & 0 & 0 \\
1 & \alpha_1 & 0 & \alpha_1^2 & 0 & 0 \\
1 & 0 & -\alpha_1 & 0 & \alpha_1^2 & 0 \\
1 & 0 & \alpha_1 & 0 & \alpha_1^2 & 0 \\
1 & -\alpha_2 & 0 & \alpha_2^2 & 0 & 0 \\
1 & \alpha_2 & 0 & \alpha_2^2 & 0 & 0 \\
1 & 0 & -\alpha_2 & 0 & \alpha_2^2 & 0 \\
1 & 0 & \alpha_2 & 0 & \alpha_2^2 & 0
\end{pmatrix} \quad (10)$$

B. Orthogonal and Rotatable CCD

The CCD2 is known to be orthogonal and rotatable at the same time. Rotatable CCD2 is subject to the same conditions as rotatable CCD [4, 5]. Necessary and sufficient conditions for rotatability are that all odd moments through order four are zero and that

$$\sum_{u=1}^{N} x_{iu}^4 = 3 \sum_{u=1}^{N} x_{iu}^2 x_{ju}^2 \quad (11)$$

According to Kim [4], the orthogonality condition for CCD2 is as follows:

$$\alpha_1^2 + \alpha_2^2 = \frac{\sqrt{F (F + 4k + n_C)} - F}{2} \quad (12)$$

In this case, $N = F + 4k + n_C$. The rotatability condition for CCD2 is as follows:

$$\alpha_1^4 + \alpha_2^4 = F \quad (13)$$

By solving these two equations simultaneously, we can obtain an exactly orthogonal and rotatable CCD2 as in Kim [4]. We can devise numerous different CCD2s using the $\alpha_1$ and $\alpha_2$ values shown in Kim and Park [5].

For the sake of notational convenience, let us define

$$\chi_{ij} \equiv x_i x_j, \quad i = 1, \ldots, k, \; j = 1, \ldots, k \quad (14)$$

$$\chi_{ii} \equiv x_i^2 - m, \quad i = 1, \ldots, k \quad (15)$$

The centered model can be rewritten as follows:

$$y_u = \beta_0' + \sum_{i=1}^{k} \beta_i x_{iu} + \sum_{i=1}^{k} \beta_{ii} \chi_{iiu} + \sum_{i=1}^{k-1} \sum_{j=i+1}^{k} \beta_{ij} \chi_{iju} + e_u \quad (16)$$

Owing to the orthogonal decomposition, the parameter estimates and the variation due to each term can be calculated directly from the design and observation points, which is one of the merits of the orthogonal decomposition of the model.
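Conditions (12) and (13) can be solved in closed form: writing $a = \alpha_1^2$ and $b = \alpha_2^2$, equation (12) fixes $a + b$ and equation (13) fixes $a^2 + b^2$, so $a$ and $b$ are the roots of a quadratic. The following Python sketch (our illustration, not part of the paper) reproduces the Table 1 entries; the optional F argument covers the fractional-factorial rows of Table 1:

```python
import numpy as np

def ccd2_alphas(k, n_c, F=None):
    """Axial distances of an orthogonal and rotatable CCD2.

    With a = alpha1^2 and b = alpha2^2, eq. (12) fixes a + b and
    eq. (13) fixes a^2 + b^2, so a and b are roots of a quadratic.
    Returns None when no real nonnegative solution exists (the
    dashes in Table 1). F defaults to the full 2^k factorial."""
    if F is None:
        F = 2.0 ** k
    N = F + 4 * k + n_c
    s = (np.sqrt(F * N) - F) / 2.0     # a + b, eq. (12)
    prod = (s * s - F) / 2.0           # a * b, from eq. (13)
    disc = s * s - 4.0 * prod
    if prod < 0.0 or disc < 0.0:
        return None
    r = np.sqrt(disc)
    return np.sqrt((s - r) / 2.0), np.sqrt((s + r) / 2.0)

print(ccd2_alphas(2, 5))        # ~(0.3566, 1.4128), the k=2, nC=5 entry
print(ccd2_alphas(5, 1, F=16))  # ~(0.4112, 1.9991), the half-fraction row
```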
C. Numerical Example

In order to examine the heat distribution of an induction range, the surface temperature of the radiation panel was measured with an infrared thermometer in accordance with the design points of a CCD2 where $k = 2$ and $n_C = 5$. The temperature at the design center was measured 5 times, but only once at the rest of the design points. The temperatures at all the points are presented in degrees Celsius in Table 2.

Table 1 α1 and α2 for various k, F and nC

              nC:     1       2       3       4       5       6       7       8       9      10
k=2, F=4   α1:        -       -       -   0.0000  0.3566  0.5095  0.6318  0.7409  0.8453  0.9533
           α2:        -       -       -   1.4142  1.4128  1.4082  1.3999  1.3868  1.3667  1.3348
k=3, F=8   α1:        -       -       -   0.3187  0.5041  0.6426  0.7617  0.8709  0.9760  1.0824
           α2:        -       -       -   1.6813  1.6784  1.6728  1.6638  1.6507  1.6319  1.6045
k=4, F=16  α1:        -       -       -   0.0000  0.4112  0.5862  0.7242  0.8445  0.9547  1.0593
           α2:        -       -       -   2.0000  1.9992  1.9963  1.9913  1.9839  1.9735  1.9594
k=5, F=32  α1:        -       -       -       -       -       -   0.2629  0.5080  0.6723  0.8074
           α2:        -       -       -       -       -       -   2.3783  2.3772  2.3746  2.3705
k=5, F=16  α1:    0.4112  0.5862  0.7242  0.8445  0.9547  1.0593  1.1616  1.2652  1.3756  1.5079
           α2:    1.9991  1.9963  1.9913  1.9839  1.9735  1.9594  1.9405  1.9146  1.8772  1.8141

Table 2 Temperatures at Design Points

Design                    Regression Variables
point      1     x1        x2        x1^2     x2^2     x1x2     y (°C)
  1        1    -1        -1         1        1         1        262
  2        1    -1         1         1        1        -1        259
  3        1     1        -1         1        1        -1        264
  4        1     1         1         1        1         1        261
  5        1     0         0         0        0         0        420
  6        1     0         0         0        0         0        422
  7        1     0         0         0        0         0        423
  8        1     0         0         0        0         0        418
  9        1     0         0         0        0         0        419
 10        1    -0.3566    0         0.1272   0         0        460
 11        1     0.3566    0         0.1272   0         0        455
 12        1     0        -0.3566    0        0.1272    0        467
 13        1     0         0.3566    0        0.1272    0        464
 14        1    -1.4128    0         1.9960   0         0        255
 15        1     1.4128    0         1.9960   0         0        258
 16        1     0        -1.4128    0        1.9960    0        266
 17        1     0         1.4128    0        1.9960    0        259
Total     17     0         0         8.2464   8.2464    0       6032
Mean       1     0         0         0.4851   0.4851    0        354.82

Figure 2 shows the regression analysis result for the multivariable polynomial model. As it turned out, most of the monomial terms are insignificant, except the intercept and the second-order terms.

Regression Analysis: y versus x1, x2, x11, x22, x12

Analysis of Variance
Source         DF   Adj SS   Adj MS   F-Value  P-Value
Regression      5   132103  26420.5     46.29    0.000
  x1            1        5      5.1      0.01    0.927
  x2            1       28     28.3      0.05    0.828
  x1*x1         1    68392  68392.2    119.84    0.000
  x2*x2         1    63671  63671.2    111.57    0.000
  x1*x2         1        2      2.4      0.00    0.949
Error          11     6278    570.7
  Lack-of-Fit   7     6261    894.4    207.99    0.000
  Pure Error    4       17      4.3
Total          16   138380

Figure 2 ANOVA for Polynomial Model

The regression matrix and the data for the centered model are shown in Table 3. We can make use of the data either in Table 2 or in Table 3.

Table 3 Regression Data for Centered Model

Design                Centered Regression Variables
point      1     x1        x2        χ11      χ22      χ12       y
  1        1    -1        -1         0.5149   0.5149    1        262
  2        1    -1         1         0.5149   0.5149   -1        259
  3        1     1        -1         0.5149   0.5149   -1        264
  4        1     1         1         0.5149   0.5149    1        261
  5        1     0         0        -0.4851  -0.4851    0        420
  6        1     0         0        -0.4851  -0.4851    0        422
  7        1     0         0        -0.4851  -0.4851    0        423
  8        1     0         0        -0.4851  -0.4851    0        418
  9        1     0         0        -0.4851  -0.4851    0        419
 10        1    -0.3566    0        -0.3579  -0.4851    0        460
 11        1     0.3566    0        -0.3579  -0.4851    0        455
 12        1     0        -0.3566   -0.4851  -0.3579    0        467
 13        1     0         0.3566   -0.4851  -0.3579    0        464
 14        1    -1.4128    0         1.5109  -0.4851    0        255
 15        1     1.4128    0         1.5109  -0.4851    0        258
 16        1     0        -1.4128   -0.4851   1.5109    0        266
 17        1     0         1.4128   -0.4851   1.5109    0        259
Total     17     0.0       0.0       0.0      0.0       0.0     6032

For the parsimony of the model, we decided to carry out a stepwise selection of terms. The result is shown in Figure 3.

Regression Analysis: y versus x1, x2, x11, x22, x12

Stepwise Selection of Terms
  α to enter = 0.15, α to remove = 0.15

Analysis of Variance
Source         DF   Adj SS   Adj MS   F-Value  P-Value
Regression      2   132060  66030.2    146.27    0.000
  X11           1    68392  68392.2    151.50    0.000
  X22           1    63671  63671.2    141.04    0.000
Error          14     6320    451.4
  Lack-of-Fit  10     6303    630.3    146.58    0.000
  Pure Error    4       17      4.3
Total          16   138380

Model Summary
      S    R-sq  R-sq(adj)  R-sq(pred)
21.2471  95.43%     94.78%      94.19%

Figure 3 ANOVA for the Final Model

The final model is then the following:

$$\hat{y} = 354.82 - 92.46\,(x_1^2 - 0.4851) - 89.21\,(x_2^2 - 0.4851) \quad (17)$$

There seems to be no need for canonical analysis. However, the extremely significant lack-of-fit indicates that the model might need more monomial terms.

Coefficients
Term        Coef  SE Coef  T-Value  P-Value   VIF
Constant  354.82     5.15    68.85    0.000
X11       -92.46     7.51   -12.31    0.000   1.00
X22       -89.21     7.51   -11.88    0.000   1.00

Regression Equation
y = 354.82 - 92.46 X11 - 89.21 X22

Figure 4 Parameter Estimates of Final Model
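Because the centered columns are mutually orthogonal, each coefficient of the final model (17) can be recomputed independently as χ'y / χ'χ. A short Python check of this (our illustration, using the Table 2 data):

```python
import numpy as np

# Design columns and responses from Table 2 (k = 2, F = 4, n_C = 5, N = 17)
x1 = np.array([-1, -1, 1, 1, 0, 0, 0, 0, 0,
               -0.3566, 0.3566, 0, 0, -1.4128, 1.4128, 0, 0])
x2 = np.array([-1, 1, -1, 1, 0, 0, 0, 0, 0,
               0, 0, -0.3566, 0.3566, 0, 0, -1.4128, 1.4128])
y  = np.array([262, 259, 264, 261, 420, 422, 423, 418, 419,
               460, 455, 467, 464, 255, 258, 266, 259], dtype=float)

m = np.mean(x1 ** 2)                    # eq. (3): m = 0.4851
chi11, chi22 = x1**2 - m, x2**2 - m     # centered square columns, eq. (15)

b0  = y.mean()                          # 354.82
b11 = (chi11 @ y) / (chi11 @ chi11)     # about -92.46
b22 = (chi22 @ y) / (chi22 @ chi22)     # about -89.21
print(round(b0, 2), round(b11, 2), round(b22, 2))
```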
The parameter estimates do not change even though some regression variables are eliminated, owing to the fact that the design is orthogonal. The estimated response surface is rather simple and looks like the following.

Figure 5 Estimated Surface indicating Lack-of-Fit

However, we have misgivings that stem mainly from the fact that the estimated surface does not seem adequate to represent the actual response surface, because the lack-of-fit is so significant.

III. SPATIAL PREDICTION MODEL

The spatial prediction method named 'kriging' is an analogue, for spatial stochastic processes, of the Wiener-Kolmogorov theory. In this method, an irregular surface $y(\mathbf{x})$ is regarded as a realization of a multi-dimensional variable $Y(\mathbf{x})$, where $\{Y(\mathbf{x}) : \mathbf{x} \in D \subset \mathbb{R}^k\}$ is thought of as a real-valued stochastic process defined on a domain $D$ of $\mathbb{R}^k$. The coordinate data in the domain of interest $D$ are obtainable by measuring $y(\mathbf{x})$, $\mathbf{x} \in D$, in the form of $\{(\mathbf{x}_i, y(\mathbf{x}_i)),\ i = 1, \ldots, N\}$. The domain of interest is not restricted to 2-D space; in the case of response surface design, $D$ can be defined in $k$-dimensional space. A set of scattered observations in multi-dimensional space can be analyzed to reproduce the actual surface.

Spatial prediction methods are traditionally divided into two study areas, kriging and splines. An irregular surface $y(\mathbf{x})$ is regarded as the realization of a stochastic process $Y(\mathbf{x})$. The observations of the realizations consist of two parts, large-scale and small-scale variations. In some cases, micro-scale variation is considered as an additional source of variation. That is to say, $Y(\mathbf{x})$ can be represented as a summation of variations of large scale ($\mu$), small scale ($\varepsilon$) and micro scale ($\theta$):

$$Y(\mathbf{x}) = \mu(\mathbf{x}) + \varepsilon(\mathbf{x}) + \theta(\mathbf{x}) \quad (18)$$

where $\mu(\mathbf{x})$ is a deterministic polynomial representing the trend, whereas $\varepsilon(\mathbf{x})$ and $\theta(\mathbf{x})$ are random variables with zero expectations. In most cases, at two distinct design points $\mathbf{x}$ and $\mathbf{x}'$, $\varepsilon(\mathbf{x})$ and $\varepsilon(\mathbf{x}')$ are correlated, while $\theta(\mathbf{x})$ and $\theta(\mathbf{x}')$ are not. Given a series of observation points and a set of design points $\{\mathbf{x}_1, \ldots, \mathbf{x}_N\}$, the estimate of $y(\mathbf{x})$ at a prediction point $\mathbf{x}$ can be thought of as

$$\hat{y}(\mathbf{x}) = \hat{\mu}(\mathbf{x}) + \hat{\varepsilon}(\mathbf{x}) \quad (19)$$

If the micro-scale variation is omitted from the model, then the prediction is equivalent to interpolation; prediction with the model including micro-scale variation results in an approximation. The large-scale variation of $y(\mathbf{x})$ is a polynomial of degree $d-1$, say $\mu(\mathbf{x}) = p_d(\mathbf{x})$. A $k$-variable polynomial of degree $d-1$ has $\binom{k+d-1}{k} = r$ monomials, denoted by the $q_j(\mathbf{x})$'s. Then,

$$E[Y(\mathbf{x})] = \mu(\mathbf{x}) = p_d(\mathbf{x}) = \sum_{j=1}^{l} \beta_j q_j(\mathbf{x}) \quad (20)$$

where $\beta_j$ is the coefficient of each monomial. In the case of response surface models, this polynomial can be transformed into canonical form; as a result, the total number of monomials $r$ reduces to $l$ ($l < r$). Using vector and matrix notation, let us define the vector

$$\mathbf{u} = ( q_1(\mathbf{x})\ \ q_2(\mathbf{x})\ \ \cdots\ \ q_l(\mathbf{x}) )' \quad (21)$$

The observations and the regression matrix are as follows:

$$\mathbf{y} = \begin{pmatrix} y(\mathbf{x}_1) \\ y(\mathbf{x}_2) \\ \vdots \\ y(\mathbf{x}_N) \end{pmatrix}, \qquad X = \begin{pmatrix} q_1(\mathbf{x}_1) & q_2(\mathbf{x}_1) & \cdots & q_l(\mathbf{x}_1) \\ q_1(\mathbf{x}_2) & q_2(\mathbf{x}_2) & \cdots & q_l(\mathbf{x}_2) \\ \vdots & \vdots & & \vdots \\ q_1(\mathbf{x}_N) & q_2(\mathbf{x}_N) & \cdots & q_l(\mathbf{x}_N) \end{pmatrix} \quad (22)$$

Given the vector $\mathbf{y}$ at the design points $\{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N\}$ and the order $d$, the polynomial part can be rewritten as

$$E[Y(\mathbf{x})] = \mu(\mathbf{x}) = p_d(\mathbf{x}) = \sum_{j=1}^{l} \beta_j q_j(\mathbf{x}) = \boldsymbol{\beta}' \mathbf{u} \quad (23)$$

As mentioned earlier, if the small-scale variation ($\varepsilon$) is assumed to be negligible, the problem reduces to an ordinary least squares (OLS) or multivariable polynomial regression problem. Using the normal equation $(X'X)\boldsymbol{\beta} = X'\mathbf{y}$, the estimate $\hat{\boldsymbol{\beta}}$ can be obtained as

$$\hat{\boldsymbol{\beta}} \equiv \mathbf{b} = (X'X)^{-1} X'\mathbf{y} \quad (24)$$

If the small-scale variation is incorporated in the model, the predictor at point $\mathbf{x}$ is defined as follows:

$$\hat{Y}(\mathbf{x}) = \sum_{i=1}^{N} \lambda_i(\mathbf{x})\, Y(\mathbf{x}_i) \quad (25)$$

where $\lambda_i = \lambda_i(\mathbf{x}) = \lambda(\mathbf{x}, \mathbf{x}_i)$, and the statistic is known as the best linear unbiased predictor (BLUP). Without considering micro-scale variation, the predictor can be obtained from the following:

$$\hat{y}(\mathbf{x}) = \mathbf{y}'\boldsymbol{\lambda} = \mathbf{y}'B\mathbf{u} + \mathbf{y}'A\boldsymbol{\kappa} = \mathbf{b}'\mathbf{u} + \mathbf{c}'\boldsymbol{\kappa} \quad (26)$$

where $A = K^{-1} - K^{-1}X(X'K^{-1}X)^{-1}X'K^{-1}$, $B = K^{-1}X(X'K^{-1}X)^{-1}$ and $\boldsymbol{\lambda} = B\mathbf{u} + A\boldsymbol{\kappa}$. The matrix $K$ is the distance matrix based on the kernel function. The identities $\mathbf{c}' = \mathbf{y}'A$ and $\mathbf{b}' = \mathbf{y}'B$ hold. The algebraic identity $X'A = 0$ also holds, which indicates the orthogonal decomposition. Each element of the kernel vector $\boldsymbol{\kappa} = ( \kappa(\mathbf{x}, \mathbf{x}_1),\ \kappa(\mathbf{x}, \mathbf{x}_2),\ \ldots,\ \kappa(\mathbf{x}, \mathbf{x}_N) )'$ is called the variance function or variogram in kriging and the kernel function in splines. There are different types of models depending on the order of the model and the stationarity of the variogram.
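In practice, b and c in (26) need not be obtained by forming A and B explicitly: since c = Ay satisfies X'c = 0 and Kc + Xb = y, the pair solves one symmetric block system. A minimal Python sketch of this computation (our illustration, with K, X and y as defined above):

```python
import numpy as np

def dual_coefficients(K, X, y):
    """Compute b (polynomial part) and c (kernel part) of eq. (26).

    Solves the block system  [K  X; X' 0] [c; b] = [y; 0],
    which is algebraically equivalent to c = A y and b = B' y
    with A and B as defined after eq. (26)."""
    N, l = X.shape
    M = np.block([[K, X],
                  [X.T, np.zeros((l, l))]])
    sol = np.linalg.solve(M, np.concatenate([y, np.zeros(l)]))
    c, b = sol[:N], sol[N:]
    return b, c
```

The prediction at a new point x is then b'u + c'κ, exactly as in (26).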
On the other hand, spline models represent the small-scale variation of the actual surface as a weighted sum of kernel functions. For instance, in the case that $k = 2$, the typical spline model looks like the following:

$$y(\mathbf{x}) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{12} x_1 x_2 + \sum_{i=1}^{N} \gamma_i \cdot \kappa(\mathbf{x}, \mathbf{x}_i) \quad (27)$$

Usually, the large-scale variation $\mu(\mathbf{x})$ does not have any second-order term. However, we are going to employ second-order terms in the large-scale variation and fit the model to the data, just as in the multivariable polynomial model for CCD2. That is to say,

$$y(\mathbf{x}) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{11} x_1^2 + \beta_{22} x_2^2 + \beta_{12} x_1 x_2 + \sum_{i=1}^{N} \gamma_i \cdot \kappa(\mathbf{x}, \mathbf{x}_i) \quad (28)$$

There are different kernel functions frequently used in practice. Among those, the thin plate spline (TPS), the multi-quadric (MQ) and other radial basis functions (RBF) are the most famous. According to Michelli [6], in which the validity of those kernels or basis functions is investigated, the distance matrix $K$ based on the kernel function should be conditionally positive definite in order for a kernel to be valid, a property closely related to the order of the model. The thin plate spline is used in this study as the kernel function, because it is known to be naturally smooth:

$$\kappa(\mathbf{x}, \mathbf{x}_j) = \kappa(\|\mathbf{x} - \mathbf{x}_j\|) = \|\mathbf{x} - \mathbf{x}_j\|^2 \ln \|\mathbf{x} - \mathbf{x}_j\| \quad (29)$$

According to Cressie [3], there is a close link between kriging and splines: if one regards kriging as a primal form, then splines are the dual form of kriging. In many respects, kriging and splines are closely related to each other.

The result of the spatial prediction after fitting the model to the same data as before is shown in Figure 6.

Figure 6 Surface Estimated using Spatial Prediction

The polynomial part of the model, reflecting the large-scale variation, is of second order, and the thin plate spline (TPS) kernel function is employed to represent the small-scale variation. To estimate the response surface, a program was written in Microsoft Visual Basic .NET; the 3D plot is a result of Gnuplot 5.0. As a matter of fact, the estimated surface is an interpolant of the data points, in that the interpolating surface passes through the data points, or through the mean value of the replicates at the design center.

We were able to obtain a more plausible response surface just by employing the spatial prediction method. The surface has a dimple in the middle, which seems to be the major reason why the ANOVA table for the polynomial model indicates lack-of-fit.
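As an illustration of the whole procedure (our sketch in Python, not the paper's Visual Basic .NET program), the TPS model (28)-(29) can be fitted to the Table 2 data and evaluated at arbitrary points; replicates at the design center are collapsed to their mean, through which the interpolant passes:

```python
import numpy as np

def tps(r):
    """Thin plate spline kernel of eq. (29), with the limit value 0 at r = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def monomials(p):
    """Second-order large-scale part of eq. (28): (1, x1, x2, x1^2, x2^2, x1*x2)."""
    x1, x2 = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Design points of Table 2 (center replicates collapsed to their mean, 420.4)
pts = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0],
                [-0.3566, 0], [0.3566, 0], [0, -0.3566], [0, 0.3566],
                [-1.4128, 0], [1.4128, 0], [0, -1.4128], [0, 1.4128]])
y = np.array([262, 259, 264, 261, 420.4,
              460, 455, 467, 464, 255, 258, 266, 259], dtype=float)

X = monomials(pts)
K = tps(np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2))
N, l = X.shape

# Dual coefficients via the block system of the previous sketch
M = np.block([[K, X], [X.T, np.zeros((l, l))]])
sol = np.linalg.solve(M, np.concatenate([y, np.zeros(l)]))
c, b = sol[:N], sol[N:]

def predict(q):
    """Evaluate y_hat(x) = b'u + c'kappa of eq. (26) at query points q."""
    kap = tps(np.linalg.norm(q[:, None, :] - pts[None, :, :], axis=2))
    return monomials(q) @ b + kap @ c

print(predict(pts).round(1))   # reproduces the data: an interpolant
```

Evaluating predict on a dense grid and passing the result to a plotting tool reproduces a surface of the kind shown in Figure 6.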
IV. CONCLUSION

In central composite design, an experimenter designs the experiment with some additional runs or replicates at the design center, which help to measure lack-of-fit. Nevertheless, an experimenter might be in trouble whenever he or she encounters lack-of-fit, because it demands fitting higher-order terms, and fitting a higher-order model requires more design points. Moreover, an experimenter may well consider the orthogonality of the model desirable; however, an orthogonal regression model for multivariable polynomials higher than second order is not known in the literature. Besides, higher-order models are not free of over-fitting.

Meanwhile, the spatial prediction method enables an experimenter to estimate and investigate the response surface even with the minimum runs required to fit the designated model. The merit of employing the SP method lies in the fact that the response surface is obtainable either with or without replication; it is not necessary for the experimenter to keep extra design points or replication in mind. The drawback of the SP method is that it requires a great deal of programming work, because no commercial software package for this type of problem is available.

For this reason, it is highly recommended that the SP method be taken into consideration when designing an experiment and analyzing the response surfaces. The SP method can be regarded as a tool for the pilot study of the surfaces of experimental design problems. It pays by making an experimenter more confident about the behavior of the response surfaces of the experiments in which he or she is engaged.

V. REFERENCES

[1] Box, G. E. P. and Hunter, J. S., Multifactor experimental designs for exploring response surfaces, The Annals of Mathematical Statistics, Vol. 28: 195-241, 1957.
[2] Box, G. E. P. and Wilson, K. B., On the experimental attainment of optimum conditions, Journal of the Royal Statistical Society, Series B, Vol. 13: 1-45, 1951.
[3] Cressie, N. A. C., Statistics for Spatial Data, John Wiley & Sons, USA, 1991.
[4] Kim, H. J., Extended central composite designs with the axial points indicated by two numbers, The Korean Communications in Statistics, Vol. 9: 595-605, 2002.
[5] Kim, H. J. and Park, S. H., Statistical properties of second type central composite designs, The Korean Journal of Applied Statistics, Vol. 19(2): 257-270, 2006.
[6] Michelli, C. A., Interpolation of scattered data: distance matrices and conditionally positive definite functions, Constructive Approximation, Vol. 2: 11-22, 1986.
[7] Myers, R. H. and Montgomery, D. C., Response Surface Methodology: Process and Product Optimization Using Designed Experiments, 2nd Edition, Wiley Series in Probability and Statistics, 2002.
Reducing Operational Downtime in Service Processes: A Six Sigma Case Study

Raid Al-Aomar*, Saeed Aljeneibi, and Shereen Almazroui
Master of Engineering Management Program
College of Engineering, Abu Dhabi University
Abu Dhabi, UAE
*Email: raid.alaomar@adu.ac.ae

Abstract—This paper presents a Six Sigma approach for reducing operational downtime in service processes. The objective is to improve the service processes for timely delivery of products and services. A case study of downtime reduction in the service processes of a local company is used to illustrate the Six Sigma application. The company has been experiencing an unexpected increase in downtime in the provision of goods and services supporting aviation equipment. The Six Sigma implementation significantly reduced the loss in operational effectiveness by reducing downtime. The paper also presents the tools and techniques used in the DMAIC process to reduce downtime and improve the overall service level.

Keywords—Six Sigma; Downtime Reduction; Service Operations.

I. INTRODUCTION

Process improvement through downtime reduction is a common business practice. At the operational level, process improvement has a more focused view, as downtime impacts productivity, utilization of resources, and service level. Such performance aspects are increasingly becoming the focus of Six Sigma studies in both manufacturing and transactional operations [1, 2, 3].

In service operations, uptime is a key indicator of operational effectiveness. This is particularly important for costly services of a 24-7 nature such as technical support, shipping services, emergency services, maintenance and repair, and call centers. In such services, every lost minute of operation often translates into significant cost to the firm, in addition to the negative impact on customer care and service. Examples of Six Sigma applications to service processes can be found in Ray and John [4], Hensley and Dobie [5], and Johannsen et al. [6].

Six Sigma is one of the most commonly practiced data-driven approaches, with proven results in quality measurement, improvement, and design, and it is widely used in industries belonging to almost every sector of the economy [7]. Six Sigma measures the quality level of each quality aspect in terms of a Sigma rating. It targets product/service attributes as well as overall process improvement, and it improves process performance by reducing product defects and process variability. Details of Six Sigma methods and applications can be found in Keller [8] and Zu et al. [9].

Six Sigma improves quality and performance using the structured approach of DMAIC (Define-Measure-Analyze-Improve-Control). DMAIC is a problem-solving approach that ensures complete understanding of process steps, measures process capability at the process Critical-to-Quality metrics (CTQs), applies Six Sigma tools and analysis to improve process performance, and implements methods to control the achieved improvement. Details of the Six Sigma DMAIC process can be found in Breyfogle [10].

In this paper, the Six Sigma DMAIC methodology is applied to reduce the downtime of operations in a service company to a tolerable level. Several attempts had been made to overcome the issue, including circular distribution, brainstorming sessions, and a punctuality scorecard. However, as no tangible improvement was achieved, Six Sigma was adopted to reduce the downtime frequency at key process tasks/functions, to improve labor productivity and resource utilization, and to improve overall process performance.

II. DMAIC METHODOLOGY

The "Define" stage of DMAIC specifies the set of CTQs that characterize the quality attributes of interest to the improvement study. The "Measure" stage is used to estimate the quality attributes of the underlying system; the measured values are also utilized in the "Analyze" and "Improve" stages to identify causes of variability and to set effective improvement actions. Finally, "Control" actions are prescribed to maintain the achieved process improvement. Figure 1 shows the DMAIC process.

Fig. 1. The Six Sigma DMAIC Process
DMAIC tollgates provide a concise roadmap to the various process stakeholders. Based on that, a plan is developed to collect process data and measure process performance, opportunities, and defects. A data-driven and well-structured approach is then followed to analyze sources of variation, solve problems, and enhance performance toward the nearly-perfect Six Sigma target. Table 1 summarizes the commonly used tools and expected deliverables at each DMAIC tollgate.

Table 1. Tools of DMAIC Tollgates

III. APPLICATION CASE STUDY

The Six Sigma DMAIC process was applied to reduce the downtime in a local service company (a growing group of multiple businesses). The group's primary business is providing aviation parts required for aircraft (i.e., accessories and spare parts) and other specific pieces of equipment to major airlines in the region. The group has expanded dramatically to supply several other types of aviation equipment, such as airport ground machinery. The company manufactures, and partially outsources, the production of these parts to carefully selected world-class manufacturers, mainly in France, the UK, and the USA. The company stores aircraft parts and equipment in custom-designed warehouses, where they are subject to minor repairs and quality assurance tests. Once approved by the quality control department, they can be delivered to the clients.

The need for the Six Sigma downtime-reduction project stems from the company's intention to excel in achieving a high level of customer satisfaction through a flawless and uninterrupted provision of both products and services. The primary short-term objective is to provide standardized manufacturing and delivery approvals. Top management believes that achieving such standards mainly depends on the quality provision of both goods and services. Thus, the company is continuously trying to improve its aviation business by conducting quality audits, following a Just-in-Time (JIT) approach, and adopting lean and Six Sigma tools and techniques.

The company has an established 24-hour service system to support orders of aviation parts and accessories and for post-delivery services. However, the company has been experiencing an unexpected increase in downtime in the provision of goods and services. In the last seven months, records showed several bounced-back orders due to the downtime. Several attempts were made to overcome the issue, but no significant improvement was achieved. Thus, a Six Sigma study was initiated to deliver all the ordered components and requested technical assistance in a more timely and efficient manner (i.e., with minimal delay and interruption caused by frequent process downtime). The following sections present the application of the DMAIC tollgates for downtime reduction. Full details are not shown for brevity.

A. Define Stage

The "Define" stage of DMAIC was focused on defining the underlying problem and rationalizing the need for initiating the Six Sigma project. This was formalized in terms of a Six Sigma project charter, which management approved. The charter limited the project scope to processes performed by the teams responsible for supporting aviation services and products. This stage also requires a clear understanding of the service process flow and a concise definition of the CTQ metric of the process (i.e., process downtime). To gain more process knowledge, the project team studied the details of the service process and developed a process flowchart (Figure 2).

Fig. 2. The Service Process Flowchart

B. Measure Stage

The "Measure" stage of DMAIC was focused on data collection and measurement of current-state performance to estimate process capability and Sigma level. The efficiency of the process was measured in terms of the time required to dispatch and deliver the service or product required by the customer. Digital clocks were used to collect precise readings of the times consumed by different teams at different tasks and at random times. The data were recorded in spreadsheets and then transferred to MINITAB software for analysis. Any observed time that exceeded the standard time of the task was considered a delay caused by some kind of downtime in the process and was allocated to the corresponding task and team.

Data were collected across the underlying aviation service process performed by 8 service teams, with a focus on the downtime accumulated monthly. The sheets of collected data are not shown for brevity. Figure 3 shows a summary of downtime minutes within the data collection period (the last 7 months).
Fig. 3. Monthly distribution of collected downtime minutes

To check the accuracy of the collected data, the AIAG standard was used for Measurement System Analysis (MSA) [11]. The standard has a tolerance level of 10% for the measurement system used. Figure 4 shows that the measurement tolerance is below 10% and the overall accuracy is 93.3%. The analysis is based on 942 downtime entries.

Fig. 4. The accuracy of downtime measurement system

The overall short-term Sigma level of the service process was estimated monthly from the collected data. Results are shown in Table 2.

Table 2. The current state monthly Sigma level

Month             February   March   April     May    June    July  August
Downtime (min)        8079    6961    7047    6263    6559    5405    6645
Monthly minutes      40320   44640   43200   44640   43200   44640   44640
%yield                0.80    0.84    0.84    0.86    0.85    0.88    0.85
Z-Score               0.84    1.01    0.98    1.08    1.03    1.17    1.04
Sigma Level (ST)      2.34    2.51    2.48    2.58    2.53    2.67    2.54

As shown in Table 2, the monthly downtime has resulted in a reduced process Sigma level. All reported Sigma levels are less than 3.0, with an average of 2.52, indicating low process capability and a low performance and service level. As mentioned earlier, such low performance has resulted in delays and financial issues with customers.
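The Table 2 figures are consistent with the usual short-term convention Sigma(ST) = Z + 1.5, where Z is the standard normal quantile of the yield. A small Python check (our sketch, assuming this convention; the study itself used MINITAB):

```python
import numpy as np
from scipy.stats import norm

downtime  = np.array([8079, 6961, 7047, 6263, 6559, 5405, 6645], dtype=float)
available = np.array([40320, 44640, 43200, 44640, 43200, 44640, 44640], dtype=float)

yield_ = 1.0 - downtime / available   # fraction of available minutes kept up
z      = norm.ppf(yield_)             # Z-score of the yield
sigma  = z + 1.5                      # short-term Sigma with the 1.5 shift
print(np.round(sigma, 2))             # [2.34 2.51 2.48 2.58 2.53 2.67 2.54]
```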
C. Analyze Stage

The "Analyze" stage of DMAIC was focused on identifying the causes of process variability that lead to the reported process downtime. Pareto analysis, Root Cause Analysis (RCA), and ANOVA were the main Six Sigma tools utilized at this stage. The project team allocated the observed downtime to the 8 teams involved in the aviation service process. Figure 5 shows a Pareto chart of the downtime contributed by the 8 service teams.

Fig. 5. Pareto chart of downtime at 8 service teams

The Pareto chart in Figure 5 shows that 80% of the downtime is contributed by the first 4 service teams. Pareto analysis was also applied to the tasks performed by the team with the highest downtime contribution (the Development Team). Figure 6 ranks the team's tasks based on downtime contribution. Similar analysis was applied to all 8 teams of the service process to identify the specific tasks that need improvement.

Fig. 6. Pareto chart of downtime at various tasks of the development team

To arrive at the root causes of downtime in each service task, Root Cause Analysis (RCA) with a fishbone diagram was utilized. Experts' opinions, operational records, and brainstorming were used to identify root causes. Figure 7 shows a fishbone diagram of downtime in the "update script" task (the task with the highest downtime contribution).

Fig. 7. Fishbone diagram used in RCA
Based on the Pareto analysis and RCA, the root causes of downtime in each service task were identified for all teams, and an improvement action plan was developed accordingly. As the aviation service runs in a 24-7 mode, the project team also analyzed the impact of the working shift on the produced downtime. Figure 8 shows the downtime produced per shift of operation.

Fig. 8. Average downtime per shift

As shown in Figure 8, the evening shift produces most of the process downtime. To verify that statistically, ANOVA analyses using MINITAB [12] were utilized to assess the impact of shift change on process downtime. Figure 9 summarizes the ANOVA results graphically.

Fig. 9. ANOVA analysis of downtime by shift

As shown in Figure 9, the analysis confirms that the operating shift has a significant impact on the process downtime. Similar analyses were conducted to assess the impact of workers' years of experience on process downtime. Results are summarized in Figures 10 and 11.

Fig. 10. Average downtime per years of experience

Figure 10 shows that most of the downtime is contributed by employees with 4-5 years of experience working in the evening shift. Beyond the 4-5 year level, downtime decreases as years of experience increase. The p-value in the ANOVA results in Figure 11 confirms that "years of experience" has a significant impact on the process downtime.

Fig. 11. ANOVA analysis of downtime by years of experience
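Both comparisons are standard one-way ANOVAs. A minimal Python sketch with hypothetical downtime samples (the study's raw data sheets are not published; MINITAB was used instead):

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical per-day downtime minutes grouped by operating shift
morning = np.array([12.0,  9.0, 15.0, 11.0, 10.0])
evening = np.array([25.0, 31.0, 28.0, 35.0, 27.0])
night   = np.array([14.0, 12.0, 18.0, 13.0, 16.0])

f_stat, p_value = f_oneway(morning, evening, night)
print(f_stat, p_value)   # a small p-value indicates shift affects downtime
```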
D. Improve Stage

The "Improve" stage of DMAIC was focused on implementing the improvement action plan derived from the "Analyze" stage. As mentioned earlier, the action plan includes a specific improvement action at each process task that contributes to process downtime (details not shown for brevity). Design of Experiments (DOE) was also utilized to further optimize the process by setting the two critical factors with the highest impact on process performance: operating shift and years of experience. Thus, a simple DOE was conducted to identify the operating levels that lead to reduced downtime. Figure 12 shows the resulting cube plot of process downtime at different levels of shift and years of experience.

Fig. 12. Cube plot of process downtime

Based on the DOE results, the project team recommended that the supervisory staff from the night shift (those with high experience) should be mixed with the low-experience staff of the evening shift. The objective is to reduce the overall downtime in the evening shift and to concurrently improve the
performance of the workers with less than 5 years of experience. The project team then observed the process downtime after changing the process operating pattern and implementing the key items of the process improvement action plan. Results showed that the service process exhibited lower levels of downtime. Figure 13 depicts the downtime reduction pattern over the following 4 months (September through December).

Fig. 13. Process downtime reduction after improvement

The process improvement was also observed through the increase in the overall short-term Sigma level estimated for those 4 months. Table 3 shows that the reported Sigma levels improved; indeed, in the last two months they exceeded 3.0, which indicates a capable process and an improved performance and service level. Consequently, customer complaints decreased and the financial impacts of the downtime were significantly reduced.

Table 3. The improved state monthly Sigma level

Month             September  October  November  December
Downtime (min)         5430     4355      2122      2276
Monthly minutes       43200    44640     43200     44640
%yield                 0.87     0.90      0.95      0.95
Z-Score                1.15     1.30      1.65      1.64
Sigma Level (ST)       2.65     2.80      3.15      3.14

E. Control Stage

The "Control" stage of DMAIC was focused on sustaining the improved level of performance of the aviation service process. This was attained by monitoring the generated process downtime and by controlling the various elements of the service process. The key Six Sigma tools used in this stage include SPC and FMEA, in addition to existing standard operating procedures and process control plans. The project team utilized the X-bar-R chart for SPC of the downtime level and developed an FMEA matrix to analyze and monitor the risks involved in the service process. Details are not shown for brevity.

IV. CONCLUSION

The paper presented a Six Sigma application to reduce the downtime in a service process and improve the overall process effectiveness. The study targeted an important service that supports aviation equipment at a local company (an expensive service that provides 24-7 parts and technical support to aviation companies). High downtime in such a process negatively impacted the availability of the provided services and resulted in delays and interruptions in equipment usage at the customer end. The paper presented a briefing on how the DMAIC tollgates were used to develop a specific process improvement action plan. Results showed that implementing the improvement actions reduced process downtime and improved process capability and Sigma level.

REFERENCES

[1] J. E. Ståhl, P. Gabrielson, C. Andersson, and M. Jönsson, "Dynamic manufacturing costs: Describing the dynamic behavior of downtimes from a cost perspective," CIRP Journal of Manufacturing Science and Technology, 5(4), 284-295, 2012.
[2] N. Roth and M. Franchetti, "Process improvement for printing operations through the DMAIC Lean Six Sigma approach," Lean Six Sigma Journal, 1(2), 119-133, 2010.
[3] M. George, Lean Six Sigma for Service: How to Use Lean Speed and Six Sigma Quality to Improve Services and Transactions, McGraw-Hill, 2003.
[4] S. Ray and B. John, "Lean Six-Sigma application in business process outsourced organization," Lean Six Sigma Journal, 2(4), 371-380, 2011.
[5] R. Hensley and K. Dobie, "Assessing readiness for six sigma in a service setting," Managerial Service Quality, 15, 82-101, 2005.
[6] F. Johannsen, S. Leist, and G. Zellner, "Six sigma as a business process management method in services: analysis of the key application problems," Inf Syst E-Bus Management, 9, 307-332, 2011.
[7] M. Harry and R. Schroeder, Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations, Doubleday, 2000.
[8] P. Keller, Six Sigma Demystified, New York: McGraw-Hill, 2005.
[9] X. Zu, L. D. Fredendall, and T. J. Douglas, "The evolving theory of quality management: The role of Six Sigma," Journal of Operations Management, 26, 630-650, 2008.
[10] F. Breyfogle, Implementing Six Sigma: Smarter Solutions Using Statistical Methods, Wiley, 1999.
[11] Automotive Industry Action Group, 2015 (http://www.aiag.org).
[12] Q. Brook, Lean Six Sigma and Minitab: The Complete Toolbox Guide for All Lean Six Sigma Practitioners, 3rd edition, OPEX Resources, 2010.
Ownership structure, internal control and real activity earnings management

Xiao Wang, Jie Gao, Chunling Shang, and Qiang Lu
Department of Management Science and Engineering
Harbin Institute of Technology, Shenzhen Graduate School
Shenzhen, China
Abstract—Reasonable assurance of the reliability of financial reporting is one of the main objectives of internal control. Thus, high-quality internal control should, in theory, help to improve the quality of corporate financial reports. In this paper, we examine the effect of internal control on earnings management, measured as real activity earnings management. We further test the moderating effect of ownership structure on the association between internal control quality and earnings management. The results indicate that higher-quality internal control can restrain real activity earnings management. In particular, this restraining effect is more obvious in non-state firms than in state-owned firms, and it cannot be found in companies where corporate control rights are separated from cash flow rights.

Keywords—internal control; earnings management; ownership structure

I. INSTITUTIONAL BACKGROUND

Internal control deficiencies have long been a main cause of accounting scandals and have captured more and more attention from both regulators and stakeholders. Drawing on the experience of the Sarbanes-Oxley Act, China issued internal control standards in 2008, which could be called C-SOX, a similar framework for corporate internal control systems. According to the US COSO (Committee of Sponsoring Organizations of the Treadway Commission) integrated framework, an internal control system is designed to provide reasonable assurance of achieving effectiveness and efficiency of operations, reliability of financial reporting, and compliance with laws and regulations. The quality of the internal control system is thus said to have a direct impact on financial report quality, and, in theory, internal control efficiency can improve earnings quality. Many foreign studies have indicated that internal control efficiency can restrain earnings manipulation, thereby improving earnings quality. Under China's institutional background, however, because of the existence of a special single-largest-shareholder ownership structure, the relationship between internal control efficiency and earnings manipulation cannot be predicted. Hence, China's unique institutional characteristics provide an opportunity for us to investigate the problem.

This article makes contributions as follows. In theory, taking the unique ownership structure in China as the point of entry provides a new angle for research into internal control on the basis of existing domestic research. In practice, the results of the empirical study offer guidance for further perfecting the internal control system and optimizing the equity structure of Chinese listed companies.

In this paper, we first examine the relation between internal control efficiency and real activity earnings manipulation. Then we test the moderating impact of ownership structure. The paper proceeds as follows. Section 2 describes the related research. Section 3 presents the motivation and develops the empirical hypotheses. Section 4 is the empirical study. Section 5 concludes.

II. LITERATURE REVIEW

Previous research abroad on internal control and earnings management has addressed the relationship between internal control quality and both accrual-based and real activity earnings management. Doyle (2007) found that firms with weak internal control over financial reporting generally have lower accruals quality, and that the relation between weak internal controls and lower accruals quality is driven by weakness disclosures that relate to overall company-level controls, which may be more difficult to "audit around"; there is no such effect for account-specific weaknesses [1]. Beneish (2008) obtained the consistent conclusion that companies with serious internal control deficiencies have stronger motivation for earnings management and carry out earnings manipulation to a greater extent [2]. Chan et al. (2008) drew the same conclusion and consider that Section 404 of the Sarbanes-Oxley Act has promoted the improvement of internal control and earnings quality [3]. Ashbaugh-Skaife (2007) found that firms whose auditors confirm remediation of previously reported internal control deficiencies exhibit an increase in
accrual quality relative to firms that do not remediate their control problems [4].

The related research in China started late, and domestic scholars have not reached consistent conclusions. With the promulgation of the Internal Control Standards in 2008, more and more scholars began to focus on empirical studies of internal control and have produced some empirical evidence. Guoqing Zhang (2008) indicated that improvement in internal control quality does not necessarily go with higher earnings quality and that there exist inner factors of the firm influencing the quality of internal control and earnings [5]. Zhongbo Yu and Gaoliang Tian came to a similar conclusion that there is no significant correlation between internal control quality and earnings management [6]. However, others disagree. Yugui Hao and Yongxin Sun point out that the higher the internal control quality, the less the profit manipulation, which helps to increase earnings quality [7]. Lirong Chen and Shuguang Zhou demonstrate a negative correlation between the degree of earnings management and the detail of internal control information disclosure, but a positive correlation between the degree of earnings management and internal control deficiencies [8]. Jianfang Ye et al. (2012) also showed that internal control deficiencies are significantly positively associated with the extent of both accrual-based and real activity-based earnings management [9]. Hongxing Fang and Yuna Jin conclude that higher internal control quality can significantly inhibit both accrual-based and real activity-based earnings management; they also find that companies disclosing an audit report on internal control have a lower degree of earnings management than companies without an internal control audit report [10]. Yan Tong and Feng Xu (2013) arrived at the same conclusion, and found that earnings quality in turn reacts on internal control [11].

Combing the existing literature, we find that foreign scholars typically use internal control deficiencies as the proxy for internal control and have come to the consistent conclusion that there is a negative correlation between internal control deficiencies and earnings management. Constrained by the information sources on internal control, scholars in China hold different views, and research covering real activity earnings management is still lacking. Therefore, we carry out this research to explore the relation between internal control and earnings management based on the moderating effect of Chinese ownership structure.

III. MOTIVATION AND HYPOTHESES

A. Internal Control and Real Activity Earnings Management

According to agency theory, owners and managers have different goals: maximizing the value of the company is the owners' objective, while the managers' goal is to maximize their self-interest. As a result, managers may deviate from maximizing firm value. Internal control, as an effective corporate governance mechanism, can supervise and restrict managers' self-interested behaviors, which reduces the room for profit manipulation and lowers the degree of earnings management. For both the Sarbanes-Oxley Act and the Internal Control Standards of China, providing reasonable assurance of financial report quality is one of the fundamental goals. High-quality internal control can decrease the occurrence of managers' opportunistic behaviors, whether in accounting information processing or in the production and operation of an enterprise, thereby increasing earnings quality. On the contrary, low-quality internal control may be almost nonexistent in effect and can hardly contain managers' self-interested behaviors; it then provides a chance for managers to manipulate earnings in order to maximize their own benefit, which can result in degradation of earnings quality. Based on the analyses above, we put forward the following hypothesis.

H1: Other things being equal, internal control can inhibit real activity earnings management; the higher the quality of internal control, the lower the degree of real activity earnings management.

B. State-owned Ownership Structure

Internal control can be implemented effectively only on the basis of appropriate corporate governance. The corporate governance structure always has an impact on the orientation of managers' values and behaviors, as well as on internal control quality. As the property-rights foundation of the governance structure, ownership structure is the critical factor influencing the agency conflict between controlling shareholders and minority shareholders, and it also determines the way the governance system operates. Therefore, a reasonable ownership structure helps to improve the corporate governance structure, which can fundamentally raise the quality of internal control. Unlike in the West, Chinese listed companies usually exhibit a special ownership structure dominated by a single state-owned shareholder, which can affect the construction of the internal control system. In state-owned companies, the government exercises regulatory authority but does not hold the residual claim, so the supervisory cost does not match the benefits, which can lead to inefficient supervision of the managers. In addition, minority shareholders are unwilling to monitor effectively because of the free-rider problem. As a result, state-owned enterprises always have higher agency costs and more serious agency problems. Hence, state-owned companies in China have a more serious insider-control problem, which makes it more likely that managers behave opportunistically; this works against the effective operation of internal control and cannot improve earnings quality. By contrast, in non-state firms, majority shareholders supervise the managers strictly to secure their own interests, thereby inhibiting opportunistic behavior. This leads to the next hypothesis.

H2: Other things being equal, compared with state-owned companies, internal control can inhibit real activity earnings management more effectively in non-state firms.

C. Separation of Cash Flow Right and Control Right

For major shareholder-controlled companies, the control rights held by the controlling holders of listed companies may exceed their cash flow rights, which leads to the deviation of

the two rights. Present empirical studies indicate that the characteristics of the corporate governance structure affect the controlling shareholders' tunnelling behavior. The higher the level of the separation of cash flow right and control right, the more the controlling shareholders are motivated to carry out tunnelling behavior, which expropriates the interests of small shareholders. A weak cash flow right drives the controlling shareholders to seize the private benefits of control rights. What is more, a larger control right makes it easier to establish profitable accounting policies that bring more private benefits. Chengbing Chu (2013) finds that the separation of cash flow right and control right has negative effects on internal control effectiveness [12]. Thus, the deviation of the two rights is harmful to internal control in such firms, as well as to earnings quality. We therefore propose the third hypothesis.

H3: Other things being equal, internal control cannot inhibit real activity earnings management effectively when there is a separation of cash flow right and control right.

IV. EMPIRICAL STUDY

A. Sample Collection and Variables

The data for this paper are mainly obtained from the CCER database in China. We choose the firms listed on the Shenzhen and Shanghai Stock Exchanges from 2010 to 2012 as the sample. Banks and financial institutions are eliminated, and so are ST and *ST firms. After removing the extreme values at the 5% level, we obtain 3926 observations.

The dependent variable in this study is real activity earnings management. Following Roychowdhury (2006) [13], we examine it from the following three aspects: the abnormal level of cash flow from operations, the abnormal level of production cost, and the abnormal level of discretionary expenditure.

CFO_{it}/A_{it-1} = \beta_0 + \beta_1(1/A_{it-1}) + \beta_2(S_{it}/A_{it-1}) + \beta_3(\Delta S_{it}/A_{it-1}) + \beta_4(\Delta S_{it-1}/A_{it-1}) + \beta_5(EC_{it}/A_{it-1}) + \beta_6(TC_{it}/A_{it-1}) + \beta_7(OC_{it}/A_{it-1}) + \varepsilon_{it}   (1)

PROD_{it}/A_{it-1} = \beta_0 + \beta_1(1/A_{it-1}) + \beta_2(S_{it}/A_{it-1}) + \beta_3(\Delta S_{it}/A_{it-1}) + \beta_4(\Delta S_{it-1}/A_{it-1}) + \varepsilon_{it}   (2)

DISX_{it}/A_{it-1} = \beta_0 + \beta_1(1/A_{it-1}) + \beta_2(S_{it-1}/A_{it-1}) + \varepsilon_{it}   (3)

In equation (1), CFO_{it} is the net operating cash flow reported in the statement of cash flows in year t; S_{it} is the sales in year t; \Delta S_{it} is the change in sales from year t-1 to t; \Delta S_{it-1} is the change in sales from year t-2 to t-1; EC_{it} is the cash paid to employees in year t; TC_{it} is the tax payments in year t; and OC_{it} is the cash received relating to other operating activities in year t. In equation (2), PROD_{it} is the sum of the cost of goods sold in year t and the change in inventory from year t-1 to t. In equation (3), DISX_{it} denotes the discretionary expenditures, the sum of operating costs and administration costs. We first estimate the normal levels of cash flow from operations, production cost, and discretionary expenditure with the three equations above; the abnormal levels are then measured as the estimated residuals from the three equations. Aggregating the three real activity manipulation measures into one proxy, we obtain the proxy of real activity earnings management in equation (4):

RM_{it} = ACFO_{it} + APROD_{it} + ADISX_{it}   (4)
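As a concrete illustration of this estimation procedure, the following minimal sketch (not the authors' code) derives the abnormal levels as OLS residuals of equations (1)-(3) and sums them into RM as in equation (4). The DataFrame layout and its column names (cfo, prod, disx, inv_a, s, s_lag, ds, ds_lag, ec, tc, oc) are hypothetical; every variable is assumed to be pre-scaled by lagged total assets and free of missing values.

```python
# A minimal sketch of equations (1)-(4); column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def real_em_proxy(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Equation (1): normal level of cash flow from operations.
    m_cfo = smf.ols("cfo ~ inv_a + s + ds + ds_lag + ec + tc + oc", data=df).fit()
    # Equation (2): normal level of production cost.
    m_prod = smf.ols("prod ~ inv_a + s + ds + ds_lag", data=df).fit()
    # Equation (3): normal level of discretionary expenditure (lagged sales).
    m_disx = smf.ols("disx ~ inv_a + s_lag", data=df).fit()
    # Abnormal levels = estimated residuals from the three regressions.
    df["acfo"], df["aprod"], df["adisx"] = m_cfo.resid, m_prod.resid, m_disx.resid
    # Equation (4): aggregate real activity earnings management proxy.
    df["rm"] = df["acfo"] + df["aprod"] + df["adisx"]
    return df
```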
Based on previous studies, we use ICD (internal control deficiencies) as the proxy for the independent variable, internal control quality. From the perspective of the ultimate controller, we choose ownership property (State: state-owned vs. non-state-owned) and the separation of cash flow right and control right (SOC) as the moderating variables measuring the ownership structure. Drawing on the prior literature, we select several control variables. LNA, the log value of total assets, is used to control for relative firm size. ROA, computed as net income divided by total assets in year t, is used to control for firm performance. We choose GROW (assets growth rate) as the proxy for firm growth, measured as the increase in assets from year t-1 to year t divided by total assets in year t-1. We use BS to control for director board size, and LEV represents the financial leverage.

B. Models

In order to examine the relation between internal control deficiencies and real activity earnings management, as well as the moderating effects of ownership structure, we establish three regression models as follows. We use Model (1) to test Hypothesis 1; Model (2) and Model (3) are used to examine Hypothesis 2 and Hypothesis 3, respectively.

RM = \beta_0 + \beta_1 ICD + \beta_2 LNA + \beta_3 LEV + \beta_4 GROW + \beta_5 BS + \beta_6 Year dummies + \beta_7 Industry dummies + \varepsilon   (Model 1)

RM = \beta_0 + \beta_1 ICD + \beta_2 LNA + \beta_3 LEV + \beta_4 GROW + \beta_5 BS + \beta_6 State + \beta_7 ICD*State + \beta_8 Year dummies + \beta_9 Industry dummies + \varepsilon   (Model 2)

RM = \beta_0 + \beta_1 ICD + \beta_2 LNA + \beta_3 LEV + \beta_4 GROW + \beta_5 BS + \beta_6 SOC + \beta_7 ICD*SOC + \beta_8 Year dummies + \beta_9 Industry dummies + \varepsilon   (Model 3)
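A hedged sketch of how Models (1)-(3) could be fitted with year and industry dummies and the moderating interaction is given below. The column names (rm, icd, lna, lev, grow, bs, state, soc, year, ind) are hypothetical, and the estimator setup is an illustrative assumption since the paper does not specify its software.

```python
# A minimal sketch of Models (1)-(3) with fixed effects and interaction.
import statsmodels.formula.api as smf

def fit_model(df, moderator=None):
    rhs = "icd + lna + lev + grow + bs"
    if moderator:                      # Model (2): "state"; Model (3): "soc"
        rhs += f" + {moderator} + icd:{moderator}"
    formula = f"rm ~ {rhs} + C(year) + C(ind)"   # year/industry dummies
    return smf.ols(formula, data=df).fit()

# e.g. fit_model(panel, "state").summary() reports the ICD*State coefficient
# whose sign and significance bear on Hypothesis 2.
```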
C. Empirical Results

Table 1 presents the regression results on the relation between internal control and real activity earnings management. The coefficient of ICD is negative and significant at the 0.01 level, indicating that firms with more internal control deficiencies have a lower level of real activity earnings management. Conversely, higher internal control quality can restrain real activity earnings management, consistent with H1.

TABLE 1  INTERNAL CONTROL AND RM

            Pred. Sign   Coefficient (t-value)
Intercept                -4.002***  (-5.78)
ICD             -        -0.229***  (-11.76)
LNA             +         0.161***  (4.94)
GROW            +        -0.004     (2.71)
LEV                       0.404**   (2.14)
ROA                      -0.004     (-0.71)
BS                        0.134     (0.78)
Year                     controlled
Industry                 controlled
Obs.                     3926
Adj. R2                  0.014
F-value                  10.27
*** (**, *) indicates two-tailed significance at the 1% (5%, 10%) level.
Taking State as the moderating variable, we obtain the regression results in Table 2, Panel A. It is noteworthy that the coefficient of ICD*State is positive and significant at the 0.01 level, which indicates that in state-owned firms the restraining effect of internal control quality on real earnings management is weaker. We can learn that the state-owned ownership structure may weaken the effect of internal control on earnings quality, consistent with H2.

In order to test H3, we use SOC as the other moderating variable. The regression results are reported in Table 2, Panel B. We find that the coefficient of SOC is positive and significant at the 0.01 level, showing that the higher the level of the separation of cash flow right and control right, the higher the degree of real activity earnings management; that is, the separation of the two rights aggravates real activity earnings management. But the coefficient of the cross term is not significant, which cannot verify H3.
TABLE 2  RESULTS OF MODERATING FUNCTION

                 Panel A                             Panel B
            Pred. Sign  Coefficient             Pred. Sign  Coefficient
Intercept              -3.134***  (-14.13)                 -2.903***  (-17.70)
ICD             -      -0.321***  (-11.32)         -       -0.251***  (-8.88)
State           -      -0.099***  (-4.60)          .       .
SOC             .      .                           +        0.305***  (2.78)
ICD*State              0.161***   (3.80)           .       .
ICD*SOC         .      .                           -       -0.318     (-1.41)
LNA             +       0.123***  (12.13)          +        0.123***  (4.62)
GROW            +       0.014***  (2.67)           +        0.014***  (-1.24)
LEV                     0.278***  (5.28)                    0.078     (1.04)
ROA                     0.001     (0.12)                    0.008     (1.03)
BS                      0.174**   (3.12)                    0.181**   (3.25)
Year                   controlled                          controlled
Industry               controlled                          controlled
Obs.                   3926                                3926
Adj. R2                0.154                               0.150
F-value                90.60                               116.14
*** (**, *) indicates two-tailed significance at the 1% (5%, 10%) level.

V. CONCLUSION

Internal control problems have drawn more and more attention because of frequent accounting scandals both in China and abroad. This article seeks to understand the construction and operation of the internal control system in China by examining the relation between internal control and real activity earnings management. Combining this with the unique ownership structure of Chinese listed companies, we test the moderating impact of ownership structure on the relation between internal control and real activity earnings management, in order to provide new ideas for optimizing equity structure. Through the empirical study, we conclude that higher internal control quality can restrain real activity earnings management. In particular, this restraining effect is more obvious in non-state-owned firms than in state-owned firms. Moreover, the restraining effect cannot be found in companies in which the corporate controlling right is separated from the right to cash flow.

REFERENCES

[1] Doyle J, Ge W, McVay S. Accruals quality and internal control over financial reporting[J]. The Accounting Review, 2007, 82(5): 1141-1170.
[2] Beneish M D, Billings M B, Hodder L D. Internal control weaknesses and information uncertainty[J]. The Accounting Review, 2008, 83(3): 665-703.
[3] Chan K C, Farrell B, Lee P. Earnings management of firms reporting material internal control weaknesses under Section 404 of the Sarbanes-Oxley Act[J]. Auditing: A Journal of Practice & Theory, 2008, 27(2): 161-179.
[4] Ashbaugh-Skaife H, Collins D W, Kinney Jr W R. The discovery and reporting of internal control deficiencies prior to SOX-mandated audits[J]. Journal of Accounting and Economics, 2007, 44(1): 166-192.
[5] Guoqing Zhang. Internal control and earnings quality[J]. Economic Management, 2008, (23): 112-119.
[6] Zhongbo Yu, Gaoliang Tian. Do the internal control evaluation reports really work?[J]. Journal of Shanxi University of Finance and Economics, 2009, (10): 110-118.
[7] Yugui Hao, Yongxin Sun. Study of the relation between internal control and earnings quality[J]. The Chinese Accounting Association branch of high engineering colleges and universities academic essays in 2010, 2010: 572-582.
[8] Lirong Chen, Shuguang Zhou. Research of the relation between internal control efficiency and earnings management[J]. Communication of Finance and Accounting, 2010, (30): 78-83.
[9] Jianfang Ye, Danmeng Li, Binying Zhang. The effect of internal control deficiencies and their remediation on earnings management[J]. Auditing Research, 2012, (6): 50-59.
[10] Hongxing Fang, Yuna Jin. Corporate governance, internal control and inefficient investment: theoretical analysis and empirical evidence[J]. Accounting Research, 2013, (7): 63-70.
[11] Yan Tong, Feng Xu. Dynamic dependence of internal control efficiency and earnings quality: research of listed companies in China[J]. China Soft Science, 2013, (2): 111-122.
[12] Chengbing Chu. The effect of pyramid shareholding structure on internal control efficiency[J]. Journal of Central University of Finance & Economics, 2013, (002): 78-83.
[13] Roychowdhury S. Earnings management through real activities manipulation[J]. Journal of Accounting & Economics, 2006, 42(3): 335-370.
Sustained Quality Award Status in Developing Country: A Study on the Dubai Quality Award Recipients

M. Doulatabadi, Razak School of Engineering and Advanced Technology, Universiti Teknologi Malaysia (UTM), Kuala Lumpur, Malaysia, dmehran2@live.utm.my
K.Y. Wong, Department of Manufacturing and Industrial Engineering, Universiti Teknologi Malaysia (UTM), Johor Bahru, Malaysia, wongky@mail.fkm.utm.my
Abstract – Quality has been regarded as one of the most important factors for achieving competitive advantage with an emphasis on excellence. Achieving excellence through a quality award is a challenging task, but sustaining this achievement is even more challenging. Despite the importance and benefits to organizations of continuously practicing with a quality award, very limited empirical research has been carried out, especially on organizations in developing countries. This study attempts to reduce this knowledge gap by focusing exclusively on organizations that have received a Dubai Quality Award in the United Arab Emirates. The main purpose of this research is to identify those critical factors perceived as crucial for sustaining quality award status. To achieve this purpose, a structured questionnaire survey was carried out to elicit the opinions of quality managers from 138 quality award organizations about the importance of the critical factors and their current practices. The survey results indicated eight factors as the most critical for sustaining quality award status. The results of this paper can be used by quality managers to prioritize the implementation of the proposed critical factors for long-term sustainability towards higher quality levels.

Keywords – Critical factors, Dubai Quality Award, sustaining, survey, United Arab Emirates.

INTRODUCTION

In the increasingly competitive business environment around the world, quality has been regarded as one of the most important factors for achieving competitive advantage with an emphasis on 'excellence'. Organizations have now come to understand that, in order to keep pace with global competitiveness, they need to excel in every aspect of their business operations [1,2]. In response to these challenges, a variety of quality improvement approaches have been proposed as the prime driver of enhanced business performance towards achieving excellence [3]. These approaches have been widely recognized as the most important competitive priority for many organizations on the path toward organizational success [4].

Among the quality improvement approaches which have been proposed, Business Excellence (BX), as a modern operation management practice based on the concept of Total Quality Management (TQM), has gained the widespread attention of organizations [5,6]. It has been developed, as the result of intense world-wide competition, around quality award models/frameworks for the improvement of overall business performance [7]. The European Foundation for Quality Management (EFQM) Excellence Model and the Baldrige Criteria for Performance Excellence (BCPE) are two examples of globally accepted major quality excellence award models. The first and immediate aim of these models is the continuous improvement of performance towards achieving excellence [4,6]. Self-assessment and benchmarking are the main elements of these models [8,9].

The United Arab Emirates, as the first developing country in the Middle East and North Africa (MENA) region, has enhanced competitiveness in the region by creating several significant quality and excellence award programs and schemes [10].

Given the importance and benefits to organizations of continuously practicing with quality award programs, there is a clear lack of research concerning this issue, especially among organizations in developing countries. From the limited literature, it can also be seen that there is little empirical research related to sustaining quality practices through a quality award that examines both manufacturing and service organizations. Therefore, this leaves much room and opportunity for further empirical studies in this context.

This empirical study therefore investigates those factors that are critical for sustaining quality award status, focusing exclusively on the Dubai Quality Award as the premier and most prestigious quality award program recognized by all public and private organizations within the United Arab Emirates.

This paper is organized into five sections. In the first section, a description of the literature on the critical factors previously reported with respect to sustaining quality practices is presented. Then, the methodology of the study, followed by a discussion of the survey analysis and the respective results, is provided. Finally, the paper ends with conclusions and suggestions for future research.

LITERATURE REVIEW

One of the most recommended and common ways of achieving excellence is to participate in one of the quality award programs that exist around the world [6,11,12]. On the whole, participation in a quality award program is considered an effective pillar that can be undertaken by any organization striving for excellence status. In other words, receiving a quality award is an accreditation level
that organizations need to achieve before they move on to become a world-class company [3,13].

In the UAE, following the successful implementation of the EFQM Excellence Model in European organizations, the Dubai Quality Award (DQA) program was established to develop and promote quality practice toward organizational excellence amongst UAE-based organizations. It is a fully government-sponsored program which is entirely based on the EFQM Excellence Model. It is given each year to successful companies for their effort and commitment to continuous quality improvement, in three categories: the Dubai Quality Appreciation Prize (DQAP), the Dubai Quality Award (DQA), and the Dubai Quality Award - Gold (DQAG). Since its creation, the DQA has also had a significant impact on the business community at national and regional levels by promoting the need for quality improvement and building a culture of excellence among UAE companies [14].

With the increasing levels of international competition and the demand of major customers for quality, becoming a 'world-class' company has become a main vision of many organizations in every line of business activity [6]. For that reason, over the past few years, a large number of organisations around the globe have adopted quality award programs, primarily as a means of self-assessing their progress on the key dimensions of the model and of introducing organisation-wide quality [15]. There has been increasing interest in organizational self-assessment using quality award models among organizations around the world, and the same holds for UAE-based organizations.

It has been strongly suggested that quality management practices will only produce significant benefits if they are implemented and sustained on a long-term basis [4]. However, it has been reported that organizations face challenges not only in implementing quality excellence practices but also in sustaining them over the long term. Organizations using a business excellence framework, and those who have reached award status, have now come to realize that achieving excellence is a challenging task, and sustaining the position of being excellent is an even harder and more challenging one. While many large organizations have succeeded in sustaining this achievement through participating in the quality award process, the results of recent studies show that there are as many failures as successes in sustaining this practice after receiving the award [4, 6, 16, 17, 18].

A comprehensive review of the literature confirms that the success and failure of practices with quality management approaches are generally linked with certain critical factors [10]. By definition, critical success factors refer to "the limited number of areas in which satisfactory results will ensure successful competitive performance for the individual, department, or organization" [19]. Alternatively, the critical success factors (CSFs) have been widely used to identify the few main factors that a manager or an organization should focus on to be successful. There are some studies that have examined the critical success factors for organizational excellence [5,20,21]. However, because of the inconsistent results of these studies, due to methodological and sampling limitations (i.e., different regions/industries/company sizes), the confirmation of their outcomes remains unclear. This argument is supported by [22], who notes that the importance of the critical factors identified by previous research may differ from one region to another; more empirical research is therefore required in this context. In a similar view, [23] stressed that critical factors do not change regularly, but they are subject to change during different stages, and they therefore need to be reviewed and modified for different times and situations.

A review of the current literature indicates that previous research on the critical success factors of quality management has mostly examined those factors that are crucial for the implementation process [3,5,24,25,26,27]. However, only a few empirical studies have attempted to examine the critical success factors that contribute to sustaining quality excellence practices, such as the research by [16,18,31,32]. It is suggested by [6], consistent with [4], that more research should be undertaken to explore these apparent contradictions in more detail. Apart from that, it is also apparent that most previous studies discussed the 'hard' elements of the critical success factors of quality practices, while very few studies were found to exclusively address the 'soft' critical success factors [33, 34].

Despite the broad published literature on the implementation of quality practices, very few studies have been carried out regarding sustaining quality and its related issues. This is especially lacking with respect to Asian organizations, particularly within the UAE context. In the UAE context, some studies, while identifying critical success factors for quality management practices, investigated the relationship between these elements and organizational performance [24, 28]. One survey study aimed to validate the measurement of the critical success factors of quality management practices by focusing on the UAE service and manufacturing industry; it showed that these factors could be reduced to eight, similar to those presented by [24]. Another empirical study used a survey questionnaire to identify the critical 'soft' factors for a successful TQM implementation process, highlighting 16 critical factors for the successful implementation of TQM practices in the banking sector. A further study attempted to investigate the most critical success factors for achieving organizational excellence in engineering companies in the UAE and Saudi Arabia; it suggested 15 traditional critical success factors without focusing on the 'soft' factors of quality practices.

METHODOLOGY

Survey Population and Sample

The target population for the questionnaire survey included 168 local and foreign companies and institutions
that have received an award between the years 1995 and 2010. However, due to significant cultural differences at the organizational level between local and international companies in the context of quality practices, only 138 UAE-based organizations were included as the target sample. This sample contained organizations from all three award categories, i.e., the DQAP certificate, the DQA award, and the DQA-GOLD Prize. These organizations physically operate in four main emirates of the UAE, namely Abu-Dhabi, Dubai, Sharjah, and Al-Ain.

The selected sample consisted of organizations of different sizes, including 'micro' (fewer than 20 employees), 'small' (21 to 100 employees), 'medium' (101 to 250 employees), and 'large' (more than 250 employees), based on the Dubai Chamber of Commerce and Industry classification. It is notable that this sample represented 82 per cent of the whole population of the study, from various business activities including manufacturing, service, construction, healthcare, professional, trade, education, finance, and tourism.

For the purpose of this study, the senior officer or management representative in charge of quality was selected as the primary information source for answering the survey questions. These included quality directors, quality managers, business excellence managers, quality assurance/quality control managers, or operations managers. However, only one person within each company was invited to participate in the survey, based on availability and interest. The reason for choosing these individuals was mainly their first-hand knowledge and years of experience in driving and planning quality activities in their respective companies.

Table I presents a summary of the selected population, sample, unit of analysis and respondents in more detail.

TABLE I
TARGET POPULATION AND SAMPLE OF SURVEY

Description         Participants/respondents
Target population   All 168 DQA recipients across the UAE, including public and private, local and foreign companies and institutions.
Target sample       Only the 138 local companies from the three award categories, i.e., DQAP certificate, DQA, and DQA-GOLD awards.
Unit of analysis    UAE-based organizations from manufacturing, service, trade, finance, tourism, construction, professional, healthcare and education.
Target respondents  Senior quality coordinator; quality manager; business excellence manager; quality assurance/quality control manager; operations manager; management representative.

Survey Questionnaire Design

The survey questionnaire administered for the final survey was developed into an A4-size questionnaire of five (5) pages comprising four sections with a total of 72 questions. All questions were arranged sequentially, with a clear title reflecting their purpose, to make them easy for the respondents to answer. Three common types of survey question, including multiple-choice questions, ranking scales, and yes/no questions, were used in developing the questionnaire. Respondents were requested to tick their response in the appropriate box and select the degree of their agreement, or to write in the space provided. To guide the respondents on how to answer the questions, a short introduction with clear instructions was provided for each section.

The respondents were requested to rate the level of importance of the critical factors on a five-point interval scale ranging from 1 (not important) to 5 (extremely important). The first page highlighted the objectives of the study.

Survey Data Analysis

In order to analyze the data collected from the survey, the Statistical Package for the Social Sciences (SPSS) software, version 21.0 (the latest version at the time), was applied to analyze the general characteristics of the respondents (demographic variables). The tests used were descriptive and inferential statistical analyses in the form of frequencies, percentages, cumulative frequencies, and cumulative percentages. Some initial statistical techniques such as reliability tests, means, variances and standard deviations were also used in interpreting the results.

Survey Reliability and Validity Test

In this study, to determine whether or not the survey questionnaire measures the critical factors in a useful way, Cronbach's coefficient alpha was adopted as the most common and best index for calculating internal consistency. Table II shows the final results of the internal consistency analysis for the eight critical factors. As can be seen, the alpha value for each factor ranges between 0.748 and 0.947, which is considered acceptable. From the results obtained, it can be concluded that the proposed critical factors are sufficiently reliable and the questionnaire is a consistent instrument for further analyses, since all the factors have reliability coefficients with alpha values above 0.70 (α > 0.70).

TABLE II
INTERNAL CONSISTENCY TEST RESULT

#    Main Factors                    Cronbach's α(a)  Mean
C01  Top management commitment       0.870            4.4097
C02  Strategy and quality planning   0.869            4.2810
C03  Empowerment and involvement     0.784            4.0879
C04  Education and training          0.838            4.0754
C05  Teamwork and cooperation        0.748            4.1538
C06  Recognition and reward          0.947            4.3407
C07  Communication and relationship  0.839            4.3375
C08  Work culture and climate        0.775            4.3344
(a) Cronbach's coefficient alpha (α).
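For reference, Cronbach's coefficient alpha for one factor can be computed directly from the item ratings. The sketch below is a minimal illustration under the assumption of a complete respondents-by-items rating matrix; it is not the SPSS routine used in the study.

```python
# A minimal sketch of Cronbach's alpha for one factor's items;
# 'items' is a respondents-by-items array of 1-5 ratings (assumed complete).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the factor
    item_var = items.var(axis=0, ddof=1).sum() # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondent totals
    return k / (k - 1) * (1.0 - item_var / total_var)

# A factor is retained here when alpha exceeds the usual 0.70 threshold.
```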
For the validity of the survey questionnaire, a literature review was first carried out to elicit the critical factors and related sub-factors of the survey questionnaire. The main aim was to ensure that the empirical findings accurately reflect the proposed critical factors and provide confidence. Following this review, the proposed critical factors and related sub-factors, as well as the content of the questionnaire, were evaluated by quality experts, including academics and practitioners. Through this evaluation, the necessary corrections and adjustments were made after the questionnaire had been reviewed by the panel of experts.

To ensure that the survey instrument served its purpose and could be understood by potential respondents, it was piloted with quality directors/managers before the main data collection. The aim of the pilot study was to evaluate the effectiveness, content, clarity and style of the questions. Each participant was requested to verify and assess the survey questionnaire in terms of its content, wording, relevance, and clarity. Through the pilot study, some corrections were made to the question structures to improve and enhance the survey instrument. Lastly, in order to ensure that the survey questions were meaningful to potential respondents, the final questionnaire was reviewed by a native English-speaking expert with a sufficient background in and knowledge of the topic of quality before the actual data collection.

SURVEY RESULTS AND DISCUSSION

Having analyzed the proposed critical factors, an attempt was made to examine to what extent there are significant differences in the respondents' perception of the degree of importance of the critical factors.

The data analysis involved a comparison between two independent categories of quality award recipients (i.e., DQAP and DQA) to observe the differences in perception, using an independent-samples t-test.

To help achieve this purpose, the following research hypotheses were formulated for testing the significance of the difference in perceived level of importance:

H0: μ1 − μ2 = 0, i.e., there is no significant difference between the perceived importance of DQAP and DQA companies.

H1: μ1 − μ2 ≠ 0, i.e., there is a significant difference between the perceived importance of DQAP and DQA companies.

In order to test the above hypotheses, an independent-samples t-test was carried out using SPSS. This test is the most suitable for comparing the mean values of two different groups of respondents.

Based on the results, since the p-value of each factor is more than 0.05 (the significance level), the null hypothesis was not rejected. In this case, it can be concluded that the result is not significant, meaning that the DQAP and DQA companies have almost the same perceptions of the importance of the proposed critical factors. The results of this test are summarized in Table III.

TABLE III
THE MEAN SCORES FOR CRITICAL FACTORS OF DQAP VS. DQA

#    Main Factors                    Mean DQAP  Mean DQA  p-value  Results
C06  Recognition and reward          4.389      4.250     0.276    Not Sig.
C01  Top management commitment       4.377      4.468     0.439    Not Sig.
C08  Work culture and climate        4.375      4.2589    0.315    Not Sig.
C07  Communication and relationship  4.370      4.276     0.434    Not Sig.
C02  Strategy and planning           4.278      4.285     0.946    Not Sig.
C05  Teamwork and cooperation        4.118      4.218     0.454    Not Sig.
C03  Empowerment and involvement     4.082      4.098     0.893    Not Sig.
C04  Education and training          4.058      4.107     0.732    Not Sig.
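The test itself reduces to a per-factor two-sample comparison. The following is a minimal sketch using SciPy rather than SPSS; the input arrays are hypothetical per-respondent importance ratings for one critical factor in each award group.

```python
# A sketch of the DQAP-vs-DQA comparison for one critical factor.
from scipy import stats

def compare_groups(scores_dqap, scores_dqa, alpha=0.05):
    # Independent two-sample t-test; two-tailed p-value by default.
    t, p = stats.ttest_ind(scores_dqap, scores_dqa)
    # p > alpha: fail to reject H0 (no perceived-importance difference).
    return t, p, ("Not Sig." if p > alpha else "Sig.")
```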
CONCLUSIONS

This paper addressed the issue of quality practices by focusing exclusively on organizations based in the United Arab Emirates with Dubai Quality Award status. The paper has provided a detailed discussion of those critical factors perceived as crucial for sustaining quality practices in the context of the UAE. The need for such a study arises because there is a paucity of empirical research relating to quality excellence practices through a quality award in the published literature, and more specifically in the context of the UAE.

The prime purpose was to recognize and empirically assess the 'soft' critical factors that contribute to the success of sustaining quality excellence practices. Based on the survey findings, Leadership and Management Commitment was identified as the most crucial driving factor, followed by Recognition and Reward as the second most highly ranked critical factor for sustaining quality excellence practices. Communication and Relationship, Work Culture and Climate, and Strategy and Quality Planning are other critical factors of nearly equal importance. The findings and conclusions of this study offer a significant contribution to the enhancement of the existing body of knowledge related to quality excellence practices in general, and in the context of the UAE in particular. In terms of originality, this study is considered one of the pioneering empirical studies examining UAE-based organizations with DQA status.

ACKNOWLEDGMENT

The authors would like to acknowledge all the invited expert panelists who reviewed the survey questions, as well as the Dubai Quality Award (DQA) companies that participated in this research project. The authors would also like to extend special thanks to Ms. Azizah Yusof from UTM-Lead for her valuable help.
REFERENCES

[1] Dahlgaard-Park, S.M., Chen, C.K., Jang, J.Y. & Dahlgaard, J.J. (2013). Diagnosing and prognosticating the quality movement - A review on the 25 years quality literature (1987-2011). Total Quality Management & Business Excellence, 24(1-2), 1-18.
[2] Harrington, H.J. (2005). The five pillars of organizational excellence. Handbook of Business Strategy, 6(1), 107-114.
[3] Yusof, S.M., & Aspinwall, E.M. (2001). Case studies on the implementation of TQM in the UK automotive SMEs. International Journal of Quality & Reliability Management, 18(7), 722-743.
[4] Brown, A.R. (2014). Organizational paradigms and sustainability in excellence. International Journal of Quality & Service Sciences, 6(2/3), 181-190.
[5] Zairi, M., & Alsughayir, A.A. (2011). The adoption of excellence models through cultural and social adaptations: An empirical study of critical success factors and a proposed model. Total Quality Management & Business Excellence, 22(6), 641-654.
[6] Mann, R.S., Adebanjo, D., & Tickle, M. (2011a). Deployment of business excellence in Asia: An exploratory study. International Journal of Quality & Reliability Management, 28(6), 604-627.
[7] Dahlgaard-Park, S.M., & Dahlgaard, J.J. (2010). Organizational learnability and innovability: A system for assessing, diagnosing and improving innovations. International Journal of Quality and Service Sciences, 2(2), 153-174.
[8] Araújo, M., & Sampaio, P. (2014). The path to excellence of the Portuguese organisations recognised by the EFQM model. Total Quality Management & Business Excellence, 25(5), 427-438.
[9] Dahlgaard, J.J., Dahlgaard-Park, S.M., & Chen, C.K. (2015). Quality excellence in Taiwan: theories and practices. Total Quality Management and Business Excellence, 26(1), 1-2.
[10] Doulatabadi, M. & Yusof, S.M. (2014). Driving 'soft' factors for sustaining quality excellence: Perceptions from quality managers. Industrial Engineering and Engineering Management (IEEM), 2014 IEEE International Conference, Kuala Lumpur, Malaysia, 808-812. DOI: 10.1109/IEEM.2014.7058750.
[11] Adebanjo, D. (2001). TQM and business excellence: is there really a conflict? Measuring Business Excellence, 5(3), 37-40.
[12] Laszlo, G.P. (1996). Quality awards - recognition or model? The TQM Magazine, 8(5), 14-18.
[13] Lu, D. (2011). In Pursuit of World Class Excellence. Retrieved from http://bookboon.com/en/business/in-pursuitof-world-class-excellence.
[14] DQA booklet (2014). A Guideline for Dubai Quality Awards Applicants. Retrieved from http://www.dqg.com/dqa/dqa.criteria booklet.
[15] Brown, A.R. (2013c). Managing challenges in sustaining business excellence. International Journal of Quality and Reliability Management, 30(4), 461-475.
[16] Grigg, N.P., & Mann, R.S. (2008c). Rewarding excellence: An international study into business excellence award processes. Quality Management Journal, 15(3), 26-40.
[17] Prajogo, D.I., & Sohal, A.S. (2004a). The sustainability and evolution of quality improvement programmes - an Australian case study. Total Quality Management and Business Excellence, 15(2), 205-220.
[18] Meers, A., & Samson, D. (2003). Business excellence initiatives: dependencies along the implementation path. Measuring Business Excellence, 7(2), 66-77.
[19] Rockart, J. and Bullen, C. (1981). A primer on critical success factors. Center for Information Systems Research Working Paper No. 69. Sloan School of Management, MIT, Cambridge, Massachusetts.
[20] Nasseef, M.A. (2009). A Study of the Critical Success Factors for Sustainable TQM: A Proposed Assessment Model for Maturity and Excellence. Unpublished Doctor of Philosophy Thesis, University of Bradford: Bradford, United Kingdom.
[21] Zairi, M., & Whymark, J. (2003). Best Practice Organizational Excellence. UAE: e-TQM College Publishing House.
[22] Zairi, M. (2002a). Beyond TQM implementation: the new paradigm of TQM sustainability. Total Quality Management and Business Excellence, 13(8), 1161-1172.
[23] Nunnally, J.C. (1978). Psychometric Theory. New York: McGraw-Hill.
[24] Saraph, J.V., Benson, G.P. and Schroeder, R.G. (1989). An instrument for measuring factors of quality management. Decision Sciences, 20(4), 810-829.
[25] Black, S.A., & Porter, L.J. (1996). Identification of the critical factors of TQM. Decision Sciences, 27(1), 1-21.
[26] Sohal, A.S., & Terziovski, M. (2000). TQM in Australian manufacturing: Factors critical to success. International Journal of Quality & Reliability Management, 17(2), 158-167.
[27] Dale, B.G., Zairi, M., Van der Wiele, A. & Williams, A.R.T. (2000). Quality is dead in Europe - Long live excellence: True or false? Measuring Business Excellence, 4(3), 4-10.
[28] Badri, M.A., Davis, D. and Donna, D. (1995). A study of measuring the critical factors of quality management. International Journal of Quality and Reliability Management, 12(2), 36-53.
Software Reliability Model Selection Based on Deep Learning

Yoshinobu Tamura, Graduate School of Science and Engineering, Yamaguchi University, Tokiwadai 2-16-1, Ube-shi, Yamaguchi, 755-8611, Japan. Email: tamura@yamaguchi-u.ac.jp
Mitsuho Matsumoto, Shigeru Yamada, Graduate School of Engineering, Tottori University, Minami 4-101, Koyama, Tottori-shi, 680-8552, Japan. Email: M15T7019Y@edu.tottori-u.ac.jp, yamada@sse.tottori-u.ac.jp
Abstract—In the past, many software reliability models have been proposed by several researchers. Several model selection criteria, such as Akaike's information criterion, mean square errors, predicted relative error and so on, are used for the selection of optimal software reliability models. These assessment criteria can be useful for software managers to assess the past trend of the fault data. However, it is very important to assess the prediction accuracy of a model beyond the end of the observed fault data in the actual software project. In this paper, we propose a method of optimal software reliability model selection based on deep learning. Also, we show several numerical examples of software reliability assessment in actual software projects. Moreover, we compare methods of estimating the cumulative numbers of detected faults based on deep learning by using the fault data sets of actual software projects.

Index Terms—deep learning; software reliability model; optimal model selection

I. INTRODUCTION

In actual software development processes, the comprehensive use of the technologies and methodologies of software engineering is needed for improving software quality/reliability. In particular, the waterfall model is well known as the sequential phases of the software development process. We show the software engineering activities in the respective software life-cycle phases as follows:

∘ Software specification: Software specifications serve as the contract between the customer and the manufacturer of a software system.
∘ Software design: After the software requirements are analyzed and specified, preliminary and detailed designs have to be made.
∘ Software implementation: Implementation is the process of transforming design documents into an executable form.
∘ Software testing: By executing test-cases, testing is performed to detect and remove software faults latent in the software system and to ascertain whether the software system satisfies its requirements.
∘ Software maintenance: Software maintenance is the modification and revision of a software system after its first delivery.

In the past, many software reliability models [1]–[3] have been applied to assess reliability for the quality management and testing-progress control of software development. However, it is difficult for software managers to select the optimal software reliability model for an actual software development project. As an example, the software managers can assess the software reliability for the past data sets by using the model evaluation criteria. On the other hand, the estimation results based on the past fault data cannot be guaranteed for the future data sets of actual software projects.

A selection method for the optimal software reliability model based on deep learning is proposed in this paper. Also, several numerical examples of software reliability assessment using the fault data of actual software projects are shown. Moreover, we compare the method of estimating the cumulative numbers of detected faults based on deep learning with that based on the neural network.

A. Optimal Software Reliability Model Based on Neural Network

The structure of the neural networks in this paper is shown in Fig. 1. Let w^1_{ij} (i = 1, 2, ..., I; j = 1, 2, ..., J) be the connection weights from the i-th unit on the sensory layer to the j-th unit on the association layer, and w^2_{jk} (j = 1, 2, ..., J; k = 1, 2, ..., K) denote the connection weights from the j-th unit on the association layer to the k-th unit on the response layer. Moreover, x_i (i = 1, 2, ..., I) represent the normalized input values of the i-th unit on the sensory layer, and y_k (k = 1, 2, ..., K) are the output values. We apply the actual number of detected
[Fig. 1 about here: a sensory layer (units x_1, ..., x_I), an association layer, and a response layer (outputs y_1, ..., y_K), connected by weights w^1_{ij} and w^2_{jk}.]
Fig. 1. The structure of our neural network based on back-propagation.

faults per unit time N_i (i = 1, 2, ..., I) to the input values x_i (i = 1, 2, ..., I). Considering the amount of characteristics of the software reliability models, we apply the following amount of information as parameters λ_i (i = 1, 2, ..., I) to the input data x_i (i = 1, 2, ..., I):

∙ The estimated values of Akaike's information criterion
∙ The estimated values of mean square errors
∙ The estimates of all parameters included in each model
∙ The estimated errors at the testing time points of 25%, 50%, and 75% of the data sets

The input-output rules of each unit on each layer are given by

h_j = f\left(\sum_{i=1}^{I} w^{1}_{ij} x_i\right),   (1)

y_k = f\left(\sum_{j=1}^{J} w^{2}_{jk} h_j\right),   (2)

where the activation function f(⋅) is the widely-known logistic (sigmoid) function

f(x) = 1/(1 + e^{-\theta x}),   (3)

and θ is the gain of the sigmoid function. We apply multi-layered neural networks trained by back-propagation in order to learn the interaction among software components [4]. We define the error function by

E = \frac{1}{2}\sum_{k=1}^{K} (y_k - d_k)^2,   (4)

where d_k (k = 1, 2, ..., K) are the target values for the outputs. We apply 5 kinds of model type as the target values d_k (k = 1, 2, ..., 5): the exponential NHPP (nonhomogeneous Poisson process) model, the delayed S-shaped NHPP model, the logarithmic Poisson execution time model, the exponential SDE (stochastic differential equation) model, and the S-shaped SDE model, respectively. Then, the number of units I in the sensory layer is 37 because of the 5 models.
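To make equations (1)-(4) concrete, the following NumPy sketch implements the forward pass and the error of the three-layer network of Fig. 1. The layer sizes, the gain θ, and the sample target are illustrative assumptions, and the weight-update step of back-propagation is omitted for brevity.

```python
# A minimal sketch of the network of Fig. 1 and equations (1)-(4).
import numpy as np

rng = np.random.default_rng(0)
I, J, K, theta = 37, 10, 5, 1.0            # sensory/association/response sizes

w1 = rng.normal(scale=0.1, size=(I, J))    # weights: sensory -> association
w2 = rng.normal(scale=0.1, size=(J, K))    # weights: association -> response

def f(x):
    return 1.0 / (1.0 + np.exp(-theta * x))    # equation (3): sigmoid with gain

def forward(x):
    h = f(x @ w1)                              # equation (1)
    y = f(h @ w2)                              # equation (2)
    return h, y

def error(y, d):
    return 0.5 * np.sum((y - d) ** 2)          # equation (4)

x = rng.random(I)           # normalized model-selection criteria as inputs
d = np.eye(K)[1]            # target: one-hot code of the best-fitting model
h, y = forward(x)
print(error(y, d))
```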
B. Optimal Software Reliability Model Based on Deep Learning

The structure of the deep learning in this paper is shown in Fig. 2, where z_l (l = 1, 2, ..., L) and z_m (m = 1, 2, ..., M) denote the pre-training units, and o_n (n = 1, 2, ..., N) is the amount of compressed characteristics. Several algorithms for deep learning have been proposed [5]–[10]. In this paper, we apply the deep neural network to learn the software testing phase.

As with the neural network, we apply the following amount of information to the parameters of the pre-training units:

∙ The estimated values of Akaike's information criterion
∙ The estimated values of mean square errors
∙ The estimates of all parameters included in each model
∙ The estimated errors at the testing time points of 25%, 50%, and 75% of the data sets

Moreover, we apply the 5 kinds of model type as the amount of compressed characteristics, i.e., the exponential NHPP (nonhomogeneous Poisson process) model, the delayed S-shaped NHPP model, the logarithmic Poisson execution time model, the exponential SDE (stochastic differential equation) model, and the S-shaped SDE model, respectively.

II. NUMERICAL EXAMPLES

We focus on actual software project data sets [11] in order to assess the performance of our method.

A. Estimation Results Based on Trend Analysis for Each Model

We apply the 5 models, covering the two typical curve types, to the actual fault data sets. The estimation results obtained by using 90% of the data set of each software development project are shown in Figs. 3–5. Similarly, the estimation results obtained by using 80% of the data set of each project are shown in Figs. 6–8. From Figs. 3–8, we can confirm that the model best fitted to the past data is not always the best fitted to the future. It is therefore necessary for the software managers to be offered the model best fitted for the future.
[Fig. 2 about here: stacked pre-training units (1st: input and output layer; 2nd: hidden layer) with units z_l and z_m feeding the compressed characteristics o_n, which continue into the deep learning stage.]
Fig. 2. The structure of deep learning.
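As an illustration of the pre-training idea sketched in Fig. 2, the sketch below trains one sigmoid autoencoder layer to reconstruct the per-model criteria vectors, yielding the compressed characteristics. The shapes, learning rate, and training loop are assumptions for illustration, not the authors' configuration.

```python
# A hedged NumPy sketch of one greedy pre-training layer.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(X, n_hidden, epochs=500, lr=0.05):
    """Train one pre-training unit layer to reconstruct its input X."""
    n, n_in = X.shape
    We = rng.normal(scale=0.1, size=(n_in, n_hidden))   # encoder weights
    Wd = rng.normal(scale=0.1, size=(n_hidden, n_in))   # decoder weights
    for _ in range(epochs):
        Z = sigmoid(X @ We)                  # compressed characteristics o_n
        Xh = sigmoid(Z @ Wd)                 # reconstruction of the input
        d_out = (Xh - X) * Xh * (1 - Xh)     # squared-error output gradient
        d_hid = (d_out @ Wd.T) * Z * (1 - Z) # back-propagated to the encoder
        Wd -= lr * (Z.T @ d_out) / n
        We -= lr * (X.T @ d_hid) / n
    return We

# The compressed codes sigmoid(X @ We) would then feed a final layer that
# scores the five candidate models (Exp/S-shape NHPP, Log Poisson, SDE types).
```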
[Figs. 3 and 4 about here: cumulative number of detected faults vs. time (months), comparing Actual (100%), Actual (90%), Exp (NHPP), S-shape (NHPP), Log (NHPP), Exp (SDE), and S-shape (SDE).]
Fig. 3. The estimation results by using 90% of the data set of software project 1.
Fig. 4. The estimation results by using 90% of the data set of software project 2.
B. Estimated Results of the Optimal Model Based on the Deep Learning

The estimated results of the optimal model based on the neural network and deep learning are shown in Tables I and II. From Tables I and II, we find that the recognition rates based on deep learning are better than those of the neural network. In particular, the estimation results based on deep learning give a high recognition rate in terms of the type of model. Moreover, the estimation results based on deep learning for the long-term prediction with 80% of the data perform significantly better than those of the neural network.
Table I
THE COMPARISON OF PREDICTION ACCURACY AFTER 90% OF ACTUAL FAULT DATA

Accurate Model   Model selected by Neural Network   Model selected by Deep Learning
S-shape (NHPP)   Exp (SDE)                          Exp (NHPP)
S-shape (SDE)    Exp (SDE)                          S-shape (SDE)
S-shape (SDE)    Exp (SDE)                          S-shape (NHPP)

                 Recognition Rate by Neural Network  Recognition Rate by Deep Learning
Type of Model    0%                                  67%
Name of Model    0%                                  33%

Table II
THE COMPARISON OF PREDICTION ACCURACY AFTER 80% OF ACTUAL FAULT DATA

Accurate Model   Model selected by Neural Network   Model selected by Deep Learning
Exp (NHPP)       Exp (SDE)                          Log (NHPP)
S-shape (NHPP)   Exp (SDE)                          Exp (SDE)
S-shape (SDE)    Exp (SDE)                          S-shape (SDE)
S-shape (SDE)    S-shape (SDE)                      S-shape (SDE)
S-shape (NHPP)   S-shape (SDE)                      S-shape (SDE)
S-shape (SDE)    S-shape (SDE)                      S-shape (SDE)

                 Recognition Rate by Neural Network  Recognition Rate by Deep Learning
Type of Model    67%                                 83%
Name of Model    33%                                 50%
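The recognition rates reported in Tables I and II can be reproduced by matching the selected models against the accurate ones. The sketch below is a minimal illustration; the model-family parsing is an assumed convention for distinguishing the NHPP and SDE types.

```python
# A small sketch of the recognition rates of Tables I and II.
def recognition_rates(accurate, selected):
    family = lambda m: m.split("(")[-1]    # "NHPP)" vs. "SDE)" family tag
    n = len(accurate)
    by_type = sum(family(a) == family(s) for a, s in zip(accurate, selected)) / n
    by_name = sum(a == s for a, s in zip(accurate, selected)) / n
    return by_type, by_name

acc = ["S-shape (NHPP)", "S-shape (SDE)", "S-shape (SDE)"]
dl  = ["Exp (NHPP)", "S-shape (SDE)", "S-shape (NHPP)"]
print(recognition_rates(acc, dl))   # -> (0.67, 0.33), as in Table I
```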
[Figs. 5-8 about here: cumulative number of detected faults vs. time (months) for the remaining project/percentage combinations, with the same legend as Figs. 3 and 4.]
Fig. 5. The estimation results by using 90% of the data set of software project 3.
Fig. 6. The estimation results by using 80% of the data set of software project 1.
Fig. 7. The estimation results by using 80% of the data set of software project 2.
Fig. 8. The estimation results by using 80% of the data set of software project 3.
III. CONCLUSION

In the past, many software reliability models have been proposed. In fact, these software reliability models have been applied to many software development projects. However, it is very difficult for software managers to select the optimal software reliability model for an actual software development project. As an example, the software managers can assess the software reliability for the past data sets by using model evaluation criteria such as Akaike's information criterion, mean square errors, predicted relative errors, and so on. On the other hand, the estimation results based on the past fault data cannot guarantee the prediction of future data sets in actual software projects.

This paper has focused on learning from past software development projects. We have proposed a selection method for the optimal software reliability model based on deep learning. In particular, it is difficult to assess the reliability by using only the past fault data, because the estimation results based on the past fault data cannot be guaranteed for the future data sets of actual software projects. In this paper, we have compared the reliability assessment method based on the neural network with that based on deep learning. Moreover, we have shown that the proposed method based on deep learning can assess reliability better than that based on the neural network. Thereby, we have found that our method can assess future software reliability with high accuracy based on past software project data.

In future study, it will be necessary to analyze many training data sets from actual software development projects. Thereby, the proposed method based on deep learning will become more useful for software managers in assessing software reliability.

ACKNOWLEDGMENTS

This work was supported in part by the Telecommunications Advancement Foundation in Japan, and JSPS KAKENHI Grants No. 15K00102 and No. 25350445 in Japan.

REFERENCES

[1] M.R. Lyu, ed., Handbook of Software Reliability Engineering, IEEE Computer Society Press, Los Alamitos, CA, 1996.
[2] S. Yamada, Software Reliability Modeling: Fundamentals and Applications, Springer-Verlag, Tokyo/Heidelberg, 2014.
[3] P.K. Kapur, H. Pham, A. Gupta, and P.C. Jha, Software Reliability Assessment with OR Applications, Springer-Verlag, London, 2011.
[4] E.D. Karnin, "A simple procedure for pruning back-propagation trained neural networks," IEEE Trans. Neural Networks, vol. 1, pp. 239-242, June 1990.
[5] D.P. Kingma, D.J. Rezende, S. Mohamed, and M. Welling, "Semi-supervised learning with deep generative models," Proceedings of Neural Information Processing Systems, 2014.
[6] A. Blum, J. Lafferty, M.R. Rwebangira, and R. Reddy, "Semi-supervised learning using randomized mincuts," Proceedings of the International Conference on Machine Learning, 2004.
[7] E.D. George, Y. Dong, D. Li, and A. Alex, "Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition," IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 30-42, 2012.
[8] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.A. Manzagol, "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion," Journal of Machine Learning Research, vol. 11, no. 2, pp. 3371-3408, 2010.
[9] H.P. Martinez, Y. Bengio, and G.N. Yannakakis, "Learning deep physiological models of affect," IEEE Computational Intelligence Magazine, vol. 8, no. 2, pp. 20-33, 2013.
[10] B. Hutchinson, L. Deng, and D. Yu, "Tensor deep stacking networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1944-1957, 2013.
[11] W.D. Brooks and R.W. Motley, Analysis of Discrete Software Reliability Models, Technical Report RADC-TR-80-84, Rome Air Development Center, 1980.
A Comprehensive Analysis Method for System Resilience Considering Plasticity

Yanbo Zhang†, Rui Kang†‡, Ruiying Li†‡∗, Chenxuan Yang†
† School of Reliability and Systems Engineering, Beihang University, Beijing 100191, China
‡ Science and Technology on Reliability and Environmental Engineering Laboratory, Beijing 100191, China
Abstract—Resilience is generally described as the ability of a system to absorb disturbance and to recover rapidly back to its original state. However, the system may not recover thoroughly after every interruption. This paper defines the resilient limit and system plasticity, and proposes an analysis method for system resilience considering plasticity, which provides a comprehensive evaluation of both the resilience and the plasticity of systems suffering from performance degradation. The resilience and plasticity of an airborne GPS/INS navigation system are then analyzed to illustrate the effectiveness of our method.

Keywords—Resilience triangle; Resilient limit; Plasticity; Performance evaluation

I. INTRODUCTION

The concept of resilience was initially used in ecology [1]; resilience theory then advanced dramatically and found wide application in a range of areas, such as engineering [2], sociology [3] and economics [4]. In engineering, system resilience is not only concerned with the influence of failures on system performance, but also focuses on the time needed for the interrupted system to recover back to its normal operation [5]. For example, Fiksel [6] defined resilience as the ability of the system to tolerate disruptions while maintaining its original structure and function; Haimes [7] conceptualized resilience as the ability of systems to withstand a major disruption within acceptable degradation parameters and to recover within an acceptable time and composite costs and risks; and Zobel [8] continued to describe resilience as the ability of the system to return to its original state or move to a new, more desirable state after being disturbed.

In order to quantify system resilience, researchers have proposed different evaluation frameworks and methods, both performance-based and structure-based [9]. Bruneau et al. [10] introduced the concept of the resilience triangle and proposed a simple quantification method for the loss of system resilience, characterizing it by the time integral of the performance loss. This notion was further extended by Zobel et al. [11], who took the upper bound of the acceptable recovery time T* into consideration, aiming at the approximate prediction and calculation of system resilience. Note that if this upper bound of recovery time T* is exceeded, we can simply consider that the resilience strategy of the system fails.

Although the resilience triangle and its extensions seem reasonable for system resilience evaluation, previous studies only consider the effect of the time limit on the resilience evaluation, without considering the influence of the severity of external disturbances or internal failures. However, the resistance and recovery capacity of practical systems is limited, and an interrupted system cannot necessarily recover to its original state within the acceptable time limit. To solve this problem, the concepts of system resilient limit and system plasticity are introduced in this paper, and the evaluation method proposed in [8] is improved so as to better describe the resilient behaviors of systems. A GPS/INS navigation system is then analyzed in various electromagnetic interference environments to illustrate its resilience and plasticity processes.

II. RESILIENT LIMIT AND PLASTICITY

In general, resilience refers to the capacity of the system to withstand an impact and to recover back to its normal state when interruptions occur. Considering that the capacity of both resistance and recovery is limited, let T* be the maximum acceptable recovery time and D* be the severity threshold of the interruptions under which the system can absorb and fully recover; the resilient limit K can then be defined by

K = \{T \le T^{*} \text{ and } D \le D^{*}\}   (1)

However, since the severity of interruptions is often difficult to measure, in most cases we represent the severity by the initial performance loss X0. According to (1), the system is resilient when and only when 1) the initial performance loss X0 is less than X*, the performance loss threshold corresponding to D*, and 2) the time T needed for perfect recovery does not exceed T*. However, when either X0 ≥ X* or T ≥ T* happens, the system moves to a plastic state, in which it cannot recover to the original state within the required time without external intervention, as shown in Fig. 1.

The plastic state refers to the state in which the interrupted system fails to recover to the original performance level Q0 within the required time T*. Compared with the previous resilience concepts, which focus on the system performance loss and the recovery time only, resilience considering plasticity also concerns whether the severity of external disturbances and internal failures exceeds the severity threshold D*.

In practical applications, the resilience and plasticity behaviors of a system are diverse. Take a cloud computing system as an example: when a server suffers a small-scale DDoS attack, the response time of legitimate traffic may increase rapidly, and the system behaves in a resilient state. Otherwise,



if the number of malicious requests keeps increasing and succeeds in exhausting the bandwidth, the service may be interrupted, and hence the system exhibits plasticity and cannot recover perfectly without external intervention [12]. System plasticity can be used to describe a system with imperfect recovery, and can also be used as a resilience strategy to reduce the effects of adverse events. When a disruptive event occurs, such a strategy allows the system to shut down part of its low-priority functions or degrade its performance automatically. It helps curb the influence of external disturbances on the system and also helps maintain its structure, avoiding devastating failures and guaranteeing the normal operation of basic services. Let's take a mobile communication network as an example. We provide two scenarios: 1) when a communication base station breaks down and cannot be substituted by other stations, it would be difficult to support the normal traffic with full services due to the lack of a network node; and 2) when traffic surges and causes network congestion, leading to increased delay, delay jitter and packet loss, the voice services will be degraded [13]. In these cases, the network supplier can proactively limit its low-priority services, such as data and short messages, in an attempt to avoid large network congestion and to guarantee its basic voice service.

III. RESILIENCE CALCULATION CONSIDERING PLASTICITY

In order to describe the resilient behaviors of systems accurately, we combine the concept of system plasticity into the original resilience definition and define the system resilience as the intrinsic ability of the system to withstand a certain performance loss after interruptions and to recover to its normal state within the required time.

Since the resilience triangle model is built upon the real-time performance of systems and the integral of the area is relatively difficult to obtain, the system resilience R_P can be calculated using the triangle area as

R_P = \begin{cases} \exp\left(-\dfrac{X_0 T_1}{2 Q_0 T^*}\right), & X_0 < X^* \\ \exp\left[-\dfrac{X_0 T_1 + (Q_0 - Q_1)(T_1 + T_2)}{2 Q_0 T^*}\right], & X_0 \ge X^* \end{cases}   (2)

where Q1 is the performance level after the recovery process, T1 is the recovery time, and T2 is the maintenance time. We then have 0 < R_P ≤ 1, and a larger R_P usually indicates a better resilience strategy with better performance robustness and recovery rate, and vice versa. Fig. 2(a) shows a resilience process, where X0 ≤ X* and T1 ≤ T*; Fig. 2(b), (c) and (d) demonstrate three types of plasticity processes, with T1 > T*, X0 > X*, and both T1 > T* and X0 > X*, respectively.

[Fig. 2. Resilience calculation considering plasticity]
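As an illustration of Eq. (2), the minimal sketch below computes R_P from the quantities defined above; the numeric inputs in the usage line are hypothetical and not taken from the paper.

```python
import math

def resilience_rp(x0, t1, q0, t_star, x_star, q1=None, t2=0.0):
    """System resilience R_P per Eq. (2)."""
    if x0 < x_star:
        # Resilient branch: full recovery, only the loss triangle X0*T1 matters.
        return math.exp(-(x0 * t1) / (2.0 * q0 * t_star))
    if q1 is None:
        q1 = q0
    # Plastic branch: imperfect recovery to level q1, plus maintenance time t2.
    return math.exp(-(x0 * t1 + (q0 - q1) * (t1 + t2)) / (2.0 * q0 * t_star))

# Hypothetical example: 30% initial loss, full recovery within 2 time units.
print(resilience_rp(x0=0.3, t1=2.0, q0=1.0, t_star=5.0, x_star=0.6))
```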
IV. CASE STUDY

Fig. 3 shows a typical structure diagram of an airborne GPS/INS integrated navigation system. The ephemeris and pseudorange data that the GPS obtains from satellites and the position estimates from the INS are processed in the Kalman filter; the estimates of the position and velocity of the aircraft are then produced and can further be applied to the accurate calibration of the INS [14]. When the GPS is disturbed by electromagnetic interference, the carrier-to-noise power density ratio (C/N0) decreases, deteriorating its positioning accuracy. In extreme situations, the receiver may lose lock, and the INS has to take over the whole navigation task. Moreover, considering the existence of gyro drift error, the position deviation accumulates over time, gradually impairing the localization and navigation capability of the GPS/INS system [15].

[Fig. 3. The structure diagram of the GPS/INS integrated system]


We consider GPS's susceptibility to interference and its poor reliability in a dynamic environment, and then analyze the resilient and plastic behaviors of the GPS/INS system under varying electromagnetic jamming conditions using the method provided in Section III.

A. GPS/INS system performance

Considering that accurate positioning of the aircraft is one of the key performance outputs of a navigation system, this paper chooses the real-time navigation precision N(t) as the performance index, and the initial performance level during normal operation is set as 100%, namely Q0 = 100%. We also assume that the navigation precision of the GPS/INS system starts at N0 = 0.3 meters, and the real-time navigation performance level at time t, P(t), can be expressed as

P(t) = N0 / N(t) × 100%   (3)

If the navigation precision at time t is 1.5 meters, then the corresponding real-time performance level is 20%. Meanwhile, if the navigation precision is slightly superior to N0, P(t) is considered as 100%.

B. The resilience and plasticity process analysis

For different jamming intensities, the GPS/INS system shows different resilient behaviors, and three possible cases are illustrated in Fig. 4.

[Fig. 4. Resilience and plasticity processes of the GPS/INS navigation system]

1) Case 1: GPS degradation case: As shown in Fig. 4(a), at time t0 the GPS receiver is slightly jammed, and the jamming intensity is below the tracking threshold D_L. Consequently, a certain degree of positioning error is generated, leading to a declining navigation performance. The navigation system cannot recover back to normal until the receiver is resampled and the position is updated [15], and the sampling period T0* is the maximum acceptable recovery time.

In this case, the navigation performance declines with the increase of jamming intensity. Considering the complex mechanism and multiple factors involved, a simplified linear equation is formulated to express the relationship between the interference and the performance, namely

dP(t) = dI(t) · C   (4)

where I(t) signifies the jamming intensity at time t, and C is a negative constant.

2) Case 2: GPS loss-of-lock case: As shown in Fig. 4(b), at time t1 the interference strength exceeds the tracking threshold D_L but stays below the damage threshold D_G; the tracking loop loses lock and the receiver is disabled. The INS then operates in a stand-alone mode, and the systematic errors increase with time until the GPS recaptures the satellite signal and resumes its work [16], while T1* is the maximum acceptable recovery time.

3) Case 3: GPS collapse case: In Fig. 4(c), at time t2 the GPS is intensively jammed and the jamming signal oversteps the damage threshold D_G; the limiter tube is thus damaged, leading to the collapse of the GPS. D_G is therefore the severity threshold of the navigation system, D*. In this case, plastic damage is produced and the system cannot recover to its normal state without the assistance of external interventions, with T2* as the maximum acceptable recovery time. Moreover, the maintenance time follows a certain probability distribution, constrained by the aircraft retrieval and restoration time, maintenance labor and resources, etc.

C. Simulation process

Considering the complexity of systems and the randomness of interruptions, it is difficult to build a real-time performance function, making it unrealistic to obtain the resilience analytically. To solve these issues, this paper analyzes the resilience and plasticity processes of the GPS/INS navigation system and estimates the average system resilience with the Monte Carlo method. For simplicity of the model, we assume that

∙ jamming signals only last for a short time, which can be ignored compared to the time for system recovery;
∙ jamming intensity follows a normal distribution, with μ = 10 and σ² = 25.

The simulation starts with random sampling based on the predefined parameters for the different cases, and the system performance is recorded according to the jamming intensity sampling results. The simulation iterates 10,000 times, and we thus obtain the average resilience and the probability of the system failing to be repaired within the maximum acceptable recovery time. The simulation procedure is illustrated in Fig. 5.
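A minimal sketch of this Monte Carlo procedure follows. The thresholds D_L and D_G, the performance losses and the per-case recovery parameters are hypothetical placeholders (the paper specifies only the jamming distribution and the iteration count), so the printed numbers will not reproduce the results reported in Section D.

```python
import math
import random

# Hypothetical thresholds and nominal values, for illustration only.
D_L, D_G = 12.0, 20.0        # tracking / damage thresholds on jamming intensity
T_STAR, Q0, X_STAR = 5.0, 1.0, 0.6

def sample_resilience(rng):
    i = rng.gauss(10.0, 5.0)                 # jamming intensity ~ N(10, 25)
    if i <= D_L:                             # Case 1: degradation only
        x0, t1, q1, t2 = 0.02 * max(i, 0.0), 1.0, Q0, 0.0
    elif i <= D_G:                           # Case 2: loss of lock, INS stand-alone
        x0, t1, q1, t2 = 0.4, 4.0, Q0, 0.0
    else:                                    # Case 3: receiver damaged (plastic)
        x0, t1, q1, t2 = 0.9, 8.0, 0.8 * Q0, 3.0
    if x0 < X_STAR:                          # Eq. (2), resilient branch
        rp = math.exp(-x0 * t1 / (2 * Q0 * T_STAR))
    else:                                    # Eq. (2), plastic branch
        rp = math.exp(-(x0 * t1 + (Q0 - q1) * (t1 + t2)) / (2 * Q0 * T_STAR))
    return rp, t1 <= T_STAR                  # resilience, recovered in time?

rng = random.Random(0)
samples = [sample_resilience(rng) for _ in range(10_000)]
avg_rp = sum(rp for rp, _ in samples) / len(samples)
p_delayed = sum(not ok for _, ok in samples) / len(samples)
print(f"average resilience ~ {avg_rp:.4f}, P(delayed recovery) ~ {p_delayed:.4%}")
```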


[Fig. 5. Simulation procedures of the GPS/INS system]

D. Discussion

After 10,000 iterations, the average resilience of the navigation system is 0.8546, while the probability of delayed recovery is 0.67%. We can conclude that the resilience strategies of the given navigation system are quite successful, and that the system exhibits a significant capability of withstanding electromagnetic interference and recovering within the required time after being interrupted. Fig. 6 presents both the frequency histogram and the cumulative frequency curve of the resilience of the navigation system. Since plasticity is considered in the resilience analysis, some extreme situations with relatively small resilience values appear in Fig. 6, indicating that the navigation system enters a plastic state after severe interference.

[Fig. 6. Resilience distribution of the GPS/INS navigation system]

V. CONCLUSION

Considering that the existing methods fail to describe the resilient and plastic behaviors of systems simultaneously, this paper introduces the concepts of system resilient limit and system plasticity, focuses on the influence of interruptions on system resilience, and proposes an analysis framework for system resilience considering plasticity. The GPS/INS navigation system is taken as an example to evaluate its resilience. For different jamming intensities, different types of resilience strategies are taken. Our method can serve as an effective tool for decision makers to evaluate resilience strategies and to achieve the optimal tradeoff between the quality of services and the recovery time. However, since system plasticity cannot be described simply by the performance loss and recovery time after interruptions, further studies will focus on system plasticity analysis.

ACKNOWLEDGMENT

This work was supported by the National Natural Science Foundation of China (No. 61304220; 61573043).

REFERENCES

[1] Holling C S, "Resilience and stability of ecological systems," Annual Review of Ecology and Systematics, 1973:1-23.
[2] Manyena S B, "The concept of resilience revisited," Disasters, 2006:30(4):434-450.
[3] Bhamra R, Dani S, & Burnard K, "Resilience: the concept, a literature review and future directions," International Journal of Production Research, 2011:49(18):5375-5393.
[4] Martin R, & Sunley P, "On the notion of regional economic resilience: conceptualization and explanation," Journal of Economic Geography, 2014:15(1):1-42.
[5] Whitson J C, & Ramirez-Marquez J E, "Resiliency as a component importance measure in network reliability," Reliability Engineering & System Safety, 2009:94(10):1685-1693.
[6] Fiksel J, "Designing resilient, sustainable systems," Environmental Science & Technology, 2003:37(23):5330-5339.
[7] Haimes Y Y, "On the definition of resilience in systems," Risk Analysis, 2009:29(4):498-501.
[8] Zobel C W, "Comparative visualization of predicted disaster resilience," in Proceedings of the 7th International ISCRAM Conference, 2010:1-5.
[9] Hosseini S, Barker K, & Ramirez-Marquez J E, "A review of definitions and measures of system resilience," Reliability Engineering & System Safety, 2016:145:47-61.
[10] Bruneau M, Chang S E, Eguchi R T, Lee G C, O'Rourke T D, Reinhorn A M, ... & von Winterfeldt D, "A framework to quantitatively assess and enhance the seismic resilience of communities," Earthquake Spectra, 2003:19(4):733-752.
[11] Zobel C W, & Khansa L, "Characterizing multi-event disaster resilience," Computers & Operations Research, 2014:42:83-94.
[12] Thenmozhi R, Karthikeyan P, Vijayakumar V, Keerthana M, & Amudhavel J, "Backtracking performance analysis of Internet protocol for DDoS flooding detection," in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, IEEE, 2015:1-4.
[13] Agarwal A, "Quality of Service (QoS) in the New Public Network Architecture," IEEE Canadian Review, 2000:36:22-25.
[14] Wang W, Liu Z Y, & Xie R R, "Quadratic extended Kalman filter approach for GPS/INS integration," Aerospace Science and Technology, 2006:10(8):709-713.
[15] Austin R, Unmanned Aircraft Systems: UAVS Design, Development and Deployment, Vol. 54, John Wiley & Sons, 2011.
[16] Parkinson B W, Progress in Astronautics and Aeronautics: Global Positioning System: Theory and Applications, Vol. 2, AIAA, 1996.


Reliability and failure behavior model of optoelectronic devices

Ning Tang, Ying Chen, ZengHui Yuan
Science and Technology on Reliability and Environmental Engineering Laboratory
School of Reliability and System Engineering, Beihang University
Beijing, China
632057638@qq.com
Abstract—The recent dramatic development of science and technology has made optoelectronic devices, a crucial component in many sophisticated systems, more powerful, accurate and complicated than ever. As a result, the development of these optoelectronic products not only increases the complexity of their structures, but also makes it increasingly challenging to predict their reliability. In this situation, this paper provides an engineering method for the reliability prediction of optoelectronic products. Several kinds of software are utilized to assist with the computation: Pspice is used to simulate the electrical stress, Calce FAST resolves the life prediction problem for each electrical failure mechanism individually, and MATLAB is utilized to fit the degradation curves to the data obtained above. Finally, a system failure mechanism tree considering each failure mechanism is established, and the lifetime prediction of a sun sensor is obtained from this failure mechanism tree.

Keywords—failure behavior; nonlinear sequence accumulation; failure mechanism tree

I. INTRODUCTION

With the development of science and technology, optoelectronic products play an increasingly important role; the life prediction of optoelectronic products has therefore become urgent. However, the damage mechanisms caused by different environmental stresses are different and cannot be treated as equivalent, and how to use a systematic approach to accumulate the damage caused by various stresses has become a popular research area. At present, there are two ways to estimate the lifetime of products. One is the statistics-based estimation method; typical manuals are GJB Z299C-2006 and US MIL-HDBK-217F. Predicting the reliability of electromechanical products in this way is usually based on historical data. The second method is based on the physics of failure: choosing the failure mechanisms according to physical models, obtaining the relevant parameters and calculating the predicted life of the product. Correlations among failure mechanisms have already been studied by some scholars. Keedy and Feng [1,2] studied a medical stent subject to both degradation and a random-vibration failure mechanism. They gave two probability models of the failure process; based on the assumption that the failure processes are unrelated, the reliability of the system can be obtained. The reliability of binary systems has been studied by Wang and Xing [3]. Huang and Askin [4] studied a reliability analysis method for electronic equipment; many of the failure models they studied are competitive. Ying Chen [5] proposed five basic failure mechanism relationships: competition, triggering, promotion, suppression and damage accumulation; a failure mechanism tree considering the correlation of failure mechanisms is used to predict and simulate the life of the product. According to the type of injury, damage accumulation can be divided into destructive accumulation and joint parameter accumulation. Based on the Crack Growth Model, Professor Manson [6] of Case Western Reserve University and the NASA Lewis Research Center in Cleveland researched the bilinear damage rule for the case where the stress is the same but the loading sequence differs. The Center for Advanced Life Cycle Engineering at the University of Maryland [7] used the Coffin-Manson model to calculate the system cumulative damage when temperature cycling and vibration exist simultaneously. A number of scholars in China have also researched damage accumulation: Yuan Wei of Nanjing University of Aeronautics and Astronautics [8] published a study of metal fatigue life prediction under multiaxial random loading conditions. Fei Chai and Michael Pecht [9] discussed the Miner linear rule, which is widely used in engineering but conservative, so that its results are larger than the standard values. For optoelectronic products, the failure mechanisms are diverse and the failure behavior of the system is complex; it is necessary to consider the relationships among the failure mechanisms in order to assess the reliability of the system. Ying Chen [5] proposed the failure mechanism tree, which can describe failure mechanisms that have complex relationships.

In this paper, a nonlinear sequence accumulation model [10] is adopted for the case where different stresses affect the system in sequence. Finally, we use the failure mechanism tree to calculate the life of a system, and this method considers the correlation of all failure mechanisms.

II. THE FAILURE BEHAVIOR AND FAILURE CORRELATION

Failure behavior is the variation that can be observed from the surface of a product and that appears over time. It includes the appearance, development and coupling of failure mechanisms, and the process that results in system failure.

The logical relationship of failure mechanisms refers to the fact that each component contains a variety of failure mechanisms, which are either independent or mutually influencing.

National Natural Science Foundation of China, 61503014.




There are five relationships among the failure mechanisms: competition, triggering, promotion, suppression and damage accumulation. After analysis, the sun sensor's main logical relationships are damage accumulation and joint parameter accumulation. Competition means that the system has several independent failure mechanisms, each of which would by itself lead to system failure. Similarly, if two or more failure mechanisms cause the same type of damage in the same position, the relationship is called damage accumulation. There are several kinds of damage accumulation: linear accumulation under the same stress condition, bilinear accumulation when the same stress operates in sequence, and nonlinear accumulation when thermal and vibration stresses operate together.

A. The linear cumulative damage model under the same stress

The average damage caused by each cycle is 1/N, and this damage can be accumulated. The damage caused by n cycles of a constant-amplitude load equals the cycle ratio C = n/N. The damage D of variable-amplitude loading is equal to the sum of the cycle ratios,

D = \sum_{i=1}^{l} n_i / N_i   (1)

where i is the level number of the main load, n_i is the number of cycles at the i-th stress level, and N_i is the life at the i-th stress level. The failure criterion is

D = \sum_{i=1}^{l} n_i / N_i = D_f   (2)

When the accumulated damage reaches the critical value D_f, the product fails.

B. Bilinear cumulative damage model for the same kind of stress applied in sequence

Based on the Crack Growth Model, Professor Manson [6] of Case Western Reserve University and the NASA Lewis Research Center in Cleveland researched the bilinear damage rule for the case where the stress is the same but the loading sequence differs. The bilinear damage rule is

N_{II} = N_f - N_I = N_f \left[ 1 - \exp\left( Z N_f^{\varphi} \right) \right]   (3)

where

\varphi = \frac{1}{\ln(N_1/N_2)} \ln \left\{ \frac{\ln\left[ 0.35\,(N_1/N_2)^{0.25} \right]}{\ln\left[ 1 - 0.65\,(N_1/N_2)^{0.25} \right]} \right\}   (4)

Z = \frac{\ln\left[ 0.35\,(N_1/N_2)^{0.25} \right]}{N_1^{\varphi}}   (5)

Here N_1 and N_2 are respectively the numbers of cycles to failure under the two stress level conditions, N_I is the number of cycles that the product endures in the first stage, and N_{II} is the number of cycles that the product endures from the first stage to failure.

C. Nonlinear cumulative damage model when heat and vibration operate in sequence

The CASPaR center [11] of the University of Georgia has studied cumulative damage models for different loading sequences. From experiments, they obtained the cumulative damage model

CDI = \sum_{i=1}^{\text{number of load steps}} n_i / N_i   (6)

where CDI is the cumulative damage index, n_i is the actual number of applied cycles for the i-th load step, and N_i is the number of cycles to failure for the i-th load step. CDI ranges from 0 to 1.0, with 0 being the undamaged state and 1.0 the fully damaged state. Failure is typically defined when the CDI exceeds a critical value of 0.7.

When the first stress is vibration and the last is temperature cycling, the cumulative damage model is

CDI = \left( \frac{n_T}{N_T} \right)^{m_T} + \left( \frac{n_V}{N_V} \right)^{m_V}   (7)

where the CDI takes the critical value 0.7, and m_T and m_V are fitting parameters obtained from the experimental results.

III. THE PROCESS OF FAILURE BEHAVIOR OF OPTOELECTRONIC PRODUCTS

The structure of an optoelectronic product is composed of a circuit part and an optical part. Common failure mechanisms of the circuit part are thermal fatigue, vibration fatigue, electromigration, TDDB and hot carrier injection. Failure mechanisms of the optical part mainly include the coloring effect of quartz glass, the aging of silicone rubber and the degradation of the silicon photocell.

IV. CASE ANALYSIS

A sun sensor obtains its orientation vector in the celestial coordinate system. It mainly includes an optical probe and a signal processing circuit section.

A. The structure and composition of the sun sensor

The structure of the sun sensor is shown in Fig. 1, and its basic principle and composition are shown in Fig. 4. The incident light passes through the cylindrical mirror, the plate glass, the flat glass disc and the silicon photocell, and finally reaches the signal processing circuit board.

B. Environmental and working stress in the life cycle

During rocket launch, the sun sensor is mainly affected by vibration, and the vibration is random. After entering orbit, it is no longer affected by vibration but mainly by the spatial temperature cycle, with temperatures ranging from -50 to 50 degrees Celsius; the integrated life cycle profile of the sun sensor is shown in Fig. 2, and the vibration spectrum is shown in Fig. 3.
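The single-stress lives computed in the next section are combined through the accumulation rules above. A minimal sketch of the linear rule of Eq. (1) and the sequence-dependent index of Eq. (7) follows; all inputs in the usage lines are hypothetical.

```python
def miner_damage(cycles, lives):
    """Linear (Miner) accumulation, Eq. (1): D = sum(n_i / N_i)."""
    return sum(n / N for n, N in zip(cycles, lives))

def sequence_cdi(n_t, N_t, n_v, N_v, m_t, m_v):
    """Sequence-dependent index, Eq. (7): CDI = (n_T/N_T)^m_T + (n_V/N_V)^m_V."""
    return (n_t / N_t) ** m_t + (n_v / N_v) ** m_v

# Failure is declared once the index exceeds the critical value of 0.7.
print(miner_damage([100, 50], [1000, 400]))                  # 0.225
print(sequence_cdi(n_t=6.0, N_t=12.0, n_v=0.2, N_v=11.0,
                   m_t=0.91, m_v=0.93))
```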


[Fig. 1. Structure of the sun sensor]
[Fig. 2. Sun sensor's comprehensive stress profile]
[Fig. 3. Vibration spectrum of the simulation analysis]
[Fig. 4. The basic principle and composition of the sun sensor]

In space there are ionospheric plasma, magnetospheric plasma, auroral plasma and other low-energy charged ions, as well as solar cosmic rays, galactic cosmic rays, the Earth's radiation belt and other energetic charged particles. In space, the circuit portion of the sun sensor suffers thermal fatigue, vibration fatigue and electrical stress injuries, while the glass coloring effect, surface sputtering erosion, charge and discharge effects and radiation-induced pollution effects appear in the optical portion of the sun sensor.

C. Single Stress Analysis and Damage Calculation

1) Using Calce SARA software to simulate the temperature stress distribution of the sun sensor. The sun sensor circuit board model was established in the Calce SARA software. Entering the board boundary temperature, the thermal simulation results for each temperature were obtained. Finally, the temperature profile was entered into the life prediction module, and the thermal life prediction results for the circuit portion were obtained, as seen in Table I.

2) Using Calce SARA software to simulate the vibration stress distribution of the sun sensor. Using the model established in the last step, the vibration results were obtained by inputting the vibration power spectral density. Finally, the vibration profile was entered into the life prediction module, and the vibration life prediction results for the circuit portion were obtained.

3) Using Pspice, Cadence, Cyber and other software to simulate the electrical stress of the sun sensor. A Pspice circuit model of the signal processing circuit board in its normal state was established and a transient circuit simulation was run; the output parameters of the circuit, such as the voltage and current curves over time, were thus obtained. According to the electrical stress simulation results, the damage caused by electromigration, TDDB and hot carriers can be calculated. Then, inputting the important parameters shown in Table II, the predicted lives calculated by the Calce FAST software are presented in Table III.

4) Silicone rubber life expectancy of the optical part. The predicted life of the silicone rubber was calculated according to the selected aging failure model. The failure physics model of silicone rubber is

y = B e^{-K t^a},   K = A e^{-E/(RT)}   (8)

where B is a constant, K is the velocity constant, a is an empirical constant, t is the ageing time, y is the index indicating the degree of degeneration, E is the activation energy, R is the gas constant, T is the absolute temperature, and A is the frequency factor. In this paper, the values of a, E, A and B are 0.38, 27.03, 623.45 and 1.0208, respectively.

Firstly, the constant K is obtained from the formula, with A = 623.45, E = 27.03 kJ/mol, R = 8.314 J·mol⁻¹·K⁻¹, and the absolute temperature obtained from the thermal analysis of the probe, 72.249 °C (345.409 K): K = A e^{-E/(RT)} = 0.0509. The silicone rubber compression set model is obtained from y = B e^{-K t^a} and y = 1 - ε, namely

ε = 1 - y = B e^{-K t^a}   (9)

t = \left[ \frac{1}{K} \ln \frac{B}{1 - ε} \right]^{1/a}   (10)

The threshold value of silicone rubber deformation is 20%, and the predicted life is then obtained, namely t = 82.698 years.

The degradation of the light transmittance of silicone rubber in the space radiation environment was also calculated.
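As a small numerical check, the sketch below evaluates the Arrhenius rate constant of Eq. (8) with the paper's values, and then performs a least-squares fit of the double-exponential transmittance model used in the next subsection. The paper performs that fit in MATLAB; the Python version below, assuming numpy and scipy are available and run on synthetic data generated from the fitted Eq. (12), is only an illustrative stand-in.

```python
import numpy as np
from scipy.optimize import curve_fit

# Arrhenius rate constant of Eq. (8), with the paper's values (E in J/mol):
A, E, R, T = 623.45, 27.03e3, 8.314, 345.409
K = A * np.exp(-E / (R * T))
print(f"K = {K:.4f}")                    # ~0.0509, as reported in the paper

# Double-exponential transmittance model of Eq. (11):
def tr_model(t, a, b, c, d):
    return a * np.exp(b * t) + c * np.exp(d * t)

t = np.linspace(0.0, 30.0, 40)
y = tr_model(t, 6.35, -0.3635, 75.9, -0.01287)   # Eq. (12) as ground truth
params, _ = curve_fit(tr_model, t, y, p0=(5.0, -0.5, 70.0, -0.01))
print(params)                            # recovers the (a, b, c, d) of Eq. (12)
```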


In order to predict the degradation of the light transmittance of silicone rubber, least-squares fitting in MATLAB was applied to the experimental data. The formula selected is

Tr = a e^{bt} + c e^{dt}   (11)

where t is time, a, b, c, d are the fitting parameters, and Tr is the light transmittance. Fitting the degradation data of the light transmittance of silicone rubber, the curve equation is

Tr = 6.35 e^{-0.3635 t} + 75.9 e^{-0.01287 t}   (12)

If the degradation threshold of silicone rubber is 60%, then y = 60% × 82.45 = 49.47, and the failure life of the silicone rubber is 33.259 years according to the fitted model.

5) Life expectancy of quartz glass and silicon photovoltaic cells under the space radiation environment. Similarly, the degradation formulas of quartz glass and silicon photovoltaic cells were obtained respectively as

Tr = 14.07 e^{-8.47 t} + 78.18 e^{-0.02809 t}   (13)

Tr = 402 e^{-0.01196 t}   (14)

The light transmittance degradation threshold of quartz glass and silicon photovoltaic cells is 60%; the predicted lives of quartz glass and silicon photovoltaic cells are 12.29 years and 24.47 years, respectively.

TABLE I. THE CALCULATION RESULTS OF THE FAILURE TIME OF THE CIRCUIT MODULE UNDER TEMPERATURE CYCLING AND VIBRATION

fault location | type | failure mode (vibration) | failure mode (temperature cycling) | MTTF, vibration | MTTF, temperature cycling
U3 | 4051 | solder joint cracking | solder joint cracking | 11.32 years | 12.33 years
U2 | 80C32E | interconnection site cracking | solder joint cracking | 12.60 years | 13.11 years
U4 | 4051 | interconnection site cracking | solder joint cracking | 18.92 years | 13.86 years
U24 | 6664RH | interconnection site cracking | solder joint cracking | 19.13 years | 14.31 years
R2F4 | resistance | interconnection site cracking | solder joint cracking | 19.56 years | 14.77 years
R103 | resistance | interconnection site cracking | solder joint cracking | 19.98 years | 15.16 years
R102 | resistance | interconnection site cracking | solder joint cracking | 20.54 years | 16.73 years
R2C4 | resistance | interconnection site cracking | solder joint cracking | 21.35 years | 17.23 years
U23 | 54AC373 | interconnection site cracking | solder joint cracking | 21.91 years | 17.83 years
U1 | 4051 | interconnection site cracking | solder joint cracking | 22.31 years | 18.14 years

TABLE II. SOME IMPORTANT PARAMETERS OF ELECTROMIGRATION, TDDB AND HOT CARRIERS

electromigration: W is the width of the chip metallization layer (m); T is the thickness of the chip metal layer (m); J is the metal layer current density; T is the chip operating temperature (K).
TDDB: for the E model, Eox is the gate oxide field; for the 1/E model, Vox is the voltage of the oxide layer (V), Xox(eff) is the effective oxide thickness (m), and the oxide acceleration factor is given in m/V; T is the chip operating temperature (K).
hot carrier: Isub is the substrate current (A); Id is the leakage current (A); W is the channel width (m).

TABLE III. ELECTROMIGRATION, TDDB AND HOT CARRIER MECHANISMS -- THE RESULTS OF FAILURE TIME (25 °C)

fault location (chip code) | chip type | operating temperature (K) | MTTF, electromigration (year) | MTTF, TDDB (year) | MTTF, hot carrier (year)
U1 | 4051 | 339.15 | 23.20 | 25.38 | 15.82
U10 | AD574 | 317.95 | 19.75 | 26.54 | 15.17
U11 | LM108A | 312.15 | 22.11 | 26.38 | 17.48
U17 | DS26F32M | 330.19 | 22.55 | 29.59 | 16.88
U18 | DS26F31M | 329.99 | 22.73 | 28.37 | 17.49
U22 | 80C32E | 318.75 | 21.72 | 30.53 | 16.51
U23 | 54AC373 | 319.75 | 21.71 | 27.32 | 17.66
U24 | 6664RH | 317.35 | 20.24 | 27.11 | 15.35
U26 | 4060 | 316.45 | 24.24 | 29.35 | 15.14
U28 | CC4011 | 316.55 | 18.74 | 26.93 | 17.26
U32 | HS-565BH | 307.55 | >30 | 27.88 | 21.43
U39 | 54AC02 | 319.35 | 22.29 | 28.69 | 16.49
U40 | 54AC138 | 320.45 | 26.20 | 29.65 | 15.61

D. Multi-stress damage accumulation of the sun sensor

Equation (7) gives the accumulation formula, where n is the actual number of applied cycles, N is the number of cycles to failure, and the subscripts T and V refer to thermal and vibration loading respectively; here m_T = 0.91 and m_V = 0.93. U3 is taken as an example: with CDI = 0.7, n_V = 0.166, N_V = 11.32 and N_T = 12.33, we obtain n_T = 8.07. The accumulated lives of the remaining components can be obtained similarly.
MIGRATION, TDDB AND HOT CARRIERS damage accumulation.

89
Back to Contents

Fig. 5a) illustrates failure mechanism accumulation TABLE IV. THE PREDICTION LIFE RESULTS OF THE SUN SENSOR
correlation, where M1,…,Mn are failure mechanisms and F distributed parameter Average
equi
is their common consequence. According to the destructive pme
distributio life(
type, If M1,… Mn have damage accumulation correlation, n Type mean value standard deviation
nt year)
the threshold of system due to this kind of damage is Xth,
sun normal
then senor distribution
12.26 2.55 12.113

X th
ΔX i =
ti
(15)

a)parameter joint b) Damage accumulation

Fig. 5. The symble of failure mechanism.

Where ΔXi is the damage in unit time due to Mi, it is the


failure time due to Mi when it works alone. Then lifetime of
system is Fig. 6. The failure mechanism tree.
X X th
ς = th =
ΔX λ1ΔX 1 + ... + λi ΔX i
ACKONWLEDGMENT
X th
=
X th X The authors would like to thank National Natural Science
λ1 + ... + λi th
t1 ti Foundation of China for support research activities.
1 1
= =
λ1 λn n
λi
t1
+ ... +
tn t REFERENCES
(16)
i =1 i

Where ΔX is the accumulated damage in unit time. And


[1] Keedy E, Feng QM Reliability Analysis and Customized preventive
maintenance policies for stents with stochastic dependent competing
λi is a scaling factor of Mi, i=1,2,…,n. risk processes. IEEE Transactions on reliability 2013; 62 (4):887-898.
System failure probability F(t) is, [2] Keedy E, Feng Q. A Physics-of-failure based reliability and
maintenance modeling framework for stent deployment and
1 operation. Reliability engineering and system safety 2012;103:94-
F (t) = P(ς ≤ t) = P( ≤ t)
n
λi 101
t [3] Wang C, Xing L, Levitin G. Reliability analysis of multi-trigger
i =1 i
(17)
binary systems subject to competing failures. Reliability Engineering
Assume XMi(t) indicates some kind of damage that Mi and System Safety 2013; 111: 9-17.
brings to the system and varies with time t, and XMith is the [4] Wei Huang,Ronald G. Reliability analysis of electronic devices
threshold of damage caused by Mi. When damage XMi with multiple competing failure modes involving performance aging
increases to the threshold XMith, mechanism Mi will result in degradation. Quality and Reliability Engineering International2003;
system failure. So system lifetime ς is, 19( 3):241-254
[5] Chen Y, Yang L, Ye C, et al. Failure mechanism dependence and
ς = min {arg t { X Mi ( t ) = X Mith }} reliability evaluation of non-repairable system[J]. Reliability
(18)
Engineering & System Safety, 2015, 138: 273-283.
Based on the lifetime data of the failure mechanisms
calculated in the above steps, we can obtain the output [6] Manson S S, Halford G R. Practical implementation of the double
linear damage rule and damage curve approach for treating
paramenters and the distribution of failure life according to cumulative fatigue damage[J]. International Journal of Fracture,
Monte Carlo simulation method and the failure mechanism 1981, 17(2): 169-192.
tree as shown in Fig. 6 . And then results of lifetime [7] Pang J H L, Wong F L, Heng K T, et al. Combined vibration and
prediction of sun sensor are obtained in the table 4. thermal cycling fatigue analysis for SAC305 lead free solder
assemblies[C]//Electronic Components and Technology Conference
(ECTC), 2013 IEEE 63rd. IEEE, 2013: 1300-1307.
V. CONCLUTIONS [8] Yuan Wei.A study on some problems in the life prediction of metal
The paper describes the life prediction method of the fatigue under multi axial random load condition [D]. Nanjing
general optoelectronic products, and establishs a life University of Aeronautics & Astronautics, 2012.
prediction method which based on nonlinear damage [9] Fei Chai and Michael Pecht.Strain-Range-Based Solder Life
Predictions Under Temperature Cycling With Varying Amplitude and
accumulation and integrate multiple failure mechanisms. Mean. IEEE Transactions On Device And Maerials Reliability, 2014,
Meanwhile the paper proposes a non-linear formula when 14(1): 351-357.
the stresses is different but operating in order, and then [10] Andy Perkins and Suresh K. Sitaraman. a study into the sequencing
proposes a new method using failure mechanism tree to of thermal cycling and vibration tests. Electronic Components and
calculate life of the system which has more failure Technology Conference. 2008,584-592.
mechanisms.


Optimal Replacement Policy based on cumulative damage for a Two-unit System

Shey-Huei Sheu
Department of Statistics and Informatics Science, Providence University, Taichung 433, Taiwan
shsheu@pu.edu.tw

Zhe-George Zhang
Department of Decision Sciences, Western Washington University, Bellingham, WA 98225-9077, USA
George.Zhang@wwu.edu

Tzu-Hsin Liu
Department of Statistics and Informatics Science, Providence University, Taichung 433, Taiwan
ounce_liou@yahoo.com.tw

Hsin-Nan Tsai
Department of Statistics and Informatics Science, Providence University, Taichung 433, Taiwan
D9601401@mail.ntust.edu.tw

Abstract—This article studies a two-unit system with failure interactions. The system is subject to two types of shocks (I and II). A type I shock, removed by a minimal repair, causes a minor failure of unit A, and a type II shock causes a complete system failure that calls for a corrective replacement. Each unit A minor failure also results in a random amount of damage to unit B, and such damage to unit B can accumulate to trigger a preventive or corrective replacement action. Besides, unit B with cumulative damage of level z may suffer a minor failure with probability π(z) at each unit A minor failure, fixed by a minimal repair. This paper proposes a more general replacement policy which prescribes that a system is preventively replaced at the Nth type I shock, or at the time when the total damage to unit B exceeds a pre-specified level Z (but less than the failure level K, where K > Z), or is replaced correctively either at the first type II shock or when the total damage to unit B exceeds the failure level K, whichever occurs first.

Keywords—Optimal maintenance; minimal repair; cumulative damage model; shock model; replacement policy.

I. INTRODUCTION

Some systems are subject to random external shocks. Each shock weakens the system and makes it less efficient to run. Therefore, it is desirable to find the optimal replacement policy for such a system. In the classical age replacement policy (Barlow and Hunter [1]), a system is replaced at a failure or at age T, whichever occurs first. Makabe and Morimura [2] proposed a new replacement model where a system is replaced at the Nth failure. Several extensions of these policies have been made by Berg et al. [3], Park [4], Montoro-Cazorla and Pérez-Ocón [5], Zhao and Nakagawa [6], and Sheu and Zhang [7].

Cox [8] proposed cumulative damage models to analyze the system degradation process due to a sequence of random shocks causing damage to a system. The system fails when the cumulative damage exceeds a threshold level. Esary et al. [9] discussed the damage models from the reliability viewpoint. Some applications of cumulative damage models have been made by Nakagawa and Kijima [10], Ito and Nakagawa [11], and Chien et al. [12].

In a multi-unit system, a failure in one unit may affect the other units. For example, a cooling system failure in a computer can cause some damage to the CPU or other units. Murthy and Nguyen [13] called such a phenomenon the "failure interaction" and considered two types of failure interactions: induced failure and shock damage. Nakagawa and Murthy [14] studied a two-unit (units A and B) system with shock damage interaction. Satow and Osaki [15] proposed a two-parameter (T, k) replacement model for a two-unit system under shock damage interactions.

II. GENERAL MODEL

We study a more general two-threshold replacement policy for a two-unit system subject to shocks, in which minimal repair or replacement takes place according to the following scheme.

1. A system consisting of two units, denoted by A and B, is subject to shocks that arrive according to a non-homogeneous Poisson process (NHPP) {N(t), t ≥ 0} with intensity function (failure rate function) r(t) and mean value function Λ(t) = \int_0^t r(u)\,du, where t is the age of the system. Assume that r(t) is a continuous and positive increasing function for t ≥ 0.

2. The shocks are classified into two types. A type I shock, removed by minimal repair (the failure rate function is unchanged by any minimal repair at failures), causes a minor




failure of unit A, whereas a type II shock causes a major system failure and results in a corrective replacement of the system.

3. The probability of a type II shock depends on the number of shocks since the last replacement. Let M be the number of type I shocks until the first type II shock from the last replacement, and let P_k denote the probability that the first k shocks of the system are type I shocks. Assume that the domain of P_k is {0, 1, 2, …} and 1 = P_0 ≥ P_1 ≥ P_2 ≥ …. Denote {P_k} as an abbreviation for the sequence of probabilities, and let p_k = P(M = k) = P_{k-1} - P_k = P_{k-1}(1 - P_k / P_{k-1}), with domain {1, 2, 3, …}.

4. Each unit A minor failure results in a random amount of damage to unit B. These damages to unit B can accumulate to trigger a preventive or corrective replacement action. The damage amount W_j to unit B caused by the jth unit A minor failure has a distribution function G_j(y) = P(W_j ≤ y) (j = 1, 2, ...). Thus, the total damage to unit B up to the jth unit A minor failure, Z_j = \sum_{i=1}^{j} W_i (j = 1, 2, ...), has the distribution function

G^{(j)}(z) = P(Z_j \le z) = \begin{cases} 1, & j = 0 \\ G_1 * G_2 * \dots * G_j(z), & j = 1, 2, 3, \dots \end{cases}   (1)

where "*" represents the Stieltjes convolution. When the cumulative damage to unit B exceeds a critical level K, a corrective replacement is immediately conducted.

5. Unit B is also subject to a minor failure with probability π(z) at each unit A minor failure instant if the total damage to unit B is z. Such a failure is also corrected by a minimal repair (the failure rate function is unchanged by any minimal repair at failures).

6. The system is preventively replaced at the Nth type I shock, or at the time when the total damage to unit B exceeds a pre-specified level Z (but less than the failure level K, where K > Z), or is replaced correctively either at the first type II shock or when the total damage to unit B exceeds the failure level K, whichever occurs first.

We define the following costs.

(a) The cost of replacement at the Nth type I shock is R_N. The cost of replacement at the time when the total damage to unit B exceeds the pre-specified level Z is R_Z. The costs of replacement at the first type II shock and at the time when the total damage to unit B exceeds the failure level K are R_II and R_K, respectively.

(b) Let η(z) be the cost of minimal repair for a unit B minor failure when the total damage to unit B is z.

(c) Let β_j(t) be the cost per unit time of maintenance of the system at time t ∈ [S_j, S_{j+1}), where S_j is the arrival instant of the jth shock for j = 0, 1, 2, ….

For the presented model, let U_i denote the length of the ith successive replacement cycle (renewal interval) for i = 1, 2, …, and let V_i denote the operational cost over the renewal interval U_i. Then {(U_i, V_i)} constitutes a renewal reward process. If D(t) denotes the expected cost of operating the unit over the time interval [0, t], then it is well known that

\lim_{t \to \infty} \frac{D(t)}{t} = \frac{E[V_1]}{E[U_1]} = B(N, Z)   (2)

Hence, the expected cost per unit time is given by

B(N, Z) = \Bigg\{ R_N G^{(N)}(Z) P_N \int_0^{\infty} e^{-\Lambda(t)} \frac{\Lambda(t)^{N-1}}{(N-1)!} r(t)\,dt
+ R_Z \sum_{j=0}^{N-1} P_{j+1} \left[ \int_0^{Z} \left[ G_{j+1}(K-y) - G_{j+1}(Z-y) \right] dG^{(j)}(y) \right] \int_0^{\infty} e^{-\Lambda(t)} \frac{\Lambda(t)^{j}}{j!} r(t)\,dt
+ R_{II} \sum_{j=0}^{N-1} G^{(j)}(Z) \left( P_j - P_{j+1} \right) \int_0^{\infty} e^{-\Lambda(t)} \frac{\Lambda(t)^{j}}{j!} r(t)\,dt
+ R_K \sum_{j=0}^{N-1} P_{j+1} \left[ \int_0^{Z} \overline{G}_{j+1}(K-y)\, dG^{(j)}(y) \right] \int_0^{\infty} e^{-\Lambda(t)} \frac{\Lambda(t)^{j}}{j!} r(t)\,dt
+ \sum_{j=1}^{N-1} G^{(j)}(Z) P_j \int_0^{\infty} \alpha_j(t)\, e^{-\Lambda(t)} \frac{\Lambda(t)^{j-1}}{(j-1)!} r(t)\,dt
+ \sum_{j=1}^{N-1} P_j \left[ \int_0^{Z} \pi(y)\, \eta(y)\, dG^{(j)}(y) \right] \int_0^{\infty} e^{-\Lambda(t)} \frac{\Lambda(t)^{j-1}}{(j-1)!} r(t)\,dt
+ \int_0^{\infty} \sum_{j=0}^{N-1} \beta_j(t)\, G^{(j)}(Z)\, P_j\, e^{-\Lambda(t)} \frac{\Lambda(t)^{j}}{j!}\,dt \Bigg\} \Bigg/ \Bigg\{ \sum_{j=0}^{N-1} \int_0^{\infty} G^{(j)}(Z)\, P_j\, e^{-\Lambda(t)} \frac{\Lambda(t)^{j}}{j!}\,dt \Bigg\}   (3)
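A sanity check of the renewal-reward identity in Eq. (2) can be obtained by simulating the policy directly. The sketch below makes strong simplifying assumptions that are not in the paper (P_k = p^k, so each shock is type II with probability 1 - p; exponentially distributed damages; Λ(t) = (t/θ)^β; and the minimal-repair and running maintenance cost terms of Eq. (3) omitted), so it only illustrates the structure of the policy, and all numbers are hypothetical.

```python
import random

def simulate_cycle(rng, N, Z, K, p, w, theta, beta, R_N, R_Z, R_II, R_K):
    """One replacement cycle: returns (replacement cost V1, cycle length U1)."""
    gamma, damage, minor = 0.0, 0.0, 0
    while True:
        gamma += rng.expovariate(1.0)        # unit-rate Poisson increment
        t = theta * gamma ** (1.0 / beta)    # invert Lambda(t) = (t/theta)^beta
        if rng.random() > p:                 # type II shock: corrective replacement
            return R_II, t
        minor += 1
        damage += rng.expovariate(1.0 / w)   # damage W_j to unit B, mean w
        if damage > K:
            return R_K, t                    # corrective: failure level K crossed
        if damage > Z:
            return R_Z, t                    # preventive: level Z crossed
        if minor == N:
            return R_N, t                    # preventive: Nth type I shock

rng = random.Random(1)
cycles = [simulate_cycle(rng, N=8, Z=6.0, K=10.0, p=0.95, w=1.0,
                         theta=5.0, beta=2.0, R_N=1.0, R_Z=1.2,
                         R_II=5.0, R_K=4.0) for _ in range(100_000)]
B = sum(v for v, _ in cycles) / sum(u for _, u in cycles)
print(f"estimated B(N, Z) ~ {B:.4f}")        # E[V1] / E[U1], per Eq. (2)
```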


III. CONCLUSIONS

We consider a more general replacement model with either preventive or corrective replacement. The expected net cost rate was derived.

REFERENCES

[1] Barlow, R. E. and Hunter, L. C., "Optimum preventive maintenance policies," Operations Research, 1960, 8, 90-100.
[2] Makabe, H. and Morimura, H., "Some considerations on preventive maintenance policies with numerical analysis," Journal of Operations Research Society of Japan, 1965, 7, 154-171.
[3] Berg, M., Bievenu, M. and Cléroux, R., "Age replacement policy with age-dependent minimal repair," INFOR, 1986, 24, 26-32.
[4] Park, K. S., "Optimal number of minor failures before replacement," International Journal of Systems Science, 1987, 18, 333-337.
[5] Montoro-Cazorla, D. and Pérez-Ocón, R., "Two shock and wear systems under repair standing a finite number of shocks," European Journal of Operational Research, 2011, 214, 298-307.
[6] Zhao, X. and Nakagawa, T., "Optimization problems of replacement first or last in reliability theory," European Journal of Operational Research, 2012, 223, 141-149.
[7] Sheu, S. H. and Zhang, Z. G., "An optimal age replacement policy for multi-state systems," IEEE Transactions on Reliability, 2013, 62, 722-735.
[8] Cox, D. R., Renewal Theory, London: Methuen, 1962.
[9] Esary, J. D., Marshall, A. W. and Proschan, F., "Shock models and wear processes," The Annals of Probability, 1973, 1, 627-649.
[10] Nakagawa, T. and Kijima, M., "Replacement policies for a cumulative damage model with minimal repair at failure," IEEE Transactions on Reliability, 1989, 38, 581-584.
[11] Ito, K. and Nakagawa, T., "Comparison of three cumulative damage models," Quality Technology and Quantitative Management, 2011, 8, 57-66.
[12] Chien, Y. H., Sheu, S. H. and Zhang, Z. G., "Optimal maintenance policy for a system subject to damage in a discrete time process," Reliability Engineering & System Safety, 2012, 103, 1-10.
[13] Murthy, D. N. P. and Nguyen, D. G., "Study of two-component system with failure interaction," Naval Research Logistics, 1985, 10, 239-247.
[14] Nakagawa, T. and Murthy, D. N. P., "Optimal replacement policies for a two-unit system with failure interactions," RAIRO Operations Research, 1993, 27, 427-438.
[15] Satow, T. and Osaki, S., "Optimal replacement policies for a two-unit system with shock damage interaction," Computers and Mathematics with Applications, 2003, 46, 1129-1138.


Method Construction for Risk Factors Identification of Public Hospital Operation

Shirui Peng
Institute of Safety Science and Technology, Dept. of Engineering Physics, Tsinghua University, Beijing, China
psr14@mails.tsinghua.edu.cn

Guofeng Su
Institute of Safety Science and Technology, Dept. of Engineering Physics, Tsinghua University, Beijing, China
sugf@tsinghua.edu.cn

Jianguo Chen
Institute of Safety Science and Technology, Dept. of Engineering Physics, Tsinghua University, Beijing, China
chenjianguo@mail.tsinghua.edu.cn

Zhiru Wang
Institute of Safety Science and Technology, Dept. of Engineering Physics, Tsinghua University, Beijing, China
wzrzpp@tsinghua.edu.cn

Abstract—Besides traditional clinical risk, hospitals face an increasing number of hazards threatening their daily operation, including natural disasters, public health crises, and societal security incidents. To improve hazard prevention and emergency response, hospitals need to take measures in risk assessment and risk management, the first step of which is risk factor identification. Here an event-oriented method for risk factor identification is proposed, whereby the medium and root risk-causing factors of potential incidents are identified as completely as possible. The study also provides a basis for further risk assessment and risk management, since we tease out the relationship between risk events and their risk factors.

Keywords—risk factors identification; assessment index; hospital operation safety

I. INTRODUCTION

In the past century and even nowadays, public hospitals play an irreplaceable role in developing countries [1]. In routine work, public hospitals are confronted with large amounts of risk, since health care involves many complexities and uncertainties and hospitals undertake most of the associated tasks.

To promote risk management, hospitals need to adopt systematic methods. With the purpose of promoting reliability and preventing conceivable adverse consequences, risk assessment was first developed in complex industry sectors, such as nuclear power plants, aerospace and mineral mining, by realizing risk assessment and risk control [2]. Since then, hospitals have adopted risk assessment to regulate their clinical work, whereas the daily operation work is usually ignored.

In recent years, hospitals in mainland China have been facing an increasing number of hazards threatening their daily operation, such as natural disasters, breakdowns of devices or systems, fires, violent crimes against medical care personnel, and so on. In order to provide high-quality, continuous medical service and ensure the safety of patients and medical staff, hospitals as well as the concerned government departments now pay more attention to the risk management of hospitals' daily operation. In this study, our group investigated 22 tertiary public hospitals in Beijing, whose routine fundamental operation work is constrained by many guidelines and criteria. Their work experience in hospital daily operation is of great significance for the method construction.

Risk factor identification is the first step of risk assessment and management [3].

Beijing Municipal Administration of Hospitals




identification is to find out the medium and root causes of event-oriented analytical method, building the bridge between
hazards or accidents, through the control of which we can the emergencies and the risk factors.
prevent the happening of adverse incidents or decrease the
II. EMERGENCIES SELECTION AND CLASSIFICATION
negative effect of them.
There are different method to classify the emergencies
probably occurring in hospitals. In general, hazards can be
divided into natural disasters and human-made accidents[5]. In
the National System of Emergency response Platform of
Communication and consultation

China, there are 4 types of emergencies/hazards in the first

Monitoring and review


level, natural disasters, accidents, public heath events, societal
security events[11, 12]. Combined with the practical work of
management experience in these hospitals, fire disasters and
transportation accidents, constitute parts of accidents, are
added into the first level in the study, which take a great
proportion of their work. According to the properties,
influence, and position etc., the hazards or incidents, on the
range of the 6 first level emergencies, are classified into 92
basic emergencies.
Fig.1 Iterative process of risk management from ISO [4]
III. RISK ANALYSIS AND DECOMPOSITION

Researchers have done some work in this field. To deal with natural and human-made disasters such as earthquakes, windstorms, rainstorms, and fires, the World Health Organization (WHO) provides a standard scoring method for the vulnerability of buildings, medical devices, and critical support systems of hospitals in risk-prone areas, with the goal that hospitals can be preserved in disasters and continue to provide effective health services [5]. Focusing on the safety of people, the employee safety investigation conducted by the Occupational Safety and Health Administration (OSHA), an agency of the US Department of Labor, surveyed the occupational injuries and diseases occurring in hospitals, divided in detail into categories such as sharp-instrument injuries, device failures, slipping, and patient attacks [6].

In mainland China, related research has been conducted in specific domains of the hospital, for example, errors of drug management, breakdown of devices, and hospital fires [7-10]. Nonetheless, there is no research on the overall risk assessment of hospital fundamental (nonclinical) operation. The methods referred to above focus on the process of daily operation. As a further study, we propose an overall system of risk factors for hospital daily operation.

A. The first level risk factors

Fig. 2 Initial triangle model of public safety [11, 12]

The triangle model is widely used to deal with emergencies in the public safety research area (see Fig. 2). The initial triangle model is composed of three sides, which respectively stand for emergencies, emergency response, and hazard-affecting objects. Inside the triangle is the hazard-breeding environment, including substance, energy, and information. The three sides and the relationships among them present the complete dynamic process of emergencies in public safety. The occurrence of an emergency is accompanied by an out-of-control state of substance, energy, and information, which will harm the hazard-affecting elements if effective emergency response measures are absent. Hazard-affecting objects are indispensable: for example, a levee failure is not an emergency if there are no residents or facilities adjacent to the sides of the river, since there are no hazard-affecting objects. Meanwhile, the ability of emergency response also has to be considered, for people can decrease the effect of emergencies or prevent them if a detailed plan is made and proper measures are taken, and suffer the adverse outcome if not.

In order to describe each aspect of the risk elements beforehand, a revised triangle model is proposed in this study (see Fig. 3). Its three edges respectively represent the dangerousness of hazard-causing elements, the vulnerability of hazard-affecting elements, and the ability of emergency response. The first side, the dangerousness of hazard-causing elements, like the earthquake magnitude or the amount of rainfall, expresses the destructiveness and the probability of occurrence of emergencies, while the second side represents the probable casualties, losses of assets and estates, and the breakdown of support systems caused by emergencies. The ability of emergency response consists of the emergency preparedness of goods and materials, the response strategies, the training of personnel, and the coordinated action with concerned organizations or departments.

Fig. 3 Revised triangle model of emergencies

In this model, each side represents one aspect of the risk elements of the basic emergencies. The destructiveness of emergencies/hazards depends on the states of the three aspects. Obviously an Ms8.0 earthquake is much more destructive than an Ms5.0 one. And if an Ms8.0 earthquake happens in the desert of Xinjiang (a first-level administrative subdivision of China) or in a metropolitan city like Tokyo, the destructive effects are greatly different, for the vulnerability of the hazard-affecting elements differs significantly. In addition, with considerate and efficient emergency response management, hospitals are able to relieve the adverse effects of accidents such as electricity or water supply outages. These 3 items constitute the first-level risk factors (for further risk assessment, they are also the first-level indices).

B. The second level risk factors

In the next step, the second-level risk factors, the subdivisions of the first-level ones, were established with consideration of the experience of medical staff, the work of organizations (including WHO, OSHA, etc.), and the basic ideas of public safety.

In terms of the dangerousness of hazard-causing elements, the dangerousness of natural hazard-causing elements (like weather or geological conditions), unsafe human behaviours, unsafe states of substance or equipment, and the disadvantage of the environment or situation were considered. On the part of the vulnerability of hazard-affecting elements in hospitals, there are 5 components that need to be taken into account: the vulnerability of architectures, support systems, personnel, properties and facilities, and the hospital's reputation. Finally, the ability of emergency response is composed of the sufficiency of substance allocation, the completeness of the pre-arranged planning, the quality of emergency response manoeuvers, and the effectiveness and efficiency of the system of direction and organization.

C. The third level risk factors

The third-level factors are the list of items describing the second-level factors and, in most cases, directly triggering hazards or accidents. Causal connection and the operability of assessment are the two standards for selecting these items. First, intuitive and more detailed risk factors were identified completely within the limits of every second-level factor. In the next step, the means to assess these risk factors needs to be considered. For instance, the frequency of erroneous operation in the laboratory or on special equipment is a significant risk factor of emergencies like fire, equipment failures, and so on.


Although it is important, it cannot be included in the third-level factors (belonging to the unsafe behaviours of human), as data on the occurrence frequency are difficult to obtain (in other words, we cannot assess it). As a replacement, we need to consider the completeness of the related management steps that are able to control the frequency of erroneous operation, such as the qualification of staff, and training and education (including pre-post training, job training on a regular basis, etc.).

TABLE II is an example of the list of third-level risk factors under the cover of "the unsafe behaviors of human"; there are 50 third-level factors concluded in the research. Meanwhile, it is necessary to point out that the list of third-level risk factors is the complete one for all emergencies possibly happening in the investigated city public hospitals; it can be altered with regard to different areas, and not all third-level risk factors are related to each emergency.

TABLE I. LIST OF SECOND LEVEL RISK FACTORS (first-level factor followed by its second-level factors)

the dangerousness of hazard-causing elements:
  - the dangerousness of natural hazard-causing elements
  - the unsafe behaviours of human
  - the unsafe states of substance or equipment
  - the disadvantage of environment or situation

the vulnerability of hazard-affecting elements:
  - the vulnerability of architecture
  - the vulnerability of support system
  - the vulnerability of personnel
  - the vulnerability of property and facilities
  - the reputation of hospital

the ability of emergency response:
  - substance allocation sufficiency
  - the completeness of the pre-arranged planning
  - the emergency response manoeuver quality
  - the effectiveness and efficiency of the system of direction and organization

TABLE II. THE THIRD LEVEL RISK FACTORS UNDER THE COVER OF "THE UNSAFE BEHAVIORS OF HUMAN"

First Level Factor: the dangerousness of hazard-causing elements
Second Level Factor: the unsafe behaviors of human
Third Level Factors:
  - Personnel allocation insufficiency
  - Lack of professional credentials among staff
  - Health problems of staff
  - Deficiency of management rules
  - Poor training and education
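The three-level decomposition above lends itself to a nested data representation from which, for a given basic emergency, the subset of factors to assess can be pulled out. The sketch below is only an illustration: the factor names come from Tables I and II, while the mapping of the example emergency to its factors is hypothetical.

    # Sketch of the three-level risk factor hierarchy. Factor names are
    # from Tables I and II; the emergency-to-factor mapping is hypothetical.
    RISK_FACTORS = {
        "the dangerousness of hazard-causing elements": {
            "the unsafe behaviors of human": [
                "Personnel allocation insufficiency",
                "Lack of professional credentials among staff",
                "Health problems of staff",
                "Deficiency of management rules",
                "Poor training and education",
            ],
            # ... the other second-level factors of Table I
        },
        # ... "the vulnerability of hazard-affecting elements", etc.
    }

    # Hypothetical mapping from one basic emergency to its relevant factors.
    RELEVANT_FACTORS = {
        "hospital fire": [
            "Deficiency of management rules",
            "Poor training and education",
        ],
    }

    def factors_to_assess(emergency):
        """Return the third-level risk factors linked to a basic emergency."""
        return RELEVANT_FACTORS.get(emergency, [])

    print(factors_to_assess("hospital fire"))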

D. Further work

The final work is to point out the third-level risk factors that need to be assessed for each emergency, and which respects of these third-level risk factors need to be considered. In terms of high wind, for example, the Beaufort wind force scale and whether it is accompanied by other severe weather, like rainstorm or thunder and lightning, are taken into account. These are the fourth-level risk indices. Although the 3 levels of risk factors are logically complete, the fourth-level risk index for further risk assessment, critical for practical assessment, is indispensable. The 3 levels of risk factors are, at the same time, the 3 levels of risk indices; the 4th-level risk index, designed for assessment operation, is the hint to staff of what should be recorded and evaluated exactly. On account of different emergencies, there are great differences in the fourth-level index.
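To illustrate how a fourth-level index attaches concrete observables to the factors of one emergency, the record below encodes the high-wind example from the text; the field names are hypothetical.

    # Hypothetical fourth-level risk index record for the high-wind example:
    # it tells assessment staff exactly what to record and evaluate.
    high_wind_index = {
        "emergency": "high wind",
        "fourth_level_index": [
            "Beaufort wind force scale reading",
            "accompanying severe weather (rainstorm, thunder and lightning)",
        ],
    }

    for item in high_wind_index["fourth_level_index"]:
        print("record:", item)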

IV. DISCUSSION

From the three-level risk factors, evaluators are able to map from the emergencies to the risk factors. Compared with the process-oriented method of risk identification, which depends heavily on fragmented experience, it is easier to obtain the risk factors completely if the selection of adverse events is appropriate, because hospital daily operation covers so many processes that it is hard to get a comprehensive list of them, and identifying the parts related to possible emergencies is a more complex and laboursome work, for there is a gap between the technologists who are familiar with the operation processes and the management staff who are responsible for the safety of the whole hospital.

To construct an accurate and effective system of risk factors, especially in the steps of selecting the possible emergencies and identifying the third-level risk factors, two points need to be brought to the forefront. The first is historical data collection, which will prompt people as to what kinds of hazards or accidents have happened and which procedures are apt to go wrong. The other is communication with staff from related departments. Their experience of daily operation compensates for the lack of data on daily erroneous operation, for there are no related data unless adverse events have happened.

The work is the basis for determining the weights of each risk factor for further risk assessment and risk management, since the relationship between the adverse events and the risk factors is clear.

V. ACKNOWLEDGEMENTS

REFERENCES
[1] P. De Vos, P. Ordúñez-García, M. Santos-Peña, and P. Van der Stuyft, "Public hospital management in times of crisis: Lessons learned from Cienfuegos, Cuba (1996–2008)," Health Policy, vol. 96, pp. 64-71, June 2010.
[2] A. J. Card, J. R. Ward and P. J. Clarkson, "Trust-Level Risk Evaluation and Risk Control Guidance in the NHS East of England," Risk Analysis, vol. 34, pp. 1469-1481, 2014.
[3] F. H. van Duijne, D. van Aken and E. G. Schouten, "Considerations in developing complete and quantified methods for risk assessment," Safety Science, vol. 46, pp. 245-254, 2008.
[4] Z. Jiliang, Research on Risk Assessment of Government Emergency. Beijing: Chinese Academy of Government, 2013.
[5] WHO, "Hospital Safety Index Guide for Evaluators," 2015.
[6] OSHA, "Worker Safety in Hospitals," 2012.
[7] F. Yang and F. Yang, "On the Risk Assessment of Fire Hazards in Hospitals," The Science Education, pp. 190-192, June 2015.
[8] S. Li, S. Lai, X. Zhen, J. Cai, G. Xie, and L. Zheng, "Application Research on Disaster Vulnerability Analysis on Emergency Management of Hospital Pharmacy," Pharmacy Today, pp. 378-381, 384, May 2015.
[9] S. Duan, Y. Li, and T. Li, "Risk Management and Quality Control of Medical Equipment," Chinese Medical Equipment Journal, pp. 139-141, Feb. 2014.
[10] J. Lei, P. Guan, K. Gao, X. Lu, Y. Chen, Y. Li, Q. Meng, J. Zhang, D. F. Sittig, and K. Zheng, "Characteristics of health IT outage and suggested risk management strategies: An analysis of historical incident reports in China," International Journal of Medical Informatics, vol. 83, pp. 122-130, 2014.
[11] W. Fan, Y. Liu, W. Weng, and S. Shen, Introduction to Public Safety Science. Beijing: China Science Publishing & Media Ltd., 2013.
[12] H. Yuan, Q. Huang, G. Su, and W. Fan, Theory and Practice of Key Technologies of Emergency Platform System. Beijing: Tsinghua University Press, 2012.


A Study on Device Security in IoT Convergence

Hyun-Jin Kim
Department of Computer Engineering, Ajou University, Suwon, Republic of Korea
ny24007@gmail.com

Hyun-Soo Chang
Department of Computer Engineering, Ajou University, Suwon, Republic of Korea
ics_dant@naver.com

Jeong-Jun Suh
Convergence Security Industry Team, Korea Internet & Security Agency (KISA)
jjun2@kisa.or.kr

Tae-shik Shon
Department of Cyber Security, Ajou University, Suwon, Republic of Korea
tsshon@ajou.ac.kr

Abstract—IoT (Internet of Things) is a global infrastructure that enables any object to communicate. The term IoT is being widely used and its technologies are being applied to various areas. IoT technologies are expected to increase many services' convenience, expandability, accessibility and interoperability, but the existing ICT environment could be exposed to new security threats due to the increased openness and the specialty of IoT devices. Indeed, vulnerabilities and exploits of IoT devices have been reported. If attacks on IoT devices succeed, they can cause much damage over various areas. Therefore, we categorize IoT devices and examine the threats for each category. Finally we deduce security requirements for IoT devices.

Keywords—IoT; Internet of Things; IoT domain; IoT device; security threats

I. INTRODUCTION

Recently IoT technologies have been applied in various areas, so industries show heightened interest. IoT includes services and technologies based on connectivity with objects on the Internet; by connecting independently distributed objects it increases efficiency and makes new services possible, like a Wireless Sensor Network (WSN) that provides situation-based services without human interaction by connecting physical actuators or sensors with applied ICT technologies.

But while existing devices conducted their processes in a closed environment, by adopting IoT technologies the devices gain openness, and because of the increased external points of contact, the security incident rate becomes much higher. And there are some IoT-specific characteristics, like low power consumption and low computing power, by which existing ICT technologies can be restricted. Therefore security requirements for IoT devices and technologies are necessary.

The rest of this paper is organized as follows: we summarize international standards organizations related to IoT and their standard documents in Section II, categorize IoT devices using the standards' IoT domain categories in Section III, and examine each IoT category's reported vulnerabilities and exploits in Section IV. We suggest security requirements for IoT devices in Section V, and finally, conclude the paper in Section VI.

This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (R0166-15-1034, A Development of Standard for Infrastructure Security Technologies in ICT Convergence Service, and No. B0713-15-0007, Development of International Standards Smart Medical Security Platform focused on the Field Considering Life Cycle of Medical Information).

II. IOT INTERNATIONAL STANDARDS ORGANIZATIONS

In this section, we summarize the IoT international standards organizations ITU-T, ISO/IEC, and oneM2M and the IoT documentation of each organization.

A. ITU-T

ITU-T (International Telecommunication Union Telecommunication Standardization Sector) is carrying out international standardization activities related to IoT. In 2005, ITU-T emphasized the importance of the IoT in the 'ITU Internet Report 2005 – The Internet of Things' and organized JCA-IoT and IoT-GSI to carry forward IoT standardization activities. The major documents follow: [1]-[4]

• Y.2060 - Overview of the Internet of things: The Y.2060 standard is about IoT's overall outline and includes IoT's terms, definitions, concept and scope, characteristics, high-level requirements, and reference models. This standard is referenced in many other IoT standards.

• Y.2066 - Common requirements of the Internet of things: The Y.2066 standard provides IoT common requirements based on IoT's generic use cases and role categorization.

• Y.2068 - Functional framework and capabilities of the Internet of things: The Y.2068 standard describes the IoT functional framework based on the Y.2060 and Y.2066 standards and the basic capabilities that satisfy the Y.2066 standard's common requirements.

• Y.2069 - Terms and definitions for the Internet of things: The Y.2069 standard specifies terms and definitions related to IoT from an ITU-T point of view.

B. ISO/IEC JTC 1

As the interest in IoT gradually increases and its importance is emphasized in the industry, ISO/IEC JTC 1 established a 'Special Working Group on Internet of Things', SWG 5, for IoT standardization, and this working group is carrying out various standardization works.

• Internet of Things (IoT) Preliminary Report 2014: This report includes ongoing ISO/IEC JTC 1 works, an IoT mind


map, market requirements, a collection of standards and so on. [5]

C. oneM2M

oneM2M is a global-scale partnership project organized by 7 major standards organizations around the world (ETSI (Europe), TIA, ATIS (North America), ARIB, TTC (Japan), CCSA (China), TTA (Korea)) to develop a global IoT service platform standard technology. The following are the major standards of oneM2M: [6]-[15]

• TS0001 - Functional Architecture: defines the functional structure of oneM2M end-to-end.
• TS0002 - Requirements: defines oneM2M's informative role model and the common requirements related to technology.
• TS0003 - Security Solutions: defines security solutions for M2M systems.
• TS0004 - Service Layer Core Protocol Specification: defines the service layer's core protocol.
• TS0005 - Management Enablement (OMA): describes how to use the resources of OMA DM and OMA LWM2M and describes the corresponding message flow.
• TS0006 - Management Enablement (BBF): describes BBF and the BBF TR-069 protocol and its message flow.
• TS0009 - HTTP Protocol Binding: describes the oneM2M system protocol and how to bind it to RESTful HTTP.
• TS0010 - MQTT Protocol Binding: describes how to bind the oneM2M protocol onto the MQTT transport layer.
• TS0011 - Common Terminology: describes the technical terms and abbreviations in the oneM2M specifications.

III. IOT DEVICES CATEGORIZATION

In this section, we examine the international standards organizations' (ITU-T, ISO/IEC JTC 1, oneM2M) IoT service domains, deduce our IoT service domains, and examine the IoT devices of each service domain.

To deduce IoT devices' security requirements, a categorization of the IoT devices implemented in various areas and an examination of each category's IoT devices are necessary. Some agencies and organizations categorize IoT devices by function, network role, device form and service, but this paper categorizes IoT devices based on the IoT service domain.

A. International Standards Organizations' IoT Domain Categorization

For IoT device categorization, we examine the major international standards organizations' IoT service domains. The results are classified in Table I; some service domains are categorized by multiple organizations, such as energy, smart home and health care. Therefore this paper selects energy, smart home and health care as the major service domains.

TABLE I. IOT SERVICE CATEGORIZATION OF MAJOR ORGANIZATIONS

ITU-T: Health care; Security, Emergency administration; Transportation; Agriculture; Smart grid; Smart home; Digital signpost
ISO/IEC: Health care; Information and communication; Manufacture and industry; Finance and banking; Food and agriculture; Transportation; Home; Water management; Education; Energy; Entertainment and sports; Public safety and defense; Retail and accommodation; Government
oneM2M: Smart home; Smart vehicle; Smart grid; E-health; Intelligent transportation system; Intelligent agriculture

B. IoT Devices of each Service Domain

Physical devices that can communicate and have additional capabilities like sensing, data collection, data reposition and data processing are called IoT devices, and there are various IoT devices in each domain. Therefore a selection of representative devices for each domain is needed.

TABLE II. SERVICE DOMAIN AND DEVICES

Energy Service Domain
  MDMS: Stores, manages and processes collected data; provides interfaces for sharing the processed data with various applications.
  CIS: Calculates account data from the smart meter's measured values; provides remote connect/disconnect according to the customer's service admission/withdrawal.
  DCU: Collects electricity meter data; connects over a TCP/IP network.


  Smart Meter: Collects time-of-use data, consumption data and net-metering data; alerts on blackout; remotely blocks and restores supply; restricts undependable payers or imposes load limits for demand response; monitors electricity quality and electricity theft; communicates with smart home devices.

Smart Home Service Domain
  Smart Appliance: Equipped with a smart chip that enables internal processing and communicating capabilities; controls its own electricity consumption.
  Customer EMS: Provides smart home linkage; a major component of AMI.
  IHD (In Home Display): Provides electricity consumption and statistical information, and high-value service information, in real time at home with the smart meter.

E-health Service Domain
  Medical Sensor: Located with a patient outside of the hospital; collects the patient's physical status; transmits data one-way.
  Medical Actuator: Located with a patient outside of the hospital; takes effect based on a hospital's prescription received from a mobile device.
  Data collector and control device: Handles the whole set of a patient's devices; functions as a gateway for the hospital; transmits the patient's data to the hospital and receives the hospital's feedback; enables the patient to check physical conditions using infographics.
  E-health server: Located in the hospital; receives patients' data from the collection and control devices through the Internet; processes the data and stores it to a database.

IV. THREAT CASES FOR IOT DOMAINS

In this section, we summarize real threat cases, simulation hacking, and vulnerability reports for each IoT device domain.

Various service domains provide diverse and convenient services by using IoT devices, but because of the increased openness and the specialty of IoT devices, threats to IoT devices have also increased. Actually, HP, Symantec and others have reported vulnerabilities of various IoT devices in use, and major conferences like Black Hat and Defcon have presented IoT exploits and real threat cases.

TABLE III. THREAT CASES FOR IOT DOMAINS

Energy Service Domain
  - In December 2013, the German IT security enterprise 'Recurity Labs' carried out simulation hacking of the electricity control system of Ettlingen, a small town in Germany, and in only 3 days obtained control of the whole system, emphasizing the magnitude of control system security.
  - In August 2014, about 50 oil/energy companies, including Statoil, the biggest oil/energy company in Norway, were hacked by an unknown attacker. The attack's purpose was system destruction, but a detection engine detected the fatal malicious program before execution and appropriate measures were taken.
  - According to the ICS-CERT March 2015 report, cyberattacks on control systems are increasing, and the largest part of the attacks is in the energy area. [16]

Smart Home Service Domain
  - At the 2014 Black Hat conference, using a vulnerability of KNX, one of the home automation protocols, researchers took over an iPad and controlled a room's temperature, the TV on/off, and the blinds.
  - In 2015, the security enterprise 'Rapid7' reported that more than 10 baby monitoring CCTVs, including Gynonii-GCW and iBaby, have vulnerabilities. The vulnerabilities concern the web application, authentication and others, and using them an attacker can take over the vulnerable baby monitoring CCTVs.
  - In 2015, the security enterprise 'Symantec' published 'Insecurity in the Internet of Things', which examined 50 devices for vulnerabilities, including smart thermostats, smart locks and smart bulbs, and found that many devices have multiple vulnerabilities. [17]

E-health Service Domain
  - At the 2012 BreakPoint Security conference, IOActive presented that some pacemakers can be hacked and remotely controlled from about 15 m away.
  - In 2013, ICS-CERT and the FDA found that more than 300 medical devices from more than 40 companies have a hard-coded password vulnerability; the medical devices include surgery devices, anesthesia devices, drug injection devices, patient monitoring devices and others. Using this vulnerability, an attacker can modify a device's configuration and internal firmware. [18]
  - At the 2015 DerbyCon security conference, a security researcher presented that using Shodan he could find about 68,000 medical devices connected to the Internet. He connected a virtual medical device, and during 6 months hundreds of thousands of login attempts and 299 malicious code attack attempts happened. [19]

V. SECURITY REQUIREMENTS FOR IOT DEVICES

IoT devices use many technologies, including communication, sensing, big data and others, so there are various security issues. And there are further issues because IoT devices use lightweight security protocols owing to their specialty, like low power consumption, low computing power and so on. This section deduces the security requirements of IoT devices.

A. Lightweight Protocol and Cryptography

Considering IoT devices' specialty, research on lightweight protocols like CoAP and MQTT and on lightweight cryptography like PRESENT, PRINCE and Keccak has been actively done, but verification of their security strength is ongoing. The protocol and cryptography should be selected by considering the device's and data's importance, capability, power consumption and the trade-off of these factors.
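As a toy illustration of this trade-off, the function below picks a protocol/cipher pair from the options named above according to a device's memory and power budget and the sensitivity of its data. The thresholds and the pairings are hypothetical, not drawn from any standard.

    # Illustrative only: hypothetical thresholds for choosing among the
    # lightweight options named in the text.
    def choose_stack(ram_kb, power_budget_mw, data_sensitivity):
        # Severely constrained node: lightweight block cipher + CoAP.
        if ram_kb < 32 or power_budget_mw < 10:
            cipher = "PRESENT" if data_sensitivity == "low" else "PRINCE"
            return {"protocol": "CoAP", "cipher": cipher}
        # Roomier device: MQTT with a conventional cipher.
        return {"protocol": "MQTT", "cipher": "AES-128"}

    print(choose_stack(ram_kb=16, power_budget_mw=5, data_sensitivity="high"))
    # -> {'protocol': 'CoAP', 'cipher': 'PRINCE'}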


B. Communication Security

IoT devices can use short-distance communication (NFC, ZigBee, Bluetooth), wireless communication (Wi-Fi, 3G, LTE), wired communication (ADSL, optical) and others, so security requirements are needed for each communication technology. And if devices use multiple communication technologies, additional security requirements are needed. Each communication technology should support integrity, availability, confidentiality and other security factors like authentication, permission and charging.

C. Data Protection

The data of IoT devices can include the user's private information (physical information, location, behavior pattern), so when data is sent to other devices or to the cloud, confidentiality should be guaranteed by using cryptography.

D. Physical Protection

IoT devices are located close by, so physical access is easy and the security risk of the devices is increased. Actually, root authority capturing attacks through external connections have been reported. Therefore security requirements against physical control, destruction and capturing should be considered.

E. IoT Devices Identification and Permission

Many IoT devices can be added to or removed from the network and each device has a different authority and domain, so requirements are needed for IoT devices' identification, authentication and permission using ID/password/MAC/certificate.

F. IoT Devices Monitoring and Controlling

Identified IoT devices can suffer malfunction, damage, removal and infection by malware, so security requirements are needed for monitoring and controlling the devices when abnormal activities or situations are discovered.

VI. CONCLUSION & FUTURE WORK

This paper identified IoT devices and examined security threats and attack techniques against IoT devices. And we deduced security requirements for IoT devices.

For future work, a detailed analysis of each IoT category and deducing the security requirements of each IoT domain are needed.

REFERENCES
[1] "Y.2060 - Overview of the Internet of things," ITU-T, 2012.
[2] "Y.2066 - Common requirements of the Internet of things," ITU-T, 2014.
[3] "Y.2068 - Functional framework and capabilities of the Internet of things," ITU-T, 2015.
[4] "Y.2069 - Terms and definitions for the Internet of things," ITU-T, 2012.
[5] "Internet of Things (IoT) Preliminary Report 2014," ISO/IEC, 2014.
[6] "TS0001 - Functional Architecture," oneM2M, 2015.
[7] "TS0002 - Requirements," oneM2M, 2015.
[8] "TS0003 - Security Solutions," oneM2M, 2015.
[9] "TS0004 - Service Layer Core Protocol Specification," oneM2M, 2015.
[10] "TS0005 - Management Enablement (OMA)," oneM2M, 2015.
[11] "TS0006 - Management Enablement (BBF)," oneM2M, 2015.
[12] "TS0008 - CoAP Protocol Binding," oneM2M, 2015.
[13] "TS0009 - HTTP Protocol Binding," oneM2M, 2015.
[14] "TS0010 - MQTT Protocol Binding," oneM2M, 2015.
[15] "TS0011 - Common Terminology," oneM2M, 2015.
[16] "NCCIC/ICS-CERT Monitor" newsletter (ICS-MM201506), ICS-CERT, 2015.
[17] "Insecurity in the Internet of Things," Symantec, 2015.
[18] "Medical Devices Hard-Coded Passwords," ICS-ALERT-13-164-01, October 2013.
[19] Scott Erven and Mark Collao, "Medical Devices: Pwnage and Honeypots," DerbyCon 2015, September 2015.


Design of Arc Fault Pressure and Temperature Detectors in Low Voltage Switchboard

Kuan Lee Choo
Faculty of Engineering and Technology Infrastructure, Infrastructure University Kuala Lumpur, Selangor, Malaysia
lckuan@iukl.edu.my

Ahmad Azri Sa'adon
Faculty of Engineering and Technology Infrastructure, Infrastructure University Kuala Lumpur, Selangor, Malaysia
azriaces@gmail.com

Abstract—This paper presents the design of arc fault pressure and temperature detectors that are able to detect the overpressure and overheating in the interior of a low voltage switchboard prior to the occurrence of an arcing fault. The simulation results show that the proposed arc fault pressure and temperature detectors will activate the relay and send a trip signal to the circuit breaker if and only if both detectors detect pressure and temperature levels higher than the reference values. This ensures that no false tripping signal is sent to the circuit breaker and therefore no unnecessary power shutdown occurs.

Keywords—LM 335 Temperature Sensor; 1140 Pressure Sensor; Arc Fault Pressure Detector; Arc Fault Temperature Detector; Low Voltage Switchboard.

I. INTRODUCTION

An arc fault is a high power discharge of electricity between two or more conductors. This discharge translates into heat, which can break down the wire's insulation and possibly trigger an electrical fire. These arc faults can range in power from a few amps up to thousands of amps and are highly variable in terms of strength and duration. Common causes of arc faults include faulty connections due to corrosion and faulty initial installation.

Arc incidents occur due to various reasons such as poorly installed equipment (human mistakes), natural aging of equipment, bad connections, and faulty connections due to corrosion. Statistics have shown that 80% of electrically related accidents and fatalities involving qualified workers are caused by arc flash or arc blast. A true arc fault will rapidly increase in energy level up to a 20 MW cycle, in pressure up to 3 atm, and in heat up to 3000 degrees Celsius [1].

II. BEHAVIOUR AND CHARACTERISTIC OF ARC FAULT

An arc fault is the discharge of electricity through the air between two conductors which creates huge quantities of heat and light. It is a high resistance fault with resistance similar to many loads, and it is a time-varying resistor which can dissipate a large amount of heat in the switchboard [2].

Circuit breakers are tested by bolting a heavy metallic short across the output terminals to determine their capabilities of handling an essentially zero resistance load. These zero resistance faults are named bolted faults. The bolted fault current is the highest possible current supplied by the source, and a protective system is designed according to the value of the bolted fault current. The protective system must be able to detect the bolted fault and the protective devices must be capable of interrupting this value of current [3].

Due to the high resistance loads, an arcing fault will result in much lower values of current. Thus, the protective devices such as circuit breakers, fuses and relays, which are designed to operate for bolted faults, may not detect these lower values of current. As a result, the arcing fault will persist until severe burn-down damage occurs. The magnitude of the arc current is limited by the resistance of the arc and the impedance of the ground path [4].

Arc faults are categorized into series arc faults and parallel arc faults. Series arc faults happen when the current carrying paths in series with the loads are unintentionally broken, whereas parallel arc faults happen between two phases, phase to ground, or phase to neutral of the switchboard [5].

Large amounts of heat will be dissipated during an arc event. A portion of this heat is coupled directly into the conductors, a portion heats the air, and another portion is radiated in various optical wavelengths. Hasty heating of the air and the expansion of the vaporized metal into gas produce a strong pressure wave which will blow off the covers of the switchboards and collapse the substations [5].

Fig. 1 shows the time, current and damage for the 53 arcing tests. When the circuit breakers are tripped within less than 0.25 seconds, the damage will be limited to smoke damage. The triangle markers represent arcs that cause only smoke damage to the side of switchboards. The square markers represent arcs that cause surface damage to the side of switchboards, whereas the star pointers represent holes of several square inches at the side of the switchboards [2].

When an arc is ignited, the plasma cloud expands cylindrically around the arc. The expansion of the plasma is constrained by the parallel busbar and thus the plasma expands more to the front and the back of the bus. As the plasma reaches any obstructions such as the switchboard, plasma expansion is retarded by the obstructions. Due to the lower velocity of the arc, the plasma becomes more concentrated and its temperature and current will increase [2].


Figure 1. Damage to the Side of a Switchboard versus Arc Current and Time

The root of the arc, where the arc contacts the conductor, is reported to reach temperatures exceeding 20000°C, whereas the plasma portion or positive column of the arc is around 13000°C [6]. For reference, the surface of the sun is reported to be about 5000°C. The components in the switchboard can only withstand this temperature for 250 milliseconds before sustaining severe damage [7].

III. SYSTEM DESCRIPTION OF ARC FAULT PRESSURE AND TEMPERATURE DETECTORS

Fig. 2 shows the block diagram of the proposed design: the arc fault pressure and temperature detectors consist of a pressure detector, a temperature detector, voltage comparators, an AND gate, a relay and an LED. The pressure and temperature detectors detect the arc fault by the amount of pressure and heat produced and convert it into a corresponding voltage; the outputs of the voltage comparators of both sensors are sent to the AND gate, whose output triggers the relay to trip the signal that turns on the LED.

Figure 2. Block Diagram of an Arc Fault Pressure and Temperature Detectors

The 1140 pressure sensor is used to detect the presence of an arcing fault by sensing the pressure changes in the switchboard. The 1140 has an output voltage directly proportional to the pressure, at +12 mV/kPa. The 1140 is chosen because the 1140 Absolute Pressure Sensor is an air pressure sensor that measures the absolute pressure of its environment. It can measure pressures from 20 kPa to 400 kPa. In addition, it has high precision and a narrow measurement range compared to other types of pressure sensors, with typically less than ±1.5% error. Since the pressure range of the 1140 is 20 kPa to 400 kPa, the output voltage of this pressure sensor will range from 240 mV to 4.8 V.

The calculations for the output voltage are shown below:

  Output voltage for 20 kPa = 20 × 12 × 10⁻³ = 0.24 V    (1)
  Output voltage for 400 kPa = 400 × 12 × 10⁻³ = 4.8 V   (2)

For every 1 kPa increase in the pressure of the surroundings, the output voltage will increase by 12 mV. Since the atmospheric pressure is 100 kPa, it is assumed that under normal conditions the pressure inside a switchboard is about 150 kPa, which is equivalent to 1.8 V. From [1], a true arc fault will rapidly increase the pressure up to 3 atm, equivalent to 303.975 kPa. Therefore, in this proposed design, a pressure of 250 kPa is set as the reference value. By calculation, the voltage value corresponding to 250 kPa is 3.0 V. This voltage is set as the reference voltage. The input voltage (i.e. the pressure inside the switchboard) will be compared with the reference voltage (i.e. the reference pressure value of 250 kPa). A "HIGH" output from the voltage comparator will be sent to the AND gate if the input voltage is higher than 3.0 V.

The LM 335 temperature sensor is used to detect the presence of an arcing fault by sensing the temperature changes in the switchboard. The LM 335 has a breakdown voltage directly proportional to the temperature, at +10 mV/K. The LM 335 is chosen because it is a precise, easily calibrated, integrated circuit temperature sensor. In addition, it has a linear output and it is cheaper compared to other types of temperature sensors. When calibrated at 25°C, it typically has less than 1°C error over a 100°C range. The temperature range of the LM 335 is -40°C to 100°C. In other words, the output voltage of this temperature sensor will range from 2.33 V to 3.73 V.

The calculations for the output voltage are shown below [6]:

  Output voltage for -40°C = (-40 + 273) × 10 × 10⁻³ = 2.33 V   (3)
  Output voltage for 100°C = (100 + 273) × 10 × 10⁻³ = 3.73 V   (4)

For every 1 K increase, the output voltage will increase by 10 mV. Under normal conditions, the temperature inside a switchboard is about 40°C. It is assumed that the temperature inside a switchboard will rise to 92°C and, by calculation, the corresponding voltage is 3.65 V. This voltage is set as the reference voltage. The input voltage will be compared with the reference voltage. The relay will be activated once the input voltage is more than the reference voltage.
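Equations (1)-(4) and the two reference thresholds can be checked with a few lines of arithmetic; the sketch below simply restates the conversions, with the sensitivities (12 mV/kPa and 10 mV/K) taken from the text.

    # Sensor transfer functions as given in the text:
    # 1140 pressure sensor: 12 mV/kPa; LM 335 temperature sensor: 10 mV/K.
    def pressure_to_volts(p_kpa):
        return p_kpa * 12e-3

    def temperature_to_volts(t_celsius):
        return (t_celsius + 273) * 10e-3

    assert abs(pressure_to_volts(20) - 0.24) < 1e-9       # Equation (1)
    assert abs(pressure_to_volts(400) - 4.8) < 1e-9       # Equation (2)
    assert abs(temperature_to_volts(-40) - 2.33) < 1e-9   # Equation (3)
    assert abs(temperature_to_volts(100) - 3.73) < 1e-9   # Equation (4)
    print(pressure_to_volts(250), temperature_to_volts(92))   # 3.0 V, 3.65 V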
IV. MODELLING OF ARC FAULT PRESSURE AND TEMPERATURE DETECTORS CIRCUIT

A buffer amplifier provides electrical impedance transformation from one circuit to another. It is used to transfer a voltage from the pressure sensor and temperature sensor to the voltage comparator. A unity gain buffer is used in the circuit design. The output of the op-amp (buffer) is connected to its inverting input, which is the negative feedback. Therefore, the output voltage is simply equal to the input voltage of the buffer.


The output from the pressure sensor and temperature sensor is connected to the non-inverting input of the buffer (op-amp), which is the positive feedback, and the output from the buffer is identical to the pressure sensor and temperature sensor output.

The pressure sensor and temperature sensor generate a voltage-based signal with respect to the amount of pressure and temperature detected in the surroundings of the switchboard. The signal is then sent to a voltage comparator through a buffer. The voltage comparator is used to compare the signal with a reference voltage. The outputs of the comparators of both the pressure sensor and the temperature sensor will produce a positive ('HIGH') value, which will then become the inputs of the 7408 AND gate. The output of the AND gate will then activate the relay and send a trip signal to the circuit breaker (i.e. turn on the LED) if both the pressure sensor and the temperature sensor detect a pressure and temperature that exceed the reference voltages of the voltage comparators. Otherwise, the output voltage of the voltage comparator will indicate a negative ('LOW') value, which will not trigger the trip signal.

The circuit must detect the changes of pressure and temperature in the environment and operate the LED when the pressure and temperature exceed the predetermined limits. The arc fault pressure and temperature detectors are modeled at lower values of pressure and temperature with respect to the practical pressure and temperature.

Fig. 3 shows the schematic diagram of the arc fault pressure and temperature detectors using the PSpice program. The input for this circuit is an AC supply. An AC supply is used to represent the output signal from the pressure and temperature sensors. The output voltage range of each sensor is used as the input voltage range of the circuit. The AC input voltage for the temperature sensor, Vin1, ranges from 2.33 V (corresponding to -40°C) to 3.73 V (corresponding to 100°C), as obtained from Equations (3) and (4). The AC input voltage for the pressure sensor, Vin2, ranges from 0.24 V (corresponding to 20 kPa) to 4.8 V (corresponding to 400 kPa), as obtained from Equations (1) and (2).

Figure 3. PSpice Schematic Diagram of an Arc Fault Pressure and Temperature Detector

U1 and U3 are op-amps, which represent buffers in this circuit. The AC supply is connected to the positive feedback of U1 and U3, and the negative feedback of U1 and U3 is connected to the output of U1 and U3 to produce a unity gain buffer. The output voltage of U1 and U3 is the same as the input voltage, since it is a unity gain buffer.

Then, the output voltage of U1, Vin1, is connected to the positive feedback (pin 3) of U2, which is a voltage comparator. Also, the output voltage of U3, Vin2, is connected to the positive feedback (pin 3) of U4, which is a voltage comparator.

The output voltage of the buffer is used as the input voltage of the comparator. Theoretically, the input voltages of U2 and U4 are identical to the output voltages of U1 and U3 and are also identical to the output voltages of the pressure and temperature sensors, which are represented by AC sources in this circuit. A uA741 op-amp is used as the voltage comparator. A DC input voltage, Vref1, of 3.65 V is placed at the negative feedback (pin 2) of U2 to produce a constant reference voltage. The voltage value of 3.65 V is equal to the temperature value of 92°C. Also, a DC input voltage, Vref2, of 3 V is placed at the negative feedback (pin 2) of U4 to produce a constant reference voltage. The voltage value of 3 V is equal to the pressure value of 250 kPa.

A +9 V DC supply is connected to pin 7 and a -9 V DC supply is connected to pin 4 of U2 and U4 to supply voltage for these components. The output voltages from U2 and U4 (pin 6), Vout1 and Vout2, are used to indicate the comparison results of the input voltages and the reference voltages.

The inputs of the 7408 AND gate are connected to the output voltages of U2 and U4; the AND gate output is connected to the voltage-controlled switch, which acts as a relay that activates the voltage pulse and therefore turns on the LED if the output voltage of the AND gate is positive.

V. SIMULATION RESULTS

The PSpice simulation result from the schematic diagram of the arc fault pressure and temperature detectors is shown in Fig. 4. In Fig. 4, the input voltage of the temperature sensor of 3.03 V is represented by the sine waveform (in pink) and the input voltage of the pressure sensor of 2.4 V is represented by another sine waveform (in light blue). The reference voltages are the two straight lines at 3.65 V for the temperature sensor and 3.0 V for the pressure sensor. The two square waves are the outputs of the voltage comparators, which trigger as logic '1' ('HIGH') at 8.5 V if the input sine wave voltage is higher than the reference voltage and become logic '0' ('LOW') at -5.0 V if the input sine wave voltage is lower than the reference voltage.

The red line at 3.5 V between the times of 0 ms and 10 ms is the output voltage of the AND gate, which combines the output voltages of the pressure and temperature detectors. After 10 ms the red line becomes 0 V as it trips the signal at the controlled voltage switch. The controlled voltage switch acts like a relay and closes when it receives a 'HIGH' output from the AND gate. Equivalently, it triggers the voltage pulse at 5.0 V (represented in green), which turns on the LED.
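The comparator-and-AND-gate trip logic that the PSpice schematic implements can be mirrored in a few lines for sanity-checking threshold choices; the sketch reuses the 3.0 V and 3.65 V references from Section III and is illustrative, not part of the authors' simulation.

    # Mirror of the trip logic described in the text: the relay trips only
    # if BOTH comparators output 'HIGH'.
    P_REF_V = 3.0    # pressure reference: 250 kPa x 12 mV/kPa
    T_REF_V = 3.65   # temperature reference: (92 + 273) K x 10 mV/K

    def trip(pressure_v, temperature_v):
        return (pressure_v > P_REF_V) and (temperature_v > T_REF_V)

    print(trip(2.4, 3.03))    # False: both inputs below reference, no trip
    print(trip(3.2, 3.70))    # True: overpressure AND overheating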


Figure 4. Simulation Result of an Arc Fault Pressure and Temperature Detectors

In other words, these pressure and temperature detectors can detect early precursors of an arc fault occurrence. This can reduce the possibility of arc occurrences and avoid damage to the switchboard and buildings, and personnel injury.

VI. CONCLUSION

Arcing faults in low voltage switchboards are a serious issue, as the effects of arcing faults are devastating. In this paper, the proposed arc fault pressure and temperature detector circuit activates the relay and sends a trip signal to the circuit breaker if and only if both detectors detect pressure and temperature levels higher than the reference values. This ensures that no false tripping signal is sent to the circuit breaker and therefore no unnecessary power shutdown occurs.

An early detection of an arc fault in a low voltage switchboard enables the isolation of the power supply to the consumer side just before the occurrence of the arc fault and thereby reduces the danger of personal injury and damage to buildings. In addition, it improves the system reliability without power interruption, which is particularly essential to hospitals and certain industries with sensitive loads. The circuit helps to eliminate the possibility of an arc occurring and hence protects against the effects of arc occurrence. The proposed circuit can be modeled to meet the specifications of industries with different supply requirements. It is easy to design and highly reliable.

REFERENCES
[1] L. C. Kuan, "Arc Fault Pressure Detector in Low Voltage Switchboard," International Journal of Scientific and Research Publications, vol. 3, issue 6, June 2013.
[2] H. B. Land, "The Behavior of Arcing Faults in Low Voltage Switchboards," IEEE Transactions on Industry Applications, vol. 44, no. 2, March/April 2008.
[3] K. Malmedal and P. K. Sen, "Arcing fault current and the criteria for setting ground fault relays in solidly-grounded low voltage systems," Industrial and Commercial Power Systems Technical Conference, 2000.
[4] T. Gammon and J. Matthews, "Arcing Fault Models for Low Voltage Power Systems," Industrial and Commercial Power Systems Technical Conference, 2000.
[5] L. C. Kuan, "Arc Fault Temperature Detector in Low Voltage Switchboard," International Conference of Information, System and Convergence Applications, June 2015.
[6] B. R. Baliga and E. Pfender, "Fire Safety Related Testing of Electric Cable Insulation Materials," Univ. Minnesota, 1975.
[7] H. B. Land, C. L. Eddins and J. M. Klimek, "Evolution of Arc Fault Protection Technology," Johns Hopkins APL Technical Digest, 2004.


Performance Evaluation of Discrete-time Hedging Strategies for European Contingent Claims
Easwar Subramanian, Vijaysekhar Chellaboina and Arihant Jain
TCS Innovation Labs, Tata Consultancy Services, India
easwar@atc.tcs.com, vijay@atc.tcs.com, arihantjain.iitg@gmail.com

Abstract—We consider the use of discrete-time quadratic optimal hedging strategies for hedging European contingent claims (ECCs). Specifically, the objective of the current work is to numerically compare the effectiveness of two quadratic trading strategies with the standard delta hedging. The two quadratic optimal hedging strategies that we consider are mean-variance hedging in a risk-neutral measure and optimal local-variance hedging in a market probability measure. The objective function for the former is the variance of the hedging error calculated in a risk-neutral measure, while the latter optimizes the variance of the mark-to-market value of the portfolio over a single trading interval in a market probability measure. The comparison is done on multiple performance measures such as the probability of loss, expected loss, different moments of the hedging error, and shortfall measures such as value at risk (VaR) and conditional value at risk (CVaR). The performance evaluation results on path-independent and exchange-like options conclude that in the discrete-time setting the quadratic optimal hedging strategies outperform the delta hedging strategy. Indeed, as the re-balancing is done more sparsely, the quadratic-optimal trading schemes fare better than the standard delta-hedging.

I. INTRODUCTION

We consider the problem of performance evaluation of discrete-time quadratic optimal hedging strategies for European contingent claims. The ECC that is being hedged could be a single- or multi-asset ECC and may be path-dependent. The trading portfolio consists of risky securities and a riskless money market asset that grows continuously at the risk-free rate. The risky securities of the trading portfolio, known as the hedging assets, are the underlying assets of the ECC that is being hedged. Trading occurs at pre-specified discrete times between the time of initiation and the time to maturity of the ECC. We consider two quadratic optimal hedging schemes, namely, risk-neutral mean-variance hedging and optimum local-variance hedging. While the former is a trading strategy that minimizes the variance of the hedging error to the seller of the ECC in an equivalent risk-neutral measure, the latter minimizes the variance of the mark-to-market value of the trading portfolio between the current and the next trading instance in a market probability measure.

A. Background and Motivation

Much of the work on hedging has its origin in the celebrated works of Black and Scholes [1] and of Merton [2], wherein the authors provided a way to replicate a European call option with its underlying and eliminate the risk completely under the assumption that trading can be performed continuously. However, in reality, a portfolio can only be readjusted at discrete times over any finite period, and herein lies our primary motivation behind this work, which is to consider discrete-time trading strategies. While deploying discrete-time trading, it is not possible to hedge a seller's risk completely, and the seller of the ECC typically accumulates a net P/L. This accumulated P/L contributes to the risk associated with a trading strategy, which in some sense has to be minimized.

B. Scope of the work

Popular approaches to discrete-time hedging involve formulating a quadratic criterion on the net P/L or hedging error as the risk associated with a discrete-time hedging strategy [3, 4] and performing a suitable optimization. This work evaluates the efficacy of two quadratic optimal strategies over the discrete-time version of the standard delta hedging. The delta hedging strategy, which is well suited to mitigate the effects of risk due to price movements of the underlying assets of the ECC on the illiquid portfolio, is based on the theory of continuous-time trading. Our intention is to show that, when trading is performed in discrete time, quadratic optimal strategies perform better than the standard delta-hedging strategy. To this end, we consider different risk measures such as the moments of the hedging error, VaR, CVaR, probability of loss and expectation of loss. The performance evaluation results are demonstrated on path-independent ECCs, including multi-asset options like exchange options.

C. Outline

The outline of the current work is as follows. In Section II we introduce our notations and provide a mathematical formulation of the discrete-time hedging problem. In Section III we introduce the two different hedging techniques that we study in this paper. For each hedging technique, generalized solutions involving Monte-Carlo computations are provided. Thereafter, in Section IV we introduce our performance measures and then demonstrate the efficacy of the quadratic hedging solutions over the standard delta hedging in Section V. We summarize the results in Section VI.

II. PRELIMINARIES

We begin by introducing our notations and then proceed to the problem description.


A. Notations and Problem Setup

Consider an ECC that matures at time T. To hedge this ECC, we consider a hedging portfolio consisting of a riskless money-market asset S^0 and m underlying (or risky) assets on which the ECC is written. We let S^l, l = 1, ..., m, denote the m underlying assets. Trading on the risky assets occurs at pre-specified discrete times {T_0, T_1, ..., T_n}, with T_{k-1} < T_k, k ∈ {1, ..., n}, and T_n = T (with ℱ_j as the filtration at time T_j), and we let S_k^l denote the price of the asset S^l at time T_k. With r as the risk-free interest rate, we let S̄_k^l = S_k^l e^{-rT_k} be the price of the asset S^l discounted to time T_0. We let β ∈ ℝ be the initial amount invested at time T_0 in the money market asset, which includes the premium received for the ECC, and let V be the payoff of the ECC. Finally, let Δ_k^l denote the amount of the underlying asset S^l held in the hedging portfolio over the time interval (T_{k-1}, T_k], and let Δ_k = [Δ_k^1, ..., Δ_k^m]^T ∈ ℝ^m and S_k = [S_k^1, ..., S_k^m]^T ∈ ℝ^m denote the amounts and prices of the underlying assets S at time T_k, k ∈ {0, ..., n}. At time T_n, the positions Δ_n in the underlying assets S_n are liquidated in order to deliver the payoff of the ECC. The hedging error, which is the discounted final money-market position of the seller of the ECC, is then given by

  H(Δ) = Σ_{k=1}^{n} Δ_k^T (S̄_k − S̄_{k−1}) + β − V̄,    (1)

where V̄ = V e^{−rT}. The hedging error (1) represents the profit to the seller of the ECC, with positive values of H indicating profit.

III. HEDGING STRATEGIES

In this section, we provide a brief description of the two quadratic hedging strategies that we consider in this exposition.

A. Risk-neutral minimum-variance hedging

In this case, we seek trading strategies such that the global variance of the hedging error is minimized in a risk-neutral measure. The term global variance may be understood as the variance of the overall profit (P/L) to the trader from the time of initiation till the time of maturity of the ECC. A risk-neutral probability measure is characterized by the property that the expected rate of return of any market asset in the risk-neutral measure equals the risk-free interest rate offered by the economy. As per the theory of asset pricing, the risk-neutral measure determines the prices of all derivative assets in the market. For the purpose of this exposition, let P denote a risk-neutral measure that is equivalent to the market measure Q in the sense that P and Q are mutually absolutely continuous. In addition, the discrete-time discounted asset price processes {S̄_k^l}_{k=0}^{n}, l ∈ {1, ..., m}, are martingales under the measure P. In other words, 𝔼_P(S̄_k^l | ℱ_j) = S̄_j^l, where 0 ≤ j ≤ k ≤ n, l ∈ {1, ..., m}. For this hedging scheme, we additionally assume that the discounted payoff V̄ and the asset price processes S_k^l, k ∈ {0, ..., n}, l ∈ {1, ..., m}, are square integrable with respect to the measure P. The risk-neutral minimum-variance hedging scheme seeks a trading strategy Δ = (Δ_1, ..., Δ_n) such that Δ ∈ 𝒫 and var_P(H(Δ, β, V)) is minimum. A general solution to the risk-neutral mean-variance hedging problem for the martingale case is given by

  Δ_k^{gv} = [covar_P(S_k | ℱ_{k−1})]^{−1} covar_P(S_k, V_k | ℱ_{k−1}),    (2)

where V_k = 𝔼_P(V | ℱ_k) is the risk-neutral price of the ECC at time T_k. For a rigorous derivation of (2), the reader is referred to [5]; generalizations are found in [6, 7].

B. Optimum local-variance hedging

Here, we seek a trading strategy such that the local variance of the hedging error is minimized in the market measure Q. The term local variance may be understood as the variance of the P/L to the trader between successive trading time instances. It can also be viewed as the variance of the mark-to-market value of the total portfolio in the trading interval under consideration. The optimum local-variance minimization problem is to determine a trading strategy Δ = (Δ_1, ..., Δ_n) such that Δ ∈ 𝒫 and the conditional variance var_Q(H_k | ℱ_{k−1}) is minimum, where H_k = 𝔼_P(H | ℱ_k).

To see the motivation for considering such a local-variance minimization, fix k ∈ {1, ..., n}. Using the ℱ_{k−1} measurability of Δ_k, one can deduce from (1) that

  H_k = H_{k−1} + Δ_k^T (S̄_k − S̄_{k−1}) − (V̄_k − V̄_{k−1}).    (3)

In (3), V̄_k − V̄_{k−1} represents the change in the discounted value of the ECC that occurs over the interval (T_{k−1}, T_k], while Δ_k^T (S̄_k − S̄_{k−1}) represents the change in the discounted value of the underlying assets held in the hedging portfolio over the same interval. Thus H_k − H_{k−1} represents the change in the discounted mark-to-market value of the total portfolio consisting of a long position in the underlying assets and a short position in the ECC. Since H_{k−1} is ℱ_{k−1} measurable, var_Q(H_k | ℱ_{k−1}) = var_Q(H_k − H_{k−1} | ℱ_{k−1}) is a measure of the risk exposure due to movements in the price of the underlying assets and the value of the ECC over the inter-hedging time interval (T_{k−1}, T_k]. The general solution to the optimum local-variance hedging problem is given by (see [8])

  Δ_k^{lv} = [covar_Q(S_k | ℱ_{k−1})]^{−1} covar_Q(S_k, V_k | ℱ_{k−1}).    (4)
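In a Monte-Carlo implementation, both (2) and (4) amount to estimating the conditional covariances empirically: simulate one-step moves of the assets, value the claim at the next re-balancing date, and take the ratio of the sample covariance to the sample variance. The single-asset sketch below illustrates this for the first re-balancing date under a Black-Scholes model; the pricer, the parameter values and the sample size are our own illustrative choices, not the authors' code.

    import numpy as np
    from scipy.stats import norm

    def bs_call(s, k, r, sigma, tau):
        """Black-Scholes price of a European call, time-to-maturity tau."""
        d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
        d2 = d1 - sigma * np.sqrt(tau)
        return s * norm.cdf(d1) - k * np.exp(-r * tau) * norm.cdf(d2)

    # Illustrative parameters (not from the paper); dt = 7-day re-balancing.
    s0, k, r, sigma, T, dt = 50.0, 50.0, 0.05, 0.2, 0.5, 7.0 / 365.0
    rng = np.random.default_rng(0)

    # One-step risk-neutral moves S_0 -> S_1 and the claim value at T_1.
    z = rng.standard_normal(100_000)
    s1 = s0 * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    v1 = bs_call(s1, k, r, sigma, T - dt)

    # Single-asset version of (2)/(4): Delta_1 = cov(S_1, V_1) / var(S_1),
    # conditioned on the information available at T_0.
    delta_1 = np.cov(s1, v1)[0, 1] / np.var(s1, ddof=1)
    print("quadratic-optimal hedge ratio:", delta_1)

For a small trading interval this estimate stays close to the Black-Scholes delta; the two diverge as re-balancing becomes sparser, which is exactly the regime in which the quadratic strategies are reported to do better.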


[Figure 1: grouped bar charts of probability of loss (left panel) and expectation of loss (right panel) for delta hedging, risk-neutral mean-variance hedging and optimum local-risk hedging, on the European call and the exchange option.]
Fig. 1: Plot of probability of loss and expected loss for different strategies and different ECCs. Observe that the performance of the quadratic optimal strategies is better than that of delta-hedging. Parameters used: European call: spot value S_0 = 50, strike K = 50; exchange option: spot values S_0^1 = 50, S_0^2 = 45.

[Figure 2: grouped bar charts of value-at-risk (VaR) at 99% (left panel) and conditional value-at-risk (CVaR) at 99% (right panel) for the three strategies, on the European call and the exchange option.]
Fig. 2: Plot of value-at-risk (VaR) and conditional value-at-risk (CVaR) for different strategies and different ECCs. The re-balancing frequency used is 7 days.

IV. PERFORMANCE MEASURES

In this section, we compare the performance of the two quadratic hedging strategies with the standard delta hedging. The evaluation process can be better performed if a trader relies on the results of multiple performance metrics instead of just one. This is because one single number cannot capture all the risks involved in the trading process. For example, a trader might want to know the probability of loss or the expected loss due to a trading strategy. Further, it may also be useful to know the uncertainty in the hedging error that results from performing the suggested trade. In addition, it is important to look at the (left) tail of the distribution of the returns to know about the worst possible outcomes. The knowledge of the worst possible outcomes is important for capital budgeting and regulatory compliance. The outputs of the various measures can be useful for choosing a particular trading strategy in a given scenario. In the current work, we consider three types of measures to evaluate a trading strategy.

A. Loss measures

We consider two loss measures that convey the quantum of loss resulting from a trading strategy: the probability of loss and the expected loss. The former is the probability of making a loss in hedging an ECC with the underlying asset(s) using a trading strategy, and the latter provides an estimate of the average loss in hedging an ECC given that a loss has occurred on using the trading strategy. If we let ℋ represent the set of hedging errors obtained over multiple scenarios from applying a trading strategy to hedging an ECC, and |·| denotes the cardinality of a set, then, numerically, the probability of loss is calculated as |ℋ⁻| / |ℋ|, where ℋ⁻ ⊂ ℋ are the negative values of ℋ (the loss-making scenarios). The expected loss is calculated numerically as the average of ℋ⁻. A strategy that has a lower probability of loss or expected loss is a better strategy.

B. Measures based on moments

It is also desirable for a strategy to possess less uncertainty over the eventual P/L values at the time of maturity. Such strategies can be obtained by optimizing the variance or second moment of the hedging error. Numerically, the variance of the hedging error is calculated as var_Q(ℋ) over many possible scenarios. The strategy with a lower variance is a better strategy. Yet another performance measure considered in the literature is the so-called effectiveness of hedge, denoted by Γ, in which the performance of a hedging strategy is compared with no hedging. It is evaluated as

  Γ = 1 − var_Q(ℋ_HS) / var_Q(ℋ_NH),    (5)

where var_Q(ℋ_HS) is the numerical variance of the hedging error of an arbitrary trading strategy and var_Q(ℋ_NH) is the numerical variance of the hedging error when no hedging is performed.
calculated numerically, as the average of (ℋ− ). A strategy
that has lower probability of loss or expected loss is a better We also consider two popular shortfall measures i.e. mea-
strategy. sures that focus on the worst 𝛼% outcomes or tail outcomes.
They are value-at-risk (VaR) which is the maximum likely loss
B. Measures based on moments to occur in (1 − 𝛼)% of cases and conditional value-at-risk
It is also desirable for a strategy to possess less uncertainty (CVaR) which is the expected tail loss if a tail event occurs.
over the eventual P/L values at the time of maturity. Such In its most common form, value-at-risk (VaR) measures the

109
Back to Contents

European Option 0.88


Delta hedging
40 Risk-neutral mean-variance hedging
Delta hedging Optimum local-risk hedging
Risk-neutral mean-variance hedging 0.86
Optimum local-risk hedging
35

0.84
30
Variance of the hedging error

Effectiveness of hedge
0.82
25

0.8
20

0.78
15

0.76
10

0.74
5

0.72
0

Eu

Ex
.O
1-

7-

14

91

r.C
D

pt
-D

-D

al
ay

ay

io
l
ay

ay

n
s

s
Fig. 3: Left hand side is a plot of variance of P/L versus hedging frequency. Right hand side plots the effectiveness criterion as in
(5) for different strategies and different ECCs. Quadratic optimal strategies performs better than the conventional delta-hedging.

European call Exchange option


40 450
Delta-hedging Delta-hedging
Risk-neutral mean-variance hedging Risk-neutral mean-variance hedging
35 Optimum local-risk hedging 400 Optimum local-risk hedging

350
30
Variance of the hedging error

Variance of the hedging error


300
25
250
20
200
15
150

10
100

5 50

0 0
0 0.02 0.04 0.06 0.08 0.1 0.12 0.14 0.16 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
X X

Fig. 4: Plot of 𝜎 2 d𝑡 versus variance of the hedging error where d𝑡 is the re-balancing frequency (or trading interval). Observe
that the variance obtained from delta-hedging scheme is higher than the quadratic hedging schemes. Parameters used ; European
call : spot value 𝑆0 = 50; Strike 𝐾 = 50; Exchange Option : Spot values 𝑆01 = 50; 𝑆02 = 45;

boundaries of risk in a portfolio over short durations under at different re-balancing frequencies of 1 day, 7 days, 14
normal market conditions at a given confidence level. Condi- days and 91 days. Re-balancing done only on the 1st and
tional VaR, on the other hand, provides information about the 91st day corresponds to the case of static hedging. For each
average level of loss, given that the VaR has exceeded. That path a particular hedging scheme is applied at a particular
is, CVaR tells us about the extent of the loss that could be re-balancing frequency and the corresponding hedging error
incurred in the event that VaR is exceeded. is recorded. Thereafter we calculate statistical measures on
the hedging errors to obtain a desired performance measure.
V. N UMERICAL S IMULATION
Six performance measures were considered which include
We begin the section by explaining our experimental setup. probability of loss, expected loss, variance of the hedging
A. Experimental setup error, effectiveness of the trading strategy, VaR and CVaR. VaR
and CVaR were calculated with 𝛼 = 0.01 (99% percentile) and
For simulations, we considered European call options and
the time horizon ℎ was set to time to maturity of the hedged
exchange options as the hedged ECCs. Given parameters 𝜇 and
ECC. The results of the simulation exercise were also tested
𝜎, we simulated multiple paths of the underlying asset prices
for multiple choices of parameters to include out-of-money,
𝑆𝑡 using the GBM model. For the multi-asset exchange option,
at-the-money and in-the-money options to test robustness of
correlated Brownian with suitable correlation coefficient 𝜌
the performance results.
were used. Time to maturity was set at 𝑇 = 91 days and
the risk-free interest rate of the market was fixed at 𝑟 = 10%
B. Results
for all simulations. The growth rate of the underlying asset(s)
were typically set at twice of the risk-free interest rate i.e., at Figure 1 plots the probability of loss and expected loss for
𝜇 = 20% unless otherwise stated. For a given spot value 𝑆0 different trading strategies and for European and exchange
of the underlying asset, 100, 000 GBM paths were generated. options. Observe that the performance of the quadratic optimal
For each path, 𝑆𝑡 value was recorded at multiple frequencies strategies are superior to that of the delta hedging schemes.
of daily, weekly and fortnightly. Hedging was performed The plots were qualitatively similar for high and low volatili-

110
Back to Contents

European call European call


26 26
Delta-hedging Delta-hedging
Risk-neutral mean-variance hedging Risk-neutral mean-variance hedging
Optimum local-risk hedging Optimum local-risk hedging
24 24

Variance of the hedging error


Mean-squared hedging error

22 22

20 20

18 18

16 16

14 14

12 12
40 45 50 55 60 40 45 50 55 60
Strike of European call option Strike of European call option

Fig. 5: Plot of mean-squared hedging error and variance of the hedging error as a function of strike for European call option.
The quadratic optimal strategies fair better for all values of the strike. Parameters used ; Spot value 𝑆0 = 50;

ties and different re-balancing frequencies. In Figure 2, we plot CVaR. In all of these performance measures, the discrete time
the VaR and CVaR resulting from a hedging strategy calculated quadratic optimal strategies faired better than the delta-hedging
for different strategies and for different ECCs. One can observe approach. The results were even more skewed in favor of
that the performance of the quadratic optimal strategies are the quadratic hedging strategies when trading is performed
superior to that of the delta hedging schemes. The left hand more sparsely. The results obtained in this work make a case
side of Figure 3 depicts that the quadratic optimal strategies for the use of quadratic optimal hedging strategies over the
performs better than the conventional delta-hedging when classic delta hedging in practice. These discrete-time strategies
trading is performed more sparsely. Although the plot is for not only incur less risk when measured in terms of the loss
European option, the result is qualitatively same for Exchange measures such as the probability and expectation of loss but
option. On the right figure we plot the effectiveness criterion as also helps in efficient capital budgeting for regulatory purposes
in (5) for different strategies and different ECCs which again since they fair better in shortfall measures such as VaR and
shows the superiority of the quadratic hedging schemes. CVaR.
In Figure 4, we plot the variance of the hedging error calcu-
R EFERENCES
lated in market probability measure 𝑄 with 𝜎 2 d𝑡. The quantity
𝜎 2 d𝑡 is calculated from four re-balancing frequencies of daily [1] F. Black and M. Scholes, “The pricing of options and cor-
(1 day), weekly (7 days), fortnightly (14 days) and static hedg- porate liabilities,” Journal of Political Economy, vol. 81,
ing (91 days) and three volatilities (20%, 80% and 200%). pp. 637–659, 1973.
Observe that the variance obtained from delta-hedging scheme [2] R. C. Merton, “Theory of rational option pricing,” Bell
is higher than the quadratic hedging schemes. Figure 5 plots Journal of Economics and Management science, vol. 4,
the variance and the mean-squared hedging error for call pp. 141–183, 1973.
options which are deep-in-the-money, at-the-money and out- [3] H. Föllmer and D. Sondermann, Hedging of non-
of-money. For all cases, the performance of the quadratic redundant contingent claims. North Holland: In W.
optimal schemes were better than the standard delta hedging. Hildenbrand and A. Mas-Colell, editors,, 1986, ch. 12,
pp. 205–223.
VI. S UMMARY AND OUTLOOK [4] D. Duffie and H. R. Richardson, “Mean variance hedging
In this work, we considered the problem of evaluating in continuous time,” The Annals of Probability, vol. 1,
the performance of quadratic optimal hedging schemes with no. 1, pp. 1–15, 1991.
delta hedging. The portfolio that is being hedged consists of [5] S. Bhat, V. Chellaboina, M. U. Kumar, A. Bhatia, and
an ECC that could be possibly be a multi-asset and path- S. Prasad, “Discrete time minimum variance hedging
dependent. The trading portfolio consists of the underlying for european contigent claims,” in IEEE Conference on
assets of the hedged ECC. The quadratic optimal strategies Decision and Control, Shanghai, 2009.
that were considered in this exposition are risk-neutral mean- [6] M. Schweizer, “Variance-optimal hedging in discrete
variance hedging and optimum local-variance hedging. The time,” Mathematics of Operations Research, vol. 20, pp.
former minimizes the variance of the overall profit to the 1–32, 1995.
seller of the ECC and the latter minimizes the variance [7] M. Sc̈hal, “On quadratic cost criteria for option hedging,”
of the mark-to-market value of the portfolio over a single Mathematics of Operations Research, vol. 19, no. 1, pp.
trading interval. For the purpose of performance evaluation, 121–131, 1994.
we evaluated the strategies on multiple measures such as [8] E. Subramanian and V. Chellaboina, “Explicit solutions
probability and expectation of loss, second-order moments of of discrete-time quadratic optimal hedging strategies for
the hedging error, and shortfall measures such as VaR and european contingent claims,” London, 2014, pp. 449–456.

111
Back to Contents

Explicit Solutions of Discrete-time Hedging


Strategies for Multi-Asset Options
Easwar Subramanian, Vijaysekhar Chellaboina and Arihant Jain
TCS Innovation Labs, Tata Consultancy Services, India
easwar@atc.tcs.com, vijay@atc.tcs.com, arihantjain.iitg@gmail.com

Abstract—We consider the use of discrete-time quadratic op- which is to consider discrete-time trading strategies. While
timal hedging strategies for hedging Multi-asset options. Specifi- deploying discrete-time trading, it is not possible to hedge a
cally, the objective of the current work is to provide specialized seller’s risk completely and the seller of the ECC typically
closed form solutions to hedge Exchange options using two
quadratic hedging schemes. The two quadratic optimal hedging accumulates a net P/L. This accumulated P/L contributes to
strategies that we consider are mean-variance hedging in a risk- the risk associated with a trading strategy which in some sense
neutral measure and optimal local-variance hedging in a market has to be minimized.
probability measure. The objective function for the former is In this exposition we consider variance or the second
the variance of the hedging error calculated in a risk-neutral
measure and the latter optimizes the variance of the mark-to-
moment of the hedging error (P/L) as the criterion function
market value of the portfolio over a single trading interval in a to be minimized. The solutions to various quadratic hedging
market probability measure. To arrive at closed-form solutions, problems depend on the assumptions made on the measurabil-
we assume geometric Brownian motion (GBM) as the stochastic ity and the integrability of the underlying asset price processes.
model for the underlying asset prices. The hedging solutions Indeed, as the problem setting becomes more generic, the
are expressed in terms of the pricing function of the hedged
ECC and the prices of the underlying assets. The motivation
associated solution gets more complicated and involve compu-
to provide these explicit solutions is that the proposed hedging tationally intensive Monte-Carlo simulations (see [3, 4]). Our
solutions are well suited for computer implementation and reduce second motivation is to provide hedging solutions that are not
compute time and complexity unlike the Monte-Carlo based compute intensive and therefore we consider stronger problem
solution formulations. settings and derive explicit closed form solutions for different
quadratic discrete-time hedging problems, thus making it well
I. I NTRODUCTION suited for computer implementation.
We consider evaluation of explicit solutions of discrete-time
quadratic optimal hedging strategies for multi-asset European B. Scope of the work
contingent claims. Specifically, we consider exchange option
as the ECC that is being hedged and the trading portfolio We derive explicit closed form solution for the two quadratic
consists of the underlying assets of the ECC and a risk less optimal hedging problems for exchange option. To arrive at
money market asset that grows continuously at the risk-free closed-form solutions, we assume geometric Brownian motion
rate. Trading occurs at pre-specified discrete times between (GBM) as the stochastic model for the underlying asset prices.
the time of initiation and the time to maturity of the ECC. We The hedging solutions are expressed in terms of the pricing
consider two quadratic optimal hedging schemes, namely, risk- function of the hedged ECC and the prices of the underlying
neutral mean-variance hedging and optimum local-variance assets. These explicit solutions when used instead of complex
hedging. While the former is a trading strategy that minimizes Monte-Carlo based solutions makes the proposed hedging
the variance of the hedging error to the seller of the ECC in solution well suited for computer implementation and reduce
an equivalent risk-neutral measure, the latter minimizes the compute time and complexity.
variance of the mark-to-market value of the trading portfolio
between current and the next trading instance in a market C. Outline
probability measure.
The outline of the current work is as follows. In Section
A. Background and Motivation II we introduce our notations and provide a mathematical
Much of the work on hedging have their origin in the formulation of the multi-asset discrete-time hedging problem.
celebrated works of Black and Scholes in [1] and that of In Section III we introduce the two different hedging tech-
Merton in [2] wherein the authors provided a way to replicate niques that we study in this paper. For each hedging technique,
a European call option with its underlying and eliminate the generalized solutions involving Monte-Carlo computations are
risk completely under the assumption that trading can be provided. Thereafter, in Section IV we introduce exchange
performed continuously. However, in reality, a portfolio can option, its pricing function and provide a derivation of the
only be readjusted at discrete times over any finite period hedging solutions for the quadratic hedging schemes consid-
and herein lies our primary motivation behind this work ered. We summarize in Section V.

978-1-5090-1671-6/16/$31.00 ©2016 IEEE


112
Back to Contents

II. P RELIMINARIES where 0 ≤ 𝑗 ≤ 𝑘 ≤ 𝑛, 𝑙 ∈ {1, 2} where ℱ𝑗 are filtrations


We begin by introducing our notations and then proceed to at time 𝑇𝑗 . For this hedging scheme, we additionally assume
the problem description. that the discounted payoff 𝑉¯ and the asset price processes
𝑆𝑘𝑙 , 𝑘 ∈ {0, ⋅ ⋅ ⋅ , 𝑛}, 𝑙 ∈ {1, 2} are square integrable with
A. Notations and Problem Setup respect to the measure 𝑃 . The risk-neutral minimum-variance
Consider an exchange option that matures at time 𝑇 . To hedging scheme seeks a trading strategy Δ = (Δ1 , . . . , Δ𝑛 )
hedge the exchange option, we consider a hedging portfolio such that Δ ∈ 𝒫 and var𝑃 (𝐻(Δ, 𝛽, 𝑉 )) is minimum. A
consisting of a risk less money-market asset 𝑆0 , and 2 un- general solution to the risk-neutral mean-variance hedging
derlying (or risky) assets on which the ECC is written. We problem for the martingale case is given by,
let 𝑆 𝑙 , 𝑙 = 1, 2, to denote the two underlying assets of the ex- 𝑔𝑣
Δ𝑘 = [covar𝑃 (𝑆𝑘 ∣ℱ𝑘−1 )]−1 covar𝑃 (𝑆𝑘 , 𝑉𝑘 ∣ℱ𝑘−1 ), (2)
change option that is being hedged. Trading on the risky assets
𝑃
occur at pre-specified discrete times {𝑇0 , 𝑇1 , ⋅ ⋅ ⋅ , 𝑇𝑛 }, 𝑇𝑘−1 < where 𝑉𝑘 = 𝔼 (𝑉 ∣ℱ𝑘 ) is the risk-neutral price of the ECC at
𝑇𝑘 , 𝑘 ∈ {1, . . . , 𝑛}, with 𝑇𝑛 = 𝑇 and let 𝑆𝑘𝑙 denote the time 𝑇𝑘 . For a rigorous derivation of (2), the reader is referred
price of the asset 𝑆 𝑙 at time 𝑇𝑘 . With 𝑟 as the risk-free to [5] and generalizations are found in [3, 6].
interest rate, we let 𝑆¯𝑘𝑙 = 𝑆𝑘𝑙 𝑒−𝑟𝑇𝑘 be the price of the
B. Optimum local-variance hedging
asset 𝑆 𝑙 discounted to time 𝑇0 . We let 𝛽 ∈ ℝ be the initial
amount invested at time 𝑇0 in the money market asset which In here, we seek a trading strategy such that the local vari-
includes the premium received for the ECC and let 𝑉 be the ance of the hedging error is minimized in the market measure
payoff of the ECC. Finally, let Δ𝑙𝑘 denote the amount of the 𝑄. The term local variance may be understood as the variance
underlying asset 𝑆 𝑙 held in the hedging portfolio over the time of the P/L to the trader between successive trading time
△ instances. It could also be viewed as the variance of the mark-
interval (𝑇𝑘−1 , 𝑇𝑘 ], and let Δ𝑘 = [Δ1𝑘 , ⋅ ⋅ ⋅ , Δ𝑚 𝑇
𝑘 ] ∈ ℝ
𝑚
and
△ 1 𝑚 𝑇 𝑚
to-market value of the total portfolio in the trading interval un-
𝑆𝑘 = [𝑆𝑘 , ⋅ ⋅ ⋅ , 𝑆𝑘 ] ∈ ℝ denote the amounts and prices of der consideration. The optimum local-variance minimization
underlying assets 𝑆 at time 𝑇𝑘 , 𝑘 ∈ {0, ⋅ ⋅ ⋅ , 𝑛}. At time 𝑇𝑛 , problem is to determine a trading strategy Δ = (Δ1 , . . . , Δ𝑛 )
the positions Δ𝑛 in the two underlying assets 𝑆𝑛 are liquidated such that Δ ∈ 𝒫 and the conditional variance var𝑄 (𝐻𝑘 ∣ℱ𝑘−1 )
in order to deliver the payoff of the ECC. The hedging error, is minimum, where 𝐻𝑘 = 𝔼𝑃 (𝐻∣ℱ𝑘 ).
which is the discounted final money-market position to the To see the motivation for considering such a local-variance
seller of the ECC is then given by, minimization, fix 𝑘 ∈ {1, . . . , 𝑛}. Using the ℱ𝑘−1 measura-
𝑛
∑ bility of Δ𝑘 one can from (1) deduce that,
𝐻(Δ) = Δ𝑇𝑘 (𝑆¯𝑘 − 𝑆¯𝑘−1 ) + 𝛽 − 𝑉¯ (1)
𝑘=1 𝐻𝑘 = 𝐻𝑘−1 + Δ𝑇𝑘 (𝑆¯𝑘 − 𝑆¯𝑘−1 ) − (𝑉¯𝑘 − 𝑉¯𝑘−1 ). (3)
where 𝑉¯ = 𝑉 𝑒−𝑟𝑇 . The hedging error (1) represents the profit In (3), 𝑉¯𝑘 − 𝑉¯𝑘−1 represents the change in the discounted
or loss (P/L) to the seller of the ECC, with positive values of value of the ECC that occurs over the interval (𝑇𝑘−1 , 𝑇𝑘 ],
𝐻 indicating profit. while Δ𝑇𝑘 (𝑆¯𝑘 − 𝑆¯𝑘−1 ) represents the change in discounted
value of the underlying assets held in the hedging portfolio
III. H EDGING STRATEGIES over the same interval. Thus 𝐻𝑘 −𝐻𝑘−1 represents the change
In this section, we provide a brief description of the two in the discounted mark-to-market value of the total portfolio
quadratic hedging strategy that we consider in this exposition. consisting of a long position in the underlying assets and a
short position in the ECC. Since 𝐻𝑘−1 is ℱ𝑘−1 measurable,
A. Risk-neutral minimum-variance hedging var𝑄 (𝐻𝑘 ∣ℱ𝑘−1 ) = var𝑄 (𝐻𝑘 − 𝐻𝑘−1 ∣ℱ𝑘−1 ) is a measure
In this case, we seek trading strategies such that the global of the risk exposure due to movements in the price of the
variance of the hedging error is minimized in a risk neutral underlying assets and the value of the ECC over the inter-
measure. The term global variance may be understood as the hedging time interval (𝑇𝑘−1 , 𝑇𝑘 ]. The general solution to the
variance of the overall profit (P/L) to the trader starting from optimum local-variance hedging problem is given by (see [7]),
the time of initiation till the time of maturity of the ECC. 𝑙𝑣
A risk-neutral probability measure is characterized by the Δ𝑘 = [covar𝑄 (𝑆𝑘 ∣ℱ𝑘−1 )]−1 covar𝑄 (𝑆𝑘 , 𝑉𝑘 ∣ℱ𝑘−1 ). (4)
property that the expected rate of return of any market asset
in the risk-neutral measure equals the risk-free interest rate IV. E XCHANGE OPTION
offered by the economy. As per the theory of asset pricing, In this section we consider the exchange option, provide
the risk-neutral measure determines the prices of all derivative its pricing function and derive explicit expressions for the
assets in the market. For the purpose of this exposition, let 𝑃 two quadratic hedging schemes discussed in this exposition.
denote a risk-neutral measure that is equivalent to the market An exchange option is a multi-asset option written on two
measure 𝑄 in the sense that both 𝑃 and 𝑄 are mutually ab- underlying assets, that allows the holder of the option to
solutely continuous. In addition, the discrete-time discounted exchange one asset for another. Accordingly, the payoff at time
asset price processes {𝑆¯𝑘𝑙 }𝑛𝑘=0 , 𝑙 ∈ {1, 2} are martingales 𝑇 is given by max(𝑆𝑇1 − 𝑆𝑇2 , 0) with 𝑆𝑇1 and 𝑆𝑇2 representing
under the measure 𝑃 . In other words, 𝔼𝑃 (𝑆¯𝑘𝑙 ∣ℱ𝑗 ) = 𝑆¯𝑗𝑙 , prices of the two underlying assets at terminal time 𝑇 . These

113
Back to Contents

Lemma 3.1: Consider an Exchange option with payoff given by 𝜙(𝑆𝑇1 , 𝑆𝑇2 ). Let the prices of the underlying assets 𝑆𝑡𝑖 , 𝑖 ∈ {1, 2}
follow the GBM as in (9) with 𝛿𝑘 as the time left until next hedging instance 𝑇𝑘 . The risk-neutral mean-variance hedging
strategy (2) for the trading interval (𝑇𝑘−1 , 𝑇𝑘 ], 𝑘 ∈ {1, ⋅ ⋅ ⋅ , 𝑛} is given by,
( )
𝑔𝑣 Δ1𝑘
Δ𝑘 = = 𝐴−1 𝐵, where
Δ2𝑘
( 2
)
1
(𝑆𝑘−1 )2 𝑒2𝑟𝛿𝑘 (𝑒𝜎1 𝛿𝑘 − 1) 1
𝑆𝑘−1 2
𝑆𝑘−1 𝑒2𝑟𝛿𝑘 (𝑒𝜌𝜎1 𝜎2 − 1)
𝐴 = 2 (5)
1 2
𝑆𝑘−1 𝑆𝑘−1 𝑒2𝑟𝛿𝑘 (𝑒𝜌𝜎1 𝜎2 − 1) 2
(𝑆𝑘−1 )2 𝑒2𝑟𝛿𝑘 (𝑒𝜎2 𝛿𝑘 − 1)
⎛ [ ( 2
) ( 1 )]⎞
1
𝑆𝑘−1 𝑒2𝑟𝛿𝑘 𝑉𝑘−1 𝑆𝑘−1 1
𝑒𝜎1 𝛿𝑘 , 𝑆𝑘−1
2
𝑒𝜌𝜎1 𝜎2 𝛿𝑘 − 𝑉𝑘−1 𝑆𝑘−1 2
, 𝑆𝑘−1
𝐵 = ⎝ [ ( 2
) ( 1 )]⎠
2
𝑆𝑘−1 𝑒2𝑟𝛿𝑘 𝑉𝑘−1 𝑆𝑘−1 1
𝑒𝜌𝜎1 𝜎2 𝛿𝑘 , 𝑆𝑘−1
2
𝑒𝜎2 𝛿𝑘 − 𝑉𝑘−1 𝑆𝑘−1 2
, 𝑆𝑘−1

options are used commonly in FX markets. The prices of and the underlying asset price processes {𝑆¯𝑖 }𝑛𝑘=0 , 𝑖 ∈
the underlying assets of the exchange option are typically {1, 2} are martingales in measure 𝑃 . To compute the risk-
correlated. In the case when the two underlying assets of the neutral mean-variance hedging solution for the exchange op-
exchange option follow a GBM, their dynamics is given by, tion we compute the individual elements of the matrices
[covar𝑄 (𝑆𝑘 ∣ℱ𝑘−1 )] and covar𝑄 (𝑆𝑘 , 𝑉𝑘 ∣ℱ𝑘−1 ). The product
d𝑆𝑡1 = 𝜇1 𝑆𝑡1 d𝑡 + 𝜎1 𝑆𝑡1 dℬ𝑡1 ,
[covar𝑃 (𝑆𝑘 ∣ℱ𝑘−1 )]−1 [covar𝑃 (𝑆𝑘 , 𝑉𝑘 ∣ℱ𝑘−1 )], in the case of
d𝑆𝑡2 = 𝜇2 𝑆𝑡2 d𝑡 + 𝜎2 𝑆𝑡2 dℬ𝑡2 , an exchange option, is a 2 × 1 matrix that conveys the number
where 𝜇1 and 𝜇2 are growth rates of the underlying asset of units in the underlying assets 𝑆 1 and 𝑆 2 to be held during
prices 𝑆 1 and 𝑆 2 respectively, ℬ𝑡1 and ℬ𝑡2 are correlated the trading interval (𝑇𝑘−1 , 𝑇𝑘 ]. Lemma 3.1, whose proof is
Brownian motions in measure 𝑄 with dℬ𝑡1 dℬ𝑡2 = 𝜌 for all provided in the appendix, provides the details that are required
𝑡 and 𝜎𝑖 , 𝑖 ∈ {1, 2} are constants related to the volatilities of to compute the risk-neutral mean-variance hedging solution for
the two underlying asset prices. Alternatively, the dynamics exchange option.
can be expressed as, 2) Optimum local-variance hedging: The dynamics of the
prices of the underlying assets for local-variance hedging
d𝑆𝑡1 = 𝜇1 𝑆𝑡1 d𝑡 + 𝜎1 𝑆𝑡1 d𝑊𝑡1 ,
( √ ) are as given in (6). As in the case of risk-neutral mean-
d𝑆𝑡2 = 𝜇2 𝑆𝑡2 d𝑡 + 𝜎2 𝑆𝑡2 𝜌d𝑊𝑡1 + 1 − 𝜌2 d𝑊𝑡2 , (6) variance hedging, we compute the individual elements of
conditional covariance matrices in solution (4). Recall that
with 𝑊𝑡1 and 𝑊𝑡2 as uncorrelated Brownian motions in mea- these individual conditional covariances are to be computed
sure 𝑄. A closed form price for the exchange option at any in market probability measure 𝑄. Lemma 4.1 outlines the
time 𝑡 < 𝑇 is given by, solution to the optimum local variance hedging of an exchange
𝑉𝑡 (𝑆𝑡1 , 𝑆𝑡2 ) = 𝑒−𝑟(𝑇 −𝑡) 𝑆𝑡1 𝑁 (𝑑1 ) − 𝑆𝑡2 𝑒−𝑟(𝑇 −𝑡) 𝑁 (𝑑2 ), option.
( 2) Note that when 𝜇1 = 𝜇2 = 𝑟 (or when 𝑃 = 𝑄) , the solution
ln(𝑆𝑡1 /𝑆𝑡2 ) + 𝜎2 (𝑇 − 𝑡)
𝑑1 = √ , (7) of optimum local-variance hedging becomes identical to the
𝜎 𝑇(− 𝑡) solution of risk-neutral mean-variance hedging problem.
2
ln(𝑆𝑡1 /𝑆𝑡2 ) + 𝜎2 (𝑇 − 𝑡)
𝑑2 = √ , (8) V. S UMMARY
𝜎 𝑇 −𝑡
√ The results obtained in this work make a case for the use
where term 𝜎 is computed as 𝜎 = 𝜎12 + 𝜎22 − 2𝜌𝜎1 𝜎2 and of quadratic optimal hedging strategies over the classic delta
the notation 𝑁 (⋅) stands for the cumulative normal distribu- hedging in practice. Numerical simulations suggest that these
tion. In the case of hedging an exchange option, the trading discrete-time strategies not only incur less risk when measured
portfolio consists of both the underlying assets of the exchange in terms of the loss measures such as the probability and
option. Therefore, at any time trading time 𝑇𝑘 , 𝑘 ∈ {0, ⋅ ⋅ ⋅ , 𝑛}, expectation of loss but also helps in efficient capital budgeting
the solution to the hedging problem provides the number of for regulatory purposes since they fair better in shortfall mea-
units that is to be held in the underlying assets 𝑆 1 and 𝑆 2 for sures such as VaR and CVaR. One of the main impediments in
the trading time interval (𝑇𝑘−1 , 𝑇𝑘 ]. using trading strategies of such nature is the non-availability of
1) Risk-neutral mean-variance hedging: We begin by not- closed form expressions. The general solutions such as in (2)
ing that, in the risk-neutral world, the dynamics of the under- and (4) require heavy computational resources and are time
lying assets 𝑆𝑡1 and 𝑆𝑡2 are given by, consuming. This exposition and some of our previous work
d𝑆𝑡1 = 𝑟𝑆𝑡1 d𝑡 + 𝜎1 𝑆𝑡1 d𝑊𝑡1 , [5] [7], does address this problem and provides analytical
( √ ) expressions which are well-suited for implementation. The
d𝑆𝑡2 = 𝑟𝑆𝑡2 d𝑡 + 𝜎2 𝑆𝑡2 𝜌d𝑊𝑡1 + 1 − 𝜌2 d𝑊𝑡2 , (9) solution approach rely mainly on manipulations of conditional

114
Back to Contents

Lemma 4.1: Consider an Exchange option with payoff given by 𝜙(𝑆𝑇1 , 𝑆𝑇2 ). Let the prices of the underlying assets 𝑆𝑡𝑖 , 𝑖 ∈ {1, 2}
follow the GBM as in (6) with 𝛿𝑘 as the time left until next hedging instance 𝑇𝑘 . The optimum local-variance hedging strategy
(2) for the trading interval (𝑇𝑘−1 , 𝑇𝑘 ], 𝑘 ∈ {1, ⋅ ⋅ ⋅ , 𝑛} is given by,
( )
𝑙𝑣 Δ1𝑘
Δ𝑘 = = 𝐴−1 𝐵, where
Δ2𝑘
( 2
)
1
(𝑆𝑘−1 )2 𝑒2𝜇1 𝛿𝑘 (𝑒𝜎1 𝛿𝑘 − 1) 1
𝑆𝑘−1 2
𝑆𝑘−1 𝑒(𝜇1 +𝜇2 )𝛿𝑘 (𝑒𝜌𝜎1 𝜎2 − 1)
𝐴 = 2
1 2
𝑆𝑘−1 𝑆𝑘−1 𝑒(𝜇1 +𝜇2 )𝛿𝑘 (𝑒𝜌𝜎1 𝜎2 − 1) 2
(𝑆𝑘−1 )2 𝑒2𝜇2 𝛿𝑘 (𝑒𝜎2 𝛿𝑘 − 1)
⎛ [ ( 2
)⎞
1
𝑆𝑘−1 𝑒(𝜇1 +𝑟)𝛿𝑘 𝑉𝑘−1 𝑆𝑘−1 1
𝑒(𝜇−𝑟+𝜎1 )𝛿𝑘 , 𝑆𝑘−1
2
𝑒(𝜇2 −𝑟+𝜌𝜎1 𝜎2 )𝛿𝑘
⎜ ( 1 )] ⎟
⎜ −𝑉𝑘−1 𝑆𝑘−1 𝑒(𝜇1 −𝑟)𝛿𝑘 , 𝑆𝑘−1
2
𝑒(𝜇2 −𝑟)𝛿𝑘 ⎟
𝐵 = ⎜ ⎜ 2 [ ( 2
) ⎟
⎟ (10)
⎝𝑆𝑘−1 𝑒(𝜇2 +𝑟)𝛿𝑘 𝑉𝑘−1 𝑆𝑘−1 1
𝑒(𝜇1 −𝑟+𝜌𝜎1 𝜎2 )𝛿𝑘 , 𝑆𝑘−1
2
𝑒(𝜇2 −𝑟+𝜎2 )𝛿𝑘 ⎠
( 1 )]
−𝑉𝑘−1 𝑆𝑘−1 𝑒(𝜇1 −𝑟)𝛿𝑘 , 𝑆𝑘−1
2
𝑒(𝜇2 −𝑟)𝛿𝑘

expectations. Using similar techniques, it may be possible to Let the underlying assets 𝑆𝑡1 and 𝑆𝑡2 follow GBM according
arrive at simpler solutions for quadratic hedging problems for to (6). From Ito’s Lemma, we have, for 0 ≤ 𝑡1 ≤ 𝑡2 ≤ 𝑇 ,
more advanced asset price models other than GBM and for 1 2 1 1
𝑆𝑡12 = 𝑆𝑡11 𝑒(𝜇1 − 2 𝜎1 )(𝑡2 −𝑡1 )+𝜎1 (𝒲𝑡2 −𝒲𝑡1 )
wider choices of ECCs. Such studies can further advance
2 1 1

1
1−𝜌2 (𝒲𝑡2 −𝒲𝑡2 ))
the case for the use of discrete-time trading schemes by 𝑆𝑡22 = 𝑆𝑡22 𝑒(𝜇2 − 2 𝜎2 )(𝑡2 −𝑡1 )+𝜎2 (𝜌(𝒲𝑡2 −𝒲𝑡1 )+ 2 1

practitioners.
which can be rewritten as
R EFERENCES 1 2 √
𝑆𝑡12 = 𝑆𝑡11 𝑒(𝜇1 − 2 𝜎1 )(𝑡2 −𝑡1 )+𝜎1 𝑡2 −𝑡1 𝜀1
(11)
[1] F. Black and M. Scholes, “The pricing of options and cor- √ √
(𝜇2 − 12 𝜎22 )(𝑡2 −𝑡1 )+𝜎2 1−𝜌2 𝜀
porate liabilities,” Journal of Political Economy, vol. 81, 𝑆𝑡22 = 𝑆𝑡22 𝑒 𝑡2 −𝑡1 (𝜌𝜀1 + 2)
(12)
pp. 637–659, 1973. where 𝜀𝑖 , 𝑖 ∈ {1, 2} are independent unit normal random
[2] R. C. Merton, “Theory of rational option pricing,” Bell variables. The risk-neutral pricing theory asserts that the
Journal of Economics and Management science, vol. 4, discounted price of the ECC is a martingale in measure 𝑃 .
pp. 141–183, 1973. Applying the martingale property for time 𝑇𝑘−1 to compute
[3] M. Schweizer, “Variance-optimal hedging in discrete the expected price of the ECC at time 𝑇𝑘
time,” Mathematics of Operations Research, vol. 20, pp. ( −𝑟𝛿 )
1–32, 1995. 𝔼𝑃
𝑘−1 𝑒
𝑘
𝑉𝑘 (𝑆𝑘1 , 𝑆𝑘2 ) = 𝑉𝑘−1 (𝑆𝑘−1
1 2
, 𝑆𝑘−1 ) (13)
[4] A. V. Melnikov and M. L. Nechaev, “On the mean- Computing the expectation in (13) and substituting 𝑆𝑘𝑙 , 𝑙 =
variance hedging problem,” Theor. Probab. Appl., vol. 43, 𝑙
{1, 2} in terms of 𝑆𝑘−1 we have,
no. 4, pp. 588–603, 1999. [ (
𝑒−𝑟𝛿𝑘
∫∞ ∫∞ 1

(𝑟− 12 𝜎12 )𝛿𝑘 +𝜎1 𝛿𝑘 𝜀1
[5] S. Bhat, V. Chellaboina, M. U. Kumar, A. Bhatia, and
2𝜋 −∞ −∞
𝑉 𝑘 𝑆 𝑘−1 𝑒 ,
S. Prasad, “Discrete time minimum variance hedging 1 2
√ √ 2 )
for european contigent claims,” in IEEE Conference on 2
𝑆𝑘−1 𝑒(𝑟− 2 𝜎2 )𝛿𝑘 +𝜎2 𝛿𝑘 (𝜌𝜀1 + 1−𝜌 𝜀2 )
Decision and Control, Shanghai, 2009. −1 2 −1 2
]
[6] M. Sc̈hal, “On quadratic cost criteria for option hedging,” 𝑒 2 𝜀1 𝑒 2 𝜀2 d𝜀1 d𝜀2 = 𝑉𝑘−1 (𝑆𝑘−1 1 2
, 𝑆𝑘−1 ). (14)
Mathematics of Operations Research, vol. 19, no. 1, pp. To compute the conditional covariances cov𝑄 (𝑆𝑘1 , 𝑉𝑘 ∣ℱ𝑘−1 )
121–131, 1994. and cov𝑄 (𝑆𝑘2 , 𝑉𝑘 ∣ℱ𝑘−1 ), we recall that,
[7] E. Subramanian and V. Chellaboina, “Explicit solutions
of discrete-time quadratic optimal hedging strategies for cov𝑄 (𝑆𝑘𝑙 , 𝑉𝑘 ∣ℱ𝑘−1) = 𝔼𝑄 (𝑆𝑘𝑙 𝑉𝑘 ∣ℱ𝑘−1 )
european contingent claims,” London, 2014, pp. 449–456. − 𝔼𝑄 (𝑉𝑘 ∣ℱ𝑘−1 )𝔼𝑄 𝑙
𝑘 (𝑆𝑘 ∣ℱ𝑘−1 ), 𝑙 ∈ {1, 2}.

A PPENDIX Evaluating the conditional expectation 𝔼𝑄 (𝑉𝑘 ∣ℱ𝑘−1 ) yields,


[∫ ∞ ∫ ∞ (
Proof of Lemma 4.1 The proof of Lemma 3.1 follows 1 1 1 2

𝑉𝑘 𝑆𝑘−1 𝑒(𝜇1 −𝑟)𝛿𝑘 +(𝑟− 2 𝜎1 )𝛿𝑘 +𝜎1 𝛿𝑘 𝜀1 ,
from the proof of Lemma 4.1 by setting 𝜇1 = 𝜇2 = 𝑟 (or 2𝜋 −∞ −∞
when 𝑃 = 𝑄). To derive a closed form expression for the 1 2
√ √ 2 )
2
optimum local-variance hedging of an exchange option, we 𝑆𝑘−1 𝑒(𝜇2 −𝑟)𝛿𝑘 +(𝑟− 2 𝜎2 )𝛿𝑘 +𝜎2 𝛿𝑘 (𝜌𝜀1 + 1−𝜌 𝜀2 )
−1 2 −1 2
]
explicitly compute the conditional covariances matrices in (4) 𝑒 2 𝜀1 𝑒 2 𝜀2 d𝜀1 d𝜀2 = 𝔼𝑄 (𝑉𝑘 ∣ℱ𝑘−1 ) (15)
by evaluating the necessary conditional expectations with the
help of risk-neutral pricing theory. Comparing (15) and (14) we have,

115
Back to Contents

[∫ ∞ ∫ ∞ (
1 1 1 2 √ 1 2

= 𝑆𝑘−1 𝑒(𝜇1 − 2 𝜎1 )𝛿𝑘 +𝜎1 𝛿𝑘 𝜀1
𝑉𝑘 𝑆𝑘−11
𝑒(𝜇1 −𝑟)𝛿𝑘 +(𝑟− 2 𝜎1 )𝛿𝑘 +𝜎1 𝛿𝑘 𝜀1 ,
2𝜋 −∞ −∞
1 2
√ √ 2 ) −1 2 −1 2 ]
𝑆𝑘−1 𝑒(𝜇2 −𝑟)𝛿𝑘 +(𝑟− 2 𝜎2 )𝛿𝑘 +𝜎2 𝛿𝑘 (𝜌𝜀1 + 1−𝜌 𝜀2 ) 𝑒 2 𝜀1 𝑒 2 𝜀2 d𝜀1 d𝜀2
2

1 1 2 [∫ ∞ ∫ ∞ (
𝑆𝑘−1 𝑒(𝜇1 − 2 𝜎1 )𝛿𝑘 1 1 2

= 𝑉𝑘 𝑆𝑘−1 𝑒(𝜇1 −𝑟)𝛿𝑘 +(𝑟− 2 𝜎1 )𝛿𝑘 +𝜎1 𝛿𝑘 𝜀1 ,
2𝜋 −∞ −∞
√ √ ) −1 2 √ −1 2
]
2 (𝜇2 −𝑟)𝛿𝑘 +(𝑟− 12 𝜎22 )𝛿𝑘 +𝜎2 𝛿𝑘 (𝜌𝜀1 + 1−𝜌2 𝜀2 )
𝑆𝑘−1 𝑒 𝑒 2 𝜀1 +𝜎1 𝛿𝑘 𝜀1 𝑒 2 𝜀2 d𝜀1 d𝜀2
1 [∫ ∞ ∫ ∞ (
𝑆𝑘−1 𝑒𝜇1 𝛿𝑘 1 1 2

= 𝑉𝑘 𝑆𝑘−1 𝑒(𝜇1 −𝑟)𝛿𝑘 +(𝑟− 2 𝜎1 )𝛿𝑘 +𝜎1 𝛿𝑘 𝜀1 ,
2𝜋 −∞ −∞
√ √ ) −1 √ 2 −1 2 ]
2
𝑆𝑘−1 𝑒 (𝜇2 −𝑟)𝛿𝑘 +(𝑟− 12 𝜎22 )𝛿𝑘 +𝜎2 𝛿𝑘 (𝜌𝜀1 + 1−𝜌2 𝜀2 )
𝑒 2 (𝜀1 −𝜎1 𝛿𝑘 ) 𝑒 2 𝜀2 d𝜀1 d𝜀2
1 [∫ ∞ ∫ ∞ (
𝑆𝑘−1 𝑒𝜇1 𝛿𝑘 1 1 2

= 𝑉𝑘 𝑆𝑘−1 𝑒(𝜇1 −𝑟)𝛿𝑘 +(𝑟− 2 𝜎1 )𝛿𝑘 +𝜎1 𝛿𝑘 𝜀1 ,
2𝜋 −∞ −∞
√ √ √ ) −1 √ 2 −1 2
]
(𝜇2 −𝑟)𝛿𝑘 +(𝑟− 12 𝜎22 )𝛿𝑘 +𝜎2 𝛿𝑘 (𝜌(𝜀1 −𝜎1 𝛿𝑘 )+ 1−𝜌2 𝜀2 )+𝜌𝜎1 𝜎2 𝛿𝑘
2
𝑆𝑘−1 𝑒 𝑒 2 (𝜀1 −𝜎1 𝛿𝑘 ) 𝑒 2 𝜀2 d𝜀1 d𝜀2
1 [∫ ∞ ∫ ∞ (
𝑆𝑘−1 𝑒𝜇1 𝛿𝑘 1 2 1 2
√ √
= 𝑉𝑘 𝑆𝑘−1 𝑒(𝜇1 −𝑟+𝜎1 )𝛿𝑘 +(𝑟− 2 𝜎1 )𝛿𝑘 +𝜎1 𝛿𝑘 (𝜀1 −𝜎1 𝛿𝑘 ) ,
2𝜋 −∞ −∞
√ √ √ ) −1 √ 2 −1 2 ]
(𝜇2 −𝑟+𝜌𝜎1 𝜎2 𝛿𝑘 )𝛿𝑘 +(𝑟− 12 𝜎22 )𝛿𝑘 +𝜎2 𝛿𝑘 (𝜌(𝜀1 −𝜎1 𝛿𝑘 )+ 1−𝜌2 𝜀2 )
2
𝑆𝑘−1 𝑒 𝑒 2 (𝜀1 −𝜎1 𝛿𝑘 ) 𝑒 2 𝜀2 d𝜀1 d𝜀2
( 2
)
1
= 𝑆𝑘−1 𝑒(𝜇1 +𝑟)𝛿𝑘 𝑉𝑘−1 𝑆𝑘−1 1
𝑒(𝜇1 −𝑟+𝜎1 )𝛿𝑘 , 𝑆𝑘−1
2
𝑒(𝜇2 −𝑟+𝜌𝜎1 𝜎2 )𝛿𝑘 (16)

( )
𝔼𝑄 (𝑉𝑘 ∣ℱ𝑘−1 ) = 𝑒𝑟𝛿𝑘 𝑉𝑘−1 𝑆𝑘−1 1
𝑒(𝜇1 −𝑟)𝛿𝑘 , 𝑆𝑘−1
2
𝑒(𝜇2 −𝑟)𝛿𝑘 .compute the covariance matrix 𝐴 given by,
( )
The computation of the conditional expectation cov𝑄 (𝑆𝑘1 , 𝑆𝑘1 ∣ℱ𝑘−1 ) cov𝑄 (𝑆𝑘1 , 𝑆𝑘2 ∣ℱ𝑘−1 )
𝔼𝑄 (𝑉𝑘 𝑆𝑘1 ∣ℱ𝑘−1 ) with respect the underlying asset 𝑆 1 𝐴= .
cov𝑄 (𝑆𝑘2 , 𝑆𝑘1 ∣ℱ𝑘−1 ) cov𝑄 (𝑆𝑘2 , 𝑆𝑘2 ∣ℱ𝑘−1 )
is shown in the float on top (Equation 16). The last equality
in the float follows from (14). We see that the diagonal elements are given by,
𝑄 1
The conditional expectation 𝔼 (𝑆𝑘 ∣ℱ𝑘−1 ) can be evaluated cov𝑄 (𝑆 1 , 𝑆 1 ∣ℱ ) = var𝑄 (𝑆 1 ∣ℱ
𝑘 𝑘 𝑘−1 𝑘−1 𝑘−1 )
as ( )2
𝑄 1 𝑟𝛿𝑘 1 (𝜇1 −𝑟)𝛿𝑘 1 𝜇1 𝛿𝑘
= 𝔼 ((𝑆𝑘−1 )2 ∣ℱ𝑘−1 ) − 𝔼𝑄 (𝑆𝑘−1
𝑄 1 1
∣ℱ𝑘−1 )
𝔼 (𝑆𝑘 ∣ℱ𝑘−1 ) = 𝑒 𝑆𝑘−1 𝑒 = 𝑆𝑘−1 𝑒 (17) 2
= 𝑒(2𝜇1 𝛿𝑘 +𝜎1 𝛿𝑘 ) (𝑆𝑘−1 1
)2 − 𝑒2𝜇1 𝛿𝑘 (𝑆𝑘−1
1
)2
Substituting (17), (16) and (17) into (15) we have the condi- 1 2

tional covariance cov𝑄 (𝑆𝑘1 , 𝑉𝑘 ∣ℱ𝑘−1 ) as = (𝑆𝑘−1 )2 𝑒2𝜇1 𝛿𝑘 (𝑒𝜎1 𝛿𝑘 − 1) (20)


( 2 and
1
= 𝑆𝑘−1 𝑒(𝜇1 +𝑟)𝛿𝑘 𝑉𝑘−1 𝑆𝑘−1 1
𝑒(𝜇1 −𝑟+𝜎1 )𝛿𝑘 ,
2
) cov𝑄 (𝑆𝑘2 , 𝑆𝑘2 ∣ℱ𝑘−1 ) = (𝑆𝑘−1 2
)2 𝑒2𝜇2 𝛿𝑘 (𝑒𝜎2 𝛿𝑘 − 1) (21)
2 (𝜇2 −𝑟+𝜌𝜎1 𝜎2 )𝛿𝑘
𝑆𝑘−1 𝑒
1
Lemma 4.1, follows by computing the off-diagonal element
−𝑆𝑘−1 𝑒𝜇1 𝛿𝑘 𝑒𝑟𝛿𝑘 𝑉𝑘−1 (𝑆𝑘−11
𝑒(𝜇−𝑟)𝛿𝑘 , 𝑆𝑘−1
2
𝑒(𝜇−𝑟)𝛿𝑘 ) cov𝑄 (𝑆𝑘1 , 𝑆𝑘2 ∣ℱ𝑘−1 ) as
[ (
1 (𝜇1 +𝑟)𝛿𝑘 1 (𝜇−𝑟+𝜎12 )𝛿𝑘
= 𝑆𝑘−1 𝑒 𝑉𝑘−1 𝑆𝑘−1 𝑒 , = 𝔼𝑄 (𝑆𝑘−1 1 2
𝑆𝑘−1 ∣ℱ𝑘−1 ) − 𝔼𝑄 (𝑆𝑘−1 1
∣ℱ𝑘−1 )𝔼𝑄 (𝑆𝑘−1 2
∣ℱ𝑘−1 )
)
2 (𝜇2 −𝑟+𝜌𝜎1 𝜎2 )𝛿𝑘 1 2 (𝜇1 +𝜇2 )𝛿𝑘 𝜌𝜎1 𝜎2
𝑆𝑘−1 𝑒 = 𝑆𝑘−1 𝑆𝑘−1 𝑒 (𝑒 − 1)
( )] 𝑄 2 1
= cov (𝑆𝑘 , 𝑆𝑘 ∣ℱ𝑘−1 ). (22)
1
−𝑉𝑘−1 𝑆𝑘−1 𝑒(𝜇1 −𝑟)𝛿𝑘 , 𝑆𝑘−12
𝑒(𝜇2 −𝑟)𝛿𝑘 (18)

The conditional covariance cov𝑄 (𝑆𝑘2 , 𝑉𝑘 ∣ℱ𝑘−1 ) can similarly


be evaluated as
[ ( )
2 (𝜇2 +𝑟)𝛿𝑘 1 (𝜇1 −𝑟+𝜌𝜎1 𝜎2 )𝛿𝑘 (𝜇2 −𝑟+𝜎22 )𝛿𝑘
= 𝑆𝑘−1𝑒 𝑉𝑘−1 𝑆𝑘− 1𝑒 , 𝑆𝑘2 −
1𝑒
( )]
1
−𝑉𝑘−1 𝑆𝑘−1 𝑒(𝜇1 −𝑟)𝛿𝑘 , 𝑆𝑘−1
2
𝑒(𝜇2 −𝑟)𝛿𝑘 (19)
Equations 18 and 19 constitute the terms of the matrix 𝐵 as
seen in the statement of Lemma 4.1 (Equation 10). Now we

116
Back to Contents

Optimal Static Hedging of Uncertain Future Foreign


Currency Cash Flows Using FX Forwards
Anil Bhatia and Sanjay P Bhat
TCS Innovation Labs, Tata Consultancy Services, India
anil.bhatia@tcs.com, sanjay@atc.tcs.com

Abstract—An exporter is invariably exposed to currency risk and a predetermined exchange rate. It costs nothing to enter
due to unpredictable fluctuations in the exchange rates, and it a forward contract. The party agreeing to buy the currency in
is of paramount importance to minimize risk emanating from the future assumes a long position, and the party agreeing to
these forex exposures. The currency risk problem is further
compounded if the future foreign currency receivables are un- sell the currency in the future assumes a short position. For a
certain in quantity. In this paper, we address the currency risk general introduction to various other financial instruments and
hedging problem for an uncertain foreign currency receivables. their utility in FX hedging, see [3], [4].
We consider a static hedging strategy defined by the positions In order to capture the impact of the randomness in the
in a FX forward contract, and present a minimum-risk static risk factors on the company’s profit and loss, it is necessary
hedging problem to determine an optimal hedging strategy that
minimizes conditional value at risk (CVaR) due to the loss. to quantify or summarize the effect of the risk by using an
This risk minimization problem can be solved numerically by appropriate risk measure or metric [5]–[7]. There are many
using Monte-Carlo simulations which, however, turn out to be risk measures available to quantify such risk exposure such
computationally intensive, because of the presence of two sources as expectation, variance, the probability of loss, value at risk,
of randomness. We consider bounding functions that bound the conditional value at risk etc.
CVaR due to the loss above and below, and numerically obtain
an outer estimate of the set of solutions of the minimum-risk A hedging strategy represents a trading strategy involving
static hedging problem. We present several bounding functions different types of hedges that can reduce the risk. The company
obtained under different assumptions on the risk factors. has a choice of doing nothing (no hedging), enter into a
hedging contract once (static hedging), or enter into multiple
I. I NTRODUCTION contracts at different times (dynamic hedging). The choice
Exposure to foreign exchange (FX) risk [1], [2] arises when depends on various factors including the company’s view on
companies conduct business in multiple currencies. A typical the market and service costs involved in entering hedging
scenario that such a company may encounter involves a future contracts. See [3], [4] for a detailed introduction to hedging.
receivable in foreign currency (FC) for some services/goods A minimum-risk hedging problem represents an optimization
the company exported. Any FC thus received needs to be problem in which one seeks to minimize a risk metric over a
converted to the company’s home currency (HC) at some feasible set of decision variables, which are typically used to
predetermined time. However, due to uncertain fluctuations construct a hedging portfolio.
in exchange rates, the company may incur huge losses (or FX risk hedging continues to remain an active area of
reduced profits) at the time of conversion. Specifically, if the research [1], [2], [8], [9]. Most of the existing literature focuses
value of the HC appreciates against FC by the conversion on specific risk measures such as variance or conditional
date, then the company will receive less HC. On the other value at risk [10]–[12]. The literature on foreign exchange risk
hand if FC appreciates then the company will profit from the management predominantly addresses the situation of a known
exchange rate fluctuation. Since the direction and magnitude (deterministic) cash flow. The case of an uncertain future
of these fluctuations are uncertain, the company is exposed to cash flow is addressed in [13], where the authors consider
the FX (or currency) risk. Furthermore, the magnitude of such a correlation between the amount of the foreign currency
fluctuations are significant enough to affect the company’s cash flow and the actual spot or futures exchange rate. They
profit and loss. The FX risk problem is further compounded derive the hedge ratio by minimizing the variance of a cash
if the future FC receivables are uncertain in quantity. and futures portfolio. Another aspect of uncertain cash flow,
This uncertainty in the future receivables as well the future namely the arrival of new information, is investigated in [14].
exchange rates necessitates the application of risk management The optimal hedge, using forward contracts, is determined
techniques, in particular, hedging. The company can enter into by solving a stochastic dynamic programming problem. The
different hedging contracts with some financial institution. problem of hedging price and quantity has been considered in
Such contracts may involve FX instruments, also known as other areas as well. In [15], the authors address the hedging
hedges, such as forwards, futures, swaps, and a variety of problem of a load serving entity, which provides electricity
FX options [1], [2]. A FX forward is an agreement between service at a regulated price in an electricity market with price
the two parties to exchange currencies or equivalently, to buy and quantity risk. The authors present an optimal zero-cost
or sell a particular currency at a predetermined future date hedging function by exploiting the correlation between the

978-1-5090-1671-6/16/$31.00 ©2016 IEEE


117
Back to Contents

consumption volume and spot price in the electricity market, random variables on (Ω, 𝔽, ℙ). Given 𝑌 ∈ 𝐿(Ω, 𝔽, ℙ), 𝔼[𝑌 ]
from a portfolio of forward contracts and call and put options. denote the expectation (or the expected value) of 𝑌 under the
In this paper, we address the problem of hedging future measure ℙ.
FC receivables which can be either deterministic or uncertain A financial risk may be modeled as a random variable on
in quantity. We consider a static hedging strategy defined by a suitable probability space (Ω, 𝔽, ℙ). A risk measure is a
the positions in a FX forward contract and introduce the loss mapping from 𝐿0 (Ω, 𝔽, ℙ) to ℝ. In this paper, we consider one
associated with this strategy. We consider conditional value at of the popular shortfall measures, namely, conditional value at
risk (CVaR) to capture the effect of the randomness of the risk risk (CVaR). CVaR provides the information about the average
factors on profit and loss, and present the minimum-risk static level of loss, given that the value at risk (VaR) is exceeded,
hedging problem to determine an optimal hedging strategy that where VaR provides an estimate of how much loss is likely
minimizes CVaR due to the loss. When the cash flow is deter- to occur, given normal market conditions. Mathematically, for
ministic, the loss variable has a single source of randomness, a given confidence level 𝛼 ∈ (0, 1) and a random variable
namely the exchange rate, and it becomes easier to solve the 𝑌 ∈ 𝐿(Ω, 𝔽, ℙ) having a continuous distribution function, the
risk minimization problem analytically. The uncertainty in the CVaR is defined as
cash flow introduces a second source of randomness in the loss
CVaR𝛼 (𝑌 ) = 𝔼[𝑌 ∣ 𝑌 > VaR𝛼 (𝑌 )], (1)
variable. In this case, any risk minimization problem can be
solved numerically by using Monte-Carlo simulations which, where
however, turn out to be computationally intensive, because of
the presence of two sources of randomness. VaR𝛼 (𝑌 ) = inf {𝑦 ∈ ℝ ∣ ℙ(𝑌 > 𝑦) ≤ 1 − 𝛼} . (2)
In this paper, we address minimum-risk hedging problem Let 𝑎 ∈ ℝ, and 𝑌 ∈ 𝐿(Ω, 𝔽, ℙ). Define
numerically by obtaining an outer estimate of the set of [ ]
1
solutions of the minimum-risk static hedging problem in terms 𝐺𝛼 (𝑎, 𝑌 ) = 𝑎 − 𝔼 (𝑎 − 𝑌 ) {𝑌 ≥𝑎} . (3)
of the minimum value and sublevel set of the bounding 1−𝛼
functions which bound the CVaR of loss from above and Then, CVaR𝛼 (𝑌 ) equals the value of the optimization prob-
below, respectively. We consider different bounding functions lem [16],
which are obtained by assuming that the two risk factors are CVaR𝛼 (𝑌 ) = min𝑎∈ℝ 𝐺𝛼 (𝑎, 𝑌 ) (4)
independent random variables and the random cash flow is
while VaR𝛼 (𝑌 ) equals the corresponding optimal solution
bounded, aprioi. We show that such bounding functions can
be computed either analytically or numerically. VaR𝛼 (𝑌 ) = argmin𝑎∈ℝ 𝐺𝛼 (𝑎, 𝑌 ). (5)
In Section II we introduce our notation and present pre-
CVaR is a coherent risk measure and exhibits the properties of
liminary results used in later sections. In Section III, we start
monotonicity, translational equivalence, positive homogeneity,
by introducing a static hedging problem with a deterministic
and sub-additivity [5], [17].
cash flow and present the corresponding loss function. Next,
Finally, we present a useful result, which we will use later
we extend this loss function to include the randomness in the
when we discuss the minimum risk static hedging problem.
cash flow. In Section IV we define the minimum-risk static
The result gives an outer estimate of the set of minimizers of
hedging problem, which minimizes CVaR for the loss variable.
a given function in terms of the minimum value and sublevel
Next, we present expressions for the CVaR in the case where
set of functions which bound the given function from above
the cash flow is deterministic and discuss optimal solution
and below, respectively. Given a function 𝑓 : ℝ𝑛 → ℝ, denote
to the minimum-risk static hedging problem. Thereafter, we
argmin 𝑓 = {𝑥 ∈ ℝ𝑛 ∣𝑓 (𝑥) ≤ 𝑓 (𝑦) ∀ 𝑦 ∈ ℝ𝑛 }.
consider the case of uncertain cash flow, and present different
Proposition 2.1: Let 𝑔 : ℝ𝑛 → ℝ, 𝑢 : ℝ𝑛 → ℝ and 𝑙 :
bounding functions. In Section V the results are illustrated
ℝ𝑛 → ℝ be continuous functions such that 𝑙 is convex and
through a numerical example.
𝑙(𝑥) ≤ 𝑔(𝑥) ≤ 𝑢(𝑥) ∀ 𝑥 ∈ ℝ𝑛 . If argmin 𝑙 is nonempty and
II. N OTATION AND P RELIMINARIES bounded, then argmin 𝑔 is nonempty and bounded. Further,
if 𝑚 = min𝑥∈ℝ𝑛 𝑢(𝑥), then
Let ℝ (ℝ+ ) denote the set of real (positive real) numbers.
The indicator function of subset 𝑆 of a set 𝑋 is the function argmin 𝑔 ⊆ {𝑥 ∈ ℝ𝑛 ∣ 𝑙(𝑥) ≤ 𝑚}.
𝑆 : X → {0, 1} such that
{ III. T HE L OSS VARIABLE UNDER A S TATIC H EDGING
1, 𝑥 ∈ 𝑆, S TRATEGY
𝑆 (𝑥) =
0, 𝑥 ∈ / 𝑆. Consider a company that is expecting 𝑛 units of foreign
currency at a future time 𝑇 . In this paper, we restrict our

For 𝑥 ∈ ℝ, we denote sign(𝑥) = {𝑥>0} − {𝑥<0} . attention to a static hedging strategy where the company
Let (Ω, 𝔽, ℙ) be a probability space, where (Ω, 𝔽) is a can invest only once in a FX forward contract and hold the
measurable space, 𝔽 is a 𝜎−algebra on Ω, and ℙ is a positions in the forward till 𝑇 . We assume that the forward
probability measure on (Ω, 𝔽). Let 𝐿0 (Ω, 𝔽, ℙ) denote the set contract expires at the future time T, and is initiated at the same
of random variables and 𝐿(Ω, 𝔽, ℙ) denote the set of integrable time as the start of hedging, which we denote by 𝑇0 . Let 𝐹

118
Back to Contents

denote the forward rate at time 𝑇0 of the forward contract. 1000

Let Δ ∈ ℝ represent a short position in FX forward contract 900

bought at 𝑇0 . This arrangement allows the company to sell 𝛿 800

amount of FC currency at an exchange rate of 𝐹 at time 𝑇 . 700

By entering into a forward contract to sell FC, the company 600

locks a future exchange rate without any additional cost. Let

CVaR
500

𝑋 represent the spot exchange rate at time 𝑇 . Thus one unit 400

of FC sold at time 𝑇 fetches 𝑋 units of HC. We assume that


300

𝑋 is an integrable random variable on the probability space


200

(Ω, 𝔽, ℙ).
100

At time 𝑇 , Δ forward contracts held in the hedging portfolio −10 −5 0 5


Delta
10 15 20

mature and generate a HC cash flow of Δ𝐹 . Any remaining


(𝑛 − Δ) units of FC are then spot traded at 𝑋, and generate Fig. 1: Plot of CVaR𝛼 (𝜓(𝑛, Δ)) for the loss variable 𝜓(𝑛, Δ)
an HC cash flow of (𝑛 − Δ)𝑋. Furthermore, we assume that with a deterministic cash flow 𝑛, given by (6), under a static
the company has an obligation to pay 𝑛𝐵 units of HC at time hedging strategy defined by Δ units in FX forward contract.
𝑇 , where 𝐵 is the budget rate 1 . The loss from this hedging
portfolio takes the form
𝜓(𝑛, Δ) = 𝑛𝐵 − (Δ𝐹 + (𝑛 − Δ)𝑋) A. Deterministic Cash Flow
= (𝐵 − 𝑋)(𝑛 − Δ) + (𝐵 − 𝐹 )Δ. (6) The following result provides an expression for
CVaR𝛼 (𝜓(𝑛, Δ)).
Notice that 𝜓 is a random function of the cash flow 𝑛 and Proposition 4.1: Consider the loss random variable
position Δ. Positive values of 𝜓(𝑛, Δ) represent a profit while 𝜓(𝑛, Δ) given by (6) under a static hedging strategy defined
negative values represent a loss. We will find it convenient to by Δ ∈ ℝ. Then, for 𝛼 ∈ (0, 1),
treat the amount of deterministic cash flow 𝑛 in (6) as known
and fixed, and view the loss variable as a function of Δ only. CVaR𝛼 (𝜓(𝑛, Δ)) =
In this case, we denote 𝑛𝐵 − 𝐹 Δ + CVaR𝛼 (sign(Δ − 𝑋)𝑋)∣𝑛 − Δ∣. (8)

𝐿D (Δ) = 𝜓(𝑛, Δ). Proof. The result is a direct consequence of applying the
In the case where the cash flow is random, we denote it by translational equivalence and positive homogeneity properties
𝑁 . We assume that 𝑁 ∈ 𝐿(Ω, 𝔽, ℙ). Setting 𝑛 = 𝑁 in (6), the of CVaR to the loss function 𝜓(𝑛, Δ). □
loss function 𝜓(𝑁, Δ) has two sources of randomness, that is
𝑋 and 𝑁 . We will find it convenient to treat the random cash From (8), CVaR𝛼 (𝜓(𝑛, Δ)) is a piecewise linear function
flow 𝑁 in 𝜓(𝑁, Δ) as a fixed random variable, and view the of the decision variable Δ and continuous at Δ = 𝑛. Also,
loss variable as a function of Δ only. In this case, we denote CVaR𝛼 (𝜓(𝑛, Δ)) has

1) a positive slope everywhere if 𝐹 > CVaR𝛼 (𝑋),
𝐿R (Δ) = 𝜓(𝑁, Δ). 2) a negative slope for Δ1 < 𝑛 and a positive slope for
Δ1 > 𝑛, in the case 𝐹 ∈ (CVaR𝛼 (−𝑋), CVaR𝛼 (𝑋)),
IV. M INIMUM -R ISK S TATIC H EDGING P ROBLEM
and
The minimum-risk static hedging problem is the problem of 3) a negative slope everywhere if 𝐹 < CVaR𝛼 (−𝑋).
determining an optimal static hedging strategy that minimizes In the unconstrained case Λ = ℝ, if 𝐹 ∈
a given risk measure due to the loss. Specifically, for CVaR (CVaR𝛼 (−𝑋), CVaR𝛼 (𝑋)) the minimum value of
measure due to the loss variable 𝐿(Δ), the minimum-risk static CVaR𝛼 (𝜓(𝑛, Δ)) is 𝑛(𝐵 − 𝐹0,𝑇 ) and is achieved at
hedging problem can defined as Δ = 𝑛.
min CVaR𝛼 (𝐿(Δ)). (7)
Δ∈Λ B. Random Cash Flow
where 𝐿(Δ) = 𝐿D (Δ) in the case of a deterministic cash flow In the case of random cash flow, the loss variable 𝐿R (Δ)
and 𝐿(Δ) = 𝐿R (Δ) in the case of a random cash flow, Λ ⊆ ℝ depends on two random variables, and it is difficult to get
denotes the set of feasible static hedging strategies 2 . an expression for CVaR𝛼 (𝐿R (Δ)). In this case, any risk
In this section we present expressions for CVaR due to minimization problem can be solved numerically by using
loss from deterministic and random cash flow, and discuss Monte-Carlo simulations which, however, turn out to be com-
the solution to the minimum-risk hedging problem (7). putationally intensive, because of the presence of two sources
of randomness, namely 𝑁 and 𝑋. A cheaper alternative,
1 Typically the budget rate 𝐵 is determined by the company once a year
which we pursue, is to use Proposition 2.1 to obtain an outer
based on many factors such as costs and profit expectations.
2 The feasible set Λ is chosen by the company based on its risk appetite, estimate of the set of solution of (7). This, however, require
risk tolerance, and regulatory constraints. bounding functions that bound CVaR𝛼 (𝐿R (Δ)) above and

119
Back to Contents

below. Hence, we next turn our attention to obtaining such However, by applying translational equivalence, positive ho-
bounding functions. mogeneity, and sub-additivity properties of CVaR, it is pos-
Throughout the rest of the paper, we assume that 𝑋 and sible to obtain bounding function for CVaR𝛼 (𝜓L (Δ)) and
𝑁 are independent random variables. This assumption is CVaR𝛼 (𝜓U (Δ)). For instance, for each Δ ∈ ℝ, it is easy
reasonable since the spot rate at 𝑇 does not influence the to use (12)-(13) to show that
amount of the realized FC cash flow the company will receive.
CVaR𝛼 (𝜓L (Δ)) ≥ CVaR𝛼 (𝜓(𝑛1 , Δ) {𝐵−𝑋𝑇 >0} )
Define 𝑔 : ℝ × ℝ+ × ℝ → ℝ by 𝑔(𝑎, 𝑚, Δ) =
𝐺(𝑎, 𝜓(𝑚, Δ)), where the function 𝐺(⋅, ⋅) is given by (3). − CVaR𝛼 (−𝜓(𝑛2 , Δ) {𝐵−𝑋𝑇 <0} ), (16)
Applying the independence lemma [18, Lem.2.3.4], we have and
𝐺(𝑎, 𝐿R (Δ)) CVaR𝛼 (𝜓U (Δ)) ≤ CVaR𝛼 (𝜓(𝑛2 , Δ) {𝐵−𝑋𝑇 >0} )
[ [ ]]
1 + CVaR𝛼 (𝜓(𝑛1 , Δ)
= 𝔼 𝔼 𝑎− (𝑎 − 𝐿R (Δ)) {𝐿R (Δ)≥𝑎} ∣𝑁 {𝐵−𝑋𝑇 <0} ). (17)
1−𝛼
= 𝔼[𝑔(𝑎, 𝑁, Δ)]. (9) As a result, we have
CVaR𝛼 (𝐿R (Δ)) ≥ CVaR𝛼 (𝜓L (Δ)) ≥ 𝐶l (Δ), (18)
As 𝑔 is convex in its second argument, Jensen’s inequality
implies that 𝔼[𝑔(𝑎, 𝑁, Δ)] ≥ 𝑔(𝑎, 𝔼[𝑁 ], Δ). Hence, we have and
𝐺(𝑎, 𝐿R (Δ)) ≥ 𝐺(𝑎, 𝜓(𝔼[𝑁 ], Δ)), ∀Δ ∈ ℝ. (10) CVaR𝛼 (𝐿R (Δ)) ≤ CVaR𝛼 (𝜓U (Δ)) ≤ 𝐶u (Δ) (19)
Minimizing (10) over 𝑎 leads to where 𝐶l (⋅) and 𝐶u (⋅) represent the right hand sides of (16)
and (17), respectively. The functions 𝐶l and 𝐶u in (18)-
CVaR𝛼 (𝐿R (Δ)) ≥ CVaR𝛼 (𝜓(𝔼[𝑁 ], Δ)), ∀Δ ∈ ℝ. (11) (19) have expressions and can be computed analytically for
Note that the bounding function provided by the right hand a hedging strategy defined by Δ.

side of the inequality (11) can be easily computed using (8). To obtain yet another upper bounding function, let 𝜆1 =
𝑛2 −𝑁 △ 𝑁 −𝑛1
The convexity property of CVaR and the form of 𝜓(𝔼[𝑁 ], Δ)) 𝑛2 −𝑛1 and 𝜆2 = 𝑛2 −𝑛1 . Note that 𝜆1 + 𝜆2 = 1 and 𝑁 =
assures that the right hand side of (11) is convex in Δ. 𝜆2 𝑛2 + 𝜆1 𝑛1 . The loss 𝐿R (Δ) may be rewritten as 𝐿R (Δ) =
Next, we restrict ourselves to the special case where the 𝜆1 𝜓(𝑛1 , Δ)+𝜆2 𝜓(𝑛2 , Δ). Applying the Independence lemma
random cash flow 𝑁 is bounded, a priori. Let 𝑛1 , 𝑛2 ∈ ℝ be a [18, Lem.2.3.4], we have
lower and an upper bound on 𝑁 , respectively. Define the best
𝐺(𝑎, 𝐿R (Δ))
and the worst case loss variables as

= 𝔼[𝑔(𝑎, 𝑁, Δ)]
𝜓L (Δ) = = 𝔼[𝑔(𝑎, 𝜆1 𝑛1 + 𝜆2 𝑛2 , Δ)]
𝜓(𝑛1 , Δ) {𝐵−𝑋>0} + 𝜓(𝑛2 , Δ) {𝐵−𝑋<0} , (12) ≤ 𝔼[𝜆1 𝑔(𝑎, 𝑛1 , Δ) + 𝜆2 𝑔(𝑎, 𝑛2 , Δ)]
and = 𝔼[𝜆1 ]𝑔(𝑎, 𝑛1 , Δ) + 𝔼[𝜆2 ]𝑔(𝑎, 𝑛2 , Δ)
△ = 𝔼[𝜆1 ]𝐺(𝑎, 𝜓(𝑛1 , Δ)) + 𝔼[𝜆2 ]𝐺(𝑎, 𝜓(𝑛2 , Δ)). (20)
𝜓U (Δ) =
𝜓(𝑛2 , Δ) {𝐵−𝑋>0} + 𝜓(𝑛1 , Δ) {𝐵−𝑋<0} . (13) Minimizing over 𝑎 in (20) leads to

Observe that (12) combines scenarios where the exchange CVaR𝛼 (𝐿R (Δ)) ≤ min𝑎 [𝔼[𝜆1 ]𝐺(𝑎, 𝜓(𝑛1 , Δ))
rate falls (rises) with scenarios where the random cash flow +𝔼[𝜆2 ]𝐺(𝑎, 𝜓(𝑛2 , Δ))] , ∀Δ ∈ ℝ.
assumes its lowest (highest) values. The random variable (12) (21)
thus represents the lowest possible value that the loss random
variable 𝐿R (Δ) can assume in any actual scenario for a While an analytical expression for the bounding function
given Δ. Similarly, the random variable 𝜓U (Δ) represents the provided by the right hand side of (21) is not available, the
highest possible value that the loss random variable 𝐿R (Δ) bounding function can be computed numerically.
can assume in any actual scenario for a given Δ. Thus we V. N UMERICAL R ESULT
have
We use the geometric Brownian model [19] to simulate
𝜓L (Δ) ≤ 𝐿R (Δ) ≤ 𝜓U (Δ), ∀Δ ∈ ℝ. (14) 100, 000 paths of the FX rates 𝑋. The time to maturity was
set at 𝑇 = 152 days. The home and the foreign growth
Now using the monotonicity property of CVaR, we have rates are such that the difference between the two is 5%.
CVaR𝛼 (𝜓L (Δ)) ≤ CVaR𝛼 (𝐿R (Δ)) ≤ CVaR𝛼 (𝜓U (Δ)) (15) With 𝛼 = 0.99, CVaR𝛼 (𝑋) and CVaR𝛼 (−𝑋) are calculated
(using MC) to be 163.32 and −8.81 respectively. The forward
for each Δ ∈ ℝ. Equation (15) gives bounding functions for rate 𝐹 is chosen to be 46.24 and the budget rate 𝐵 is set
CVaR𝛼 (𝐿R (Δ)). Unfortunately, it is difficult to find analyt- at 47. Note that 𝐹 ∈ (CVaR𝛼 (−𝑋), CVaR𝛼 (𝑋)). Figure
ical expressions for CVaR𝛼 (𝜓L (Δ)) and CVaR𝛼 (𝜓U (Δ)). 1 illustrates CVaR𝛼 (𝜓(𝑛, Δ)) as a function of position Δ

120
Back to Contents

with a deterministic cash flow 𝑛. In the unconstrained case, 1600

since 𝐹 ∈ (CVaR𝛼 (−𝑋), CVaR𝛼 (𝑋)), the minimum value 1400


CVaR(Psi(E[N],Delta))
CVaR(Psi (Delta))

𝑛(𝐵 − 𝐹0,𝑇 ) = 7.50 for CVaR𝛼 (𝜓(𝑛, Δ)) is achieved at


u
CVaR(Cu(Delta))
CVaR(Lu)

Δ = 𝑛 = 10 (refer Figure 1).


1200
CVaR(Psi (Delta))
l

Figure 2 illustrates how Proposition 2.1 can be used along 1000

with bounding functions (11) and (21) to obtain a range of

CVaR
800

Δ containing the solution to the minimum-risk static hedging 600

problem (7) in the case of an uncertain cash flow. To do this, 400

choose 𝑙 and 𝑢 in Proposition 2.1 to be the functions of Δ m

200
appearing on the right hand side of (11) and (21), respectively.
The function 𝑙 is convex. For our choice of parameters, the 0
70 75 80 85 90
n
1
95
d2
n
2
100 105 110

d1 Delta

horizontal dashed line in Figure 2 indicates the minimum


value 𝑚 = 369.8 of the upper bounding function 𝑢. This line Fig. 2: Plots of the bounding function given by (11), (15),
intersects the plot of the convex lower bounding function 𝑙 at (19), and (21) as a function of Δ.
point Δ = 𝑑1 and Δ = 𝑑2 , where 𝑑1 = 87 and 𝑑2 = 97.5.
Proposition 2.1 now lets us conclude the solution of minimum-
risk static hedging problem (7) must lie in the interval [𝑑1 , 𝑑2 ]. [8] J. Y. Campbell, K. S. Medeiros, and L. M. Viceira, “Global currency
VI. CONCLUSION

In this paper, we addressed the currency risk hedging problem for future FC receivables which can be either deterministic or uncertain in quantity. We considered a static hedging strategy defined by positions in an FX forward contract only, and introduced the loss associated with this strategy. We used conditional value at risk (CVaR) to capture the effect of the randomness of the risk factors on profit and loss, and presented the minimum-risk static hedging problem to determine an optimal hedging strategy that minimizes the CVaR of the loss. For a deterministic cash flow, we derived an analytical expression for the CVaR and observed that the risk minimization problem is then easy to solve analytically. To solve the risk minimization problem for a random cash flow, we considered bounding functions that bound the CVaR of the loss from above and below. We obtained different bounding functions by assuming independence between the two risk factors and by considering the random cash flow to be bounded. The results in this paper make no assumptions about the underlying probability distributions. Finally, we used these bounds to obtain an outer estimate of the set of minimizers of the minimum-risk static hedging problem.

The case of the minimum-risk static hedging problem with other hedging instruments will be considered in a future paper. Furthermore, the current work will also be extended to the dynamic hedging case.

REFERENCES
[1] J. J. Stephens, Managing Currency Risk using Financial Derivatives. England: Wiley, 2001.
[2] H. Xin, Currency Overlay: A Practical Guide. Risk Books, 2003.
[3] P. Wilmott, Paul Wilmott Introduces Quantitative Finance, 2nd ed. Wiley, 2007.
[4] J. C. Hull, Options, Futures and Other Derivatives, 7th ed. Prentice Hall, 2008.
[5] P. Artzner, F. Delbaen, J. Eber, and D. Heath, "Coherent measures of risk," Mathematical Finance, vol. 9, pp. 203–228, 1999.
[6] C. Alexander, Market Risk Analysis IV: Value-at-Risk Models. John Wiley and Sons, 2009.
[7] K. Dowd, Measuring Market Risk. John Wiley and Sons, 2005.
[8] J. Y. Campbell, K. S. Medeiros, and L. M. Viceira, "Global currency hedging," J. Finance, vol. 65, pp. 87–121, 2010.
[9] J. Glen and P. Jorion, "Currency hedging for international portfolios," vol. 48, pp. 1865–1886, 1993.
[10] N. Topaloglou, H. Vladimirou, and S. A. Zenios, "CVaR models with selective hedging for international asset allocation," Journal of Banking and Finance, vol. 26, pp. 1535–1561, 2002.
[11] K. Volosov, G. Mitra, F. Spagnolo, and C. Lucas, "Treasury management model with foreign exchange exposure," Computational Optimization and Applications, vol. 32, pp. 179–207, 2005.
[12] A. Bhatia, V. Chellaboina, and S. Bhat, "A Monte Carlo approach to currency risk minimization," in International Simulation Conference of India, 2012. [Online]. Available: http://www.ieor.iitb.ac.in/files/ieorweb/ISCI2012/index.html
[13] J. Kerkvliet and M. Moffett, "The hedging of an uncertain future foreign currency cash flow," The Journal of Financial and Quantitative Analysis, vol. 26, pp. 565–578, 1991.
[14] M. R. Eaker and D. Grant, "Optimal hedging of uncertain and long-term foreign exchange exposure," Journal of Banking and Finance, vol. 9, pp. 221–231, 1985.
[15] Y. Oum and S. Oren, "Optimal static hedging of volumetric risk in a competitive wholesale electricity market," Decision Analysis, vol. 7, pp. 107–122, 2010.
[16] R. Rockafellar and S. Uryasev, "Optimization of conditional value-at-risk," Journal of Risk, vol. 2, pp. 21–41, 2000.
[17] G. Pflug, "Some remarks on the value-at-risk and the conditional value-at-risk," in Probabilistic Constrained Optimization: Methodology and Applications, S. Uryasev, Ed. Kluwer, 2000.
[18] S. E. Shreve, Stochastic Calculus for Finance II: Continuous-Time Models. Springer, 2004.
[19] P. Glasserman, Monte Carlo Methods in Financial Engineering. Springer, 2003.


A Neural Network Approach to the Operational Strategic Determinants of Market Value in High-tech Oriented SMEs

Jooh Lee, William G. Rohrer College of Business, Rowan University, 201 Mullica Hill Rd., Glassboro, NJ 08028, USA (lee@rowan.edu)
He-Boong Kwon, Hasan School of Business, Colorado State University-Pueblo, 2200 Bonforte Blvd., Pueblo, CO 81001, USA (heboong.kwon@csupueblo.edu)

Abstract - The purpose of this paper is to present an adaptive performance model using a backpropagation neural network (BPNN) to scrutinize the impact of strategic factors on firm performance, especially within high-tech SMEs in the U.S. The novel design approach introduced in this paper segments SMEs into high and low performance groups and captures the different impact patterns of strategic variables (e.g. R&D). This paper explores both the explanatory and predictive capacity of a neural network and extends its application to the measurement of relative efficiency and the subsequent prediction of potential improvement. This paper demonstrates the effectiveness of a neural network for SME analysis and its potential advancement toward performance modeling as an adaptive decision support tool.

I. INTRODUCTION

SMEs, as compared to large enterprises, are more stringent on their business performance due to a lack of capital and versatile resources. As a consequence, SMEs, especially in the technology oriented industry, strive to minimize slack in their operations. In other words, those SMEs are more conscious of the efficient and effective utilization of limited resources for successful business outcomes (Blackwell et al., 2006; Hsu et al., 2011). In this regard, prudent performance management is a crucial managerial imperative not only for short term profitability but also for long term sustainable operations. For these efficiency motivated business entities, a sound methodology to support systematic performance measurement has been a practical demand to support the decision making process (Ciampi and Gordini, 2013; Garengo et al., 2005).

This paper presents an innovative performance modeling approach using artificial neural networks (ANNs) to fill the research gap. ANN is an intelligent analytical method capable of modeling complex and nonlinear patterns of data and their associations (Fausett, 1994; Rumelhart et al., 1986). The assumption-free nature of the model and its adaptive learning capacity draw increasing interest in ANNs as an alternative to conventional parametric methods, with greater potential for further advancement in predictive and explorative analysis. ANNs are a favorable choice especially in the presence of unknown complex relationships between variables. This paper capitalizes on these prominent features of the backpropagation neural network (BPNN), a supervised ANN model, and presents a unique empirical model to scrutinize the impact of strategic factors on SME performance under differing efficiency levels. This is a meaningful extension of BPNN applications and a significant addition to the existing literature. The model proved effective in both explanatory and predictive analysis, capturing the differential impact of variables in each category and predicting potential improvement for SMEs in the Low performance group. In so doing, this paper provides a methodological breakthrough for performance analysis through the integrity of BPNN and enhances managerial intuition toward the strategic management of firm resources, especially for SMEs.

II. DESIGN OF THE EMPIRICAL MODEL

The proposed empirical model involves three stages of sequential BPNN processes as depicted in Figure 1. In this empirical study, the model is designed to investigate performance patterns of SMEs in the high-tech oriented industry. Especially, this paper intends to explore varying patterns of SMEs in different performance categories, High (above average) and Low (below average), as differentiated by relative efficiency. For this task, BPNN models are implemented in a sequential manner.

The first step extracts the average performance pattern of all SMEs in consideration by exploiting BPNN's characteristic of learning central tendency. In this stage, the performance of BPNN (BPNN 1) is compared with MR before progressing to the next stage. The 2nd stage is centered on refinement of BPNN as a key process in this modeling. After sophistication and verification of performance, the model (BPNN 2) separates SMEs into two categories, High and Low, based upon relative performance. Indeed, BPNN outcomes provide a significant means to assess the relative efficiency of each SME and its rank, eventually enabling further extension of this study to the analysis of different performance patterns in the subsequent stage. Finally, the third stage includes the implementation of two separate BPNN models (BPNN_H and BPNN_L) taking inputs from the two previously separated stages. As a result, the BPNN models learn the performance patterns of each category and capture the impact of strategic variables.

For this empirical study, seven variables were chosen: return on assets (ROA), R&D intensity (RND), current ratio (CRatio), sales growth (Growth), inventory turnover (InvTO), average collection period (ACP), and market value in natural log (LnMV). The first six variables were considered key strategic factors which impact market performance (LnMV).

As a preliminary step toward the empirical analysis, correlation analysis was used to find the correlations between variables, as summarized in Table 1. The result shows a significant relationship between four independent variables (e.g. ROA, RND, CRatio, and Growth) and the single dependent variable (LnMV).

Table 1. Correlation Analysis
---------------------------------------------------------------------
Variable      1        2        3        4        5       6
---------------------------------------------------------------------
1. ROA
2. RDINT    .29***
3. CRatio   -.05     .22**
4. Growth    .02     .10      .03
5. InvTO     .07     .02     -.15*    -.01
6. ACP      -.09    -.01     -.13     -.06     .02
7. LnMV     .21**   .39**    .32**    .66***   .09    -.03
---------------------------------------------------------------------
Note: *P < 0.05; **P < 0.01; ***P < 0.001

III. RESULTS OF EMPIRICAL ANALYSIS

Analysis of average performance.
As a first step to assess the impact of strategic factors on the market performance of SMEs, multiple regression (MR) analysis was conducted first and compared with BPNN outputs prior to exploring the characteristics of SMEs in the two different categories, high and low performance. The MR analysis is conducted to serve as a basis against which to compare the outputs of BPNN. Both models generate valuable information on the patterns of overall SMEs in terms of average performance. For these two different methods, six input variables (e.g., ROA, RND, CRatio, Growth, InvTO, and ACP) were paired with a single output, LnMV. Table 2 summarizes the results of the MR model. The model generated an R2 of 0.638 and all variables except for ACP were significant. Growth is the most significant variable, followed by CRatio, RDINT, ROA, InvTO, and ACP.

Table 2. Results of Multiple Regression Analysis
----------------------------------------------------------------
Results     β (Std. Error)      V.I.F.
----------------------------------------------------------------
Constant    1.300 (0.273)
ROA         0.700 (0.268)*      1.127
RND         1.147 (0.302)**     1.182
CRatio      0.149 (0.028)***    1.113
Growth      0.028 (0.002)***    1.015
InvTO       0.021 (0.127)*      1.030
ACP         0.003 (0.003)       1.033
R2 = 0.6382; Adj. R2 = 0.6221; F-Ratio = 40.269***
----------------------------------------------------------------
Note: *P < 0.05; **P < 0.01; ***P < 0.001

Following MR, BPNN was employed utilizing the same data set. For this experiment, the data set was split in a 7:3 ratio into training and test subsets, where the test data was used to prevent over-fitting of the model during the network training process. In so doing, both the BPNN and MR models were built on the same number of decision making units (DMUs), 144 SMEs, thus facilitating comparison of the two different methods. The resulting BPNN model has a 6-10-1 structure with 6 (10, 1) input (hidden, output) neurons, respectively. Then, a comparative prediction analysis was conducted for both the MR and BPNN models using various performance metrics such as the correlation between actual and predicted values (Pearson R), mean absolute error (MAE), mean absolute percentage error (MAPE), and the number of DMUs with less than (LT) 25% prediction error. Both models show high prediction performance; however, BPNN demonstrates superior outcomes across all performance metrics.

Besides the predictive analysis of both models, the impact of the independent variables on the output was assessed. In this measure, relative variations of the BPNN output in accordance with input changes were measured as a partial derivative of the output with respect to each input. These results were then compared with the MR output, the normalized Beta (standardized coefficients). Figure 1 visualizes the comparative impact of each variable in each method.

Figure 1. Impact of independent variables
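The pipeline just described (a 7:3 split of the 144 DMUs, a 6-10-1 network, and a comparison against MR on Pearson R, MAE, MAPE, and the share of DMUs with less than 25% prediction error) can be sketched as follows. This is an illustrative reconstruction in Python using scikit-learn, not the authors' NeuralWare implementation; X (a 144 x 6 array of the strategic variables) and y (LnMV) are assumed to be loaded elsewhere.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

def fit_and_compare(X, y, seed=0):
    """Train a 6-10-1 BPNN and an MR model on the same DMUs and compare them."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    scaler = StandardScaler().fit(X_tr)

    bpnn = MLPRegressor(hidden_layer_sizes=(10,),   # 6-10-1 structure
                        activation="logistic",      # sigmoid units, typical of BPNN
                        solver="adam", max_iter=5000,
                        early_stopping=True,        # holdout guard against over-fitting
                        random_state=seed)
    bpnn.fit(scaler.transform(X_tr), y_tr)

    mr = LinearRegression().fit(X_tr, y_tr)

    def metrics(y_true, y_pred):
        err = y_true - y_pred
        ape = np.abs(err) / np.abs(y_true)
        return {"PearsonR": np.corrcoef(y_true, y_pred)[0, 1],
                "MAE": np.abs(err).mean(),
                "MAPE": 100 * ape.mean(),
                "LT25%": 100 * (ape < 0.25).mean()}  # share of DMUs under 25% error

    return {"BPNN": metrics(y_te, bpnn.predict(scaler.transform(X_te))),
            "MR":   metrics(y_te, mr.predict(X_te))}
```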


Interestingly enough, the relative impact of the variables shows similar patterns in both models. As in the MR experiment, BPNN identifies Growth as the most impactful variable, followed by CRatio and RND. InvTO and ROA are less impactful and ACP shows minimal impact, which is consistent with MR in that ACP is the only insignificant variable. Overall, besides superior predictive capacity, the BPNN model demonstrates sound explanatory power, thus preserving its value for further progression of the method for strategic performance analysis.

Categorization of SMEs (High vs. Low).
The superior predictive power of BPNN further prompted refinement of the model for the prediction experiment in exploring performance patterns of SMEs in different performance categories. As a prediction model to categorize SMEs into two performance groups, the input preprocessing function of the software package (Neuralware, 2003) was utilized and the data set was split into training (68%), test (17%), and validation (15%) subsets.

BPNN 2 plays a crucial role in differentiating SMEs into two groups: High (above-average) and Low (below-average) performers. By using multiple inputs and an output, the BPNN model forms a nonlinear separation surface, which is an approximation of the central tendency or general pattern of the given data set. Consequently, a positive prediction error between actual and predicted value indicates High performance, while a negative one indicates Low performance.

Modeling differential performance (High vs. Low)
The BPNN 2 model produced meaningful outcomes on the characteristics of SMEs in terms of average performance and the segmentation of SMEs based upon efficiency. However, further elaboration needs to be made to probe into the patterns of SMEs at different performance levels. In fact, this is a significant research agenda to unveil the dynamic impact of strategic factors at varying performance levels. The literature shows no evidence of this attempt, especially within the SME context. Exploiting the aforementioned features of artificial neural networks, this pilot research extends the empirical experiment to further explore the performance patterns of High (Efficient) and Low (Inefficient) SMEs in particular. In this advancement, ACP, which showed a minimal impact in BPNN and was not significant in MR, was eliminated for the parsimony of the model. For each group of 73 High and 71 Low SMEs, subsequent BPNN models, BPNN H and BPNN L, were built. Indeed, the result shows R above 0.94 and MAPE less than 8%, with the majority of DMUs (more than 95%) presenting less than 20% error in both models.

By building models on subsets of SMEs (High or Low) separated by the prior BPNN model (BPNN 2), both the BPNN H and BPNN L models take on inputs with less fluctuation. In other words, the data sets preserve better monotonicity, which is an important aspect for stable learning of BPNN and a consequently robust model (Archer and Wang, 1993; Pendharkar, 2005). In so doing, the two subsequent BPNN models learn the central tendency of SMEs in High and Low performance.

Figure 2. Prediction error

Figure 2 is a visual representation of the improvement in terms of prediction errors as compared to the result of BPNN 2 built on all SMEs, High and Low. The figure clearly shows the centralized learning property of BPNNs and far smaller error scales for both High and Low SMEs, with most DMUs under 20% error.

The empirical analysis presented so far can be summarized as a sequential BPNN process which analyzes the average performance of SMEs with subsequent advancement into hierarchical analysis of the two performance patterns. The promising outcomes of the sequential BPNN models indicate successful estimation of production functions of SMEs which capture nonlinear input-output relationships. In a black-box type learning system such as neural networks, responses to changing inputs can provide an effective means to look into the functional behavior of each input and output variable. Therefore, for the analysis of the impact of each input variable on the model output, variations in BPNN outputs were observed in accordance with sequential changes of each input. Figure 3 visualizes the relative importance of the variables in both models (BPNN H and BPNN L), which represents output variations on a uniform change (e.g. 10% of a standard deviation) of each input. The result clearly demonstrates the fruitful outcomes of the proposed neural network approach, as evidenced by the differing impact of strategic variables in High and Low SMEs.
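The two post-processing steps described above, classifying SMEs as High or Low by the sign of the prediction error and measuring relative importance through output variations under a uniform input change, can be sketched as follows. This is a hypothetical Python illustration against any fitted model exposing a predict method; the function names are assumptions, not the authors' code.

```python
import numpy as np

def split_high_low(model, X, y):
    """Positive prediction error (actual above the fitted central tendency)
    marks a High performer; a negative error marks a Low performer."""
    residual = y - model.predict(X)
    return residual > 0                    # boolean mask: True = High, False = Low

def relative_importance(model, X, step=0.10):
    """Mean output variation when each input is nudged by `step` standard
    deviations: a finite-difference stand-in for the partial derivative of
    the output with respect to that input."""
    base = model.predict(X)
    impact = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += step * X[:, j].std()   # uniform change of one variable
        impact.append(np.mean(model.predict(Xp) - base))
    impact = np.array(impact)
    return impact / np.abs(impact).max()   # scale the top variable to 1.0
```

Scaling by the largest absolute impact expresses each variable's importance relative to the most influential one, which matches how the group comparisons are reported in the next section; the signed values also expose sign flips between the two groups.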


Figure 3. Relative importance of variables
Admitting that Growth is the most impactful input in both performance categories, a notable distinction between High and Low SMEs can be observed in the relative importance of RND. In Low SMEs, RND is the 3rd most important variable after Growth and CRatio. However, in High SMEs, RND is a crucial factor comparable to Growth (94%), as contrasted with 32% in Low SMEs. Furthermore, the result explicitly shows increasing returns on RND in High SMEs over Low SMEs (e.g. 1.93% vs. 1.32%), indicating a more than 46% greater increase of output as compared to Low SMEs upon an equal increase of input. It is a significant finding obtained from this empirical study. RND has strategic importance, positively impacting long term performance, and is considered a source of technological advancement and scientific innovation leading to the sustained competitive advantage of firms (Morby and Reithner, 1990; Tubbs, 2007; Verma and Sinha, 2002; Wang et al., 2013). From this perspective, the higher impact of RND in High performing SMEs is meaningful. As a pilot attempt to differentiate the variables' impact on SMEs at different performance levels, the result provides managerial insights on managing strategic variables such as R&D. In contrast, InvTO is the least impactful input, followed by ROA, in both groups of SMEs. However, the present approach captures the opposite impact of the variable: negative in Low SMEs but marginally positive in High SMEs.

IV. CONCLUSION AND FOLLOW-UP

This paper presents a salient application of BPNNs to model SME performance. Besides the predictive capacity of BPNNs, the explanatory power of BPNN, commonly noted as a black-box type system, was explored. As a nonlinear model, BPNN successfully captures implicit functional relationships of the key input and output vectors of SMEs and proves its superior prediction accuracy over linear regression models (Das and Datta, 2007; Sexton et al., 2003; Wong and Chan, 2015). The present approach uniquely explores patterns of SMEs at different performance levels using sequential BPNNs. This empirical progress is a valuable addition to the extant literature which will inspire further exploration of BPNN in the stream of efficiency analysis. In addition, the differential impact pattern of RND according to performance level in SMEs is a significant empirical finding obtained in this research. In sum, the contribution of this paper is its distinctive methodological advancement to pioneer an empirical performance model exploring strategic factors in SMEs and beyond.

(References are available from the authors upon request)


Green Banking: A proposed model for green housing loan

Alexander V. Gutierrez
Our Lady of Fatima University
College of Business and Accountancy
alexv_gutierrez@yahoo.com

Abstract—Climate change has become one of the most critical factors affecting our environment. Every stakeholder must initiate activities that will save our planet. For banks, the Green Banking Initiative (GBI) is a way to contribute in the face of the deteriorating environment. This study proposes a green housing loan model for banks to be used for their proactive contribution to saving our environment and to encourage borrowers to do their own share. It uses content analysis of results in the literature as the basis for the proposed model. The study concludes that if more banks offer green housing loans, it will widen the scope of their client base; for the borrowers, it will give savings both on the cost of borrowing and on energy efficiency. It will also decrease carbon emissions.

Keywords—Green Banking; Green Loans; Climate Change; Energy Efficiency

I. INTRODUCTION

In the recent 21st Session of the Conference of the Parties (COP21) to the UN Framework Convention on Climate Change (UNFCCC), world leaders hammered out an agreement aimed at stabilizing the climate and avoiding the worst impacts of climate change. A single typhoon alone in 2015 caused an estimated damage of 11 billion pesos. As losses grow and environmental issues become a significant factor, the need for all stakeholders to do their part in keeping the environment from deteriorating further is inevitable. This includes not only businesses, governments and consumers; the role of financial institutions such as banks is also very critical. Thus the demand for banks to adopt Green Banking Initiatives is likewise important, as their contribution to the promotion of environmentally friendly undertakings depends on the failure or success of such initiatives.

Green banking means promoting environmentally friendly practices and reducing the carbon footprint of banking activities. In the Philippines, several banks have ventured with the IFC (International Finance Corporation), the private sector arm of the World Bank Group, to do sustainable energy financing. These activities will help companies adopt a more sustainable and energy efficient use of electricity in their production or consumption, thus increasing profitability. The banks likewise will expand their reach, specifically among small and medium enterprises that need financial support to lower costs and improve competitiveness. A wider reach of customers means more market opportunity for the banking sector. But despite the involvement of some of the commercial banks, there is still a greater need for other banks, such as thrift banks and rural banks, to be more proactive in promulgating green banking initiatives, as these types of banks are the ones within the reach of small and medium enterprises and home loan borrowers.

This study will try to answer the following questions: What green financial products and services are currently being offered by the different financial institutions? What are the benefits of green loans for both the banks and the borrowers?

The objective of the study is to propose a green housing loan model for lending that will be used to increase the adoption by home loan borrowers of green banking initiatives, such as investing in energy efficient equipment and construction materials. It will also highlight the important benefits that both the borrowers (companies) and lenders (banks) will have in this undertaking.

II. RELATED LITERATURE

2.1 Green Banking
The effects of climate change are increasing rapidly, which pushes all the stakeholders to do their share of protecting our planet. Banks play a critical role as an intermediary between economic development and environmental protection in promoting environmentally sustainable and socially responsible investment [1]. According to the World Bank, losses from natural disasters account for more than 0.5 percent of the Philippines' gross domestic product annually, and climate change is expected to increase these losses further. The need for green banking initiatives in the Philippines is therefore critical. Presently, several banks have partnered with the International Finance Corporation (IFC) to finance projects that will extend credit to companies promoting or adopting energy efficiency. But despite this, there is no clear-cut policy that pertains to green banking. Green loans and green deposits likewise are not widely offered.

2.2 Green Banking Initiatives
There are several green banking initiatives that a bank could do proactively. They include the following:
• Paperless statements, bills and annual reports


• Donations to conservation charities as an incentive for choosing green products
• Special line of credit to homeowners for investment in energy efficient upgrades
• Use of solar powered ATMs
• Energy-efficient branches and loans
• Providing recyclable debit cards and credit cards
• Green Loans
• Green Mortgages
• Green Deposits

In 2011, China Development Bank had lent 658 billion Yuan to energy-saving, emission-reducing projects, which accounts for 12.7% of the bank's total outstanding loans (www.banktrack.org). Loans and leases overall in the Philippines under the Sustainable Energy Finance (SEF) program of the World Bank's IFC grew 33% in 2013, the Bank of the Philippine Islands (BPI) said.[2]

2.3 Advantages of Green Banking Products and Services
There is a case for creating an efficient government interest rate discount mechanism for green loans. Also known as discounted loan interest rates, interest rate discounts refer to the government policy of offering subsidies on the interest rates of loans in certain sectors by certain proportions (partial or total) to support the development of these sectors. In the present composition of fiscal spending on energy conservation, direct government subsidies for products account for a significant share, while fiscal spending on interest rate discounts is still insignificant, with a limited scope of application.[3]

In general, green mortgages, or energy efficient mortgages (EEMs), provide retail customers with considerably lower interest rates than the market level for clients who purchase new energy efficient homes and/or invest in retrofits, energy efficient appliances or green power. Similarly, banks can also choose to provide green mortgages by covering the cost of switching a house from conventional to green power, and include this consumer benefit when marketing the product. These retail products come in different designs, some of which have been more successful than others.[3]

Some banks and financial institutions have taken initiatives, like the State Bank of India, Yes Bank and Financial Information Network and Operations (FINO), by making their branches and buildings environmentally friendly and by being conscious of the projects of the clients to whom they have given loans.[3]

2.4 Green products and services

Green Design. An outstanding example is the financing of the construction of the first green-designed building in the province of Bulacan for Baliuag University; 85 percent of this building's energy requirement comes from the sun, and it also boasts a rainwater harvesting facility.[4]

Energy Efficiency. To support the commercialization of LED lighting systems in the country, our Bank backed the renovation of energy efficient offices as well as the upgrade of the machinery and equipment of UPSPI, a company engaged in the manufacture and importation of LED lighting systems and one of the first to promote such energy saving technology in the Philippines.[4]

The challenge of this emerging dynamic presents new opportunities. New opportunities mean best possible solutions which, in turn, give us new ideas; new ideas give rise to new jobs, new companies, new industries, or old industries that are ripe for reinvention.[4]

The benefit of using a green credit card is that it is designed to donate a portion of every transaction to an environmentally friendly non-profit organization, thus contributing to funds that will help save the deteriorating environment. A green loan for home improvements applies to any person purchasing an eco-friendly product for the home; the bank provides financing for the equipment at a very low interest rate, such as 4% p.a., which is a very good deal for a person purchasing solar equipment.

A business loan is a loan intended for business purposes; your local bank extends small business loans to would-be entrepreneurs, but only after they have submitted a formal business plan. The financial institutions typically require that the individual personally guarantee the loan, which means that they probably have to put up personal assets as collateral in case the business fails.

Present requirements and criteria for housing loans:

Fixing Period    Interest Rate
1 year           5.5%
2 years          6.5%
3 years          6.5%
4 years          6.88%
5 years          6.88%

The annual interest rate on repricing for existing home loan borrowers is 7% p.a. for the fixed year.

Housing Loan
The minimum amount that can be borrowed is PHP 400,000.
The loan terms available:
• Maximum of 25 years for house and lot
• Maximum of 10 years for vacant lot, residential condominium, business loan, refinancing, or multi-purpose loans
The minimum total household income to qualify for a loan is PHP 40,000.
Maximum loan: up to 80% of the appraised value of the property.
Mode of payment: Auto Debit Arrangement (ADA) or Post-Dated Checks (PDCs).
The loan is repaid in uniform monthly amortization payments, paid in arrears (1 month after the loan is booked).


2.5 Conceptual Framework

Independent Variable (Requirements):
- Compliance with local government standards
- Double or Triple Glazing
- Solar Hot Water Heater or Heat Pump
- Water Storage Tanks (minimum 2,500 litre capacity)
- Roof and Wall Insulation
- LED lighting installed

Dependent Variable:
- Increase income of banks
- Increase savings of borrower
- Decrease carbon emission
- Conserve energy

III. METHODOLOGY
The author uses content analysis of the results in the literature to come up with the proposed model for green housing loans.

IV. RESULTS AND DISCUSSION

Proposed Model for Green Housing Loans

Interest rates: Less 1% on the first 3 years of the current market lending rate; less 2% on the 4th to 5th year of the current market lending rate; less 1% for the succeeding years of repricing up to the end of the term. For existing home loans: less 1% on the repricing rate, fixed for 1 year.
Minimum amount: PHP 400,000
Maximum loanable amount: Automatic 80% of the appraised value of the property
Loan term: Maximum of 25 years
Suitable for: Purchase of lot, house & lot, condominium unit or townhouse; takeout of existing housing loan; renovation of existing home; house construction
Mode of payment: Auto Debit Arrangement (ADA) only
Payment options: Monthly (2 months after the loan is booked)
Additional repayment: Available and unlimited
Processing fee: 50% of the processing fee will be waived

How to qualify?
Criteria 1: Your building or renovation must comply with the minimum environmental standard required in your city or municipality.
Criteria 2: You must be installing a minimum of two items from the following:
- Double or Triple Glazing: Reduces heat loss and noise transmission through windows
- Solar Hot Water Heater or Heat Pump: Reduces your reliance on the power supply for the heating of water and can result in financial savings through reduced energy bills
- Water Storage Tanks (minimum 2,500 litre capacity): Reduces your reliance on water storages, resulting in financial savings through reduced water bills
- Roof and Wall Insulation: Helps prevent temperature extremes within a dwelling; monetary savings through reduced energy bills
- LED lighting installed: Conserves energy

The borrower has estimated savings of almost P3,000.00 per annum for a minimum loan amount of P400,000.00, and total savings of more than P71,000.00 for the whole duration of the loan. There is automatic approval of 80% of the loanable amount based on the appraised value of the property. The minimum loan amount is the same for green loans as for regular loans. The loan term was fixed at 25 years regardless of the purpose of the loan. The mode of payment is ADA only. An additional month is added before the loan payment starts, and additional repayment is available and unlimited. All of the possible features can be availed only if at least two of the given criteria are met by the borrower. Half of the processing fee will be waived.

More and more banks are starting to consider lending with tools that incorporate environmental concerns as part of their requirements. The creation of a model to be used by banks as part of their products, in this case green lending, will not only help the banks in promoting green loans but will also encourage possible borrowers to avail of environmentally friendly bank products.

The cost of borrowing will be lessened by a maximum of 23%; this means a good amount of savings for the borrowers, who will have more money to spend or save. On the part of the banks, it is an opportunity to have a wider market reach, as the lower the interest rate is, the more it will attract borrowers, resulting in more clients and more interest income as well. Household adoption of any of the given criteria to avail of the green loans will be an enormous help in preserving our environment.
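The savings figures quoted above follow from comparing standard monthly amortization at the market rate with the discounted green schedule. The sketch below is a simplified Python illustration of how such savings can be estimated, not the bank's pricing engine: it assumes a hypothetical 7% p.a. base rate and applies the proposed discounts (1% off in years 1-3, 2% off in years 4-5, 1% off thereafter), re-amortizing at each yearly repricing; exact figures depend on the bank's actual base rate and repricing mechanics.

```python
def monthly_payment(balance, annual_rate, months):
    """Level monthly amortization payment for the remaining balance."""
    r = annual_rate / 12.0
    return balance * r / (1.0 - (1.0 + r) ** -months)

def total_paid(principal, years, rate_for_year):
    """Total cash paid over the loan, re-amortizing at each yearly repricing.

    rate_for_year(y) returns the annual rate in force during year y (1-based).
    """
    balance, paid, months_left = principal, 0.0, years * 12
    for y in range(1, years + 1):
        pay = monthly_payment(balance, rate_for_year(y), months_left)
        for _ in range(12):
            interest = balance * rate_for_year(y) / 12.0
            balance -= pay - interest
            paid += pay
            months_left -= 1
    return paid

# Hypothetical 7% p.a. base rate; green schedule: -1% years 1-3, -2% years 4-5, -1% after.
base = lambda y: 0.07
green = lambda y: 0.06 if y <= 3 else (0.05 if y <= 5 else 0.06)
savings = total_paid(400_000, 25, base) - total_paid(400_000, 25, green)
print(f"Estimated savings over the term: P{savings:,.0f}")
```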


Natural resources conservation is also one of the underlying principles in green banking when assessing capital/operating loans to the extracting/industrial business sector.[6]

Green loans will also become a way of fulfilling an institution's good corporate governance. This applies not only to the financial institution but also to developers and construction companies, encouraging them to be more environmentally friendly by giving emphasis to renewable energy and energy efficiency. It will give them a great sense of responsibility to adopt such an initiative and to be part of those who want to save our planet from further deterioration.

Likewise, increasing the maximum years to pay the loan from the normal 20-year maximum to 25 years will give more leeway for the borrowers to pay their loans. The 50% discount on the processing fee will not only encourage first time borrowers but will also help them save money that can be added to their deposits, which can be lent back by the bank to other prospective borrowers.

The proposed ADA-only way of paying the amortization, instead of issuing PDCs, will help save paper use. Less paper use means fewer trees to be cut. Online banking and mobile apps will be utilized for account inquiry, transaction inquiry and other transactions.

V. CONCLUSION
Banks have become a very important part of our daily lives, and their contribution to the financial system and to the economy as a whole is critical. Therefore, the promotion and adoption of green banking initiatives by the banking industry will greatly contribute to the purpose of helping save our planet. If more banks start offering green housing loans, it will definitely result in more efficient energy use. If borrowers avail of the green housing loan, they will incur savings not only in the matter of interest rates but also in their daily consumption of water and electricity as energy efficient tools are put in place.

VI. RECOMMENDATION
Green banking is still a major issue in the Philippines; though some universal banks have already collaborated with international financing groups, there is still a wider market left untouched. Likewise, the government should intensify its own initiatives to encourage banks to offer more green products and services by offering incentives such as regulatory points to banks that intensify green banking. Consumers and businesses alike still lack awareness of the term green banking; a massive promotional campaign must be adopted by the banks to highlight the benefits of going green.

REFERENCES

[1] Benedikter, R. (2011). Answers to the Economic Crisis: Social Banking and Social Finance, Spice Digest. New York: Springer.
[2] Gatdula (2014). BPI-IFC sustainable energy loans up 33%.
[3] UNEP (2007). Establishing China's Green Financial System. Detailed Recommendation 4: Strengthen Discounted Green Loans.
[4] Sharma, Sarika, Gopal (2012). A study on customers' awareness on Green Banking Initiatives in selected public and private sector banks with special reference to Mumbai.
[5] PROGED (2013). Green Finance for Micro, Small and Medium Enterprises (MSMEs) in the Philippines.
[6] Sudhalakshmi, Chinnadorai (2014). Green Banking Practices in Indian Banks.


A feasibility study and design of biogas plant via improvement of waste management and treatment in Ateneo de Manila University

S. Granada (1), R. Preto (2), T. Perez (1,3), C. Oppus (1,4), G. Tangonan (1), P. M. Cabacungan (1)
(1) Ateneo Innovation Center, Ateneo de Manila University, Quezon City, Philippines
(2) Adjunct Professor at the DIII, Faculty of Engineering, University of Pavia, Via Ferrata 1, Pavia, Italy (roberto.preto@unipv.it)
(3) Department of Environmental Science, Ateneo de Manila University, Quezon City, Philippines
(4) Department of ECCE, Ateneo de Manila University, Quezon City, Philippines

Abstract-- This paper reports the results of feasibility studies on energy production utilizing the waste produced by Ateneo de Manila University. This approach is environmentally friendly, reducing CO2 emissions, supporting the use of local resources, and reducing landfill disposal. Biogas, as a renewable form of energy, can properly substitute conventional sources of energy (fossil fuels, oil, etc.), which are causing ecological problems through the fast increase of CO2 emissions in the atmosphere. Biogas consists essentially of methane (CH4, 50-75% by volume), carbon dioxide (CO2, 25-45%) and water vapor (H2O, 2-7%), as well as other gases in smaller concentrations, including hydrogen sulfide (H2S). The heating value of biogas is a function of its CH4 content. On average, the Net Heating Value can be considered equal to 20,000-24,000 kJ/Nm3. The organic component of Municipal Solid Waste (MSW) can be treated anaerobically to produce methane.

Index Terms — Municipal Solid Waste (MSW), combined heat and power (CHP), Dry Matter (DM), Reactors, Ateneo de Manila University (AdMU).

I. INTRODUCTION

Wastewater and solid waste have been a major concern with great impact, continuously degrading the status of the environment and human health in the Philippines. Ateneo de Manila University, with its different sectors, has been dealing with these issues. Systems, programs and facilities have been reviewed and upgraded to manage the wastewater and solid waste being generated in the school.

In streamlining the sustainability of the campus, eco-friendly and cost-effective methods were initiated and implemented. Wastewater is collected and treated with a natural wastewater treatment system using plants, stones and other locally available materials. The treated water is being re-used for irrigation of the gardens. Solid waste is being managed with a combination of several programs, such as: (1) adoption of the 3R concept of reduce, re-use and recycle in the generation of solid waste; (2) waste segregation at the point of source, in which at least 5 different waste bins were deployed in strategic areas; (3) development of a material recovery facility (MRF) to house and further manage the recyclable waste, reusable materials and residual waste; (4) setting up of a vermi-composting facility with the deployment of African night crawler worms to process organic waste into compost for garden fertilizer. Figure 1 shows the summary of the waste management scheme in Ateneo de Manila University.

Fig. 1. Waste management scheme in Ateneo de Manila University.

II. FEASIBILITY OF BIOGAS PRODUCTION

The above-mentioned initiatives have been found to be successful since the time of major development in 2008, but still have room for improvement. For wastewater, around 26,000 cubic meters annually is managed with treatment and a system of reuse. This is equivalent to less than 10% of the wastewater being generated on the Loyola Schools campus. The average annual water withdrawal and consumption by AdMU is about 330,000 cubic meters. All of the septic sludge in the building septic tanks is disposed of via siphoning services of the utility company or a private service provider for offsite treatment. For the solid waste, at least 83 tons annually of residual waste is transported to the landfill, 23 tons annually of food/kitchen waste is disposed of, 73 tons of dry leaves is not processed in vermi-composting, 5 tons of used cooking oil is collected from the canteen concessionaires, and at least 3 tons of combined oil, fat and grease are collected from the grease traps. Table 1 shows the summary of organic waste available in Loyola Schools with the corresponding potentials for biomethanation and biogas production.

Table 1 presents the organic waste produced in AdMU in 2014 and the estimated potential biogas production.


Table 1. – Estimated organic waste generated in Ateneo de Manila University (Loyola Schools) in 2014 and corresponding biogas production.

According to this calculation, the biogas produced is able to run a small CHP (combined heat and power) plant that can produce electrical and thermal energy that can be utilized by the campus. There are also other colleges and universities near AdMU from which it is possible to bring other biomass to the plant, which can increase the production by three times more.

III. DEVELOPMENT OF BIOGAS FACILITY (MODIFICATION OF EXISTING AND PROPOSED ADDITION OF NEW SYSTEM)

The existing treatment plant facilities can be upgraded further to make use of the remaining organic waste and to eliminate the cost of outside disposal or treatment. This waste can be used in harmony with nature to produce energy in the process.

Existing Facilities

Marian Garden and the Matteo Ricci lawn are maintained by the use of drip irrigation systems. Also known as trickle irrigation, this system saves water by slowly dripping water directly onto roots through a network of valves and pipes. Water loss via evaporation is prevented by this method. The water used to irrigate these gardens is from the decentralized wastewater treatment systems found on-site. Excessive runoff from rainwater is also collected in several catchment ponds. These also provide irrigation water for the nearby fields and gardens.

Waste Water Treatment

The University has in place four decentralized wastewater treatment systems. The newest and largest working system is the Decentralized Wastewater Treatment System (DEWATS) near the New Rizal Library. The DEWATS accommodates 110 m3 of wastewater daily, and treatment is conducted through a series of settling tanks, underground aerobic and anaerobic reactors, a polishing gravel filter and indicator ponds.

Fig. 1 – Technical Design of the DEWATS

Inside the existing DEWATS system, the proposed project is to install an anaerobic digester of 500 m3 of reaction capacity that is able to manage more than three times the waste coming from AdMU. This digester, running in mesophilic/thermophilic working conditions, can feed a CHP plant of up to 100 kW electrical with the organic fraction of MSW. Table 2 reports the energy production from the biogas production of Table 1.

As we can see, the investment cost is covered in less than 6 years from the AdMU waste production alone. Vegetable oil for frying is sold to external companies (Table 1), but if it is used to feed the digester, the revenue resulting from the energy production is more than four times the result of the external sale.

Table 2. – Biogas plant cost and simple payback calculation to install an anaerobic digester in the existing treatment facilities.

Included in the cost is a worker for the maintenance of the genset for one year.
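The payback figure cited above is a simple payback computation: the investment cost divided by the yearly net benefit from the energy produced plus the avoided disposal costs. The short Python sketch below illustrates only the form of the calculation; the numbers are placeholders for illustration, not the values of Table 2.

```python
def simple_payback(capex, annual_energy_revenue, annual_avoided_costs, annual_om):
    """Simple payback period in years: investment over yearly net benefit."""
    net_benefit = annual_energy_revenue + annual_avoided_costs - annual_om
    return capex / net_benefit

# Placeholder figures for illustration only (the actual values are in Table 2):
years = simple_payback(capex=150_000.0,
                       annual_energy_revenue=20_000.0,
                       annual_avoided_costs=10_000.0,
                       annual_om=4_000.0)
print(f"Simple payback: {years:.1f} years")
```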


The existing available waste can generate an electrical power of only 11.50 kW; there will be a corresponding increase in power generation if we can collect waste from the nearby colleges and universities. Moreover, the biogas plant is a good opportunity for students who want to improve their knowledge in chemistry, biology and environmental sustainability.

IV. ANALYSIS OF BIOGAS PLANT

Renewable energy helps to reduce the greenhouse effect; the methane emitted by the septic tanks of the facilities is a potent greenhouse gas. The proposed biogas plant will not only be a source of energy but will also reduce the environmental impacts of the existing facilities while at the same time minimizing the cost of disposal of waste.
Table 3. – Alternative engine 30 kW electrical and anaerobic digester 500 m3 of reaction volume

ALTERNATIVE ENGINE INPUT (354.32 MJ as input fuel)
a                      biogas                                        MJ           354.32
b = a*1000/c'          biogas                                        Sm3          15.80
c                      percentage methane content                    % vol        65.00
c' = 34500*c/100 (^)   NHV biogas                                    kJ/Sm3       22,425.00
NHVnatgas              NHV natural gas                               MJ/Sm3       34.40
NHVoil                 NHV fuel oil                                  MJ/kg        41.86
REFb                   reference boiler efficiency                   %            90.00

ANAEROBIC DIGESTER
x                      organic part of MSW                           kg           100.00
y                      percentage waste in the input mix of digester %            100.00
z                      total solids (SS)                             %            35.00
z'                     potential biogas production                   Sm3/t wet    423.00
x' = a/(z'*c'/1000)    organic waste to obtain 100 MJ                t wet        0.037

ALTERNATIVE ENGINE OUTPUT
d                      energy shaft                                  MJ           112.50
e = d*1000/3600        energy shaft                                  kWh          31.25
f                      mass of smoke                                 kg           101.60
g = f/1.29 (^^)        volume of smoke                               Nm3          78.76
h                      smoke temperature                             °C           572.00
h'                     smoke temperature                             K            845.15
i                      smoke specific heat                           kJ/kg*K      1.068
l = f*h*i/1000         smoke enthalpy                                MJ           62.07
m                      smoke circuit losses                          MJ           2.33
n                      typical smoke recovery heat                   MJ           59.74
o = l - m              maximum smoke recovery heat                   MJ           59.74
p                      heat recovery from cooling circuit            MJ           134.64

ELECTRIC GENERATOR
q                      efficiency                                    %            96.00
r = d*q/100            electrical energy output                      MJ           108.00
s = r*1000/3600        electrical energy output                      kWh          30.00

SPECIFIC CONSUMPTION
t = a/s                input fuel                                    MJ/kWh       11.81
u = b/s                biogas                                        Sm3/kWh      0.53
u' = (a/NHVnatgas)/s   natural gas                                   Sm3/kWh      0.34
u'' = (a/NHVoil)/s     fuel oil                                      kg oil/kWh   0.28
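The chain of quantities in Table 3 can be verified directly: the net heating value follows from the methane fraction, the biogas volume from the energy input, and the specific consumptions from the electrical output. A short re-computation of those rows (same symbols as the table) is shown below.

```python
# Re-computation of the main rows of Table 3 (symbols as in the table).
a = 354.32                 # biogas energy input, MJ
c = 65.0                   # methane content, % vol
c_prime = 34500 * c / 100  # NHV of biogas, kJ/Sm3   -> 22,425
b = a * 1000 / c_prime     # biogas volume, Sm3      -> 15.80
d = 112.50                 # shaft energy, MJ
q = 96.0                   # generator efficiency, %
r = d * q / 100            # electrical output, MJ   -> 108.00
s = r * 1000 / 3600        # electrical output, kWh  -> 30.00
t = a / s                  # specific input fuel, MJ/kWh  -> 11.81
u = b / s                  # specific biogas use, Sm3/kWh -> 0.53
print(f"NHV {c_prime:.0f} kJ/Sm3, biogas {b:.2f} Sm3, output {s:.1f} kWh, "
      f"{t:.2f} MJ/kWh, {u:.2f} Sm3/kWh")
```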

V. CONCLUSIONS

This study shows the feasibility and sustainability of incorporating biogas technology in the approach for both wastewater and solid waste management. Clean electrical energy will be generated with a biogas-fed generator set, while heat energy will be utilized with the CHP system. The design suggests that a good investment cost will have a six (6) year payback period for a system that will run for 25 years with minimal maintenance considerations. The calculation is for the energy production only, but there are other cost benefits and positive impacts, such as: (1) elimination of the cost of disposal of residual waste, sludge management and maintenance/repair of pipes due to clogging with oil/grease; (2) increase of the available treated water for re-use in irrigation, which will result in avoided cost of using utility water; (3) increase of the recovery of recyclable materials; (4) carbon footprint contribution; (5) a working laboratory for student learning; and (6) support of and compliance with R.A. 9275 (Clean Water Act of 2004), R.A. 9003 (Ecological Solid Waste Management Act) and R.A. 9513 (Clean Energy Act).

ACKNOWLEDGMENT
The team would like to thank the Philippine Commission on Higher Education (CHED)-Philippine Higher Education Research Network (PHERNET).

REFERENCES
[1] Drapcho, M. C., Nhuan, P. N., Walker, H. T.: Biofuels Engineering Process Technology, 2008, pp. 334-335.
[2] Petrecca, G.: Industrial Energy Management: Principles and Applications. Kluwer Academic Publishers, 2000, pp. 173-200.
[3] Petrecca, G., Preto, R.: Organic solid slaughterhouse wastes: a resource of energy, 2012/IEEE/978-1-4673-1301-8.
[4] Preto, R.: "Biomasses and territory planning: the development of chains for use with innovative technologies for the production of energy" (Biomasse e pianificazione del territorio: lo sviluppo di filiere per l'utilizzo con tecnologie innovative per la produzione di energia). Doctorate Thesis, University of Pavia, 2010, pp. 154.
[5] Huber, G. W., Iborra, S., Corma, A.: Synthesis of transportation fuels from biomass: Chemistry, Catalysts and Engineering. American Chemical Society, 2006, pp. 2.2-2.3.
[6] Stahl, K., Waldheim, L., Morris, M., Johnsson, U., Gardmark, L.: Biomass IGCC at Värnamo, Sweden – Past and Future. GCEP Energy Workshop, April 2007, pp. 3-4.
[7] U.S. Department of Energy, National Renewable Energy Laboratory: Gas-Fired Distributed Energy Resource Technology Characterizations. NREL/TP, pp. 1-8, pp. 2-31.

Effective Data Collection and Analysis of Solar Radio Burst Type II Event Using Automated CALLISTO Network System

N. H. Zainol, Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia (hidayahnur153@yahoo.com)
S. N. U. Sabri, Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia (sitinurumairah@yahoo.com)
Z. S. Hamidi, Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia (zetysh@salam.uitm.edu.my)
M. O. Ali, Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia (marhanaomarali@gmail.com)
Nurulhazwani Husien, Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia (hazwani_husien21@yahoo.com)
N. N. M. Shariff, Academy of Contemporary Islamic Studies, MARA Technology University, Selangor, Malaysia (nnmsza@salam.uitm.edu.my)
M. S. Faid, Academy of Contemporary Islamic Studies, MARA Technology University, Selangor, Malaysia (syazwaned@siswa.um.edu.my)
C. Monstein, Institute of Astronomy, Wolfgang-Pauli-Str. 27, 8093 Zürich, Switzerland (monstein@astro.phys.ethz.ch)

Abstract—The CALLISTO network systems are widely used for continuous data collection of solar activities every day through the internet connection, with the data stored in a central database. The system installation began in 2002 in Zurich, and its network has spread all around the globe ever since, benefiting researchers and individuals worldwide. This paper presents one selected event observed using a radio spectrometer of the CALLISTO system from Ireland, which demonstrates a solar radio burst event detected from 13:23 (UT) to 13:26 (UT) on 30th March 2013. In addition, data from Glasgow and Humain were compared and analyzed. The CALLISTO installations in each country together form the extended CALLISTO system (e-CALLISTO). The analysis was carried out based on spectrogram data of the CALLISTO system obtained from these three countries. Results showed that all three sites observed the same Solar Radio Burst Type II at the same time but at different locations. The e-CALLISTO system has proven to be a new tool for monitoring solar activity and for space weather research.

Keywords—e-Callisto system; data collection; analysis; solar activity


I. INTRODUCTION

Research and study of the Sun evolve rapidly, and the progress yields many new findings that can prosper into various fields. These processes undergo observations from space satellites and space probes in X-rays and extreme UV, and from ground-based telescopes at specific wavelengths. However, data collected from the instruments must be stored and compiled for research use. Specifically, ground-based telescopes at meter and decimeter wavelengths pose many challenges and potential treasures after many decades of study. Some of the Sun's events can only be detected using longer-wavelength radio emission rather than shorter wavelengths such as X-rays. Only a few ground-based solar radio instruments cover a large part of the spectrum, such as those in Europe, and they are limited to daylight hours. Coronal mass ejections (CMEs) and large flares last several hours, which results in limited time for observation. In tens of minutes, they can convert in excess of 10^32 ergs of magnetic energy into accelerated particles, heated plasma, and ejected solar material [1]. Moreover, if an important event occurs, only a few parts of the event can be observed out of its entire duration. So, a network system called CALLISTO has been introduced into this field to overcome the problems that arise [2]. The system is particularly useful for studying the large explosions in the Sun's atmosphere known as solar flares [3]. The extended CALLISTO (e-CALLISTO) network has 24-hour coverage of the radio emission of the Sun through the instruments that have been installed all over the world. The idea is to distribute identical spectrographs covering the meter-wave region and the low decimeter radio band around the world and to connect them through the internet. The e-CALLISTO project, provided by ETH Zurich, includes a spectrometer unit, an antenna and a PC connected to the internet. The Compact Astronomical Low-cost Low-frequency Instrument for Spectroscopy and Transportable Observatory (CALLISTO) spectrometer operates between 45 and 870 MHz using a modern, commercially available broadband cable-TV tuner [4]. It is possible to obtain data over the whole frequency range because different locations have different frequency ranges. The stations observe independently at the frequencies which are best suited for each location, for example depending on features such as local radio frequency interference (RFI) levels [5]. For that, it is of utmost importance to identify and continuously monitor the unwanted signals which are emitted due to the massive global increase in technology applications [6]. The RFI signal from the terrestrial surroundings may cause inaccuracy in the data of the ground observation [7]. To establish any radio astronomical observation, it is important to initially identify all possible RFI at the targeted site [8]. The hardware (antenna) and software (CALLISTO system) have become commercially available on the consumer electronics market. Thus, the construction cost of the radio spectrometer is affordable, and it can be produced in large quantities. This creates new possibilities concerning the ever increasing interference. Identical spectrometers could be placed far apart to observe different interference-free signals. Alternatively, a spectrometer can be brought temporarily into a remote region with less interference [9]. Many researchers are interested in pursuing research in this field. This involves solar activity associated with the technology we use. It is also important to understand solar radio bursts because these can disrupt wireless communications, damage satellites, produce destructive surges on power grids, and endanger astronauts [10].

Solar radio emission is closely related to the Sun's activities during the maximum and minimum cycles. It arises from several different phenomena and can be divided into three (3) main components: (i) the quiet Sun component, which is always present; (ii) the slowly varying component; and (iii) the active Sun component, which is caused by sunspots and flare activity [11]. It is possible to detect a large flare and Coronal Mass Ejections (CMEs) at an instant if we are alert to it [12]. The early phases of solar flares are detected in the solar coronal region [13]. Solar activity includes solar bursts. There are five types of solar burst: Type I, Type II, Type III, Type IV, and Type V. Solar radio bursts at low frequencies (< 15 MHz) are of particular interest because they are associated with energetic CMEs that travel far into the interplanetary (IP) medium and affect the Earth's space environment if Earth-directed [14]. Interplanetary (IP) shocks driven by coronal mass ejections (CMEs) are indications of powerful eruptions on the Sun that accelerate particles to very high energies [15]. Coronal mass ejections (CMEs) are regarded as the main factor of heliospheric and geomagnetic disturbances [16]. During CMEs, magnetic arcades overlying magnetic neutral lines are hurled into the corona and the interplanetary medium, which in turn drive shocks by virtue of their speeds exceeding the local characteristic wave speeds [17].

The shocks can be inferred as Type II as they are produced in the IP medium. Type II solar radio bursts result from the excitation of plasma waves in the ambient medium by a shock wave propagating outward from the Sun [18]. The current interpretation of the type II burst emission is as follows: electrons accelerated in the MHD shock front generate plasma waves, which then convert into electromagnetic radiation at the fundamental and harmonic of the local plasma frequency [19]. According to the wavelength regime in which they are observed, type II radio bursts are grouped into metric (m), decameter-hectometric (DH), and kilometric (km) bands [20]. Here we present the data collection flow and analysis of the spectrograph data collected from the computer.

II. METHODOLOGY

A. CALLISTO spectrometer instrument
The CALLISTO spectrometer is composed of a handful of standard electronic components. The system is designed to measure the dynamic spectra of Solar Radio Burst Type II and Type III produced by coronal shock waves, as well as Type I, IV and V bursts due to other solar energetic processes.


OPTION DESCRIPTION
Frequency Range 45.0-870.0 MHz

Frequency Resolution 62.5 KHz

Radiometric 300 KHz at -3 dB


Bandwidth
Dynamic Range ~50 dB at -70 to -30 dBm
maximum rf level
Sensitivity 25 ± 1 mV/dB

Noise Figure <10 dB

Figure 1: Bicone antenna used in Rosse Solar Terrestrial Observatory (RSTO)

The lower receiver operates at 10 - 100 MHz and is fed by a bicone antenna. The instrument observes automatically; data are collected each day and stored at the Rosse Solar Terrestrial Observatory (RSTO) and also in a central database at ETH Zurich. The CALLISTO spectrometer is composed of standard electronic components assembled on a single PCB (printed circuit board). This PCB fits into a standard aluminum box and has connectors for the antenna, computer, power supply and focal plane unit (FPU). A complete set of drawings, parts lists, procedures, the PCB layout and also the complete software is freely available on the internet (e-CALLISTO websites). An important step is location evaluation, where unexpected interference from nearby radio transmitters is sometimes present. A spectral overview was made using a special function of the CALLISTO for every host site. It measures the whole frequency range from 45 to 870 MHz in steps of 62.5 kHz, leading to potentially 13,120 channels. This high-resolution spectrum is then used to create a frequency program that avoids channels with terrestrial interference. Such a frequency program observes only those frequencies with low radio frequency interference (RFI) and, if needed, jumps over spectral ranges like the FM band between 80 and 110 MHz.
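To make the channel-selection step concrete, the following minimal Python sketch shows how such a frequency program could be derived from an RFI overview spectrum. This is an illustration, not the CALLISTO software itself: the random placeholder spectrum, the 200-channel budget and all variable names are assumptions.

    import numpy as np

    # Overview spectrum: one power reading per 62.5 kHz step across 45-870 MHz.
    freqs = np.arange(45.0, 870.0, 0.0625)           # MHz
    rng = np.random.default_rng(0)
    power = rng.normal(-100.0, 3.0, freqs.size)      # placeholder RFI survey [dBm]

    # Skip the FM broadcast band mentioned above, then keep the quietest channels.
    fm_band = (freqs >= 80.0) & (freqs <= 110.0)
    candidates = np.where(~fm_band)[0]
    quietest = candidates[np.argsort(power[candidates])[:200]]
    program = np.sort(freqs[quietest])               # observing frequency program
    print(f"selected {program.size} low-RFI channels out of {freqs.size}")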
The e-CALLISTO network system presents areas which are covered by several solar radio telescopes. This network behavior is known as geographic selection: systems are replicated between geographically distant stations so that observations of the same event are acquired from several stations, which preserves coverage of a possible event in case of failure of one of the stations. This geographic selection is very valuable from the point of view of both solar science and space weather. During installation of the system, the basic specifications were always followed; however, they must be adapted to the environment of each installation site. Therefore, each CALLISTO station has its own specifications suited to its environment, while still using the general specification as a guide. In general, there is no need for a permanent operator. Once the system is powered and configured, it runs automatically, controlled by an internal scheduler on the PC. This scheduler allows automatic starting and stopping of observations as well as control of an optional focal plane unit with up to 64 different configurations, realized by a 6-bit digital output connected to a standard connector. This option is needed for automatic calibration or antenna switching. An operator is only needed when an internet connection is missing, to transfer data manually to an external data server.

III. RESULTS

The e-CALLISTO system, operating in Switzerland, India and Siberia, initially covered more than 60% of the time. Nowadays, this network coverage has expanded all over the world, as many new stations operate well and together record solar activity 24 hours a day. Using this system, the result comes out in the form of a spectrogram and is analyzed by researchers based on the pattern that appears in the spectrogram.

A. Data Collected in Ireland from the e-CALLISTO system
Figure 2: Solar Radio Burst Type II event occurring from 13:23 (UT) to 13:26 (UT) on 30 March 2013 in Ireland. (Credited to e-CALLISTO website)

Figure 3: Observation of solar activity at the Glasgow Observatory on 30 March 2013. (Credited to e-CALLISTO website)

Figure 2 shows the solar flare radio emission of March 30, 2013, observed by the e-CALLISTO spectrometer element in Ireland. The data are displayed as a spectrogram: intense emission appears bright, while no emission is black or uncolored. Time progresses to the right. The three elements of the spectrogram, time, frequency and intensity, produce an image that can be interpreted easily. The frequency is given in MHz, increasing downwards.

Figure 3 shows a mid-frequency observation at the Glasgow Observatory (United Kingdom). It shows harmonics of the same type II burst, with band splitting due to the magnetic configuration of the Sun, and also some herringbone structures.
B. Other Data Collected from Other Locations

This system allows data to be observed and obtained from other sites at the same time, or at any time. The figures below show data collected at different sites for the same solar activity event at almost the same time.

Figure 4: Solar activity detected at Humain Observatory on 30 March 2013. (Credited to e-CALLISTO spectrometer)
A similar observation was made at the Humain observatory (Royal Observatory of Belgium). It shows the same type II burst as in figures 2 and 3.

IV. ANALYSIS

Based on Figures 2, 3 and 4, since the emission is proportional to the density (plasma emission), the vertical axis also represents altitude in the solar atmosphere, covering about one to two solar radii above the photosphere. As density decreases with altitude, height increases upward in the picture. The radio emission is tilted towards the right, indicating that an exciter is moving upwards. Assuming a density model of the corona, the speed can be estimated; it amounts to about a third of the speed of light. The emission is therefore interpreted as the signature of an electron beam escaping from the Sun.

Moreover, the emission again drifts to the right, indicating an exciter moving upwards. However, the speed is much slower in this case, of the order of 595.5 km/s. This is a typical velocity of a shock wave in the corona. Such radio emissions, called type II radio bursts, are often precursors of coronal mass ejections, which disturb the whole heliosphere and are of great interest for near-Earth space weather.
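The speed estimate described above can be reproduced numerically. The following minimal Python sketch assumes fundamental plasma emission and a Newkirk (1961) one-fold coronal density model; the 80 to 40 MHz drift over three minutes is an illustrative input, not a measurement from Figures 2-4. With these assumptions it yields a shock-like speed of order 10^3 km/s:

    import math

    N0 = 4.2e4           # Newkirk model base density [cm^-3] (assumed model)
    R_SUN_KM = 6.96e5    # solar radius [km]

    def height_from_freq(f_mhz):
        """Radial distance (solar radii) where the plasma frequency equals f_mhz,
        inverting f_p[MHz] = 8.98e-3 * sqrt(n_e) and n_e(r) = N0 * 10**(4.32/r)."""
        n_e = (f_mhz / 8.98e-3) ** 2
        return 4.32 / math.log10(n_e / N0)

    # Illustrative type II drift: 80 MHz -> 40 MHz over 3 minutes.
    r1, r2 = height_from_freq(80.0), height_from_freq(40.0)
    speed = (r2 - r1) * R_SUN_KM / 180.0
    print(f"exciter rises from {r1:.2f} to {r2:.2f} Rsun -> ~{speed:.0f} km/s")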
Spectrometer Construction at Universiti Kebangsaan Malaysia.
Based on the analysis above, this valuable system allows IEEE Antennas and Propagation Magazine, April 2014. Vol. 56.
the researcher to automatically obtain data via the internet 11. Z.S. Hamidi, N.N.M.S., Z.Z. Abidin, Z.A. Ibrahim and C.
for 24 hours solar activity data at any installed CALLISTO Monstein, Coverage of Solar Radio Spectrum in Malaysia and
stations and at any time. Spectral Overview of Radio Frequency Interference (RFI) by
Using CALLISTO Spectrometer from 1MHz to 900 MHz.
Middle-East Journal of Scientific Research, 2012. 12(6): p. 893-
Acknowledgment 898.
We are grateful to CALLISTO network; STEREO, LASCO, 12. Space Weather: The Significance of e-CALLISTO (Malaysia) As
One of Contributor of Solar Radio Burst Due To Solar Activity.
SDO/AIA, NOAA and SWPC make their data available 13. Zamri Zainal Abidin, N.M.A., Zety Sharizat Hamidi, Christian
online. This work was partially supported by the 600- Monstein, Zainol Abidin Ibrahim, Roslan Umar, Nur Nafhatun
RMI/FRGS 5/3 (135/2014), UiTM grant, Universiti Md Shariff, Nabilah Ramlia, Noor Aqma Iryani Aziz, Indriani
Sukma, Radio frequency interference in solar monitoring using
Teknologi MARA and Kementerian Pendidikan Malaysia. CALLISTO. NewAstronomy Reviews, 2015. 67: p. 18-33.
Special thanks to the National Space Agency and the 14. Z. S. Hamidi, Z.Z.A., Z. A. Ibrahim, N. N. M. Shariff and C.
National Space Centre for giving us a site to set up this Monstein, Observations of Coronal Mass Ejections (CMEs) at
Low Frequency Radio Region on 15th April 2012. 2013.
project and support this project. Solar burst monitoring is a 15. N. Gopalswamy1, H.X., P. Mäkelä2, S. Akiyama2, S. Yashiro3,
project of cooperation between the Institute of Astronomy, M. L. Kaiser1, and R.A.H.a.J.-L. Bougeret5, Interplanetary
ETH Zurich, and FHNW Windisch, Switzerland, MARA shocks lacking type II radio bursts. 2007.
16. Y.-J.Moon, 2 G. S. Choe,3 HaiminWang,1 Y. D. Park,2 N.
University of Technology and University of Malaya. The Gopalswamy,4, A STATISTICAL STUDY OF TWO CLASSES
research has made use of the National Space Centre Facility OF CORONAL MASS EJECTIONS. The Astrophysical Journal,
and a part of an initiative of the International Space Weather 2002
17. N. Gopalswamy, M.R.K., P. K. Manoharan, A. Raoult, and P.Z.
Initiative (ISWI) program. N. Nitta, X-RAY AND RADIO STUDIES OF A CORONAL
ERUPTION: SHOCK WAVE, PLASMOID,
AND CORONAL MASS EJECTION. THE ASTROPHYSICAL
References JOURNAL, 1997
18. E. Aguilar-Rodriguez, 2,3 N. Gopalswamy,4 R. MacDowall,4 S.
1. A. G. Emslie, H.K., B. R. Dennis, N. Gopalswamy, G. D. Yashiro,1 and a.M.L. Kaiser4, A universal characteristic of type
Holman,, A.V. G. H. Share, T. G. Forbes, P. T. Gallagher, G. II radio bursts. JOURNAL OF GEOPHYSICAL RESEARCH,
M. Mason,, and R.A.M. T. R. Metcalf, R. J. Murphy, R. A. 2005. 110.
Schwartz, and T. H. Zurbuchen, Energy partition in two solar 19. Gopalswamy, N., Interplanetary Radio Burst. Solar and Sapce
flare//CME events. JOURNAL OF GEOPHYSICAL Weather Radiophysics, 2002.
RESEARCH, 2004. 209. 20. E-Aguilar-Rodriguez, N.G., R. MacDowall, S. Yashiro and M.L
2. A. O. Benz, C.M., H. Meyer, P. K. Manoharan, R. Ramesh, A. Kaiser, A Study of Drift Rate Type II Radio Burst At Different
Altyntsev, A. Lara, J. Paez, K.-S. Cho, A We orld-Wide Net of Wavelengths. 2005.
Solar Radio Spectrometers: e-CALLISTO. Springer
Science+Business Media, 2009(104): p. 277–285.
The Dependence of Log Periodic Dipole Antenna (LPDA) and e-CALLISTO Software to Determine the Type of Solar Radio Burst (I-V)

S.N.U. Sabri, N.H. Zainol, M.O. Ali, Nurulhazwani Hussien, M.S. Faid
School of Physics and Material Science, Universiti Teknologi MARA, Selangor, Malaysia
sitinurumairah@yahoo.com.my

N.N.M. Shariff
Academy of Contemporary Islamic Studies, Universiti Teknologi MARA, Selangor, Malaysia
nnmsza@salam.uitm.edu.my

Z.S. Hamidi
School of Physics and Material Science, Universiti Teknologi MARA, Selangor, Malaysia
zetysh@salam.uitm.edu.my

C. Monstein
Institute of Astronomy, Wolfgang-Pauli-Str 27, 8093 Zurich, Switzerland

Abstract---Solar radio bursts originate in the layer of the solar atmosphere where geo-effective disturbances occur, releasing energy as solar flares and launching Coronal Mass Ejections (CMEs). Solar radio bursts can be divided into 5 types and were determined using the Log Periodic Dipole Antenna (LPDA) and the e-CALLISTO system. The LPDA was set up for the 45-870 MHz frequency range and has a maximum boom length of 5.45 m. It has a minimum scale factor of τ = 0.76 and a maximum of τ = 0.98; we put some effort into making the construction suit the design, with high specification and a practical boom length, and conclude that the scale factor suitable for this design, which sets the directivity of the antenna, is 0.8118. The LPDA has 19 elements mounted on two aluminium-rod booms, with a gain of 7.01 dB. The antenna receives the signals and is connected to a low-noise amplifier; the e-CALLISTO spectrometer completes the system. The CALLISTO (Compound Astronomical Low-Cost Low-Frequency Instrument for Spectroscopy and Transportable Observatory) spectrometer was used to study the dynamics of the solar corona at metric and decimetric radio wavelengths, and the main objective of this study was to show how solar radio bursts can be detected using the LPDA (Malaysia) and e-CALLISTO (ETH Zurich, Switzerland), which were set up at different locations. In this paper, the potential of Malaysia to be one of the candidates contributing good data is highlighted, and we focus on performance evaluation and data visualization.

Keywords—Log Periodic Dipole Antenna (LPDA); e-CALLISTO system; Solar Radio Burst (I-V)

I. INTRODUCTION

The Log Periodic Dipole Antenna (LPDA) is an instrument with varied applications; by the principle of reciprocity it can be used both to receive and to transmit electromagnetic waves, and it has high sensitivity in signal detection with good noise handling [1]. Twisted balanced transmission lines are important for the feed, especially for a coplanar linear array of unequal and unequally spaced parallel linear dipoles. The advantage of the LPDA lies in its design: it follows the straightforward design by Carrel and fulfills the criteria to cover the full frequency range from 45 MHz to 870 MHz [2]. The different radiation properties detected are used to monitor the solar radio flux due to solar flare phenomena [3]. This range has been standardized for e-CALLISTO networking. Monitoring solar activity in the radio region is the main objective of the International Space Weather Initiative (ISWI) project, which observes solar activity 24 hours a day in order to be supported internationally with the latest instrument technology.
It was started in 2011 with the National Space Agency (ANGKASA), in collaboration with the University of Malaya and MARA University of Technology. The details of the Log Periodic Dipole Antenna have been discussed in [4]. e-CALLISTO networking is important in this project in order to interpret the signal received from the LPDA: from the interpretation, the received frequency and the characteristics of the signals can be determined. Furthermore, from the signals received, the type of solar radio burst can be identified, and other phenomena such as solar flares and Coronal Mass Ejections (CMEs) can be detected if these events occur at that time. CALLISTO was designed by the electronics engineer Christian Monstein from the Institute for Astronomy of the Swiss Federal Institute of Technology Zurich (ETH Zurich), who developed the system so that solar activity can be monitored from all over the world; about 25 sites use this system to monitor the Sun's activity over 24 hours.

In this system, data transfer and data management need to be considered. The signal processing then deals with the operations needed to analyse our data and give the best image results, which will be discussed, and the frequency range of the detected signals will be stated. The United Nations and NASA are also involved in this project, supporting the participating countries [5]. This detection can be used independently if connected with a radio telescope and tracking system, and the output is usually classified by solar radio burst type. Bursts can be classified into 5 types (I-V), determined by different phenomena. Solar Radio Burst Type I, known as a solar storm, is a non-flare-related phenomenon that has a continuum component and a burst component, with a frequency range of 100-400 MHz. Type II bursts are usually related to Coronal Mass Ejections (CMEs), occur around the soft X-ray peak time of a solar flare, and are identified by a slow drift to lower frequency over time; this burst appears in fundamental and second-harmonic bands [6, 7]. Meanwhile, a Type III burst is a fast-drift burst related to solar flares, caused by energetic electrons ejected out along open magnetic field lines; the drift in frequency is high because the electron time scale is very short [8, 9]. Solar Radio Burst Type IV marks the location where a new active region will be born and occurs due to wave-wave and wave-particle interaction in magnetic traps in the solar corona [10, 11]. It has a frequency range between 20 MHz and 2 GHz, usually follows flares, and has a broadband continuum structure [12]. Type V is known as a combination of solar flares and solar storms; more details will be discussed in the results and discussion.

II. EXPERIMENTAL SETUP AND METHODOLOGY

In this project, the Compound Astronomical Low-cost Low-Frequency Instrument for Spectroscopy Transportable Observatory (CALLISTO) system has been used. Basically, the antenna connects directly to the CALLISTO spectrometer, which is housed in a steel case, through low-loss coaxial cable. CALLISTO has been used to study the dynamics of the solar corona and to make observations at meter and decimeter wavelengths to determine the evolution of the solar atmosphere. Before that, the unwanted signals or RFI sources need to be removed from the observation. The Log Periodic Dipole Antenna (LPDA) at the National Space Centre (ANGKASA), Selangor (3.0833333°N, 101.5333333°E) was connected to the CALLISTO system; the site has minimum Radio Frequency Interference (RFI), with an average of (85-100) dBm over the 45-870 MHz frequency range [11, 13].

III. SPECIFICATION OF THE LOG PERIODIC DIPOLE ANTENNA (LPDA)

The Log Periodic Dipole Antenna (LPDA) was set up at the National Space Centre (ANGKASA), Selangor. It has 19 elements, representing the different received frequencies, mounted on two booms made of aluminium rods, which have non-corrosive and lightweight properties. This is an important point for the array, as we want the dimensions to increase smoothly from one element to the next. The elements are specified by angles rather than by distances, following Rumsey's principle. For wide bandwidth, a longer log-periodic dipole array is needed so that the accuracy is optimized and the dynamic range is achieved for continuous bursts, which are divided into 15-minute intervals per image, thus gaining high frequency resolution. The
electromagnetic waves received and collected at the antenna are converted to electrical energy and carried from the outdoor antenna by the feed.

The outdoor antenna uses RG58 coaxial cable for the feed, and radio frequency (RF) transformers are used for the transformation between the antenna and the cable. The Log Periodic Dipole Antenna was set up for the frequency range from 45 MHz to 870 MHz [14]. The antenna has a boom length of 5.45 m, a gain of 7.01 dB and a scale factor of 0.8118. The CALLISTO software stores each spectrum data file and a tracking log file so that they can be transferred and processed by the data acquisition server; the Space Weather Monitoring Laboratory (SWML) can then display the data and the current tracking status. The signal from the antenna is connected to the CALLISTO spectrometer, housed in a steel case, through low-loss coaxial cable (LMR-240). A low-noise preamplifier was used to amplify the signal. The CALLISTO spectrometer functions as a detector and provides more than 10 dB of gain with 25 mV/dB sensitivity, supplied by ETH Zurich. CALLISTO is also divided into 3 sub-bands consisting of 62.5 kHz channels; the comparable radiometric bandwidth is approximately 300 kHz. The sampling time is about 1.25 ms per frequency pixel with 1 ms integration time, and we decided to use an E-plane polarized antenna so that it can respond to the full frequency range. The LPDA, CALLISTO spectrometer and a Windows computer are connected to the internet, making it easy to transport the data and allowing the instrument, known as a frequency-agile spectrometer, to be used in many observatories. A GPS clock and a tracking controller are used to control the sampling time of the spectrometer and the antenna direction. The daily observation runs from 7.00 am to 7.00 pm [15].

Figure 2. Log Periodic Dipole Antenna (LPDA) set up at the National Space Centre, Banting, Malaysia.

IV. DISCUSSION

The frequency range 45-870 MHz was chosen, with a longest boom length of 5.45 m. For this frequency range, the scale factor, which controls the directivity of the antenna, is between 0.76 and 0.98, with a relative spacing of about 0.148. The total boom length is 5.264 m, with 19 elements whose dimensions increase logarithmically, shrinking smoothly towards the feed point. Two booms are used to keep the antenna balanced; being quite long, they also help the front-to-back ratio of the antenna in the signal direction. During setup, the low frequency-dependence of the radiation and the input impedance need to be considered by making sure that all metal parts are electrically connected to the mast, which also gives good protection from electrostatic charges. Radio Flux Density data from many countries have been used to determine the type of solar radio burst and to make comparisons using NOAA or Space Weather Prediction Centre (SWPC) data.

A. Data collected from Angkasa station, MALAYSIA

From the data, the frequency and duration of the burst can be determined. Figure 3 below shows a moving Solar Radio Burst Type III received by Pusat Angkasa Negara on 9th March 2012, detected using the e-CALLISTO software. The Type III burst event was detected from 04:22 UT to 04:28 UT, a duration of 6 minutes.

Figure 3. First data received from Angkasa station, showing a Type III burst in Solar Radio Flux Density. (Credit to: e-CALLISTO software)
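The design numbers quoted above (scale factor τ, relative spacing σ = 0.148, 19 elements, 45-870 MHz) can be sanity-checked with Carrel's log-periodic recursion. The following is a minimal Python sketch, not the authors' design tool; it assumes a simple half-wave stopping rule, so its element count comes out somewhat lower than the 19 actually built, while the boom length it predicts (roughly 5 m) is close to the quoted values:

    C = 299_792_458.0  # speed of light [m/s]

    def lpda_elements(f_low_hz, f_high_hz, tau, sigma):
        """Carrel's log-periodic recursion: dipole lengths and spacings shrink
        by tau toward the feed; spacing d_n = 2 * sigma * l_n. Returns metres."""
        lengths, spacings = [], []
        length = C / (2.0 * f_low_hz)      # half-wave dipole at the lowest frequency
        shortest = C / (2.0 * f_high_hz)   # stop once the highest frequency is covered
        while length >= shortest:
            lengths.append(length)
            spacings.append(2.0 * sigma * length)
            length *= tau
        return lengths, spacings

    lengths, spacings = lpda_elements(45e6, 870e6, tau=0.8118, sigma=0.148)
    print(f"{len(lengths)} elements, longest {lengths[0]:.2f} m, "
          f"boom ~{sum(spacings[:-1]):.2f} m")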
B. Data Collections from Other Sites

Figure 4 and Figure 5. Radio Flux Density for the Type II and Type III events from the KRIM observatory. (Credit to: e-CALLISTO software)

Figure 6 and Figure 7. Radio Flux Density for the Type IV and Type V events from the KRIM and BLEN7M observatories. (Credit to: e-CALLISTO software)

Figure 4 shows a solar radio burst Type II event, of the kind usually associated with CMEs, that occurred on 9th February 2015, seen from different sites, with a frequency range between 260 MHz and 350 MHz; the burst lasted 10 minutes, arising from accelerated electrons propagating through the solar corona, and reflects the electron intensity.

Meanwhile, Figure 5 shows a Solar Radio Burst Type III of 3rd February 2015 (KRIM Observatory), from 250 MHz to 340 MHz in frequency. It took approximately 1 minute; such bursts can drift rapidly due to the change of the ambient coronal density with time. Figure 6 shows broadband continuum features related to the development of sunspots. This solar Type IV burst occurred on 2nd February 2015, lasted from 04:36 UT to 04:44 UT, about 8 minutes, and has a frequency range from 250 MHz to 345 MHz. The differences in the emission are due to changes in electron density, and it is known as a moving burst. Figure 7 shows the Type V burst that occurred on 9th August 2011, also associated with solar flares. The burst lasted about 7 minutes and has a frequency range from 180 MHz to 870 MHz.
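The frequency ranges and durations quoted above are enough to estimate the mean frequency drift rate, which is one of the simplest discriminators between burst types. The short Python sketch below uses approximate values read from the events described here (assumed round numbers, not exact measurements):

    def drift_rate(f_start_mhz, f_end_mhz, duration_s):
        """Mean frequency drift rate of a burst in MHz/s (negative = downward)."""
        return (f_end_mhz - f_start_mhz) / duration_s

    # Approximate values from the events above.
    type_iii = drift_rate(340.0, 250.0, 60.0)    # ~1 minute type III burst
    type_ii = drift_rate(350.0, 260.0, 600.0)    # ~10 minute type II burst
    print(f"type III: {type_iii:.2f} MHz/s, type II: {type_ii:.2f} MHz/s")

The order-of-magnitude gap between the two results is the point: type III bursts drift roughly ten times faster than type II bursts over similar frequency ranges.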
V. CONCLUSION

In this project, we want to highlight that Malaysia has the potential to focus on solar monitoring of solar radio emission to obtain good data, and we will focus on optimization and performance evaluation of the data. This shows the importance of the LPDA and the e-CALLISTO software in determining the solar radio burst types, as the Radio Flux Density has been used to determine the frequency, duration and intensity of the bursts. Moreover, in the next project we will focus on the potential for dynamic optimization of an array of antennas and on increasing the power output under non-fault conditions, instead of focusing only on mitigation of fault conditions. Lastly, the system will be monitored in terms of measurement accuracy, sampling rate and the communication protocols involved. We hope this project will be sufficient to reduce the additional cost of the antenna set-up and sensing system, so that it becomes common in the future.

ACKNOWLEDGEMENT

We are grateful to the CALLISTO network, NOAA and SWPC, who make their data available online. Special thanks to the National Space Agency and the National Space Centre for giving us a site to set up this project and for supporting it. Solar burst monitoring is a cooperation project between the Institute of Astronomy, ETH Zurich, and FHNW Windisch, Switzerland, MARA University of Technology and University of Malaya. This paper also used NOAA Space Weather Prediction Centre (SWPC) solar radio flux data for comparison purposes. The research is part of an initiative of the International Space Weather Initiative (ISWI) Program.
REFERENCES

[1] Z. S. Hamidi, et al., Modification and Performance of Log Periodic Dipole Antenna. International Journal of Engineering Research and Development, 2012. 3: p. 36-39.
[2] Z. S. Hamidi, N. N. M. Shariff, and C. Monstein, The Difference Between the Temperature of the Solar Burst at the Feed Point of the Log Periodic Dipole Antenna (LPDA) and the CALLISTO Spectrometer. 2014: p. 167-176.
[3] Z. S. Hamidi and N. N. M. Shariff, Determination of Flux Density of the Solar Radio Burst Event by Using Log Periodic Dipole Antenna (LPDA). International Letters of Chemistry, Physics and Astronomy, 2014. 7: p. 21-29.
[4] Z. S. Hamidi, et al., Designing and Constructing Log Periodic Dipole Antenna to Monitor Solar Radio Burst: e-Callisto Space Weather Project. 2nd International Conference on Applied Physics and Mathematics, AICIT Press, 2011: p. 1-4.
[5] Z. S. Hamidi and N. N. M. Shariff, The Mechanism of Signal Processing of Solar Radio Burst Data in E-CALLISTO Network (Malaysia). International Letters of Chemistry, Physics and Astronomy, 2014. 15.
[6] N. Gopalswamy, Interplanetary Radio Bursts.
[7] S. M. White, Solar Radio Bursts and Space Weather.
[8] P. Zucca, et al., Observations of Low Frequency Solar Radio Bursts from the Rosse Solar-Terrestrial Observatory. 26 Jul 2012.
[9] Z. S. Hamidi and N. N. M. Shariff, Enormous Eruption of 2.2 X-class Solar Flares on 10th June 2014. 2014: p. 249-257.
[10] Z. S. Hamidi, M. B. Ibrahim, N. N. M. Shariff and C. Monstein, An Analysis of Solar Burst Type II, III, and IV and Determination of a Drift Rate of a Single Type III Solar Burst. 2014: p. 160-170.
[11] Z. S. Hamidi and N. N. M. Shariff, Detailed Investigation of a Moving Solar Burst Type IV Radio Emission on Broadband Frequency. 2014: p. 30-36.
[12] Y. Nishimura, T.O., F. Tsuchiya, H. Misawa, A. Kumamoto, Y. Katoh, S. Masuda, and Y. Miyoshi, Narrowband frequency-drift structures in solar type IV bursts. 2013.
[13] Z. S. Hamidi, Z. Z. Abidin, N. N. M. Shariff, Z. A. Ibrahim and C. Monstein, The Beginning Impulsive of Solar Burst Type IV Radio Emission Detection Associated with M-Type Solar Flare.
[14] Z. S. Hamidi, et al., Signal Detection Performed by Log Periodic Dipole Antenna (LPDA) in Solar Monitoring. International Journal of Fundamental Physical Sciences, 2012. 2(2).
[15] Z. S. Hamidi, Z. Z. Abidin, Z. A. Ibrahim, N. N. M. Shariff and C. Monstein, Modification and Performance of Log Periodic Dipole Antenna. Volume 3(3): p. 36-39.
[16] N. Anim, et al., Radio frequency interference affecting type III solar burst observations. 2012 National Physics Conference (PERFIK 2012), American Institute of Physics, 2013: p. 82-86.
[17] Z. S. Hamidi and N. N. M. Shariff, Chronology of Formation of Solar Radio Burst Types III and V Associated with Solar Flare Phenomenon on 19th September 2011. International Letters of Chemistry, Physics and Astronomy, 2014. 5.
[18] N. Ramli, et al., The relation between solar radio burst types II, III and IV due to solar activities. Space Science and Communication (IconSpace), 2015 International Conference on, IEEE, 2015.
Monitoring the Level of Light Pollution and its Impact on Astronomical Bodies Naked-Eye Visibility Range in Selected Areas in Malaysia using the Sky Quality Meter

M. S. Faid
Academy of Contemporary Islamic Studies, MARA Technology University, Selangor, Malaysia
syazwaned@siswa.um.edu.my

N. N. M. Shariff
Academy of Contemporary Islamic Studies, MARA Technology University, Selangor, Malaysia
nnmsza@salam.uitm.edu.my

Z. S. Hamidi
Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia
zetysh@salam.uitm.edu.my

Nurulhazwani Husien
Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia
hazwani_husien21@yahoo.com

M. O. Ali
Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia
marhanaomarali@gmail.com

N. H. Zainol
Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia
hidayahnur153@yahoo.com

S. N. U. Sabri
Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia
sitinurumairah@yahoo.com

Abstract— Light pollution is an anthropogenic by-product of modern civilization and heavy economic activity, sourced from artificial light. In addition to its detrimental impact on humans and ecology, light pollution brightens the night sky, limiting the range of astronomical bodies visible to the naked eye. Since it is becoming a global concern for astronomers, the level of light pollution needs to be monitored to study its mark on astronomical data. Using a Sky Quality Meter over a period of 5 months, we investigated the links between city population and distance from the city center on one hand, and the profile of the night sky and the naked-eye limiting magnitude on the other. We eliminated the data affected by clouds and moon brightness, on account of their adverse effect on sky brightness, which could disrupt research on light pollution. From the results, we see population and distance from the city as the major variables of light pollution: the sky of Kuala Lumpur, a city center, is 5 times brighter than that of Teluk Kemang, a suburban site. Some recommendations for reducing the effect of light pollution are also discussed.

Keywords— anthropogenic pollution, light pollution, population, night sky brightness, limiting magnitude

I. Introduction

One of the massive downfalls of modern civilization is the alteration of the natural environment, which includes the alteration of ambient light in the night sky. Man-made, or artificial, light produces a sky glow that is scattered vertically and returned back to our sky by the atmosphere, making the sky brighter and disrupting the natural night sky ambience. This phenomenon is classified as one of the human polluting influences on the environment and is called light pollution [1].
light and giant building spotlights, which are directly proportional to the light pollution [2]. This trend keeps worsening without people realizing that it could damage the environment and the health of living things [3]. The truth is that people find other issues more important, even though they are concerned about the environment [4].

In terms of astronomy, the ground-observable astronomical bodies in the sky are determined by the contrast of these bodies with the night sky brightness [5]. The human eye has a certain contrast threshold for detecting a dim celestial object in the sky [6]. The detectability of celestial objects in a given night sky is quantified by the Bortle scale. A highly polluted sky, with a low magnitude of sky brightness, lowers the number of celestial objects visible to both naked-eye and optically aided observation. This makes light pollution one of the major concerns of astronomers, considering its effect on ground-based observation, especially optical and radio observation [7]. Therefore, it is of utmost importance to monitor the light pollution level to bring out its mark on astronomical data. According to Walker's law, the night sky brightness varies from one location to another, depending on population, economic activity and distance from the nucleus of the population [2]. A lowly populated area has a higher chance of viewing multiple astronomical bodies than a heavily dense area. Concerned about this problem, we examine the character of the night sky brightness in selected areas in Malaysia. The purpose of the examination is to study the magnitude of sky brightness for different location profiles and to determine the number of celestial objects within naked-eye visibility at each location. Since the zenith sky brightness signifies the level of light pollution at a particular location, we can see the pattern of light pollution from two different location profiles.

II. METHODOLOGY

A. Instrument

In this study, we use the SQM to determine the brightness of the night sky. The Sky Quality Meter (SQM) is a pocket device developed by Unihedron that evaluates the sky brightness in units of mag/arcsec2. The SQM uses a light-intensity-to-frequency detector, which then converts the output into a digital LED reading. The sensor is covered by a near-infrared blocking filter to make sure the brightness reading is in the visible wavelength spectrum.

B. Site Survey

In this study, we stationed the SQM at two locations that have different site profiles in terms of population density and distance from the city center. These locations are classified as urban and suburban sites, named Kuala Lumpur and Teluk Kemang respectively, and are pinpointed on the map by white marks. Figure 1 demonstrates the radiance map layered with population, courtesy of Jurij Stare (www.lightpollutionmap.info), which is derived from the night time radiance composite images using the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) produced by the Earth Observation Group, NOAA National Geophysical Data Centre. As can be seen from the map, Kuala Lumpur is located at the center of the city, with a population of 1,674,621 people in an area of 242.7 km2, giving it a density of 6900 inh/km2, while Teluk Kemang is located just 24 km from the city center of Port Dickson, with a population of 115,361 in an area of 575.76 km2, giving it a density of 200.4 inh/km2. To monitor the overall light pollution, measurements of night sky brightness were taken over a period of 5 months, from August to December.

Figure 1. Irradiance Mapping of Kuala Lumpur and Teluk Kemang

Ten nights of hourly data are taken per month in Kuala Lumpur, as heavy population traffic and close proximity to the city suggest an inconsistency in sky magnitude; thus longer hours of monitoring are needed for calibration to yield the actual night sky brightness. On the other hand, a single night per month is enough to represent the night sky brightness in Teluk Kemang, since it is far from the city nucleus and human activity there is dormant.

Besides site distance from the city and its population density, there are a few variables that could deviate the magnitude reading from the actual sky brightness. These variables occur naturally in any place in the world, namely the amount of cloud in the sky and the brightness of the Moon. Considering Malaysia's climatology and the moon brightness, we expected the data to fluctuate throughout the measurement period. However, the data can be calibrated with a series of steps.

C. The Sky Brightness Fluctuation from Cloud Cover Amount

One of the important aspects that could affect the sky brightness is the amount of cloud cover. In the case of Malaysia,
the magnitude inconsistency of the sky brightness is the result of the natural cloud climatology of Malaysia, which has 65-95 percent cloud cover in the sky throughout the year [8]. An overcast sky has a massive impact on the brightness of the night sky, ranging from 3 to 18 times the difference relative to a clear sky, depending on the location profile. The high reflectivity of cloud reduces the amount of incoming solar radiation absorbed by the Earth-atmosphere system by increasing the albedo. An increase in cloud amount will increase the albedo, and in our case the sky brightness. We use the Okta scale to study the effect of the cloud amount on the brightness of the sky: an 8-point scale ranging from 0 (completely clear sky) to 8 (fully covered sky). A study by Kocifaj and Solano Lamphar simulated that distance plays an important part in night sky irradiance due to cloud cover [9], and since Teluk Kemang is too far from the city center for its night sky to be affected by the cloud, we do not include an Okta scale analysis for Teluk Kemang.

D. Effect of Moon Phase on Sky Brightness

After the effect of cloud, the Moon is the largest natural contributor to the sky brightness. Pun et al. found that moon-scattered radiation can influence the night sky brightness by up to a 7 magnitude difference [10] when the full moon is at its maximum brightness. A moon phase of more than 0.5 has a long regime of hours in the night sky, while the full moon (phase > 0.9) is above the horizon all night. In this study, we examine the magnitude readings of the night sky that yield the lowest sky brightness relative to the average magnitude reading, and eliminate the data that show a clear influence of moon brightness, i.e. when the moon phase is > 0.5.

E. Naked Eye Limiting Magnitude

By eliminating the data affected by cloud cover and moon phases, we can then obtain the actual yield of night sky brightness at a particular location. The difference pattern of night sky brightness at both locations is then determined, together with the naked-eye range of visible astronomical bodies. The average visibility of celestial objects to the naked eye is expressed in terms of the log of the night sky brightness magnitude (m) with a formula [11] derived from Knoll, where NELM is the naked eye limiting magnitude:

NELM = 7.93 - 5 log10(10^(4.316 - m/5) + 1)    (1)

From this formula we can theoretically predict the objects visible in the sky for the two distinct site profiles. The prediction is based on the Bortle scale theory, relevant to the location characteristics.
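Equation (1) is easy to check numerically. The short Python sketch below evaluates it for the two site averages reported in the Results section, and also shows the magnitude-to-flux-ratio conversion used later; the Teluk Kemang value matches Table 5 to within rounding, while the Kuala Lumpur table entry appears to have been computed from a slightly different input magnitude:

    import math

    def nelm(m):
        """Naked-eye limiting magnitude for zenith sky brightness m
        [mag/arcsec^2], per Eq. (1)."""
        return 7.93 - 5.0 * math.log10(10 ** (4.316 - m / 5.0) + 1.0)

    for site, m in [("Kuala Lumpur", 17.59), ("Teluk Kemang", 19.288)]:
        print(f"{site}: NELM = {nelm(m):.2f}")

    # A difference of x magnitudes is a flux ratio of 2.512**x, e.g. the
    # ~3.10 mag site difference is a factor of ~17 in sky luminance.
    print(f"flux ratio for 3.10 mag: {2.512 ** 3.10:.1f}")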

III. RESULTS

A. Light Pollution Data

Table 1. Sky Brightness Throughout 5 Months

The trends of the sky brightness at both stations from August to December are shown in Table 1. Cloud and moonlight are the major factors in the fluctuating trend of the graph. Malaysia was struck by a massive haze in September and November, but the effect of haze can be accounted for in the same way as the effect of cloud cover. The suburban and urban sites show a stable difference between them, with the urban sky brightness 2 to 4 magnitudes brighter relative to the suburban site in the same month under the same meteorological conditions. At a glance, we can agree that the population and its artificial light are a major explanation of the difference in magnitude, which evidently shows that it is the cause of the light pollution.

Table 2. Sky Brightness Data

Location        Raw Data   Average   Standard Deviation
Urban (KL)      50         16.19     0.45
Suburban (TK)   5          19.30     0.78
Total           55         16.20     0.91

From August 2015 until December 2015, a total of 55 raw data points were collected from Teluk Kemang and Kuala Lumpur. Of the sky brightness data collected, 50 data points came from the urban station, Kuala Lumpur, while the rest are from the suburban station, Teluk Kemang. Table 2 presents the combined statistics; the overall average is 16.20. As mentioned before, the sky brightness at the urban location shows a lower average magnitude, and the sky is therefore brighter than at the suburban location. The mean difference in magnitude between the urban location, Kuala Lumpur, with an average of 16.19, and the suburban location, Teluk Kemang, with an average of 19.30, is 3.10307.
Figure 2. Relative Frequency of Brightness Magnitude

This implies that the urban location, Kuala Lumpur, is 17 times brighter than the rural location, Teluk Kemang.² The combined histogram of night sky brightness is recorded in Figure 2. Since there are far fewer data from the suburban location, Teluk Kemang, than from the urban location, Kuala Lumpur, they are scaled so that the two are comparable in relative terms. The largest difference in sky brightness magnitude between the urban and suburban locations is 6 mag/arcsec2, or a factor of 251 in observed luminosity. The highest frequency at the urban location, Kuala Lumpur, is around 16 mag/arcsec2, while at the suburban location, Teluk Kemang, it is 19 mag/arcsec2. Both of these values are affected by the natural climatology of Malaysia, which has a high amount of cloud cover in the sky throughout the year.

² Magnitude operates in a logarithm with base 2.512; a difference of x mag therefore corresponds to a flux ratio of 2.512^x.

B. Cloud Cover Effect

Table 3. Average Magnitude over Okta Scales

The Okta relationship with night sky brightness is portrayed in Table 3. At magnitudes lower than 17, the data are densely distributed at Okta scale 6-8, while at magnitudes higher than 17, the data are sparsely dispersed over Okta scale 2-5. The lowest value of sky brightness is 17.78, at Okta scale 2, or 31 percent cloud cover. Full cloud cover in the sky contributes around a 1.62 difference in sky brightness magnitude, i.e. 4.4 times brighter than a cloudless sky. From the total of 55 sky brightness data points, a few data with magnitude less than 16 come to our attention.

C. Moon Phase Effect

The altitude of the moon during the night depends on the phase of the moon. During the highest phase, or full moon, the moon is above the horizon throughout the night, making sky brightness data collection not ideal. Table 4 exhibits that 4 of our data points were clearly affected by the full moon, bringing a massive 2 magnitude difference in brightness from the lowest count of sky brightness, 17.78, i.e. 6.5 times brighter. From early September until mid-November, Malaysia was stricken by a terrible haze that degraded the optical transparency of the sky. The aerosol particles contained in the haze scatter and absorb the artificial light, making the sky brighter. This explains why data points 5-10 have magnitudes less than 16 even though they are not at full moon. The effect of haze at the maximum Air Pollution Index contributed a 3.52 difference of magnitude, or 25 times brighter than the normal sky.

Table 4. Sky Brightness & Moon Phases

No   Mag     Moon Phase
1    15.97   0.99
2    15.09   0.98
3    15.16   0.97
4    15.97   0.94
5    15.78   0.27
6    14.26   0.24
7    14.55   0.16
8    15.37   0.09
9    14.55   0.06
10   15.21   0.02

D. Naked-Eye Limiting Magnitude

By studying the natural phenomena that can affect the sky brightness, at a normal percentage of cloud cover and neglecting the data at moon phases larger than 0.5, we can agree that 17.59 is the average brightness of the normal sky at the urban location, Kuala Lumpur, and 19.30 is the average brightness of the normal sky at the suburban location, Teluk Kemang, as portrayed in Table 5. This implies that the night sky in Kuala Lumpur is 5 times brighter than the night sky in Teluk Kemang. The brightness of the sky affects the limiting brightness of celestial objects visible in that particular sky, the Naked Eye Limiting Magnitude (NELM).
Table 5. Magnitude of Brightness and NELM

Site           Magnitude/Std. Dev.   NELM   Milky Way            Astronomical Objects and Constellations
Kuala Lumpur   17.59 / 0.92          3.78   Not visible at all   The Pleiades Cluster is visible, but very few other objects can be detected; the constellations are dimmer and lack their main stars.
Teluk Kemang   19.288 / 0.70         4.98   Not visible at all   The Pleiades Cluster is the only object visible to all but experienced observers; only the brightest constellations are discernible, and they are missing stars.

IV. Discussion & Conclusion

After the elimination of the natural contributors to night sky brightness, we can see the toll of artificial light on observing celestial objects in the sky. The Kuala Lumpur sky, the site with the denser population, located in the city center, is heavily polluted, with a magnitude value of 17.59 mag/arcsec2 and a naked-eye limiting magnitude of 3.78, 5 times brighter than Teluk Kemang, which has a brightness of 19.28 mag/arcsec2 and a naked-eye limiting magnitude of 4.98.

The light pollution level in Teluk Kemang is starting to be worrisome. Teluk Kemang has been a popular observation spot for years, and a number of new moon observation records have been achieved there.³ But with increasing human activity in its nearby city, Port Dickson, and all over the world, this location may become an example of many other locations that may someday no longer be viable for astronomical observation. The idea of maintaining the night sculpture in the sky may not be well accepted by the vast dimension of human society, since the impetus of humanity is driven by industrial expedience and the economy, but we regard the idea of conserving mortal well-being and ecosystem stability as an idea shared by all. Attention to light pollution regulation and policy should be given the highest priority, since the rapid increase of artificial light every year endangers all living things. A simple initiative that can be implemented is introducing dark sky locations, specifically for astronomical observation and nocturnal animal activity; a minimum distance of 7 km is good enough to conserve the natural brightness of the night sky. A more dramatic action is proposing a total ban of man-made light at wavelengths shorter than 540 nm, since the smaller wavelengths in that spectrum have an inimical effect on human and animal health.

³ Youngest Moon Age and Smallest Elongation. For more info, see http://www.icoproject.org/record.html

ACKNOWLEDGMENT

This work was supported by 600-RMI/RAGS 5/3 (121/2014) and Kementerian Pengajian Tinggi Malaysia. Special thanks to Jurij Stare of www.lightpollutionmap.info and the NOAA National Geophysical Data Center for providing the thematic map of night time irradiance. Also credit to the Malaysia Department of Statistics for population density data and the Permata Pintar Observatory UKM for atmospheric extinction observation and moon brightness data.

REFERENCES

[1] T. W. Davies, J. Bennie, R. Inger, and K. J. Gaston, "Artificial light alters natural regimes of night-time sky brightness," Sci. Rep., vol. 3, p. 1722, 2013.
[2] M. F. Walker, "The effects of urban lighting on the brightness of the night sky," Publ. Astron. Soc. Pacific, vol. 89, pp. 405-409, 1977.
[3] P. Cinzano and C. D. Elvidge, "Night sky brightness at sites from DMSP-OLS satellite measurements," Mon. Not. R. Astron. Soc., vol. 353, no. 4, pp. 1107-1116, 2004.
[4] N. N. M. Shariff, Z. S. Hamidi, A. H. Musa, M. R. Osman, and M. S. Faid, "Creating Awareness on Light Pollution (CALP) Project as a Platform in Advancing Secondary Science Education," in International Conference of Education, Research and Innovation, Seville, Spain, 2015.
[5] B. E. Schaefer, "Astronomy and the limits of vision," Vistas Astron., vol. 36, pp. 311-361, 1993.
[6] A. Crumey, "Human contrast threshold and astronomical visibility," Mon. Not. R. Astron. Soc., vol. 442, no. 3, pp. 2600-2619, 2014.
[7] Z. S. Hamidi, Z. Z. Abidin, Z. A. Ibrahim, and N. N. M. Shariff, "Effect of light pollution on night sky limiting magnitude and sky quality in selected areas in Malaysia," Sustain. Energy Environ. (ISESEE), 2011 3rd Int. Symp. Exhib., pp. 233-235, 2011.
[8] J. A. Engel-Cox, N. L. Nair, and J. L. Ford, "Evaluation of Solar and Meteorological Data Relevant to Solar Energy Technology Performance in Malaysia," J. Sustain. Energy Environ., vol. 3, pp. 115-124, 2012.
[9] M. Kocifaj and H. A. Solano Lamphar, "Quantitative analysis of night skyglow amplification under cloudy conditions," Mon. Not. R. Astron. Soc., vol. 443, no. 4, pp. 3665-3674, 2014.
[10] C. S. J. Pun, C. W. So, W. Y. Leung, and C. F. Wong, "Contributions of artificial lighting sources on light pollution in Hong Kong measured through a night sky brightness monitoring network," J. Quant. Spectrosc. Radiat. Transf., vol. 139, pp. 90-108, 2014.
[11] H. A. Knoll, R. Tousey, and E. O. Hulburt, "Visual Thresholds of Steady Point Sources of Light in Fields of Brightness from Dark to Daylight," J. Opt. Soc. Am., vol. 36, no. 8, pp. 480-482, 1946.
Solar Radio Bursts Detected By CALLISTO System And Their Related Events

Nurulhazwani Husien
Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia
hazwani_husien21@yahoo.com

N.H. Zainol
Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia
hidayahnur153@yahoo.com

Z.S. Hamidi
Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia
zetysh@salam.uitm.edu.my

S.N.U. Sabri
Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia
sitinurumairah@yahoo.com

N.N.M. Shariff
Academy of Contemporary Islamic Studies, MARA Technology University, Selangor, Malaysia
nnmsza@salam.uitm.edu.my

M.S. Faid
Academy of Contemporary Islamic Studies, MARA Technology University, Selangor, Malaysia
syazwaned@siswa.um.edu.my

M.O. Ali
Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia
marhanaomarali@gmail.com

C. Monstein
Institute of Astronomy, Wolfgang-Pauli-Str 27, 8093 Zurich, Switzerland
monstein@astro.phys.ethz.ch

Abstract --- Solar radio bursts are the results of solar flares taking place on the surface of the Sun. Some of them reach the Earth and can be detected by a ground-based antenna. The CALLISTO system is used to interpret the radio bursts detected by the antenna before the results appear on the computer screen. The CALLISTO system consists of software and hardware with the aim of observing solar radio emission for astronomical science. There are five types of solar radio bursts, classified by their characteristics since the 1960s. All these types of solar radio bursts have their own related events, due to the different processes preceding their formation. Based on the results, all the related events of the bursts, such as Coronal Mass Ejections, prominences and flares, occurred several minutes or hours before the bursts appeared on the computer screen.

Keywords --- Sun, solar flares, solar radio bursts, CALLISTO

I. INTRODUCTION

The Sun can be considered our nearest star, at a distance of 149.6 million km from the Earth. It acts as the center of our solar system, with all the planets moving around it on their orbits in an anticlockwise direction. The Sun consists of three major interior layers (core, radiative zone and convective zone) and the solar atmosphere, which includes the photosphere, chromosphere and corona.
Fig. 1. The structure of the Sun.

Usually, all the activities of the Sun, such as solar flares, prominences and coronal mass ejections, take place in the corona. Unlike the Earth, the Sun is entirely gaseous; there is no solid surface on the Sun. The surface of the Sun is also covered by a very complex magnetic field. It is reported that magnetic reconnection on the Sun leads to energy release that contributes to all the activities on the surface of the Sun.

Magnetic reconnection is the process in which magnetic lines of force break and re-join into a lower-energy configuration, whereby magnetic energy is converted to plasma kinetic energy [1]. This process accelerates the particles, which initiates the solar activity. As the particles accelerate, they produce the solar flare, which leads to a solar radio burst that can be detected on the surface of the Earth, at its characteristic frequencies, using an antenna. The successfully launched Yohkoh satellite gave evidence of magnetic reconnection taking place on the Sun. The Yohkoh satellite is Japanese technology, launched into space on August 30, 1991 by the Institute of Space and Astronautical Science (ISAS) from Kagoshima Space Center, and it operated until December 14, 2001, even though the goal of the satellite was only three years. It acquired about 6 thousand snapshots of soft X-ray images and about 2800 high-energy solar flares. This gives astrophysicists and upcoming generations the means to do research and publish journal papers based on these images, which provide indisputable proof that magnetic reconnection occurs on the surface of the Sun. The Yohkoh satellite was designed with specific instruments, namely the Hard X-ray Telescope (HXT) and the Bent Crystal Spectrometer (BCS) [2].

During flares, energy stored in magnetic fields within active centers is rapidly converted into thermal, kinetic and mechanical energy. Although Coronal Mass Ejections (CMEs) and solar flares are two different events, it cannot be denied that CMEs occur during large flares. The energy partition of both events occurring at the same time has also been studied. The CME is thought to explode just before the eruption of a solar flare; however, in some situations the CME occurs after the production of the solar flare. Both events are associated with high-energy particles and depend on the magnetism of the Sun. These activities emit plasma particles which propagate and produce solar radio bursts that can be detected by antennas on the surface of the Earth.

Fig. 2. The complex magnetic field of the Sun.

There are five types of solar radio bursts below a few hundred MHz that have been recognized since the 1960s [3]: Solar Radio Burst Type (SRBT) I, SRBT II, SRBT III, SRBT IV and SRBT V. These SRBT differ in their shape on the spectrograph, their frequencies and the duration of the burst. Usually, SRBT I is associated with solar storms, and type II is correlated with Coronal Mass Ejection events. Types III and IV are caused by solar flares and the formation of new sunspots. Meanwhile, type V is very rare, but can normally be seen after a type III formation [4]. However, it is reported that there are several solar radio bursts which cannot be classified, due to the complexity of the activity of the Sun.

II. SOLAR RADIO BURST

The activity of the Sun usually emits plasma from the corona. This plasma may reach the Earth and cause disturbances in the magnetic field of the Earth. It is important to understand solar radio bursts because the associated events can disrupt wireless communications, damage satellites, produce destructive surges on power grids and endanger astronauts [5]. The activity of the Sun starts after the process of magnetic reconnection on the surface of the Sun, which leads to solar flares. A solar flare is a brief eruption of intense high-energy radiation from the Sun's surface. It is more dangerous if it is directed towards the Earth. The impact of a solar flare that can be observed by the naked eye is the aurora phenomenon, which usually takes place in the Southern and Northern hemispheres and is caused by collisions between the electrically charged particles, normally plasma, released by the Sun and the gases inside the Earth's atmosphere, such as oxygen and nitrogen.
During a solar flare event, radio waves are produced and some of them propagate toward the Earth. Radio waves have the longest wavelengths and the lowest frequencies in the electromagnetic spectrum; their wavelength can be as small as a football or larger than our planet. They were first demonstrated by Heinrich Hertz in the late 1880s. It is believed that all astronomical objects, like planets, produce radio waves if they have a changing magnetic field. The radio astronomy instrument called WAVES on the WIND spacecraft has recorded radio wave data from astronomical objects, including radio waves from the Sun's corona and from other planets in our solar system.

The figure below shows the data recorded by the WIND spacecraft in one day. The varying wave emission was obtained from the Sun's radio bursts, the Earth, and even from Jupiter's ionosphere, whose waves measure about fifteen meters in length. The right part of the graph shows the radio burst from the Sun caused by electrons ejected into space during the solar flare events, moving at 20% of the speed of light.

Fig. 3. The data of radio emission recorded by the WIND spacecraft. (Photo credit to NASA/GSFC Wind Waves, Michael L. Kaiser)

The radio waves of the Sun, also known as solar radio bursts, have been classified into different classes for more than 40 years [6], [7], [8]. The main observational parameters of these classified solar radio bursts are their bandwidth, frequency drift rate and the duration of the emissions [9]. Table 1 below shows the characteristics of each type of burst (I, II, III, IV and V), and figure 4 below shows the formation of each radio burst as it appears on the computer screen.

Table 1. The characteristics of solar radio bursts. (Credit to IPS AUSTRALIA)

Fig. 4. The schematic diagram of solar radio bursts in a graph of frequency versus time.

III. METHODOLOGY

The radio waves of a burst can be detected by an antenna on the ground. Most of the data acquired in this paper are obtained from the CALLISTO website, where an antenna is used in the first stage. It is a worldwide network that aims to observe solar radio emission for astronomical science, with antennas covering the frequency range 45-870 MHz [10]. The idea of the CALLISTO project is to construct a simple and low-cost instrument which can be sited in places with low infrastructure or in remote locations [11], with the aim of monitoring solar activity 24 hours per day [10]. It is led by the Swiss Federal Institute of Technology Zurich (ETH Zurich), which sets up collaborations with local host institutions and operates the e-CALLISTO Solar Radio Telescope network, currently with more than 166 instruments in more

than 66 locations, including Switzerland, Belgium, Egypt and others.

The CALLISTO spectrometer [12] was built in the framework of IHY2007 (International Heliophysical Year) and the ISWI by the former Radio and Plasma Physics Group (Principal Investigator Christian Monstein) at the Institute for Astronomy, ETH Zurich [13]. The system contains both hardware and software. The radio burst detected by the antenna is interpreted by the software, which provides several output files. The most important are the data files, which use the Flexible Image Transport System (FITS) file format [10]. Figure 6 below shows the CALLISTO process: it starts when a solar radio burst is detected by the antenna, then the data are interpreted on a computer and appear on the screen. The data are then gathered in a database through the internet and can be reached through the website http://www.e-callisto.org/.
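As an illustration of working with these data files (this sketch is not part of the original paper), an e-CALLISTO FITS spectrogram can be opened with a standard astronomy library such as astropy. The file name below is a hypothetical example, and the two-dimensional layout (frequency channels by time steps) is assumed from the network's usual convention:

# Minimal sketch, assuming the usual e-CALLISTO FITS layout;
# "callisto_example.fit" is a hypothetical file name.
from astropy.io import fits

with fits.open("callisto_example.fit") as hdul:
    spectrogram = hdul[0].data   # 2-D intensity array (frequency x time)
    header = hdul[0].header      # observation metadata (station, start time)
    print(header.get("DATE-OBS"), spectrogram.shape)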

Fig. 5. The antenna built in Finland (photo credit: CALLISTO website).

Fig. 6. The process that takes place in the CALLISTO framework.

IV. RESULT AND DISCUSSION

Fig. 7. The solar radio bursts of type I (a), II (b), III (c), IV (d) and V (e).

The above data show the different types of solar radio bursts, which have different shapes, intensities and frequencies as they appear on the spectrograph. The data in figure 7 (a), recorded on 1st July 2014, show an SRBT I. It has the shortest duration compared with the other events (b), (c), (d) and (e). The bursts are also associated with different phenomena that occurred before they started to appear on the spectrograph. For example, the SRBT II of 24th August 2014 observed in figure 7 (b) was associated with a Coronal Mass Ejection (CME) event which started at 12:06 UT, with the fully developed CME emerging at 12:14 UT on the same day. This CME event (figure 8 (f)) was recorded by NASA's Solar Dynamics Observatory. The burst in figure 7 (c), an SRBT III which occurred on 13th April 2015, was also associated with CME events (figure 8 (g)) recorded by SOHO on the same day at 00:12 UT, several hours before the complex SRBT III formed.

The event in figure 7 (d), which appeared on 2nd April 2014, was an SRBT IV with a strong ultraviolet blast recorded by NASA's Solar Dynamics Observatory. This extreme ultraviolet blast (refer to figure 8 (h)) produced a huge M6-class solar flare. M-class flares are medium-sized flares that can cause brief radio blackouts affecting the Earth's polar regions. It is reported that during the SRBT V on 15th May 2015 (figure 7 (e)), a "hedgerow prominence" was caught by the space telescope, showing remarkable features: tadpole-shaped plumes that float up from the base of the prominence, narrow streams of plasma that descend from the top like waterfalls, and swirls and vortices that resemble van Gogh's Starry Night (refer to figure 8 (i)).

Fig. 8. The related events of solar radio bursts of type II, III, IV and V.

V. CONCLUSION

There are five types of solar radio bursts, classified based on their own characteristics. However, several types cannot always be detected, as the activity of the Sun is not static. All these types of solar radio bursts occur due to the magnetic reconnection process. All the events have different related phenomena. A solar radio burst is the effect of particles propagating during a solar flare; some of them may reach the Earth and can be detected using an antenna.

ACKNOWLEDGMENT

We are grateful to the CALLISTO network, STEREO, LASCO, SDO/AIA, NOAA, SOHO, SolarMonitor and SWPC for making their data available online. This work was partially supported by the 600-RMI/FRGS 5/3 (135/2014) and 600-RMI/RAGS 5/3 (121/2014) UiTM grants and Kementerian Pengajian Tinggi Malaysia. Special thanks to the National Space Agency and the National Space Centre for giving us a site to set up this project and for supporting this project. Solar burst monitoring is a project of cooperation between the Institute of Astronomy, ETH Zurich, FHNW Windisch, Switzerland, Universiti Teknologi MARA and the University of Malaya. This paper also used the NOAA Space Weather Prediction Centre (SWPC) sunspot, radio flux and solar flare data for comparison purposes.

REFERENCES

[1] Hamidi, Z., et al., "Magnetic Reconnection of Solar Flare Detected by Solar Radio Burst Type III," in Journal of Physics: Conference Series, IOP Publishing, 2014.
[2] Martens, P., "Yohkoh-SXT observations of reconnection," Advances in Space Research, vol. 32, no. 6, pp. 905-916, 2003.
[3] Wild, J., S. Smerd, and A. Weiss, "Solar bursts," Annual Review of Astronomy and Astrophysics, vol. 1, p. 291, 1963.
[4] Hamidi, Z. and N. Shariff, "Evaluation of signal to noise ratio (SNR) of log periodic dipole antenna (LPDA)," in Business Engineering and Industrial Applications Colloquium (BEIAC), 2013 IEEE, 2013.
[5] Zavvari, A., et al., "CALLISTO Radio Spectrometer Construction at Universiti Kebangsaan Malaysia," IEEE Antennas and Propagation Magazine, vol. 56, no. 2, pp. 278-288, 2014.
[6] Kundu, M.R., Solar Radio Astronomy. New York: Interscience Publication, 1965.
[7] McLean, D.J. and N.R. Labrum, Solar Radiophysics: Studies of Emission from the Sun at Metre Wavelengths, 1985.
[8] Benz, A.O., Plasma Astrophysics: Kinetic Processes in Solar and Stellar Coronae. Dordrecht, Holland: Kluwer Academic Publishers, 1993.
[9] Raulin, J.-P. and A. Pacini, "Solar radio emissions," Advances in Space Research, vol. 35, no. 5, pp. 739-754, 2005.
[10] Zavvari, A., et al., "Analysis of radio astronomy bands using CALLISTO spectrometer at Malaysia-UKM station," Experimental Astronomy, pp. 1-11, 2015.
[11] Kallunki, J., M. Uunila, and C. Monstein, "Callisto radio spectrometer for observing the sun—Metsähovi Radio Observatory joins the worldwide observing network," IEEE Aerospace and Electronic Systems Magazine, vol. 28, no. 8, pp. 5-9, 2013.
[12] Benz, A.O., C. Monstein, and H. Meyer, "CALLISTO—a new concept for solar radio spectrometers," Solar Physics, vol. 226, no. 1, pp. 143-151, 2005.
[13] Russu, A., et al., "A year of operation of Melibea e-Callisto Solar Radio Telescope," in Journal of Physics: Conference Series, IOP Publishing, 2015.

Simulation Modeling of Sewing Process for Evaluation of Production Schedule in Smart Factory
Sooyoung Moon, Sungjoo Kang, Jaeho Jeon, and Ingeol Chun*
CPS Research Section
Embedded SW Research Department
SW·Content Research Laboratory
Electronics and Telecommunications Research Institute
Daejeon, Korea
{symoon, sjkang, jeonjaeho11, igchun}@etri.re.kr

*Corresponding Author: igchun@etri.re.kr

Abstract— We should complete customized products within a specified time to implement smart factories. For this, simulation can work as a tool for validating production schedules. In this manuscript we propose a simulation model for a sewing machine and a modeling tool for simulating the sewing process.

Keywords—smart factory; industry 4.0; sewing process; modeling; simulation

I. INTRODUCTION

Industrie 4.0, proposed by DFKI [1], is defined as the 4th industrial revolution based on the Internet-of-Things (IoT) [2], cyber-physical systems (CPS) [3], and the Internet-of-Services (IoS) [4]. One of the characteristics of Industrie 4.0 is that it includes smart factories capable of generating customized products for customers. One of the important issues in implementing a smart factory is to complete and deliver customized products to customers within a specified time. For this, we need an efficient scheduling algorithm [5].

It becomes more and more sophisticated work to validate a production schedule in factories. Simulation is a tool for validating a production schedule and changing it if needed. To use simulation, we need appropriate simulation models. In this paper, we propose a sewing machine model for simulating the sewing process. The proposed sewing machine model includes sensing, sewing, forwarding and control functions as submodels. We also propose a modeling tool that includes the proposed model. The proposed modeling tool manages a model library that can be continuously extended for sewing process simulation. Further, it can automatically generate and build source code for simulation models. Therefore, users can easily develop their own models and simulate them.

II. SIMULATION MODEL FOR SEWING PROCESS

A. Motivation

One of the distinguishing features of smart factories compared to existing factories is that smart factories generate customized products. Other characteristics of a smart factory are as follows [6-8]: 1) Each product has a unique ID. 2) Each product passes through a different sequence of processes until all required processes are completed. 3) Products and facilities communicate with each other to determine each product's production schedule. Therefore, facilities in smart factories should be modeled differently from facilities in existing factories.

B. Sewing machine model (Structural model)

We defined a sewing machine model for simulating sewing machine processes. Figure 1 shows the structure of the proposed sewing machine model.

Fig. 1. Sewing machine model (Structural model)

The sewing machine model was defined as a structural model consisting of four component models (Sensor / Work / Forward / Control). The Sensor model detects raw material or semi-finished products arriving at the sewing machine.
The Work model performs the sewing process for the arrived raw material or semi-finished products. The Forward model chooses the next forwarding facility and passes the processed semi-finished product on to the selected facility. Finally, the Control model governs the whole operation of the sewing machine.

C. Sensor model (Behavioral model)


The Sensor model abstracts a sensor module that detects raw material or semi-finished products arriving at the sewing machine. Figure 2 illustrates the state transition diagram of the Sensor model.

Fig. 2. Sensor model (Behavioral model)

The Sensor model has 4 phases and moves from one phase to another whenever a state transition occurs. In the Init phase, the Sensor model stores its current location and moves to the Sensing phase. In the Sensing phase, the Sensor model periodically checks whether semi-finished products have arrived at the sewing machine. If one has arrived, it goes to the Detected phase; otherwise, it goes to the Non-detected phase. In the Detected phase, the Sensor model outputs the information of the arrived product through the port Out_Sensor_Detection and then returns to the Sensing phase. In the Non-detected phase, it returns to the Sensing phase after a predefined time.

D. Work model (Behavioral model)

The Work model represents the sewing operation and changes the properties of an arrived product. Figure 3 shows the state transition diagram of the Work model.

Fig. 3. Work model (Behavioral model)

The Work model has 3 possible phases (Idle, Working, Reporting). In the Idle phase, it waits for a product to arrive at the sewing machine. When an input arrives through the port In_Work_Command, the Work model moves to the Working phase. It changes the properties of the arrived product in the Working phase and outputs the work result through the port Out_Work_Report.

E. Forward model (Behavioral model)

The Forward model implements the variable process of a smart factory by choosing the next forwarding facility for a semi-finished product and delivering the product to the selected facility. Figure 4 represents the state transition diagram of the Forward model.

Fig. 4. Forward model (Behavioral model)

The Forward model consists of 4 possible phases (Idle, SelectNext, Forwarding, Reporting). In the Idle phase, it waits until the sewing operation ends. When an input arrives through the port In_Forward_Command, the Forward model goes to the SelectNext phase. In the SelectNext phase, it chooses the next forwarding facility based on the workload of the candidate facilities and then goes to the Forwarding phase. In the Forwarding phase, it passes the semi-finished product to the selected facility and moves to the Reporting phase. Finally, it generates the output through the port Out_Forward_Report.
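As a minimal illustrative sketch (not the authors' DEVS-based implementation), the behavior described above can be approximated in ordinary code as a state machine; the class names, the workload attribute and the least-loaded selection rule are assumptions made for illustration:

# Minimal sketch of the Forward behavior as a state machine; the
# Facility class and its workload attribute are illustrative assumptions.
class Facility:
    def __init__(self, name, workload=0):
        self.name, self.workload, self.queue = name, workload, []

    def receive(self, product):
        self.queue.append(product)
        self.workload += 1

class ForwardModel:
    def __init__(self, candidates):
        self.phase = "Idle"
        self.candidates = candidates

    def in_forward_command(self, product):   # input port In_Forward_Command
        self.phase = "SelectNext"
        nxt = min(self.candidates, key=lambda f: f.workload)
        self.phase = "Forwarding"
        nxt.receive(product)                 # pass on the semi-finished product
        self.phase = "Reporting"
        report = {"product": product, "next": nxt.name}  # port Out_Forward_Report
        self.phase = "Idle"
        return report

The Sensor, Work and Control submodels can be sketched in the same style, with each input-port method driving the phase transitions described in the text.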

F. Control model (Behavioral model)

The Control model manages the other submodels of the Sewing machine model. Figure 5 shows the state transition diagram of the Control model.

Fig. 5. Control model (Behavioral model)

The Control model has 4 possible phases (Sensing, Working, Forwarding, Logging) that correspond to an operation cycle (detection, sewing, sending, and recording) of a sewing machine in a smart factory environment.

In the Sensing phase, the Control model waits for an input from the Sensor model and goes to the Working phase when it receives the arrived product's information through the port In_Control_Detection. In the Working phase, it sends a work command to the Work model through the port Out_Control_WorkCommand. When the Control model receives the forwarding result from the Forward model, it goes to the Logging phase. In the Logging phase, the Control model records the processing result and returns to the Sensing phase.

III. MODELING TOOL FOR DEFINING SIMULATION MODELS

We have implemented a modeling tool that manages a model library including the described Sewing machine model [9]. Figure 6 shows a simulation scenario organized by using the proposed modeling tool.

Fig. 6. Simulation scenario organized by using the modeling tool

A user can easily add models such as the sewing machine to a simulation scenario by using the proposed modeling tool. Further, the modeling tool automatically generates and builds source code for the models to be executed by the simulator. Therefore, the modeling tool supports the addition, deletion, modification and reuse of simulation models in the model library.

IV. CONCLUSION

We should complete and deliver personalized products to customers within a specified time to implement smart factories. Whether we can complete the customized products in the specified time depends on the production schedule used. Simulation can work as a tool for validating a production schedule and changing it if needed. In this manuscript, we define a Sewing machine model organizing a sewing process, together with a modeling tool. The proposed model and modeling tool can be continuously improved and extended.

ACKNOWLEDGMENT

This work was supported by the ICT R&D program of MSIP/IITP [R-20150505-000691, IoT-based CPS platform technology for the integration of virtual-real manufacturing facility].

REFERENCES

[1] DFKI - German Research Center for Artificial Intelligence. http://www.dfki.de.
[2] J. Gubbi, R. Buyya, S. Marusic and M. Palaniswami, "Internet of Things (IoT): A vision, architectural elements, and future directions," Future Generation Comput. Syst., vol. 29, pp. 1645-1660, 2013.
[3] V. Majstorović, J. Mačužić, T. Šibalija and S. Živković, "Cyber-physical manufacturing systems - manufacturing metrology aspects," Proceedings in Manufacturing Systems, vol. 10, pp. 9-14, 2015.
[4] J. Cardoso, K. Voigt and M. Winkler, "Service engineering for the internet of services," in Enterprise Information Systems, Springer, 2008, pp. 15-27.
[5] A. K. Gupta and A. I. Sivakumar, "Simulation based multiobjective schedule optimization in semiconductor manufacturing," in Simulation Conference, 2002. Proceedings of the Winter, 2002, pp. 1862-1870.
[6] A. Radziwon, A. Bilberg, M. Bogers and E. S. Madsen, "The Smart Factory: Exploring adaptive and flexible manufacturing solutions," Procedia Engineering, vol. 69, pp. 1184-1190, 2014.
[7] Z. Detlef, "Smart Factory - Towards a Factory of Things," Annual Reviews in Control, vol. 34, pp. 129-138, 2010.
[8] D. Lucke, C. Constantinescu and E. Westkämper, "Smart Factory - A Step towards the Next Generation of Manufacturing," pp. 115-118, 2008.
[9] H. Lee, I. Chun and W. Kim, "DVML: DEVS-Based Visual Modeling Language for Hybrid Systems," vol. 256, pp. 122-127, 2011.

A Knowledge Management Framework for Studying Child Obesity
Raslapat Suteeca and Prompong Sugunnasil
College of Arts, Media and Technology
Chiang Mai University
Chiang Mai, Thailand 50200
Email: RASLAPATS@GMAIL.COM, P.SUGUNNASIL@GMAIL.COM

Abstract—The problem of obesity in children is an important problem because it could lead to other diseases. The major challenge in current childhood obesity prevention is the diversity of information: not all of the information is valid medical knowledge; some of it is local belief. Therefore, applying invalid information to children often causes more harm than good. In this work, we present a knowledge management framework to collect and analyze the related knowledge about childhood obesity prevention. The process of this framework is based on knowledge creation theory.

I. INTRODUCTION

Nowadays, obesity has become a major regional and global problem. The World Health Organization (WHO) explored the world obesity state in 2006 and found that approximately 22 million children under 5 years old were overweight. In addition, the study of Ladda Mausuwan (2008) found that in Thailand the weight of children aged under 6 increased by more than 40%, from 5.8%, between 1995 and 2001 [1]. Obesity has become a major problem in Thailand. There are several causes of children's obesity, both from the children themselves, namely obesity at birth, and from external causes, namely the parents' nurturing. Apart from this, some areas hold local beliefs that obese children are cuter or more adorable than thinner ones. These several causes entail childhood obesity, which results in unhealthy children. Solutions applied to childhood obesity have involved collecting fostering procedures from experienced parents or the elderly and then propagating them as appropriate.

Knowledge management is the process of collecting and organizing data/information to create useful knowledge and retain such knowledge in the organization. Disseminating knowledge shares similar methods with distributing knowledge on fostering children; therefore, it is reasonable to apply knowledge management procedures to collecting and organizing fostering procedures so that they can be used to decrease childhood obesity.

Seymour Papert has described constructionist learning as follows: "...inspired by the constructivist theory that individual learners construct mental models in order to understand the world around them" [2]. The concept of constructivism in learning proposes that the student is the center of learning, where students use known information to acquire or discover more knowledge [3]. Students learn through participation in project-based learning, where they make connections between different ideas and areas of knowledge, facilitated by the teacher through coaching rather than lectures or step-by-step guidance [3]. Furthermore, constructionism holds that learning can happen most effectively when people are active in making tangible objects in the real world. In this sense, constructionism is connected with experiential learning and builds on Jean Piaget's epistemological theory of constructivism. An important aspect of constructionism theory is that learning can occur anytime, both inside and outside the classroom.

This research demonstrates knowledge management by using knowledge creation theory. Activities for each knowledge base were determined. This research focuses on fostering procedures to prevent obesity in children, and the study includes local beliefs, which can be either supported or argued against. The samples were parents currently taking care of their children, or experienced parents. In the next section, the literature on obesity in children and knowledge management is reviewed to build up the ground comprehension and the application of knowledge management. The framework applied in this study is then demonstrated, including the details of each activity used to accomplish the purposes of the study. Lastly, the summary of research results and recommendations is presented.

II. LITERATURE REVIEW

This section discusses related research and theory. Section II-A presents the overall condition of obesity, including the effects of obesity in children. Then, section II-B provides the basic concept of the knowledge management process applied in this study. Finally, section II-C gives a brief overview of Bloom's taxonomy.

A. Obesity in Childhood

Obesity is a medical condition, occurring across a wide range of genders and ages, in which body fat has accumulated to a level that could have a negative effect on health [1][4]. The level of obesity varies across races. The criterion used to determine obesity is the body mass index (BMI) [5], a measurement obtained by dividing a person's weight by the square of the person's
height. Western people are considered to be obese when their BMI is over 30 kg/m². A BMI within the range of 25-30 kg/m² is defined as overweight. Obesity increases the likelihood of other diseases such as heart disease, diabetes and obstructive sleep apnea [4].

Normally, obesity is caused by several factors, including excessive food intake, lack of physical activity, and genetic susceptibility. Although there is a small genetic component, the evidence supporting the view that some obese people eat little yet gain weight due to a slow metabolism is limited. Generally, obese people have a higher degree of energy expenditure compared to people of normal weight, since they have a larger body mass [1]. Obesity can be caused by genetics or by external factors resulting in energy intake exceeding energy expenditure. The excess energy is transformed into fat and retained in the body; if there is too much of it, one faces obesity [1][4].

The common causes of childhood obesity are genetic factors and environmental factors [6][7]. The environmental factors can be economic and social conditions [8], lifestyle and cultural environment [4] (e.g., the belief that an obese child is cuter and healthier [9]), the effects of advertisements for unhealthy products [10][7], maternal obesity, inadequate exercise, consumption patterns [11], and the parents' educational level: less educated parents tend to choose less appropriate food and activities for their children than better educated parents do [4]. Race also has some influence on the status of obesity in children [12]. Hispanic children show a higher rate of obesity than non-Hispanic children. Moreover, Asian and Pacific Islander children are the only group whose status has improved.

Advances in information technology have directly affected children's behavior, for instance, television watching, computer game playing and so on [10][13]. Based on the Thai Children Health Report, children between 1 and 18 years old spend on average 2.35-5.51 hours per day watching television. Based on the Children and Youth Report (2004-2005), elementary school children spent 75.66 minutes per day online, and in 2006-2007 this number tended to increase to 108.89 minutes per day. In addition, children between 6 and 10 years old play Internet games the most [8].

Puhl and Latner's study found that 10-20% of children with obesity tend to remain obese, and 40% of these children become obese teenagers. Of this teenage group, 75-80% become obese adults, from which it can be concluded that children with obesity tend to become adults with obesity [14]. Obesity can directly and indirectly affect children's quality of life, for example school performance and social adaptation, as obese children can become objects of humiliation, resulting in poor personality development and unhealthy emotions. Based on the studies mentioned, it can be concluded that obesity is not an adult sickness but starts in childhood, as fat accumulates continuously, adhering to the blood vessels, heart, brain, liver and other organs all over the body. This causes several diseases, such as hypertension, osteodystrophy and heart-related diseases. In conclusion, obesity in children and adults is more dangerous than perceived by society [14][8][15].

B. Knowledge Management

Sustainably successful organizations are those that continuously develop efficient working processes. One factor of efficiency is knowledge, which enhances competitive advantage via building and maintaining intellectual capital and efficient knowledge, including applying systematic knowledge management [16][17]; therefore, knowledge that is inimitable and irreplaceable is a vital resource for the organization [18].

Nonaka proposed a dynamic model of knowledge building, namely, knowledge created by a socialization process at the individual and organizational levels [19]. Information can be transformed into knowledge via an individual's interpretation [18]. Knowledge can be divided into: 1) tacit knowledge, which is embedded in each individual through experience, learning or talent; it is autonomous and cannot be found in manuals, books, databases or files [20][18]; and 2) explicit knowledge, which refers to knowledge processed and stored systematically, easy to access and easy to use [21]. The latter involves creating systematic databases, ensuring reliable access to them, making such information easy to use and present, and applying it to problem solving in similar incidents [20].

Tacit knowledge and explicit knowledge can change status at any time, depending on the situations requiring new knowledge. Nonaka explains that knowledge is viral, spreading from the individual to the group and then to the organization [19][22]. Knowledge can be created by socialization, externalization, combination and internalization [19]. Socialization is the interaction between the tacit knowledge of individuals who share experiences or activities; learning in this step takes place by observation. Externalization occurs when key knowledge from an individual is interpreted in a comprehensible format so that it can be presented to others, namely through documentation of such knowledge. Combination occurs when individuals or groups from different organizations, who own explicit knowledge, participate in activities together, resulting in more complex and more systematized knowledge, as in conferences and seminars. Internalization occurs when an individual, group or organization adjusts and applies knowledge via learning by doing, i.e., training, simulation, or experimentation [22].

C. Bloom's Taxonomy

In the past, academics and psychologists have tried to create frameworks for prioritizing academic goals. The generally accepted theory has been the learning behavior taxonomy by Bloom and his associates. The taxonomy divides cognitive processes into 6 steps, from lowest to highest, as follows: 1) Remembering, 2) Understanding, 3) Applying, 4) Analyzing, 5) Evaluating, 6) Creating [23].

First, remembering refers to the ability to recall, list, tell and name; for example, students can state the meaning of a theory. Secondly, understanding refers to the ability to interpret, provide
examples, conclude and refer; for example, students can explain the theory. Third, applying refers to the ability to put knowledge to use and solve problems; for example, students can use knowledge to solve certain problems. Fourth, analyzing refers to the ability to examine, critique and judge; for example, students can evaluate the theory. Fifth, evaluating refers to the ability to evaluate, and lastly, creating refers to the ability to design, plan and produce; for example, students can create a new theory which is different from the old ones [24].

III. KNOWLEDGE MANAGEMENT FRAMEWORK TOWARD OBESITY IN CHILDREN

This section presents the architecture of the knowledge management framework toward children with obesity by describing the related activities in each knowledge base.

The literature review on knowledge management using knowledge creation theory found that the transformation of a knowledge base must be equipped with adequate resources for changing such knowledge [25]. Each knowledge base must contain enough activities to be operated in order to achieve knowledge retention. In this study, the research team determined detailed activities in each base, hoping to change the knowledge base by using Bloom's technique. Keywords were used in this research to stimulate learning, and multimedia was applied to ensure fast and efficient communication and to eliminate the limitations of time and place. In addition, the application of multimedia stimulates remembering and comprehension in the long run; therefore, multimedia has been applied to knowledge creation in each knowledge base.

The first step in knowledge creation is socialization. The purpose of this step is to share and create tacit knowledge via interactions among individuals by exchanging experiences. After the exchange, knowledge is created, developed into personal knowledge, and then turned into a personal process. Therefore, the activity goal in preventing obesity in children is to create comprehension of obesity in children. In this regard, keywords for jogging the memory and understanding, such as comprehends, converts, defends, distinguishes, estimates, explains, extends, generalizes, gives an example, infers, interprets, paraphrases, predicts, rewrites, summarizes and translates, were used via conversation, exchange of knowledge, web boards, live chat, communities, and social media. Fostering procedures were introduced, including the factors involved in the fostering process, with the hope of eventually developing a community of practice (CoP).

In the second step, externalization aims to create and share tacit knowledge from the socialization stage and disseminate it as explicit knowledge. Externalization includes information processing and using information from valid sources to create a database, as per the related literature review. This involves applying knowledge. The keywords comprise applies, changes, computes, constructs, demonstrates, discovers, manipulates, modifies, operates, predicts, prepares, produces, relates, shows, solves, and uses, as they reflect what has been learned by the learner. The key activities are creating media to publicize knowledge, including efficient fostering procedures, via websites, hyperlinks, e-learning, movie clips, CAI, e-texts and blogs.

The third step, combination, aims to integrate knowledge by re-examining the knowledge from the socialization and externalization stages to ensure the validity and reliability of the efficient fostering procedures. The analytical process must be reasonable and logical. The keywords used are analyzes, breaks down, compares, contrasts, diagrams, deconstructs, differentiates, discriminates, distinguishes, identifies, illustrates, infers, outlines, relates, selects, and separates. It can be noted that such keywords assist in assessing the knowledge gained. In addition, primary data collected via conversation were used to construct a statistical framework that can be used to analyze the obesity condition of children in the selected area. Such statistical data connect each fostering procedure to children's characteristics. Taken together, this knowledge is used as the content for media that are comprehensible to common locals. Each piece of knowledge must be either validated or argued against using trusted data; if not, fundamental statistical information must be presented as appropriate. The key activities are info-graphics.

The last step, internalization, is the process that transforms explicit knowledge into tacit knowledge, namely, utilizing knowledge and putting it into practice. The knowledge user must create new tacit knowledge of his/her own. The keywords used are categorizes, combines, compiles, composes, creates, devises, designs, explains, generates, modifies, organizes, plans, rearranges, reconstructs, relates, reorganizes, revises, rewrites, summarizes, tells, and writes. This implies that this process involves putting the knowledge learned into use, whether it comes from reading or watching several sources, or from experimentation and real practice, which can take the form of simulations or games.

IV. CONCLUSION

This research has proposed a knowledge management model to collect and analyze knowledge on the prevention of childhood obesity by determining activities for each knowledge base. However, there are still several hindrances experienced in accomplishing the process, as follows:
∙ Knowledge categorizing: As the knowledge gained from this study varies in importance, especially in the socialization process, where experienced participants lack theoretical knowledge, categorizing knowledge is considered a vital process and should be conducted by a specialist.
∙ Persuasion process: Knowledge management seminars have been popular for collecting and exchanging key knowledge; however, there is a risk of knowledge distortion by the exchanger, intentionally or unintentionally, e.g., through tone of voice, body language, or different levels of education, which is a hindrance to others' participation.
∙ Knowledge diffusion: This process requires the creditworthiness of the diffuser; therefore, selecting the diffuser (of
knowledge) is a vital process. He/she must be someone who is close to the target community.

The proposed method is a conceptual framework for studying obesity in children. As a consequence, the proposed framework must be evaluated through real-world experiments. The framework has to be implemented, and the statistical results must be gathered to verify the performance of the proposed method.

REFERENCES
[1] L. Mausuwan, The decade of children and family wisdom. National
Institute for Child and Family Development, 2008, ch. Child Nutrition
in Thailand, pp. 49–52.
[2] S. Papert and I. Harel, “Situating constructionism,” Constructionism,
vol. 36, pp. 1–11, 1991.
[3] M. Wagner and J. Schorger, “Core knowledge area module number 2:
Principles of human development,” 2005.
[4] M. Dehghan, N. Akhtar-Danesh, and A. T. Merchant, “Childhood
obesity, prevalence and prevention,” Nutrition journal, vol. 4, no. 1,
p. 24, 2005.
[5] [Online]. Available: https://en.wikipedia.org/wiki/Obesity
[6] J. Warren, C. Henry, H. Lightowler, S. Bradshaw, and S. Perwaiz,
“Evaluation of a pilot school programme aimed at the prevention of
obesity in children,” Health Promotion International, vol. 18, no. 4, pp.
287–296, 2003.
[7] J. Lumeng, “What can we do to prevent childhood obesity?.” Zero to
Three (J), vol. 25, no. 3, pp. 13–19, 2005.
[8] P. Y.-n. T. N. Raipin, Arunee, “Opinions and practices of mothers having
overweight pre-school children,” Princess of Naradhiwas University
Journal, vol. 2, no. 4, pp. 1–15, 2012.
[9] S. J. Tongdang, Pulawit, “Overweight children in thailand,” Rama Nurs
I, vol. 18, no. 3, pp. 287–297, 2012.
[10] S. A. French, M. Story, and R. W. Jeffery, “Environmental influences
on eating and physical activity,” Annual review of public health, vol. 22,
no. 1, pp. 309–335, 2001.
[11] E. Isganaitis and L. L. Levitsky, “Preventing childhood obesity: can
we do it?” Current Opinion in Endocrinology, Diabetes and Obesity,
vol. 15, no. 1, pp. 1–8, 2008.
[12] L. Pan, L. C. McGuire, H. M. Blanck, A. L. May-Murriel, and
L. M. Grummer-Strawn, “Racial/ethnic differences in obesity trends
among young low-income children,” American Journal of Preventive
Medicine, vol. 48, no. 5, pp. 570 – 574, 2015. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0749379714006655
[13] J. F. Sallis and K. Glanz, “Physical activity and food environments:
solutions to the obesity epidemic,” Milbank Quarterly, vol. 87, no. 1,
pp. 123–154, 2009.
[14] S. Jongpiputvanich. (2015, September) Childhood obesity is more
dangerous than you think. [Online]. Available: http://www.healthstation.
in.th/action/viewarticle/737/
[15] E. Takahashi, K. Yoshida, H. Sugimori, M. Miyakawa, T. Izuno, T. Ya-
magami, and S. Kagamimori, “Influence factors on the development of
obesity in 3-year-old children based on the toyama study,” Preventive
medicine, vol. 28, no. 3, pp. 293–296, 1999.
[16] K. Srimahavaro, “The use of knowledge management system to increase
academic management efficiency of extended opportunity school under
office of primary educational service area4 nakornsrithammarat: A case
study of bansrabua school,” Ph.D. dissertation, Rangsit University, 2012.
[17] K. M. Wiig, “Knowledge management: an introduction and perspective,”
journal of knowledge Management, vol. 1, no. 1, pp. 6–14, 1997.
[18] R. Seidler-de Alwis and E. Hartmann, “The use of tacit knowledge
within innovative companies: knowledge management in innovative
enterprises,” Journal of knowledge Management, vol. 12, no. 1, pp. 133–
147, 2008.
[19] I. Nonaka, “A dynamic theory of organizational knowledge creation,”
Organization science, vol. 5, no. 1, pp. 14–37, 1994.
[20] E. A. Smith, “The role of tacit and explicit knowledge in the workplace,”
Journal of knowledge Management, vol. 5, no. 4, pp. 311–321, 2001.
[21] K. Chaharbaghi, A. Adcroft, R. Willis, S. M. Jasimuddin, J. H. Klein, and C. Connell, "The paradox of using tacit and explicit knowledge: strategies to face dilemmas," Management Decision, vol. 43, no. 1, pp. 102–112, 2005.
[22] D. Finley and V. Sathe, "Nonaka's SECI framework: Case study evidence and an extension."
[23] T. Naruemannalinee and W. Tangkuptanon, "An integration of e-learning and collaborative learning on the basis of Bloom's taxonomy," in Proceedings of the 6th National Conference on Computing and Information Technology, 2010.
[24] W. K. and C. Piyapimolsit, "Revised Bloom's taxonomy," Parichart Journal, 2005.
[25] M. Mvungi and I. Jay, "Knowledge management model for information technology support service," Electronic Journal of Knowledge Management, vol. 7, no. 3, pp. 353–366, 2009.

Hybrid Clustering System Applied in Patent Quality Management -- Take the Intelligent Car Industry for Example

Chin-Yuan Fan and Shu-Hao Chang
Science & Technology Policy Research and Information Center,
National Applied Research Laboratories, Taiwan
cyfan@narlabs.org.tw, shchang@narlabs.org.tw

Abstract—The intelligent car has been a key area of our future technology. In recent years, more and more competitors have started to invest in it. Therefore, it is essential for researchers to track how to improve their technology and research ability in the intelligent car industry. In this research, the future development of the intelligent car industry is discussed using patent analytic tools. First of all, related patents are searched and collated by keywords in order to determine twenty key statistical indicators. Then, six of these twenty indicators are selected by the stepwise regression method for later use in SOM (self-organizing map) clustering. Finally, according to the results of the SOM clustering, the patent clusters are defined as high, mid-high, middle, or low value patents. Corresponding forecasting models are specifically designed for each value type of patent, allowing industrial researchers to evaluate the strengths and weaknesses of competitors when developing competition strategies.

Keywords—Intelligent Car; SOM (Self-organizing map); Patent; Cluster

I. INTRODUCTION

The "intelligent car" has gradually become the focus of R&D direction in the automotive industry in recent years. Unlike the traditional "car" or "automotive" area, the intelligent car combines electronic equipment, computer analysis and database systems; this intelligent car system seems to be the most popular application in the recent car industry.

However, before discussing intelligent car technology development, we should first ask: what is an "intelligent car", and what distinguishes it from conventional cars and vehicles? At this stage, the intelligent car can basically be explained along three facets: energy systems, security systems, and vehicle networking systems. These facets are closely related to the machinery and ICT industries, whose technical performance affects the operation of these three key industries. In outline, therefore, the smart car can be said to be the most important field of technology research and development at this stage. How to effectively improve research in this area, with a further focus on key patents in this field of analysis, helping manufacturers concentrate on understanding patent and technical information, track patent judgments, and strengthen the competitiveness of this industry, is, we believe, a topic of concern for most investigators at this stage.

This research attempts to use patent indicators to judge the value of patents. The first part reviews past studies to select appropriate patent-value indicators for patent judgment. Then, for these indicators, principal component analysis is used to find the parameters that best fit the model. Self-organizing map clustering analysis is then carried out and, finally, the result is used to predict the relevant priorities. With the formula thus determined, the intelligent car industry can be further integrated and analyzed.

II. LITERATURE REVIEW

A. Patent Analysis

This study attempts to analyze the intelligent car industry using patent analytic tools. Generally, patent analysis can be used to estimate opponents' potential and predict their progress; a corresponding strategy can then be planned. In the early nineties, Mary Ellen Mogee (1991) included competitor analysis, tracking and forecasting of technologies, mastery of important technological developments, international patent strategy, and international patent analysis in a complete patent analysis. About ten years later, Thomas' study (2002) indicated that patent analysis should be helpful to a company in three aspects: (1) patent analysis can help a firm in sighting its patent portfolio and selecting acquisition targets; (2) patent analysis can provide the necessary information for stock market valuation; (3) data from patent analysis are valuable, especially in industries with high-value patents. More scholars have since given statements on the value of patent analysis (Chart 1).
Chart 1. Related statements on the value of patent analysis

Year of publication | Author | Statement
1987 | Wilson | Patent analysis has the potential to be a quantitative analysis which is valuable in aspects such as acquisition, deprivation of property, R&D planning and new product development.
1997 | Liu and Shyu | Patents contain a lot of technical information, and patent analysis is helpful for technology development.
2001 | Abraham and Moitra | Patent analysis can be used as a base for strategic planning, firm development, competitor analysis, and overseas market development.
2002 | Breitzman and Thomas | Patent analysis can be used for acquisition in the following aspects: target selection, due diligence investigation, compatibility and valuation.
2009 | Lee, Yoon, Lee and Park | Functions of patent analysis: evaluating technology assets; understanding the business's advantages and disadvantages; knowing competitors' patent activities; understanding the trend in global competition; guiding managers to evaluate R&D plan priorities.
2011 | Trappey, Wu, Dutta and Trappey | Patent analysis can be used for: comparison of industrial position internationally; finding the leading countries in different fields of technology by their numbers of patents and patent applications.

Huang and colleagues (2010) used data mining as a tool to discover hidden information. During the process, related economic and technical knowledge is organized through patent searching and patent information acquisition, and then by trying various models step by step. The organized knowledge can be used when making management decisions and innovating technology. The objects on which patent analysis is performed include the content of patents, citations of patents, patent numbers, etc. Four areas of patent analysis are covered in Huang's study: (1) monitoring of patent technology activities, (2) industry analysis of a certain technology, (3) life-cycle analysis of a certain technology, and (4) trend analysis. The procedure practiced is listed step by step as follows: raw data processing, data cleaning, data transformation, keyword determination, term frequency-inverse document frequency (TF-IDF) processing, clustering analysis, time series analysis, application of association rules, result analysis, model interpretation, evaluation, and finally knowledge revealing. Nowadays, patent value analysis with the aid of data mining is often used for key patent determination.

B. Data Mining and Clustering

As mentioned above, the combination of data mining and patent analysis can lead us to useful information. In modern society, knowing how to efficiently filter useful information from the less important is the key to success; data mining is the technique developed to meet this need. Several relatively important techniques are briefly introduced as follows (Fayyad, 1996):

1. Classification: Classification is the separation of data elements into classes using models built on a set of known attributes. That is, a function of the values of the other attributes is found based on the class attributes of historical records or samples from a complete database (training data), for the purpose of predicting the categorical labels of unclassified records.

2. Prediction: Prediction is the estimation of the future number of a certain value. In order to predict the future number, the historical attributes of this value are used as training data. In other words, it is a method for estimating the future number of a certain value based on observations of the past.

3. Clustering: Clustering puts data elements into clusters based on the characteristics of the data. After several training and learning phases, the characteristics of the clusters start to reveal themselves. That is to say, the purpose of clustering is to identify the differences between clusters and to group similar samples within one cluster.

Consequently, patent analysis and data mining are used in this study to discover useful knowledge from intelligent-car patent-related information. Hopefully, the discovered knowledge can provide a way to evaluate the values of patents.

III. METHODS AND PROCEDURES

The very first step of this study is to determine the key patent indicators using SPSS Statistics. This determination is important since the dependent and independent variables are set according to its result. From previous research, the numbers of transactions and litigations of a patent are found to be the values most often used when evaluating the value of a patent; thus, the numbers of transactions and litigations are set as the dependent variables here as well. On the other hand, for the independent variables, the possible key indicators are: inventor, patent family, number of organizations owning the patent, number of patent classifications (IPC, CPC, and USPC), citations of the patent, times the patent has been cited, and non-patent references. Each time, six indicators among them are randomly picked for stepwise regression. In the end, after six runs, six indicators are kept as the independent variables for this study. They are: number of IPCs at the moment, DWPI patent family, number of days from application to approval, status of non-patent references (thesis or journal articles), number of patent inventors, and number of CPCs. These independent and dependent variables are later used in the analysis.
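As an illustrative sketch (not the authors' SPSS procedure), a forward stepwise selection over candidate indicators can be written as follows; X is assumed to be a pandas DataFrame, the column names are hypothetical stand-ins for the indicators listed above, and the p-value threshold is an assumption:

# Minimal forward stepwise selection by p-value, using statsmodels.
import statsmodels.api as sm

def forward_stepwise(X, y, threshold=0.05):
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for cand in remaining:
            model = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = model.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= threshold:   # stop when no candidate is significant
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# e.g. X with columns such as "ipc_count", "dwpi_family", "days_to_grant",
# "non_patent_refs", "inventor_count", "cpc_count"; y = transactions + litigations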

After the determination of the indicators, the analytic results are used for Self-Organizing Map (SOM) clustering in this study. The purpose of SOM clustering is to further analyze the overall status of patents by understanding the relations between individual patents. This method can also be applied to analyzing similar types of patents in the future. The algorithm of SOM patent clustering is as follows:

Step 1: Compute the parameters.

Step 2: Set the initial weights $W_{ij}$ of the patent value clustering. The initial weights are usually minimal values chosen at random. The radius of the unit cluster ($R^t$) and the learning rate ($\alpha^t$) are assigned at this step, too.

Step 3: Select a prototype vector from the input data set.

Step 4: Calculate the distance $d_j$ between the input vector of the patent value training data set and each output neuron using equation (1):

$d_j = \sum_i (X_i - W_{ij})^2$  (1)

Step 5: The closest neuron (the one with minimum $d_j$) is the winner unit, which represents the center of the cluster, as shown in formula (2):

$d_{j^*} = \min_j \{ d_j \}$  (2)

Step 6: Adjust the weights $W_{ij}$ between input and output, where $j^*$ is the winner unit at the center of the neighborhood:

$W_{ij} \leftarrow W_{ij} + \alpha^t (X_i - W_{ij}) \times R\_factor_j$  (3)

where $\alpha^t$ is the learning rate and $R\_factor_j = f(R^t, r_j)$ is the neighborhood coefficient of output unit $j$.

Step 7: A learning cycle is the process of performing Step 3 to Step 6 on all training data. The radius of the neighborhood clusters and the learning rate decrease after each learning cycle, as in equations (4) and (5). Learning stops when the settings reach a steady state.

$\alpha^t = \alpha^{t-1} \times \alpha\_rate$  (4)

$R^t = R^{t-1} \times R\_rate$  (5)

Fig. 1. Flow chart of this study.

IV. EXPERIMENT AND RESULTS

In this study, "Intelligent Car" is used as the keyword for patent searching in the Thomson Innovation database. As a result, there are 5580 related patents in total. These patents are used as the data set to practice several steps of this study: (1) finding the values of patents based on the patent indicators, and (2) determining the dependent variables for the evaluation of patents, analyzing the numbers of transactions and litigations, discovering the relationships between the dependent variables using SOM clustering, and setting the steps of the evaluation using stepwise regression.

The purpose of clustering is to put similar units into the same group so that new knowledge can be discovered by observing the characteristics of each cluster and the relationships between clusters. The optimal partitioning should give minimum distance within clusters and maximum distance between clusters. From past experience, setting a large number of partitions usually leads to bad results. Keeping the balance between the samples used for building the model and the number of predictions made is also important: an ideal RMSE cannot be reached when too many samples are used to predict few cases, and vice versa. Hence, the test number of clusters (C) is computed from C=2 to C=4. As a result, C=4 is the best choice because it shows greater differences between clusters, takes fewer learning cycles (52,000 cycles), and has a smaller PMSE (0.0085).

The number of transactions plus litigations of a patent is used here for value evaluation. In this study, patents with 0~1, 2~3, 4~10, and 11 or more transactions and litigations are defined as low, mid-low, mid-high, and high value patents respectively. After completing the patent clustering, this study attempts to construct four prediction functions, for high, mid-high, mid-low, and low value patents separately. The whole research model can be seen in figure 2.
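The clustering procedure above can be sketched in a few lines of code. This is an illustration only (not the authors' implementation): the neighborhood factor of Eq. (3) is simplified to updating the winner unit alone, and all parameter values are assumptions:

# Minimal SOM sketch following Steps 1-7 and Eqs. (1), (2) and (4).
import numpy as np

def som_cluster(X, n_units=4, cycles=100, a=0.5, a_rate=0.99, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.uniform(X.min(0), X.max(0), size=(n_units, X.shape[1]))  # Step 2
    for _ in range(cycles):                         # Step 7: learning cycles
        for x in X:                                 # Step 3: pick input vector
            d = ((x - W) ** 2).sum(axis=1)          # Step 4, Eq. (1)
            j = int(np.argmin(d))                   # Step 5, Eq. (2): winner
            W[j] += a * (x - W[j])                  # Step 6, simplified Eq. (3)
        a *= a_rate                                 # Eq. (4): decay learning rate
    return W

# assign each patent (row of X) to its nearest cluster center:
# labels = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)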

Fig. 2. Illustration of the four clusters (cluster groups 1-4).

Forecasting model design and strategy application

The 1105 samples saved at the beginning are used at this point for testing the accuracy of the resulting models. The testing outcomes are listed below.

1. Prediction function for mid-low value patents:
Y1 = -1.003 + 0.003X1 + 0.773X2 - 0.391X3 - 0.097X4 - 0.119X5 + 0.772X6
In mid-low value patents, the significant indicators are the days from application to approval, the number of IPCs at the moment, the number of inventors, the number of non-patent references, and the number of DWPI families. The remaining indicators show no significance.

2. Prediction function for low value patents:
Y2 = 0.1022 + 0.005X1 + 0.475X2 - 0.442X3 - 0.079X4 - 0.104X5 + 0.324X6
In low value patents, just as in mid-low value patents, the days from application to approval, the number of IPCs at the moment, the number of inventors, the number of non-patent references, and the number of DWPI families show significance, but the rest do not.

3. Prediction function for mid-high value patents:
Y3 = 0.6440 + 0.000X1 + 0.271X2 - 0.927X3 - 0.282X4 - 0.003X5 + 0.224X6
In mid-high value patents, the number of patent inventors is the only significant indicator, while the others are not. Variation of this variable has an impact on the prediction.

4. Prediction function for high value patents:
Y4 = 0.4466 + 0.001X1 + 1.828X2 - 1.557X3 - 0.421X4 - 0.059X5 + 0.265X6
In high value patents, the three significant indicators are the number of IPCs at the moment, the number of patent inventors, and the number of CPCs, while the rest are not significant. That is, changes in these three variables vary the result of the function most dramatically.

As a conclusion, the only significant variable in the high value patent prediction function is the number of inventors; both the number of non-patent references and the number of patent families are important variables in changing the result of the mid-high value patent prediction function; no significant variable is found in the mid-low patent prediction function; and the number of CPCs and the constant can change the result of the low value patent prediction function the most.

V. CONCLUSION AND RECOMMENDATIONS

First of all, the key indicators were selected through the SOM clustering technique, namely: number of IPCs at the moment, number of patent inventors, number of CPCs, number of non-patent references, number of DWPI patent families, and number of transactions and litigations involved. Later, the data were grouped into four significantly different partitions, with features almost consistent with the actual situation. Accurate prediction functions were then derived from the clustering result. From the result, we conclude that the overall flow of this
Back to Contents

study was properly planned for reaching the goal of


constructing accurate and precise models for Intelligent Car
industry. The models built in this study can contribute in
understanding features of clusters by different variables,
guiding direction when decision making, stimulating the
development of high-tech green industry, and elevate the level
of Taiwan Intelligent Car industry level.
5.2 Recommendations.
When confronting competitions in the future, researchers in
Intelligent Car industry are recommend to compare
technological potential between oneself and the competitors
first. Then, the models built in this study can be used for the
future predictions. Finally, the corresponding strategies can be
planned and adjusted according to the prediction result.

REFERENCES
[1] A.J.C. Trappey, C.V. Trappey, C.Y. Wu, C.W. Lin..(2012).”A patent
quality analysis for innovative technology and product development.
Adv. Eng. Inf.26 (1).pp. 26–34.
[2] B.P. Abraham, S.D.(2001). Moitra Innovation assessment through patent
analysis.Technovation, 21 (4). pp. 245–252
[3] Chih-mei Chuang. The Performance and Risk Evaluation of Offshore
Wind Farm Power of Taiwan. Master thesis unpublished.
[4] Chin-Yuan Fan, M. F. Lai, T. Y. Huang, C. M. Huang.(2011). Applying
K-means clustering and technology map in Asia Pacific-semiconductors
industry analysis. IEEM 2011: 1043-1047
[5] Hsiao-Ju Chien. (2007).”Application of the Two-stage Cluster Analysis
on Employee Voluntary Turnover Intention”,.
[6] J. Thomas. (2002). The responsibility of the rulemaker: comparative
approaches to patent administration reform. Berkeley Tech. L. J., 17, pp.
728–761
[7] M. E. Mogee.(1991). “Using Patent Data for Technology Analysis and
Planning, ”Research Technology Management, vol. 34, no. 4, pp. 43-49.
[8] Min-Chi Wen, Kuei-Chao Chang, Pei-Chi Chang, Ray-Yeng Yang,
Hwung-Hweng Hwung.(2012). A Study on the Measures of Navigation
Safety for Offshore Wind Farm. Proceedings of the 34th Ocean
Engineering Conference in Taiwan National Cheng Kung University,
November 2012.
[9] R.M. Wilson.(1987). Patent analysis using online databases—I.
Technological trend analysis. World Patent Information Volume 9, Issue
1. Pages 18–26
[10] Tsung-han Tsai. (2012). “Using Patent Analysis to Forecast Technology
Trends and Firm's Competitive Strategy---A Case Study of AMOLED
Technology.”.
[11] U.M. Fayyad, G. Piatetsky-Shapiro, P. Smyth and R.
Uthurusamy.(1996). Advances in Knowledge Discovery and Data
Mining. AAAI/MIT Press,

164
Back to Contents

An Integrated QFD and Kano's Model to Determine the Optimal Target Specifications

Dian Retno Sari Dewi
Industrial Engineering Department
Widya Mandala Surabaya Catholic University
Surabaya, Indonesia
dianretnosd@yahoo.com

Dini Endah Setyo Rahaju
Industrial Engineering Department
Widya Mandala Surabaya Catholic University
Surabaya, Indonesia
Abstract— The excellence of the Quality Function Deployment (QFD) methodology for translating customer needs into target specifications has been broadly known. However, a number of studies have revealed some methodological flaws: QFD does not have a formal methodology to optimally allocate the resources available for product development, and it employs a subjective technique in assessing the relationship between customer needs and engineering characteristics. In addition, QFD implicitly assumes that the fulfillment of customer needs is linearly related to customer satisfaction, whereas Kano's model indicates that the fulfillment of a customer need may have a nonlinear effect on customer satisfaction. With regard to those issues, this paper presents an optimization model to allocate product development resources. The relationship between engineering characteristics and customer needs is assessed using a regression technique, and Kano's model is integrated into the model to represent the relationship between customer needs and customer satisfaction. The proposed model is then applied to determine the target specifications of a wooden single bed frame. The result shows that the target specifications obtained create a high level of customer satisfaction.

Keywords— customer satisfaction; Kano's model; optimization; product development; Quality Function Deployment

I. INTRODUCTION

Quality Function Deployment (QFD) is a methodology that has been commonly used to develop products which conform to customer needs. QFD's structured tool, the House of Quality (HOQ), consists of matrices that have been systematically arranged to help the development team translate customer needs into the corresponding engineering target specifications [1]. Despite its benefit in maximizing customer satisfaction through better product design, several studies have noted that the conventional QFD suffers from some methodological flaws [2,3,4]. According to those studies, the conventional QFD does not provide a sufficient formal methodology to the decision makers. The HOQ contains much of the information necessary for the decision making process, yet it does not equip the development team with a formal way to optimally allocate the product development resources to maximize customer satisfaction. Furthermore, in the conventional QFD, decisions are made based on many subjective assessments; one of those subjective judgments is the evaluation of the relationship strength between customer needs and engineering characteristics. Dealing with those issues, this paper presents a mathematical model to maximize customer satisfaction by optimally allocating product development resources. The relationship between customer needs and engineering characteristics is established by using a regression technique.

Moreover, in the conventional QFD, the fulfillment of a customer need is considered to be linearly related to customer satisfaction: customer satisfaction increases proportionally to the customer need's importance weight as the need is met. Kano's model, on the other hand, classifies customer needs into several categories according to their impact on customer satisfaction [5,6]. One of those categories, namely the satisfier category, contains customer needs which have a linear impact on customer satisfaction when met. There also exist customer needs which are classified into the attractive and basic categories. The attractive category contains customer needs which have a nonlinear impact on customer satisfaction when met, while customer needs which have no or insignificant impact on customer satisfaction, even when fully met, are included in the basic category. In this paper, Kano's model is integrated into the proposed mathematical model to better represent the relationship between customer need fulfillment and customer satisfaction improvement [7,9]. An application of the optimization model for setting the target specifications of a wooden single bed frame is also presented.
II. THE PROPOSED OPTIMIZATION MODEL

The proposed mathematical model is presented below.

Max

$S_P = \big(\sum_j s_j^1\big) / S_P^{max}$   (1)

The objective function, as written in equation (1), is developed to maximize the total customer satisfaction score, whose value lies between 0 and 1.

Subject to

$S_P^{max} = \sum_j s_j^{max}$   (2)

The total maximum potential contribution of all customer needs ($S_P^{max}$) is the sum of the maximum contributions of all customer needs to customer satisfaction. The maximum contribution of customer need $j$ is notated as $s_j^{max}$ and represents the maximum satisfaction score that can be reached by customer need $j$. For the customer needs that are considered attractive needs, $s_j^{max}$ usually gets a relatively high score.

$S_P > S_S$   (3)

In the case where the development team is interested in creating a better product than the competitor's, it is necessary to add equation (3). In this way, the product developed will deliver a higher total customer satisfaction score than the competitor's product.

$x_i^{n\,cc} = \dfrac{x_i^n - (U_i + L_i)/2}{(U_i - L_i)/2} \quad \forall i, \; n = 0,1$   (4)

The engineering characteristic values should be coded using equation (4) to eliminate the effect of the different scaling of the different engineering characteristics.

$p_j^n = \beta_0 + \sum_i \beta_{ij}\, x_i^{n\,cc} \quad \forall j, \; n = 0,1$   (5)

In the conventional QFD, the relationship strengths between customer need $j$ and the engineering characteristics are denoted using subjective ratings, such as 1, 3, 9. To reduce the subjectivity of the relationship evaluations, the regression technique is applied. The regression function obtained is as in equation (5); $\beta_{ij}$ represents the relationship strength between engineering characteristic $i$ and customer need $j$.

$s_j^1 = s_j^0 \left( p_j^1 / p_j^0 \right)^{k_j} \quad \forall j; \qquad 1 \le p_j^n \le 5, \; 0 < s_j^n \le 100, \; n = 0,1$   (6)

Equation (6) is based on [8] and then adjusted in a manner similar to [9]. For practical reasons, the development team may use 2 for Kano's attractive parameter, 1 for Kano's satisfier parameter, and 0.5 for Kano's basic parameter.

$\gamma_j \le p_j^1 \le 5 \quad (\gamma_j = r_j^s) \quad \forall j$   (7)

$\gamma_j$ is defined to ensure that the product developed has better performance than the competitor's product in meeting customer need $j$.

$L_i \le x_i^n \le U_i \quad \forall i, \; n = 0,1$   (8)

For engineering characteristic $i$, its values lie between the upper bound $U_i$ and the lower bound $L_i$.

$Z_i = x_i^1 - x_i^0 \quad \forall i$   (9)

The improvement made for engineering characteristic $i$ is expressed by equation (9), while the resources that may limit the specification improvements are represented by equations (10), (11) and (12).

$\sum_i \left( c_{Di} Z_i + c_{pi} Z_i \right) \le B$   (10)

$\sum_i t_i Z_i \le T$ (if the improvement activities are carried out in series), or   (11)

$\max_i t_i Z_i \le T$ (if the improvement activities are carried out in parallel)   (12)

where:
$S_P$ = total customer satisfaction
$S_P^{max}$ = total maximum potential contribution of customer needs
$S_S$ = competitor's total customer satisfaction score
$s_j^0$ = initial satisfaction score contributed by customer need $j$, per 100 units
$s_j^1$ = satisfaction score gained by meeting customer need $j$, per 100 units
$s_j^{max}$ = maximum contribution of customer need $j$ to customer satisfaction
$k_j$ = Kano's parameter of customer need $j$
$x_i^0$ = initial natural value of engineering characteristic $i$
$x_i^{0\,cc}$ = initial value of engineering characteristic $i$, centered and coded
$x_i^1$ = natural value of the target of engineering characteristic $i$
$x_i^{1\,cc}$ = centered and coded value of the target of engineering characteristic $i$
$L_i$ = lower bound of engineering characteristic $i$ values
$U_i$ = upper bound of engineering characteristic $i$ values
$\beta_{ij}$ = regression parameter
$\beta_0$ = regression constant
$p_j^0$ = initial product performance in meeting customer need $j$
$p_j^1$ = current product performance in meeting customer need $j$
$\gamma_j$ = the lowest performance allowed in meeting customer need $j$
$r_j^s$ = competitor performance in meeting customer need $j$
$Z_i$ = the improvement of engineering characteristic $i$
$c_{pi}$ = production cost needed to make a unit improvement of engineering characteristic $i$
$c_{Di}$ = R&D cost needed to make a unit improvement of engineering characteristic $i$
$B$ = the available budget for product development
$t_i$ = time needed to make a unit improvement of engineering characteristic $i$
$T$ = the available time for product development
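To make the chain of equations (4) to (6) concrete, the following Python sketch (not part of the paper) codes a characteristic value, maps it to a predicted performance, and applies the Kano transformation; all numeric inputs are hypothetical.

def code_value(x, lower, upper):
    """Center and code a natural value, per equation (4)."""
    return (x - (upper + lower) / 2) / ((upper - lower) / 2)

def performance(beta0, betas, x_coded):
    """Predicted performance p_j^n from the regression, per equation (5)."""
    return beta0 + sum(b * xc for b, xc in zip(betas, x_coded))

def kano_satisfaction(s0, p0, p1, k):
    """Satisfaction gained by meeting a need, per equation (6)."""
    return s0 * (p1 / p0) ** k

# Hypothetical illustration: one attractive need (k = 2) driven by a single
# engineering characteristic whose natural value 12 lies in the range [10, 15].
xc = [code_value(12.0, 10.0, 15.0)]
p1 = performance(3.0, [0.5], xc)            # assumed regression coefficients
print(kano_satisfaction(25.0, 2.0, p1, 2))  # satisfaction score per 100 units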
III. AN ILLUSTRATIVE EXAMPLE

An application of the proposed model is presented in this section. The target specifications of a wooden single bed frame were determined using the optimization model.

Four customer needs were identified during observations and lead user interviews: facilitates the user's daily activity ($CN_1$), occupies minimum space ($CN_2$), sturdy ($CN_3$), and large storage space ($CN_4$). $CN_1$ means that the product should support the user's additional activities on the bed, such as reading, typing on a laptop, and writing. $CN_2$ means that the frame needs minimum space when used, $CN_3$ means that the frame is not easily broken, while $CN_4$ means that customers need a bed frame that has an additional function as storage space. By using Kano's questionnaire, those customer needs were classified into several categories: $CN_1$ was classified into the attractive category, $CN_2$ was a satisfier, $CN_3$ was a basic need, and $CN_4$ was an attractive need.

Next, six related engineering characteristics were identified, i.e. head thickness ($EC_1$), distance between the top of the head and the top of the mattress ($EC_2$), distance between the top of the mattress and the floor ($EC_3$), leg cross-sectional area ($EC_4$), slat board width ($EC_5$), and distance between slat boards ($EC_6$). Figure 1 shows bed and bed frame images with the engineering characteristics.

The HOQ of the wooden bed frame is presented in Figure 2. The roof part of the HOQ was not defined, because the proposed model assumes that all engineering characteristics are independent. In this way, the linear regression function can be used to represent the relationship between customer needs and engineering characteristics. The relative importance weights of the customer needs are the normalized values of the averages of the customer needs' importance weight data. The importance weight data were collected using a survey; see [10] for the details of how to conduct such a survey. Three designs of competitors' products were selected as benchmarks, i.e. product B, product C and product D, while product A is the base product to be developed. The benchmarking results show each product's performance in meeting a certain customer need; the performance was measured by customers as respondents in a survey using 1 to 5 rating scales. The average performance of each product in meeting a certain customer need is presented in the right columns of the HOQ matrix.

Figure 1. Engineering characteristics

Customer  Relative    EC1  EC2  EC3  EC4  EC5  EC6   Product  Product  Product  Product
Needs     Importance                                  A        B        C        D
          Weight
CN1       0.214       9    9    3    -    -    -     2.05     3.98     2.98     3.07
CN2       0.291       9    -    -    -    -    -     4.03     3.00     3.03     2.92
CN3       0.290       -    -    -    9    9    3     3.00     3.95     3.01     4.01
CN4       0.204       9    9    9    -    -    -     2.08     3.04     3.72     3.86

Figure 2. House of Quality
Figure 3. Concept designs (Product A; Product B; Product C and D)

TABLE 1. Product specifications

Engineering      Product A  Product B  Product C  Product D
Characteristics
EC1 (cm)         6          30         25         35
EC2 (cm)         20         27         22         30
EC3 (cm)         46         53         53         55
EC4 (cm2)        24         35         24         35
EC5 (cm)         10         15         15         15
EC6 (cm)         20         20         25         25
Figure 3 shows the 3D concept designs of products A, B, C and D, while Table 1 contains the specification details. The feasible ranges of the engineering characteristics were defined as follows: 6 to 37.4 cm for $EC_1$, 20 to 36.6 cm for $EC_2$, 46 to 55 cm for $EC_3$, 24 to 35 cm2 for $EC_4$, 10 to 15 cm for $EC_5$, and 20 to 25 cm for $EC_6$. These ranges reflect the technically accepted and/or technically feasible specifications.

The relationship between customer needs and engineering characteristics was assessed using the regression technique. The engineering characteristics were the independent variables and the product performances were the dependent ones. The regression results are as follows:

$CN_1 = 1.93 + 9.36\,EC_1 - 3.33\,EC_2 - 6.16\,EC_3$
$CN_2 = 3.34 - 0.627\,EC_1$
$CN_3 = 3.50 + 0.479\,EC_4 - 0.0034\,EC_5 + 0.0274\,EC_6$
$CN_4 = 3.05 - 3.89\,EC_1 + 0.943\,EC_2 + 3.92\,EC_3$

Using α = 5%, the significant predictors were those with P-value < 0.05.

The minimum product performance in meeting a certain customer need was defined to assure that the product would be able to perform its basic functions and also to ensure that the performance was not below market expectation. Considering technical and market requirements, it was determined that the minimum product performance in meeting $CN_1$ was 3.04 and in meeting $CN_2$ was 3.03, while the minimum performance values in meeting $CN_3$ and $CN_4$ were, consecutively, 3.01 and 3.04.

In this case example, the resource constraint was the available budget for product improvement, i.e. IDR 200,000. The incremental improvement costs for the engineering characteristics were IDR 4250 per cm for $EC_1$, IDR 2716 per cm for $EC_2$, IDR 2980 per cm for $EC_3$, IDR 377 per cm2 for $EC_4$, IDR 1130 per cm for $EC_5$, and IDR 283 per cm for $EC_6$. The other resources, such as development time, were considered unbounded.

The initial satisfaction score, denoted by $s_j^0$ per 100 units, contributed by a certain product performance level in meeting customer need $j$, denoted by $p_j^0$ and quantified on a 1 to 5 rating scale, was obtained using focus group discussions. For customer need indexes ($j$) 1 to 4, the corresponding $s_j^0$ for a certain $p_j^0$ were, consecutively, as follows: $s_1^0 = 25$ for $p_1^0 = 2.05$, $s_2^0 = 65$ for $p_2^0 = 4.03$, $s_3^0 = 40$ for $p_3^0 = 3.00$, and $s_4^0 = 25$ for $p_4^0 = 2.08$.

In this case example, the maximum satisfaction score that could be reached by the basic need was 60 per 100 units, and 100 per 100 units for the other customer need categories.
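A regression of this kind can be reproduced with ordinary least squares. The sketch below is an illustration only, not the paper's computation, which was fitted on its own survey data: it regresses the CN1 benchmark scores from the HOQ on the EC1 to EC3 values of Table 1, and with only four products the fit is exact, so the coefficients will not match the paper's.

import numpy as np

# Rows are products A-D; columns are EC1, EC2, EC3 from Table 1.
X = np.array([[6, 20, 46], [30, 27, 53], [25, 22, 53], [35, 30, 55]], dtype=float)
y = np.array([2.05, 3.98, 2.98, 3.07])   # surveyed CN1 performance per product

# Ordinary least squares with an intercept column.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and slopes:", coef)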
According to the input data, the complete mathematical model is described as follows.

Max

$S_P = \left( s_1^1 + s_2^1 + s_3^1 + s_4^1 \right) / S_P^{max}$   (13)

Subject to

$S_P^{max} = 100 + 100 + 100 + 60$   (14)

$x_1^{n\,cc} = \dfrac{x_1^n - (37.4 + 6)/2}{(37.4 - 6)/2}; \quad x_2^{n\,cc} = \dfrac{x_2^n - (36.6 + 20)/2}{(36.6 - 20)/2}; \quad x_3^{n\,cc} = \dfrac{x_3^n - (55 + 46)/2}{(55 - 46)/2};$
$x_4^{n\,cc} = \dfrac{x_4^n - (35 + 24)/2}{(35 - 24)/2}; \quad x_5^{n\,cc} = \dfrac{x_5^n - (15 + 10)/2}{(15 - 10)/2}; \quad x_6^{n\,cc} = \dfrac{x_6^n - (25 + 20)/2}{(25 - 20)/2}$   (15)

$p_1^n = 1.93 + 9.36\,x_1^{n\,cc} - 3.33\,x_2^{n\,cc} - 6.16\,x_3^{n\,cc}; \quad p_2^n = 3.34 - 0.627\,x_1^{n\,cc};$
$p_3^n = 3.5 + 0.479\,x_4^{n\,cc} - 0.0034\,x_5^{n\,cc} + 0.0274\,x_6^{n\,cc}; \quad p_4^n = 3.05 - 3.89\,x_1^{n\,cc} + 0.943\,x_2^{n\,cc} + 3.92\,x_3^{n\,cc}$   (16)

$s_1^1 = 25 \left( p_1^1 / 2.05 \right)^2; \quad s_2^1 = 65 \left( p_2^1 / 4.03 \right)^1; \quad s_3^1 = 40 \left( p_3^1 / 3.00 \right)^{0.5}; \quad s_4^1 = 25 \left( p_4^1 / 2.08 \right)^2$   (17)

$3.04 \le p_1^1 \le 5; \quad 3.03 \le p_2^1 \le 5; \quad 3.01 \le p_3^1 \le 5; \quad 3.04 \le p_4^1 \le 5$   (18)

$6 \le x_1^n \le 37.4; \quad 20 \le x_2^n \le 36.6; \quad 46 \le x_3^n \le 55; \quad 24 \le x_4^n \le 35; \quad 10 \le x_5^n \le 15; \quad 20 \le x_6^n \le 25$   (19)

$4520\,(x_1^n - 6) + 2716\,(x_2^n - 20) + 2980\,(x_3^n - 46) + 2 \times 377\,(x_4^n - 24) + 7 \times 1130\,(x_5^n - 10) + 283\,(x_6^n - 20) \le 200000$   (20)

The results of this model are the scores of the engineering characteristics. The results showed that the model was able to perform under the constraint restrictions. The engineering characteristic $EC_6$ is of the smaller-the-better type, while the others are of the larger-the-better type. We have run the sensitivity analysis, and the results are satisfactory for all engineering characteristics: for a larger-the-better engineering characteristic the output will increase, and it will decrease for the smaller-the-better criterion.
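The model (13) to (20) can be evaluated directly for any candidate design. The Python sketch below (not part of the paper) computes the total satisfaction score S_P and the budget usage of equation (20) for the baseline product A; a nonlinear solver would search over x subject to (18) to (20).

BOUNDS = [(6, 37.4), (20, 36.6), (46, 55), (24, 35), (10, 15), (20, 25)]
COST = [4520, 2716, 2980, 2 * 377, 7 * 1130, 283]   # coefficients of equation (20)
BASE = [2.05, 4.03, 3.00, 2.08]                     # baseline performances p_j^0
S0, K = [25, 65, 40, 25], [2, 1, 0.5, 2]            # initial scores and Kano parameters

def coded(x):
    """Equation (15): center and code the natural values."""
    return [(xi - (u + l) / 2) / ((u - l) / 2) for xi, (l, u) in zip(x, BOUNDS)]

def performances(x):
    """Equation (16): predicted performances p_j for a design x."""
    c = coded(x)
    return [1.93 + 9.36 * c[0] - 3.33 * c[1] - 6.16 * c[2],
            3.34 - 0.627 * c[0],
            3.5 + 0.479 * c[3] - 0.0034 * c[4] + 0.0274 * c[5],
            3.05 - 3.89 * c[0] + 0.943 * c[1] + 3.92 * c[2]]

def total_satisfaction(x):
    """Equations (13), (14), (17): S_P with S_P^max = 360."""
    p = performances(x)
    return sum(s * (pj / pb) ** k for s, pj, pb, k in zip(S0, p, BASE, K)) / 360.0

def budget_used(x):
    """Left-hand side of equation (20)."""
    return sum(c * (xi - l) for c, xi, (l, _) in zip(COST, x, BOUNDS))

x0 = [6, 20, 46, 24, 10, 20]                    # product A specifications
print(total_satisfaction(x0), budget_used(x0))  # roughly 0.43, and 0 budget used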
IV. CONCLUSION

By setting target specifications using the proposed mathematical model, the available development resources were spent effectively to increase total customer satisfaction. By integrating Kano's model into the optimization model, the relationship between the fulfillment of a certain customer need and the perceived customer satisfaction was better represented.

Acknowledgment

This research was supported by the Directorate General of Research, Technology and Higher Education of the Republic of Indonesia.

References
[1] L. Cohen, How to Make QFD Work for You, Addison-Wesley Publishing Company, 1995.
[2] T. Park and K. J. Kim, "Determination of an Optimal Set of Design Requirements Using House of Quality," Journal of Operations Management, vol. 16, pp. 569–581, 1998.
[3] R. G. Askin and D. W. Dawson, "Maximizing Customer Satisfaction by Optimal Specification of Engineering Characteristics," IIE Transactions, vol. 32, pp. 9–20, 2000.
[4] L. V. Vanegas and A. W. Labib, "A Fuzzy Quality Function Deployment (FQFD) Model for Deriving Optimum Targets," International Journal of Production Research, vol. 39(1), pp. 99–120, 2001.
[5] E. Sauerwein et al., "The Kano Model: How to Delight Your Customers," in Preprints of the IX International Working Seminar on Production Economics, vol. 1, pp. 313–327, 1996.
[6] K. Matzler and H. H. Hinterhuber, "How to Make Product Development Projects More Successful by Integrating Kano's Model of Customer Satisfaction into Quality Function Deployment," Technovation, vol. 18(1), pp. 25–38, 1998.
[7] D. E. S. Rahaju and D. R. S. Dewi, "Dealing with Dissatisfaction Measure in QFD Model to Derive Target of Engineering Characteristics," International Journal of Industrial Engineering, vol. 18(12), pp. 634–644, 2011.
[8] K. C. Tan and X. X. Shen, "Integrating Kano's Model in the Planning Matrix of Quality Function Deployment," Total Quality Management, vol. 11(8), pp. 1141–1151, 2000.
[9] D. E. S. Rahaju and D. R. S. Dewi, "An Approach to Dealing with Importance Weights of Customer Needs and Customer Dissatisfaction in Quality Function Deployment Optimization," Advanced Materials Research, vol. 931–932, pp. 1636–1641, 2014.
[10] N. K. Malhotra, Marketing Research: an Applied Orientation, 4th ed., New Jersey: Pearson Education, Inc., 2004.
A Simultaneous Integrated Model With Multiobjective For Continuous Berth Allocation And Quay Crane Scheduling Problem

Nurhidayu Idris
Department of Mathematical Sciences, Faculty of Science
Universiti Teknologi Malaysia
Johor, Malaysia
nurhidayu5@live.utm.my

Zaitul Marlizawati Zainuddin
Department of Mathematical Sciences, Faculty of Science, and Universiti Teknologi Malaysia - Centre of Industrial and Applied Mathematics
Universiti Teknologi Malaysia
Johor, Malaysia
zmarlizawati@utm.my
Abstract—This paper presents a simultaneous integration model of berth allocation and quay crane scheduling. Berths and quay cranes are both critical resources in port container terminals. The mathematical model uses mixed integer linear programming with multiple objectives, generated by considering various practical constraints. Small data instances have been taken to validate the integrated model. A numerical experiment was conducted using the LINGO programming software to evaluate the performance and to obtain the exact solution of the suggested model.

Keywords— berth allocation; quay crane scheduling; multi objectives; integration

I. INTRODUCTION

Because of the high capitalization and fierce competition among container terminals, efficient processing in a container terminal is vital, as mentioned in [1]. Daganzo [2] and Park and Kim [3] were the first to present integrated approaches. The two papers highlighted applying a monolithic model for crane assignment and crane scheduling, as in [2], and berth allocation with crane assignment, as in [3]. Generally, the handling time of a ship is affected by the crane schedule. A model integrating the Berth Allocation Problem (BAP) and the Quay Crane Scheduling Problem (QCSP) was proposed by Park and Kim [3] with a continuous berth model. In formulating the mixed integer programming (MIP) model, both time and space were discretized, and a two-phase procedure was designed to solve the problem. The first phase determines the berth allocation and a rough quay crane allocation, while in the second phase a detailed individual crane schedule is generated based on the solution found in the first phase. The allocation of the two resources, berths and quay cranes, as in [3] is sequential, not simultaneous.

In addition, the simultaneous concept has been studied, as in [4], for the integration of the berth allocation and quay crane scheduling problems. This problem was designed so that the scheduling of cranes for each hold of all vessels occurred simultaneously with the allocation of vessels to the berths. It was treated as a continuous problem since it was more practical and less time consuming. A Tabu search heuristic was applied to solve the integration problem with the objectives of minimizing the handling time and the weighted tardiness of vessels. Wang, Cao, Wang and Li [5] applied the same simultaneous concept to the integration of BAP and QCSP. They built an integrated optimization model of berth allocation and quay crane scheduling with the purpose of minimizing the operating costs of the quay cranes and of the ships. A new genetic and ant colony algorithm was developed, in which a partial allocation plan is solved by a genetic algorithm and then adjusted against the berth through an ant colony algorithm to find an optimal solution. In contrast to Aykagan [4], Wang, Cao, Wang and Li [5] focused on a discrete type model rather than a continuous approach. However, both papers concentrate on the same temporal setting, i.e. the dynamic concept for the arrival of ships. The simultaneous concept for the integration of BAP and QCSP has also been studied in [1], where the BAP and QCSP are solved in a simultaneous way under uncertainties in vessel arrival times and container handling times. The spatial and temporal constraints applied in [1] are similar to [5], whereby a discrete concept of berths with dynamic arrival of vessels is used, and a simulation-based Genetic Algorithm (GA) search procedure is constructed in order to generate a robust berth and QC schedule proactively.

II. PROBLEM DESCRIPTION

This research focuses on the simultaneous Integrated Continuous Berth Allocation Problem and Quay Crane Scheduling Problem (IBAPCQCSP) while considering multiple objectives. Berths and quay cranes are the two most important resources, and berth planning with quay crane scheduling are vital operations in a container terminal. Thus, the flow of the operations is represented in the mathematical model throughout this paper. The berth allocation problem is the process of assigning incoming ships to berthing positions. Meanwhile, quay cranes (QCs) are utilized at the terminal for charging and discharging containers from and onto vessels. The efficient utilization of these resources results in short vessel turnaround times and early departures. This research focuses on the continuous berth approach.
The continuous berth concept is described as follows: ships are allowed to be served wherever empty spaces are available along the quay to physically accommodate them. A vessel in a waiting and servicing line at a berth can be represented by a rectangle in a time-space representation diagram, or Gantt chart. The concept of the continuous berth approach can be seen in Fig. 1.

Fig. 1. Ship arrangement for Continuous Berth Allocation

Recently, more researchers have studied integrated solution approaches for berth allocation and quay crane scheduling. The following assumptions are imposed while formulating the multi-objective model of the simultaneous IBAPCQCSP:

1) Each vessel is divided along its length into holds of 3 or 4 container rows, where at most one quay crane can work on a hold in a time period.
2) Multiple vessels can moor at the berth and get service immediately.
3) Vessels can arrive at the port during the planning horizon, and each vessel cannot be served before its arrival time.
4) A vessel is considered processed once cranes have completed the work on its set of holds.
5) Once work has started, cranes operating on vessels need to continue on a hold until completion.
6) Only one crane can work on a hold at a given time period.
7) Quay cranes are on the same tracks and cannot cross each other.
8) A vessel can leave the port only after the process of loading and unloading containers is completed on every hold.
9) Vessel processing time depends on the number of quay cranes assigned to the vessel.
10) A crane can be shifted from hold to hold, both within ships and between ships, as long as cranes do not cross one another.

III. MATHEMATICAL FORMULATION

In this section, a multiple-objective mathematical model is formulated with simultaneous and integrated berth planning and quay crane scheduling. The study focuses on the dynamic arrival of vessels, with a set $V$ of vessels with known arrival times, whereby $n = |V|$. For each vessel $i \in V$, the parameters and formulation applied in this study are defined as below:

$L$ : set of equal-size berth sections
$Q$ : set of identical quay cranes operating on a single set of rails
$T$ : time periods of vessels
$V$ : set of vessels with known arrival times
$M$ : a large positive scalar
$h_i$ : number of holds of vessel $i$
$pr_i^k$ : processing time of hold $k$ of vessel $i$
$pr_i^{max}$ : maximum hold processing time of vessel $i$, $pr_i^{max} = \max_k \{pr_i^k\}$
$f_i$ : lateness penalty of vessel $i$
$d_i$ : due time of vessel $i$
$a_i$ : arrival time of vessel $i$
$bp_i$ : berthing position of vessel $i$
$bt_i$ : berthing time of vessel $i$
$e_i$ : earliest time that vessel $i$ can depart
$x_{ij} = 1$ if vessel $i$ berths after vessel $j$ departs, 0 otherwise
$y_{ij} = 1$ if vessel $i$ berths completely above vessel $j$ on the time-space diagram, 0 otherwise
$r_{iqk}^t = 1$ if at least one quay crane is assigned to hold $k$ of vessel $i$ at time $t$, 0 otherwise
$w_i^t = 1$ if at least one quay crane is assigned to vessel $i$ at time $t$, 0 otherwise

The simultaneous IBAPCQCSP model with multiple objectives and dynamic arrival of vessels is formulated as follows:

Min $\sum_{i=1}^{n}(e_i - a_i) + \sum_{i=1}^{n} f_i (e_i - d_i) + \sum_{i=1}^{n}\sum_{t \in T}\sum_{q \in Q}\sum_{k=1}^{h_i} r_{iqk}^t$   (1)

subject to

$x_{ij} + x_{ji} + y_{ij} + y_{ji} \ge 1 \quad \forall i, j \in V, \; i < j$   (2)

$x_{ij} + x_{ji} \le 1 \quad \forall i, j \in V, \; i < j$   (3)

$y_{ij} + y_{ji} \le 1 \quad \forall i, j \in V, \; i < j$   (4)

$bt_j \ge e_i + (x_{ij} - 1) M \quad \forall i, j \in V, \; i < j$   (5)

$bp_j \ge bp_i + h_i + (y_{ij} - 1) M \quad \forall i, j \in V, \; i < j$   (6)

$bt_i \ge a_i \quad \forall i \in V$   (7)
$bt_i \le t\, r_{iqk}^t + (1 - r_{iqk}^t)\, T \quad \forall i \in V, \; \forall k \in \{1, \ldots, h_i\}, \; \forall t \in T$   (8)

$e_i \ge t\, r_{iqk}^t + pr_i^k \quad \forall i \in V, \; \forall k \in \{1, \ldots, h_i\}, \; \forall t \in T, \; \forall q \in Q$   (9)

$e_i \ge bt_i + pr_i^{max} \quad \forall i \in V$   (10)

$w_i^{min} \le \sum_{i \in V} r_{iqk}^t \le w_i^{max} \quad \forall q \in Q, \; \forall k \in \{1, \ldots, h_i\}, \; \forall t \in T$   (11)

$\sum_{i \in V}\sum_{q \in Q}\sum_{k=1}^{h_i} r_{iqk}^t \le Q \quad \forall t \in T$   (12)

$\sum_{q \in Q} r_{iqk}^t = w_i^t \quad \forall i \in V, \; \forall k \in \{1, \ldots, h_i\}, \; \forall t \in T$   (13)

$\sum_{t \in T} w_i^t = e_i - bt_i \quad \forall i \in V$   (14)

$bp_i \le L - h_i + 1 \quad \forall i \in V$   (15)

$bp_i \ge 1 \quad \forall i \in V$   (16)

$x_{ij} \in \{0,1\}, \; y_{ij} \in \{0,1\} \quad \forall i, j \in V, \; i \ne j$   (17)

$r_{iqk}^t \in \{0,1\}, \; w_i^t \in \{0,1\} \quad \forall i \in V, \; \forall k \in \{1, \ldots, h_i\}, \; \forall t \in T, \; \forall q \in Q$   (18)

According to the mathematical formulation above, the first objective to be minimized is the duration of the vessels at port, and the second objective is to minimize the penalty charged for the tardiness of vessels. The last objective is to reduce the operating time of the quay cranes. Constraints (2) through (4) guarantee that no vessel rectangles overlap. Constraints (5) and (6) ensure that the selected berthing times and berthing positions are consistent with the definitions of $x_{ij}$ and $x_{ji}$, where $M$ is a large positive scalar. Constraint (7) forces a vessel to berth after its arrival time, and constraint (8) forces the processing of holds on vessels by quay cranes to begin immediately after berthing. Constraint (9) ensures that vessels can depart only after all holds have been processed by quay cranes. Constraint (10) is a valid inequality which provides a lower bound on $e_i$ given the berthing time $bt_i$. Constraint (11) ensures that the number of quay cranes serving a vessel at the same time meets the requirements, i.e. it cannot exceed four quay cranes nor be fewer than one quay crane at the same time. Constraint (12) ensures that the number of quay cranes utilized at any time period does not exceed $Q$, and constraint (13) ensures consistency between $w_i^t$ and $r_{iqk}^t$, so that the numbers of quay cranes operating on vessel $i$ at time $t$ are equal. Constraint (14) sets the starting and finishing times of the quay cranes for serving vessels. Constraints (15) and (16) make sure that all vessels fit on the berth.
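This formulation can be prototyped with an off-the-shelf MILP library. The following PuLP sketch is an illustration under simplifying assumptions, not the authors' LINGO implementation: it keeps only the berth-rectangle part of the model, constraints (2) to (7), (10), (15) and (16), drops the crane variables r and w, and approximates each vessel's handling time by pr_i^max. The data are those of Table I below.

import pulp

a = {1: 2, 2: 1, 3: 3, 4: 1, 5: 4}       # arrival times a_i
d = {1: 8, 2: 4, 3: 9, 4: 5, 5: 11}      # due times d_i
f = {1: 3, 2: 4, 3: 3, 4: 3, 5: 4}       # lateness penalties f_i
h = {1: 2, 2: 3, 3: 3, 4: 4, 5: 4}       # number of holds h_i (vessel length)
prmax = {1: 4, 2: 2, 3: 4, 4: 3, 5: 4}   # max hold processing time pr_i^max
L, M = 7, 1000
V = list(a)

prob = pulp.LpProblem("berth_relaxation", pulp.LpMinimize)
bt = pulp.LpVariable.dicts("bt", V, lowBound=0)
bp = pulp.LpVariable.dicts("bp", V, lowBound=1, cat="Integer")
e = pulp.LpVariable.dicts("e", V, lowBound=0)
pairs = [(i, j) for i in V for j in V if i != j]
x = pulp.LpVariable.dicts("x", pairs, cat="Binary")
y = pulp.LpVariable.dicts("y", pairs, cat="Binary")

# First two terms of objective (1); the crane term is dropped in this relaxation.
prob += pulp.lpSum(e[i] - a[i] for i in V) + pulp.lpSum(f[i] * (e[i] - d[i]) for i in V)

for i in V:
    prob += bt[i] >= a[i]              # (7) berth only after arrival
    prob += e[i] >= bt[i] + prmax[i]   # (10) lower bound on departure
    prob += bp[i] <= L - h[i] + 1      # (15) vessel fits on the berth
for i, j in pairs:
    if i < j:
        prob += x[(i, j)] + x[(j, i)] + y[(i, j)] + y[(j, i)] >= 1   # (2)
        prob += x[(i, j)] + x[(j, i)] <= 1                           # (3)
        prob += y[(i, j)] + y[(j, i)] <= 1                           # (4)
    prob += bt[j] >= e[i] + (x[(i, j)] - 1) * M                      # (5) time separation
    prob += bp[j] >= bp[i] + h[i] + (y[(i, j)] - 1) * M              # (6) space separation

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in V:
    print(i, pulp.value(bp[i]), pulp.value(bt[i]), pulp.value(e[i]))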
Simple Numerical Example

The model formulation presented above was validated using the commercial programming software LINGO 15.0 on a small problem, which can be seen in Table I. For this small problem, a berth of length L = 7 and Q = 4 quay cranes are used.

TABLE I. A SMALL INSTANCE FOR SIMULTANEOUS IBAPCQCSP

i        1   2   3   4   5
a_i      2   1   3   1   4
d_i      8   4   9   5   11
f_i      3   4   3   3   4
h_i      2   3   3   4   4
pr_i^1   3   2   2   3   3
pr_i^2   4   2   4   1   2
pr_i^3   -   2   1   0   2
pr_i^4   -   -   -   1   4

Each vessel in this problem is divided into equal-size hold sections, whereby each hold consists of 2, 3 or 4 container rows on the vessel. The length of each vessel at the berth can be represented by the number of holds of the vessel. In addition, the assumption has been made that the total length of the berth is L holds. For this problem, the vessels can moor at the berth and obtain service simultaneously, while in a given time period only one crane can serve a hold. Each hold has been assigned a known processing time to be served by the quay cranes. The servicing of a vessel is considered completed once the quay cranes have finished processing every hold of the vessel.

To obtain initial solutions for this model, the First Come First Serve (FCFS) rule and the Large Vessel First (LVF) rule are considered. Under the FCFS rule, the vessel that arrives earlier tends to be chosen first, while under the LVF rule priority is given to serving the larger vessels first. Every vessel is given only one opportunity to work in the operations. After arriving and being chosen, a vessel is served on its holds by the allocated quay cranes. A quay crane is allowed to shift from one hold of a vessel to another only after it has finished processing its initially assigned hold.

After the works on the holds of the vessels are completed, two feasible solutions are produced for the simultaneous IBAPCQCSP with multiple objectives under the two different rules. The feasible solutions are presented in Table II for the FCFS rule and Table III for the LVF rule, where $r_{iqk}$ denotes the time at which at least one crane is assigned to hold $k$ of vessel $i$, i.e. the start time of the work. The time-space diagrams for the two feasible solutions can be seen in Fig. 2 for the FCFS rule and Fig. 3 for the LVF rule. In the diagrams, the horizontal axis measures time while the vertical axis represents the berth sections.
A vessel is drawn as a rectangle whose length is the handling time of the vessel at port and whose height is the length of the vessel along the quay. The positions of the cranes in the holds are depicted using solid and empty squares: solid squares mean the quay cranes are operating on the holds for the loading and discharging process, while empty squares show idle quay cranes.

TABLE II. A FEASIBLE SOLUTION FOR SIMULTANEOUS IBAPCQCSP (FCFS RULE)

i        1   2   3    4   5
bp_i     1   1   5    4   1
bt_i     3   1   6    1   7
e_i      7   3   11   6   12
r_iq1    3   1   6    1   7
r_iq2    3   1   6    4   7
r_iq3    -   1   10   -   9
r_iq4    -   -   -    5   8

Fig. 2. A feasible solution of simultaneous IBAPCQCSP on the time-space diagram (FCFS rule).

TABLE III. A FEASIBLE SOLUTION FOR SIMULTANEOUS IBAPCQCSP (LVF RULE)

i        1   2   3    4   5
bp_i     1   5   5    1   1
bt_i     8   2   4    1   4
e_i      12  4   10   4   8
r_iq1    8   2   8    1   4
r_iq2    8   2   4    1   4
r_iq3    -   2   8    -   6
r_iq4    -   -   -    1   4

Fig. 3. A feasible solution of simultaneous IBAPCQCSP on the time-space diagram (LVF rule).

Based on the two feasible solutions above, the total objective value obtained by applying the FCFS rule was 70 with 8 idle cranes, while the LVF rule gave a total objective value of 63, also with 8 idle cranes. This shows that the performance of the integrated BAP and QCSP operation under the LVF rule is better than under the FCFS rule. Therefore, the LVF rule is more applicable and effective for the model to be used in further research, since its solutions are nearer to the optimal solutions.
r iq - - - 1 4 Department of Mathematical Sciences, Faculty of Science,
UTM for the support in conducting this research work.
References
[1] X. L. Han, Z. Q. Lu, and L. F. Xi, "A proactive approach for
simultaneous berth and quay crane scheduling problem with stochastic
arrival and handling time." European Journal of Operational Research
vol. 207.3, pp. 1327-1340, December 2010.
[2] C. F. Daganzo, "The crane scheduling problem." Transportation
Research Part B: Methodological vol. 23.3, pp. 159-175, June 1989.
[3] Y.M.Park, and K.H.Kim, "A scheduling method for berth and quay
cranes." Container Terminals and Automated Transport Systems.
Springer Berlin Heidelberg, pp. 159-181, 2005.
Fig. 3. A feasible solution of simultaneous IBAPCQCSP on time space [4] A. Aykagan, Berth and quay crane scheduling: problems, models and
solution methods. ProQuest, 2008.
diagram (LVF rule).

173
Back to Contents

[5] R. D. Wang, J. X. Cao, Y. Wang, and X. X. Li, "An integration


optimization for berth allocation and quay crane scheduling method
based on the genetic and ant colony algorithm." Applied Mechanics and
Materials. vol. 505-506, pp. 940-944, January 2014.

A Multi Objective Mixed Integer Programming Approach for Sustainable Production System

Ma. Teodora E. Gutierrez
Industrial Engineering Department
Technological Institute of the Philippines
dhorieg@gmail.com

Abstract—This study aims to develop a model for a sustainable production system with an optimization approach. Hence, Multi Objective Mixed Integer Programming was identified as an appropriate method. A proposed model was formulated considering two objective functions, which are to minimize the economic cost and to minimize the carbon footprint of the production system. A case study company was used to provide a numerical example of the model and also to test the feasibility and effectiveness of the proposed model. The results showed a Pareto optimal solution for the case study company.

Keywords—Multi Objective Mixed Integer Programming; Sustainable Production System

I. INTRODUCTION

There is a strong concern for implementing sustainable development in all business and human activities. Sustainable development is defined as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs" [1]. There are several papers that investigate the impact of sustainable development on companies; for instance, a multiobjective optimization model for a supply chain was formulated and found to be feasible [2]. Also, a multi objective model that simultaneously optimizes a product's life cycle cost, environmental impacts and performance values was evaluated and tested on a real assembly line [3].

II. MATHEMATICAL MODEL FORMULATION

A. Indices and Notations
i represents the production facility
k represents the product
m represents the carbon footprint

B. Input Parameters
Aik = fixed production cost for product k at production facility i
Bik = variable cost for product k at production facility i
Fki = carbon footprint of production facility i per unit of product k
Gik = production capacity of production facility i to produce product k
Ok = total demand of product k

C. Decision Variables
Xki = quantity of product k to produce in production facility i
Pki = 1 if product k is produced at production facility i; = 0 otherwise

D. Objective Function
The objective is to minimize the economic cost and the carbon footprint of the firm.

D.1 Economic Objective
The economic objective is to minimize the cost within the production. The production costs are assumed to consist of fixed production costs and variable costs. That is,

Minimize Total Cost (Z1) = Fixed Cost + Variable Cost

D.2 Environmental Objective
The environmental objective is to minimize the carbon footprint within the production. The carbon footprint considered is the production carbon footprint. That is,

Minimize Carbon Footprint (Z2) = Production Carbon Footprint

E. Multi-objective Optimization
Since there are two (2) different objectives with different units of magnitude, a multi-objective optimization technique was used, namely the weighted-sum approach, whose general form minimizes the weighted sum of the objectives. Applying the technique to the model, weights should be set for each objective function to solve the optimization simultaneously.
Since the objectives have different magnitudes, the objective function should be normalized. Hence, the sum of the percentage (%) deviations is used:

Min z = W1 [(actual Z1 − target Z1) / target Z1] + W2 [(actual Z2 − target Z2) / target Z2]   (5)

Equation (5) becomes a single objective function. The weights are set using the Analytic Hierarchy Process, under the assumption that the economic performance (Z1) is slightly more important than the environmental performance (Z2). With this assumption, the weights of the objectives were derived by calculating the eigenvector of the evaluation matrix. The results give weights of 0.67 for the economic objective (Z1) and 0.33 for the environmental objective (Z2).

F. Constraints

1. Xki ≤ Gik · Pki, for every product k and production facility i
2. Σi Xki ≥ Ok, for every product k
3. Xki is a non-negative integer
4. Pki is binary (either 0 or 1)

The first constraint (1) states that the number of units of product k to be produced must not exceed the capacity of each production facility i. The second constraint (2) states that the number of units of product k to be produced must be greater than or equal to the total product demand. Constraint (3) declares the integer variables, while constraint (4) declares the binary variables.
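A minimal PuLP sketch of this formulation is given below (an illustration only; the paper used LINDO). The data are taken from Tables 1 to 5 below, and the normalization targets for equation (5) are assumed to be the Z1 and Z2 values reported in Section IV.

import pulp

A = {1: (500, 550, 525), 2: (525, 500, 575), 3: (550, 525, 500)}            # fixed cost
B = {1: (15.0, 15.7, 15.3), 2: (19.0, 18.7, 19.3), 3: (15.3, 16.0, 15.3)}   # variable cost
F = {1: (3.14, 3.1, 3.2), 2: (3.4, 3.45, 3.4), 3: (2.9, 3.0, 2.7)}          # kg CO2/unit
G = {1: (700, 650, 650), 2: (400, 500, 450), 3: (500, 450, 425)}            # capacity
O = {1: 1100, 2: 900, 3: 950}                                               # demand
products, plants = list(A), [1, 2, 3]
w1, w2 = 0.67, 0.33
t1, t2 = 52120.0, 9378.0    # assumed targets, taken from the results in Section IV

m = pulp.LpProblem("sustainable_production", pulp.LpMinimize)
idx = [(k, i) for k in products for i in plants]
X = pulp.LpVariable.dicts("X", idx, lowBound=0, cat="Integer")
P = pulp.LpVariable.dicts("P", idx, cat="Binary")

Z1 = pulp.lpSum(A[k][i - 1] * P[(k, i)] + B[k][i - 1] * X[(k, i)] for k, i in idx)
Z2 = pulp.lpSum(F[k][i - 1] * X[(k, i)] for k, i in idx)

# Equation (5), dropping the constant -(w1 + w2), which does not affect the optimum.
m += (w1 / t1) * Z1 + (w2 / t2) * Z2

for k in products:
    m += pulp.lpSum(X[(k, i)] for i in plants) >= O[k]       # constraint (2): demand
    for i in plants:
        m += X[(k, i)] <= G[k][i - 1] * P[(k, i)]            # constraint (1): capacity

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(Z1), pulp.value(Z2))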
model was coded in LINDO software to compute for the
decision variables and the objective functions.
III. NUMERICAL EXAMPLE: CASE STUDY
COMPANY Table 6 shows the result of the mathematical
formulation. The optimal solution is solution 2, which
The company is vertically integrated and they have three identifies the quantity of product i to be produced at
major products that can produce in any of their three production facility/plant j. In this case, 450 units of
production facilities. The company wants to minimize the product 1 should be produced at plant 1 (i.e. X11 = 450).
production costs and the carbon footprint. The production Product 1 should not be produced in plant 2 (X12 = 0).
costs and carbon footprint parameters were gathered from Plant 3 should produce 650 units of product 1(X13 = 650).
the company. To test its effectiveness, the proposed And so on and so forth.
mathematical model is used to this steel works company. With the given units of product i to be produced
The input parameters for the proposed model were shown at plant j, the overall total production cost of the company
on Tables 1 to 5. considering the resulted production plan is 52, 120 Php.
This is for the monthly basis. At the same time, the carbon
footprint is 9,378 kgs.
Table 1. Fixed cost of the product if produced in The single function value of this multiobjective
production facility (Php)
problem was computed below where equation 5 was used.
Production Production Production This objective function was set to minimize the percentage
Facility 1 Facility 2 Facility 3 deviation from the target. Hence, it was validated that
Product 1 500 550 525 solution 2 is the optimal solution with 0 deviations. Or the
Product 2 525 500 575 achievement of having a sustainable production has been
Product 3 550 525 500 met.
Objective Function of Solution 1
Min.z = (Sum of the weighted % deviation) = .67
Table 2. Variable Costs per Unit of product if [(55,588.50-52,120) / 52,120)] + .33[(10038.50-
produced in production facility (Php) 9378.00)/9378.00] = 0.07
Production Production Production Objective Function of Solution 2
Facility 1 Facility 2 Facility 3 Min.z = (Sum of the weighted % deviation) = .67 [(52,120
Product 1 15.0 15.7 15.3 - 52,120) / 52,120)] + .33[(9378.0-9378.00)/9378.00] =
0.00
Product 2 19.0 18.7 19.3
Product 3 15.3 16.0 15.3
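As a quick arithmetic cross-check of equation (5) for the two candidate solutions (a sketch, not part of the paper):

def weighted_deviation(z1, z2, t1=52120.0, t2=9378.0, w1=0.67, w2=0.33):
    """Sum of weighted percentage deviations from the targets, per equation (5)."""
    return w1 * (z1 - t1) / t1 + w2 * (z2 - t2) / t2

print(round(weighted_deviation(55588.50, 10038.50), 2))   # solution 1 -> 0.07
print(round(weighted_deviation(52120.00, 9378.00), 2))    # solution 2 -> 0.00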
Table 6. Optimization results

Decision variables      Solution 1   Solution 2
X11                     700          450
X12                     0            0
X13                     650          650
X21                     400          400
X22                     500          500
X23                     0            0
X31                     500          500
X32                     26           450
X33                     425          0
Z1 (Cost)               55,588.50    52,120.00
Z2 (Carbon Footprint)   10,038.50    9,378.00
V. CONCLUSION

The proposed model for a sustainable production system was formulated and tested in a case study company. The results show a Pareto optimal solution, which validates the feasibility of the model and its effectiveness in achieving the desired goal of the firm. This also reinforces the need to adopt initiatives for sustainable development.
REFERENCES
[1] J. Drexhage and D. Murphy, "Sustainable Development: From Brundtland to Rio 2012," 2010. http://www.un.org/wcm/webdav/site/climatechange/shared/gsp/docs/GSP1-6_Background/Sustainable_Development.pdf
[2] Zhixiang Chen and Svenja Andresen, "A Multiobjective Optimization Model of Production-Sourcing for Sustainable Supply Chain with Consideration of Social, Environmental, and Economic Factors," Mathematical Problems in Engineering, vol. 2014, 11 pages, 2014.
[3] M. Cocco, D. Cerri, M. Taisch, and S. Terzi, "Development and implementation of a Product Life Cycle Optimization model," International Conference on Engineering, Technology and Innovation, 2014.
The Determining Factors in Prescribing Anti-Hypertensive Drugs to First-Ever Ischemic Stroke Patients

Rasvini Rajendran
Department of Mathematical Sciences, Faculty of Science
Universiti Teknologi Malaysia
Johor Bahru, Johor, Malaysia
rasvinirajen@gmail.com

Zaitul Marlizawati Zainuddin
Department of Mathematical Sciences, Faculty of Science
The UTM Centre for Industrial & Applied Mathematics (UTM-CIAM)
Universiti Teknologi Malaysia
Johor Bahru, Johor, Malaysia
zmarlizawati@utm.my
Abstract— The prescription of drugs plays a vital role not only in patient care but also in the health care industry, because of the many consequences it entails. With the advanced use of mathematical models in healthcare today, it has become possible for healthcare personnel to practice evidence-based decision making. This paper attempts to aid the prescription of anti-hypertensive drugs for first-ever ischemic stroke patients by conducting sensitivity and post-optimality analyses of a previously proposed multi-objective mixed-integer nonlinear programming model. Determining factors that contribute to the optimal drug prescription are identified, enabling decision makers to make informed decisions on how a patient's drug therapy needs to be adjusted when the need arises.

Keywords— anti-hypertensive drug, ischemic stroke, drug prescription, sensitivity analysis, post-optimality analysis

I. A NEW PERSPECTIVE IN DRUG PRESCRIPTION

The prescription of drugs by doctors is not just an intuitive call or a decision based on medical expertise alone. It is a reflection of the experience gained from the various cases a doctor has heard of or witnessed in his years of practice. When dealing with critical cases like strokes, drug prescription is extremely important. Apart from the possibility of causing serious, if not fatal, complications, wastage is a vital issue to be addressed here. Different patients require varying strengths of drugs, depending on many aspects that must be taken into consideration: some patients require stronger medicines, while for others milder ones will do. Prescribing otherwise will, in both cases, lead to wastage of valuable resources that could have been easily avoided.

To optimize the cost and effectiveness associated with a drug therapy, competing alternatives should be assessed. Cost-Effectiveness Analysis (CEA) compares medical intervention strategies, which in our case are the different drug combinations, through the calculation of the incremental cost-effectiveness ratio, a measure of the cost of changes in health outcomes. These analyses can be performed on clinical trial data where information on both costs and effectiveness is available, or, more commonly, through the use of decision analysis models (DAM) to synthesize data from many sources [1].

Decision analysis can be described as the process of making an optimal choice among competing alternatives under conditions of uncertainty. In the case of CEA, the competing alternatives are the different medical interventions we are studying [2]. The most important aspect of the decision modelling process is that it must represent the choice that is being made. When constructing a model of a clinical or pharmacological decision, a series of characteristics of the actual problem must be represented in the model structure and method [1].

There are a number of types of DAMs which have been used in CEAs, depending on the nature of the problem. The most basic is a simple decision analysis tree, which is usually employed to examine events that will occur in the near future. Such trees are therefore best suited to evaluate interventions to prevent or treat illnesses of short duration, such as acute infectious diseases. Besides that, they may also be used to evaluate chronic diseases that can be cured, for instance by surgical intervention. However, when these trees are used to evaluate diseases that change over time, they sometimes become too unruly to be useful [2].

When the disease in question is chronic or complex, Markov models are a good approach, as these models allow researchers to incorporate changes in health states over time into the analysis. These models, which are called state transition models, allow researchers to track changes in the quality of life and the cost of a disease over time when alternative health interventions are applied [2]. Apart from these two common approaches, a modeller can also use simulation models or deterministic models, which are sometimes termed mechanistic models. There are many methodologies and modelling types that can be used to create and evaluate decision models. The modeller, however, should use the method most appropriate to the particular problem being addressed [1].
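As a minimal illustration of the state transition idea (not from the paper), the following Python sketch propagates a cohort through a three-state Markov model; the states and the yearly transition probabilities are hypothetical.

import numpy as np

# Hypothetical transitions per cycle: well -> {well, post-stroke, dead},
# post-stroke -> {post-stroke, dead}, dead is absorbing.
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
cohort = np.array([1.0, 0.0, 0.0])   # everyone starts in the "well" state

for cycle in range(10):              # ten one-year cycles
    cohort = cohort @ P
print(cohort)                        # share of the cohort in each state after 10 years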
As mentioned earlier, the choice depends on the complexity of the problem, in addition to the need to model outcomes over extended periods of time and whether resource constraints and interactions of various elements in the model are required. To make an appropriate decision regarding what consequences and outcomes to include, the modeller must make decisions regarding four characteristics [1], namely: (1) the perspective of the analysis, (2) the setting or context of the analysis, (3) the appropriate level of detail or granularity, and (4) the appropriate time horizon.

A broad range of mathematical modelling types are available to the modeller to represent diseases, treatments and costs. There is undoubtedly a trade-off between the complexity of the process being modelled and the type of model that should be used to represent the problem. In general, the simplest modelling technique that accurately represents the components of the problem according to a clinical expert is sufficient. As a matter of fact, most problems can be addressed with either simple branch-and-node decision trees or Markov process-based state transition models. However, it is to be noted that there is also a small portion of problems that do require more complex approaches, where simulation models and deterministic models have to be utilized to effectively comprehend the problem.

Another aspect to be taken into consideration is that current DAMs tend not to address the actual decisions that need to be made. Instead of contrasting various realistic potential scenarios of use of a new intervention, they engage in the very artificial comparison of all patients using the new intervention versus all using some alternative, with the assumption that all are starting at the same time. The models are typically conceptualized in terms of health states, even though these do not correspond to what clinicians see in practice. The biggest problem, however, is that they are set up in terms of the population as a whole: "cohort" models. This in turn creates awkward problems, as populations are not homogeneous, and applying probabilities based on the average profile yields incorrect results [1].

In attempting to address this shortcoming, the use of linear programming (LP) in formulating mathematical models to optimally prescribe anti-hypertensive drugs to first-ever ischemic stroke patients was explored. Accordingly, a multi-objective mixed-integer nonlinear programming (MOMINLP) model was formulated [3]. Although not exhaustive in detail, the model is capable of determining the anti-hypertensive drug(s) (ACE inhibitors, beta-blockers, calcium channel blockers) that should be prescribed to a patient, subject to the desired in-patient days as well as the allowable range of mean arterial pressure (MAP) readings based on the patient's initial MAP (MAP0). This paper will thus further discuss the factors that affect the outcome of the model formulated in [3].

II. MODEL SOLUTION

Since the model is a multi-objective model, it is practical to opt for the goal programming (GP) approach, as it is known to allow for deviations. Deviations here refer to the possibility of not fulfilling a given objective fully, which is the more practical approach in healthcare, as human-drug interaction is never exact. Acknowledging this, we attempt to minimize the deviations instead. The model can then be solved using either weighted, preemptive or prioritized GP. As the model is used to conduct a CEA of the drug(s) prescribed, the goals are thus to minimize cost (G1) and to maximize effectiveness, which is measured in terms of the maximum (G2) and minimum (G3) allowable MAP. A total of three case studies were conducted, as follows:

TABLE I. COMPARISON OF MODEL SOLUTION METHODS

Case Study  Solution Method  Description
I           Weighted         G1 is assigned a higher weight than G2 and G3, as the goal pertaining to cost should be more important in a CEA.
II          Preemptive       G1 is set at priority level 1 and thus solved first, before addressing G2 and G3. A goal with lower priority (the goals pertaining to effectiveness, G2 and G3) should not be attained at the cost of a higher priority goal (the goal pertaining to cost, G1).
III         Prioritized      An extension of Case II: the goals at priority level 2, G2 and G3, are further assigned weights to show the importance of the different goals at the same priority level.

The model was tested for a patient admitted with MAP0 = 150 mm Hg to validate the results. As a whole, it was concluded that in all three scenarios the models gave the same set of solutions. In addition, all the goals were fully achieved. The only difference was in terms of their simplicity: while Case I was run easily using LINGO 15.0, the other two cases were a little more tedious, as two levels of problems had to be solved depending on the priority of the goals. Despite the extra work, the solution generated was nevertheless exactly the same, which suggests that the consideration of preemptive and prioritized GP is unnecessary.
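The weighted-GP mechanics described above can be illustrated with a small PuLP sketch. This is an assumption-laden toy, not the MOMINLP of [3]: the single decision variable, the goal targets, the achievement functions and the weights are all hypothetical; it only shows how deviation variables are attached to goals G1 to G3 and their weighted sum minimized.

import pulp

m = pulp.LpProblem("weighted_goal_programming", pulp.LpMinimize)
dose = pulp.LpVariable("dose", lowBound=0, upBound=10)   # hypothetical decision variable

# One (under, over) deviation pair per goal; targets and weights are made up.
goals = {"G1_cost": (40.0, 0.6), "G2_max_MAP": (20.0, 0.2), "G3_min_MAP": (15.0, 0.2)}
dev = {g: (pulp.LpVariable(f"under_{g}", lowBound=0),
           pulp.LpVariable(f"over_{g}", lowBound=0)) for g in goals}

# Hypothetical linear achievement functions of the dose for each goal.
achieved = {"G1_cost": 8.0 * dose, "G2_max_MAP": 3.0 * dose, "G3_min_MAP": 2.5 * dose}

for g, (target, _) in goals.items():
    under, over = dev[g]
    m += achieved[g] + under - over == target    # soft goal constraint

# Minimize the weighted sum of the deviations across all goals.
m += pulp.lpSum(w * (dev[g][0] + dev[g][1]) for g, (_, w) in goals.items())

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(dose.value(), {g: (dev[g][0].value(), dev[g][1].value()) for g in goals})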
In the context of this model, SA and POA can be done by analyzing the effect on the solution when changes occur in (1) the weightage assigned to the goals, (2) the patient's MAP0, (3) the minimum LOS of the patient, (4) the minimum allowable MAP and (5) the maximum allowable MAP.

These analyses are imperative for a few reasons. An obvious one is that the values of a model's parameters are bound to change over time. The efficacy of the drugs in question, for instance, may improve with the advancement of pharmaceutical research, and this would create the need to solve the model yet again with the new efficacy coefficients in order to determine the most cost-effective course of treatment. With SA, however, the decision maker can easily determine from the current solution how the change in drug efficacy will affect the optimal solution.

Avoiding the need to re-solve the model every time there is a change saves not only time but also resources, as in some cases redoing the data collection could consume both manpower and money. The values of parameters in a model may change over time to cater to the current situation, and by conducting SA and POA the decision maker will be pre-informed of how much change is allowable for the current optimal solution to remain the same.
A. Changes in the Weightage Assigned to the Goals

Despite justifying the assignment of more weightage to the goal pertaining to cost, it remains a question whether the weightage really played such a big role in the solution. Thus, the significance of the weightage assigned to the goals is tested by running various combinations of weightage, with the other parameters in the model fixed. The summary of the results obtained is tabulated in Table II.

The results clearly suggest that regardless of the weightage assigned, the same choice of treatment is obtained in all scenarios for the various MAP0. Thus, it can be concluded that weightage does not play any significant role when the model is run with the current set of constraints. However, it is possible that with additional constraints, the weightage may play a big role in the solution obtained.

TABLE II. TREATMENT CHOICE FOR VARYING WEIGHTAGE OF GOALS

                          MAP0 (mm Hg)
Scenario                  120    130    140    150    160
Varying G1 weightage      A      A      AC     ABC    ABC
Varying G2 weightage      A      A      AC     ABC    ABC
Varying G3 weightage      A      A      AC     ABC    ABC

Key: A = ACE Inhibitors; B = Beta-Blockers; C = Calcium Channel Blockers.
B. Changes in the Initial MAP of Patients

To validate the formulated model, it was tested for a patient with MAP0 of 150 mm Hg in all three case studies. While this is good for ensuring an accurate comparison between the solution methods, it alone is insufficient to generalize the choice of treatment. Hence, the model is run again for the three case studies with different MAP0 values to determine the range of MAP0 that will keep the obtained optimum basic solution unchanged. Beginning with 110 mm Hg, a few MAP0 values at a constant interval of 10 mm Hg are implemented. Again, in all scenarios the solution obtained is the same, as summarized in Table III.

TABLE III. SOLUTION COMPARISON FOR VARIOUS MAP0

               MAP0 (mm Hg)
Case Study     110    120    130    140    150    160    170
I              A      A      A      AC     ABC    ABC    ABC
II             A      A      A      AC     ABC    ABC    ABC
III            A      A      A      AC     ABC    ABC    ABC

Next, the range of MAP0 for which the course of treatment remains the same is analyzed. The results in Table III aid in narrowing the values that need to be tested, as it is clearly depicted there that the drug therapy changes from mono drug to combination drug therapy when MAP0 is between 130 and 150 mm Hg. By running the model again for each MAP0 within this range, the results reveal that a patient requires only mono drug therapy if MAP0 is 131 mm Hg or below; only when MAP0 increases beyond this does the need for combination drug therapy emerge.

Thus, for patients who are admitted with MAP0 greater than 131 mm Hg, a mono drug alone would be insufficient for a cost-effective treatment. This raises another question: which combinations of drugs should be chosen? With reference to the Malaysian Clinical Practice Guidelines for Management of Hypertension (4th Edition) [4] and Management of Ischaemic Stroke (2nd Edition) [5], it is known that ACE inhibitors should be prioritized among the three anti-hypertensives, while calcium channel blockers should be preferred over beta-blockers.

Summarizing the results, a guideline based on the feasibility range is obtained and tabulated in Table IV. The results are further verified in that the drug choices comply with [4,5]. However, it is to be remembered that the suggested drug therapy guideline is subject to change depending on the constraints imposed: consideration of more constraints may result in a more cost-effective model and therefore a more cost-effective guideline.

TABLE IV. SUGGESTED DRUG THERAPY GUIDELINE

MAP0 Range (mm Hg)    Drug Combination
≤ 131                 ACE Inhibitors
132 - 148             ACE Inhibitors + Calcium Channel Blockers
149                   ACE Inhibitors + Beta Blockers
≥ 150                 ACE Inhibitors + Beta Blockers + Calcium Channel Blockers
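The MAP0 sweep described above is easy to script. The sketch below assumes a hypothetical wrapper solve_model() around the GP model of [3]; for illustration only, its body hard-codes the thresholds reported in Table IV instead of actually re-solving the model.

# Sketch of the SA sweep over MAP0: re-solve the model for
# MAP0 = 110, 120, ..., 170 mm Hg and record the prescribed therapy.
# solve_model() is a hypothetical stand-in for the goal program of [3];
# the thresholds below merely reproduce the pattern of Table IV.
def solve_model(map0_mmhg: float) -> str:
    if map0_mmhg <= 131:
        return "A"    # ACE inhibitors only
    elif map0_mmhg <= 148:
        return "AC"   # ACE inhibitors + calcium channel blockers
    elif map0_mmhg == 149:
        return "AB"   # ACE inhibitors + beta-blockers
    return "ABC"      # triple therapy

for map0 in range(110, 171, 10):
    print(map0, solve_model(map0))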
C. Changes in the LOS Benchmark

The benchmark of the patients' LOS in the formulated model was set to four days, as recommended by [4]. Here, as cost is directly proportional to a patient's LOS, the selected value may be questioned, since a lower value could be chosen to cut cost. While a longer duration may impose additional cost, a shorter duration means reducing a large amount of MAP per day, which can only be attained by giving the patient stronger drug(s) or a higher dose. In both cases, this is definitely a dangerous practice, as it may subject patients to other medical complications.

When the model is run for a LOS shorter by one day, mono drug therapy is no longer an optimal treatment choice even when MAP0 is merely 110 mm Hg. However, for a LOS longer than 4 days, the optimal solution remains the same. The results obtained for varying LOS are tabulated in Table V.

TABLE V. SOLUTION COMPARISON FOR VARIOUS LOS

        MAP0 (mm Hg)
LOS     110    120    130    140    150    160    170
2       AC     AC     ABC    ABC    ABC    ABC    ABC
3       AC     AC     AC     ABC    ABC    ABC    ABC
4       A      A      A      AC     ABC    ABC    ABC
5       A      A      A      AC     ABC    ABC    ABC
6       A      A      A      AC     ABC    ABC    ABC

It can thus be concluded that the feasibility range of this constraint is [4,∞): when the LOS is less than 4, even patients who are just slightly over the allowable range are to be administered both ACE inhibitors and a calcium channel blocker, which is definitely a strong combination for a slightly elevated MAP0. However, when the LOS is four days or more, the treatment choice remains the same. This indicates that a more reasonable choice of treatment can be administered to a patient if a shorter LOS alone is not the aim of the attending physician.

D. Changes in the Minimum Allowable MAP

In the original model, the minimum allowable MAP was set to 70 mm Hg, as the normal MAP is 70-110 mm Hg. Since an MAP of at least 60 mm Hg is required to keep blood flowing through the vasculature in humans [6], the significance of lowering the minimum allowable MAP is also checked by running the model for varying minimum MAP values between 60 and 70 mm Hg. The results evidently show that the solution remains unchanged, and thus the feasibility range of this constraint is [60,70].

This is a clear indication that the minimum allowable MAP level does not play a role in determining the drug therapy choice for an ischemic stroke patient. A possible reason could be the rarity of a patient's MAP dropping to this level, regardless of the strength of the anti-hypertensive administered, because of the elevated blood pressure during the stroke attack.

E. Changes in the Maximum Allowable MAP

Similarly, the maximum allowable MAP level is analyzed to observe the change in the original optimal solution under various maximum allowable MAP levels. In this case, the change in the original optimal solution mirrors the change in the maximum allowable MAP. That is to say, an increase of 10 mm Hg in the maximum allowable MAP to 120 mm Hg gives a new feasibility range of the solutions, whereby the change from mono drug to combination drug therapy no longer occurs at 132 mm Hg but at 142 mm Hg. Table VI depicts the pattern of the change in drug therapy.

TABLE VI. SOLUTION COMPARISON FOR VARIOUS MAXIMUM ALLOWABLE MAP

Maximum                MAP0 (mm Hg)
Allowable MAP          110    120    130    140    150    160    170
100                    A      A      AC     ABC    ABC    ABC    ABC
110                    A      A      A      AC     ABC    ABC    ABC
120                    A      A      A      A      AC     ABC    ABC
130                    A      A      A      A      A      AC     ABC

It is clear that when the minimum allowable MAP level is adjusted, the optimal solution generated remains the same, while changes in the maximum allowable MAP level cause corresponding changes to the original guideline. This is unsurprising, as a change in the maximum allowable MAP level changes the required MAP reduction per day, which is closely connected to the choice of drug therapy to be administered.

IV. CONCLUSION

Based on the conducted SA and POA, it is concluded that a patient's initial MAP, the benchmark LOS and the maximum allowable MAP level affect the solution derived from the model, whereas the weightage of the goals and the patient's minimum allowable MAP level do not play a role. Therefore, any changes pertaining to these two constraints will not affect a physician's decision in determining the drug therapy to be administered.

ACKNOWLEDGMENT

The authors would like to thank the Malaysian Ministry of Higher Education for their financial funding through the Fundamental Research Grant Scheme (FRGS) vote number R.J130000.7809.4F470. This support is gratefully acknowledged. The authors would also like to thank the UTM Centre for Industrial and Applied Mathematics and the Department of Mathematical Sciences, Faculty of Science, UTM for their support in conducting this research work.

REFERENCES

[1] R.J.G. Arnold (Ed.), Pharmacoeconomics: From Theory to Practice. Florida: CRC Press, Taylor & Francis Group, LLC, 2010.
[2] P. Muennig, Cost-Effectiveness Analysis in Health: A Practical Approach. San Francisco: John Wiley & Sons, 2008.
[3] R. Rajendran, Z.M. Zainuddin and B. Idris, "Cost-effectiveness analysis of the antiplatelet treatment administered on ischemic stroke patients using goal programming approach", in Statistics and Operational Research International Conference (SORIC 2013), Sarawak, Malaysia, 2014, pp. 71-80.
[4] Malaysian Clinical Practice Guidelines Development Group, "Clinical Practice Guidelines on Management of Ischaemic Stroke (2nd Edition)", 2012.
[5] Malaysian Hypertension Guideline Working Group, "Clinical Practice Guidelines on Management of Hypertension (4th Edition)", 2013.
[6] P. Team, "Mean Arterial Pressure Calculator - PhysiologyWeb", Physiologyweb.com, 2016. [Online]. Available: http://www.physiologyweb.com/calculators/mean_arterial_pressure_calculator.html. [Accessed: 17-Jan-2].
Location Routing Inventory Problem with Transshipment Points using p-center

S. Sarifah Radiah Shariff
Centre for Statistical and Decision Science Studies,
Faculty of Computer and Mathematical Sciences,
Universiti Teknologi MARA, 40450 Shah Alam, Selangor, MALAYSIA
shari990@salam.uitm.edu.my

Mohd Omar and Noor Hasnah Moin
Institute of Mathematical Sciences, Faculty of Science,
University of Malaya, Kuala Lumpur, MALAYSIA
mohd@um.edu.my, noor_hasnah@um.edu.my

Abstract—The Location Routing Inventory Problem with Transshipment (LRIP-T) is a combination of the three components of the supply chain, namely the location-allocation, vehicle routing and inventory management problems, that allows a transshipment process such that the total system cost and the total operational time are minimized. This study determines a set of customer points to act as transshipment points as and when necessary, based on the surplus quantities they hold, the quantities to ship to the customers in need, and the sequence in which customers are replenished by a homogeneous fleet of vehicles. The LRIP-T is modified by making one or more of the customer points in the distribution system transshipment points, with the selection of the points done using the p-center. A sensitivity analysis for two cases, 1) unlimited supply and 2) limited supply, is presented. The results show that important savings are achieved compared with the existing model in solving the supply chain problem.

Keywords- Location Inventory Routing Problem with Transshipment, p-center

I. INTRODUCTION

The location routing inventory problem (LRIP) is an integration of three key logistics decision problems, namely location-allocation, vehicle routing and inventory management. LRIP arises when decisions on the three problems must be taken simultaneously. According to [1], LRIP in distribution systems is to allocate depots from several potential locations, to schedule the routes for vehicles to meet the customers' demand, and to determine the inventory policy based on the information on customers' demands, in order to minimize the system's total costs. The LRIP is a branch of logistics study that has not been widely explored by researchers, due to its interrelated decision areas. However, excellent integration presents significant cost savings, as simultaneous decisions are imposed in solving the problems.

Not having enough stock to fulfill the customers' demand, or a stock-out, is one of the issues in supply chain distribution. This unexpected situation occurs when there is an excess demand that leads to falling inventories. When this happens, customers tend to purchase goods at another store or not to purchase at all. In addition, when a substitution is made, the retailer and supplier lose their potential sale as customers start to switch to a substitute, and they may experience customer dissatisfaction; the worst outcome is losing a customer. The opposite of a stock-out is a surplus, which is described as retaining excess inventory. A surplus can cause profit loss to a company: it takes up space and increases holding costs. We have seen the impact of both issues in the supply chain; thus, we consider incorporating the transshipment process in order to prevent them from happening.

Transshipment has an important role in the supply chain, as it allows goods to be shipped from the depot to a customer, or from a customer with excess stock to another customer. It is profitable to the supplier, who can save transportation cost in delivering goods to the respective customers, and at the same time the customer can save cost in terms of storage space and dispose of the excess stock. Several studies have implemented transshipment and proved that it is effective compared with the LRIP without transshipment. Researchers tend to integrate transshipment with the inventory routing problem (IRP), such as [4] and [5]. The IRP is the combination of two problems in operations research, vehicle routing and inventory, and the IRPT is the extension of the IRP with the insertion of transshipment. In both studies the customers were used as the transshipment points, and we implement a similar technique. Transshipment is only allowed when the transshipment point is located nearer to the retailer than the supplier is. This is profitable to the supplier, who saves transportation cost because of the short distance of travel to the customers.

Other than the IRPT, some researchers have also combined the vehicle routing problem (VRP) with transshipment. This extension of the VRP is called the vehicle routing problem with transshipment (VRPT). There are only a few studies that focus on the VRPT, as it is an NP-hard problem that is difficult to solve by normal optimization methods. The researchers in [6] developed a mixed integer programming model and an algorithm for the VRPT, but the study only obtained a satisfactory solution due to the NP-hardness. However, the study [6] was improved by [7] using control variables that are able to determine the routes. Their models are based on dynamic programming and a combination of a two-phase method and a branch-and-bound method. These methods are found to be efficient in improving the efficiency
of the program running and in increasing the satisfaction level of the solutions. In this study, we consider the LRIP-T, or Location Routing Inventory Problem with Transshipment, applying the p-center and p-median to select the transshipment center.

II. RESEARCH METHODS

A. Model

The following model is adapted:

Minimize Total Cost

\min \sum_{j=N+1}^{N+M}\Big(E_j+\sum_{t=1}^{P}\big[(B_j+Y_j)H_j(t)+q_j\cdot\frac{P}{l}\cdot\sum_{i=1}^{l}\big(V_j(t)+\alpha_j\gamma_j(t)\big)+B_j\big]\Big)\cdot A_j+\sum_{t=1}^{l}\sum_{k=1}^{K}\sum_{h=1}^{N+M}\sum_{g=1}^{N+M}D_{hg}\cdot W_{hgkt}

Subject to:

\sum_{k=1}^{K}\sum_{g=1}^{N+M}W_{gikt}=1, for all i, t    (1)

\sum_{g=1}^{N+M}W_{ghkt}-\sum_{g=1}^{N+M}W_{hgkt}=0, for all h, k, t    (2)

\sum_{h=1}^{N+M}\sum_{g=1}^{N+M}W_{ghkt}\le 1, for all k, t    (3)

\beta_{kt}(i)\le CK, where i\in\{0,1,2,\dots,N\}    (4)

\tau_i(t)\le U(f_i(t)), for all i    (5)

\sum_{h=1}^{N+M}W_{ihkt}-\sum_{h=1}^{N+M}W_{jhkt}-S_{ij}\le 1, for all i, j, k, t    (6)

W_{ghkt}\in\{0,1\}, for all g, h, k, t    (7)

A_j\in\{0,1\}, for all j    (8)

S_{ij}\in\{0,1\}, for all i, j    (9)

O_{jkt}\in\{0,1\}, for all j, k, t    (10)

O_j\in\{0,1\}, for all j    (11)

where
N : number of customer points
M : number of logistics centers
h : index of customer points or logistics centers (1 ≤ h ≤ N+M)
g : index of customer points or logistics centers (1 ≤ g ≤ N+M)
i : index of customer points (1 ≤ i ≤ N)
j : index of logistics centers (N+1 ≤ j ≤ N+M)
k : index of vehicles or routes (1 ≤ k ≤ K)
t : index of time periods of the planning horizon (p) (1 ≤ t ≤ p)
E_j : cost to establish logistics center j
D_hg : distance between point h and point g
P : length of the planning horizon
CK : capacity of vehicles
c : unit cost of vehicles
B_j : cost of dispatching the product from the factory to logistics center j
α_j : probability of being reused after a circulation
q_j : holding cost of products (per unit) at logistics center j
Y_j : ordering cost per order at logistics center j
L_j : lead time at logistics center j, where L_j ≤ P/p
V_j(t) : inventory level of logistics center j at the start of period t, where V_j(t=0) = 0, i.e., the inventory is zero at the initial stage of the planning horizon

And the decision variables are:
W_hgkt : 1 if the vehicle travels directly from point h to point g on route k in period t; 0 otherwise
S_ij : 1 if customer i is allocated to logistics center j; 0 otherwise
A_j : 1 if logistics center j is opened; 0 otherwise
O_jkt : 1 if route k is served by logistics center j in period t; 0 otherwise
H_j(t) : 1 if there exists an order for new product at logistics center j; 0 otherwise
R_i(t) : actual collection volume from customer i in period t

The objective function minimizes the total system cost, expressed as the summation of the logistics centers' set-up cost, the transportation cost and the inventory cost. Constraint (1) ensures that each customer appears in only one route during period t. Constraint (2) states that every point entered should be the same point the vehicle leaves. Constraint (3) ensures that each route is served by only one logistics center. Constraints (4) and (5) state that, in any period and at any point, the total load is less than the vehicle capacity and the actual volume is at most the expected volume. Constraint (6) states that a customer can be allocated to a logistics center only if a route passes by the customer. Constraints (7)-(11) ensure the integrality of the decision variables.
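To make the role of the routing variables concrete, the sketch below evaluates the transportation term Σ D_hg·W_hgkt for a single route in one period, where W_hgkt = 1 exactly for consecutive point pairs on the route. The coordinates and the Euclidean metric are assumptions for illustration (cf. Tables I and II in the next section).

# Illustrative evaluation of the transportation term of the objective for
# one route, given as an ordered list of visited points. Coordinates and
# the Euclidean metric are illustrative assumptions.
import math

def route_distance(route, coords):
    # W_hgkt = 1 exactly for consecutive pairs (h, g) on the route.
    return sum(math.dist(coords[h], coords[g]) for h, g in zip(route, route[1:]))

coords = {"D1": (43, 49), "C5": (49, 54), "C7": (30, 50)}
print(round(route_distance(["D1", "C5", "C7", "D1"], coords), 2))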
B. Insertion of Transshipment Parameters

In this study, the transshipment cost is incorporated into the total system cost of the existing model adapted from [8]. The p-center model is used to select the best transshipment point. In the p-center problem, the maximum distance between the distribution point and the customer's node is minimized. Hence, the formulation of the objective function is as below:

Minimize

MaxD+\sum_{j=N+1}^{N+M}\Big(E_j+\sum_{t=1}^{P}\big[(B_j+Y_j)H_j(t)+q_j\cdot\frac{P}{l}\cdot\sum_{i=1}^{l}\big(V_j(t)+\alpha_j\gamma_j(t)\big)+B_j\big]\Big)\cdot A_j

where
r_ij : the unit cost of transshipping product from i to j
Qv_ij : the quantity transshipped from customer i in period t
MaxD : the total cost at the maximum distance,

MaxD=\max\sum_{t=1}^{l}\sum_{k=1}^{K}\sum_{h=1}^{N+M}\sum_{g=1}^{N+M}D_{hg}\cdot W_{hgkt}
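As an illustration of the p-center notion used here (with p = 1), the following sketch selects, among the candidate customer points, the one whose maximum Euclidean distance to the remaining customers is smallest. The Euclidean metric and the distance-only criterion are simplifying assumptions; in the study, MaxD is additionally fed into the cost objective above.

# Minimal p-center (p = 1) sketch: pick the customer point whose maximum
# Euclidean distance to all other customers is smallest.
import math

customers = {  # coordinates from Table II below
    "C1": (15, 3), "C2": (18, 24), "C3": (2, 59), "C4": (9, 6),
    "C5": (49, 54), "C6": (33, 10), "C7": (30, 50), "C8": (24, 59),
    "C9": (3, 35), "C10": (33, 21), "C11": (45, 27), "C12": (46, 6),
    "C13": (24, 32), "C14": (28, 33), "C15": (2, 0),
}

def p_center(points):
    # For each candidate, compute its worst-case (maximum) distance to the
    # rest, then return the candidate minimizing that maximum.
    def max_dist(c):
        return max(math.dist(points[c], xy) for name, xy in points.items() if name != c)
    return min(points, key=max_dist)

print(p_center(customers))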
III. DATA ANALYSIS

A. Data preparation

The data are adapted from [8] and used under the following assumptions. First, there are 5 candidate logistics centers and 15 customers, the capacity of the vehicles (CK) is 125 units, the unit cost of the vehicles (c) is 1 per unit distance, and the probability of being reused after a circulation (α_j) is 0.9. In addition, we assume the expected distribution demand is randomly populated and obeys the Poisson distribution with parameters 2 and 1. Tables I and II present the parameter values of the logistics centers and customer points, and Table III the customer demands.

TABLE I. LOGISTICS CENTERS

Logistics centers   Coordinates   α_j   Setup cost
D1                  (43,49)       25    10,000
D2                  (1,12)        20    12,000
D3                  (41,30)       30    14,000
D4                  (5,58)        10    13,000
D5                  (24,19)       15    12,000

TABLE II. CUSTOMER POINTS

Customer points   Coordinates   Customer points   Coordinates
C1                (15,3)        C9                (3,35)
C2                (18,24)       C10               (33,21)
C3                (2,59)        C11               (45,27)
C4                (9,6)         C12               (46,6)
C5                (49,54)       C13               (24,32)
C6                (33,10)       C14               (28,33)
C7                (30,50)       C15               (2,0)
C8                (24,59)

TABLE III. DEMAND OF CUSTOMER POINTS

Customer   Demand   Customer   Demand   Customer   Demand
C1         42       C6         61       C11        32
C2         56       C7         37       C12        69
C3         53       C8         45       C13        45
C4         34       C9         48       C14        62
C5         48       C10        68       C15        47
Total demand = 747 units

IV. RESULTS

In Phase I, a sensitivity analysis is done by dividing the analysis into two cases in order to analyze the stability of the data. The first case is an unlimited total supply restricted to only one open logistics center, and the second case analyzes a limited total supply, also with one open logistics center. For Case 1, the total demand is 747 units; here we consider a balanced transportation problem where the total demand equals the total supply. The total supply for Case 2, however, is set to 900 units. The entire inventory is distributed by an identical fleet of vehicles of similar capacity. Each vehicle can only visit one customer at a time, and a vehicle needs to return to the logistics center when the inventory remaining in the vehicle is unable to cover the next customer's demand. The leftover inventory is defined as waste. There are 10 trials to determine the customers to visit and the route for each logistics center. The decisions are made by considering the total waste and the total distance travelled.

In Phase II, the transshipment point is determined based on the p-center. In the actual scenario, the total system cost is the summation of the establishment cost of the logistics centers, the transportation cost and the inventory cost. Some costs are modified for the LRIP-T in this study:
i. Logistics center establishment cost - no establishment cost is needed, as the transshipment center will be selected among the customers.
ii. Transportation cost - the transportation cost covers the deliveries from the distribution centers and from the transshipment centers.
iii. Inventory cost - the inventory level at the transshipment center is assumed to be zero, i.e., the center is assumed not to keep any stock.

A. Phase I

This phase presents the results of the sensitivity analysis that determines the logistics centre with the lowest average distance travelled. In Case I, as shown in Table IV, the logistics center D1 is determined to be the best center, with the lowest average travelled distance of 96.67 km.
TABLE IV. CASE I: UNLIMITED SUPPLY

Logistics   Vehicles   Total Distance   Average Distance   Average Waste
Center      Used       (km)             (km)               (unit)
D1          7          676.7            96.67              18
D2          8          778.51           97.31              31
D3          8          1135.62          141.95             31
D4          8          1055.43          131.925            31
D5          8          850.51           106.3125           27

For Case II, where the supply is limited, the logistics center D1 also appears to be the best logistics center, with an average travelled distance of only 46.05 km.

TABLE V. CASE II: LIMITED SUPPLY

Logistics   Total Distance   Average Distance   Average Distance/   Average Waste
Center      (km)             (km)               Vehicle (km)        (unit)
D1          690.69           46.05              230.23              51
D2          725.91           48.39              241.97              51
D3          1022.31          68.15              340.77              51
D4          956.7            63.78              318.9               51
D5          749.79           49.99              249.93              51
B. Phase II

Originating from Phase I, D1 is selected as the best logistics center. In this phase, a transshipment point is determined based on the notion of the p-center.

Initial sets of routes are chosen, and MaxD is calculated and minimized. Each customer from C1 up to C15 has an equal chance of being chosen as the transshipment point. The total transportation system cost is calculated, and the best result is based on the set that produces the lowest cost. Comparing the candidates, C11 produces the lowest cost.

Table VI shows the final answer, listing the customers covered by D1 as the logistics center and those covered by C11 as the transshipment point. It also shows the lowest cost of 19,756.8.
TABLE VI. COVERAGE OF LOGISTICS CENTER AND THE TRANSSHIPMENT POINT

D1 Coverage              C11 Coverage
Customer    Cost         Customer    Cost
C2          2287.6       C1          775.6
C3          2276.4       C5          336.0
C4          1867.6       C9          700.0
C6          1554.0       C14         400.4
C7          2632.0       C15         532.0
C8          1920.8
C10         1103.2
C12         2105.6
C13         1265.6
Total cost = 19756.8

Fig. 1 shows the possible distribution network generated by the proposed result in Table VI.

Fig. 1: Possible Distribution Network of Proposed Result

V. DISCUSSIONS

Based on the results from Case 1 and Case 2 (the sensitivity analysis), we identified that the logistics center with the smallest distance, and hence the most efficient one to be assigned for the transshipment process, is D1: 676.70 km for Case 1 (unlimited supply) and 690.69 km for Case 2 (limited supply). With D1 entering the selection process using the p-center, C11 is selected as the transshipment point and yields a better performance on distance travelled (658.56 km), with a smaller total cost of 19,756.80 compared with the LRIP in Case 1 when only one logistics center is opened. This means that the LRIP-T model has performed very well in saving the total costs of all three systems: location, inventory and routing.

Table VII presents the comparison between the LRIP and the LRIP-T.

TABLE VII. COMPARISON OF MODELS

Model           LRIP     LRIP-T
Distance (km)   676.7    658.6
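As a quick arithmetic check of Table VI, the reported total system cost is simply the sum of the per-customer costs covered by D1 and by the transshipment point C11:

# Verify the Table VI total from its per-customer costs.
d1 = [2287.6, 2276.4, 1867.6, 1554.0, 2632.0, 1920.8, 1103.2, 2105.6, 1265.6]
c11 = [775.6, 336.0, 700.0, 400.4, 532.0]
print(round(sum(d1) + sum(c11), 1))  # -> 19756.8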
In this paper, we have proposed a new way of determining the transshipment point among the customers for the location routing inventory problem. We presented a sensitivity analysis of the LRIP model as a preparation for the transshipment process. The routes in this analysis were randomly picked and tested in order to find the maximum distance that yields the minimum total system cost. The total system cost incorporates the new technique of choosing the transshipment point using the p-center; this technique excludes some costs and keeps only the total distance travelled in the formulation. In future research, we will consider using the heuristic method called the genetic algorithm for a better route selection. Another possible area of research for the LRIP-T, together with the development of a new algorithm, is stochastic demand: in real life, fluctuations of customers' demand can affect the supply chain distribution.

ACKNOWLEDGMENT

We would like to acknowledge the Ministry of Higher Education Malaysia for funding this research through the Research Management Institute (RMI) of Universiti Teknologi MARA, Malaysia, Grant No: 600-RMI/FRGS 5/3(9/2013).

REFERENCES

[1] W. Xuefeng, "An integrated multi-depot location-inventory-routing problem for logistics distribution system planning of a chain enterprise", in Logistics Systems and Intelligent Management, 2010 International Conference on, vol. 3, Nanchang, China: IEEE, 2010, pp. 1427-1431.
[2] M.G. Granada and C.W. Silva, "Inventory location routing problem: a column generation approach", in Proceedings of the 2012 International Conference on Industrial Engineering and Operations Management, Istanbul, Turkey, 2012, pp. 482-491.
[3] L. Bertazzi, G. Paletta and M.G. Speranza, Transportation Science, 36(1), 119-132 (2002).
[4] A.J. Kleywegt, V.S. Nori and M.W.P. Savelsbergh, Transportation Science, 36(1), 94-118 (2002).
[5] Q.H. Zhao, "Study on logistics optimization models", PhD thesis, Beihang University, 2003.
[6] F.M. Yang and H.J. Xiao, Systems Engineering, Theory & Practice, 27(3), 28-35 (2007).
[7] B. Zhang, Z. Ma and S. Jiang, "Location routing inventory problem with stochastic demand in logistics distribution systems", in Proceedings of Wireless Communications, Networking and Mobile Computing, IEEE Xplore, 2008, pp. 1-4.
[8] H. Ahmad, P. Hamzah, Z.A.M. Md Yasin and S.S.R. Shariff, "Location Routing Inventory Problem with Transshipment (LRIP-T)", in Proceedings of the 2014 International Conference on Industrial Engineering and Operations Management, Bali, Indonesia, January 7-9, 2014, pp. 1595-1605.
Optimal Worker Assignment under Limited-Cycled Model with Multiple Periods
- Consecutive Delay Times Are Limited -

Xianda Kong, Hisashi Yamamoto, Shiro Masuda
Department of System Design, Tokyo Metropolitan University
6-6, Asahigaoka, Hino, Tokyo, 191-0065, Japan
{kong-xianda, yamamoto, smasuda}@tmu.ac.jp

Abstract—Strictly keeping the delivery date is vital to a manufacturing enterprise. Especially in developing countries such as China, labor-intensive enterprises are still the mainstay industry. As a result, the assembly line is widely used as a manufacturing process for reducing costs and production time. However, the laborer is the main factor determining production speed in such labor-intensive enterprises. Because of the various working capacities of workers, a work process may be delayed or idle, and this influences the subsequent processes. This unforeseeable consecutive delay of a process may affect the whole manufacturing production. We therefore build a model, called the limited-cycled model with multiple periods (LCMwMP), to evaluate this risk. In order to minimize the risk, the assignment of the workers, especially untrained workers, is focused on. In previous researches, rules of optimal worker assignment with two kinds of workers were proposed for the cases in which one, two and three untrained workers exist. However, unlimited consecutive delay times are not allowed in the real world. In this paper, we deal with an assembly line as an LCMwMP model with two kinds of workers, distinguished by the workers' efficiency. We consider an optimization problem with three untrained workers, specially considering limited consecutive delay times of twice or three times, for comparison with the previous research in which the consecutive delay times were not limited. By some numerical experiments, we conclude several rules of optimal assignment.

Keywords—optimal worker assignment; multiple periods; consecutive delay times; line balancing; wave phenomenon.

I. INTRODUCTION

Even though the assembly line has been widely used for several decades, we still benefit from it: it is commonly used in labor-intensive enterprises for reducing processing costs and production time. However, no matter how much the reliability of the machines on the assembly line is increased by developing technology, human error of the worker, appearing as a mistake or failure, is inescapable, and it can lead to a delay of the whole production process. Also, because of the variability of workers' abilities, the time for the same processing differs from person to person. In particular, new recruits are not skillful, so they are considered to have a higher probability of delay. As a result, nothing is more important than arranging these new recruits, who are called untrained workers in this paper.

The first mathematical formalization of assembly line balancing was established by Salveson [1] 60 years ago, and the assembly line balancing problem has been widely researched and developed since [2]. The above literature assumes a homogeneous skill set when solving the worker assignment problem; that is to say, the skills of workers are not distinguished. As the assembly line balancing problem developed, Yamamoto et al. [3] produced the limited-cycle model with multiple periods (LCMwMP). This model is set with a constraint condition (e.g., a target processing time) which is repeated in every one of the multiple periods. If the constraint condition is violated, an expected risk (e.g., a penalty cost) occurs.

According to whether the constraint condition (the target processing time) is reset or not, the LCMwMP is divided into a reset model and a non-reset model. For the reset model, a recursive formula for the total expected risk and an algorithm for optimal assignments based on the branch and bound method were proposed by Yamamoto et al. [4].

Recently, Yamamoto et al. [5] and Kong et al. [6] proposed properties of the optimal worker assignment with two kinds of workers in which one special worker exists. Then, Kong et al. [7][8] proposed rules of the optimal worker assignment with two special workers, for cases with fewer than 3 special workers.

After that, Song et al. [9][10] continued the research and expanded the rules of optimal assignment to the case where the number of workers in the minority group is 3, but only the situation in which the untrained workers are the minority is discussed. It finally demonstrates that when 3 untrained workers exist, an assignment is optimal when and only when the first process on the assembly line is staffed by an untrained worker.

In this paper, based on the researches above, and especially on the rules found by Kong et al. [7] and Song et al. [10], we try to expand the rules of the optimal assignment of workers under the LCMwMP model when the allowed consecutive delay times are limited. This paper is organized as follows. First, the reset model, as a simple model of the LCMwMP, is introduced. Then, some assumptions are proposed according to the previous researches, and some
numerical experiments are conducted assuming the processing time follows the Erlang distribution. Finally, further rules under other conditions are discussed.

II. MODEL EXPLANATION

In this section, we consider a 'Reset model', which is a simple model of the LCMwMP. Then, we define the optimal assignment problem in the reset model.

2.1 Reset Model of LCMwMP

The model is based on the following definitions by Song et al. [11], illustrated in Fig. 1:
(1) In an assembly line system, n is the number of processes. The production (we call it a job in the following) is processed in the rotation of process 1, process 2, ..., process n. One product is processed by all n processes.
(2) Z is the cycle time of all of the processes, which can also be considered the target processing time. Every job should be accomplished in the current process and moved to the next process by time Z.
(3) Because of the various processing abilities of workers, the actual processing time cannot always obey the limit of the target processing time Z, so idle time and delay must also be considered in this model. For 1 ≤ w ≤ n, the processing time of process w is denoted by Tw.

In this model, a regular processing cost Ct (> 0) per unit time always occurs during the target processing time Z, regardless of whether the process is idle or delayed. The reason is that although a job may be accomplished prematurely in the current process, the next process may be occupied by another job, so the job must wait for its start; as a result, an idle cost per unit time, Cs (≥ 0), occurs. On the other hand, if the processing time is longer than Z, it is supposed that the delay can be recovered by overtime work or spare workers in this process, so overtime work or additional resources are requested in order to meet the target time Z, and a delay cost per unit time, CP(k) (≥ 0), occurs (that is why we call the model a 'Reset Model').

As a summary of the above, we get:
(4) The processing cost per unit time, Ct (> 0), for the target processing time limit occurs in each process.
(5) When Tw ≤ Z, the idle cost per unit time, Cs (≥ 0), occurs.
(6) When Tw > Z, the delay cost per unit time, CP(i) (> 0), occurs in the process if delay occurred in the i consecutive processes before it, for i = 1, 2, ..., n. If the delay continues for several processes, it can be considered that recovering the delay will cost more, so it is supposed that CP(i) is increasing in i, which can be expressed as 0 < CP(1) ≤ ... ≤ CP(n).

2.2 Assumptions on the Processing Ability of Workers

The aim of this paper is to search for the optimal assignment of workers when 3 untrained minority workers exist, so a reasonable assumption on the properties of the workers is particularly important. The assumptions are the following.
(1) Only one worker can be assigned to each process, and each process must be assigned one worker.
(2) The processing time of a worker depends only on the worker. The processing ability is decided by the properties of the worker alone and is not influenced by the processing status, such as idle or delay.
(3) In this paper, the workers are distinguished into two types by processing ability, marked as A and B. Although processing abilities vary from worker to worker, it is difficult to calculate the optimal assignment and to explain the rules clearly and briefly without using such marks. Worker A represents an untrained worker whose processing ability is lower than the others'. The number of A workers is less than the number of B workers, which means the untrained workers A are the minority. One can imagine that one day three workers are absent from work; in this case, new recruits, who are untrained, are needed to help. This can be assumed to be a reasonable situation in a real factory.
(4) These two kinds of workers have different probabilities of idle or delay. If the processing time of worker l is marked as Tl, where l ∈ {A, B}:
Pl : the probability of worker l becoming idle, i.e., Pr{Tl ≤ Z},
Ql : the probability of worker l becoming delayed, i.e., Pr{Tl > Z},
TSl : the expected idle time of worker l, i.e., E[(Z − Tl) I(Tl ≤ Z)],
TLl : the expected delay time of worker l, i.e., E[(Tl − Z) I(Tl > Z)],
where I(O) is an indicator function given as follows: I(O) = 1 if O is true, and 0 if O is not true.

Fig. 1. Description of idle and delay cost in the reset model of LCMwMP
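The worker quantities defined in assumption (4) can be estimated numerically. The following Monte Carlo sketch does so for Erlang-distributed processing times (shape m, rate μ), anticipating the distributional assumption of the numerical experiments in Section III; the sample size is an arbitrary choice.

# Monte Carlo sketch of P_l, Q_l, TS_l = E[(Z - T_l) I(T_l <= Z)] and
# TL_l = E[(T_l - Z) I(T_l > Z)] for Erlang(m, mu) processing times.
import random

def worker_stats(mu: float, m: int, z: float, trials: int = 100_000):
    idle = delay = delayed = 0.0
    for _ in range(trials):
        t = sum(random.expovariate(mu) for _ in range(m))  # Erlang(m, mu) draw
        if t <= z:
            idle += z - t
        else:
            delay += t - z
            delayed += 1
    n = float(trials)
    return {"P": 1 - delayed / n, "Q": delayed / n, "TS": idle / n, "TL": delay / n}

print(worker_stats(mu=1.0, m=2, z=2.0))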
2.3 Optimal Assignment Problem under the Reset Model

We consider three untrained workers allocated in this reset model. One of the most important problems is how to allocate the workers to processes so as to minimize the expected cost over the n processes. We call such a problem the optimal assignment problem. To describe it, we define the following notations [6]:

For 1 ≤ i < j < k ≤ n,
π(i, j, k) : the assignment in which the three untrained workers are assigned to processes i, j and k, and trained workers are assigned to the other n − 3 processes.
TC(n; π(i, j, k)) : the total cost of processes 1 to n when the workers are allocated by assignment π(i, j, k), which can be expressed as

TC(n; \pi(i,j,k)) = nC_t Z + f(n; \pi(i,j,k))    (1)

where
f(n; π(i, j, k)) : the sum of the expected idle cost and the expected delay cost incurred up to process n.

By using these notations, the optimal assignment problem with multiple periods becomes the problem of obtaining the assignment π* in the following equation:

TC(n; \pi^*) = \min_{1 \le i < j < k \le n} TC(n; \pi(i,j,k))    (2)

In this paper, we call π* the optimal assignment. However, it is easily seen from (1) that if the target processing time Z is constant, the target production cost nCtZ is also constant, so we can simplify (2) to

f(n; \pi^*) = \min_{1 \le i < j < k \le n} f(n; \pi(i,j,k))    (3)

III. NUMERICAL EXPERIMENTS

The processing times of the two kinds of workers follow the Erlang distribution. In other words, Ql, the probability of worker l becoming delayed, is

Q_l = \sum_{k=0}^{m-1} \frac{(\mu_l Z)^k}{k!} e^{-\mu_l Z}    (4)

where l ∈ {A, B}, and Pl, the probability of worker l becoming idle, is 1 − Ql.

The parameters of the Erlang distribution are described as follows.
(1) μl is the processing rate. A bigger μl means a higher processing speed and a lower possibility of a delay in processing.
(2) m is the shape parameter. As mentioned in the introduction above, m can be considered the quantity of tasks in one process. If the shape parameter is m = 1, the Erlang distribution simplifies to the exponential distribution.
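Equation (4) exploits the fact that the survival function of an Erlang(m, μ) variable equals the Poisson(μZ) mass on 0, ..., m−1, so Ql can be computed directly, as in this minimal sketch (Z = 2 and m = 2 follow the numerical experiments below; the μ values are sample points from the tables):

# Direct evaluation of equation (4) for Erlang(m, mu) processing times.
from math import exp, factorial

def q_delay(mu: float, m: int, z: float) -> float:
    return sum((mu * z) ** k / factorial(k) for k in range(m)) * exp(-mu * z)

for mu in (0.5, 1.0, 1.5):
    q = q_delay(mu, m=2, z=2.0)
    print(f"mu={mu}: Q={q:.4f}, P={1 - q:.4f}")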
Table 1. Boundary of the centralized and transition optimal assignments when n=7, m=2, consecutive delay cost set 1 (× : no entry; the AAABBBB region is labelled CENTRALIZED and the AABBBBA region TRANSITION in the original)

μB \ μA   0.1       0.2       0.3       0.4       0.5       0.6       0.7       0.8       …
…         …         …         …         …         …         …         …         …         …
0.5       AAABBBB   AAABBBB   AAABBBB   AAABBBB   ×         ×         ×         ×         …
0.6       AAABBBB   AAABBBB   AAABBBB   AAABBBB   AAABBBB   ×         ×         ×         …
0.7       AAABBBB   AAABBBB   AAABBBB   AAABBBB   AAABBBB   AAABBBB   ×         ×         …
0.8       AAABBBB   AAABBBB   AAABBBB   AAABBBB   AABBBBA   AABBBBA   AABBBBA   ×         …
0.9       AAABBBB   AAABBBB   AAABBBB   AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   …
1.0       AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   …
…         …         …         …         …         …         …         …         …         …

Table 2. Boundary of the transition and decentralized optimal assignments when n=7, m=2, consecutive delay cost set 1 (× : no entry; the ABBABBA region is labelled DECENTRALIZED and the middle region TRANSITION in the original)

μB \ μA   0.1       …   1.0       1.1       1.2       1.3       1.4       1.5       1.6       …
…         …         …   …         …         …         …         …         …         …         …
1.4       ABABBBA   …   ABABBBA   AABBBBA   AABBBBA   AABBBBA   ×         ×         ×         …
1.5       ABBABBA   …   ABBABBA   ABABBBA   AABBBBA   AABBBBA   AABBBBA   ×         ×         …
1.6       ABBABBA   …   ABBABBA   ABBABBA   ABBABBA   ABABBBA   ABABBBA   AABBBBA   ×         …
1.7       ABBABBA   …   ABBABBA   ABBABBA   ABBABBA   ABBABBA   ABBABBA   AABBBBA   AABBBBA   …
1.8       ABBABBA   …   ABBABBA   ABBABBA   ABBABBA   ABBABBA   ABBABBA   ABBABBA   ABABBBA   …
1.9       ABBABBA   …   ABBABBA   ABBABBA   ABBABBA   ABBABBA   ABBABBA   ABBABBA   ABBABBA   …
…         …         …   …         …         …         …         …         …         …         …
Table 3. Boundary of the transition optimal assignment when n=7, m=2, consecutive delay cost set 2 (× : no entry; the AABBBBA region is labelled TRANSITION in the original)

μB \ μA   0.1       0.2       0.3       0.4       0.5       0.6       0.7       0.8       …
…         …         …         …         …         …         …         …         …         …
0.5       AABBBBA   AABBBBA   AABBBBA   AABBBBA   ×         ×         ×         ×         …
0.6       AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   ×         ×         ×         …
0.7       AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   ×         ×         …
0.8       AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   ×         …
0.9       AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   ABABBBA   ABABBBA   ABABBBA   …
…         …         …         …         …         …         …         …         …         …

Table 4. Boundary of the transition and decentralized optimal assignments when n=7, m=2, consecutive delay cost set 2 (× : no entry; the ABBABBA region is labelled DECENTRALIZED and the middle region TRANSITION in the original)

μB \ μA   0.1       …   1.0       1.1       1.2       1.3       1.4       1.5       1.6       …
…         …         …   …         …         …         …         …         …         …         …
1.4       ABABBBA   …   ABABBBA   AABBBBA   AABBBBA   AABBBBA   ×         ×         ×         …
1.5       ABBABBA   …   ABBABBA   ABABBBA   AABBBBA   AABBBBA   AABBBBA   ×         ×         …
1.6       ABBABBA   …   ABBABBA   ABABBBA   ABABBBA   ABABBBA   ABABBBA   AABBBBA   ×         …
1.7       ABBABBA   …   ABBABBA   ABABBBA   ABABBBA   ABABBBA   ABABBBA   AABBBBA   AABBBBA   …
1.8       ABBABBA   …   ABBABBA   ABBABBA   ABBABBA   ABABBBA   ABABBBA   ABABBBA   ABABBBA   …
1.9       ABBABBA   …   ABBABBA   ABBABBA   ABBABBA   ABABBBA   ABABBBA   ABABBBA   ABABBBA   …
2.0       ABBABBA   …   ABBABBA   ABBABBA   ABBABBA   ABBABBA   ABBABBA   ABBABBA   ABBABBA   …
…         …         …   …         …         …         …         …         …         …         …

Table 5. Boundary of the centralized and transition optimal assignments when n=7, m=2, consecutive delay cost set 3 (× : no entry; the AAABBBB region is labelled CENTRALIZED and the AABBBBA region TRANSITION in the original)

μB \ μA   0.1       0.2       0.3       0.4       0.5       0.6       0.7       0.8       …
…         …         …         …         …         …         …         …         …         …
0.5       AAABBBB   AAABBBB   AAABBBB   AAABBBB   ×         ×         ×         ×         …
0.6       AAABBBB   AAABBBB   AAABBBB   AABBBBA   AABBBBA   ×         ×         ×         …
0.7       AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   ×         ×         …
0.8       AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   ×         …
0.9       AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   AABBBBA   …
…         …         …         …         …         …         …         …         …         …

Table 6. Boundary of the transition and decentralized optimal assignments when n=7, m=2, consecutive delay cost set 3 (× : no entry; the ABBABBA region is labelled DECENTRALIZED and the middle region TRANSITION in the original)

μB \ μA   0.1       …   1.0       1.1       1.2       1.3       1.4       1.5       1.6       …
…         …         …   …         …         …         …         …         …         …         …
1.4       ABABBBA   …   ABABBBA   AABBBBA   AABBBBA   AABBBBA   ×         ×         ×         …
1.5       ABBABBA   …   ABBABBA   ABABBBA   AABBBBA   AABBBBA   AABBBBA   ×         ×         …
1.6       ABBABBA   …   ABBABBA   ABBABBA   ABBABBA   ABABBBA   ABABBBA   AABBBBA   ×         …
1.7       ABBABBA   …   ABBABBA   ABBABBA   ABBABBA   ABABBBA   ABABBBA   AABBBBA   AABBBBA   …
1.8       ABBABBA   …   ABBABBA   ABBABBA   ABBABBA   ABABBBA   ABABBBA   ABABBBA   ABABBBA   …
1.9       ABBABBA   …   ABBABBA   ABBABBA   ABBABBA   ABABBBA   ABABBBA   ABABBBA   ABABBBA   …
2.0       ABBABBA   …   ABBABBA   ABBABBA   ABBABBA   ABBABBA   ABBABBA   ABBABBA   ABBABBA   …
…         …         …   …         …         …         …         …         …         …         …
The other parameters are assumed as follows.

Consecutive delay cost CP(i):
Set 1: CP(1) = 40, CP(2) = 80, CP(3) = 160, CP(4) = 320, CP(5) = 640, CP(6) = 1280, CP(7) = 2560
Set 2: CP(1) = 40, CP(2) = 80, CP(3) = CP(4) = CP(5) = CP(6) = CP(7) = 38219
Set 3: CP(1) = 40, CP(2) = 80, CP(3) = 160, CP(4) = CP(5) = CP(6) = CP(7) = 38219

Number of processes n = 7; shape parameter m = 2; target processing time Z = 2; idle cost CS = 20.
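The experimental grid can be scaffolded as below: enumerate the C(7,3) = 35 candidate assignments of the three untrained workers (A) to the n = 7 processes, over which the total expected risk would then be minimized for each (μA, μB) pair. The function total_expected_risk() is a hypothetical stand-in; the actual recursive evaluation of f(n; π) follows Yamamoto et al. [4] and is not reproduced here.

# Enumerate all 35 candidate assignments of 3 untrained workers (A) to 7
# processes; the risk evaluation itself is a hypothetical stub.
from itertools import combinations

N = 7

def assignments():
    for pos in combinations(range(N), 3):
        yield "".join("A" if i in pos else "B" for i in range(N))

def total_expected_risk(pattern: str, mu_a: float, mu_b: float) -> float:
    raise NotImplementedError  # recursive formula of Yamamoto et al. [4]

patterns = list(assignments())
print(len(patterns), patterns[:3])  # 35 ['AAABBBB', 'AABABBB', 'AABBABB']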
Tables 1-6 show the results of the numerical experiments with the different consecutive delay cost sets, the other parameters being the same. A different consecutive delay cost set means that the allowed number of consecutive delay times is different.

Tables 1 and 2 are from Song et al. [11]. There are three types of assignment: the centralized, transition and decentralized optimal assignments.

Tables 3 and 4 show the results of the optimal assignment with consecutive delay cost set 2, meaning the allowed number of consecutive delay times is two. We get a different range for each optimal assignment type compared with Tables 1 and 2. In particular, in this case the centralized optimal assignment does not exist.

Tables 5 and 6 show the results of the optimal assignment with consecutive delay cost set 3, meaning the allowed number of consecutive delay times is three. We again get a different range for each optimal assignment type compared with Tables 1 and 2, and also with Tables 3 and 4.

From the numerical experiments, we can say that when the allowed number of consecutive delay times is limited, the optimal assignment is different.

IV. CONCLUSION

In this paper, we dealt with an assembly line as an LCMwMP model with two kinds of workers, distinguished by the workers' efficiency. We considered an optimization problem with three untrained workers, specially considering allowed consecutive delay times of twice or three times, for comparison with the previous research in which the consecutive delay times were not limited. By some numerical experiments, we concluded several rules of optimal assignment: when the allowed number of consecutive delay times is limited, the optimal assignment is different. As future research, we will try to demonstrate these rules and propose rules that commonly apply to all cases, regardless of the number of untrained workers and of whether the untrained workers are the minority or not.

REFERENCES

[1] M.E. Salveson, "The assembly line balancing problem", The Journal of Industrial Engineering, 6(3): 18-25, 1955.
[2] N. Boysen, M. Fliedner and A. Scholl, "A classification of assembly line balancing problems", European Journal of Operational Research, 183(2): 674-693, December 2007.
[3] H. Yamamoto, M. Matsui and J. Liu, "A basic study on a limited-cycle problem with multi periods and the optimal assignment problem", Journal of Japan Industrial Management Association, 57(1): 23-31, April 2006.
[4] H. Yamamoto, M. Matsui and X. Bai, "A branch and bound method for the optimal assignment during a limit-cycle problem with multiple periods", Journal of Japan Industrial Management Association, 58(1): 37-43, April 2007.
[5] H. Yamamoto, J. Sun, M. Matsui and X. Kong, "A study of the optimal arrangement in the reset limited-cycle problem with multiple periods: with fewer special workers", Journal of Japan Industrial Management Association, 62(5): 239-246, Dec. 2011.
[6] X. Kong, J. Sun, H. Yamamoto and M. Matsui, "A study of an optimal arrangement of a processing system with two kinds of workers in a limited-cycle problem with multiple periods", in Proceedings of the 11th Asia Pacific Industrial Engineering & Management Systems Conference (APIEMS2010), Melaka, Malaysia, 2010 (on CD-ROM).
[7] X. Kong, J. Sun, H. Yamamoto and M. Matsui, "Two special workers' optimal assignment with two kinds of workers under a limited-cycle problem with multiple periods", in Proceedings of the 21st International Conference of Production Research (ICPR2011), Stuttgart, Germany, 2011 (on CD-ROM).
[8] X. Kong, J. Sun, H. Yamamoto and M. Matsui, "Optimal worker assignment with two special workers in limited-cycle multiple periods", Asian Journal of Management Science and Applications, 1(1): 96-120, Aug. 2013.
[9] P. Song, X. Kong, H. Yamamoto, J. Sun and M. Matsui, "Numerical analysis of three rookies assignment optimization in limited-cycled model with multiple periods under the case of Erlang distribution", in Proceedings of the 15th Asia Pacific Industrial Engineering & Management Systems Conference (APIEMS2014), Jeju, Korea, Dec. 2014 (on CD-ROM).
[10] P. Song, X. Kong, H. Yamamoto, J. Sun and M. Matsui, "A study on rules of three untrained workers' assignment optimization under the limited-cycled model with multiple periods", in Proceedings of the 1st East Asia Workshop on Industrial Engineering, Hiroshima, Japan, Nov. 2014 (on CD-ROM).
[11] P. Song, X. Kong, H. Yamamoto, J. Sun and M. Matsui, "Rules of three untrained workers' assignment optimization in reset limited-cycled model with multiple periods", Industrial Engineering & Management Systems, 14(4): 372-378, Dec. 2015.
Method for an Energy-Cost-Oriented Manufacturing Control to Reduce Energy Costs
Energy Cost Reduction by Using a New Sequencing Method

M. Sc. Stefan Willeke
Dr.-Ing. Georg Ullmann
Prof. Dr.-Ing. habil. Peter Nyhuis
IPH - Institut für Integrierte Produktion Hannover gGmbH
Hannover, Germany
willeke@iph-hannover.de

Abstract— Rising electricity prices for industrial companies result in increasing energy costs and thus lower international competitiveness. Due to increasing electricity price fluctuations, savings in energy costs are possible without capital-intensive investments by implementing specific organizational methods that process energy-intensive orders at times of low prices and less energy-intensive orders at times of high prices.

KEYWORDS — ENERGY COSTS; ENERGY PRICE; MANUFACTURING CONTROL; SEQUENCING.

I. INTRODUCTION

Since the turn of the millennium, in almost every industrial nation electricity prices for manufacturing companies have been rising continuously (Fig. 1) [1]. This is mainly a result of taxes and duties to support the integration of renewable energies and the turning away from low-cost electricity generated by nuclear power. As a result, the share of energy costs relative to the production costs increases, resulting in a lower competitiveness compared with countries with lower or more slowly increasing electricity prices [2].

Fig. 1 Growth of global industrial electricity prices, 2000-2014: electricity price in €cent/kWh for France, Germany, Ireland, Italy, the U.K., Canada, Japan, Korea and the USA [1]

Renewable energy sources (such as solar or wind energy sources) have a fluctuating characteristic and an only partially predictable feed-in behavior. As a result, in the medium term the electricity demand must be adapted to the electricity supply. This demand control (also named demand response) offers, for example by adjustment of prices, an incentive to shift or change the electrical load [3, 4]. The European Union (EU) already asks the European energy suppliers to provide time- or load-variable electricity tariffs [5]. The highest electricity price volatility arises during the trading that occurs on the power exchanges (for example, the European Energy Exchange in Germany or the Korean Power Exchange in South Korea), where, depending on the traded product, the price changes every 15 minutes. Within a day, it can change by up to 8 €cent/kWh, and even negative electricity prices are possible [6].

II. MANUFACTURING CONTROL AND ITS EFFECT ON ENERGY COSTS

Manufacturing control, as a part of production planning and control (PPC), has the task of enforcing the production plan despite often unavoidable disturbances, as well as of ensuring compliance with the logistic objectives, such as work in process (WIP), throughput time, schedule reliability and utilization [7]. The first subtask of manufacturing control is order generation, which generates concrete work orders out of customer and stock orders; in addition, planned appointments in the production program are determined. The order release specifies the time an order is physically pushed into production; thus, this subtask affects the actual input to production. Capacity control decides in the short term on the reduction of working times and on overtime, and thus on the actual output [8]. The fourth and last subtask of manufacturing control is sequencing, which determines the processing sequence of the orders queuing in front of the work stations. The manufacturing control model of LÖDDING, shown in Fig. 2, describes the cause-effect relationships between the subtasks of manufacturing control and the logistic objectives. It is evident that sequencing has a significant impact on schedule reliability, which is the most important logistic objective [8].

The energy consumption of a work station, which ultimately determines the cost of energy, depends on physical constraints, such as the characteristics of the work orders of the manufactured product variant (for example, revolution speed, press force, or heating power due to size, weight or geometry). Thus, manufacturing control indirectly affects the temporal energy consumption by defining when each order is processed at a given work station and thus when the amount of energy

The IGF project 17900 N of the research association BVL is funded via
the German Federation of Industrial Research Associations (AiF) in the
program of Industrial Collective Research (IGF) by the Federal Ministry for
Economic Affairs and Energy (BMWi) based on a decision of the German
Parliament.

Fig. 2 Advanced Manufacturing Control Model

consumption accrues. If a time-variable electricity tariff is taken as a basis, manufacturing control affects the resulting energy costs, which opens up a new field of action [9]. Because of the rapidly changing electricity prices, a fast reaction of the energy consumption is required. For this reason, an approach for energy-cost-oriented sequencing as a part of manufacturing control is presented in this paper.

III. OVERVIEW OF PREVIOUS WORK

RAGER [11] developed an energy-oriented hybrid evolutionary scheduling algorithm to reduce energy consumption via high system utilization, thereby affecting the energy costs in a job shop with parallel identical work stations. The benefits of this algorithm are higher energy efficiency, a reduced number of stations in operation, and reduced CO2 emissions.

A PPS software for increasing the energy efficiency by optimizing the production schedule was presented by PECHMANN et al. [14]. In this approach, scheduling is performed by considering the energy consumption of the production steps and by load balancing to avoid high energy demands during peak periods. The result is a reduced overall energy consumption and an energy-efficient production schedule with routing alternatives for each production step. In addition, a 24-hour power load profile forecast is calculated to provide to energy providers.

FERNANDEZ ET AL. [18] presented a buffer inventory methodology for manufacturing systems to reduce power demand during peak periods. Buffers are built up during off-peak periods, and the methodology, called "Just-for-Peak", is utilized during high-peak periods to turn off upstream machines. The results obtained in a case study based on an automotive assembly line are lower energy costs and reduced energy consumption without sacrificing system throughput.

BÖNING [19] developed a method for energy-cost-oriented resource scheduling. A genetic algorithm schedules orders on the corresponding machines to minimize the total logistic and energy costs. Peak loads can be reduced significantly without a disproportionate increase in logistic costs.

LANGER ET AL. [20] presented a framework for energy-sensitive production control in mixed-model manufacturing. Energy-sensitive order scheduling, energy-sensitive control of the material flow and energy-sensitive control of operating states are integrated in a Manufacturing Execution System (MES) called eniMES. These approaches employ energy-sensitive production control strategies based on Kanban and ConWIP to obtain higher energy efficiency by reducing energy consumption and utilizing volatile electricity prices.

SHROUF ET AL. [21] proposed a mathematical model to minimize the energy consumption costs for a single-machine production planning problem. Scheduling is performed using a genetic algorithm that utilizes variable energy prices and determines launch times for order processing.

An approach for energy-oriented production control utilizing the energy flexibility of a production system was presented by SCHULTZ ET AL. [22]. The authors developed an energy-oriented order release, which releases orders into production as a function of the self-produced energy available; disruptions in the scheduled amount of available energy are considered. This approach is able to increase a company's share of energy self-supply and to adapt to volatile energy prices by providing a better synchronization of the energy demand with a limited energy supply.

GONG ET AL. [23] proposed a method to minimize energy costs and improve the energy efficiency of manufacturing unit processes. In this method, energy-cost-aware job order scheduling on a single machine is performed by a generic algorithm implemented in a mixed-integer linear programming mathematical model. This algorithm searches for energy-cost-effective schedules at volatile energy prices.

Fig. 3 Overview of previous work: classification of [10]-[23] and the present paper by topic (in-house production planning: calculating the lot size, finite scheduling and sequence planning, verifying availability; manufacturing control: generating orders, order release, capacity control, sequencing), command/control variables (energy costs, energy consumption/efficiency, energy self-generation efficiency), manipulated variables (load reducing, load shifting, load buffering, load increasing) and model characteristics (energy price, electrical power, energy production, environmental conditions)

It can be observed that the aim of most of the previous work is to reduce energy consumption and to increase energy efficiency by load shifting (Fig. 3). In recent years, energy prices have also been considered in the PPC to reduce the energy costs. However, some works do not refer to the unit rate for energy consumption (price per kilowatt-hour) but consider the contracted rate for power peaks (price per kilowatt). In particular, sequencing, which makes a short-term, operative decision in front of the work station, directly determines the temporal energy consumption and can unlock potential that has not been considered in any previous work.

IV. ENERGY-COST-ORIENTED MANUFACTURING CONTROL

As part of the research project "Integration of time-variable energy costs in existing methods of manufacturing control" ("EnKoFer"), the energy-cost-oriented sequencing (ECO-S) has
(MES) called eniMES. These approaches employ energy- energy costs in existing methods of manufacturing control"
sensitive production control strategies based on Kanban and ("EnKoFer") the energy cost-oriented sequencing (ECO-S) has


been developed. ECO-S is geared to the least slack sequencing rule and is applied after each completion of an order at a work station i. In this context, the slack [h] is the remaining time of the regarded order n, based on the planning time point TPL0 [h], until the planned end of order processing EDOplan,n [h] that is not required for the operations TOPi,n [h] (processing and setup) or the minimum inter-operation times TIOmin,i,n [h] such as transport:

slack_n = EDOplan,n − TPL0 − Σ_{i=COP}^{NOP} TOPi,n − Σ_{i=COP+1}^{NOP} TIOmin,i,n   (1)

Therefore, COP [-] represents the dimensionless index of the current work station, and NOP [-] the total number of work stations for this order. In this rule, the order with the least slack, and thus with the shortest inter-operation or waiting time, is assigned the highest priority and is processed first.

For the ECO-S, for every order n in the queue of work station i, the slack and the maximum possible slack slackmax [h] must be identified out of the remaining planned inter-operation times TIOplan,n,i [h] of all orders to generate priorities between 0 and 1:

slackmax = max_n Σ_{i=COP+1}^{NOP} TIOplan,n,i   (2)

Furthermore, the average electrical power of order n at work station i, Pn,i [kW], the average (order-specific) energy price for the regarded order, EPn,i [€cent/kWh], and the average energy price EPm [€cent/kWh] in the interval EPI must be known. The average order-specific energy price EPn,i results from the energy prices EP(t) [€cent/kWh] weighted over the processing time TPn,i [h] in which the order uses electrical power (3). Tstart,n,i [h] is the starting time of the order at the regarded work station. Ideally, Tstart,n,i corresponds to the planning time point TPL0 enlarged by the specific setup time TPrn,i [h] (4). The finish time of the order, Tend,n,i [h], is the starting time Tstart,n,i enlarged by the processing time TPn,i (5).

EPn,i = (1 / TPn,i) ∫_{Tstart,n,i}^{Tend,n,i} EP(t) dt   (3)

Tstart,n,i = TPL0 + TPrn,i   (4)

Tend,n,i = Tstart,n,i + TPn,i   (5)

If an order is processed for 1 hour during an energy price of 5 €cent/kWh and 3 hours during an energy price of 6 €cent/kWh, the order-specific energy price is (1 h × 5 €cent/kWh + 3 h × 6 €cent/kWh) / 4 h = 5.75 €cent/kWh.

The average energy price EPm is the median value of the future energy prices that apply during the interval length EPI from the planning time point TPL0. The median instead of the average value was deliberately chosen so as not to follow single electricity price peaks. The EEX day-ahead-trade electricity prices for the next 7 days are available. If reality differs from the forecast, the intra-day trade can balance this difference.

To realize priority values PRn,i [-] between 0 and 1, the numerator is normalized over the maximum average electrical power at the regarded work station, Pi,max [kW], and the maximum energy price difference, EPm − EPmin or EPmax − EPm, depending on the average energy price in the considered interval EPI. The same applies to the slack, which is normalized by the maximum possible slack, slackmax. Thus, the priority value of an order PRn,i is based on (6). The order with the lowest priority value is assigned the highest priority.

PRn,i = (1 − ς) · [1 − Pn,i (EPm − EPn,i) / (Pi,max (EPm − EPmin))] + ς · slack_n / slackmax   for EPn,i < EPm
PRn,i = (1 − ς) · Pn,i (EPn,i − EPm) / (Pi,max (EPmax − EPm)) + ς · slack_n / slackmax   for EPn,i ≥ EPm   (6)

The first part of the equation weighs the average electrical power of an order with a price difference, whereby the relative energy costs are generated (in €cent/h). A distinction is made as to whether the order-specific energy price is inexpensive (EPn,i < EPm) or expensive (EPn,i ≥ EPm) in relation to the median value of the energy prices. Due to the design of the equation, in the case of a low order-specific energy price compared to the median value of the energy prices, an order with a higher average electrical power receives a lower priority value and is processed preferentially. In the case of a high order-specific energy price compared to the median energy price, an order with lower average electrical power receives a lower priority value and is processed preferentially. Thus, orders with small electrical power are prioritized at times of high energy prices and vice versa.

To ensure schedule reliability, a slack priority between 0 and 1 is integrated in the second part of (6). If an order has much slack, then the priority value increases, and the order is less prioritized. A slack smaller than −slackmax would result in negative priority values. As a result, negative slack is limited to −slackmax:

slack_n = −slackmax   for slack_n ≤ −slackmax   (7)

The factor ς (sigma) acts as a weighting factor between energy costs and slack and represents a logistical positioning. If sigma is selected as 0, then the priority is determined by considering only energy costs. If sigma is 1, then the result is the least slack sequencing rule. A sigma between 0 and 1 induces a prioritization over both energy costs and slack.

Basically, the ECO-S is only suited for the use of volatile energy prices and differences in the electrical power of orders. In addition, the processing time for each order and work station should be short; otherwise, electricity price fluctuations cannot be exploited effectively. Similarly, the average electrical power, the slack of the orders and the electricity price forecasts (at least for the interval length EPI) must be known. A high WIP increases the impact of any sequencing rule on the logistic objectives.
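Since (6) is only partially legible in the extracted text and is reconstructed above from the paper's verbal description, the following Python sketch is likewise a reconstruction rather than the authors' code; the helper names (order_specific_price, eco_s_priority) and the piecewise energy term are ours.

```python
# Sketch of the ECO-S priority rule; the piecewise energy term follows the
# reconstructed equation (6), and all names here are illustrative only.

def order_specific_price(price_segments):
    """Average energy price EP_n,i weighted over the processing time (eq. 3).

    price_segments: list of (duration_h, price_cent_per_kwh) tuples.
    """
    total_h = sum(d for d, _ in price_segments)
    return sum(d * p for d, p in price_segments) / total_h

def eco_s_priority(p_avg, p_max, ep_order, ep_median, ep_min, ep_max,
                   slack, slack_max, sigma):
    """Priority value PR_n,i; the order with the lowest value is processed first."""
    slack = max(slack, -slack_max)   # negative slack limited to -slack_max (eq. 7)
    if ep_order < ep_median:
        # cheap order-specific price: higher power -> lower priority value
        energy = 1.0 - (p_avg * (ep_median - ep_order)) / (p_max * (ep_median - ep_min))
    else:
        # expensive order-specific price: lower power -> lower priority value
        energy = (p_avg * (ep_order - ep_median)) / (p_max * (ep_max - ep_median))
    return (1.0 - sigma) * energy + sigma * slack / slack_max

# The worked example from the text: 1 h at 5 cent/kWh plus 3 h at 6 cent/kWh.
print(order_specific_price([(1, 5.0), (3, 6.0)]))   # 5.75 cent/kWh
```

With sigma = 0 the rule considers energy costs only, and with sigma = 1 it degenerates to the least slack rule, matching the positioning described above.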


V. SIMULATION STUDY OF ECO-S

To validate the developed sequencing method ECO-S, a simulation model was created using the software Tecnomatix Plant Simulation. In addition, the ECO-S method and the reference method FIFO were implemented in the simulation model. The integrated shop floor consists of four work stations on which six product variants are manufactured. The product variants differ with regard to their sequence of operations, electrical power consumption and individual processing times. The lot sizes are determined by an Erlang distribution to generate a high variation coefficient (0.65) with a mean value of 13, while the mix of product variants follows a predetermined pattern. Every 12 hours, the order release decontrols orders by the use of their planned start of order processing. In addition, a high WIP (60 h) was created, and a real volatile electricity price was integrated (intraday spot price of the EEX from 1st January to 4th November 2015). After a transient phase of 10 days, all orders completed in the following 90 days are evaluated. Ten observations with different starting values of the underlying random distributions were realized for each experiment to avoid random influences.

Fig. 4 Impact of the sigma of ECO-S on the logistic objectives (panels a-c)

As shown in Fig. 4a, the punctuality increases up to 100 percent with an increasing sigma. The energy costs increase with an increasing sigma as well, but the effect occurs later. This results from an uneven variation in the components of (6). At a sigma of 0, 14.15 percent of energy costs could be saved. At the same time, at a sigma of 0, extremely high throughput times and a high standard deviation (SD) of the throughput time arise, which decrease with an increasing sigma (Fig. 4b). In addition, at a sigma of 0, the positive schedule deviation (the time delayed orders are tardy) is extremely high. With a rising sigma, this deviation decreases. The negative schedule deviation is consistently negative, i.e., on average, all orders are completed too early (Fig. 4c).

For a company that has decided to select a sigma of 0.5 for ECO-S, compared to FIFO, the punctuality is 12.1 percent higher and 13.9 percent of energy costs could be saved. However, this selection of a sigma of 0.5 results in a 3 percent (4.78 h) increased throughput time and a doubled standard deviation of the throughput time. The average negative schedule deviation of orders is 13 percent; accordingly, it is seven hours less


than in FIFO.

To validate the conditions of application, several model parameters have been varied. First, the volatile electricity prices have been linearly discounted relative to the median value of the population to reduce the standard deviation of the electricity price. Fig. 5a shows that the energy cost savings are highest when a highly volatile electricity price is taken as a basis. If all prices are fully discounted, and thus a constant price is assumed, no energy costs can be saved by using this method.

If the mean lot size of the Erlang distribution is increased linearly in the simulation model, longer mean processing times for each order and work station result under constant individual processing times. As a result, the energy cost savings decrease (Fig. 5b). If an order has processing times that exceed 24 hours per work station, only a small amount of the energy costs can be saved, because a displacement of the orders cannot account for the systematic daily electricity price variations.

An increase of the mean WIP in front of each work station causes the highest resulting effect compared to any priority rule [7]. With an increasing WIP, the energy cost savings increase and converge to a maximum (Fig. 5c).

If the fluctuation of the average order-specific electrical power is discounted similarly to the electricity price, the savings decrease. If all orders processed at a work station do not differ in their electrical power (discounting of 100 percent), then no energy costs can be saved (Fig. 5d).

The interval length used to calculate the current electricity price median should be selected to typify a representative period on the basis of the mean and the maximum processing times for each order and work station. Excessively long intervals do not consider high and low electricity price periods sufficiently (Fig. 5e). In contrast, excessively short intervals may end before the finish time of the regarded order.

Fig. 5f demonstrates that, with an increasing portion of fixed energy consumption at all work stations, the energy cost savings decrease and approach a minimum level.

Fig. 5 Impact of simulation model parameters on the energy cost savings of ECO-S towards FIFO (panels a-f)

VI. SUMMARY

In this paper, the future importance of considering fluctuating energy prices in production via the manufacturing control was highlighted. In the demonstrated simulation model, the developed sequencing method affords energy cost savings of up to 14 percent towards FIFO without endangering the schedule reliability or schedule deviation. By using a simulation model, the energy cost savings were identified, and the conditions of application and potentials were demonstrated. Thus, the ECO-S is particularly suitable for short processing times for each order and work station, as well as for highly fluctuating electrical power consumption. Using ECO-S, the case of highly volatile electricity prices results in the highest energy cost savings.

REFERENCES
[1] Department of Energy and Climate Change of the United Kingdom, "Industrial electricity prices in the IEA", 2015.
[2] B. Moreno, M. Garcia-Alvarez, C. Ramos, E. Fernandez-Vazquez, "A General Maximum Entropy Econometric approach to model industrial electricity prices in Spain: A challenge for the competitiveness", Applied Energy, vol. 135, pp. 815-824, 2014.
[3] P. Siano, "Demand response and smart grids - A survey", Renewable and Sustainable Energy Reviews, vol. 30, pp. 461-478, 2014.
[4] Department of Energy of the United States of America, "Benefits of demand response in electricity markets and recommendations for achieving them - a report to the United States Congress pursuant to section 1252 of the Energy Policy Act of 2005", Washington, 2006.
[5] European Parliament and Council of the European Union, "Directive 2012/27/EU of the European Parliament and of the Council of 25 October 2012 on energy efficiency", Official Journal of the European Union, 2012.
[6] European Energy Exchange - European Power Exchange (EPEX) Intraday Prices, http://www.epexspot.com/en/
[7] H.-P. Wiendahl, "Fertigungsregelung. Logistische Beherrschung von Fertigungsabläufen auf Basis des Trichtermodells", Munich, Vienna, Carl Hanser, 1997.
[8] H. Lödding, "Handbook of Manufacturing Control", Berlin, New York, Springer, 2013.
[9] S. Willeke, S. Wesebaum, G. Ullmann, P. Nyhuis, "Energiekosteneffiziente Fertigungssteuerung", ZWF, vol. 109, no. 5, pp. 328-331, 2014.
[10] M. Junge, "Simulationsgestützte Entwicklung und Optimierung einer energieeffizienten Produktionssteuerung", Ph.D. Thesis, Kassel, Kassel University Press GmbH, 2007.
[11] M. Rager, "Energieorientierte Produktionsplanung", Ph.D. Thesis, Wiesbaden, Betriebswirtschaftlicher Verlag Dr. Th. Gabler, 2008.
[12] N. Weinert, S. Chiotellis, G. Seliger, "Methodology for planning and operating energy-efficient production systems", CIRP Annals - Manufacturing Technology, vol. 60, pp. 41-44, 2011.
[13] H. Haag, J. Siegert, T. Bauernhansl, E. Westkämper, "An Approach for the Planning and Optimization of Energy Consumption", CIRP International Conference on Life Cycle Engineering, vol. 19, pp. 335-339, 2012.
[14] A. Pechmann, I. Schöler, R. Hackmann, "Energy efficient and intelligent production scheduling: Evaluation of new production planning and scheduling software", CIRP International Conference on Life Cycle Engineering, vol. 19, pp. 491-496, 2012.
[15] A. Bruzzone, D. Anghinolfi, M. Paolucci, F. Tonelli, "Energy-aware scheduling for improving manufacturing process sustainability: A mathematical model for flexible flow shops", CIRP Annals - Manufacturing Technology, vol. 61, pp. 459-462, 2012.
[16] C. Artigues, P. Lopez, A. Hait, "The energy scheduling problem: Industrial case-study and constraint propagation techniques", International Journal of Production Economics, vol. 143, pp. 13-23, 2013.
[17] P. Eberspächer, A. Verl, "Realizing energy reduction of machine tools through a control-integrated consumption graph-based optimization method", Procedia CIRP, vol. 7, pp. 640-645, 2013.
[18] M. Fernandez, L. Li, Z. Sun, ""Just-for-Peak" buffer inventory for peak electricity demand reduction of manufacturing systems", International Journal of Production Economics, vol. 146, pp. 178-184, 2013.
[19] C. Böning, "Stromintensive Fertigung - Energiekostenorientierte Belegungsplanung", IT&Produktion, vol. 14, pp. 88-89, 2013.
[20] T. Langer, A. Schlegel, J. Stoldt, M. Putz, "A model-based approach to energy-saving manufacturing control strategies", Procedia CIRP, vol. 15, pp. 123-128, 2014.
[21] F. Shrouf, J. Ordieres-Meré, A. García-Sánchez, M. Ortega-Mier, "Optimizing the production scheduling of a single machine to minimize total energy consumption costs", Journal of Cleaner Production, vol. 67, pp. 197-207, 2014.
[22] C. Schultz, P. Sellmaier, G. Reinhart, "An approach for energy-oriented production control using energy flexibility", Procedia CIRP, vol. 29, pp. 197-202, 2015.
[23] X. Gong, T. De Pessemier, W. Joseph, L. Martens, "A generic method for energy-efficient and energy-cost-effective production at the unit process level", Journal of Cleaner Production, accepted, 2015.


Economic Production Quantity with Imperfect Quality, Imperfect Inspections, Sales Return, and Allowable Shortages

Muhammad Al-Salamah
Mechanical and Industrial Engineering Department, Majmaah University, Saudi Arabia
m.alsalamah@mu.edu.sa

Abstract - This paper builds on the model of Yoo, Kim, and Park (S. H. Yoo, D.S. Kim, and M.-S. Park, "Economic production quantity model with imperfect-quality items, two-way imperfect inspection and sales return," International Journal of Production Economics, vol. 121 (1), pp. 255-65, 2009) and extends their model by considering the possibility that the producer can permit shortages, which the producer can utilize to reduce the costs of inventory. The coverage of the model offers an alternative and simplified modeling approach. The concavity of the expected total profit function is proved under certain conditions, and the concept is illustrated through a numerical example. It is shown that the EPQ model with shortages is effective for certain ranges of the defective proportion and the probability of the Type 1 error; outside these ranges, the model with unpermitted shortages must be implemented.

Keywords - EPQ, imperfect quality, imperfect inspection, shortage

I. INTRODUCTION

The Economic Order Quantity / Economic Production Quantity (EOQ/EPQ) models have long evolved from their basic assumptions to address important issues in inventory control, such as an item's imperfect quality and quality assessment plans. In this direction, inventory models with the possibility of imperfect quality have been developed to determine the optimal lot size that either minimizes a cost function or maximizes a profit function. There have been recent advances in the literature on this topic. Ref. [1] assumes imperfect items in an inventory model where these imperfect items can be sold in a single batch at the end of a screening process, and it has been found that the economic lot size reacts positively to the increase in the average percentage of imperfect quality items. The inventory model in [2] allows for shortages to be backordered in a case where items can be of imperfect quality. In extending the model in [1], [3] allows for shortages which are backordered. Ref. [4] proposes a profit-maximizing inventory model to find the optimal lot size when the quality is imperfect, the inspection is influenced by two types of errors, and defective items are returned; the proposed model does not permit shortages. The model in [5] is an extension of an earlier model to include the possibility of an increasing rate of inspection due to learning. Ref. [6] assigns costs to Type 1 and Type 2 errors associated with the inspection. In an attempt to examine the key factors of the manufacturing process's instability and the defective item inventory, [7] develops EPQ models to determine the lot size and the backorder under the conditions of fixed vs. probabilistic fraction of defective items and the timing of defective items' withdrawal from inventory. Recently, [8] has determined the optimal lot sizes in the batch manufacturing setting with imperfect quality items when the producer implements acceptance sampling with destructive and non-destructive testing, implemented separately, to determine if the lot can be accepted or rejected. The EOQ model in [9] takes on the assumption of a fixed screening rate, and the model considers the possibility of shortages occurring during the screening stage; hence the screening rate is treated as a decision variable. The EOQ model developed in [10] considers misclassifications in inspection and shortages.

This paper extends the model of [4] by assuming the possibility of shortages which are backordered. The paper is organized as follows. Notations are introduced in Section II. In Section III, the model is formulated and the optimal solutions of the decision variables are obtained. In Section IV, a numerical example is presented along with the sensitivity analysis.

II. NOTATIONS

The notations in [4] will be maintained, with some notations being redefined, and new notations are introduced:

Maximum allowable shortage level
Shortage cost per item short
Shortage cost per item short per unit of time
Production time duration during the positive inventory
Beginning-of-cycle shortage time duration
Time duration of inventory depletion of serviceable items
Time duration of shortage buildup

III. MODEL FORMULATION

The objective is to find the optimal quantity and the optimal shortage level that maximize the total profit (total profit per unit time), which is the total revenues minus the total


cost. The total revenue value is given by (3) in [4].

The total cost is defined by [4] as the sum of all costs in a cycle; these costs include the production setup and variable costs, inspection, return, penalty, and rework costs, in addition to the total inventory holding cost. In this article, the shortage cost is added to the objective function. After adding shortage to the model, the inventory curves are as illustrated in Fig. 1. Now the model has both positive and negative inventory; this situation affects the different time segments in the cycle as well as the cycle length. From the inventory curves, the durations of the time segments can be observed:

(1)

(2)

(3)

(4)

Fig. 1: Inventory curves: (a) serviceable and reworked items, (b) screened items, (c) returned items, and (d) items being reworked


At the end of the production time, the number of items in the inventory is equal to the number of serviceable items minus the number of sales items during that period. The resulting fraction represents the portion of the lot size which is stacked up before the production ends and before the production starts, respectively. Therefore, it can be written:

(5)

(6)

The cycle length is:

(7)

From [4], the defective and screened nondefective fraction, the serviceable fraction, and the sales fraction are adopted. From (5) and (6), the pace represents the time between arrivals of items to the inventory; it has a unit of items per unit of time. To simplify the model, a single symbol is defined for this rate.

The inventory curves are shown in Fig. 1. The inventory area of serviceable items during the manufacturing period is given by:

(8)

The inventory area of serviceable items during the rework period is:

(9)

The next inventory area is given by:

(10)

At the end of the rework period, the number of reworked units accumulated equals the rework fraction of the items produced. Hence, the corresponding inventory area is:

(11)

Define the screened items fraction. The corresponding inventory area is given by:

(12)

The remaining inventory areas are:

(13)-(16)

Based on the definitions of inventory costs in [4], the total inventory holding cost can be expressed as the sum of the holding costs over the individual areas plus the shortage terms:

(17)

The inventory holding costs per item per unit of time are defined in [4]. The shortage costs are the shortage cost per item short per unit of time and the shortage cost per item short. Define the sales fraction, the salvage fraction, and the unscreened defective fraction. The total profit in a cycle is:

(18)

In the definition of the inventory problem, the defect, error, and rework proportions are random variables with known distribution functions. After substituting their values, the expected total profit per unit of time is:


(19)

This function represents the return value to the producer, who aims to maximize it. Therefore, the optimality conditions need to be checked at the critical points. The components of the Hessian of the expected total profit function are:

(20)

(21)

(22)

It can be observed that one of the second-order derivatives is negative. In order for the other to be negative as well, it is required that:

(23)

The determinant of the Hessian is given by:

(24)

Hence, the determinant is positive if the shortage cost satisfies a further condition:


(25)

Therefore, the Hessian is negative definite, and as a result the expected total profit function is concave, only if the shortage cost meets the conditions in (23) and (25).

To find the optimal values of the lot size and the maximum shortage, the partial derivatives of the expected total profit function are set to zero and solved, with the results given in (26) and (27).

(26)

(27)

It can be observed that if the maximum shortage is set to zero, then the value in (27) reduces to the lot size of the no-shortage case in [4]. From (26) and (27), prior near-optimal values for the lot size and the maximum shortage are needed to calculate the optimal values. If these starting values are absent, a recursive procedure can be implemented, in which an initial value for either variable is assumed and used to start a procedure that refines the values until convergence is reached. It is proposed in this article that a starting value for the lot size is the optimal lot size from the model without shortage in [4].

IV. NUMERICAL EXAMPLE AND SENSITIVITY ANALYSIS

In the model implementation and sensitivity analysis, the parameter values in [4] are adopted, with additional values for the two shortage costs:

h = 0.2 per year
$25 per unit
$0.5 per unit
$3 per unit
$15 per unit
$5 per unit
$50 per unit
$20 per unit
50,000 units per year
100,000 units per year
80,000 units per year
$100 per cycle
$0.0925 per unit (shortage cost per item short)
$0.001 per unit per year (shortage cost per item short per unit of time)

The probability distribution of the fraction defective is uniform with density 25 on [0, 0.04] and 0 otherwise (28). The probability distribution of the Type 1 error is uniform with density 50 on [0.01, 0.03] and 0 otherwise (29). The probability distribution of the Type 2 error is uniform with density 25 on [0.03, 0.07] and 0 otherwise (30). The rework proportion is also a random variable; in [4] it was fixed at 0.4 due to computational difficulty. In this article it is not fixed; rather, it is assumed to follow a uniform distribution with density 10 on [0.35, 0.45] and 0 otherwise (31).
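Since (28)-(31) are uniform densities, the expectations that enter (26) and (27) follow directly from integrals over the supports; the short Python sketch below (function names are ours) reproduces the means that reappear later as the fixed proportion values.

```python
# The densities (28)-(31) are uniform on their supports, so any expectation
# E[g(X)] reduces to an integral over the interval; approximated numerically here.

def uniform_expectation(g, lo, hi, steps=100_000):
    """Approximate E[g(X)] for X ~ Uniform(lo, hi) by midpoint integration."""
    width = (hi - lo) / steps
    return sum(g(lo + (k + 0.5) * width) for k in range(steps)) * width / (hi - lo)

supports = {
    "fraction defective (28)": (0.00, 0.04),
    "Type 1 error (29)":       (0.01, 0.03),
    "Type 2 error (30)":       (0.03, 0.07),
    "rework proportion (31)":  (0.35, 0.45),
}
for name, (lo, hi) in supports.items():
    mean = uniform_expectation(lambda x: x, lo, hi)
    print(f"E[{name}] = {mean:.4f}")   # 0.0200, 0.0200, 0.0500, 0.4000
```

The same helper can evaluate the ratio-type expectations in (26) and (27) by passing the corresponding integrand as g.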
Based on the definitions of the probability distribution functions, the expected values required in (26) and (27) are calculated. To find the optimal lot size and maximum allowed shortage, the recursive procedure is applied as proposed in the previous section.


proposed in the previous section. The recursive procedure is 0.0577432


started with = 2023.51, which is the lot size in the case
of shortage not allowed. Then, ( ) is calculated from
(26). Then, is calculated from (27). The steps of the 0.0516174

recursive procedure are shown in Table I. Convergence is


reached after 6 steps; the optimal lot size ∗ = 2006.99
units, optimal maximum allowed shortage ( )∗ = 0.0445066
270.17 units, the optimal expected total profit per year 1207020
( ∗, ( )∗ ) = $1.207 × 10 per year, and the

expected cycle length is = 0.0445067 years. In
evaluating the two conditions in (23) and (25), = 1207010
0.0925 < min(0.0952324,0.0951298). The plot of
( , ) is shown in Figure 2.
1207000

999.184
Table I: Recursive Procedure to find the optimal and
Steps
( ) 660.39

1 2023.51 249.147
2 2008.22 268.512
3 2007.09 270.039 270.17
4 2007.00 270.159
5 2006.99 270.169 2005.99
6 2006.99 270.17
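The closed-form expressions (26) and (27) are not legible in this copy, so the following Python sketch shows only the structure of the recursive procedure; lot_size_update and shortage_update are hypothetical stand-ins for (27) and (26), and the seed 2023.51 is the no-shortage lot size quoted above.

```python
# Structure of the recursive procedure behind Table I; the two update
# functions are hypothetical placeholders for the closed forms (26) and (27).

def solve_recursively(lot_size_update, shortage_update,
                      q_start=2023.51, tol=1e-2, max_iter=100):
    """Alternate between (26) and (27) until both values stabilize."""
    q = q_start                      # no-shortage lot size from [4] as seed
    b = shortage_update(q)           # maximum shortage from (26)
    for _ in range(max_iter):
        q_new = lot_size_update(b)   # lot size from (27)
        b_new = shortage_update(q_new)
        if abs(q_new - q) < tol and abs(b_new - b) < tol:
            return q_new, b_new
        q, b = q_new, b_new
    return q, b
```

With the paper's parameter values, this alternation reproduces the trajectory of Table I, stopping near a lot size of 2006.99 units and a maximum shortage of 270.17 units after six steps.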

In the analysis of the model, Fig. 3 illustrates how variation in the value of the shortage cost per unit changes the optimal values of the lot size, the maximum shortage, the expected total profit per year, and the expected cycle length. When the shortage cost is increased, the lot size increases at a rate of around 3 items per 1-cent increase in the shortage cost. The maximum shortage is more sensitive to changes in the shortage cost; it declines at a rate of 40 units per 1-cent increase. The expected total profit per year declines with an increase in the shortage cost; the annual expected profit falls at a rate of $1 per 1-cent increase. The expected cycle length also exhibits a decline with the increase in the shortage cost, at a rate of 0.00072 years (0.26 days) per 1-cent increase. Therefore, it can be concluded from this analysis that the shortage cost per unit is an influential factor for the producer's profit as well as for the production process in terms of the optimal lot size and the production cycle length.

Fig. 3: Sensitivity analysis of the shortage cost per item short

The sensitivity analysis of the shortage cost per unit per year is illustrated in Fig. 4. Its value has been varied up to a value at which noticeable changes are observed for the decision variables. When the value is increased from $0.001 to $10 per unit per year, the lot size, the maximum allowable shortage, and the cycle length change only slightly, whereas the expected total profit per year does not change. Based on these observations, the producer can ignore the shortage cost per unit per year and rely instead only on the shortage cost per unit.

In conducting the sensitivity analysis on the defect, error, and rework fractions, the corresponding expected values are substituted into the recursive procedure consisting of (26) and (27). After calculating the expected values, the fixed values for the proportions are 0.4 for the rework proportion, 0.02 for the fraction defective, 0.02 for the Type 1 error, and 0.05 for the Type 2 error. The range of variation in the values of these proportions is restricted between 0.005 and 0.022; this is because for values greater than 0.022 for either the fraction defective or the Type 1 error, the optimal shortage value is negative. Fig. 5 plots the optimal lots, the maximum allowable shortages, the total profit, and the cycle lengths as the values of the three proportions are varied. The defect proportion and the Type 1 error show a significant effect on the decision variables. As the defect proportion increases, the lot size increases, the maximum shortage drops to about 80 items, the total profit declines, and the cycle length settles down to about 0.0411 years. The effects of


the Type 1 error have similar patterns to those of the defect proportion. The probability of Type 2 error has a subtle effect on the lot size, maximum shortage, total profit per year, and cycle length. Thus, it can be concluded that, in order to maximize its total profit in a year, the producer is advised to increase the lot size, avoid backorders, and reduce the cycle lengths when faced with large values of the defect proportion or the Type 1 error. The probability of Type 2 error has minimal effect on the inventory decision. It is additionally noted that there are thresholds for the defect proportion and the Type 1 error probability beyond which the backorder option is not a profit maximizer for the producer; therefore, the no-shortage model must be used in these cases.

The sensitivity analysis of the rework proportion is shown in Fig. 6. The variation of the rework proportion is set between 0.1 and 0.5. If it is increased, the lot size drops, while the maximum allowable shortage, the total profit per year, and the cycle length increase. The producer is therefore advised to have higher lot sizes, smaller backorders, and shorter cycle lengths when the rework proportion is at the lower end; the producer should lower the lot size, increase the backorders, and increase the cycle length for higher values of the rework proportion.

Fig. 4: Sensitivity analysis of the shortage cost per item short per unit of time

Fig. 5: Sensitivity analysis of the defect proportion, probability of Type 1 error, and probability of Type 2 error


Fig. 6: Sensitivity analysis of the rework proportion

V. CONCLUSION

This article extends the model of Yoo, Kim, and Park (S. H. Yoo, D.S. Kim, and M.-S. Park, "Economic production quantity model with imperfect-quality items, two-way imperfect inspection and sales return," International Journal of Production Economics, vol. 121 (1), pp. 255-65, 2009) and assumes that the producer allows for a maximum shortage in the production cycle. The advantage of allowable shortages is that the producer can reduce the associated inventory carrying cost by permitting backordered items, which can be delivered at the beginning of the following production cycle. The inventory model with two decision variables, lot size and maximum allowable shortage, is developed. The bivariate objective function is proved to be concave under certain conditions, and the formulas for the optimal lot size and shortage are derived. In the analysis of the model parameters, it has been found that the lot size increases with the increase in the shortage cost per unit, while the maximum shortage, the total profit per period, and the cycle length decline with the increase in the shortage cost per unit. It has also been shown that the shortage cost per item per unit of time has no noticeable effect on the decision variables; hence, this cost can be ignored in the model. In addition, it has been observed that the defect proportion and the probability of Type 1 error exhibit noticeable effects on the lot size and the shortage level, while the probability of Type 2 error is insignificant in affecting the lot size and the shortage level. The rework proportion inversely influences the lot size; on the other hand, the shortage level and the cycle length increase with the increase in the rework proportion.

VI. REFERENCES
[1] M. K. Salameh and M. Y. Jaber, "Economic ordering quantity models for items with imperfect quality," International Journal of Production Economics, vol. 64, pp. 59-64, 2000. doi:10.1016/S0925-5273(99)00044-4
[2] H.M. Wee, J. Yu, and M.C. Chen, "Optimal inventory model for items with imperfect quality and shortage backordering," Omega, vol. 35 (1), pp. 7-11, 2007. doi:10.1016/j.omega.2005.01.019
[3] A. Eroglu and G. Ozdemir, "An economic order quantity model with defective items and shortages," International Journal of Production Economics, vol. 106 (2), pp. 544-549, 2007. doi:10.1016/j.ijpe.2006.06.015
[4] S. H. Yoo, D.S. Kim, and M.-S. Park, "Economic production quantity model with imperfect-quality items, two-way imperfect inspection and sales return," International Journal of Production Economics, vol. 121 (1), pp. 255-265, 2009. doi:10.1016/j.ijpe.2009.05.008
[5] M. Khan, M. Y. Jaber, and M.I.M. Wahab, "Economic order quantity model for items with imperfect quality with learning in inspection," International Journal of Production Economics, vol. 124, pp. 87-96, 2010. doi:10.1016/j.ijpe.2009.10.011
[6] M. Khan, M. Y. Jaber, and M. Bonney, "An economic order quantity (EOQ) for items with imperfect quality and inspection errors," International Journal of Production Economics, vol. 133 (1), pp. 113-118, 2011. doi:10.1016/j.ijpe.2010.01.023
[7] L.-F. Hsu and J.-T. Hsu, "Economic production quantity (EPQ) models under an imperfect production process with shortages backordered," International Journal of Systems Science, vol. 47 (4), pp. 852-867, 2014. http://dx.doi.org/10.1080/00207721.2014.906768
[8] M. Al-Salamah, "Economic production quantity in batch manufacturing with imperfect quality, imperfect inspection, and destructive and non-destructive acceptance sampling in a two-tier market," Computers & Industrial Engineering, vol. 93, pp. 275-285, 2016. http://dx.doi.org/10.1016/j.cie.2015.12.022
[9] T.-Y. Lin, M.-T. Chen, and K.-L. Hou, "An inventory model for items with imperfect quality and quantity discounts under adjusted screening rate and earned interest," Journal of Industrial and Management Optimization, vol. 12 (4), 2016. doi:10.3934/jimo.2016.12.1333
[10] Y. Zhou, C. Chen, C. Li, and Y. Zhong, "A synergic economic order quantity model with trade credit, shortages, imperfect quality and inspection errors," Applied Mathematical Modelling, vol. 40 (2), pp. 1012-1028, 2016. doi:10.1016/j.apm.2015.06.020


Analysis and Prediction Cost of Manufacturing Process Based on Process Mining

Thi Bich Hong Tu
School of Business Administration
Ulsan National Institute of Science and Technology
Ulsan, South Korea
hong91@unist.ac.kr

Minseok Song
Department of Industrial & Management Engineering
POSTECH (Pohang Univ. of Sci. & Tech.)
Pohang, South Korea
mssong@postech.ac.kr

Abstract—Analysis and prediction of manufacturing cost play a decisive role in manufacturing process management; however, they are challenging to conduct due to the complexity of the manufacturing process. Process mining has been demonstrated to be a valuable tool for observing and diagnosing inefficiencies of a business process based on event logs. Nevertheless, significantly less attention has been paid to investigating the cost perspective. Therefore, this paper suggests a framework to analyze and predict manufacturing cost by utilizing and extending existing process mining techniques. In this study, new techniques such as process model-enhanced cost, and cost prediction based on production volume and time prediction using the working progress of manufacturing processes, are presented.

Keywords—manufacturing process; manufacturing cost; process mining; cost analysis; cost prediction

I. INTRODUCTION

The pressure from globalization and rapid technological change has motivated manufacturing enterprises to move towards three primary competitive factors, i.e. time, cost, and quality. Among them, cost is the most critical factor for gaining competitive advantages. Therefore, manufacturing enterprises have attempted to analyze manufacturing cost. However, this analysis is challenging due to the complexity of the manufacturing process. For instance, computing the cost of a particular task or an individual resource whenever the organization requires it is hard to accomplish. Questions regarding consumed or remaining cost cannot be accurately answered until the end of the process. The complexity of a manufacturing process refers to the variety of products and employed work centers [1]. Therefore, a proper method is needed to handle cost mining in the manufacturing sector.

Process mining has been demonstrated to be a valuable tool for analyzing operational processes and tracking down their problems or inefficiencies using event logs [2]. In the literature, very few approaches exist that address the cost perspective [3]. For example, Nauta (2011) proposed a way towards cost awareness in process mining and its first implementation [4]. However, it did not fully support cost mining of manufacturing processes, where cost depends not only on duration but also on production volume. This paper addresses a method for analyzing and predicting the cost of a manufacturing process. To do this, new techniques such as process model-enhanced cost and cost prediction are presented. Process model-enhanced cost is a systematic plug-in that allows users to compute the detailed cost of a particular task or an individual resource of a manufacturing process. The purpose of cost prediction is to predict the cost of running instances based on production volume and time prediction using the working progress of manufacturing processes.

The remainder of this paper is organized as follows. Related works are explained in Section 2. Section 3 introduces manufacturing cost. Section 4 presents a methodology to analyze and predict cost. A validation of the proposed method is shown in Section 5. Finally, Section 6 concludes the paper.

II. RELATED WORKS

The starting point for process mining is an event log that can be derived from Process-Aware Information Systems (PAISs) [5]. This study distinguishes three different perspectives, namely, the process perspective, the organizational perspective and the performance perspective. The process perspective focuses on the control flow, i.e., the ordering of activities, aiming to find a good characterization of all possible paths [6]. The organizational perspective focuses on resource fields and their relationships. The performance perspective focuses on discovering bottlenecks, measuring service levels and monitoring the utilization of resources [7]. Each of the three perspectives includes three types of process mining: discovery, conformance, and enhancement. Process mining has been applied in many fields, such as healthcare and port logistics. In the manufacturing area, only a few studies have been introduced. In addition to this shortage of applications, most of the previous studies have investigated only the time perspective; few studies examine the cost perspective [3, 4].

III. MANUFACTURING COST

A manufacturing process concerns all efforts of an organization to add value to the inputs, transforming them into the outputs [8]. Each activity includes attributes of actual as well as plan data, such as timestamps, work progress and so on. Thus, the total cost of a manufacturing process is the summation of the costs of the individual activities included in that process. An organization can use various costing techniques such as job-order costing and activity-based costing (ABC) to set


and manage manufacturing cost per activity. Job-order costing identifies and assigns production cost to a particular job [9]. ABC allocates overhead cost to a product using the activities required to produce the product [10].

Previous studies have shown that manufacturing costs are all the essential costs for converting the inputs into the products [9]. They also mention that manufacturing cost is typically divided into three categories: direct labor cost (DL), direct material cost (DM), and manufacturing overhead cost (OH). DM is the cost of materials which become part of the finished product. DL is the cost of workers who add value to a product. OH includes the cost of all activities that support a manufacturing process but often are not directly related to any particular product. The basic concept for calculating those cost types has been employed elsewhere [9]. This study extends the original ideas above by an extra simplification step, calculating them per activity or job of the manufacturing process.

The DM of task k can be expressed as:

DM_k = Σ_{j=1}^{M} (MR_j · Q_j) = Σ_{j=1}^{M} (Con_j · t_j · P_j · Q_j)   (1)

where MR_j is the material rate; Con_j the unit of net consumption; t_j the ratio of the waste materials allowance; P_j the standard price of the material; Q_j the production volume; and M the number of different materials used for the task.

The DL of task k can be expressed as:

DL_k = Σ_{a=1}^{E} (LR_a · T_a)   (2)

where LR_a is the labor rate; T_a the time for completing task k; and E the number of different employees concerned with the task.

Moreover, this study separates the OH cost into the indirect material cost, indirect labor cost, machinery cost, and other overhead costs for the purpose of extracting them in detail. The OH of task k can be expressed as:

OH_k = IDM_k + IL_k + MC_k + OtherOH_k   (3)

where IDM is the indirect material cost, IL the indirect labor cost, and MC the machinery cost, with

MC_k = Σ_{b=1}^{P} ( w_b · (Pur_b − Tr_b) / l_b · T_b )   (3.1)

where T_b is the duration; w_b the percentage of value added; Pur_b the purchase cost of the machine; Tr_b the trade-off price of the machine; l_b the expected life span of the machine (hours); and P the number of different machines used for the task.
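As an illustration of (1)-(3.1), the per-task cost components can be computed as in the Python sketch below; the class and function names are ours and only mirror the symbols defined above.

```python
# Direct material (1), direct labor (2), and machinery cost (3.1) per task,
# coded directly from the formulas above; names are illustrative only.
from dataclasses import dataclass

@dataclass
class Material:
    con: float   # Con_j: unit of net consumption
    t: float     # t_j: ratio of waste materials allowance
    p: float     # P_j: standard price of material
    q: float     # Q_j: production volume

def task_dm(materials):                      # eq. (1)
    return sum(m.con * m.t * m.p * m.q for m in materials)

def task_dl(labor):                          # eq. (2): list of (LR_a, T_a) pairs
    return sum(rate * hours for rate, hours in labor)

def machine_cost(w, purchase, tradeoff, lifespan_h, hours):   # eq. (3.1)
    return w * (purchase - tradeoff) / lifespan_h * hours

def task_oh(idm, il, machines, other):       # eq. (3)
    return idm + il + sum(machine_cost(*m) for m in machines) + other
```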
IV. MANUFACTURING COST ANALYSIS BASED ON PROCESS MINING

To analyze and predict the cost of a manufacturing process, we need an event log-enhanced cost as the input. The detailed steps to enhance an event log with cost are well described elsewhere [5, 6]. This study focuses on formalizing it in the manufacturing process context as follows.

Definition 1: (Event, Case, Trace of manufacturing processes) Let K be the event universe, i.e., the set of possible event identifiers, and let A denote the set of activities. Let the attribute sets of an activity for plan and actual data be as follows: S and S' are sets of start timestamps, C and C' sets of complete timestamps, R and R' sets of resources, Q and Q' sets of production volumes, W and W' sets of working progresses (0 ≤ W ≤ 1); lr is a set of direct labor costs, ilr a set of indirect labor costs, mr a set of direct material costs, imr a set of indirect material costs, ma a set of machinery costs, and oh a set of other costs. An event E is a tuple (A, S, C, R, Q, W, S', C', Q', W', lr, ilr, ma, mr, imr, oh). For any event e ∈ E, π_n(e) is the value of attribute n for event e.

Case. A case C is a set of events and has attributes. Cases always have a trace, denoted σ ∈ E*, a finite sequence of events such that each event appears only once, i.e., for 1 ≤ i < j ≤ |σ|: σ(i) ≠ σ(j), and time is non-descending, denoted σ(i) ≤ σ(j) if i occurs before j.

Definition 2: (Event log-enhanced cost of manufacturing processes) L ∈ β(C) is an event log, where β(C) is the set of all bags (multi-sets) over C.

TABLE I shows a fraction of an event log-enhanced cost, which consists of cases, events, timestamps, working progress, resources, and production volume. For instance, the case with case ID 1 consists of the sequence of events and activities E01 (A01), E02 (A02), and E03 (A03); the event E01 (A01) has attributes such as start timestamps for plan and actual (S = {03.10.15}, S' = {03.14.15}), complete timestamps (C = {03.20.15}, C' = {03.30.15}), resources (R = {R01}, R' = {R01}), working progress (W = {100%}, W' = {100%}), production volume (Q = {5565}, Q' = {5565}), direct labor cost (lr = {0.71}), indirect labor cost (ilr = {3.49}), direct material cost (mr = {14.15}), indirect material cost (imr = {0.04}), machinery cost (ma = {0.5}), and other overhead cost (oh = {0.02}).

This study presents process model-enhanced cost and cost prediction as techniques for analyzing and predicting the cost of a manufacturing process. Process model-enhanced cost attempts to capture the detailed cost of each task or resource in the process. Cost prediction not only describes the cumulative cost of each task or resource in a timely manner but also predicts the cost of running instances based on time prediction and production volume using working progress.

A. Process model-enhanced cost

The process model-enhanced cost is defined in the form of a frequent sequence graph-enhanced cost. A frequent sequence graph is retrieved from an event log using frequent sequence mining [11, 12]. The following definitions formalize the frequent sequence graph and the frequent sequence graph-enhanced cost.


TABLE I. Event log-enhanced cost of a manufacturing process

Case ID | Event ID | A | S | C | R | Q | W | S' | C' | R' | Q' | W' | lr | ilr | mr | imr | ma | oh
1 | E01 | A01 | 03.10.15 | 03.20.15 | R01 | 5565 | 100% | 03.14.15 | 03.30.15 | R01 | 5565 | 100% | 0.71 | 3.49 | 14.15 | 0.04 | 0.50 | 0.02
1 | E02 | A02 | 03.21.15 | 04.15.15 | R02 | 5565 | 100% | 03.30.15 | 04.15.15 | R02 | 5565 | 100% | 0.86 | 14.15 | 1.17 | 0.41 | 0.70 | 0.02
1 | E03 | A03 | 04.17.15 | 04.23.15 | R03 | 5565 | 100% | 04.17.15 | - | R05 | 4174 | 75% | 0.18 | 0.04 | 0.00 | 0.01 | 0.78 | 0.01
2 | E11 | A01 | 03.19.15 | 03.30.15 | R01 | 500 | 100% | 03.19.15 | 03.20.15 | R01 | 500 | 100% | 0.25 | 1.17 | 0.00 | 0.50 | 1.04 | 0.03
2 | E12 | A02 | 03.29.15 | 04.03.15 | R02 | 500 | 100% | 03.29.15 | 04.03.15 | R02 | 500 | 100% | 0.30 | 0.00 | 4.08 | 0.70 | 0.95 | 0.04
2 | E13 | A03 | 04.03.15 | 04.11.15 | R03 | 500 | 100% | 04.03.15 | - | R03 | 500 | 35% | 0.27 | 0.00 | 0.00 | 0.78 | 0.36 | 0.01
2 | E14 | A04 | 04.07.15 | 04.14.15 | R04 | 500 | 100% | - | - | R04 | 175 | 0% | 0.04 | 4.08 | 3.49 | 1.04 | 0.41 | 0.02
2 E14 A04 04.07.15 04.14.15 R04 500 100% - - R04 175 0% 0.04 4.08 3.49 1.04 0.41 0.02

Definition 3: (Frequent sequence graph) A frequent sequence graph, denoted G, is a directed graph given by the tuple (N, a_s, a_e, E, L_N, L_E, f_n, f_e, l), where:
― N is a finite set of nodes, each representing an event e_i; every node carries all attributes of the event it represents, denoted #_n(e_i) = #_n(n_i), n_i ∈ N,
― a_s ∈ N is the start node such that a_s ≠ φ,
― a_e ∈ N is the end node such that a_e ≠ φ,
― E = {(n_i, n_j) | (n_i, n_j) ∈ N × N} is a set of edges representing the frequent sequences among nodes,
― L_N is a set of node labels,
― L_E is a set of edge labels,
― f_n is a set of frequencies such that f: n_i → n_j, denoted (n_ij), which represents the total flow occurring from n_i to n_j.

Definition 4: (Cost measurement function) A cost measurement function is a function that, given a bag of measurements, produces some cost value, e.g., the sum, average, min, or max. Formally, measure ∈ P(M) → M, i.e., for some bag of measurements p, costmeasure(p) returns some cost value. Let us assume that p = [p_i]_{i=1}^{n}, i.e., the measurements are taken as a sample. The sample total is defined as measure_total(p) = sum{[p_i]_{i=1}^{n}}, and the sample mean as the sum of the p_i divided by n. Other functions can be used for the measurements, for example measure_min(p) = min{[p_i]_{i=1}^{n}} or measure_max(p) = max{[p_i]_{i=1}^{n}}.

Definition 5: (Frequent sequence graph-enhanced cost) Let G(L) = (N, a_s, a_e, E, L_N, L_E, F, l) be a frequent sequence graph of event log L. A frequent sequence graph-enhanced cost is a tuple C(L) = (G(L), O, P), where:
― O: E → N is a function that associates an event [e] with the cost with which its event label occurs in the event log, and corresponds with the equivalent switched measurement, i.e. O([e]) = P([e]),
― P is a set of aggregate functions to measure cost in the graph as in Definition 4, ∀i, j ∈ P, i ≠ j.

B. Cost prediction

Assume that there is a close relationship between time, cost, and production volume. Therefore, cost prediction should be based on time prediction and production volume. Whenever an organization requires predicting the completion time of a process instance, we can take a partial trace and consider its working progress. The working progress of an event, denoted by "W", refers to the percentage completion of that event (0 ≤ W ≤ 100%). If an event has started but is not complete (0 < W < 100%), we can easily track back the consumed time and also calculate the remaining time using the working progress. Otherwise, we can learn from its plan timestamps. Definition 6 gives a formal definition of time prediction for an event in an event log.

Definition 6: (Event, time prediction) Let σ be a trace in event log L ⊆ C, and let e_m be an event in a trace such that e_m ∈ σ_i, 1 ≤ m ≤ n, n = length(σ_i). The remaining time until completion of event e_m, denoted d_remaining(e_m), is

d_remaining(e_m) = d_plan(e_m)   if w_uptodate(e_m) = 0
d_remaining(e_m) = d_uptodate(e_m) · (1 − w_uptodate(e_m)) / w_uptodate(e_m)   if 0 < w_uptodate(e_m) ≤ 1

in which d_plan(e_m) = E_plan(e_m) − S_plan(e_m) is the budgeted execution time to complete event e_m, and d_uptodate(e_m) = E_uptodate(e_m) − S_actual(e_m) is the actual execution time up to the reporting date.
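Definition 6 translates almost directly into code; a minimal Python sketch follows (the argument names and the numeric example are ours).

```python
# Remaining-time prediction per Definition 6: if an event has not started
# (w = 0), fall back on its planned duration; otherwise scale the elapsed
# time by the unfinished share of the work.

def remaining_time(d_plan, d_uptodate, w_uptodate):
    """d_plan: budgeted duration; d_uptodate: elapsed duration so far;
    w_uptodate: working progress in [0, 1]."""
    if w_uptodate == 0:
        return d_plan
    return d_uptodate * (1.0 - w_uptodate) / w_uptodate

# An event planned for 10 days that has run 6 days at 75% progress:
print(remaining_time(10.0, 6.0, 0.75))   # ~2.0 days remaining
```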
The partial trace has fully filled timestamps according to Definition 6. Subsequently, we can calculate the cost for the events in the trace. Given a trace σ_i in an event log, the cumulative cost of an event e_m in the trace refers to the sum of the costs of all events from the start event up to e_m, including the cost of e_m itself. Thus, the last event of a trace has the maximum cumulative cost. For any chosen event e_m in a trace, the remaining cost refers to the difference between the maximum cumulative cost and its cumulative cost.

Definition 7: (Trace, Event, Cost measurement) Let σ be a trace in event log L ⊆ C, and let e_m be an event in a trace such that e_m ∈ σ_i, where m is the order of the event in the trace,


1 ≤ m ≤ n, n = length(σ_i), 1 ≤ i ≤ |σ|. This produces the measurements ACost(e_m), the cumulative cost, and RCost(e_m), the remaining cost:

ACost(e_m) = Cost(e_m)   if m = 1
ACost(e_m) = ACost(e_{m−1}) + Cost(e_m)   if m ≠ 1

where ACost(e_{m−1}) = Cost(e_1) + Cost(e_2) + ... + Cost(e_{m−1}) and e_{m−1} >_σ e_m denotes that event e_{m−1} directly precedes event e_m;

RCost(e_m) = 0   if m = n
RCost(e_m) = max_{σ_i}(ACost) − ACost(e_m)   if m ≠ n

Note that these are not really estimators for the whole event log, but only for a trace in a log. We suppose that the cost of an event in a log refers to the set of costs of that event across traces, denoted Cost(e_m)_L = [Cost(e_m)_{σ_i}]_{1 ≤ i ≤ n} = [Cost(e_m)_{σ_i}, Cost(e_m)_{σ_{i+1}}, ...]. Therefore, we also have the cumulative cost of an event in a log, ACost(e_m)_L = [ACost(e_m)_{σ_i}]_{1 ≤ i ≤ n}, and the remaining cost of an event in a log, RCost(e_m)_L = [RCost(e_m)_{σ_i}]_{1 ≤ i ≤ n}.

As a running example, we use the event log shown in Fig. 2. Each line corresponds to a process instance; e.g., the first trace <A10, B20, C30, D10, E15, F30, G10> refers to a process instance where activity A has a cost of 10, activity B a cost of 20, and activity C a cost of 30; similarly for activities D, E, F, and G. This trace starts at A and ends at G, so the cumulative cost is lowest at A and maximal at G. Now we are ready to calculate the cumulative cost of each activity. A is the start activity, so the cumulative cost of A is its own cost, 10. Therefore, we add 10 to state {A}. Next, B directly follows A. Therefore, the cumulative cost at B is the cost of B, 20, plus the cumulative cost of A, 10, which equals 30. Then we add 30 to state {B}. Similarly, we add 60 (= 30 + 30) to {C}, 70 (= 60 + 10) to {D}, 85 to {E}, 115 to {F}, and 125 to {G}. Consider, for example, the second trace <A10, H25, B20, C30, E30, F10, G10>. We add 10 to {A}, 35 to {H}, 55 to {B}, 85 to {C}, 115 to {E}, 125 to {F}, and 135 to {G}. These steps are repeated for all other traces. Consequently, state {A} is annotated with a bag containing four elements, [10, 10, 10, 10]; state {B} with [30, 55, 30, 30]; state {C} with [60, 85, 60, 55]; and so on. Now the cost measurement functions can be applied to calculate cost on demand quickly. For instance, the cost of activity B is determined as follows: total = sum(30, 55, 30, 30) = 145, mean = 145/4 = 36.25, max = max(30, 55, 30, 30) = 55, min = min(30, 55, 30, 30) = 30.

Fig. 1. Cumulative cost shown per activity

Now we calculate the remaining cost. Coming back to the first trace, the maximum cumulative cost of this trace is 125. Using Definition 7, we can quickly determine the remaining cost of each activity in a trace. For example, activity A has cumulative cost 10; therefore, the remaining cost of A is 115 (= 125 − 10). We add 115 to state {A}. Activity B has cumulative cost 30, so the remaining cost of B is 95 (= 125 − 30). We add 95 to state {B}. In the same way, we obtain the remaining costs of C (65), D (55), E (40), F (10), and G (0). Next, these steps are repeated for all other traces, in the same way as when calculating the cumulative cost. Consequently, we obtain the remaining cost shown per activity as in Fig. 2.

Fig. 2. Remaining cost shown per activity
is annotated with a bag containing four elements:[10, 10, 10, events and 8 tasks (e.g. DP01, DP02, DP03, etc.)
10]. State{B} is annotated with a bag containing four elements:
A. Manufacturing process model-enhanced cost
[30, 55, 30, 30]. State{C} is also annotated with a bag
containing four elements: [60, 85, 60, 55], so on. Now cost Fig.3 Illustrates the conceptual idea of the process model-
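A sketch of this state-annotation step, under the same assumed encoding as the previous snippet; only the two traces spelled out above are used, since the other two traces of Fig. 2 are not listed in the text:

```python
from collections import defaultdict
from statistics import mean

def annotate_states(log):
    """Annotate each state (activity) with the bag of cumulative
    costs observed across all traces of the log."""
    bags = defaultdict(list)
    for trace in log:
        running = 0
        for activity, cost in trace:
            running += cost
            bags[activity].append(running)
    return bags

log = [
    [("A", 10), ("B", 20), ("C", 30), ("D", 10), ("E", 15), ("F", 30), ("G", 10)],
    [("A", 10), ("H", 25), ("B", 20), ("C", 30), ("E", 30), ("F", 10), ("G", 10)],
]
bags = annotate_states(log)
b = bags["B"]    # [30, 55] here; [30, 55, 30, 30] with all four traces
print(sum(b), mean(b), max(b), min(b))   # measurement functions on demand
```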
Now we calculate the remaining cost. Coming back to the first trace, its maximum cumulative cost is 125. Using Definition 7, we can quickly obtain the remaining cost of each activity in a trace. For example, activity A has cumulative cost 10, so the remaining cost of A is 115 (= 125 − 10), and we add 115 to state {A}. Activity B has cumulative cost 30, so its remaining cost is 95 (= 125 − 30), and we add 95 to state {B}. In the same way, the remaining costs of C, D, E, F and G are 65, 55, 40, 10 and 0, respectively. These steps are then repeated for all other traces, exactly as for the cumulative cost. Consequently, we obtain the remaining cost per activity shown in Fig. 2.

Fig. 1. Cumulative cost shown per activity

V. TEST CASE STUDY

In this section, a test case is used to illustrate the methodology defined in Section 4. The test case is a jeans manufacturing process for which cost information was defined. For the analysis, an event log was extracted and enhanced with cost information; the cost-enhanced event log included information about activities, resources, detailed costs, etc. The summary information was as follows: 11 cases, 80 events and 8 tasks (e.g. DP01, DP02, DP03).

A. Manufacturing process model-enhanced cost

Fig. 3 illustrates the conceptual idea of the cost-enhanced process model. Each event in the graph is represented as a node, with start event a_s = {DP01} and end event a_e = {DP08}. The graph shows the occurrence count of each node, e.g. N_DP01 = 11 and N_DP04 = 3, and the flow between nodes, e.g. N_{DP01,DP04} = 3. Based on this result, we can obtain the detailed cost and execution time of each activity or resource in the process. By comparing them, we can identify which activities are the cheapest and the most expensive. For instance, the total cost of DP02 is 2,406.12 USD with an execution time of 85 days; the cheapest activity is DP04 at 384.64 USD, and the most expensive is DP05 at 6,716.84 USD.
Fig. 3. Process model enhanced with cost

B. Manufacturing cost prediction

Fig. 4 is a screenshot of the cost prediction for the process; the accumulated cost is denoted "a" and the remaining cost "r" for each activity or resource. The process includes 8 tasks: DP01 is the start task and DP08 is the end task. DP01 had consumed 498,237.68 USD and required 276,692.57 USD more to complete the process. From this result, we can see the highest remaining cost at the start event and zero remaining cost at the end event. Users can also apply measurements such as total, max, min, average and median to suit their demands.

Fig. 4. An example of cost prediction
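The prediction itself then reduces to adding the two annotations; a trivial sketch using the DP01 figures quoted above (the function name is ours):

```python
# Predicted total cost of a running case: cost consumed so far ("a")
# plus the remaining cost annotated at its current activity ("r").
def predicted_total(consumed_usd, remaining_usd):
    return consumed_usd + remaining_usd

print(predicted_total(498_237.68, 276_692.57))   # 774930.25 USD for DP01
```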
VI. CONCLUSION

This paper proposed a method to analyze and predict the cost of a manufacturing process. To realize this method, a cost-enhanced process model and a cost prediction technique were presented. The cost-enhanced process model exposes the detailed cost of each activity in a process, whereas the cost prediction shows not only the cost already consumed but also the cost remaining to complete the process. A case study applying this method to real-life data should be future work.

ACKNOWLEDGMENT

This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2011-0010561) and by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. R0190-15-2012, High Performance Big Data Analytics Platform Performance Acceleration Technologies Development).

Application of TPM in Production Process of


Aluminium Stranded Conductors

Orapadee Joochim
Institute of Field Robotics
King Mongkut’s University of Technology Thonburi
Bangkok, Thailand
orapadee.joo@kmutt.ac.th

Jumnong Meekaew
Aluminium Conductor Department
Bangkok Cable Co., Ltd. (Chachoengsao Factory)
Chachoengsao, Thailand
jumnong.m@bangkokcable.com

Abstract—This paper presents the application of a total productive maintenance (TPM) technique to increase the effectiveness of producing aluminium stranded conductors, in order to reduce machine waste problems and improve production quality. Two pillars of TPM activities, autonomous maintenance and focused improvement, are established. Machines with low overall equipment efficiency (OEE) are selected as prototype machines. An employee training program covering the production work, cleaning activities to discover abnormal conditions, corrective actions during abnormal operations, and a one point lesson (OPL) to educate operators about the production process is implemented. Autonomous maintenance standards are also created. Pareto analysis is used to identify the critical equipment in the factory, and a corrective action team is chosen to improve the operation of the process. The research is evaluated by comparing the OEE of the prototype machines based on production problems occurring before and after the improvement. The results demonstrate that the TPM implementation decreases the downtime from 7,730.80 minutes per month to 4,942.20 minutes per month and the scrap loss from 4,570.00 kilograms per month to 2,236.67 kilograms per month. The OEE increases from 67.21 percent to 72.14 percent.

Keywords—overall equipment efficiency; one point lesson; total productive maintenance

I. INTRODUCTION

Thailand’s electricity consumption continues to increase due to the economic growth of the country. The rapid growth in Thailand’s electricity demand requires a substantial amount of electric wires and cables. Bangkok Cable Company Limited is a leader in manufacturing electric wires and cables in Thailand. Its first factory is located in Samutprakan Province, and its second factory is situated in Chachoengsao Province. The company offers all types of bare and insulated conductors.

Fig. 1 illustrates the electric cable production process. The production process starts by feeding aluminium rods into the melting process to produce aluminium conductors. In the next step, the conductors are extruded, and after the extrusion they are stranded. Then, the conductors are packaged onto coils and reels. If the conductors are required to be insulated or covered, the insulation must be applied before packaging. The three main types of insulating materials used are polyvinyl chloride (PVC), polyethylene (PE) and cross-linked polyethylene (XLPE). Production quality control is strictly executed at every step of production to ensure the highest quality.

However, from the production data collected for aluminium conductors between July and December 2014, the overall equipment efficiency (OEE) was found to be 80.96 percent, while the OEE of the aluminium stranded conductor production was 70.46 percent, the lowest compared with the other stages of production. By increasing OEE, time and waste losses will be decreased, which also makes it possible to expand production in the future. The purpose of this paper is to study the effectiveness and implementation of a TPM program for the aluminium stranded conductor production process of an electric wire and cable manufacturer (the Chachoengsao Factory of Bangkok Cable).

II. STUDY METHODOLOGY

A. Total Productive Maintenance (TPM)

TPM has been described as a manufacturing strategy for increasing productivity and overall equipment effectiveness. The concepts of TPM were introduced and developed by M/S Nippon Denso Co., Ltd of Japan in 1971. TPM has been widely used, and there are a number of case studies of TPM applications in the literature [1].

The principal activities of TPM are often called the pillars or elements of TPM. The core of TPM is categorized into eight pillars or activities for accomplishing manufacturing performance improvements: autonomous maintenance, focused improvement (Kobetsu Kaizen), planned maintenance, quality maintenance, education and training, office TPM, development management, and safety, health and environment [2]. In order to increase the overall effectiveness and reduce time and waste losses during the production of aluminium stranded conductors, autonomous maintenance and focused improvement, the main TPM pillars, are implemented in this paper.

Autonomous maintenance means maintaining one’s own equipment in good condition by oneself. The aim of this pillar is to prepare the operators to take care of small maintenance tasks, thereby freeing the skilled maintenance operators to spend time on higher value-added activities and technical repairs. The operators are therefore responsible for


upkeep of their equipment on a daily basis to prevent it from deteriorating or breaking down [2, 3].

Fig. 1. Electric Cable Production Process

Focused improvement is aimed at decreasing losses in the workplace in order to improve operational efficiency. The goals of the improvement are zero losses (identify and eliminate losses), removal of unsafe conditions, improved effectiveness of all equipment, and reduced operation and maintenance costs. The principle of this pillar is that “a very large number of small improvements is more effective in an organization than a few improvements of large value” [3, 4].

B. Overall Equipment Efficiency (OEE)

OEE is a crucial measure in TPM, used as a quantitative metric of the performance of a productive system and for measuring the success of a TPM implementation program. The principal goal of TPM is to increase the overall equipment efficiency. The three main components of OEE are equipment availability (A), performance efficiency (P) and quality rate (Q). OEE can be calculated as follows [5, 6]:

$$OEE = A \times P \times Q \qquad (1)$$

$$A = \frac{\text{loading time} - \text{downtime}}{\text{loading time}} \times 100\% \qquad (2)$$

$$P = \frac{\text{ideal cycle time} \times \text{processed amount}}{\text{operating time}} \times 100\% \qquad (3)$$

$$Q = \frac{\text{processed amount} - \text{defect amount}}{\text{processed amount}} \times 100\% \qquad (4)$$

C. One Point Lesson (OPL)

The OPL form is a tool that helps communicate TPM training concepts to participants and employees. The form is structured to motivate the trainer to put all substantial activities onto one simple, easy-to-use sheet [6]. OPLs are the lessons learnt by operators after carrying out the autonomous maintenance or focused improvement activities. A small-group-activity leader marks the activities in the OPL report. Enhancing OPL increases the improvements made by the operators; hence, the operators become more skillful [7].

III. CASE STUDY

A. Aluminium Conductor Stranding Process

Fig. 2. Aluminium Conductor Stranding Process

The aluminium conductor stranding process, shown in Fig. 2, begins with passing extruded aluminium conductors into each layer of the stranders. The conductors are stranded according to the standard strand length. Each conductor is pulled by a capstan while the stranders rotate at the same time, so that the conductors are twisted. The aluminium stranded conductors are then sent to storage.

B. Problem Definition

The data from the stranding process were collected over six months (July to December 2014). Table I shows the production data of the aluminium conductor stranding process.

TABLE I. THE PRODUCTION DATA OF THE STRANDING PROCESS

Machine | Production Plans (Tons) | Good Products (Tons) | Wastes (Tons) | Loading Time (Hours) | Downtime (Hours)
ST01 | 288.62 | 276.12 | 12.50 | 2,879.46 | 353.76
ST04 | 635.27 | 617.87 | 17.40 | 2,808.54 | 462.38
ST06 | 558.47 | 533.72 | 24.75 | 3,097.18 | 646.67
ST08 | 196.22 | 192.09 | 4.13 | 2,327.04 | 366.65
ST09 | 561.07 | 535.75 | 25.32 | 2,597.45 | 430.77
ST11 | 273.04 | 254.54 | 18.50 | 2,690.00 | 440.75
ST02 | 205.20 | 204.80 | 0.40 | 2,298.25 | 155.62
ST03 | 1,392.00 | 1,388.60 | 3.40 | 2,976.81 | 306.27
ST05 | 1,560.55 | 1,553.35 | 7.20 | 3,188.47 | 521.00
ST13 | 251.45 | 250.05 | 1.40 | 2,278.43 | 131.50
Total | 5,921.89 | 5,806.89 | 115.00 | 27,141.63 | 3,815.37

Using the data of Table I, a sample OEE calculation for the ST09 strander machine is given below.


$$A = \frac{2{,}597.45 - 430.77}{2{,}597.45} \times 100\% = 83.42\%$$

P is calculated for each type of conductor before averaging over the machine. For example, the ideal cycle time is 0.0011 hours per meter, the length of produced conductors is 2,400 meters, and the operating time is 3.17 hours; for this conductor type:

$$P = \frac{0.0011\ \text{hr/m} \times 2{,}400\ \text{m}}{3.17\ \text{hr}} \times 100\% = 83.28\%$$

After averaging over all conductor types, P equals 76.83%.

$$Q = \frac{561.07 - 25.32}{561.07} \times 100\% = 95.49\%$$

$$OEE = 0.8342 \times 0.7683 \times 0.9549 \times 100\% = 61.20\%$$
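As a compact restatement of equations (1)-(4) and this worked example, the sketch below recomputes the ST09 figures from Table I; the function and variable names are ours, not the paper's:

```python
def availability(loading_h, downtime_h):
    """Equation (2): share of loading time the machine actually ran."""
    return (loading_h - downtime_h) / loading_h

def performance(ideal_cycle_h_per_m, length_m, operating_h):
    """Equation (3), for a single conductor type."""
    return ideal_cycle_h_per_m * length_m / operating_h

def quality(produced_tons, waste_tons):
    """Equation (4): share of output that is good product."""
    return (produced_tons - waste_tons) / produced_tons

# ST09 figures from Table I; P = 0.7683 is the average over all
# conductor types as computed in the text.
a = availability(2_597.45, 430.77)             # 0.8342
q = quality(561.07, 25.32)                     # 0.9549
p_one_type = performance(0.0011, 2_400, 3.17)  # 0.8328 for one type
oee = a * 0.7683 * q                           # equation (1)
print(f"A={a:.2%}  Q={q:.2%}  P(one type)={p_one_type:.2%}  OEE={oee:.2%}")
# A=83.42%  Q=95.49%  P(one type)=83.28%  OEE=61.20%
```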


The OEE values of the strander machines are illustrated in Fig. 3. The strander machines fall into two groups: (1) machines that load the extruded conductors onto steel wheels (ST01, ST04, ST06, ST08, ST09 and ST11), and (2) machines that load the extruded conductors into baskets (ST02, ST03, ST05 and ST13). The ST03 and ST09 machines, which have the lowest OEE within their respective groups, are chosen as the prototype machines. Data on the downtime problems of ST03 (excluding preparation time) are collected. As shown in Figs. 4 and 6, the problems are ranked by downtime and presented as Pareto graphs for analysis. Quality-loss data are collected in the same way; the problems are ranked by the amount of waste and shown as Pareto graphs in Figs. 5 and 7.

Fig. 3. OEE of Strander Machines

Fig. 4. Pareto of the Downtime Problems of ST03 before the Improvement

Fig. 5. Pareto of the Quality Problems of ST03 before the Improvement

C. Implementation of Autonomous Maintenance

Autonomous maintenance is implemented by training the operators of strander departments 1 and 2 in how to maintain the strander machines themselves (self-maintenance). The operators are divided into two teams for the self-maintenance of the ST03 and ST09 machines. The procedure begins by cleaning the machines, finding abnormalities and labeling them to mark each abnormality. The equipment used for the primary cleaning, protection and other tasks is prepared, and the joint maintenance is scheduled for every Wednesday.

The cleaning and the search for abnormalities are performed simultaneously. Labels are attached to the abnormal parts, and the discovery dates and priorities are specified. Each abnormality and its corrective solution are defined, listed and recorded, and the abnormality is then corrected. The before-and-after maintenance data covering the problems, solutions and benefits of the maintenance are listed. The lessons learned about finding problems and solutions are shared with the related operators using OPL. The OPL training is classified by topic: basic knowledge, improvement work or problem. The areas for lubrication are defined and a lubrication standard is implemented. Self-inspection of the machines is performed and a self-inspection standard is created. The information about the machine components, the lubrication standard, the inspection of equipment operation and the observation method is used to create the inspection sheet for daily maintenance. Table II shows the amount of maintenance work.


TABLE II. AUTONOMOUS MAINTENANCE WORKS

Procedure | Topic | Amount | Solved | Finished Operations (%)
1 | Correction of Abnormality | 60 | 53 | 88.33
2 | Training Using OPL | 25 | 20 | 80.00
3 | Lubrication Standard | 8 | 8 | 100.00
4 | Self-Inspection Standard | 4 | 3 | 75.00
Total |  | 97 | 84 | 86.60

D. Implementation of Focused Improvement

From Figs. 4 to 7, the large majority of the problems (80 percent) are selected for improvement according to the Pareto analysis. Workers related to the problems are designated, consisting of representatives from the engineering, quality assurance and planning departments. A meeting of the related workers is set up to solve the problems, using the 5W1H principle to inspect the data on the problem characteristics.
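A rough sketch of this selection rule; the cause-level downtime figures behind Figs. 4 to 7 are not tabulated in the paper, so the numbers below are purely hypothetical:

```python
def pareto_select(causes, threshold=0.80):
    """Return the 'vital few' causes covering ~80% of total downtime.

    causes: dict mapping cause -> downtime (e.g. minutes per month).
    """
    total = sum(causes.values())
    selected, cum = [], 0.0
    for cause, dt in sorted(causes.items(), key=lambda kv: -kv[1]):
        selected.append(cause)
        cum += dt / total
        if cum >= threshold:
            break
    return selected

# Hypothetical downtime minutes per cause for one machine.
downtime = {"bows not rotating": 1400, "conductor tearing": 900,
            "coupling cage wear": 700, "other": 495}
print(pareto_select(downtime))
```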

The improvement is then carried out according to the defined plan; as a result, the downtimes and wastes are reduced. Focused improvement starts with solving the problem of bows that cannot rotate: the take-up system does not work and the slip rings suffer electrical faults, so the machines are shut down for repair, and the bows and slip rings are replaced, which solves that downtime problem. During the stranding process the conductors tear; the cause of tearing is investigated, the factors of tearing and the inspection standard are defined, and the inspection frequency and responsible persons are specified, after which the losses decrease. The problems of Coupling Cages 12 and 18 occur due to eroded shafts, so the shafts and couplings are replaced. The data on the problems after the improvement are collected between January and March 2015; see Figs. 8 to 11.

Fig. 6. Pareto of the Downtime Problems of ST09 before the Improvement

Fig. 7. Pareto of the Quality Problems of ST09 before the Improvement

IV. CONCLUSION

This paper finds that TPM successfully delivers the improvement: the wastes are reduced and the downtimes are decreased. Additionally, the aluminium stranded conductors have higher quality than before the improvement, resulting in higher overall efficiency of the machines. These machines can serve as prototypes for the operation of the company's other machines. The results before and after the improvement can be summarized as follows.

1) ST03 Prototype Machine: The downtime is reduced from 3,495.60 minutes per month to 1,667.20 minutes per month. The waste is decreased from 413.33 kilograms per month to 20 kilograms per month. Before the improvement, the OEE is 70.97 percent; after the improvement, it is 76.89 percent.

2) ST09 Prototype Machine: The downtime is reduced from 4,235.20 minutes per month to 3,275.00 minutes per month. The waste is decreased from 4,156.67 kilograms per month to 2,216.67 kilograms per month. Before the improvement, the OEE is 63.44 percent; after the improvement, it is 67.38 percent.

3) Total of ST03 and ST09 Prototype Machines: The downtime is reduced from 7,730.80 minutes per month to 4,942.20 minutes per month. The waste is decreased from 4,570.00 kilograms per month to 2,236.67 kilograms per month. Before the improvement, the OEE is 67.21 percent; after the improvement, it is 72.14 percent.

Fig. 8. Pareto of the Downtime Problems of ST03 after the Improvement

Fig. 11. Pareto of the Quality Problems of ST09 after the Improvement

REFERENCES
[1] Chetan S. Sethia, Prof. P.N. Shende, and Swapnil S. Dange, “Total
Productive Maintenance-A Systematic Review,” International Journal for
Scientific Research and Development, vol. 2, issue 8, pp. 124-127, 2014.
[2] Andrea Sütőová, Štefan Markulik, and Marek Šolc, “Kobetsu Kaizen – its
value and application”, in Electronics International Interdisciplinary
Conference., 2012, pp.108-110.
[3] R. Gulati, Maintenance and Reliability Best Practice, 2nd ed. New York:
Industrial Press Inc., 2013.
[4] Chetan S Sethia, Prof. P.N. Shende, and Swapnil S Dange, “A Case Study
on Total Productive Maintenance in Rolling Mill,” Journal of Emerging
Technologies and Innovative Research (JETIR), vol. 1, issue 5, pp. 283-
289, 2014.
[5] Harsha G. Hegde, N.S. Mahesh, and Kishan Doss, “Overall Equipment
Effectiveness Improvement by TPM and 5S Techniques in a CNC
Machine Shop,” SASTECH, vol. 8, issue 2, 2009, pp. 25-32.

Fig. 9. Pareto of the Quality Problems of ST03 after the Improvement

[6] Ravikant V. Paropate, and Rajeshkumar U. Sambhe, “The
Implementation and Evaluation of Total Productive Maintenance – A Case
Study of a Midsized Indian Enterprise,” International Journal of Application
or Innovation in Engineering & Management (IJAIEM), vol. 2, issue 10,
2013, pp. 120-125.
[7] Sia Soon Siong, and Shamsuddin Ahmed, “TPM Implementation Can
Promote Development of TQM Culture: Experience from a Case Study in
a Malaysian Manufacturing Plant,” in Proceedings of the International
Conference on Mechanical Engineering, Dhaka, Bangladesh, 2007.

Fig. 10. Pareto of the Downtime Problems of ST09 after the Improvement


‘Creating Awareness on Light Pollution’ (CALP)


Project: Essential requirement for school-university
collaboration

NNM Shariff, MR Osman, MS Faid
Academy of Contemporary Islamic Studies
Universiti Teknologi MARA
Shah Alam, Malaysia
nur.nafhatun.ms@gmail.com

ZS Hamidi, SNU Sabri, NH Zainol, MO Ali, Nurulhazwani Husien
Faculty of Applied Sciences
Universiti Teknologi MARA
Shah Alam, Malaysia
zsharizat@yahoo.co.uk

Abstract—Managing a project involves a process of synchronizing two different organizations with two different cultures. In doing so, mutual agreement on certain things and levels is a must for the project to run smoothly. Therefore, the objective of this paper is to identify the essential requirements for school-university collaboration, specifically for the ‘Creating Awareness on Light Pollution’ (CALP) Project. With the aim of educating society about light pollution, the University has taken an advance step together with selected schools to monitor light pollution by utilizing the Sky Quality Meter. In total, this project has nine stations in peninsular Malaysia. It is hoped that by outlining the essential requirements, this project can be replicated in other countries.

Keywords— project management; university; school; collaboration; light pollution; astronomy

I. INTRODUCTION

According to the Project Management Institute [1], a project has two important characteristics: it is temporary and unique. Temporary means it has a specific beginning and ending time, giving it its own particular resources and scope. Meanwhile, a project is unique because it has a specific set of operations for specific objectives, not a routine operation. Therefore, project management is the effort to meet the project requirements by applying knowledge, skills, tools and techniques to project activities. Project management has five (5) processes, namely: 1) initiating; 2) planning; 3) executing; 4) monitoring and controlling; and 5) closing.

Establishing school-university collaboration is not an easy task, but it is doable. Among the important elements of school-university collaborations are: 1) shared goals with a short-term focus; 2) common planning, in other words an emphasis on the existence or persistence of the partnership; and 3) mutual respect with the expectation of participating in an adaptable and dynamic inter-organizational collaboration [2-4].

In Malaysia, one of the initiatives by the Ministry of Education is to provide research seed grants, namely the Knowledge Transfer Program (KTP). There are two types: 1) collaboration with industry; and 2) collaboration with the community. Other than that, collaboration with a national agency such as ANGKASA (National Space Agency) or MOSTI (Ministry of Science, Technology and Innovation) is also possible.

Lieberman [5] depicted the collaboration as in Figure 1: it is the effort of two organizations moving towards the same goals, in the same direction, and sharing the same values. This inter-organizational collaboration effectively influences the education community. The government of Malaysia has targeted students’ enrolment in higher learning institutions at a ratio of 60:40 (science/technical : literature). Unfortunately, to this point, it has never reached the targeted 60 percent [6]. Therefore, we hope that university involvement, through a temporary and unique project, is able to enhance student interest in science.

Figure 1. Caricature depicting school-university collaboration (Lieberman, 1992)

The aim of this paper is to highlight the essential requirements for school-university collaboration in the ‘Creating Awareness on Light Pollution’ (CALP) Project. Section 2 briefly elaborates on light pollution and its importance. Section 3 describes the details of the CALP Project. Section 4 touches on four important elements in making the project successful, i.e. 1) project management; 2) equipment; 3) observation; and 4) software. The conclusion forms the last section.


II. THE IMPORTANCE OF MONITORING LIGHT POLLUTION

Light pollution can be monitored in two common ways: first, from satellite measurements [7]; second, from ground measurements [8-10]. Monitoring light pollution is important because light pollution has impacts on nocturnal creatures, on the night sky heritage and especially on human health [6]. The United Nations [11] emphasized the importance of increasing education and enhancing global awareness of the science and technologies of light through their 2015 International Year of Light resolution. Awareness of light pollution and astronomy can be built through: 1) a proper curriculum in formal education; and 2) informal education such as the CALP Project [12].

III. ‘CREATING AWARENESS ON LIGHT POLLUTION’ PROJECT

The CALP Project is a pilot project between Universiti Teknologi MARA (comprising two faculties, the Academy of Contemporary Islamic Studies and the Faculty of Applied Sciences) and selected secondary schools in peninsular Malaysia. Up to this point, there are eight participating schools plus one fixed station. In order to get a fair distribution of light pollution data, the schools were roughly selected based on cardinal points, i.e. east, west, north and south of peninsular Malaysia (Figure 2).

Figure 2. CALP stations

There are two goals of the project: 1) creating a chance for secondary students to experience hands-on measurement of light pollution, i.e. the brightness of the sky, which eventually 2) encourages awareness of light pollution among secondary students [6]. We can say that this project is in line with the national aspiration of promoting science among school students [13].

For that purpose, we chose to utilize the Sky Quality Meter (SQM). This meter measures the brightness of the sky in magnitudes per square arcsecond [14]. The SQM uses a light-to-frequency silicon photodiode, the TSL237S¹, as its sensor. The sensor is covered with a HOYA CM-500 filter to block near-infrared light. It is calibrated using a NIST light meter, and the absolute precision is ±10% (±0.10 mag/arcsec²) [15].

Figure 3. Sky Quality Meter (SQM), used to measure the brightness of the sky

IV. THE ESSENTIAL REQUIREMENT OF CALP PROJECT

Several essential requirements were identified and assessed in order to ensure the project is successful. They can be divided into four main interrelated categories: 1) project management; 2) equipment; 3) observation; and 4) software. Figure 4 illustrates all four categories.

The project management element covers the collaborators (the University and the schools), cost, schedule/synchronization and documentation/reporting. Ninety percent of the cost is borne by the University through a research grant, and the other ten percent comes from the schools. Because of that, we have to have a proper agreement, especially where finances and data ownership are involved. Documentation from the beginning is crucial for reporting purposes; in this case, we have to report to the grantor.

The most vital equipment is the SQM; without it, nothing can be achieved. Three types of SQM are used, depending on the school’s facilities (internet, PC, etc.). Most schools were supplied with the basic SQM, which is better for hands-on purposes, as in Figure 3. The other items listed in Figure 4 are accessories; it is good if the schools have those, for instance a compass to learn about cardinal points. The project also tries to advance secondary science education; further explanation can be found in Shariff, Hamidi, Musa, Osman and Faid [6].

¹ The TSL237 is temperature compensated for the ultraviolet-to-visible range of 320 nm to 700 nm and responds over the light range of 320 nm to 1050 nm. Although it is very close to the human eye response, the designer adds the Hoya CM-500 filter to cut off the entire infrared part of the spectrum. From https://ams.com/chi/content/.../TSL237_Datasheet_EN_v1.pdf, accessed on 22 Dec 2015. The TSL237 is manufactured by TAOS Inc. (Texas Advanced Optoelectronic Solutions), now known as ams AG (Austria).


Figure 4. Overview of the CALP Project

As for observation, simple testing is done before the schools embark on the observation; this element is taught during training, through a special module. A one-page checklist is provided for the teachers covering pre-observation, the observation itself and post-observation, to make sure that nothing is missed and the data remain valid. There are two options for the SQM inclination angle, 45 degrees or 90 degrees; to keep the whole system synchronized, 90 degrees was chosen, and it is also easier to point the SQM at 90 degrees than at 45 degrees.

The observation shift depends on local weather. If the observation is done during weekdays, the schools normally choose 9 pm to 12 am (some earlier than that, once astronomical twilight is gone). During weekends, the observation begins at 12 am and runs until 6 am, depending on their preferences. As described in the previous section, we chose the schools based on cardinal points. Other criteria for site selection are: 1) high density/low density; 2) urban/sub-urban/rural; 3) willingness of the teacher (voluntary or non-voluntary); 4) willingness of the students; and 5) basic facilities [6].

Software is supplied by the University. To collect data, Knightware or SQM Reader is used; this software can be paired with Stellarium². Stellarium is free, open-source 3D planetarium software, so it can also be a medium for students to learn more about astronomy. To analyse the data, we import the data from SQM Reader/Knightware into Microsoft Excel, and ArcGIS/MatLab is used to map the light pollution.

² www.stellarium.org
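As a small illustration of the analysis step, the sketch below summarizes one night of readings before they are mapped; the numbers are hypothetical, since no actual SQM Reader/Knightware export is reproduced in the paper:

```python
from statistics import mean

# Hypothetical night of SQM readings in mag/arcsec^2 (in practice these
# come from the SQM Reader/Knightware export). Higher values mean a
# darker, less light-polluted sky.
readings = [18.92, 19.05, 19.11, 18.87, 19.20, 19.18]

brightest, average, darkest = min(readings), mean(readings), max(readings)
print(f"brightest {brightest:.2f}, mean {average:.2f}, "
      f"darkest {darkest:.2f} mag/arcsec2")
```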
Among all these principles, the central requirements are ensuring a clear sense of shared ownership and maintaining ‘open communication’ between all partners. A set of documents was drafted by the project coordinator, then discussed, modified and agreed upon by the teacher research coordinators and the principal of each school. Among the things discussed in the documents are planning, the MoU, installation, configuration, operation, maintenance and data analysis [6].

Since the selection of the schools is bottom-up – voluntary schools rather than schools mandated by the Ministry of Education – this is the measure we take to avoid a climate of distrust [4, 5]. Thus, we try our best to share the same values (listening to each other), because the two organizations have two different cultures that should be celebrated and respected. Understanding each other is vital, as each side has its own duties and tasks. Lieberman [5] stated that in order to forge a culture of collaboration several aspects must be clear, i.e.: 1) creating a vision; 2) structuring the activities to grow innovative; 3) authentic opportunities to fail; 4) different groups, same vision; and, last but not least, 5) numerous opportunities for leadership.

V. CONCLUSION

To conclude, this project has the potential to be expanded in terms of duration and school participation. Each and every project that involves two or more parties/organizations will have its own ebb and flow. Being clear about the essential requirements is important, as it acts as a first step towards mutual understanding, and this of course depends much on communication. These essential requirements serve as a basic guide that is open to any suggestion for improvement.

ACKNOWLEDGMENT

The authors wish to thank the referee for comments and suggestions to improve the paper. This work was partially supported by RAGS/1/2014/SSI03UITM//2, FRGS/2/2014/ST02/UITM/02/1 and 600-RMI/RACE 16/6/2(4/2014). Special thanks to the selected schools (KPM) and all the volunteers.

REFERENCES
[1] Project Management Institute, What is Project Management?, Project Management Institute, Pennsylvania, 2015.
[2] M.E. Kersh, N.B. Masztal, An analysis of studies of collaboration between universities and K-12 schools, The Educational Forum, 62 (1998) 218-225.
[3] R. Thorkildsen, M.R.S. Stein, Fundamental characteristics of successful university-school partnerships, The School Community Journal, 6 (1996) 79-92.
[4] A.C. Borthwick, T. Stirling, A.D. Nauman, D.L. Cook, Achieving Successful School-University Collaboration, Urban Education, 38 (2003) 330-371.
[5] A. Lieberman, School/University Collaboration: A View from the Inside, The Phi Delta Kappan, 74 (1992) 147-152.
[6] N.N.M. Shariff, Z.S. Hamidi, A.H. Musa, M.R. Osman, M.S. Faid, ‘Creating Awareness on Light Pollution’ (CALP) Project as a Platform in Advancing Secondary Science Education, in: International Conference of Education, Research and Innovation, International Academy of Technology, Education and Development, Seville, Spain, 2015.
[7] P. Cinzano, F. Falchi, C.D. Elvidge, Global monitoring of light pollution and night sky brightness from satellite measurements, (2003).
[8] C.S.J. Pun, C.W. So, Night-sky brightness monitoring in Hong Kong - a city-wide light pollution assessment, Environmental Monitoring and Assessment, 184 (2012) 2537-2557.
[9] C.S.J. Pun, C.W. So, W.Y. Leung, C.F. Wong,
Contributions of artificial lighting sources on light pollution in
Hong Kong measured through a night sky brightness
monitoring network, Journal of Quantitative Spectroscopy &
Radiative Transfer, 139 (2013) 90-108.
[10] Z.S. Hamidi, Z.Z. Abidin, Z.A. Ibrahim, N.N.M. Shariff,
Effect of Light Pollution on Night Sky Limiting Magnitude and
Sky Quality in Selected Areas in Malaysia, in: 2011 3rd
International Symposium & Exhibition in Sustainable Energy
& Environment, IEEE, Melaka, Malaysia, 2011, pp. 233-235.
[11] United Nations, International Year of Light and Light-
based Technologies, 2015, in, United Nations, New York,
2014.
[12] J.R. Percy, Light Pollution: Education of Students,
Teachers and the Public, Preserving the Astronomical Sky:
IAU Symposium, 196 (2001).
[13] Government of Malaysia, Laporan Strategi Mencapai
Dasar 60: 40 Aliran Sains/Teknikal: Sastera, in, Government of
Malaysia, Kuala Lumpur, 2013.
[14] N.N.M. Shariff, A. Muhammad, M.Z. Zainuddin, Z.S.
Hamid, The Application of Sky Quality Meter at Twilight for
Islamic Prayer Time, International Journal of Applied Physics
and Mathematics, 2 (2012) 143-145.
[15] N.N.M. Shariff, Sky Brightness at Twilight: Detectors
Comparison between Human Eyes and Electronic Device For
Isha' and Subh from Islamic and Astronomical Considerations,
in: Science and Technology Studies, University of Malaya,
Kuala Lumpur, 2008.


Signal Detection of the Solar Radio Burst Type III


Based on the CALLISTO System Project
Management
Z.S. Hamidi, NH Zainol, MO Ali, SNU Sabri, Nurulhazwani Husien
School of Physics and Material Sciences, Faculty of Sciences
Universiti Teknologi MARA
40450, Shah Alam, Malaysia
zetysh@salam.uitm.edu.my

NNM Shariff, MS Faid
Academy of Contemporary Islamic Studies
Universiti Teknologi MARA
Shah Alam, Malaysia
nur.nafhatun.ms@gmail.com

C. Monstein
Institute of Astronomy, Wolfgang-Pauli-Strasse 27, Building HIT, Floor J, CH-8093 Zurich, Switzerland

Abstract—The E-CALLISTO (extended Compound Astronomical Low-cost Low-frequency Instrument for Spectroscopy and Transportable Observatory) network is a worldwide system for observing the Sun’s activity in the radio region. At present, more than 80 instruments have been installed at more than 43 locations, with users from more than 113 countries in the e-CALLISTO network. In this paper, we make use of e-CALLISTO data that show a sign of solar activity. On 9th May, the Solar Radio Burst Type III (SRBT III) happened twice. The first detection of SRBT III occurred within less than 1 minute, between 05:31 UT and 05:32 UT, as illustrated in Figure 3. The second SRBT III seems to have occurred within 05:41 UT to 05:42 UT, for approximately 1 minute. The coronal mass ejection that was ejected from active region AR2339 was detected at 05:42 UT; the region has the ‘beta-gamma’ magnetic field that harbors energy for strong solar flares. From the results, the point we wish to make here is that at least some of these type III bursts with low starting frequencies are consistent with front-side flares, which indicates to us that the low starting frequencies observed for many of these bursts are intrinsic to the type III emissions and do not result from occulting of the high-frequency emissions by any plasma structures.

Keywords— Solar burst; CALLISTO system; project management

I. INTRODUCTION

Extensive experimental and theoretical work has succeeded in elucidating many observational characteristics of radio bursts and in understanding their physical nature. Solar radio observations have been carried out since 1944, when J.S. Hey discovered that the Sun emits radio waves [1]. This method allows us to study energy release, plasma heating, particle acceleration and particle transport in solar magnetized plasma [2, 3]. However, the dynamics of the solar corona is still not understood, and new phenomena are unveiled every year. This region covers from 15 - 30 GHz [4]. Thus, the radio spectrum is limited on the low-frequency side by the ionosphere and in the high-frequency region by the troposphere [5]. E-CALLISTO is an acronym standing for extended Compound Astronomical Low-cost Low-frequency Instrument for Spectroscopy and Transportable Observatory, a worldwide network of frequency-agile solar spectrometers [6]. The network’s receiving instrument, the CALLISTO spectrometer, is named after one of Jupiter’s larger moons [7].

Figure 1. Map of the current distribution of CALLISTO instruments in May 2015 (credit: http://www.e-callisto.org/)


The goal of the e-CALLISTO network is to provide a worldwide system for observing the Sun’s activity in the radio region [8]. In particular, solar radio bursts due to solar flare and coronal mass ejection phenomena can be detected. In this project, observation can be carried out 24 hours per day [9], based on the observation periods of the different sites all over the world. With higher temporal resolution, radio apparatus with a complex, interrelated analysis can offer high-quality data on small- and short-scale physical processes as well as on the main properties of the region involved [10, 11].

Figure 2. Simulation of solar burst data from different sites in the CALLISTO network (credit: http://www.e-callisto.org/)

A very important feature of the CALLISTO spectrometer is that it detects the intensity of electromagnetic radiation at radio frequencies between 45 and 870 MHz. It consists of three main components: the receiver, a linearly polarized antenna, and control/logging software. With the development of the technology, a more advanced system was implemented, including a tower-mounted preamplifier or low-noise amplifier, additional antennas, and a focal plane unit (FPU) with antenna polarization switching and noise calibration capabilities. In assessing the existing and planned space weather observing systems, the signal from the feed is fed into the receivers. Subsequently, the signal is converted to a first intermediate frequency of 37.7 MHz by two local oscillators. For filtering and amplification purposes, the signal is down-converted to 10.7 MHz, where it is detected by a logarithmic device and low-pass filtered. The logarithmic domain spans more than 45 dB.

Data acquisition for both receivers and the interface to the PC are on a separate board. Measurements are made in a two-step process: in the first step a receiver is tuned to a frequency, and in the second step the signal is measured. However, the receivers can also be configured to measure the same polarization and to alternate: while one is measuring, the other is tuned to a new frequency.
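A toy model of this two-step, alternating scheme, under our own naming; the actual CALLISTO firmware is not shown in the paper:

```python
def sweep(freqs_mhz, tune, measure):
    """Two-step scheme with two alternating receivers: while one receiver
    is being tuned to the next frequency, the previously tuned one is read."""
    readings = {}
    prev = None                            # (receiver, frequency) tuned last step
    for i, f in enumerate(freqs_mhz):
        rx = i % 2                         # receivers 0 and 1 alternate
        tune(rx, f)                        # step 1: tune this receiver
        if prev is not None:
            readings[prev[1]] = measure(prev[0])   # step 2: measure the other
        prev = (rx, f)
    if prev is not None:
        readings[prev[1]] = measure(prev[0])       # read the final channel
    return readings

# Demo with stub hardware callbacks.
spectrum = sweep([45.0, 212.5, 540.0, 870.0],
                 tune=lambda rx, f: None,
                 measure=lambda rx: -45.0)
print(spectrum)
```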
II. RESULTS AND DISCUSSION

We now make use of the e-CALLISTO data that show a sign of solar activity. Flares were reported, and the solar bursts drift from 80 MHz to 20 MHz; however, their intensities and burst durations differ.

On 9th May, the Solar Radio Burst Type III (SRBT III) happened twice. The first detection of SRBT III occurred within less than 1 minute, between 05:31 UT and 05:32 UT, as illustrated in Figure 3. The second SRBT III seems to have occurred within 05:41 UT to 05:42 UT, for approximately 1 minute. It was found that the second SRBT III has a longer duration, while the first radio burst has a faster drift rate than the second; the intensity of the second SRB is higher than that of the first SRBT III. One might attempt to explain this through the solar parameters; Table I shows the condition of the Sun on 9th May 2015.

TABLE I. THE CURRENT CONDITION OF THE SUN ON 9TH OF MAY 2015

Parameter | Value
Solar wind speed | 367.5 km/s
Solar wind density | 4.1 protons/cm³
Interplanetary magnetic field | 7.5 nT

Based on the X-ray flux data in Figures 3 and 4, several flares classified as C1.0, C2.0, C3.0 and C5.0 were suddenly ejected from active region AR2339. During this eruption, a magnetic field filament connected to AR2339 also erupted. The two filaments combined to produce a bright coronal mass ejection. This event can be considered active solar activity.


Figure 3. A signal detection of solar flares (credit: NOAA website)

Figure 4. Solar radio burst type III on 9th May 2015 (http://www.e-callisto.org/)

Figure 5. Active region 12339 (credit: Solar Monitor)

Figure 6. Image of the CMEs (credit: CACTUS)

The coronal mass ejection ejected from active region AR2339 was detected at 05:42 UT, which falls within the duration of the second SRBT III. It is also observed that AR2339 has the ‘beta-gamma’ magnetic field that harbors energy for strong solar flares.

The data from CACTUS in Figure 6 show that the height of the CME is quite high, reaching 6.8 Rs. Based on the observation, the CME occurred 2 hours before the SRB detection. The CME is believed to produce a geomagnetic storm; it will affect the magnetic field of the Earth and exert pressure on the Earth when it hits.


Figure 7. Velocity of CMEs on 9th of May 2015 (credit: CACTUS)

Fig. 7 shows the graph of CME velocity versus angle from the north, i.e. the distribution of CME velocities on 9th May 2015. It can be observed that 52 CMEs occurred on that day, with velocities ranging from 100 m s⁻¹ to 2000 m s⁻¹. The average velocity of the CMEs is 2000 m s⁻¹; this speed classifies the CMEs of 9th May as ‘impulsive CMEs’.

Based on the calculations that have been made, it was found that the electron density of the burst is 3.554 × 10¹² e m⁻³ and that the drift rates of the two solar radio bursts are 2 MHz s⁻¹ and 0.6 MHz s⁻¹. The CME originates at a high temperature of 28,019 K. Besides, the photon energies range from 8.272 × 10⁻⁸ eV to 3.309 × 10⁻⁷ eV. The high temperature of the CME confirms that it is an impulsive CME. We propose that the low-frequency termination can be intrinsically explained by the maser instability model.
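Two of these figures can be sanity-checked directly from E = hf and the quoted 80-20 MHz sweep; the 30-second duration below is only an assumption, chosen because it reproduces the quoted 2 MHz/s drift over a 60 MHz span:

```python
H_EV_S = 4.1357e-15                      # Planck constant in eV*s

def photon_energy_ev(freq_hz):
    """E = h*f for a radio photon."""
    return H_EV_S * freq_hz

def drift_rate_mhz_per_s(f_start_mhz, f_end_mhz, duration_s):
    """Mean frequency drift of a burst sweeping from f_start to f_end."""
    return (f_start_mhz - f_end_mhz) / duration_s

print(photon_energy_ev(80e6))            # ~3.309e-07 eV, the upper value quoted
print(photon_energy_ev(20e6))            # ~8.271e-08 eV, the lower value quoted
print(drift_rate_mhz_per_s(80, 20, 30))  # 2.0 MHz/s for an assumed 30 s sweep
```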
III. CONCLUSIONS

In general, this project encompasses many science, engineering and mathematical disciplines. The e-CALLISTO system has already proven to be a valuable new tool for monitoring solar activity and for space weather research. From the results, the point we wish to make here is that at least some of these type III bursts with low starting frequencies are consistent with front-side flares, which indicates to us that the low starting frequencies observed for many of these bursts are intrinsic to the type III emissions and do not result from occulting of the high-frequency emissions by any plasma structures.

ACKNOWLEDGMENT

We are grateful to the CALLISTO network; STEREO, LASCO, SDO/AIA, NOAA and SWPC make their data available online. This work was partially supported by the 600-RMI/FRGS 5/3 (135/2014), 600-RMI/RACE 16/6/2 (4/2014) and 600-RMI/RAGS 5/3 (121/2014) UiTM grants, Universiti Teknologi MARA and Kementerian Pendidikan Malaysia. Special thanks to the National Space Agency and the National Space Centre for giving us a site to set up this project and for supporting it. Solar burst monitoring is a cooperation project between the Institute of Astronomy, ETH Zurich, FHNW Windisch, Switzerland, MARA University of Technology and University of Malaya. This research has made use of the National Space Centre Facility and is part of an initiative of the International Space Weather Initiative (ISWI) program.

REFERENCES
[1] Hey, J.S., S.J. Parsons, and J.W. Phillips, Some characteristics of solar radio emission. Monthly Notices of the Royal Astronomical Society, 1948. 108: p. 354-371.
[2] Gary, D.E., Keller, C.U., Solar and Space Weather Radiophysics – Current Status and Future Developments. 2004, Dordrecht: Kluwer.
[3] Aschwanden, M.J., Particle Acceleration and Kinematics in Solar Flares: A Synthesis of Recent Observations and Theoretical Concepts. Space Science Reviews, 2002. 101(2).
[4] Kundu, M.R., Solar Radio Astronomy. 1965: John Wiley.
[5] Gopalswamy, N., Radio-rich Solar Eruptive Events. Geophys. Res. Lett., 2000. 27(1427).
[6] Benz, A.O., et al., A broadband spectrometer for decimetric and microwave radio bursts: first results. Sol. Phys., 1991. 133: p. 385-393.
[7] Benz, A.O., et al., The background corona near solar minimum. Solar Phys., 2004. 55: p. 121-134.
[8] Benz, A.O., et al., A World-Wide Net of Solar Radio Spectrometers: e-CALLISTO. Earth Moon and Planets, 2009. 104: p. 277-285.
[9] Hamidi, Z., et al., Coverage of Solar Radio Spectrum in Malaysia and Spectral Overview of Radio Frequency Interference (RFI) by Using CALLISTO Spectrometer from 1 MHz to 900 MHz. Middle-East Journal of Scientific Research, 2012. 12(6): p. 893-898.
[10] Robinson, P.A., Sol. Phys., 1991. 134.
[11] Aschwanden, M.J., Physics of the Solar Corona, an Introduction. 2004, Berlin: Springer.


A risk management approach for collaborative NPD


project

Ioana Filipas Deniaud
Strasbourg University, BETA
Strasbourg, France
deniaud@unistra.fr

François Marmier
Toulouse Univ., Mines Albi & Strasbourg Univ., BETA
Albi, France
marmier@mines-albi.fr

Didier Gourc
Toulouse University, Mines Albi
Albi, France
gourc@mines-albi.fr

Sophie Bougaret
Pharmaceutical R&D Management Consulting Company - Manageos
Francarville, France
sophie.bougaret@wanadoo.fr

Abstract—To be competitive, a new product should present an innovative advantage while being achievable. To ensure the success of a New Product Development (NPD) project, specific skills and resources are required. Most often, if a product is complex, a single company does not have all the competences to provide the complete product. In this case, alliances must be formed to create a collaborative network that works on the new project. Depending on the selected partners, different innovation levels can be reached; this decision also influences the uncertainty and the risk of the project. It is difficult to assess the risk level of an NPD project, especially when the collaborative network is new. In this paper we address the topic of alliance making in NPD and present a reading frame for such projects that takes into account the type of innovative project, the possible networks and the risks that are inherent to them. The originality of the paper is to consider correlated decisions focused on the collaborative network and on risk management in an innovation context. The objective of the paper is to establish the process making the link between network design, NPD project planning and risk management, in order to have an overview of the repercussions of the network design on the NPD project.

Keywords—new product development; project management; collaborative network; risk management; innovation.

I. INTRODUCTION

The future success of a company often requires that the firm regularly offer new products. In order to reach fundamentally better products, lower costs and basically new product features, technological innovations are needed [1]. Higher requirements for the products lead to a need for constant innovation to sustain the company’s competitive advantage. At the same time, the complexity of the technology needed to innovate has increased, and NPD costs are exploding [2]. Firms with diversified networks usually hold major advantages through access to a rich knowledge base. Ritala et al. [3] show that external knowledge sharing increases firm-level innovation performance. External knowledge sharing appears in collaborative networks and forms an open innovation system [4], with its advantages and difficulties.

Nightingale [5] points out that NPD simultaneously implies defining the product to be created (What do we do?), defining the project to implement it (How do we do it?) and the actors in charge of the different activities (Who does what?). The NPD project contains a set of activities that must be carried out to meet the design objectives. The innovation level of the future product involves creative capital [6]. NPD is the result of a collaborative process between resources belonging to different functions of the same company (internal development) or to different companies working in a network [7]. Both the innovation level and the collaborative network are sources of risk.

To identify the risks and propose a risk management plan, the objective of this work is to propose a reading frame for collaborative NPD projects. The originality of this work is to consider the type of project, the type of collaboration and the type of risk in order to help choose the risk treatment strategies.

The paper is structured as follows: first, we present a short literature review on NPD projects and their characteristics; second, we propose a process and a reading grid helping to define a risk management plan; and finally, we present our conclusions.

II. NEW PRODUCT DEVELOPMENT PROJECT

An NPD project is characterized by an innovation level, most often by skilled resources distributed between partners, and by a high level of risk. In this section, we define these particular notions.

A. The innovation and its effect on the project

An NPD collaborative project has four main steps: requirements specification (including idea genesis), product design, product implementation and commercialization. These steps are formalized as a sequential process (stage-gate process) or as a concurrent, iterative one (Vee cycle) [8]. Several authors distinguish three NPD project types according to the innovation level. In Gero’s classification [9], the design output can be creative, innovative or routine. In Evbuomwan et al.’s [10] classification, the level of originality ranges from routine design, through redesign, to non-routine design. In reality, it is often not possible to define precisely the


boundaries between theses types of design. For example, a 2) Innovation in network


complex product is composed by different sub-systems [11]. Firms pursuing innovation must maintain a balance
One of sub-systems maybe corresponds to innovative design, between learning from external (exploration) and internal
while another is routine design. Therefore, this should be sources (exploitation). Too much exploitation is unlikely to
considered to be only a broad classification. In reality it is a lead to higher-order learning, whereas too much exploration is
continuum. expensive and may produce many underdeveloped concepts
Du to the innovation, NPD project are risky. Innovation and ideas [20]. This supposes the capability to make alliances
demands coordination and cooperation between partners in NPD. The types of collaborations are described in the next section.

B. Collaborative network and partner selection

To realize a collaborative work, a form of cooperative arrangement, named "strategic alliance", must be established between the partners [12].

1) Definition
An NPD project may imply a "co-development alliance" [13] or "technology development alliances" [14], with the purpose of improving technology and know-how, for example through agreements for joint R&D, simultaneous engineering, licensing, joint design and/or technology commercialization. It can be done in particular bilateral relations or in clusters, and alliances can be made during each step of the project. These involve the sustained joint creation of property and knowledge for the partners, requiring them to bring in resources and work together on a constant basis [15]. Depending on the project phase, the company that initiates the collaborative process may seek partners ranging from ones with a strong creative potential (in the initial phases) to simple subcontractors (at the end of the project).

Many theoretical frameworks deal with strategic alliances from very different perspectives on innovation. In neo-classical theories, technology and innovations are simply assumed to appear, from time to time, resulting from economically exogenous processes. Schumpeter [16], contrary to the standard neo-classical theory, points out the endogenous character of technology and innovation (mainly in-house R&D). Gnyawali and Srivastava [17] discuss network orientation and identify two possibilities: acquisition and co-development. In acquisition, the network is viewed as a means to obtain specific resources of the other firm, enabling the firm to pursue its own innovation in NPD. In the co-development orientation, the network is viewed as a way of pursuing innovation together with partners in joint NPD. Several typologies of collaboration have been proposed in the literature [15; 18; 19]. In Figure 1 we present our synthesis of those typologies.

Fig. 1. Alliances typology

with other firms, with its risk-taking propensity to devote resources to projects. Since 1960, in R&D alliances, firms increasingly prefer contractual partnerships to joint ventures [21]. To encourage innovation in NPD, two types of non-equity alliances exist (figure 1): the unilateral alliance (R&D contract…) and the bilateral alliance (joint R&D…).

• In a unilateral alliance, one partner provides funds to another partner for a specified R&D development. In this case there is no knowledge sharing between the partners and the innovation level tends to be low;

• In a bilateral alliance, partners combine their knowledge. "Novelty gain" increases with the cognitive distance between partners [22]. The innovation level is directly proportional to the novelty gain.

3) Strategy of alliance selection
Emden et al. [13] identified three conditions for the creation of value in a co-development network:

• Strategic alignment focuses on selecting partners with maximum potential to collaborate. This implies motivation correspondence and goal correspondence between all network partners.

• Technological alignment focuses on selecting partners with maximum potential for creating technological synergy. Partners must have either an innovative technology or expertise in a certain domain.

• Relational alignment focuses on selecting partners with maximum potential to sustain the relationship.

In NPD, a few firm-specific resources may lead to the firm's competitive advantage [23]. However, "the 'do it yourself' mentality in technology and R&D management is outdated" [24]. A network is realized by pooling the various resources and competences of the partners. Das and Teng [12] propose an integrated resource-based view of strategic alliances, identifying four types of resources that the partners bring to an alliance: financial, technological, physical and managerial.

C. Risk management in Network

Firms trying to be innovative develop new ways of doing business. R&D activities usually connote higher risks for firms. However, depending on the partners and on the different resources shared in an alliance (financial, technological, physical, managerial), different risks are possible. Two types of risks in alliances have been identified in the literature: relational risk and performance risk [26; 12; 15]:

• Relational risks are those regarding cooperation (e.g. partners' opportunistic behaviour).


• Performance risks are those regarding future states of the alliance objectives (e.g. objectives are not achieved).

In NPD alliances, both relational risk and performance risk tend to be high [25]. The two sets of risks evolve inversely with the increase in the number of partners. The performance risk is usually shared by making alliances, but the relational risk appears only if an alliance is made [27]. The greater the number of previous alliances between the same partners, the lower the perceived relational risk. The collaborative experience also has an influence on the risk level. The greater the asymmetries between partners, the higher the perceived relational risk [15].

Technology knowledge helps in solving technical problems. This point is a key factor of the risk level in an NPD project. However, other factors exist. Tornatzky and Fleischer's [28] framework highlights the three main elements of a firm's context influencing the process by which it adopts and implements technological innovation: Organization, Technology, and External Task Environment. Along the same lines, Wohlfeil and Terzidis [1] identified different critical success factors in technological innovation, split into three categories: target market, organization and technology. All these factors generate risks.

Other kinds of risks, exogenous to the network, are possible. They are mentioned below but are not considered in this work. These external risks depend on the target market:

• Market barriers: high capital demand, patent situation, image requirements, lack of appropriate location, resources or suppliers, economy of scale.

• Environmental context: technology support infrastructure, social, political (government regulation), economic, legal, etc.

D. Findings

Depending on the innovation level of the project and on the collaboration involved in the development, different risks are possible. The strategies needed to manage risk and to make the project a success differ according to the type of the considered risks.

Consequently, to make the project robust to the possible risks, strategies of risk management have to be anticipated. To identify risks and propose adapted risk treatment strategies, it is necessary to be able to characterize a project along the three dimensions presented in the literature review (innovation level, collaborative network and type of risks).

III. A DECISION SUPPORT APPROACH

To help in designing the risk management plan, we take into account innovation and risk aspects in collaborative NPD projects. First, we formalize the process used to build the risk management plan. This process uses the characteristics of the project to achieve an efficient plan. Second, to categorize projects, we propose a structured frame of these characteristics. Third, based on the identified project category, we propose a table making the correspondence with the possible risks and associated treatment strategies to define the risk management plan.

A. Risk management in collaborative project design

While the NPD project is designed, risks are studied. Figure 2 presents the macro process leading to the risk management plan design. It starts from the specifications. The first constituting process consists in defining the deliverable and a macro vision of the project planning. The level of innovation drives the manager to develop the new product either incrementally, based on existing products, or as a radically different one. Since knowledge capitalisation helps to be efficient, incremental innovation can be quicker than radical innovation. It has a huge influence on the result of this process.

The second process consists in selecting the different partners, if needed, to achieve the project. In the innovative product development research presented here, the three main types of collaboration are crowdsourcing, joint R&D and the R&D contract to develop products. Mainly due to the skills of the selected partners, the product design can change. The planning of the project and the associated project may have to be reviewed.

The third process aims to develop a risk management plan. After having identified the different risks, treatment strategies are proposed for each one of them. These strategies induce modifications of the planning to prevent risks and/or correct the effect of possible occurrences. [29] proposes a project management approach based on a synchronized process of project scheduling and risk management. This synchronized process has been used for decision-making support in the selection of project variants [30] and for product selection in [31]. These works do not consider the effect of the alliance selection on the planning and on the risk. Therefore, risk treatment strategies generally lead to reviewing the product design or the project planning, as well as to modifying the collaborative network.

Fig. 2. The macro process of risk management plan design

This macro-process then leads to positioning the project, or a phase of the project, in the structure proposed in the next section.

B. A structuring frame

Three characteristics are therefore considered here. The innovation level, with the incremental or radical innovation. The network, with the crowdsourcing, the joint R&D project and the R&D contract.


The type of risk, with the relational and the performance risk.

The cube obtained, presented in figure 3, shows in each cell the intersection of this decomposition. Each cell represents a context in which the project will be carried out and for which risk treatment strategies have to be proposed.

Fig. 3. The macro process of risk management plan design

C. Risk management

For each cell of the cube, the literature identifies different risks and possible treatment strategies. Table I presents, for each cell of the cube, examples of risks and of the associated treatment strategies. These sets are provided in order to guide the decision maker in the identification of the risks and of the risk treatment strategies needed to build the risk management plan.
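As a minimal sketch (ours, not the authors'), this correspondence can be encoded as a lookup table keyed by the three project characteristics; the two sample entries below are taken from Table I that follows.

```python
# Cube lookup: (innovation level, network, risk type) -> (example risks, treatment strategies).
# Cell numbering and contents follow Table I; only cells 1 and 8 are shown here.
CUBE = {
    ("radical", "crowdsourcing", "performance"):  # cell 1
        (["non lead users integration", "intellectual property rights sharing"],
         ["users integration", "patenting"]),
    ("incremental", "joint R&D", "relational"):   # cell 8
        (["accidental or intentional knowledge leakage", "core competency loss"],
         ["placement of managers in alliance key positions", "long-term contracts",
          "partners integration"]),
}

def risk_plan(innovation: str, network: str, risk_type: str):
    """Return the example risks and strategies for a given project profile."""
    return CUBE[(innovation, network, risk_type)]

risks, strategies = risk_plan("incremental", "joint R&D", "relational")
print("Risks:", risks)
print("Strategies:", strategies)
```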
TABLE I. Risk examples and strategies for each cell of the cube

Cell | Innovation | Network | Risk | Risks | Strategies
1 | Radical | CrowdS. | Perfo. | Non lead users integration; intellectual propriety rights sharing (Hagedoorn, 2003) | Users integration; patenting
2 | Radical | Joint R&D | Perfo. | Low resources flexibility; difficulty to adopt technology by partners (feasibility & maturity) | Joint patenting
3 | Radical | R&D Cont. | Perfo. | Financial shortfall; non profit; no appropriate time-to-market; no appropriate time-to-launch | Enhance profit likelihood; improve managerial efficiency; system engineering implementation
4 | Incremental | CrowdS. | Perfo. | Non-equity share; imitation; incomplete contracts | Non-disclosure agreement; licensing; patents
5 | Incremental | Joint R&D | Perfo. | Not sufficient use in alliance (by partners/by the firm itself); low utility (low benefit/cost) | Enhance utility by license used in network; physical resources flexibility
6 | Incremental | R&D Cont. | Perfo. | Financial shortfall; non profit; no appropriate time-to-market; no appropriate time-to-launch | Improve managerial efficiency; enhance profit; SWOT analysis
7 | Incremental | CrowdS. | Relat. | Low authority; imitation; no trust | Non-disclosure agreement; memorandum of understanding
8 | Incremental | Joint R&D | Relat. | Accidental knowledge leakage; intentional knowledge leakage; core competency loss | Placement of managers in alliance key positions; long-term contracts; partners integration
9 | Incremental | R&D Cont. | Relat. | Non-equity share; loss of control | Financial control; long-term contracts
10 | Radical | CrowdS. | Relat. | No efficiency; unstable collaboration | Short-term, recurrent contracts (conditioned by previous performances)
11 | Radical | Joint R&D | Relat. | Accidental knowledge leakage; intentional knowledge leakage; core competency loss | Improve managerial efficiency
12 | Radical | R&D Cont. | Relat. | Non-equity share; loss of control | Improve managerial efficiency; short-term, recurrent contracts (conditioned by previous performances)

IV. ILLUSTRATION

To illustrate our proposal, we develop the case of new drug development. NPD projects in pharmaceutical laboratories consist in developing new medicines for the treatment of human diseases. The attrition level is exceptionally high; fewer than one in ten projects launched in development is a technical success. The nine others are sacrificed in the no-go process [32]. These projects are innovative and costly, but very risky. The development of a new product in the pharmaceutical industry has to go through a precise succession of phases, which takes between 7 and 12 years depending on the level of innovation and the disease targeted. Paul et al. [32] indicate that the cost of a new drug development is on average 873 M$.

Pharmaceutical companies establish alliances to develop new medicines. The type of alliance depends on the level of innovation, the availability of competences and the type of the company (big pharma, middle pharma, start-up). When a laboratory starts a new development, decisions on strategic alliances have to be taken. In this context, a strategic alliance is a relationship to which two laboratories contribute. They bring different but complementary resources and capabilities to achieve a common objective. One of the most common bilateral alliances observed in the pharmaceutical sector is made by a small laboratory, or start-up, that owns a patent and the first proof of efficiency. A collaboration with a big company will provide the budget to go deeper into the tests and the development. The risks are shared and the project is contractualised [33]. Therefore, in this collaborative situation, the eight following cells of the cube can be observed: 2, 3, 5, 6, 8, 9, 11 and 12. One of the main identified risks is therefore the absorption of the start-up by the big company.

V. CONCLUSIONS AND PERSPECTIVES

When designing an NPD project, the decision maker goes through different and successive decisions to make an innovative product. In this paper we consider the innovation type, the collaboration and the risk management. The problem addressed in this paper is to help the manager build a risk management plan that makes the project robust to the risks.


In this work we formalize the macro-process leading to the project risk management plan. We propose a reading frame of the project characteristics and a table allowing the identification of possible risks and associated strategies.

This work gives a macroscopic vision of the influence of the innovation and of the alliance on the project risk level. This approach helps the decision maker establish the project risk management plan.

One of the main biases of this approach is the fact that the projects considered are categorized in cells. In reality, each criterion is a kind of continuum. Moreover, for one project, regarding the different macro-phases, the position can be in different cells, and not only in one cell for the whole project.

The main perspective is to take into account the effect of the alliance selection, combined with the risk management, on the project planning.

REFERENCES

[1] F. Wohlfeil, O. Terzidis: Critical success factors for the strategic management of radical technological innovation, IEEE International ICE Conference on Engineering, Technology and Innovation, Bergamo, 2014, 1-9.
[2] A. Rindfleisch, C. Moorman: The Acquisition and Utilization of Information in New Product Alliances: A Strength-of-Ties Perspective. Journal of Marketing 65(2), 1-18 (2001).
[3] P. Ritala, H. Olander, S. Michailova, K. Husted: Knowledge sharing, knowledge leaking and relative innovation performance: An empirical study. Technovation 35, 22-31 (2015).
[4] H.W. Chesbrough: Open Innovation: The New Imperative for Creating and Profiting from Technology. Harvard Business School Press, Boston (2003).
[5] P. Nightingale: The product-process-organisation relationship in complex development projects. Research Policy 29, 913-930 (2000).
[6] P. Badke-Schaub, A. Neumann, K. Lauche, S. Mohammed: Mental models in design teams: a valid approach to performance in design collaboration? CoDesign 3, 5-20 (2007).
[7] B. Townley, N. Beech, A. McKinlay: Managing in the creative industries: Managing the motley crew. Human Relations 62(7), 939-962 (2009).
[8] G. Pahl, W. Beitz, J. Feldhusen, K.H. Grote: Engineering Design, A Systematic Approach, 3rd Edition, Springer (2007).
[9] J.S. Gero: Design prototypes: a knowledge representation schema for design. AI Magazine 11(4), 26-36 (1990).
[10] N.F.O. Evbuomwan, S. Sivaloganathan, A. Jebb: A survey of design philosophies, models, methods and systems. Proceedings of the Institution of Mechanical Engineers 210(42), 301-320 (1996).
[11] C. Freeman: Networks of Innovators: A Synthesis of Research Issues. Research Policy 20(6) (1991).
[12] T.K. Das, B.S. Teng: Resource and Risk Management in the Strategic Alliance Making Process. Journal of Management 24(1), 21-42 (1998).
[13] Z. Emden, R.J. Calantone, C. Droge: Collaborating for new product development: selecting the partner with maximum potential to create value. Journal of Product Innovation Management 23(4), 330-341 (2006).
[14] M.E. Porter, M. Fuller: Coalitions and global strategy. In M. Porter (ed.), Competition in Global Industries. Harvard Business School Press, Boston, MA, 315-343 (1986).
[15] T.K. Das, B.S. Teng: A risk perception model of alliance structuring. Journal of International Management 7(1), 1-29 (2001).
[16] J.A. Schumpeter: Capitalism, Socialism and Democracy. Harper and Brothers, New York (1950).
[17] D.R. Gnyawali, M.K. Srivastava: Complementary effects of clusters and networks on firm innovation: A conceptual model. Journal of Engineering and Technology Management 30(1), 1-20 (2013).
[18] M.Y. Yoshino, U.S. Rangan: Strategic Alliances: An Entrepreneurial Approach to Globalization. Harvard Business School Press, Boston, MA (1995).
[19] P. Dussauge, B. Garrette: Determinants of success in international strategic alliances: evidence from the global aerospace industry. J. Int. Bus. Stud. 26, 505-530 (1995).
[20] J.G. March: Exploration and exploitation in organisational learning. Organisation Science 2(1), 71-87 (1991).
[21] J. Hagedoorn: Inter-firm R&D Partnerships: An Overview of Major Trends and Patterns since 1960. Research Policy 31(4), 477-492 (2002).
[22] B. Nooteboom, W. Van Haverbeke, G. Duijsters, V. Gilsing, A. Van den Oord: Optimal cognitive distance and absorptive capacity. Research Policy 36(7), 1016-1034 (2007).
[23] J.B. Barney: Firm resources and sustained competitive advantage. Journal of Management 17(1), 99-120 (1991).
[24] O. Gassmann: Opening up the innovation process: towards an agenda. R&D Management 36(3), 223-228 (2006).
[25] R.N. Osborn, C.C. Baughn: Forms of inter-organizational governance for multinational alliances. Academy of Management Journal 33, 503-519 (1990).
[26] P.S. Ring, A.H. Van de Ven: Developmental processes of cooperative interorganizational relationships. Academy of Management Review 19, 90-118 (1994).
[27] D. Littler, F. Leverick, M. Bruce: Factors Affecting the Process of Collaborative Product Development: A Study of UK Manufacturers of Information and Communications Technology Products. Journal of Product Innovation Management 12(1), 16-33 (1995).
[28] L.G. Tornatzky, M. Fleischer: The Processes of Technological Innovation. Lexington Books, Lexington, Mass. (1990).
[29] T. Nguyen, F. Marmier, D. Gourc: A decision-making tool to maximize chances of meeting project commitments. International Journal of Production Economics 142(2), 214-224 (2013).
[30] F. Marmier, D. Gourc, F. Laarz: A risk oriented model to assess strategic decisions in new product development projects. Decision Support Systems 56, 74-82 (2013).
[31] F. Marmier, I. Filipas Deniaud, D. Gourc: Strategic decision-making in NPD projects according to risk: application to satellites design projects. Computers in Industry 65(8), 1107-1114 (2014).
[32] S.M. Paul, D.S. Mytelka, C.T. Dunwiddie, C.C. Persinger, B.H. Munos, S.R. Lindborg, A.L. Schacht: How to improve R&D productivity: the pharmaceutical industry's grand challenge. Nature Reviews Drug Discovery 9(3), 203-214 (2010).
[33] Exporting Pharmaceuticals: A Guide for Small and Medium-Sized Exporters, ITC International Trade Centre UNCTAD/WTO (2005).


e-CALLISTO Network System and The Observation of Structure of Solar Radio Burst Type III

M.O. Ali, Faculty of Applied Sciences, Universiti Teknologi MARA, Selangor, Malaysia, marhanaomarali@gmail.com
S.N.U. Sabri, Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia, sitinurumairah@yahoo.com
Z.S. Hamidi, Faculty of Applied Sciences, Universiti Teknologi MARA, Selangor, Malaysia, zetysh@salam.uitm.edu.my
Nurulhazwani Husien, Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia, hazwani_husien21@yahoo.com
N.N.M. Shariff, Academy of Contemporary Islamic Studies, Universiti Teknologi MARA, Selangor, Malaysia, nnmsza@salam.uitm.edu.my
N.H. Zainol, Faculty of Applied Sciences, MARA Technology University, Selangor, Malaysia, hidayahnur153@yahoo.com
M.S. Faid, Academy of Contemporary Islamic Studies, MARA Technology University, Selangor, Malaysia, syazwaned@siswa.um.edu.my
C. Monstein, Institute of Astronomy, Wolfgang-Pauli-Str 27, 8093 Zürich, Switzerland, monstein@astro.phys.ethz.ch

Abstract-- This paper highlights the unique occurrence of the Solar Radio Burst Type III (SRBT III) during the high activities of the Sun. The e-CALLISTO network is the system responsible for the observation of the Sun 24 hours per day, a program under the IHY/UNBSSI and ISWI instrument deployment programs. The data were taken from one part of the e-CALLISTO network, Bleien, Switzerland. The selected event occurred on 27th August 2015, when two subtypes of SRBT III could be clearly observed during the day within 12:00 UT till 12:05 UT. The solar wind speed at the time was 348 km/s with a density of 8.4 protons/cm3, and the magnetic flux was also quite high, at 13.4 nT. Regarding the detection of the SRBT III, the X-ray flux data from Solar Monitor show that a strong M-class flare also occurred. The strong flare is believed to have a high temperature due to the high magnetic field. A geo-effective explosion occurred even though the sunspot was no longer directly facing the Earth. The active region AR2403 was predicted to potentially cause radio blackouts and radiation storms as long as the sunspot remains visible.

Keywords— CALLISTO network system; Solar Radio Burst; Solar Flare

I. INTRODUCTION

The CALLISTO spectrometer is a programmable heterodyne receiver built in the framework of IHY2007 and ISWI by the former Radio and Plasma Physics Group (PI Christian Monstein) at ETH Zurich, Switzerland [1]. CALLISTO is a network able to continuously observe the solar radio spectrum, a program under the IHY/UNBSSI and ISWI instrument deployment program [2]. All CALLISTO spectrometers all over the world form the e-CALLISTO network. e-CALLISTO is an acronym standing for extended Compound Astronomical Low-cost Low-frequency Instrument for Spectroscopy and Transportable Observatory, a worldwide network of frequency-agile solar spectrometers [3]. This network has a receiver instrument named CALLISTO, which is inspired by the name of one of Jupiter's larger moons.


Figure 1: World CALLISTO Spectrometer network

This device was designed by Christian Monstein from the Institute of Astrophysics, ETH Zurich, Switzerland. The goal of e-CALLISTO is to provide a worldwide network that can monitor solar radio bursts 24 hours per day [4]. At present, more than 66 instruments have been installed at more than 35 locations, with users from more than 92 countries in the e-CALLISTO network. In this project, instrument deployment, including education and training of observers, was financially supported by SNF, SSAA, NASA, the Institute for Astronomy and North South Center of ETH Zurich, and a few private sponsors. Figure 1 shows the distribution of e-CALLISTO instruments around the world [5].

Figure 2: The schematic diagram of connection of CALLISTO system (credit: ROOPA Nandita PIRTHEE Sagar Girish Kumar Beeharry)

CALLISTO is composed of two systems: indoor and outdoor. The outdoor part has the CALLISTO antenna and a low-noise preamplifier, while the indoor part consists of an indoor amplifier, the CALLISTO receiver, and a PC. Figure 2 shows the schematic diagram of the connection of the CALLISTO system. The total frequency range of CALLISTO is from 45 MHz to 870 MHz [6]. The CALLISTO system has proven to be a valuable new tool for monitoring solar activity. Besides, it has become an instrument for identifying the nature of solar radio emission from solar eruptions [7]. One of the unique features of the CALLISTO spectrometer network is that, thanks to the collaboration of many countries, the Sun can be monitored 24 hours per day, with all the data available online.

Nowadays, the interaction between the Sun and the Earth is clearly of particular interest. A wealth of solar phenomena has been found throughout the Sun's atmosphere, such as solar flares and Coronal Mass Ejections (CMEs) [8]. It is also believed that erratic solar effects on the Earth's climate could be very important [9]. The Sun interacts with the Earth through its particle emission as well as its electromagnetic radiation. During high solar activity, the increased solar wind disturbs the magnetosphere of the Earth, and thus magnetic storms and aurorae are formed [10]. The solar wind is the flow of charged particles, primarily electrons and protons, out from the surface of the Sun. Space radio instruments have been designed to provide at least some information on the location of the radio source [11].

The Sun produces five different types of solar radio bursts (I-V). The type III burst is one of the best indicators of the release of electron beams near the Sun along open magnetic field lines [12]. A solar type III radio burst is a transient burst of radio emission that starts at higher frequencies and drifts to lower frequencies as a function of time [13]. Note that some of these type III radio bursts at kilometric wavelengths, associated with major flares, were unusually intense and had very complex and long-lasting intensity-time profiles near 1 MHz [14]. It should also be theoretically possible to observe hard X-rays associated with the type III producing electron beams in the upper corona [15].

As mentioned earlier, the solar flare is one of the phenomena of the Sun. Observations of solar flares in all regions of the electromagnetic spectrum, from visible to gamma rays, provide a wealth of information. Solar flares produce copious amounts of coherent radio waves, which have been classified for more than 40 years into different classes [16]. Type IIIs are among the most studied solar coherent bursts and generally occur at the beginning of the impulsive phase of a solar flare [17]. The X-ray and ultraviolet radiation provide evidence of locally very high temperatures during flare outbursts [18].

The solar flares that have the most significant effect on Earth are the largest ones, named X-class flares [19]. Some of the effects are long-lasting radiation storms in the upper atmosphere and triggered blackouts. There is also another type of flare, the medium-size M-class flare, which can cause brief radio blackouts in the polar regions and occasional minor radiation storms [20]. Meanwhile, C-class


flares have few noticeable consequences. Since the X-class flare is the largest solar flare, absorbing it affects the atmosphere: it results in an increase in heat and an expansion of the Earth's ionosphere [21].
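For orientation, the flare classes mentioned here (C, M, X) are the standard GOES soft X-ray classes, which are decade bins of the peak 1-8 Å flux (X at 1e-4 W/m2, M at 1e-5, C at 1e-6). The sketch below is ours, not from the paper; it maps a peak flux to its class label.

```python
# Map a GOES 1-8 Angstrom peak X-ray flux (W/m^2) to a flare class label such as "M2.0".
def goes_class(peak_flux: float) -> str:
    thresholds = [("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7), ("A", 1e-8)]
    for letter, base in thresholds:
        if peak_flux >= base:
            return f"{letter}{peak_flux / base:.1f}"
    return "sub-A"

print(goes_class(2.0e-5))  # "M2.0", consistent with the class reported for 27 August 2015
print(goes_class(9.0e-6))  # "C9.0", the class of the previous day
```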
It is believed that solar activities such as solar flares have a very close connection with, and a significant impact on, climate change and the Earth's environment [5]. The value of solar radio bursts at low frequencies lies in the fact that they originate in the same layers of the solar atmosphere in which geo-effective disturbances probably originate: the layers where energy is released in solar flares, where energetic particles are accelerated, and where coronal mass ejections (CMEs) are launched [22]. The dynamical behaviour of the Sun exhibits a variety of physical phenomena, some of which are still not at all or only barely understood due to the complexity of the structure of the Sun [23].

II. METHODOLOGY

In this study, the data were taken from Bleien's CALLISTO spectrometer, since its data were very clear, with less noise compared to the other spectrometers, and the maintenance of the spectrometer is very good. Besides, some information was also taken from space agencies such as the National Astronomical Space Agency (NASA), Solar Monitor, the Space Weather Prediction Centre and the SOHO Observatory, which make their data available online so that the observation can be made in more detail.

Figure 3: Antenna Disk and Callisto Coverage of Bleien, Switzerland

Figure 4 shows the antenna disk in Bleien and its CALLISTO coverage. This antenna disk telescope has a diameter of 7 meters and a frequency range of 45 till 870 MHz, providing very good Solar Radio Burst data. The maximum coverage that Bleien has is within May to June, when Solar Radio Burst data are available from 03:00 UT to 19:00 UT. Basically, the signal from the feed is fed into the receivers and converted to a first intermediate frequency of 37.7 MHz by two local oscillators. The event selected was on 27th August 2015, since two subtypes of SRBT III could be clearly observed during the day within 12:00 UT till 12:05 UT.
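e-CALLISTO spectrograms are distributed as FITS files. The following sketch is ours, not from the paper; the file name is hypothetical, and we assume the usual layout of a 2-D intensity array (frequency versus time) in the primary HDU.

```python
# Load an e-CALLISTO FITS spectrogram for inspection (the file name below is hypothetical).
from astropy.io import fits

with fits.open("BLEN7M_20150827_120000_58.fit") as hdul:
    spectrogram = hdul[0].data   # 2-D intensity array (frequency x time)
    header = hdul[0].header
    print(header.get("DATE-OBS"), spectrogram.shape)
```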

III. RESULT AND ANALYSIS

Figure 5 shows the complex and the single type III Solar Radio Bursts observed on 27th August 2015, both of which occurred within 12:00 UT to 12:05 UT; the sunspot active region (AR) is AR2403. The event is unique because the two subtypes of SRBT III occurred continuously, one after another. The complex SRBT III drifted from 20 MHz to 80 MHz, while the single SRBT III drifted from 35 MHz to 80 MHz. The duration of the first SRBT III is approximately one minute, and the second burst lasted only about 30 seconds.

Figure 5: Solar Radio Burst Type III on 27th August 2015 (annotations: Complex SRBT III, Single SRBT III)

During the day, the space weather conditions were as shown in Table 1 below.


Table 1: Condition of space weather on 27th August 2015

Solar wind speed: 348 km/s
Proton density: 8.4 protons/cm3
Interplanetary magnetic field: 13.4 nT

It was observed that the interplanetary magnetic field was really high, at 13.4 nT. Since magnetic reconnection is the primary energy release mechanism in solar phenomena, the higher the magnetic field, the higher the temperature of the flare. This relationship can be confirmed by relation (1), where B is the magnetic field, n is the proton density, and L is the height of the flare:

= 3 × 10 ( )( ) × ( )   (1)

Figure 6: X-ray flux data for 25th till 28th August 2015

Figure 6 shows the X-ray flux data for 25th August till 28th August 2015. It can be observed that the highest class of flare occurred on 27th August 2015. The day before, 26th August, the X-ray flux data show a C9-class flare, indicating that the flare could become even bigger.

Space Weather predicted that there might be a huge flare on 27th August 2015 due to the high interplanetary magnetic field and the high flare temperature. The X-ray flux data from Solar Monitor show that an M2-class flare occurred during the day. The M-class flare is considered a strong flare, after the X-class flare, and can also have an impact on the Earth. Sunspot AR2403 was observed approaching the limb of the Sun. A geo-effective explosion happened even though the sunspot was no longer directly facing the Earth. AR2403 was predicted to potentially cause radio blackouts and radiation storms as long as the sunspot remains visible.

IV. CONCLUSION

Solar Radio Burst Type III is one of the best indicators of the solar flare events that occur during high solar activity. The power of the explosion, or of the flare, is related to the magnetic field of the active region of the Sun through the process of magnetic reconnection. The stronger the magnetic field of the active region, the higher the temperature of the flare, thus resulting in a higher class of flare. Continuous observation of the Sun is critical since it affects the condition of the Earth and sometimes affects high-technology activities. If the impact of the flare is only minor, it will cause the beautiful aurora.

ACKNOWLEDGMENT

We are grateful to the CALLISTO network, STEREO, LASCO, SDO/AIA, NOAA and SWPC for making their data available online. This work was partially supported by the 600-RMI/FRGS 5/3 (135/2014) and 600-RMI/RAGS 5/3 (121/2014) UiTM grants and Kementerian Pengajian Tinggi Malaysia. Special thanks to the National Space Agency and the National Space Centre for giving us a site to set up this project and for supporting this project. Solar burst monitoring is a project of cooperation between the Institute of Astronomy, ETH Zurich, FHNW Windisch, Switzerland, Universiti Teknologi MARA and the University of Malaya. This paper also used the NOAA Space Weather Prediction Centre (SWPC) sunspot, radio flux and solar flare data for comparison purposes. The research has made use of the National Space Centre Facility and is part of an initiative of the International Space Weather Initiative (ISWI) program.

REFERENCES

[1] A.O. Benz, C. Monstein, H. Meyers: CALLISTO, A New Concept for Solar Radio Spectrometer, Kluwer Academic Publishers, The Netherlands, 2004.
[2] A.O. Benz, M. Guedel, H. Isliker, S. Miszkowicz, W. Stehling: A broadband spectrometer for decimetric and microwave radio bursts: first results, Sol. Phys. 133 (1991) 385-393.
[3] Z. Hamidi, N. Shariff, Z. Abidin, Z. Ibrahim, C. Monstein: E-Callisto Collaboration: Some Progress Solar Burst Studies Associated with Solar Flare Research Status in Malaysia, Malaysian Journal of Science and Technology Studies 9 (2013) 15-22.
[4] Z. Hamidi, N. Shariff, Z. Abidin, Z. Ibrahim, C. Monstein: Coverage of Solar Radio Spectrum in Malaysia and Spectral Overview of Radio Frequency Interference (RFI) by Using CALLISTO Spectrometer from 1 MHz to 900 MHz, Middle-East Journal of Scientific Research 12 (2012) 893-898.
[5] Z.S. Hamidi, U.F.S.A. Ungku Ibrahim, Z.A. Ibrahim, N.N.M. Shariff: Theoretical Review of Solar Radio Burst III (SRBT III) Associated With Solar Flare Phenomena, International Journal of Fundamental Physical Sciences, Vol. 3 (June 2013) 20.


[6] A.O. Benz, C. Monstein, H. Meyer: CALLISTO - a new concept for solar radio spectrometers, Solar Physics 226(1) (2005) 143-151.
[7] A. Zawari, M.T. Islam, R. Anwar, A.M. Hasbi, M.F. Asillam, C. Monstein: Callisto radio spectrometer construction at Universiti Kebangsaan Malaysia [Antennas and Propagation around the World], IEEE Antennas and Propagation Magazine 56(2) (2014) 278-288.
[8] N. Gopalswamy, S. Yashiro, Y. Liu, G. Michalek, A. Vourlidas, M.L. Kaiser, R.A. Howard: Coronal mass ejections and other extreme characteristics of the 2003 October-November solar eruptions, Journal of Geophysical Research: Space Science 110, A09S15 (2005).
[9] A. Kruger: Introduction to Solar Radio Astronomy and Radio Physics, D. Reidel Publ. Co., Dordrecht, Holland, 1979.
[10] G. Mann: Coronal Magnetic Energy Releases, Springer, Berlin, 1995.
[11] D.J. McLean, N.R. Labrum: Solar Radiophysics, Cambridge University Press, Cambridge, 1985.
[12] V.V. Lobzin, I.H. Cairns, P.A. Robinson, G. Steward, G. Patterson: Automatic recognition of type III solar radio bursts: automated radio burst identification system method and first observations, Space Weather (2009).
[13] F. Reale: Coronal Loops: Observations and Modeling of Confined Plasma, Living Rev. Solar Phys. 7 (2010).
[14] M.R. Kundu: Radio observations of high energy solar flares, Highlights in Astronomy 12 (2002).
[15] N. Gopalswamy: Radio-rich Solar Eruptive Events, Geophys. Res. Lett. 27 (2000).
[16] A.O. Benz: Flare Observations, Living Rev. Solar Phys. 5 (2008).
[17] G.A. Dulk: Type III solar radio bursts at long wavelengths, in: R. Stone, E. Weiler, M. Goldstein (Eds.), Geophys. Monogr., 2000.
[18] A. Belov, H. Garcia, V. Kurt, E. Mavromichalaki: Proton events and X-ray flares in the last three solar cycles, Cosmic Res. 43 (2005).
[19] T.A. Chubb, R.W. Kreplin, H. Friedman: Observations of Hard X-Ray Emission from Solar Flares, J. Geophys. Res. 97 (1966).
[20] I.H. Cairns, S.A. Knock: Predictions for dynamic spectra and source regions of type II radio bursts in the inhomogeneous corona and solar wind, Solar Physics 210 (2002) 419-430.
[21] S.W. Kahler: The role of the big flare syndrome in correlations of solar energetic proton fluxes and associated microwave burst parameters, J. Geophys. Res. 87 (1992) 3439-3448.
[22] S. Krucker, R.P. Lin: Two classes of solar proton events derived from onset time analysis, Astrophys. J. 542 (2000) 61-64.
[23] Z. Hamidi, U. Ibrahim, U.F. Salwa, Z. Abidin, Z. Ibrahim, N. Shariff: Theoretical Review of Solar Radio Burst III (SRBT III) Associated With Solar Flare Phenomena, International Journal of Fundamental Physical Sciences 3 (2013).


Use of Earned Value Management in the UAE Construction Industry

Mohamed Morad, M.Sc., Graduate: Engineering Systems Management, American University of Sharjah, Sharjah, United Arab Emirates
Sameh M. El-Sayegh, Ph.D., PMP, Associate Professor: Civil Engineering Department, American University of Sharjah, Sharjah, United Arab Emirates, selsayegh@aus.edu

Abstract— The use of Earned Value Management (EVM) is of paramount importance for the successful delivery of construction projects. This paper evaluates the use of EVM in the United Arab Emirates (UAE) construction industry and identifies the key factors for its successful implementation. The success factors are identified through a literature review. The paper is based on the assessment of survey responses from project managers and cost engineers in the UAE construction industry. The results were assessed based on 53 responses. The analysis revealed that several companies are using EVM in their cost control systems; however, a big portion of the UAE's construction companies are not using EVM. Although many companies are not implementing EVM, the survey showed that 87% of respondents agreed on the necessity of EVM implementation as a cost control tool.

Keywords— Earned value management; construction industry; United Arab Emirates; project management

I. INTRODUCTION

Earned Value Management (EVM) is widely accepted as a valuable management tool used for the purpose of performance and project control. It is at the core of management systems used by many private, public, and governmental sectors [1]. EVM consists of a framework that integrates the project's scope, cost, and schedule together. It provides variances and indexes that allow project managers to detect the project's cost overruns and delays [2]. EVM is based on the concept that a project can be broken into definable and measurable tasks, and it effectively provides a realistic status of the various projects within a program [3]. EVM is considered a crucial method for project management that can facilitate project control and forecast the expected final project cost and duration [4].

The initiation of the Earned Value Management concept is attributed to United States industrial engineers. They developed it by employing a "three-dimension" approach, which measures performance against "planned standards", and then measures the "earned standards" achieved against "actual expenses" [5]. "The fundamental ideas that have come from the earned value approach to project control stem from two distinct lines of thought: those that originated with industrial engineers and those that come from project managers" [6].

For decades, new methods and standards have been established and entered into the project management field to ensure effective management measures across all projects. With time, the project management sector has recognized the need to improve such methods and standards. EVM does not interfere in how the project must be managed; however, it provides very precise organizational and reporting requirements. There are several reasons behind implementing EV; for instance, the internal management in an organization would like to control and track earned value in their ongoing projects [7]. Earned value has been used in many fields. One advantage of this method is that it is designed to accommodate any field [8]. EVM has been widely accepted and used in construction companies across the world. Construction companies in the UAE have been controlling their projects through different control systems. EVM was first developed in the United States, which has different conditions than the Gulf region in general. Using EVM has several benefits, and acknowledging these benefits in the UAE construction industry is essential to studying the understanding and awareness of regional contractors. As some contractors are using EVM in the UAE, EVM users can identify the critical success factors that support the usage of EVM in the UAE. The main objective of this paper is to evaluate the status of Earned Value Management in the UAE, including its success factors.

II. EARNED VALUE MANAGEMENT METHOD

Earned Value Management (EVM) allows project managers to evaluate the project status at any point in time, develop trends in cost and schedule performance, and take corrective actions as necessary. EVM is based on determining three parameters. The first parameter is the Budgeted Cost of Work Performed (BCWP), or Earned Value (EV). This is calculated for each activity by multiplying the actual percent completion by the Budget at Completion (BAC) for that activity. For example, if activity A has a budget of $10,000 and, after one month, it is observed that the progress on activity A is 30%, then the Earned Value is $3,000. The second parameter is the Budgeted Cost of Work Scheduled (BCWS). This is calculated based on the scheduled completion of the activity. If the activity duration is 2 months, then after one month the planned completion is 50% and the BCWS is $5,000. The third parameter is the Actual Cost of Work Performed (ACWP). This value comes from the accountant based on the actual cost


that is spent on the activity at that point in time. Let's assume it is $4,000 for activity A.

Once the three parameters are determined, it is necessary to calculate the Schedule Variance (SV) and the Cost Variance (CV). The schedule variance measures the deviation from the schedule. It is calculated by subtracting the BCWS from the BCWP. For example, the SV for activity A is $-2,000. A negative value indicates that the project is behind schedule; a positive value indicates that the project is ahead of schedule. The cost variance is calculated by subtracting the ACWP from the BCWP. For example, the CV for activity A is $-1,000. A negative value indicates that the project is over budget, while a positive value indicates that the project is under budget. In this example, activity A is both behind schedule and over budget.

The Schedule Performance Index (SPI) can also be calculated by dividing the BCWP by the BCWS. For activity A, the SPI is 0.6. A value less than 1 indicates that the activity is behind schedule. The Cost Performance Index (CPI) can also be calculated by dividing the BCWP by the ACWP. The CPI for activity A is 0.75. A value less than 1 indicates that the activity is over budget. The analysis can be extended to estimate the cost at completion. The Estimate at Completion (EAC) is calculated by dividing the BAC by the CPI. In the case of activity A, the EAC is $13,334. This means that, if we continue at this rate, the actual cost when activity A is completed will be $13,334, causing a negative cost variance of $3,334. Seeing this forecast, the project manager needs to find ways to improve the performance. The same analysis can be performed at the project level.
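To make the arithmetic concrete, the short Python sketch below (ours, not part of the paper) reproduces the activity A example using the formulas above.

```python
# Earned value metrics for the activity A example (all monetary values in $).
bac = 10_000         # Budget at Completion
actual_pct = 0.30    # observed completion after one month
planned_pct = 0.50   # scheduled completion after one month
acwp = 4_000         # Actual Cost of Work Performed

bcwp = actual_pct * bac   # Earned Value (BCWP): 3,000
bcws = planned_pct * bac  # Planned Value (BCWS): 5,000

sv = bcwp - bcws          # Schedule Variance: -2,000 (behind schedule)
cv = bcwp - acwp          # Cost Variance: -1,000 (over budget)
spi = bcwp / bcws         # Schedule Performance Index: 0.60
cpi = bcwp / acwp         # Cost Performance Index: 0.75
eac = bac / cpi           # Estimate at Completion: 13,333.33 (the paper rounds to $13,334)

print(f"SV={sv:,.0f} CV={cv:,.0f} SPI={spi:.2f} CPI={cpi:.2f} EAC={eac:,.2f}")
```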

III. RESPONDENTS' PROFILE

A total of 150 surveys were distributed to several construction companies in Dubai via site visits. The survey targeted only engineers who normally work with the EVM method, such as project managers and cost engineers. The responses were solicited using a Likert scale with the levels of measurement: strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree. The weighted average of the responses is calculated from the completed surveys. Out of the 150 distributed surveys, 53 were collected from 29 construction companies.
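As an illustration of this weighted-average computation (a sketch under our own assumptions: the response counts below are hypothetical, and the 5-to-1 scoring of the Likert levels is the usual convention rather than something stated in the paper):

```python
# Weighted average of Likert responses for one survey item.
counts = {"strongly agree": 21, "agree": 18, "neither": 8, "disagree": 4, "strongly disagree": 2}
scores = {"strongly agree": 5, "agree": 4, "neither": 3, "disagree": 2, "strongly disagree": 1}

total = sum(counts.values())                                         # 53 responses
weighted_avg = sum(scores[k] * n for k, n in counts.items()) / total
print(f"{total} responses, weighted average = {weighted_avg:.2f}")   # 3.98
```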
Table I summarizes the respondents' profile.

TABLE I. RESPONDENTS' PROFILE

Category | Number | %
Usage of EVM: companies that implement EVM | 35 | 66
Usage of EVM: companies that do not implement EVM | 18 | 34
Years of experience: >20 years | 15 | 28
Years of experience: 10-20 years | 21 | 40
Years of experience: 5-10 years | 12 | 23
Years of experience: <5 years | 5 | 9
Project type: Housing | 4 | 7.5
Project type: Building | 37 | 70
Project type: Industrial | 5 | 9.5
Project type: Infrastructure/Heavy | 7 | 13
Project size: less than 10 M | 2 | 4
Project size: 10-50 M | 3 | 6
Project size: 50-100 M | 8 | 15
Project size: above 100 M | 40 | 75
Company business: Local | 16 | 55
Company business: International | 13 | 45

IV. EVM USAGE

Respondents were asked to state the level of usage of EVM in their organization. Figure 1 shows the usage percentages. Approximately half of the EVM implementers always use EVM in their projects.

Fig. 1. Level of EVM usage

As for the use of EVM as a forecasting tool for cost at completion, the analysis shows that 76% use it often or always. On the other hand, 73% use it to forecast the project duration. 79% of the respondents often or always use the EVM indexes to measure performance. Figure 2 demonstrates the usage of EVM's outcomes for decision-making purposes. It shows clearly that 29% of respondents are using EVM's outcomes in their decision-making process, while only 3% are not really using EVM's outcomes for decision purposes.

Fig. 2. Use of EVM for decision making


The overall results indicate that EVM usage, in terms of implementation, forecasting, alarming and decision making, is favourable. Employees signify that EVM is implemented across their companies' projects. This gives EVM credibility to be implemented across the UAE construction industry. In terms of forecasting, EVM proved reliable for construction companies, where engineers agreed on using EVM for forecasting the at-completion project cost and the at-completion project duration. Likewise, construction project managers depend on EVM's indexes as an alarm in case of performance slippage. Some engineers depend on their own experience to take suitable decisions regardless of the EVM outcome.

V. BENEFITS OF USING EVM

There are several benefits of using Earned Value Management in the construction industry. EVM users can experience these benefits while implementing this method. Table II identifies several EVM benefits and the corresponding references.

TABLE II. BENEFITS OF EVM

No | EVM Benefits | Reference
1 | EVM establishes a single management control system providing reliable data | 1,7,9,10,11,12
2 | Provides one database of information (multi-disciplinary capability) | 13
3 | EVM enables the integration of work, schedule, and cost | 2,3,6,7,9,10,14-19
4 | EVM provides a database of completed projects useful for comparative analysis | 3,7,9,10,14
5 | EVM provides early warning signals of performance problems | 1,3,5,6,7,9,10,13,20-26
6 | The Cost Performance Index (CPI) can be used as a predictor for the final cost of the project | 5,7,9,10,17,20
7 | The periodic (weekly or monthly) Cost Performance Index is used as a benchmark | 3,5,7,9,10,26
8 | The management by exception principle can reduce information overload | 5,7,9,10
9 | Allows better communication among the project team | 5,23,11,17
10 | EVM helps identifying and documenting project risk | 6,13
11 | EVM contributes to achieving project cost objectives | 13,24-26
12 | EVM contributes to achieving project time objectives | 6,7,13,14
13 | EVM contributes to achieving project scope definition | 5,13,24,27,28

The weighted average of the responses is calculated. The benefits are then ranked as shown in Table III.

TABLE III. RANKED BENEFITS OF EVM

Benefits of EVM | Rank | Average
EVM provides early warning signals of performance problems | 1 | 4.25
EVM contributes to achieving project cost objectives | 2 | 4.23
EVM enables the integration of work, schedule, and cost | 3 | 4.23
EVM establishes a single management control system providing reliable data | 4 | 4.20
EVM provides a database of completed projects useful for comparative analysis | 5 | 4.18
The Cost Performance Index (CPI) can be used as a predictor for the final cost of the project | 6 | 4.14
The periodic (e.g., weekly or monthly) Cost Performance Index is used as a benchmark | 7 | 4.02
Allows better communication among the project team | 8 | 3.95
Helps identifying and documenting project risk | 9 | 3.93
Provides one database of information (multi-disciplinary capability) | 10 | 3.93
The management by exception principle can reduce information overload | 11 | 3.42
EVM contributes to improving project scope definition | 12 | 3.34
EVM contributes to achieving project time objectives | 13 | 3.31

There is a general level of agreement on all these benefits, as each weighted average is greater than 3. It is clear that the cost benefits outweigh the time benefits. The EVM contribution to achieving the cost objective is ranked 2nd with a weight of 4.23, while the EVM contribution to achieving the time objective is ranked 13th with a weight of 3.31.

VI. SUCCESS FACTORS

Table IV presents several factors to ensure proper implementation of EVM in any organization.

TABLE IV. SUCCESS FACTORS FOR EVM IMPLEMENTATION (SOURCES)

No | Success Factors | Reference
1 | High level of acceptance among project managers | 11,13,14,18
2 | Open communications among project team members | 11,13,14
3 | Strong support from top management | 11,13,23,24
4 | Extensive training on EVM implementation | 7,11,13,24
5 | Strong administrative and technical capabilities of project managers | 11,14,20
6 | Adequate computer & software infrastructure | 23
7 | The use of electronic data interchange | 11,13,23
8 | Efficient procedures & processes for EVM implementation | 13,28
9 | Motivation of team members to use EVM | 11,23
10 | Sufficient organization's resources for EVM implementation | 6,28

The data present the weighted average of these success factors and rank them accordingly. Table V expresses the managers' views on the factors needed for EVM to be implemented in any organization. It is obvious that the responses show high agreement on these factors.


TABLE V. SUCCESS FACTORS FOR EVM IMPLEMENTATION

Success Factors of EVM | Average
Strong support from top management | 4.32
High level of acceptance among project managers | 4.29
Strong administrative and technical capabilities of project managers | 4.24
Open communications among project team members | 4.18
Sufficient organization's resources for EVM implementation | 4.18
Efficient procedures & processes for EVM implementation | 4.18
Motivation of team members to use EVM | 4.15
Adequate computer & software infrastructure | 4.12
Extensive training on EVM implementation | 4.09
The use of electronic data interchange | 3.85

Senior management support and project managers' acceptance are the most critical issues to ensure successful implementation of EVM. These two factors play a big role in changing employees' perceptions and making them think positively about EVM.

VII. SUMMARY & CONCLUSION

Earned Value Management is a cost control tool that was developed in the 1960s. EVM integrates the project triangle of scope, schedule, and cost to measure the project's performance. The objective of this paper is to investigate the use of Earned Value Management in the UAE construction industry. The study identified the benefits of EVM for construction companies that use it and clarified the critical success factors of EVM across the organization.

In conclusion, Earned Value Management is widely used across the UAE construction industry, and a good number of companies implement EVM. The study targeted many large construction companies in the UAE. Many companies depend on their own systems to evaluate project performance; indeed, there are huge construction companies in the UAE that do not use EVM because they have their own, in-house systems. While conducting the survey, some engineers refused to contribute because their organization does not implement EVM as a cost control technique. Strong support from top management is a key to successful EVM implementation. It is recommended that top management in construction companies adopt and actively support the implementation of EVM in their projects.

REFERENCES

[1] A. Czarnigowska, P. Jaskowski, S. Biruk, "Project performance reporting and prediction: Extension of Earned Value Management," International Journal of Business and Management Studies, vol. 3, no. 1, pp. 11-20, 2011.
[2] J. Pajares and A. Lopez-Paredes, "An extension of the EVM analysis for project monitoring: The Cost Index and the Schedule Control Index," International Journal of Project Management, vol. 29, no. 5, pp. 615-621, 2011.
[3] K. Storms, "Earned Value Management Implementation in an Agency Capital Improvement Program," Cost Engineering, vol. 50, no. 12, pp. 17-20, December 2008.
[4] W. Lipke, O. Zwikael, K. Henderson, and F. Anbari, "Prediction of project outcome: the application of statistical methods to earned value management and earned schedule performance indexes," International Journal of Project Management, vol. 27, no. 4, pp. 400-407, 2009.
[5] Q.W. Fleming and J.M. Koppelman, Earned Value Project Management, Third Edition, 2005.
[6] A. Webb, Using Earned Value: A Project Manager's Guide, Gower Publishing Limited, 2003.
[7] C. Budd and C.I. Budd, A Practical Guide to Earned Value Project Management, Management Concepts, 2005.
[8] J.H. Cable, J.F. Ordonez, G. Chintalapani and C. Plaisant, "Project Portfolio Earned Value Management Using Treemaps," PMI Research Conference, London, England, 2004. http://www.cs.umd.edu/hcil/treemap/PROJECT%20MANAGEMENT-ASPUBLISHED.pdf
[9] D. Christensen, "The Cost and Benefits of the Earned Value Management Process," Acquisition Review Quarterly, 1998.
[10] D. Christensen, "The Costs and Benefits of the Earned Value Management Process," Acquisition Review Quarterly, pp. 373-386, 1998.
[11] E. Kim, W. Wells, and M. Duffey, "A Model for Effective Implementation of Earned Value Management Methodology," International Journal of Project Management, vol. 21, no. 5, pp. 375-382, July 2003.
[12] J.G. Kerby and S.M. Counts, "The benefits of earned value management from a project manager's perspective," 2008. Retrieved from http://www.nasa.gov/pdf/293270main_63777main_kerby_counts_forum9.pdf
[13] L. Song and H. Shalini, A Global and Cross-Industry Perspective on Earned Value - Management Practice and Future Trends, University of Houston, May 2009.
[14] J. Angelo Valle and C. Soares, "The Use of Earned Value Analysis (EVA) in the Cost Management of Construction Projects," pp. 1-11, 2001. http://icec.dreamhosters.com/ICMJ%20Papers/Valle%20-%20EVA.pdf
[15] PMBOK, A Guide to the Project Management Body of Knowledge, Project Management Institute, 5th Ed., 2013.
[16] O. Moselhi, "The Use of Earned Value in Forecasting Project Durations," International Association for Automation and Robotics in Construction Conference, Seoul, South Korea, 2011.
[17] D. Fullwood and K. Samuels, "Putting Earned Value Management (EVM) in Perspective," ARMY AL&T, October 2008.
[18] A. Naderpour and M. Mofid, "Improving Construction Management of an Educational Center by Applying Earned Value Technique," Procedia Engineering, vol. 14, pp. 1945-1952, 2011.
[19] M. Bagherpour, "An Extension to Earned Value Management," Cost Management, vol. 23, no. 3, pp. 41-47, May/Jun 2011.
[20] S. Russell, "Earned Value Management Uses and Misuses," pp. 157-161.
[21] R. Warburton, "A Time-Dependent Earned Value Model for Software Projects," International Journal of Project Management, vol. 29, no. 8, pp. 1082-1090, 2011.
[22] D.S. Christensen and S.R. Heise, "Cost Performance Index Stability," National Contract Management Journal, no. 1, pp. 17-25, 1992.
[23] H. Jarnagan, "Lessons Learned in Using Earned Value Systems, a Case Study," AACE Transactions, pp. EVM.01.1-EVM.01.20, 2009.
[24] R.V. Vargas, "Earned Value Analysis in the Control of Projects: Success or Failure," Orlando, Florida, 2003.
[25] H. Sparrow, "EVM = Earned Value Management Results in Early Visibility and Management Opportunities," 31st Annual Project Management Institute Seminars & Symposium, Houston, 2000.
[26] P2C2 Group, Inc., "Smarter Enterprise Management with Earned Value Management," January 2006.
[27] R. Viana Vargas, "Earned Value Analysis in the Control of Projects: Success or Failure?" AACE International Transactions, CSC.21, pp. 1-4, 2003.
[28] J.A. Lukas, "Earned Value Analysis - Why it Doesn't Work," AACE International Transactions, EVM.01, pp. 1-10, 2008.


Significant factors affecting the size and structure of project organizations

Sameh M. El-Sayegh, Ph.D., PMP, Associate Professor: Civil Engineering Department, American University of Sharjah, Sharjah, United Arab Emirates, selsayegh@aus.edu
Mustafa Kashif, Civil Engineering Department, American University of Sharjah, Sharjah, United Arab Emirates
Mohammed Al Sharqawi, Civil Engineering Department, American University of Sharjah, Sharjah, United Arab Emirates
Nilli Nikoula, Civil Engineering Department, American University of Sharjah, Sharjah, United Arab Emirates
Mei Alhimairee, Civil Engineering Department, American University of Sharjah, Sharjah, United Arab Emirates

Abstract— Construction projects are temporary, unique and challenging. The management of construction projects requires the coordination of a large number of people. The design and implementation of an effective project organization structure is vital for project success. Project success involves delivering the scope on time and within budget, in addition to meeting or exceeding the customer's expectations. The objective of this paper is to identify and assess the key factors affecting the size and structure of the project organization. Sixteen factors affecting the project organization structure were identified based on the review of related literature. These factors were grouped in four categories: project, company, stakeholders, and environment factors. A questionnaire was then developed and distributed to construction professionals in the United Arab Emirates (UAE). To ensure completeness of the survey, interviews were conducted with the professionals. Fifty-six professionals completed the survey. Staff availability and skills appear to be the most important factors, followed by the project size and the owner's requirements.

Keywords— Project organization structure; effective organization; construction industry; project management

I. INTRODUCTION

A project is defined as a temporary endeavor undertaken to create a unique product, service or result [1]. Projects are temporary in nature, which means that they have a definite start and end. Projects are also unique and involve doing something that has not been done before. Projects have distinctive attributes which distinguish them from ongoing work or business operations [2]. Projects have a purpose and are temporary, unique, complex and challenging [2]. Construction projects, in particular, are more challenging due to the involvement of several contracting parties such as the owner, designer, contractors, sub-contractors, material suppliers and others. These attributes necessitate the use of project management techniques for the successful delivery of construction projects. Project management is defined as the application of knowledge, tools, techniques and skills to deliver the project successfully. The objective of construction project management is to complete the scope of work on time and within budget, while meeting or exceeding customer expectations.

Successful delivery of construction projects requires proper management techniques on the part of all project participants. The contractor is responsible for the physical construction of the project. To appropriately manage the construction project, the contractor needs to set up a temporary project organization that will stay in place for the duration of the project. A project organization is a structure that facilitates the coordination and implementation of project activities [3]. Its purpose is to assist interaction and develop collaboration among individual team members so that they complete their duties and achieve the project goals [3]. An organizational chart is a diagram that shows the structure of an organization. It shows the relationships between the project manager and the project team members in a hierarchical structure depending on the authority relationships. Organizational structure is simply the manner in which an organization arranges (or rearranges) itself [4]. When designing an organizational structure, managers must consider the distribution of authority. The structure defines the authority by means of a graphical illustration called an organization chart. An organization can be structured in many different ways, depending on its objectives. A properly designed project organization chart is essential to project success [3].

Although the organizational chart shows the hierarchical relationship among the team members, it does not show how the project organization will work. There are two distinctly different types of organizations: the centralized and the decentralized multidivisional forms of organization. In the centralized organization, decisions are directed to the top
management. It consists of functional managers, each of whom reports to a project manager, and the project manager is responsible for coordinating the activities of the project. Conversely, the decentralized multidivisional form of organization tends to push decisions down to the lowest levels: an organization member has the right to make a decision without obtaining approval from a higher-level manager. This encourages individuals to develop decision-making skills and increases motivation, which in turn increases project organization profitability.

The span of control refers to the number of people who report to one manager or supervisor [5]. An organizational chart has two dimensions: the vertical dimension, in which the organization is considered to be either tall or narrow, and the horizontal dimension, in which the organization is considered to be either flat or wide [6]. In the tall organizational structure there are many levels (twelve, fourteen, sixteen or more), so managers tend to have a narrow span of control, meaning that no more than five or six people report to any individual manager or supervisor. In the flat organizational structure there are fewer levels (five or six at the maximum), so managers tend to have a wide span of control: as many as ten or twelve people may report to any individual manager or supervisor, depending on the tasks involved [6]. An organization should choose the minimum number of management levels consistent with its goals and the environment in which it exists, since each hierarchical level created adds a decision-making level to the organization and should therefore add value [7].
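To make the tall-versus-flat trade-off concrete: for a given headcount and a chosen span of control, the minimum number of management levels follows mechanically. The sketch below is purely illustrative; the staff figures are hypothetical and not taken from the study:

```python
def min_levels(staff: int, span: int) -> int:
    """Smallest number of levels such that a hierarchy in which every
    manager supervises at most `span` people can cover `staff` people."""
    levels = 1
    while span ** levels < staff:
        levels += 1
    return levels

print(min_levels(500, 5))   # 4: a narrow span forces a taller structure
print(min_levels(500, 12))  # 3: a wide span allows a flatter structure
```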
Designing an effective organization includes developing an organizational structure that improves overall efficiency and maximizes performance. Defining an effective structure can increase the organization's overall performance, improve efficiency and enhance growth. Conversely, a lack of defined roles, responsibilities and clear workflows leaves organizations with redundant roles and complex reporting structures, which in turn bring down employee productivity and increase the overall operations cost [8]. The project organization structure may have a significant impact on the success of the project: it influences project performance from inception to completion [9]. Defining the responsibilities of each role will help maximize the use of resources, increase efficiency, and ensure the quality of the completed project [10]. The objective of this study is to identify the factors affecting the size and structure of the project organization.

II. RESEARCH METHODOLOGY

The methodology used in this research follows the qualitative research method. The research started with a literature review to identify the factors affecting the size and structure of project organizations. A questionnaire was then developed and distributed to construction professionals in the United Arab Emirates (UAE). To ensure completeness of the survey, interviews were conducted with the professionals. Fifty-six professionals completed the survey. Figure 1 shows the respondents' profile in terms of their years of experience and the average project size.

Fig. 1. Respondents' Profile

More than 67% of the respondents have more than 10 years of experience in the construction industry. More than 71% of the respondents work on projects greater than 50 million UAE Dirhams (1 US$ = 3.67 Dirhams). Thirty-seven respondents work in international companies operating in the UAE, while 19 respondents work in local companies. All respondents agreed or strongly agreed that the project organization structure affects project success.

The respondents were asked to rate the importance of the factors affecting the project organization structure on a Likert scale of 1 to 5 (5 being very important and 1 being not important). The weighted average of all respondents' ratings was calculated for each factor to determine the most important factors.
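Concretely, this ranking step amounts to averaging the 1-5 scores per factor and sorting. A minimal sketch with invented responses (the survey itself collected 56 ratings per factor):

```python
# Illustrative Likert responses (1-5) per factor; not the survey's actual data.
responses = {
    "Availability of Staff": [5, 5, 4, 5],
    "Project Size": [5, 4, 4, 5],
    "Market Strategy": [3, 4, 3, 3],
}

averages = {factor: sum(scores) / len(scores) for factor, scores in responses.items()}
for rank, (factor, avg) in enumerate(
        sorted(averages.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {factor}: {avg:.2f}")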
III. FACTORS AFFECTING THE PROJECT ORGANIZATION STRUCTURE

Sixteen factors affecting the project organization structure were identified based on the review of related literature. These factors were grouped into four categories: project, company, stakeholders, and environment factors, with four factors in each category.

A. Project Factors

The project category includes those factors that are related to the project itself: the project size, complexity, uniqueness and importance. The project size refers to how big the project is. The larger a project becomes, the more complicated its organization structure [11]. For instance, building a small villa or a warehouse needs a simple organization structure, whereas constructing a high-rise building requires a complex one. In fact, a formal structure does not exist when the project is very small; in this case, the people responsible for the project make decisions based on their experience, abilities, and requirements.

The second factor is project complexity. A project is called complex when it consists of various interrelated parts, a definition that can be applied to the project management process. The organization, technology, environment, information, decision making and systems can all be part of the project complexity [12]. The project uniqueness is another factor. If the project activities are different from the normal business of
the firm, or if the project encompasses many unique phases, then the project management organization structure should tend to be partially projectized or decentralized. When the project is critical and significant in terms of importance, urgency, priority, and financial risk, a proper project organization structure should be developed. As a result, when the project has high significance in the company portfolio, the organization structure tends to become more complex.

B. Company Factors

The company category includes those factors that are related to the company: availability of staff, available technology, market strategy and desired level of control. The number of staff available in the company affects the size of the project organization structure. Changes in technology affect the project organization structure as well, since such changes may require additional specialized staff and/or the removal of redundant positions. Advances in technology are one of the most important causes of project organization changes.

The market strategy is another factor. When a company wants to position itself in the market, it should choose proper organization structures for its projects. For instance, a company that has newly entered the market uses simple project organization structures to reduce overhead costs, whereas a company that has long been in the market will have a more complex organization structure with more levels of management. The project organization structure should therefore fit the market strategy of the company. The fourth factor is the desired level of control. Each company sets a desired level of control for its projects, and this level of control will have an effect on the shape of the project organization structure. For example, a project with a high level of control needs more management levels, which makes the structure of the organization tall. On the other hand, projects with a lower level of control require fewer management levels and a wider span of control; as a result, the project structure will be flatter.

C. Stakeholders' Factors

The stakeholders' category includes those factors that are related to the project's key stakeholders. Stakeholders can be defined as any person or organization that can affect or be affected by the project. This category includes the owner's requirements, skills of staff, internal power and external power.

The structure of the project organization can be affected by the demands and requirements from the owner's side. Furthermore, the size of the project organization may increase with the owner's requirements: more people may be added when the owner requires something that has not been done before or requires a very short time for project completion. The level of skills and experience of the staff working on the project has an effect on the design of the project organization structure. When there is a shortage of in-house technical expertise, the choice of the project organization structure will be restricted; having plenty of skilled staff working on the project, however, increases the flexibility of designing the structure.

The project organization structure is also affected by internal and external powers. Internal power refers to the power within the project staff. When some of the people working on the project have more power in decision making than other employees, this will affect the organization structure of the project and may centralize it. External power refers to the power held by outside stakeholders. The consultant or owner representative may have power over the people in the organization, and this has an effect on the size or the structure of the organization. It can also affect decision making in the project organization by making it either centralized or decentralized.

D. Environment Factors

The environment category includes those factors that are related to the environment: location, stability of the external environment, financial uncertainty and technological uncertainty. One of the environment factors that affect the size and the structure of the project organization is the location of the project. For example, when the location of the project is far from the head office of the company, a larger organization structure is needed for the project. The project organization structure should also fit the external environment of the project, because the environment affects the structure of the project. When the external environment of the project is stable, the organization structure of the project is somewhat rigid; when there are many changes in the environment, the organization is more flexible and, as a result, reacts to those changes faster.

When there is no financial certainty in the environment surrounding the project, the shape and size of the project organization structure are affected: the organization structure leans toward a flat organization with fewer management levels. The extent of technological uncertainty also has an effect on the size and structure of the project organization. For example, when the technological development is stable, the design of the project organization structure is simple and there is no need for complex structures; when technology changes rapidly, the project organization structure is more complicated.

IV. DATA ANALYSIS AND RESULTS

Fifty-six percent of the respondents agree or strongly agree that the project organization structure affects project success. Respondents were asked to state how the Project Organization Structure (POS) was developed; Figure 2 shows the responses. The majority (50%) of the respondents stated that they use formal organization design, while 30% use rational decisions based on experience and 14% use adaptation of a structure used on a previous project. Table I shows the assessment of the factors by category.
Fig. 2. How was the POS developed?

TABLE I. ASSESSMENT OF THE FACTORS BY CATEGORY

No. | Category | Factor Description | Average | Rank
F1 | Project Factors | Project Size | 4.45 | 3
F2 | Project Factors | Project Complexity | 4.14 | 7
F3 | Project Factors | Project Uniqueness | 3.61 | 14
F4 | Project Factors | Project Importance | 3.75 | 11
F5 | Company Factors | Availability of Staff | 4.55 | 1
F6 | Company Factors | Available Technology | 4.18 | 5
F7 | Company Factors | Market Strategy | 3.48 | 15
F8 | Company Factors | Desired Level of Control | 3.86 | 10
F9 | Stakeholders' Factors | Owner's Requirements | 4.34 | 4
F10 | Stakeholders' Factors | Skills of Staff Working on the Project | 4.54 | 2
F11 | Stakeholders' Factors | Internal Power (within staff) | 3.75 | 12
F12 | Stakeholders' Factors | External Power | 3.45 | 16
F13 | Environment Factors | Location | 3.88 | 8
F14 | Environment Factors | External Environment Stability | 3.71 | 13
F15 | Environment Factors | Financial Uncertainty | 4.18 | 6
F16 | Environment Factors | Technological Uncertainty | 3.88 | 9
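As a quick cross-check, the per-category findings discussed below can be reproduced directly from the averages in Table I; the short script is ours, for illustration only, and is not part of the original study:

```python
# Averages taken verbatim from Table I, grouped by category.
table = {
    "Project": {"Project Size": 4.45, "Project Complexity": 4.14,
                "Project Uniqueness": 3.61, "Project Importance": 3.75},
    "Company": {"Availability of Staff": 4.55, "Available Technology": 4.18,
                "Market Strategy": 3.48, "Desired Level of Control": 3.86},
    "Stakeholders": {"Owner's Requirements": 4.34, "Skills of Staff": 4.54,
                     "Internal Power": 3.75, "External Power": 3.45},
    "Environment": {"Location": 3.88, "External Environment Stability": 3.71,
                    "Financial Uncertainty": 4.18, "Technological Uncertainty": 3.88},
}
for category, factors in table.items():
    top = max(factors, key=factors.get)  # highest-rated factor per category
    print(f"{category}: {top} ({factors[top]:.2f})")
```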
There are four categories of factors that affect the size and the structure of the project organization: project, company, stakeholders, and environment factors, with four factors in each category. Construction professionals were asked to rank the importance of each factor. Among the project factors, the most important are the size and complexity of the project, while the least important are the project uniqueness and project importance, respectively.

Among the company factors, staff availability and available technology were the most important; desired level of control was less important, and market strategy was the least important. Skills of staff working on the project and the owner's requirements were the most important stakeholders' factors, while the least important stakeholders' factors were internal and external powers. The major factor in the environment category was financial uncertainty.

Fig. 3. Ranked Factors

After identifying the most important factors in each category, all factors were ranked. As shown in Figure 3, the most important factor is the availability of staff, under the company category, followed by having skilled staff working on the project, under the stakeholders' category.

V. SUMMARY & CONCLUSION

Designing an effective and efficient organization structure is important for the successful delivery of construction projects. There are several factors that affect the size and structure of project organizations. Sixteen factors were identified from the review of related literature, and a questionnaire was then developed and distributed to construction professionals in the UAE.

The results indicate that the most important factors include the availability of staff, the skills of staff working on the project, the project size and the owner's requirements. It is recommended that project managers pay special attention to these factors when designing the organization structure for their projects, as it has a great impact on project success.

REFERENCES
[1] PMBOK 2013, A Guide to the Project Management Body of Knowledge, Project Management Institute, PMI, 2013.
[2] J. Davidson Frame, Managing Projects in Organizations, Wiley, New York, 1995.
[3] PM4DEV, Project Management Organizational Structures, Project Management for Development Organization, 2007.
[4] F. Fontaine, A Critical Factor for Organizational Effectiveness and Employee Satisfaction, Northeastern University, 2007.
[5] H. Richardson, N. McDaniel, and B. Thomsen, Span of Control, Report No. 941, 2006.
[6] C. Fontaine, How Organizational Structure Impacts Organizations, First Annual Conference on Organizational Effectiveness, Chicago, IL, 2006.
[7] F. Gould and N. Joyce, Construction Project Management, 3rd ed., United States of America: Pearson Prentice Hall, 2009.
[8] [Online document], http://www.neuterrain.com/optimal_organization_structure_and_design.html
[9] S. Mensah, The Effect of Project Management Practices on Building Project Performance: The Case of Three Organizations, 2007.
[10] Delivering Project Success, 2010. http://www.infrastructureapp.mei.gov.on.ca/.../Delivering_Project%20Success_An%20Owner's%20Guide_Guide.pdf
[11] R. Ledbetter, Organizational Structure: Influencing Factors and Impact in the Grand Prairie Fire Department, 2003.
[12] D. Baccarini, "The concept of project complexity," International Journal of Project Management, vol. 14, no. 4, pp. 201-204, 1996.
Real-World Software Projects as Tools for the Improvement of Student Motivation and University-Industry Collaboration

Zsolt Csaba Johanyák
Department of Information Technology
Kecskemét College
Kecskemét, Hungary
johanyak.csaba@gamf.kefo.hu
Abstract—Software engineering education has to deal with several challenges in Hungary nowadays. While the number of job offers in this field is continuously increasing, students' interest in programming shows a declining trend. The recognition of the problem has led to several initiatives trying to address the existing lack of software engineers and of student motivation in programming.

This paper reports some of our successful attempts to reverse this unfavorable process. While short-scale solutions such as the usage of Lego mobile robots or team-based laboratory class assignments provided only short-lived improvements, the breakthrough was achieved with the help of a strong University-Industry collaboration: real-world software projects used as team-work assignments brought benefits for the students as well as for the university and the involved companies. As a large-scale modification of the study program towards enhancing the industry readiness of our graduates, the key features of cooperative education are outlined at the end of the paper as well.

Keywords—software project; student motivation; programming; real-world problems

I. INTRODUCTION

Software engineering education faces several challenges in Hungary. The industry is growing rapidly, the needed technologies and tools change frequently, and potential employers demand a high number of graduates with up-to-date skills and knowledge. In contrast to the requirements of the labor market, the popularity of the software engineering career, and particularly the attractiveness of programming, shows a clear decreasing trend. Even students enrolled in computer science and other IT-related study programs show less interest in programming than their counterparts did 15 years ago.

There are several initiatives addressing the problem. Some of them are based only on university resources and ideas, while others try to get help from the companies concerned. Classical industry contributions in an education-related University-Industry collaboration would cover one or more of the following topics [8]:

• Supporting the acquisition of hardware and software by donations or discount offers.
• Encouraging employees to take part in the education.
• Providing scholarships and internships for students.
• Sponsoring students' participation at competitions.

Although all of these provide great help for students and universities, they do not contribute directly to the solution of the current problem. Therefore, new ideas and initiatives have to be found in order to renew the toolkit of education and to exploit the potentials available in University-Industry collaboration with the greatest efficiency.

In this paper we report our experiences regarding some difficulties of software engineering education, focusing on the phenomenon of decreasing student interest in programming, as well as the approaches we have applied so far to stop and reverse the unfavorable process. In the last 15 years several ideas have been tried, and in most cases our projects have improved the situation for shorter or longer periods of time.

The problem itself and the most successful past projects aiming at its solution are briefly presented in Section II. Section III introduces in detail a recent attempt, which has proved to be the most successful one so far and is based on University-Industry (U-I) collaboration. Thinking about U-I linkage on a bigger scale, the idea of cooperative education is outlined in Section IV as the newest approach, one that affects the whole study program. The conclusions are drawn in Section V.

II. PRELIMINARIES

The software industry has an increasing need for well-trained and talented software engineers [11]. For example, in Hungary alone there are more than ten thousand open jobs for programmers [4]. The industry is growing so rapidly that the output of the colleges and universities, i.e. the number of graduating students in this field, is no longer enough. As a side effect of the increasing number of job offers for BSc graduates and the relatively high salaries, the number of students enrolling in master programs has decreased
significantly. Moreover, several software companies have recently been offering jobs to BSc students in their second or third year of studies, and so several students start their professional career without even finishing their bachelor studies.

Surprisingly, in spite of the favorable position on the job market (software engineer is among the twenty best paying professions in Hungary [2]), the number of young people interested in a career in this field has continuously decreased since the early 2000s. Not only has the number of students enrolling in the related study programs fallen, but students majoring in information technology have also shown less and less interest in software development.

Recognizing the seriousness of the situation, several experimental projects were initiated at Kecskemét College (KC) aiming to arouse the students' interest in programming. Some of them led to positive changes in the attitude of the students, while others did not; moreover, the successful ones usually had only a short-term effect. Therefore, we have been looking continuously for new methods and pedagogical approaches.

Surprisingly, the introduction of Microsoft's Visual Studio and C#-based visual programming methods and tools, for which we had had high hopes, did not improve the attitude of the students. Moreover, a survey showed that the self-concept of the students who took the course was significantly lower than the others' self-concept [6]. Studies investigating the origins of the problem showed that the decreasing popularity of programming could be traced back partly to the students' low level of programming self-concept and their lack of abstraction capabilities [14].

The first successful attempt was the application of Lego Mindstorms mobile robots (Fig. 1) in higher education, starting with visual programming tools and continuing with a C-like language [5][9]. In the laboratory classes the students worked in groups of two or three using cooperative learning methods. Instead of traditional knowledge transfer techniques, the teachers provided tutoring, motivation and coordination. The students first had to build a robot from the available components, and then develop the program that controls the robot.

Fig. 1. Lego Mindstorms mobile robot [7]

This approach resulted in a temporary improvement in the motivation of the students; however, after a couple of years the small robots became available in secondary schools and even in some elementary schools as well. Thereafter they gradually lost their charm in the eyes of university students, and the popularity of programming-related courses started to decline again.

As a response to this change, a new approach was introduced at our institution: team-assignment-based laboratory classes and student work evaluation [14]. The "team experiment" was partly initiated by the software companies we were in contact with. They unambiguously demanded the improvement of team-work capabilities and soft skills of the graduating students. At that time most of our students had no idea about software development in teams, because both the educational and the evaluation systems were based on individual performance; only this skill had been assessed during their previous studies. In the course of the team-work-based problem solving, the emphasis was put on educating good team players and on gaining experience in business process modeling, object-oriented system design, database design and management, as well as test design and execution.

The students met the team assignments for the first time in the third semester of their studies, starting with the laboratory classes of the Programming Paradigms and Techniques course, where they enhanced their object-oriented programming skills using the C# language. The experiment continued in the third and fourth semesters, respectively, with the laboratory classes of the Visual Programming and Software Engineering courses, where rapid application development techniques and software engineering methodologies were taught.

While the Lego robot based project was an instant success at its time [10], the team-work-based approach needed a longer period of time to be accepted by the students and to result in an improvement in the interest in programming-related topics. The students' low-key approval of the team-work was caused by their concern about being left on their own by their team mates and having to do the whole project by themselves. Unfortunately, we could not say the worry was groundless. However, after some time the students got used to the team-work, and in order to ensure the fairest possible individual grading, a new student evaluation model was developed that took peer rating into consideration as well [13]. Thereafter the majority of the students were no longer afraid of team-work.
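The evaluation model itself is published in [13] and not reproduced here; the sketch below merely illustrates the general idea of peer-rating-adjusted grading. Both the scheme and the numbers are our own illustrative assumptions, not the published model:

```python
def individual_grades(team_grade: float, peer_ratings: dict) -> dict:
    """Scale the common team grade by each member's average peer rating
    relative to the team's mean rating; cap the result at 100."""
    mean_rating = sum(peer_ratings.values()) / len(peer_ratings)
    return {student: min(100.0, round(team_grade * rating / mean_rating, 1))
            for student, rating in peer_ratings.items()}

# Average ratings received from team mates on a 1-5 scale (invented data).
print(individual_grades(80, {"A": 4.5, "B": 3.0, "C": 4.5}))
# {'A': 90.0, 'B': 60.0, 'C': 90.0}
```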
Although the team skills and other soft skills improved significantly as a result of the team-work, the original objective of the project, i.e. obtaining a considerable positive change in the attitude of the students towards programming, was not fully achieved.

III. REAL-WORLD PROJECT ASSIGNMENTS

Due to the moderate success of the team-assignment-based projects, we recognized that new ideas were needed. We also recognized that the actual implementation of the team-work-based assignments suffered from some deficiencies. For example, the assignment problems were invented by professors and teaching staff, and so they were sometimes not as closely related to everyday problems as would have been optimal.
Moreover, some of them were even considered boring by the students.

The analysis of the previous results led to the assumption that real-world problems and assignments could be a key factor in successful team projects, and therefore in the realization of our aims regarding the improvement of student motivation. Thus we decided to increase the attractiveness of the software projects by invoking external help from some partner companies.

The idea of real-world software project assignments was strongly supported from the beginning by several partner companies, whose support made it possible to compile a pool of interesting problems. Having a significant number of real-world problems, we decided to include a new course in the curriculum focusing only on the software engineering assignments. It was called IT Project, and at first it was offered as an elective course for computer science students in collaboration with some software companies. Most of these companies not only defined the problems to be solved but also offered counselling for the students.

At the beginning of the semester the students who enrolled in this course had to form teams of four or five. The teams had to be self-organizing, i.e. nobody was forced to join a group. Each team could select one assignment from the problem offer list. Besides, a tutor was assigned to each team; some of the tutors came from industry, while others were university staff. There were no mandatory contact classes; the team had to ask for consultation meetings. For some assignments, one of our partner companies offered the simulation of a real customer-software developer team relationship, in which one of its employees acted as a customer liaison with no programming knowledge. Thus the team members could experience the whole software project life-cycle, starting with the problem specification through customer meetings and ending with the product delivery.

No specific software development methodology was prescribed, but in line with current trends the teams were encouraged to use Scrum [12]. At the end of the semester each team had to prepare a presentation and a technical report about the results. Besides, each student had to evaluate the contributions of his or her team members to the group effort. The considered factors were attendance and participation in team meetings, individual contributions to the project, communication within the group, etc. These evaluations were completely confidential and were not shown to other team members.

After two semesters of experience, one can say that this initiative has proved to be successful. Not only has the attitude of the students improved, but the new component of programming teaching has also contributed to the deepening of our relations with the software industry. The potential benefit for the involved companies lay in increasing company visibility and being able to select talented students who could later be employed. The benefit of the program for the university and the involved teaching staff was that they would gain better knowledge of the practical needs and problems of software companies. Finally, the benefit for the students was that they met practical problems, solutions and technologies, and acquired a skillset that matches industry needs.

IV. COOPERATIVE EDUCATION AS A LONG-TERM SOLUTION

Kecskemét College started a new initiative in Hungary aiming at the education of engineers who meet the requirements of the labor market at the highest possible level, possessing right after their education not only the necessary theoretical knowledge but the practical, social and methodological skills as well.

KC adopted the German cooperative (also called dual) model [1], in which students carry out periods of academic study at the university alternating with on-the-job training at companies. The exact schedule is presented in Table I. This kind of study program was initially offered in the field of Vehicle Engineering as a pilot project in 2012. Later the approach was extended to other fields as well, and so, starting from September 2015, secondary education graduates who enrolled at KC could opt for the dual-type education in the field of Computer Science as well. Cooperative education has a big potential, which was also shown by a study [15] revealing that the Vehicle Engineering students who were the first to enroll in our dual program had better study results and increased motivation than their counterparts in the traditional study program.

TABLE I. SCHEDULE OF THE COOPERATIVE AND TRADITIONAL STUDY PROGRAMS

Sem. | Cooperative study program | Traditional study program
1 | Academic studies: 13 weeks; Industry practice: 8 weeks | Academic studies: 13 weeks
2 | Academic studies: 13 weeks; Industry practice: 16 weeks | Academic studies: 13 weeks
3 | Academic studies: 13 weeks; Industry practice: 8 weeks | Academic studies: 13 weeks
4 | Academic studies: 13 weeks; Industry practice: 16 weeks | Academic studies: 13 weeks
5 | Academic studies: 13 weeks; Industry practice: 8 weeks | Academic studies: 13 weeks
6 | Academic studies: 13 weeks; Industry practice: 16 weeks | Academic studies: 13 weeks
7 | Academic studies: 13 weeks; Industry practice: 8 weeks | Academic studies: 13 weeks; Industry practice: 8 weeks
Sum | Academic studies: 91 weeks; Industry practice: 80 weeks | Academic studies: 91 weeks; Industry practice: 8 weeks
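The Sum row can be checked directly from the per-semester figures, for example:

```python
# Per-semester (academic, industry practice) weeks of the cooperative program,
# taken from Table I.
cooperative = [(13, 8), (13, 16), (13, 8), (13, 16), (13, 8), (13, 16), (13, 8)]

academic = sum(a for a, _ in cooperative)   # 7 x 13 = 91 weeks
practice = sum(p for _, p in cooperative)   # 8+16+8+16+8+16+8 = 80 weeks
print(academic, practice)                   # 91 80, matching the Sum row
```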
V. CONCLUSIONS

Starting from the early 2000s, Hungarian computer-science-related higher education has faced problems such as the students' decreasing interest in a career as a software engineer and their lack of enthusiasm for programming, while the labor market has shown a huge demand for software engineers. Therefore, several projects have been started to make programming more attractive and enjoyable for students.

Most of these projects (like the usage of Lego robots and the team-assignment-based laboratory classes) had at least a temporary positive effect on the attitude of the subject group, but a longer-lasting result was achieved for the first time by the real-world software project assignments. This approach
not only increased the students' interest but also contributed to the enhancement of their team-work skills and professional knowledge, and to effective education for practical and industrial problems.

In order to improve the social and professional skills as well as the industry readiness of our graduates, a new kind of collaboration between university and industry was started at Kecskemét College by adapting the German dual model. This model makes it possible for students to meet industry demands and everyday practice from the beginning of their studies, an approach that can lead to better motivation and understanding of their future profession.

ACKNOWLEDGMENT

This research was supported by ShiwaForce Ltd. and the Foundation for the Development of Automation in Machinery Industry.

REFERENCES
[1] A. Göhringer, "University of Cooperative Education – Karlsruhe: The Dual System of Higher Education in Germany," Asia-Pacific Journal of Cooperative Education, 2002, 3(2), pp. 53-58.
[2] Best Paying Jobs in Hungary [Retrieved 24-01-2016] http://www.salaryexplorer.com/best-paying-jobs.php?loc=98&loctype=1
[3] P. D'Este, F. Guy, and S. Iammarino, "Shaping the formation of university–industry research collaborations: what type of proximity does really matter?," Journal of Economic Geography, Vol. 13, Issue 4, pp. 537-558, 2013, doi: 10.1093/jeg/lbs010
[4] "Digital Economy: Economical and Social Breakout Opportunity for Hungary" (in Hungarian), Digitális gazdaság: gazdasági és társadalmi kitörési lehetőség Magyarországnak, IVSZ, 2015. http://ivsz.hu/wp-content/uploads/2015/11/IVSZ_DG-infografika_2015.pdf [Retrieved 24-01-2016]
[5] Z. Istenes, A. Pásztor, "Using innovative devices in the education of IT from primary to university level," in Proceedings of the 16th International Electrotechnical and Computer Science Conference ERK 2007, Portoroz, Slovenia, 2007.09.24., pp. 133-136.
[6] Z.C. Johanyák, R. Pap-Szigeti, and R.P. Alvarez Gil, "Analyzing students' programming failures," A GAMF Közleményei, Kecskemét, XXII. (2008), ISSN 0230-6182, pp. 115-120.
[7] Lego Mindstorms Nxt [Retrieved 30-01-2016] https://en.wikipedia.org/wiki/File:Lego_Mindstorms_Nxt-FLL.jpg
[8] N.R. Mead, "Industry/University Collaboration in Software Engineering Education: Refreshing and Retuning Our Strategies," Proceedings of the 37th International Conference on Software Engineering, IEEE Press, Piscataway, NJ, USA, 2015, Vol. 2, pp. 273-275.
[9] A. Pásztor, "How can it be Taught Playfully with the Help of Programmable Robots?," in Proceedings of International Technology and Development Conference, Valencia, Spain, 2008.03.03-05. Valencia: International Association of Technology, Education and Development (IATED), 2008, Paper 933.
[10] A. Pásztor, R. Pap-Szigeti, E. Lakatos Török, "Effects of Using Model Robots in the Education of Programming," Informatics in Education, 2010, Vol. 9, No. 1, pp. 133-140.
[11] H. Saiedian, "Bridging Academic Software Engineering Education and Industrial Needs," Computer Science Education, 2002, Vol. 12, No. 1-2, pp. 5-9.
[12] K. Schwaber and J. Sutherland, "The Scrum Guide™. The Definitive Guide to Scrum: The Rules of the Game," Scrum.Org and ScrumInc, 2014. [Retrieved 25-01-2016] http://www.scrumguides.org/docs/scrumguide/v1/Scrum-Guide-US.pdf
[13] G.F. Tóth, Z.C. Johanyák, "Reform of the Software Engineering Teaching – Demands and Conception," 7th International Conference on Applied Informatics (ICAI 2007), January 28-31, 2007, Eger, Hungary, pp. 63-70.
[14] G.F. Tóth, Z.C. Johanyák, "Teaching Software Engineering - Experiences and new Approaches," XIX Didmattech 2006, September 6-7, Komarno, Slovakia, pp. 261-265.
[15] E. Török, Z. Kovács, "Dual Training in the Light of Vehicle Engineering Student Feedback," Proceedings of the 7th International Scientific and Expert Conference TEAM 2015 (Technique, Education, Agriculture & Management), Belgrade, October 15-16, 2015, pp. 377-381.
The impact of Information Technology and the alignment between business and service innovation strategy on service innovation performance

Fotis Kitsios, Maria Kamariotou
Department of Applied Informatics
University of Macedonia
Thessaloniki, Greece
kitsios@uom.gr
tm1367@uom.edu.gr
Abstract—New Service Development is a fundamental issue for researchers and managers, as successful new services can have a positive impact on competitive advantage. Innovation is an important factor which can positively affect firm performance. Unfortunately, the percentage of unsuccessful services is high, which can be due to a lack of strategy in New Service Development. When services are supported by Information Technology, their innovation rate can increase. Such an implementation requires both advanced technological capabilities and alignment. However, there is a lack of structured literature reviews in the field of New Service Development examining these issues. Consequently, this literature review aims to promote both strategic alignment and Information Technology capabilities, considering these two factors as the most important ones for the successful implementation of new services. Webster's and Watson's methodology has been selected, as it is tried and tested by many researchers in the field of Management. Research shows that managers should focus on these two factors in order to develop successful new services in accordance with their customers' needs and to increase their performance as well.

Keywords—new service development; innovation; strategy; alignment; Information Technology capabilities

I. INTRODUCTION

As services account for 70% of the economic activity of a country, their importance can be easily understood [22]. It is crucial that managers comprehend both the stages of New Service Development and its critical success factors in order to achieve increasingly successful new services that can enhance their competitive advantage [10, 13]. It is reported by both businesses and researchers that innovative services have an impact on increasing market share and revenue [24, 25]. The percentage of failed services is high because of many factors, the most important one being the lack of strategy in the New Service Development process [3]. Strategic development of the New Service Development process is necessary for businesses so as to gain competitive advantage. Alignment is one of the most important success factors of new services according to researchers [15, 28, 30, 31]. Alignment refers to the connection between the objectives of the business and the new service strategy [2, 9].

Another critical factor is Information Technology, which affects the whole process of New Service Development. Services must be supported by Information Technologies to improve [5]. The successful implementation of innovation in services requires the existence of the necessary resources. These resources include business strategy, knowledge, Information Technologies, skills of employees, the structure of the organization and communication with customers [46].

However, there is a lack of structured literature reviews on the above issues of New Service Development [1, 40, 41]. Structured literature reviews are important because they define the existing knowledge and highlight areas for further investigation [45]. As New Service Development has recently become a critical issue for researchers, this research aims at developing the respective concepts in order to highlight their importance both for academics and for managers. The aim of this literature review is the promotion of strategic alignment and Information Technology capabilities as the two main factors for the successful implementation of new services.

The structure of this paper is as follows: after a summarized introduction, the methodology of the literature review is analyzed and then the theoretical framework is examined. Firstly, a general framework for new service development and service innovation is provided, and then the meanings of alignment and Information Technology capabilities are examined. Finally, the study works out an overview of these two critical success factors and further research avenues.

II. METHODOLOGY

A. Literature review methodology

A literature review is very important, and the initial idea is a compilation of summaries of previous studies. The quality of the literature review matters, since it determines the way in which the researcher combines the different parts of the studies and how they are analyzed. Finally, it highlights areas that require further research.

Three steps are suggested by this methodology for its effective implementation. These
are the search, which covers the definition of keywords and databases and the selection of individual topics; the "backward search"; and finally the "forward search". The analysis and synthesis of the central ideas of the articles then follows [45].

B. Implementation of the literature review methodology

As a primary and decisive stage of the article search, an initial and general search was conducted on the issues of New Service Development, innovation in new services, and the corresponding literature reviews implemented by previous researchers, in order to create a basic idea of the resulting concepts. The databases and keywords were selected from these literature reviews of the field. The databases are Scopus, Science Direct, Web of Science and ABI/INFORM, and the search was done with the keywords "new service development", "innovation management and new services", "service innovation process" and "service innovation strategy". Articles were not limited to certain periods, due to the fact that this field of research has occupied researchers for decades. Also, only articles in English, published in scientific journals or conference proceedings rather than in books, were considered.

The search of all databases yielded a total of 78 articles, to which 30 articles from the initial sources were added after excluding all common results and keeping only those whose content is strictly relevant to the subject under investigation, giving a total of 108 articles. To these 108 articles the backward search added 28 more. Additionally, 9 more articles completed the set after the forward search, and thus a total of 144 articles was revealed.

The search was considered complete when all databases and the different combinations of keywords returned only articles already collected. It was then that the critical mass of relevant literature sources was considered to have been collected.
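The bookkeeping behind these three steps (database union, then backward and forward searches) can be sketched as follows; the identifiers and the two search functions are illustrative stand-ins, not the authors' actual tooling:

```python
def backward_search(pool):
    # In practice: collect the earlier sources cited by each article in the pool.
    return {"cited-by-" + a for a in pool if a.startswith("scopus")}

def forward_search(pool):
    # In practice: look up later articles that cite the articles in the pool.
    return {"citing-" + a for a in pool if a.startswith("wos")}

# Keyword hits per database (made-up identifiers).
db_results = [{"scopus-101", "wos-202"}, {"wos-202", "scopus-303"}]

pool = set().union(*db_results)   # the union drops duplicates across databases
pool |= backward_search(pool)     # step 2: backward search
pool |= forward_search(pool)      # step 3: forward search
print(len(pool), sorted(pool))
```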
III. THEORETICAL FRAMEWORK

A. New Service Development and Innovation

The major concepts in these 144 articles concern the stages and critical success factors of New Service Development. Researchers have proposed models for a successful New Service Development process since 1989, but few of them have dealt with the success factors of New Service Development for the successful implementation of the process and the creation of successful services [4, 12].

Researchers and businesses report that innovation in services has an impact on increasing market share and revenue [24, 27]. During the last two to three decades there have been many studies on innovation in products, services, Information Technology processes and systems, as innovation is a factor of competitive advantage [44]. Generally, the term service innovation has been used in the literature to describe not only new or improved services, but also the strategy and processes used to create new services using new knowledge, new processes and technologies [21].

Innovation may concern several aspects. The first dimension concerns the final characteristics of the service; the second, the approach to information through the enterprise's new technologies for customers and suppliers; and the third, the resources of the undertaking that give a competitive advantage with regard to the new service, resources that relate to the skills and knowledge of employees [14]. Another distinction correlates four dimensions of innovation to the content of the service, the distribution of the service, customer interaction and technology: the first dimension examines the value proposition that will be given to the customer, the second the distribution pattern of innovation in New Service Development activities, the third relates to customer interaction, and the last to the contribution of technology to the New Service Development process [34, 35, 36, 37].

The innovation process is characterized by uncertainty, so it requires human, financial and technological resources to support innovation in New Service Development [20]. For the successful implementation of innovation in services, the existence of the necessary resources is required. These resources are related to the business strategy, knowledge, Information Technologies, the skills of employees, the structure of the organization and communication with customers [46]. Technologies and Information Systems play an important role in service innovation [19]. However, investments in Information Technologies should be supported by the principles of the business [46].

Innovation in services is a risk for a company, and for this reason implementing innovation has to be strategically designed from scratch. Consequently, a business should first define its innovation strategy, which is then passed across the business by managers [21].

The high percentage of failed services is attributed to a lack of strategy [3]. To gain competitive advantage, businesses need to develop the New Service Development process strategically [24, 37]. New Service Development is an important requisite strategy for companies [6]. The failure rate of new services is high because the required strategic resources, procedures and methods for New Service Development have not been understood, a fact that increases the need for further study of this field [16].

Service enterprises have developed innovative strategies for New Service Development and use the appropriate resources to gain competitive advantage. Most studies relate success factors to the process, though there has been little research regarding the required resources. In their research on various types of services, [17] concluded that the resources that affect each stage of New Service Development are divided into organizational resources, natural resources and knowledge-related resources. In the first category, resources are related to the structure, control, reactions and relationships of the members of the company. The second category includes resources such as materials, location, technologies and facilities. The third category includes resources such as education, knowledge and skills of employees. Technological resources are required for the dissemination of information, such as customer needs [33, 46].
The skills required for New Service Development are cooperation, information gathering, an effective planning process, the involvement of customers in the process, and setting goals and committing to them [26]. Additional capabilities that are important for New Service Development concern technology, the structure of the organization, marketing, and Research and Development [42]. The essential skills needed for New Service Development are categorized as strategic, organizational and administrative: strategic skills concern the choice of the appropriate operational strategy, organizational skills concern change management, and administrative skills concern communication and leadership [29].

The above analysis shows that the alignment of the business strategy, the strategy of new services and Information Technology, combined with the capabilities of Information Technologies, leads to successful services and increases the performance of a firm. The following sections discuss these two factors.

B. Alignment

Alignment is one of the most important success factors of new services according to researchers [15, 28, 30, 31]. Alignment refers to the connection between the objectives of the business and the new service strategy [2, 9]. To achieve this connection, enterprises need to collect information from clients about their needs and data on competitors, but also to understand the process of New Service Development, in order to harmonize the strategic goals of the business with those of the New Service Development process. Research is needed on how to align operational strategies with the service innovation strategy [37]. The difficulty of aligning customer needs with the new service lies in the challenge of quickly introducing the service to the market and the use of specifications in technology [7].

The service innovation strategy can be defined as the strategic decision to change an attribute or a combination of the characteristics of innovation to achieve competitive advantage [37].

An additional consideration of alignment was obtained in [19], according to which alignment is the extent to which the mission, objectives and plans contained in the business strategy are linked to the Information Technology strategy. The alignment of these two strategies offers a competitive advantage. For this reason, the strategy requires the use of technology, and the design of technology is part of the strategic planning operation.

The alignment of the business strategy and the strategy of new services provides the business with expertise about customers and the market. It also enables communication between the company and its customers, so that a new service is developed according to their needs. The result of the alignment, therefore, is to increase the firm performance of the business [9].

Companies that successfully develop the New Service Development process align the innovation of the service to the environment to reduce risks and failures. Companies have focused on the strategic use of Information Technology to achieve competitive advantage and to align the capabilities of Information Technology with the organizational and strategic objectives for successful services. Alignment of the innovation strategy with the surroundings leads to a more rapid introduction of the new service to the market.

The three types of emerging service strategies are cost leadership, differentiation and focus on a market segment. The differentiation strategy supports innovation and is promoted by the structure of the organization in the development and delivery of the service. The cost leadership strategy tries to minimize costs in all activities in the company, and the strategy of focusing on a market segment tries to cover that market segment with its specific needs.

Every innovation strategy provides a clear direction for addressing strategic issues, selecting the market the company wants to enter and the abilities to be developed. Communication, informed decisions about the technology to be used and a definition of the concept of innovation for the enterprise are required so as to develop the innovation strategy [21].

The three types of service innovation strategy that emerge are focused on the creation of the service, the distribution of the service, and customer engagement. These types align with each other in the following ways.

The first alignment is between the differentiation strategy and the focus on the creation of the service. The differentiation strategy affects the competitive advantage; focusing on creating the service changes the way the service is provided to the customer, and the differentiation strategy affects how the service is promoted.

The second alignment is between the cost leadership strategy and the focus on service delivery. The cost leadership strategy reduces the cost of each operation, and the strategy of focusing on the distribution of the service aims to change the way of delivery to the customer, which also aims to reduce costs.

The third alignment is between the strategy of focusing on a market segment and the focus on customer engagement. The first strategy aims to cover specific needs, and with the second strategy the firm receives feedback on customer needs so that the service can be adapted to them.

These alignment methods are combined to increase the efficiency of the operation. When deciding the kind of impact that the alignment is going to have, the most attention should be given to the Information Technology that will impact business operations [34, 35, 36, 37].

C. Information Technology Capabilities

The improvement of services is supported by Information Technologies [5]. The development of technology has changed the way services are produced. The application of technology to services requires knowledge and training of workers. However, it helps to increase the customers' choices, since most technologies offer integrated service packages [18].
Several studies have researched the relationship between the business strategy, the strategy of the new service, and the impact of Information Technology skills on achieving competitive advantage [35].

Information Technology capabilities include the harmonization of strategies, systems, and people, who must know how to handle the systems [11]. Information Technology capabilities in services assist the distribution system, the process, and the communication with customers. They enhance the knowledge needed in service development and distribution, and they reduce the cost of finding information in New Service Development [34, 35].

When a company has technological capabilities it can innovate; thus the relationship between innovation and the Information Technology strategy is very important, because companies should choose the right technology to provide them with the necessary information. This information concerns the business environment and the customers. Through Information Technology, a business can personalize services to customers and increase its efficiency [19].

New technologies have created opportunities for companies with high-technology services by increasing the value offered to customers [43]. Through Information Technology capabilities companies communicate with customers, and this is a strategic resource that confers competitive advantage, because businesses create the new service according to their customers' needs [38].

With the use of technology, businesses personalize services and adapt them to customer needs. Businesses also use Information Technologies to further innovation, because they can analyze the needs of customers [11].

The Internet offers businesses opportunities to communicate with their customers. It contributes to market research and to the generation and evaluation of ideas in collaborative service planning and proposals for improvement. Virtual communities also help in analyzing the environment and in generating and evaluating ideas [39].

In summary, there are many studies that show how Information Technology capabilities affect services. Businesses should understand what kind of strategy they need to use to develop Information Technology capabilities and employee training programs, to participate in the personalization of new services according to customer needs, and to apply the innovation process in services so as to maximize the value provided to the client [11].

IV. CONCLUSIONS
The objective of this paper was to highlight the importance of alignment and of Information Technology capabilities as two critical success factors of New Service Development. These factors were selected because the percentage of successful new services is low, and this is due to a lack of strategic alignment with the necessary resources. Technological capabilities are important as they influence factors throughout the development process of new services.

To highlight the importance of these factors, a literature review was conducted in accordance with the methodology of Webster and Watson.

The results of the literature review indicate that companies competing in today's changing environment face the challenge of developing new services according to the needs of customers. To achieve this goal, companies align their business strategy with the new service strategy and with the Information Technology strategy. Information Technologies affect all stages of the New Service Development process. They support knowledge sharing about customer needs, and therefore appropriate technological capabilities are required.

Future efforts should be made in the light of literature and empirical review surveys, in order to assess and enlighten this field. Managers are also recommended to consult these surveys to successfully implement new services.

The field of New Service Development is broad, and researchers suggest that the literature review of this field should be associated with others, such as Information Systems and Organizational Theory or Strategy, which will constitute categories of success factors of new services [8]. The percentage of successful new services is low. Managers need to understand the process of New Service Development and to align new service skills of employees, resources of the company, and its strategy in order to increase competitive advantage. Managers should effectively organize the process of New Service Development in line with customers' needs and marketing abilities. Researchers propose to investigate the way in which businesses align their strategy to meet market and customers' needs and maximize their market share [4, 32].

Finally, new services have to respond to an ever-changing technological environment, but also to changeable customer needs. Many services fail, and taking this into account managers need to decide how to properly conduct the procedure to obtain competitive advantage [23].

REFERENCES
[1] R. Adams, J. Bessant, and R. Phelps, "Innovation management measurement: A review", International Journal of Management Reviews, vol. 8, pp. 21-47, 2006.
[2] M. Akoğlan Kozak, and D. Acar Gürel, "Service design in hotels: A conceptual review", Tourism, vol. 63, pp. 225-240, 2015.
[3] I. Alam, and C. Perry, "A customer-oriented new service development process", Journal of Services Marketing, vol. 16, pp. 515-534, 2002.
[4] K. Atuahene-Gima, "Differential potency of factors affecting innovation performance in manufacturing and services firms in Australia", Journal of Product Innovation Management, vol. 13, pp. 35-52, 1996.
[5] C.A. Biancolino, E.A. Maccari, and M.F. Pereira, "Innovation as a Tool for Generating Value in the IT Services Sector", Revista Brasileira de Gestão de Negócios, vol. 15, pp. 410-426, 2013.
[6] A. Boukis, and K. Kaminakis, "Exploring the Fuzzy Front-end of the New Service Development Process - A Conceptual Framework", Procedia-Social and Behavioral Sciences, vol. 148, pp. 348-353, 2014.
[7] P. Carbonell, A.I. Rodríguez-Escudero, and D. Pujari, "Customer Involvement in New Service Development: An Examination of Antecedents and Outcomes", Journal of Product Innovation Management, vol. 26, pp. 536-550, 2009.


[8] P. Carlborg, D. Kindström, and C. Kowalkowski, "The evolution of service innovation research: a critical review and synthesis", The Service Industries Journal, vol. 34, pp. 373-398, 2014.
[9] H.L. Chang, H.E. Hsiao, and C.P. Lue, "Assessing IT-business alignment in service-oriented enterprises", Pacific Asia Journal of the Association for Information Systems, vol. 3, pp. 29-48, 2011.
[10] C.C. Cheng, J.S. Chen, and H.T. Tsou, "Market-creating service innovation: verification and its associations with new service development and customer involvement", Journal of Services Marketing, vol. 26, pp. 444-457, 2012.
[11] J.S. Chen, and H.T. Tsou, "Performance effects of IT capability, service process innovation, and the mediating role of customer service", Journal of Engineering and Technology Management, vol. 29, pp. 71-94, 2012.
[12] U. de Brentani, "New industrial service development: scenarios for success and failure", Journal of Business Research, vol. 32, pp. 93-103, 1995.
[13] J.P. De Jong, and P.A. Vermeulen, "Organizing successful new service development: a literature review", Management Decision, vol. 41, pp. 844-858, 2003.
[14] H. Droege, D. Hildebrand, and M.A. Heras Forcada, "Innovation in services: present findings, and future pathways", Journal of Service Management, vol. 20, pp. 131-155, 2009.
[15] S. Edgett, "The traits of successful new service development", Journal of Services Marketing, vol. 8, pp. 40-49, 1994.
[16] B. Edvardsson, T. Meiren, A. Schäfer, and L. Witell, "Having a strategy for new service development - does it really matter?", Journal of Service Management, vol. 24, pp. 25-44, 2013.
[17] C.M. Froehle, and A.V. Roth, "A resource-process framework of new service development", Production and Operations Management, vol. 16, pp. 169-188, 2007.
[18] F. Gölpek, "Service Sector and Technological Developments", Procedia-Social and Behavioral Sciences, vol. 181, pp. 125-130, 2015.
[19] H.L. Huang, "Performance effects of aligning service innovation and the strategic use of information technology", Service Business, vol. 8, pp. 171-195, 2014.
[20] C. Jaw, J.Y. Lo, and Y.H. Lin, "The determinants of new service development: Service characteristics, market orientation, and actualizing innovation effort", Technovation, vol. 30, pp. 265-277, 2010.
[21] F. Kitsios, and S. Sindakis, "Analysis of Innovation Strategies in Hospitality Industry: Developing a Framework for the Evaluation of New Hotel Services in Thailand", Proceedings of 2nd International Conference on Innovation and Entrepreneurship: ICIE 2014, February 6-7 in Bangkok, Thailand, pp. 136-141, 2014.
[22] F. Kitsios, S. Angelopoulos, and T. Papadogonas, "Strategic innovation in tourism services: Managing Emerging Technologies", International Journal of Management Research and Technology, vol. 3, pp. 217-237, 2009.
[23] S. Kuester, M.C. Schuhmacher, B. Gast, and A. Worgul, "Sectoral Heterogeneity in New Service Development: An Exploratory Study of Service Types and Success Factors", Journal of Product Innovation Management, vol. 30, pp. 533-544, 2013.
[24] J. Kuusisto, A. Kuusisto, and P. Yli-Viitala, "Service development tools in action", The Service Industries Journal, vol. 33, pp. 352-365, 2013.
[25] C. Lee, H. Lee, H. Seol, and Y. Park, "Evaluation of new service concepts using rough set theory and group analytic hierarchy process", Expert Systems with Applications, vol. 39, pp. 3404-3412, 2012.
[26] T. Limpibunterng, and L.M. Johri, "Complementary role of organizational learning capability in new service development (NSD) process", The Learning Organization, vol. 16, pp. 326-348, 2009.
[27] H.L. Melton, and M.D. Hartline, "Customer and frontline employee influence on new service development performance", Journal of Service Research, vol. 13, pp. 411-425, 2010.
[28] L.J. Menor, and A.V. Roth, "New service development competence in retail banking: Construct development and measurement validation", Journal of Operations Management, vol. 25, pp. 825-846, 2007.
[29] N. Nada, and Z. Ali, "Service Value Creation Capability Model to Assess the Service Innovation Capability in SMEs", Procedia CIRP, vol. 30, pp. 390-395, 2015.
[30] M. Ottenbacher, J. Gnoth, and P. Jones, "Identifying determinants of success in development of new high-contact services: Insights from the hospitality industry", International Journal of Service Industry Management, vol. 17, pp. 344-363, 2006.
[31] M.C. Ottenbacher, and R.J. Harrington, "Strategies for achieving success for innovative versus incremental new services", Journal of Services Marketing, vol. 24, pp. 3-15, 2010.
[32] P. Papastathopoulou, G. Avlonitis, and K. Indounas, "The initial stages of new service development: A case study from the Greek banking sector", Journal of Financial Services Marketing, vol. 6, pp. 147-161, 2001.
[33] H. Rusanen, A. Halinen, and E. Jaakkola, "Accessing resources for service innovation - the critical role of network relationships", Journal of Service Management, vol. 25, pp. 2-29, 2014.
[34] H.S. Ryu, and J.N. Lee, "Effect of IT Capability on the Alignment between Business and Service Innovation Strategies", Proceedings of 17th Pacific Asia Conference on Information Systems, June 18-22 in Jeju Island, Korea, pp. 1-17, 2013.
[35] H.S. Ryu, and J.N. Lee, "How Should Service Innovation Strategy be Aligned with Business Strategy?: Focused on the Moderating Effect of IT Capability", Journal of Information Technology Services, vol. 6, pp. 195-229, 2015.
[36] H.S. Ryu, J.N. Lee, and J. Ham, "Understanding the Role of Technology in Service Innovation: A Comparison of Three Theoretical Perspectives", Proceedings of 18th Pacific Asia Conference on Information Systems, June 24-28 in Chengdu, China, pp. 1-17, 2014.
[37] H.S. Ryu, J.N. Lee, and B. Choi, "Alignment Between Service Innovation Strategy and Business Strategy and Its Effect on Firm Performance: An Empirical Investigation", IEEE Transactions on Engineering Management, vol. 62, pp. 100-113, 2015.
[38] P. Schulteß, S. Wegener, A. Neus, and G. Satzger, "Innovating for and with your service customers: An assessment of the current practice of collaborative service innovation in Germany", Procedia-Social and Behavioral Sciences, vol. 2, pp. 6503-6515, 2010.
[39] M. Sigala, "Exploiting web 2.0 for new service development: Findings and implications from the Greek tourism industry", International Journal of Tourism Research, vol. 14, pp. 551-566, 2012.
[40] M. Smith, M. Busi, P. Ball, and R. Van Der Meer, "Factors influencing an organisation's ability to manage innovation: a structured literature review and conceptual model", International Journal of Innovation Management, vol. 12, pp. 655-676, 2008.
[41] D.R. Tranfield, D. Denyer, and P. Smart, "Towards a methodology for developing evidence-informed management knowledge by means of systematic review", British Journal of Management, vol. 14, pp. 207-222, 2003.
[42] C.Y. Tseng, H.Y. Kuo, and S.S. Chou, "Configuration of innovation and performance in the service industry: evidence from the Taiwanese hotel industry", The Service Industries Journal, vol. 28, pp. 1015-1028, 2008.
[43] A.C. Van Riel, J. Lemmink, and H. Ouwersloot, "High-technology service innovation success: a decision-making perspective", Journal of Product Innovation Management, vol. 21, pp. 348-359, 2004.
[44] N. Vitezić, and V. Vitezić, "A Conceptual Model of Linkage Between Innovation Management and Controlling in the Sustainable Environment", Journal of Applied Business Research, vol. 31, pp. 175-184, 2014.
[45] J. Webster, and R.T. Watson, "Analyzing the Past to Prepare for the Future: Writing a Literature Review", MIS Quarterly, vol. 26, pp. 13-23, 2002.
[46] Y. Yang, and A. Kankanhalli, "Investigating the Influence of IT and Other Resources on Service Innovation in Banking", Proceedings of 17th Pacific Asia Conference on Information Systems, June 18-22 in Jeju Island, Korea, pp. 1-11, 2013.


Optimal Strategies for Escaping from the Middle Income Trap: Automotive Supply Chain in Thailand

Kanda Boonsothonsatit*, Orapadee Joochim
Institute of Field Robotics
King Mongkut's University of Technology Thonburi
Bangkok, Thailand
*kanda@fibo.kmutt.ac.th
Abstract— The middle income trap is escaped through an increase in gross domestic product (GDP). In Thailand, this achievement is mainly influenced by the automotive industry as a production base. It has consequences for suppliers providing materials and auto-parts to motor vehicle assemblers (original equipment manufacturers: OEMs) and aftermarket suppliers (replacement equipment manufacturers: REMs). The escape from the middle income trap needs optimal strategies considering all perspectives along the automotive supply chain. These are often generated by a strategic map designed using the balanced scorecard (BSC) tool. However, the BSC is criticized for its limitations of unidirectional causality and internal focus. These are overcome by a causal linkage-based strategic map design along the automotive supply chain in Thailand. Its conceptual framework is proposed in this paper with the purpose of causally linking the automotive strategies to operations. The causal linkages contribute to identifying their root causes, leveraging the vision and mission. This contribution is useful for policy makers to set the optimal automotive strategies, interrelated systematically along the supply chain stages.

Keywords— middle income trap; automotive; supply chain

I. INTRODUCTION
Thailand is one of many countries having upper-middle incomes (USD 4,126 to 12,735), as shown in Fig. 1 [1]. This trap can be escaped by an increase in gross domestic product (GDP), especially in the manufacturing sector. One of the most important industries in Thailand is automotive: it generated 3% of GDP in 2013 and is continuously growing [2]. This is because of increases in domestic and overseas demand for motor vehicles, which led to an increase in motor vehicle production to 1.88 million units in 2014, as shown in Fig. 2 [3].

Fig. 1 Country classifications by income

Fig. 2 Thailand motor vehicle production

The increased demand is caused by production bases moving from the western to the eastern world. Several of the latter are emerging countries (e.g. China and India) that have a lower rate of motor vehicle ownership and a higher rate of motor vehicle consumption when compared to mature markets (e.g. the United States, Europe, and Japan). In terms of motor vehicle ownership, India has the lowest rate (58.9 people per motor vehicle) whereas the highest rate belongs to the United States (1.3 people per motor vehicle) [4].

Apart from the increased demand for motor vehicles, supply chain management (SCM) is one of the most important enablers for escaping from the middle income trap. It also supports the vision of the master plan for the automotive industry 2012 – 2016, namely "Thailand is a global green automotive production base with strong domestic supply chains which create high value added for the country" [4]. The Thailand automotive supply chain covers four stages: domestic and overseas customers; motor vehicle assemblers (original equipment manufacturers: OEMs) or aftermarket suppliers (replacement equipment manufacturers: REMs); and auto-part suppliers (tiers 1 and 2 or 3), as shown in Fig. 3. The 2nd or 3rd tier suppliers procure raw materials (e.g. steel, plastic, and rubber) and machinery (e.g. molds and dies) for manufacturing auto-parts. These are dispatched to the 1st tier suppliers as components for manufacturing powertrain, suspension, electrical and electronic, body, and other modules before being assembled into motor vehicles by OEMs. On the other hand, aftermarket-parts are manufactured by REMs. The end-products (i.e. motor vehicles and aftermarket-parts) can be sold to domestic and overseas customers. These material flows are depicted as arrows in Fig. 3.

Fig. 3 Thailand automotive supply chain

All of the OEMs are multinational companies such as Toyota, Honda, Isuzu, etc. Most of the 1st tier suppliers are overseas enterprises, whereas most of the 2nd or 3rd tier suppliers (including REMs) are Thai small and medium-sized enterprises (SMEs). In 2013, a Thai enterprise generated far less revenue than a joint venture (by a factor of 6.3), as shown in Fig. 4, and a profit margin half the average, as shown in Fig. 5. The lowest profit margin was incurred by Thai small-sized enterprises (0.28%), followed by Thai medium-sized enterprises (1.08%) [5]. To escape from the middle income trap, SCM in the Thailand automotive industry therefore emphasizes Thai SMEs manufacturing auto-parts and aftermarket-parts.

Fig. 4 Revenue per enterprise by nationality

Fig. 5 Profit margin by enterprise size

SCM in the Thailand automotive industry confronts complexity in nature, which negatively impacts the dispersion of supply chain operations from their strategies, which in turn derive from their missions and vision [6]. This is generally overcome using a balanced scorecard (BSC), a technique for turning the supply chain vision into actions (ViA) in the perspectives of learning and growth, internal process, customer, and financial [7]. The BSC-based ViA can be interrelated directly and indirectly [8][9], but is considered causal in upward directions [10][11][12] as shown in Fig. 1. In practice, these interrelations are multi-directional. They are resolved by a technique of causal linkages which is broadly applied [13][14]. In addition, the causal linkages are capable of identifying the root causes which leverage the GDP maximization. The identified root causes help policy makers to set the optimal strategies, interrelated causally and systematically along the automotive supply chain in Thailand, which is the aim of this paper.

II. THREE-STEP METHODOLOGY
In order to achieve the paper's aim, its methodology undergoes three steps (Fig. 6). Firstly, the Thailand automotive supply chain's vision and missions are clarified. Secondly, their related parameters are brainstormed and conceptualized as a unified causal loop diagram (CLD). Thirdly, its root causes (leveraging the clarified vision and missions) are analyzed and identified for setting the optimal strategies.

Fig. 6 Three-step methodology

III. RESULTS AND DISCUSSIONS
Following the three-step methodology, the results are presented and discussed step by step.

A. Clarification of Vision and Mission
The vision is to escape from the middle income trap, and its mission emphasizes maximizing GDP. In order to achieve this vision and mission, a conceptual framework is developed as shown in Fig. 7. The 2nd or 3rd tier suppliers (especially Thai SMEs) are supported by three means: enhancing domestic competitiveness, expanding auto-part exports, and increasing aftermarket-part sales. These means were derived from brainstorming with automotive experts.

Fig. 7 Means to escape from the middle income trap

B. Conceptualization of Parameters
Subsequently, the three means are conceptualized together with their interrelated parameters, as shown in Fig. 8. The interrelations of causes and effects are indicated by arrows and directed by polarities. A polarity is positive if the cause reinforces its effect; otherwise, the polarity is negative. The individual interrelations are linked until cycles are closed. The resulting causal loop diagram (CLD) can contain reinforcing (R) and balancing (B) loops. Nine reinforcing loops and one balancing loop (Figs. 9 and 10) emerge from the vision and mission, namely GDP maximization along the supply chain in the Thailand automotive industry.

Fig. 8 CLD overview (parameters: tax incentives, financial accessibility, government support, technology transfer, company certification (ISO/TS 16949), robotics and automation technology, fluctuation of material price, diversification capability, ASEAN auto-part network, auto-part reliability, productivity, auto-part cost, auto-part supply, auto-part value-added, GDP, FDI, R&D capability, IP restriction)

Fig. 9 illustrates the first four reinforcing loops (R1 to R4), for which government support is a root cause. It has a negative effect on the fluctuation of material price (-) and positive effects on the ASEAN auto-part network (+), financial accessibility (+), and tax incentives (+), respectively. These impact auto-part cost as follows. Lower auto-part cost is directly influenced by lesser fluctuation of material price (+) and a stronger ASEAN auto-part network (-), whereas its indirect influencers include greater financial accessibility (-) and more tax incentives (-). These support more investment in robotics and automation technology (+) for increasing productivity (+) and thereby reducing auto-part cost (-). The reduced cost of auto-parts enhances foreign direct investment (FDI) (+), GDP (+), and in turn government support (+).

Fig. 9 CLD 1 (reinforcing loops R1 to R4)

The last four reinforcing loops (R5 to R8) and one balancing loop (B1) are illustrated in Fig. 10. Their root causes are technology transfer (R5 to R8) and government support (B1). Technology transfer has positive effects on the capabilities of robotics and automation technology, diversification, and research and development (R&D), as well as on quality system management (company certified with ISO/TS 16949). The first two effects support increasing productivity (+), reducing auto-part cost (-), and then expanding FDI (+) before enhancing GDP (+) and government support (+). The enhancement of GDP (+) and government support (+) is also reinforced by the last two effects, which add value to auto-parts. On the other hand, government support worsens the restriction of intellectual property (IP) (-), which causes R&D incapability (+) (B1).

Fig. 10 CLD 2 (reinforcing loops R5 to R8 and balancing loop B1)

C. Set of Optimal Strategies
To leverage the vision and mission, the two identified root causes (government support and technology transfer) are discussed for setting their related strategies. The government mainly emphasizes supporting technology transfer. It aims to increase the number of companies certified with ISO/TS 16949, to enhance the capabilities of process technology (i.e. robotics and automation, and diversification), and to strengthen product innovation (i.e. R&D). The last aim supports overcoming the increasingly serious restriction of IP. These aims leverage the GDP maximization, which contributes to the escape from the middle income trap.
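As an illustration of the loop logic in Section III.B, the following Python sketch encodes causal links with their polarities and classifies a closed loop as reinforcing or balancing by the sign product of its edges. The edge list is an assumed fragment of Fig. 8, written for illustration only; it is not the complete diagram.

# Edges carry a polarity: +1 if the cause reinforces its effect, -1 if it
# opposes it. A closed loop is reinforcing (R) when the product of its
# polarities is positive, and balancing (B) when it is negative.
EDGES = {
    ("government support", "fluctuation of material price"): -1,
    ("fluctuation of material price", "auto-part cost"): +1,
    ("auto-part cost", "FDI"): -1,
    ("FDI", "GDP"): +1,
    ("GDP", "government support"): +1,
    ("government support", "IP restriction"): +1,
    ("IP restriction", "R&D capability"): -1,
    ("R&D capability", "auto-part value-added"): +1,
    ("auto-part value-added", "GDP"): +1,
}

def classify_loop(nodes):
    """Classify a closed loop given as [n1, n2, ..., n1]."""
    sign = 1
    for cause, effect in zip(nodes, nodes[1:]):
        sign *= EDGES[(cause, effect)]
    return "reinforcing (R)" if sign > 0 else "balancing (B)"

print(classify_loop(["government support", "fluctuation of material price",
                     "auto-part cost", "FDI", "GDP", "government support"]))
print(classify_loop(["government support", "IP restriction", "R&D capability",
                     "auto-part value-added", "GDP", "government support"]))

Run on these two loops, the first prints reinforcing (matching the cost-driven loops of Fig. 9) and the second prints balancing (matching B1 in Fig. 10).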

IV. CONCLUSION
This paper proposes a conceptual framework of causal linkage-based strategic map design along the automotive supply chain in Thailand. It primarily aims to set root cause-based strategies that are interrelated systematically along the supply chain stages. The proposed framework is capable of identifying the root cause-based strategies which eventually leverage the clarified vision and mission. Noticeably, the proposed framework is qualitative: it only describes how the supply chain vision and mission, strategy themes, and other parameters are interrelated, and these interrelations cannot yet be expressed as numbers. In order to bridge this gap, the proposed framework can be extended using system dynamics modelling as future work.
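Purely as an illustration of that suggested extension, a system dynamics reading of one CLD loop could be simulated by simple Euler integration, as in the Python sketch below. The equations, coefficients, and starting values are invented for the sketch and are not calibrated to Thai data.

def simulate(steps=10, dt=1.0):
    # Toy two-variable loop: lower auto-part cost attracts FDI,
    # FDI raises GDP, and GDP-funded support reduces cost further.
    gdp, cost = 100.0, 50.0          # arbitrary starting levels
    history = []
    for _ in range(steps):
        fdi = 0.4 * (100.0 - cost)   # assumed FDI response to cost
        gdp += dt * 0.05 * fdi       # assumed GDP growth from FDI
        cost -= dt * 0.002 * gdp     # assumed cost reduction via support
        history.append((gdp, cost))
    return history

for gdp, cost in simulate():
    print(f"GDP = {gdp:7.2f}, auto-part cost = {cost:6.2f}")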


ACKNOWLEDGMENT
The author is indebted to the Thailand Research Fund (TRF), which provides funds for studying a project entitled "Policy research to enhance green automotive industry in Thailand". Moreover, the author is most grateful to the TRF project's working group from King Mongkut's University of Technology Thonburi, Thammasat University, the National Institute of Development Administration, Chiang Mai University, and the Thailand Automotive Institute. Furthermore, the author is thankful to all partners in the I AM (Innovative and Advanced Manufacturing) Research Group, Institute of Field Robotics, King Mongkut's University of Technology Thonburi, for valuable knowledge and information sharing.

REFERENCES
[1] The World Bank, "GDP at market prices (current US$)," 2015.
[2] Office of the National Economic and Social Development Board (NESDB), "National income of Thailand chain volume measures," NESDB Press, Bangkok, 2015.
[3] Thailand Automotive Institute (TAI), "Thailand Motor Vehicle Production," Ministry of Industry Thailand, Bangkok, 2015.
[4] Thailand Automotive Institute (TAI), "Master plan for automotive industry 2012 – 2016," Ministry of Industry Thailand, Bangkok, 2012.
[5] The Office of Industrial Economics (OIE), "Structure of auto-part production in Thailand automotive industry," Ministry of Industry Thailand, Bangkok, 2014.
[6] S. Li, S.S. Rao, T.S. Ragu-Nathan, and B. Ragu-Nathan, "Development and validation of a measurement instrument for studying supply chain management practices," J Oper Manag, 23(6), pp. 618-641, 2005.
[7] R.S. Kaplan, and D.P. Norton, "Alignment: Using the Balanced Scorecard to Create Corporate Synergies," Harvard Business School Press, Boston: MA, 2006.
[8] S.K. Vickery, J. Jarayam, C. Droge, and R. Calantone, "The effect of an integrative supply chain strategy on customer service and financial performance: an analysis of direct versus indirect relationships," J Oper Manag, 21(5), pp. 523-539, 2003.
[9] S.W. Kim, "An investigation of the direct and indirect effect of supply chain integration on firm performance," Int J Prod Econ, 119(2), pp. 328-346, 2009.
[10] H. Nørreklit, "The balance on the balanced scorecard - a critical analysis of some of its assumptions," Journal of Management Accounting Research, 11(1), pp. 65-88, 2000.
[11] L. Bryant, D.A. Jones, and S.K. Widener, "Managing value creation within the firm: an examination of multiple performance measures," Journal of Management Accounting Research, 16, pp. 107-131, 2004.
[12] A. Bento, R. Bento, and L.F. White, "Validating cause-and-effect relationships in the balanced scorecard," Academy of Accounting and Financial Studies Journal, 17(3), pp. 45-55, 2013.
[13] K. Boonsothonsatit, "Supply chain causal linkage-based strategic map design," Journal of Advanced Management Science, in press, 2016.
[14] K. Boonsothonsatit, "Causal linkage-based strategic map design along the robotics and automation supply chain in Thailand," Proceedings of the 2016 International Conference on Industrial Engineering and Operations Management, 2016.


The Design of Models for Coconut Oil Supply Chain System Performance Measurement

Meilizar
Department of Agro-industry Logistic Management
Polytechnic of ATI Padang - Ministry of Industry, Padang, Indonesia.
Tel: (+62) 0751-7055053, Email: iza_zanetha@yahoo.com

Lisa Nesti
Department of Agro-industry Logistic Management
Polytechnic of ATI Padang - Ministry of Industry, Padang, Indonesia.
Tel: (+62) 0751-7055053, Email: lisa_nesti@yahoo.com

Putranesia Thaha
Department of Civil Engineering
Bung Hatta University, Padang, Indonesia
Tel: (+62) 0751-8952381, Email: putranesia@yahoo.com

Abstract− Performance measurement is proposed to drive continuous improvement of a supply chain, so as to reduce cost, provide customer satisfaction, and increase profits. This research developed a performance measurement model for the supply chain system of the coconut oil agro-industry, with the SCOR model as the initial formulation. It focused on evaluating the supply of coconut raw material from suppliers and the fulfillment of coconut oil orders from customers. The initial observation yielded twelve variables: Inventory Accuracy of Raw Material, Internal Relationship, Planning Employee Reliability, Time to Produce a Production Schedule, Source Fill Rate, Mean Lateness of Delivery, Percentage of Correct Delivery Quantity, Percentage of Correct Delivery Quality, Delivery to Commit Date, Order Fulfillment Lead Time, Make Time, and Response Time. Performance was calculated by internal benchmarking against company targets. The calculation results for all variables show values still below the expected targets. Hence, the business process improvement method is implemented to repair the overall system, saving about 120 minutes. The analysis concludes that the coconut oil agro-industry should improve in several aspects: supplier partnership, accuracy of supply reports, standardization of order fulfillment time, sales forecasting, and a sales report system for every period.

Keywords− performance measurement, supply chain system, SCOR, business process improvement.

I. INTRODUCTION
In 2014, the department of agriculture stated that the agro-industry with coconut as a raw material has grown because of the high level of coconut production in Padang Pariaman, which has 38,045 hectares of area and is also a large center of coconut production in West Sumatera province. Although coconut oil is less popular than palm oil, communities in Padang Pariaman still process coconut into cooking oil and medicinal herbs. The Institute for Agricultural Technology of West Sumatera has been providing training to improve production technology and packaging of coconut oil in several coconut oil processing industries in Padang Pariaman, with the expectation of raising the image of coconut oil in society.

Government support motivates the agro-industry to increase the quality of coconut oil production and distribution. Good cooperation among farmers, distributors, and customers is needed to realize a competitive coconut oil agro-industry [6].

The competitiveness and continuity of a coconut agro-industry can be developed with a performance system approach that optimizes the supply chain network. Through supply chain management activity, a company with better supply chain performance has a big chance of winning the competition. Companies have learned and analyzed that profitability can be magnified drastically by focusing on cross-company operations in one unified supply chain [1].

Performance measurement of supply chain systems has been applied in different areas and with different methods. In [4] and [5], supply chain performance measurement was implemented in non-agro-industries. The balanced scorecard method was also applied by [8]. An integration of the SCOR model and AHP is explained by [9]. Then [7] used an integration of the SCOR model and fuzzy AHP for a vegetable supply chain system. Next, [3] shows the importance of


performance measurement in supporting the strategic and operational levels.

The SCOR model can be used as a communication frame that explains the supply chain in detail, defines and categorizes processes, and develops the metrics or measurement indicators needed in supply chain performance measurement. Hence, an integrated measurement is obtained among the supplier, the internal company, and the consumer [2].

This research focused on the evaluation of raw material orders to the supplier and the fulfillment of orders from consumers. It begins with designing the performance measurement model and preparing proposals for the supply chain system to control the ordering of raw material and minimize delays in order fulfillment from customers.

II. RESEARCH METHODOLOGY
This research has been done in several coconut oil processing industries in the Padang Pariaman district. The initial stage collects secondary data related to the documentation system of raw material ordering to the supplier and order documentation from customers. The observation survey is intended to determine value-added activity as the fundamental concept of business process improvement.

Two stages were implemented in this research. The first stage is the development of the reference model of performance measurement. The initial step determines the expected objective, then determines the prospective measurements and Key Performance Indicators (KPIs) that refer to the Supply Chain Operations Reference (SCOR) model, adapted to the conditions of coconut oil industrial processing. In the next sequence, actual performance is calculated using SCOR for every perspective, supported by primary and secondary data of the coconut oil industrial process. Then, an internal benchmark follows once the actual performance value is calculated. An internal benchmark is a comparison of the actual performance with a target determined by the processing industry based on the optimal condition expected for production continuity and customer satisfaction. The second stage is the design of the coconut agro-industry supply chain system. The initial step is mapping the initial business process of the raw material ordering system to the supplier and the fulfillment of consumer orders. Next, process efficiency is determined by identifying value-added and non-value-added processes. The last step suggests processes for improvement, by eliminating a process, simplifying a process, or joining processes, along with a suggested mapping of the business process and improvement of the supply chain system.

III. RESULT
A. Reference Model Development of Performance Measurement
The development of the performance measurement has been done by considering performance measurement models of supply chains developed before. Based on the frame of the SCOR model, the supply chain can be divided into five fundamental management processes (plan, source, make, deliver, and return), which are refined into performance objectives consisting of reliability, responsiveness, flexibility, asset, and cost. Every performance objective can be developed into more specific performance indicators, which must be adapted to the conditions of the company. Effective supply chain performance measurements must be useful to control the percentage of utilization of existing resources. This research focused on the policy stages of the raw material ordering system to the supplier and the fulfillment of customer orders, because the expected target for solving the performance case was related to production time and order delivery time. The cooperation among the supply chain components of the coconut oil processing industry in Padang Pariaman District can be the basis for improving company performance.

Reliability and quickness are important factors in this research, because the coconut commodities that are the object of study are agro products which are not durable and rot easily. The harvest time of coconut strongly influences the quantity and quality of oil production. Hence, efficient and good supply chain management is required to make supply chain activity in the coconut oil agro-industry stable, responsive, and sustainable.

Initial observation of the raw material ordering system and customer order fulfillment shows that supply chain reliability and process time are the two main perspectives that should be measured.
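The SCOR decomposition described above (management process, then performance objective, then KPI) can be pictured as a simple hierarchy. The Python sketch below transcribes this study's selection, which uses only the reliability and responsiveness objectives, and flattens it into the measurement checklist elaborated in Table I; the structure is an editorial illustration, not part of the original model definition.

# The SCOR hierarchy used in this study, encoded as nested dictionaries
# (process -> performance objective -> KPIs) and flattened into a checklist.
SCOR = {
    "plan": {
        "reliability": ["inventory accuracy of raw material",
                        "internal relationship",
                        "planning employee reliability"],
        "responsiveness": ["time to produce a production schedule"],
    },
    "source": {
        "reliability": ["source fill rate",
                        "mean lateness of delivery",
                        "percentage of correct delivery quantity",
                        "percentage of correct delivery quality"],
    },
    "deliver": {
        "reliability": ["delivery to commit date",
                        "order fulfillment lead time"],
        "responsiveness": ["make time", "response time"],
    },
}

for process, objectives in SCOR.items():
    for objective, kpis in objectives.items():
        for kpi in kpis:
            print(f"{process:8s}| {objective:15s}| {kpi}")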


The process time perspective consists of the make time and response time variables, while the supply chain reliability perspective is adopted from the SCOR metrics: inventory accuracy of raw material, internal relationship, planning employee reliability, time to produce a production schedule, source fill rate, mean lateness of delivery, percentage of correct delivery quantity, percentage of correct delivery quality, delivery to commit date, and order fulfillment lead time.

The supply chain reliability perspective was selected because the model should be able to monitor and control: the ordered quantity of coconut raw material already filled by the supplier, the delivery time required by the processing industry, the quantity of coconut oil orders from customers completely fulfilled by the processing industry, and the time required to fulfill the customer order. The process time perspective was selected because lateness of order delivery relates directly to timing, so this perspective can be used to find the performance of every time component and the possible effects of delay. The selected KPIs of the reference model are explained in Table I.

TABLE I. REFERENCE MODEL INFORMATION

Level 1 | Level 2 | Level 3 (KPI variable) | Definition
Plan | Reliability | Inventory accuracy of raw material | Storage percentage in warehouse inventory (physical)
Plan | Reliability | Internal relationship | Relationships between divisions that affect the planning process
Plan | Reliability | Planning employee reliability | Reliability of employees involved in the planning process
Plan | Responsiveness | Time to produce a production schedule | Quickness of the planning process for purchasing material
Source | Reliability | Source fill rate | Percentage of deliveries that can be filled by the supplier
Source | Reliability | Mean lateness of delivery | Average delay of material delivery from the supplier
Source | Reliability | Percentage of correct delivery quantity | Accuracy of delivery quantity from the supplier
Source | Reliability | Percentage of correct delivery quality | Accuracy of delivery quality from the supplier
Deliver | Reliability | Delivery to commit date | Percentage of orders filled before or on the agreed time
Deliver | Reliability | Order fulfillment lead time | Order fulfillment time for material
Deliver | Responsiveness | Make time | Processing time to fill an order, including supporting administration
Deliver | Responsiveness | Response time | Time to respond to the input from one make time to the next make time

B. The Actual Result of Performance Measurement
The results of the performance measurement and benchmarking show that all measured variables are still below the expected targets. The expected target is determined by the company based on the optimal condition expected for production continuity and customer satisfaction. The comparison of the initial performance with the company targets is shown in Table II.

TABLE II. THE ACTUAL RESULT OF PERFORMANCE MEASUREMENT

No. | KPI variable | Performance result (%) | Target (%)
1 | Inventory accuracy of raw material | 70 | 90
2 | Internal relationship | 70 | 90
3 | Planning employee reliability | 65 | 85
4 | Time to produce a production schedule | 60 | 85
5 | Source fill rate | 66.5 | 90
6 | Mean lateness of delivery | 60 | 85
7 | Percentage of correct delivery quantity | 73.8 | 95
8 | Percentage of correct delivery quality | 75 | 90
9 | Delivery to commit date | 72 | 90
10 | Order fulfillment lead time | 65 | 80
11 | Make time | 74.7 | 90
12 | Response time | 85 | 90
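The internal benchmarking step itself is simple arithmetic: each KPI's measured value is compared with the company target, and the shortfalls are ranked to prioritize improvement. A minimal Python sketch, with the figures transcribed from Table II:

# gap = target - actual, in percentage points; larger gaps are priorities
KPIS = {
    "Inventory accuracy of raw material": (70.0, 90.0),
    "Internal relationship": (70.0, 90.0),
    "Planning employee reliability": (65.0, 85.0),
    "Time to produce a production schedule": (60.0, 85.0),
    "Source fill rate": (66.5, 90.0),
    "Mean lateness of delivery": (60.0, 85.0),
    "Percentage of correct delivery quantity": (73.8, 95.0),
    "Percentage of correct delivery quality": (75.0, 90.0),
    "Delivery to commit date": (72.0, 90.0),
    "Order fulfillment lead time": (65.0, 80.0),
    "Make time": (74.7, 90.0),
    "Response time": (85.0, 90.0),
}

gaps = sorted(((target - actual, name)
               for name, (actual, target) in KPIS.items()), reverse=True)
for gap, name in gaps:
    print(f"{name:42s} gap = {gap:4.1f} pts")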


C. Evaluation Result of Initial Business Process Identification
The mapping of the initial business process has been done on the inspected links to find the sequence and detailed activities. The business processes inspected in the coconut oil industry involve the ordering process of raw material to the supplier and orders from customers. The next step is to identify which processes are value-added and which are non-value-added, in order to know the efficiency of the process. Table III shows the evaluation results of the initial business process.

TABLE III. EVALUATION RESULT OF INITIAL BUSINESS PROCESS

Chain: purchase of raw material from the supplier (raw material warehouse and purchasing unit)
No. | Name of business process | Time (min.) | VA/NVA
A1 | Raw material warehouse section verifies the physical supply of raw material and checks it against the supply documentation | 25 | VA
A2 | The warehouse section records the demand for raw material purchase | 10 | VA
A3 | Warehouse gives the record of raw material purchase demand to the purchasing section | 3 | VA
A4 | Purchasing section receives the record of raw material demand from the warehouse and submits it to the operational manager for approval | 15 | VA
A5 | Waiting for purchasing approval of raw material from the operational manager | 75 | NVA-1
A6 | Make a purchase order to the supplier | 20 | VA
A7 | Await raw material from the supplier | 300 | NVA-2
A8 | Receive the delivery order and raw material from the supplier | 10 | VA
A9 | Await the unloading order | 55 | NVA-3
A10 | Check quality and quantity of raw material | 25 | VA
A11 | Record the receipt of raw material | 20 | VA
A12 | Confirm the recorded receipt of raw material to the purchasing division | 10 | VA
A13 | The warehouse waits for approval of the raw material reception transaction | 50 | NVA-4
Processing time (VA): 140 | Total time: 621

Chain: sales of coconut oil to the customer (sales section, finished product warehouse section, and delivery)
No. | Name of business process | Time (min.) | VA/NVA
A1 | Sales section receives the order from a customer | 15 | VA
A2 | Sales section calls the finished product warehouse section to confirm the amount of supply in the warehouse and the due date of the order | 10 | VA
A3 | Sales section waits for confirmation from the finished product warehouse about the availability of coconut oil | 55 | NVA-5
A4 | Sales section makes a purchase order (PO) | 10 | VA
A5 | Sales section waits for staff from the finished product warehouse to take the PO | 35 | NVA-6
A6 | Sales section submits the PO to the warehouse staff | 15 | VA
A7 | Sales section makes the invoice and delivery order | 20 | VA
A8 | Sales section submits the invoice and delivery order to the finished product warehouse | 10 | VA
A9 | Finished product warehouse staff check the amount of coconut oil against the order | 45 | VA
A10 | Finished product warehouse staff record the pick-up of coconut oil in the warehouse order book | 20 | VA
A11 | The finished product warehouse awaits the packing of coconut oil based on the invoice from the sales section | 40 | NVA-7
A12 | Finished product warehouse receives the invoice and delivery order | 5 | VA
A13 | Pack the coconut oil according to the invoice | 35 | VA
A14 | Finished product warehouse calls the delivery section | 10 | VA
A15 | Awaiting the delivery section | 90 | NVA-8
A16 | Delivery section checks the amount | 25 | VA
A17 | Delivery section records the invoice in the delivery book | 20 | VA
A18 | Delivery section awaits the delivery transport | 60 | NVA-9
A19 | The ordered coconut oil is delivered to the customer | 120 | VA
Processing time (VA): 360 | Total time: 630


Table III shows that the chain of the raw material purchasing section to the supplier consists of 13 business processes with a total processing time of 621 minutes, and the chain of coconut oil sales to the customer consists of 19 business processes with a total processing time of 630 minutes. The total processing efficiency is 40%. The non-value-added activities comprise 9 business processes with a total time of 750 minutes.
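The efficiency figure quoted above follows directly from the chain totals of Table III: value-added time divided by total elapsed time. A one-glance check in Python:

# Process efficiency = value-added (VA) time / total elapsed time,
# using the chain totals reported in Table III (minutes).
purchase_va, purchase_total = 140, 621   # raw material purchasing chain
sales_va, sales_total = 360, 630         # coconut oil sales chain

efficiency = (purchase_va + sales_va) / (purchase_total + sales_total)
print(f"overall processing efficiency = {efficiency:.0%}")  # prints 40%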

D. Suggestion for Process Improvement
The next step after identifying value-added and non-value-added processes is to minimize or eliminate the non-value-added processes that waste time without adding value to the end product. These processes can be improved by eliminating a process, simplifying it, or joining processes (using technology to support a process), and also by stopping one process to improve another. The non-value-added processes and their improvements are explained in Table IV.

TABLE IV. IMPROVEMENT OF NON-VALUE-ADDED PROCESSES

NVA | Improvement step | Explanation of improvement step
1 | Standardization | The raw material purchase section has a long waiting time before purchasing because it must wait for the purchase approval of raw material from the operational manager; standardization of decision making should be implemented to eliminate this time. The standardization improvement step is also used for NVA-3, NVA-4, NVA-5, NVA-6, and NVA-7.
2 | Supplier partnership | The waiting time for raw material from the supplier can be minimized by implementing a supplier partnership, such as controlling the arrival schedule of raw material.
3 | Transportation partnership | Transportation waiting time can be minimized by implementing a transportation partnership, such as controlling the delivery schedule of coconut oil. The transportation partnership is also used for NVA-8 and NVA-9.

E. Suggestion of Business Process Mapping
The business process improvement suggested for the raw material purchasing section is standardization of the non-value-added processes, from which a new business process is established. The suggested business process mapping for the raw material purchase section is shown in Figure 1.

Figure 1. The suggested business process mapping for the raw material purchase section (swimlanes: Raw Material Warehouse, Operational Manager, Sales Section, Supplier)


The business process improvement suggested for the sales section is likewise standardization of the non-value-added processes, from which a new business process is established. Figure 2 shows the suggested business process mapping for the sales section.

Figure 2. Suggested business process mapping for the sales section

F. Suggestion for Supply Chain System Improvement
The raw material purchasing section is also important: it must keep raw material available while the production process runs continuously. Meanwhile, the sales section should be able to anticipate and minimize delay time, because it is the front liner with a direct relation to the customer. The summary of suggestions for improvement is shown in Table V.

TABLE V. SUMMARY OF SUGGESTIONS FOR IMPROVEMENT OF THE SUPPLY CHAIN SYSTEM

No. | Section | Improvement | Explanation
1 | Company in general | Integration of performance measurement; business process changes | Measurement by a customized model; elimination of non-value-added processes
2 | Purchasing section of raw material | Purchasing planning; communication with the supplier; monthly purchase report | Purchase report contains targets and realization; shorten the approval time of purchases; simplify the process
3 | Sales section | Sales planning; cooperation with the delivery or transportation section; monthly sales report | Sales report contains targets and realization; implementation of a penalty for delay time

IV. CONCLUSION
The supply chain performance measurement has been done to find the actual performance level of the system. The initial performance levels are: inventory accuracy of raw material 70%, internal relationship 70%, planning employee reliability 65%, time to produce a production schedule 60%, source fill rate 66.5%, mean lateness of delivery 60%, percentage of correct delivery quantity 73.8%, percentage of correct delivery quality 75%, delivery to commit date 72%, order fulfillment lead time 65%, make time 74.7%, and response time 85%.

The suggested business process was improved using the business process improvement concept, which saves 120 minutes. Suggested improvements for the sales section are preparing a sales plan, preparing a monthly sales report system, and developing communication with the transportation section during customers' ordering of coconut oil. Suggested improvements for the purchasing section are speeding up the purchasing approval process for raw material by the operational manager, improving sales support, and developing communication with the supplier.

REFERENCES
[1] A. Vereecke, and S. Muylle, "Performance Improvement Through Supply Chain Collaboration in Europe," International Journal of Operations & Production Management, vol. 26, pp. 1176-1198, 2006.
[2] D. Simchi-Levi, P. Kaminsky, and E. Simchi-Levi, Designing and Managing the Supply Chain: Concepts, Strategies, and Case Studies, McGraw-Hill Higher Education, Singapore, 2000.
[3] J.G.A.J. Van Der Vorst, "Performance Measurement in Agrifood Supply Chain Networks: An Overview," in C.J.M. Ondersteijn, J.H.M. Wijnands, R.B.M. Huirne, and O. van Kooten (Eds.), Quantifying the Agri-food Supply Chain, Dordrecht: Springer/Kluwer (Wageningen UR Frontis Series 15).
[4] Sudaryanto and R. Bahri, "Performance Evaluation of Supply Chain Using SCOR Model: The Case of PT. Yuasa, Indonesia," Proceedings of the International Seminar on Industrial Engineering and Management, pp. 49-55, 2007.
[5] R.R.D. Rahayu, "Pengembangan Model Pengukuran Kinerja Rantai Pasok: Studi Kasus Direktorat Aerostructure PT. Dirgantara Indonesia," Master's thesis, Program Pasca Sarjana Teknik dan Manajemen Industri, Institut Teknologi Bandung, 2009.
[6] N. Hosen, "Profil Usahatani Kelapa di Sumatera Barat," Jurnal Teknologi Pertanian, vol. 5, pp. 15-23, 2009.
[7] A. Setiawan, Marimin, Y. Arkeman, and F. Udin, "Integrasi Model SCOR dan Fuzzy AHP untuk Perancangan Metrik Pengukuran Kinerja Rantai Pasok Sayuran," Jurnal Manajemen dan Organisasi, vol. 1, pp. 148-161, 2010.
[8] D. Rahmayanti, and U. Putri, "Perancangan Model Pengukuran Kinerja Rantai Pasok Semen dengan Balanced Scorecard (BSC)," Jurnal Optimisasi Sistem Industri, vol. 10, pp. 135-144, 2011.
[9] A. Lutfiana, and Y. Perdana, "Pengukuran Performansi Supply Chain dengan Pendekatan Supply Chain Operation Reference (SCOR) dan Analytical Hierarchy Process (AHP)," Jurnal Manajemen dan Organisasi, vol. 2, pp. 57-72, 2012.


Application of Conjoint Analysis in Establishing Aviation Fuel Attributes for Air Travel Industries

Karla Ishra S. Bassig and Hazeline A. Silverio
Department of Industrial Engineering
College of Engineering, Adamson University
Manila, Philippines

Abstract— With oil being a non-renewable energy source, buyers of aviation fuel are more inclined to purchase Jet A-1 from oil companies that offer the best jet fuel. But the fact of the matter is that aviation fuel has the same content even when purchased halfway across the world. Although price is an apparent factor that airlines consider when deciding which oil company to choose as a supplier, there are ambiguous factors that need to be established so that aviation fuel organizations can pursue continuous product development. This study used conjoint analysis as a tool to establish the attributes considered by airlines in deciding which oil company will best meet their needs. As a result, the following six attributes of aviation fuel were identified: availability, price, accessibility, manpower, equipment and facilities, and fuel quality and safety. The results indicate that fuel quality has the greatest influence on the respondents' purchasing intentions, followed by price, availability, equipment and facilities, and accessibility. Manpower was the least important attribute with regard to the airlines' buying intention. Based on the results of the data analysis, it is concluded that conjoint analysis is an appropriate tool to identify the key attributes of aviation fuel. Using the results presented in this study, aviation fuel companies can better tailor their fuel services to meet the needs of airlines and thus increase sales.

Keywords—Conjoint Analysis; Aviation Fuel; Purchasing Intention; Consumer Preference; Part-worth

I. INTRODUCTION
Jet fuel (Jet A-1) is produced by only the three major oil companies in the Philippines, namely Petron, Shell, and Chevron. Jet fuel from these companies is then stored under the supervision of the Joint Oil Companies Aviation-Fuel Storage Plant (JOCASP). From this point of view, despite the different companies that supply jet fuel, the reality is that they all provide the same oil to different airlines. A bidding system is regulated for the aviation organizations operating in the Philippines to decide which oil company to go to for their jet fuel supply and to establish a contract between the supplier and the end-user with a defined timeframe that occasionally changes.

Although they literally supply the same jet fuel, the airlines that operate locally consider factors aside from price and availability in making the oil supplier choice. These are uncertain factors that the researchers expect to identify by using conjoint analysis as an analytical tool for aviation fuel suppliers; this will help them improve their product management and service based on the set of criteria from the conjoint analysis that the researchers are seeking to establish. With over 33 airlines operating in the local aircraft industry, it is an unexpected reality that the Philippines has only one source of jet turbine engine fuel (Jet A-1), of which all the aviation oil companies are beneficiaries. By means of a bidding system, the crude oil trade commences between the local airlines and the aviation oil depots. While these aviation oil entities supply the same kind of fuel, there are uncertainties as to why airlines change their decisions on their fuel supplier. This can come as an opportunity for oil companies and other airlines, but without a doubt difficulties will emanate along with the seeming opportunities because of the unsystematic bidding approach both parties are currently bound by.

Based on the aforementioned discussion, the research objectives of the study are as follows: 1) Identify the key attributes that are taken into consideration by airlines operating locally in assessing a viable jet turbine engine fuel supply, using conjoint analysis. 2) Establish decision criteria and metrics based on the key attributes that heavily influence the airline organizations' purchasing intention. 3) Develop an integrated assessment of the jet fuel service and define an approach that will enable local jet fuel suppliers to optimally provide for the needs of the airlines operating in the Philippines. The focal point of this study is to help both the airline organizations and aviation fuel companies improve their decisions and product management.

II. REVIEW OF RELATED LITERATURE

A. Conjoint Analysis
Getting the market to make trade-offs helps product managers and other corporate decision-makers answer questions like "How do you know what the market wants?" or "What market segments exist, and what do those segments prefer?" [21].

Conjoint analysis helps in understanding the trade-offs one should make by recognizing the trade-offs the market will make and then applying the newly acquired market knowledge to the organization's revenue, profit, or share objective. Conjoint analysis has been effectively applied in many industries, such as financial services, health care, real estate, air travel, electronics, computers, and smartphones. Because conjoint analysis helps in comprehending a market's preferences, it is applicable to a variety of difficult aspects of the job, including competitive positioning, pricing, product line


analysis, product development, segmentation, and resource allocation; conjoint analysis done right is impactful [21].

In a more dictionary-definition approach, "Conjoint analysis is a set of market research techniques that measures the value the market places on each feature of your product and predicts the value of any combination of features. Conjoint analysis is, at its essence, all about features and trade-offs" [21]. With conjoint analysis, one must ask questions that force respondents to make trade-offs among features, determine the value they place on each feature based on the trade-offs they make, and simulate how the market reacts to the various feature trade-offs considered. Extensive attention has been given to this technique both in academia and in industry to quantify preferences through utility trade-offs among products and services in particular [26].

These preferences are assumed to be influenced by the market's subjective perceptions of the presented options. Consequently, the preference structure is a function of the individual's economic, social, and cultural conditions, which affect the organization's decision. Public preferences have an important role in decision-making because they may in fact highlight a stark policy trade-off. In choice-based conjoint analysis, a set of attributes and their respective levels define the respondents' choices. Distinctively, the combinations of attribute levels characterize the choice tasks in the conjoint surveys conducted. A conjoint study leads to a set of part-worths or benefits, which measure the relative appeal or worth of an attribute level. The higher the benefit, the more desirable the attribute level [21].
attribute level [21]. goals and objectives. Though there are several organizations
that attempt to set standards in the purchasing process,
B. Decision Trade-Off processes can vary greatly between organizations; Centralized
and Decentralized are the two types of purchasing. Typically
Trade-offs is the heart of product management. Whether the the word purchasing is not used interchangeably with the word
objective is to increase the market share, profit margin or procurement, since procurement typically includes expediting,
revenue, the deciding person makes trade-offs—quality vs. supplier quality, and traffic and logistics (T&L) [10].
cost, time to market vs. breadth of features, richness of the
offering vs. ease of use, etc. Decision trade-off involves Specifically, procurement involves the process of acquiring
making a choice between one quality and aspect of an entity to goods, works, and services, covering both acquisitions from
gain something in return of another quality or entity of an third parties and from in-house providers. This process starts
entity. More particularly, trade-off decisions require a balanced from the identification of the needs, through to the end of
bearing of which when one commodity increases, its equivalent service contracts or the end of the useful life of an asset.
entity must decrease. A trade-off decision generally requires Procurement also involves options (supplier) appraisal and the
the deciding person to have full comprehension of both the critical “make or buy” decision which may result in the
advantage and disadvantage of a specific option [26]. provision of services in-house in appropriate circumstances
[29].
C. Preferences The need to contribute insights and describe patterns of
People’s preferences are often based on their own human behavior in organizations or networks against different
condensation of past experience. Lichtenstien & Slovic [27], contextual backgrounds is what the academic research about
defines preference technically as “the ordering of options (e.g., purchasing and supply management wants to address [29].
prices and choices) available for decision making which will
result in systematic responses.” The difference between the We are aiming to contribute and build theories that will
first and last option of a prospective customer is based on the help us better understand purchasing and supply chain
important attribute related to each choice [47]. There are management phenomena, in order to provide different
different types of preferences; preferences based on scenario, purchasing department of different airlines with some clear
object preference, model preference, etc. Experienced guidelines to make better decisions.
customers expect that the options and choices currently
available will lead to changes in their preference [52]. Potential III. RESEARCH METHODOLOGY
customers’ actions will differ according to what they want, and
therefore the central objects and scenarios of their preference A. The conceptual model
will sway their decisions. How logically preferences are Fig. 1 shows the conceptual model used in this study to
connected to each other makes them rational or true based, as establish the attributes.

Fig. 1. Conceptual Model

B. Research Design
The purpose of this study is to identify the attributes of aviation fuel that affect the purchasing intentions of air travel organizations. Conjoint analysis is adopted to measure potential customer preferences for Jet-A1. Conjoint analysis has been widely used as a marketing tool to explore consumer preferences since it was first introduced. Conjoint analysis requires respondents to rate different scenarios with varying combinations of attribute levels; in this study, combinations of fuel attributes are made into scenarios. Moreover, conjoint analysis creates a way to plot the strategic thinking of customers, as it quantifies subjective opinions. This reveals hidden motivations in purchasing and also forces the respondents to assess the attributes individually, according to their respective importance, through the combinations created for each scenario. Trade-off decisions are required of the respondents in order to determine the weight of one attribute versus another.

C. Sampling
For this study, the researchers deemed a survey the appropriate tool, and it was therefore used as the main sampling tool. Airlines operating locally were chosen as the sampling frame because this industry consists of all the affiliated companies of the population of interest. The survey was designed with the aim of describing not just the sample, which comes from the sampling frame, but also the larger population. The survey was designed through a set of questions with criteria formulated by the researchers from an interview with these airlines. All data were requested from the air travel industry for analysis. The proponents requested historical data on previous jet fuel suppliers and any details concerning the decision on the supplier. The interview was expected to yield the key attributes that influence the airlines’ purchasing intention, and the survey was then constructed based on the output of the interview conducted. The survey, using conjoint analysis, consists of a series of scenarios for respondents to assess. For this study, each scenario is made up of a combination of the focal jet fuel supplier attributes, each of which is measured at two levels: high and low (pertaining to the importance of the given factor).

A fractional factorial design was used by the researchers to yield the n number of scenarios; the fractional factorial design was specially selected to maximize the attributes explored while minimizing the number of scenarios. The key attributes and their characteristics used in this experiment function as the independent variables. As for the dependent variables, the researchers examined the respondents’ purchase intentions with regard to the jet fuel combinations described in each scenario; the subjects were asked to order the eight profiles from the most preferred to the least preferred.

D. Participants
The sample size for this research study is unconventional due to the classified and specialized nature of the aviation industry. Random sampling techniques are not applicable for determining the number of respondents for this study because of the limited number of staff in fuel operations, and because only a small fraction of personnel from the fuel operations department are able to answer the survey with credible background knowledge and overall understanding of the entire fuel operation. The stratum selected by the researchers is personnel from the top-level management of the fuel operations department; these are usually the directors, managers and senior vice presidents of the fuel operations department, and specialists for Jet-A1 fuel.

A total of ten (10) respondents were gathered by the researchers: three (3) from Philippine Airlines, six (6) from Cebu Pacific, and one (1) from Air Asia. Although the small number of respondents in the study is questionable, as it may not produce credible data, the researchers considered the potential respondents’ backgrounds and responsibilities and how these may affect the survey results. As established in previous studies, quality sampling may be characterized by the number and selection of subjects or observations, and obtaining a sample size that is appropriate in both regards is critical for many reasons. While a large sample size is more representative of a population, the researchers determined the sample size based on the number of fuel department personnel capable of giving more comprehensive data. A quality sample size must also consider the rate of response: incomplete or illegible responses are not useful observations. Thus, the total sample size must account for the issues that would arise if the researchers took as many responses as possible without filtering them through purposive sampling. For qualitative studies, where the goal is to “reduce the chances of discovery failure,” purposive sampling was the method chosen by the researchers. Purposive sampling is a common strategy in qualitative research studies, as it places respondents in groups relevant to the criteria that fit the research question. Factors that affect the sample size include the respondents’ job descriptions and responsibilities, work experience in the specific department, and decision-making scope. The researchers determined the essential inclusion and exclusion criteria for the study population such that findings can accurately generalize or specify results to the target group, rather than setting a target number of respondents.

Fig. 2. Factor Development Model

E. Development of Factors
Price and availability are the most recurring and constant factors from the selected pool of respondents, as the researchers expected from the related literature on vehicle purchasing intention. The researchers correlated the buying intentions of vehicle buyers with an airline’s intention to buy oil, as shown in Figure 2. Both aforementioned factors, price and availability, were validated by the interviews performed by the researchers; in addition, other factors were derived from the discussion. The survey was then developed from the exchange with the top-level management of fuel operations for each airline.

Factor 1: Lower prices for jet fuel are associated with more positive buying intention.
Buying jet fuel at a lower price than the competitors offer attracts the interest of all the airlines interviewed by the researchers. This is because every organization targets a cost-effective system when it comes to company expenditures.

Factor 2: A more available jet fuel is associated with more positive buying intention.
An airline cannot purchase Jet A-1 if the oil company, whether Shell, Petron or Caltex, cannot meet the airline’s demand. An oil company might be able to offer oil at a low price, but if the oil is not available, purchasing the fuel is not a solution; the bigger the oil station and its capability to store a larger volume of oil, the more likely it is that the airline will purchase the fuel from that oil company.

Factor 3: A more accessible jet fuel is associated with more positive buying intention.
The transportation fee is one of the add-ons included in the final price of the fuel and services that an oil company accounts for. Naturally, the farther away the oil depot is, the higher the transportation fee added to the price will be.

Factor 4: Better equipped and more capable manpower running the aviation fuel company is associated with more positive buying intention.
An equipped and skilled workforce is directly proportional to quality fuel service, from creating excellent Jet-A1 content down to superior performance in transferring the fuel to its end user (the airlines’ airplanes). A capable staff likewise entails a great ability to handle fuel superbly.

Factor 5: Better and more advanced equipment is associated with more positive buying intention.
The instruments and devices used by the oil company in storing and maintaining the oil are a factor that airlines consider because of their effect on Jet-A1. Equipment and facilities that are not maintained and that lag behind technological advancements negatively influence the content and quality of the fuel itself. Therefore, the overall amenities and overhaul of the oil company are regarded as an important buying consideration by air travel organizations.

Factor 6: Fuel service that passes, if not exceeds, the quality standards of the air travel organization is associated with more positive buying intention.
Air travel organizations have standards to be followed, and an oil company’s oil content and other oil services need to pass, if not exceed, the quality standards set by the airline. Fuel quality entails the oil content of Jet-A1 and the overall performance of service of companies such as Shell, Petron or Chevron.

F. Variables
The factors used in this research are based on the answers from the interview conducted by the researchers. The effects of price, availability, accessibility, manpower, equipment and fuel quality on purchase intention with regard to Jet-A1 fuel were all assessed by asking the respondents to react to eight (8) scenarios with various combinations of stimuli. Table I gives a summary of the independent variables used in this experiment and their specific characteristics.

TABLE I. INDEPENDENT FACTORS & THEIR DEFINITIONS

Factors       | Definition
Price         | Final cost of fuel according to the oil company.
Availability  | Available supply of oil that the oil company can provide; this is a question of whether Shell, Petron and/or Chevron is able to meet the fuel demand of an airline.
Accessibility | The ease of access and convenience in attaining the Jet A-1.
Manpower      | Educational attainment and capabilities of the oil company's personnel with regard to providing quality service; this also includes the trainings and seminars their staff has undergone.
Equipment     | Instruments and devices used by the oil company in storing and maintaining the Jet A-1 fuel; the amenities and overhaul of the oil company are also included in the factor's scope.
Fuel Quality  | Whether the company's oil content and other fuel services pass the quality standards of the air travel organization apropos fuel operations; this factor includes the safety measures and/or risk management undertaken by the oil company with regard to fuel handling; it can also mean the ability of the oil company to deliver aviation fuel on time.

A survey using conjoint analysis provides a series of scenarios for respondents to consider. In this study, each scenario is made up of a combination of the six crucial fuel attributes, each of which is measured at two levels. If a full factorial design were used and all possible combinations were presented, the respondents would have to consider 64 scenarios (2 × 2 × 2 × 2 × 2 × 2, or 2^6). The researchers decided to minimize the number of scenarios, as a 64-scenario survey would be too large for a single respondent to evaluate. The scenarios were reduced to eight, with different combinations of the stimuli, while still maintaining orthogonality among the levels. To achieve this smaller number of scenarios, a fractional factorial design is adopted so that not all possible combinations are used; instead, profiles are specially selected to maximize the attributes explored while minimizing the number of profiles. Thus, this study adopted an orthogonal array design to construct the eight hypothetical Jet-A1 profiles. Fractional factorial designs can be expressed by the formula L^(k−p), in which:
• L: the number of levels of each attribute
• k: the number of factors investigated
• p: the size of the fraction of the full factorial used.
In this study, we use 1/8 of the full factorial design, or 2^(6−3). Across the eight profiles, each attribute takes its High and Low values in a balanced way. Table II shows the eight distinctive scenarios created for this work; the related combinations of attributes were used in the survey, and a construction of such a design is sketched after the table.

TABLE II. SCENARIO COMBINATION

Card ID | Price | Availability | Accessibility | Manpower | Equipment | Fuel Quality
1       | Low   | Low          | Low           | High     | High      | Low
2       | High  | Low          | High          | Low      | High      | Low
3       | High  | High         | Low           | High     | Low       | Low
4       | Low   | Low          | High          | High     | Low       | High
5       | High  | High         | High          | High     | High      | High
6       | High  | Low          | Low           | Low      | Low       | High
7       | Low   | High         | High          | Low      | Low       | Low
8       | Low   | High         | Low           | Low      | High      | High
IV. RESEARCH RESULT

A. Part-worth scores
A part-worth score, or utility score, was established for each attribute after generating the Conjoint Analysis Command Syntax; it shows whether an attribute adds to a scenario’s total worth or subtracts from it. Most attributes had positive and higher scores if present or at a high level, and negative or lower scores if absent or at a low level. Based on the results, it could thus be determined which of the attributes had the strongest influence on the respondents’ choice when deciding which jet fuel supplier to choose.

TABLE III. UTILITY SCORES OF THE ATTRIBUTES

Attribute     | Level | Utility Estimate | Std. Error
Price         | Low   | −1.250           | 0.050
              | High  | −2.500           | 0.100
Availability  | Low   | 1.650            | 0.050
              | High  | 3.300            | 0.100
Accessibility | Low   | 0.800            | 0.050
              | High  | 1.600            | 0.100
Manpower      | Low   | 0.100            | 0.050
              | High  | 0.200            | 0.100
Equipment     | Low   | 0.850            | 0.050
              | High  | 1.700            | 0.100
Fuel Quality  | Low   | 2.800            | 0.050
              | High  | 5.600            | 0.100
(Constant)    |       | −2.925           | 0.185

Table III lists the utility scores for all six attributes. The Price attribute had a score of −1.250 when low and −2.500 when high, as this attribute has a reverse effect; therefore, it can be said that the respondents were attracted to oil companies that offer lower prices. The second attribute, Availability, had a very high score of 3.300 when the oil company can provide the amount of oil needed by the airline, and a score of 1.650 when it cannot. It can thus be stated that the respondents prefer a high level of availability over a low level. A high level of Accessibility contributed a score of 1.600, while a low level had an effect of 0.800; these results indicate that a high level of accessibility is preferred by the respondents. In terms of Manpower, the results showed a score of 0.200 for high manpower and 0.100 for low manpower, which indicates that the respondents prefer an oil company with well-trained employees. With regard to Equipment, the scores were 1.700 when labeled as high and 0.850 when labeled as low; as a result, it can be said that a high level of equipment is preferred by the respondents. Lastly, the Fuel Quality attribute had a score of 2.800 when low and 5.600 when high, so it can be said that the respondents value the high quality of fuel offered by the oil companies. The part-worth utility scores all indicated that the respondents preferred a high level of availability, accessibility, manpower and equipment, and a low level of price. Therefore, the premises established by the researchers in the factor development (Section III-E), Factors 1–6, were supported.
While the study focuses on acquiring the utility scores of worths. Data collection is, therefore more holistic and more
each attribute, the researchers applied conjoint analysis to ecologically valid, because complete situations, rather than
calculate an averaged importance score out of a total of 100. single features, are considered.
For instance, if an attribute is given a score of 50, this would
mean that it contributed 50% to the outcome of the consumer’s In this term, conjoint analyses provide an additional value
decision making. The tallied result from the initial survey is to acceptance research since it is possible to obtain an accurate
expected to correlate with the averaged importance score and estimation of user trade-offs between single device
yield a similar outcome. characteristics. Furthermore, it is possible to determine a valid
model of consumer judgments. It is clear that some jet fuel
characteristics are more relevant in the decision process of the
B. Discussion
From conjoint analysis, the attributes developed has proven
that a set of key attributes do affect decisions of airlines in
choosing which oil company to opt for in terms of aviation fuel
supply presented in Table IV confirms that these attributes hold
a weight over the airlines’ decision, thus supporting the
purpose of the study.

V. CONCLUSION AND RECOMMENDATION

A. Conclusion
This study focused on identifying the key attributes of
aviation fuel using local airlines operating in the Philippines as
the subjects. Rather than merely surveying respondents to find
out what they consider to be the most important attributes when
making a decision about choosing a Jet-A1 supplier, this study Fig. 3. Average Importance Score

267
Back to Contents

airlines than others. Although the attribute price has the second [17] G. Fischer and S. Hawkins, “Strategy compatibility, scale compatibility,
heaviest importance weight among the other factors, compared and the prominence effect,” J. Exp. Psychol.: Hum. Percept. Perform.,
vol. 19, no. 3, p. 580–597, 1993.
to Table III, price had a negative value and actually had the
[18] S. Gensler, O. Hinz, B. Skiera, and S. Theysohn, “Willingness-to-pay
lowest utility score. This means that consumers are willing to estimation with choice-based conjoint analysis: Addressing extreme
trade-off low price for services that had high level from any of response behavior with individually adapted designs,” Eur. J. Oper.
the five attributes; when combined with other attributes and Res., vol. 219, no. 2, p. 368–378, 2012, doi:10.1016/j.ejor.2012.01.002.
given a level as a complete scenario, consumers suddenly shift [19] W. Goldstein and H. Einhorn, “Expression theory and the preference
their purchasing intention for a more important attribute. reversal phenomena,” Psychol. Rev., vol. 94, no. 2, p. 236–254, 1987.
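The importance scores above follow the standard conjoint definition: an attribute's utility range as a share of the sum of all ranges, averaged over respondents. Below is a minimal sketch of that formula applied to the aggregate utilities of Table III; because the published scores are averages of per-respondent importances, this aggregate computation approximates, but need not exactly reproduce, Table IV.

```python
# Utility estimates (Low, High) per attribute, taken from Table III.
utilities = {
    "Price":         (-1.250, -2.500),
    "Availability":  (1.650, 3.300),
    "Accessibility": (0.800, 1.600),
    "Manpower":      (0.100, 0.200),
    "Equipment":     (0.850, 1.700),
    "Fuel Quality":  (2.800, 5.600),
}

# Importance = utility range / sum of all ranges, scaled to 100.
ranges = {a: abs(hi - lo) for a, (lo, hi) in utilities.items()}
total = sum(ranges.values())
importance = {a: 100.0 * r / total for a, r in ranges.items()}

for attr, score in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{attr:13s} {score:6.3f}")
```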
B. Discussion
The conjoint analysis proved that a set of key attributes does affect the decisions of airlines in choosing which oil company to opt for in terms of aviation fuel supply; the result presented in Table IV confirms that these attributes hold weight in the airlines’ decision, thus supporting the purpose of the study.

V. CONCLUSION AND RECOMMENDATION

A. Conclusion
This study focused on identifying the key attributes of aviation fuel, using local airlines operating in the Philippines as the subjects. Rather than merely surveying respondents to find out what they consider to be the most important attributes when making a decision about choosing a Jet-A1 supplier, this study sought to measure a dependent variable, in this case purchase intention, toward eight scenarios with different combinations of the six main attributes. This conjoint approach allowed the respondents to consider more realistic situations and to take into account what they believe to be important. As a result, the outcomes of this study are able to show the relative importance of each attribute (i.e., the part-worth utility score), rather than just a comprehensive preference ranking evaluation.

Using conjoint analysis, the researchers were able to identify the significant key attributes of jet fuel that heavily influence the purchasing intention of an airline. Table V and Figure 3 show the attributes and their respective importance scores; the attributes are listed according to their importance, with the first attribute having the heaviest importance weight and the last the least. The conjoint analysis conducted in this study investigates preferences in the jet fuel industry. Within the conjoint analysis, respondents rate the oil company’s Jet-A1 services consisting of multiple attributes. Thus, in conjoint measurement, a respondent considers all factors concurrently for each attribute combination. Using this analysis, the user evaluations can be decomposed into separate utilities, or part-worths. Data collection is therefore more holistic and more ecologically valid, because complete situations, rather than single features, are considered.

In this sense, conjoint analyses provide additional value to acceptance research, since it is possible to obtain an accurate estimation of user trade-offs between single device characteristics. Furthermore, it is possible to determine a valid model of consumer judgments. It is clear that some jet fuel characteristics are more relevant in the decision process of the airlines than others. Although the attribute price has the second-heaviest importance weight among the factors, in Table III price had negative values and actually had the lowest utility score. This means that consumers are willing to trade off a low price for services at a high level on any of the other five attributes; when combined with other attributes and given a level as a complete scenario, consumers readily shift their purchasing intention toward a more important attribute.

Fig. 3. Average Importance Score

ACKNOWLEDGMENT
There are a number of people without whom this research might not have been written and to whom we are greatly indebted: to our parents, who never stopped pushing us to do our best and have been an inspiration to us; to our adviser, Engr. Venusmar Quevedo, who has opened her life to us for over a year; and to all the local airline companies that opened their doors and gave us their time so we could gather essential data. And of course, we would never have accomplished any of this without the help of our Almighty God.

REFERENCES
[1] L. A. Acosta, E. A. Eugenio, N. H. Enano, D. B. Magcale-Macandog, B. A. Vega, P. B. M. Macandog, and W. Lucht, “Sustainability trade-offs in bioenergy development in the Philippines: An application of conjoint analysis,” Biomass Bioenergy, vol. 64, pp. 20–41, 2014, doi:10.1016/j.biombioe.2014.03.015.
[2] W. B. E. Al-Othman, H. M. S. Lababidi, I. M. Alatiqi, and K. Al-Shayji, “Supply chain optimization of petroleum organization under uncertainty in market demands and prices,” Eur. J. Oper. Res., vol. 189, no. 3, pp. 822–840, 2008, doi:10.1016/j.ejor.2006.06.081.
[3] M. Anthony, “I need to know,” 1999. [Online]. Available: http://drsmusic.com/portal/PDFS/4133/i_need_to_know_piano.pdf
[4] T. Bahill, “Systems and Industrial Engineering,” University of Arizona, 2010. [Online]. Available: http://www.sie.arizona.edu/sysengr/slides
[5] B. Beamon, “Measuring supply chain performance,” Int. J. Oper. Prod. Manage., vol. 19, no. 3, pp. 275–292, 1999.
[6] D. Bendall-Lyon and T. L. Power, “The impact of structure and process attributes on satisfaction and behavioral intentions,” J. Serv. Mark., vol. 18, no. 2, pp. 114–121, 2004.
[7] L. Bergkvist and J. R. Rossiter, “The role of ad likability in predicting an ad’s campaign performance,” J. Advert., vol. 37, no. 2, pp. 58–97, 2008.
[8] R. D. Blackwell, P. W. Miniard, and J. F. Engel, Consumer Behavior, 10th ed. CENGAGE Learning Asia Pte Ltd, 2012.
[9] J. Chai, J. N. K. Liu, and E. W. T. Ngai, “Application of decision-making techniques in supplier selection: A systematic review of literature,” Expert Syst. Appl., vol. 40, no. 10, pp. 3872–3885, 2013, doi:10.1016/j.eswa.2012.12.040.
[10] A. Chand, K. Satyanarayana, and V. S. Plant, A Study on E.R.P in Visakhapatnam Steel Plant With Advanced Business Application Programming, 2013.
[11] N. Cheremisinoff and P. Rosenfeld, “The petroleum industry,” in Best Practices in The Petroleum Industry, ch. 1, 2009, doi:10.1016/B978-0-8155-2035-1.10001-6.
[12] M. Dabhilkar, “Trade-offs in make-buy decisions,” J. Purch. Supply Manage., vol. 17, no. 3, pp. 158–166, 2011, doi:10.1016/j.pursup.2011.04.002.
[13] G. Dickson, “An analysis of vendor selection systems and decisions,” J. Purch., vol. 2, pp. 5–17, 1966.
[14] D. Egonsson, Preference and Information. Great Britain: Ashgate, 2007.
[15] L. Ellram, “The supplier selection decision in strategic partnerships,” J. Purch. Mater. Manage., vol. 26, no. 4, pp. 8–14, 1990.
[16] G. Fischer, Z. Carmon, D. Ariely, and G. Zauberman, “Goal-based construction of preferences: Task goals and the prominence effect,” Manage. Sci., vol. 45, no. 8, pp. 1057–1075, 1999.
[17] G. Fischer and S. Hawkins, “Strategy compatibility, scale compatibility, and the prominence effect,” J. Exp. Psychol.: Hum. Percept. Perform., vol. 19, no. 3, pp. 580–597, 1993.
[18] S. Gensler, O. Hinz, B. Skiera, and S. Theysohn, “Willingness-to-pay estimation with choice-based conjoint analysis: Addressing extreme response behavior with individually adapted designs,” Eur. J. Oper. Res., vol. 219, no. 2, pp. 368–378, 2012, doi:10.1016/j.ejor.2012.01.002.
[19] W. Goldstein and H. Einhorn, “Expression theory and the preference reversal phenomena,” Psychol. Rev., vol. 94, no. 2, pp. 236–254, 1987.
[20] P. H. Eugene (Gene) Baker III, University of North Florida, 1999. [Online]. Available: http://www.unf.edu/~gbaker/Man6204/Decision.PDF
[21] M. Halme and M. Kallio, “Likelihood estimation of consumer preferences in choice-based conjoint analysis,” Eur. J. Oper. Res., vol. 239, no. 2, pp. 556–564, 2014, doi:10.1016/j.ejor.2014.05.044.
[22] C. Hsee, “The evaluability hypothesis: An explanation for preference reversals between joint and separate evaluations of alternatives,” Organ. Behav. Hum. Decis. Processes, vol. 67, no. 3, pp. 247–257, 1996.
[23] S. H. Huang and H. Keskar, “Comprehensive and configurable metrics for supplier selection,” Int. J. Prod. Econ., vol. 105, no. 2, pp. 510–523, 2007. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0925527306001241
[24] D. Kahneman and A. Tversky, “Prospect theory: An analysis of decision making under risk,” Econometrica, vol. 47, no. 2, pp. 18–36, 1979.
[25] E. V. Karniouchina, W. L. Moore, B. van der Rhee, and R. Verma, “Issues in the use of ratings-based versus choice-based conjoint analysis in operations management research,” Eur. J. Oper. Res., vol. 197, no. 1, pp. 340–348, 2009, doi:10.1016/j.ejor.2008.05.029.
[26] J. Lee and J.-Y. Kuo, “Fuzzy decision making through trade-off analysis between criteria,” Inform. Sci., vol. 107, no. 1–4, pp. 107–126, 1998, doi:10.1016/S0020-0255(97)10020-2.
[27] S. Lichtenstein and P. Slovic, The Construction of Preference. Cambridge, UK: Cambridge University Press, 2006, pp. 1–2.
[28] F. S. Manzano, Supply Chain Practices in the Petroleum Downstream, 2005.
[29] J. G. Murray, “Towards a common understanding of the differences between purchasing, procurement and commissioning in the UK public sector,” J. Purch. Supply Manage., vol. 15, no. 3, pp. 198–202, 2009, doi:10.1016/j.pursup.2009.03.003.
[30] N. Novemsky, R. Dhar, N. Schwarz, and I. Simonson, “Preference fluency in choice,” J. Mark. Res., vol. 44, no. 3, pp. 347–356, 2007.
[31] S. Nowlis and I. Simonson, “Attribute task comparability as a determinant of consumer preference reversals,” J. Mark. Res., vol. 34, no. 2, pp. 205–218, 1997.
[32] B. Orme, M. Alpert, and E. Christensen, “Assessing the validity of conjoint analysis–continued,” 1997. [Online]. Available: http://www.sawtoothsoftware.com/download/techpap/assess2.pdf [Accessed 21/2/2008].
[33] C. S. Patch, L. C. Tapsell, and P. G. Williams, “Attitudes and intentions toward purchasing novel foods enriched with omega-3 fatty acids,” J. Nutr. Educ. Behav., vol. 37, no. 5, pp. 235–241, 2005.
[34] C. Roa and G. Kiser, “Educational buyer’s perception of vendor attributes,” J. Purch. Mater. Manage., vol. 16, pp. 25–30, 1980.
[35] R. Kreitner and A. Kinicki, Organizational Behavior, 10th ed., 2009. [Online]. Available: https://www.inkling.com/read/organizational-behavior-kreitner-kinicki-10th/chapter-12/models-of-decision-making
[36] Z. Sandor and M. Wedel, “Designing conjoint choice experiments using managers’ prior beliefs,” J. Mark. Res., vol. 38, no. 4, pp. 430–444, 2001.
[37] R. Schonberger, World Class Manufacturing: The Lessons of Simplicity Applied. New York: Free Press, 1986.
[38] A. K. Sinha, H. K. Aditya, M. K. Tiwari, and F. T. S. Chan, “Agent oriented petroleum supply chain coordination: Co-evolutionary Particle Swarm Optimization based approach,” Expert Syst. Appl., vol. 38, no. 5, pp. 6132–6145, 2011, doi:10.1016/j.eswa.2010.11.004.
[39] W. Skinner, “Manufacturing: missing link in corporate strategy,” Harv. Bus. Rev., vol. 47, no. 3, pp. 136–145, 1969.
[40] W. Skinner, “The focused factory,” Harv. Bus. Rev., vol. 52, no. 3, pp. 113–120, 1974.
[41] W. Skinner, “Manufacturing strategy on the ‘S’ curve,” Prod. Oper. Manage., vol. 5, no. 1, pp. 3–14, 1996.
[42] C. Stamm and D. Golhar, “JIT purchasing attribute classification and literature review,” Prod. Plann. Control, vol. 4, no. 3, pp. 273–282, 1993.
[43] S. Stephens, Supply Chain Council and Supply Chain Operations Reference (SCOR) Model Overview, Supply Chain Council, Inc., 2001.
[44] Tapping the Value in Spend Management, n.d.
[45] I. Tesseraux, “Risk factors of jet fuel combustion products,” Toxicol. Lett., vol. 149, no. 1–3, pp. 295–300, 2004. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0378427403005095
[46] M. Tracey and R. Neuhaus, “Purchasing’s role in global new product-process development projects,” J. Purch. Supply Manage., vol. 19, no. 2, pp. 98–105, 2013, doi:10.1016/j.pursup.2013.02.004.
[47] A. Tversky, Preference, Belief, and Similarity: Selected Writings. USA: Massachusetts Institute of Technology, 2004, pp. 1–2.
[48] A. Tversky, S. Sattath, and P. Slovic, “Contingent weighting in judgment and choice,” Psychol. Rev., vol. 95, no. 3, pp. 371–384, 1988.
[49] N. Weaver, Petroleum, 1988. [Online]. Available: http://www.cabdirect.org/abstracts/19882054101.html
[50] A. Weitz and P. Wright, “Retrospective self-insight on factors considered in product evaluations,” J. Consumer Res., vol. 6, no. 3, pp. 280–294, 1979.
[51] W. Y. Wu, Y. K. Liao, and A. Chatwuthikrai, “Applying conjoint analysis to evaluate consumer preferences toward subcompact cars,” Expert Syst. Appl., vol. 41, no. 6, pp. 2782–2792, 2014, doi:10.1016/j.eswa.2013.10.011.
[52] T. G. Yanoff and S. O. Hansson, Preference Change. New York, USA: Springer, 2009.
[53] DOE Portal, Department of Energy, 2005. [Online]. Available: www.doe.gov.ph


Internet of Things-Enabled Supply Chain Performance Measurement Model

Abdallah Jamal Dweekat
Department of Industrial Engineering / Automation and Systems Research Institute,
Seoul National University, Seoul, Republic of Korea
Email: jamal1002@snu.ac.kr

Prof. Jinwoo Park
Department of Industrial Engineering / Automation and Systems Research Institute,
Seoul National University, Seoul, Republic of Korea
Email: autofact@snu.ac.kr

Abstract— The Internet of Things is expected to offer promising solutions to transform the operation and the role of many existing industrial systems. It is also increasingly seen as an engine of development in many systems and fields. Supply chain performance measurement is one of the systems that might be significantly affected by the Internet of Things evolution. In the last few decades, many standards and models have been developed to measure supply chain performance. However, it has been argued that these systems still have some problems, such as being static and short term, lacking a holistic approach, and facing data collection difficulties [1-3].
To overcome these problems, this research introduces a new Internet of Things-enabled model to collect information and connect things for the sake of supply chain performance measurement. The Supply Chain Operations Reference model and the ISA-95 standards have been used to analyze supply chains and identify performance metrics’ data identification queries. The model can also be used to show the promising and essential role of Internet of Things technologies in the next generation of performance measurement systems, which can be employed to monitor, manage and control the overall supply chain in real time and in a more integrated and cooperative way.

Keywords— Internet of Things (IoT); Supply Chain Performance Management (SCPM); performance metrics (PM); SCOR; ISA-95.

I. INTRODUCTION

The arrival of the Internet of Things (IoT) represents a transformative shift for people, business, and the overall economy, similar to the shift brought about by the introduction of the PC. The emergence of IoT incorporates other major technology trends, such as data mining and cloud computing, and goes beyond them. IoT's connection ability surpasses earlier efforts to track and control systems, such as Radio Frequency Identification (RFID), as it gives almost limitless adaptability [4].

The term ‘IoT’ was initially introduced by Kevin Ashton in 1999 to refer to uniquely identifiable, interoperable connected objects using RFID in the P&G (Procter & Gamble) SC [5]. IoT is expected to transform the operation and role of most of the existing industrial systems and therefore to offer promising solutions for all systems related to the Supply Chain (SC), such as manufacturing systems, performance management systems and others [6].

With the evolution of IoT, a new paradigm has emerged: the “Sensing Enterprise”. The trend identified within this paradigm aims to achieve the seamless integration of business and Information and Communication Technologies in cyber space [7]. In the Sensing Enterprise, data are acquired automatically using physical and virtual things, transformed into multi-dimensional information, analyzed, and organized in such a way as to provide context awareness and decision support; this requires the digitalization of enterprise entities, regardless of their properties: tangible or intangible, simple or complex [1]. The Sensing Enterprise will therefore better incorporate reactive behavior and provide a direct link between stimuli and actions [8].

Developing a smart environment is the first step towards developing an IoT-enabled SC resource management system. Such an environment can be defined in terms of resource identity, visualization and virtualization capabilities, communication capabilities, and interoperability. Theoretically, a resource can be digitally characterized by a unique identifier, a processing unit, a memory, and networking capabilities. To this end, many IoT-enabling technologies have been developed and are ready to be used in various SC management applications. These technologies can be summarized into groups: identification and tracking technologies, such as RFID systems, barcodes and intelligent sensors; communication and network technologies, such as WAN, LAN, WLAN, etc.; and service management technologies, such as storage space, security management, operational support, etc., which are recently offered through cloud computing [6, 9, 10].

With all of these IoT-related technologies, a large impact has been and is being made on new SC information, communication and management systems. Therefore, IoT can affect the whole SC in many ways. Firstly, it can improve SC reliability by enabling object visibility and real-time information exchange. Secondly, it can enhance SC responsiveness and decrease its costs by facilitating real-time optimization of its functions and business process activities. Thirdly, it can improve SC asset management by tracking resources in real time. And lastly, it can enhance SC agility by speeding up the information flow processes. Accordingly, IoT technologies offer promising potential to address the traceability, visibility, and controllability challenges in the SC. Building on that, this paper introduces an IoT-Enabled Supply Chain Performance Measurement System (SCPMS) model. The model can be used to enable real-time performance measurement in any SC and therefore to enable more efficient decision making.

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2015R1A2A2A03008086), and also by the Automation and Systems Research Institute (ASRI).


II. IoT-ENABLED SC PERFORMANCE MEASUREMENT MODEL

An efficient SCPMS should have the capability to fulfill three types of integration within the SC: Hierarchical Integration, which means integrating the SC activities over the strategic, tactical and operational planning horizons; Spatial Integration, which means integrating all of the supply chain activities across its geographically spread markets; and Functional Integration, which means integrating the supply chain functions (i.e., purchasing, manufacturing, transportation, and warehousing) [11]. Achieving such integration can enable efficient SC decisions. However, in the current dynamic markets, this has become directly related to the ability to see and perceive objects in real time and to collect and analyze all information about them. In this research we introduce a model to enable such abilities for the sake of the SCPMS (Fig. 1).

Fig. 1. IoT-Enabled SCPMS Model

This model uses the SCOR model and the ISA-95 standard as tools for finding the data sources required to calculate the performance metrics, which should be decided and identified by management. The model will also be used to show the promising and essential role of IoT technologies in the next generation of performance measurement systems. A detailed explanation of the model steps is given below.

1- Identify the supply chain scope and the metrics that reflect its strategic objectives.

2- Configure the SC using the Supply Chain Operations Reference (SCOR) model [12]. By shortening [13]'s procedure, any distinct SC can be configured and represented using SCOR model processes in three steps.
(1) Depict the SC entities geographically using Level-1 business processes, and develop an SC business process Level-2 diagram for all SC entities [6].
(2) All SC entities related to the SC performance metrics under concern should be analyzed more deeply. Hence, for each entity, a Level-3 business process flowchart should be configured.
(3) This is the step where the inputs and outputs of each Level-3 process from the previous step can be identified. Using the SCOR model manual (i.e., SCOR model revisions 10 and 11), it is easy to find each process's inputs and outputs, and hence to decide which processes should be monitored or measured to collect the required data.

3- Having decided the metric's data-related processes, an analysis should be conducted using the ISA-95 standards. ISA-95 defines three main object information models: the personnel model, the material model and the equipment model. At least one of these models can be used to identify the shape and characteristics of the information under concern.

4- The outputs of the previous steps are exactly the data that we are interested in collecting, as well as the objects or things that hold or produce these data. It is then required to develop the system that can enable collecting and communicating these data; therefore, this step is the most important one in this model. The roles of IoT technologies in facilitating and promoting SCPM processes are expressed in three dimensions. First, IoT decreases the data collection time to almost zero (real-time data). Second, it increases data efficiency (i.e., it increases the efficiency of the many processes that handle the data, such as storage, access, filtering, sharing, etc.). Third, it enables real-time data communication among all of the SC objects and entities. Accordingly, this leads to fulfilling the three types of integration mentioned at the beginning of this section. Fig. 2 shows a general SC IoT architecture. To apply it to this research approach, we need to think about how to connect the objects identified in step 3 and to communicate their information among all of the entities using this SC IoT architecture, while at the same time considering each entity's needs and authorities. That can be done primarily in three steps.

Fig. 2: Supply Chain IoT Architecture.


(1) In the first step, every object should be uniquely identified. The GS1 system and IPv6 (Internet Protocol version 6) can perfectly satisfy this requirement. GS1 is a system that provides for the use of unambiguous numbers to identify goods, services, assets, and locations worldwide. These numbers can be represented in barcodes or tags in order to enable their electronic reading wherever required in business processes [14]. IPv6 is the most recent version of the Internet Protocol (IP); it is the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet [15].
(2) The second step is to find the sensing or tracking system that can enable collecting real-time data from objects and enhance the communication between them. Many technologies have been developed, and are being developed, for this purpose; RFID and WSN are the most popular and advanced technologies in this field. Commonly, sensing and tracking systems include short-distance network technologies such as Wi-Fi, ZigBee, Bluetooth, etc. [6].
(3) The last step is developing the Internet-based communication network, to let all entities perceive and communicate all the information required to measure their local performance as well as the performance of the whole SC simultaneously.
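As a toy illustration of steps (1) and (2), the sketch below models a uniquely identified supply chain object that reports timestamped sensor readings. The GS1 SGTIN-style identifier, the field names and the message layout are illustrative assumptions, not part of the proposed model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TrackedObject:
    """A supply chain 'thing' carrying a globally unique identity."""
    epc: str                       # e.g., a GS1 SGTIN written as a URI
    location: str                  # current node in the supply chain
    readings: list = field(default_factory=list)

    def report(self, metric: str, value: float) -> dict:
        """Package one sensor reading as a message for the SC IoT network."""
        msg = {
            "epc": self.epc,
            "location": self.location,
            "metric": metric,
            "value": value,
            "ts": datetime.now(timezone.utc).isoformat(),
        }
        self.readings.append(msg)
        return msg

# Hypothetical usage: a tagged pallet reporting a real-time reading.
pallet = TrackedObject(epc="urn:epc:id:sgtin:0614141.112345.400",
                       location="DC-Seoul")
print(pallet.report("temperature_c", 4.2))
```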
III. RESULTS AND FUTURE WORK

To develop an SCPMS, it is required to coordinate the SC in order to achieve the three types of integration: functional, spatial and hierarchical. Having a performance measurement system that satisfies these requirements and can serve in real time has been a kind of dream in the past few decades. However, this has become a reality with IoT emerging as a technology enabling objects to be perceived in the world of industry.

This work introduced a new model that can be used to enable SCPM in real time and in an efficient way. The model uses SCOR as the tool to find the metrics' data. In other words, data collection will be assembled directly from the SC business processes along all of its entities, and that would achieve the desired integration. To directly connect the operational level, the approach uses the ISA-95 standards to identify the information and its holders, represented by an object or thing which can be seen and perceived by IoT technology, thereby enabling automatic and real-time data collection. In short, this model showed that IoT has the capability to enable real-time data collection and to increase data efficiency, as well as to enable real-time communication within the supply chain.

Further research is recommended in order to validate the proposed model using a case study. This work revealed that IoT use in SCPM is a fruitful research topic, and there is a need for creative efforts to develop applications in this area. Industry practitioners, IT experts, and researchers might play a significant role in building comprehensive real-time performance measurement systems that may help to address current and future challenges in SC management.

ACKNOWLEDGMENT

The authors would like to thank Mr. Young-Woo Kim and Mr. Gyusun Hwang for their help. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2015R1A2A2A03008086), and also by the Automation and Systems Research Institute (ASRI) in Seoul National University in the form of resources and administrative support.

REFERENCES
[1] A. Gunasekaran and B. Kobu, "Performance measures and metrics in logistics and supply chain management: a review of recent literature (1995–2004) for research and applications," International Journal of Production Research, vol. 45, pp. 2819–2840, 2007.
[2] A. Gunasekaran, C. Patel, and R. E. McGaughey, "A framework for supply chain performance measurement," International Journal of Production Economics, vol. 87, pp. 333–347, 2004.
[3] N. Agami, M. Saleh, and M. Rasmy, "Supply chain performance measurement approaches: Review and classification," Journal of Organizational Management Studies, vol. 2012, pp. 1–20, 2012.
[4] F. Burkitt, "A strategist's guide to the Internet of Things," Strategy+Business, 2015 (accessed 7 Jan. 2016). Available: http://www.strategy-business.com/article/00294?gko=a9303
[5] K. Ashton, "That 'Internet of Things' thing: in the real world things matter more than ideas," RFID Journal, 2009 (accessed 15 Dec. 2015). Available: http://www.rfidjournal.com/articles/view?4986
[6] L. Da Xu, W. He, and S. Li, "Internet of things in industries: a survey," IEEE Transactions on Industrial Informatics, vol. 10, pp. 2233–2243, 2014.
[7] M. Moisescu and I. Sacala, "Towards the development of interoperable sensing systems for the future enterprise," Journal of Intelligent Manufacturing, vol. 27, pp. 33–54, 2016.
[8] FInES Cluster, "Future Internet Enterprise Systems: Research Roadmap 2025," Final Report, Version 2, 2012.
[9] M. Sepehri, "A grid-based collaborative supply chain with multi-product multi-period production–distribution," Enterprise Information Systems, vol. 6, pp. 115–137, 2012.
[10] F. Tao, L. Zhang, V. Venkatesh, Y. Luo, and Y. Cheng, "Cloud manufacturing: a computing and service-oriented manufacturing model," Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 2011.
[11] J. Shapiro, Modeling the Supply Chain. Duxbury: Wadsworth Group, 2001.
[12] Supply Chain Council, "Supply-Chain Operations Reference Model: Version 11.0," 2012.
[13] H. Stadtler, C. Kilger, and H. Meyr, Supply Chain Management and Advanced Planning. Berlin: Springer, 2015.
[14] GS1, "GS1 General Specifications, Version 16," 2016.
[15] A. J. Jara, L. Ladid, and A. F. Gómez-Skarmeta, "The Internet of Everything through IPv6: An Analysis of Challenges, Solutions and Opportunities," JoWUA, vol. 4, pp. 97–118, 2013.


Forecasting Method Under the Introduction of a Day of the Week Index to the Daily Shipping Data of Sanitary Materials

Komei Suzuki (1)
1. Graduate School of Humanities and Social Sciences, Shizuoka University, 836 Ohya, Suruga-ku, Shizuoka-shi, Shizuoka-ken, 422-8529, Japan

Hirotake Yamashita (2)
2. College of Business Administration and Information Science, Chubu University, 1200 Matsumoto-cho, Kasugai, Aichi, 487-8501, Japan

Kazuhiro Takeyasu (3)
3. College of Business Administration, Tokoha University, 325 Oobuchi, Fuji City, Shizuoka, 417-0801, Japan

Abstract— Correct sales forecasting is indispensable to industries. In industry, how to improve forecasting accuracy for quantities such as sales and shipping is an important issue, and much research has been done on this. In this paper, we propose a new method to improve forecasting accuracy and confirm it by a numerical example. Focusing on the fact that the equation of the exponential smoothing method (ESM) is equivalent to the (1,1) order ARMA model equation, we previously proposed a method of estimating the smoothing constant in ESM that satisfies minimum variance of the forecasting error. In this paper, "a day of the week index" is newly introduced for the daily data, and forecasting is executed on a manufacturer's data of sanitary materials. We have obtained good results.

Keywords— Minimum Variance, Exponential Smoothing Method, Forecasting, Trend, Sanitary Materials

1. INTRODUCTION

Correct sales forecasting is indispensable to industries. Poor sales forecasting accuracy leads to increased inventory and prolonged dwell time of products. Many methods for time series analysis have been presented, such as the Autoregressive model (AR Model), the Autoregressive Moving Average model (ARMA Model) and the Exponential Smoothing Method (ESM) (Box and Jenkins [1]; R. G. Brown [2]; Tokumaru et al. [3]; Kobayashi [4]). Among these, ESM is said to be a practical and simple method. We previously proposed a new method of estimating the smoothing constant in ESM (Takeyasu et al. [5]). Focusing on the fact that the equation of ESM is equivalent to the (1,1) order ARMA model equation, a new method of estimating the smoothing constant in ESM was derived. In this paper, utilizing the above method, a revised forecasting method is proposed. "A day of the week index (DWI)" is newly introduced for the daily data, and a day-of-the-week trend is removed. The theoretical solution of the smoothing constant of ESM is calculated both for the DWI-trend-removed data and for the non-DWI-trend-removed data. Then forecasting is executed on the manufacturer's data of sanitary materials. This is a revised forecasting method, whose variance of forecasting error is expected to be less than those of the previously proposed methods. The rest of the paper is organized as follows. In section 2, ESM is stated using the ARMA model, and the estimation method of the smoothing constant is derived using ARMA model identification. The combination of linear and non-linear functions is introduced for trend removal in section 3. "A day of the week index (DWI)" is newly introduced in section 4. Forecasting is executed in section 5, and the estimation accuracy is examined.

2. DESCRIPTION OF ESM USING ARMA MODEL [1]

In ESM, the forecast at time $t+1$ is stated by the following equation:

$$\hat{x}_{t+1} = \hat{x}_t + \alpha (x_t - \hat{x}_t) = \alpha x_t + (1 - \alpha)\hat{x}_t \qquad (1)$$

Here, $\hat{x}_{t+1}$ is the forecast at $t+1$, $x_t$ is the realized value at $t$, and $\alpha$ is the smoothing constant ($0 < \alpha < 1$). Equation (1) can be re-stated as:

$$\hat{x}_{t+1} = \sum_{l=0}^{\infty} \alpha (1 - \alpha)^l x_{t-l} \qquad (2)$$
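As a quick illustration of (1), the following is a minimal sketch of one-step-ahead exponential smoothing, assuming the first observation seeds the forecast; the series values are placeholders, not the paper's shipping data.

```python
def esm_forecasts(x, alpha):
    """One-step-ahead forecasts by (1): next = current + alpha*(realized - current)."""
    forecasts = [x[0]]          # seed: take the first forecast equal to x[1]
    for xt in x:
        forecasts.append(forecasts[-1] + alpha * (xt - forecasts[-1]))
    return forecasts[1:]        # forecasts for t = 2 .. len(x)+1

series = [120.0, 135.0, 128.0, 140.0, 150.0, 138.0]   # placeholder data
print(esm_forecasts(series, alpha=0.3))
```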


By the way, consider the following (1,1) order ARMA model:

$$x_t - x_{t-1} = e_t - \beta e_{t-1} \qquad (3)$$

Generally, the $(p, q)$ order ARMA model is stated as:

$$x_t + \sum_{i=1}^{p} a_i x_{t-i} = e_t + \sum_{j=1}^{q} b_j e_{t-j} \qquad (4)$$

Here, $\{x_t\}$ is a sample process of a stationary ergodic Gaussian process $x(t)$ $(t = 1, 2, \ldots, N, \ldots)$, and $\{e_t\}$ is Gaussian white noise with mean 0 and variance $\sigma_e^2$. The MA process in (4) is supposed to satisfy the invertibility condition. Utilizing the relation

$$E[e_t \mid e_{t-1}, e_{t-2}, \ldots] = 0$$

we get the following equation from (3):

$$\hat{x}_t = x_{t-1} - \beta e_{t-1} \qquad (5)$$

Operating this scheme at $t+1$, we finally get:

$$\hat{x}_{t+1} = \hat{x}_t + (1 - \beta) e_t = \hat{x}_t + (1 - \beta)(x_t - \hat{x}_t) \qquad (6)$$

If we set $1 - \beta = \alpha$, the above equation is the same as (1); i.e., the equation of ESM is equivalent to the (1,1) order ARMA model, or is said to be the (0,1,1) order ARIMA model, because the 1st order AR parameter is $-1$ (Box and Jenkins [1]; Tokumaru et al. [3]). Comparing (3) and (4), we obtain $a_1 = -1$, $b_1 = -\beta$. From (1) and (6), $\alpha = 1 - \beta$. Therefore, we get:

$$a_1 = -1, \qquad b_1 = -\beta = \alpha - 1 \qquad (7)$$

From the above, we can get the estimate of the smoothing constant once we identify the parameter of the MA part of the ARMA model. Generally, however, the MA part of the ARMA model leads to the non-linear equations described below. Let (4) be:

$$\tilde{x}_t = x_t + \sum_{i=1}^{p} a_i x_{t-i} \qquad (8)$$

$$\tilde{x}_t = e_t + \sum_{j=1}^{q} b_j e_{t-j} \qquad (9)$$

We express the autocorrelation function of $\tilde{x}_t$ as $\tilde{r}_k$, and from (8) and (9) we get the following well-known non-linear equations [3]:

$$\tilde{r}_k = \begin{cases} \sigma_e^2 \sum_{j=0}^{q-k} b_j b_{k+j} & (k \le q) \\ 0 & (k \ge q+1) \end{cases}, \qquad \tilde{r}_0 = \sigma_e^2 \sum_{j=0}^{q} b_j^2 \qquad (10)$$

For these equations, a recursive algorithm has been developed. In this paper, the parameter to be estimated is only $b_1$, so it can be solved in the following way. From (3), (4), (7) and (10), we get:

$$q = 1, \quad a_1 = -1, \quad b_1 = -\beta = \alpha - 1, \quad \tilde{r}_0 = (1 + b_1^2)\sigma_e^2, \quad \tilde{r}_1 = b_1 \sigma_e^2 \qquad (11)$$

If we set

$$\rho_k = \frac{\tilde{r}_k}{\tilde{r}_0} \qquad (12)$$

the following equation is derived:

$$\rho_1 = \frac{b_1}{1 + b_1^2} \qquad (13)$$

We can obtain $b_1$ as follows:

$$b_1 = \frac{1 \pm \sqrt{1 - 4\rho_1^2}}{2\rho_1} \qquad (14)$$

In order to have real roots, $\rho_1$ must satisfy:

$$|\rho_1| \le \frac{1}{2} \qquad (15)$$

From the invertibility condition, $b_1$ must satisfy $|b_1| < 1$. From (13), using the relations $(1 - b_1)^2 \ge 0$ and $(1 + b_1)^2 \ge 0$, (15) always holds. As $\alpha = b_1 + 1$, $b_1$ is within the range $-1 < b_1 < 0$. Finally we get:

$$b_1 = \frac{1 - \sqrt{1 - 4\rho_1^2}}{2\rho_1}, \qquad \alpha = \frac{1 + 2\rho_1 - \sqrt{1 - 4\rho_1^2}}{2\rho_1} \qquad (16)$$

which satisfy the above conditions. Thus we can obtain a theoretical solution in a simple way. Here $\rho_1$ must satisfy

$$-\frac{1}{2} < \rho_1 < 0 \qquad (17)$$

in order to satisfy $0 < \alpha < 1$. Focusing on the idea that the equation of ESM is equivalent to the (1,1) order ARMA model equation, we can thus estimate the smoothing constant after estimating the ARMA model parameter. It can be estimated only by calculating the 0th- and 1st-order autocorrelation functions.
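A minimal sketch of this estimation, assuming the lag-1 autocorrelation is computed on the differenced series as in (3); the guard implements condition (17), and the input would be the trend- and DWI-removed series. For instance, ρ1 = −0.4703 yields α ≈ 0.30, matching the value reported for Product A, Case 1 in Table 5-5 later in this paper.

```python
import numpy as np

def estimate_alpha(x):
    """Estimate the ESM smoothing constant via (12) and (16)."""
    d = np.diff(np.asarray(x, dtype=float))          # x_t - x_{t-1}, cf. (3)
    d -= d.mean()
    rho1 = np.dot(d[1:], d[:-1]) / np.dot(d, d)      # (12): rho_1 = r_1 / r_0
    if not -0.5 < rho1 < 0.0:
        raise ValueError("rho_1 outside (-1/2, 0): no theoretical alpha, cf. (17)")
    alpha = (1 + 2 * rho1 - np.sqrt(1 - 4 * rho1 ** 2)) / (2 * rho1)   # (16)
    return rho1, alpha
```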


3. TREND REMOVAL METHOD

As the trend removal method, we describe the combination of linear and non-linear functions.

[1] Linear function
We set:

$$y = a_1 x + b_1 \qquad (18)$$

as a linear function.

[2] Non-linear functions
We set:

$$y = a_2 x^2 + b_2 x + c_2 \qquad (19)$$

$$y = a_3 x^3 + b_3 x^2 + c_3 x + d_3 \qquad (20)$$

as 2nd and 3rd order non-linear functions.

[3] The combination of linear and non-linear functions
We set:

$$y = \alpha_1 (a_1 x + b_1) + \alpha_2 (a_2 x^2 + b_2 x + c_2) + \alpha_3 (a_3 x^3 + b_3 x^2 + c_3 x + d_3) \qquad (21)$$

$$0 \le \alpha_1 \le 1, \quad 0 \le \alpha_2 \le 1, \quad 0 \le \alpha_3 \le 1, \quad \alpha_1 + \alpha_2 + \alpha_3 = 1 \qquad (22)$$

as the combination of the linear, 2nd order non-linear and 3rd order non-linear functions. The trend is removed by dividing the data by (21).
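A minimal sketch of fitting (21): least-squares polynomial fits for (18)-(20) and a 0.01-step grid over the weights of (22). The function name is ours, the series is a placeholder, and for brevity the weights here are scored by in-sample squared error, whereas the paper selects them by the variance of the forecasting error.

```python
import numpy as np

def fit_combined_trend(y):
    """Fit (21) by grid-searching the weights of (22) in 0.01 steps."""
    t = np.arange(1, len(y) + 1, dtype=float)
    fits = [np.polyval(np.polyfit(t, y, deg), t) for deg in (1, 2, 3)]
    best = None
    for a1 in np.arange(0.0, 1.005, 0.01):
        for a2 in np.arange(0.0, 1.005 - a1, 0.01):
            a3 = 1.0 - a1 - a2
            trend = a1 * fits[0] + a2 * fits[1] + a3 * fits[2]
            sse = float(np.sum((y - trend) ** 2))
            if best is None or sse < best[0]:
                best = (sse, (round(a1, 2), round(a2, 2), round(a3, 2)), trend)
    return best[1], best[2]

y = np.array([23., 25., 28., 27., 31., 30., 34., 33., 36., 39.])  # placeholder
weights, trend = fit_combined_trend(y)
detrended = y / trend      # the trend is removed by dividing the data by (21)
print(weights)
```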
4. A DAY OF THE WEEK INDEX

"A day of the week index (DWI)" is newly introduced for the daily data of sanitary materials. The forecasting accuracy should improve once we identify the day-of-the-week index and utilize it. Since the data handled in this paper run from Monday through Sunday, we calculate $DWI_j$ $(j = 1, \ldots, 7)$ for Monday through Sunday.

For example, suppose there are daily data for $L$ weeks, stated as

$$\{x_{ij}\} \quad (i = 1, \ldots, L;\; j = 1, \ldots, 7)$$

where $x_{ij} \in R$, $L$ is the number of weeks (here $L = 10$), $i$ is the order of the week ($i$-th week), $j$ is the order within a week ($j$-th day in a week; for example, $j = 1$: Monday, $j = 7$: Sunday), and $x_{ij}$ is the daily shipping data of sanitary materials. Then $DWI_j$ is calculated as follows:

$$DWI_j = \frac{\dfrac{1}{L} \displaystyle\sum_{i=1}^{L} x_{ij}}{\dfrac{1}{L} \cdot \dfrac{1}{7} \displaystyle\sum_{i=1}^{L} \sum_{j=1}^{7} x_{ij}} \qquad (23)$$

DWI trend removal is executed by dividing the data by (23). Numerical examples, both for the DWI-removal case and for the non-removal case, are discussed in section 5.
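A minimal sketch of (23) on an L × 7 array (rows = weeks, columns = Monday through Sunday); the values are randomly generated placeholders, not the shipping data.

```python
import numpy as np

# Placeholder detrended daily data for L = 10 weeks, Monday..Sunday.
data = np.random.default_rng(0).uniform(0.7, 1.3, size=(10, 7))

dwi = data.mean(axis=0) / data.mean()   # (23): weekday mean / grand mean
dwi_removed = data / dwi                # DWI trend removal
print(np.round(dwi, 3))
```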


5. FORECASTING THE SANITARY MATERIALS DATA

5.1. Analysis Procedure

The shipping data of 2 cases from January 31, 2012 to April 2, 2012 are analyzed. There are 68 daily data for each case. The analysis procedure is as follows. We use 49 data (1st to 49th) and remove the trend by the method stated in section 3. Then we calculate the day of the week index (DWI) by the method stated in section 4. After removing the DWI trend, the method stated in section 2 is applied and the exponential smoothing constant with minimum variance of forecasting error is estimated. Then the one-step-ahead forecast is executed. The data window is then shifted to the 2nd through 50th data and the forecast for the 51st datum is executed; this is repeated consecutively until the forecast of the 63rd datum is reached. To examine the accuracy of forecasting, the variance of forecasting error is calculated for the 50th to 63rd data. The final forecast is obtained by multiplying back the DWI and the trend.

The forecasting error is expressed as:

$\varepsilon_i = \hat{x}_i - x_i$   (24)

$\bar{\varepsilon} = \dfrac{1}{N}\sum_{i=1}^{N} \varepsilon_i$   (25)

The variance of forecasting error is calculated by:

$\sigma_\varepsilon^2 = \dfrac{1}{N-1}\sum_{i=1}^{N} \left(\varepsilon_i - \bar{\varepsilon}\right)^2$   (26)

In this paper, we examine the two cases stated in Table 5-1.

Table 5-1. The combination of the case of trend removal and DWI trend removal

Case | Trend | DWI trend
Case1 | Removal | Removal
Case2 | Removal | Non removal
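Equations (24) to (26) amount to the sample variance of the one-step forecast errors; a small Python helper (our own sketch, with illustrative names) is:

import numpy as np

def forecasting_error_variance(x_hat, x):
    """Equations (24)-(26) for forecasts x_hat against actuals x."""
    eps = np.asarray(x_hat, dtype=float) - np.asarray(x, dtype=float)  # (24)
    return eps.var(ddof=1)  # (26): unbiased variance around the mean error (25)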

5.2. Trend Removing

The trend is removed by dividing the original data by (21). Here, the weights $\alpha_1$ and $\alpha_2$ are shifted in 0.01 increments in (21) under the constraint (22), and the solution which minimizes the variance of forecasting error is selected. The data are fitted to (18), (19) and (20), and the parameters of (18), (19) and (20) are estimated by the least squares method. The estimated coefficients of (18), (19) and (20) are exhibited in Table 5-2. The estimated weights of (21) are exhibited in Table 5-3; the weighting parameters are selected so as to minimize the variance of forecasting error.

Table 5-2. Coefficients of (18), (19) and (20)

 | 1st: a1, b1 | 2nd: a2, b2, c2 | 3rd: a3, b3, c3, d3
Product A | 0.15, 23.95 | 0.01, -0.42, 28.80 | -0.00, 0.09, -1.97, 35.57
Product B | 0.27, 15.85 | 0.01, -0.35, 21.15 | 0.00, -0.10, 1.93, 11.16

Table 5-3. Weights of (21)

Case | α1 | α2 | α3
Product A, Case1 | 0.41 | 0.59 | 0.00
Product A, Case2 | 1.00 | 0.00 | 0.00
Product B, Case1 | 1.00 | 0.00 | 0.00
Product B, Case2 | 1.00 | 0.00 | 0.00

As a result, we can observe the following two patterns.
① Selected linear model: Product A Case2, Product B Case1, Product B Case2
② Selected 2nd order model: Product A Case1

Graphical charts of the trend are exhibited in Figures 5-3 and 5-4.

Fig. 5-3. Daily Shipping Data of Product A
Fig. 5-4. Daily Shipping Data of Product B

5.3. Removing Trend by DWI

After removing the trend, the day of the week index is calculated by the method stated in section 4. The calculation results for the 1st to 49th data are exhibited in Table 5-4.

Table 5-4. A day of the week index

Case | Thu. | Fri. | Sat. | Sun. | Mon. | Tue. | Wed.
Product A, Case1 | 1.010 | 1.209 | 0.736 | 1.142 | 0.875 | 0.797 | 1.230
Product B, Case1 | 1.231 | 0.382 | 0.863 | 1.154 | 1.049 | 1.196 | 1.125

5.4. Estimation of Smoothing Constant with Minimum Variance of Forecasting Error

After removing the DWI trend, the smoothing constant with minimum variance of forecasting error is estimated utilizing (16). There are cases where a theoretical solution cannot be obtained because condition (17) is not satisfied. In those cases, the smoothing constant with minimum variance of forecasting error is derived by shifting the variable from 0.01 to 0.99 with a 0.01 interval. The calculation results for the 1st to 49th data are exhibited in Table 5-5.
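The grid-search fallback described above can be sketched as follows (our own illustration, not the authors' code; the train/test index convention is an assumption):

import numpy as np

def grid_search_alpha(x, train_end, test_end):
    """Scan alpha = 0.01 .. 0.99 in 0.01 steps and return the value
    minimizing the variance of one-step forecasting error (26)."""
    x = np.asarray(x, dtype=float)
    best_alpha, best_var = None, np.inf
    for alpha in np.arange(0.01, 1.00, 0.01):
        xhat, errors = x[0], []
        for t in range(1, test_end):
            if t >= train_end:
                errors.append(xhat - x[t])       # forecasting error (24)
            xhat = xhat + alpha * (x[t] - xhat)  # ESM one-step update
        var = np.var(errors, ddof=1)
        if var < best_var:
            best_alpha, best_var = alpha, var
    return best_alpha, best_var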

Table 5-5. Estimated Smoothing Constant with Minimum Variance

Case | ρ1 | α
Product A, Case1 | -0.4703 | 0.30
Product A, Case2 | -0.0832 | 0.92
Product B, Case1 | -0.1432 | 0.85
Product B, Case2 | -0.2008 | 0.79

5.5. Forecasting and Variance of Forecasting Error

Utilizing the smoothing constant estimated in the previous section, forecasting is executed for the 50th to 63rd data. The final forecast is obtained by multiplying back the DWI and the trend. The variance of forecasting error is calculated by (26). The forecasting results are exhibited in Figures 5-5 and 5-6.

Fig. 5-5. Forecasting Results of Product A
Fig. 5-6. Forecasting Results of Product B

The variance of forecasting error is exhibited in Table 5-6.

Table 5-6. Variance of Forecasting Error

Case | Variance of Forecasting Error
Product A, Case1 | 309 *
Product A, Case2 | 378
Product B, Case1 | 981 *
Product B, Case2 | 995

6. CONCLUSIONS

Correct sales forecasting is indispensable to industries. Focusing on the idea that the equation of the exponential smoothing method (ESM) is equivalent to the (1,1) order ARMA model equation, we previously proposed a method of estimating the smoothing constant of ESM which satisfies minimum variance of forecasting error.

In this paper, "a day of the week index (DWI)" is newly introduced for the daily data, and the day-of-the-week trend is removed. The theoretical solution of the smoothing constant of ESM was calculated both for the DWI-trend-removed data and for the non-removed data. Then forecasting was executed on these data. For both products, the forecasting accuracy of Case 1 (DWI is embedded) was better than that of Case 2 (DWI is not embedded). It can be said that the introduction of DWI has worked well. It is our future work to ascertain our newly proposed method in many other cases; the effectiveness of this method should be examined in various cases.

REFERENCES

[1] Box and Jenkins: Time Series Analysis, Third Edition (Prentice Hall, 1994)
[2] R.G. Brown: Smoothing, Forecasting and Prediction of Discrete-Time Series (Prentice Hall, 1963)
[3] Hidekatsu Tokumaru et al.: Analysis and Measurement - Theory and Application of Random Data Handling (Baifukan Publishing, 1982)
[4] Kengo Kobayashi: Sales Forecasting for Budgeting (Chuokeizai-Sha Publishing, 1992)
[5] Kazuhiro Takeyasu and Kazuko Nagao: "Estimation of Smoothing Constant of Minimum Variance and its Application to Industrial Data", Industrial Engineering and Management Systems, Vol. 7, No. 1 (2008), pp. 44-50.

Text Mining Analysis on the Questionnaire Investigation for High School Teachers' Work Load

Kazuhiro Takeyasu, Tokoha University, Shizuoka, Japan (e-mail: takeyasu@fj.tokoha-u.ac.jp)
Tatsuya Oyanagi, Hachinohe Gakuin University, Aomori, Japan (e-mail: oyanagi-t@hachinohe-u.ac.jp)
Yasuo Ishii, Yamato University, Osaka, Japan (e-mail: y-ishii@oiu.jp)
Daisuke Takeyasu, The Open University of Japan, Chiba, Japan (e-mail: take1111@hotmail.co.jp)
Abstract- High school teachers in Japan spend very busy days on their daily work, including teaching, support for club activities, and deskwork. Among these, they devote a large share of time to managing students' club activities compared with other countries. A School Social Worker can coordinate professionals outside the school and can help teachers by decreasing their burden in that area. There are few related papers concerning the support of club activities by utilizing outside professionals. In this paper, a questionnaire investigation is executed in order to clarify teachers' current condition and their consciousness, and to seek the possibility of utilizing school social workers for their support. Fundamental statistical analysis and Text Mining Analysis are performed. Some interesting and instructive results were obtained.

Key Words- High school teacher, Text Mining Analysis, School Social Worker

1. INTRODUCTION

Teachers at high schools and junior high schools in Japan generally spend very busy days on their daily work, including teaching, support for club activities, and deskwork. Among these, they devote a large share of time to managing students' club activities compared with other countries. There are many researches on the School Social Worker's function. For example, H. Konyuba (2011) analyzed the time teachers spend on club activities and pointed out that there is a difference between sports clubs and culture clubs. K. Yonekawa (2011) discussed mental health support by school social workers. M.S. Kelly et al. (2010) made a school social work survey and derived instructive insight.

In this paper, a questionnaire investigation is executed in order to clarify teachers' current condition and their consciousness, and to seek the possibility of utilizing school social workers for their support. Fundamental statistical analysis and Text Mining Analysis are performed. Some interesting results were obtained.

The rest of the paper is organized as follows. The outline of the questionnaire investigation is stated in section 2. Text Mining Analysis is conducted in section 3, which is followed by the remarks of section 4.

2. OUTLINE AND THE BASIC STATISTICAL RESULTS OF THE QUESTIONNAIRE RESEARCH

2.1 OUTLINE OF THE QUESTIONNAIRE RESEARCH

We made a questionnaire investigation on the support of high school teachers by the School Social Worker. The outline of the questionnaire research is as follows (Table 2.1).
TABLE 2.1 OUTLINE OF THE QUESTIONNAIRE RESEARCH

(1) Scope of investigation | High School Teachers, 7 High Schools in Aomori Prefecture, Japan
(2) Period | January - March 2014
(3) Method | Leave until called for
(4) Collection | Number of distribution: 231; Number of collection: 170 (collection rate 73.6%); Valid answers: 170
2.2 BASIC STATISTICAL RESULTS

Now, we show the main summary results by single variable (Table 2.2).

Table 2.2 The main summary results by single variable

(1) Attribute (Q4) (%)

① Sex (Q4-1): Male 64.12; Female 31.76; Not filled in 4.12
② Age (Q4-2): 20-29 16.47; 30-39 28.82; 40-49 22.35; 50-59 29.41; 60- 1.76; Not filled in 1.19
③ Position (Q4-3): Deputy Principal 3.53; A person in charge of educational affairs 8.24; Teacher 74.71; Lecturer 9.41; Assistant 1.18; Miscellaneous 2.93
④ Experience as a teacher (Q4-4): Within 1 year 3.53; 1-2 years 2.94; 3-4 years 11.18; 5-9 years 14.71; More than 10 years 67.06; Not filled in 0.58
⑤ How many years are you working for the present school? (Q4-5): Within 1 year 4.71; 1-2 years 6.47; 3-4 years 12.94; 5-9 years 18.24; More than 10 years 57.09
⑥ How about the sort of job? (Q1-2-2-3): Adviser 86.47; Deputy Adviser 11.18; Miscellaneous 2.35

(2) Status of the club activity (Q1-2-2-4, 5) (%)

① Is the club strong enough to participate in the national sport meet? (Q1-2-2-4): Yes 18.82; Cannot say either 17.06; No 62.94; Not filled in 1.18
② Is the club activity active? (Q1-2-2-5): Yes 52.94; Cannot say either 31.76; No 13.53; Not filled in 1.76

(3) Course (Q1-2) (%)

① Course: Ordinary course 31.76; Technical course 31.76; General course 5.29; Professional course 14.12; Ordinary/Professional course 17.06

3. TEXT MINING ANALYSIS

Now, we make an analysis utilizing "Text Analytics for Surveys", focusing on important keywords found in the Key Graph. The Key Graph is a method to visualize the data structure using keywords.

First of all, we explain the abbreviated forms used in the following figures. If a sentence were stated in a figure as it is, the figure would become too complicated and we could hardly identify the items. Therefore an abbreviated form is introduced in each figure. In those figures, the characters "1", "2", "3" attached to each word mean:

"1": Affirmative / It is better to do so. I think so.
"2": Neutral / Not either.
"3": Negative / It is not better to do so. I do not think so.

For example, the abbreviated form is expressed as follows: "1"FeelItBurdenToTeach, "2"FeelItBurdenToTeach, "3"FeelItBurdenToTeach.

3.1 CONSCIOUSNESS FOR THE CLUB ACTIVITIES AND THE DAILY WORKS (Q1-2&Q2)

The Key Graph analysis is executed selecting the items of Q1-2 "Current Condition of Club Activities" and Q2 "Consciousness for the daily works". Co-occurrence probability is utilized for the analysis of the co-appearance rate.

FIGURE 3.1 CONSCIOUSNESS FOR THE CLUB ACTIVITIES AND THE DAILY WORKS (Q1-2&Q2)
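The paper relies on the commercial "Text Analytics for Surveys" tool; to make the co-occurrence idea concrete, here is a small, purely illustrative Python sketch (ours, not the tool's algorithm) that computes the co-occurrence probability of pairs of coded labels across respondents:

from itertools import combinations

def co_occurrence_probabilities(responses):
    """responses: one set of coded labels per respondent, e.g.
    {'"3"Teaching', '"3"ClassManagement', 'AdviserInNonStrongClub'}.
    Returns the fraction of respondents exhibiting each label pair."""
    n = len(responses)
    labels = sorted(set().union(*responses))
    return {(a, b): sum(1 for r in responses if a in r and b in r) / n
            for a, b in combinations(labels, 2)}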

From Figure 3.1, we can extract 2 clusters. The cluster located in the upper middle is a clear one, which shows that "Adviser in the non-strong club" has a strong co-occurrence rate with "3"HandlingStudentsPerformance, "3"Teaching, "3"PreparingTeaching, "3"ClubActivities, "3"CommitteeGuidance, "3"ClassManagement and "3"PTAmeeting, where these mean that they do not feel a burden in such daily works. On the other hand, we can observe a second cluster in the lower left of the figure. Those who feel it a burden for the daily works feel it a burden for other jobs as well. In particular, those who feel it a burden for "Handling Students Performance" also feel it a burden for "Teaching", "Preparing Teaching", "Class Management", and "PTA meeting".

3.2 CONSCIOUSNESS FOR THE DAILY WORKS (Q2)

In this section, the relationship among items in "Consciousness for the daily works" is analyzed.

FIGURE 3.2 CONSCIOUSNESS FOR THE DAILY WORKS (Q2)

From Figure 3.2, we can observe two clusters. Those who do not feel it a burden for the daily works do not feel it a burden for each job (right-hand side of the figure), while those who feel it a burden for the daily works feel it a burden for each job (left-hand side of the figure).

3.3 CONSCIOUSNESS FOR GUIDING THE CLUB ACTIVITIES (Q3)

Focusing on Q3 "Consciousness for guiding the club activities", the relationship among items is pursued.

FIGURE 3.3 CONSCIOUSNESS FOR GUIDING THE CLUB ACTIVITIES (Q3)

One strong cluster can be found in Figure 3.3. "Feel Worthwhile" has a high co-occurrence rate with "Better for the professionals to Guide", "Want the person to consult with", "Struggling in Technical Guidance" and "Struggling in Mental Guidance". Therefore it is expected that the burden can be decreased by utilizing professionals outside.

3.4 CONSCIOUSNESS FOR THE DAILY WORKS AND CONSCIOUSNESS FOR GUIDING THE CLUB ACTIVITIES (Q2&Q3)

The Key Graph analysis is executed selecting the items of Q2 "Consciousness for the daily works" and Q3 "Consciousness for guiding the club activities".

FIGURE 3.4 CONSCIOUSNESS FOR THE DAILY WORKS AND CONSCIOUSNESS FOR GUIDING THE CLUB ACTIVITIES (Q2&Q3)

From Figure 3.4, we can observe one strong cluster and two secondary clusters. The strong cluster located at the center is the one in which they do not feel it a burden for the daily works. The cluster located at the lower right is the one in which they feel it a burden for the daily works. The connecting item for each cluster is "Feel Worthwhile". This item is a keyword for solving the problem.

3.5 CONSCIOUSNESS FOR THE CLUB ACTIVITIES AND CONSCIOUSNESS FOR GUIDING THE CLUB ACTIVITIES (Q1-2&Q3)

In this section, consciousness for the club activities and consciousness for guiding the club activities (Q1-2&Q3) are analyzed in order to search for the relationship between these and the keyword "Feel Worthwhile".

FIGURE 3.5 CONSCIOUSNESS FOR THE CLUB ACTIVITIES AND CONSCIOUSNESS FOR GUIDING THE CLUB ACTIVITIES (Q1-2&Q3)

From Figure 3.5, we can find one strong cluster. "Feel Worthwhile" has a high co-occurrence rate with "Club Activities is active", "Better for the professionals to Guide", "Not Inexperience" and "Adviser", and a rather high co-occurrence rate with "Struggling in Mental Guidance". We can see that even if they feel it worthwhile to guide club activities, many of them think that they are struggling in mental guidance and that it is better for the professionals to guide club activities.

3.6 CURRENT HEALTH CONDITION (Q4-15,16,17)

Here, the current health condition (Q4-15,16,17) is analyzed.

FIGURE 3.6 CURRENT HEALTH CONDITION (Q4-15,16,17)

From Figure 3.6, we can find one strong cluster. "Good Physical Condition", "Progressing Smoothly", and "Living a Full Life" have a high co-occurrence rate. Keeping a good health condition has a great influence on having the feeling of "Worthwhile".

4. REMARKS

In this paper, the text mining method is applied, and the following are the main results.

(1) "Adviser in the non-strong club" has a strong co-occurrence rate with "3"HandlingStudentsPerformance, "3"Teaching, "3"PreparingTeaching, "3"ClubActivities, "3"CommitteeGuidance, "3"ClassManagement and "3"PTAmeeting, where these mean that they do not feel a burden in such daily works. On the other hand, those who feel it a burden for the daily works feel it a burden for other jobs as well.
(2) "Feel Worthwhile" has a high co-occurrence rate with "Better for the professionals to Guide", "Want the person to consult with", "Struggling in Technical Guidance" and "Struggling in Mental Guidance". Therefore it is expected that the burden can be decreased by utilizing professionals outside.
(3) "Male" has a high co-occurrence rate with "Feel Worthwhile", "Married", "Experience 5~10 years", "Present School 5~10 years" and "Teacher". These suggest that married male teachers who have been working long have a high possibility of feeling worthwhile.
(4) "Good Physical Condition", "Progressing Smoothly", and "Living a Full Life" have a high co-occurrence rate. Keeping a good health condition has a great influence on having the feeling of "Worthwhile".

"Feel Worthwhile" also has a high co-occurrence rate with "Club Activities is active", "Better for the professionals to Guide", "Not Inexperience" and "Adviser", and a rather high co-occurrence rate with "Struggling in Mental Guidance". We can see that even if they feel it worthwhile to guide club activities, many of them think that they are struggling in mental guidance and that it is better for the professionals to guide club activities.

We can find a cluster in which they do not feel it a burden for the daily works. On the other hand, there is also a cluster in which they feel it a burden for the daily works. The connecting item for each cluster is "Feel Worthwhile". This item is a keyword for solving the problem. Promoting the feeling of worthwhileness by decreasing the burden would be an essential function to be investigated. Introducing outside professionals to guide the club activities is one of the solutions for that.

5. CONCLUSION

High school teachers in Japan spend very busy days on their daily work, including teaching, support for club activities, and deskwork. Among these, they devote a large share of time to managing students' club activities compared with other countries. In this paper, a questionnaire investigation was executed in order to clarify their current condition and their consciousness, and to seek the possibility of utilizing school social workers for their support. Fundamental statistical analysis and Text Mining Analysis were performed. Based upon the results, a unique/original approach should be executed for the "Club Activities". Teachers' burden may be decreased by utilizing outside specialists in guiding club activities. The School Social Worker can coordinate professionals outside the school and can help teachers by decreasing their burden in that area. This suggests the possibility of developing a new activity field for the School Social Worker. Various cases should be investigated hereafter.

REFERENCES

[1] Hideyuki Konyuba (2011), "Analysis on teachers' workload and development of school organization: focusing on the school club activities", National Institute for Educational Policy Research, 140, pp. 181-193.
[2] Kazuo Yonekawa (2011), "The role of school social worker for mental health of a junior high school teacher", Bulletin of Faculty of Literature, Kurume University, 10・11, pp. 7-15.
[3] Michael Stokely Kelly, Stephanie Cosner Berzin, Andy Frey, Michelle Alvarez, Gary Shaffer and Kimberly O'Brien, "The State of School Social Work: Findings from the National School Social Work Survey", School Mental Health, September 2010, Volume 2, Issue 3, pp. 132-141.

The Analysis to the Questionnaire Investigation on the Rare Sugars

Yuki Higuchi, Faculty of Business Administration, Setsunan University, Osaka, Japan (y-higuch@kjo.setsunan.ac.jp)
Kazuhiro Takeyasu, College of Business Administration, Tokoha University, Shizuoka, Japan (takeyasu@fj.tokoha-u.ac.jp)
Hiromasa Takeyasu, Faculty of Life and Culture, Kagawa Junior College, Kagawa, Japan (takeyasuh@kjc.ac.jp)

Abstract—The Rare Sugars exist naturally and have many kinds (more than 50). They have good effects on health, such as prevention of a rise in the blood-sugar level after eating, suppression of fat accumulation, suppression of a rise in blood pressure, and an anti-oxidative effect. They are in the spotlight for many people, especially those with metabolic syndrome. There are few related papers concerning marketing research on, and utilization of, this matter. In this paper, a questionnaire investigation is executed in order to clarify consumers' current condition and their consciousness, and to seek the possibility of utilizing the Rare Sugars. Hypothesis testing was executed based on that. Some interesting and instructive results were obtained.

Keywords—the rare sugars, consumer, hypothesis testing

I. INTRODUCTION

The study of the Rare Sugars was launched in the 1980s by Professor Takeshi Izumori (Kagawa University). The way to mass production was developed by the method of enzymatic reaction. The International Society of Rare Sugars was established in 2001. The local government of Kagawa Prefecture has come to assist this research activity on this big innovation newly born in Kagawa Prefecture. The Rare Sugars have the advantage that the blood-sugar level does not increase so much after eating, in spite of their being sugars. They also hold down the upturn of blood pressure. Therefore they are expected to serve as a new functional material for the prevention of metabolic syndrome.

Many medical research papers have been published on the Rare Sugars, as follows:
Analysis of the function of D-psicose: [2]
Analysis of the function of D-allose: [1]

On the other hand, there are few papers analyzing them from the viewpoint of consumers. The Rare Sugars are good for health and are sold in the market as a sweetening, seasoning or functional ingredient for food.

In this paper, a questionnaire investigation is executed in order to clarify the recognition level among consumers and to pursue the future possibility of the Rare Sugars. Basic statistical analysis and hypothesis testing are conducted. Three main issues are set. Then, 7 sub issues are set and hypothesis testing is executed. The rest of this paper is organized as follows. In section 2, the outline of the questionnaire investigation and its basic statistical results are exhibited. After that, hypothesis testing is performed in section 3, which is followed by the remarks of section 4.

II. OUTLINE AND THE BASIC STATISTICAL RESULTS OF THE QUESTIONNAIRE RESEARCH

A. Outline of the Questionnaire Research

A questionnaire investigation is executed in order to clarify the recognition level among consumers and to pursue the future possibility of the Rare Sugars. The outline of the questionnaire research is as follows.

(1) Scope of investigation: Participants in a cooking class in Kagawa Prefecture
(2) Period: April - June 2015
(3) Method: Leave until called for
(4) Collection: Number of distribution 300; Number of collection 171 (collection rate 57.0%); Valid answers 171

B. Basic Statistical Result

Now, we show the main summary results by single variable.

1) Basic characteristics of answerers

Q32 Sex
Male: 30 (18%); Female: 137 (82%); Total: 167 (100%)

Q33 Age
-19: 54 (32.1%); 20-29: 18 (10.7%); 30-39: 8 (4.8%); 40-49: 15 (8.9%); 50-59: 17 (10.1%); 60-: 56 (33.3%); Total: 168 (100%)

Q34 Occupation
Student: 68 (40.7%); Officer: 8 (4.8%); Company Employee: 14 (8.4%); Clerk of Organization: 3 (1.8%); Independents: 6 (3.6%); Part-timer: 9 (5.4%); Housewife: 49 (29.3%); Miscellaneous: 10 (6.0%); Total: 167 (100%)

2) Summary results for the items used in Hypothesis Testing

Q1 Do you know the Rare Sugars?
Know: 152 (89.9%); Do not know: 17 (10.1%); Total: 169 (100%)

Q3 Do you know that the Rare Sugar has an effect on obese prevention and/or diabetes prevention etc.?
Know: 126 (79.7%); Do not know: 32 (20.3%); Total: 158 (100%)

Q6 Have you drunk or eaten food which includes the Rare Sugar?
Yes: 124 (79.0%); No: 33 (21.0%); Total: 157 (100%)

Q10 I want to use it in cooking.
Think it very much: 45 (28.0%); Slightly think so: 65 (40.4%); Cannot say either: 34 (21.1%); Slightly do not think so: 10 (6.2%); Do not think so: 7 (4.3%); Total: 161 (100%)

Q18 I cannot grasp the concrete effect.
Think it very much: 26 (16.6%); Slightly think so: 38 (24.2%); Cannot say either: 31 (19.7%); Slightly do not think so: 26 (16.6%); Do not think so: 36 (22.9%); Total: 157 (100%)

Q20 Surrounding people do not use it so often.
Think it very much: 24 (15.2%); Slightly think so: 72 (45.6%); Cannot say either: 51 (32.3%); Slightly do not think so: 9 (5.7%); Do not think so: 2 (1.3%); Total: 158 (100%)

Q25 Do you take interest in a diet?
Think it very much: 64 (38.6%); Slightly think so: 57 (34.3%); Cannot say either: 18 (10.8%); Slightly do not think so: 16 (9.6%); Do not think so: 11 (6.6%); Total: 166 (100%)
III. HYPOTHESIS TESTING

Hereinafter we perform hypothesis testing based upon the questionnaire investigation data.

A. Setting Hypothesis

First of all, we start from the hypothesis setting. Three main issues are set as follows.

A) Those who have interest in the Rare Sugars also have interest in health.
B) Those who do not know the Rare Sugars feel anxiety toward them.
C) Generally, females have much more interest in the Rare Sugars than males.

Next, we set the following 7 themes (sub issues) before setting the Null Hypotheses.

A-1) Those who know the Rare Sugars know that they are effective for obese prevention and/or diabetes prevention.

A-2) Those who know that the Rare Sugars are effective for obese prevention and/or diabetes prevention have eaten or drunk food in which the Rare Sugars are contained.
A-3) Those who have eaten or drunk food in which the Rare Sugars are contained have interest in diet.
B-1) Those who do not know the Rare Sugars have acquaintances who do not use the Rare Sugars.
B-2) Those who do not know the Rare Sugars do not understand their concrete effect.
C-1) Females have much more experience of eating or drinking food in which the Rare Sugar is contained than males.
C-2) Females want to use the Rare Sugars for cooking more than males.

Now, we set the following 7 Null Hypotheses.

A-1) There is not so much difference, concerning knowing that the Rare Sugars have an effect on obese prevention and/or diabetes prevention, between those who know the Rare Sugars and those who do not.
A-2) There is not so much difference, concerning the experience of eating and drinking food in which the Rare Sugars are contained, between those who know that the Rare Sugars are effective for obese prevention and/or diabetes prevention and those who do not.
A-3) There is not so much difference, concerning interest in diet, between those who have eaten or drunk food in which the Rare Sugars are contained and those who have not.
B-1) There is not so much difference, concerning having acquaintances who do not use the Rare Sugars, between those who know the Rare Sugars and those who do not.
B-2) There is not so much difference, concerning not knowing the concrete effect of the Rare Sugars, between those who know the Rare Sugars and those who do not.
C-1) There is not so much difference, concerning having eaten or drunk food in which the Rare Sugars are contained, between males and females.
C-2) There is not so much difference, concerning wanting to use the Rare Sugars for cooking, between males and females.

B. Hypothesis Testing

The results of statistical hypothesis testing are as follows.

Null Hypothesis A-1): There is not so much difference, concerning knowing that the Rare Sugars are effective for obese prevention and/or diabetes prevention, between those who know the Rare Sugars and those who do not. The summary table concerning Null Hypothesis A-1) is exhibited in Table 1.
TABLE I. SUMMARY TABLE FOR NULL HYPOTHESIS A-1)

Do you know the Rare Sugars? | Know the effect | Do not know the effect | Total
(Columns: Do you know that the Rare Sugar has an effect on obese prevention and/or diabetes prevention etc.?)
Know | 125 (82.8%) | 26 (17.2%) | 151 (100.0%)
Do not know | 1 (16.7%) | 5 (83.3%) | 6 (100.0%)
Total | 126 (80.3%) | 31 (19.7%) | 157 (100.0%)
Significance probability: 0.000
(The rejection region is over 6.6349 for the 1% significance level, 3.841 for the 5% significance level, 3.537 for the 6% significance level and 2.874 for the 9% significance level, with 1 degree of freedom.) The null hypothesis is rejected at the 1% significance level. It can be said that those who know the Rare Sugars know that they are effective for obese prevention and/or diabetes prevention.
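This test can be reproduced with any statistics package; as an illustration (our own sketch, not the authors' code), a Pearson chi-square test on the counts of Table 1 gives the reported significance probability:

from scipy.stats import chi2_contingency

# Counts from Table 1: rows = know / do not know the Rare Sugars,
# columns = know / do not know the effect on obese or diabetes prevention.
table = [[125, 26],
         [1, 5]]
# correction=False gives the plain Pearson chi-square that matches the
# rejection regions quoted above (6.6349 at the 1% level, 1 d.f.).
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 2), p)  # p is far below 0.01, so the null hypothesis is rejected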
Null Hypothesis A-2): There is not so much difference, concerning the experience of eating and drinking food in which the Rare Sugars are contained, between those who know that the Rare Sugars are effective for obese prevention and/or diabetes prevention and those who do not. The summary table concerning Null Hypothesis A-2) is exhibited in Table 2.

TABLE II. SUMMARY TABLE FOR NULL HYPOTHESIS A-2)

Do you know that the Rare Sugar has an effect on obese prevention and/or diabetes prevention etc.? | Yes | No | Total
(Columns: Have you drunk or eaten food which includes the Rare Sugar?)
Know | 103 (83.1%) | 21 (16.9%) | 124 (100.0%)
Do not know | 20 (62.5%) | 12 (37.5%) | 32 (100.0%)
Total | 123 (78.8%) | 33 (21.2%) | 156 (100.0%)
Significance probability: 0.011

The null hypothesis is rejected at the 2% significance level. It can be said that those who know that the Rare Sugars are effective for obese prevention and/or diabetes prevention have eaten or drunk food in which the Rare Sugars are contained.

Null Hypothesis A-3): There is not so much difference, concerning interest in diet, between those who have eaten or drunk food in which the Rare Sugars are contained and those who have not. The summary table concerning Null Hypothesis A-3) is exhibited in Table 3.

TABLE III. SUMMARY TABLE FOR NULL HYPOTHESIS A-3)

Have you drunk or eaten food which includes the Rare Sugar? | Think so | Cannot say either | Do not think so | Total
(Columns: Do you take interest in a diet?)
Yes | 94 (78.3%) | 7 (5.8%) | 19 (15.8%) | 120 (100.0%)
No | 22 (66.7%) | 6 (18.2%) | 5 (15.2%) | 33 (100.0%)
Total | 116 (75.8%) | 13 (8.5%) | 24 (15.7%) | 153 (100.0%)
Significance probability: 0.077

The null hypothesis is rejected at the 8% significance level. It can be said that those who have eaten or drunk food in which the Rare Sugars are contained have interest in diet.

Null Hypothesis B-1): There is not so much difference, concerning having acquaintances who do not use the Rare Sugars, between those who know the Rare Sugars and those who do not. The summary table concerning Null Hypothesis B-1) is exhibited in Table 4.

TABLE IV. SUMMARY TABLE FOR NULL HYPOTHESIS B-1)

Do you know the Rare Sugars? | Think so | Cannot say either | Do not think so | Total
(Columns: Surrounding people do not use it so often.)
Know | 89 (63.1%) | 41 (29.1%) | 11 (7.8%) | 141 (100.0%)
Do not know | 7 (43.8%) | 9 (56.3%) | 0 (0.0%) | 16 (100.0%)
Total | 96 (61.1%) | 50 (31.8%) | 11 (7.0%) | 157 (100.0%)
Significance probability: 0.065

The null hypothesis is rejected at the 7% significance level. It can be said that those who do not know the Rare Sugars have acquaintances who do not use the Rare Sugars.

Null Hypothesis B-2): There is not so much difference, concerning not knowing the concrete effect of the Rare Sugars, between those who know the Rare Sugars and those who do not. The summary table concerning Null Hypothesis B-2) is exhibited in Table 5.

TABLE V. SUMMARY TABLE FOR NULL HYPOTHESIS B-2)

Do you know the Rare Sugars? | Think so | Cannot say either | Do not think so | Total
(Columns: I cannot grasp the concrete effect.)
Know | 52 (37.1%) | 26 (18.6%) | 62 (44.3%) | 140 (100.0%)
Do not know | 11 (68.8%) | 5 (31.3%) | 0 (0.0%) | 16 (100.0%)
Total | 63 (40.4%) | 31 (19.9%) | 62 (39.7%) | 156 (100.0%)
Significance probability: 0.003

The null hypothesis is rejected at the 1% significance level (significance probability 0.003). It can be said that those who do not know the Rare Sugars do not understand their concrete effect.

Null Hypothesis C-1): There is not so much difference, concerning having eaten or drunk food in which the Rare Sugars are contained, between males and females. The summary table concerning Null Hypothesis C-1) is exhibited in Table 6.

TABLE VI. SUMMARY TABLE FOR NULL HYPOTHESIS C-1)

Sex | Yes | No | Total
(Columns: Have you drunk or eaten food which includes the Rare Sugar?)
Male | 24 (92.3%) | 2 (7.7%) | 26 (100.0%)
Female | 99 (76.2%) | 31 (23.8%) | 130 (100.0%)
Total | 123 (78.8%) | 33 (21.2%) | 156 (100.0%)
Significance probability: 0.066

The null hypothesis is rejected at the 7% significance level: the experience of eating or drinking food which includes the Rare Sugar differs between males and females.

Null Hypothesis C-2): There is not so much difference, concerning wanting to use the Rare Sugars for cooking, between males and females. The summary table concerning Null Hypothesis C-2) is exhibited in Table 7.

TABLE VII. SUMMARY TABLE FOR NULL HYPOTHESIS C-2)

Sex | Think so | Cannot say either | Do not think so | Total
(Columns: I want to use it in cooking.)
Male | 14 (48.3%) | 9 (31.0%) | 6 (20.7%) | 29 (100.0%)
Female | 95 (73.1%) | 24 (18.5%) | 11 (8.5%) | 130 (100.0%)
Total | 109 (68.6%) | 33 (20.8%) | 17 (10.7%) | 159 (100.0%)
Significance probability: 0.027

The null hypothesis is rejected at the 3% significance level. It can be said that females want to use the Rare Sugars for cooking more than males.

IV. REMARKS

The results of the hypothesis testing are as follows. Main issue A consists of 3 sub issues (A-1 to A-3). All of their null hypotheses were rejected, and main issue A was supported clearly. 2 sub issues were set for main issue B (B-1, B-2). All of their null hypotheses were rejected, and main issue B was supported clearly. 2 sub issues were set for main issue C (C-1, C-2). All of their null hypotheses were rejected, and main issue C was supported clearly.

V. CONCLUSION

The Rare Sugars exist naturally and have many kinds (more than 50). They have good effects on health. In this paper, a questionnaire investigation was executed in order to clarify consumers' current condition and their consciousness, and to seek the possibility of utilizing the Rare Sugars. Hypothesis testing was executed based on that. We set three main issues. All of their null hypotheses were rejected, and the main issues were supported clearly.

Further study on this, such as multivariate analysis, should be executed. Various cases should be investigated hereafter.

REFERENCES

[1] K. Yamada, C. Noguchi, K. Kamitori, Y. Dong, Y. Hirata, M.A. Hossain, I. Tsukamoto, M. Tokuda and F. Yamaguchi, "Rare sugar d-allose strongly induces thioredoxin-interacting protein and inhibits osteoclast differentiation in Raw264 cells", Nutr Res. 2012 Feb; 32(2): 116-123.
[2] M.A. Hossain, S. Kitagaki, D. Nakano, A. Nishiyama, Y. Funamoto, T. Matsunaga, I. Tsukamoto, F. Yamaguchi, K. Kamitori, Y. Dong, Y. Hirata, K. Murao, Y. Toyoda and M. Tokuda, "Rare sugar D-psicose improves insulin sensitivity and glucose tolerance in type 2 diabetes Otsuka Long-Evans Tokushima Fatty (OLETF) rats", Biochem Biophys Res Commun. 2011 Feb 4; 405(1): 7-12.

Predicting Customer Lifetime Value through Data Mining Technique in a Direct Selling Company
Arsie P. Mauricio1, John Michael M. Payawal2, Maida A. Dela Cueva3, Venusmar C. Quevedo4
Industrial Engineering Department
Adamson University
Manila, Philippines 1000
arsiemauricio@yahoo.com1, johnmichaelpayawal@gmail.com2, maidadelacueva@yahoo.com3, vcquevedo@gmail.com4

Abstract— The Direct Selling Industry in the Philippines is continuously growing as more people become direct sellers. With this, the ability of direct selling companies to manage their sellers becomes a challenge. Customer Lifetime Value (CLV), or the monetary value a customer is expected to contribute to the company before churning, is one measure that can be used as a basis for managing customers, and for this purpose the computation of the CLV must be accurate enough to be used effectively. However, a CLV computation that is specific to direct selling companies is not yet established. This research used Data Mining Techniques, specifically Binomial Logistic Regression Analysis and Multilayer Perceptron Neural Network, to develop a model that can predict CLV based on historical customer transaction and demographic data. Through Binomial Logistic Regression Analysis, the direct sellers' average Service Lifetime was found to be 12 years, and the significant factors that affect customer churn were determined as well. Through Multiple Linear Regression Analysis, the significant factors that affect customer profit contribution were identified. Markov Chain Analysis was then used to establish possible customer states and a state transition probability matrix. Finally, a Multilayer Perceptron Neural Network with 1 hidden layer was used to establish a neural network predictive model. The results were used to develop the final model, which is based on the Present Worth formula. The resulting model has a hold-out relative error of 0.018, which indicates a good predictive accuracy of the equation. The model can be used by direct selling companies to help them manage their customers more effectively.

Keywords— Selling, Customer Lifetime Value, Data Mining, Binomial Logistic Regression, Multilayer Perceptron Neural Network, Multiple Linear Regression, Markov Chain

I. INTRODUCTION

Customer Lifetime Value (CLV) is one of the core tools in Customer Relationship Management. Customer Lifetime Value is the monetary net present value of the profit contribution of the customer throughout his lifetime or service length with the company. It can be used to determine which customer segments are the most profitable (high value) and which are not (low value) by predicting how much profit the customer will contribute to the company in the future.

The CLV model in each business differs, as the behaviour of the customers in each business varies. Different CLV models have been proposed by several researchers for specific industries in past years, but the Direct Selling Industry was not tapped yet.

This study created a CLV model, based on Data Mining Models, to be applied to the Direct Selling Industry with the direct sellers as its customers. The researchers conducted the study on a leading direct selling company in the Philippines that sells fashion items. Three (3) years of customer transaction and demographic data were collected and subjected to Data Mining Techniques.

In this study, we aim (a) to identify the significant factors that affect customer churn in a direct selling company, (b) to identify the significant factors that affect customer profit contribution in a direct selling industry, and (c) to develop a model that can be used to predict customer lifetime value in a direct selling company.

II. REVIEW OF RELATED LITERATURE AND STUDIES

The Direct Selling Association of the Philippines (DSAP) defined Direct Selling as the "face to face selling" of any product to customers via independent distributors or sellers [1]. Customer lifetime value (CLV) prediction is considered the "touchstone for customer relationship management" [2]. In a direct selling sense, CLV is the measure of how much a direct seller will contribute to the company, in monetary terms, before he/she terminates his/her transactions, say he/she stops being a direct seller or switches to another direct selling company.

Through the years, CLV prediction models have been established by several researchers. In the context of the banking sector, Khajvand and Jafar proposed an RFM model for segmenting customers and a time series method for CLV prediction [3]. A customer-pyramid segmentation technique and a Markov Decision process for determining the maximum customer lifetime value were also studied [4]. In a department store and supermarket setting, a weighted RFM by AHP method and an Artificial Neural Network SOM method for sorting customers and enhancing customer lifetime value prediction were presented [5]. In the car manufacturing and maintenance sector, a study regarding the use of data mining models, specifically Decision Tree, Logistic Regression, Markov Chain, and Neural Network, was used to determine the most applicable method for customer lifetime value prediction [6]. Based on the different literature present, Data Mining is the most appropriate technique to use in CLV prediction.

Data Mining is a means of "explaining the past and predicting the future" through the use of data analysis [7]. Data Mining uses mathematical models and algorithms to segment data and predict values by searching large collections of data to detect patterns and trends [8]. It should be noted that the field where Data Mining is used the most is CRM [9].

In this paper, the researchers used one type of Classification Data Mining Technique: Logistic Regression. Logistic Regression attempts to predict the probability that a certain binary event will happen. Logistic Regression enables a statistical analysis to overcome restrictions of linear regression. For example, in Logistic Regression, the dependent and independent variables need not have a linear relationship, and the dependent variable and the error terms need not be normally distributed [10]. Logistic Regression is similar to linear regression, except that it predicts a dichotomous dependent variable instead of a continuous one. Logistic Regression was used to predict the service life, or the time before churn, of each subject customer.

The researchers also used one type of Regression Data Mining Technique: Artificial Neural Network (ANN). A Neural Network, more properly known as an Artificial Neural Network, is based on the structure of a mammalian cerebral cortex, where a neuron cell is stimulated by several inputs and can be activated by an outside process [11]. The overall process creates a predictive model that can be used even if linear relationships between variables are not present. Also, ANN has fewer assumptions and restrictions on data compared to other Data Mining Models. In this paper's context, an ANN, specifically a Multilayer Perceptron, was used to predict future customer contribution.

The Multilayer Perceptron (MLP) is the most famous ANN architecture, as it can approximate any function. An MLP has three distinct layers: the input layer, the hidden layers, and the output layer. It uses neurons that are capable of processing data using weights and activation functions, sending data to the succeeding neuron, and propagating the error of preceding neurons, or back propagation [12]. MLP is an example of supervised training, and its most common training algorithm is back propagation [13].

III. METHODOLOGY

This research used a quantitative research design to solve the research questions presented. To solve the research problems, statistical techniques were applied, thus requiring a quantitative design approach. The researchers used a correlational research design to determine the factors that affect customer churn and profit contribution.

The first sampling design the researchers used was convenience sampling, to find the company where the researchers would conduct the study. The researchers sought the help of a leading direct selling company (Company A) in the Philippines to gather data to be used in the study. The researchers then used stratified random sampling, where only the direct sellers transacting with Company A with at least 3 years in the business were subjected to the study. The computed sample size is 51 participants. The 3 years of transaction and demographic data of the 51 randomly chosen direct sellers were then provided to the researchers, totalling 4661 transactions.

The researchers used software applications, specifically Microsoft Excel and SPSS (Statistical Product and Service Solutions), to analyze and interpret the data. The following is the research procedure that led to answering the research questions.

A. Data Gathering

The researchers collected 3 years of transaction and demographic data of the randomly selected direct sellers.

B. Estimation of Service Length through Churn Prediction

Possible factors were extracted from the transaction and demographic data collected and were translated into variables. These variables were used as independent variables in the logistic regression analysis.

A churn criterion was established by the subject company to determine the churn status of the customer. The churn status was used as the dependent variable in the logistic regression analysis. Let Cs be the churn status of the customer, where Cs = 0 indicates a not-churned customer and Cs = 1 indicates a churned customer.

The factors to be tested were divided into transaction and demographic data. The factors were the independent variables for the test, while the Churn Status (Cs) was the dependent variable. The statistical software SPSS was used to aid the analysis. The researchers used the following parameters for the Logistic Regression Analysis in this study: CI for exp(B) is 95%, Probability for Stepwise Entry = 0.05 and Removal = 0.10, Classification cutoff = 0.5, and maximum iterations = 20.

The logistic regression equation model based on the analysis was then created, and the model's accuracy was tested. Then, the probability Pci that the churn status is equal to 1 for each of the selected direct sellers was computed using the formula

$P_{ci} = \dfrac{e^{Y_i}}{1 + e^{Y_i}}$   (1)

where Yi is the Y score computed using the generated logistic regression. The mean churn probability Pc of all the customers was then computed. The probability of churn in a logistic regression follows a geometric distribution. The expected length L of observations before the first churn occurs is expressed by the formula

$L = \dfrac{1}{P_c}$   (2)
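A minimal sketch of this step in Python (the paper used SPSS; scikit-learn, the variable names, and the toy data below are our own illustration):

import numpy as np
from sklearn.linear_model import LogisticRegression

def expected_service_length(X, churn_status):
    """Fit churn ~ factors, average Pci over customers, return L = 1/Pc."""
    model = LogisticRegression(max_iter=1000).fit(X, churn_status)
    p_churn = model.predict_proba(X)[:, 1]   # Pci = e^Yi / (1 + e^Yi), equation (1)
    return 1.0 / p_churn.mean()              # equation (2)

# Illustrative toy data: columns such as average expense / return per visit.
X = np.array([[120.0, 5.0], [300.0, 0.0], [80.0, 20.0], [250.0, 2.0]])
churn = np.array([1, 0, 1, 0])
print(expected_service_length(X, churn))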

C. Estimation of State Transition Probability

From the transaction and demographic data, factors that can affect customer profit contribution prediction were identified. The factors were then translated into variables. The variables were tested for significance in predicting profit contribution using multiple linear regression analysis, with Net Sales as the dependent variable. The confidence interval level under Regression Coefficients is 95%, and the Probability of F under the Stepping Method Criteria was used, where Entry = .05 and Removal = .10.

Using the significant variables identified, the possible customer states and next states were determined. Based on past transactions, the transition probabilities were computed, and a transition probability matrix P was created. An identity matrix It, to be used as a matrix multiplier indicating the current state of the direct seller, was also created.

D. Prediction of Customer Contribution

A Multilayer Perceptron Neural Network was then run using Net Sales as the dependent variable and the identified significant factors as covariates. One hidden layer was used, and the output activation function was the identity type. 70%, 20%, and 10% of the data were used as the training, test, and hold-out sets for the model, respectively.

The researchers used SPSS to perform the Multilayer Perceptron Neural Network analysis. The researchers chose to randomly assign cases based on relative numbers of cases for the Partition Dataset. The number of hidden layers is one, with a Hyperbolic Tangent activation function. Under Output Layer, the activation function is Identity, and the rescaling of scale dependent variables is standardized. In this study, the type of training for the MLP is online, and the optimization algorithm is gradient descent. Under the SPSS MLP options, user-missing values are excluded, the maximum number of steps without a decrease in error is 1, the data to use for computing the prediction error and the maximum training epochs are automatic, the maximum training time is 15 minutes, the minimum relative change in training error is 0.0001, the minimum relative change in training error ratio is 0.001, and the maximum number of cases to store in memory is 1000.

Based on the resulting network diagram and synaptic weights, the Neural Network model equation Zt was established. The model's accuracy was also tested.

E. CLV Prediction Model

The prevailing discount factor d in the market was determined. Using the Service Lifetime (L) computed in equation (2), the Initial State Vector (It), the State Transition Probability matrix (P), and the Neural Network Model Equation (Zt), a model to compute the Customer Lifetime Value was then created.

IV. DATA AND RESULTS

A. Data Gathering

The transaction data used are Dealer Code, Transaction Number, Transaction Month, Transaction Day of the Month, Transaction Day of the Week, Gross Sales per Transaction, Discount, Returns, and Net Sales per Transaction. The demographic data gathered include the Position, Age, and Gender of the direct sellers subject to this study.

B. Estimation of Service Length through Churn Prediction

The subject company considers their dealers as "churned" once they do not pay their debts within the allotted due date (within 40 days of the specified due date).

Based on the Binomial Logistic Regression Analysis, the Average Expense per Visit and Average Return per Visit are significant in determining the likelihood that a customer will churn or not. The Nagelkerke R² value, or the variation of the dependent variable explained by the model, is 62%. Overall, the full model can correctly predict 94.1% of the cases.

Based on the average Pci, the model predicted that the churn probability Pc of a customer is 0.0892. The Estimated Service Length L of the customers is therefore 12 years. This means that the average length of time before a customer stops transacting is 12 years.

C. Estimation of State Transition Probability

Through Multiple Linear Regression Analysis, the adjusted R² of the regression model is 0.995, with R² = .995. This implies that the independent variables explain 99.5% of the variation in the dependent variable, Net Sales. Position, Frequency of Visit per Month, Expense per Visit, Discount per Visit, and Return per Visit are statistically significant, since their p-values < .05 (alpha level). These factors were subjected to state transition probabilities. Since Position depends on the accumulated gross sales made during 2 consecutive months, only the variables Average Expense per Visit, Average Discount per Visit, Average Return per Visit, and Frequency of Visit per Month were used to define customer states.

This means that there are 4 dimensions for each state. To reduce the number of possible states, the continuous variables were further divided into classes. This results in a total of 672 possible customer states. An Initial State Vector It needs to be set to indicate what state the customer is in during time t; with 672 possible states, there are 672 different It. The customer states identified were then subjected to Markov Chain analysis to determine the customer state transition probability matrix P.
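The transition probability matrix P can be estimated by counting observed period-to-period state moves, as in this Python sketch (our own illustration; the state ids and input layout are assumptions):

import numpy as np

def estimate_transition_matrix(state_paths, n_states=672):
    """state_paths: one list of integer state ids per customer, in time
    order.  P[s, s2] = fraction of moves from state s to state s2."""
    counts = np.zeros((n_states, n_states))
    for path in state_paths:
        for s, s2 in zip(path[:-1], path[1:]):
            counts[s, s2] += 1
    totals = counts.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0          # never-visited states keep zero rows
    return counts / totals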
D. Prediction of Customer Contribution

SPSS software was also used to aid the Multilayer Perceptron Neural Network analysis. The significant factors that affect customer profit contribution are the independent variables, while Net Sales is the dependent variable.

A network diagram was generated by the Neural Network analysis. There are 6 sources of input (Bias, Position, Frequency of Visit, Ave. Expense per Visit, Ave. Discount per Visit, and Ave. Return per Visit), which activate the hidden layer nodes (H(1:1) to H(1:3) in Table 1) through a Hyperbolic Tangent activation function along with the assigned synaptic weights. The result of each hidden node is subjected to an identity function, and the output (Net Sales) is generated through the sum of these resulting values.

Table 1 shows the synaptic weights of each of the nodes from the input layer to the hidden layer and from the hidden layer to the output layer.

Table 1. Synaptic Weights for Artificial Neural Network

Input Layer predictor | H(1:1) | H(1:2) | H(1:3)
(Bias) | -.121 | -.534 | -.690
Position | .133 | -.043 | .024
Frequency of Visit | .085 | .037 | -.057
Expense per Visit | .695 | .606 | .635
Discount per Visit | 1.212 | -.246 | -.226
Return per Visit | -.205 | -.132 | -.173

Hidden Layer 1 | Predicted output: Net Sales
(Bias) | 1.608
H(1:1) | .262
H(1:2) | 1.423
H(1:3) | 1.548

From the network diagram, the model equation is generated, shown in equation (3) (reconstructed here from the description above, writing $w_{0j}$ for the hidden-node bias and $v_0$ for the output bias):

$Z_t = v_0 + \sum_{j} v_j \tanh\!\left(w_{0j} + \sum_{i} w_{ij}\, r_i\right)$   (3)

where $r_i$ is the input of factor $i$, $w_{ij}$ is the synaptic weight from input $i$ to hidden node $j$, and $v_j$ is the synaptic weight from hidden node $j$ to the output node.

The model has a Training Relative Error of 0.151, a Testing Relative Error of 0.014, and a Hold-out Relative Error of 0.018. These values are a good indicator of model accuracy.
from the analysis was found to have a good fit of data.
(Bias) 1.608
Based on the significant factors determined, a Neural
Network was created to predict customer profit contribution.
Hidden H(1:1) .262 Neural Networks are used because of its ease of use but an
Layer accurate and reliable prediction can be obtained. The resulting
H(1:2) 1.423 Neural Network model, based on a feed forward Multilayer
1 Perceptron Analysis with 1 hidden layer and a hyperbolic
tangent hidden layer function and an identity output function.
H(1:3) 1.548 The model has a Training Relative Error of 0.151, a Testing
Relative Error of 0.014, and a Hold out Relative Error of
0.018 which indicates a good prediction model.
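Read this way, equation (4) can be sketched as follows (our own illustration; we assume the one-hot state vector is propagated by P each year, Z holds the per-state contribution predicted by (3), and L = 12):

import numpy as np

def predict_clv(i0, P, Z, d, L=12):
    """Equation (4): present worth of the expected contributions over
    the service lifetime.  i0: 1 x n one-hot initial state vector,
    P: n x n transition matrix, Z: length-n contribution vector,
    d: market discount rate."""
    state, clv = np.asarray(i0, dtype=float), 0.0
    for t in range(1, L + 1):
        state = state @ P                      # state distribution in year t
        clv += (state @ Z) / (1.0 + d) ** t    # discounted expected contribution
    return clv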

The Identity matrix is a 1 x 672 matrix that denotes [3] Khajvand, M., & Jafar, M. (2011). Procedia Computer Science
Estimating customer future value of different customer segments based
the state of a customer at time t. The transition probability on adapted RFM model in retail banking context. Procedia Computer
matrix is a 672 x 672 square matrix that shows the probability Science, 3, 1327–1332. doi:10.1016/j.procs.2011.01.011
that a customer will shift from one state to another for the next [4] Ekinci, Y., Ülengin, F., Uray, N., & Ülengin, B. (2014). Analysis of
year t. The discount factor is prevailing the interest rate of customer lifetime value and marketing expenditure decisions through a
money. Markovian-based model. European Journal of Operational Research,
237(1), 278–288. doi:10.1016/j.ejor.2014.01.014
The predictive model can be used by the company to
[5] Ekinci, Y., Ülengin, F., Uray, N., & Ülengin, B. (2014). Analysis of
compute for the Customer Lifetime Value of each customers. customer lifetime value and marketing expenditure decisions through a
Markovian-based model. European Journal of Operational Research,
VI. RECOMMENDATION

The computed CLV using the developed model should be used and interpreted correctly by the company; otherwise, the CLV will not be useful at all. The company can also use the results of the significance test to reduce the likelihood of customer churn and/or to improve the customer's profit contribution. The company's marketing strategy can be based on the computed CLV. The company should also be able to improve the model through the acquisition of more data, the use of other techniques, an increased sample size, and a larger time scope. For further studies, product details such as the type of product purchased, the quantity of product purchased, and its specifications can also be included as additional significant factors to be tested. A deeper look at the factors might also extract data that the Data Mining techniques might find significant in both customer churn status and profit contribution.

Determining possible customer states became extensive with the use of Markov Chain analysis, as the method accepts only one-dimensional variables. Multiple variables are likely to be involved in representing a customer state when this model is applied to other companies (four, in this research's case), making the Markov chain model unmanageable. To resolve this problem, system dynamics or Petri nets might be used to model a customer's future purchasing behaviour, as proposed by Cheng et al.

Also, the Data Mining techniques used in this study, such as Binomial Logistic Regression, Multiple Linear Regression, and Multilayer Perceptron Neural Networks, can be improved for more accurate results by integrating other data mining techniques such as the Support Vector Machine (SVM). According to the literature, this type of data mining is a complex and time-consuming algorithm, but used correctly and given more time it will also give good prediction accuracy.
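As a hedged sketch of the suggested SVM integration (not part of the original study; the features and file below are hypothetical), a churn classifier could look like:

    # Sketch of an SVM churn classifier, assuming hypothetical RFM-style
    # features and a binary churn label; illustrative only.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    df = pd.read_csv("customers.csv")  # hypothetical input file
    X = df[["recency", "frequency", "monetary"]]
    y = df["churned"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
    # Feature scaling matters for SVMs, hence the pipeline.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_train, y_train)
    print("Accuracy:", clf.score(X_test, y_test))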
A bigger sample size and a larger time scope can be used as input for the study to increase the accuracy of the model's results. The study can also include different companies with a similar line of business so that the result generalizes to the whole direct selling industry under the garments line of products.

REFERENCES

[1] DSAP. (n.d.). Definition of direct selling and MLM vs pyramiding. Retrieved from http://www.dsap.ph/definition-of-direct-selling-and-mlm-vs-pyramiding.html
[2] Chen, X. (2006). Customer lifetime value: An integrated data mining approach. Lingnan University.
[3] Khajvand, M., & Jafar, M. (2011). Estimating customer future value of different customer segments based on adapted RFM model in retail banking context. Procedia Computer Science, 3, 1327–1332. doi:10.1016/j.procs.2011.01.011
[4] Ekinci, Y., Ülengin, F., Uray, N., & Ülengin, B. (2014). Analysis of customer lifetime value and marketing expenditure decisions through a Markovian-based model. European Journal of Operational Research, 237(1), 278–288. doi:10.1016/j.ejor.2014.01.014
[5] Ekinci, Y., Ülengin, F., Uray, N., & Ülengin, B. (2014). Analysis of customer lifetime value and marketing expenditure decisions through a Markovian-based model. European Journal of Operational Research, 237(1), 278–288. doi:10.1016/j.ejor.2014.01.014
[6] Cheng, C.-B., Chiu, S. W., & Wu, J.-Y. (2012). Customer lifetime value prediction by a Markov chain based data mining model: Application to an auto repair and maintenance company in Taiwan. Scientia Iranica, 19(3), 849–855. doi:10.1016/j.scient.2011.11.045
[7] Sayad, S. (2010). Data mining (pp. 1–25).
[8] Oracle Data Mining. (2008), 1(May).
[9] Poll: Data mining applications in 2008. (2008). KDnuggets.
[10] Fadlalla, A. (2005). An experimental investigation of the impact of aggregation on the performance of data mining with logistic regression. Information & Management, 42(5), 695–707. doi:10.1016/j.im.2004.04.005
[11] Dolhansky, B. (2013). Artificial neural networks: Linear regression (Part 1). Retrieved February 04, 2015, from http://briandolhansky.com/blog/artificial-neural-networks-linear-regression-part-1
[12] Fragkaki, A. G., Farmaki, E., Thomaidis, N., Tsantili-Kakoulidou, A., Angelis, Y. S., Koupparis, M., & Georgakopoulos, C. (2012). Comparison of multiple linear regression, partial least squares and artificial neural networks for prediction of gas chromatographic relative retention times of trimethylsilylated anabolic androgenic steroids. Journal of Chromatography A, 1256, 232–239. doi:10.1016/j.chroma.2012.07.064
[13] Chen, X. (2006). Customer lifetime value: An integrated data mining approach. Lingnan University.
[14] Cheng, C.-B., Chiu, S. W., & Wu, J.-Y. (2012). Customer lifetime value prediction by a Markov chain based data mining model: Application to an auto repair and maintenance company in Taiwan. Scientia Iranica, 19(3), 849–855. doi:10.1016/j.scient.2011.11.045
[15] Dolhansky, B. (2013). Artificial neural networks: Linear regression (Part 1). Retrieved February 04, 2015, from http://briandolhansky.com/blog/artificial-neural-networks-linear-regression-part-1
[16] DSAP. (n.d.). Definition of direct selling and MLM vs pyramiding. Retrieved from http://www.dsap.ph/definition-of-direct-selling-and-mlm-vs-pyramiding.html
[17] Ekinci, Y., Ülengin, F., Uray, N., & Ülengin, B. (2014). Analysis of customer lifetime value and marketing expenditure decisions through a Markovian-based model. European Journal of Operational Research, 237(1), 278–288. doi:10.1016/j.ejor.2014.01.014
[18] Fadlalla, A. (2005). An experimental investigation of the impact of aggregation on the performance of data mining with logistic regression. Information & Management, 42(5), 695–707. doi:10.1016/j.im.2004.04.005
[19] Fragkaki, A. G., Farmaki, E., Thomaidis, N., Tsantili-Kakoulidou, A., Angelis, Y. S., Koupparis, M., & Georgakopoulos, C. (2012). Comparison of multiple linear regression, partial least squares and artificial neural networks for prediction of gas chromatographic relative retention times of trimethylsilylated anabolic androgenic steroids. Journal of Chromatography A, 1256, 232–239. doi:10.1016/j.chroma.2012.07.064
[20] García Nieto, P. J., Martínez Torres, J., de Cos Juez, F. J., & Sánchez Lasheras, F. (2012). Using multivariate adaptive regression splines and multilayer perceptron networks to evaluate paper manufactured using Eucalyptus globulus. Applied Mathematics and Computation, 219(2), 755–763. doi:10.1016/j.amc.2012.07.001
Knowledge Sharing and the Innovation Capability of Chinese Firms: The Role of Guanxi

Oswaldo Jose Jimenez Torres (1st Author) and Dr. Dapeng Liang (2nd Author)
School of Business Management, Harbin Institute of Technology, Harbin, China
Email: Oswaldojim@gmail.com, Ldp920@hit.edu.cn

Abstract—Building on the theory of Social Capital, the present theoretical study aims to introduce a framework to analyze the extent to which the innovation capability of Chinese firms is affected by Guanxi as a moderator between the former and the knowledge sharing behaviors among individuals and teams. The three dimensions of knowledge sharing considered here are: type of knowledge, and quantity and quality of knowledge shared, plus the influence of the last two on the perceptions of trustworthiness among employees. This article also includes propositions and recommendations to guide future research in this area.

Keywords—Guanxi; innovation; knowledge sharing

I. INTRODUCTION

The set of social principles known as Guanxi is the centerpiece of Chinese society. It has its origins in Confucianism, which has been part of the Chinese social system for over 2,500 years. The concept of Guanxi was built on and preceded by the concept of lun, which encloses three main notions: the importance of human relationships, the social order, and moral principles [1]. Much of the literature on Guanxi is business orientated, in that it is focused on grasping and explaining to non-Chinese the relevance and complexities of this concept for conducting negotiations and achieving competitive advantage in China, as well as for dealing with Chinese organizations overseas. Confucianism encloses the theory of Relationalism, which establishes that individuals must be relationship-based. Confucianism also clearly differentiates the most important human relationships, known as the Five Cardinal Relationships: ruler-subject, father-son, husband-wife, elder brother-younger brother, and friend-friend. In addition, Confucianism establishes moral principles that dictate the interactions between parties; this means that in each human relationship different moral standards are to be applied. Although the values of Chinese society have evolved, they are still based on the idea of the Confucian society, and so Guanxi relationships are conducted considering those values. The clear distinctions established in Confucianism about the kinds of relationships individuals have, the obligations of the parties, and the moral principles applicable in each different case suppose an unequal or discriminatory treatment of members in society or groups.

Considering these facts, it is worthwhile to analyze the impact of all these socio-cultural connotations in the organizational context in order to identify whether they are beneficial, harmful, or irrelevant for organizations.

Different authors have studied the effects of these social norms in organizations, such as [2], who, based on the theory of social capital, proposed that favor exchanges in organizations outside of private lives frequently implicate the use of organizational resources as well as positions. These practices benefit the knowledge providers, in that they increase their private social capital but not necessarily the public social capital in the organization. [3] showed that individuals who share strong empathy with each other tend to foster this empathy by providing resources to each other, even when this implies neglecting the collective good [4]. Therefore, it can be presumed that the behaviors implicit in Guanxi relationships, though they generate benefits for individuals who share close ties with each other, might be detrimental for organizations, specifically for knowledge sharing practices among team members, because they would act as a barrier restricting the equalitarian or unbiased flow of knowledge between all the members involved in specific tasks for an organization, therefore holding back the firm's innovation capability. The role of intraorganizational knowledge sharing in enhancing a firm's innovation capability has been the focus of previous research, for instance [5] and [6]. Its importance in accelerating the capacity of individuals and teams to solve problems and generate new ideas and business models is created by both knowledge donating and collecting among individuals in an organization.

Therefore, the main assumption of the present study is that the innovation capability of organizations is affected by the role of Guanxi as a moderator between the former and the knowledge sharing behaviors of their employees. Hence, this study presents a conceptual framework built on previous literature, propositions, and directions for upcoming research on this subject. The dimensions of knowledge sharing to be
addressed are the following: type of knowledge shared (explicit and tacit), quality and quantity of knowledge shared, and the impact of the last two dimensions on the perceptions of trust among colleagues.

II. LITERATURE REVIEW

Guanxi and the Theory of Social Capital

Research on Guanxi has been conducted taking different approaches aimed at understanding its meaning, the values it encloses, and its impact on society and organizations. In this effort, academics have analyzed it using the theory of social capital due to similarities observed in these two abstract concepts.

[10] studied the concept of social capital considering the approach of two main intellectual streams focused on social action. One of them treats actors as social individuals highly influenced by a set of norms, rules, and obligations. On the other hand, we find the economic stream of thought, whose basic assumption is that the actor drives himself according only to his self-interest. [11] explained that social capital can operate in two forms depending on the actors and their dynamics: public social capital is owned by the unit or organization as a whole, while private social capital is owned only privately by a certain group of individuals. [12] defined social capital as: "The sum of the resources, actual or virtual, that accrue to an individual or a group by virtue of possessing a durable network of more or less institutionalized relationships of mutual acquaintance and recognition."

[13] introduced a comprehensive analysis of the different definitions that have been attributed to Guanxi throughout the decades, along with some examples of the various meanings the word Guanxi can have depending on the context. Some definitions relevant for academia are: "Friendship with implications of a continual exchange of favors" [14], and "The concept of drawing on connections in order to secure favors in personal relations" [15]. Therefore, there seems to be an affinity between the dynamics and essence found in both the theory of social capital and Guanxi, and a clear academic basis to use the former as a theoretical kaleidoscope to study the latter.

III. CULTURAL DIFFERENCES BETWEEN CHINA AND BOTH CONFUCIAN AND NON-CONFUCIAN SOCIETIES

Japan shares roots with the Chinese culture, and so the Confucian precepts are also embraced in that country; nevertheless, some substantial variations can be acknowledged. Different authors have identified Wa as the Japanese social code analogous to Guanxi. Wa can be understood as harmony, and it is composed of other social values that basically aim to foster harmony and unity in Japanese society [7]. Based on these facts and previous studies on Guanxi, in which it has been considered a rather dyadic or close-network figure in society, we can identify a key difference from Wa. This Japanese social norm aims to foster a collectivistic society, placing the interest of the community or organization first instead of individuals, while the main objective of Guanxi relationships is to provide benefits and advantages for individuals instead of the whole community. Therefore, it could be assumed that the relationship between knowledge sharing behaviors and a firm's innovation capability in a social context based on the values of Wa would be substantially different from the results obtained in China.

The knowledge sharing behaviors and their impact on a firm's innovation capability in the Chinese context are also very likely to differ from non-Confucian-based societies. For instance, Wasta, an indigenous concept of Arab countries, refers to a process whereby individuals can achieve goals by networking with key persons [8]. According to [9], Wasta relationships do not include the principle of reciprocity between parties. This constitutes a major difference with Guanxi, whereby reciprocity (renqing) among parties is fundamental in order to maintain and strengthen the relationship.

In the light of this, it can be claimed that the results obtained in this research, as well as the conclusions reached in it, will not only be applicable in the Chinese context, but will also have the potential to help researchers and managers understand the same phenomena in different cultural environments.

IV. GUANXI AS A MODERATOR

The framework introduced by this article proposes Guanxi to be treated as a moderator between the three dimensions of knowledge sharing identified for this research (type of knowledge (tacit and explicit), and quality and quantity of knowledge shared, plus the effect of the last two on the perceptions of trustworthiness towards outsiders) and the innovation capability of firms. This approach refines and develops previous literature in which Guanxi has been regarded as a moderator considering its features as well as its impact between certain variables of study, such as [16]; in that article both tangible relational value (relational benefits) and intangible relational value (Guanxi) were analyzed independently as moderators between the variables of relational risk and the willingness to share knowledge at the inter-organizational level in supply chains. The results confirmed the role of Guanxi as a moderator, which improved the interaction between relational risk and knowledge sharing.

V. PROPOSITIONS

In accordance with the analysis of previous literature conducted for the present study, as well as the approach taken for it, a series of propositions are presented next:

Propositions on knowledge sharing behaviors:

Proposition 1: Organizations where the type of knowledge shared among their employees is determined to a major extent by Guanxi networks present less innovation capability compared to those that present a rather opposite dynamic.
Proposition 2: Organizations where the type of knowledge sharing among their employees is determined to a major extent by Guanxi networks present less innovation capability compared to those that present a rather opposite dynamic.

Proposition 3: Organizations where the quality of knowledge sharing among their employees is determined to a major extent by Guanxi networks present less innovation capability compared to those that present a rather opposite dynamic.

Proposition 4: Organizations where the quantity of knowledge sharing among their employees is determined to a major extent by Guanxi networks present less innovation capability compared to those that present a rather opposite dynamic.

Propositions on the perceptions of trustworthiness:

Proposition 5: The perceptions of trustworthiness towards team members outside an individual's Guanxi networks present a positive correlation with the quality of knowledge shared with them.

Proposition 6: The perceptions of trustworthiness towards team members outside an individual's Guanxi networks present a positive correlation with the quantity of knowledge shared with them.

Proposition 7: The innovation capability of organizations where the perception of trustworthiness towards team members outside individuals' Guanxi networks is low tends to be inferior to that of organizations that show a rather opposite dynamic.

Proposition on Guanxi:

Proposition 8: The innovation capability of Chinese organizations is determined by the role of Guanxi as a moderator between the former and certain knowledge sharing behaviors among their staff.

VI. ECONOMIC RELEVANCE

Concrete governmental actions to spur innovation in Chinese organizations have been taken by the national government, which in 2005 released the Medium- and Long-term National Plan for Science and Technology Development (2006-2020) [17]. It included the policy of 'indigenous innovation' as part of China's national strategy. [18] explored the evolution of the innovation process in China by identifying three stages of technology innovation: imitative innovation, cooperative innovation, and indigenous innovation. That study argued that even though China had achieved success in implementing the first two stages of innovation (imitative and cooperative), indigenous innovation had still been elusive for Chinese organizations. Hence, the object of the present research is closely aligned with current managerial needs generated by the pressing necessity for Chinese organizations to stimulate their innovation capabilities.

VII. DISCUSSION

Both social capital and Guanxi include precepts that stress the distinction between insiders and outsiders in relation to an individual's social circle. In this sense, we find the figures of private social capital on the one hand and the relationalism embedded in Guanxi relationships on the other. Considering this and the approach taken to build this theoretical framework, the figure of Guanxi networks is presented in this work to distinguish specifically the dynamics and perceptions of actors towards insiders and outsiders.

By answering the main question of this research, important contributions to the literature on organizational behavior could be obtained, as well as bases set for future research. The main question to be addressed is:

Is the innovation capability of Chinese organizations affected by the role of Guanxi as a moderator between the former and the knowledge sharing behaviors of their employees?

The main outcomes of conducting a formal research program based on the framework presented here can be summarized as follows:

First, identify the current state of Guanxi in Chinese organizations, specifically how it moderates the relationship between knowledge sharing behavior among employees and the innovation capability of firms. The evolving nature of this concept has been addressed in previous literature, for instance [19] and [20]. Hence, this study is focused on determining how the ongoing changes in Chinese organizations, generated by diverse factors such as the generational shifts in management positions and new societal objectives, along with a strong influence from the western world, affect the relevance of Guanxi and its role as a moderator between knowledge sharing and the innovation capability of firms [21].

Second, find the features of the Chinese organizations that have mitigated the externalities implicit in Guanxi, so that their applicability can be considered and implemented in different organizations. By identifying the relevant features of the organizations that have successfully reduced the externalities present in Guanxi, more organizations could consider, adapt, and apply those managerial practices to deal better with indigenous social norms.

Third, present empirical bases to compare the role and evolution of Guanxi in intraorganizational knowledge sharing behaviors and innovation capabilities in Chinese organizations with the impact of analogous social norms at the organizational level in different countries, in order to foster the design of more comprehensive and universal managerial principles.
VIII. CONCLUSION

Considering the particularities implicit in Guanxi as compared with different social norms from Confucian and non-Confucian societies is a promising approach to understanding Chinese culture, and it constitutes a solid argument to engage in deeper and more detailed research on its relevance at the organizational level, for instance on knowledge sharing and the innovation capability of firms.

An empirical study built on this conceptual framework could also include the analysis of second-order variables that might explain different results across organizations, such as industry, staff composition, age of the organization, etc. This would also contribute to identifying more directions for new research.

REFERENCES

[1] Xiao-Ping Chen and Chao C. Chen (2004). On the intricacies of the Chinese guanxi: A process model of guanxi development. Asia Pacific Journal of Management, 21, 305–324.
[2] Chao C. Chen and Xiao-Ping Chen (2008). Negative externalities of close guanxi within organizations. Asia Pacific Journal of Management, Volume 26, Issue 1, pp. 37–53.
[3] Batson, C. Daniel; Batson, Judy G.; Todd, R. Matthew; Brummett, Beverly H.; Shaw, Laura L.; Aldeguer, Carlo M. R. (1995). Empathy and the collective good: Caring for one of the others in a social dilemma. Journal of Personality and Social Psychology, Vol. 68(4), Apr 1995, 619–631.
[4] Ying Chen, Ray Friedman, Enhai Yu, and Fubin Sun (2009). Examining the positive and negative effects of guanxi practices. Asia Pacific Journal of Management, 28:715–735.
[5] Jay Liebowitz (2002). Facilitating innovation through knowledge sharing: A look at the US Naval Surface Warfare Center, Carderock Division. The Journal of Computer Information Systems, suppl. Special Issue 42.5 (2002): 1–6.
[6] Hsiu-Fen Lin (2007). Knowledge sharing and firm innovation capability: an empirical study. International Journal of Manpower, Vol. 28, No. 3/4, 315–332.
[7] Jon P. Alston (1989). Wa, guanxi, and inhwa: Managerial principles in Japan, China, and Korea. Business Horizons, Volume 32, Issue 2, March–April 1989, Pages 26–31.
[8] Peter B. Smith (2011). Are indigenous approaches to achieving influence in business organizations distinctive? A comparative study of guanxi, wasta, jeitinho, svyazi and pulling strings. The International Journal of Human Resource Management, 1–16, iFirst.
[9] Barnett, Andy; Yandle, Bruce; Naufal, George S. (2013). Regulation, trust, and cronyism in Middle Eastern societies: The simple economics of 'wasta'. Leibniz Information Centre for Economics, Discussion Paper No. 7201.
[10] James S. Coleman (1988). Social capital in the creation of human capital. American Journal of Sociology, Vol. 94, Supplement: Organizations and institutions: Sociological and economic approaches to the analysis of social structure, pp. S95–S120.
[11] Van Buren, H., & Leana, C. R. (2000). Building relational wealth through employment practices. In C. R. Leana & D. M. Rousseau (Eds.), Relational wealth: The advantages of stability in a changing economy: 233–246. New York: Oxford University Press.
[12] Bourdieu, Pierre, and Wacquant, Loïc J. D. (1992). An invitation to reflexive sociology. Chicago: University of Chicago Press.
[13] Han Wei and Xi Youmin (2001). Is guanxi a model of China's business? Asia Pacific Management Review, 6(3), 295–304.
[14] Howard Davies, Thomas K. P. Leung, Sherriff T. K. Luk, and Yiu-hing Wong (1995). The benefits of "guanxi": the value of relationships in developing the Chinese market. Industrial Marketing Management, 24, 207–214.
[15] Seung Ho Park and Yadong Luo (2001). Guanxi and organizational dynamics: organizational networking in Chinese firms. Strategic Management Journal, Volume 22, Issue 5, 455–477.
[16] Jao-Hong Cheng (2011). Inter-organizational relationships and knowledge sharing in green supply chains: moderating by relational benefits and guanxi. Transportation Research Part E, 47, 837–849.
[17] Medium- and Long-term National Plan for Science and Technology Development (2006–2020). 国家中长期科学和技术发展规划纲要（2006—2020年）. http://www.gov.cn/gongbao/content/2006/content_240244.htm
[18] Peng Ru et al. (2012). Behind the development of technology: The transition of innovation modes in China's wind turbine manufacturing industry. Energy Policy, 43 (2012), 58–69.
[19] Guthrie, Doug (1999). Dragon in a Three-Piece Suit: The Emergence of Capitalism in China. Princeton, NJ: Princeton University Press.
[20] Mayfair Mei-hui Yang (2002). The resilience of guanxi and its new deployments: A critique of some new guanxi scholarship. The China Quarterly, No. 170, pp. 459–476.
[21] Ron Berger, Ram Herstein and Yoram Mitki (2013). Guanxi: the evolutionary process of management in China. Int. J. Strategic Change Management, Vol. 5, No. 1.
Two-level Hierarchical Routing based on Road Connectivity in VANETs

Zubair Amjad, Wang-Cheol Song∗, and Khi-Jung Ahn
Department of Computer Engineering, Jeju National University, Jeju, Republic of Korea
Email: xubairamjad@gmail.com, philo@jejunu.ac.kr, kjahn@jejunu.ac.kr
∗ Corresponding Author

Abstract—Vehicular ad hoc networks have received considerable attention in recent years. Being a subclass of mobile ad hoc networks, VANETs have some characteristics that differ from MANETs, such as fast moving nodes and frequent disconnections in the network. Existing MANET routing protocols do not perform well under these dynamic conditions in VANETs. We present a two-level hierarchical routing protocol based on road connectivity in VANETs called Hybrid Road-Aware Routing (HRAR), designed specifically for inter-vehicle communication in urban and/or highway environments. A distinguishing property of HRAR is its ability not only to discover the available paths between source and destination pairs but also to re-adjust the paths on the fly, without starting a new destination discovery process. Unlike other routing protocols, the path from source to destination is made of road IDs. Gateway nodes help in locating the destination and in forwarding packets from one road segment to another. For the evaluation of the HRAR protocol, we perform extensive simulations, and the results show that HRAR can help in the design and deployment of VANETs.

Index Terms—VANETs, Road-Aware, Hybrid, Geographic routing, Location service, Intelligent transport system.

I. INTRODUCTION

Vehicular Ad-hoc Networks (VANETs) are a subclass of Mobile Ad-Hoc Networks (MANETs), and in recent years they have become an important area of research because of their promising solutions for Intelligent Transportation Systems (ITS). VANETs are characterized by highly mobile nodes and are constrained by movement patterns. VANETs have a highly dynamic topology due to fast moving vehicles. Vehicles with active connections can experience frequent link failures because of the short lifetime of the connections and unpredictable drivers' behavior. Due to these characteristics, it is a challenge to provide an efficient routing protocol that can deal with the dynamics of VANETs.

Vehicles move along the roads, so a road-aware approach is needed that can use road information efficiently to reduce the bandwidth consumed by routing packets. A road between intersections can be treated as a zone or a segment, and vehicles on a road segment usually have similar communication characteristics and conditions. If the routing information of each road is considered separately, changes in vehicle topology can be handled on the local road segments very efficiently without consuming the bandwidth of the entire network.

In the past, different types of routing protocols have been proposed. One type, such as AODV [2] (Ad-hoc On-demand Distance Vector Routing) and DSR [3] (Dynamic Source Routing), does not maintain routing tables at all times. Instead, before starting the data transmission, the source vehicle sends a route request to find an available path to the destination. If the destination moves away from its location or the path between source and destination breaks, the destination discovery process has to be reinitialized from the start. Other types of protocols, e.g., OLSR [4] (Optimized Link State Routing) and ZHLS [5] (Zone-based Hierarchical Link State Routing), maintain routing tables and update them periodically. These protocols are expensive in terms of routing packets, as they update the routing tables periodically. Therefore, reducing the control overhead in VANETs remains a challenge for researchers.

In recent years, Geographic Routing (GR) has been studied especially to deal with node mobility in VANETs. Position-based routing does not require routing tables to be maintained in each node. It is considered scalable with respect to the size of the network and is therefore an excellent candidate for vehicle communication. Geographic routing can select the forwarding nodes in a flexible manner by using the location information of nodes. Several GR protocols have been proposed specifically for VANETs, e.g., [7], [8], [9]. However, most of them use the global view of the simulator, and the location of the destination is already known; therefore, these protocols have zero location service overhead. Furthermore, the performance of GR protocols still needs to be improved in terms of delivery ratio and end-to-end delay for real-time and safety applications.

We present a novel position-based hierarchical routing protocol based on road connectivity called Hybrid Road-Aware Routing (HRAR), designed specifically for inter-vehicle communication in urban and/or highway environments. HRAR integrates locating the destination with identifying available paths between source and destination. Gateway Vehicles (GVs) provide connectivity between different road segments. The destination discovery process does not flood the network with packet broadcasts. Once a path is discovered, it is adjusted on the fly to account for changes in the topology, without initiating a new destination discovery request. The proposed protocol is
hybrid because it uses characteristics of both types of routing protocols, proactive as well as reactive, at the same time to reduce the routing overhead.

This paper is organized as follows. We discuss the related work in Section II. In Section III we introduce the Hybrid Road-Aware Routing (HRAR) protocol. We discuss the simulation setup and the results of HRAR in Section IV. Section V concludes the paper.

II. RELATED WORK

Connectivity Aware Routing (CAR) [1] is a position-based routing protocol designed for VANETs. It uses adaptive beaconing to create and maintain neighbor routing tables. CAR introduces the concept of "Guards" that help in the path failure recovery process. Optimized broadcast is used to discover the location of the destination vehicle. However, the discovery packet is received by all vehicles between source and destination at least once. Greedy forwarding is used over the discovered path.

Road Topology-aware Routing (RTR) [10] extends the destination discovery mechanism of AODV [2] to discover two junction-disjoint routing paths. The source node uses one of the established paths rather than all paths for packet forwarding, so that communication overhead is reduced. When both junction-disjoint paths are connected, they are chosen alternately to transfer data packets. RTR's use of two junction-disjoint paths avoids network congestion and link failure in a single routing path. However, the control overhead for discovering the destination is even higher than that of AODV because the RREP is sent back to the source by the destination over two different paths.

Zone-based Hierarchical Link State routing (ZHLS) [5] is a link-state routing protocol for MANETs which divides the network into zones and routes the traffic through the zones. Each node only knows the node connectivity within its zone and the zone connectivity of the whole network. The link state routing is performed on two levels: the local node level and the global zone level. Unlike other hierarchical protocols, there is no cluster head in ZHLS. The zone-level topological information is distributed to all nodes. ZHLS provides a flexible, efficient, and effective approach to accommodate the changing topology in a wireless network environment. However, the distribution of the zone-level topological information to all nodes in the network introduces a huge amount of routing overhead.

There are some proposed GR protocols that return the shortest path between source and destination [11], [8], [9]. However, the shortest path between source and destination is not always populated with vehicles. Furthermore, if a local maximum is reached at any point, a new route is calculated. This process may take some time and may waste network capacity. Typically, GR protocols require the location information of the destination before starting the data forwarding process. The majority of proposed GR protocols assume the destination location is known at any time [12], [13], [14], and their performance is evaluated with zero location service overhead. However, it remains unclear how the destination location discovery process can influence the network capacity. Moreover, if the destination vehicle moves a substantial distance from its known position, these protocols fail.

The concept of "Gateway Nodes" is not completely new. In [5], the authors make use of gateway nodes in order to maintain connectivity between zones. There may be more than one gateway node between two zones, which avoids a single point of failure. A similar concept is used in UGAD [6], where vehicles at intersections are given the highest priority in order to avoid transmission blocking by buildings. Vehicles at intersections can send/receive packets to/from other connected roads. We believe that using these vehicles in an efficient manner can improve the performance of the routing protocol.

III. HYBRID ROAD-AWARE ROUTING

We present a novel two-level hierarchical position-based routing protocol called Hybrid Road-Aware Routing (HRAR) designed specifically for inter-vehicle communication. In the first level of the hierarchical model, vehicles generate link state packets (LSPs) containing their neighbor information to build the road segment routing table. This proactive mechanism is only used for routing on road segments between intersections. In the second level of the hierarchical routing, the source vehicle uses an enhanced reactive routing mechanism to discover the path to the destination. The path request is not sent to every vehicle; rather, it is only forwarded to a few vehicles on each road segment. Once a path is discovered, it is adjusted on the fly to account for changes in the topology without initiating a new destination discovery request. Unlike other reactive routing protocols, the destination discovery process does not flood the network with packet broadcasts. The HRAR protocol consists of four main parts: (1) road LSP generation; (2) path discovery; (3) packet forwarding over the discovered path; and (4) path maintenance.

A. Neighbor tables

In HRAR, roads are divided into segments with unique roadIDs. Vehicles asynchronously broadcast link requests to discover their neighbors. Neighbors within communication range in turn reply to the link request with a message that includes <vehicleID, roadID, Position, Speed, Direction>. Vehicles wait for an interval to receive the responses to their link requests. After receiving the responses from its neighbors, every vehicle generates its neighbor link state packet (LSP), which contains information about all the neighbors of the vehicle. Vehicles then propagate their neighbor-LSP locally within their road segment via intermediate neighbors. Using the neighbor-LSPs, a road-LSP for road 1 in Fig. 1a is generated by every vehicle as shown in Table I. Vehicles may also receive neighbor-LSPs from other roads. After receiving the neighbor-LSPs from other vehicles, each vehicle knows the road-segment-level topology. The shortest path algorithm is used to build its road segment routing table. Table II shows an example of the routing table of vehicle S on a road segment in Fig. 1a.
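To make this first, proactive level concrete, here is a rough sketch under stated assumptions: the data structures and tie-breaking below are inventions for illustration, and only the LSP fields come from the paper.

    # Sketch: building a road segment routing table from neighbor-LSPs.
    # Structures are hypothetical; the paper specifies only that link-request
    # replies carry <vehicleID, roadID, Position, Speed, Direction>.
    from collections import deque

    # Aggregated neighbor-LSPs for road 1, as in Table I ("2" and "3" are
    # the gateway links toward roads 2 and 3).
    road_lsp = {
        "A": {"B"},
        "B": {"A", "C", "D"},
        "C": {"B", "D", "S"},
        "D": {"B", "C", "E", "S"},
        "E": {"S", "D", "2"},
        "S": {"C", "D", "E", "2", "3"},
    }

    def routing_table(src, lsp):
        """BFS over the road-segment graph; with unit edge weights this
        yields shortest paths. Returns {destination: next hop from src}."""
        parent = {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in sorted(lsp.get(u, ())):
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        table = {}
        for dest in parent:
            if dest == src:
                continue
            hop = dest
            while parent[hop] != src:  # walk back to the neighbor adjacent to src
                hop = parent[hop]
            table[dest] = hop
        return table

    print(routing_table("S", road_lsp))
    # With ties broken alphabetically this gives next hop "C" for A and B,
    # as in Table II; entries "2" and "3" stand for the gateway links that
    # Table II resolves to the gateway vehicles G and H.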
[Figure 1 depicts roads 1-3 meeting at intersections I1-I4, with source, destination, normal-traffic, and gateway vehicles marked, and two candidate paths: (1) S - I1 - I2 - I4 and (2) S - I1 - I3 - I4.]

Fig. 1: Destination discovery [(a) A simple intersection with gateway vehicles, (b) RREQ, (c) Graph representation of roads]
TABLE I: Neighbor-LSP for Road 1

    Source   Neighbors
    A        B
    B        A, C, D
    C        B, D, S
    D        B, C, E, S
    E        S, D, 2
    S        C, D, E, 2, 3

TABLE II: Road segment routing table of vehicle S

    Destination node   Next node
    A                  C
    B                  C
    C                  C
    D                  D
    E                  E
    2                  G
    3                  H

The vehicles at or near an intersection also receive link requests from vehicles on different road segments. We call these vehicles the Gateway Vehicles (GVs). As shown in Fig. 1a (in red), vehicles S, E, G, and H are gateway vehicles. GVs provide connectivity between road segments. Due to fast moving vehicles and channel fading, the link request procedure has to be performed periodically to detect and update changes in the links between vehicles.

B. Path discovery

In level-two routing, HRAR takes a reactive routing approach. In order to find a path to a destination, HRAR uses route request/reply (RREQ/RREP) messages. Before sending the RREQ, the source vehicle checks its road segment routing table to find the destination vehicle, in case both are on the same road segment. If the destination vehicle is on a different road than the source, the source vehicle sends a route request to the gateway vehicle. The gateway vehicle then forwards that RREQ to the gateway vehicles on all the connected roads. On the intermediate roads, the RREQ is forwarded using the road segment routing table from one GV to others. The gateway vehicles that receive the RREQ check their road segment routing tables to find the destination vehicle. Fig. 1b shows the RREQ process. The gateway vehicle that finds the destination vehicle on its road replies to the RREQ with a route reply (RREP) packet. The RREP is sent back as unicast to the source vehicle. The RREP process is illustrated by the graph in Fig. 1c, where intersections are the vertices and roads are represented as edges. The root node is the first intersection near the source vehicle. The packet travels from one intersection to another towards the source.

In all the routing protocols that use RREQ/RREP, the path in the RREP consists of vehicleIDs. HRAR, however, uses roadIDs instead of vehicleIDs. The reason is that a link between two vehicles in the path may fail at any time, resulting in decreased performance. With roadIDs, even after a link failure between any two intermediate vehicles, the packets can still be forwarded by other vehicles using their road segment routing tables. The gateway vehicle that sends the RREP to the source can receive more than one RREQ from different roads. However, only the shortest path in terms of distance and number of intermediate hops is selected by the gateway vehicle.

C. Packet forwarding over discovered path

The HRAR protocol uses the road segment routing table to forward the packets towards the destination. The packets are sent from one intersection to another by the gateway vehicles. Intermediate vehicles use their road segment routing tables to route the packet towards the next gateway vehicle. Since beaconing is a basic building block of all inter-vehicle communication protocols, HRAR extends its concept to generate the road segment routing tables. An illustration of intermediate-vehicle packet forwarding can be seen in Fig. 1b.

D. Path maintenance

Any path between source and destination may become invalid. Let us assume that the path between two intersections stays connected. Then the only possibility for a path to break is when the end-point vehicles (source and/or destination) move a substantial distance from their initial positions. In such scenarios, the majority of the previously proposed protocols fail because they have to start a new destination discovery process each time
a path fails. Here we present how gateway vehicles help to adapt to such situations without losing data packets, avoiding a new destination discovery process.

1) Gateway vehicles in path maintenance: When a data transmission is going on, the gateway vehicles on both sides (source and destination) maintain a table of active connections. The path remains active as long as the end-point vehicles do not move to a road segment different from the current one. If either end-point vehicle moves to another road segment, the gateway vehicle of the previous road segment finds this out from the road segment routing table, as the table is updated periodically. The gateway vehicle sends a destination discovery request to all other GVs on connected roads. The GV on the same road as the destination vehicle replies to the destination discovery request. The old GV updates the path from source to destination and sends it to the source vehicle so that the new active path is used in the transmission. Since the source vehicle does not have to start a new destination discovery request, a huge amount of network bandwidth is saved.

Gateway vehicles help adjust the connected path without employing new path discoveries, even if the end-point vehicles change their moving speeds and/or directions.
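Pulling together the roadID-based forwarding of Section III-C with the gateway role above, a minimal structural sketch follows; all names and structures are assumptions for illustration, not the authors' implementation.

    # Sketch: relaying a packet whose route is a list of roadIDs (as in an
    # HRAR RREP) rather than vehicleIDs. Any vehicle on a segment can relay
    # via its road segment routing table; gateway vehicles hand the packet
    # to the next road in the path. All names here are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Packet:
        payload: bytes
        road_path: list        # e.g. ["road1", "road3"], taken from the RREP
        hop: int = 0           # index of the road segment being traversed

    @dataclass
    class Vehicle:
        vid: str
        segment_table: dict    # {destination or roadID: next node on this road}
        gateway_for: set = field(default_factory=set)  # roads this GV bridges to

    def next_relay(vehicle, pkt, dest_id):
        """Pick the next forwarding target for `pkt` at `vehicle`."""
        if dest_id in vehicle.segment_table:          # destination on own segment
            return vehicle.segment_table[dest_id]
        next_road = pkt.road_path[pkt.hop + 1]
        if next_road in vehicle.gateway_for:          # we are the gateway
            pkt.hop += 1                              # hand over to the next road
            return ("gateway", next_road)
        return vehicle.segment_table[next_road]       # head toward that gateway

A link failure between two specific vehicles does not break this path: the next relay is always re-resolved from the periodically refreshed segment table, which is the benefit of carrying roadIDs instead of vehicleIDs.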
IV. EVALUATION OF HRAR PROTOCOL

A. Simulation setup

In our experiments we use version 3.18 of the ns-3 simulator [15] with the Two-ray ground propagation model. The communication range of a vehicle is 250 m. The node speed is in the range of [18, 120] km/h and the data packet size is 512 bytes. The evaluated protocols are AODV, GPSR, and the HRAR protocol.

In this paper we present results for three different vehicle densities: low (fewer than 20 vehicles/km of road), medium (30-40 vehicles/km), and high (more than 50 vehicles/km), in two movement scenarios: highway and city. 10 CBR traffic sources with a sending rate of 5 packets/s are considered. Sources stop sending data packets 30 seconds before the simulation ends. Source/sink nodes neither leave the simulation area nor park anywhere in the map during the simulations (300 seconds).

B. Metrics

We use the following metrics to compare the performance of the evaluated protocols.

∙ Packet delivery ratio: The fraction of data packets sent by the source that are received by the destination.
∙ End-to-end delay: The time required for the data to reach the destination vehicle from the source vehicle.
∙ Routing overhead: The number of routing packets transmitted per data packet delivered at the destination.
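These metrics reduce to simple counts over the simulation traces; a minimal post-processing sketch is shown below (the counters and their values are assumed for illustration, not tied to ns-3's actual trace format).

    # Sketch: computing the three evaluation metrics from run counters.
    def compute_metrics(sent, received, delays, routing_packets):
        """sent/received: data packet counts; delays: per-packet end-to-end
        delays [s] of delivered packets; routing_packets: control packets."""
        pdr = received / sent if sent else 0.0
        avg_delay = sum(delays) / len(delays) if delays else float("nan")
        overhead = routing_packets / received if received else float("inf")
        return pdr, avg_delay, overhead

    pdr, delay, overhead = compute_metrics(
        sent=1000, received=780, delays=[0.12] * 780, routing_packets=3900)
    print(f"PDR={pdr:.2f}, avg delay={delay:.2f} s, overhead={overhead:.1f}")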
C. Simulation results

1) Packet delivery ratio: Fig. 2a and Fig. 3a show packet delivery ratios for the city and highway scenarios, respectively, with three different densities of vehicles. For all traffic densities, AODV performs very poorly in the city scenario, with 12-13% of data packets delivered. GPSR shows better performance than AODV (around 21% delivery ratio) but a lower delivery ratio than HRAR. Despite the additional overhead of maintaining the road segment routing table and of discovering the destination location and path, HRAR demonstrates much better results than GPSR. The highway scenarios are geographically less sophisticated than the city scenario; therefore, all studied protocols show a better PDR in the highway scenario. Again, HRAR outperforms AODV and GPSR, despite the need to obtain and maintain paths between source-destination pairs.

2) End-to-end delay: In terms of the average end-to-end delay (Fig. 2b and Fig. 3b), AODV and GPSR are worse than HRAR in both scenarios (city and highway). In HRAR, the path discovery process precedes only the first data transmission required between two vehicles. This step adds delay for the first packet. However, once the source vehicle finds the path, it is maintained by the gateway vehicles; therefore, the delay is very low for the data packets after the first one. The average end-to-end delay of the data packets for HRAR is much lower than for AODV and GPSR. This result is a consequence of HRAR's use of real connected paths between source and destination vehicles, whereas GPSR often fails due to the resolution of local maxima handled by the perimeter mode. Furthermore, each time a path fails between a source-destination pair, AODV starts a new discovery process, which adds a huge amount of delay to the packet delivery process. HRAR handles short-term disconnections very easily due to the use of gateway vehicles. In scenarios with a low density of vehicles, the network often becomes disconnected, which leads to a low PDR and high end-to-end delay for all the tested protocols. However, it can be clearly seen that HRAR outperforms the other two protocols in both metrics.

3) Routing overhead: The normalized routing overhead is presented in Fig. 2c and Fig. 3c. The routing overhead of AODV consists of the destination discovery, whereas in GPSR periodic beaconing contributes most of the routing overhead. The destination discovery process in AODV broadcasts the discovery packet to all the nodes in the network. In HRAR, in contrast, there is no broadcast for the destination discovery process; discovery packets are sent only to the gateway vehicles. In GPSR, the routing overhead caused by the failed paths contributes significantly to the degraded performance. It can be seen from the evaluation results that HRAR generates less routing overhead than the other two protocols in both scenarios, mainly because HRAR does not broadcast the destination discovery and does not start a new discovery process each time a path disconnects.

V. CONCLUSION

In this paper we presented HRAR, a novel Hybrid Road-Aware Routing protocol for VANETs. HRAR uses characteristics of both types of routing protocols, proactive and reactive, to provide a scalable, low-overhead routing algorithm with an efficient delivery ratio for inter-vehicle communication in both city and highway scenarios. HRAR is able to locate the
[Figures 2 and 3 plot the packet delivery ratio, average end-to-end delay (s), and normalized routing overhead of AODV, GPSR, and HRAR at low, medium, and high vehicle densities.]

Fig. 2: City scenario [(a) Packet Delivery Ratio, (b) Average end-to-end delay, (c) Routing Overhead]

Fig. 3: Highway scenario [(a) Packet Delivery Ratio, (b) Average End-to-End delay, (c) Routing Overhead]

destination without using a location service. Rather than using a broadcast to discover the destination and active path, HRAR uses an efficient scheme that reduces routing overhead and end-to-end delay while successfully delivering packets. Routes are maintained on the fly by the gateway vehicles to avoid a new discovery process from the start. The evaluation results show that HRAR outperforms GPSR and AODV in multiple performance metrics.

VANETs must deal with ever-changing node topologies, and a number of other issues must be addressed before VANETs can be deployed in the real world. HRAR addresses one of the key issues in the design and deployment of VANETs. The evaluation results in this study suggest that using the concept of a hybrid routing protocol together with gateway vehicles can significantly improve the performance of a routing protocol.

ACKNOWLEDGMENT

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science, and Technology (2011-0012329).

REFERENCES

[1] Naumov, Valery, and Thomas R. Gross. "Connectivity-aware routing (CAR) in vehicular ad-hoc networks." INFOCOM 2007. 26th IEEE International Conference on Computer Communications. IEEE, 2007.
[2] Perkins, Charles, Elizabeth Belding-Royer, and Samir Das. Ad hoc on-demand distance vector (AODV) routing. No. RFC 3561. 2003.
[3] Johnson, David B., and David A. Maltz. "Dynamic source routing in ad hoc wireless networks." Mobile Computing. Springer US, 1996. 153-181.
[4] Clausen, Thomas, and Philippe Jacquet. Optimized link state routing protocol (OLSR). No. RFC 3626. 2003.
[5] Joa-Ng, Mario, and I-Tai Lu. "A peer-to-peer zone-based two-level link state routing for mobile ad hoc networks." IEEE Journal on Selected Areas in Communications 17.8 (1999): 1415-1425.
[6] Akamatsu, Ryosuke, et al. "Adaptive delay-based geocast protocol for data dissemination in urban VANET." Mobile Computing and Ubiquitous Networking (ICMU), 2014 Seventh International Conference on. IEEE, 2014.
[7] Seet, Boon-Chong, et al. "A-STAR: A mobile ad hoc routing strategy for metropolis vehicular communications." Networking 2004. Springer Berlin Heidelberg, 2004.
[8] Tian, Jing, Lu Han, and Kurt Rothermel. "Spatially aware packet routing for mobile ad hoc inter-vehicle radio networks." Intelligent Transportation Systems, 2003. Proceedings. 2003 IEEE. Vol. 2. IEEE, 2003.
[9] Lochert, Christian, et al. "Geographic routing in city scenarios." ACM SIGMOBILE Mobile Computing and Communications Review 9.1 (2005): 69-72.
[10] Zhang, Hongli, and Qiang Zhang. "A novel road topology-aware routing in VANETs." Web Technologies and Applications. Springer International Publishing, 2014. 167-176.
[11] Lochert, Christian, et al. "A routing strategy for vehicular ad hoc networks in city environments." Intelligent Vehicles Symposium, 2003. Proceedings. IEEE, 2003.
[12] Karp, Brad, and Hsiang-Tsung Kung. "GPSR: Greedy perimeter stateless routing for wireless networks." Proceedings of the 6th Annual International Conference on Mobile Computing and Networking. ACM, 2000.
[13] Zorzi, Michele, and Ramesh R. Rao. "Geographic random forwarding (GeRaF) for ad hoc and sensor networks: multihop performance." IEEE Transactions on Mobile Computing 2.4 (2003): 337-348.
[14] Blažević, Ljubica, Silvia Giordano, and Jean-Yves Le Boudec. "Self organized terminode routing." Cluster Computing 5.2 (2002): 205-218.
[15] Henderson, Thomas R., et al. "Network simulations with the ns-3 simulator." SIGCOMM Demonstration 14 (2008).
An Overview of Wax Crystallization, Deposition Mechanism and Effect of Temperature & Shear

Azwan Harun¹, Nik Khairul Irfan Nik Ab Lah¹, Hazlina Husin¹, Zulkafli Hassan²
¹Faculty of Chemical Engineering, Universiti Teknologi MARA, Shah Alam, Malaysia
²Faculty of Chemical and Natural Resources Engineering, Universiti Malaysia Pahang, Malaysia

Abstract—Wax deposition can cause serious problems in crude oil transportation, especially in offshore transport pipelines. It is caused by heat transfer between the crude oil and the surroundings when there is no, or insufficient, thermal insulation coating on the pipeline. Wax deposition in the pipeline leads to a smaller flow diameter, which results in flow restriction and production loss. This paper briefly explains how wax starts to form deposits based on crystallization theory. It also reviews some of the proposed mechanisms of wax deposition, such as molecular diffusion, Soret diffusion, shear dispersion, Brownian diffusion, and gravity settling. Furthermore, the effects of temperature and shear on wax deposition on the internal surface of the pipeline are discussed.

Index Terms—Paraffin wax; wax crystallization; wax deposition mechanism; temperature; shear

(This research was financially funded by the Research Acculturation Grant Scheme (RAGS), managed by the Research Management Centre, UiTM.)

I. INTRODUCTION

In the oil reservoir, wax is soluble in the crude oil due to the high pressure and high temperature (HPHT) reservoir conditions. When the crude oil is extracted up to the sea surface, its temperature decreases due to heat loss to the surroundings. Decreasing the temperature below the Wax Appearance Temperature (WAT), or Cloud Point (CP), leads to the wax crystallization process [1]. The precipitated solid wax crystals are then deposited on the internal surface of the pipeline. These wax deposition issues appear frequently throughout the production process, both offshore and onshore, all over the world [2].

Deposition of wax in the pipeline during oil flow leads to pipeline blockage: the average flow decreases because of the reduced effective pipeline diameter, which results in reduced production output. In the worst cases, it can cause a full blockage that leads to huge production losses. Cold finger and flow loop tests performed in the laboratory are widely used in the petroleum industry for evaluating the wax deposition risk in pipelines and for identifying suitable inhibitors for deposition mitigation [3]. This paper reviews and discusses the theory of the wax crystallization process as temperature drops below the WAT, the suggested mechanisms of wax deposition, and the effect of temperature and shear on wax deposition.

II. WAX CRYSTALLIZATION PROCESS

Crystallization is defined as the process of formation of an ordered state (solid) from a disordered (gas) or partially ordered (liquid) state [4]. The solid wax crystal is formed from nuclei precipitated out of the crude oil that act as centers of crystallization [5]. The wax crystallization process occurs in two stages: nucleation and crystal growth [6]. Nucleation can be divided into two groups: primary nucleation and secondary nucleation. Primary nucleation consists of spontaneous (homogeneous) nucleation and nucleation induced by foreign particles (heterogeneous). Secondary nucleation is induced by the crystals already formed by primary nucleation. Spontaneous nucleation can be generated in isothermal conditions, while nucleation induced by foreign particles can happen in non-isothermal conditions in the presence of a cooling process [7].

The process of wax crystallization occurs when the temperature decreases and the wax molecules in the crude oil are hindered, having less energy to move around freely. This makes it easier for the wax molecules to come closer and form short-lived clusters. The short-lived clusters of wax molecules are called nuclei, and the formation process is nucleation. Short chains or flat monolayers may form initially, and eventually a crystalline lattice structure builds up, which is referred to as the growth process [8]. Once the nuclei are formed and the temperature is kept low, the growth process, which occurs rapidly, continues in local regions with high molecular concentration. If a nucleus grows past a certain critical size, it becomes stable under average conditions of high molecular concentration in the bulk of the crude oil [5].

III. WAX DEPOSITION MECHANISM

The wax deposition mechanism on the internal surface of the pipeline consists of processes that are theoretically complex. Studies of the wax deposition mechanism have been ongoing for decades by many researchers, such as Hunt, P. A. Bern, V. R. Withers, R. J. R. Cairns, E. D. Burger, T. K. Perkins, and J. H. Striegler [9]–[11]. A few mechanisms have been suggested in order to develop a better understanding of wax deposition, such as molecular diffusion, Soret diffusion, shear dispersion, Brownian diffusion, and gravity settling [12].

A. Molecular Diffusion

Molecular diffusion has been suggested as the most dominant mechanism for wax deposition on the internal surface of the pipeline [10]. For the crude oil flow inside the pipeline, the flow
will be either laminar or turbulent, but adjacent to the wall a thin laminar sub-layer forms, associated with the shear stress acting on the internal surface of the pipeline [13]. In a subsea pipeline, where the pipe wall is cooled below the Wax Appearance Temperature, there is a temperature gradient across this laminar sub-layer. Under this condition, wax crystallizes and precipitates out of the crude oil. A concentration gradient of dissolved wax is established, since wax solubility decreases with temperature, and this leads to transport of dissolved material toward the wall by molecular diffusion [11]. The rate of molecular diffusion can be estimated by Fick's law:

dmm/dt = ρd × Dm × A × (dC/dr)    (1)

where mm is the mass of deposited wax, ρd is the density of the solid wax, Dm is the diffusion coefficient of liquid wax in oil, A is the surface area over which deposition occurs, and dC/dr is the radial concentration gradient [11].
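As a quick numerical illustration of Eq. (1), the sketch below evaluates the deposition rate for one set of made-up parameter values; the paper reports no numbers for ρd, Dm, A or dC/dr, so every figure here is an assumption chosen only to show the calculation.

```python
# Illustrative evaluation of the Fick's-law deposition rate of Eq. (1):
#   dm_m/dt = rho_d * D_m * A * (dC/dr)
# All parameter values below are hypothetical placeholders.

rho_d = 900.0    # density of the solid wax, kg/m^3 (assumed)
D_m   = 1.0e-9   # diffusion coefficient of liquid wax in oil, m^2/s (assumed)
A     = 0.5      # surface area over which deposition occurs, m^2 (assumed)
dC_dr = 40.0     # radial gradient of dissolved-wax mass fraction, 1/m (assumed)

dm_dt = rho_d * D_m * A * dC_dr  # mass deposition rate, kg/s
print(f"estimated deposition rate: {dm_dt:.2e} kg/s")  # -> 1.80e-05 kg/s
```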
B. Soret Diffusion

Soret diffusion is thermal diffusion: mass separation caused by the existence of a temperature gradient within the pipeline [14]. There are arguments among researchers about the contribution of Soret diffusion as a wax deposition mechanism. Some researchers, such as D. Merino-Garcia, M. Margarone, and S. Correra, have stated that the effect of Soret diffusion on wax deposition is negligible [15]. However, R. Banki, H. Hoteit, and A. Firoozabadi have argued that stating diffusion as a combination of molecular and thermal diffusion permits the wax deposition to be modeled ideally [16].

C. Shear Dispersion

The crude oil flowing inside the pipeline contains small solid waxy particles that move with the mean speed and direction of the crude oil. However, the shearing effect of the fluid near the pipe wall induces a lateral motion of the particles that is known as shear dispersion. This results in transport of wax particles to the internal surface of the pipeline [10]. This is supported by researchers such as A. Fasano, L. Fusi, and S. Correra, who claim that shear dispersion is dominant under certain conditions [17], [18]. On the other hand, some researchers, such as Azevedo and Teixeira, state that shear dispersion makes zero contribution to wax deposition, since experimental results show no deposition of wax under conditions of zero heat flux. However, shear dispersion can still contribute as a wax removal mechanism, whereby wax particles are transported away from the pipe wall [13].

D. Brownian Diffusion

This mechanism occurs when small solid crystals suspended in the crude oil collide with thermally agitated oil molecules, which leads to small random Brownian movements of the suspended particles. In the presence of a concentration gradient of these particles, this Brownian motion leads to a net transport of the solid waxy crystals toward the pipe wall, whose mathematical description is similar to that of diffusion [11]. There is an argument among researchers about Brownian diffusion as a mechanism for wax deposition. A. Majeed, B. Bringedal, and S. Overa dismiss Brownian diffusion as a relevant mechanism for wax deposition on the justification that the Brownian diffusion flux will be directed away from the wall, where the solid concentration would be highest [19]. However, Azevedo and Teixeira have a different view: if the wax crystals are trapped in the immobile solid layer at the wall, the concentration of solid crystals in the liquid at the wall is approximately zero, allowing for Brownian diffusion toward the wall [13]. Therefore, it can be concluded that Brownian diffusion remains one of the possible mechanisms for wax deposition.

E. Gravity Settling

Gravity settling becomes a possible wax deposition mechanism because the precipitated solid waxy crystals are denser than the crude oil; the solid waxy crystals would therefore settle and deposit at the bottom of the pipe [11]. However, based on experimental results, the wax deposition from horizontal and vertical flow is similar, so gravity settling has been classified as insignificant in contributing to wax deposition [11], [20]. Some mathematical studies propose that the gravity settling effect might be eliminated by shear dispersion, which re-disperses settled solid waxy crystals in the flow.

IV. EFFECT OF TEMPERATURE ON WAX DEPOSITION

Temperature is one of the parameters that contribute a significant effect to wax deposition. Wax deposition is usually characterized by three temperatures: the cloud-point or wax appearance temperature (WAT), the pour-point, and the melting-point temperatures. The cloud point is the temperature at which the first precipitate or crystal of solute is formed. The pour-point temperature is the lowest temperature at which the oil is mobile. The temperature at which a pure substance liquefies is its melting point [21].

A great deal of research has been done on the effect of temperature on wax deposition over the last few decades, and it can be separated into different scopes. A. S. Kasumu, S. Arumugam, and A. K. Mehrotra conducted a study on the effect of cooling rate on the wax precipitation temperature (WPT) of waxy mixtures [22]. The WPT was used in that study to define the temperature for the onset of solid formation under a constant cooling rate, which distinguishes it from the WAT. Based on the experimental results, it was found that the WPT is higher for the same crude oil at a lower cooling rate. This means that the wax starts to precipitate at a higher temperature if the temperature drop rate is lower. A few researchers have obtained results similar to those of A. S. Kasumu, S. Arumugam, and A. K. Mehrotra in related laboratory tests [22]–[24].

In another study, Jennings and Weispfennig performed a laboratory test on the effect of the temperature differential (∆T) between the crude oil and a cold surface, and found that the total deposition increases when ∆T increases. The increase in total deposition is because of the increase in the
thermodynamic driving force for deposition. In the experiment, Jennings and Weispfennig used two different sizes of cold fingers to test the effect of ∆T on the wax deposition. Figure 1 shows the result from the experiment, which indicates an increase of total deposit density with increasing ∆T at fixed shear rate and running time [25].

Fig. 1. The effect of temperature differential between crude oil and cold surface on the wax deposition [25]

At fixed shear stress, a higher ∆T will generate a softer deposit. The result from the experiment is similar to the results obtained in an experiment performed by Lee-Tuffnell, where the overall weight of the deposit increased and became "mechanically easier to remove" as the ∆T value increased for experiments performed at equal shear rates. Besides that, based on the same experiment, different ∆T results in a different mean carbon number of the wax that is physically depositing: lower-∆T runs had higher mean carbon numbers of the physically depositing wax [26].

Fig. 2. Variation of wax content and carbon number distribution in deposit with time (Toil = WAT, Tcoolant = WAT−15 ˚C) [27]

The effect of temperature range on the aging of the wax deposit and the effect of coolant temperature on wax deposition have been studied by Q. Quan, J. Gong, W. Wang, and G. Gao [27]. A cold finger apparatus and a waxy crude oil with a wax content of 18.6%, from an oil field in China, were used to perform this experiment. In order to test the effect of temperature range on the aging of the wax deposit, the experiment was set up using two different temperature ranges. The first temperature range was Toil = WAT+5 ˚C, Tcoolant = WAT−10 ˚C (higher temperature range), while the second was Toil = WAT, Tcoolant = WAT−15 ˚C (lower temperature range).

Figures 2 and 3 show the results from the experiment, namely the variation of wax content in the deposit for the different temperature ranges. The amount of lower carbon number components in the deposit decreases, and the amount of higher carbon number components in the deposit increases, as the deposition time increases at the higher temperature range [27].

Fig. 3. Variation of wax content and carbon number distribution in deposit with time (Toil = WAT+5 ˚C, Tcoolant = WAT−10 ˚C) [27]

Under the same temperature difference, the amount of higher carbon number components in the deposit increases with an increase in the temperature range. The reason for this result is that both the radial concentration gradient and the actual cold finger surface temperature increase as the temperature range increases under the same temperature differential, which results in a larger amount of wax molecules dissolved in the bulk oil and larger carbon number components diffused into the deposit [27].

In the second experiment, the effects of coolant temperature on wax deposition were investigated using the same apparatus and crude oil. All the parameters from the first experiment were maintained except the coolant temperature, which served as the manipulated parameter. At constant bulk oil temperature, the wax content in the deposit increases with an increase in the coolant temperature. With a decrease in the coolant temperature under the same bulk oil temperature, the amount of gel oils in the deposit increases while the amount of higher carbon number components in the deposit decreases. This is due to the increase of the precipitating ability and the decrease of the dissolving ability of the carbon number components in the crude oil [27]. Therefore, based on the review of this research and the experimental tests, it can be
concluded that temperature is one of the most crucial parameters and has a significant effect on wax deposition.

V. EFFECT OF SHEAR ON WAX DEPOSITION

Shear plays an important role in the wax deposition and removal mechanisms. Jennings and Weispfennig studied the effect of shear on the performance of wax inhibitors [25], [28]. In that research, three different stirring speeds, namely 500, 750 and 1000 rpm, were used to evaluate the effect of shear on the performance of different wax inhibitors. For untreated crude oil, it was found that the total weight of the solid deposit decreases with increasing shear as the speed increases from 500 to 1000 rpm. The results are plotted in Figure 4 below.

Fig. 4. Effect of shear on the amount of total deposition for untreated crude oil [28]

Also for untreated crude oil, the amount of entrained crude oil contained within the deposits decreases, but the amount of wax contained within the deposits increases, with an increase of the shear forces, as shown in Figure 5.

Fig. 5. Effect of shear on wax content of deposits for untreated crude oil [28]

In Figure 6, the crude oil treated with different wax inhibitors showed an overall trend of increasing inhibition with increased shear rate/stirring speed. In addition to measuring the actual amount of total deposit or the amount of deposited wax to gauge inhibitor performance, indications of inhibitor performance can often be gained from visual assessment of the deposits. The amount of surface area bare of deposit increased with increasing stirring speed in almost all of the tests with the different inhibitors.

Fig. 6. Effect of shear on inhibitor performance for C35+ wax inhibition in crude oil [28]

Based on the experimental evidence, Zougari in his paper agreed with the finding that higher shear leads to a decrease in solid deposition on the wall. In addition, the deposit formed at higher shear tends to be mechanically harder. Figure 7 shows the result: the effect of shear on the total weight of the solid deposit, with the crude oil and wall temperatures fixed at 35 ˚C and 28 ˚C, respectively [29].

Fig. 7. Solid deposit buildup as a function of shear [29]

Similar research has also been conducted by N. Ridzuan, F. Adam, and Z. Yaacob, who studied the effects of two factors on the deposition process: shear rate and different types of inhibitors. 10 mL quantities of four different types of wax inhibitors (cocamide diethanolamine (CDEA), diethanolamine (DEA), poly(ethylene-co-vinyl acetate) (EVA) and poly(maleic anhydride-alt-1-octadecene) (MEA)) were used in the experiments. The effect of these inhibitors was examined at different shear rates, namely 200, 400 and 600 rpm. The percent inhibition of each wax inhibitor increases with the increase of shear rate from 200 to 400 rpm. In this case, EVA shows the strongest effect in inhibiting wax formation, with about a 33.33% reduction of the wax deposit when the shear increases from 200 to 400 rpm. However, at 600 rpm the wax inhibitors
underperformed, with the volume of deposit obtained increasing beyond that of the control sample [30].

Based on the research conducted by Jennings and Weispfennig and by N. Ridzuan, F. Adam, and Z. Yaacob, and the case study conducted by Zougari, it can be concluded that shear plays an important role in the wax deposition and removal mechanisms, as the total solid deposition decreases with an increase in shear for both treated and untreated crude oil [28]–[30].

VI. CONCLUSION

An understanding of the wax crystallization and deposition mechanisms must be developed in order to design the correct mitigation and removal techniques to reduce or eliminate the paraffin wax problem. Molecular diffusion is accepted as the dominant mechanism of wax deposition; however, the contribution of the other mechanisms cannot be dismissed, since a combination of the mechanisms permits the wax deposition to be modeled ideally. The effects of temperature and shear are also important to understand, since these parameters are crucial and have a significant effect on wax deposition.

REFERENCES
[1] A. K. Norland and F. M. Kelland, "Pour point depressant development through experimental design," 2012.
[2] L. Dong, H. Xie, and F. Zhang, "Chemical control techniques for the paraffin and asphaltene deposition," SPE Int. Symp. Oilfield Chem., no. 65380, pp. 1–11, 2001.
[3] A. Golchha and P. Stead, "Study and analysis of cold finger test for effective selection of paraffin inhibitor," NACE Int. Corrosion 2015 Conf. Expo, no. 244, pp. 1–14, 2015.
[4] M. I. Zougari and T. Sopkow, "Introduction to crude oil wax crystallization kinetics: Process modeling," Ind. Eng. Chem. Res., vol. 46, pp. 1360–1368, 2007.
[5] J. W. Mullin, Crystallization. Elsevier, 2001.
[6] A. S. Myerson, Handbook of Industrial Crystallization, 2nd ed. Elsevier, 2002.
[7] T. Ozawa, "Kinetics of non-isothermal crystallization," Polymer, vol. 12, no. 3, pp. 150–158, Mar. 1971.
[8] J. B. Dobbs, "A unique method of paraffin control in production operations," pp. 487–492.
[9] E. B. Hunt, "Laboratory study of paraffin deposition," J. Pet. Technol., vol. 14, no. 11, pp. 1259–1269, Apr. 2013.
[10] P. A. Bern, V. R. Withers, and R. J. R. Cairns, "Wax deposition in crude oil pipelines," Eur. Offshore Pet. Conf. Exhib., p. 571, 1980.
[11] E. D. Burger, T. K. Perkins, and J. H. Striegler, "Studies of wax deposition in the Trans Alaska Pipeline," J. Pet. Technol., vol. 33, no. 6, pp. 1075–1086, 1981.
[12] A. Aiyejina, D. P. Chakrabarti, A. Pilgrim, and M. K. S. Sastry, "Wax formation in oil pipelines: A critical review," Int. J. Multiph. Flow, vol. 37, no. 7, pp. 671–694, 2011.
[13] L. F. A. Azevedo and A. M. Teixeira, "A critical review of the modeling of wax deposition mechanisms," Pet. Sci. Technol., vol. 21, no. 3–4, pp. 393–408, Jan. 2003.
[14] C. K. Ewkeribe, Quiescent Gelation of Waxy Crudes and Restart of Shut-in Subsea Pipelines. University of Oklahoma, 2008.
[15] D. Merino-Garcia, M. Margarone, and S. Correra, "Kinetics of waxy gel formation from batch experiments," Energy & Fuels, vol. 21, no. 3, pp. 1287–1295, May 2007.
[16] R. Banki, H. Hoteit, and A. Firoozabadi, "Mathematical formulation and numerical modeling of wax deposition in pipelines from enthalpy–porosity approach and irreversible thermodynamics," Int. J. Heat Mass Transf., vol. 51, no. 13–14, pp. 3387–3398, Jul. 2008.
[17] L. Fusi, "On the stationary flow of a waxy crude oil with deposition mechanisms," Nonlinear Anal. Theory Methods Appl., vol. 53, no. 3–4, pp. 507–526, May 2003.
[18] A. Fasano, L. Fusi, and S. Correra, "Mathematical models for waxy crude oils," Meccanica, vol. 39, no. 5, pp. 441–482, Oct. 2004.
[19] A. Majeed, B. Bringedal, and S. Overa, "Model calculates wax deposition for N. Sea oils," vol. 88, no. 25, pp. 63–69, Jan. 1990.
[20] I. Gjermundsen, "State of the art: Wax precipitation, deposition and aging in flowing hydrocarbon systems," Intern. Hydro report, Porsgrunn, 2006.
[21] A. Sadeghazad and R. Christiansen, "The prediction of cloud point temperature: In wax deposition," SPE Asia Pacific Oil …, 2000.
[22] A. S. Kasumu, S. Arumugam, and A. K. Mehrotra, "Effect of cooling rate on the wax precipitation temperature of 'waxy' mixtures," Fuel, vol. 103, pp. 1144–1147, 2013.
[23] Z. Jiang, J. Hutchinson, and C. Imrie, "Measurement of the wax appearance temperatures of crude oils by temperature modulated differential scanning calorimetry," Fuel, vol. 80, pp. 367–371, 2001.
[24] K. Paso, H. Kallevik, and J. Sjoblom, "Measurement of wax appearance temperature using near-infrared (NIR) scattering," Energy Fuels, vol. 23, pp. 4988–4994, 2009.
[25] D. W. Jennings and K. Weispfennig, "Effects of shear and temperature on wax deposition: Coldfinger investigation with a Gulf of Mexico crude oil," Energy Fuels, vol. 19, no. 4, pp. 1376–1386, 2005.
[26] Lee-Tuffnell, "Laboratory study of the effects on wax deposition of shear, temperature and live end addition to dead crude oils," presented at the Symp. on Controlling Hydrates, Waxes and Asphaltenes, Aberdeen, Scotland, 1996.
[27] Q. Quan, J. Gong, W. Wang, and G. Gao, "Study on the aging and critical carbon number of wax deposition with temperature for crude oils," J. Pet. Sci. Eng., vol. 130, pp. 1–5, 2015.
[28] D. W. Jennings and K. Weispfennig, "Effect of shear on the performance of paraffin inhibitors: Coldfinger investigation with Gulf of Mexico crude oils," Energy Fuels, vol. 20, no. 6, pp. 2457–2464, 2006.
[29] M. I. Zougari, "Shear driven crude oil wax deposition evaluation," J. Pet. Sci. Eng., vol. 70, no. 1–2, pp. 28–34, 2010.
[30] N. Ridzuan, F. Adam, and Z. Yaacob, "Effects of shear rate and inhibitors on wax deposition of Malaysian crude oil," Orient. J. Chem., vol. 31, pp. 1–5, 2015.
Countercyclical Buffer of Basel III and Cyclical Behavior of Palestinian Banks' Capital Resource

Ahmed N. K. Alfarra, Hui Xiaofeng, Ehsan Chitsaz and Jaleel Ahmed
School of Management
Harbin Institute of Technology
Harbin, 150001, China
ab_nouraldeen@hotmail.com

Abstract—One of the concerns of the Basel Committee on Banking Supervision is the cyclical behavior of banks, which has been addressed in its new Basel III framework. The 'countercyclical buffer' particularly targets the mitigation of the allegedly harmful behavior of bank capital. Mainstream discourses on the subject usually assess the pro-cyclical behavior of banks' surplus capital; this paper examines mostly annual data, with GDP as a proxy for the business cycle. Using the collected annual data, this paper identifies a defining pro-cyclical behavior of Palestinian banks' capital.

Keywords—Basel III, Business cycle, Capital buffer, Autoregressive Distributed Lag Model (ADL), Countercyclical buffer.

I. INTRODUCTION

In the socioeconomic context, banks play a very significant role, since the banking sector is considered one of the main elements for strengthening confidence in the state's policies and caring for economic interests. There is an ever-developing literature on the pro-cyclical impact of banking regulation. For instance, the Basel II agreement, applied in 2007, has supported discussion on the cyclicality of regulation and the probability that regulation amplifies business cycles. The financial crisis of 2008 was hardly the first in history [1-3]. For this reason the Basel Committee on Banking Supervision (BCBS) revised the regulatory framework in Basel III [4]. The International Monetary Fund (IMF), the Bank for International Settlements (BIS), and the Group of Twenty (G-20) support this method and made an instruction proposal on 20 July 2011 so as to provide a probable legal framework for the states that follow Basel III [3-6]. The framework should be comprehensive in 2018 at the latest [7-9]. The resolution that the committee suggests is dual: a capital conservation buffer, which can be accessed under certain conditions, and the outline of a 'countercyclical buffer' [10, 11]. The latter proposes to guarantee that banking sector capital requirements "take account of the macro-financial atmosphere in which banks function" [4, 12]. The countercyclical buffer will be applied by the state jurisdictions [13]. The maximum size of that buffer grows stepwise during the application stage and will finally be 2.5%. It will be interesting to observe the international uniformity of this approach [6, 14]. In addition to excessive on- and off-budget leverage, the quality of the capital base and inadequate liquidity buffers, the Basel Committee is addressing the pro-cyclical influence [4]. It tries to resolve the regulatory paradox that a minimum capital requirement is of little use for mitigating risks when there are sanctions if the requirement is not met [10, 12]. Stringent minimum capital requirements can in turn upset the economy further (through lower bank lending), which could open up more difficulties for the banks—a vicious circle with instability of the banking system as a consequence [4, 15]. Basel II connected capital requirements with management risks, and those risks are in turn related to the business cycle. In an economic depression, loans are more likely to be downgraded (i.e., creating at least mark-to-market losses) and are more likely to end in nonpayment, with a drop in recovery rates. Important banks might involuntarily stagnate their loan portfolios in a stagnation [16], hence the negative influence on the economy. According to the Basel III approach [4, 6], the risk-weighted assets (RWA) are calculated as:

RWA = K × 12.5 × E    (1)

where E is the exposure at default (EAD), 12.5 is the reciprocal of the minimum capital adequacy of 8%, and K is the capital requirement [17, 18]. The latter, in particular, is calculated from the loss given default (LGD) and the probability of default (PD), and is thus susceptible to cyclicality. It is calculated as:

K = [LGD × N((1−R)^(−0.5) × G(PD) + (R/(1−R))^(0.5) × G(0.999)) − PD × LGD] × (1 − 1.5 × b(PD))^(−1) × (1 + (M − 2.5) × b(PD))    (2)

R = 0.12 × (1 − EXP(−50 × PD))/(1 − EXP(−50)) + 0.24 × [1 − (1 − EXP(−50 × PD))/(1 − EXP(−50))]    (3)

b = (0.11852 − 0.05478 × ln(PD))^2    (4)

with R the correlation, N the normal cumulative distribution function, M the maturity, b the smoothed (regression) maturity adjustment, and G the inverse cumulative distribution function.
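To make Eqs. (1)–(4) concrete, the following sketch evaluates R, b, K and the resulting RWA in Python, using scipy's normal distribution for N(·) and G(·). The input values for PD, LGD, M and EAD are arbitrary illustrations, not figures taken from this paper.

```python
# Sketch of the IRB risk-weight computation of Eqs. (1)-(4).
# norm.cdf plays the role of N(.) and norm.ppf the role of G(.).
from math import exp, log
from scipy.stats import norm

def risk_weighted_assets(PD, LGD, M, EAD):
    # Eq. (3): asset correlation R, interpolated between 0.12 and 0.24
    w = (1 - exp(-50 * PD)) / (1 - exp(-50))
    R = 0.12 * w + 0.24 * (1 - w)
    # Eq. (4): maturity-adjustment coefficient b(PD)
    b = (0.11852 - 0.05478 * log(PD)) ** 2
    # Eq. (2): capital requirement K per unit of exposure
    K = ((LGD * norm.cdf((1 - R) ** -0.5 * norm.ppf(PD)
                         + (R / (1 - R)) ** 0.5 * norm.ppf(0.999))
          - PD * LGD)
         * (1 + (M - 2.5) * b) / (1 - 1.5 * b))
    # Eq. (1): RWA, where 12.5 is the reciprocal of the 8% minimum
    return K * 12.5 * EAD

# Hypothetical exposure: PD = 1%, LGD = 45%, maturity 2.5 years, EAD = 1,000,000
print(round(risk_weighted_assets(0.01, 0.45, 2.5, 1_000_000)))  # roughly 923,000
```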
The banking organization itself shows pro-cyclicality [2, 19], and time-varying capital requirements may lead to a mitigation of this difficulty if banks are permitted to draw on their buffers when required [18]. In this paper, we answer the following question: is cyclical behavior an appropriate indicator for the Palestinian banking sector? We answer this question by explaining the influence of macroeconomic shocks on the capital and reserves of Palestinian banks from 1996 until the end of 2014, as reported in the Palestinian Monetary
Authority's annual reports. In other words, we use an assessment of the economic capital of the banks. Several empirical business-cycle studies appraise exactly this surplus capital buffer—the buffer that exceeds the minimum regulatory requirement [13, 15]. The literature differentiates between long- and shortsighted banks [2]. While shortsighted banks will not build up supplementary capital reserves during an improvement of the economy so as to take advantage of business opportunities, forward-looking banks will do so [15, 20]. Beyond the Basel III structure, some local methods deal with the cyclicality, most conspicuously in Spain, which introduced effective provisioning in the year 2000 [15, 21]. Contrary to capital buffers aimed at unanticipated losses, provisions have to safeguard banks against anticipated losses. Provisioning builds on the credit stock and its deviation.

We supplement the literature in two ways: first, we add another piece of empirical evidence about the pro-cyclical behavior of banks. Second, we examine the data at a higher regularity with the Autoregressive Distributed Lag (ADL) method. This lets us evaluate the direct influence of shocks and the timing of capital modifications. Moreover, our dataset contains the most recent financial market crisis. In our combined data we perceive a pro-cyclical behavior of the Palestinian banking sector: banks increased their capital base in the economic downturn and the bad conditions in the Palestinian territories, especially after the Israeli aggression on Gaza in summer 2014. The banks also did this in response to the Apartheid Wall in the West Bank and during the siege of the Gaza Strip since 2007.

II. DATA AND METHODOLOGY

Most empirical studies that examine the behavior of capital reserves as a function of the business cycle use GDP as a proxy for the cycle. The problem with GDP is that it is commonly only compiled per quarter, and in fact most studies even use only the yearly aggregate. This has, among other reasons, to do with the availability of budget data. Palestine offers a yearly budget. The data represent all financial institutions in Palestine on a yearly basis.

The Autoregressive Distributed Lag (ADL) model is appropriate when the dependent variable depends on its own previous value (yt−1), the current value of an independent variable (xt), and that variable's previous value (xt−1). A simple ADL(1, 1) can be described as:

yt = m + α1 yt−1 + β0 xt + β1 xt−1 + ut    (5)

where yt is the stationary dependent variable, xt is the stationary independent variable, α1, β0 and β1 are parameters, and ut is an error term with zero mean and constant variance that is serially uncorrelated. ut being uncorrelated with the given values of xt allows us to use ordinary least squares (OLS) estimation in our analysis. To find the effect of capital on cyclical behavior we have estimated the following model:

GDPt = m + α1 GDPt−1 + β0 CADt + β1 CADt−1 + ut    (6)
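For readers who want to reproduce this kind of estimation, a minimal sketch of fitting Eq. (6) by OLS with statsmodels follows. The two input series are random placeholders standing in for the 1996–2014 annual GDP and capital adequacy (CAD) data, which the paper does not reproduce.

```python
# Minimal sketch of estimating the ADL(1,1) model of Eq. (6) by OLS:
#   GDP_t = m + a1*GDP_{t-1} + b0*CAD_t + b1*CAD_{t-1} + u_t
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
gdp = rng.normal(8.5, 0.25, 19)   # placeholder for (log) GDP, 1996-2014
cad = rng.normal(0.14, 0.06, 19)  # placeholder for the capital adequacy ratio

y = gdp[1:]                       # GDP_t
X = sm.add_constant(np.column_stack([
    gdp[:-1],                     # GDP_{t-1}
    cad[1:],                      # CAD_t
    cad[:-1],                     # CAD_{t-1}
]))

result = sm.OLS(y, X).fit()
print(result.summary())           # coefficients, t-statistics, R-squared, F-statistic
```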
III. RESULTS

We have applied the Autoregressive Distributed Lag (ADL) model to find the relationship between the cyclical behavior of the economy and the change in reserve capital. We have used Gross Domestic Product (GDP) as a proxy to measure the cyclical behavior of the Palestinian economy. Descriptive statistics of the variables are presented in Table I. The mean values of GDP, CAD and CPI are 8.5030, 0.1442 and 3.8972, respectively. The standard deviations of these variables suggest a small variation in the data set. As we have taken annual data from 1996 to 2014, we can observe a significant difference between the maximum and minimum values of the variables.

TABLE I. DESCRIPTIVE STATISTICS

              GDP      CAD      CPI
Mean        8.5030   0.1442   3.8972
Median      8.4641   0.1485   3.3250
Maximum     8.9196   0.2440   9.8300
Minimum     8.1138   0.0640   1.1800
Std. Dev.   0.2562   0.0647   2.1440

The Autoregressive Distributed Lag model requires that the variables used in the model be stationary. The variables included in our model have time series properties, and we have observed that the mean of the variables of interest is not constant over time; in other words, they are not stationary. To make them stationary we have taken their log difference. The behavior of GDP before and after taking the log difference of the data can be observed in Figure 1.

[Figure 1 shows two panels over 1996–2014: the GDP series (top) and its log difference, DLGDP (bottom).]
Figure 1. Behavior of GDP and CPI over the time
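The log-difference transform the authors describe is a one-liner; the sketch below applies it to a hypothetical series of the same magnitude as the plotted GDP data, since the underlying numbers are not tabulated in the paper.

```python
# Log-difference transform used to render GDP stationary:
#   DLGDP_t = ln(GDP_t) - ln(GDP_{t-1})
import numpy as np

gdp = np.array([8.11, 8.20, 8.35, 8.46, 8.52, 8.66, 8.92])  # hypothetical series
dlgdp = np.diff(np.log(gdp))  # approximately the period-to-period growth rate
print(dlgdp)                  # small values around +/- 0.02, as in Figure 1
```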
The results of the ADL are reported in Table II, which contains two panels (A, B). Panel A reports the results of the regression where GDP is used
as the dependent variable. The results of panel A suggest that GDP depends positively and significantly on its own lag. Moreover, the capital reserve ratio (CAD) also has a positive and significant effect on the dependent variable (GDP). According to this regression, however, the lagged capital ratio CAD(−1) is not significant. The R-squared of panel A is 95.11%. The F-statistic is also significant, which supports the significance of the model.

TABLE II. REGRESSION RESULTS

A. GDP as dependent variable
Variable    Coefficient   Std. Error   t-Statistic    Prob.
C             4.3418        1.3625       3.1864      0.0066
GDP(-1)       0.4568        0.1707       2.6754      0.0181
CAD           1.9721        0.6899       2.8582      0.0126
CAD(-1)       0.1267        0.9741       0.1300      0.8983
R-squared 0.9511; F-statistic 90.944; Prob(F-statistic) 0.0000

B. CPI as dependent variable
Variable    Coefficient   Std. Error   t-Statistic    Prob.
C             5.6112        1.8745       2.9933      0.0104
GDP(-1)      -0.1224        0.2581      -0.4744      0.6431
CAD           17.342        25.154       0.6894      0.5027
CAD(-1)      -27.983        26.223      -1.0671      0.3053
R-squared 0.1411; F-statistic 0.7123; Prob(F-statistic) 0.5618

In panel B we report the results of the regression where the CPI is used as the dependent variable. We are not able to interpret the results of panel B, as no variable is significant in this regression. The R-squared of this regression is very low. Moreover, the F-statistic also suggests that this model is not significant.

IV. CONCLUSION

The econometric results suggest the presence of clearly pro-cyclical behavior of Palestinian bank capital from 1996 through to the end of 2014. As the Palestinian economy is not the same as other developing economies, we have found somewhat different results from Stolz and Wedow (2005). In this study, we have not used the conventional variables that relate specifically to the banking sector. To find the impact of bank capitalization on the macroeconomic conditions of the Palestinian economy we have used Gross Domestic Product (GDP) as our dependent variable.

Most of the time, the countercyclical buffer is used as an instrument to keep the business cycle in line. Basel III presents new concepts that allow for more flexibility of the capital buffer and sets a rule that every national jurisdiction has to determine its business cycle according to the state of its economy. As the conditions of the Palestinian economy do not remain stable over time, the banking authorities have decided not to reduce the capital requirement, so as not to be held back by the sanctions of the authorities and the capital market. We can observe the same phenomenon in Basel III, where it allows for more flexibility of the capital buffer. The results of this study have shown that an increase in the capital reserves of the Palestinian banking sector cannot affect the GDP of the economy. Moreover, this study answers the research question: cyclical behavior is an appropriate indicator for Palestinian banking. Therefore, we propose the new concept (countercyclical buffer) for the Palestine Monetary Authority (PMA) to approve, in order to develop credit risk management systems for Palestinian banks.

ACKNOWLEDGMENT

This work is supported by the National Natural Science Foundation of China under grant number 71173060.

REFERENCES
[1] A. F. Rossignolo, M. D. Fethi, and M. Shaban, "Market crises and Basel capital requirements: Could Basel III have been different? Evidence from Portugal, Ireland, Greece and Spain (PIGS)," Journal of Banking & Finance, vol. 37, pp. 1323–1339, May 2013.
[2] A. Demirguc-Kunt, E. Detragiache, and O. Merrouche, "Bank capital: Lessons from the financial crisis," Journal of Money, Credit and Banking, vol. 45, pp. 1147–1164, Sep. 2013.
[3] L. Morales and B. Andreosso-O'Callaghan, "The global financial crisis: World market or regional contagion effects?," International Review of Economics & Finance, vol. 29, pp. 108–131, Jan. 2014.
[4] BCBS, "Basel III: A global regulatory framework for more resilient banks and banking systems," Bank for International Settlements, Dec. 2010, rev. 2011.
[5] J. Noh, "Basel III counterparty risk and credit value adjustment: Impact of the wrong-way risk," Global Economic Review, vol. 42, pp. 346–361, Dec. 2013.
[6] BCBS, "Progress report on Basel III implementation," Bank for International Settlements, Basel Committee on Banking Supervision, Apr. 2012.
[7] I. Tamas, "Basel III: Rethinking liquidity and leverage," Ekonomska Istrazivanja–Economic Research, pp. 415–432, 2013.
[8] R. Repullo and J. Suarez, "The procyclical effects of bank capital regulation," Review of Financial Studies, vol. 26, pp. 452–490, Feb. 2013.
[9] I. J. Chen, "Financial crisis and the dynamics of corporate governance: Evidence from Taiwan's listed firms," International Review of Economics & Finance, vol. 32, pp. 3–28, Jul. 2014.
[10] G. van Vuuren, "Basel III countercyclical capital rules: Implications for South Africa," South African Journal of Economic and Management Sciences, vol. 15, pp. 309–324, 2012.
[11] B. De Waal, M. A. Petersen, L. N. P. Hlatshwayo, and J. Mukuddem-Petersen, "A note on Basel III and liquidity," Applied Economics Letters, vol. 20, pp. 777–780, May 2013.
[12] I. Drumond, "Bank capital requirements, business cycle fluctuations and the Basel Accords: A synthesis," Journal of Economic Surveys, vol. 23, pp. 798–830, Dec. 2009.
[13] N. Yoshino and T. Hirano, "Pro-cyclicality of the Basel capital requirement ratio and its impact on banks," Asian Economic Papers, vol. 10, pp. 22–36, 2011.
[14] P. Antao and A. Lacerda, "Capital requirements under the credit risk-based framework," Journal of Banking & Finance, vol. 35, pp. 1380–1390, Jun. 2011.
[15] S. Grosse and E. Schumann, "Cyclical behavior of German banks' capital resources and the countercyclical buffer of Basel III," European Journal of Political Economy, vol. 34, pp. S40–S44, Jun. 2014.
[16] N. Gatzert and H. Wesker, "A comparative assessment of Basel II/III and Solvency II," Geneva Papers on Risk and Insurance—Issues and Practice, vol. 37, pp. 539–570, Jul. 2012.
[17] P. R. Agenor, K. Alper, and L. P. da Silva, "Capital regulation, monetary policy, and financial stability," International Journal of Central Banking, vol. 9, pp. 193–238, Sep. 2013.
[18] A. Guidara, V. S. Lai, L. Soumare, and F. T. Tchana, "Banks' capital buffer, risk and performance in the Canadian banking system: Impact of business cycles and regulatory changes," Journal of Banking & Finance, vol. 37, pp. 3373–3387, Sep. 2013.
[19] I. Angeloni and E. Faia, "Capital regulation and monetary policy with fragile banks," Journal of Monetary Economics, vol. 60, pp. 311–324, Apr. 2013.
[20] J. C. A. Teixeira, F. J. F. Silva, A. V. Fernandes, and A. C. G. Alves, "Banks' capital, regulation and the financial crisis," North American Journal of Economics and Finance, vol. 28, pp. 33–58, Apr. 2014.
[21] M. S. Ebrahim, S. Girma, M. E. Shah, and J. Williams, "Dynamic capital structure and political patronage: The case of Malaysia," International Review of Financial Analysis, vol. 31, pp. 117–128, Jan. 2014.
Author Index

A
Ahmed, Jaleel ..... 308
Ahn, Haeil ..... 58
Ahn, Khi-Jung ..... 298
Al Sharqawi, Mohammed ..... 238
Al-Aomar, Raid ..... 63
Alfarra, Ahmed N. K. ..... 308
Alhimairee, Mei ..... 238
Ali, M. O. ..... 133, 138, 143, 148, 216, 220, 229
Aljeneibi, Saeed ..... 63
Almazroui, Shereen ..... 63
Al-Salamah, Muhammad ..... 198
Amjad, Zubair ..... 298

B
Bassig, Karla Ishra S. ..... 262
Bhat, Sanjay P. ..... 117
Bhatia, Anil ..... 117
Bongo, Miriam F. ..... 1
Boonsothonsatit, Kanda ..... 252
Bougaret, Sophie ..... 224

C
Cabacungan, P. M. ..... 130
Chan, Shiau Wei ..... 53
Chang, Hyun-Soo ..... 99
Chang, Shu-Hao ..... 160
Chellaboina, Vijaysekhar ..... 107, 112
Chen, Jianguo ..... 94
Chen, Yi Yang ..... 39
Chen, Ying ..... 86
Chen, Yu Ting ..... 39
Chitsaz, Ehsan ..... 308
Chun, Ingeol ..... 153

D
Dela Cueva, Maida A. ..... 288
Deniaud, Ioana Filipas ..... 224
Dewi, Dian Retno Sari ..... 165
Doulatabadi, M. ..... 72
Dweekat, Abdallah Jamal ..... 270

E
El-Sayegh, Sameh M. ..... 234, 238

F
Faid, M. S. ..... 133, 138, 143, 148, 216, 220, 229
Fan, Chin-Yuan ..... 160
Fernandez, Noime B. ..... 48

G
Gao, Jie ..... 68
Gourc, Didier ..... 224
Granada, S. ..... 130
Günthner, Willibald A. ..... 15
Gutierrez, Alexander V. ..... 126
Gutierrez, Ma. Teodora E. ..... 175

H
Hamidi, Z. S. ..... 133, 138, 143, 148, 216, 220, 229
Harun, Azwan ..... 303
Hasan, Norashikin ..... 11
Hasan, Oskar Hasdinor ..... 11
Hassan, Zulkafli ..... 303
Higuchi, Yuki ..... 283
Hui, Xiaofeng ..... 308
Husien, Nurulhazwani ..... 133, 143, 148, 216, 220, 229
Husin, Hazlina ..... 303
Hussien, NurulHazwani ..... 138

I
Idris, Nurhidayu ..... 170
Inohara, Takehiro ..... 6
Ishii, Yasuo ..... 278

J
Jain, Arihant ..... 107, 112
Jaluvka, Michal ..... 34
Jeon, Jaeho ..... 153
Johanyák, Zsolt Csaba ..... 243
Joochim, Orapadee ..... 211, 252

K
Kaber, David B. ..... 43
Kamariotou, Maria ..... 247
Kang, Rui ..... 82
Kang, Sungjoo ..... 153
Kashif, Mustafa ..... 238
Kim, Hyun-Jin ..... 99
Kipouridis, Orthodoxos ..... 15
Kitsios, Fotis ..... 247
Kong, Xianda ..... 188
Kotyrba, Martin ..... 34
Kuan, Lee Choo ..... 103
Kwon, He-Boong ..... 122

L
Lah, Nik Khairul Irfan Nik Ab ..... 303
Lee, In Jung ..... 48
Lee, Jooh ..... 122
Li, Kai Way ..... 39
Li, Ruiying ..... 82
Liang, Dapeng ..... 293
Lim, Khan Horng ..... 53
Limpiyakorn, Yachai ..... 20, 24
Liu, Tzu-Hsin ..... 91
Lu, Qiang ..... 68

M
Ma, Wenqi ..... 43
Marmier, François ..... 224
Marquez, John Carlo F. ..... 48
Masuda, Shiro ..... 188
Matsumoto, Mitsuho ..... 77
Mauricio, Arsie P. ..... 288
Meekaew, Jumnong ..... 211
Meilizar ..... 256
Moin, Noor Hasnah ..... 183
Monstein, C. ..... 133, 138, 148, 220, 229
Moon, Sooyoung ..... 153
Morad, Mohamed ..... 234
Mujir, Mohd Shaleh ..... 11

N
Nesti, Lisa ..... 256
Nikoula, Nilli ..... 238
Nyhuis, Peter ..... 193

O
Ocampo, Lanndon A. ..... 1
Omar, Abdul Razak ..... 53
Omar, Mohd ..... 183
Omar, Siti Sarah ..... 53
Oppus, C. ..... 130
Osman, M. R. ..... 216
Oyanagi, Tatsuya ..... 278

P
Park, Jinwoo ..... 270
Payawal, John Michael M. ..... 288
Peng, Shirui ..... 94
Perez, T. ..... 130
Preto, R. ..... 130
Prompila, Ekarat ..... 24

Q
Quevedo, Venusmar C. ..... 48, 288

R
Rahaju, Dini Endah Setyo ..... 165
Rajendran, Rasvini ..... 178
Ramlan, R. ..... 53
Ranny ..... 29
Ronquillo, Darwin Joseph B. ..... 48
Röschinger, Marcus ..... 15

S
Sa'adon, Ahmad Azri ..... 103
Sabri, S. N. U. ..... 133, 138, 143, 148, 216, 220, 229
Shang, Chunling ..... 68
Shariff, N. N. M. ..... 133, 138, 143, 148, 216, 220, 229
Shariff, S. Sarifah Radiah ..... 183
Sheu, Shey-Huei ..... 91
Shon, Tae-shik ..... 99
Silverio, Hazeline A. ..... 262
Song, Minseok ..... 206
Song, Wang-Cheol ..... 298
Su, Guofeng ..... 94
Subramanian, Easwar ..... 107, 112
Sugunnasil, Prompong ..... 156
Suh, Jeong-Jun ..... 99
Suteeca, Raslapat ..... 156
Suzuki, Komei ..... 273
Suzuki, Shinji ..... 6
T
Takeyasu, Daisuke ..... 278
Takeyasu, Hiromasa ..... 283
Takeyasu, Kazuhiro ..... 273, 278, 283
Tamura, Yoshinobu ..... 77
Tang, Ning ..... 86
Tangonan, G. ..... 130
Thaha, Putranesia ..... 256
Torres, Oswaldo Jose Jimenez ..... 293
Tsai, Hsin-Nan ..... 91
Tu, Thi Bich Hong ..... 206

U
Ullmann, Georg ..... 193

V
Volna, Eva ..... 34

W
Wang, Xiao ..... 68
Wang, Zhiru ..... 94
Wattanagul, Napajorn ..... 20
Willeke, Stefan ..... 193

Y
Yamada, Shigeru ..... 77
Yamamoto, Hisashi ..... 188
Yamashita, Hirotake ..... 273
Yang, Chenxuan ..... 82
Yuan, ZengHui ..... 86
Yusof, S. M. ..... 72

Z
Zainol, N. H. ..... 133, 138, 143, 148, 216, 220, 229
Zainuddin, Zaitul Marlizawati ..... 170, 178
Zaman, Izzuddin ..... 53
Zhang, Yanbo ..... 82
Zhang, Zhe-George ..... 91