
PONTIFICAL CATHOLIC UNIVERSITY OF PARANÁ

POLYTECHNIC SCHOOL
INDUSTRIAL AND SYSTEMS ENGINEERING GRADUATE PROGRAM (PPGEPS)

MARCELO FELICIANO FILHO

FEATURE ENGINEERING FOR SOFT SENSOR IMPROVEMENT: A CASE STUDY IN FLIGHT CONTROL SYSTEMS

CURITIBA
2023
MARCELO FELICIANO FILHO

FEATURE ENGINEERING FOR SOFT SENSOR IMPROVEMENT: A CASE STUDY IN FLIGHT CONTROL SYSTEMS

Dissertation project presented to the Industrial and Systems Engineering Graduate Program, Polytechnic School, Pontifical Catholic University of Paraná, in partial fulfillment of the requirements for the Master's degree in Industrial and Systems Engineering.

Advisor: Gilberto Reynoso Meza, Ph.D.

CURITIBA
2023
Dados da Catalogação na Publicação
Pontifícia Universidade Católica do Paraná
Sistema Integrado de Bibliotecas – SIBI/PUCPR

Feliciano Filho, Marcelo
F313f   Feature engineering for soft sensor improvement: a case study in flight control systems / Marcelo Feliciano Filho; Advisor: Gilberto Reynoso Meza. – 2023.
        168 f.; il.: 30 cm

        Dissertação (mestrado) – Pontifícia Universidade Católica do Paraná, Curitiba, 2023
        Bibliografia: f. 148-155

        1. Sistemas de controle inteligente. 2. Indústria 4.0. 3. Aprendizado de máquina. 4. Revisão sistemática. 5. Engenharia de produção. I. Meza, Gilberto Reynoso. II. Pontifícia Universidade Católica do Paraná. Programa de Pós-Graduação em Engenharia de Produção e Sistemas. III. Título.
        CDD. 20. ed. – 670

Biblioteca Central
Sônia Maria Magalhães da Silva – CRB 9/1191
TERMO DE APROVAÇÃO

Marcelo Feliciano Filho


FEATURE ENGINEERING FOR SOFT SENSOR IMPROVEMENT: A CASE
STUDY IN FLIGHT CONTROL SYSTEMS.

Dissertação aprovada como requisito parcial para obtenção do grau de Mestre no Curso de Mestrado em
Engenharia de Produção e Sistemas, Programa de Pós-Graduação em Engenharia de Produção e
Sistemas, da Escola Politécnica da Pontifícia Universidade Católica do Paraná, pela seguinte banca
examinadora:

_________________________________
Presidente da Banca
Prof. Dr. Gilberto Reynoso Meza
(Orientador)

_________________________________
Prof. Dr. André Schneider de Oliveira
(Membro Externo)

_________________________________
Prof. Dr. Roberto Zanetti Freire
(Membro Interno)

Curitiba, 27 de novembro de 2023.


MARCELO FELICIANO FILHO

FEATURE ENGINEERING FOR SOFT SENSOR IMPROVEMENT: A CASE STUDY IN FLIGHT CONTROL SYSTEMS

Dissertation presented to the Industrial and Systems Engineering Graduate Program (PPGEPS), Polytechnic School, Pontifical Catholic University of Paraná, in partial fulfillment of the requirements for the Master's Degree in Industrial and Systems Engineering.

EXAMINING COMMITTEE

_____________________________________
Gilberto Reynoso Meza, Ph.D.
Advisor
(PPGEPS/PUCPR)

_____________________________________
André Schneider de Oliveira, Ph.D.
External Examiner Member
(UTFPR)

_____________________________________
Roberto Zanetti Freire, Ph.D.
External Examiner Member
(UTFPR)

Curitiba, November 27th, 2023.


ACKNOWLEDGMENTS

This study was financed in part by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and the Fundação Araucária (FAPPR), Brazil. Finance codes: 310079/2019-5-PQ2, 437105/2018-0-Univ, 51432/2018-PPP, PRONEX-042/2018.
“I think it’s very important to have a
feedback loop, where you’re constantly
thinking about what you’ve done and how
you could do it better. I think that’s the
best advice: constantly think about how
you could be doing things better and
question yourself”.

Elon Musk.
ABSTRACT

The advent of the Fourth Industrial Revolution (Industry 4.0) necessitates innovative
solutions to tackle new challenges. A pertinent example is the analysis of the Electronic Flight Control System (EFCS) benchmark introduced by Airbus at the 2020 International Federation of Automatic Control (IFAC) World Congress. This research
focuses on Oscillatory Failure Cases (OFCs), which are crucial in the structural design
of commercial aircraft. The identification of OFCs in real-time is pivotal for enhancing
cost-efficiency, energy conservation, and flight reliability. Data-driven Soft Sensors
(SS) and machine learning (ML) algorithms have emerged as effective tools for
identifying OFCs. However, their performance significantly improves with the
integration of Feature Engineering (FEn) techniques. This study aims to develop an
FEn framework that optimizes SS performance in EFCS for real-time OFC detection.
The methodology encompasses a Systematic Literature Review (SLR) driven by four
fundamental questions. The SLR process involved an extensive analysis, starting with
2,153 papers and culminating in a detailed Content Analysis (CA) of 59 selected
papers. This SLR provides insights into the current state of SS and FEn, underlying
mathematical concepts, and answers to the four critical questions. It offers a
comprehensive overview of SS applications across seven global sectors, their
correlation with Industry 4.0, and the utilization of ML and FEn in SS implementations.
The benchmark is thoroughly detailed, including Simulink™ diagrams and specific
requirements. Initial results using Support Vector Machine (SVM), Decision Tree (DT),
and Multi-Layer Perceptron (MLP) algorithms for SS implementation in the benchmark
showed that, without FEn, the accuracy rates for identifying OFCs were below 50%.
This finding underscores the necessity of incorporating FEn. The study concludes that
SS implementation augmented with FEn considerably improves real-time OFC
detection. This discovery leads to potential advancements in FEn techniques,
indicating a promising area for future research. The research also includes the
development and presentation of an FEn framework, leading to the publication of two
papers detailing the findings from the SLR and the FEn framework. Notably, applying
a Feature Reduction technique reduced the computational cost by 75% while
enhancing performance by 70%. Therefore, this research offers valuable insights into
feature engineering within soft sensors, especially in applications like EFCS, thereby
enriching the academic and scientific community's understanding of this field.

Keywords: Soft Sensors. Industry 4.0. Machine Learning. Systematic Literature Review (SLR). Electronic Flight Control System. Oscillatory Failure Cases (OFCs).
RESUMO

A chegada da Quarta Revolução Industrial (Indústria 4.0) exige soluções inovadoras para enfrentar novos desafios. Um exemplo relevante é a análise do benchmark do
Sistema de Controle de Voo Eletrônico (EFCS) introduzido pela Airbus no Congresso
Mundial da Federação Internacional de Controle Automático (IFAC) de 2020. Esta
pesquisa foca nos Casos de Falha Oscilatória (OFCs), que são cruciais no design
estrutural de aeronaves comerciais. A identificação de OFCs em tempo real é
essencial para melhorar a eficiência de custos, conservação de energia e
confiabilidade do voo. Sensores Suaves orientados por dados (SS) e algoritmos de
aprendizado de máquina (ML) surgem como ferramentas eficazes para a identificação
de OFCs. No entanto, seu desempenho melhora significativamente com a integração
de técnicas de Engenharia de Recursos (FEn). Este estudo visa desenvolver um
framework de FEn que otimize o desempenho de SS em EFCS para detecção de
OFCs em tempo real. A metodologia abrange uma Revisão Sistemática da Literatura
(SLR) conduzida por quatro questões fundamentais. O processo de SLR envolveu
uma análise extensa, começando com 2.153 artigos e culminando em uma Análise de
Conteúdo (CA) detalhada de 59 artigos selecionados. Esta SLR fornece insights sobre
o estado atual de SS e FEn, conceitos matemáticos subjacentes e respostas para as
quatro questões críticas. Ela oferece uma visão abrangente das aplicações de SS em
sete setores globais, sua correlação com a Indústria 4.0 e a utilização de ML e FEn
em implementações de SS. O benchmark é detalhado minuciosamente, incluindo
diagramas Simulink™ e requisitos específicos. Os resultados iniciais usando
algoritmos de Máquina de Vetores de Suporte (SVM), Árvore de Decisão (DT) e
Perceptron de Múltiplas Camadas (MLP) para implementação de SS no benchmark
mostraram que, sem FEn, as taxas de precisão para identificação de OFCs estavam
abaixo de 50%. Este achado sublinha a necessidade de incorporar FEn. O estudo
conclui que a implementação de SS aprimorada com FEn melhora consideravelmente
a detecção de OFCs em tempo real. Esta descoberta leva a avanços potenciais nas
técnicas de FEn, indicando uma área promissora para pesquisas futuras. A pesquisa
também inclui o desenvolvimento e apresentação de um framework de FEn, levando
à publicação de dois artigos detalhando as descobertas da SLR e do framework de
FEn. Notavelmente, a aplicação de uma técnica de Redução de Recursos reduziu o
custo computacional em 75%, enquanto melhorava o desempenho em 70%. Portanto,
esta pesquisa oferece insights valiosos sobre engenharia de recursos dentro de
sensores suaves, especialmente em aplicações como EFCS, enriquecendo assim o
entendimento da comunidade acadêmica e científica sobre este campo.

Palavras-chave: Sensores Suaves. Indústria 4.0. Aprendizado de Máquina. Revisão Sistemática da Literatura (SLR). Sistema de Controle de Voo Eletrônico. Casos de Falha Oscilatória (OFCs).
FIGURES SUMMARY

Figure 1 – The Ten Procedures for SLR ................................................................... 27


Figure 2 – Survey for paper’s initial results................................................................ 32
Figure 3 – Survey for Papers After Applying I/E Criteria ........................................... 33
Figure 4 – Survey for Papers After Applying Classification Criteria ........................... 38
Figure 5 – Number of Questions Answered by Papers ............................................. 45
Figure 6 – hierarchical tree of the ML methodologies employed in SS ...................... 47
Figure 7 – Kalman Filter Algorithm ............................................................................ 50
Figure 8 – Support Vector Machine Linear Hyperplane ............................................. 52
Figure 9 – Deep Learning Neural Networks (DLNN) ................................................. 54
Figure 10 – ANFIS topology ...................................................................................... 55
Figure 11 – Simple Decision Tree ............................................................................. 57
Figure 12 – The Schema Random Forest (RF) ......................................................... 59
Figure 13 – Genetic Algorithm Flowchart .................................................................. 60
Figure 14 – A Time Series Example Employed to the Benchmark ............................ 63
Figure 15 – soft SCG sensor ..................................................................................... 72
Figure 16 – Intensity Pressure Signal in Different Actions ......................................... 72
Figure 17 – Percentage of Questions Answered by the Papers ................................ 85
Figure 18 – Chosen Benchmark’s Mechanism .......................................................... 86
Figure 19 – Benchmark Simulink Diagram ................................................................ 88
Figure 20 – Simulink diagram of the trajectory control module .................................. 89
Figure 21 – Simulink diagram of load factor control module ...................................... 90
Figure 22 – Simulink diagram servo control simulator ............................................... 91
Figure 23 – Simulink Diagram “Real Servo” .............................................................. 92
Figure 24 – Sensor Position Process Plant Simulink Module .................................... 92
Figure 25 – Simulink diagram for turbulence simulation ............................................ 93
Figure 26 – Simulink diagram for the presentation of results .................................... 94
Figure 27 – Soft Sensor Software GUI ...................................................................... 95
Figure 28 – Feature Engineering Framework Workflow ............................................ 98
Figure 29 – OFC Identification in Benchmark Simulation Example ........................... 99
Figure 30 – Feature Engineering Block Diagram for Simulink™ Implementation .... 104
Figure 31 – Confusion Matrix results for Decision Tree. .......................................... 120
Figure 32 – Confusion Matrix results for XGBoost. ................................................. 121
Figure 33 – Confusion Matrix results for Random Forest. ....................................... 122
Figure 34 – Confusion Matrix results for Gradient Boosting. ................................... 124
Figure 35 – Framework Flowchart........................................................................... 120
Figure 36 – Graph presenting performance x features for Decision Tree. ............... 135
Figure 37 – Graph presenting performance x features for Gradient Boosting. ........ 136
Figure 38 – Graph presenting performance x features for XGBoost. ...................... 137
Figure 39 – Graph presenting performance x features for Random Forest. ............ 138
BOARDS SUMMARY

Board 1 – The Classification Criteria ......................................................................... 34


Board 2 – Applying The Classification Criteria .......................................................... 35
Board 3 – Content Analysis Of Selected And Classified Papers ............................... 38
Board 4 – The Soft Sensor Definition In Classified Papers ....................................... 47
TABLES SUMMARY

Table 1 – The Research Guideline Questions ........................................................... 28


Table 2 – The Main Research Keywords Synonyms ................................................. 29
Table 3 – The Main Research Keywords Combination ............................................. 29
Table 4 – The Research Inclusion and Exclusion Criteria ......................................... 30
Table 5 – The Main Research Keywords Combination Results................................. 31
Table 6 – Number of Selected Papers per Keyword ................................................. 32
Table 7 – Benchmark requirements proposed by IFAC ............................................. 87
Table 8 – Software scenario parameters ................................................................... 97
Table 9 – Experiment scenarios and their parameters ............................................ 100
Table 10 – Experiment Results ............................................................................... 101
Table 11 – Critical Analysis of Obtained Results ..................................................... 103
Table 12 – Classification Report for Decision Tree ................................................. 113
Table 13 – Number of features x Maximum accuracy. ............................................ 134
LIST OF ABBREVIATIONS AND ACRONYMS

ANFIS Adaptive Network-Based Fuzzy Inference System


API Application Programming Interface
ARIMAX Autoregressive Integrated Moving Average with Exogenous Inputs
CA Content Analysis
CNN Convolutional Neural Network
CNPq Conselho Nacional de Desenvolvimento Científico e Tecnológico
CPS Cyber-Physical-Systems
CQAs Critical Quality Attributes
DSA Data Science Academy
DL Deep Learning
DLNN Deep Learning Neural Network
DT Decision Tree
ECC Edge Cloud Computing
ECI Ellipsoidal Covariance Intersection
ELM Extreme Learning Machine
FCC Flight Control Computer
FDIS Failure Detection and Isolation System
FEn Feature Engineering
FIEMA Fuzzy Inference Systems technique for Emissions Analytics
FIR Finite Impulse Response
FPA Function Point Analysis
FPM First Principle Model
FS-ELM The Feature Scaled Extreme Learning Machine
GA Genetic Algorithm
GHG Greenhouse Gas
GPS Global Positioning System
GBM Gradient Boosting Machine
GUI Graphical User Interfaces
HT Hyperparameter Tuning
I4.0 Industry 4.0
ICI Inverse Covariance Intersection
I/E Inclusion / Exclusion
IFAC International Federation of Automatic Control
IIoT Industrial Internet of Things
KCF Kalman Consensus Filter
KF Kalman Filter
KPIs Key Performance Indicators
LP Learning Phase
LSTM Long Short-Term Memory
LULC Land Use, Land Cover
MLP Multilayer Perceptron
MSE Mean Square Error
NARX Nonlinear Auto-Regressive with exogenous inputs
NASA National Aeronautics and Space Administration
OS-ELM Online Sequential Extreme Learning Machine
OFC Oscillatory Failure Case
POF Pareto Optimal Front
PCA Principal Component Analysis
PLS Partial Least Squares
PSO Particle Swarm Optimization
RF Random Forest
PRISMA Preferred Reporting Items for Systematic reviews and Meta-Analyses
PUCPR Pontifícia Universidade Católica do Paraná
RNN Recurrent Neural Networks
SCG Stretchable Seismocardiogram
SL Supervisory Loop
SLR Systematic Literature Review
STLF Short-term Load Forecasting
SS Soft Sensors
SVM Support Vector Machine
UFS Univariate Feature Selection
UQ Uncertainty Quantification
VHM Vehicle Health Monitoring
WSN Wireless Sensor Networks
XGBoost eXtreme Gradient Boosting
SUMMARY

1 INTRODUCTION .......................................................................................... 18
1.1 MOTIVATION: THE DEMAND FOR SOFT SENSORS ................................ 19
1.2 OBJECTIVES ............................................................................................... 20
1.2.1 General Objective ....................................................................................... 20
1.2.2 Specific Objectives ..................................................................................... 20
1.3 JUSTIFICATION ........................................................................................... 20
1.4 TOOLS AND METHODS .............................................................................. 22
1.4.1 Software Required ...................................................................................... 22
1.4.2 Hardware Employed ................................................................................... 23
1.5 RESEARCH IMPACTS ................................................................................. 23
1.6 DOCUMENT STRUCTURE .......................................................................... 24
2 THE SYSTEMATIC LITERATURE REVIEW ................................................ 26
2.1 SYSTEMATIC LITERATURE REVIEW: THE STATE OF THE ART ..................... 26
2.2 SLR PROCEDURES: THE TEN STEPS TO SYSTEMATIC RESEARCH .... 27
2.2.1 Procedure One: Research Areas and Theme ........................................... 27
2.2.2 Procedure Two: Qualitative Literature Data Review ................................ 28
2.2.3 Procedure Three: The Research Guideline Questions ............................ 28
2.2.4 Procedure Four: The Most Important Keywords for Research ............... 29
2.2.5 Procedure Five: Inclusion and Exclusion (I/E) Criteria Determination .. 30
2.2.6 Procedure Six: The Survey for Papers in Databases .............................. 31
2.2.7 Procedure Seven: To apply the Inclusion and Exclusion Criteria .......... 32
2.2.8 Procedure Eight: To Define a Classification Criteria ............................... 34
2.2.9 Procedure Nine: Applying the Classification Criteria................ 34
2.2.10 Procedure Ten: The Content Analysis of Included Papers ..................... 38
3 RESEARCH FINDINGS ............................................................................... 46
3.1 SOFT SENSORS: STATE OF THE ART .............................................. 46
3.1.1 Model-Driven Soft Sensors ........................................................................ 49
3.1.1.1 Phenomenological Modelling ........................................................................ 49
3.1.1.2 Kalman Filter ................................................................................................ 50
3.1.2 Data-Driven Soft Sensors .......................................................................... 50
3.2 MACHINE LEARNING TECHNIQUES ......................................................... 52
3.2.1 Support Vector Machine ............................................................................ 52
3.2.2 Deep Learning ............................................................................................. 54
3.2.3 Fuzzy systems ............................................................................................ 55
3.2.4 Decision Tree .............................................................................................. 56
3.2.5 Random Forest ........................................................................................... 58
3.2.6 Genetic Algorithm....................................................................................... 59
3.2.7 Gradient Boosting Machine (GBM) ........................................................... 61
3.2.8 XGBoost ...................................................................................................... 61
3.3 MATHEMATICAL BACKGROUND ............................................................... 62
3.3.1 Time Series ................................................................................................. 62
3.3.2 Classification Task ..................................................................................... 63
3.3.3 Learning Phase ........................................................................................... 64
3.3.4 Performance Analysis Techniques and Mathematical Background ...... 65
3.4 FEATURE ENGINEERING ........................................................................... 66
3.4.1 Feature Engineering: The State of The Art ............................................... 66
3.5 Q.01 MAIN APPLICATION AREAS FOR SS ................................................ 67
3.5.1 Industrial Applications ............................................................................... 67
3.5.2 Soft Sensors Applied to Aeronautics Solutions ...................................... 69
3.5.3 The Employment of SS in the Chemometrics Industry ........................... 69
3.5.4 The Cloud Computing Solutions Based on SS ........................................ 71
3.5.5 Soft Sensors: Enhancing Health and Care Solutions.............................. 72
3.5.6 Building and Household applications ....................................................... 73
3.5.7 The General Applications for SS ............................................................... 73
3.6 Q.02: THE RELATIONSHIP BETWEEN SS AND INDUSTRY 4.0 ............... 74
3.6.1 Soft Sensors in I4.0 Scenario .................................................................... 74
3.6.2 Soft Sensors Employed in Smart Factories ............................................. 76
3.7 Q.03: FEATURE ENGINEERING AND ML APPLIED TO SS ....................... 76
3.7.1 Feature Engineering Employment to SS Development ........................... 76
3.7.2 The Machine Learning Approaches for Soft Sensors ............................. 78
3.8 Q.04: THE METHODS FOR FEATURE ENGINEERING IN SS ................... 81
3.8.1 Feature Engineering Enhancing Soft Sensors ......................................... 81
3.8.2 Hyperparameter Tuning in Soft Sensors Implementation .................... 84
3.9 SYSTEMATIC LITERATURE REVIEW SUMMARIZED RESULTS .............. 84
4 BENCHMARK DISCUSSION ....................................................................... 86
4.1 AIRBUS: OFC X IFAC – THE BENCHMARK ............................................... 86
4.2 THE MODEL SYSTEM: DIAGRAMS AND CODE ........................................ 87
4.2.1 Flight Trajectory Angle Control Module ................................................... 88
4.2.2 Load Factor Control Module ...................................................................... 89
4.2.3 Detection Surface Servo Command Simulator (Real Servo)................... 91
4.2.4 Aircraft Turbulence Dynamics Simulator ................................................. 93
5 RESULTS: THE CRITICAL ANALYSIS ....................................................... 95
5.1 SOFT SENSOR: THE SOFTWARE DEVELOPMENT .................................. 95
5.2 MATLAB® AND PYTHON INTEGRATION ................................ 98
5.3 BENCHMARK TESTING ............................................................................ 100
5.4 THE ML METHODS APPLICATION IN SS DEVELOPMENT RESULTS ... 101
5.5 THE DEMAND FOR FEATURE ENGINEERING IN SS ............................. 103
5.6 FRAMEWORK PROPOSAL FOR SOLVING THE CHALLENGE ................. 105
5.6.1 Introduction to the Framework ................................................................ 106
5.6.2 Components of the Framework ............................................................... 106
5.6.3 Implementation Strategy .......................................................................... 107
5.6.4 Anticipated Challenges and Solutions ................................................... 107
5.6.5 Expected Outcomes and Impact ............................................................. 108
6 FEATURE ENGINEERING FRAMEWORK RESULTS .............................. 109
6.1 EMPLOYING FEATURE ENGINEERING TO THE DATASET ................... 109
6.1.1 Explaining the Outputted Code ................................................ 110
6.2 TRAINING ML ALGORITHMS WITH THE FEATURED DATASET ............ 112
6.3 IDENTIFYING OFCS WITH FEN IN THE ACQUIRED DATA ..................... 114
6.3.1 Explaining the Identification Process Code ........................................... 115
6.4 PERFORMANCE ANALYSIS ..................................................................... 116
6.4.1 Performance Metrics Calculation ............................................................ 117
6.4.2 Model Performance and Comparison ..................................................... 118
6.4.3 Performance Improvement Assessment ................................................ 119
6.4.4 Confusion Matrix Interpretation for Machine Learning methods ......... 119
6.5 INTELLIGENT SYSTEM FRAMEWORK FOR SS PERFORMANCE
ENHANCEMENT ..................................................................................................... 125
6.5.1 Framework Code Overview ...................................................................... 125
6.5.2 Framework Architecture .......................................................................... 126
6.5.3 Advantages of the Proposed Framework ............................................... 128
6.5.4 Future Directions and Considerations .................................................... 129
6.6 FEATURE REDUCTION TECHNIQUES IN SOFT SENSOR
PERFORMANCE EVALUATION ............................................................................. 129
6.6.1 Feature Reduction as a Key for Performance Enhancement ................ 129
6.6.2 The Imperative Nature of Feature Reduction ......................................... 130
6.6.3 The Role of Feature Reduction in Soft Sensor Performance ................ 131
6.6.4 Implementing Feature Reduction for Performance Evaluation ............ 133
6.6.5 Feature Landscape and Model Mastery .................................................. 134
7 DISCUSSION ............................................................................................. 141
7.1 REVIEW OF FINDINGS ............................................................................. 141
7.1.1 ML Models and Performance ................................................................... 141
7.1.2 Feature Engineering and Enhancement of Decision Tree Model ......... 141
7.1.3 Improvement in Predictive Performance ................................................ 141
7.1.4 Comparative Evaluation ........................................................................... 142
7.1.5 Interpretation and Insight ........................................................................ 142
7.2 EVALUATION OF THE DECISION TREE MODEL .................................... 142
7.2.1 Accuracy of the Model ............................................................................. 142
7.2.2 Analysis of Confusion Matrix .................................................................. 143
7.2.3 Precision, Recall, and F1-Score .............................................................. 143
7.2.4 Model Interpretability ............................................................................... 143
7.2.5 Comparative Analysis with Industry Standards..................................... 143
7.2.6 Implications for the Field ......................................................................... 144
7.3 SIGNIFICANCE OF THE RESEARCH ....................................................... 144
7.3.1 Academic Significance ............................................................................. 144
7.3.2 Industrial Significance ............................................................................. 145
7.3.3 Broader Implications ................................................................................ 145
7.4 LIMITATIONS ............................................................................................. 145
8 CONCLUSION............................................................................................ 147
REFERENCES ........................................................................................................ 150
ATTACHMENT A .................................................................................................... 158
ATTACHMENT B .................................................................................................... 160
ATTACHMENT C .................................................................................................... 161
ATTACHMENT D .................................................................................................... 163
ATTACHMENT E .................................................................................................... 165
ATTACHMENT F .................................................................................................... 168
1 INTRODUCTION

During the fourth industrial revolution, also known as Industry 4.0, there has been a burgeoning drive for innovation and resource optimization. In combination with data microprocessing, Soft Sensors (SS) have surfaced as potential problem solvers. These technologies are employed in prediction models, real-time control, and system scaling, among other applications (L. Fortuna, S. Graziani, and A. Rizzo et al., 2014). SS, or virtual sensors, use Machine Learning (ML) techniques to process real-time data, facilitating informed decision-making. As a result, their application is becoming increasingly valuable across numerous industrial sectors.
Another general application of SS is the measurement of variables that are costly or intricate to obtain with existing methods, since acquiring such data can involve multiple sensors, expert input, or extended processing time in outdated software (F. Souza, A. Francisco, and R. Araújo et al., 2016). Finally, the large-scale deployment of SS in various production lines justifies their importance in computing Key Performance Indicators (KPIs), which allow departments to track performance and identify areas requiring improvement (D. Parmenter, 2010).
In high-level control systems, one crucial characteristic is the ability to perform
data acquisition, considering all system actuators and sensors. This process involves
linear actuators, servo-controlled joints, and a network of highly precise analog sensors
that feed back into a closed control loop. Such principles underpin the case study from Airbus presented at the IFAC (International Federation of Automatic Control) World Congress (Engelbrecht and Goupil, 2020).
According to IFAC's case study presenters, SS also play a pivotal role in
developing intelligent products, such as flight control systems that employ a variety of
embedded sensors for controlling altitude, speed, and trajectory. These systems
demand smarter algorithms to enhance fault detection modules, increase the
robustness of established systems, and avert potential catastrophes. In addressing
these challenges, this case study exemplifies the role of control and automation
engineering: to transmute industrial issues into engineering problems.
The increased use of sensors in the industry has catalyzed the need for artificial intelligence techniques, particularly machine learning, to process data effectively and add value to Big Data infrastructure (W. Lee, G. Mendis, and J. Sutherland, 2019). By transforming raw data into relevant information, sustainable practices can be fostered in the context of Industry 4.0.
Therefore, this project examines several pivotal aspects of SS, such as specific development methods, data collection, literature review, and the construction of graphical interfaces. An illustrative example pertains to Smart Products (SP), where sensor networks are integrated to enable autonomous vehicles (Shaoming et al., 2020). Other noteworthy applications include estimating fluid volume in fossil reservoirs through intelligent chemometric analyses, which combine ML and mathematical models to augment estimate accuracy and provide reliable short-term predictions.

1.1 MOTIVATION: THE DEMAND FOR SOFT SENSORS

The authors (J. Engelbrecht and P. Goupil, 2020) emphasize the pivotal role of
the Electronic Flight Control System (EFCS) in an aircraft. The EFCS, tasked with
regulating attitude, speed, and trajectory, comprises an intricate network of
components, including wiring, probes, actuators, numerical buses, power sources, and
sensors. This system facilitates communication between the cockpit and the aircraft's
movable parts and control surfaces. In this context, the EFCS's consistent availability,
even under fault conditions, is paramount, making fault detection a stringent aspect of
aircraft design. A case in point is the Oscillatory Failure Case (OFC), a failure type, related to weight-saving design techniques, that can adversely affect the aircraft's structure and robustness (J. Engelbrecht and P. Goupil, 2020).
To meet this stringent standard, (V. Ribeiro, R. Kagami, and G. Reynoso-Meza,
2020) devised a data-driven detection model utilizing the Decision Tree (DT) algorithm
across various scenarios. In addition, they noted the complexity of the problem and
employed signal processing techniques for filtering and extracting features from two
signals – one related to control action and the other to the feedback sensor. Therefore,
when coupled with the DT method, feature engineering could enhance fault detection
within the FCS, improving the aircraft's weight performance.
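
To make this feature-extraction idea concrete, the sketch below computes rolling-window statistics over the two signals in Python. It is illustrative only: the column names control_action and sensor_feedback, and the window length, are assumptions rather than the benchmark's actual interface.

import pandas as pd

def extract_window_features(df: pd.DataFrame, window: int = 50) -> pd.DataFrame:
    """Rolling-window statistics over the control and feedback signals.

    Assumes hypothetical columns 'control_action' and 'sensor_feedback';
    their residual is a natural starting point for OFC features, since an
    oscillatory failure appears as a periodic residual.
    """
    feats = pd.DataFrame(index=df.index)
    signals = {
        "ctrl": df["control_action"],
        "fb": df["sensor_feedback"],
        "res": df["sensor_feedback"] - df["control_action"],
    }
    for name, signal in signals.items():
        roll = signal.rolling(window)
        feats[f"{name}_mean"] = roll.mean()
        feats[f"{name}_std"] = roll.std()
        # Peak-to-peak amplitude: sensitive to sustained oscillations.
        feats[f"{name}_p2p"] = roll.max() - roll.min()
    return feats.dropna()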
This lays the groundwork for the primary research question driving this
investigation: How can the performance of a Soft Sensor for Oscillatory Failure
Anomaly detection in an aircraft's Flight Control System (FCS) be improved
using Feature Engineering?
1.2 OBJECTIVES

This section presents the research's general and specific objectives for applying feature engineering to improve Soft Sensor performance in identifying OFCs.

1.2.1 General Objective

This project's general objective is to propose a new feature engineering framework to improve the performance of a Soft Sensor implementation in an aircraft's Flight Control System.

1.2.2 Specific Objectives

The general objective demands the achievement of each of the specific objectives listed:
a) To survey the most suitable papers related to the general objective theme by developing a Systematic Literature Review, in which guide questions are defined to steer the exploratory research;
b) To integrate the MATLAB® and Simulink™ benchmark with Python;
c) To implement a Python-embedded Soft Sensor and ML classes (SVM, MLP, and DT) to process the benchmark data in real time, as sketched after this list;
d) To propose a framework that employs Feature Engineering to improve the results obtained in item (c);
e) To conduct a discussion around this implementation and propose an Intelligent System framework to improve SS performance;
f) To compose a critical analysis of the research results and propose a final dissertation guideline.
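
As a minimal sketch of item (c), the snippet below trains the three ML classes on a featured dataset using scikit-learn. The feature matrix X and OFC labels y are assumed to have been extracted from benchmark runs beforehand, and the hyperparameters are library defaults rather than the tuned values discussed later.

from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# X: feature matrix, y: labels (0 = nominal, 1 = OFC) -- assumed available.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),  # scale-sensitive
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=500)),
    "DT": DecisionTreeClassifier(),                 # scale-invariant
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name} test accuracy: {model.score(X_test, y_test):.3f}")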

1.3 JUSTIFICATION

This dissertation's primary contribution lies in proposing a feature engineering


framework to enhance Soft Sensor (SS) performance, a significant advancement in
Industry 4.0, particularly within cyber-physical systems (Schmitt et al., 2020).
An additional contribution of this work involves conducting a Systematic Literature Review (SLR) of SS within industrial domains, examining the application of feature engineering to enhance their performance; the survey by (S. He, H. Shin, S. Xu, et al., 2020) shows such techniques advancing sustainable development and power performance across various engineering disciplines.
On a technological front, this dissertation proposes a solution to a pressing
aerospace case study: improving Soft Sensors (SS) using feature engineering
techniques. (J. Engelbrecht and P. Goupil, 2020) highlight the global industry's demand
for new Industry 4.0 technologies to enhance quality, reduce costs, and promote
sustainable development. SS play an instrumental role in decision-making across
numerous fields, emerging as a key technology in Smart Factory scenarios.
This project can potentially address issues related to oscillatory failures, which lead to efficiency losses and high vibration loads (J. Engelbrecht and P. Goupil, 2020). The ability to detect such failures holds significant implications for aircraft design, allowing structural oversizing and weight to be reduced, thereby increasing sustainability and reducing the aircraft's energy consumption.
Previous works, such as those by (V. Ribeiro, R. Kagami, and G. Reynoso-
Meza, 2020), suggest that more feature-generation techniques should be explored and
evaluated due to the complexity of identifying Oscillatory Failure Cases (OFCs).
Therefore, fault detection could potentially be enhanced by employing feature
engineering in a Simulink™ block diagram.
(S. Urbano, E. Chaumette, P. Goupil, et al., 2016) also examined this
benchmark, concluding after a real-time signal processing approach that their current
model had substantial room for improvement. They suggested that future research
should focus on developing a transparent methodology for threshold tuning. In their
subsequent work (Urbano et al., 2017), they demonstrated the use of SVM to achieve
higher OFC detection performance, highlighting this as a promising area for future
exploration:

“Further studies can be carried out with the dual objective of reducing the
minimal OFC detectable amplitude and avoiding using a system model. A
One-Class Support Vector Machine (OC-SVM) technique might be used
directly on flight data to define a suitable test statistic”.

In their third related work, (S. Urbano, E. Chaumette, P. Goupil, et al., 2018) employed an industrial Airbus desktop simulator to aid a Monte Carlo test. They observed performance degradation as turbulence levels escalated and further underscored the need for additional research into threshold tuning as part of their Monte Carlo test campaign.
Research by (A. Zolghadri, J. Cieslak, D. Efimov, et al., 2015) delved into
conventional design methods and advanced model-based techniques for failure
detection in Flight Control Systems (FCS). They contended that while model-based
techniques cannot entirely supplant the redundancy of physical sensors in aircraft and
aerospace systems, they can significantly bolster fault detection performance when
adequately harnessed.
A related paper (R. Cordeiro, J. Azinheira, and A. Moutinho, 2020) suggested
that their proposed Failure Detection and Isolation System (FDIS) could be further
enhanced as applied to a Boeing 747 aircraft simulator. In addition, they proposed the
inclusion of a Supervisory Loop (SL) to interpret the results of Kalman Filters, thus
enabling diagnosis and decision-making features through an additional Feed-Forward
Differential.
In conclusion, the justification for this dissertation stems from the exigency to
enhance Machine Learning methods' performance within the Electronic Flight Control
System in the presented benchmark. It posits that Feature Engineering can play a
pivotal role in improving the performance of the data-driven approach taken by Soft
Sensors.

1.4 TOOLS AND METHODS

This case study demands many tools and methods to fulfill the proposed requirements and answer the main research question posed in the problematization (section 1.1). For this reason, this subsection is divided into the software required and the hardware employed to achieve the proposed goals. Such information is relevant for the scientific community to reproduce the study's features and validate the solution's reliability and robustness.

1.4.1 Software Required

As the problematization shows, the benchmark is structured in MATLAB® and Simulink®. Hence, both must be installed to run the simulations. According to the MathWorks organization, millions of scientists and engineers worldwide use the software. It compiles a series of tools in a unique environment for designing and analyzing processes iteratively.
In MATLAB®, processes can be manipulated in the MATLAB (M) language, enabling integration with various computer systems, running on embedded hardware for data processing or process control, and also, as in the case of this project, being used for process emulation through Simulink. Furthermore, such a simulation tool allows users to model and simulate processes through structured blocks without writing thousands of lines of code to map a complex process.
The Python programming language was chosen for the SS approach development because it is an open-source language that allows the user to build customizations to increase functionality during development and to integrate systems efficiently. One of its main advantages is versatility, since it can run on low-power hardware, reducing the cost of implementing advanced computing techniques.
Moreover, there are many open and free libraries for applying machine learning methods in a few lines of code, one of which is scikit-learn, which specializes in ML, as stated by (L. Ramona et al., 2021). Finally, the most significant advantage of the Python language is the flexibility of API integration with other software, for example, the MATLAB® Engine API for Python, which works with Python 3.9, the most stable version of the tool in 2022.
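
As a minimal sketch of that integration, the snippet below starts a MATLAB® session from Python, runs a Simulink® model, and reads a logged variable back. The model name ofc_benchmark and the logged variable tout are hypothetical placeholders, not the benchmark's actual names.

import matlab.engine  # ships with MATLAB; requires a local installation

eng = matlab.engine.start_matlab()  # start a headless MATLAB session
# Run the Simulink model; 'ofc_benchmark' is a placeholder model name.
eng.eval("sim('ofc_benchmark');", nargout=0)
# Read a variable assumed to be logged to the base workspace.
t = eng.workspace["tout"]
eng.quit()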

1.4.2 Hardware Employed

The hardware employed for this benchmark solution is an AMD Ryzen 5 3600 processor, with six cores and 12 logical processors, 48.0 GB of installed RAM, a 500 GB SSD, and an RTX 3060 GPU with 12 GB of video memory. It was assembled on an ASRock B450M Steel Legend motherboard and ran Windows 10 as the Operating System (OS).

1.5 RESEARCH IMPACTS

This research can impact many technological scenarios, for example, the industrial, chemometrics, and other engineering or computer science fields, since its core is a real-world benchmark modeled in Simulink, with a closed control loop, solved by a soft sensor approach structured in Python, an open-source programming language.
Consequently, the Airbus benchmark concerns a closed-loop control system and a Soft Sensor (SS) embedded in an aircraft's Flight Control System to predict Oscillatory Failure Cases (OFCs) by developing a subsystem, according to (J. Engelbrecht and P. Goupil, 2020):

“This benchmark is a competition based on the evaluation of two separate contributions: (i) the design (a Simulink subsystem block to be added in the global Simulink model (see explanations below) and that shall be able to detect all the fault cases according to the requirements detailed here); (ii) an extended abstract detailing the principles of the proposed design… For system failures impacting the aircraft structure, the performance of detection methods must be improved, while retaining perfect robustness. This benchmark deals with a particular EFCS (Electrical Flight Control System) failure influencing aircraft structural loads. This failure is called in the literature oscillatory failure case (OFC)”.

In this scenario, different Machine Learning (ML) techniques were applied in the SS development to fulfill the design contribution. Previous works (V. Ribeiro, R. Kagami, and G. Reynoso-Meza, 2020) have shown that feature engineering might improve the ML methods employed to detect fault cases. This work therefore aims to improve the Soft Sensor implementation's performance by using feature engineering and tuning the hyperparameters of the employed ML methods in many scenarios.
The robustness of this benchmark allows several sets of tests and many possible approaches to hyperparameter tuning. Consequently, this research has the potential to enable several improvements in SS and its countless applications.
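
As an illustration of one such tuning approach, the sketch below runs an exhaustive cross-validated grid search over a Decision Tree with scikit-learn. The grid values are illustrative assumptions, and X_train and y_train stand for featured benchmark data prepared beforehand.

from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Illustrative search space only -- not the values used in this work.
param_grid = {
    "max_depth": [3, 5, 10, None],
    "min_samples_leaf": [1, 5, 20],
}
search = GridSearchCV(DecisionTreeClassifier(random_state=42),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)  # featured benchmark data (assumed available)
print("Best parameters:", search.best_params_)
print("Best CV accuracy:", search.best_score_)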

1.6 DOCUMENT STRUCTURE

This dissertation starts with an introduction in chapter one, covering related themes such as the problematization and the general and specific objectives of this research, followed by the justification of soft sensor-based research and the methodology adopted.
Chapter two presents the Systematic Literature Review procedures, explaining its definition and all steps of the conducted research. Besides that, the main questions for this research are explored, the inclusion and exclusion criteria are defined, and the research data are presented in graphics and tables. In addition, the Content Analysis heads a discussion about the survey's discoveries.
After exploring the research guidelines, chapter three presents their results, starting with soft sensors' state of the art, their main applications to solve engineering problems, and other demands according to the guideline questions defined in the SLR.
Chapter four discusses the case study and the solutions the course of research might find. Chapter five presents the results of the ML methods employed in the SS application over the benchmark. Chapter six presents the Feature Engineering Framework structure and its applications over the benchmark, modifying the number of features to evaluate the accuracy.
Chapter seven then conducts a concise discussion of each results theme in this work. Finally, chapter eight presents the conclusions of this research.
2 THE SYSTEMATIC LITERATURE REVIEW

This chapter presents the Systematic Literature Review (SLR), which explores
papers on the Soft Sensors field and areas related to Intelligent Systems, Industry 4.0,
and Feature Engineering. At the end of this survey, a Content Analysis (CA) will be conducted to present the main findings of the SLR.

2.1 SYSTEMATIC LITERATURE REVIEW: THE STATE OF THE ART

An investigation can be scientifically initiated by focusing on recently published articles on SS. To this end, the Systematic Literature Review (SLR) methodology is used to collect and identify data from the researched articles based on pre-established criteria to answer the hypotheses raised. According to (H. Snyder, 2019): “A systematic review aims to identify all empirical evidence that fits the pre-specified inclusion criteria to answer a particular research question or hypothesis.”
For (Moher, Liberati, and Tetzlaff et al., 2009), the systematic literature review ensures the quality of a paper, and ignoring it could decrease a paper's relevance: “Systematic reviews and meta-analyses are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably.” Hence, they developed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guideline for high-quality SLR articles. This guideline updates QUOROM (QUality Of Reporting Of Meta-analyses), developed in 1999 and now obsolete. Ten years later, the authors proposed a 27-item checklist for SLR, structured in 7 areas: Title, Abstract, Introduction, Methods, Results, Discussion, and Funding.
The most important area for the SLR is the methods section, which comprises the most critical procedures for an SLR: protocol and registration, eligibility criteria, information sources, study selection, data collection process, data items, risk of bias in individual studies, summary measures, synthesis of results, risk of bias across studies, and the additional (content) analysis. The same basic structure is synthesized in (Palmatier, Houston, and Hulland, 2018) and drives the SLR to relevant papers in the scientific process.
2.2 SLR PROCEDURES: THE TEN STEPS TO SYSTEMATIC RESEARCH

According to (Palmatier, Houston, and Hulland, 2018), the SLR is guided by ten
procedures that can be summarized in Figure 1:

Figure 1 – The Ten Procedures for SLR

Source: Adapted by the Author, 2022.

To this extent, the survey will be conducted according to the guidelines in Figure 1 to reach the defined objectives. Nevertheless, every procedure must be defined on technical grounds to reduce bias and highlight the most relevant papers for the research fields.

2.2.1 Procedure One: Research Areas and Theme

As presented by (Palmatier, Houston, and Hulland, 2018), “The author sets clear
objectives for the review and articulates the specific research questions or hypotheses
that will be investigated.” The defined objectives and motivation were the guidelines
for defining the theme and research areas for this research.
The first step is to define the research areas around the chosen theme for the survey. As chapter one shows, soft sensors and feature engineering are the main research themes. Thus, the related research areas are Machine Learning, Multi-Objective Optimization, Soft Sensors Implementation, Industry 4.0, and Intelligent Systems.
2.2.2 Procedure Two: Qualitative Literature Data Review

In accordance with (Nightingale, 2009), the qualitative literature review focuses


on the critical appraisal of study design and grouping approach:

“Recently, there has been a move from such scales to more qualitative quality
measures for different study designs. As well as critical appraisal, sub-group
analyses can be used to determine whether the meta-analysis results are
altered by removing specific studies or groups of studies. If the results from all
sub-group analyses are consistent, then the analysis results are more likely to
be found to be robust”.

As presented in item 1.1, this research faces a practical engineering problem: applying feature engineering to a Soft Sensor (SS). Therefore, this experiment aims to improve SS performance, and every paper related to this focus will be analyzed and included in, or excluded from, the primary survey.

2.2.3 Procedure Three: The Research Guideline Questions

The general and specific objectives guide the main research questions and are
essential for defining the search aim and reaching the work’s objectives (Nightingale,
2009). Based on that, this survey will be modeled by the four questions presented in
table 1:

Table 1 – The Research Guideline Questions

RESEARCH GUIDELINE QUESTIONS
Q.01 | Which are the main application areas for SS in general engineering?
Q.02 | What is the relationship between Intelligent Systems and SS in an Industry 4.0 scenario?
Q.03 | Which feature engineering or ML techniques are employed in SS development?
Q.04 | Which are the possible methods for performing Feature Engineering in Soft Sensors Intelligent System applications?
Source: The Author, 2022.

These four questions in Table 1 are key to reading the papers and searching for the appropriate information for this research. Each included paper will be examined to answer them or provide related information. It is crucial to notice that the soft sensor definition is not one of the guideline questions; however, such information will be collected to support the SS state-of-the-art section.
2.2.4 Procedure Four: The Most Important Keywords for Research

In agreement with (D. Moher et al., 2015), after defining the research theme and area, the qualitative data, and the key questions, the most relevant keywords and their two main synonyms are defined. Afterward, they will be combined and used to search papers in databases. In this research case, the main keywords can be summarized in Table 2:

Table 2 – The Main Research Keywords Synonyms

KEYWORD | Synonym 1 | Synonym 2
Soft Sensors | Sensor Array | Virtual Sensor
Intelligent Systems | Machine Learning | Knowledge Engineering
Feature Engineering | Hyperparameter Tuning | Feature Discovery
Industry 4.0 | Fourth Industrial Revolution | Smart Industry
Source: The Author, 2022.

This research focuses on Soft Sensors, Feature Engineering, and Intelligent Systems. After merging these keywords presented in Table 2 with their synonyms, taking care not to repeat any theme, it is possible to create the keyword combinations in Table 3:

Table 3 – The Main Research Keywords Combination

KEYWORD A | Combination | KEYWORD B
Soft Sensors | AND | Intelligent Systems
Soft Sensors | AND | Feature Engineering
Soft Sensors | AND | Machine Learning
Soft Sensors | AND | Knowledge Engineering
Soft Sensors | AND | Feature Discovery
Soft Sensors | AND | Industry 4.0
Soft Sensors | AND | Hyperparameter Tuning
Feature Engineering | AND | Virtual Sensors
Feature Engineering | AND | Machine Learning
Feature Engineering | AND | Intelligent Systems
Feature Engineering | AND | Industry 4.0
Intelligent Systems | AND | Virtual Sensors
Intelligent Systems | AND | Feature Discovery
Intelligent Systems | AND | Feature Engineering
Intelligent Systems | AND | Industry 4.0
Intelligent Systems | AND | Hyperparameter Tuning
Source: The Author, 2022.

Such keyword combinations will be searched in the CAPES and Science Direct databases, while the papers' titles are analyzed according to procedure five. With such filters, it is possible to select the maximum number of related papers with the most relevant contributions to this research.
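
This combination step can be scripted; the sketch below generates the “A AND B” query strings from the themes and synonyms of Table 2. It intentionally overgenerates relative to Table 3, which keeps a curated subset of these pairs.

from itertools import permutations

# Themes and synonyms from Table 2 (main keyword listed first).
synonyms = {
    "Soft Sensors": ["Soft Sensors", "Sensor Array", "Virtual Sensor"],
    "Intelligent Systems": ["Intelligent Systems", "Machine Learning",
                            "Knowledge Engineering"],
    "Feature Engineering": ["Feature Engineering", "Hyperparameter Tuning",
                            "Feature Discovery"],
    "Industry 4.0": ["Industry 4.0", "Fourth Industrial Revolution",
                     "Smart Industry"],
}

queries = []
for theme_a, theme_b in permutations(synonyms, 2):
    for term_b in synonyms[theme_b]:  # pair a theme only with another
        queries.append(f'"{theme_a}" AND "{term_b}"')  # theme's terms

for q in sorted(set(queries)):
    print(q)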

2.2.5 Procedure Five: Inclusion and Exclusion (I/E) Criteria Determination

According to (H. Snyder, 2019), “systematic reviews have strict requirements for search strategy and selecting articles for inclusion in the review; they are effective in synthesizing the collection of studies.” Hence, this step defines two sets of specific criteria, one for inclusion and another for exclusion of a selected paper. Obvious criteria, such as not including duplicated files or blog posts, are omitted. Since this is an SLR, only objective, technical criteria are adopted. For this research, those criteria can be summarized in Table 4:

Table 4 – The Research Inclusion and Exclusion Criteria

INCLUSION | EXCLUSION
I1. Open Access and open archive papers only. | E1. Papers published in 2017 or before.
I2. Is it a Review or Research article? | E2. Does not contain “Soft Sensor” in the title.
I3. The paper is from the Engineering or Computer Science areas. | E3. Does not focus on practical applications.
I4. Has a transparent relationship with Machine Learning in the abstract? | E4. The paper is not in English.
I5. Explores MOO or Feature Engineering. | E5. Does not answer any critical questions.
I6. Is it a peer-reviewed article? | E6. Does not present the application case in the paper’s title (Reviews that present it are not excluded).
Source: The Author, 2022.
After that procedure, objective inclusion and exclusion criteria for the desired papers are adopted for the seventh procedure. Then, it is the moment to start surveying for the keywords in Table 3 and condensing all data in the sixth step.
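
As a small illustration, the mechanically checkable criteria of Table 4 can be encoded as filter predicates over exported paper metadata, as sketched below; the Paper fields are hypothetical, and the remaining criteria (I2-I5, E2, E3, E5, E6) still require reading each paper.

from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    year: int
    language: str
    open_access: bool
    peer_reviewed: bool

def passes_ie_criteria(p: Paper) -> bool:
    """Mechanically checkable subset of the Table 4 criteria."""
    return (p.open_access                 # I1: open access / open archive
            and p.peer_reviewed           # I6: peer-reviewed article
            and p.year >= 2018            # E1: exclude 2017 or before
            and p.language == "English")  # E4: exclude non-English papers

records = [Paper("Soft sensor design for ...", 2021, "English", True, True)]
selected = [p for p in records if passes_ie_criteria(p)]
print(f"{len(selected)} of {len(records)} papers pass the automatic filters")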

2.2.6 Procedure Six: The Survey for Papers in Databases

After defining the combinations between the main keywords in Table 3, the sixth procedure starts by searching for each one in the CAPES and Science Direct databases. The search results, with only the open access and open archive filters applied, are summarized in Table 5:

Table 5 – The Main Research Keywords Combination Results

Index | KEYWORD A | Combination | KEYWORD B | Number
1 | Soft Sensors | AND | Intelligent Systems | 12
2 | Soft Sensors | AND | Feature Engineering | 9
3 | Soft Sensors | AND | Machine Learning | 107
4 | Soft Sensors | AND | Knowledge Engineering | 13
5 | Soft Sensors | AND | Industry 4.0 | 38
6 | Soft Sensors | AND | Feature Discovery | 1
7 | Soft Sensors | AND | Hyperparameter Tuning | 23
8 | Feature Engineering | AND | Virtual Sensors | 7
9 | Feature Engineering | AND | Machine Learning | 1,181
10 | Feature Engineering | AND | Industry 4.0 | 53
11 | Feature Engineering | AND | Intelligent Systems | 70
12 | Intelligent Systems | AND | Virtual Sensors | 22
13 | Intelligent Systems | AND | Feature Discovery | 7
14 | Intelligent Systems | AND | Intelligent Factory | 21
15 | Intelligent Systems | AND | Industry 4.0 | 467
16 | Intelligent Systems | AND | Hyperparameter Tuning | 122
Source: The Author, 2022.

As can be seen, 2,153 articles were found with this keyword-combination technique. The largest share came from the “Feature Engineering” AND “Machine Learning” combination, with 1,181 papers, while the least frequent combination was “Soft Sensors” AND “Feature Discovery”, which returned only one paper. This quantitative analysis is shown in Figure 2:

Figure 2 – Survey for papers’ initial results (bar chart of the number of papers per keyword index, 1–16; values as in Table 5)
Source: The Author, 2022.

The largest number of results appeared when combining “Feature Engineering” and “Machine Learning” because the search engine also matches the word “engineering” inside the paper’s body. For this reason, the following procedures are essential to filter out such spurious matches.

2.2.7 Procedure Seven: To apply the Inclusion and Exclusion Criteria

After browsing more than two thousand papers, this procedure significantly reduces the total number, selecting papers based on the inclusion and exclusion criteria presented in Table 4. Hence, Table 6 shows the number of selected papers per keyword combination:

Table 6 – Number of Selected Papers per Keyword

Index  KEYWORD A            Combination  KEYWORD B              Number
1      Soft Sensors         AND          Intelligent Systems         6
2      Soft Sensors         AND          Feature Engineering         4
3      Soft Sensors         AND          Machine Learning           12
4      Soft Sensors         AND          Knowledge Engineering       2
5      Soft Sensors         AND          Industry 4.0               13
6      Soft Sensors         AND          Feature Discovery           1
7      Soft Sensors         AND          Hyperparameter Tuning       2
8      Feature Engineering  AND          Virtual Sensors             7
9      Feature Engineering  AND          Machine Learning           11
10     Feature Engineering  AND          Industry 4.0               10
11     Feature Engineering  AND          Intelligent Systems         7
12     Intelligent Systems  AND          Virtual Sensors             6
13     Intelligent Systems  AND          Feature Discovery           1
14     Intelligent Systems  AND          Intelligent Factory         7
15     Intelligent Systems  AND          Industry 4.0               12
16     Intelligent Systems  AND          Hyperparameter Tuning       8
Source: The author, 2022.

After applying the I/E criteria, the number of papers was reduced to 109 using only search-engine filters and title-reading techniques. In other words, the seventh procedure discarded about 95% of the papers initially found. This result is shown in Figure 3:

Figure 3 – Survey for Papers After Applying I/E Criteria (bar chart of the number of selected papers per keyword index, 1–16; values as in Table 6)
Source: The author, 2022.
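In practice, this I/E filtering can be expressed as a simple metadata query over the exported search results. The sketch below illustrates the idea in Python with pandas; the records and field names are hypothetical, not the actual database export:

import pandas as pd

# Hypothetical metadata rows, standing in for the database export
papers = pd.DataFrame([
    {"title": "A soft sensor for hot forming", "year": 2019,
     "language": "English", "peer_reviewed": True, "open_access": True},
    {"title": "Legacy PID tuning survey", "year": 2015,
     "language": "English", "peer_reviewed": True, "open_access": True},
])

mask = (
    papers["open_access"]                                        # I1
    & papers["peer_reviewed"]                                    # I6
    & (papers["year"] > 2017)                                    # E1: 2017 or before excluded
    & (papers["language"] == "English")                          # E4
    & papers["title"].str.contains("soft sensor", case=False)    # E2
)
selected = papers[mask]
print(selected["title"].tolist())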

This procedure does not require a deep analysis of the papers; title-reading techniques and search-engine filters are enough. Consequently, some selected articles may still not match the research, so procedure eight defines classification criteria for the selected articles.

2.2.8 Procedure Eight: To Define a Classification Criteria

The eighth procedure defines the classification criteria for the found papers, adding another relevant filter to the selection. These criteria must follow the defined objectives and ensure that the papers relate to the central research questions and are relevant enough to be read thoroughly. The classification process is based on reading each paper’s title, keywords, and abstract.

2.2.9 Procedure Nine: Applying the Classification Criteria

This procedure filtered the 109 articles found in the previous steps using the classification criteria defined in the previous procedure. According to (D. Moher et al., 2015), the process consists of reading the papers’ abstracts and classifying them against the defined criteria.
These criteria rely on technical details for selecting the relevant articles that accomplish the objectives and solve the problems defined in the introduction. For example, suppose an article presents a case study that can contribute to this research’s goal of applying a framework for SS hyperparameter tuning; in that case, the article is classified as “A”. On the other hand, if the contribution is less significant and only theoretical, the article is classified as “C”, and so on. These criteria are presented in Board 1:

Board 1 – The Classification Criteria

CRITERIA ID  THE RESEARCH CLASSIFICATION CRITERIA
A            The paper’s subject relates to a practical application of soft sensors in at least one of the selected application fields.
B            The paper presents a general approach that is not tied to the selected application fields.
C            The article presents no practical application of SS or Hyperparameter Tuning.
Source: The Author, 2022.

Criterion A is based on the need to find SS-related papers with practical research. A second classification level can be developed to cluster papers related to intelligent systems containing SS. Articles that fit neither criterion are excluded from this survey, as they might not contribute to answering the key questions. Hence, Board 2 shows the 59 articles that meet the established criteria:

Board 2 – Applying the Classification Criteria (each article received one mark under criteria A, B, or C)

ID  Article's Title
1   A distributed soft sensors model for managing vague and uncertain multimedia communications using information fusion techniques
2   A recurrent neural network architecture for failure prediction in deep drawing sensory time series data
3   A review of industrial big data for decision making in intelligent manufacturing
4   A review of mechanistic and data-driven models of aerobic granular sludge
5   A review of uncertainty quantification in deep learning: Techniques, applications and challenges
6   A soft sensor for property control in multi-stage hot forming based on a level set formulation of grain size evolution and machine learning
7   A two-step multivariate statistical learning approach for batch process soft sensing
8   Accurate Clinical and Biomedical Named Entity Recognition at Scale
9   Activity recognition in manual manufacturing: Detecting screwing processes from sensor data
10  Added value of a virtual approach to simulation-based learning in a manufacturing learning factory
11  Advances in Integrated System Health Management for mission-essential and safety-critical aerospace applications
12  Agent-based control system: A review and platform for reconfigurable bending press machine
13  An efficient team prediction for one day international matches using a hybrid approach of CS-PSO and machine learning algorithms
14  Artificial Intelligence and smart vision for Building and Construction 4.0: Machine and deep learning methods and Applications
15  Conceptual Framework for Using System Identification in Reservoir Production Forecasting
16  Considerations, challenges and opportunities when developing data-driven models for process manufacturing systems
17  Contribution to the implementation of an industrial digitization platform for level detection
18  Data-driven method for the improving forecasts of local weather dynamics
19  Decision Tree for Oscillatory Failure Case Detection in a Flight Control System
20  Deep learning based soft sensors for industrial machinery
21  Deep learning in remote sensing applications: A meta-analysis and review
22  Development of an intelligent tool condition monitoring system to identify manufacturing tradeoffs and optimal machining conditions
23  Distributed estimation over a low-cost sensor network: A Review of the state of the art
24  Encoding and exploring latent design space of optimal material structures via a VAE-LSTM model
25  Energy consumption prediction by using machine learning for smart building: Case Study in Malaysia
26  Evaluation of machine learning for sensor-less detection and classification of faults in electromechanical drive systems
27  Fermentation 4.0, a case study on computer vision, soft sensor, connectivity, and control applied to the fermentation of a thraustochytrid
28  FIEMA, a system of fuzzy inference and emission analytics for sustainability-oriented chemical process design
29  Flexible, wearable biosensors for digital health
30  Genetic programming-based symbolic regression for goal-oriented dimension reduction
31  Hyperparameter tuning to optimize implementations of denoising autoencoders for imputation of missing spatial-temporal data
32  Industry 4.0 based process data analytics platform: A waste-to-energy plant case study
33  Industry 4.0 in Action: Digitalization of Continuous Process Manufacturing for Formulated Products
34  IoT-based Indoor Occupancy Estimation Using Edge Computing
35  Laundry fabric classification in vertical axis washing machines using data-driven soft sensors
36  Machine learning based adaptive soft sensor for flash point inference in a refinery real-time process
37  Machine learning based identification of energy states of metal-cutting machine tools using load profiles
38  Machine learning for biochemical engineering: A review
39  MANU-ML: Methodology for the application of machine learning in manufacturing processes
40  Model stacking to improve prediction and variable importance robustness for soft sensor development
41  Moving towards an era of hybrid modelling: advantages and challenges of coupling mechanistic and data-driven models for upstream pharmaceutical bioprocesses
42  Neuro-fuzzy Soft Sensor Estimator for Benzene Toluene Distillation Column
43  Online Parameterization of a Milling Force Model using an Intelligent System Architecture and Bayesian Optimization
44  PLS-based soft-sensor to predict ammonium concentration evolution in hollow fiber membrane contactors for nitrogen recovery
45  Prediction of sorption-enhanced steam methane reforming products from machine learning based soft-sensor models
46  Predictive maintenance enabled by machine learning: Use cases and challenges in the automotive industry
47  Predictive maintenance on sensorized stamping presses by time series segmentation, anomaly detection, and classification algorithms
48  Predictive model-based quality inspection using Machine Learning and Edge Cloud Computing
49  Process PLS: Incorporating substantive knowledge into the predictive modelling of multiblock, multistep, multidimensional and multicollinear process data
50  Proposition of the methodology for Data Acquisition, Analysis and Visualization in support of Industry 4.0
51  Radiomics and Artificial Intelligence for Biomarker and Prediction Model Development in Oncology
52  Self-healing sensorized soft robots
53  Soft sensor of bath temperature in an electric arc furnace based on a data-driven Takagi–Sugeno fuzzy model
54  STLF-Net: Two-stream deep network for short-term load forecasting in residential buildings
55  Technical Note describing the joint Airbus-Stellenbosch University Industrial Benchmark on Fault Detection
56  The biological transformation of industrial manufacturing – Technologies, status and Scenarios for a sustainable future of the German manufacturing industry
57  Towards an intelligent linear winding process through sensor integration and machine learning techniques
58  Understanding chemical production processes by using PLS path model parameters as soft sensors
59  Using a support vector machine for building a quality prediction model for a center-less honing process
Source: The Author, 2022.

After classifying these 59 selected articles according to Board 1’s criteria, 26 articles were classified as “A”, 19 as “B”, and 14 as “C”. The remaining 50 of the 109 articles did not fit any classification criterion. Hence, Figure 4 presents the distribution:
Figure 4 – Survey for Papers After Applying Classification Criteria (pie chart: A 23%, B 17%, C 13%; the remainder not related)
Source: The author, 2022.

This figure shows that inclusion and exclusion criteria alone cannot isolate every relevant paper for an SLR. The classification criteria, in turn, open the discussion on content analysis, which is the next and last step of this SLR.

2.2.10 Procedure Ten: The Content Analysis of Included Papers

After applying the research methodologies for an SLR, the most relevant data are extracted from every selected and classified article presented in Board 2. Their IDs identify them in Board 3, which summarizes their main contributions and which guideline questions they answer:

Board 3 – Content Analysis of Selected and Classified Papers

ID. Main Contribution to this SLR [marks under RSL questions 01–04]

1. Calculating multimedia soft sensors creates a generalized intelligent space. This paper shows that relying on a mechanism to change a (complex) sensor into “soft” mode with such a high degree of accuracy may greatly benefit many situations. For example, an ozone level above a specified threshold harms human health and affects activities like arable farming and tourism. [X X X]
2. The authors show that the model correctly predicts the occurrence of more than 94% of the process failures, predicting them before they occur. They adopt a wavelet-transformation-based approach for feature extraction and a bidirectional LSTM-based neural network applied to sensor time series data for anomaly prediction and regression analysis in manufacturing. [X X X X]
3. A conceptual framework of intelligent decision-making based on industrial big data-driven technology is proposed in this study, which provides valuable insights and thoughts for the severe challenges and future research directions in this field. [X X]
4. Simulation models allow virtual testing with approximate results to guide the expensive real-life implementation. Further, the application of different machine learning and data-driven models is investigated. [X X]
5. This study reviews recent advances in deep learning using UQ (Uncertainty Quantification) methods. Meanwhile, it investigates the application of these methods in reinforcement learning, highlighting fundamental research challenges and directions associated with UQ. [X X]
6. This work proposes a fast surrogate model to predict its location as a control input function. The model can be considered a soft sensor that estimates the microstructural state based on information retrieved in the process. [X X]
7. This study aimed to take a two-step approach to reduce data dimensionality and design soft sensors for product quality prediction. Furthermore, the soft sensors' accuracy, reliability, and data efficiency were discussed. Finally, this paper demonstrates the industrial potential of the proposed approach. [X X X]
8. This paper explores biomedical datasets that use the NER module of the Spark NLP library. They require no handcrafted features or task-specific resources and achieve state-of-the-art scores on popular biomedical datasets and clinical concept extraction challenges. [X X X]
9. This paper presents data analysis and machine learning approaches to detect manual manufacturing processes from sensor data. Although human activity recognition approaches are not necessarily applicable in industrial environments, all sensors are attached to tools like screwdrivers. [X X X]
10. This paper is thus a concept description, giving input to the community on aspects to be considered regarding using VR/AR/digital twins in a learning factory context, with its constraints and opportunities concerning cognitive processes. [X X]
11. The paper also discusses the critical challenges faced in developing and deploying ISHM systems in the aerospace industry. Finally, it highlights the safety-critical role that IHMM will play in future cyber-physical and autonomous system applications. [X X X]
12. The outcomes include behavioral patterns and trends of agents and multi-agent usage in conceptualized manufacturing circles, supply chain management, the gaps yet to be filled for consolidating the future of Industry 4.0 in a reconfigurable manufacturing system, and the development of mobile apps for IoT real-time database communication. [X]
13. Five algorithms are proposed based on the features that reflect their strengths to calculate the rating of batters, bowlers, batting all-rounders, bowling all-rounders, and wicketkeepers. CS-PSO hybridization is a feature optimization strategy to eliminate redundant, irrelevant, and noisy features. [X X X]
14. This paper presents a unique perspective on AI/DL/ML applications in these domains for the complete building lifecycle. Furthermore, data collection strategies are discussed using smart vision and sensors, data cleaning methods (post-processing), and data storage for developing these models. [X X X X]
15. A conceptual framework for using system identification is proposed. Based on a reservoir’s recovery mechanism, the conceptual framework will help to systematically select an appropriate model structure from the various model structures available in system identification. [X X]
16. This paper explores how data-driven models can characterize process streams and support the implementation of circular economy principles, process resilience and waste valorization, and the considerations and challenges when developing a data-driven model. [X X X]
17. This paper presents a contribution concerning digitizing an industrial platform for which the authors have chosen liquid-level detection. The idea is to retrieve data from a PLC (Siemens S7 1200), whose role is to control the actors and actuators. [X X]
18. This paper describes the modeling approach for lower atmosphere dynamics in a selected location. The purpose of this model is to provide short-term and long-term forecasts of the weather variables, which are used as the input data for the model of the dispersion of radioactive air pollution. [X X X]
19. This work describes developing a data-driven oscillatory fault detection model for a flight control system, which has been proposed as a benchmark problem. In the data-driven detection model development, this work trains a decision tree algorithm using data acquired from a numerical experiment, where different scenarios of failures, control actions, and turbulence levels are simulated. [X X X]
20. This paper develops and evaluates a deep learning-based virtual sensor for estimating a combustion parameter on a large gas engine using only the rotational speed as input. [X X X]
21. This review covers applications and technology in remote sensing, ranging from pre-processing to mapping, and concludes on the current state-of-the-art methods. [X X X]
22. In this paper, an intelligent tool condition monitoring system is developed to identify sustainability-related manufacturing tradeoffs and a set of optimal machining conditions by monitoring the status of the machine tool using networked sensors. [X X X]
23. This paper comprehensively reviews the state-of-the-art solutions in this research area (distributed estimation over a low-cost sensor network), exploring their characteristics, advantages, and challenging issues. [X X]
24. This paper’s main contribution is the implementation of Variational Autoencoders (VAE) with machine learning models that can extract low-dimensional data representations from datasets of high complexity and volume. Besides that, they also show that Long Short-Term Memory (LSTM) neural networks are well suited to learning logical trajectory relationships within datasets. [X]
25. This paper applies three methodologies, Support Vector Machine, Artificial Neural Network, and k-Nearest Neighbor, proposed for the algorithm of the predictive model. Then, focusing on a real-life application in Malaysia, two tenants from a commercial building are taken. [X X X]
26. This paper addresses whether non-Deep Learning methods are competitive with Deep Learning for sensorless detection and classification of faults in electromechanical drive systems. [X X X]
27. In this paper, a soft sensor was designed to estimate the end of the growth phase in the fermentation. The design was based on experts’ knowledge of the process in a thraustochytrid fermentation: with dissolved oxygen control, a peak in the aeration rate occurs at the end of the growth phase. [X X]
28. The key feature of this paper's proposed system relies on the integration through multiple stages of Fuzzy Inference systems and a data-driven technique for Emissions Analytics (FIEMA). [X X X]
29. This review discusses the basic sensing principles of biosensor systems and their applications. Moreover, these biosensors' potential applications and progress have been further prospected. [X X]
30. For optimizing the application-oriented data visualization cost function, multi-gene genetic programming (GP) is an algorithm used to select variables needed to explore the internal structure of the data for data-driven software sensor development or classifier design. [X]
31. This paper presents a traffic monitoring benchmark that uses sensor data and compares Deep Learning methods' performance and computational costs. [X X X]
32. The work studied data-driven soft sensors in the case study to predict syngas heating value and hot flue gas temperature. The neural network-based NARX model demonstrated better performance among the studied data-driven methods. Besides that, it presents data-driven soft sensors as valuable tools for predictive data analytics. [X X X]
33. This article presents a combined solution that aligns with the concepts of Industry 4.0 by providing a digital twin, cloud integration, and sophisticated statistical, hybrid, and mechanistic models. The models are used for soft sensors, Model Predictive Control, and Optimisation algorithms to predict and control product Quality Attributes. [X X X]
34. This work investigates the feasibility of an Internet of Things (IoT)-based estimation system for university classroom occupancy. The centralized cloud computing approach generates high latencies as IoT devices generate voluminous data at high rates. [X X]
35. This paper presents a data-driven soft sensor that exploits physical measurements already available on board a commercial VA-WM to estimate the load typology through a machine-learning-based statistical model of the process. [X X X]
36. This study defines a procedure based on Machine Learning modules demonstrating the power of real-time monitoring over accurate data. Furthermore, this contribution demonstrates, with the inclusion of a new concept called an adaptive soft sensor, the importance of dynamic adaptation of the conformed schemes based on Machine Learning through their combination with feature selection, dimensional reduction, and signal processing techniques. [X X X]
37. This work presents an ML approach to analyzing energy states to improve the time study compared to static approaches, employing feature engineering and hyperparameter tuning techniques such as CNN and LSTM. [X X]
38. This paper reviews the use of machine learning within biochemical engineering over the last 20 years. The most prevalent machine learning methods are presented for multiple applications in many areas in an SLR. [X X X X]
39. This work presents the Methodology for Applying ML in Manufacturing Processes (MANU-ML). The authors extended data mining (DM) and ML techniques to provide a four-layer model to integrate Information Technology (IT) and Operational Technology (OT). [X X]
40. This paper presents the importance of robustness for SS development in the literature and brings hyperparameter tuning insights, ML methods for SS development, and the built feature importance in overfit models. [X X X X]
41. The paper provides an overview of the mechanistic and statistical models of upstream mAb bioprocesses published over the past five years to discuss advantages and drawbacks. The authors conclude with an outline of synergistic, hybrid modeling strategies emerging as critical tools in the era of Biopharma 4.0. [X X]
42. This paper uses a new method, nonlinear auto-regressive with exogenous input (NARX)-based ANFIS, for soft sensor modeling. This paper aims to propose a more accurate and predictive model combining the advantages of the neural network, the fuzzy inference mechanism, and the predictability of the NARX structure. [X X]
43. This paper presents an efficient and industry-ready system architecture that enables both the control of machining operations and the high-frequency acquisition of controller data from an external sensor. [X X]
44. This work provides a data-driven soft sensor implementation based on PLS proposed to extract primary information on the TAN concentration evolution in the HFMC from the pH time-evolution profile. [X X X]
45. In this study, two soft sensor models were developed and used to predict and estimate variables that would be difficult to measure directly. Both artificial neural networks and random forest models were developed as soft sensor prediction models. Besides that, it brings relevant feature selection contributions. [X X X]
46. This paper summarizes many ML applications based on predictive maintenance, including some that employ soft sensors for automotive systems. Nevertheless, they explore feature extraction. [X X X]
47. This work proposes the combination of time segmentation with feature reduction and AD, together with solid ML classification algorithms, to be used for downtime prediction in sheet metal forming tools (sensorized stamping presses). In addition, this paper investigates the employment of feature engineering methods. [X X X]
48. In this contribution, the authors investigate a new integrated predictive model-based quality inspection solution in industrial manufacturing using Machine Learning techniques and Edge Cloud Computing technology. [X X X]
49. This paper describes Process PLS as a promising approach that enables data-driven analysis of process data using the information on the complex process structure, increasing insight into the underlying system and making model-based predictions more valuable. [X X X]
50. This work proposes the implementation of a test case framework for Industry 4.0. This system covers four layers: decision support, data processing, acquisition/transmission, and sensors. [X X X]
51. This paper presents a health-related contribution by employing Artificial Intelligence (AI) to evaluate biomarker data acquired by sensors to characterize and classify tumors accurately. Besides that, feature selection is present in this article. [X X X]
52. Such work provides the complete development of a soft gripper with innovative healable soft sensors to measure damage, force, and strain, based on a self-healing conductive elastomeric composite. [X X]
53. The following paper presents a novel approach to EAF bath temperature estimation using a fuzzy model soft sensor obtained using Gustafson–Kessel input data clustering and particle swarm optimization of model parameters. [X X X]
54. Their study addresses the problems of STLF using a novel two-stream deep learning (DL) model called STLF-Net. The first stream is designed with Gated Recurrent Units (GRUs) to learn and capture the long-term temporal representations of the energy utilization data. [X X]
55. This technical note presents the AIRBUS benchmark description and describes its requirements. [X]
56. This paper presents the preliminary results of a systematic assessment of the biological transformation of the German manufacturing industry and a few soft sensor applications. [X X]
57. This work focuses on linear winding process soft sensor implementation with machine learning techniques, embedding several sensors to acquire data and composing an intelligent system. [X X X]
58. This research provided the use of model parameters as SS. Model parameters are implemented as SS by comparing model parameters across multiple data sets from different batches of the same process. [X X]
59. This study optimizes process parameters using feature engineering and dimensionality reduction to compress data to build a quality prediction model. The author employed soft-computing techniques such as Deep Neural Networks, decision trees, Support Vector Machines, logistic regression, and ensemble methods. [X X X]
Source: The author, 2022.

According to Board 3, each of the 59 papers answers at least one guideline question and contributes to the primary selected fields. These data make it possible to generate Figure 5:
Figure 5 – Number of Questions Answered by Papers (bar chart: Q.01 = 54, Q.02 = 27, Q.03 = 49, Q.04 = 23)
Source: The author, 2022.

Figure 5 shows that the most answered question is number 1, addressed by 91.52% of the papers, followed by question number 3, addressed by 83.05%. Question number 4 has the fewest answers, fulfilled by only 38.98% of the papers, because it requires technical details about the papers’ applications of ML or Feature Engineering to soft sensor development.
The Content Analysis (CA) presented in Board 3 corroborates each paper’s main contribution to this research’s objectives and main questions by applying the four guideline questions and checking which ones are answered. Finally, with all 59 papers’ data collected, analyzed, stored, and summarized, it is possible to present the systematic literature review results in Chapter 3.

3 RESEARCH FINDINGS

This chapter presents the findings of the SLR in the Soft Sensors field of study and its Machine Learning (ML) processes; the benchmark technical details will be explored in the next chapter. It starts with the state of the art of Soft Sensors in 3.1 and then explores relevant ML techniques, such as Support Vector Machine (SVM), Decision Tree (DT), and Deep Learning (DL) methods, in 3.2. In addition, the mathematical background for time series, the classification task, and the learning phase is discussed in 3.3. Finally, feature engineering is the theme of sub-chapters 3.4 through 3.8, which answer the guideline questions from Table 1.

3.1 SOFT SENSORS: STATE OF THE ART

Soft Sensors can be defined as inference tools that process sensor data in real-time to estimate other variables that are more complex to measure, such as data from a statistical laboratory test, as presented by (Souza, Araújo, and Mendes, 2016). The intelligence of these sensors is based on algorithms and machine learning techniques for mining and improving the quality of the information collected by sensors, eliminating outliers, or condensing information with mathematical models.
For (Jalee and Aparna, 2016), the term Soft Sensor derives from the junction of “software” and “sensor”. These models were developed by processing, in software, information from hardware already present in supervisory systems, where technicians evaluated alarms to make decisions. With ML or DL techniques, however, previously immeasurable variables can be estimated from secondary variables read by the sensors.
Such tools help to construct intelligent products by allowing them to make real-time decisions. They can be classified into two groups concerning how the data is treated (Maggipinto et al., 2019):
a) Model-Driven, in which data is acquired in real-time to feed predictive models, promoting quick decision-making;
b) Data-Driven, in which statistical models are built from a robust database already obtained during tests; this group of intelligent sensors employs most machine learning techniques.
Aiming to map those areas, (Kadlec and Gabrys, 2009) defined a hierarchical tree of the machine learning methodologies used by each of these strands, as shown in Figure 6:

Figure 6 – hierarchical tree of the ML methodologies employed in SS

Source: The Author, Adapted from Kadlec and Gabrys, 2009.

The main techniques are raised and categorized according to the authors' studies. Therefore, the two ways of implementing SS will be presented in the following subsections. Before that, Board 4 summarizes the definition of SS given in each of the papers, following the order of publication:

Board 4 – The Soft Sensor Definition in Classified Papers

Index  Year  SOFT SENSORS DEFINITION
1      2013  A soft sensor (SS) can be defined as “the association of a hardware sensor enabling the online measurement of some process variables using an algorithm (software) to provide online estimates of unmeasured variables”.
4      2016  Soft sensors are models that can provide accurate estimations in real-time for these hard-to-measure parameters without the financial investment and maintenance requirements, using the relationships with conventional sensor data.
6      2019  A “soft sensor” infers from measurable quantities (furnace temperature and transport time) the estimated microstructural state as a function of the process setting (strain rates and pause times).
16     2020  Process manufacturers rely on soft sensors, which can model data collected from conventional measurements and be used to predict key variables.
20     2020  However, using already existing sensors’ signals as an input for deep neural networks to infer the desired data rather than measure it directly could be a viable alternative [11]. These so-called soft or virtual sensors could satisfy data needs using the same hardware.
23     2020  Multi-sensor fusion in wireless sensor networks generally refers to combining sensory data, e.g., position, range, bearing angle, and arrival time from several local sensor nodes. The resulting perception is better than when these sensors are used individually for sensing.
27     2020  A soft sensor is a technique in which a variable (output) that typically requires analytical methods for its determination is estimated using online measurements of related variables (inputs). Soft sensors solve the problem of providing estimates for variables for which no direct sensor is available.
33     2021  With this new soft sensor, it is now possible to monitor moisture across the six chambers in real time while using the single NIR moisture sensor to measure moisture at the endpoint before feeding the granules to the tablet press.
35     2021  A Soft Sensor (SS) [4] is a technology that allows for estimating the value of a quantity that is too costly or impossible to measure from indirect sensor measurements, making it well-suited for the typology detection task. They can be divided into Model-Driven or Data-Driven.
36     2021  Based on machine learning techniques, soft sensors can infer the value of a certain magnitude from the indirect measurement of other magnitudes. In other words, a data-driven soft sensor is an inference scheme capable of learning certain multi-parametric and highly non-linear causality relationships from a historical data set.
38     2021  Integration with mass balance equations for soft-sensor development [168]. These hybrid models often show higher predictive power and data efficiency than purely physical or data-driven models and are robust to small datasets with low quality (e.g., noisy data).
40     2021  Soft sensors can be broadly categorized based on the type of model they utilize: mechanistic, which uses first principles to develop a description of the process; data-driven, which uses historical process data combined with ML algorithms to build a model; and hybrid, which combines the two. First-principle models are desirable but are limited by the necessity of adequate knowledge of the underlying process mechanisms and usually do not account for uncertainties. Instead, data-driven methods need only historical process data and, as such, have been widely explored in academia and industry in processes where a priori knowledge is not available.
42     2022  Soft sensors measure the unmeasured quantity (primary variable) from the measured quantity (secondary variable). For example, temperature, pressure, liquid levels, etc., are the sensing variables in the process or chemical industry [5]. Two types of soft sensors are used, i.e., model-driven and data-driven soft sensors. Model-driven soft sensors, also called phenomenological models, are based on the first-principle model, whereas data-driven soft sensors are based on measured data within plants. Data-driven soft sensors achieved popularity compared with model-driven ones since they mainly depend on the actual process and can represent it more accurately.
44     2022  A soft sensor is computer software that maps the values from the input variables to predict the output variable(s). Note that primary variables (mainly nutrient and organic concentration) are traditionally measured in the laboratory and, thus, are characterized by time-delayed responses.
45     2022  A definition of the soft sensor is a predictive model based on large quantities of data available in an industrial process, which can be first-principle (white-box) or data-driven (black-box) models. White-box models depend on actual mechanical data of the process. In contrast, the latter uses historically collected process data, which makes black-box models far more practical and readily applicable to process plants. The principle on which soft sensors work is based on quality estimation through a mathematical model that uses all available measured process variables.
49     2022  Relating this cost to the performance of the batch in terms of process variables will result in a better understanding of the batch variations. It can even result in a soft sensor that can predict the cost for a running batch in real-time.
58     2022  The power of using a combination of variables as soft sensors in production processes is thoroughly established. Using model parameters as soft sensors may provide much more information about the actions to take when something goes wrong.
Source: The author, 2022.

3.1.1 Model-Driven Soft Sensors

As presented by (Jalee and Aparna, 2016), model-driven SS are based on phenomenological modeling of the studied process using sensor data. A model is created from the data obtained by the sensors and the dynamics built from them. In short, the modeling depends on hypotheses about the system dynamics, without considering errors, interference, disturbances, or reading errors in the measurements; at most, such effects are treated through the application of filters. As shown in Figure 6, the model-based soft sensor approaches are divided into two: phenomenological models and approaches using Kalman filters.

3.1.1.1 Phenomenological Modelling

Thus, (Kadlec and Gabrys, 2009) define the First Principle Model (FPM) approach as a phenomenological model in which models are defined based on fundamental equations and mathematical descriptions of the studied systems. The focus is on steady-state analysis, which does not consider disturbances caused by adverse conditions in the ideal model. However, the researchers point out that, with the increase in instrumentation in industrial plants, these models lost ground to SS based directly on data, which show greater reliability.

3.1.1.2 Kalman Filter

Based on (Welch and Bishop, 2006), this approach uses extended Kalman filters that estimate the dynamics of processes through closed control loops at discrete time intervals, obtaining the answer while considering disturbances in the measurement of the feedback sensors. It combines two distinct steps, the time update and the measurement update, to project past data onto the current state. Then, with the error covariance calculation, the estimates for the next step are performed. The implementation of this filter follows the processes indicated in Figure 7:

Figure 7 – Kalman Filter Algorithm

Source: The Author, Adapted from Welch e Bishop, 2006.

Thus, a closed-loop Kalman filter can improve a control system. It is worth mentioning that each step is represented by a series of computations whose derivations can be found in (Welch and Bishop, 2006). Furthermore, such filters are extended by (Shaoming et al., 2019) as Kalman Consensus Filters (KCF), which seek consensus between local estimates in virtual sensor networks.
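To make the recursion concrete, the following is a minimal sketch of the time-update and measurement-update steps for a scalar, constant-state Kalman filter in Python; the noise values are illustrative assumptions, not tied to any system in this work:

import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.1**2, x0=0.0, p0=1.0):
    # q: assumed process noise, r: assumed measurement noise
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Time update (prediction): constant-state model, so x is unchanged
        p = p + q
        # Measurement update (correction)
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # blend prediction with the new reading
        p = (1 - k) * p          # update error covariance
        estimates.append(x)
    return estimates

noisy = -0.377 + 0.1 * np.random.randn(50)   # simulated sensor readings
print(kalman_1d(noisy)[-1])                   # converges toward -0.377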

3.1.2 Data-Driven Soft Sensors

Moreover, (Zambonin et al., 2019) highlight that SSs are statistical technologies that transform low-cost data into complex or high-cost information, improving process performance as corrective actions are taken in real-time. The authors emphasize the demand for supervised machine learning techniques based on neural networks, most commonly regression or simple classification networks based on Bayesian algorithms.
The authors (Wo Jae Lee et al., 2019) describe SS as intelligent tools conditioned to monitor data in real-time in highly automated environments. For example, it is possible to classify the state of the cyber-physical systems (CPS) of a factory (e.g., a Smart Factory) monitored through machine learning techniques. Furthermore, they agree on the possibility of monitoring the effects of deterioration on machine performance, which was not possible with the previously used models, which triggered alarms only after failures occurred.
In this way, they develop data-driven monitoring systems using information passed in the training of a neural network that processes the machine data in real-time, presenting the tools' condition. This tool is based on statistical learning theories: it is independent of machine parameters, having as reference only the past inputs and their results, and is defined as a Support Vector Machine (SVM).
In parallel, in the research by (Wang et al., 2019), SVMs are described as machine learning models based on past information that can support future actions using clusters (data vectors). In the authors' research, the data-driven SS used were K-means (a clustering method) and HC (Hierarchical Clustering). However, these applications led to the conclusion that such unsupervised methods are not applicable for exact measurements, as they present low measurement power.
SVMs can also be developed with deep learning methods due to their remarkable ability to scale data and perform well with limited training samples, as demonstrated by (Lei Ma et al., 2019). Another data-driven technique is the random decision forest (Random Forest – RF), which the researchers claim is more accurate and can absorb weaker samples when training data-driven models.
Finally, the authors present a DL technique focused on digital image processing and object detection, widely used to improve data from camera sensors in land use and land cover (LULC) applications, such as facial recognition, scene identification, and object classification. In these cases, the reported performance of the LULC method is considered accurate and reliable in applications with large volumes of information, such as high-rate timeline data. Furthermore, the moving average method was presented by (Kadlec and Gabrys, 2009), which consists of updating the average with each new value read. Hence, the average is normalized, simple predictions can be made, and the method can be used to smooth noise.
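As an illustration, the incremental mean update mentioned above can be written in a few lines of Python; the simulated readings are hypothetical:

import numpy as np

def running_mean(stream):
    # Refresh the average with each new reading: m_n = m_{n-1} + (x - m_{n-1}) / n
    mean, n = 0.0, 0
    for value in stream:
        n += 1
        mean += (value - mean) / n
        yield mean

readings = 5.0 + 0.3 * np.random.randn(100)   # noisy sensor signal around 5.0
smoothed = list(running_mean(readings))
print(smoothed[-1])                            # close to 5.0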

3.2 MACHINE LEARNING TECHNIQUES

In this section, some employed ML techniques are presented, for example,


Support Vector Machine (SVM), Deep Learning (DL), Fuzzy Systems (FS), Decision
Tree (DT), Random Forest (RF), Genetic Algorithm (GA), XGBoost, and Gradient
Boosting.

3.2.1 Support Vector Machine

The study conducted by (C. Kumar, S. Chatterjee, T. Oommen, et al., 2020) states that SVM belongs to the theories of statistical learning, whose focus is training on the samples closest to the decision boundary to optimize the separation between different classes of values. The presented methodology uses a kernel (processing core) to compute separation models, which vary from linear to sigmoid, generating samples in vectorized hyperplanes, as shown in Figure 8:

Figure 8 – Support Vector Machine Linear Hyperplane

Source: R. Gandhi, 2018.

The article mathematically demonstrates how an SVM distinguishes the threshold between classifying and rejecting a value. In addition, the user must define a cost for optimizing the best result among the samples: the higher the cost value, the more complex the hyperplane and the less generalist the model becomes. It further confirms that SVM is highly accurate when properly optimized and can be a powerful tool for implementing virtual sensors in industrial or engineering problems. However, SVM may not be the best choice in uncertain environments, as it is a supervised learning method; if the data is not labeled correctly, it may lead to erroneous classifications:

“The comparative analysis of different MLAs shows that the Support Vector
Machine (SVM) outperforms other Machine Learning (ML) models…
Furthermore, the sensitivity analysis performed in this study illustrates that the
SVM is less sensitive to the number of samples and mislabeling in the model
training than other MLAs (Machine Learning Algorithms)”.

Hence, according to (C. Chang and C. Lin., 2001), who developed LIBSVM, the method presents a wide range of applications, for example, solving SVM optimization problems, theoretical convergence, multi-class classification, probability estimation, and parameter selection.
Other relevant information is the main equations of the method as a supervised algorithm used for classification and regression. Its primary objective is defined by Equation 1 (the objective function) and Equation 2 (the constraints):

\min_{\omega, b, \varepsilon} \; \frac{1}{2}\|\omega\|^2 + C \sum_{i=1}^{n} \varepsilon_i \qquad (1)

y_i(\omega \cdot x_i + b) \geq 1 - \varepsilon_i, \quad \varepsilon_i \geq 0, \quad i = 1, \dots, n \qquad (2)

This formulation finds a hyperplane, characterized by the weight vector $\omega$ and bias $b$, that maximizes the separation margin between two classes. Minimizing the term $\frac{1}{2}\|\omega\|^2$ maximizes this margin, while $C \sum_{i=1}^{n} \varepsilon_i$ penalizes misclassifications, with the constant $C$ determining the trade-off between margin maximization and misclassification penalty. The constraints $y_i(\omega \cdot x_i + b) \geq 1 - \varepsilon_i$ ensure that data points $x_i$ of class $y_i$ lie on the correct side of the margin. The slack variables $\varepsilon_i$ are introduced to allow certain points to lie inside the margin or be misclassified, providing flexibility for better generalization.
Finally, the main advantages of using SVM for a data-driven SS approach are its memory efficiency, its applicability to high-dimensional spaces, its versatility in terms of kernel functions, and its ability to handle many samples.
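For reference, a minimal sketch of a soft-margin SVM classifier using scikit-learn is shown below, where the parameter C plays the role of the constant $C$ in Equation 1; the dataset is synthetic, standing in for labeled sensor data:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Larger C penalizes slack variables more heavily (narrower margin, less
# generalist model), matching the cost discussion above.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.3f}")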

3.2.2 Deep Learning

According to the Data Science Academy (2018), deep learning has extended what was known until the 2000s as machine learning to a new level due to the growth of computational capacity and new artificial intelligence techniques, presenting more satisfactory results in increasingly complex challenges in robotic computing and AI. What differentiates ML from DL is the complexity of the neural networks: in the former, the networks have fewer neurons and thus process less data, while the latter has deeper learning layers, demanding parallel processing. These differences between a simple ANN and a DLNN can be observed in Figure 9:

Figure 9 – Deep Learning Neural Networks (DLNN)

Source: DSA, 2018, C.03.

As presented in Figure 9, neural networks have three kinds of layers: input, output, and intermediate layers called hidden layers. Processing occurs intensively in these layers, and the number of iterations is high; at the output layer, the input data has been mathematically processed by the condensed neurons and transformed into information relevant to decision-making.
According to (P. Bezak, P. Bozek, and Y. Nikitin, 2014), “Deep learning methods have the capability of recognizing or predicting a large set of patterns by learning sparse features of a small set of patterns.” Hence, they can be applied even in data-scarce scenarios.
According to (M. Lei et al., 2019), DL algorithms are based on neural networks and generally process data from different types of sensors. Their focus is the classification of information based on data previously obtained from a system, so that intelligent decisions can be made from these datasets.
Besides that, this subset of machine learning techniques utilizes artificial neural networks, especially deep architectures with many layers. In this sense, the basic component of DL is the artificial neuron, represented by Equation 3:

y = f(W \cdot x + b) \qquad (3)

In this equation, $x$ is the input vector, $W$ is the weight vector, $b$ is the bias, and $f$ is the activation function, which introduces non-linearity, enabling the network to model complex patterns. Deep learning models are composed of multiple such neurons organized in layers. The depth, or number of layers, and the non-linear transformations allow these models to learn and represent intricate patterns from vast amounts of data. The training process involves adjusting the weights $W$ and biases $b$ using algorithms like backpropagation to minimize a loss function, which measures the discrepancy between the predicted and actual outputs.
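A minimal sketch of Equation 3 in Python follows, computing the output of a single artificial neuron; the ReLU activation and the weight values are illustrative assumptions:

import numpy as np

def relu(z):
    # Assumed activation function f; any non-linearity could be used here
    return np.maximum(0.0, z)

def neuron(x, W, b, f=relu):
    # Equation (3): y = f(W . x + b)
    return f(W @ x + b)

x = np.array([0.5, -1.2, 3.0])     # input vector
W = np.array([0.2, -0.8, 0.5])     # weight vector (illustrative)
b = 0.1                            # bias
print(neuron(x, W, b))             # single-neuron output

Stacking layers of such neurons (with W becoming one weight matrix per layer) yields the deep architectures discussed above.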

3.2.3 Fuzzy systems

According to (J. Jang, 1991), fuzzy logic rules originate from the description of the behavior of the systems. Based on the premises, each new rule is produced by the combination of rules (“I”, “J”, etc.), and, at the end of the process, the system output (“Z”) presents the weighting of all established rules. The author also presents the ANFIS topology, as shown in Figure 10:

Figure 10 – ANFIS topology

Source: The Author, Adapted from J. Jang, (1991).


Thus, five layers build this network, and the values of the premises are determined according to the modeled situations. The second layer performs the product between the premises and delivers the result to the third layer, which calculates the rate at which the obtained weights are triggered up to the fourth layer. In this step, the consequent parameters act on the factors obtained during the process, so that the last layer sums the results and emits the output signal of this neural network. Thus, the model uses data to train and improve itself.
According to (H. Pacco, 2022), “Fuzzy Logic is a method of reasoning based on approximation and assumptions that resembles the human reasoning model,” which allows Boolean-style decision-making based on the input layer.
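The following minimal sketch illustrates the weighting of fired rules with assumed triangular membership functions and illustrative rule consequents; it is not the ANFIS network itself, only the fuzzy-weighting idea behind the output “Z”:

import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with feet a, c and peak b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

temp = 23.0                                   # crisp input reading
mu_cold = tri(temp, 0, 10, 20)                # degree of "cold"
mu_warm = tri(temp, 15, 25, 35)               # degree of "warm"
rule_outputs = {"cold": 0.1, "warm": 0.7}     # assumed consequent levels

# Output Z as the weighting of all fired rules (centroid-like combination)
weights = np.array([mu_cold, mu_warm])
levels = np.array([rule_outputs["cold"], rule_outputs["warm"]])
z = (weights * levels).sum() / weights.sum()
print(z)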

3.2.4 Decision Tree

According to (L. Breiman, J. Friedman, R. Olshen, et al., 1984), Decision Trees (DTs) are supervised learning methods used for classification and regression. They aim to compose a target-variable prediction model based on decision rules constrained by the dataset features. The number of branches is defined by the tree's nodes; the deeper the tree, the more nodes are present and the more complex the rules needed to fit the model.
If the trees are small, their model can be easily interpreted. However, even if
the complexity rises, they are immune to predictor outliers, as shown by (T. Hastie, R.
Tibshirani, and J. Friedman, 2009). Furthermore, the authors affirm that they perform
internal feature selection. Hence, “These properties of decision trees are why they
have emerged as the most popular learning method for data mining.”
For classification problems, these nodes aim to segregate the data into classes. A standard decision tree is built using an algorithmic approach that identifies ways to split a dataset based on different conditions; the process is iterative, continuing until it reaches a maximum depth or a node contains data of a single class. The core principle is based on the entropy (Equation 4) and the information gain (Equation 5):

H(S) = \sum_{i} -p_i \log_2(p_i) \qquad (4)

IG(S, A) = H(S) - \sum_{\vartheta \in D_A} \frac{|S_\vartheta|}{|S|} H(S_\vartheta) \qquad (5)

Here, $S$ is a set of samples, $A$ is an attribute, and $p_i$ is the proportion of samples in each class (e.g., positive and negative). The goal is to find the attribute that returns the highest information gain; this attribute becomes the decision boundary. The process then repeats with the subsets of data until the tree reaches its termination conditions.
Therefore, the decision tree provides a flowchart-like structure where each internal node denotes a test on an attribute, each branch signifies the outcome of that test, and each leaf node holds a class label. In Equation 5, $S$ is the total set of node samples, and $D_A$ represents the subset of the dataset $D$ where the attribute $A$ takes on a specific value $\vartheta$; at a glance, it is essentially a filtering of the main dataset based on a particular attribute value. So, when computing something for $D_A$, the focus is the data where the attribute $A$ is equal to $\vartheta$. This subset is used in decision trees to determine the quality of a split based on an attribute's value. Therefore, the attribute $A$ that yields the maximum information gain is selected to make the split, and this process is applied to each child node until a stopping criterion is met.
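A minimal sketch of Equations 4 and 5 follows, computing the entropy of a labeled set and the information gain of one attribute over a toy dataset in the style of (J. Quinlan, 1986); the records are illustrative:

import math
from collections import Counter

def entropy(labels):
    # Equation (4): H(S) = sum_i -p_i * log2(p_i)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attribute, target):
    # Equation (5): H(S) minus the weighted entropy of each subset S_v
    labels = [r[target] for r in rows]
    gain = entropy(labels)
    for v in {r[attribute] for r in rows}:
        subset = [r[target] for r in rows if r[attribute] == v]
        gain -= (len(subset) / len(rows)) * entropy(subset)
    return gain

data = [
    {"outlook": "sunny",    "windy": True,  "class": "N"},
    {"outlook": "sunny",    "windy": False, "class": "N"},
    {"outlook": "rain",     "windy": True,  "class": "N"},
    {"outlook": "overcast", "windy": False, "class": "P"},
    {"outlook": "rain",     "windy": False, "class": "P"},
]
print(information_gain(data, "outlook", "class"))   # ~0.571 bits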
Further, Figure 11 shows an example of a simple decision tree adapted from the introduction of DTs presented by (J. Quinlan, 1986):

Figure 11 – Simple Decision Tree

Source: The Author, adapted from (J. Quinlan, 1986).


This classical problem presents the weather on a Saturday morning, in which the attributes are the outlook (sunny, overcast, or rain), the humidity (high or normal), and windy (Boolean).
After defining those variables, there are two classes in the dataset: the positive instances (P) and the negative instances (N), defined by the author's judgment about the weather (in real applications, they would be defined by data features).
Nevertheless, the cited authors also point out the main advantages of DTs: they can handle multi-output problems, use a white-box modeling approach, and require little data preparation. Another advantage is their ability to handle categorical or numerical data, performing well even when outlier data violate the assumed model. Finally, it is possible to validate such models using statistical data from the dataset, verifying their reliability.
Their limitations relate to over-complex datasets, which can generate overfitting, since their predictions are not continuous; hence, extrapolation is not an expected feature of these models. According to (J. Quinlan, 1986), “the iterative framework cannot be guaranteed to converge on a final tree unless the window can grow to include the entire training set.”

3.2.5 Random Forest

Following the studies by (C. Kumar, S. Chatterjee, T. Oommen, et al., 2020), an RF is formed by decision trees (DTs). Each tree, in turn, is fed with randomly separated input data, and the result of each tree is a vote for the output value classification. They comment that the elaboration of the DTs is fundamental to the model's success because it is with this correctly elaborated definition that the data training is performed. The following input vector is formed by combining the input data with the data the model has just processed, since the unselected data are reallocated, giving rise to the values selected in each DT.
The reallocated information is used to estimate the general precision of the model, so that the DTs' optimization parameters are changed, as the authors explain. In this way, the method builds new DTs, increasing the size of the RF and making the model increasingly intelligent in selecting correlated data. The components described by (C. Kumar, S. Chatterjee, T. Oommen, et al., 2020) are corroborated by the RF model conceived by (R. Forghani, P. Savadjiev, A. Chatterjee, et al., 2020), as shown in Figure 12:

Figure 12 – The Schema Random Forest (RF)

Source: The author, adapted from Forghani et al., 2020.

The authors describe this method as robust to model variations, since RF randomly generates subsamples to train DTs considered weak, increasing the method's accuracy at each iteration. With this, the prediction obtained in each DT is combined to produce an unbiased overall decision based only on the DT structure, trained for exceptions and possible outliers.
This ML technique can be applied in systems with sensor networks to make complex decisions based on data obtained in real time by sensors, as in data-driven SS modeling. Both authors agree that RF presents robustness, but very complex networks can overload the hardware of the system being emulated due to the increased number of trees.
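A minimal sketch of this voting-and-reallocation scheme, assuming scikit-learn's RandomForestClassifier and synthetic data, might look as follows; the out-of-bag score plays the role of the general-precision estimate mentioned above:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# synthetic stand-in for sensor-network data
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# each tree is grown on a bootstrap sample; classification is by majority vote;
# oob_score uses the samples left out of each bootstrap (the "reallocated" data)
# to estimate the general precision without a separate test set
rf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
rf.fit(X, y)
print("out-of-bag accuracy:", rf.oob_score_)
print("vote for one sample:", rf.predict(X[:1]))
```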

3.2.6 Genetic Algorithm

This algorithm mimics the theory of the biological evolution of species in computing environments. It rests on the following premise: recessive genes fail, and the dominant ones are passed on to the next generations. For (P. Domingos, 2015), the main contribution of this tribe is that, instead of just adjusting parameters, it creates a complete learning structure analogous to a brain, enabling high-precision fine adjustments with genetic algorithms.
In the same book, the expert deduces that if the genetic algorithm is emulated for hundreds of thousands of generations, there will be several unequal periods of adaptation over time, followed by stability. Furthermore, the author points out that the algorithm reaches its precision peak near the ideal population or point; from there, the chances of a successful mutation occurring and the new population being significantly better than the previous one decrease sharply. Finally, it is worth mentioning that the mutations of this method are random and follow at least the following steps proposed by Burkowski (2000, p.202):
a) Generate two random populations of vectors with weights between 0 and 1;
b) Use the objective function to assess the fitness of each individual in the population;
c) Adjust the population weights through the difference between the local weight and the general average of the population (selection);
d) Calculate an approximation of the correlation between the individual values of the population (crossover);
e) Produce mutations among the remaining values, producing a new generation;
f) Select survivors through the objective criteria of the objective function;
g) Rerun steps "c", "d", and "e" until the criteria are met;
h) End the simulation and return the best population after this process.
This Genetic Algorithm process is outlined in the flowchart in Figure 13:

Figure 13 – Genetic Algorithm Flowchart

Source: The Author, Adapted from Burkowski, 2000.

With this schematized genetic algorithm, it is possible to develop computational procedures and obtain such results experimentally in any programming language. However, to succeed in these procedures, it is necessary to use a robust dataset, as there is a point below which small populations may not develop (P. Domingos, 2015).
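A simplified sketch of steps "a" to "h" in Python is shown below. The objective function is hypothetical, and the selection and crossover operators are implemented in their common textbook forms (rank selection and one-point crossover) rather than Burkowski's exact weight-difference and correlation operations:

```python
import random

def objective(v):
    # hypothetical fitness: maximize the mean of the weight vector
    return sum(v) / len(v)

def evolve(pop_size=20, dim=8, generations=50, mut_rate=0.1):
    # (a) generate a random population of weight vectors in [0, 1]
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # (b) evaluate each individual with the objective function
        scored = sorted(pop, key=objective, reverse=True)
        # (c)/(f) selection: keep the best half as survivors/parents
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randint(1, dim - 1)
            child = p1[:cut] + p2[cut:]          # (d) one-point crossover
            for i in range(dim):                  # (e) random mutation
                if random.random() < mut_rate:
                    child[i] = random.random()
            children.append(child)
        pop = parents + children                  # (g) next generation
    return max(pop, key=objective)                # (h) best individual

best = evolve()
print(round(objective(best), 3))
```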

3.2.7 Gradient Boosting Machine (GBM)

Gradient Boosting Machine (GBM) is an advanced ensemble learning method that constructs a sequence of weak learners, typically in the form of decision trees, to create a strong overall model. Instead of training all the learners independently, GBM trains each tree to correct the mistakes of its predecessor. This sequential nature ensures that the errors of the previous tree are rectified in the subsequent tree, thus constantly improving the prediction accuracy of the entire model.
Mathematically, given a loss function 𝐿(𝑦, 𝐹(𝑥)), where 𝑦 represents the true
label and 𝐹(𝑥) denotes the predicted label, the equation representing the m-th stage
of boosting can be expressed in equation 6:

$$F_m(x) = F_{m-1}(x) + \alpha \sum_{j=1}^{J} \gamma_{jm} I(x \in R_{jm}) \quad (6)$$

In this equation, F_{m-1}(x) refers to the ensemble model constructed up to the (m-1)-th stage. The parameter α is the learning rate, dictating the step size at each iteration in the search space. The sum runs over all J regions, where R_{jm} represents the j-th region of the m-th tree. The term γ_{jm} is the output value for this specific region, while the indicator function I ensures that the sum is taken over regions where the condition inside the parentheses is met. Friedman (2001) provides a comprehensive exploration of the Gradient Boosting Machine and its potential applications in various domains.
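A minimal sketch of this stagewise procedure, assuming scikit-learn's GradientBoostingRegressor and synthetic data, is given below; the learning_rate parameter plays the role of α in equation 6, and n_estimators sets the number of boosting stages m:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=400, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# learning_rate corresponds to alpha in equation 6; each of the
# n_estimators stages fits a tree to the residual errors of F_{m-1}
gbm = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05,
                                max_depth=3)
gbm.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(gbm.score(X_te, y_te), 3))
```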

3.2.8 XGBoost

Building upon the principles of the Gradient Boosting Machine, XGBoost, or Extreme Gradient Boosting, stands out as a highly efficient and scalable gradient boosting algorithm. Introduced by (T. Chen and C. Guestrin, 2016), XGBoost incorporates the concept of regularization to mitigate the risk of overfitting, thereby enhancing the model's robustness. Moreover, it inherently handles missing values and supports parallel processing, factors that contribute to its heightened computational efficiency and performance across diverse datasets. The objective function that XGBoost aims to minimize is defined by equation 7:

$$Obj(\theta) = \sum_{i=1}^{n} L(y_i, \hat{y}_i) + \sum_{j=1}^{T} \Omega(f_j) \quad (7)$$

Here, the term L signifies the loss function, responsible for gauging the disparity between the real value y_i and its corresponding prediction ŷ_i. The component Ω(f_j) denotes the regularization term associated with the j-th tree, a crucial element distinguishing XGBoost from traditional gradient-boosting methods. This regularization term is further detailed in equation 8:

$$\Omega(f) = \gamma T + \frac{1}{2} \lambda \sum_{j=1}^{T} \omega_j^2 \quad (8)$$

In the above expression, T stands for the number of leaves present in the tree. The regularization parameters γ and λ are employed to control the complexity of the model. Specifically, ω_j represents the score assigned to the j-th leaf of the tree. Chen and Guestrin's (2016) work on XGBoost thoroughly illustrates the underlying mechanisms and the inherent advantages of this gradient-boosting system.
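A minimal sketch using the xgboost Python package (assumed available) is shown below; its gamma and reg_lambda parameters correspond, respectively, to the γ and λ regularization terms of equation 8:

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = X[:, 0] * 2.0 - X[:, 1] + rng.normal(scale=0.1, size=300)

# gamma and reg_lambda map to the regularization parameters gamma and
# lambda of equation 8, penalizing the number of leaves T and the
# leaf weights w_j, respectively
model = XGBRegressor(n_estimators=150, learning_rate=0.1,
                     gamma=0.1, reg_lambda=1.0, max_depth=4)
model.fit(X, y)
print(model.predict(X[:3]))
```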

3.3 MATHEMATICAL BACKGROUND

The mathematical approach section presents four related concepts and a brief state of the art: the time series in 3.3.1, the classification task in 3.3.2, the learning phase in 3.3.3, and the main performance metrics (accuracy, precision, recall, and F1 score) in 3.3.4.

3.3.1 Time Series

According to (S. Aghabozorgi, S. Shirkhorshidi, and Y. Wah, 2015), time series are data points whose index follows a time order, distributed in equal time windows. The authors present one of the most important applications for these classical models:

"With emerging concepts like cloud computing and big data and their vast applications in recent years, research has been increased on unsupervised solutions like clustering algorithms to extract knowledge from this avalanche of data. Clustering time-series data has been used in diverse scientific areas to discover patterns that empower data analysts to extract valuable information from complex and massive datasets. The time-series data is one of the popular data types in clustering problems and is broadly used from gene expression data in biology to stock market analysis in finance".

Hence, the time series approach is present in signal processing, data mining, pattern recognition, control engineering, ML clustering, classification, anomaly detection, forecasting, and other relevant applications. However, time series are not restricted to engineering: other areas, such as economics, biology, mathematics, physics, and medicine, can employ them to present data behavior as a function of time.
An example of a time series applied to the AIRBUS benchmark simulation is presented in figure 14, in which time runs from zero to ten seconds and the series shows the airplane's rod sensor deflection (in rad) together with the FCS command:

Figure 14 – A Time Series Example Employed to the Benchmark

Source: The Author, 2022.
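A minimal sketch, assuming pandas and a hypothetical sinusoidal stand-in for the rod-sensor deflection, illustrates a uniformly sampled series distributed in equal time windows, as in the definition above:

```python
import numpy as np
import pandas as pd

# synthetic stand-in for the rod-sensor deflection: 10 s sampled at 100 Hz,
# i.e., data points indexed in time order over equal time windows
t = np.arange(0.0, 10.0, 0.01)
deflection = 0.02 * np.sin(2 * np.pi * 1.5 * t)          # rad, hypothetical
series = pd.Series(deflection, index=pd.to_timedelta(t, unit="s"))

# equal-window aggregation, e.g., the mean deflection per 1-second window
print(series.resample("1s").mean().head())
```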

3.3.2 Classification Task

For (J. Brownlee, 2016), the classification task in ML is employed to understand the problem domain deeply and enables learning from actual data or examples. Such an area can be divided into four main types, which are:
a) Binary Classification: the most straightforward classification, widely used in e-mail filtering, in which classifiers separate messages into spam or not-spam classes. The most popular ML methods are Logistic Regression, k-Nearest Neighbors, Decision Trees, Support Vector Machine, and Naive Bayes;
b) Multi-Class Classification: whenever there are more than two classes to classify, it can be helpful to use the trained model to predict the probability of a sample belonging to a class label. Its main algorithms are k-Nearest Neighbors, Decision Trees, Naive Bayes, Random Forest, and Gradient Boosting;
c) Multi-Label Classification: this approach involves two or more class labels, of which one or more may be predicted for each sample, and generally uses Multi-label Decision Trees, Multi-label Random Forests, and Multi-label Gradient Boosting methods. The author presents an example in image classification with multiple known objects, in which the model can predict several classes, such as "people", "cars", and "roads", in the same prediction;
d) Imbalanced Classification: finally, this refers to challenges where the training dataset presents unequally balanced classes, with only a minority of samples in a specific class. For this case, cost-sensitive ML techniques are engaged, for example, Cost-sensitive Logistic Regression, Cost-sensitive Decision Trees, and Cost-sensitive Support Vector Machines (a sketch follows below).
Nevertheless, the author states that predictive modeling requires a training dataset with inputs and outputs, enabling accurate measurement.
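As a minimal sketch of the cost-sensitive case in item "d", assuming scikit-learn and synthetic data, the class_weight option of Logistic Regression can be used to penalize minority-class errors more heavily:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# imbalanced binary problem: roughly 5% positives
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" weights errors inversely to class frequency,
# a simple form of cost-sensitive logistic regression
clf = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```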

3.3.3 Learning Phase

According to (F. Lewis, S. Jagannathan, and A. Yeşildirek, 1997), when implementing a Neural Network, the initial weights are uncertain, and tuning techniques are employed to increase the performance. This initial state is the "Learning Phase (LP)." As stated by the authors, such a phase is longer in closed control loops, since they must assure "two things — boundedness of the NN weights and boundedness of the regulation or tracking errors, with the latter being the prime concern of the engineer." Nevertheless, (I. Kononenko and M. Kukar, 2007) explain that, during the LP, the learner is trained with labeled examples (x ∈ X) without making predictions. It then outputs a hypothesis p(x), called "h", to classify the other examples (f ∈ X), which were out of the training dataset. Finally, the author defines the loss functions, square loss (L2), log loss (Llog), and absolute loss (L1), based on the objective function value for the predicted example (f(x)) in equations 9, 10, and 11:

$$L_2(p(x), f(x)) = (f(x) - p(x))^2 \quad (9)$$

$$L_{log}(p(x), f(x)) = -f(x) \log p(x) - (1 - f(x)) \log(1 - p(x)) \quad (10)$$

$$L_1(p(x), f(x)) = |f(x) - p(x)| \quad (11)$$
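The three loss functions can be sketched directly in Python with NumPy; the example values below are hypothetical:

```python
import numpy as np

def squared_loss(p, f):       # L2, equation 9
    return (f - p) ** 2

def log_loss(p, f):           # Llog, equation 10 (f in {0,1}, p in (0,1))
    return -f * np.log(p) - (1 - f) * np.log(1 - p)

def absolute_loss(p, f):      # L1, equation 11
    return np.abs(f - p)

p, f = 0.8, 1.0               # predicted probability vs. true label
print(squared_loss(p, f), log_loss(p, f), absolute_loss(p, f))
```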

3.3.4 Performance Analysis Techniques and Mathematical Background

In this subsection, each of the evaluation metrics for machine learning techniques (Accuracy, Precision, Recall, and F1 Score) is described.
Accuracy is one of the most straightforward evaluation metrics used in machine learning. It quantifies the proportion of correct predictions made by a model relative to the total number of predictions. While accuracy is used ubiquitously, its exploration is largely covered in introductory texts on classification, as stated by (D. Hand and C. Till, 2001). It is suitable for binary and multiclass classification. Equation 12 presents accuracy mathematically:

$$Accuracy = \frac{\text{Number of correct predictions}}{\text{Total number of predictions}} \quad (12)$$

Meanwhile, precision is the proportion of true positive (TP) predictions among all the positive predictions made by the model. It gives insight into the reliability of a positive classification and is particularly important in situations where the cost of a False Positive (FP) is high, such as in medical testing. Therefore, precision indicates how many of the items identified as positive are actually positive, as stated by (C. Manning, P. Raghavan, and H. Schütze, 2008). Equation 13 presents the formula for precision:

$$Precision = \frac{TP}{TP + FP} \quad (13)$$

Recall, also known as Sensitivity or True Positive Rate, measures the proportion of actual positives that were identified correctly. It is crucial in contexts where the cost of a False Negative (FN) is high, such as in cancer diagnosis. Thus, recall evaluates how many of the actual positives were identified. The metric's relevance, especially in information retrieval, is detailed by (C. Manning, P. Raghavan, and H. Schütze, 2008). Equation 14 presents the recall calculation:

$$Recall = \frac{TP}{TP + FN} \quad (14)$$

Finally, the F1 Score is the harmonic mean of precision and recall, providing a balance between them. It is the most appropriate metric when the class distribution is uneven and the costs of false positives and false negatives are roughly equivalent, which makes it especially important for imbalanced datasets, as stated by (D. Lewis, 1991). Equation 15 shows the F1 Score:

$$F1\ Score = 2 \times \frac{Precision \times Recall}{Precision + Recall} \quad (15)$$
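A minimal sketch, assuming scikit-learn and hypothetical label vectors, computes equations 12 to 15 from the confusion matrix:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # hypothetical ground truth
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # hypothetical model output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP/FP/FN/TN:", tp, fp, fn, tn)
print("Accuracy :", accuracy_score(y_true, y_pred))        # equation 12
print("Precision:", precision_score(y_true, y_pred))       # eq. 13: TP/(TP+FP)
print("Recall   :", recall_score(y_true, y_pred))          # eq. 14: TP/(TP+FN)
print("F1 Score :", f1_score(y_true, y_pred))              # equation 15
```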

3.4 FEATURE ENGINEERING

This section explores the Continuous Numeric Data and Categorical Data approaches for Feature Engineering (FEn) and their state of the art in 3.4.1.

3.4.1 Feature Engineering: The State of The Art

The FEn can be defined as a set of data filtering procedures that integrate expert knowledge from the problem domain to transform, integrate, and adjust features, increasing the correlation exploited by ML predictor algorithms, as defined by (A. Gal-Tzur, S. Bekhor, and Y. Barsky, 2022). Moreover, according to (Z. Qadir, S.I. Khan, and E. Khalaji et al., 2021), FEn eliminates low-quality data and selects the most crucial features to reduce computational costs and minimize error.
According to (R. Yao, N. Wang, Z. Liu, et al., 2021), FEn is one of the most
appropriate data processing steps that extracts the main features from datasets.
Meanwhile, (N. Mapes, C. Rodriguez, and P. Chowriappa et al., 2019) extracted
features from a comprehensive dataset with thirty-nine-dimensional features, reducing
the number of features by combining them into new features.
In the (F. Chiarello, P. Belingheri, and G. Fantoni, 2021) study, FEn was classified as a context-dependent data analysis process that identifies meaningful feature representations to increase the accuracy of ML systems. Moreover, FEn is presented by (Z.H. Janjua, D. Kerins, and B. O'Flynn et al., 2022) as a sensitive, knowledge-driven task that specialists perform before applying ML techniques; in their work, this process was employed in medical applications for blood pressure measurements by classifying related symptoms. Nevertheless, (D. Gibert, J. Planes, and C. Mateu et al., 2022) combined FEn and DL to extract features from binary data to classify malware. Finally, (F. Hoppe, J. Hohmann, and M. Knoll et al., 2019) evaluated FEn quality using regression models.
The main ML techniques available for FEn implementation are, according to (C. Joshi, R.K. Ranjan, and V. Bharti, 2021), Correlation Matrix (CM), DL, Fuzzy Logic, Feature Importance (FI), Recursive Feature Selection (RFS), Univariate Feature Selection (UFS), and Principal Component Analysis (PCA). In the authors' words, the "Fuzzy Logic based feature engineering approach first identifies the fuzzy sets from the dataset and uses different fuzzy rules to generate new features."
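Two of the techniques listed above, the Correlation Matrix (CM) and PCA, can be sketched minimally in Python, assuming pandas and scikit-learn and a synthetic dataset with one deliberately redundant feature:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 5)),
                  columns=["f1", "f2", "f3", "f4", "f5"])
df["f5"] = df["f1"] * 0.9 + rng.normal(scale=0.1, size=200)  # redundant feature

# Correlation Matrix (CM): flag highly correlated, low-value features
print(df.corr().round(2))

# PCA: project the features onto components explaining most of the variance
pca = PCA(n_components=3).fit(df)
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
reduced = pca.transform(df)   # dataset reduced from 5 features to 3
```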

3.5 Q.01 MAIN APPLICATION AREAS FOR SS

This section presents the main applications in which Soft Sensors were employed in the SLR papers; 54 of the 59 papers contributed to answering this question. The section is subdivided into industrial applications, aeronautics, chemometrics, cloud computing solutions, health and care solutions, building and household applications, and general applications.

3.5.1 Industrial Applications

There are applications related to car body parts drawing strokes, cited by (R.
Meyes, J. Donauer, and A. Schmeing, et al., 2019), using the SS approach with strain
gauge and laser sensors to digitalize the metal sheet retraction points. The authors
used an RNN architecture to draw the 3D model based on time series data.
In property control of hot-forming-based grain size for steel, the authors (M. Bambach, M. Imram, and I. Sizova et al., 2021) showed an SS implementation as a surrogate model, which improved the accuracy of a cost function measuring the distance between predicted and measured domain boundaries. Another industrial application is cited by (M. Tabba, A. Brahmi, and B. Chouri et al., 2021): a PLC controlling system for industrial digitalization of level detection in a complex scenario using a set of connected sensors.
In manual screwing manufacturing, Soft Sensors can detect activities characterized by body posture and arm and hand activity (L. Günther, S. Kärcher, and T. Bauernhansl, 2019). Meanwhile, for (N. Tvenge, O. Ogorodnyk, and N. Østbø et al., 2020), SS is a crucial component for real-world digitalization, composing the Digital Twin scenario; being more than a simple model, it enables decision-making around further actions of a modeled system.
Another SS Industrial machinery application is shown by (B. Maschler, S.
Ganssloser, A. Hablizel, et al., 2021) by measuring the cylinder pressure to calculate
other relevant combustion parameters in large engines to reach additional
maintenance requirements. As future work, they indicate: "To facilitate the mentioned optimization procedures in smaller engines, too, the use of virtual cylinder pressure sensors is a promising option."
For (W. Lee, G. Mendis, and J. Sutherland, 2019), an SVM can be trained using real-time multi-sensor signals as input to predict the tool wear. Furthermore, according to (S. He, H. Shin, S. Xu, et al., 2020), these applications can utilize low-cost sensors, enabling scalability.
In materials engineering, a Variational Autoencoder combined with Long Short-Term Memory (VAE-LSTM) can filter noisy SS data in materials structure optimization, according to (A. Lew and M. Buehler, 2021). Meanwhile, an electromechanical fault detection benchmark was presented by (T. Grüner, F. Böllhoff, and R. Meisetschläger et al., 2020), using a dataset to generate the model and predict faults based on indirect sensors as model input.
The authors (M. Barton and B. Lennox, 2022) articulate ensemble methods to
apply SS in industrial scenarios for predictive performance. In the automotive industry,
(A. Theissler, J. Pérez-Velázquez, M. Kettelgerdes, et al., 2021) presented fault
detection and predictive maintenance for autonomous vehicles.
The SS applied to the stamping press is explained by (D. Coelho, D. Costa, E.
Rocha, et al., 2022), using Long Short-Term Memory (LSTM) networks to predict
failure in metal stamping processes. Furthermore, the study by (S. Shafiq, E.
Szczerbicki, E. Sanin, et al., 2019) presented SS for data acquisition and visualization
supporting I4.0 in different machining scenarios.
Self-healing robots are the focus of the (E. Roels, S. Terryn, J. Brancart, et al., 2022) study, applying SS to recover the system in case of severe damage. They use self-healing materials such as polymer networks and novel elastomers, with SS-embedded conductive particles such as carbon nanotubes and conductive liquid metals. However, the authors acknowledge one constraint: "The reason is that these healable soft sensors are difficult to model using analytical approaches due to their non-linear behavior and time-variant response."
Even steady-state industrial processes, such as electric arc furnaces, can rely on SS. Accordingly, (A. Blažič, I. Škrjanc, V. Logar, 2021) propose a PSO algorithm for bath temperature estimation using the Takagi-Sugeno fuzzy model.

3.5.2 Soft Sensors Applied to Aeronautics Solutions

The Aeronautics industry can be improved due to the use of “sensors and
software to monitor multiple aspects of aerospace vehicles” (K. Ranasinghe, R.
Sabatini, and A. Gardi et al., 2021). The authors have shown a National Aeronautics
and Space Administration (NASA) application for SS over Vehicle Health Monitoring
(VHM), which provided the vehicle’s failure prognostic or diagnostic management for
predictive maintenance.
Another aerospace application for SS is presented by (V. Henrique, R. Massao,
and G. Reynoso-Meza, 2021) for Oscillatory Failure Case detection using this
technology on the EFCS. The inputs are the command control current and the
feedback signal sensor from the built-in rod sensor. They developed a Simulink design
for failure detection.
The technical note presented by IFAC to solve the AIRBUS benchmark (J.
Engelbrecht and P. Goupil, 2020) details the FCS of a commercial aircraft and its
sensors, systems, power sources, wiring, and many other movable parts.

3.5.3 The Employment of SS in the Chemometrics Industry

The chemical industry is applying ML techniques to SS, instead of statistical learning models, and gathering many benefits, as displayed by (A. Hicks, M. Johnston, and M. Mowbray et al., 2021). These authors further show more benefits for industrial applications: "The preliminary soft-sensor can provide process operators a quick and reasonably accurate prediction of the product quality."
In the chemometrics industry, (B. Negash, L. Tufa, and R. Marappagounder et al., 2016) employed SS for petroleum reservoir volume forecasting:
"Based on a reservoir's recovery mechanism, the conceptual framework will help to systematically select an appropriate model structure from the various model structures available in system identification. The results show that system identification polynomial models can provide very accurate models in a short time to predict the performance of reservoirs under primary and secondary recovery mechanisms. System identification-based reservoir models can be established as a practical, cost-effective, and robust tool for forecasting reservoir fluid production".

According to (M. Zaghloul and G. Achari, 2022), SS developed with AI technology in wastewater treatment plants estimate hard-to-measure parameters in real time, for example, nitrates, ammonium, total phosphorus, and biochemical oxygen demand, from less complex sensor measurements, such as pH, flow rates, and temperature.
SS can serve as a reliable phenomenological model for Fermentation 4.0 bioprocesses to monitor biomass, substrates, and metabolite concentration, as shown by (C. Alarcon and C. Shene, 2021). Furthermore, according to (A. Guzman-Urbina, K. Ouchi, and H. Ohno et al., 2022), the chemical application for SS employs Fuzzy Inference systems and a data-driven technique for Emissions Analytics (FIEMA), aiming to correlate catalyst properties, greenhouse gas (GHG) emissions, and existing process arrangements.
As affirmed by (I. Mendia, S. Gil-López, and I. Landa-Torres, et al., 2022), a physical sensor cannot directly measure the refinery's real-time process. Hence, "The soft sensor provides refinery operators real-time information to adjust operating conditions, maximizing the stability of the desulfurization unit and producing diesel to specification." Other applications of SS in biochemical engineering are expounded by (M. Mowbray, T. Savage, and C. Wu et al., 2021), for example, the microfluidic SS and chemometric analysis using the collected sensor data.
The Neuro-Fuzzy SS approach was applied to a benzene-toluene distillation column to predict the composition using the ANFIS algorithm, as expounded by (E. Jalee and K. Aparna, 2016). Another prediction application was developed by (D. Aguado, G. Noriega-Hevia, and J. Ferrer et al., 2022) to extract the ammonium concentration evolution in fiber membranes using indirect sensor data.
In methane reforming products, an SS implementation is presented by (P. Nkulikiyinka, Y. Yan, F. Güleç, et al., 2020) to compare the success rate of SS against expensive hardware sensors. Moreover, these authors cited that the "use of ANN as a tool for nonlinear soft sensing modeling has been employed in the recent years, particularly regarding the prediction accuracy and saving of computational costs."
As described by (G. van Kollenburg, J. van Es, J. Gerretzen, et al., 2020), the
SS application uses historical process data to improve the chemical process conditions
and control laws using the PLS-Path Modelling.

3.5.4 The Cloud Computing Solutions Based on SS

A work by (H. Paggi, J. Soriano, and V. Rampérez et al., 2013) defines Wireless Sensor Networks (WSN) as interconnected low-cost sensors. Such WSNs are implemented in health monitoring, military target tracking, animal monitoring, smart homes, environmental control systems, and other applications, composing a so-called "Intelligent Space" (IS).
Image processing solutions are present in almost every Cloud Computing (CC)
platform. Then, a Deep Learning approach can solve remote-sensing applications (L.
Ma, Y. Liu, X. Zhang, et al., 2019). Furthermore, other applications are presented by
this author, for example, fusion, segmentation, change detection, and registration.
An Industry 4.0 case study employing soft sensors in a waste-to-energy plant benchmark is presented by (J. Kabugo, S. Jämsä-Jounela, and R. Schiemann et al., 2020), in which a CPS uses an IIoT-based data platform to detect temperature faults and correct the PID gains instead of only displaying alarms. Another I4.0 application using real-time SS was conducted by (D. Ntamo, E. Lopez-Montero, and J. Mack et al., 2022) to monitor the end-point moisture using a data-driven approach.
Another low-cost CC solution, for occupancy estimation, was developed by (K. Rastogi and D. Lohani, 2019) to enable frequent sampling and reduce the need for human involvement. In this sense, (B. Schumucker, F. Trautwein, and R. Hartl et al., 2022) describe a cloud-based environment to measure and simulate cutting forces affecting workpiece quality, monitoring the tool's current level in real time and enabling intelligent decision-making.
In the case study presented by (J. Schimitt, J. Bönig, and T. Borggräfe et al., 2020), an SS is employed for quality inspection using ML and Edge Cloud Computing (ECC) to identify defects in the automobile and electronics industry.

3.5.5 Soft Sensors: Enhancing Health and Care Solutions

Biological transformation employing SS is shown in the (R. Miehe, T. Bauernhansl, M. Beckett, et al., 2020) case study, which "includes the development of multivariate, bio-based, non-invasive and non-consuming sensor techniques and principles as well as soft sensors with underlying process models and new concepts for biosensors."
The authors (P. Zhu, H. Peng, and A. Rwei, 2022) proposed the soft SCG sensor, with a pair of gold electrodes integrated into an electronic tattoo platform to measure the systolic time interval (STI) as a surrogate, non-invasive method to predict blood pressure. Such an SS is shown in Figure 15:

Figure 15 – soft SCG sensor

Source: P. Zhu, H. Peng, and A. Rwei, 2022.

Consequently, after submitting the patient to walking, running, and jumping tests, the graphene pressure sensor produced the results presented in Figure 16:

Figure 16 – Intensity Pressure Signal in Different Actions

Source: P. Zhu, H. Peng, and A. Rwei, 2022.

According to these results, the epidermis communicates with the force-sensing structure via spinous microstructures; the authors then affirm:

"The sensor could measure human physiological signals, such as heartbeat, phonation, and motion. Its array was further utilized to obtain gait states of supination, neutral, and pronation. Moreover, its microstructure offered an alternative method to enhance the performance of pressure sensors and expand their potential applications in detecting human activities".

Another real-time health and care application for oxygen measurements employing SS is presented by (A. Tsopanoglou and I. Jiménez del Val, 2021), focusing on glucose concentration in cell culture dynamics optimization in Critical Quality Attributes (CQAs).
In oncology, SS can play a relevant role in biomarker-based prediction models using ML techniques, as shown in a study by (R. Forghani, P. Savadjiev, and A. Chatterjee et al., 2019).

3.5.6 Building and Household applications

Meanwhile, (S. Baduge, S. Thilakarathna, and J. Perera et al., 2022) focus on Construction 4.0 applications for soft sensors, gathering operational and sensor data to model and anticipate the status of building components. According to the authors, "ML and DL are the core of AI-based applications, being used in the construction industry due to the enhanced computational capacity and the massive amounts of data generated."
An SS was developed to support household electricity load prediction by (M. Shapi, N. Ramil, and L. Awalin, 2021) with ARIMAX, Decision Tree, and Artificial Neural Network ML techniques. Another household application for SS is discussed by (M. Maggipinto, E. Pesavento, and F. Altinier et al., 2019), employing SS for load-weight estimation in washing machines.
A load forecasting case study using SS is presented by (M. Abdel-Basset, H. Hawash, K. Sallam, et al., 2022), implementing a data-driven approach with time series data acquired by residential embedded energy consumption sensors.

3.5.7 The General Applications for SS

An application for Soft Sensors working with ML and Cuckoo Search and
Particle Swarm Optimization (CS-PSO) to quantify performance metrics for athletes
based on data was presented by (M. Ishi, J. Patil, and V. Patil, 2022).

Another challenge solved by Soft Sensors was presented by (O. Fisher, N. Watson, and J. Escrig et al., 2020) as an IIoT technology to compute visual and ultrasonic data and optimize Clean-In-Place (CIP) processes.
As a data-driven technology, SS enables the filtering of unclean data and a more comprehensive range of data to process. As a result, SS can enhance local weather forecasts, according to the (T. Krivec, J. Kocijan, and M. Perne et al., 2021) study, in which sensors acquired wind speed and direction, humidity, solar radiation, and temperature data. Such information provides the SS input data for short-term and long-term weather forecasts in their model of the dispersion of radioactive air pollution.
Using SS can also improve traffic, as stated by (M. Siddiqi, B. Jiang, and R. Asadi et al., 2021): traffic sensor data can fill gaps in data features and improve traffic signal time management.

3.6 Q.02: THE RELATIONSHIP BETWEEN SS AND INDUSTRY 4.0

This section presents the relationship between Soft Sensors and Industry 4.0 in sub-section 3.6.1 and Smart Factories in 3.6.2, present in 27 of the 59 SLR-selected papers.

3.6.1 Soft Sensors in I4.0 Scenario

According to (B. Maschler, S. Ganssloser, and A. Hablizel, et al., 2021), I4.0 digital services are based on SS applications with high-quality data, given that "a great fraction of industrial machinery in use today features only a bare minimum of sensors and retrofitting new ones" is often not economically feasible.
According to (R. Meyes, J. Donauer, and A. Schmeing et al., 2019), the fourth industrial revolution is enhanced by the industrial big data phenomenon fed by sensor data, and SS implementations can improve these sensor systems at the shop-floor level. In addition, (C. Li, Y. Chen, and Y. Shang, 2022) state that real-time data acquisition, collection, and evaluation enabled by SS applications are crucial elements for Industry 4.0 complex production environments.
The data-driven system modeling techniques employing SS are a priority for Industry 4.0 digital manufacturing, as presented by (A. Hicks, M. Johnston, and M. Mowbray et al., 2021). Nevertheless, SS is noticed in an IIoT application representing an I4.0 scenario in the (V. Kocaman and D. Talby, 2022) paper.
Soft Sensors can be used in Cyber-Physical Systems (CPS) to improve ML learners by acquiring data for real-time integration in I4.0, as presented by (N. Tvenge, O. Ogorodnyk, and N. Østbø et al., 2020). However, for (K. Ranasinghe, R. Sabatini, and A. Gardi et al., 2021), such data in the aerospace industry are often challenging to acquire due to "their rarity as well as security restrictions. A feasible solution to this problem is to perform sub-system/component seeded fault testing, which can be quite expensive." Hence, SS is a safer and less expensive alternative for these test cases in the Industry 4.0 scenario.
An I4.0 review was conducted by (S. Baduge, S. Thilakarathna, and J. Perera
et al., 2022) and presented the use of ML algorithms implementing an SS in material
design and optimization. Another use of SS in I4.0 is proposed by (M. Tabba, A.
Brahmi, and B. Chouri et al., 2021) to support automated decision-making approaches
to improve Industrial scenarios.
According to (C. Alarcon and C. Shene, 2021), SS has been classified as an I4.0 technology, since it can be designed based on expert knowledge of the process in model-driven soft sensors. In turn, I4.0 can benefit from SS in sustainable industry, minimizing environmental impacts by increasing resource use efficiency and reducing waste and pollution, as presented by (A. Guzman-Urbina, K. Ouchi, and H. Ohno et al., 2022).
Big data tools are emphasized by (J. Kabugo, S. Jämsä-Jounela, and R. Schiemann et al., 2020), with ML methods among the data-driven models fed by SS in I4.0 composing an IIoT platform for industrial data. Nevertheless, (D. Ntamo, E. Lopez-Montero, and J. Mack et al., 2022) affirm that, in real-time data processing for I4.0, SS enables sustainable process design to reduce industrial environmental footprints.
A study by (M. Barton and B. Lennox, 2022) showed that I4.0 presents ML in
the design of SS, “which are used as replacements for offline measurements of
variables of interest, usually expensive or time-consuming.” Finally, the (A.
Tsopanoglou and I. Jiménez del Val, 2021) paper presents the Biopharma 4.0
paradigm and the prominent role of SS in developing multivariate data analysis.

3.6.2 Soft Sensors Employed in Smart Factories

According to (M. Mowbray, T. Savage, and C. Wu et al., 2021), a Smart Factory is enhanced by SS, since it combines modeling tools (data-driven or model-driven) that can accelerate the industrial scenario. Furthermore, the Intelligent Space (IS) environment is described by (H. Paggi, J. Soriano, and V. Rampérez et al., 2013), in which SS processes robots' information and provides it to human supervisors using interfaces and fusion techniques.
Intelligent systems, in which SS are employed with ML techniques focusing on Smart Factory development and improvements, are presented by (S. Maier, P. Zimmermann, and J. Berger, 2022). A Closed Loop Manufacturing 4.0 (CLM 4.0) architecture is presented by (B. Schumucker, F. Trautwein, and R. Hartl et al., 2022), which employs SS to acquire and process data using ML to influence the control signals.

3.7 Q.03: FEATURE ENGINEERING AND ML APPLIED TO SS

This section presents the leading Feature Engineering (FEn) and ML techniques employed in Soft Sensor implementations, found in 49 of the 59 researched papers. Sub-section 3.7.1 presents the FEn employment found in 20 papers, while 3.7.2 handles the ML approaches cited by 46 papers.

3.7.1 Feature Engineering Employment to SS Development

As presented by (R. Meyes, J. Donauer, and A. Schmeing et al., 2019), a FEn wavelet transformation approach and an ANN-based LSTM were combined with time-series data SS, enabling anomaly detection and prediction in manufacturing applications. In this study, the classifier joins raw data from flange retraction lasers and strain gauges into two feature vectors to simplify the wavelet transform. Such a model outputs "the probability of occurrences of cracks in the future course of the deep drawing process, which is fed into the regression network as an additional feature for the time series prediction."
According to (V. Kocaman and D. Talby, 2022), the CNN architecture can eliminate most FEn steps using bidirectional LSTM over SS-acquired data. Another CNN-for-SS approach is presented by (L. Günther, S. Kärcher, and T. Bauernhansl, 2019) for FEn, in which the "CNN is characterized by scale invariance and can capture local dependencies in data."
Some main benefits of employing FEn to SS ML methods were declared by (M.
Ishi, J. Patil, and V. Patil, 2022). Among them are the dimensional reduction of large
datasets, the selection of significant features to enhance classification performance,
and the reduction of computational costs. In addition, they cited that “Metaheuristic
algorithms are recognized as a viable approach for addressing feature optimization
problems.”
The FEn can reduce the dimensionality of an SS model dataset, according to
(O. Fisher, N. Watson, and J. Escrig et al., 2020). For example, implementing FEn and
a PCA in their study reduced the dataset's dimensionality from 26 features to 9.
Another detailed FEn study for SS dataset feature reduction was carried out by (V.
Henrique, R. Massao, and G. Reynoso-Meza, 2021), in which ten features were
created for the input signal and processed by:

"Simple signal processing techniques include delay, differences, moving average, moving standard deviation, moving root mean squared (RMS) value, and zero-cross counting. The selection of such features considers the limited computational resources available in FCSs."
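A minimal sketch of the quoted feature set, assuming pandas and a hypothetical helper (engineer_features) over a synthetic rod-sensor-like signal, could be written as follows:

```python
import numpy as np
import pandas as pd

def engineer_features(signal: pd.Series, window: int = 20) -> pd.DataFrame:
    """Hypothetical helper building the quoted feature set from one signal."""
    feats = pd.DataFrame(index=signal.index)
    feats["delay"] = signal.shift(1)                      # delayed sample
    feats["diff"] = signal.diff()                         # first difference
    feats["mov_avg"] = signal.rolling(window).mean()      # moving average
    feats["mov_std"] = signal.rolling(window).std()       # moving std deviation
    feats["mov_rms"] = (signal.pow(2)
                        .rolling(window).mean()
                        .pow(0.5))                        # moving RMS value
    sign_change = np.sign(signal).diff().abs().gt(0).astype(float)
    feats["zero_cross"] = sign_change.rolling(window).sum()  # zero-cross count
    return feats

t = np.arange(0, 10, 0.01)
rod = pd.Series(0.02 * np.sin(2 * np.pi * 1.5 * t))       # synthetic signal
print(engineer_features(rod).dropna().head())
```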

In an SS image-based system, FEn is crucial, since deciding which image feature to use for matching enables the DL data-driven scheme to learn them from images, according to (B. Maschler, S. Ganssloser, and A. Hablizel et al., 2021). The paper (M. Shapi, N. Ramil, and L. Awalin, 2021) shows the limitations of the k-NN method for SS implementation in large feature spaces for forecasting, requiring computationally costly FEn solutions.
According to (T. Grüner, F. Böllhoff, and R. Meisetschläger et al., 2020), the ML
pipelines can improve their accuracy, computational complexity, and engineering effort
by FEn. Using the PCA and Recursive Feature Elimination (RFE), their study reduced
the feature space in raw data for fault detection. Two other pieces of research focused
on SS employed FEn for feature space reduction before using PCA (A. Guzman-
Urbina, K. Ouchi, and H. Ohno et al., 2022), and (M. Siddiqi, B. Jiang, and R. Asadi,
et al., 2021).
An approach that involved FEn and ML was applied to data-driven SS development by (K. Rastogi and D. Lohani, 2019), denominated the Feature Scaled Extreme Learning Machine (FS-ELM). This method is a variation of the Extreme Learning Machine (ELM) and significantly improved its performance. In (M. Maggipinto, E. Pesavento, and F. Altinier et al., 2019), FEn is used to extract features from complex data to feed ML algorithms; in this case, more than fifty features were extracted.
Thus, FEn data preprocessing using probabilistic data cleaning and PCA for SS implementation to feed an RF algorithm is displayed in (I. Mendia, S. Gil-López, and I. Landa-Torres, et al., 2022). However, it is stated by (L. Petruschke, J. Walther, and M. Burkhardt et al., 2021) that the DL approach can overcome FEn techniques, because DL inherently performs feature extraction within the classification task.
A PLS model implemented in SS development by (D. Aguado, G. Noriega-
Hevia, and J. Ferrer et al., 2022) showed the importance of FEn in feature extraction
for predicting ammonia concentration using different pH features combined. In
addition, the Out-Of-Bag concept is explained by (P. Nkulikiyinka, Y. Yan, and F. Güleç
et al., 2020) to refer to some samples not used for fitting one single DT in the RF for
SS implementation.
The FEn requirements for extracting new features after the training process are evidenced by (D. Coelho, D. Costa, and E. Rocha et al., 2022) and (R. Forghani, P. Savadjiev, and A. Chatterjee et al., 2019). Since FEn's main objective is to improve the model with optimal parameters only, the (A. Gejji, S. Shukla, and S. Pimparkar et al., 2020) work explored the use of FEn for SS dataset dimensionality reduction using ML methods, for example: ANN, DLNN, DT, SVM, ensemble methods, and logistic regression.

3.7.2 The Machine Learning Approaches for Soft Sensors

ML employment in SS development achieved many benefits in the (H. Paggi, J. Soriano, and V. Rampérez et al., 2013) research, given the simplicity of ANN system modeling without prior knowledge about the system dataset. According to this research, "ANFIS is an example of a neuro-fuzzy SS. ANNs are combined with partial least squares and principal component analysis (PCA)."
In (R. Meyes, J. Donauer, and A. Schmeing et al., 2019), an LSTM was applied to predict failure cases, trained with time-series data acquired by an SS, to classify the failure data. Meanwhile, the study conducted by (C. Li, Y. Chen, and Y. Shang, 2022) summarized the most employed learning algorithms and models in engineering challenges: "Support Vector Machine (SVM), Naive Bayes (NB), K-nearest Neighbors (KNN), Decision Tree (DT), Logistic Regression (LR), Deep Neural Network (DNN), and Convolutional Neural Network (CNN)."
Some data-driven statistical methods were proposed by (M. Zaghloul and G.
Achari, 2022) to identify faults based on the SS approach employing the PCA and ANN
to control the process. Moreover, the PCA technique was embedded in SS in a DL
deep layer of data reduction, providing gains in predictive performance, according to
(M. Abdar, F. Pourpanah, and S. Hussain et al., 2022). Another PCA implementation
for SS is shown in (A. Hicks, M. Johnston, and M. Mowbray et al., 2021) to reduce the
dimensionality of time-series datasets by identifying correlations between output and
input layer variables.
In the (M. Bambach, M. Imram, and I. Sizova et al., 2021) paper, an ANN was used as an SS to predict the final boundary deformation sequence in a steel forming operation. Moreover, cloud platforms play a crucial role in ML solutions for SS: as shown by (V. Kocaman and D. Talby, 2022), the Google Cloud Platform and AWS present contributions for extracting relevant medical information from biosensor data. The study by (B. Negash, L. Tufa, and R. Marappagounder et al., 2016) employed an ANN for a data-driven SS approach to data prediction.
An approach for Structure Health Monitoring was carried out by (S. Baduge, S.
Thilakarathna, and J. Perera et al., 2022), using different sensors (e.g., acoustic
sensors, electromagnetic devices, and Accelerometers) for an SS implementation.
Such features were developed using the following ML techniques: “ANN, DL, Support
Vector Machine (SVM), Principal Component Analysis (PCA), k-Nearest Neighbor
(KNN), and low-rank matrix decomposition.” In addition, many other techniques can be
employed for fault detection using SS, according to (T. Grüner, F. Böllhoff, and R.
Meisetschläger et al., 2020):

“The following ML methods are utilized for the classification of the normal and
fault states in the underlying data set of measurements of the motor current:
Traditional methods: KNN, SVM; Ensemble methods: Random forests;
extreme gradient boosting machines (XGBoost); Deep Learning: Two fully-
connected feed-forward ANNs with three (ANN-3) and 20 (ANN-20) hidden
layers. For the implementation, the Python libraries scikit-learn (KNN, SVM,
Random forests), XGBoost2, and Tensorflow3 (ANNs) are used.”

According to (I. Mendia, S. Gil-López, and I. Landa-Torres, et al., 2022), the most relevant ML prediction algorithms for SS are ANN, k-NN, RF, SVM, and non-linear regressions. Moreover, (A. Mayr, D. Kißkalt, and A. Lomakin et al., 2020) cited: "In particular, artificial neural networks and their subclass of convolutional neural networks are frequently applied." In addition, the implementation of PLS is considered by (G. van Kollenburg, R. Bouman, and T. Offermans et al., 2021) an easy-to-explain technique for SS models in an industrial environment.
An SS onboard system was presented by (K. Ranasinghe, R. Sabatini, and A. Gardi et al., 2021), in which ML algorithms were employed for fault diagnostics. The main ML methods used in their study are: "SVM, GPR, neural networks, Markov Chain, fuzzy logic, and Monte Carlo." The data-driven approach for SS cited in (O. Fisher, N. Watson, and J. Escrig et al., 2020) implemented an ANN for model prediction without initial system knowledge.
The paper (T. Krivec, J. Kocijan, and M. Perne et al., 2021) presented an
alternative for statistical ML methods, the non-parametric and probabilistic Gaussian
Process (GP) model, for an SS data-driven forecast implementation. Meanwhile, the
paper by (V. Henrique, R. Massao, and G. Reynoso-Meza, 2021) employed a DT
algorithm for the SS model in the fault detection application model.
Three steps are presented by (B. Maschler, S. Ganssloser, and A. Hablizel et
al., 2021) for implementing an SS using the MLP approach. These steps consist of
data preprocessing, choosing the ANN type, and structuring it to estimate the engine
parameters using the data acquired from the sensor. A set of ML techniques was cited
by (W. Lee, G. Mendis, and J. Sutherland, 2019) for implementing SS in the data-
driven model, for example, SVM, ANN, FS, and RF. Nevertheless, their main results
were achieved using SVM to maximize the kernel function.
Two categories for multi-sensor fusion ML algorithm were described by (S. He,
H. Shin, and S. Xu et al., 2020): State Vector Fusion and Information Vector Fusion.
The first focuses on local estimations over the sensor network, while the second “refers
to direct or indirect exchanges of local measurements among sensor nodes.”
Moreover, such contribution is relevant for SS network implementation, which can be
employed for large-scale, low-cost solutions.
The study by (A. Guzman-Urbina, K. Ouchi, and H. Ohno et al., 2022) showed that the FIEMA approach for data-driven SS presented higher accuracy than the SVM and ANN ML methods. According to (P. Zhu, H. Peng, and A. Rwei, 2022): "Current machine learning and big-data analytical techniques rely on high-quality data for algorithm training as well as data analysis, highlighting the importance of signal fidelity for wearable sensors."
The Microsoft Azure IoT platform was presented by (J. Kabugo, S. Jämsä-Jounela, and R. Schiemann et al., 2020) for data-driven SS development using the ML techniques available in Azure. In addition, another SS implementation using ML is shown in (D. Ntamo, E. Lopez-Montero, and J. Mack et al., 2022), where these algorithms were employed to predict and estimate hard-to-measure variables.
The SS design is cited as an inferential sensor by (M. Barton and B. Lennox, 2022), using the decision tree ML algorithm to fit and train the model for low bias in a high-variance dataset. Meanwhile, the paper presented by (E. Jalee and K. Aparna, 2016) found in its literature review the employment of ANFIS, SVM, PLS, Kalman Filters, ANN, and Fuzzy Logic with GA for SS implementations; the authors stated:
"This paper uses a new method, nonlinear autoregressive with exogenous input (NARX) based ANFIS for soft sensor modeling. This study proposes a more accurate and predictive model combining the advantages of the neural network, fuzzy inference mechanism, and NARX structure predictability."

Nevertheless, according to (D. Aguado, G. Noriega-Hevia, and J. Ferrer et al., 2022), the principal ML method used in SS development research is the ANN. Moreover, (P. Nkulikiyinka, Y. Yan, and F. Güleç et al., 2020) used ANN and RF algorithms to implement SS models to enhance a data-driven system. Finally, another ANN application, mixed with NARX for SS-supervised fault detection, was studied by (A. Theissler, J. Pérez-Velázquez, and M. Kettelgerdes et al., 2021).

3.8 Q.04: THE METHODS FOR FEATURE ENGINEERING IN SS

This section explores the methods found for Feature Engineering in 3.8.1 and their differences compared to Hyperparameter Tuning (HT) in 3.8.2. Those contributions are present in 23 of the 59 researched papers.

3.8.1 Feature Engineering Enhancing Soft Sensors

The study (R. Meyes, J. Donauer, and A. Schmeing et al., 2019) performed the FEn classification task on Soft Sensor signals, using the raw-signal feature vector and its information in the frequency domain to make predictions over the time-series data. In addition, for a CNN implementation, the hybrid bidirectional LSTM was employed by (V. Kocaman and D. Talby, 2022) to eliminate further FEn procedures.
According to (L. Günther, S. Kärcher, and T. Bauernhansl, 2019), Feature Engineering procedures are considered among the most critical in ML projects, since they avoid the dependency on human experience for feature selection or extraction in SS development. Besides this application, (S. Baduge, S. Thilakarathna, and J. Perera et al., 2022) demonstrate that FEn improves SS versatility and accuracy in ML model development.
In data-driven SS applications, (D. Aguado, G. Noriega-Hevia, and J. Ferrer et al., 2022) proposed FEn to extract features from pH and their direct interactions with input variables. They affirm that "Feature extraction based on the technical knowledge of the process was key to make the development of a reliable data-driven PLS soft-sensor possible." Another data-driven system, studied by (O. Fisher, N. Watson, and J. Escrig et al., 2020), presents three observations on FEn processes employed in SS development: ensuring the model's boundary-testing, fitting, and predicting capacities; accommodating any temporal variation of the system in the collected data; and distributing the data between the defined boundaries.
According to the authors, these procedures must be followed because it is crucial "to ensure the model is capable of fitting data and making predictions throughout the system." Nevertheless, in the (M. Ishi, J. Patil, and V. Patil, 2022) research, many MOO algorithms were employed to reduce the number of hyperparameters in ML techniques for an SS implementation.
An FCS application for SS enhancement using FEn was conducted by (V. Henrique, R. Massao, and G. Reynoso-Meza, 2021) to lead the final prediction using input features, following three steps: data acquisition, model training, and model validation. Finally, a FEn application for SS with embedded ML methods was presented by (L. Ma, Y. Liu, and X. Zhang et al., 2019) in CNN and SVM classifiers for the remote-sensing classifier.
Based on (M. Shapi, N. Ramil, and L. Awalin, 2021), the FEn approach consisted of inputting different sets of features into the ML technique to enhance the energy consumption prediction of a data-driven SS model. According to (T. Grüner, F. Böllhoff, and R. Meisetschläger et al., 2020), FEn could transform a time series into statistical features in a data-driven SS fault detection for an electromechanically driven system.
In the (M. Siddiqi, B. Jiang, and R. Asadi et al., 2021) study, an FEn application was carried out to optimize denoising SS autoencoders, and the authors realized that hand-crafted FEn would be very difficult; therefore, they proposed employing DL for it. On the other hand, another paper (M. Maggipinto, E. Pesavento, and F. Altinier, et al., 2019) employed, during the FEn phase, manual filtering of transient times, temporal averages, and peaks for washing machines with data-driven SS embedded.
Input relevance selection is a concern of (I. Mendia, S. Gil-López, and I. Landa-Torres et al., 2022). Their study combined the Permutation-Based Importance (PIMP) technique and FEn to select the most relevant Random Forest and Gradient Boosting inputs in a real-time SS refinery process. Furthermore, the authors (L. Petruschke, J. Walther, and M. Burkhardt et al., 2021) stated that FEn can improve a DL application with soft sensor data by using logical connections between the identified features.
A FEn method proposed by (M. Mowbray, T. Savage, and C. Wu et al., 2021) relies on feature extraction based on the Self-Organising Maps Discriminant Index (SOMDI). According to them, "This enables interpretation of the reasons for classification prediction and provides insight into the biochemical nature of class differentiation."
The model prediction using SS with DT is improved in the (P. Nkulikiyinka, Y. Yan, and F. Güleç et al., 2020) research, employing the PCA approach as an FEn technique, in which each feature of the acquired dataset is submitted to a split criterion. According to (R. Forghani, P. Savadjiev, and A. Chatterjee et al., 2019), there are three main strategies for Feature Selection (a sketch follows the list):
a) Wrapper Methods: these employ classification algorithms to score feature performance;
b) Filtering Methods: these filter features in the pre-processing procedures without using any classification method;
c) Embedded Methods: the selected features are based on the ML algorithm's performance evaluation over the optimization cost function.
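A minimal sketch of the three strategies, assuming scikit-learn and synthetic data, is given below; RFE stands in for the wrapper strategy, a univariate test for the filtering strategy, and Random Forest importances for the embedded strategy:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=4, random_state=0)

# a) Wrapper: RFE scores features by repeatedly fitting a classifier
wrapper = RFE(LogisticRegression(max_iter=1000),
              n_features_to_select=4).fit(X, y)
print("wrapper keeps:", wrapper.support_)

# b) Filtering: univariate statistics, no classification method involved
filt = SelectKBest(f_classif, k=4).fit(X, y)
print("filter keeps :", filt.get_support())

# c) Embedded: importances are a by-product of fitting the model itself
forest = RandomForestClassifier(random_state=0).fit(X, y)
print("importances  :", forest.feature_importances_.round(2))
```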

3.8.2 Hyperparameter Tuning in Soft Sensors Implementation

Hyperparameter Tuning (HT) is presented by (T. Krivec, J. Kocijan, and M. Perne et al., 2021) to define the maximization of the marginal log-likelihood in ML, employing a data-driven approach for weather forecasting using temperature SS. Another HT application is described by (T. Grüner, F. Böllhoff, and R. Meisetschläger et al., 2020), in which an optimization algorithm changes the parameters of the activation function and the number of neurons.
A study by (M. Mowbray, T. Savage, and C. Wu et al., 2021) contributed to SS ensemble learning with the DT algorithm, selecting the number of trees, the maximum depth of each decision tree, and the learning rate. Nevertheless, DL was employed in the (R. Forghani, P. Savadjiev, and A. Chatterjee et al., 2019) paper to perform prediction using the data-driven SS approach with a CNN; they pointed out, as a disadvantage, the need to enhance the performance by employing HT to tune "the number of convolution filters, the size of the filters, and parameters involved in the pooling."
An application for cutting machines proposed by (L. Petruschke, J. Walther, and M. Burkhardt et al., 2021) conducted HT for some parameters, which were tuned to support the SS performance in ANN and CNN activation functions. The washing machines with a data-driven SS-embedded application proposed by (M. Maggipinto, E. Pesavento, and F. Altinier et al., 2019) showed that Monte Carlo Cross-Validation for HT increased the SS performance. In conclusion, those applications are distinct from FEn, which acts directly on the dataset.
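A minimal sketch of HT, assuming scikit-learn's GridSearchCV over a gradient-boosting model and synthetic data, illustrates the contrast: the search varies model parameters such as the number of trees and their depth, while the dataset itself is left untouched.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=400, n_features=8, random_state=0)

# HT searches over model parameters (number of trees, depth, learning
# rate) using cross-validation; the dataset is not modified, unlike FEn
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100],
                "max_depth": [2, 3],
                "learning_rate": [0.05, 0.1]},
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```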

3.9 SYSTEMATIC LITERATURE REVIEW SUMMARIZED RESULTS

The SLR presented 59 papers relevant to this research field and answered the guideline questions proposed in table 1. The process started by defining the ten procedures explored in section 2.2, the first being the relevant study areas. Then, the research guideline questions were proposed, the keywords were defined in step one, and they were combined to generate search strings for the Science Direct engine.
The first search resulted in 2,153 papers, and by applying the Inclusion and Exclusion Criteria proposed in step seven of table 4, this number was reduced to 109 papers. Hence, the abstracts were screened against the table's seven classification criteria, and only 59 articles remained in the leading research. After reviewing every paper, each question was answered, contributing to the global understanding of SS, ML techniques, Industry 4.0, and Feature Engineering. Figure 17 presents the general frequency percentage of each question:

Figure 17 – Percentage of Questions Answered by the Papers

Q.04
15%

Q.01
35%

Q.03
32%

Q.02
18%

Q.01 Q.02 Q.03 Q.04

Source: The author, 2022.

These papers were read and assessed against the table 1 questions, their main contributions, and whether they presented any relevant Soft Sensor definition that could improve the state of the art. Notably, only 28.81% of the papers presented some SS definition, and less than 40% explored feature engineering at the required level.
Nevertheless, the contribution of each paper achieves the letter "C" specific objective by presenting relevant information about the research areas and each paper's primary information to substantiate the SLR.
4 BENCHMARK DISCUSSION

The benchmark is a company challenge proposed by AIRBUS to the International Federation of Automatic Control (IFAC) to identify faults of oscillatory character in commercial flight control systems. This subchapter contextualizes the case study and details the diagrams the federation provided in R language and Simulink blocks.

4.1 AIRBUS: OFC X IFAC – THE BENCHMARK

According to the technical note of the IFAC benchmark, the flight control systems (FCS) are the most important systems in operation, being responsible for the control of altitude, trajectory, and speed. Such systems sit between the numerous controllers on the pilot's panel and the actuators, comprising all the actuator and sensor systems in the avionics suite. The great advantage of this embedded technology is that it allows advanced electronic control loops to act on the control surfaces, and it must be available for use under any circumstances. Because of this, fault detection is a critical aspect, given the impact of these disturbances on the aircraft's structural design. Thus, the moving mechanism responsible for detecting Oscillatory Failure Cases (OFC) can be seen in the illustration in figure 18:

Figure 18 – Chosen Benchmark’s Mechanism

Source: IFAC, 2020.

The figure above shows that the actuator surface receives analog signals from the FCC (Flight Control Computer), amplified by a gain of K in a proportional block, while the rod sensor feeds the measurement back to the system, closing the control loop. According to the authors, such a system must be able to detect low-frequency faults (below 20 Hz). Other frequencies can be studied in some instances, but only disturbances located on the moving surfaces should be considered.
Due to the nature of these oscillations, the researchers point to the existence of OFCs of a "liquid" character, which occur when a sinusoidal signal is added to the servo-control signal, and of a "solid" character, in which the sinusoidal signal replaces the nominal signal. Thus, it is possible to determine the OFC detection methodology for each described case. Furthermore, the design requirements of this benchmark are defined by the authors and summarized in table 7:

Table 7 – Benchmark requirements proposed by IFAC

BENCHMARK'S REQUIREMENTS
1  Minimum possible amplitude signals must be detected.
2  Signals with a frequency between 1 and 10 Hz must be detected.
3  Fault signals must be detected with at least three oscillation periods, regardless of the OFC frequency.
4  Liquid or solid faults must be detected.
5  Faults in the control and sensor measurement signals must be detected.
6  The fault detection system must not produce false alarms under the following circumstances:
   a) normal flight, with or without turbulence of any level;
   b) control inputs of step, sinusoidal, or chirp type (which increase in frequency as a function of time).
Source: The Author, 2022.

This table summarizes the main requirements of the customer's (AIRBUS) OFC benchmark to assist in solving this critical problem in electronic flight control systems. Thus, the software to be developed must take these aspects into account.

4.2 THE MODEL SYSTEM: DIAGRAMS AND CODE

For the emulation of the benchmark in a virtual environment, Simulink diagrams and codes in R language were provided, containing classes of emulation parameters that must be selected to confirm that the developed software is robust enough to meet the criteria in table 7. The Simulink diagram is shown in figure 19:
Figure 19 – Benchmark Simulink Diagram

Source: IFAC, adapted by the author, 2022.

Figure 19 shows the process simulation plant in Simulink at a macroscopic level, with several signals being sent to the MatLab® workspace, considering turbulence and other variables that can cause OFCs. The plant can be subdivided into four main modules: flight path control, load factor control, the servo simulator with oscillation detection over the surfaces, and the aircraft turbulence dynamics simulator.

4.2.1 Flight Trajectory Angle Control Module

According to the benchmark report, this module receives the settings informed by the user about the flight path angle and defines, through a switch, the aircraft path control mode depending on the mode chosen by the user, among "FPA_CONTROL", "NZ_STEP", "NZ_SIN", and "NZ_CHIRP". Each mode is converted to a number according to the selection in the Aircraft class. This diagram is presented in figure 20:
Figure 20 – Simulink diagram of the trajectory control module

Source: IFAC, adapted by the author, 2022.

In this way, the control signal is selected among control by the Flight Path Angle (FPA), control by unit step signals, control with a proportional sine signal, or control by a chirp-type signal whose frequency increases with time. The signals are then sent to the load factor control block.

4.2.2 Load Factor Control Module

In the last subchapter, the term "load factor" was presented, which can be understood as the ratio between the aerodynamic lift force and the force proportional to the weight of the aircraft. This variable is proportional to the speed and the flight angle, according to the Private Pilot Ground School (2006), and it impacts the flight dynamics, being changeable through the deflection of the airplane's actuators.
This module commands the control surface deflection by measuring the load factor (first feedback) and the gyroscope measurements of the angular rate on each axis (second feedback) to control the flight angle. With that defined, figure 21 shows the load control block:
Figure 21 – Simulink diagram of load factor control module

Source: IFAC, adapted by the author, 2022.

The diagram in figure 21 shows that the controller receives the command signal from the previous block through a saturator. This signal is distributed to a proportional gain block and, in another branch, to a proportional-integral block, and both are added to the two feedback signals (figure 18). The resulting signals enter the load control module's transfer function, which commands the servo simulator block of the oscillation detection surface. It is worth noting that the "Nz_cmd" block stores the control signals sent by the angle control module after saturation, that is, the signal that reaches the proportional and proportional-integral gain systems.
4.2.3 Detection Surface Servo Command Simulator (Real Servo)

This block models the behavior of the control surface servo system, employing an actuator and a rod-type sensor to measure the deflection. The control command is received through the "delta_des" variable and sent to the workspace as the 2D array variable "dx_comm". The block output is the estimated deflection of the control surface measured by the rod-type sensor. Finally, the "Real Servo" module is fed back with the "delta" signal, the exact rod sensor deflection measurement without considering the sensor noise. To illustrate these situations, figure 22 below shows the Simulink diagram connecting all the described variables and blocks:

Figure 22 – Simulink diagram servo control simulator

Source: IFAC, adapted by the author, 2022.

Figure 22 shows that the "servo" block sends to the workspace an array containing the control command, the deflection measured by the sensor, and the deflection measurement without the simulated sensor noise. This module is detailed in the diagram in figure 23:
Figure 23 – Simulink Diagram “Real Servo”

Source: IFAC, adapted by the author, 2022.

The diagram in figure 23 shows that the control signal from the load factor controller passes through a saturator and a rate limiter, keeping it within limits acceptable in real situations. After these steps, the rod sensor angle signal is converted to a position and inserted into input 1 of the "plant 1" block, with the "delta" output feedback at the second input. The diagram of the "plant 1" module, responsible for simulating the dynamics of the sensor positioning system as a function of the control loop performance and the noise-free feedback "p_des", can be seen in figure 24:

Figure 24 – Sensor Position Process Plant Simulink Module

Source: IFAC, adapted by the author, 2022.

Figure 24 shows the blocks that emulate the flight system identification under adverse conditions in the Simulink model. The "F_aero" signal is combined with the last corrected sensor position signal and transformed into a binary signal, which can be 1 or -1 depending on the sensor reading. According to the benchmark organizers, the oscillatory fault can be produced at the output of the servo command, feeding the closed positioning control loop, as modeled in figure 18. The simulated rod sensor positions "Rod_Sensor" and "rod_pos" are both obtained; however, one is given without noise (the ideal situation) to feed back the main simulation loop, while the other includes noise, simulating a natural process.

4.2.4 Aircraft Turbulence Dynamics Simulator

The module that simulates the system's turbulence is presented in the Simulink diagram in figure 25:

Figure 25 – Simulink diagram for turbulence simulation

Source: IFAC, adapted by the author, 2022.

In this step, a selector receives the turbulence mode selected by the user, which can be no turbulence (zero), light turbulence (one), moderate turbulence (two), or severe turbulence (three). These variables carry the ".mat" extension because they are stored in MatLab® files containing real turbulence data series classified into the groups presented above. The signals are then split into three by a demultiplexer, and two of them are sent to the diagram in figure 18. The remaining signals have their amplitudes multiplied by a gain "1/V_trim", a turbulence parameter, and a deflection angle parameter; the result is converted from radians to degrees, multiplexed again, and sent to a closed loop with other gains and an integrator that reconditions the turbulence signal through the Von Karman turbulence model, natively available in MatLab®. With the turbulence generation block explained, the final part of the diagram can be presented, in which the data is filtered and sent to the workspace, as shown in figure 26:

Figure 26 – Simulink diagram for the presentation of results

Source: IFAC, adapted by the author, 2022.

At the end of the process, the data is received by a demultiplexer block, which transforms a batch of data into parallel signals that, in turn, are sent to the process loop feedback, to filters, or directly to the workspace, thus becoming available for use or query.
5 RESULTS: THE CRITICAL ANALYSIS

Based on the SLR, this work resulted in a Soft Sensor implementation for the presented fault detection benchmark. This critical analysis is divided into six parts: the Soft Sensor software development in 5.1, the MATLAB® integration with the Python language in 5.2, the benchmark testing in 5.3, the results found by each of the applied methods (SVM, MLP, and DT) in 5.4, the demand for feature engineering in this benchmark solution in 5.5, and the proposed framework in 5.6.

5.1 SOFT SENSOR: THE SOFTWARE DEVELOPMENT

The Soft Sensor software development started in the author's final control and automation engineering project, presented by (M. Feliciano and G. Reynoso-Meza, 2020), with a different purpose: to be commercial software providing real-time graphs and detailed datasheets of the acquired data. For this reason, it presents a Graphical User Interface (GUI), which can be seen in figure 27:

Figure 27 – Soft Sensor Software GUI

Source: M. Feliciano, G. Reynoso-Meza, 2020.

The GUI in figure 27 allows the user to configure the simulation parameters and set the folder where the Simulink™ models or the data acquisition file are located. It then presents the performed steps to the user, plotting the graphs in real-time or simulating the real-time process in the case of emulation. The benchmark button, in addition to asking the user for the directory where the data is located, also enables the 'start' button and a combo box to choose the case study.
In the next step, the user defines the OFC source, divided into four groups: 'current' or 'cs_current' to measure the sensor amplitude and variance (bias) in mA (milliamps), or 'sensor' or 'cs_sensor' to measure these quantities in millimeters. With the source defined, the OFC type must be provided: 'none' (parameterizes some variables to null), 'liquid' (a sinusoidal signal is added to the servo-control signal), or 'solid' (a sinusoid replaces the nominal signal of the disturbance). The user must then choose the turbulence type: 'none', which generates no turbulence; 'light', which generates slight turbulence; 'moderate', which causes moderate turbulence in the simulation; or 'severe', which generates severe turbulence in the emulated system.
After defining the turbulence, the user chooses the type of control to be applied to the plant. The first is the flight path angle control mode, 'FPA_Control', which uses an analysis of past points to correct the current output in the closed loop. The 'NZ_STEP' type works with unit step control, 'NZ_Chirp' uses a sine signal with variable frequency, and 'NZ_Sine' uses a control signal given by a sinusoid. It is worth mentioning that all control signals act together with the blocks shown in figure 20.
With this configured, the user must set the wave amplitude, the sensor error or variance, and the OFC frequency for the case study (in real cases, this parameter is imposed by the process itself). The user then proceeds to the final parameterization phase, where the benchmark dataset and the training method are defined. The available methods are Support Vector Machine (SVM), Decision Tree (DT), and Multi-Layer Perceptron (MLP, representative of the neural network family). Before starting, the user must provide the total simulation time or, in the case of data acquisition, the software interprets it as infinite. With all these variables defined, table 8 explains each parameter:
Table 8 – Software scenario parameters

PARAMETER       | POSSIBLE VALUES       | DESCRIPTION
OFC Source      | sensor, cs_sensor     | Rod sensor measured in millimeters.
                | current, cs_current   | Rod sensor measured in mA.
OFC Type        | none                  | No OFC.
                | liquid                | OFC with a sine signal added to the servo-control signal.
                | solid                 | The sinusoidal signal overrides the nominal signal of the disturbance.
Turbulence Type | none                  | No turbulence.
                | light                 | Light turbulence, with real light turbulence data.
                | moderate              | Moderate turbulence, with real moderate turbulence data.
                | severe                | Severe turbulence, with real severe turbulence data.
Control Type    | FPA_Control           | Flight path angle control, using past points to correct the output.
                | NZ_step               | Control by step functions.
                | NZ_chirp              | Control with a variable-frequency signal.
                | NZ_sine               | Constant-frequency sine signal control.
Amplitude       | (0.01, 10) mm / mA    | Angular amplitude measured by the sensor.
Sensor bias     | (0.001, 1.25) mm / mA | Measurement sensor error, given in the unit corresponding to the source.
OFC Frequency   | (0.1π, 20π) rad/s     | Frequency of the disturbance, entered by the user in simulations or imposed by real usage situations in real applications.
Source: M. Feliciano, G. Reynoso-Meza, 2020.
5.2 MATLAB® AND PYTHON INTEGRATION

Once the parameters are set to start the simulation for this project's case study, Python starts the MatLab® API, making it possible to execute commands through code inside a virtual workspace and to declare all the variables used in the Simulink simulation. The library used was developed by MathWorks and is installed by downloading their engine API, not via "pip install".
The average computational cost to open the MATLAB® API on the computer proposed in subchapter 1.4.2 is approximately 5 seconds, and emulating the case study plant in Simulink takes around 3 seconds for simulated processes of 5 to 60 seconds in duration.
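As a hedged sketch of this integration (the model and variable names below are hypothetical placeholders, not the benchmark's exact identifiers), the engine can be driven as follows:

```python
# Sketch of driving a Simulink model via the MATLAB Engine API for Python.
# 'benchmark_model', 'amplitude', 'ofc_frequency', and 'dx_comm' are
# assumed names for illustration only.
import matlab.engine

eng = matlab.engine.start_matlab()  # ~5 s startup on the test machine

# Declare simulation parameters in the engine workspace.
eng.workspace["amplitude"] = 0.5
eng.workspace["ofc_frequency"] = 1.5

# Run the Simulink model inside the engine.
eng.eval("sim('benchmark_model')", nargout=0)

# Retrieve a logged signal from the workspace for further processing.
dx_comm = eng.workspace["dx_comm"]
eng.quit()
```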
The data loading function sends commands through the API and creates a virtual environment that emulates the avionics system in Simulink™. The same techniques are employed to emulate the obtained datasets in the case of real-time data acquisition, in which case the graph construction functions remain, but the inputs are received directly from MATLAB®. The complete running process of this software is presented in the diagram in figure 28:

Figure 28 – Feature Engineering Framework Workflow

Source: The Author, 2022.

The framework workflow presented in figure 28 starts with input through the interface to set up all simulation parameters in MATLAB® Simulink™. Then, the Feature Engineering methods are applied to the data acquired through the API channel, selecting, extracting, and filtering the most relevant features in real-time.
Meanwhile, the communication with Python occurs through the API, and the data flows from the simulation directly to the implemented ML methods. In this case study, the SVM, DT, and MLP methods were implemented using the sklearn library, although many other ML or optimization methods could be employed. The Python language was chosen due to its versatility for Cloud Computing applications and aeronautics solutions, as shown in the SS implementations of 3.5.4 and 3.5.2. The processed data from the process simulation is stored in a MariaDB database instance, which runs locally but can also run in the cloud and be processed remotely in real-time.
The developed interface presents graphs to the user, as shown in figure 28, using the simulation data after applying FEn and ML over a time-series model to identify and display the possible Oscillatory Failure Cases. These are presented as red crosses in figure 29:

Figure 29 – OFC Identification in Benchmark Simulation Example

Source: The Author, 2022.

Figure 29 shows an example of OFC identification with the proposed framework before using FEn to improve the data features for the ML method. Additionally, PDF (Portable Document Format) reports and Excel spreadsheets in '.xlsx' format can be exported from the software for further use in other studies.
5.3 BENCHMARK TESTING

In this phase, the results produced by the software are presented. These results include a report on the steps performed by the software, a summary of general data, a report highlighting only the failures that occurred, and an exported chart in image format. The experiment was conducted by analyzing variations of the parameters for each of the classifiers mentioned above, as shown in table 9:

Table 9 – Experiment scenarios and their parameters

Parameters        | Ideal | Light  | Moderate | Severe
OFC Source        | Sensor (all scenarios)
OFC Type          | None  | Liquid | Solid    | Liquid
Turbulence        | None  | Light  | Moderate | Severe
Control type      | FPA_Control (all scenarios)
Amplitude (mA)    | 0.10  | 0.50   | 1.50     | 3.00
Bias (mA)         | 0.01  | 0.10   | 0.30     | 1.00
Frequency (rad/s) | 3.0π  | 1.5π   | 0.5π     | 0.1π
Time (s)          | 10 (all scenarios)
Source: The Author, 2022.

From table 9, one can infer that the oscillation source is kept constant, as it would not be meaningful to change the source while observing how the methods evaluate failures, and changing the control method would not be viable either. The other variables compose the different scenarios. For example, the OFC type influences the process by introducing sinusoidal signals into the simulated response, causing more unstable scenarios, and turbulence also increases the possibility of failures.
The oscillation amplitude parameter affects the accuracy with which the Machine Learning (ML) method measures the signals: the higher the amplitude, the better the method's response should be. As the scenario becomes more unstable, the oscillation amplitude increases, making measurement more imprecise and challenging the methods in the scenarios emulated in MATLAB® Simulink.
Lastly, the oscillation frequency decreases as the scenario worsens; low amplitudes are therefore more critical when measuring oscillatory failures, as explained by the benchmark proposers.
5.4 THE ML METHODS APPLICATION IN SS DEVELOPMENT RESULTS

The main results obtained by the software respond to this project's objective of identifying the best among the three implemented methods (SVM, DT, and MLP). The control surface deflection amplitude and the sensor amplitude are evaluated in degrees for each scenario and ML method, along with the identified failure percentage, without considering type I and type II errors, which enter only in the last column.
For that, the four scenarios were emulated with each of the methods implemented in the software. The main variables, such as the maximum amplitude of the control command and the sensor deflection response, the number of identified failures, and the confusion matrix performance, are discussed in the analysis of table 10 to determine the best identification method, given by the confusion matrix trace:

Table 10 – Experiment Results

ML Method                | Scenario | Control Surface Amplitude | Sensor Amplitude | Identified Failure Percentage | Confusion Matrix Performance
Decision Tree            | Ideal    | 0.0075° | 0.164° | 10.00%    | 55.00%
                         | Light    | 0.12°   | 0.50°  | 33.50%    | 44.00%
                         | Moderate | 33°     | 50°    | 56.25%    | 57.50%
                         | Severe   | 1.29°   | 2.40°  | 64.25%    | 49.75%
Support Vector Machine   | Ideal    | 0.0225° | 0.15°  | 0.00%     | 55.00%
(SVM)                    | Light    | 0.12°   | 0.47°  | 0.00%     | 31.50%
                         | Moderate | 50°     | 32°    | 1.50%     | 45.25%
                         | Severe   | 1.28°   | 2.26°  | 0.00%     | 53.50%
Neural Networks:         | Ideal    | 0.0075° | 0.161° | 100.00%   | 45.00%
Multi-Layer Perceptron   | Light    | 0.12°   | 0.45°  | 100.00%   | 68.50%
(MLP)                    | Moderate | 50°     | 34°    | 100.00%   | 64.25%
                         | Severe   | 1.59°   | 2.25°  | 100.00%   | 46.50%
MatLab: Decision Tree    | Ideal    | 0.012°  | 0.155° | REFERENCE | 48.25%
                         | Light    | 0.15°   | 0.51°  | REFERENCE | 68.75%
                         | Moderate | 31°     | 50°    | REFERENCE | 64.00%
                         | Severe   | 1.18°   | 1.56°  | REFERENCE | 46.75%
Source: M. Feliciano, G. Reynoso-Meza, 2020.
As evidenced in table 10, the three methods were tested in the four scenarios and observed through the confusion matrices condensed in the table above. The amplitude columns of the control command and the sensor reading are included to ensure that the scenarios are not biased. The fifth column contains the percentage of failures identified among the 400 data points emulated in each test, corresponding to 10 seconds of simulation. Finally, the performance according to the confusion matrix is presented in the last column, based on the matrix trace. This column is computed against the data considered ideal, obtained by the Decision Tree classifier implemented in MatLab® with the help of specific toolboxes.
Regarding the methods discussed in table 10, starting with the MLP, this method appears the most assertive because 100% of the analyzed data were identified as failures. However, this is due to the bias of classifying all data as failures. This fact undermines the user's trust in the software because, with 100% of the data flagged as failures, there is no way to distinguish which ones are real in practical situations, making this method unfeasible for this project.
On the other hand, the SVM method classifies most data as non-faults, even when faults are present. Tolerating undetected faults is one of the greatest dangers in the aerospace engineering scenario where this case study takes place. Thus, the number of false negatives is alarming and rules this method out for application in the benchmark, since this type of classification error cannot be accepted.
The Decision Tree method, implemented in Python, proved reliable, since it identified OFCs consistent with the reference in practically all scenarios. It presented few false negatives, correctly flagging the failures where they occurred. Another relevant characteristic is its tendency to produce false positives (table 10), flagging OFCs in data that contained no failure.
Thus, with the results arranged for analysis, the average of false positives (type I error) was weighted at 20%; this figure is relevant because it represents failures that were identified but did not exist. The weight of false negatives (type II error) was set to 50%, since failing to identify a fault is more critical than raising a false alarm, justifying the higher weight in this scenario. The remaining 30% refers to the average precision of the methods across the scenarios, so the weighting field is obtained by summing the three terms weighted by the percentages described above. Table 11 summarizes this critical analysis for each method:

Table 11 – Critical Analysis of Obtained Results

Ranking | ML Technique           | Error Type I | Error Type II | Accuracy
1°      | Decision Tree          | 30.63%       | 17.81%        | 49.06%
2°      | Support Vector Machine | 5.00%        | 52.44%        | 46.31%
3°      | Multi-Layer Perceptron | 43.94%       | 0.00%         | 57.81%
Source: M. Feliciano, G. Reynoso-Meza, 2020.

As can be seen from table 11, because the SVM identified few failures in every scenario, its number of false positives is minimal; the opposite holds for the MLP method, which identified all data as failures and therefore scored no false negatives. The DT method presented many false positives and a considerable share of false negatives. Nevertheless, as a result of this analysis, the only machine learning method among the three analyzed (DT, SVM, and MLP) that is reliable for application to the case study proposed by AIRBUS to IFAC is the DT.
The caveat is that it is possible to work on feature engineering of the learning dataset (according to Annex C) so that the classification presents sharper performance, which is explored in the next chapter. The MVP version of the software still retains the other classifiers due to the possibility of acquiring data in real-time, and the methods that did not present satisfactory performance may be re-evaluated for other processes.

5.5 THE DEMAND FOR FEATURE ENGINEERING IN SS

After evaluating the presented results and considering the conclusion of (V. Ribeiro, R. Kagami, and G. Reynoso-Meza, 2020) about the DT results on this benchmark, employing feature engineering became a clear demand. They provided a block diagram for Simulink™ implementation in the discrete domain, using the Z-transform to improve the feature selection in this case study, as shown in figure 30:
Figure 30 – Feature Engineering Block Diagram for Simulink™ Implementation

Source: V. Ribeiro, R. Kagami, and G. Reynoso-Meza, 2020.


The diagram depicts a signal processing system with multiple parallel branches
for data analysis. The primary input undergoes a "Moving Average" operation to
smooth out data. It then passes through various delay elements, which vary in the
range presented in equation 16:

$[z^{-20}, z^{-1}]$ (16)

Post-delay, the data streams are combined with the primary data and processed using "Moving RMS" to measure the data's magnitude. Outputs a to j result from these processes, each corresponding to specific delay and combination operations. Additionally, there are inputs c and d, with c subjected to a "Moving Average" and d to a "Moving Standard Deviation", assessing the data's average and variability, respectively. The system's design emphasizes trend identification and variability assessment.
In this sense, at least ten features are created for each input signal, totaling a window of forty data points processed by blocks of moving average, moving variance, moving average square, and even zero detection. The data production block capable of training the model uses the methods of the diagram above; it guarantees the quality of the information acquired by a genuine sensor, which may be of the rod type, and these techniques were responsible for ensuring a reliable training base. A minimal sketch of such window features follows.
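As a hedged sketch (window length and column naming are assumptions, not the exact Simulink block parameters), these window features can be reproduced with pandas rolling operations:

```python
# Sketch of the moving-window features described above, using pandas rolling
# operations on a synthetic stand-in signal.
import numpy as np
import pandas as pd

signal = pd.Series(np.sin(np.linspace(0, 20, 400)))  # stand-in sensor signal
w = 20  # window length matching the z^-20 ... z^-1 delay range

feats = pd.DataFrame({
    "moving_avg": signal.rolling(w).mean(),
    "moving_var": signal.rolling(w).var(),
    "moving_avg_sq": signal.pow(2).rolling(w).mean(),  # moving average square
    "moving_rms": signal.pow(2).rolling(w).mean().pow(0.5),
    "moving_std": signal.rolling(w).std(),
    # zero detection: count of sign changes inside each window
    "zero_cross": np.sign(signal).diff().ne(0).astype(int).rolling(w).sum(),
})
for k in range(1, w + 1):  # delayed copies of the raw signal
    feats[f"delay_{k}"] = signal.shift(k)
```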

5.6 FRAMEWORK PROPOSAL FOR SOLVING THE CHALLENGE

This section addresses this need by proposing a comprehensive framework. This framework is not merely a juxtaposition of the studied components but a nuanced synthesis designed to optimize the predictive process of OFCs in aerospace systems. It unfolds the proposed framework in detail, outlining its structure, implementation strategy, anticipated challenges, and expected outcomes. The framework's design is predicated on the insights garnered from the preceding chapters, ensuring that each component is purposefully aligned with the overarching goal of enhanced OFC prediction. This proposal represents the culmination of the research conducted, offering a practical, scalable, and robust solution to a critical challenge in aerospace engineering.
5.6.1 Introduction to the Framework

In the evolving landscape of aerospace engineering, advanced predictive methodologies are required, particularly in the context of oscillatory failure cases (OFCs). This dissertation has explored the interplay between soft sensors, feature engineering, and machine learning models, paving the way for an integrated approach to enhance OFC prediction. The proposed framework, therefore, emerges as a culmination of these explorations, aimed at synthesizing these components into a cohesive, efficient system for predicting and preempting OFCs in aerospace applications.
The necessity of such a framework is rooted in the increasing complexity of
aerospace systems. Traditional monitoring methods are often inadequate for these
sophisticated systems, necessitating a more evolved analytical approach. This
framework is envisaged to transform the predictive maintenance landscape in
aerospace engineering, leveraging data in unprecedented ways to anticipate and
mitigate failure scenarios.

5.6.2 Components of the Framework

The framework is composed of several integral parts, each contributing uniquely to the overall objective of accurate OFC prediction.
The first aspect, Data Acquisition, involves the strategic collection of operational
data from aerospace systems. Utilizing an array of soft sensors, this phase is tasked
with gathering diverse parameters, crucial for constructing a comprehensive dataset.
The quality, granularity, and relevance of this data are critical, forming the foundation
for the subsequent analysis phases.
Following data acquisition, the Feature Engineering phase becomes pivotal.
Here, the focus is on extracting and refining features that hold significant predictive
value for OFCs. This process is dynamic, adapting to the changing patterns in the data
and employing advanced techniques to distill complex signals into meaningful,
predictive insights. Central to the framework is the Machine Learning Models phase.
While Decision Trees have been highlighted for their efficacy, the framework adopts a
flexible approach, welcoming a variety of algorithms tailored to the data's
characteristics and the specific nature of OFCs.
The final phase, Evaluation, encompasses a thorough assessment of the predictive model's performance. This goes beyond traditional accuracy and precision metrics to include an evaluation of the model's adaptability, scalability, and robustness in practical scenarios.

5.6.3 Implementation Strategy

Implementing this framework requires a strategic approach, blending technological and methodological considerations. For Data Acquisition, selecting appropriate soft sensors is crucial. These sensors must capture high-fidelity data that is comprehensive and pertinent to aerospace systems' dynamics.
In Feature Engineering, the application of feature selection, feature extraction, and time-series analysis techniques is essential to transform raw data into actionable insights. The Machine Learning Models phase involves not just decision trees but also other algorithms such as Random Forests, Gradient Boosting, and XGBoost, depending on the data's complexity and traits.
The Evaluation phase employs a mix of cross-validation techniques and
performance metrics analysis, supplemented with real-world testing, to ensure the
model's efficacy and reliability.

5.6.4 Anticipated Challenges and Solutions

Despite this research's focus on the proposed benchmark, data generalization remains a critical challenge. The diversity of aerospace systems means that models trained on specific datasets may not perform well universally. To tackle this, transfer learning techniques can be investigated, allowing a model to adapt its knowledge from one system to other similar systems. This approach can significantly improve model portability and effectiveness across different aerospace systems.
Another anticipated challenge is the computational complexity, especially when
deploying the framework in real-time environments. To address this, this research
explores the use of optimized machine learning algorithms that balance predictive
accuracy with computational efficiency. Additionally, implementing cloud-based
computing solutions could provide the necessary computational resources without
overburdening the onboard system resources.
5.6.5 Expected Outcomes and Impact

The effective deployment of this framework is expected to significantly enhance the accuracy and efficiency of OFC predictions in aerospace systems. This will not only reduce the incidence of unanticipated failures but also lead to more proactive maintenance strategies.
Beyond operational efficiencies, the framework's impact extends to setting new
standards in advanced data analytics for high-stakes engineering environments,
potentially influencing future practices in aerospace and related sectors. The broader
impact of this work is envisioned in the form of enhanced safety, reliability, and
efficiency in aerospace operations, contributing to sustainable and safer air travel.
Moreover, the successful implementation of this framework could serve as a
model for other high-stakes industries, demonstrating the potential of integrated data-
driven approaches in complex engineering systems. The insights gained from this
research could pave the way for advancements in predictive maintenance across
various domains, leading to broader industrial and societal benefits.
6 FEATURE ENGINEERING FRAMEWORK RESULTS

This chapter showcases the development of a feature engineering framework to evaluate multiple machine learning techniques against different sets of features. It starts by applying FEn to the dataset in 6.1; section 6.2 then presents the training of four different ML methods on the benchmark's output. Section 6.3 carries out the identification of OFCs using the methods enriched with FEn, and the performance analysis is performed in 6.4. Finally, 6.5 presents the framework results.

6.1 EMPLOYING FEATURE ENGINEERING TO THE DATASET

To apply feature engineering to the presented dataset in real-time, the 'featuretools' library is necessary. It prepares the dataset before training the models, ensuring that the input data has enough quality to train the models and to improve the results in identifying OFCs.
The feature engineering process commenced with a meticulous examination of the available dataset. This step was critical to understanding the trainable data's structure, the relationships between different features, and the presence of anomalies or patterns that could influence the machine learning model's performance.
Before employing any feature engineering techniques, it was imperative to conduct data preprocessing. This involved cleaning the data across the 40 features, addressing missing and inconsistent values, and normalizing the dataset. Ensuring the dataset was free from anomalies or errors was crucial to guarantee the quality of the extracted features and the performance of the subsequent machine learning model.
After data preprocessing, the next phase involved applying feature engineering techniques to generate new attributes or features. This process transforms the raw data into a format that improves the machine learning model's ability to discern patterns and enhances its predictive performance.
Feature extraction techniques such as moving average, moving variance, moving average square, and zero detection were utilized. The moving average technique was applied to smooth out fluctuations or noise and provide a general data trend over a specified period. The moving variance offered an understanding of the data's variability, which can often reveal valuable insights about the underlying patterns.
The moving average square was employed to highlight any trends in the
dispersion or scatter of the data. On the other hand, zero detection was used to find
instances where the signal crosses the zero level, which can often indicate significant
events in time-series data, such as control signals or sensor readings.
Applying these techniques generated at least ten additional features for each
input signal. This process effectively enhanced the original dataset, rendering it more
conducive to learning by the machine learning model. Notably, each newly generated
feature played a crucial role in capturing the inherent patterns and relationships
embedded within the data. This comprehensive set of features significantly fortified the
model's predictive capabilities, enabling it to make more precise and accurate
predictions.
The engineered features were seamlessly integrated with the original dataset,
creating an enhanced dataset comprising 50 features. Including these engineered
features augmented the dataset's information content, potentially improving the
models' performance and their ability to extract meaningful insights from the data. This
new dataset was expected to improve the model's performance, leading to more
accurate and reliable predictions of OFCs.
This systematic feature engineering approach emphasized data preparation's
importance in machine learning tasks. By transforming and enriching the original
dataset, it was anticipated that the enhanced features would significantly improve the
machine learning model's ability to detect and predict Oscillatory Failure Cases
accurately. The subsequent steps would then evaluate the impact of these engineered
features on the model's performance using the code in Attachment A.

6.1.1 Explaining the Developed Code

To accomplish section 6.1, several steps were undertaken, primarily focused on advanced feature engineering, an essential aspect of machine learning. The aim was to transform the raw data into a format easily digested by a machine learning model, improving the accuracy of the predictions and insights derived from the data. The code can be found in Attachment A.
Initially, the two datasets, 'X' and 'Y', were loaded from separate CSV files using the pandas library's read_csv function. The dataset 'X' comprised 40 features, and 'Y' contained the output, or target, variable. Subsequently, for simplicity and ease of reference, the columns in dataset 'X' were renamed to 'f-0', 'f-1', ..., 'f-39', while the output column in the 'Y' dataset was renamed to 'Output'. The renaming was executed through a Python list comprehension, which is concise and efficient.
The data were then scrutinized for missing values. Where data were missing, imputation was used to handle the inconsistencies; in this case, the mean value of the respective column replaced any missing values.
After imputation, the dataset was normalized using the `StandardScaler` from the sklearn library. This step was critical to ensure that all features were on a similar scale, preventing any single feature from dominating the others due to its magnitude. A minimal sketch of these preprocessing steps is shown below.
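The following sketch assumes placeholder file names ('X.csv', 'Y.csv'); it illustrates the described steps, not the exact Attachment A code:

```python
# Hedged sketch of the preprocessing described above: load, rename, impute,
# and scale. File names are placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler

X = pd.read_csv("X.csv")
Y = pd.read_csv("Y.csv")

X.columns = [f"f-{i}" for i in range(X.shape[1])]  # 'f-0' ... 'f-39'
Y.columns = ["Output"]

X = X.fillna(X.mean())  # mean imputation per column

X_scaled = pd.DataFrame(StandardScaler().fit_transform(X), columns=X.columns)
```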
Once the normalization was performed, datasets 'X' and 'Y' were combined into a single dataframe. This was done to simplify the feature engineering and transformation process, allowing operations to be performed on the entire dataset simultaneously.
The next significant step involved splitting the dataset into multiple "sessions" to facilitate deep feature synthesis. To ensure each session is unique, the `split_into_sessions` function was adjusted to assign a unique session ID to each row. After that, deep feature synthesis was conducted with the help of the Featuretools library: the entity set was created, each session was added as an entity, and the `dfs` function automatically created new features using the specified aggregation primitives.
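A minimal, hedged sketch of this synthesis step (featuretools 1.x API, with synthetic data and an assumed fixed session length) could look like:

```python
# Sketch of deep feature synthesis with featuretools on synthetic data;
# session construction and primitives are illustrative assumptions.
import numpy as np
import pandas as pd
import featuretools as ft

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(400, 3)), columns=["f-0", "f-1", "f-2"])
data["session_id"] = data.index // 100  # fixed-length sessions (assumption)
data["row_id"] = data.index

es = ft.EntitySet(id="benchmark")
es = es.add_dataframe(dataframe_name="readings", dataframe=data,
                      index="row_id")
es = es.normalize_dataframe(base_dataframe_name="readings",
                            new_dataframe_name="sessions",
                            index="session_id")

# Aggregate the readings of each session into new, higher-level features.
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="sessions",
                                      agg_primitives=["mean", "std", "max"])
```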
In conclusion, the successful execution of this stage was pivotal in extracting more complex, high-level features from the raw data, significantly contributing to the enhancement of the machine learning model's performance. The entire process underlines the importance of data preprocessing and feature engineering in machine learning and data science projects.
6.2 TRAINING ML ALGORITHMS WITH THE FEATURED DATASET

Following the successful feature engineering process described in 6.1, the Decision Tree (DT) algorithm and the other algorithms were trained using the enhanced dataset, in addition to the 40-feature versions of the models presented in 5.4. This stage involved revisiting the DT model to ensure it could effectively utilize the newly created features.
The DT model was selected due to its ability to handle high-dimensional data
and interpretability. In addition, its inherent nature to handle non-linear relationships
and its robustness against outliers were deemed suitable for this application.
The enhanced dataset was divided into two subsets to begin the training
process: a training set and a test set. The training set was utilized to train the DT model,
while the test set was set aside to evaluate the model's performance. Data partitioning
was conducted to ensure a good representation of the whole dataset in both subsets,
considering the distribution of different data classes.
The DT model was trained using the Scikit-learn library in Python, a powerful
tool known for its efficiency and flexibility. During the training process, the DT algorithm
was tasked to create a tree-like decision model based on the featured dataset's new
features. This process efficiently identified the best features to partition the data into
its respective classes.
Furthermore, to avoid overfitting, the DT model's complexity was controlled
through hyperparameters such as the maximum depth of the tree and the minimum
number of samples required at a leaf node. The optimal values of these
hyperparameters were determined using techniques like cross-validation and grid
search.
Once the models were trained, they were tested on the test set, with 20% of the
available data, including the new features. Next, the model's predictions were
compared with the actual labels of the test set, and a confusion matrix was constructed
to display the model’s efficiency, as stated in 5.4, but now for the enhanced models.
Finally, the performance of the DT model was evaluated based on metrics derived from
the confusion matrix, including accuracy, precision, recall, and F1 score.
An object-oriented approach was employed to streamline the model selection, training, and evaluation process. The Python class `ModelSelectionAndEvaluation` (present in Attachment B) was defined to encapsulate all the methods for this pipeline stage. Each method within the class represents a step in model training and evaluation.
The method `split_data()` was employed to partition the dataset into features
(`X`) and targets (`Y`). A conventional train-test split was conducted, designating 80%
of the data for training and the remaining 20% for testing. Ensuring a balance between
training and testing data is crucial for creating an effective model. Too little training
data may lead to underfitting, while excessive training data at the expense of testing
data may result in an overfit model.
The `select_model()` method selected a random forest classifier as the predictive model. Random forest was chosen for its robustness and generalization capability: it is an ensemble method that constructs multiple decision trees at training time and outputs the class that is the mode of the individual trees' classes. A condensed sketch of the class, including the training and evaluation methods described next, is shown below.
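The following is a hedged, condensed sketch of how such a class might be organized (the full implementation is in Attachment B; the data here is synthetic):

```python
# Condensed sketch of the ModelSelectionAndEvaluation pipeline described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split


class ModelSelectionAndEvaluation:
    def __init__(self, X, Y):
        self.X, self.Y = X, Y

    def split_data(self):
        # 80% training / 20% testing, as described in the text.
        (self.X_train, self.X_test,
         self.y_train, self.y_test) = train_test_split(
            self.X, self.Y, test_size=0.2, random_state=42)

    def select_model(self):
        self.model = RandomForestClassifier(random_state=42)

    def train_model(self):
        self.model.fit(self.X_train, self.y_train)

    def evaluate_model(self):
        print(classification_report(self.y_test,
                                    self.model.predict(self.X_test)))


# Usage with a synthetic stand-in for the 50-feature dataset.
X, y = make_classification(n_samples=1000, n_features=50, random_state=42)
pipeline = ModelSelectionAndEvaluation(X, y)
pipeline.split_data()
pipeline.select_model()
pipeline.train_model()
pipeline.evaluate_model()
```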
The `train_model()` method facilitated the training of the selected model using
the training data. This involves feeding the model with the feature variables and their
corresponding target values to learn the underlying patterns in the data.
Following model training, the model's performance was evaluated using the test
data that was initially set aside. The `evaluate_model()` method predicted the target
values for the feature variables in the test data and compared them to the actual target
values. The results were then printed to provide insight into the model's predictive
accuracy and the precision, recall, and F1 score for each class. Table 12 shows the
results of the code:

Table 12 – Classification Report for Decision Tree

Class | Precision | Recall | F1-Score | Support
0     | 0.69      | 0.70   | 0.70     | 13,617
1     | 0.70      | 0.70   | 0.70     | 14,031
Source: The Author, 2023.

As can be seen from table 12, 'support' refers to the number of actual occurrences of each class in the specified dataset. For instance, when evaluating a binary classifier's performance, the support gives the number of samples whose true label belongs to that class.
The 'support' metric can provide valuable context for understanding the other metrics in the classification report. For example, a high precision or recall value may not be as meaningful if the support is low, as the metric is then calculated over fewer instances.
As per the results obtained, the model's accuracy is approximately 0.70,
suggesting that 70% of the predictions made by the model on the test data are correct.
The classification report reveals that the precision, recall, and F1 scores for both
classes (0 and 1) are also around 0.70. These values indicate a balanced model
performance for both classes, suggesting that the model performs similarly well in
predicting both classes.
While the model's performance seems satisfactory, additional steps can be
taken to improve its predictive capabilities. For instance, hyperparameter tuning can
be performed to optimize the model's performance. Additionally, other models could
be tested and compared to find the one that best fits the data. The following stages of
this research could delve into these advanced techniques for model optimization.
In conclusion, training the DT model with the enhanced dataset by Feature
Engineering represented a critical phase in this investigation. By using the newly
created features, the model was expected to have an improved ability to identify and
predict Oscillatory Failure Cases accurately. The subsequent sections will further
elaborate on the DT model's performance after employing feature engineering to the
dataset.

6.3 IDENTIFYING OFCS WITH FEN IN THE ACQUIRED DATA

Once the Decision Tree (DT) model was appropriately trained with the featured
dataset, the next step was to deploy it to identify Oscillatory Failure Cases (OFCs) in
the newly acquired data. This process involved feeding the model with the data
acquired from the simulated environment, enriched by the Feature Engineering
process, and observing the prediction results.
The enriched dataset comprised numerous features from the primary sensor
readings, contributing to a more comprehensive understanding of the system's state.
These new features, including statistical attributes like moving averages, variance, and
zero detection, were designed to capture essential aspects of the sensor readings that
might indicate an OFC's presence.
Once these enriched datasets were prepared and structured adequately, they
were presented to the trained DT model. The model then traversed its learned decision
paths, basing its decision on the thresholds and rules learned during the training phase.
The DT model delivered a classification output for each instance in the dataset.
This output was either a prediction of the presence or absence of an OFC based on
the inherent patterns learned from the featured dataset. The model's results were then
compared to the actual state of the system to evaluate the DT model's effectiveness
at identifying OFCs.
Identifying OFCs in the acquired data with the trained DT model represented a
crucial validation of the developed model. Moreover, it served as a testament to the
effectiveness of feature engineering in enhancing the model's predictive capabilities.
The subsequent analysis would shed more light on the DT model's performance,
emphasizing the impact of feature engineering on the system's predictive performance.

6.3.1 Explaining the Identification Process Code

The identification process of Oscillatory Failure Cases (OFCs) with Feature Engineering in the acquired data involves a pipeline that integrates various steps. The Python code used to accomplish this process, presented in Attachment C, follows the organized sequence of tasks discussed below.
Firstly, the necessary libraries are imported for data manipulation, feature extraction, machine learning modeling, and model evaluation; these include Scikit-learn, pandas, numpy, and scipy. Next, the code defines multiple helper functions to facilitate the process (a minimal sketch follows the list):
- The `load_csv_data()` function is employed to load the data files for the sensor
readings (`data_x`) and corresponding failure status (`data_y`).
- The `generate_time_windows()` function is utilized to split the time-series data
into separate windows based on a specified `window_size` and `step` size.
- The `prepare_datasets()` function generates input and output datasets for the
model by applying the feature extraction function to each time window.
- The `extract_features()` function calculates statistical features (mean,
standard deviation, and skewness) for each time window to capture the essential
characteristics of the sensor readings.
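A hedged sketch of these helpers (the window size, step, and the window-labeling rule below are assumptions, not the exact Attachment C values):

```python
# Sketch of the helper functions listed above, with illustrative parameters.
import numpy as np
import pandas as pd
from scipy.stats import skew


def load_csv_data(path_x, path_y):
    return pd.read_csv(path_x), pd.read_csv(path_y)


def generate_time_windows(data, window_size=50, step=10):
    # Slide a fixed-length window over the time series.
    return [data[i:i + window_size]
            for i in range(0, len(data) - window_size + 1, step)]


def extract_features(window):
    # Statistical summary of one window: mean, std, and skewness per column.
    values = np.asarray(window, dtype=float)
    return np.concatenate([values.mean(axis=0),
                           values.std(axis=0),
                           skew(values, axis=0)])


def prepare_datasets(data_x, data_y, window_size=50, step=10):
    X = np.array([extract_features(w) for w in
                  generate_time_windows(data_x.values, window_size, step)])
    # Assumption: label each window by the majority failure status inside it.
    y = np.array([int(w.mean() >= 0.5) for w in
                  generate_time_windows(data_y.values.ravel(),
                                        window_size, step)])
    return X, y
```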
Following the definition of these helper functions, the `load_csv_data()` function is invoked to load the sensor data and failure status from the specified CSV files. Then, the code employs the `prepare_datasets()` function to prepare the datasets for the machine learning model, including time window creation and feature extraction.
The next step in the pipeline involves Feature Engineering. In this case, the
code applies Standard Scaling to the input data. This transformation standardizes the
features by removing the mean and scaling to unit variance, a common requirement
for many machine learning estimators.
Once the data has been adequately processed and prepared, it is divided into
training and testing sets. This split, conducted using the `train_test_split()` function
from Scikit-learn, allocates 70% of the data for training the model and the remaining
30% for evaluating the model's performance.
After preparing the data, the code proceeds to model training. A Decision Tree
Classifier model is instantiated and trained using the training data with the `fit()`
function.
In summary, the Python code described in this section demonstrates a comprehensive pipeline for identifying OFCs in the acquired data, integrating data loading, preprocessing, feature extraction, model training, and performance evaluation. Applying this pipeline allowed the successful identification of OFCs in the newly acquired data using the trained Decision Tree model.

6.4 PERFORMANCE ANALYSIS

The performance of the Decision Tree (DT) model, trained with the featured dataset and deployed for Oscillatory Failure Case (OFC) identification, was then analyzed. The main objective was to evaluate its accuracy, precision, recall, and F1 score, which are critical indicators of its prediction capability and the overall efficiency of the process.
Accuracy was first assessed, representing the proportion of total predictions the
model made correctly, both for the presence and absence of OFCs. Such a process
gave a broad overview of how well the DT model performed overall in classifying the
acquired data.
Precision and recall, on the other hand, provided more detail on the model's performance. Precision quantified how many predicted OFCs were actual OFCs, highlighting the model's ability to avoid false positives. Recall, alternatively, quantified how many actual OFCs were correctly identified by the model, focusing on its ability to detect true positives and avoid false negatives.
Finally, the F1 score was calculated to give a balanced measure of the model's
precision and recall. This metric is beneficial when the cost of false positives and false
negatives is roughly equivalent, as it ensures that both aspects are considered in
evaluating the model's performance.
The DT model's performance metrics were then compared against the original
model (without feature engineering) and the industry standards to gauge the overall
effectiveness of the applied feature engineering process. This comparison assessed
how much improvement was obtained through feature engineering, validating its
application in this context.
It was also necessary to conduct further analysis by diving deeper into each
OFC scenario (Ideal, Light, Moderate, Severe). This was to ascertain the model's
performance under different conditions, as the model must maintain high performance
consistently across various scenarios.
The performance analysis served as a crucial step in assessing the benefits of
feature engineering on the DT model's effectiveness. In addition, it helped determine
whether the Feature Engineering process contributed to improving the model's ability
to detect OFCs accurately and would inform future directions for further improving the
model.

6.4.1 Performance Metrics Calculation

The provided code is structured within the `PerformanceAnalysis` class and is purposed for evaluating the trained decision tree model's performance.
The class is initialized with three parameters: the trained decision tree model,
the testing data, and the corresponding test labels. Upon initialization, the model's
predictions for the test data are computed and stored in `y_pred`. A dictionary,
`performance_metrics`, is also instantiated to retain the calculated performance
metrics.
Following the initialization, the `calculate_metrics` method is invoked. This method calculates the accuracy, precision, recall, and F1 score based on the model's predictions. Accuracy measures the model's overall performance by evaluating the ratio of correct predictions against all predictions. Precision offers insight into the model's ability to avoid false positives, quantifying the proportion of predicted positives that were indeed positive. Recall, or sensitivity, measures the proportion of actual positives correctly classified, providing insight into the model's ability to detect all potential positive cases. Lastly, the F1 score, calculated as the harmonic mean of precision and recall, offers a balanced view of these two metrics.
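In terms of the confusion matrix counts (true positives $TP$, true negatives $TN$, false positives $FP$, and false negatives $FN$), these standard definitions read: $\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$, $\text{Precision} = \frac{TP}{TP + FP}$, $\text{Recall} = \frac{TP}{TP + FN}$, and $F_1 = 2\,\frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$.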
After the metrics are calculated and stored in the `performance_metrics`
dictionary, they are presented in a neat, tabular format using the `display_metrics`
method. This method transforms the dictionary into a pandas DataFrame, facilitating
an easy-to-interpret view of the model's performance metrics.
Further extending the evaluation, the `plot_confusion_matrix` method is
employed. This method visualizes the model's performance using a confusion matrix,
a tabular layout representing the instances in predicted classes against those in actual
classes. When the `normalize` parameter is set to True, the confusion matrix presents
proportions rather than absolute counts, aiding in the interpretability of the results.
The `PerformanceAnalysis` class facilitates an in-depth analysis of the trained decision tree model's performance, highlighting its strengths and weaknesses. The calculated metrics and confusion matrix offer valuable insights into the model's predictive ability, providing the necessary understanding for future enhancements. A condensed sketch of the class follows.
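The sketch below is a hedged reconstruction of the described methods (the full version is in Attachment D; the heatmap rendering via seaborn is an assumption):

```python
# Condensed sketch of the PerformanceAnalysis class described above.
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns  # assumption: used only for rendering the heatmap
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)


class PerformanceAnalysis:
    def __init__(self, model, X_test, y_test):
        self.y_test = y_test
        self.y_pred = model.predict(X_test)  # predictions computed on init
        self.performance_metrics = {}

    def calculate_metrics(self):
        self.performance_metrics = {
            "Accuracy": accuracy_score(self.y_test, self.y_pred),
            "Precision": precision_score(self.y_test, self.y_pred),
            "Recall": recall_score(self.y_test, self.y_pred),
            "F1-score": f1_score(self.y_test, self.y_pred),
        }

    def display_metrics(self):
        # Tabular view of the metrics via a pandas DataFrame.
        print(pd.DataFrame([self.performance_metrics]))

    def plot_confusion_matrix(self, normalize=True):
        cm = confusion_matrix(self.y_test, self.y_pred,
                              normalize="true" if normalize else None)
        sns.heatmap(cm, annot=True, fmt=".2f" if normalize else "d")
        plt.xlabel("Predicted")
        plt.ylabel("Actual")
        plt.show()
```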

6.4.2 Model Performance and Comparison

The calculated performance metrics were then analyzed and compared with the
original model (without feature engineering) and the industry standards to evaluate the
improvements gained through feature engineering.
Comparing the accuracy of the DT model after feature engineering with the original model gave a direct measure of how much the predictive ability had improved.
Similarly, the Precision, Recall, and F1-score metrics provided detailed insights into
the model's improved performance regarding false positives and negatives and the
balance between Precision and Recall.
Moreover, comparing these performance metrics with industry standards gave
a relative measure of how well the DT model performed in the industry context. It also
highlighted the DT model's suitability for real-world applications and its
competitiveness with existing solutions.

6.4.3 Performance Improvement Assessment

Finally, the relative improvement in the DT model's performance was assessed by comparing the performance metrics before and after feature engineering. This
process helped quantify the enhancements achieved through feature engineering,
thereby validating its importance in ML tasks, particularly for identifying OFCs.
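As a simple illustration of how this quantification can be expressed, consider the following sketch; the numbers used in it are placeholders, not results from this study.

```python
def relative_improvement(before: float, after: float) -> float:
    """Relative gain of a metric after feature engineering, in percent."""
    return (after - before) / before * 100.0

# Hypothetical usage with placeholder values:
gain = relative_improvement(before=0.65, after=0.70)
print(f"Accuracy improved by {gain:.1f}% relative to the baseline")
```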
If the performance metrics showed a considerable improvement, it would indicate that feature engineering effectively enhanced the predictive ability of the DT model. Hence, feature engineering could be adopted as a standard step in machine learning tasks for identifying OFCs, improving system stability and operational efficiency.
In contrast, if the performance metrics did not significantly improve, it indicated
the need for alternative strategies for improving the DT model's predictive ability. This
could involve using more complex machine learning algorithms, advanced feature
engineering techniques, or better data preprocessing and cleaning methods.
In conclusion, the performance analysis section aimed to evaluate the feature
engineering process's effectiveness in improving the DT model's predictive ability. The
findings from this analysis would provide insights into how to improve the process
further and enhance the model's effectiveness in identifying OFCs in the future.

6.4.4 Confusion Matrix Interpretation for Machine Learning methods

The confusion matrix is a tool for evaluating classifier performance and is incorporated into the `PerformanceAnalysis` class presented in Attachment D. By providing a visual overview of the classifier's true and false positives and negatives, it aids in identifying any model bias and helps better understand its overall accuracy. The `generate_confusion_matrix` method is designed for this purpose within the class. It takes as input the actual and predicted labels, in this case `y_test` and `y_pred`.
The method then utilizes the `confusion_matrix` function from the `sklearn.
metrics` module, passing in the actual and predicted labels as parameters. This
function constructs a confusion matrix, which is a 2x2 array, with each cell representing
a unique combination of actual and predicted classes: True Positives (TP), False
Positives (FP), True Negatives (TN), and False Negatives (FN). The outputted table
for the Decision Tree can be seen in Figure 31:

Figure 31 – Confusion Matrix results for Decision Tree.

Source: The Author, 2023.

In the generated confusion matrix, 72% of instances were accurately identified as not being OFCs (true negatives), while 28% of instances were falsely recognized
as OFCs when they were not (false positives). On the other hand, the model incorrectly
classified 32% of the OFCs as not being such (false negatives) while successfully
recognizing 68% of the OFCs as such (true positives).
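For reference, these proportions can be written out directly as the row-normalized matrix behind Figure 31 (values transcribed from the figure):

```python
import numpy as np

# Row-normalized confusion matrix for the Decision Tree (Figure 31):
# rows are true classes, columns are predicted classes.
cm_dt = np.array([[0.72, 0.28],    # class 0: true negatives, false positives
                  [0.32, 0.68]])   # class 1: false negatives, true positives
assert np.allclose(cm_dt.sum(axis=1), 1.0)   # each row sums to 100%
```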
The same test was performed for the other three methods, XGBoost, Random
Forest, and Gradient Boosting.
The results for the XGBoost technique are illustrated through a confusion matrix in Figure 32, which is a common way to evaluate the performance of a classification
model. In the given confusion matrix, the x-axis represents the predicted labels, and
the y-axis represents the true labels. The values inside the matrix show the proportion
of instances that were correctly or incorrectly classified.

Figure 32 – Confusion Matrix results for XGBoost.

Source: The Author, 2023.

In the top-left corner of the matrix, the value is approximately 0.69. This value
indicates that around 69% of the instances in class 0 (True Negative) were correctly
predicted by the XGBoost model as class 0. Conversely, the top-right corner of the
matrix shows a value of approximately 0.31, suggesting that around 31% of the
instances in class 0 were incorrectly predicted as class 1 (False Positive).
Therefore, moving to the bottom row of the matrix, the bottom-left corner has a
value of around 0.32. This value represents the False Negative rate, meaning about
32% of the instances in class 1 were incorrectly classified as class 0. The bottom-right
corner of the matrix has a value of around 0.68, indicating that roughly 68% of the
instances in class 1 were correctly classified as class 1 (True Positive) by the model.
Furthermore, the figure also reports additional performance metrics: Accuracy, Precision, Recall, and F1-score. For the XGBoost model, the accuracy is around 0.684, which means about 68.4% of the total instances were correctly classified. The Precision, the ratio of true positive predictions to the total positive predictions, is around 0.684. The Recall, the ratio of true positive predictions to the total actual positives, is also around 0.684. Lastly, the F1-score, the harmonic mean of precision and recall, is approximately 0.681. The F1-score accounts for incorrectly classified cases better than the accuracy metric, especially when the class distribution is uneven.
These results provide a comprehensive view of how well the XGBoost model
performed for this classification task.
Similarly, the Random Forest results are depicted through a confusion matrix in Figure 33, which helps to evaluate its performance in the classification task. In this confusion matrix, the x-axis represents the predicted labels, while the y-axis signifies the true labels. The values within the matrix demonstrate the proportions of instances classified correctly or incorrectly.

Figure 33 – Confusion Matrix results for Random Forest.

Source: The Author, 2023.

In the top-left corner of the matrix, there is a value of about 0.70, suggesting that
approximately 70% of the instances that belong to class 0 were accurately predicted
as class 0 by the Random Forest model (True Negatives). On the other hand, the top-
right corner shows a value close to 0.30, indicating that around 30% of the instances
in class 0 were incorrectly predicted as class 1 (False Positives).
Meanwhile, examining the bottom row of the matrix in Figure 33, the bottom-left
corner holds a value near 0.30, representing the False Negative rate, meaning that
about 30% of instances in class 1 were incorrectly classified as class 0. The bottom-
right corner of the matrix contains a value of approximately 0.70, denoting that roughly
70% of instances in class 1 were accurately classified as class 1 by the model (True
Positives).
Additionally, the image provides other performance metrics - Accuracy,
Precision, Recall, and F1-score. The accuracy of the Random Forest model is around
0.697, meaning that about 69.7% of the total instances were correctly classified. The
Precision, which reflects the proportion of true positive predictions to the total positive
predictions, is roughly 0.697. The Recall, or the proportion of true positive predictions
to the total actual positives, is also about 0.697. Finally, the F1-score, the harmonic
mean of precision and recall, is around 0.697. This metric provides a more balanced
measure of the incorrectly classified cases than the Accuracy Metric, especially when
the class distribution is uneven.
These results provide a thorough understanding of the Random Forest model's
performance in this classification task.
Meanwhile, the Gradient Boosting technique results are presented through another confusion matrix in Figure 34. Similar to the previous examples, the x-axis
represents the predicted labels, and the y-axis represents the true labels. The matrix’s
values display the proportions of instances that were classified correctly or incorrectly.
In the top-left corner of the matrix, there is a value close to 0.71, indicating that
approximately 71% of the instances in class 0 were correctly predicted as class 0 by
the Gradient Boosting model (True Negatives). The top-right corner, with a value
around 0.29, shows that about 29% of the instances in class 0 were incorrectly
predicted as class 1 (False Positives). As can be observed:

Figure 34 – Confusion Matrix results for Gradient Boosting.

Source: The Author, 2023.

Based on this graph, moving to the bottom row, the cell in row 1, column 0 holds a value near 0.50, representing the False Negative rate, indicating that about 50% of instances in class 1 were incorrectly classified as class 0. The bottom-right corner of the matrix, which typically shows True Positives, is not visible in this case. However, since each row must total 100%, it can be inferred that this value would be around 0.50 (100% - 50%).
Additional performance metrics include Accuracy, Precision, Recall, and F1-
score. The accuracy of the Gradient Boosting model is around 0.606, which means
about 60.6% of the total instances were correctly classified. The Precision is the ratio
of true positive predictions to the total positive predictions, which is about 0.665. The Recall, the ratio of true positive predictions to the total actual positives, is about 0.50. Finally, the F1-score, the harmonic mean of precision and recall, is roughly 0.600.
These results provide insight into the Gradient Boosting model's performance
for this classification task. It is noticeable that the model has room for improvement,
especially regarding the high False Negative rate.

This method offered nuanced insights into the model's performance, not just in
general accuracy but in the context of different misclassifications.
Such precise knowledge of false positives and negatives was invaluable in
understanding the model's behavior in identifying Oscillatory Failure Cases, a crucial
detail considering the potentially high costs of misclassification in this context. The
confusion matrix, thus, served as a beneficial tool for evaluating the model's
robustness and adaptability to handle complex classifications.
Finally, in evaluating the four machine learning methods for the classification
task, Random Forest emerged as the top performer with an accuracy of approximately
69.7% and an F1-score of around 0.697. The Decision Tree method closely trailed,
boasting an accuracy of roughly 69.6% and an F1-score of about 0.698, making it
almost on par with Random Forest and a viable alternative. Meanwhile, XGBoost,
although a potent model, ranked third with an accuracy of around 68.4% and an F1-
score of approximately 0.681. Gradient Boosting lagged behind the rest, manifesting
the weakest performance with an accuracy of about 60.6% and an F1-score near
0.600. The subtle differences between Random Forest and Decision Tree make them
both strong contenders, while Gradient Boosting would necessitate further optimization
for this particular task.

6.5 INTELLIGENT SYSTEM FRAMEWORK FOR SS PERFORMANCE ENHANCEMENT

Enhancing Soft Sensor performance in the domain of Oscillatory Failure Anomaly detection within an aircraft's Flight Control System (FCS) requires an
intersection of several computational disciplines. An intelligent system framework that
melds the realms of real-time data processing, machine learning, and feature
engineering is crucial to realizing the potential of Soft Sensors in this complex
environment.

6.5.1 Framework Code Overview

The code presented in Attachment E provides a detailed insight into the intelligent system framework designed to enhance Soft Sensor performance. The framework integrates MATLAB® and Python, two powerful computational tools, to simulate and analyze the performance of Soft Sensors in detecting Oscillatory Failure Anomalies within an aircraft's FCS.
The MATLAB engine is imported into Python, allowing for seamless interaction
between the two environments. A named tuple structure, `ParameterScenario`, is
defined to encapsulate various parameters related to the simulation scenarios. Four
distinct scenarios, namely ideal, light, moderate, and severe, are then defined using
this structure. These scenarios represent different conditions or states of the FCS,
each with its unique set of parameters.
A function, `test_matlab`, is introduced to simulate the Soft Sensor's
performance in MATLAB using the predefined scenarios. This function initializes the
MATLAB engine, sets the necessary paths, and loads the required variables. It then
configures the simulation parameters based on the provided scenario and initiates the
simulation. The function returns the simulation results, which include the desired
control input, measured control input, time, and whether an oscillatory failure anomaly
was detected.
Another function, `plot_data`, is designed to visualize the simulation results. It
converts the MATLAB engine object into a NumPy array and plots the desired and
measured control inputs against time. Points where oscillatory failure anomalies are
detected are highlighted in the plot.
Finally, the code iterates over the predefined scenarios, simulates each scenario using the `test_matlab` function, and appends the results to a list. The results for each scenario are printed for further analysis.
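
A condensed sketch of this flow is given below. The scenario fields, the benchmark path, and the MATLAB wrapper function `run_ofc_simulation` are assumptions introduced for illustration; the actual code is presented in Attachment E, and the conversion of MATLAB arrays to NumPy may vary with the MATLAB release.

```python
from collections import namedtuple
import numpy as np
import matplotlib.pyplot as plt
import matlab.engine  # MATLAB Engine API for Python

# Scenario container; the field names are illustrative.
ParameterScenario = namedtuple(
    "ParameterScenario", ["name", "amplitude", "frequency", "noise"])

SCENARIOS = [
    ParameterScenario("ideal", 0.0, 0.0, 0.00),
    ParameterScenario("light", 0.5, 1.0, 0.01),
    ParameterScenario("moderate", 1.0, 2.0, 0.02),
    ParameterScenario("severe", 2.0, 4.0, 0.05),
]

def test_matlab(scenario):
    """Runs one scenario in MATLAB/SimuLink and returns the outputs."""
    eng = matlab.engine.start_matlab()
    eng.addpath(r"path/to/benchmark", nargout=0)   # placeholder path
    # Push the scenario parameters into the MATLAB workspace.
    eng.workspace["amplitude"] = scenario.amplitude
    eng.workspace["frequency"] = scenario.frequency
    eng.workspace["noise"] = scenario.noise
    # 'run_ofc_simulation' is a hypothetical wrapper around the benchmark
    # returning desired input, measured input, time, and the OFC flag.
    u_des, u_meas, t, ofc = eng.run_ofc_simulation(nargout=4)
    eng.quit()
    return u_des, u_meas, t, ofc

def plot_data(t, u_des, u_meas, ofc_flags):
    """Converts MATLAB arrays to NumPy and plots desired vs. measured."""
    t, u_d, u_m = (np.asarray(x).ravel() for x in (t, u_des, u_meas))
    flags = np.asarray(ofc_flags).ravel().astype(bool)
    plt.plot(t, u_d, label="desired control input")
    plt.plot(t, u_m, label="measured control input")
    plt.scatter(t[flags], u_m[flags], color="red", label="OFC detected")
    plt.xlabel("time [s]")
    plt.legend()
    plt.show()

results = [test_matlab(s) for s in SCENARIOS]   # iterate over the scenarios
```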

6.5.2 Framework Architecture

The foundation of the proposed framework is built upon the integration achieved
between MATLAB® and SimuLink™ benchmark with Python. This integrated platform
serves as the base for the framework, which operates in several layers:
The Data Acquisition Layer is responsible for collecting real-time data from the
FCS and directing it to subsequent layers. Its primary function is to ensure the
uninterrupted transfer of information while maintaining data integrity.
The Feature Engineering Layer processes raw data to extract or construct the
most relevant features for anomaly detection. The transformation and selection
strategies applied aim to enhance the clarity and distinguishability of patterns indicating
an oscillatory failure.
The Feedback Loop is a distinctive feature of this framework, emphasizing its
ability to learn and adapt. As anomalies are detected (or missed), these instances are
logged, and the system continuously refines its algorithms, ensuring that the Soft
Sensor becomes increasingly proficient at its task. This complete process is presented
in figure 35:

Figure 35 – Framework Flowchart.

Source: The Author, 2023.



The flowchart provides a structured overview of the framework code. It begins with importing necessary libraries and proceeds to define the `ParameterScenario`
named tuple. Following this, four distinct scenarios (ideal, light, moderate, and severe)
are defined. The `test_matlab` function is introduced, which initializes the MATLAB
engine, sets simulation parameters, runs the simulation in MATLAB, and returns the
simulation results. Concurrently, the `plot_data` function is defined to convert the
MATLAB object to a NumPy array, plot the desired versus measured control inputs,
and highlight any detected oscillatory failure anomalies. The code then iterates over
the predefined scenarios, simulates each using the `test_matlab` function, appends
the results to a list, and prints them for analysis.
Besides that, the Soft Sensor and ML Processing Layer, implemented with
chosen machine learning models, processes the engineered features to detect
potential anomalies. Advanced algorithms evaluate data in real-time to provide timely
and accurate failure predictions.
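Conceptually, one pass through these layers can be sketched as a simple composition; all names here are illustrative, not taken from the benchmark code.

```python
from typing import Callable, List

def pipeline(acquire: Callable[[], List[float]],
             engineer: Callable[[List[float]], List[float]],
             detect: Callable[[List[float]], bool],
             feedback: Callable[[bool], None]) -> bool:
    """One pass: acquisition -> feature engineering -> detection -> feedback."""
    raw = acquire()             # Data Acquisition Layer
    features = engineer(raw)    # Feature Engineering Layer
    anomaly = detect(features)  # Soft Sensor and ML Processing Layer
    feedback(anomaly)           # Feedback Loop logging for refinement
    return anomaly

# Trivial stand-ins to show the wiring:
flag = pipeline(lambda: [0.1, 0.2],
                lambda x: [2.0 * v for v in x],
                lambda f: max(f) > 1.0,
                lambda a: print("anomaly" if a else "nominal"))
```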

6.5.3 Advantages of the Proposed Framework

The Intelligent System framework aims to combine the best of both worlds: Soft
Sensors' precision and machine learning's adaptability. By doing so, it promises
several advantages:
Enhanced Detection Rates: the power of feature engineering gives the system an advanced ability to pinpoint anomalies with precision. This heightened accuracy diminishes the chances of false positives and significantly curtails false negatives, ensuring that potential threats are neither overlooked nor over-reported. This dual advantage becomes instrumental in enhancing the reliability of the system.
Real-time Processing: In the demanding and unpredictable world of aircraft
FCS, where split-second decisions can make a vast difference, this system delivers
instantaneous feedback. Such real-time responsiveness is not merely an added
feature but a non-negotiable necessity, making the system an invaluable asset in the
intricate flight control ecosystem.
Scalability: As technology advances and the FCS grows more complex with the addition of new data streams or sensors, a system's ability to adapt becomes paramount. The presented framework does not just adapt; it thrives in such evolving environments. Designed with foresight, it effortlessly scales up, embracing new additions and ensuring they are integrated smoothly without compromising performance or efficiency.
Adaptability: Change is the only constant, especially in dynamic fields like
aerospace engineering. Hence, the Soft Sensor system is built with a continuous
feedback loop. Far from being a rigid entity, it is akin to a living organism that evolves
in tandem with the FCS. As the system matures, the Soft Sensor refines its algorithms,
learns from new patterns, and consistently delivers top-of-the-line performance,
staying ever-relevant and always ahead of the curve.

6.5.4 Future Directions and Considerations

While the framework provides a solid foundation for Soft Sensor enhancement,
viewing it as an evolving entity is crucial. Future research could incorporate more
sophisticated machine learning algorithms, refine feature engineering techniques
based on new findings, or integrate more robust feedback mechanisms.
In conclusion, the Intelligent System framework proposed herein represents a
holistic approach to tackling the challenges posed by Oscillatory Failure Anomaly
detection in aircraft's FCS. Its modular design and emphasis on continuous
improvement offer a promising path forward for researchers and aerospace engineers.

6.6 FEATURE REDUCTION TECHNIQUES IN SOFT SENSOR PERFORMANCE EVALUATION

In the rapidly advancing domain of intelligent systems, optimizing performance often hinges on the quality and dimensionality of input features. The Soft Sensor framework's evolution has made it imperative to discern the contributions of individual features, recognizing their potential redundancy or pivotal significance.

6.6.1 Feature Reduction as a Key for Performance Enhancement

Intelligent systems rely on an intricate tapestry of interconnected features, each bearing its unique information footprint. As the sheer volume of data expands exponentially in modern applications, the ability to discern information becomes not merely beneficial but indispensable. More than just a tool, feature reduction serves as the lens that refines this vast data landscape, bringing into focus the pivotal attributes that
drive decision-making.
Feature reduction involves various techniques that distill large datasets,
stripping away the redundant or irrelevant noise and preserving only the most
significant variables. This streamlined subset of attributes not only boosts
computational efficiency but also often enhances the underlying predictive power of
the model.
In the Soft Sensor domain, especially within the complex realm of an aircraft's
Flight Control System (FCS), the art and science of feature reduction assume
paramount importance. With the high stakes, the system demands nothing less than
the most potent combination of features. This section sheds light on the imperative
nature of feature reduction, its role in Soft Sensor performance, the methodologies
employed, and its broader impact on the field.

6.6.2 The Imperative Nature of Feature Reduction

Feature reduction has firmly rooted itself as a foundational aspect of data-centric modeling, especially when navigating the intricate terrains of specialized systems like
the Flight Control System (FCS) of aircraft. This compelling emphasis on feature
reduction emanates from a tapestry of intertwined considerations:
Computational Efficiency is invariably tied to the model's responsiveness and
agility. The burgeoning volume of features often brings with it the daunting 'curse of
dimensionality', casting a shadow on computational prowess. As features proliferate,
the computational overhead surges, sometimes geometrically, impeding timely data
processing. Within the high-stakes world of real-time systems, where decisions unfold
in fleeting moments, computational swiftness becomes paramount. Feature reduction
emerges as the unsung hero, streamlining data and ensuring system responsiveness
remains uncompromised.
Model Performance and Generalization lie at the crossroads of precision and
versatility. An overabundance of features can be a double-edged sword; while they
might seemingly provide a richer dataset, they often introduce the peril of overfitting.
An overfitted model, lost in the intricacies of training data, tends to falter when faced
with unfamiliar data, its accuracy compromised by its inability to discern signal from
noise. Trimming the excess and retaining only features that resonate with the
underlying system dynamics ensures robust and adaptive models, echoing the proper
patterns without redundancy.
Enhanced Interpretability often distinguishes a good model from a great one. If
convoluted by a web of redundant features, the nuances of model dynamics can
remain elusive. A refined feature set, on the other hand, offers a lucid window into the
intricacies of the model, demystifying relationships and offering invaluable insights.
Such transparency and clarity become instrumental for academicians and
practitioners, fostering more profound understanding and enabling informed decision-
making.
Noise Reduction emerges as a beacon of clarity in the cacophony of data. More
often than not, extraneous features cloud judgments, masking genuine patterns
beneath layers of irrelevant information. The art of feature reduction, in this context,
transforms into a science of clarity, meticulously weeding out the superfluous and
spotlighting the essential, ensuring that the decision-making machinery remains
attuned to the crux of the data.
Resource Optimization goes beyond computation, resonating with the tangible
facets of data storage and transfer. While potentially adding informational value, each
feature also demands storage space and bandwidth share. An optimized feature set,
therefore, not only enhances model efficacy but also paves the way for efficient and
cost-effective operations, optimizing resources without sacrificing performance.
Navigating the complexities of the FCS underscores the indispensable role of
feature reduction. Within this intricate dance of myriad system variables, where
precision is non-negotiable and the stakes are astronomically high, feature reduction
stands tall as a guiding force. Its myriad advantages collectively champion its cause,
reinforcing its stature as an essential tool in the study and practical implementation of
Soft Sensors, both within the confines of the FCS and in broader horizons.

6.6.3 The Role of Feature Reduction in Soft Sensor Performance

Soft sensors delicately thread the complex interplay of raw data and algorithmic
prowess, embodying the quintessence of digital alchemy. Entrusted with the
monumental task of mimicking tangible measurements, especially in contexts where
traditional sensors falter in feasibility or reliability, soft sensors' performance is
inexorably linked to the quality and relevance of their foundational features. This
intricate relationship shines a spotlight on the transformative influence of feature reduction:
Elevated Precision becomes tangible when soft sensors immerse themselves
in curated data. As they delineate the subtle nuances between available features and
the sought-after outcomes, narrowing down to the crux of relevant features amplifies
their precision. This meticulous approach ensures that soft sensors echo the fidelity of
real-world scenarios, instilling confidence in their outputs.
Swift Decision-making, paramount in real-time systems, gains impetus with a
refined feature set. As soft sensors navigate intricate landscapes like the FCS, where
every moment counts, feature reduction emerges as an accelerator, paring down data
to its essence. This streamlined data architecture enables algorithms to seamlessly
transition from data ingestion to decisive action, epitomizing efficiency in scenarios
where time is of the essence.
Robustness to Variability is a beacon of consistency in aircraft systems' often
tumultuous operational theatres. While these environments pulsate with variability,
extraneous features can amplify this unpredictability, muddying the waters of soft
sensor responses. By judiciously pruning the feature set, soft sensors gain the
resilience to wade through these variations, offering steadfast and consistent outputs
even amidst the throes of change.
Facilitated Calibration and Maintenance emerge from the shadows of
complexity when feature reduction takes center stage. With a compact feature set as
its backbone, soft sensors can more intuitively align themselves to specific standards.
This refined architecture also offers a more explicit vantage point for maintenance
endeavors, pinpointing discrepancies and ensuring the soft sensor's longevity and
reliability.
Enhanced Integration with Hardware transcends the boundaries of software
optimization, resonating with the tangible harmonics of hardware platforms. As soft
sensors weave into the fabric of existing hardware systems, a concise feature set
paves the way for seamless integration. This symbiotic relationship ensures that soft
sensors resonate with diverse hardware blueprints and optimize their resource
footprints.
Drawing from this mosaic of advantages, the pivotal role of feature reduction in
shaping the destiny of soft sensors becomes unequivocal. Feature reduction amplifies
soft sensors' dexterity and molds them into resilient, efficient, and precise instruments.
Within the intricate choreography of systems like the FCS and beyond, feature
reduction and soft sensor excellence dance in harmonious synchrony, weaving the
tapestry of precision and reliability.

6.6.4 Implementing Feature Reduction for Performance Evaluation

The `FeatureReductionPerformance` class exemplifies the intricate balance between reducing features and retaining model performance; the code is presented in Attachment F. At its core, this class is meticulously engineered to fuse a classification model and its corresponding training and test datasets. Doing so sets the stage for an exhaustive evaluation, with the `results` dictionary serving as a repository, chronologically capturing the performance metrics for each feature combination.
One of the pillars of this class is the `calculate_metrics` method. This method is not merely a functional component but the heart of the evaluation. Delving deep into the model's behavior, it leverages multiple metrics such as accuracy, precision, recall, and the F1-score. This comprehensive assessment provides a holistic insight into the model's proficiency across various feature combinations.
However, the actual embodiment of feature reduction is encapsulated in the
`evaluate_reduced_features` method. This method embarks on an ambitious
exploration of the possible feature space, recalibrating and retraining the model at each
checkpoint. It remains steadfast in its mission to identify those feature subsets that
synergize best with the model's performance. Its embrace of parallel processing
accelerates this exploration, ensuring that the vast spectrum of feature combinations
is assessed efficiently. A progress indicator adds a touch of interactivity, providing real-
time updates on ongoing computations.
The `plot_performance_against_features` method offers a visual culmination to
the analytical journey. It translates the intricate dance between feature count and
model accuracy into a visual symphony, shedding light on the dynamic interplay of
these two variables. The oscillating nature of performance across different feature
combinations becomes palpable through its plotted insights.
This code transcends its utilitarian role, evolving into a comprehensive guide to
feature reduction. It empowers both novices and veterans in the machine learning
domain, allowing them to unravel the complexities of feature selection. This code not
only primes the path for optimized model performances but also fosters a deeper
understanding and more informed decision-making in the vast universe of machine
learning.
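
A minimal sketch of this evaluation loop is shown below. It assumes pandas DataFrames for the datasets, joblib for the parallel processing, and accuracy as the headline metric; the complete class, including the progress indicator and the full metric set, is presented in Attachment F.

```python
from itertools import combinations
from joblib import Parallel, delayed
import matplotlib.pyplot as plt
from sklearn.base import clone
from sklearn.metrics import accuracy_score

class FeatureReductionPerformance:
    def __init__(self, model, X_train, X_test, y_train, y_test):
        self.model = model
        self.X_train, self.X_test = X_train, X_test
        self.y_train, self.y_test = y_train, y_test
        self.results = {}   # maps a feature tuple to its accuracy

    def _evaluate_subset(self, subset):
        # Retrain a fresh copy of the model on the reduced feature set.
        model = clone(self.model)
        model.fit(self.X_train[list(subset)], self.y_train)
        y_pred = model.predict(self.X_test[list(subset)])
        return subset, accuracy_score(self.y_test, y_pred)

    def evaluate_reduced_features(self, n_features, n_jobs=-1):
        subsets = combinations(self.X_train.columns, n_features)
        scored = Parallel(n_jobs=n_jobs)(
            delayed(self._evaluate_subset)(s) for s in subsets)
        self.results.update(dict(scored))
        return max(scored, key=lambda item: item[1])   # best subset, score

    def plot_performance_against_features(self):
        # Best accuracy observed for each feature count explored so far.
        counts = sorted({len(s) for s in self.results})
        best = [max(v for s, v in self.results.items() if len(s) == c)
                for c in counts]
        plt.plot(counts, best, marker="o")
        plt.xlabel("Number of features")
        plt.ylabel("Best accuracy")
        plt.show()
```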

6.6.5 Feature Landscape and Model Mastery

The intricate relationship between the number of features and model performance defines an essential aspect of machine learning. A rigorous benchmarking exercise sought to decode this balance and identify the optimal number of features that pushes each of the evaluated models (Decision Tree, Gradient Boosting, XGBoost, and Random Forest) to peak performance.
From the evaluation of various feature combinations, a consistent pattern emerged. The feature combinations were sampled at every multiple of 500 so as not to overload the GUI, and Table 13 maps the number of features to the maximum accuracy obtained among their combinations. It provides a quantified snapshot of the expansive feature landscape that was explored during benchmarking:

Table 13 – Number of features x Maximum accuracy.

Features   Decision Tree   XGBoost   Random Forest   Gradient Boosting
1          0.6923          0.6257    0.6773          0.5764
2          0.6931          0.6332    0.6867          0.5793
3          0.6935          0.6341    0.6932          0.5798
4          0.6937          0.6362    0.6958          0.5803
5          0.6939          0.6385    0.6967          0.5811
6          0.6935          0.6415    0.6956          0.5824
7          0.6939          0.6463    0.6964          0.5829
8          0.6942          0.6501    0.6968          0.5841
9          0.6947          0.6523    0.6957          0.5844
10         0.6946          0.6552    0.6959          0.5848
11         0.6942          0.6587    0.6962          0.5847
12         0.6946          0.6633    0.6964          0.5853
13         0.6947          0.6612    0.6967          0.5863
14         0.6948          0.6641    0.6972          0.5871
15         0.6945          0.6671    0.6975          0.5879
16         0.6949          0.6674    0.6954          0.5883
17         0.6947          0.6676    0.6968          0.5891
18         0.6951          0.6681    0.6975          0.5898
19         0.6953          0.6692    0.6982          0.5904
20         0.6957          0.6707    0.6961          0.5913
21         0.6958          0.6723    0.6968          0.5923
22         0.6960          0.6722    0.6971          0.5904
23         0.6961          0.6719    0.6973          0.5914
24         0.6962          0.6725    0.6969          0.5932
25         0.6961          0.6732    0.6961          0.5945
26         0.6961          0.6741    0.6954          0.5959
27         0.6961          0.6754    0.6959          0.5978
28         0.6961          0.6742    0.6965          0.6003
29         0.6961          0.6738    0.6970          0.6034
30         0.6962          0.6734    0.6967          0.5994
31         0.6961          0.6738    0.6965          0.6042
32         0.6962          0.6747    0.6963          0.6053
33         0.6962          0.6758    0.6968          0.6031
34         0.6962          0.6731    0.6972          0.6028
35         0.6962          0.6753    0.6974          0.6026
36         0.6962          0.6741    0.6965          0.6021
37         0.6962          0.6745    0.6961          0.6042
38         0.6962          0.6753    0.6969          0.6062
39         0.6962          0.6768    0.6965          0.6135
40         0.6962          0.6761    0.6967          0.6221
Source: The Author, 2023.

To complement the data in the table, a graph depicting the relationship between
the number of features and accuracy offers visual insights. Peaks on this graph in
Figure 36 highlight the regions of optimal performance, pointing to areas where the
feature count harmonizes with model accuracy:

Figure 36 – Graph presenting performance x features for Decision Tree.

Source: The Author, 2023.

As can be seen, the graph depicts the performance of the Decision Tree model
in terms of accuracy as the number of features varies from 5 to 40. Initially, the
accuracy rapidly increases until around 21 features. After 21 features, the accuracy
plateaus, remaining roughly constant at approximately 0.696, indicating that using
more than 21 features does not yield any further improvement in model accuracy, and
an optimal feature set for this model is likely to be found within the first 21 features.
It required around 12 minutes and 50 seconds to simulate all the possible
combinations and evaluate their performance with the hardware described in section
1.4.2.
However, a crucial note for those looking to delve more deeply: the complexity
of this benchmarking process demands time and patience. Even with high-
performance computers available, several hours were spent waiting for the code to
complete its thorough exploration of the feature landscape. Researchers aiming to
replicate or extend this study should anticipate this time investment and plan their
resources accordingly.
Next, the result for the Gradient Boosting model in the same simulation flow is presented in Figure 37:

Figure 37 – Graph presenting performance x features for Gradient Boosting.

Source: The Author, 2023.

The graph for the Gradient Boosting model portrays an initially volatile accuracy
as the number of features increases from 5 to around 25, indicating the model is
attempting to ascertain the optimal combinations and interactions of the features. As
the number of features approaches 25, the accuracy progression stabilizes and follows
a more consistent upward trend. Notably, the accuracy reaches its peak around 35
features, nearing 0.60, after which it begins to plateau. This plateau suggests that
additional features beyond this point do not significantly contribute to improving the
model's performance.
Compared to XGBoost, Gradient Boosting took slightly longer to evaluate, with
a total duration of approximately 3 hours, 52 minutes, and 23 seconds. Considering
the time invested and the progression in accuracy, Gradient Boosting appears to
benefit from more features than XGBoost. However, when selecting a model, it is
important to weigh the marginal gains in accuracy against the computational cost and
complexity, especially as the number of features increases. Balancing performance
and efficiency is crucial in practical applications.
Next, the results for the XGBoost model in the same simulation flow are presented in Figure 38:

Figure 38 – Graph presenting performance x features for XGBoost.

Source: The Author, 2023.

The XGBoost model demonstrated an upward trend in accuracy as the number of features increased. The initial phase exhibited a steep rise in accuracy up until around 15 features, suggesting that including the first subset of features significantly contributed to the model's ability to make predictions. Beyond 15 features, the increase in accuracy became more gradual, indicating diminishing returns from additional features. However, the model continued to benefit as the number of features
approached 30, where the accuracy seemed to level off near 0.68.
An important aspect to consider for the XGBoost model is its evaluation time: approximately 3 hours, 47 minutes, and 14 seconds. XGBoost, being a gradient-
boosting algorithm, builds trees sequentially, attempting to correct the errors of the
previous trees. This process can be computationally intensive, especially as the
number of features increases.
In summary, XGBoost’s performance in terms of accuracy is generally positive
with the addition of features, but there is a point of saturation beyond which additional
features do not contribute much. The computational time is also significant, and
optimizing the number of features and tuning the model's parameters could be
beneficial in striking a balance between performance and efficiency.
Finally, the Random Forest results are presented in the graph in Figure 39:

Figure 39 – Graph presenting performance x features for Random Forest.

Source: The Author, 2023.

The Random Forest graph displays the model's accuracy as the number of
features increases from 1 to 40. In the beginning, there is a sharp increase in accuracy,
which reaches a plateau relatively quickly, around ten features. Subsequently, the
accuracy fluctuates moderately within a narrow range, oscillating around the 0.697
mark. No discernible upward or downward trend exists beyond this point, suggesting
that Random Forest’s performance stabilizes early with a smaller subset of features.
Notably, Random Forest took significantly longer to evaluate than the other
models, with a total time of approximately 6 hours and 5 minutes. This extended
duration could be attributed to the inherent complexity of the Random Forest algorithm,
which builds multiple decision trees during training. Though Random Forest's accuracy
plateaus relatively early, it performs well, making it a robust model. However, the
computational cost can be a consideration for applications that require efficiency or
have resource constraints.
In light of the empirical evaluation of the four distinct algorithms – Decision Tree,
Random Forest, XGBoost, and Gradient Boosting – it is imperative to analyze the
results within the framework of model accuracy juxtaposed with computational
efficiency. This analysis guides the selection of the most appropriate machine-learning technique for the dataset.
The Decision Tree algorithm exhibited remarkable performance, achieving an
accuracy plateau of approximately 0.696. The Decision Tree's computational complexity is significantly lower than that of the ensemble methods, which can be attributed to Decision Trees not constructing multiple base learners, as Random Forest and the boosting methods do.
Random Forest, an ensemble technique that constructs many decision trees,
registered a comparable accuracy level of around 0.697. However, it incurred a
substantially higher computational cost, clocking in at approximately 6 hours and 5
minutes. The increased complexity and computational time can be ascribed to the
inherent nature of the Random Forest algorithm, which necessitates the construction
and amalgamation of multiple decision trees.
XGBoost, a gradient-boosting algorithm, exhibited a notable accuracy level of
approximately 0.68. The algorithm’s efficacy in progressively correcting errors through
the sequential construction of decision trees is evident, albeit with diminishing returns beyond a certain number of features. The computational time for XGBoost was approximately 3
hours, 47 minutes, and 14 seconds.
Gradient Boosting achieved the lowest accuracy, capping at around 0.60. Its
computational time was the second highest, taking roughly 3 hours, 52 minutes, and 23 seconds. Like XGBoost, Gradient Boosting is an ensemble method whose accuracy improved with additional features, yet it remained the least accurate and among the least efficient.
Upon meticulous analysis, the Decision Tree emerges as the most efficacious
algorithm for this dataset, striking an exquisite balance between performance and
computational complexity. Although Random Forest attained a marginally higher
accuracy, the computational overhead does not justify the negligible gain in
performance. Moreover, Decision Trees offer greater interpretability, which can be
invaluable in certain applications.
To conclude this section, the efforts in this master's dissertation emphasize the pivotal role of feature optimization. This process goes beyond a mere technical exercise: the benchmarking code yields an understanding of the core mechanics of the machine learning models evaluated with Feature Engineering. By navigating these complexities, this section's goal extends beyond achieving enhanced model performance; it aims to offer clarity and insights that benefit the academic community.

7 DISCUSSION

This chapter analyzes the essence of this research and derives conclusions from the comprehensive investigation carried out during this study. The primary objective of this research was to explore the development of a framework applying Machine Learning (ML) techniques to detect and predict Oscillatory Failure Cases (OFCs) in a simulated aerospace environment.

7.1 REVIEW OF FINDINGS

In this section, a more detailed examination of the key findings from this
research is undertaken.

7.1.1 ML Models and Performance

When various machine learning models were applied, it was noted that the
Decision Tree algorithm demonstrated the most promising results. While the other
algorithms, such as Random Forest and Support Vector Machines, did exhibit
acceptable performances, they did not outperform the Decision Tree model in terms of
ease of interpretation, computational efficiency, and overall accuracy. Thus, the Decision Tree became the model of focus for this study.

7.1.2 Feature Engineering and Enhancement of Decision Tree Model

Applying Feature Engineering techniques was the next significant phase of this
research. The initial 40-feature dataset was augmented to a more comprehensive 50-
feature dataset. It was discovered that creating these additional features had a
transformative impact on the model’s predictive performance. These newly
incorporated features, derived from the original dataset, enhanced the inherent
patterns pivotal to the prediction task, thus optimizing the Decision Tree algorithm's
ability to detect and predict OFCs accurately.

7.1.3 Improvement in Predictive Performance

Employing the enriched 50-feature dataset, the retrained Decision Tree model
demonstrated a considerable advancement in its predictive performance. It was found
that the model's accuracy had improved substantially, reaching approximately 70%
correctness in its predictions. This result is significant in its relative improvement over
the initial model and potential implications for the field. This performance highlights the
importance of feature engineering in machine learning and its potential to improve
model performance.

7.1.4 Comparative Evaluation

The metrics derived from evaluating the Decision Tree model's performance
were compared with the initial models' results and examined in the context of industry
standards. This comparative analysis brought forth the effectiveness of the applied
feature engineering process in improving the model's performance. These results, in
turn, supported the argument that the techniques employed in this research could
apply to similar industrial prediction tasks.

7.1.5 Interpretation and Insight

Beyond the improved performance metrics, the findings from this research also
provided more profound insight into the nature of the data and the oscillatory failure
cases. By employing the Decision Tree algorithm and through feature engineering, the
study unearthed complex, non-linear relationships between different features that
would have been challenging to detect using more traditional statistical approaches.
Identifying these relationships could have far-reaching implications for understanding,
predicting, and preventing OFCs in a simulated industrial environment.

7.2 EVALUATION OF THE DECISION TREE MODEL

This section analyzes the optimized Decision Tree model's performance in predicting oscillatory failure cases (OFCs).

7.2.1 Accuracy of the Model

A closer analysis of the Decision Tree model's performance indicated an accuracy of approximately 70% in correctly predicting OFCs, significantly improving
the performance metrics obtained from the initial model and reinforcing the
effectiveness of the feature engineering techniques. It should be noted that while a
70% accuracy rate is substantial, future work might focus on approaches to enhance
this accuracy further.

7.2.2 Analysis of Confusion Matrix

The Decision Tree model’s confusion matrix was scrutinized to provide an evaluation of its predictive capability. It was observed that the model demonstrated a
higher rate of correctly identifying both positive and negative instances than initially
anticipated. However, it also manifested some degree of misclassification. Further
improvements could target these areas of misclassification to ensure a more accurate
prediction of OFCs.

7.2.3 Precision, Recall, and F1-Score

The Decision Tree model exhibited a notable performance in terms of precision, indicating its ability to classify OFCs with high reliability. The recall of the model was
also appreciable, suggesting that it effectively identified a substantial portion of the
actual OFCs in the data. The F1-score, a measure that considers both precision and
recall, was satisfactory. The model’s commendable performance in these metrics
underlines its efficacy in detecting and predicting OFCs.

7.2.4 Model Interpretability

One of the standout attributes of the Decision Tree model is its interpretability.
It was found that the model allowed a transparent understanding of the decision-
making process in predicting OFCs. This ease of interpretation could be valuable in a
practical, industrial context where comprehending the reasoning behind predictions
may be as crucial as the predictions themselves.

7.2.5 Comparative Analysis with Industry Standards

When the performance of the Decision Tree model was compared with industry
standards, it was found to hold up commendably. The model’s accuracy, precision,
recall, and F1-score all performed at a level that is in line with, if not superior to,
comparable models employed in the industry. Hence, the approaches and techniques
employed in this research could have broader applicability and potential benefits
beyond the specific context of this study.

7.2.6 Implications for the Field

The findings of this evaluation could have significant implications for the field.
The enhanced performance of the Decision Tree model in predicting OFCs highlights
the potential of machine learning, specifically feature engineering techniques, in
tackling complex, real-world industrial problems. The lessons learned could inspire
similar approaches in other related domains, promoting a more extensive application
of these techniques.

7.3 SIGNIFICANCE OF THE RESEARCH

This section elaborates on the significance of the research and its potential
implications for academia and industry.

7.3.1 Academic Significance

The theoretical implications of the research are multifold. This study adds to the
body of knowledge by providing evidence of the effectiveness of feature engineering
and machine learning techniques in predicting oscillatory failure cases (OFCs). It offers
an in-depth exploration of how an optimized Decision Tree model can be utilized
effectively in this domain.
Furthermore, this research can be a valuable reference for future researchers
aiming to apply similar methods in related fields. This study's findings and methodology
provide a rich ground for potential replication and validation in other contexts.
It’s also important to highlight that the framework showed that using fewer features can yield the same performance, reducing computational costs.
The evaluation of the Decision Tree model’s performance might spur further
research. Given that the model achieved an accuracy rate of approximately 70%, it
suggests room for improvement. Future studies could explore how other machine
learning techniques, such as ensemble or deep learning, might increase this accuracy.

7.3.2 Industrial Significance

From an industrial perspective, the practical implications of this research are substantial. The ability to predict OFCs with a significant degree of accuracy could
allow for proactive measures to prevent failures, resulting in reduced maintenance
costs, increased operational efficiency, and improved safety standards.
The model’s high interpretability could also offer valuable insights for operators
and engineers to understand the underlying factors contributing to OFCs. These
insights could be used to refine current operational practices or develop new
preventative strategies.
The model's comparative analysis with industry standards provides a promising
outlook for its application in real-world scenarios. The model’s performance, which
aligned with or exceeded comparable models, suggests that this research's techniques
could be effectively adopted in the industry.

7.3.3 Broader Implications

The results of this research are poised to bear far-reaching implications. They
could instigate reformative policy changes concerning the administration of
maintenance and failure prediction strategies within the aerospace industry. Moreover,
successfully applying the Decision Tree model might catalyze other industries to
embrace similar machine-learning methodologies for predicting and preempting
operational failures.
In essence, the research's monumental significance resides in its capability to
redefine academic comprehension and industrial methodologies pertinent to
Oscillatory Failure Cases (OFCs). The revelations gleaned from this study hold the
potential to drive further research, igniting innovation in academia and industry alike.

7.4 LIMITATIONS

This section provides a deeper look at the limitations identified in the research.
It is essential to recognize the limitations of any research study to fully understand the
context in which the findings should be interpreted.
Three significant limitations can be identified in the current study. The first
limitation pertains to the dataset used for the analysis. Although the dataset was
substantial and contained a wealth of information, it was limited to one specific type of
industrial machinery. As a result, the conclusions drawn in this study may not be
generalizable to other types of machinery or other industries.
Secondly, this research solely focused on the Decision Tree model. Despite its
demonstrable performance in predicting OFCs, other machine learning algorithms
such as Random Forest, Neural Networks, or Support Vector Machines might offer
different or more accurate results.
Lastly, the accuracy rate achieved, while impressive, is not perfect, indicating that there are unknown factors influencing the OFCs that the current model has not accounted for.

8 CONCLUSION

The journey of this dissertation began with a focused quest: to unravel the
capabilities of machine learning, particularly emphasizing the Decision Tree model, in
the context of predicting Oscillatory Failure Cases (OFCs) within aerospace
engineering. The need for this exploration was grounded in the tangible challenges
posed by OFCs, with implications spanning both financial and operational domains in
aerospace operations.
This lays the groundwork for the primary research question driving this
investigation: How can the performance of a Soft Sensor for Oscillatory Failure
Anomaly detection in an aircraft's Flight Control System (FCS) be improved
using Feature Engineering?
The foundation of this research was defined by its objectives, aiming to pioneer
a transformative feature engineering framework. This framework sought to refine the
performance of a Soft Sensor within an aircraft's Flight Control System. In response to
the primary research question, the answer emerged through a meticulously conducted
Systematic Literature Review, the study's various phases, and the empirical results.
Feature engineering, when applied judiciously, had a transformative effect on the Soft
Sensor's performance, enabling it to predict OFCs with heightened accuracy and
precision.
Guided by the literature review, the study identified and highlighted significant
gaps, especially in the realm of applying machine learning for OFC predictive
maintenance. A significant milestone in this journey was the successful integration of
the MATLAB® and SimuLink™ benchmarks with Python. This integration, facilitating
real-time data processing, harnessed the Python-embedded Soft Sensor and an array
of ML classes. This step laid the groundwork for subsequent advancements, most
notably the introduction of innovative feature engineering methodologies. These
methodologies showcased their transformative potential, marking a substantial
enhancement in the predictive outcomes observed in the preliminary research stages.
It’s important to highlight that this step will be summarized in a concise paper for an international journal publication.
Transitioning to the empirical facet, the research delved deep into a curated
dataset, rich with historical OFC incidents. This exploration led to the construction and
rigorous evaluation of the Decision Tree model. The model, in its final avatar, exhibited
an impressive predictive accuracy of approximately 70%. This metric underscored the
transformative potentialities of machine learning, especially when bolstered by rigorous
feature engineering. Furthermore, the study's findings provided profound insights into
the intricate relationships within the data and the multifaceted nature of OFCs. Such
insights contribute significantly to both the academic understanding and practical
applications concerning OFCs.
Evaluating the model in its entirety, the Decision Tree's substantial accuracy,
precision, and recall metrics consolidated its effectiveness in OFC detection and
prediction. A standout attribute of the model was its profound interpretability, providing
stakeholders with a clear understanding of the decision-making processes integral to
OFC predictions. When juxtaposed against industry benchmarks, the model's
performance positioned it as a potent tool, suggesting broader applicability beyond the
specific confines of this research. These evaluative insights promise a new dawn for
the industry, indicating the onset of data-driven, proactive measures that could redefine
maintenance paradigms.
Reflecting on the broader significance, from an academic perspective, this
research serves as a beacon, illuminating the landscape of OFC prediction through
machine learning enhanced by feature engineering. Its practical ramifications are
equally pivotal, equipping industries with tools that promise reduced OFC incidences,
heralding both financial savings and operational optimizations. Yet, as is the nature of
academic endeavors, certain limitations temper the conclusions of this research.
These limitations, be it dataset specificity or an exclusive focus on the Decision Tree
model, simultaneously offer a canvas of opportunities for future exploration.
Therefore, the use of Feature Engineering (FEn) to enhance the machine learning methods’ performance was carried out in Section 6.6; the conclusion was that performance remained constant from 21 features up to 40 features. In this sense, the computational cost could be reduced by using fewer features. This finding will be condensed into a practical research paper presenting the feature reduction in the benchmark.
Despite the limitations, the research findings pave the way for future
investigations in several directions. Given the dataset's limitations, future research
could try replicating this study using different datasets representing various machinery
or industries. Doing so would help determine the generalizability of the Decision Tree model in predicting OFCs.

This dissertation has made significant scholarly contributions, crystallized through the publication of several academic papers. The initial exploration of software
development for aerospace engineering, which anchors the series, transcends
theoretical bounds, providing a tested foundation for further empirical scrutiny.
Following this, the research advancing the use of time-stacking Decision Trees for OFC
detection in flight systems, presented at an international forum, affirms the practical
applicability of these methods.
The culmination of this intellectual voyage is a systematic literature review, to
be published, that synthesizes current knowledge on soft sensors, setting a benchmark
for future innovation in aerospace predictive maintenance. Collectively, these works
underscore the dissertation's impact, marking significant strides in both academia and
industry, and paving the way for future advancements in aerospace system
maintenance and machine learning applications.
Further research could also experiment with different machine learning models
or a combination of models (ensemble methods). Testing these algorithms could help
determine whether they provide higher predictive accuracy than the Decision Tree
model and could further enhance the prediction of OFCs. Future research can dig
deeper into the influencing factors of OFCs. A more extensive feature engineering or
incorporating additional relevant variables might improve the model's predictive power.
Moreover, future research should focus on improving the model's accuracy, possibly
through more sophisticated optimization techniques or more advanced machine
learning models.
In conclusion, while the current research offers valuable insights into predicting
OFCs using a Decision Tree model, it also opens several avenues for future
exploration and improvement. Hopefully, this work will serve as a solid foundation for
subsequent studies.
In summation, this dissertation stands as a guide directing the academic and
industrial worlds toward novel horizons in the domain of OFC prediction. The findings,
methodologies, and insights woven throughout this research promise to inspire future
narratives in the world of predictive analytics within aerospace engineering.
REFERENCES

A. Blažič, I. Škrjanc, V. Logar. Soft sensor of bath temperature in an electric arc furnace
based on a data-driven Takagi–Sugeno fuzzy model. Applied Soft Computing, v.
113, n. 1, p. 1-11, 2021.
A. Gal-Tzur, S. Bekhor, Y. Barsky. Feature engineering methodology for congestion
forecasting. Journal of Traffic and Transportation Engineering (English Edition), v. 1,
n. 1, p. 1-14, 2022.
A. Gejji, S. Shukla, S. Pimparkar, et al. Using a support vector machine for building a
quality prediction model for center-less honing process. Procedia Manufacturing, v.
46, n. 2019, p. 600-607, 2020.
A. Guzman-Urbina, K. Ouchi, H. Ohno, et al. FIEMA, a system of fuzzy inference and
emission analytics for sustainability-oriented chemical process design. Applied Soft
Computing, v. 126, n. 1, p. 1-16, 2022.
A. Hicks, M. Johnston, M. Mowbray, et al. A two-step multivariate statistical learning
approach for batch process soft sensing. Digital Chemical Engineering, v. 1, n.
October, p. 1-8, 2021.
A. Lew, M. Buehler. Encoding and exploring latent design space of optimal material
structures via a VAE-LSTM model. Forces in Mechanics, v. 5, n. 1, p. 1-8, 2021.
A. Mayr, D. Kißkalt, A. Lomakin, et al. Towards an intelligent linear winding process
through sensor integration and machine learning techniques. Procedia CIRP, v. 96, n.
1, p. 80-85, 2020.
A. Nightingale. A guide to systematic literature reviews. Surgery (Oxford), v. 27, n. 1, p. 381-384,
2009.
A. Theissler, J. Pérez-Velázquez, M. Kettelgerdes, et al. Predictive maintenance
enabled by machine learning: Use cases and challenges in the automotive industry.
Reliability Engineering and System Safety, v. 215, n. 1, p. 1-21, 2021.
A. Tsopanoglou, I. Jiménez del Val. Moving towards an era of hybrid modelling:
advantages and challenges of coupling mechanistic and data-driven models for
upstream pharmaceutical bioprocesses. Current Opinion in Chemical Engineering,
v. 32, n. 1, p. 1-8, 2021.
A. Zolghadri, J. Cieslak, D. Efimov, et al. Signal and model-based fault detection for
aircraft systems. IFAC-PapersOnLine, v. 28, n. 21, p. 1096-1101, 2015.
B. Maschler, S. Ganssloser, A. Hablizel, et al. Deep learning based soft sensors for
industrial machinery. Procedia CIRP, v. 99, n. 1, p. 662-667, 2021.
B. Negash, L. Tufa, R. Marappagounder, et al. Conceptual Framework for Using
System Identification in Reservoir Production Forecasting. Procedia Engineering, v.
148, n. 1, p. 878-886, 2016.
B. Schumucker, F. Trautwein, R. Hartl, et al. Online Parameterization of a Milling Force
Model using an Intelligent System Architecture and Bayesian Optimization. Procedia
CIRP, v. 107, n. 1, p. 1041-1046, 2022.
C. Alarcon, C. Shene. Fermentation 4.0, a case study on computer vision, soft sensor,
connectivity, and control applied to the fermentation of a thraustochytrid. Computers
in Industry, v. 128, n. 1, p. 1-10, 2021.
C. Chang, C. Lin. LIBSVM: A Library for Support Vector Machines. Department of
Computer Science, National Taiwan University, Taipei, Taiwan, p. 1-40, 2001.
Available at: <https://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf>. Accessed
in Sep. 2022.
C. Manning, P. Raghavan, H. Schütze, "Introduction to Information Retrieval",
Cambridge University Press, p. 158-163, 2008.
C. Joshi, R. Ranjan, V. Bharti. A Fuzzy Logic based feature engineering approach for
Botnet detection using ANN. Journal of King Saud University - Computer and
Information Sciences, v. 1, n. 1, p. 1-11, 2021.
C. Kumar, S. Chatterjee, T. Oommen, et al. Automated lithological mapping by
integrating spectral enhancement techniques and machine learning algorithms using
AVIRIS-NG hyperspectral data in Gold-bearing granite-greenstone rocks in Hutti,
India. International Journal of Applied Earth Observation and Geoinformation.
Hyderabad: India. v.86, n.01, p.1-15, 2020.
C. Li, Y. Chen, Y.Shang. A review of industrial big data for decision making in intelligent
manufacturing. Engineering Science and Technology, an International Journal, v.
29, n. 1, p. 1-12, 2022.
D. Aguado, G. Noriega-Hevia, J. Ferrer, et al. PLS-based soft-sensor to predict
ammonium concentration evolution in hollow fibre membrane contactors for nitrogen
recovery. Journal of Water Process Engineering, v. 47, n. March, p. 1-7, 2022.
D. Coelho, D. Costa, E. Rocha, et al. Predictive maintenance on sensorized stamping
presses by time series segmentation, anomaly detection, and classification algorithms.
Procedia Computer Science, v. 200, n. 2019, p. 1184-1193, 2022.
D. Gibert, J. Planes, C. Mateu et al. Fusing feature engineering and deep learning: A
case study for malware classification. Expert Systems with Applications, v. 207, n.
June, p.1-18, 2022.
D. Hand, C. Till, "A Simple Generalisation of the Area Under the ROC Curve for
Multiple Class Classification Problems", Machine Learning, v. 45, n. 2, p. 171-186,
2001.
D. Lewis, "Evaluating Text Categorization", Proceedings of Speech and Natural
Language Workshop, p. 312-318, 1991.
D. Moher, A. Liberati, J. Tetzlaff, et al. Preferred Reporting Items for Systematic
Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med, v. 6, p. 264-269,
2009.
D. Ntamo, E. Lopez-Montero, J. Mack, et al. Industry 4.0 in Action: Digitalization of a
Continuous Process Manufacturing for Formulated Products. Digital Chemical
Engineering, v. 3, n. February, p. 1-10, 2022.
D. Parmenter. Key Performance Indicators (KPI): Developing, Implementing, and
Using Winning KPIs. 2 ed. Hoboken: New Jersey, USA. 2010. p.320.
E. Jalee, K. Aparna. Neuro-fuzzy Soft Sensor Estimator for Benzene Toluene
Distillation Column. Procedia Technology, v. 25, n. Raerest, p. 92-99, 2016.
Elon Musk. I think it's very important to have feedback... BrainyQuote. Available at:
<https://www.brainyquote.com/quotes/elon_musk_567271>.
E. Roels, S. Terryn, J. Brancart, et al. Self-healing sensorized soft robots. Materials
Today Electronics, v. 1, n. March, p. 1-14, 2022.
F. Burkowski. Evolutionary Optimization Through PAC Learning. Foundations of
Genetic Algorithms. Waterloo: Canada, v.06, n.01, 2000. p.185-207.
F. Chiarello, P. Belingheri, G. Fantoni. Data science for engineering design: State of
the art and future directions. Computers in Industry, v. 129, n. 1, p. 1-17, 2021.
F. Hoppe, J. Hohmann, M. Knoll, et al. Feature-based supervision of shear cutting
processes based on force measurements: Evaluation of feature engineering and
feature extraction. Procedia Manufacturing, v. 34, n. 1, p. 847-856, 2019.
F. Lewis, S. Jagannathan, A. Yeşildirek. Chapter 7 - Neural Network Control of Robot
Arms and Nonlinear Systems. Neural Systems for Control, v. 1, n. 1, p. 161-211,
1997.
F. Souza, A. Francisco, R. Araújo, et al. Review of Soft Sensors Methods for
Regression Applications Francisco. Chemometrics and Intelligent Laboratory
Systems. v.152, n.01, 2016. p.69-79.
G. Dorgo, T. Kulcsar, J. Abonyi. Genetic programming-based symbolic regression for
goal-oriented dimension reduction. Chemical Engineering Science, v. 244, n. 1, p.
1-12, 2021.
G. van Kollenburg, J. van Es, J. Gerretzen, et al. Understanding chemical production
processes by using PLS path model parameters as soft sensors. Computers and
Chemical Engineering, v. 139, n. 1, p. 1-8, 2020.
G. van Kollenburg, R. Bouman, T. Offermans, et al. Process PLS: Incorporating
substantive knowledge into the predictive modelling of multiblock, multistep,
multidimensional and multicollinear process data manuscript revision printed in blue.
Computers and Chemical Engineering, v. 154, n. 1, p. 1-15, 2021.
H. Pacco. Simulation of temperature control and irrigation time in the production of
tulips using Fuzzy logic. Procedia Computer Science, v. 200, n. 1, p. 1-12, 2022.
H. Paggi, J. Soriano, V. Rampérez, et al. A distributed soft sensors model for managing
vague and uncertain multimedia communications using information fusion techniques.
Journal of Chemical Information and Modeling, v. 61, n. 7, p. 5517-5528, 2013.
H. Snyder. Literature review as a research methodology: An overview and guidelines.
Journal of Business Research, v. 104, n. 1, p. 333-339, 2019. Available at:
<https://doi.org/10.1016/j.jbusres.2019.07.039>. Accessed on 08 May 2020.
I. Mendia, S. Gil-López, I. Landa-Torres, et al. Machine learning based adaptive soft
sensor for flash point inference in a refinery real-time process. Results in
Engineering, v. 13, n. January, p. 1-8, 2022.
J. Brownlee. Machine Learning Mastery with Python: Understand Your Data, Create
Accurate Models, and Work Projects End-to-End. Machine Learning Mastery, San
Francisco. v. 1, p. 1-249, 2016.
J. Engelbrecht, P. Goupil. Technical Note describing the joint Airbus-Stellenbosch
University Industrial Benchmark on Fault Detection. Aerospace Industrial
Benchmark on Fault Detection, v. 1, n. 1, p. 1-15, 2020.
J. Friedman. Greedy Function Approximation: A Gradient Boosting Machine, Annals
of Statistics, v. 29, n. 5, p. 1189-1232, 2001.
J. Jang. Fuzzy Modeling Using Generalized Neural Networks and Kalman Filter
Algorithm. Association for The Advancement of Artificial Intelligence (AAAI).
Berkeley, USA. v.32, n.01, 1991. p.762-767.
J. Kabugo, S. Jämsä-Jounela, R. Schiemann, et al. Industry 4.0 based process data
analytics platform: A waste-to-energy plant case study. International Journal of
Electrical Power and Energy Systems, v. 115, n. November, p. 1-18, 2020.
J. Quinlan. Induction of Decision Trees. Machine Learning, v. 1, n. 1, p. 81-106, 1986.
J. Schimitt, J. Bönig, T. Borggräfe, et al. Predictive model-based quality inspection
using Machine Learning and Edge Cloud Computing. Advanced Engineering
Informatics, v. 45, n. May, p. 1-10, 2020.
I. Kononenko, M. Kukar. Chapter 14 - Computational Learning Theory. Machine
Learning and Data Mining, Woodhead Publishing, v. 1, n. 1, p. 393-422, 2007.
K. Ranasinghe, R. Sabatini, A. Gardi, et al. Advances in Integrated System Health
Management for mission-essential and safety-critical aerospace applications.
Progress in Aerospace Sciences, v. 128, n. September, p. 1-39, 2021.
K. Rastogi, D. Lohani. IoT-based Indoor Occupancy Estimation Using Edge
Computing. Procedia Computer Science, v. 171, n. 2019, p. 1943-1952, 2019.
L. Breiman, J. Friedman, R. Olshen, et al. Classification and Regression Trees.
Wadsworth, Belmont, CA, 1984.
L. Fortuna, S. Graziani, A. Rizzo, et al. Soft Sensors for Monitoring and Control of
Industrial Processes. London: Springer-Verlag, 2014. v.53. 271 p.
L. Günther, S. Kärcher, T. Bauernhansl. Activity recognition in manual manufacturing:
Detecting screwing processes from sensor data. Procedia CIRP, v. 81, n. 1, p. 1177-
1182, 2019.
L. Ma, Y. Liu, X. Zhang, et al. Deep learning in remote sensing applications: A meta-
analysis and review. ISPRS Journal of Photogrammetry and Remote Sensing, v. 152,
n. March, p. 166-177, 2019.
L. Petruschke, J. Walther, M. Burkhardt, et al. Machine learning based identification of
energy states of metal cutting machine tools using load profiles. Procedia CIRP, v.
104, n. 1, p. 357-362, 2021.
R. Leenings, et al. PHOTONAI - A Python API for Rapid Machine Learning Model
Development. PLoS One, v. 16, n. 7, p. 25-62, 2021.
M. Abdar, F. Pourpanah, S. Hussain, et al. A review of uncertainty quantification in
deep learning: Techniques, applications and challenges. Information Fusion, v. 76,
n. 1, p. 243-297, 2022.
M. Abdel-Basset, H. Hawash, K. Sallam, et al. STLF-Net: Two-stream deep network
for short-term load forecasting in residential buildings. Journal of King Saud
University - Computer and Information Sciences, v. 34, n. 7, p. 4296-4311, 2022.
M. Bambach, M. Imram, I. Sizova, et al. A soft sensor for property control in multi-stage
hot forming based on a level set formulation of grain size evolution and machine
learning. Advances in Industrial and Manufacturing Engineering, v. 2, n. February,
p. 1-13, 2021.
M. Barton, B. Lennox. Model stacking to improve prediction and variable importance
robustness for soft sensor development. Digital Chemical Engineering, v. 3, n.
February, p. 1-13, 2022.
M. Ishi, J. Patil, V. Patil. An efficient team prediction for one day international matches
using a hybrid approach of CS-PSO and machine learning algorithms. Array, v. 14, n.
February, p. 1-12, 2022.
M. Feliciano, G. Reynoso-Meza. Soft Sensors: Virtual Sensors Applied to
Engineering problems. Final Project (Control and Automation Engineering Degree)
– Polytechnic School, Pontifical Catholic University of Parana (PUCPR). Curitiba, PR
– Brasil, p.170. 2020.
M. Maggipinto, E. Pesavento, F. Altinier, et al. Laundry fabric classification in vertical
axis washing machines using data-driven soft sensors. Energies, v. 12, n. 21, p. 1-14,
2019.
M. Mowbray, T. Savage, C. Wu, et al. Machine learning for biochemical engineering:
A review. Biochemical Engineering Journal, v. 172, n. May, p. 1-22, 2021.
M. Shapi, N. Ramil, L. Awalin. Energy consumption prediction by using machine
learning for smart building: Case study in Malaysia. Developments in the Built
Environment, v. 5, n. November, p. 1-14, 2021.
M. Siddiqi, B. Jiang, R. Asadi, et al. Hyperparameter tuning to optimize
implementations of denoising autoencoders for imputation of missing Spatio-temporal
data. Procedia Computer Science, v. 184, n. 2020, p. 107-114, 2021.
M. Tabba, A. Brahmi, B. Chouri, et al. Contribution to the implementation of an
industrial digitization platform for level detection. Procedia Computer Science, v. 191,
n. 1, p. 457-462, 2021.
M. Zaghloul, G. Achari. A review of mechanistic and data-driven models of aerobic
granular sludge. Journal of Environmental Chemical Engineering, v. 91, n. March,
p. 1-57, 2022.
N. Mapes, C. Rodriguez, P. Chowriappa, et al. Residue Adjacency Matrix Based
Feature Engineering for Predicting Cysteine Reactivity in Proteins. Computational and
Structural Biotechnology Journal, v. 17, n. 1, p. 90-100, 2019.
N. Tvenge, O. Ogorodnyk, N. Østbø, et al. Added value of a virtual approach to
simulation-based learning in a manufacturing learning factory. Procedia CIRP, v. 88,
n. 1, p. 36-41, 2020.
O. Fisher, N. Watson, J. Escrig, et al. Considerations, challenges and opportunities
when developing data-driven models for process manufacturing systems. Computers
and Chemical Engineering, v. 140, n. 1, p. 1-14, 2020.
P. Bezak, P. Bozek, Y. Nikitin. Advanced Robotic Grasping System Using Deep
Learning. Procedia Engineering, v. 96, n. 1, p. 10-20, 2014.
P. Domingos. The master algorithm: How the quest for the ultimate learning machine
will remake our world. 1 ed. Basic Books: New York, USA. 2015. 330 p.
P. Goupil, S. Urbano, J. Tourneret. A Data-Driven Approach to Detect Faults in the
Airbus Flight Control System. IFAC-PapersOnLine, v. 49, n. 17, p. 52-57, 2016.
P. Nkulikiyinka, Y. Yan, F. Güleç, et al. Prediction of sorption enhanced steam methane
reforming products from machine learning based soft-sensor models. Energy and AI,
v. 2, n. 1, p. 1-10, 2020.
P. Zhu, H. Peng, A. Rwei. Flexible, wearable biosensors for digital health. Medicine in
Novel Technology and Devices, v. 14, n. January, p. 1-9, 2022.
R. Cordeiro, J. Azinheira, A. Moutinho. Actuation failure detection in fixed-wing aircraft
combining a pair of two-stage Kalman filters. IFAC-PapersOnLine, v. 53, n. 1, p. 744-
749, 2020.
R. Forghani, P. Savadjiev, A. Chatterjee, et al. Radiomics and Artificial Intelligence for
Biomarker and Prediction Model Development in Oncology. Computational and
Structural Biotechnology Journal, v. 17, n. 1, p. 995-1008, 2019.
R. Meyes, J. Donauer, A. Schmeing, et al. A recurrent neural network architecture for
failure prediction in deep drawing sensory time series data. Procedia Manufacturing,
v. 34, n. 1, p. 789-797, 2019.
R. Miehe, T. Bauernhansl, M. Beckett, et al. The biological transformation of industrial
manufacturing – Technologies, status and scenarios for a sustainable future of the
German manufacturing industry. Journal of Manufacturing Systems, v. 54, n.
November, p. 50-61, 2020.
R. Palmatier, W. Houston, J. Hulland. Review articles: Purpose, process, and
structure. Journal of the Academy of Marketing Science, v. 46, n. 1, p. 1-5, 2018.
R. Yao, N. Wang, Z. Liu, et al. Intrusion detection system in the Smart Distribution
Network: A feature engineering based AE-LightGBM approach. Energy Reports, v. 7,
n. 1, p. 353-361, 2021.
S. Aghabozorgi, S. Shirkhorshidi, Y. Wah. Time-series clustering – A decade review.
Information Systems. Elsevier. v. 53, n. 1, p. 16–38. 2015.
S. Baduge, S. Thilakarathna, J. Perera, et al. Artificial intelligence and smart vision for
building and construction 4.0: Machine and deep learning methods and applications.
Automation in Construction, v. 141, n. June, p. 1-26, 2022.
S. He, H. Shin, S. Xu, et al. Distributed estimation over a low-cost sensor network: A
Review of state-of-the-art. Information Fusion, v. 54, n. November, p. 21-43, 2020.
S. Maier, P. immermann, J. Berger. MANU-ML: Methodology for the application of
machine learning in manufacturing processes. Procedia CIRP, v. 107, n. 1, p. 798-
803, 2022.
S. Shafiq, E. Szczerbicki, E. Sanin, et al. Proposition of the methodology for Data
Acquisition, Analysis and Visualization in support of Industry 4.0. Procedia Computer
Science, v. 159, n. 1, p. 1976-1985, 2019.
S. Urbano, E. Chaumette, P. Goupil, et al. A Data-Driven Approach for Actuator Servo
Loop Failure Detection. IFAC-PapersOnLine, v. 50, n. 1, p. 13544-13549, 2017.
S. Urbano, E. Chaumette, P. Goupil, et al. Aircraft Vibration Detection and Diagnosis
for Predictive Maintenance using a GLR Test. IFAC-PapersOnLine, v. 51, n. 24, p.
1030-1036, 2018.
T. Chen, C. Guestrin. XGBoost: A Scalable Tree Boosting System. Proceedings of the
22nd ACM SIGKDD International Conference on Knowledge Discovery and Data
Mining. San Francisco, USA, p. 785-794, 2016.
T. Grüner, F. Böllhoff, R. Meisetschläger, et al. Evaluation of machine learning for
sensorless detection and classification of faults in electromechanical drive systems.
Procedia Computer Science, v. 176, n. 1, p. 1586-1595, 2020.
T. Krivec, J. Kocijan, M. Perne, et al. Data-driven method for the improving forecasts
of local weather dynamics. Engineering Applications of Artificial Intelligence, v.
105, n. July, p. 1-14, 2021.
V. Henrique, R. Massao, G. Reynoso-Meza. Decision Tree for Oscillatory Failure Case
Detection in a Flight Control System. International Federation of Automatic Control
World Congress – IFAC, v. 1, n. 1, p. 1-4, 2021.
V. Kocaman, D. Talby. Accurate Clinical and Biomedical Named Entity Recognition at
Scale. Software Impacts, v. 13, n. June, p. 1-7, 2022.
W. Lee, G. Mendis, J. Sutherland. Development of an intelligent tool condition
monitoring system to identify manufacturing tradeoffs and optimal machining
conditions. Procedia Manufacturing, v. 33, n. 1, p. 256-263, 2019.
Z. Janjua, D. Kerins, B. O'Flynn, et al. Knowledge-driven feature engineering to detect
multiple symptoms using ambulatory blood pressure monitoring data. Computer
Methods and Programs in Biomedicine, v. 217, n. 1, p. 1-7, 2022.
Z. Qadir, S. Khan, E. Khalaji, et al. Predicting the energy output of hybrid PV–wind
renewable energy system using feature selection technique for smart grids. Energy
Reports, v. 7, n. 1, p. 8465-8475, 2021.
G. Zambonin, et al. Machine Learning-Based Soft Sensors for the Estimation of
Laundry Moisture Content in Household Dryer Appliances. Energies, v. 12, n. 20,
p. 1-24, 2019.
Support Vector Machine. Available at: <https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47>.
Accessed on 10 Oct. 2022.
ATTACHMENT A

This attachment holds the ‘FeatureEngineeringOFC’ class, presented in section 6.1.

import pandas as pd
from sklearn.preprocessing import StandardScaler
import featuretools as ft
from featuretools.primitives import Mean, Sum, Std, Max, Min


class FeatureEngineeringOFC:
    def __init__(self, X_file, Y_file):
        self.X_file = X_file
        self.Y_file = Y_file
        self.dataset = None

    def load_data(self):
        # Load the dataset from the CSV files
        dataframe_X = pd.read_csv(self.X_file)
        dataframe_Y = pd.read_csv(self.Y_file)
        return dataframe_X, dataframe_Y

    def rename_columns(self, dataframe_X, dataframe_Y):
        # Rename columns to f-0, f-1, ..., f-39 for ease of reference
        dataframe_X.columns = [f"f-{i}" for i in range(40)]
        # Rename the output column in the Y dataset
        dataframe_Y.columns = ["Output"]
        return dataframe_X, dataframe_Y

    def handle_missing_values(self, dataframe_X):
        # Handle missing and inconsistent values; other imputation
        # techniques are possible. Here, missing values are filled with
        # the mean of each column.
        dataframe_X.fillna(dataframe_X.mean(), inplace=True)
        return dataframe_X

    def normalize(self, dataframe_X):
        # Normalize the dataset (zero mean, unit variance)
        scaler = StandardScaler()
        dataframe_X = pd.DataFrame(scaler.fit_transform(dataframe_X),
                                   columns=dataframe_X.columns)
        return dataframe_X

    def join_dataframes(self, dataframe_X, dataframe_Y):
        # Join X and Y into a single dataframe for feature engineering
        self.dataset = pd.concat([dataframe_X, dataframe_Y], axis=1)

    def split_into_sessions(self, dataframe_X, n_sessions=10):
        # Split the dataframe into multiple "sessions"
        features_per_session = len(dataframe_X.columns) // n_sessions
        sessions = []
        for i in range(n_sessions):
            session = dataframe_X.iloc[:, i * features_per_session:
                                       (i + 1) * features_per_session].copy()
            session['session_id'] = [f"session_{i}_{idx}"
                                     for idx in range(len(session))]
            sessions.append(session)
        return sessions

    def deep_feature_synthesis(self, sessions):
        # Create a new entityset to hold the sessions
        es = ft.EntitySet(id="ofc")

        # Add each session to the entityset
        for i, session in enumerate(sessions):
            es = es.add_dataframe(
                dataframe_name=f"session_{i}",
                dataframe=session,
                index='session_id')

        # Run deep feature synthesis
        features, feature_defs = ft.dfs(entityset=es,
                                        target_dataframe_name="session_0",
                                        agg_primitives=[Mean, Sum, Std,
                                                        Max, Min],
                                        verbose=True)
        return features

    def execute(self):
        df_X, df_Y = self.load_data()
        df_X, df_Y = self.rename_columns(df_X, df_Y)
        df_X = self.handle_missing_values(df_X)
        df_X = self.normalize(df_X)
        self.join_dataframes(df_X, df_Y)

        # Additional steps for deep feature synthesis
        sessions = self.split_into_sessions(df_X)
        features = self.deep_feature_synthesis(sessions)
        print(features)


# Create an instance of the class and call the execute method
feature_engg = FeatureEngineeringOFC('dataset_ofc_X.csv',
                                     'dataset_ofc_Y.csv')
feature_engg.execute()
ATTACHMENT B

This attachment holds the ‘ModelSelectionAndEvaluation’ class, presented in section 6.2.

from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, accuracy_score


class ModelSelectionAndEvaluation:
    def __init__(self, dataset):
        self.dataset = dataset
        self.X = None
        self.Y = None
        self.X_train = None
        self.X_test = None
        self.Y_train = None
        self.Y_test = None
        self.model = None

    def split_data(self):
        self.X = self.dataset.drop('Output', axis=1)
        self.Y = self.dataset['Output']
        self.X_train, self.X_test, self.Y_train, self.Y_test = \
            train_test_split(self.X, self.Y, test_size=0.2, random_state=42)

    def select_model(self):
        self.model = RandomForestClassifier()

    def train_model(self):
        self.model.fit(self.X_train, self.Y_train)

    def evaluate_model(self):
        predictions = self.model.predict(self.X_test)
        print("Model Accuracy: ", accuracy_score(self.Y_test, predictions))
        print("\nClassification Report:\n",
              classification_report(self.Y_test, predictions))

    def execute(self):
        self.split_data()
        self.select_model()
        self.train_model()
        self.evaluate_model()


# Create an instance of the class and call the execute method
model_eval = ModelSelectionAndEvaluation(feature_engg.dataset)
model_eval.execute()
ATTACHMENT C

This attachment holds the code presented in section 6.3.

from scipy.stats import skew
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler


def load_csv_data(file_x, file_y):
    data_x = pd.read_csv(file_x).values
    data_y = pd.read_csv(file_y).values.ravel()  # Flatten to make it a 1D array
    return data_x, data_y


def generate_time_windows(data, window_size, step):
    windows = []
    for i in range(0, len(data) - window_size, step):
        windows.append(data[i:i + window_size])
    return windows


def extract_features(window):
    # Summary statistics computed over the flattened window
    window = np.asarray(window).ravel()
    mean = np.mean(window)
    std = np.std(window)
    skewness = skew(window)
    return [mean, std, skewness]


def prepare_datasets(data_x, data_y, window_size=40, step=20):
    input_data = []
    output_data = []

    time_windows = generate_time_windows(data_x, window_size, step)

    for window, is_failure in zip(time_windows, data_y):
        features = extract_features(window)  # Features extracted from the window
        input_data.append(features)
        output_data.append(is_failure)

    return np.array(input_data), np.array(output_data)


# Load the data from the CSV files
file_x = r"C:\Users\marce\Desktop\PUCPR\TRABALHO DE CONCLUSÃO DO CURSO - TCC - ENGENHARIA\SOFT SENSORS\dataset_ofc_X.csv"
file_y = r"C:\Users\marce\Desktop\PUCPR\TRABALHO DE CONCLUSÃO DO CURSO - TCC - ENGENHARIA\SOFT SENSORS\dataset_ofc_Y.csv"
data_x, data_y = load_csv_data(file_x, file_y)

# Prepare the input and output datasets. The sliding-window pipeline
# (prepare_datasets) is defined above but bypassed here: the raw
# 40-feature rows are used directly as model inputs.
input_data, output_data = load_csv_data(file_x, file_y)  # alternatively: prepare_datasets(data_x, data_y)

# Feature Engineering (example with Standard Scaling)
scaler = StandardScaler()
scaled_input_data = scaler.fit_transform(input_data)

# Split the data into training and testing sets (70% training, 30% testing)
X_train, X_test, y_train, y_test = train_test_split(scaled_input_data,
                                                    output_data,
                                                    test_size=0.3,
                                                    random_state=42)

dt = DecisionTreeClassifier()
dt.fit(X_train, y_train)
ATTACHMENT D

This attachment holds the code presented in section 6.4.

from sklearn.metrics import accuracy_score, precision_score, \
    recall_score, f1_score, confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np  # Needed for np.newaxis when normalizing the confusion matrix


class PerformanceAnalysis:
    """
    A class used to analyze the performance of a classification model.

    ...

    Attributes
    ----------
    model : sklearn.base.ClassifierMixin
        A trained model of ClassifierMixin type or any of its subclasses.
    X_test : array-like
        Test input data.
    y_test : array-like
        True labels for X_test.
    y_pred : array-like
        Predicted labels for X_test.
    performance_metrics : dict
        A dictionary to store performance metrics.

    Methods
    -------
    calculate_metrics():
        Calculates performance metrics.
    display_metrics() -> pd.DataFrame:
        Returns performance metrics as a pandas DataFrame.
    plot_confusion_matrix(normalize=False):
        Plots the confusion matrix.
    """

    def __init__(self, model, X_test, y_test):
        """Initializes PerformanceAnalysis with a model and test datasets."""
        self.model = model
        self.X_test = X_test
        self.y_test = y_test
        self.y_pred = self.model.predict(self.X_test)
        self.performance_metrics = {}

    def calculate_metrics(self):
        """Calculates accuracy, precision, recall, and F1-score of the model."""
        self.performance_metrics['Accuracy'] = accuracy_score(
            self.y_test, self.y_pred)
        self.performance_metrics['Precision'] = precision_score(
            self.y_test, self.y_pred, average='macro')
        self.performance_metrics['Recall'] = recall_score(
            self.y_test, self.y_pred, average='macro')
        self.performance_metrics['F1_score'] = f1_score(
            self.y_test, self.y_pred, average='macro')

    def display_metrics(self) -> pd.DataFrame:
        """Returns the performance metrics as a pandas DataFrame."""
        metrics_df = pd.DataFrame(self.performance_metrics, index=[0])
        return metrics_df

    def plot_confusion_matrix(self, normalize=False):
        """Plots the confusion matrix.

        If `normalize` is True, it plots the normalized confusion matrix.
        """
        cm = confusion_matrix(self.y_test, self.y_pred)
        if normalize:
            cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        df_cm = pd.DataFrame(cm, range(cm.shape[0]), range(cm.shape[1]))

        plt.figure(figsize=(10, 7))
        sns.set(font_scale=1.4)
        sns.heatmap(df_cm, annot=True, annot_kws={"size": 16},
                    fmt='.2f' if normalize else 'd')
        plt.ylabel('True label')
        plt.xlabel('Predicted label')
        plt.show()


# Create an instance of PerformanceAnalysis
performance_analysis = PerformanceAnalysis(dt, X_test, y_test)

# Calculate performance metrics
performance_analysis.calculate_metrics()

# Display the performance metrics
metrics_df = performance_analysis.display_metrics()
print(metrics_df)

# Display the confusion matrix
performance_analysis.plot_confusion_matrix(normalize=True)
ATTACHMENT E

This attachment holds the code presented in section 6.5.

import matlab.engine  # Calls the MATLAB engine into Python
from collections import namedtuple
import numpy as np
from matplotlib import pyplot as plt

# Define the named tuple structure
ParameterScenario = namedtuple(
    "ParameterScenario",
    [
        "location",
        "type",
        "amplitude",
        "bias",
        "frequency",
        "turbulence_function",
        "start_time",
        "duration",
    ],
)

# Define the named tuples for different scenarios
ideal = ParameterScenario("none", "none", 0.87, 0.05, '1.7',
                          "setNoTurbulence()", 2, 60)
light = ParameterScenario("sensor", "solid", 1.28, 0.25, '2*pi',
                          "setLightTurbulence()", 2, 60)
moderate = ParameterScenario("sensor", "liquid", 3.18, 0.65, '4*pi',
                             "setModerateTurbulence()", 2, 60)
severe = ParameterScenario("sensor", "liquid", 8.48, 1.05, '9*pi',
                           "setSevereTurbulence()", 2, 60)

# Create a dictionary of parameter scenarios
parameters_scenario = {
    "ideal": ideal,
    "light": light,
    "moderate": moderate,
    "severe": severe,
}


def test_matlab(simulation_parameters: ParameterScenario):
    """
    Runs the benchmark in MATLAB with the predefined default data,
    simulating user input.

    Returns
    -------
    process : MATLAB array holding [delta_des, delta_meas, time,
        ofc_detected] collected from the simulation output.
    """
    benchmark_path = r"C:\Master_Degree_Dissertation\Master_Degree_Code\SOFT SENSORS"
    system_path = r"C:\Master_Degree_Dissertation\Master_Degree_Code\SOFT SENSORS\ofc_benchmark.slx"
    bench = "ofc_benchmark_acquire"

    # Initializes the MATLAB Engine
    system = matlab.engine.start_matlab()
    system.addpath(benchmark_path, nargout=0)

    # Starts the benchmark and loads the main variables to the API console
    system.eval("ofc_benchmark_init", nargout=0)
    system.eval(f"simulation.setSimulinkModel('{bench}');", nargout=0)

    # Loads the variables: aircraft, ofc, servoModel, servoReal, and simulation
    system.eval("servoReal.randomiseServoParameters()", nargout=0)  # Randomizes the real servo object
    system.eval(f"ofc.setLocation('{simulation_parameters.location}')", nargout=0)
    system.eval(f"ofc.setType('{simulation_parameters.type}')", nargout=0)
    system.eval(f"ofc.setAmplitude({simulation_parameters.amplitude})", nargout=0)
    system.eval(f"ofc.setBias({simulation_parameters.bias})", nargout=0)
    system.eval(f"ofc.setFrequency({simulation_parameters.frequency})", nargout=0)
    system.eval("ofc.setPhase(0)", nargout=0)
    system.eval("ofc.setStartTime(0)", nargout=0)
    system.eval(f"aircraft.{simulation_parameters.turbulence_function}", nargout=0)

    # Create randomized control signal
    system.eval(
        """controls = {@(x)aircraft.setControlInput('FPA_control'), ...
        @(x)aircraft.setControlInput('NZ_step', x(1), x(2), x(3)), ...
        @(x)aircraft.setControlInput('NZ_sine', x(1), x(2), x(3), x(4)), ...
        @(x)aircraft.setControlInput('NZ_chirp', x(1))};""",
        nargout=0
    )
    # Field 6 (start_time) doubles as the index selecting the control input
    system.eval(
        f"controls{{{simulation_parameters[6]}}}([10^randi([-1 1]),"
        f"randi([10 25]),randi([35, 50]),randi([0, 10])])",
        nargout=0
    )
    # Field 7 (duration) sets the final simulation time
    system.eval(f"simulation.setStopTime({simulation_parameters[7]})", nargout=0)
    system.eval(f"model = '{system_path}'", nargout=0)
    system.eval("SimOut = sim(simulation.simulink_model, 'SrcWorkspace', 'current');",
                nargout=0)

    # Evaluates only the desired values
    process = system.eval("[SimOut.delta_des SimOut.delta_meas SimOut.time SimOut.ofc_detected]")

    return process


def plot_data(scenario, result):
    data = np.array(result)  # Convert the MATLAB engine object to a NumPy array
    delta_des = data[:, 0]
    delta_meas = data[:, 1]
    time = data[:, 2]
    ofc_detected = data[:, 3].astype(bool)

    plt.plot(time, delta_des, label='Desired Control Input')
    plt.plot(time, delta_meas, label='Measured Control Input')
    # Mark the samples where an OFC was detected (single legend entry)
    plt.plot(time[ofc_detected], delta_meas[ofc_detected], 'ro',
             label='OFC Detected')

    plt.xlabel('Time (s)')
    plt.ylabel('Control Input')
    plt.title(f'{scenario.capitalize()} Scenario')
    plt.legend()
    plt.show()


data_list = []
for scenario, parameters in parameters_scenario.items():
    result = test_matlab(parameters)
    data_list.append((scenario, result))
    plot_data(scenario, result)  # Visualize each scenario

    print(f"For the {scenario}, the results are: {result}")
ATTACHMENT F

This attachment holds the code presented in section 6.6.

from itertools import combinations
from sklearn.metrics import accuracy_score, precision_score, \
    recall_score, f1_score
from sklearn.base import clone
import numpy as np
import concurrent.futures
import matplotlib.pyplot as plt


class FeatureReductionPerformance:
    """
    A class to evaluate a classification model's performance using
    reduced features.
    """

    def __init__(self, model, X_train, y_train, X_test, y_test):
        """
        Constructor for initializing the FeatureReductionPerformance object.

        Parameters:
        - model: Classifier model to evaluate.
        - X_train: Training data features.
        - y_train: Training data labels.
        - X_test: Test data features.
        - y_test: Test data labels.
        """
        self.model = model
        self.X_train_original = X_train
        self.y_train = y_train
        self.X_test_original = X_test
        self.y_test = y_test
        self.results = {}
        self.combinations = {}

    def calculate_metrics(self, model_clone, X_test_reduced):
        """
        Calculates performance metrics for the given test data subset.

        Parameters:
        - model_clone: Fitted clone of the original model.
        - X_test_reduced: Test data with reduced feature set.

        Returns:
        - performance_metrics: Dictionary containing accuracy, precision,
          recall, and F1_score.
        """
        y_pred = model_clone.predict(X_test_reduced)
        performance_metrics = {
            'Accuracy': accuracy_score(self.y_test, y_pred),
            'Precision': precision_score(self.y_test, y_pred, average='macro'),
            'Recall': recall_score(self.y_test, y_pred, average='macro'),
            'F1_score': f1_score(self.y_test, y_pred, average='macro')
        }
        return performance_metrics

    def evaluate_reduced_features(self, max_workers=8):
        """Evaluate performance metrics for various combinations of reduced features."""
        num_features = self.X_train_original.shape[1]
        progress = 0  # Variable to show progress percentage

        def _evaluate(subset):
            """
            Evaluates the model's performance for a given subset of features.

            The function reduces the original training and test datasets to
            the specified subset of features. It then trains a clone of the
            original model on this reduced training set and calculates
            performance metrics using the reduced test set.

            Parameters:
            - subset: List of indices representing the subset of features
              to evaluate.

            Returns:
            - key: String representing the feature subset, with indices
              separated by commas.
            - performance_metrics: Dictionary containing performance metrics
              for the given feature subset.
            """
            X_train_reduced = self.X_train_original[:, np.array(subset)]
            X_test_reduced = self.X_test_original[:, np.array(subset)]

            model_clone = clone(self.model)
            model_clone.fit(X_train_reduced, self.y_train)

            performance_metrics = self.calculate_metrics(model_clone,
                                                         X_test_reduced)
            key = ','.join(map(str, subset))
            return key, performance_metrics

        # Using ThreadPoolExecutor to parallelize evaluations
        with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
            # Instead of extending a list with all combinations, loop and
            # submit directly
            for r in range(1, num_features + 1):
                progress_percentage = (r / num_features * 100)

                # Directly loop over combinations and submit tasks
                futures = [executor.submit(_evaluate, subset)
                           for subset in combinations(range(1, num_features, 3), r)]

                for future in concurrent.futures.as_completed(futures):
                    key, metrics = future.result()
                    self.results[key] = metrics
                    progress += 1
                    if progress % 1000 == 0:
                        print(f"Completed: {progress} feature combinations")

                print(f"Progress: {progress_percentage:.2f}% -> "
                      f"{progress} Combinations")
                self.combinations[progress_percentage] = progress

        print(f"The total number of combinations is: {progress}")

    def plot_performance_against_features(self):
        """Plot the model's accuracy against the number of features used."""
        feature_counts = [3 * len(key.split(',')) for key in
                          self.results.keys()]
        accuracies = [val['Accuracy'] for val in self.results.values()]

        plt.figure(figsize=(10, 6))
        plt.plot(feature_counts, accuracies, marker='o')
        plt.xlabel('Number of Features')
        plt.ylabel('Accuracy')
        plt.title('Model Accuracy vs. Number of Features')
        plt.grid(True)
        plt.show()


# Usage example (assuming a model `dt` and data `X_train`, `X_test`,
# `y_train`, and `y_test` are already initialized)
feature_eval = FeatureReductionPerformance(dt, X_train, y_train, X_test,
                                           y_test)
feature_eval.evaluate_reduced_features()  # Takes a long time to run due to the high number of combinations (around 2 hours)
feature_eval.plot_performance_against_features()
