Você está na página 1de 7

Neurocomputing 190 (2016) 117–123

Contents lists available at ScienceDirect

Neurocomputing
journal homepage: www.elsevier.com/locate/neucom

Fed-batch fermentation penicillin process fault diagnosis and detection


based on support vector machine
Chengming Yang a,n, Jian Hou b
a
Research Institute of Intelligent Control and Systems, Harbin Institute of Technology, Heilongjiang 150001, China
b
College of Engineering, Bohai University, Liaoning 121013, China

art ic l e i nf o a b s t r a c t

Article history: With the increase of scale and complexity of modern chemical process, fault diagnosis and detection are
Received 11 October 2015 playing crucial roles in process monitoring. Accidents can be avoided if faults can be detected and
Received in revised form excluded in time. In this paper, Principal Components Analysis (PCA) and Recursive Feature Elimination
7 December 2015
(RFE) are combined with Support Vector Machine (SVM) for fault diagnosis and detection. Specifically,
Accepted 6 January 2016
the original SVM, PCA-SVM and SVM-RFE are respectively utilized to identify three faults from the
Communicated by Xudong Zhao
Available online 2 February 2016 simulation of Fed-Batch Fermentation Penicillin (FBFP) process. Experimental results show that PCA-SVM
and SVM-RFE perform better than the original SVM, and the fault detection schemes based on PCA-SVM
Keywords: and SVM-RFE generate satisfactory results.
Fault detection
& 2016 Elsevier B.V. All rights reserved.
Fault diagnosis
Support vector machine
Principal components analysis
Fed-batch fermentation penicillin

1. Introduction Support Vector Machine (SVM) as a machine learning method


based on statistical theory is much suitable for small sample data.
Industrial production systems have become more and more After it was proposed, it caused an extensive attention of people to
diverse and complex with the development of industrial technol- research in this field, since its excellent learning performance
ogy [1–4]. As a result, it is critical to apply fault detection and especially the generalization ability [26–28]. In this paper, SVM
diagnosis techniques to the key variables in modern industrial and PCA are introduced in the first place. Since SVM has a sound
processes [5–8]. Fault detection and fault diagnosis technology is theoretical foundation and remarkable generalization ability, it has
been widely applied in pattern recognition, classification and
widely used in monitoring light fermentation [9,10], chemical
regression problems. With PCA, the dimension of variables can be
processes [11–13] and other industrial processes [14,15], and
reduced and the information of the original data will be remained
especially multivariate statistical process control (MSPC) method
as much as possible. Recursive feature elimination (RFE) is used to
has become a big concern in industry and academia recently [16– sort the features in descending order. The PCA-SVM algorithm is
18]. Data will be mapped onto a lower dimensional space from a outlined followed by SVM-RFE. Three kinds of faults are classified
high dimensional space based on MSPC method [19], in order to respectively by original SVM, PCA-SVM and SVM-RFE finally.
reduce the interference information and keep the characteristics of
the initial data. The commonly used methods, e.g., principal
components analysis (PCA) [20], correspondence analysis (CA) 2. Related works
[21], principal components regression (PCR) [22], canonical variate
analysis (CVA) [23], partial least square (PLS) [24], and indepen- 2.1. Support vector machine
dent component analysis (ICA) [25], have been widely used in
According to statistical learning theory [29,30], SVM was pro-
industrial processes. The work of this paper is focused on the fault
posed in 1995 as a binary classifier. SVM is designed based on the
detection and diagnosis of the fed-batch fermentation penicillin
Vapnik–Chervonenkis (VC) Dimension theory and structural risk
(FBFP) process which has been extensively studied in statistical
minimization (SRM) principle. Hence, the VC dimension can be
process monitoring and process control. seen as the best indicator of the learning ability of function set. A
high VC dimension means a high complexity of the problem and a
n
Corresponding author. low generalization ability of learning methods. Since SVM is
E-mail addresses: chmyang@foxmail.com (C. Yang), jian_hou@163.com (J. Hou). designed based on the VC dimension, its classification result is

http://dx.doi.org/10.1016/j.neucom.2016.01.027
0925-2312/& 2016 Elsevier B.V. All rights reserved.
118 C. Yang, J. Hou / Neurocomputing 190 (2016) 117–123

independent of sample dimension. The SRM theory has been The separating interval can be calculated as J w2 J . The objective
proposed to minimize the structural risk, i.e., the sum of empirical function to obtain the separating hyperplane can be written as
risk and confidence risk. By means of SRM, SVM possesses
minϕðwÞ ¼ 12 J w J 2 ð3Þ
remarkable generalization ability.
The original SVM can be applied to two-class classification In order to solve Eq. (3), Lagrange function is introduced
problems. The 1-v-r SVMs, 1-v-1 SVMs and H-SVMs methods can Lðw; d; aÞ ¼ 12 J w J  aðyððw  xÞ þ dÞ  1Þ ð4Þ
be used to solve multi-class classification generally.
In the 1-v-r SVMs method, each class of samples are regarded In Eq. (4), aðai 40Þ is the Lagrange multiplier. Then the problem
as in a category, and the rest are treated as in another category. In is converted to the corresponding dual problem
total c classification models will be constructed for classification X
l
1Xl Xl
problem with c class of samples. The advantages of 1-v-r SVMs lie max Q ðaÞ ¼ aj  a a y y ðx  x Þ
j¼1
2i¼1j¼1 i j i j i j
in the simplicity and computation efficiency. However, the short-
coming is that the classification may be inseparable. If there are X
l

many classes of samples, and one class of the samples is con- s:t: aj yj ¼ 0; j ¼ 1; 2; …; l; aj Z 0; j ¼ 1; 2; …; l ð5Þ
j¼1
siderably less than the sum of the others, this imbalance will affect
the classification accuracy. The optimal solution calculated by Eq. (5) is an ¼ ðan1 ; an2 ; …; anl ÞT ,
n
In the 1-v-1 SVMs method, the classification models are trained and the optimal weight vector wn and d are written as
with every two classes of samples. In total cðc 2 1Þ classification
X
l
models are trained for c classes of training samples. The training wn ¼ anj yj xj ð6Þ
samples of each classifiers are relevant, and the voting method is j¼1
used for classification.
All classes of samples are divided into two sub-categories in H- n
X
l
d ¼ yi  yj anj ðxj  xi Þ ð7Þ
SVMs method, and the sub-categories will be divided into another
j¼1
two categories after tested by SVM. After some iterations, the
single category can be obtained. Therefore H-SVMs method avoids Therefore, the optimal separating hyperplane is written as
n
the case that samples cannot be divided, and it has a stable pro- ðwn  xÞ þ d ¼ 0, and the optimal separating function is
80 1 9
motion performance [31]. < X l =
n n
f ðxÞ ¼ sgnfðw  xÞ þ d g ¼ sgn @ n A
aj yj ðxj  xi Þ þ d
n
ð8Þ
: ;
2.2. Principal components analysis j¼1

To solve the nonlinear problems, SVM maps the low dimen-


PCA was firstly proposed in 1901. Mudholkar and Jackson [32]
sional vectors into a higher space using kernel function according
introduced PCA into multivariate statistical process control and
to nonlinear transform
brought the squared prediction error (SPE) into PCA. Noise-
sensitive characteristic of residual space was utilized to monitor x-ϕðxÞ ¼ ðϕ1 ðxÞ; ϕ2 ðxÞ; …ϕl ðxÞÞT ð9Þ
the inapparent data points in T 2 statistics, in order to increase the
By replacing the input vector x with eigenvector ϕðxÞ, the
accuracy of fault diagnosis and detection. As a multivariate sta-
optimal separating function can be written as
tistical method, PCA is studied by academia and industry exten- !
sively. Multiple related variables of data sets are converted into a X
l
f ðxÞ ¼ sgnðw  ϕðxÞ þdÞ ¼ sgn ai yi ϕðxi Þ  ϕðxÞ þ d ð10Þ
few irrelevant variables based on the PCA statistics. It can not only
i¼1
reduce the dimension of variables, but also keep the information
of the original data as much as possible. According to the PCA
statistics, if the index of test data exceeds the threshold calculated 3.2. SVM-RFE algorithm
from normal data, the test data will be treated as faulty [33,12].
RFE aims at sorting the features in descending order. The
effectiveness of feature selection can be improved by combining
3. Algorithm RFE with SVM [34]. The feature ranking list is constructed by SVM-
RFE according to the weight vector w in the linear classifier. The
3.1. The original SVM classical SVM-RFE uses the linear kernel function, while RBF kernel
function is introduced into SVM in non-linear cases [35].
SVM shows some unique advantages over other approaches in In each iteration, the most irrelevant feature will be removed.
classification with linearly inseparable data, small samples and The feature removed first is the last one in the list, and this means
high feature dimension. It deals with these cases by means of error the feature is unnecessary. According to the feature ranking list,
penalty, slack variables and kernel functions. the most relevant variable of the faults can be obtained. Hence, the
With the original SVM, the input data should belong to two classification accuracy can be improved by the optimal subset
categories. The labels of the data in the positive category are consisting of the relevant features. The first feature in the list is
yi ¼ þ 1, and the labels of those in the negative category are treated as the most relevant one to be analyzed [36].
yi ¼  1. The training samples can be written as ðxi ; yi Þ
ði ¼ 1; 2; …; lÞ, x A Rn . 3.3. PCA-SVM algorithm
The separating hyperplane is
PCA is a classical data dimension reduction method. Redundant
ðw  xÞ þd ¼ 0 ð1Þ information in the original data space can be excluded by the
where w is the weight vector and d is a constant. To ensure that all principal component. Meanwhile, a lot of variance information is
samples are correctly classified and have the category interval, the retained and each principal component variable is mutually
following condition should be satisfied orthogonal [37].
Let us assume that X A Rnm is a data matrix under normal
yi ½ðw  xi Þ þ d Z 1 ð2Þ conditions, and m is the number of variables, n represents the
C. Yang, J. Hou / Neurocomputing 190 (2016) 117–123 119

number of samples. After the singular value decomposition of the


matrix, the model of PCA statistics can be written as

X ¼ TP ¼ TP T þ T~ P~ ¼ TP T þ E
T T
ð11Þ

T ¼ XP ð12Þ

E ¼ T~ P~ ¼ XðI  PP T Þ
T
ð13Þ
nd md
T AR and P A R represent scoring matrix and loading
matrix respectively. E A Rnm is residual matrix and d o m is the
number of principal components. According to Eqs. (11) and (12),
the projection is scoring matrix in each direction of loading
matrix, and the loading vector of principal components is actually
the covariance matrix M of the data matrix X:
1 T
M¼ X X ð14Þ Fig. 1. Fed-batch fermentation penicillin process.
n1
Therefore, the correlation between variables can be reflected by
the principal components obtained by projection. When the PCA Table 1
statistics is used for fault diagnosis and detection, the PCA T 2 Simulation conditions of normal and fault operations.
statistics for the training data and the testing data are
Simulation conditions Normal Fault 1 Fault 2 Fault 3 Unit
T 2 ¼ x T P ΛP T x ð15Þ dataset

The PCA SPE statistics for the training data and the testing data Initial conditions
are Substrate concentration 15 15 15 15 g/L
Dissolved oxygen 1.16 1.16 1.16 1.16 g/L
T T
SPE ¼ x ðI PP Þx ð16Þ concentration
Biomass concentration 0.1 0.1 0.1 0.1 g/L
The threshold setting of PCA-SVM is the same as the one of Penicillin concentration 0 0 0 0 g/L
basic PCA, if the values of T 2 and SPE are under the corresponding Culture volume 100 100 100 100 L
thresholds, the data can be categorized as normal, otherwise the Carbon dioxide concentration 0.5 0.5 0.5 0.5 g/L
pH 5 5 5 5
data will be treated as faulty. Fermenter temperature 298 298 298 298 K
Generated heat 0 0 0 0 kcal
Set points
4. Fed‐batch fermentation penicillin process Aeration rate 8.6 8.6 8.6 6 L/h
Agitator power 29.9 21 29.9 29.9 W
Substrate feed flowrate 0.0426 0.0426 0.027 0.0426 L/h
The FBFP process is a typical dynamic, non-linear, time-varying Substrate feed temperature 296 296 296 296 K
and multi-stage batch process. In general, the process can be Temperature set point 298 298 298 298 K
divided into three stages, namely cell growth stage, penicillin pH set point 5 5 5 5
Sampling Interval 1 1 1 1 h
synthesis stage and cell autolysis stage. In the cell growth phase,
Simulation Time 400 400 400 400 h
nutrient substance of the initial substrate is consumed and new
cells are synthesized constantly. After cell growth stage, penicillin
synthesis stage begins and productivity is maximized until the
ability of penicillin synthesis declines. Cell autolysis is the final
Table 2
stage, where pH of the fermentation broth increases and the Variables of FBFP process.
ability to synthesize penicillin also weakens. Throughout the
entire fermentation process, many factors affect the effectiveness Number Process variable Unit
of penicillin fermentation, such as temperature, pH, substrate
1 Aeration rate L/h
concentration, dissolved oxygen concentration and so on. There- 2 Agitator power W
fore it is imperative to conduct effective process monitoring. 3 Substrate feed temperature K
4 Substrate concentration g/L
5 Dissolved oxygen concentration g/L
6 Biomass concentration g/L
5. Fault diagnosis 7 Penicillin concentration g/L
8 Culture volume L
Professor Cinar of Illinois Institute of Technology developed a 9 Carbon dioxide concentration g/L
simulation software pensim2.0 [38] for penicillin fermentation 10 pH
11 Fermenter temperature K
monitoring and process modeling in 2002, in order to simulate the
12 Cooling water flow rate L/h
aeration rate, biomass concentration, substrate feed temperature
and other variables of the fermentation process. This software
provides a benchmark simulation platform for penicillin fermen-
initial phase of about 45 h and the fed-batch stage of around
tation modeling, optimal control and process monitoring, and
many scholars have done a lot of works [39–42] in penicillin fer- 355 h. The simulation conditions of normal and fault operations
mentation process monitoring and fault diagnosis with the plat- are listed in Table 1, and the sample number of each data set is 400
form. In this paper, the data sets were generated from the simu- since the sampling interval is chosen as 1 h. The data sets con-
lator on the web site http://simulator.iit.edu/web/pensim/simul. sisting of 12 process variables are listed in Table 2.
html. Fermentation Process Flow Chart of the benchmark is shown Fault detection of test samples is based on SVM. The classifi-
in Fig. 1. The whole duration of each batch is 400 h, comprising cation of the test samples can be obtained with the model trained
120 C. Yang, J. Hou / Neurocomputing 190 (2016) 117–123

by the training data sets. The test sample is normal if the predicted 1.5
label of the sample is 0. Otherwise, the test sample is faulty.
The feature ranking list is obtained based on SVM-RFE and the
most relevant variables are shown in Table 3. The cause of failure
1
can be analyzed by the relevant variables.
As shown in Fig. 2, the threshold of PCA T 2 is 18.4753 and the

The class of the data


threshold of SPE statistics is 0.0208. Both thresholds are all cal-
culated based on the normal data set of the fermentation process.
0.5
The data exceeding the red dotted line can be treated as faulty.
Since the first 45 samples which represent the first 45 h are in the
cell growth stage, there are some abnormal points. After that, most
of the 355 samples are under the threshold except for a few 0
abnormal individual points.

5.1. Fault 1 −0.5


50 100 150 200 250 300 350 400
Test dataset
As shown in Fig. 3, the detection accuracy of fault 1 based on
SVM is 72%, and many samples are classified to the wrong class. Fig. 3. Fault detection based on SVM for Fault 1.

Although data preprocessing can improve the classification accu-


racy of SVM effectively, the classification accuracy is still very low. normal data set
32
Fig. 4 shows the most relevant variable for fault 1 is variable 2, training data set
testing data set
which represents agitator power. The values of this variable of the
normal data set are about 30, and those of training data set lie 30

between 23 and 25. In contrast, the values of this variable of the


testing data are about 21. So SVM model learns through the 28
Variable 2

training samples and obtains good classification results of the test


samples. 26
Fault detection results based on PCA statistics are shown in
Fig. 5. PCA-based statistics are sensitive to detect the fault. The
24
testing samples of fault 1 are all above the threshold that means
the samples are all faulty. The accuracy of fault detection based on
22
PCA statistics is 99.7%, much higher than that of SVM.

Table 3 20
0 50 100 150 200 250 300 350 400
The most relevant variable for each fault.
Samples

Fault Variable Fig. 4. Plot of variable 2 in Fault 1.

Fault 1 2
Fault 2 11 1000
Fault 3 1
800

600
T2

(α = 0.99) T2 statistic(normal operating condition) with 5 PCs 400


80
200
60 0
50 100 150 200 250 300 350 400
2

40
T

20 3
SPE

0 2
50 100 150 200 250 300 350 400

(α = 0.99) SPE statistic(normal operating condition) with 5 PCs 1


0.08
0
50 100 150 200 250 300 350 400
0.06
sample
SPE

0.04 Fig. 5. Fault detection based on the PCA statistics for Fault 1.

0.02
5.2. Fault 2
0
50 100 150 200 250 300 350 400 As shown in Fig. 6, the majority of the samples are classified
samples
correctly. The accuracy of fault detection for fault 2 based on SVM
Fig. 2. The PCA statistics under normal conditions. is 92%, lower than that of PCA.
C. Yang, J. Hou / Neurocomputing 190 (2016) 117–123 121

1.5 100

80

60

T2
1 40
The class of the data

20

0
50 100 150 200 250 300 350 400
0.5
0.5

0.4

0 0.3

SPE
0.2

0.1

−0.5 0
50 100 150 200 250 300 350 400 50 100 150 200 250 300 350 400
Test dataset sample

Fig. 6. Fault detection based on SVM for Fault 2. Fig. 8. Fault detection based on the PCA statistics for Fault 2.

90
normal data set
1.5
training data set
80 testing data set

70

1
60
Variable 11

The class of the data

50

40 0.5

30

20
0
10

0
0 50 100 150 200 250 300 350 400
Samples −0.5
50 100 150 200 250 300 350 400
Fig. 7. Plot of variable 11 in Fault 2. Test dataset

Fig. 9. Fault detection based on SVM for Fault 3.


As shown in Fig. 7, variable 11 represents the carbon dioxide
concentration. The first 45 samples of normal data, training data
and testing data remain stable since they are in the cell growth
phase. Penicillin synthesis stage begins after that, and the samples
of normal data, training data and testing data are all have an
obvious change. This character is helpful to improve classification 9.5
accuracy. normal data set
Fig. 8 shows the PCA-based statistics for fault 2 detection. The training data set
9
testing data set
red dotted line represents the threshold of T 2 and SPE. Although
the T 2 statistics is under the threshold, the value of SPE statistics 8.5
has an obvious change after the first 45 samples. The fault
detection accuracy of fault 4 based on PCA statistics is 100%. 8
Variable 1

5.3. Fault 3 7.5

As shown in Fig. 9, the detection accuracy of fault 3 based on 7


SVM is 76.5%, and a few samples are classified to the wrong class.
The main reason for the low classification accuracy is that the 6.5
feature extraction is not reasonable or effective.
Fig. 10 shows that the most relevant variable for fault 3 is 6
variable 1 which represents aeration rate. This variable of the
normal data set, the training data set and the testing data set is 5.5
8.6, 6.8 and 6, respectively. So the feature extraction is effective 0 50 100 150 200 250 300 350 400
Samples
and SVM model learn through the training samples to obtain good
classification results of the test samples. Fig. 10. Plot of variable 1 in Fault 3.
122 C. Yang, J. Hou / Neurocomputing 190 (2016) 117–123

As shown in Fig. 11, PCA statistics are sensitive to fault 3 and Compared with the original SVM algorithm, SVM-RFE and PCA-
the samples of testing data set are all above the threshold, that SVM improve the classification accuracy significantly. While the
means all the data are faulty. The accuracy of fault detection based classification accuracy of PCA-SVM is slightly lower than that of
on PCA statistics is 99.5%, much higher than that of SVM. Com- SVM-RFE, the efficiency of PCA-SVM is obviously better than that
pared with SVM classification method, PCA statistics can be used of SVM-RFE.
to simplify the classification model and improve classification
accuracy in fault detection.
6. Conclusion
5.4. Fault detection
In this paper, SVM, PCA and RFE are firstly reviewed. PCA and
The most relevant variables can be obtained by feature extraction RFE are combined with SVM for fault diagnosis and detection.
based on RFE. In order to obtain the maximum classification accuracy, Then, the original SVM, PCA-SVM and SVM-RFE are respectively
different numbers of features are chosen in prediction by SVM-RFE utilized to identify three faults from the simulation of FBFP
and the results are shown in Table 4. Choosing an appropriate number process.
of variables is helpful to improve the classification accuracy of fault As a result, the classification accuracy of SVM is lower than that
detection. In contrast, if the number of the variables is improper, it will of SVM-RFE and PCA-SVM evidently. PCA-SVM statistical threshold
will result in a decrease in classification accuracy. Therefore, SVM-RFE calculated by the normal data set of FBFP process is used to predict
can be used to maximize the classification accuracy of fault detection whether the samples are faulty or not. The classification accuracy
by selecting the best combination of the variables. based on SVM-RFE can be improved by extracting the most rele-
In the PCA-SVM algorithm, we firstly pre-process the raw data vant variables. Experimental results show that the classification
to remove inappropriate variables. Then we use the PCA method accuracy of SVM-RFE is not only higher than that of original SVM
for feature extraction and dimensionality reduction to obtain the obviously, but also slightly higher than that of PCA-SVM. However,
the classification accuracy may be influenced, if the number of
main features of each fault. In this way, the efficiency and accuracy
features is selected inappropriately.
of PCA statistics based classification is improved.
SVM, SVM-RFE and PCA-SVM are used to detect the three kinds
of faults, and the classification accuracy is shown in Table 5.
References
1500
[1] S.J. Qin, Survey on data-driven industrial process monitoring and diagnosis,
Annu. Rev. Control 36 (2) (2012) 220–234.
1000 [2] S. Yin, O. Kaynak, Big data for modern industry: challenges and trends, Proc.
IEEE 103 (2) (2015) 143–146.
2
T

[3] A. Pouliezos, G.S. Stavrakakis, Real Time Fault Monitoring of Industrial Pro-
500 cesses, vol. 12, Springer Science& Business Media, Dordrecht, 2013.
[4] S. Yin, Z. Huang, Performance monitoring for vehicle suspension system via
fuzzy positivistic c-means clustering based on accelerometer measurements,
0 IEEE/ASME Trans. Mechatron. 20 (5) (2014) 2613–2620.
50 100 150 200 250 300 350 400 [5] S. Yin, X. Xie, J. Lam, K. Cheung, H. Gao, An improved incremental learning
approach for kpi prognosis of dynamic fuel cell system, IEEE Trans. Cybern.
30
http://dx.doi.org/10.1109/TCYB.2015.2498194.
[6] T.J. Rato, M.S. Reis, Fault detection in the tennessee eastman benchmark
process using dynamic principal components analysis based on decorrelated
20
residuals (dpca-dr), Chemomet. Intell. Lab. Syst. 125 (2013) 101–108.
SPE

[7] S. Yin, G. Wang, H. Gao, Data-driven process monitoring based on modified


orthogonal projections to latent structures, IEEE Trans. Control Syst. Technol.
10
http://dx.doi.org/10.1109/TCST.2015.2481318.
[8] S. Yin, X. Zhu, Intelligent particle filter and its application on fault detection of
nonlinear system, IEEE Trans. Ind. Electron. 62 (6) (2015) 3852–3861.
0
50 100 150 200 250 300 350 400 [9] R.J. Patton, P.M. Frank, R.N. Clark, Issues of Fault Diagnosis for Dynamic Sys-
sample tems, Springer Science& Business Media, London, 2013.
[10] A.R. Khan, A.Q. Khan, M.T. Raza, M. Abid, G. Mustafa, Design of robust fault
Fig. 11. Fault detection based on the PCA statistics for Fault 3. detection scheme for penicillin fermentation process, IFAC-Papers OnLine 48
(21) (2015) 589–594.
[11] E.L. Russell, L.H. Chiang, R.D. Braatz, Data-Driven Methods for Fault Detection
and Diagnosis in Chemical Processes, Springer Science& Business Media,
Table 4 London, 2012.
Fault detection rate based on SVM-RFE for training and testing data. [12] S. Yin, S.X. Ding, A. Haghani, H. Hao, P. Zhang, A comparison study of basic
data-driven fault diagnosis and process monitoring methods on the bench-
The number of variables Training data Testing data mark tennessee eastman process, J. Process Control 22 (9) (2012) 1567–1581.
[13] M.M. Rashid, J. Yu, Hidden Markov model based adaptive independent com-
3 75.3 82.2 ponent analysis approach for complex chemical process monitoring and fault
6 96.3 98.5 detection, Ind. Eng. Chem. Res. 51 (15) (2012) 5506–5514.
9 98 96.4 [14] S. Yin, X. Li, H. Gao, O. Kaynak, Data-based techniques focused on modern
12 97.8 99 industry: an overview, IEEE Trans. Ind. Electron. 62 (1) (2015) 657–667.
[15] S. Yin, P. Shi, H. Yang, Adaptive fuzzy control of strict-feedback nonlinear time-
delay systems with unmodeled dynamics, IEEE Trans. Cybern. http://dx.doi.
org/10.1109/TCYB.2015.2457894.
[16] J. MacGregor, T. Kourti, Statistical process control of multivariate processes,
Table 5 Control Eng. Pract. 3 (3) (1995) 403–414.
Accuracy rate of fault detection for testing data. [17] A. Ferrer, Multivariate statistical process control based on principal compo-
nent analysis (mspc-pca): some reflections and a case study in an autobody
Fault SVM SVM-RFE PCA-SVM assembly process, Quality Eng. 19 (4) (2007) 311–325.
[18] S. Yin, X. Zhu, O. Kaynak, Improved pls focused on key-performance-indicator-
Fault 1 72 100 99.7 related fault diagnosis, IEEE Trans. Ind. Electron. 62 (3) (2015) 1651–1658.
Fault 2 92 100 100 [19] J. Davis, B. Bakshi, K. Kosanovich, M. Piovoso, Process monitoring, data analysis
Fault 3 76.5 100 99.5 and data interpretation, in: AIChE Symposium Series, vol. 92, American
Institute of Chemical Engineers, New York, 1996, pp. 1–11.
C. Yang, J. Hou / Neurocomputing 190 (2016) 117–123 123

[20] J.V. Kresta, J.F. MacGregor, T.E. Marlin, Multivariate statistical monitoring of [39] C. Ündey, E. Tatara, A. Çınar, Intelligent real-time performance monitoring and
process operating performance, Can. J. Chem. Eng. 69 (1) (1991) 35–47. quality prediction for batch/fed-batch cultivations, J. Biotechnol. 108 (1)
[21] K. Detroja, R. Gudi, S. Patwardhan, Plant-wide detection and diagnosis using (2004) 61–77.
correspondence analysis, Control Eng. Pract. 15 (12) (2007) 1468–1483. [40] Y. LIU, H.-q. WANG, Pensim simulator and its application in penicillin fer-
[22] E. Vigneau, D. Bertrand, E.M. Qannari, Application of latent root regression for mentation process, J. Syst. Simul. 12 (12) (2006) 3524–3527.
calibration in near-infrared spectroscopy. Comparison with principal compo- [41] S.K. Maiti, R.K. Srivastava, M. Bhushan, P.P. Wangikar, Real time phase detec-
nent regression and partial least squares, Chemomet. Intell. Lab. Syst. 35 (2) tion based online monitoring of batch fermentation processes, Process Bio-
(1996) 231–238. chem. 44 (8) (2009) 799–811.
[23] E.L. Russell, L.H. Chiang, R.D. Braatz, Fault detection in industrial processes [42] J.-M. Lee, C.K. Yoo, I.-B. Lee, Enhanced process monitoring of fed-batch peni-
using canonical variate analysis and dynamic principal component analysis, cillin cultivation using time-varying and multivariate statistical analysis, J.
Chemomet. Intell. Lab. Syst. 51 (1) (2000) 81–93. Biotechnol. 110 (2) (2004) 119–136.
[24] J.F. MacGregor, C. Jaeckle, C. Kiparissides, M. Koutoudi, Process monitoring and
diagnosis by multiblock pls methods, AIChE J. 40 (5) (1994) 826–838.
[25] V. Venkatasubramanian, R. Rengaswamy, S.N. Kavuri, K. Yin, A review of
process fault detection and diagnosis: Part iii: process history based methods,
Chengming Yang received the B.E. degree in agri-
Comput. Chem. Eng. 27 (3) (2003) 327–346.
cultural mechanization and automation from Northeast
[26] C.-W. Hsu, C.-J. Lin, A comparison of methods for multiclass support vector
Agricultural University, Harbin, China, in 2010, the M.E.
machines, IEEE Trans. Neural Netw. 13 (2) (2002) 415–425.
degree in agricultural mechanization engineering from
[27] M. Ge, R. Du, G. Zhang, Y. Xu, Fault diagnosis using support vector machine
Kunming University of Science and Technology,
with an application in sheet metal stamping operations, Mech. Syst. Signal
Kunming, China, in 2013. He is currently working
Process. 18 (1) (2004) 143–159.
toward the Ph.D. degree in control science and engi-
[28] H.J. Shin, D.-H. Eom, S.-S. Kim, One-class support vector machines: an appli-
neering at Harbin Institute of Technology.
cation in machine fault detection and classification, Comput. Ind. Eng. 48 (2)
His research interests include fault diagnosis, fault
(2005) 395–408.
tolerant control, process monitoring and their applica-
[29] V.N. Vapnik, V. Vapnik, Statistical Learning Theory, vol. 1, Wiley, New York,
tions to large-scale industrial processes.
1998.
[30] V. Vapnik, The Nature of Statistical Learning Theory, Springer Science& Busi-
ness Media, New York, 2013.
[31] S. Abe, Support Vector Machines for Pattern Classification, vol. 2, Springer,
London, 2005.
[32] J.E. Jackson, Principal components and factor analysis: Part i—principal com- Jian Hou received his Ph.D. degree in 2007 from Harbin
ponents, J. Quality Technol. 12 (4) (1980) 201–213. Institute of Technology, China. From 2007 to 2010, he
[33] P. Nomikos, J.F. MacGregor, Monitoring batch processes using multiway worked as a postdoctoral researcher in National Uni-
principal component analysis, AIChE J. 40 (8) (1994) 1361–1375. versity of Singapore, Singapore and University of
[34] R. Kohavi, G.H. John, Wrappers for feature subset selection, Artif. Intell. 97 (1) Venice, Italy. In 2011, he was with Ningbo Institute of
(1997) 273–324. Materials Technology and Engineering of Chinese
[35] S.S. Keerthi, C.-J. Lin, Asymptotic behaviors of support vector machines with Academy of Sciences and Xuchang University, China. He
Gaussian kernel, Neural Comput. 15 (7) (2003) 1667–1689. joined the School of Information Science and Technol-
[36] S. Yin, G. Wang, X. Yang, Robust pls approach for kpi-related prediction and ogy at Bohai University, China in 2012, where he is
diagnosis against outliers and missing data, Int. J. Syst. Sci. 45 (7) (2014) currently an associate professor. His research interests
1375–1382. include computer vision, pattern recognition and
[37] L. Xie, X. Lin, J. Zeng, Shrinking principal component analysis for enhanced machine learning. He is a member of the IEEE.
process monitoring and fault isolation, Indust. Eng. Chem. Res. 52 (49) (2013)
17475–17486.
[38] G. Birol, C. Ündey, A. Cinar, A modular simulation package for fed-batch fer-
mentation: penicillin production, Comput. Chem. Eng. 26 (11) (2002)
1553–1565.

Você também pode gostar