
COMPOSITION OF IDENTIFIED PARTIAL MODELS FOR FAULT

DETECTION OF DISCRETE-EVENT SYSTEMS

Diego Angelo Libanio

M.Sc. dissertation presented to the Programa de Pós-graduação em Engenharia Elétrica, COPPE, of the Universidade Federal do Rio de Janeiro, as a partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering.

Advisors: Gustavo da Silva Viana
          Marcos Vicente de Brito Moreira

Rio de Janeiro
April 2023
COMPOSITION OF IDENTIFIED PARTIAL MODELS FOR FAULT
DETECTION OF DISCRETE-EVENT SYSTEMS

Diego Angelo Libanio

DISSERTATION SUBMITTED TO THE FACULTY OF THE ALBERTO LUIZ COIMBRA INSTITUTE FOR GRADUATE STUDIES AND RESEARCH IN ENGINEERING (COPPE) OF THE UNIVERSIDADE FEDERAL DO RIO DE JANEIRO AS PART OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN ELECTRICAL ENGINEERING.

Advisors: Gustavo da Silva Viana
          Marcos Vicente de Brito Moreira

Approved by: Prof. Gustavo da Silva Viana


Prof. Marcos Vicente de Brito Moreira
Prof. Felipe Gomes de Oliveira Cabral
Prof. Antonio Eduardo Carrilho da Cunha

RIO DE JANEIRO, RJ – BRAZIL


APRIL 2023
Libanio, Diego Angelo
Composition of identified partial models for fault
detection of Discrete-Event Systems/Diego Angelo
Libanio. – Rio de Janeiro: UFRJ/COPPE, 2023.
XII, 74 p.: il.; 29.7 cm.
Orientadores: Gustavo da Silva Viana
Marcos Vicente de Brito Moreira
Dissertação (mestrado) – UFRJ/COPPE/Programa de
Engenharia Elétrica, 2023.
Referências Bibliográficas: p. 72 – 74.
1. Partial model identification. 2. Fault detection.
3. Discrete-event systems. I. Viana, Gustavo da Silva
et al. II. Universidade Federal do Rio de Janeiro, COPPE,
Programa de Engenharia Elétrica. III. Título.

“Ideals should be spoken only
by those strong enough to
fulfill them.”
Acknowledgments

I would like to thank my parents and my family, who always provided all the help I needed and were always present in my life, in good and bad moments.
To my girlfriend, for being a companion and friend throughout my academic journey, always supporting and motivating me to move forward. To my friends, who made the journey lighter over these years of university.
To all the professors and staff of the Programa de Engenharia Elétrica, for making all of this possible. In particular, to my advisors Gustavo and Marcos, for the excellent guidance, support and motivation.
Finally, I thank everyone who contributed in some way to my journey so far.
Abstract of the Dissertation presented to COPPE/UFRJ as a partial fulfillment of the requirements for the degree of Master of Science (M.Sc.)

COMPOSITION OF IDENTIFIED PARTIAL MODELS FOR FAULT DETECTION OF DISCRETE-EVENT SYSTEMS

Diego Angelo Libanio

April/2023

Advisors: Gustavo da Silva Viana
          Marcos Vicente de Brito Moreira

Department: Electrical Engineering

In this work, a method is proposed for computing a monolithic model for fault detection of distributed Discrete-Event Systems (DES) with concurrent behavior. To this end, a model called Modified Nondeterministic Autonomous Automaton with Outputs (M-NDAAO) is proposed, which corrects the reinitialization problem of the Nondeterministic Autonomous Automaton with Outputs (NDAAO) model presented in the literature, a problem that can occur when dealing with cyclic systems. Based on this modified model, a synchronous composition of the identified partial models is defined, which represents a monolithic model for the system.
Using this monolithic model, the language generated by the proposed method can be computed, verifying its efficiency in diagnosing faults. In addition, a reduction of the exceeding language is proposed through two approaches: varying a free parameter used to identify the subsystem models, and identifying impossible behaviors in the synchronous composition. Finally, a practical example is used to illustrate the results of the proposed method.
Abstract of Dissertation presented to COPPE/UFRJ as a partial fulfillment of the
requirements for the degree of Master of Science (M.Sc.)

COMPOSITION OF IDENTIFIED PARTIAL MODELS FOR FAULT


DETECTION OF DISCRETE-EVENT SYSTEMS

Diego Angelo Libanio

April/2023

Advisors: Gustavo da Silva Viana


Marcos Vicente de Brito Moreira
Department: Electrical Engineering

In this work, we present a method for computing a monolithic model with the
purpose of fault diagnosis for distributed Discrete-Event Systems with concurrent
behavior. In order to do so, a model called M-NDAAO (Modified Nondeterministic
Autonomous Automaton with Outputs) is proposed, which solves the reinitialization
problem of the NDAAO model already defined in the literature, which can occur
when we deal with cyclic systems. Based on this modified model, we define a modular
synchronous composition of the identified partial models, which represents a monolithic
model for the system.
Using this monolithic model, we can compute the language generated by the
proposed method, verifying its efficiency in fault diagnosis. In addition, we propose
the reduction of the exceeding language by means of two approaches: varying a free
parameter used to identify the partial models, and identifying impossible behaviors
in the synchronous composition. Finally, a practical example is used to illustrate
the results of the proposed method.

Contents

List of Figures x

List of Tables xii

1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2 Preliminaries 7
2.1 Discrete-Event Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Automata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.1 Unary operations . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3.2 Parallel Composition . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 System Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.1 Nondeterministic Autonomous Automata with Outputs . . . . 15
2.4.2 Languages of Identified Discrete Event Systems . . . . . . . . 20

3 Modified Nondeterministic Autonomous Automata with Output 22


3.1 The problems of NDAAO . . . . . . . . . . . . . . . . . . . . . . . . 22
3.1.1 NDAAO proposed in Klein et al. . . . . . . . . . . . . . . . . 23
3.1.2 NDAAO proposed in Roth et al. . . . . . . . . . . . . . . . . . 29
3.2 The Modified NDAAO (M-NDAAO) . . . . . . . . . . . . . . . . . . 32
3.3 M-NDAAO languages . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

4 Distributed Identification 43
4.1 The Problems of Monolithic Identification . . . . . . . . . . . . . . . 43
4.2 Distributed Identification . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.3 The modular synchronous composition . . . . . . . . . . . . . . . . . 50

5 Practical Example 64
6 Conclusion and future works 70
6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
6.2 Future works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

References 72

List of Figures

1.1 Closed-loop Discrete-Event System, figure from [1]. . . . . . . . . . . 2


1.2 Principle of the model-based fault detection, figure from [1] . . . . . . 2
1.3 Flowchart of the fault diagnosis study. . . . . . . . . . . . . . . . . . 5
1.4 Schematic of Distributed Identification with M-NDAAOs. . . . . . . . 6

2.1 Queuing example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8


2.2 Elevator automaton. . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 (a) G, (b) CoAc(G), (c) Ac(G), (d) T rim(G). . . . . . . . . . . . . 12
2.4 Automaton G1 and G2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Parallel Composition. . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.6 Path of a system with 4 I/Os. . . . . . . . . . . . . . . . . . . . . . . 15
2.7 NDAAO of Path pj from Figure 2.6. . . . . . . . . . . . . . . . . . . 17
2.8 Paths p1 and p2 , for the construction of NDAAO. . . . . . . . . . . . 18
2.9 Modified paths p21 and p22 from Example 2.8. . . . . . . . . . . . . . . 18
2.10 NDAAO of Example 2.8, for k = 1. . . . . . . . . . . . . . . . . . . . 18
2.11 NDAAO of Example 2.8, for k = 2. . . . . . . . . . . . . . . . . . . . 18
2.12 Reduced NDAAO of Example 2.8, for k = 2. . . . . . . . . . . . . . . 19
2.13 Relation between the Languages of NDAAO, figure from [1]. . . . . . 20

3.1 Enhanced NDAAO proposed in [2] from Example 3.1. . . . . . . . . . 25


3.2 Simplified NDAAO from Example 3.1. . . . . . . . . . . . . . . . . . 26
3.3 First reduction of the NDAAO from Example 3.3. . . . . . . . . . . . 27
3.4 Reduced NDAAO from Example 3.3. . . . . . . . . . . . . . . . . . . 28
3.5 NDAAO proposed in [3] from Example 3.5. . . . . . . . . . . . . . . . 31
3.6 M-NDAAO example. . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.7 M-NDAAO from Example 3.8, for k = 1. . . . . . . . . . . . . . . . . 40
3.8 M-NDAAO from Example 3.8, for k = 2. . . . . . . . . . . . . . . . . 40
3.9 M-NDAAO from Example 3.8, for k = 3. . . . . . . . . . . . . . . . . 41

4.1 Concurrent behavior example. . . . . . . . . . . . . . . . . . . . . . . 44


4.2 Partial Vectors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.3 Fault detection architecture. . . . . . . . . . . . . . . . . . . . . . . . 48

4.4 Partial paths π1,1 and π1,2 from Example 4.3. . . . . . . . . . . . . . 48
4.5 Modified partial paths π_{1,1}^2 and π_{1,2}^2 from Example 4.3. . . . . . . . . 49
4.6 Partial model M1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.7 Partial model M2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.8 Join Function examples. . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.9 Possible transition created by fMc , according to Definition 4.3. . . . . 52
4.10 Possible transition created by fMc , according to Definition 4.4. . . . . 53
4.11 Partial paths π1,1 and π1,2 from Example 4.4. . . . . . . . . . . . . . . 55
4.12 Modified partial paths π_{1,1}^2 and π_{1,2}^2 from Example 4.4. . . . . . . . . 56
4.13 Partial Model M1 from Example 4.4. . . . . . . . . . . . . . . . . . . 56
4.14 Partial Model M2 from Example 4.4. . . . . . . . . . . . . . . . . . . 57
4.15 Composed model Mc . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.16 Automaton Mac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.17 Partial Models M′ 1 and M′ 2 . . . . . . . . . . . . . . . . . . . . . . . 62
4.18 Partial Models M1 and M2 . . . . . . . . . . . . . . . . . . . . . . . . 62

5.1 Sorting unit system. . . . . . . . . . . . . . . . . . . . . . . . . . . . 65


5.2 Convergence of partial model M1 . . . . . . . . . . . . . . . . . . . . . 66
5.3 Convergence of partial model M2 . . . . . . . . . . . . . . . . . . . . . 67
5.4 Convergence of the monolithic model M. . . . . . . . . . . . . . . . . 67

List of Tables

3.1 Exceeding language of the example. . . . . . . . . . . . . . . . . . . . 41

5.1 Identified language of Mc . . . . . . . . . . . . . . . . . . . . . . . . . 68


5.2 Reduction of the exceeding language of Mc . . . . . . . . . . . . . . . 68
5.3 Fault detection for different values of k. . . . . . . . . . . . . . . . . . 69

Chapter 1

Introduction

1.1 Motivation
Nowadays, with the implementation of smart factories, it has become increasingly
important to use effective methods for fault detection and isolation [4]. In order to
do so, it is necessary to compute a model of the system suitable for fault diagnosis.
One way to do this is by means of Discrete-Event Systems, i.e., systems whose state
evolution depend entirely on the occurrence of, in general, asynchronous discrete
events over time [5], [6].
The problem of fault diagnosis for Discrete-Event Systems was introduced in
[7], defining the concept of diagnosability, which is the capability of detecting and
isolating the fault occurrence within a bounded number of event occurrences. Since
then, several works have proposed different fault diagnosis strategies in addition to
other methodologies for verifying diagnosability [8], [9], [10], [11], [12], [13]. In all
these works, it is assumed that the complete behavior of the system is known, i.e.,
the model of the system before and after the occurrence of the fault is known, which
can be a difficult or impossible task for complex and large systems. In addition,
the complete model assumes prior knowledge of the fault occurrences, in which
unpredictable faults cannot be detected by the diagnoser.
In order to overcome these difficulties, techniques for system identification, with
the aim of fault diagnosis, have been proposed in the literature [2], [3], [1], [14],
[15]. In these works, there are two main ideas executed in sequence as follows. The
first one is to automate the process of building the fault-free model of the system by
using identification. The model is obtained from observed sequences of binary signals
exchanged between the plant and the controller (sensor signals sent by the plant and
actuator commands generated by the controller) as shown in Figure 1.1. The second
one is the fault detection strategy shown in Figure 1.2: a fault is detected by means
of a discrepancy between the observed system behavior and the expected behavior

Figure 1.1: Closed-loop Discrete-Event System, figure from [1].

Figure 1.2: Principle of the model-based fault detection, figure from [1]

(fault-free), using a technique based on residuals for fault localization ([16],[17],[18]).


Petri nets and automata are two suitable formalisms to build models from iden-
tification of Discrete-Event Systems. Petri nets are used as a formalism to obtain
an identified model in [19],[20],[21],[22]. In [19], a fault-free model of a system is
given in terms of Petri net, where an identification method of faulty behavior is
proposed, assuming that the set of transitions is divided into observable and unob-
servable ones. In [20], an identification algorithm is presented, using the observation
of the events and available output vectors, to build a Petri net model to represent a
partially known system. In [21], a tool, called IdentifyTPN, is presented in order to
solve identification problems using Timed Petri Net (TPN) models and algorithms.
In [22], a distributed identification method for reverse engineering purposes is pro-
posed with the objective of obtaining interpreted Petri nets (IPN) for the partial
models.
Identification using Petri nets usually assumes complete or partial knowledge of the system structure and dynamics. However, when working with black-box systems, i.e., systems for which there is no information about the behavior other than their inputs and outputs, automata become a suitable formalism for identification due to their simpler structure. In addition, generating an automatic
fault detection method for systems modeled by automata is simpler than for systems
modeled by Petri nets.
In [2], an identification algorithm for a new class of automaton, called Nondeterministic Autonomous Automaton with Outputs (NDAAO), is presented, using data read directly from the Programmable Logic Controller (PLC), as seen in Figure 1.1. The vector composed of the signals of the system, i.e., sensors and actuators, is called Input/Output (I/O) vector. In addition, a parameter k is introduced, which adjusts the trade-off between the size and the efficiency of the model. Higher values of k lead to more accurate, but larger, models, in terms of the number of states and transitions. Thus, a model is built without requiring prior knowledge of the system's behavior, only through the observation of the controller signals.
Another approach for identifying black-box systems is presented in [1]. In this paper, a model called Deterministic Automaton with Outputs and Conditional Transitions (DAOCT) is presented. The model, differently from the one proposed in [2], has event and path index sets, where a path is defined as a sequence of signals observed from the controller. Each transition created through the identification method has an associated event and the index of the path that led to its construction. By adding this information, the efficiency of the model in detecting faults increases. A more compact model than the DAOCT is proposed in [23], keeping the idea of path index sets.
In [14] and [15], extensions of NDAAO and DAOCT adding time information
are proposed, respectively. The idea behind those approaches is to add time con-
straints to the identification method in the form of guards, i.e., time intervals that
are related to the transitions of the model.
The main drawback with all these strategies is that a monolithic model is iden-
tified, which may require the observation of the system inputs and outputs for a
long time to obtain as many fault-free system behaviors as possible. This prob-
lem emerges when the system is composed of several subsystems with concurrent
behavior, leading to a large number of different possible behaviors of the complete
system. It is important to remark that if a short observation time is used for model
identification, then, several original behaviors may not be observed, and thus would
not be represented in the identified model. These behaviors are considered faulty
sequences by the diagnosis scheme of Figure 1.2, and false alarms are raised.
In order to circumvent this problem, [3] presents a distributed identification
methodology in which the system is partitioned into partial models, since the con-

vergence of simpler models may be obtained using less observations of the system.
These partial models are identified, and then executed in parallel to represent the
complete fault-free system behavior. The NDAAO is used for the identification of
the partial models in [3]. The main drawback with the model proposed in [3] is that,
for cyclic systems, the reinitialization of the model may not be correctly performed,
and the original observed system language may not belong to the language of the
identified model, i.e., the identified model does not simulate the observed system
behavior. Another important problem with the strategy of running partial models in
parallel to detect faults is that the composition of partial models generates, in gen-
eral, an exceeding language, i.e., the language accepted as fault-free by the partial
models running in parallel can be larger than the original fault-free system language,
which implies that some faults may not be detected by the fault diagnosis scheme.
Thus, it is important to reduce the exceeding language to obtain an efficient fault
diagnosis scheme.
Finally, [24] presents a method for partitioning a system into partial models by
observing concurrent behaviors in order to eliminate them from the identification.
In order to do so, the system actuators are separated into partial models when they
are activated simultaneously. After the separation, the sensors are integrated to
the partial models based on causality relations between them and the actuators.
However, the work in [24] does not deal with the partitioning problem for complex systems. This work aims at developing a suitable model for the partial models that allows the definition of a parallel composition among them.
Therefore, this work addresses the identification of black-box systems with concurrent behavior, where the computed model is used for fault diagnosis. Due to the lack of prior knowledge of the plant, the formalism used to represent the system behavior is an automaton. Furthermore, due to the complexity of the system, a distributed and untimed identification is performed in order to map as much behavior as possible into the identified model. Figure 1.3 presents a flowchart indicating the category to which this work belongs within the vast literature on fault diagnosis of Discrete-Event Systems.

1.2 Objectives
In this work, we propose a modification in the NDAAO model presented in [2] and
[3], which allows the definition of a parallel composition between partial models
for cyclic systems, called modular synchronous composition. A monolithic model
that represents all sequences of I/O vector that are accepted as fault-free by the
diagnosis scheme is computed by making the modular synchronous composition
between all partial models. Based on the monolithic model, we can compute the

[Flowchart contents: fault detection approaches are divided into diagnosability analysis and identification; identification methods are divided into Petri net and automaton formalisms; automaton-based identification is divided into monolithic models (timed and untimed) and partial models (untimed), the latter including this work.]

Figure 1.3: Flowchart of the fault diagnosis study.

language generated using the proposed method and verify its efficiency to diagnose
faults. We also analyze the reduction of the exceeding language by varying the free
parameter used to identify the partial models. Figure 1.4 represents the schematic
of the distributed identification methodology that is proposed in this work.
In order to illustrate the results, a virtual plant using the 3D simulation software
Factory I/O, controlled by a programmable logic controller (PLC), is used. We build
the identified model of the virtual plant and perform the fault diagnosis using the
proposed method. Furthermore, we analyze the behaviors generated by the composed
model, verifying how exceeding behaviors can affect fault diagnosis. From the mono-
lithic model, a study is carried out to analyze how the generated language of the
identified model represents more behaviors than the plant and how these exceeding
sequences can be reduced.
Thus, this work is organized as follows: In Chapter 2, some preliminary con-
cepts about Discrete-Event Systems and system identification are presented. In

[Schematic contents: the system I/Os are partitioned among partial models; each partition is identified separately as an M-NDAAO; the identified M-NDAAOs are combined through the synchronous composition (M-NDAAO||), which is then used for fault diagnosis.]

Figure 1.4: Schematic of Distributed Identification with M-NDAAOs.

addition, this chapter formalizes the NDAAO introduced by [2]. In Chapter 3, the
problems around the NDAAO are presented using examples. From this, we modify
the identification algorithm, introducing the modified NDAAO. Finally, we perform
an analysis of the language generated by the model, studying how to reduce its ex-
ceeding language. In Chapter 4, we present the problem of monolithic identification
for systems with concurrent behavior. Thus, a methodology for synchronizing par-
tial models identified by the modified NDAAO methodology is presented, building
a monolithic model that represents the parallel language of the models. Then, the
model’s language is analyzed in order to reduce its exceeding language, using a free
parameter, in addition to eliminating impossible behaviors identified in the mono-
lithic model. In Chapter 5, a virtual plant simulated using a 3D simulation software
and controlled by a programmable logic controller is used to illustrate the proposed
method. Finally, in Chapter 6, we conclude the work and propose future works.

Chapter 2

Preliminaries

This chapter introduces concepts that permit the understanding of the proposed
work. Since our goal is to establish a methodology to obtain monolithic models from
identified partial models, it is necessary to introduce some basic concepts. Thus, in
Section 2.1, the concept of Discrete-Event Systems (DES) is presented. In Section
2.2, the definition of language of a DES is presented. In Section 2.3, the definition
of automata, a tool for representing the language of a DES, is presented, as well
as some useful functions applied to it. In Section 2.4, we present the concept of
system identification, modeled as automaton. In this section, we introduce a class
of automaton used for system identification, called Nondeterministic Autonomous
Automata with Outputs (NDAAO).

2.1 Discrete-Event Systems


Discrete-Event (dynamic) Systems (DESs) form a continuously growing class of
automation systems that has become popular in the past decades due to the prolif-
eration of digital computing technologies, network interconnectivity, the emergence
of cyber-physical systems and Internet of Things applications. The behavior of this
class of dynamical systems is determined by the asynchronous occurrence of events.
An event may be identified with a specific action taken, or may be viewed as a
spontaneous occurrence dictated by nature or, still, the result of several conditions
which are all suddenly met [6]. The arrival of a client to a queue, the beginning and
ending of a task or sending a message in a communication system are examples of
events. The occurrence of an event causes, in general, a change in the system, which
may or may not manifest itself to an external observer. In addition, a change can
be caused by the occurrence of an event internal to the system itself, such as the
termination of an activity or timing. In any case, these changes are characterized by
being abrupt and instantaneous, i.e., by perceiving an event occurrence, the system
reacts immediately, accommodating itself to a new situation where it remains until

Figure 2.1: Queuing example.

a new event occurs. Thus, the simple passing of time is not enough to ensure that
the system evolves.
According to [6] it is possible to define Discrete-Event Systems as follows.

Definition 2.1 (Discrete-Event Systems). Discrete-Event Systems are dynamic sys-


tems with two defining characteristics: their state spaces are discrete and potentially
infinite, and their dynamics are event-driven as opposed to time-driven.

Example 2.1. A simple example of a system that can be modeled as DES is a


queue. We can discretize the states of this system as the set of non-negative integers,
X = {0, 1, 2, . . .}, representing the number of people present in the queue, and the
event set Σ = {a, l}, representing the arrival and the exit of a person in the queue.
Every time the event a occurs, the system goes from state n to n + 1, while the event
l takes the system from state n to n − 1, n ∈ X. A graphical representation of this
system evolution, over time t, is shown in Figure 2.1.
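To make the event-driven nature of this example concrete, the short Python sketch below replays a sequence of arrival (a) and departure (l) events and tracks the resulting state of the queue. It is only an illustrative aid (the function name and the guard against departures from an empty queue are our own choices, not part of [6]).

```python
# Minimal sketch of the queue DES of Example 2.1: the state (queue length)
# changes only when an event occurs, not with the passage of time.
def run_queue(events, initial_state=0):
    """Replay a sequence of events 'a' (arrival) and 'l' (leave) and
    return the list of visited states."""
    state = initial_state
    visited = [state]
    for event in events:
        if event == "a":
            state += 1                 # arrival: n -> n + 1
        elif event == "l" and state > 0:
            state -= 1                 # departure: n -> n - 1
        visited.append(state)
    return visited

print(run_queue("aalal"))  # [0, 1, 2, 1, 2, 1]
```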

There are several methodologies to represent and study DES. One of them is
using language theory and automata. These concepts are presented in Sections 2.2
and 2.3, respectively.

2.2 Languages
A formal way to represent the behavior of a DES is using the concept of language.
In Section 2.1, the concept of events of DES is presented, which form the event set
Σ. These events form the alphabet of the system, such that the concatenation of

events forms sequences that can be interpreted as words of a language. Thus, a
language is a set of sequences formed with the elements of Σ. According to [6], the
language defined over a finite set Σ can be defined as follows.

Definition 2.2 (Language). A Language L, defined over an event set Σ, is a set of


finite length sequences formed with events in Σ.

The length of a sequence s is the number of events that form it, counting multiple
occurrences of the same event, and it is denoted by ∥s∥. The sequence with zero
length is called the empty sequence and is denoted by ε. The language formed of all
sequences of finite length obtained from Σ plus the empty sequence ε is called the
Kleene-Closure of Σ denoted by Σ∗ .

Definition 2.3 (Kleene-Closure). Let L ⊆ Σ∗ , then L∗ := {ε} ∪ L ∪ LL ∪ LLL ∪ . . .,


where Σ∗ is the set of all finite sequences of elements of Σ including the empty
sequence.

Example 2.2. Consider the queue example presented in Example 2.1, we can find
the Kleene-Closure of the event-set Σ = {a, l} as:

Σ∗ = {ε, a, l, aa, al, la, ll, aaa, . . .}.

Note that every language formed with the elements of Σ is a subset of Σ∗ , since
the Kleene-Closure contains all possible words generated from Σ.

A sequence s ∈ Σ∗ can be decomposed into three parts: the prefix, the subword
and the suffix.

Example 2.3. Let s = ala be a sequence that belongs to the Kleene-Closure of Σ


presented in Example 2.2, then:

• ε, a, al, ala are prefixes of s;

• ε, a, l, al, la, ala are subwords of s;

• ε, a, la, ala are suffixes of s.

Since a language is defined as a set of finite length sequences, we can define


the Prefix-Closure set, which creates a new language consisting of all prefixes of all
sequences of the original language.

Definition 2.4 (Prefix-Closure). Let L ⊆ Σ∗. Then L̄ := {s ∈ Σ∗ : (∃t ∈ Σ∗)[st ∈ L]}.

Another operation over languages is the concatenation, which concatenates every


sequence that belongs to a language La with every sequence that belongs to a language Lb.

Definition 2.5 (Concatenation). Let La, Lb ⊆ Σ∗. Then LaLb := {s ∈ Σ∗ : (s = sa sb) with (sa ∈ La) and (sb ∈ Lb)}.
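Since the languages in these examples are finite, the operations of Definitions 2.4 and 2.5 can be illustrated directly with Python sets. The sketch below is only an illustration of the definitions; the function names are ours and sequences are represented as strings, with "" standing for the empty sequence ε.

```python
# Prefix-closure and concatenation for finite languages over Σ = {a, l}.
def prefix_closure(language):
    """All prefixes of all sequences in the language (Definition 2.4)."""
    return {s[:i] for s in language for i in range(len(s) + 1)}

def concatenate(la, lb):
    """Concatenation of languages La and Lb (Definition 2.5)."""
    return {sa + sb for sa in la for sb in lb}

La = {"a", "al"}
Lb = {"", "la"}
print(sorted(prefix_closure(La)))   # ['', 'a', 'al']
print(sorted(concatenate(La, Lb)))  # ['a', 'al', 'ala', 'alla']
```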

We conclude that the behavior of a DES can be represented as the possible


sequences of events that a system can perform, which represent its language. In the
sequel, the definition of a deterministic automaton and nondeterministic automaton
are presented, in addition to some operations performed on the language represented
by automaton.

2.3 Automata
An automaton is a formalism that allows the representation of the possible sequences
of the system, i.e., its language. According to [6], “the simplest way to present
the notion of automaton is to consider its directed graph representation, or state
transition diagram”, which leads to Definition 2.6.

Definition 2.6 (Deterministic Automaton). A Deterministic Automaton denoted


by G, is a six-tuple

G = (X, Σ, f, Γ, x0 , Xm )

where X is the set of states, Σ is the finite set of events associated with G, f :
X × Σ → X is the transition function, Γ : X → 2Σ is the active event function (or
feasible event function), x0 is the initial state, Xm ⊆ X is the set of marked states.

The Γ function can be omitted since it can be deduced from the f function,
making it redundant.

Example 2.4. Consider an elevator that serves a three-story building, which can go up and down, in such a way that the number of each floor represents a state and the actions of moving represent events. Therefore, the set of states of this system is X = {1, 2, 3}, where 1, 2, and 3 represent the floors the elevator may visit, and Σ = {u, d} represents the actions of going up and down, respectively. The elevator example can then have its language represented by the automaton G, shown in Figure 2.2, where X = {1, 2, 3}, Σ = {u, d}, f(1, u) = 2, f(2, u) = 3, f(3, d) = 2, f(2, d) = 1, x0 = 1, and Xm = {3}.
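The elevator automaton of Example 2.4 can be encoded directly from Definition 2.6. The sketch below is a minimal illustration under our own encoding choices (the transition function f is a plain dictionary and the run function is hypothetical); it simply follows a sequence of events from the initial state.

```python
# Deterministic automaton G of Example 2.4 encoded as a transition table.
X = {1, 2, 3}                    # states (floors)
Sigma = {"u", "d"}               # events: up, down
f = {(1, "u"): 2, (2, "u"): 3, (3, "d"): 2, (2, "d"): 1}
x0, Xm = 1, {3}

def run(sequence, state=x0):
    """Follow a sequence of events; return the reached state, or None if
    some event is not in the active event set of the current state."""
    for event in sequence:
        if (state, event) not in f:
            return None
        state = f[(state, event)]
    return state

print(run("uu"))   # 3 (a marked state is reached)
print(run("ud"))   # 1
print(run("d"))    # None: 'd' is not feasible in state 1
```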

A deterministic automaton has a unique initial state, all transitions have event
labels σ ∈ Σ, and the transition function is deterministic in the sense that if event
σ ∈ Γ(x) occurs, then σ causes a transition from x to a unique state y = f (x, σ).
However, when modeling a system, lack of knowledge about it can lead us to con-
struct ε-transitions, i.e., transitions labeled with the empty sequence ε. These

Figure 2.2: Elevator automaton.

transitions may represent events that cause a change in the internal state of a DES
but are not “observable” by an outside observer. Thus, we generalize the notion of
automaton and define the class of nondeterministic automata.

Definition 2.7 (Nondeterministic Automaton). A Nondeterministic Automaton de-


noted by G, is a six-tuple

G = (X, Σ ∪ {ε}, fnd , Γ, x0 , Xm )

where fnd : X × (Σ ∪ {ε}) → 2^X is the nondeterministic transition function and x0 ⊆ X is the set of possible initial states.

In order to analyze DES modeled by automata, it is important to define some


operations to modify a single automaton and compose two or more automata. Thus,
we introduce some unary operations that alter the state transition diagram of an
automaton in the sequel.

2.3.1 Unary operations


The accessible part of automaton G, Ac(G), corresponds to the automaton that
generates all sequences starting at the initial state, x0 . Thus Ac(G) eliminates all
states not reachable from the initial state x0 and their related transitions.

Definition 2.8 (Accessible Part). Ac(G) := (Xac, Σ, fac, x0, Xac,m), where Xac =
{x ∈ X : (∃s ∈ Σ∗ )[x ∈ fnd (x0 , s)]}, Xac,m = Xm ∩ Xac , and fac : Xac × Σ → 2Xac is
computed from fnd by restricting its domain to the accessible states of G.

The Coaccessible Part of automaton G, denoted as CoAc(G), corresponds to the


automaton that generates all sequences that starts in any state x ∈ X and end
in any marked state xm ∈ Xm . Coaccessibility is closely related to the concept of
deadlock, where an automaton finds a deadlock when it reaches a state that makes
it impossible to reach a marked state.

Definition 2.9 (Coaccessible Part). CoAc(G) := (Xcoac , Σ, fcoac , x0,coac , Xm ), where


Xcoac = {x ∈ X : (∃s ∈ Σ∗ )(∃xm ∈ Xm )[xm ∈ fnd (x, s)]}, fcoac : Xcoac × Σ → 2Xcoac
is obtained from fnd by restricting its domain to the coaccessible states of G, and
x0,coac = x0 , if x0 ∈ Xcoac , or undefined otherwise.


Figure 2.3: (a) G, (b) CoAc(G), (c) Ac(G), (d) T rim(G).

The Trim operation of automaton G, denoted by Trim(G), computes both the accessible and the coaccessible parts of the automaton.

Definition 2.10 (Trim Operation). T rim(G) := CoAc[Ac(G)] = Ac[CoAc(G)]

Example 2.5. Consider the automaton G depicted in Figure 2.3(a). Figures 2.3(b), 2.3(c), and 2.3(d) represent the coaccessible part, the accessible part, and the trim operation of G, respectively.
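The unary operations of Definitions 2.8-2.10 amount to two reachability computations over the transition structure: a forward search from x0 and a backward fixed point from the marked states. The sketch below illustrates this for a small hypothetical automaton (not the one of Figure 2.3), with the nondeterministic transition relation stored as a dictionary mapping (state, event) to a set of target states; the function names are ours.

```python
# Accessible and coaccessible states of an automaton with transition
# relation f: (state, event) -> set of target states.
def accessible(f, x0):
    """States reachable from the initial state x0 (Definition 2.8)."""
    reached, frontier = {x0}, [x0]
    while frontier:
        x = frontier.pop()
        for (src, _), targets in f.items():
            if src == x:
                for y in targets - reached:
                    reached.add(y)
                    frontier.append(y)
    return reached

def coaccessible(f, Xm):
    """States from which some marked state in Xm can be reached
    (Definition 2.9), computed by a backward fixed point."""
    coac, changed = set(Xm), True
    while changed:
        changed = False
        for (src, _), targets in f.items():
            if src not in coac and targets & coac:
                coac.add(src)
                changed = True
    return coac

# Hypothetical automaton with marked state set Xm = {0}.
f = {(0, "a"): {1, 2}, (1, "b"): {0}, (2, "a"): {4}, (3, "c"): {2}}
print(sorted(accessible(f, 0)))                          # [0, 1, 2, 4]
print(sorted(coaccessible(f, {0})))                      # [0, 1]
print(sorted(accessible(f, 0) & coaccessible(f, {0})))   # Trim: [0, 1]
```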

2.3.2 Parallel Composition


Parallel composition is the standard way of building models from the models of individual system components. When we model systems composed of interacting components, the events are of two types: common events and private events. Common events are events present in more than one subsystem, and therefore their occurrence must always be performed by all subsystems that share them simultaneously. Private events are events present in only one subsystem, and can be executed without the need for synchronization. According to [6], the parallel composition of two automata G1 and G2 can be defined as follows.

Definition 2.11 (Parallel Composition). The parallel composition of two automata G1 and G2 is defined as

G1||G2 := Ac(X1 × X2, Σ1 ∪ Σ2, f, Γ1||2, (x01, x02), Xm1 × Xm2),

where

f((x1, x2), σ) := (f1(x1, σ), f2(x2, σ)), if σ ∈ Γ1(x1) ∩ Γ2(x2);
                  (f1(x1, σ), x2),        if σ ∈ Γ1(x1) \ Σ2;
                  (x1, f2(x2, σ)),        if σ ∈ Γ2(x2) \ Σ1;
                  undefined,              otherwise,

and thus Γ1||2(x1, x2) = [Γ1(x1) ∩ Γ2(x2)] ∪ [Γ1(x1) \ Σ2] ∪ [Γ2(x2) \ Σ1].

Figure 2.4: Automaton G1 and G2 .

Figure 2.5: Parallel Composition.

Example 2.6. Consider the two automata that partially represent the language of a
complex system, G1 and G2 , shown in Figure 2.4, where the event set of G1 and G2
are Σ1 = {a, b, g} and Σ2 = {a, b}, respectively. To build a monolithic model, i.e., a
single automaton which represents the entire language, performed by the subsystems,
we can compute the parallel composition, G = G1 ||G2 , shown in Figure 2.5.
Note that, in this example, the events a and b are common events, so that for one subsystem to execute them, it must be possible to do so in the other, thus synchronizing the occurrence of the common events. Private events, on the other hand, can occur freely, whenever possible, in the subsystem that owns them, since they are exclusive to it.
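A direct way to visualize Definition 2.11 is to compute the accessible part of the product construction explicitly. The sketch below does so for two small hypothetical deterministic automata (they are not the automata of Figure 2.4, whose structure is only given graphically); only the composition rule itself comes from the definition, and the encoding and function name are ours.

```python
# Sketch of Definition 2.11: common events must be executed by both
# automata; private events evolve only the automaton that owns them.
def parallel(f1, s1, x01, f2, s2, x02):
    """Return the accessible transitions of G1||G2 as a dictionary."""
    comp = {}
    frontier, visited = [(x01, x02)], {(x01, x02)}
    while frontier:
        x1, x2 = frontier.pop()
        for e in s1 | s2:
            t1 = f1.get((x1, e)) if e in s1 else x1   # e private to G2: G1 stays
            t2 = f2.get((x2, e)) if e in s2 else x2   # e private to G1: G2 stays
            if t1 is None or t2 is None:              # event not feasible
                continue
            comp[((x1, x2), e)] = (t1, t2)
            if (t1, t2) not in visited:
                visited.add((t1, t2))
                frontier.append((t1, t2))
    return comp

f1 = {("x", "a"): "y", ("y", "g"): "x"}   # G1 with event set {a, g}
f2 = {(0, "a"): 1, (1, "b"): 0}           # G2 with event set {a, b}
for tr, dst in sorted(parallel(f1, {"a", "g"}, "x",
                               f2, {"a", "b"}, 0).items(), key=str):
    print(tr, "->", dst)
# The common event 'a' is synchronized; 'g' and 'b' occur independently.
```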

The established notion of synchronization of subsystems by parallel composition


is very important in Chapter 4, since we build a monolithic model from a distributed
identification.
Therefore, the automaton is a powerful and simple tool to represent the language
of systems, allowing even a partial representation of their behavior from subsystems,

whose parallel composition represents the complete system. However, to construct an automaton that represents a system, some prior knowledge of the system to be analyzed is necessary, which can be a laborious task. Due to this challenge, system identification methods have been proposed in the literature.

2.4 System Identification


Basically, the system identification methodology builds a model that represents the
behavior of the system to be analyzed. As mentioned in Chapter 1, this work aims
at developing a discretized model, represented by a Discrete-Event System, from
input and output signals of the system (sensor and actuators signals, respectively).
Thus, its construction is directly related to the previous knowledge of the system
under study, which can be: (i) white box, in which we have complete knowledge of
all the system’s behaviors, (ii) gray box, in which there is partial knowledge of its
possible behaviors, and (iii) black box, in which knowledge of the system is very
limited ([25]). In this work, the identification of black box systems is performed.
Although it is not very common in industry, the advantage of building an efficient
methodology for the identification of black box systems is that it can be applicable
to any problem to be analyzed, since its application limitations are reduced.
The systems to be analyzed are controlled by programmable logic controllers
(PLC) that exchange information with the plant. The PLC receives the binary
values from the plant sensors according to its pre-established programming and
sends the command signal to the actuators, which can change the sensor values,
following the work cycle. Since we work with black box systems, we do not have
knowledge of the system, and its identification is based on the observation of the
signals of the controller, over time. This is possible since, in general, the PLC is an
accessible device, where the input signals (signals received by sensors) and output
signals (signals sent to actuators) are available by the controller itself. Figure 1.1
shows the closed loop system in which the signals are observed.
Therefore, let us consider the closed-loop system depicted in Figure 1.1, and let
nI and nO denote the number of binary inputs and outputs of the PLC, respectively.
Then, the vector formed of the input and output signals of the PLC at a time instant
ti ∈ R is denoted as ui = [I1 (i) I2 (i) . . . InI (i) O1 (i) O2 (i) . . . OnO (i)], where Iβ (i)
and Oδ (i), for β ∈ {1, 2, . . . , nI } and δ ∈ {1, 2, . . . , nO } are, respectively, the input
and output signals of the controller at time instant ti . The I/O vector is a vector
ui ∈ Z_2^n, where n = nI + nO and Z2 = {0, 1}, and the I/O of index m ∈ N in ui is represented as ui[m].
The main goal of system identification is to compute a model that simulates the
behavior of the real plant, i.e., a model that represents the same I/O vector sequences

pj = ([0 0 0 0]^T, [1 0 0 1]^T, [0 0 0 1]^T, [0 1 0 1]^T, [0 1 0 0]^T, [0 0 0 0]^T)

Figure 2.6: Path of a system with 4 I/Os.

executed by the plant. Let pj = (νj,1 , νj,2 , . . . , νj,lj ) be an observed path where νj,z ,
for z = 1 . . . , lj , is an observed I/O vector. In addition, let sj = νj,1 νj,2 . . . νj,lj be a
sequence of I/O vectors associated with pj . Let P = {p1 , p2 , . . . , p|P| } be the set of
distinct observed paths. In this work, the following assumption is made.

Assumption 2.1. All paths of the system start with the same I/O vector, i.e.,
νi,1 = νj,1 , for all pi , pj ∈ P, and are cyclic, i.e., for all pj ∈ P, νj,1 = νj,lj .

Every time an I/O that belongs to the analyzed vector undergoes a change, a transition occurs in the model, i.e., the system reaches a new state. In this work, the system evolution is observed until it returns to its initial state, previously defined. When the system returns to this state, a task is completed and the path is ended. Figure 2.6 represents a cyclic path of a system with 4 I/Os.
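In practice, a path pj can be extracted from logged controller signals by keeping an I/O vector only when at least one signal changes, and closing the path when the initial vector is observed again. The sketch below illustrates this idea under the assumption that the log is already available as a list of binary tuples; the function name and the logging format are ours, not taken from the cited works.

```python
# Build a cyclic path of I/O vectors from sampled binary controller signals:
# a new vector is recorded only when some I/O changes, and the path is
# closed when the system returns to the initial I/O vector.
def extract_path(samples):
    initial = samples[0]
    path = [initial]
    for vector in samples[1:]:
        if vector != path[-1]:          # a change of some I/O: new state
            path.append(vector)
            if vector == initial:       # back to the initial vector: cycle ends
                break
    return path

samples = [(0, 0, 0, 0), (0, 0, 0, 0), (1, 0, 0, 1), (1, 0, 0, 1),
           (0, 0, 0, 1), (0, 1, 0, 1), (0, 1, 0, 0), (0, 0, 0, 0)]
print(extract_path(samples))
# [(0,0,0,0), (1,0,0,1), (0,0,0,1), (0,1,0,1), (0,1,0,0), (0,0,0,0)]
```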
Note that the observed I/O vector paths are the only information that we have
from the system. Due to this lack of information, the creation of a deterministic
automaton presented in Section 2.3 can be a difficult task. Thus, it is necessary
to adapt the formalism presented in Section 2.3, to build a simpler model which
allows the use of the observed information. In [2], a model called Nondeterministic
Autonomous Automata with Outputs is presented.

2.4.1 Nondeterministic Autonomous Automata with Out-


puts
As described in Section 2.3, automata are an useful formalism for describing lan-
guage. However, for black box strategies, since we have limited information about
the system, building deterministic automaton models can be an arduous task. There-
fore, in [2], a new class of automaton, called Nondeterministic Autonomous Automata with Outputs (NDAAO), is developed.

Definition 2.12 (Nondeterministic Autonomous Automata with Outputs). A Non-


deterministic Autonomous Automata with Outputs can be defined as

N = (X, Ω, fnd , λ, x0 , xf )

where X is the set of states, Ω = {u1, u2, . . . , u|Ω|} is the set of output symbols, represented by I/O vectors ui, fnd : X → 2^X is the nondeterministic autonomous transition function, λ : X → Ω is the output function, x0 is the initial state, and xf is the final state.

The simplicity of the model allows an identification from the observation of the
I/O vectors of the black box system, as illustrated in Figure 1.1. As in this work,
[2] considers that all paths used to build the NDAAO are cyclic and start with the
same I/O vector.
Since the NDAAO is an autonomous automaton, it has no associated events, being therefore nondeterministic. Furthermore, each state x ∈ X is associated with an output u ∈ Ω through the function λ, where the set Ω is constructed from the I/O vectors observed during the identification process. The NDAAO has a unique state x0 ∈ X, which is associated with the initial I/O vector observed in all paths pj. As we assume that all observed paths are cyclic, we also have a unique state xf ∈ X, which is associated with the same I/O vector as x0.
According to [2], the language generated by the NDAAO is expressed as the possible sequences of I/O vectors it can perform. Therefore, we can define the set of words of length n generated by the NDAAO as

L^n(N) = ∪_{x_i ∈ X} L^n_{x_i}(N),

where L^n_{x_i}(N) is the set of words of length n generated by the NDAAO starting from a state x_i, defined as

L^n_{x_i}(N) = {s = λ(x_i)λ(x_1) . . . λ(x_{n−1}) : (∃x_1, . . . , x_{n−1} ∈ X)[x_1 ∈ f_nd(x_i) ∧ (∀t ∈ {1, 2, . . . , n − 2}) x_{t+1} ∈ f_nd(x_t)]}.
The basic concept used in NDAAO’s construction is that each change observed
in an I/O vector during a path is seen as an autonomous transition in the automaton,
i.e., a transition without an event linked to it.

Example 2.7. Consider the path pj depicted in Figure 2.6. The NDAAO computed from this path is N = (X, Ω, fnd, λ, x0, xf), where:

• X = {x0, x1, x2, x3, x4, xf};

• Ω = {[0 0 0 0]^T, [1 0 0 1]^T, [0 0 0 1]^T, [0 1 0 1]^T, [0 1 0 0]^T};

• fnd: fnd(x0) = {x1}, fnd(x1) = {x2}, fnd(x2) = {x3}, fnd(x3) = {x4}, fnd(x4) = {xf};

• λ: λ(x0) = [0 0 0 0]^T, λ(x1) = [1 0 0 1]^T, λ(x2) = [0 0 0 1]^T, λ(x3) = [0 1 0 1]^T, λ(x4) = [0 1 0 0]^T, λ(xf) = [0 0 0 0]^T;


Figure 2.7: NDAAO of Path pj from Figure 2.6.

• x0 = x0 ;

• xf = xf .

Figure 2.7 depicts the NDAAO computed from path pj of Figure 2.6.
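For k = 1, the construction of Example 2.7 can be reproduced with a few lines of code: each observed I/O vector change creates (or reuses) a state, and an autonomous transition links consecutive states. The sketch below is a simplified illustration of this idea, not the general algorithm of [2]; following the example, it keeps one state per distinct intermediate I/O vector and separate initial and final states.

```python
# Simplified identification of an NDAAO for k = 1.
def identify_k1(paths):
    x0, xf = "x0", "xf"
    lam = {x0: paths[0][0], xf: paths[0][-1]}   # output function
    f = {}                                      # transitions: state -> set of states
    state_of, counter = {}, 0                   # distinct intermediate vector -> state
    for path in paths:
        prev = x0
        for i, vector in enumerate(path[1:], start=1):
            if i == len(path) - 1:
                state = xf
            elif vector in state_of:
                state = state_of[vector]
            else:
                counter += 1
                state = f"x{counter}"
                state_of[vector] = state
                lam[state] = vector
            f.setdefault(prev, set()).add(state)
            prev = state
    return f, lam

path = [(0, 0, 0, 0), (1, 0, 0, 1), (0, 0, 0, 1),
        (0, 1, 0, 1), (0, 1, 0, 0), (0, 0, 0, 0)]
f, lam = identify_k1([path])
for s in sorted(f):
    print(s, "->", sorted(f[s]))   # x0->x1->x2->x3->x4->xf, as in Figure 2.7
```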

In addition, the NDAAO has a free parameter k ∈ N that adjusts the accuracy of
the built model. This parameter sets the number of I/O vectors that are associated
with each state. Therefore, each state of the NDAAO model is associated with a
distinct subsequence of I/O vectors of a given length k observed in the paths pj ∈ P,
and it is computed such that all observed subsequences of length k are associated
with the states of the NDAAO model. Then, the free parameter k leads to a trade off
between the size of the model and its accuracy in describing the observed behavior
[2].
Let us introduce the function Remove : Ω^k → Ω^{k−1}, which removes the first vector of a sequence of I/O vectors of length k, i.e., if σ^k = u1 u2 . . . uk, then Remove(σ^k) = u2 . . . uk. In order to compute the NDAAO model, it is first necessary to generate modified paths p_j^k from pj, according to the free parameter k, as p_j^k = (σ_{j,1}^k, σ_{j,2}^k, . . . , σ_{j,lj+k−1}^k), where σ_{j,1}^k is formed of the concatenation of k I/O vectors all equal to ν_{j,1}, σ_{j,z}^k = Remove(σ_{j,z−1}^k)ν_{j,z} for 1 < z ≤ lj, and σ_{j,z}^k = Remove(σ_{j,z−1}^k)ν_{j,1} for lj < z ≤ lj + k − 1. After the computation of the modified paths p_j^k, the NDAAO model can be computed such that each state is associated with a distinct sequence of I/O vectors σ_{j,z}^k, and a transition between two states exists in the NDAAO model if, and only if, the associated sequence of I/O vectors has been observed in at least one p_j^k. The value of λ(x), where x ∈ X, is defined as the last I/O vector of the sequence of I/O vectors associated with x.
In order to visualize how the parameter k changes the language generated by the NDAAO, the following example is presented.

Example 2.8. Consider the observed paths p1 and p2 presented in Figure 2.8, from which we build the NDAAO for k = 1 and k = 2. For the parameter k = 2, it is necessary to modify every observed path, adding the information of the past k − 1 I/O vectors to the existing I/O vectors. The resulting modified paths p_1^2 and p_2^2 are shown in Figure 2.9. Finally, the resulting NDAAOs, for k = 1 and k = 2, are shown in Figures 2.10 and 2.11, respectively.

p1 = ([0 0 0]^T, [1 0 1]^T, [0 0 1]^T, [0 1 1]^T, [0 1 0]^T, [0 0 0]^T),

p2 = ([0 0 0]^T, [1 0 0]^T, [1 0 1]^T, [0 1 1]^T, [0 1 0]^T, [0 0 0]^T).
Figure 2.8: Paths p1 and p2 , for the construction of NDAAO.

             
p_1^2 = ([0 0 0]^T[0 0 0]^T, [0 0 0]^T[1 0 1]^T, [1 0 1]^T[0 0 1]^T, [0 0 1]^T[0 1 1]^T, [0 1 1]^T[0 1 0]^T, [0 1 0]^T[0 0 0]^T, [0 0 0]^T[0 0 0]^T),

p_2^2 = ([0 0 0]^T[0 0 0]^T, [0 0 0]^T[1 0 0]^T, [1 0 0]^T[1 0 1]^T, [1 0 1]^T[0 1 1]^T, [0 1 1]^T[0 1 0]^T, [0 1 0]^T[0 0 0]^T, [0 0 0]^T[0 0 0]^T).

Figure 2.9: Modified paths p_1^2 and p_2^2 from Example 2.8.

         

Figure 2.10: NDAAO of Example 2.8, for k = 1.

           

Figure 2.11: NDAAO of Example 2.8, for k = 2.


Figure 2.12: Reduced NDAAO of Example 2.8, for k = 2.

Note that, for k > 1, the algorithms presented in [2] create, at most, k − 1 additional k-vectors for each observed path. The goal of these additional vectors is to force each path that belongs to the language generated by the NDAAO to start and end with the same sequence of I/O vectors, i.e., σ_{j,1}^k = σ_{j,lj+k−1}^k, where σ_{j,1}^k is associated with the initial state x0 and σ_{j,lj+k−1}^k is associated with the final state xf. However, these additional vectors create additional states in the model that represent sequences of repeated I/O vectors, such as the sequence s = λ(x5)λ(xf), which should be eliminated. Therefore, the final step in creating the NDAAO is to reduce the final state k − 1 times, in order to eliminate all additional states created, where, for each reduction, the new final state becomes one of the predecessor states of the current final state. Following the example, for k = 2, the reduction process has to be performed only once, where the only predecessor of the final state xf is x5. Therefore, Figure 2.12 shows the reduced NDAAO for k = 2.
Note that the free parameter k leads to a trade-off between the size of the model and its accuracy in describing the observed behavior. For k = 1, each state is associated only with the current I/O vector observed in each path, which leads to a more compact model that also represents behaviors that were not observed, e.g., pexc = (λ(x0), λ(x6), λ(x1), λ(x2), . . . , λ(xf)). Increasing the value of k to k = 2 adds the information of the previously observed I/O vector. This additional information distinguishes the I/O vectors [1 0 1]^T and [0 1 0]^T observed in both p1 and p2, since the previously observed I/O vector in each path is different. For example, the I/O vector [1 0 1]^T observed in path p1 is preceded by the I/O vector [0 0 0]^T, while in path p2 the same I/O vector is preceded by the I/O vector [1 0 0]^T. Thus, despite representing the same I/O vector, the vector [1 0 1]^T is represented by two distinct states, x1 and x7, in the NDAAO for k = 2. Therefore, for k = 2, we have a larger model which represents only the observed behavior.

In the following chapter, the algorithms used to compute the NDAAO presented in [2] are carefully analyzed in order to identify possible flaws in their construction that the classical model may present.

Figure 2.13: Relation between the Languages of NDAAO, figure from [1].


Therefore, the NDAAO is an adaptation of the automaton presented in Section
2.3, designed especially for black box system identification problems, where the only
information available is the I/O vectors of the PLC. When working with system identification methods, such as the NDAAO, it is interesting to extend the language concepts
presented previously, since our goal is to represent exclusively the language of a
system.

2.4.2 Languages of Identified Discrete Event Systems


In order to obtain a model capable of reproducing all possible behaviors of the
system in fault-free operation, an infinite observation of the paths executed by the system would be necessary. Thus, the observed behavior is expected to be a subset
of the original fault-free behavior. To reduce the difference between these sets, the
observation is, in general, performed for a long time. A way to measure the accuracy
of the identified model is through the definition of five important languages shown
in Figure 2.13.
The languages depicted in Figure 2.13 are:

• LOrig : is the original language of the system, i.e., the language generated by
the fault-free behavior;

• LObs : is the observed language;

• LIden : is the identified language;

• LOrigN I : is the language that is original but not identified;

• LExc : is the exceeding language, i.e., part of identified language that is not
part of the original one.

The language generated by the identified model is called LIden . Since the model
must reproduce the observed behavior, then LObs ⊂ LIden .
The original language LOrig is the one that truly represents the system fault-free
behavior. Actually, LObs ⊂ LOrig , because every observed sequence is obviously
generated by LOrig .
Next, two languages of interest are defined, since they are important to evaluate
the quality of the model obtained with respect to the fault diagnosis [2].

Definition 2.13 (Exceeding language). The exceeding language is the language


which contains all sequences of events that are generated by the identified model
but do not belong to the original language of the system without faults. So LExc =
LIden \ LOrig .

The exceeding language LExc is related to possible faults which are interpreted
as part of the fault-free behavior, since they belong to LIden . Therefore, the ability
to diagnose faults may decrease as LExc grows.

Definition 2.14 (Original nonidentified language). The original nonidentified lan-


guage LOrigN I is the language which contains all the sequences that the fault-free
behavior is able to reproduce but they do not belong to the language obtained by the
identification method. Therefore, LOrigN I = LOrig \ LIden .

In this way, as sequences that belong to LOrigN I are not part of LIden , a sequence
in LOrigNI is interpreted as a fault sequence. Thus, a large cardinality of LOrigNI can generate many false alarms during detection. The LOrigNI language can be
reduced by increasing the observation time of the system without faults. Thus,
the biggest problem related to the fault detection process becomes reducing the
exceeding language LExc .
This chapter provides a review of the theory and bibliography of discrete event
system identification and the fundamentals necessary to understand it. Some con-
cepts such as NDAAO and its language are analyzed in the next chapter, where the
problems presented in this approach are studied, and a new identification method-
ology for DES with cyclic behavior is proposed.

Chapter 3

Modified Nondeterministic
Autonomous Automata with Output

This chapter presents a method for identification of black box systems, which is a
modified model of the NDAAO introduced in Section 2.4. This modification aims
to solve the reinitialization problem, which may lead the NDAAO model not to
represent correctly the original language of the system. This problem is presented
through an example. Thus, in this chapter, the construction of the NDAAO is
carefully analyzed in order to find possible flaws in the representation of the original
language.
Thus, this chapter is organized as follows: In Section 3.1, we analyze the algo-
rithms to construct NDAAO presented in [2] and [3], showing the inefficiency of these
models for certain types of problems. In Section 3.2, the modified NDAAO model
is presented, along with its construction algorithm. In Section 3.3, the language of
the modified model is analyzed.

3.1 The problems of NDAAO


Before proposing the modified NDAAO, the original NDAAO presented in [2] and
its adaptation proposed in [3] are introduced. Thus, in Subsection 3.1.1, we present
the algorithms to construct NDAAO proposed by [2], to show that the model may
not represent the observed language of certain cyclic systems. In Subsection 3.1.2,
the algorithms to construct NDAAO proposed by [3] are presented, which may not
reinitialize for certain types of cyclic systems, so that the identified model does not
represent the original language of the system. An example is used throughout
Subsections 3.1.1 and 3.1.2, in order to illustrate the problems presented by both
models.

3.1.1 NDAAO proposed in Klein et al.
Although the main idea of the NDAAO proposed in [2] is presented in Section 2.4, its construction is detailed in this section. In [2], each observed path pj is modified, creating a modified path p_j^k, composed of sequences of I/O vectors of length k, called k-vectors. Algorithm 3.1 constructs the modified paths p_j^k as follows.

Algorithm 3.1 Construction of the modified paths [2].


Input: P = {p1, p2, . . . , p|P|}, k
Output: P^k
1: for j = 1, 2, . . . , |P| do
2:   Create α_{j,m} = ν_{j,1}, if 1 ≤ m ≤ k; ν_{j,m−k+1}, if k < m ≤ k + l_j − 1; ν_{j,l_j}, if k + l_j ≤ m ≤ 2k + l_j − 2
3:   m ← 1
4:   while m ≤ l_j + k − 1 do
5:     σ_{j,m}^k = (α_{j,m} α_{j,m+1} . . . α_{j,m+k−1})
6:     m ← m + 1
7:   p_j^k = (σ_{j,1}^k, σ_{j,2}^k, . . . , σ_{j,l_j+k−1}^k)
8: P^k = ∪_{j=1}^{|P|} p_j^k

In line 1 of Algorithm 3.1, the loop over the observed paths is performed. In line 2, the variable α_{j,m} is created. If 1 ≤ m ≤ k, then the value of α_{j,m} is equal to the first I/O vector ν_{j,1} of the observed path pj. If k < m ≤ k + l_j − 1, then α_{j,m} is equal to the I/O vector ν_{j,m−k+1}. If k + l_j ≤ m ≤ 2k + l_j − 2, then α_{j,m} is equal to the last I/O vector ν_{j,l_j} of path pj. In line 3, Algorithm 3.1 sets the parameter m to 1. In line 4, each k-vector σ_{j,m}^k is computed. In lines 5-7, the modified path is obtained from the k-vectors. Finally, in line 8, the new set of modified paths P^k is constructed.
The k-vectors computed in Algorithm 3.1 are constructed from the concatenation of k I/O vectors of the observed paths. Note that Algorithm 3.1 creates k − 1 additional k-vectors at the end of each path, in order to force the last k-vector of each modified path to be equal to the initial k-vector. In the following, an example is presented to illustrate the construction of the modified paths.

Example 3.1. Consider paths p_1 = (A, B, A, C, A) and p_2 = (A, B, C, B, A, C, A, B, A), where A, B, C represent I/O vectors which are different from each other. The modified paths computed according to Algorithm 3.1, for k = 3, are p^3_1 = (AAA, AAB, ABA, BAC, ACA, CAA, AAA) and p^3_2 = (AAA, AAB, ABC, BCB, CBA, BAC, ACA, CAB, ABA, BAA, AAA). Note that, according to line 2 of Algorithm 3.1, k − 1 additional k-vectors are created for each path, in order to make the last element equal to the first one.
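To make the construction concrete, the following Python sketch implements the path modification of Algorithm 3.1 (a minimal sketch under our own naming, not part of [2]). Running it on the paths of Example 3.1 with k = 3 reproduces the modified paths p^3_1 and p^3_2 listed above.

def modified_paths_klein(paths, k):
    """Sketch of Algorithm 3.1: build the modified paths P^k of [2].

    Each path is a list of hashable I/O vectors (here, single letters).
    A k-vector is represented as a tuple of k consecutive I/O vectors;
    k-1 additional k-vectors are appended so that the last k-vector of
    every modified path equals the initial one."""
    modified = []
    for p in paths:
        l = len(p)
        # alpha[m] for m = 1, ..., 2k + l - 2 (1-indexed, as in the algorithm)
        alpha = {}
        for m in range(1, 2 * k + l - 1):
            if m <= k:
                alpha[m] = p[0]              # first I/O vector nu_{j,1}
            elif m <= k + l - 1:
                alpha[m] = p[m - k]          # nu_{j,m-k+1} (0-indexed access)
            else:
                alpha[m] = p[-1]             # last I/O vector nu_{j,l_j}
        # one k-vector for each m = 1, ..., l + k - 1
        pk = [tuple(alpha[m + i] for i in range(k)) for m in range(1, l + k)]
        modified.append(pk)
    return modified

if __name__ == "__main__":
    p1 = list("ABACA")
    p2 = list("ABCBACABA")
    for pk in modified_paths_klein([p1, p2], k=3):
        print(["".join(v) for v in pk])
    # Expected output, matching Example 3.1:
    #   ['AAA', 'AAB', 'ABA', 'BAC', 'ACA', 'CAA', 'AAA']
    #   ['AAA', 'AAB', 'ABC', 'BCB', 'CBA', 'BAC', 'ACA', 'CAB', 'ABA', 'BAA', 'AAA']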

In [2], the authors define the enhanced NDAAO N^k = (X, Ω^k, f_nd, Λ, x_0, x_f), where the set Ω^k is formed by all k-vectors observed in the modified paths, and Λ : X → Ω^k is the modified output function. The enhanced NDAAO is built by connecting each k-vector to the previous one through a transition, where each different k-vector σ^k_{j,m} ∈ Ω^k is represented by a unique state x ∈ X. Only the first and last k-vectors of the modified paths are represented by different states, namely the initial and final states, respectively. Algorithm 3.2 performs the construction of N^k.

Algorithm 3.2 Construction of NDAAO.

Input: P^k = {p^k_1, p^k_2, ..., p^k_{|P^k|}}
Output: N^k = (X, Ω^k, f_nd, Λ, x_0, x_f)

 1: j ← 1, t ← 1
 2: while j ≤ |P^k| do
 3:     Create the initial state x_0, Λ(x_0) = σ^k_{j,1}
 4:     Add x_0 to X and σ^k_{j,1} to Ω^k
 5:     x_c ← x_0, m ← 2
 6:     while m < |p^k_j| do
 7:         if σ^k_{j,m} ∉ Ω^k then
 8:             Create state x_t, Λ(x_t) = σ^k_{j,m}
 9:             Add x_t to X and σ^k_{j,m} to Ω^k
10:             Create transition f_nd(x_c) = x_t
11:             x_c ← x_t
12:             t ← t + 1
13:         else
14:             Find x_p such that Λ(x_p) = σ^k_{j,m}
15:             Create transition f_nd(x_c) = x_p
16:             x_c ← x_p
17:         m ← m + 1
18:     Create the final state x_f, Λ(x_f) = σ^k_{j,m}
19:     Add x_f to X
20:     j ← j + 1

In line 1, Algorithm 3.2 creates the variable t, responsible for enumerating the states x_t ∈ X, and the variable j, which indicates the path index. In line 2, the loop is started and performed for all modified paths p^k_j, for j = 1, 2, ..., |P^k|. In lines 3-4, Algorithm 3.2 initializes the NDAAO, creating the initial state x_0 as well as its output Λ(x_0). In line 5, the variables x_c and m are created, where x_c stores the last state visited during the construction of the NDAAO and m is a counter. In lines 7-16, the NDAAO is constructed by visiting each k-vector σ^k_{j,m} of p^k_j. If σ^k_{j,m} has not been visited yet, a state x_t is created in line 8, and the output of x_t is defined as the k-vector σ^k_{j,m}. In line 9, the observed σ^k_{j,m} is added to the set Ω^k and the state x_t is added to the state set X. In line 10, a transition from x_c to x_t is created. If σ^k_{j,m} has already been visited, a search is performed in line 14 to find

Figure 3.1: Enhanced NDAAO proposed in [2] from Example 3.1 (states x_0-x_10 and x_f, with outputs Λ(x_0) = Λ(x_f) = AAA, Λ(x_1) = AAB, Λ(x_2) = ABA, Λ(x_3) = BAC, Λ(x_4) = ACA, Λ(x_5) = CAA, Λ(x_6) = ABC, Λ(x_7) = BCB, Λ(x_8) = CBA, Λ(x_9) = CAB, Λ(x_10) = BAA).

which state x_p of X satisfies Λ(x_p) = σ^k_{j,m}. Then, in line 15, x_p is added to the transition function of the last state visited x_c. Finally, in lines 18-19, Algorithm 3.2 creates the final state x_f as well as its output Λ(x_f), and adds it to the set X.
The following example illustrates the computation of the enhanced NDAAO from
Algorithm 3.2.

Example 3.2. The enhanced NDAAO, for k = 3, computed from paths p^3_1 and p^3_2 defined in Example 3.1, is shown in Figure 3.1.
This model has states whose outputs are k-vectors, where each state is associated with a different k-vector, except for the initial state x_0 and the final state x_f. According to [2], the final and initial states of the NDAAO are equivalent, but they are not merged so as not to create an exceeding language in the NDAAO model.

Since we deal with identification for fault diagnosis, it is more interesting to work with the actual current I/O vector of the system. Thus, a simplified NDAAO is proposed in [2], which is obtained by defining a new output function in the enhanced NDAAO, keeping only the last I/O vector of each k-vector. For instance, the k-vectors ABC and BAC, which are the outputs of states x_6 and x_3 of the NDAAO of Figure 3.1, respectively, are simplified to C. The simplified NDAAO of Example 3.1 is shown in Figure 3.2.
Note that the outputs Λ(x_0) = AAA, Λ(x_2) = ABA, Λ(x_4) = ACA, Λ(x_5) = CAA, Λ(x_8) = CBA, Λ(x_10) = BAA, and Λ(x_f) = AAA of the NDAAO of Figure 3.1 are simplified to λ(x_0) = λ(x_2) = λ(x_4) = λ(x_5) = λ(x_8) = λ(x_10) = λ(x_f) = A in the NDAAO of Figure 3.2. Since the simplified NDAAO can have states with the same output, it may generate sequences with repeated adjacent I/O vectors. Therefore, it is necessary to eliminate these states by a process called reduction, which is presented in Algorithm 3.3.

Figure 3.2: Simplified NDAAO from Example 3.1 (same states as Figure 3.1, with simplified outputs λ(x_0) = λ(x_2) = λ(x_4) = λ(x_5) = λ(x_8) = λ(x_10) = λ(x_f) = A, λ(x_1) = λ(x_7) = λ(x_9) = B, λ(x_3) = λ(x_6) = C).

Algorithm 3.3 Reduction of NDAAO.

Input: N = (X, Ω, f_nd, λ, x_0, x_f)
Output: Reduced NDAAO N = (X, Ω, f_nd, λ, x_0, x_f)

1: Repeat k − 1 times
2:     Take x_f ∈ X
3:     Create the set of pre-states: PRE(x_f) = {x_i ∈ X : f_nd(x_i) = x_f}
4:     Choose a state x_{f−1} from PRE(x_f)
5:     Update the transition function f_nd: ∀(x_m, x_n) ∈ (X \ PRE(x_f) \ {x_{f−1}}) × (PRE(x_f) \ {x_{f−1}}), if x_n ∈ f_nd(x_m), then add x_{f−1} to f_nd(x_m) and remove x_n from f_nd(x_m)
6:     ∀x_m ∈ PRE(x_f) \ {x_{f−1}}, remove x_m from X
7:     Remove x_f from X
8:     x_f = x_{f−1}

Figure 3.3: First reduction of the NDAAO from Example 3.3 (states x_0-x_4, x_6-x_9 and the new final state x_f, with λ(x_0) = λ(x_2) = λ(x_4) = λ(x_8) = λ(x_f) = A, λ(x_1) = λ(x_7) = λ(x_9) = B, λ(x_3) = λ(x_6) = C).

In line 1, Algorithm 3.3 determines that the procedure is repeated k − 1 times in order to eliminate every state that can create sequences with repeated adjacent I/O vectors. In line 2, the current final state x_f of the NDAAO is considered. In line 3, Algorithm 3.3 creates the set of predecessor states PRE(x_f), i.e., the states that have a transition to the final state. In line 4, one state x_{f−1} ∈ PRE(x_f) is picked to be set as the new final state. In line 5, Algorithm 3.3 redefines the transition function of the NDAAO: every transition that reaches a predecessor state which was not chosen is relocated to the chosen predecessor state. In line 6, every predecessor state that was not chosen is eliminated from the NDAAO. In line 7, the final state x_f and every transition that reaches it are eliminated. Finally, in line 8, the chosen predecessor state becomes the new final state.
The purpose of Algorithm 3.3 is to reduce the automaton computed in Algorithm 3.2, eliminating the additional states created by the additional k-vectors constructed in Algorithm 3.1. However, for certain systems, this reduction may change the identified language of the model, which then no longer represents the observed language. In order to illustrate this problem, consider Example 3.3.

Example 3.3. Let us apply Algorithm 3.3 to obtain a reduced NDAAO from the simplified NDAAO of Figure 3.2. We first define the set of predecessor states PRE(x_f) = {x_10, x_5} and choose state x_10 as the new final state. Then, we update the transition function, deleting the transition from state x_4 to state x_5 and adding a transition from state x_4 to state x_10. Finally, we eliminate x_5 and x_f, and set the new final state as x_10. The resulting automaton from the first reduction is presented in Figure 3.3.
Since the model was built for the free parameter k = 3, it is necessary to perform Algorithm 3.3 one more time. Now, the set of predecessor states of the NDAAO

Figure 3.4: Reduced NDAAO from Example 3.3 (states x_0, x_1, x_3, x_6, x_7, x_8, x_9 and the final state x_f, with λ(x_0) = λ(x_8) = λ(x_f) = A, λ(x_1) = λ(x_7) = λ(x_9) = B, λ(x_3) = λ(x_6) = C).

presented in Figure 3.3 is PRE(x_f) = {x_2, x_4}, and we choose state x_4 as the new final state. Then, the transition from state x_9 to state x_2 is deleted and a transition from state x_9 to state x_4 is added. In addition, the transition from state x_1 to state x_2 is deleted and a transition from state x_1 to state x_4 is added. Note that the transition from state x_2 to state x_3 is not preserved. Finally, we eliminate state x_2 and state x_f, and set state x_4 as the new final state. The resulting automaton from the second reduction is presented in Figure 3.4.
Note that, after the second reduction, the NDAAO does not represent the first observed sequence, p_1 = (A, B, A, C, A), and hence it does not represent the observed language. Note that, for k = 3, some k-vectors are repeated throughout the observed sequences. Two k-vectors can represent final states in the reduced model: ABA, which is represented by state x_2 and is the last non-additional k-vector of p^3_2, and ACA, represented by state x_4, which is the last non-additional k-vector of p^3_1. In addition, these k-vectors also appear elsewhere in the observed modified paths, not necessarily being their last elements. This leads to a doubt when we reach states x_2 or x_4: has the system finished a task yet? This doubt is called the reinitialization problem. In practice, the reinitialization problem implies that states x_2 and x_4 may have transitions to other states which are not additional states, i.e., states that are not associated with one of the additional k-vectors created in Algorithm 3.1. However, Algorithm 3.3 does not consider such transitions, only adding transitions that reach the predecessor states. Thus, in this example, the transition from state x_2 to state x_3 is deleted, consequently modifying the identified language of the automaton.

Therefore, the NDAAO construction method proposed in [2] may not represent the observed language when the modified paths of the system present the reinitialization problem. It is important to emphasize that the problem presented in Example 3.3 occurs only for k > 1, since for k = 1 it is not necessary to reduce the NDAAO. In this case, the NDAAO does not have additional states and, consequently, the model does not generate sequences with repeated adjacent I/O vectors. In the following subsection, we present the adaptation of the NDAAO model proposed in [3], which may also fail to represent the original language of the system when the modified paths present the reinitialization problem.

3.1.2 NDAAO proposed in Roth et al.


In [3], it is proposed an adaptation of the NDAAO model, to generalize its applica-
tion to non-cyclic systems. Therefore, initial and final I/O vectors are not necessarily
the same in observed paths. As a consequence, the only restriction is that all paths
start at the same I/O vector. To allow such properties, the modified paths pkj are
constructed according to Algorithm to 3.4.

Algorithm 3.4 Construction of the modified paths [3].

Input: P = {p_1, p_2, ..., p_|P|}, k
Output: P^k

 1: for j = 1, 2, ..., |P| do
 2:     Create α_{j,m} = ν_{j,1}, if 1 ≤ m ≤ k;  α_{j,m} = ν_{j,m−k+1}, if k < m ≤ k + l_j − 1
 3:     m ← 1
 4:     while m ≤ l_j do
 5:         σ^k_{j,m} = (α_{j,m} α_{j,m+1} ... α_{j,m+k−1})
 6:         m ← m + 1
 7:     p^k_j = (σ^k_{j,1}, σ^k_{j,2}, ..., σ^k_{j,|p^k_j|})
 8: P^k = ⋃_{j=1}^{|P|} p^k_j

In line 1, Algorithm 3.4 performs the loop for each observed path. In line 2, the variable α_{j,m} is created. If 1 ≤ m ≤ k, then α_{j,m} is equal to the first I/O vector ν_{j,1} of the observed path p_j. If k < m ≤ k + l_j − 1, then α_{j,m} is equal to the I/O vector ν_{j,m−k+1}. In line 3, Algorithm 3.4 sets the counter m to 1. In lines 4-6, each k-vector σ^k_{j,m} is constructed and, in line 7, the modified path is obtained from the k-vectors. Finally, in line 8, the new set of modified paths P^k is obtained.
The only difference between Algorithms 3.1 and 3.4 is that Algorithm 3.4 does not construct the additional k-vectors, since the addition of these vectors is carried out only to force the final and initial k-vectors to be equal, which is not required for non-cyclic systems. In the following, an example of the computation of the modified paths according to Algorithm 3.4 is presented.

Example 3.4. Consider again the paths p_1 = (A, B, A, C, A) and p_2 = (A, B, C, B, A, C, A, B, A) presented in Example 3.1. The new modified paths, for k = 3, are p^3_1 = (AAA, AAB, ABA, BAC, ACA) and p^3_2 = (AAA, AAB, ABC, BCB, CBA, BAC, ACA, CAB, ABA). Note that these modified paths are different from those presented in Subsection 3.1.1. Since the authors of [3] deal with non-cyclic systems, the modified paths p^3_1 and p^3_2 do not end with the initial k-vector.

After the computation of the modified paths p^k_j, the NDAAO model can be computed such that each state is associated with a distinct k-vector σ^k_{j,z}, and a transition between two states belongs to the NDAAO if, and only if, the corresponding pair of consecutive k-vectors has been observed in at least one modified path p^k_j. Algorithm 3.5 performs the construction of the NDAAO according to [3].

Algorithm 3.5 Construction of NDAAO [3].

Input: P^k = {p^k_1, p^k_2, ..., p^k_{|P^k|}}
Output: N^k = (X, Ω^k, f_nd, Λ, x_0)

 1: j ← 1, t ← 1
 2: while j ≤ |P^k| do
 3:     Create the initial state x_0, Λ(x_0) = σ^k_{j,1}
 4:     Add x_0 to X and σ^k_{j,1} to Ω^k
 5:     x_c ← x_0, m ← 2
 6:     while m ≤ |p^k_j| do
 7:         if σ^k_{j,m} ∉ Ω^k then
 8:             Create state x_t, Λ(x_t) = σ^k_{j,m}
 9:             Add x_t to X and σ^k_{j,m} to Ω^k
10:             Create transition f_nd(x_c) = x_t
11:             x_c ← x_t
12:             t ← t + 1
13:         else
14:             Find x_p such that Λ(x_p) = σ^k_{j,m}
15:             Create transition f_nd(x_c) = x_p
16:             x_c ← x_p
17:         m ← m + 1
18:     j ← j + 1

In line 1, Algorithm 3.5 creates the variable t, used to enumerate the states x_t ∈ X, and the variable j, which indicates the path index. In line 2, the loop is started and performed for all modified paths p^k_j, for j = 1, 2, ..., |P^k|. In lines 3-4, the NDAAO is initialized, creating the initial state x_0 as well as its output Λ(x_0). In line 5, the variables x_c and m are created, where x_c stores the last state visited during the construction of the NDAAO and m is a counter. In lines 7-16, the NDAAO is constructed by visiting each k-vector σ^k_{j,m} of p^k_j. If σ^k_{j,m} has not been visited yet, a state x_t is created in
Figure 3.5: NDAAO proposed in [3] from Example 3.5 (states x_0-x_8, with λ(x_0) = λ(x_2) = λ(x_4) = λ(x_7) = A, λ(x_1) = λ(x_6) = λ(x_8) = B, λ(x_3) = λ(x_5) = C).

line 8. In addition, the output of x_t is defined as the k-vector σ^k_{j,m}. In line 9, the observed σ^k_{j,m} is added to the set Ω^k and the state x_t is added to the state set X. In line 10, a transition from x_c to x_t is created. If σ^k_{j,m} has already been visited, a search is performed in line 14 to find which state x_p of X satisfies Λ(x_p) = σ^k_{j,m}. Finally, in line 15, x_p is added to the transition function of the last state visited x_c.
The main difference between the NDAAO constructed according to Algorithm 3.2 and the one constructed according to Algorithm 3.5 is that, in Algorithm 3.5, a final state is not added. As a consequence, the enhanced NDAAO proposed in [3] is a five-tuple N^k = (X, Ω^k, f_nd, Λ, x_0). The following example shows that this identification method may lead to a model that does not simulate the original language of the system.

Example 3.5. Consider the modified paths p^3_1 = (AAA, AAB, ABA, BAC, ACA) and p^3_2 = (AAA, AAB, ABC, BCB, CBA, BAC, ACA, CAB, ABA) computed in Example 3.4. The simplified NDAAO computed according to Algorithm 3.5 is presented in Figure 3.5. It is worth remembering that the simplified NDAAO is obtained by keeping only the last I/O vector of the k-vector associated with each state x ∈ X, where X is the set of states of the enhanced NDAAO computed by Algorithm 3.5.
Although the resulting automaton represents all observed paths, the model does not represent the original behavior of the system. Since the system is cyclic, the model should be able to perform sequences of paths such as p'_2 p_1, where p'_j = (ν_{j,1}, ν_{j,2}, ..., ν_{j,l_j−1}) is the path obtained from p_j by eliminating its last I/O vector.
Consider that the system runs path p2 = (A, B, C, B, A, C, A, B, A). Then, the

NDAAO model is played following the state path (x_0, x_1, x_5, x_6, x_7, x_3, x_4, x_8, x_2), detecting no faults, as expected. Thus, after reaching state x_2, the model should reset, returning to the initial state x_0 and allowing the execution of another task. However, since f_nd(x_2) ≠ ∅, there is no indication that the end of a task has been reached, and the model remains in state x_2. Suppose now that the system runs path p_1 = (A, B, A, C, A). Then, when the second I/O vector of path p_1, B, is observed, the fault detection scheme would indicate a fault, since there does not exist a state x_i such that λ(x_i) = B and x_i ∈ f_nd(x_2). This example shows that, although path p'_2 p_1 can be executed by the original fault-free behavior, it is detected as a fault when the NDAAO model presented in [3] is used, because it is not possible to reinitialize the model after the execution of a system task.

It is important to emphasize that, as in the NDAAO proposed in [2], the reinitialization problem occurs only for k > 1, since, for k = 1, the model always returns to the initial state after playing a complete cyclic path p_j ∈ P.
Therefore, this work aims to build a model that takes into account the reinitialization of the model after the execution of a path p_j ∈ P and that does not create additional k-vectors in order to represent the end of a task.
In the following section, a modified NDAAO, called Modified Nondeterministic Autonomous Automaton with Outputs, is presented. This model considers the reinitialization problem and, therefore, correctly represents the original language of cyclic systems.

3.2 The Modified NDAAO (M-NDAAO)


In this section, a modified NDAAO model that takes into account the reinitialization of the model after the execution of a path p_j ∈ P is presented. The model to be developed, as the NDAAO, is built from the observed I/O sequences. In addition, the model has the same free parameter k, which defines the number of I/O vectors that compose the k-vector associated with each state. This model is called Modified Nondeterministic Autonomous Automaton with Outputs (M-NDAAO).

Definition 3.1 (Modified Nondeterministic Autonomous Automaton with Outputs). A Modified Nondeterministic Autonomous Automaton with Outputs (M-NDAAO) is a six-tuple:

M = (X, Ω, f_M, λ, x_0, X_m),

where

• X is the set of states,

• Ω = {u_1, u_2, ..., u_|Ω|} is the set of I/O vectors,

• f_M : X → 2^X is the nondeterministic transition function,

• λ : X → Ω is the output function,

• x_0 ∈ X is the initial state, and

• X_m is the set of marked states.

The difference of the modified model lies in a distinct construction of the model from the modified paths p^k_j, leading to a different transition function f_M : X → 2^X, and in the definition of a set of marked states X_m. In the M-NDAAO, the only marked state is the initial state x_0, i.e., X_m = {x_0}, which is used to represent that the model has been reinitialized.
Since the system to be identified has cyclic paths, we are able to know when the system has restarted, i.e., a path has been completed whenever the initial state is visited. In addition, according to Assumption 2.1, it is always possible to reach the initial state from any constructed state.
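For concreteness, the following Python sketch shows one possible container for an M-NDAAO (our own representation, not prescribed in this work); it fixes the naming used in the construction sketches presented later in this chapter.

from dataclasses import dataclass, field

@dataclass
class MNDAAO:
    """Minimal container for an M-NDAAO M = (X, Omega, f_M, lambda, x0, Xm).

    States are integers, I/O vectors are hashable objects (e.g., tuples or
    single letters), and the nondeterministic transition function f_M is
    stored as a dict mapping each state to a set of successor states."""
    X: set = field(default_factory=set)            # set of states
    Omega: set = field(default_factory=set)        # set of I/O vectors
    fM: dict = field(default_factory=dict)         # x -> set of successor states
    out: dict = field(default_factory=dict)        # output function lambda: x -> I/O vector
    x0: int = 0                                    # initial state
    Xm: set = field(default_factory=lambda: {0})   # marked states, {x0}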
Now, the algorithms for formulating the M-NDAAO based on the observed se-
quences are presented. The first step is to transform the observed paths into modified
paths.

Algorithm 3.6 Create the modified paths for M-NDAAO.

Input: P = {p_1, p_2, ..., p_|P|}, k
Output: P^k = {p^k_1, p^k_2, ..., p^k_|P|}

 1: for j = 1, 2, ..., |P| do
 2:     Create σ^k_{j,1} = (ν_{j,1}, ν_{j,1}, ..., ν_{j,1}), where ν_{j,1} is repeated k times
 3:     m ← 2
 4:     while m < l_j do
 5:         σ^k_{j,m} = (ν_{j,m−k+1}, ν_{j,m−k+2}, ..., ν_{j,m})
 6:         if ν_{j,t} has t < 1 then
 7:             ν_{j,t} ← ν_{j,1}
 8:         m ← m + 1
 9:     p^k_j = (σ^k_{j,1}, σ^k_{j,2}, ..., σ^k_{j,l_j−1})
10: P^k = ⋃_{j=1}^{|P|} p^k_j

In line 1 of Algorithm 3.6, the loop that is performed for every observed path is started. In line 2, we create the initial k-vector, which is composed of the first I/O vector ν_{j,1} repeated k times. In lines 3-8, we create the remaining k-vectors, each composed of k observed I/O vectors. Notice that if a non-positive index is obtained in the algorithm, it means that the corresponding position still refers to the initial I/O vector, i.e., ν_{j,1}. Finally, in lines 9-10, we create the modified path with the newly created k-vectors and the new set of modified paths P^k. Notice that the last element of each path is not added to the modified path. According to Assumption 2.1, all observed paths are cyclic; thus, we force the transition of our model to take the state associated with the output σ^k_{j,l_j−1} to the initial state, associated with the output σ^k_{j,1}, therefore reinitializing the system.
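A minimal Python sketch of Algorithm 3.6 is given below (function and variable names are ours). On the paths of Example 3.1 with k = 3, it yields the modified paths used in Example 3.6.

def modified_paths_mndaao(paths, k):
    """Sketch of Algorithm 3.6: modified paths for the M-NDAAO.

    For each observed cyclic path p_j = (nu_1, ..., nu_l), the last I/O
    vector is dropped and every k-vector is padded with the initial I/O
    vector whenever its index would fall before the start of the path."""
    modified = []
    for p in paths:
        l = len(p)
        pk = [tuple(p[0] for _ in range(k))]           # sigma^k_{j,1}
        for m in range(2, l):                          # m = 2, ..., l_j - 1
            # indices m-k+1, ..., m (1-indexed); indices < 1 map to nu_{j,1}
            kvec = tuple(p[max(i, 1) - 1] for i in range(m - k + 1, m + 1))
            pk.append(kvec)
        modified.append(pk)
    return modified

if __name__ == "__main__":
    p1 = list("ABACA")
    p2 = list("ABCBACABA")
    for pk in modified_paths_mndaao([p1, p2], k=3):
        print(["".join(v) for v in pk])
    # Expected output, matching Example 3.6:
    #   ['AAA', 'AAB', 'ABA', 'BAC']
    #   ['AAA', 'AAB', 'ABC', 'BCB', 'CBA', 'BAC', 'ACA', 'CAB']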
As in the NDAAO proposed in [3], the k − 1 additional k-vectors are not created, so that, for k > 1, the model does not need to be reduced according to Algorithm 3.3. The major difference in the construction of the modified paths between Algorithms 3.4 and 3.6 is the elimination of the last I/O vector of each path. According to Assumption 2.1, all observed paths are cyclic. Using this information, we know that the last I/O vector of each path is identical to the first one, that is, each path must end in the initial state. Hence, we do not add the last I/O vector of the paths, since we know that, when we reach the second-to-last I/O vector of a path, we have to reset the model, that is, create a transition to the initial state. Therefore, in Algorithm 3.7, the modified NDAAO is constructed from the modified paths obtained from Algorithm 3.6.

Algorithm 3.7 M-NDAAO construction.

Input: P^k = {p^k_1, p^k_2, ..., p^k_|P|} and P = {p_1, p_2, ..., p_|P|}
Output: M = (X, Ω, f_M, λ, x_0, X_m)

 1: j ← 1, t ← 1
 2: Create the initial state x_0
 3: X ← {x_0}, X_m ← {x_0}, Ω ← {ν_{j,1}}, O ← {σ^k_{j,1}}
 4: Define Λ(x_0) = σ^k_{j,1} and λ(x_0) = ν_{j,1}
 5: while j ≤ |P| do
 6:     x_c ← x_0, m ← 2
 7:     f_M(x_c) ← ∅
 8:     while m ≤ |p^k_j| do
 9:         if σ^k_{j,m} ∉ O then
10:             Create state x_t
11:             f_M(x_t) ← ∅
12:             Define λ(x_t) = ν_{j,m} and Λ(x_t) = σ^k_{j,m}
13:             X ← X ∪ {x_t}, Ω ← Ω ∪ {ν_{j,m}}, O ← O ∪ {σ^k_{j,m}}
14:             f_M(x_c) ← f_M(x_c) ∪ {x_t}
15:             x_c ← x_t
16:             t ← t + 1
17:         else
18:             Find x_p such that Λ(x_p) = σ^k_{j,m}
19:             f_M(x_c) ← f_M(x_c) ∪ {x_p}
20:             x_c ← x_p
21:         m ← m + 1
22:     f_M(x_c) ← f_M(x_c) ∪ {x_0}
23:     j ← j + 1

In line 1, the variable t, responsible for enumerating the states x_t ∈ X, and the variable j, which indicates the path index, are created. In lines 2-4, the M-NDAAO is initialized, creating the initial state x_0 as well as its output λ(x_0), and defining the set of marked states as containing only the initial state. In addition, the set O is created, which stores all k-vectors of the paths p^k_j, together with the bijective function Λ : X → O that associates each state of X with a distinct k-vector σ^k_{j,m}. In line 5, the loop is started and performed for all modified paths p^k_j, for j = 1, 2, ..., |P|. In line 6, the variables x_c and m are created, where x_c stores the last state visited during the construction of the M-NDAAO and m is a counter. In lines 7-21, the M-NDAAO is computed from each p^k_j by visiting each k-vector σ^k_{j,m}. If σ^k_{j,m} has not been visited yet, a state x_t is created in line 10. In line 12, the output of x_t is defined as the last I/O vector of σ^k_{j,m}, and x_t is associated with σ^k_{j,m}. In line 13, the observed σ^k_{j,m} is added to the set O, the last vector of σ^k_{j,m} is added to the output set Ω, and the state x_t is added to the state set X. In line 14, a transition from x_c to x_t is created. If σ^k_{j,m} has already been visited, a search is performed in line 18 to find which state x_p of X satisfies Λ(x_p) = σ^k_{j,m}. Then, in line 19, x_p is added to the transition function of the last state visited x_c. According to Assumption 2.1, the last element of a path p_j is equal to the initial one of any other possible path p_i ∈ P. Thus, in line 22, we force the system to reset, creating a transition from the last state visited x_c to the initial state x_0.
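To illustrate Algorithm 3.7, the following Python sketch builds an M-NDAAO from a set of modified paths, returning the elements of M = (X, Ω, f_M, λ, x_0, X_m) as plain Python sets and dictionaries (our own naming; an illustrative sketch, not a reference implementation). Applied to the modified paths of Example 3.6, it produces the nine states of the model shown in Figure 3.6.

def build_mndaao(modified_paths):
    """Sketch of Algorithm 3.7: build an M-NDAAO from the modified paths P^k.

    Each modified path is a list of k-vectors (tuples of I/O vectors).
    Returns (X, Omega, fM, lam, x0, Xm), where fM maps each state to the
    set of its successors and lam maps each state to the last I/O vector
    of its associated k-vector."""
    x0, t = 0, 1
    state_of = {}                      # bijection: k-vector -> state
    X, Xm = {x0}, {x0}
    fM = {x0: set()}
    lam, Omega = {}, set()

    first = modified_paths[0][0]       # sigma^k_{j,1} (same for every path)
    state_of[first] = x0
    lam[x0] = first[-1]
    Omega.add(first[-1])

    for pk in modified_paths:
        xc = x0
        for kvec in pk[1:]:            # m = 2, ..., |p^k_j|
            if kvec not in state_of:   # new k-vector: create a new state
                xt = t
                t += 1
                state_of[kvec] = xt
                X.add(xt)
                fM[xt] = set()
                lam[xt] = kvec[-1]
                Omega.add(kvec[-1])
                fM[xc].add(xt)
                xc = xt
            else:                      # k-vector already seen: reuse its state
                xp = state_of[kvec]
                fM[xc].add(xp)
                xc = xp
        fM[xc].add(x0)                 # reset transition at the end of the cyclic path
    return X, Omega, fM, lam, x0, Xm

# Example 3.6 (k = 3, k-vectors written as tuples of letters):
P3 = [[tuple("AAA"), tuple("AAB"), tuple("ABA"), tuple("BAC")],
      [tuple("AAA"), tuple("AAB"), tuple("ABC"), tuple("BCB"),
       tuple("CBA"), tuple("BAC"), tuple("ACA"), tuple("CAB")]]
X, Omega, fM, lam, x0, Xm = build_mndaao(P3)
print(len(X), {x: lam[x] for x in sorted(X)})   # 9 states, as in Figure 3.6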
It is important to remark that, according to line 22 of Algorithm 3.7, there may exist states x_c, x_t ∈ X in the M-NDAAO model such that {x_t, x_0} ⊆ f_M(x_c) and λ(x_t) = λ(x_0). This always occurs when we work with paths that present the reinitialization problem. Thus, either the model observes the completion of a task, returning to the initial state x_0, or it observes the continuation of an incomplete path, reaching state x_t. When the reinitialization problem occurs, there is an ambiguity and different paths can be followed. In the following, we present an example of the construction of the M-NDAAO according to Algorithms 3.6 and 3.7, where the observed paths of Example 3.1 are considered again.

Example 3.6. Let us consider again the paths p_1 = (A, B, A, C, A) and p_2 = (A, B, C, B, A, C, A, B, A) presented in Example 3.1, and let us build the M-NDAAO which represents the language of the observed paths for a parameter k = 3. First, we compute the modified paths according to Algorithm 3.6, p^3_1 = (AAA, AAB, ABA, BAC) and p^3_2 = (AAA, AAB, ABC, BCB, CBA, BAC, ACA, CAB). Note that the last I/O vector of each path is eliminated since, according to Assumption 2.1, it must be equal to the first one. Then, we construct the M-NDAAO according to Algorithm 3.7. The result is presented in Figure 3.6.
Note that the M-NDAAO obtained according to Algorithm 3.7 is a cyclic model that simulates the observed paths and all possible concatenations of them.

Figure 3.6: M-NDAAO example (states x_0-x_8, with λ(x_0) = λ(x_2) = λ(x_6) = λ(x_7) = A, λ(x_1) = λ(x_5) = λ(x_8) = B, λ(x_3) = λ(x_4) = C).

In addition, for values of k > 1, the model may present an ambiguity when it is reinitialized, which may create an exceeding language.

Let us define the state estimate function E : 2^X × Ω^* → 2^X, where Ω^* denotes the Kleene-closure of Ω, as E(X_S, ε) = X_S, for all X_S ⊆ X, where ε denotes the empty sequence of I/O vectors, E(X_S, u) = {x_i ∈ X : (λ(x_i) = u) ∧ (∃x ∈ X_S)[x_i ∈ f_M(x)]}, and E(X_S, s_ω u) = E(E(X_S, s_ω), u), for all s_ω ∈ Ω^* and u ∈ Ω. Note that an ambiguity can only occur in the M-NDAAO model if there exist a state x ∈ X and an I/O vector u ∈ Ω such that |E({x}, u)| > 1.
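A minimal sketch of the state estimate function E, using the same dictionary-based representation as the construction sketch above (names are ours), is given below; the commented calls correspond to the computations of Example 3.7.

def state_estimate(XS, seq, fM, lam):
    """Sketch of E(X_S, s): states reachable from some state in X_S by
    playing the sequence of I/O vectors `seq`, matching outputs step by step."""
    current = set(XS)
    for u in seq:
        current = {xi for x in current for xi in fM[x] if lam[xi] == u}
    return current

# Assuming fM and lam hold the M-NDAAO of Figure 3.6 (state numbers as
# produced by the earlier build_mndaao sketch):
#   state_estimate({0}, "BACA", fM, lam)     -> {0, 7}, the ambiguity of Example 3.7
#   state_estimate({0}, "BACABAC", fM, lam)  -> {3},   the ambiguity is eliminated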
In the sequel, we prove that the ambiguity created in the M-NDAAO is solved after at most k − 1 steps, as long as another ambiguity is not created during these steps. To prove this property, the following definitions are presented.

Definition 3.2 (Reset transition). Let us define a reset transition as a transition


from a state x ∈ X to the initial state x0 , i.e., x0 ∈ fM (x).

Definition 3.3 (Set of states with reset transitions). The set formed of all states
from which there exists a reset transition is defined as:

Xa = {x ∈ X : x0 ∈ fM (x)}.

Theorem 3.1. Let M be an M-NDAAO obtained according to Algorithm 3.7 for k > 1, and let x_a ∈ X_a. Let X_{n,a} = E({x_a}, λ(x_0)). Then, |E(X_{n,a}, s_ω)| = 1, for all ||s_ω|| = k − 1, where s_ω is a sequence of I/O vectors that can be played in M from a state in X_{n,a}, provided that none of the transitions executed in the model while playing s_ω is a reset transition.

Proof. Note that each state x ∈ X of the M-NDAAO model is associated with a distinct k-vector σ^k_{j,m}, for j ∈ {1, 2, ..., |P|} and m ∈ {1, 2, ..., l_j − 1}, of length k. In addition, according to Algorithm 3.7, the same state x such that Λ(x) = σ^k_{j,m}, where σ^k_{j,m} does not have two consecutive equal I/O vectors, is always reached after the execution of sequence σ^k_{j,m}. Thus, if none of the transitions executed in the model while playing s_ω is a reset transition, the model reaches a unique state, independently of the origin state x ∈ X_{n,a}, after executing k − 1 I/O vectors. Thus, |E(X_{n,a}, s_ω)| = 1.

Theorem 3.1 shows that the exceeding language generated due to the ambiguity
caused by the reset transitions vanishes after k − 1 I/O vector observations, when
these observations do not lead to a new reset transition. It is important to remark
that even when new reset transitions are executed before k − 1 I/O vector obser-
vations, it is possible in some cases to eliminate the ambiguity within a bounded
number of observations. We present this case in the sequel.

Example 3.7. Consider the same M-NDAAO model obtained in the previous example, and consider that path p_1 = (A, B, A, C, A) is observed. After playing the model following path p_1, we have that E({x_0}, ŝ_1) = {x_0, x_7}, where ŝ_1 is the sequence of I/O vectors obtained from p_1 by eliminating the initial I/O vector, i.e., ŝ_1 = BACA. Let us consider now that the sequence BAC is observed after ŝ_1. Then, E({x_0, x_7}, BAC) = {x_3}, and only state x_3 can be reached. Note that, in this case, state x_8 is reached from state x_7 after the observation of B. Although x_8 is a state from which a reset transition departs, the ambiguity created from states {x_0, x_7} is eliminated after the observation of BAC. It can be seen that, in this example, all created ambiguities are solved after at most three I/O vector observations.

Thus, the modified NDAAO model allows the observation of cyclic paths in
sequence, which may lead to an ambiguity that is solved in (k − 1) steps, as long
as another ambiguity does not occur during these steps. However, to prove that
the model used is applicable for fault diagnosis, we need to prove that its identified
language contains the observed and original language of the system. Therefore, it is
necessary to mathematically formalize the definition of these languages, which have
only been mentioned as concepts so far in this work.

3.3 M-NDAAO languages


As discussed in Subsection 2.4.2, the M-NDAAO, like the original NDAAO, represents an identified language that attempts to simulate the original language of the system by observing its fault-free behavior. However, the model may represent sequences that have not been observed, characterizing an exceeding language.

In this section, in order to simplify the study of these languages, we consider that all possible behaviors that the system can perform have been observed. Therefore, the original unidentified language is an empty set, and the exceeding language of the identified model, which may represent undetectable faults, should be reduced as much as possible.
First, we define the identified language L^n_{Iden,M}, formed of all sequences of I/O vectors of length n generated from the initial state x_0 of the M-NDAAO.

Definition 3.4 (Identified language of length n of the M-NDAAO). The identified language of length n, L^n_{Iden,M}, generated by the M-NDAAO, is L^n_{Iden,M} = ⋃_{h=1}^{n} W^h_M, where W^h_M = {s_ω ∈ Ω^h : s_ω = λ(x_0)λ(x_1)...λ(x_{h−1}) ∧ (x_{η+1} ∈ f_M(x_η), 0 ≤ η < h−1)} is the set of identified paths of length h starting at the initial I/O vector.

Next, we define the observed language of length n, L^n_{Obs}.

Definition 3.5 (Observed language of length n). The observed language of length n, L^n_{Obs}, is defined as L^n_{Obs} = ⋃_{h=1}^{n} W^h_{Obs}, where W^h_{Obs} = ⋃_{j=1}^{|P|} {ν_{j,1} ν_{j,2} ... ν_{j,h}} is the language formed of all sequences of I/O vectors of length h initiating at the first I/O vector, obtained from the observed paths p_j, j = 1, ..., |P|.

In order to define the original language, note that all paths performed by the system are cyclic, which implies that the original system behavior must be able to perform sequences of paths such as p'_1, p'_2, p'_3, etc., where p'_j = (ν_{j,1}, ν_{j,2}, ..., ν_{j,l_j−1}) is the path obtained from p_j by eliminating its last I/O vector. Let P' = {p'_j : j ∈ {1, 2, ..., |P|}}, and let P'^∞ denote the set formed by the concatenation of all paths of P' with all paths in P', an arbitrarily long number of times. Now consider a sequence π_j = (µ_{j,1}, µ_{j,2}, ...) ∈ P'^∞, where µ_{j,z} is an I/O vector, for z = 1, ..., ∞. Then, as it is considered that all possible distinct paths p_j have been observed, it is possible to define the original language formed of all sequences of I/O vectors of length n, L^n_{Orig}.

Definition 3.6 (Original language of length n). L^n_{Orig} = ⋃_{h=1}^{n} W^h_{Orig}, where W^h_{Orig} = ⋃_{j=1}^{∞} {µ_{j,1} µ_{j,2} ... µ_{j,h}}.

Finally, the exceeding language is defined.

Definition 3.7 (Exceeding language of length n). The exceeding language of length n, L^n_{Exc}, is defined as L^n_{Exc} = L^n_{Iden,M} \ L^n_{Orig}.

Therefore, the identified language of length n is defined by the possible paths, of length one through n, starting at the initial state x_0, that the model can perform, and it is formed by the observed I/O vectors. The reason for defining all languages as being made up of sequences that always start from the initial I/O vector comes from Assumption 2.1. Since the system is cyclic, it makes sense to analyze a language in terms of work cycles, i.e., always starting from the initial I/O vector.
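As an illustration, the identified language of length n can be enumerated directly from the transition structure; the sketch below (our own helper, using the dictionary-based representation of the earlier construction sketch) collects all output sequences of length 1 to n generated from x_0. The exceeding language of length n can then be obtained by removing the sequences of the original language from this set.

def identified_language(fM, lam, x0, n):
    """Sketch: enumerate L^n_Iden,M, i.e., all sequences of I/O vectors of
    length 1, ..., n produced along state paths starting at x0."""
    language = set()
    frontier = {(x0, (lam[x0],))}          # pairs (current state, sequence so far)
    for _ in range(n):
        language |= {seq for _, seq in frontier}
        frontier = {(x2, seq + (lam[x2],))
                    for x, seq in frontier for x2 in fM.get(x, set())}
    return language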
With the languages to be analyzed properly defined, let us consider that all possible paths generated by the system have been observed, and let M be the identified M-NDAAO obtained from the observed paths. Then, model M is said to be complete. In the sequel, we show that L^n_{Orig} ⊆ L^n_{Iden,M} if M is complete.

Theorem 3.2. Let M be a complete model. Then, L^n_{Orig} ⊆ L^n_{Iden,M}, for all n ∈ ℕ.

Proof. According to lines 5 to 20 of Algorithm 3.7, each path p_j ∈ P is used to construct an equivalent path in the M-NDAAO, such that, for all p_j ∈ P, there exists a state path (x_0, x'_1, x'_2, ..., x'_{l_j−2}) in the M-NDAAO, where x'_1, x'_2, ..., x'_{l_j−2} ∈ X are not necessarily distinct, such that λ(x_0) = ν_{j,1} and λ(x'_i) = ν_{j,i+1}, for all i = 1, 2, ..., l_j − 2. In addition, since all paths p_j are cyclic, in line 22 of Algorithm 3.7, a transition from the last state visited following the procedure of lines 5 to 20, x'_{l_j−2}, to the initial state x_0 is created, i.e., x_0 ∈ f_M(x'_{l_j−2}). Thus, s_j ∈ L^n_{Iden,M}, where s_j is the sequence formed by the first q I/O vectors of p_j, for all q ≤ l_j and j = 1, ..., |P|. Since, after playing a complete path p_j in the M-NDAAO, x_0 is always one of the possible states, any path p_i ∈ P can be played in the model after p'_j. Thus, according to the assumption that all possible distinct paths p_j have been observed, and according to the definitions of L^n_{Orig} and L^n_{Iden,M}, respectively, L^n_{Orig} ⊆ L^n_{Iden,M}, for all n ∈ ℕ.

As discussed in Section 2.4, when a system is identified, the language of the identified model may contain sequences of length n that have not been observed, which characterizes an exceeding language. These sequences are created when the model presents cycles that are not characterized by a return to the initial state, i.e., s_ω = λ(x_n)λ(x_{n+1})...λ(x_{n+i})λ(x_n), where x_n ≠ x_0. These cycles are computed according to Algorithm 3.7 and are part of the observed language, s_ω ∈ L_{Obs}. However, when the model reaches state x_n, it may play the sequence s'_ω = λ(x_n)λ(x_{n+1})...λ(x_{n+i}) indefinitely, which is a behavior that does not necessarily belong to the original language.
According to the M-NDAAO construction algorithm, when we analyze a k-vector of a modified path, we have two options: either it does not belong to the set of k-vectors O, and we create a new state to represent it, or it has already been added to the model, and a search is performed to identify which state represents that k-vector. In this second case, we can create the cyclic sequence s = λ(x_n)λ(x_{n+1})...λ(x_{n+i})λ(x_n), whose repeated concatenation s_ω = sss..., i.e., the recurrence of this cycle, may create sequences that have not been observed.
Therefore, the solution to reduce the exceeding language is to avoid the creation of these cycles. To do this, it is necessary to avoid the recurrence of k-vectors in

Figure 3.7: M-NDAAO from Example 3.8, for k = 1 (three states, with λ(x_0) = A, λ(x_1) = B, λ(x_2) = C).

Figure 3.8: M-NDAAO from Example 3.8, for k = 2 (six states x_0-x_5, with λ(x_0) = λ(x_2) = A, λ(x_1) = λ(x_5) = B, λ(x_3) = λ(x_4) = C).

the modified paths, which can be achieved by raising the value of the parameter k. Increasing the parameter k means adding more information to each state, in order to distinguish its associated k-vector from the other observed ones. This means that, when we increase k, we can reduce the recurrence of k-vectors along the modified paths and the occurrence of these cyclic sequences. Furthermore, according to Theorem 3.2, the original language is always contained in the language of the M-NDAAO model, regardless of the value of k. However, increasing the value of k may affect other aspects of the modeling. Since the size of the model depends on the number of states created via Algorithm 3.7, decreasing the recurrence of k-vectors along the observed paths leads to the creation of more states and thus increases the size of the model.

Example 3.8. To further illustrate how modifying the parameter k affects the ex-
ceeding language and size of an M-NDAAO model, consider two observed paths
p1 = (A, B, A, B, C, A) and p2 = (A, C, B, A, C, A) where A, B and C are I/O vec-
tors different from each other. The M-NDAAO models, for the values of k = 1, 2
and 3, are shown in Figures 3.7, 3.8, and 3.9.
Notice that, for k = 1, each state represents only the current output, consequently
leading to 3 different states, each one representing an observed I/O vector (A, B, C).

Figure 3.9: M-NDAAO from Example 3.8, for k = 3 (nine states, with λ(x_0) = λ(x_2) = λ(x_7) = A, λ(x_1) = λ(x_3) = λ(x_6) = B, λ(x_4) = λ(x_5) = λ(x_8) = C).

Table 3.1: Exceeding language of the example.

              k = 1   k = 2   k = 3
|L^1_Exc|       0       0       0
|L^2_Exc|       2       2       0
|L^3_Exc|       8       6       0
|L^4_Exc|      22      16       0
|L^5_Exc|      52      34       0

So we have a compact model that contains several cycles that generate unobserved sequences. By increasing the value of the free parameter to k = 2, we add the information of the previously observed I/O vector to each k-vector, which leads to the creation of new states. By increasing the number of states in the model, the number of cycles decreases, consequently reducing the exceeding language. Finally, we increase the value of the free parameter to k = 3, which eliminates the exceeding language represented by the model. Since, for k = 3, every observed k-vector appears only once in every modified path, each k-vector creates a new state according to Algorithm 3.7, never returning to a previously built state, and thus no cycles that do not return to the initial state are created. Consequently, the number of states in this model is larger. Table 3.1 shows the cardinality of the exceeding language |L^n_{Exc}| for different values of k and n.

Therefore, the M-NDAAO presents an effective method of representing the original language of a cyclic system, in which the parameter k allows a trade-off between the accuracy of the representation and the size of the model. However, for the discussion about the exceeding language made in this section to be applicable, it is necessary that the identified model is complete for the different values of k used. For complex systems, this task may require a long observation time, which may be impractical, requiring a distributed identification of the system. Therefore, the following chapter develops a methodology to build a monolithic M-NDAAO from identified partial models. It is important to note that such an approach is possible due to the development of the M-NDAAO, which permits the correct representation of the original language of cyclic systems for values of k > 1.

Chapter 4

Distributed Identification

In this chapter, a methodology for obtaining a monolithic model from a distributed identification is presented. In order to do so, it is necessary to understand the challenges in obtaining a model from a monolithic identification of complex systems. In addition, we formalize the concepts of distributed identification and develop an algorithm to allow the synchronization of partial models into a monolithic model.
This chapter is structured as follows: in Section 4.1, we present the problem of monolithic identification for systems with concurrent behavior. In Section 4.2, we introduce the concepts of distributed identification and partial models. Finally, in Section 4.3, we propose the composition of partial models modeled as M-NDAAO, called modular synchronous composition, into a monolithic model. In addition, we analyze the generated language of the composition.

4.1 The Problems of Monolithic Identification


In order to make MNDAAO efficient for fault diagnosis, it is necessary that the
identified model is complete, that is, all possible paths that the system can perform
have been observed, such that the observed language is equivalent to the original
language. However, for complex systems, the time required to observe all possible
paths can grow exponentially when the system has concurrent behavior. In the
sequel, we illustrate how concurrent behaviors can affect the convergence of a model.

Example 4.1. The system depicted in Figure 4.1 represents a process where pieces are rotated, such that green pieces are rotated on the left-hand conveyor and blue pieces are rotated on the right-hand conveyor. The rotation process of each part is the same: first the piece is lifted, then rotated, and finally lowered back onto the belt. Let us consider u = [o_11 o_12 o_13 o_21 o_22 o_23]^T as the I/O vector of the system, where o_11, o_12 and o_13 are the actuators responsible for lifting, rotating, and lowering the green pieces on the left conveyor, respectively. The variables o_21, o_22 and o_23
Figure 4.1: Concurrent behavior example.

are the actuators responsible for lifting, rotating, and lowering the blue pieces on the right conveyor, respectively. Note that the left-hand rotation system always performs the following path p_l:

p_l = ([0 0 0 − − −]^T, [1 0 0 − − −]^T, [0 1 0 − − −]^T, [0 0 1 − − −]^T, [0 0 0 − − −]^T),

independently of the I/O values of the actuators of the right-hand rotation system. The right-hand system always performs the path p_r:

p_r = ([− − − 0 0 0]^T, [− − − 1 0 0]^T, [− − − 0 1 0]^T, [− − − 0 0 1]^T, [− − − 0 0 0]^T),

independently of the I/O values of the actuators of the left-hand rotation system. The problem is that these systems may not be synchronized in time, leading to a high number of possible paths. Considering the rotation systems individually, where one is represented by the first three I/Os and the other by the last three I/Os of the I/O vector, there are only 2 possible paths. However, when considering the combined I/Os, there are over 100 possible paths. Thus, this simple example shows that the existence of concurrent behaviors can lead to a long observation time of the plant to identify a complete monolithic model, since the combination of inputs and outputs can lead to a high number of possible paths generated by the system.

Thus, the basic idea of this work is to obtain partial models that observe part of the entire system, and then compose them to obtain a monolithic model, which is used for fault detection. In this work, a composition of partial M-NDAAO models is proposed. It is important to remark that the computation of the sets of inputs and outputs used to describe the behavior of each partial model is out of the scope of this work, and they are assumed to be obtained using any technique proposed in the literature, such as the methods presented in [3] and [24].

4.2 Distributed Identification


Let us define the partial models computed using the M-NDAAO in the following.

Definition 4.1 (Partial Models). The partial models that represent parts of the entire system are M_ℓ = (X_ℓ, Ω_ℓ, f_{M_ℓ}, λ_ℓ, x_{0,ℓ}, {x_{0,ℓ}}), for ℓ = 1, ..., r.

Each partial model M_ℓ is associated with a set of integers Φ_ℓ ⊂ Φ, where Φ = {1, 2, ..., n} is the set formed of the indexes, in the observed I/O vectors, associated with the system inputs I_β, β = 1, ..., n_I, and outputs O_δ, δ = 1, ..., n_O. Therefore, Φ_ℓ represents the indexes of the I/Os observed by the partial model M_ℓ.
Thus, M_ℓ is computed from the observed paths p_j, j = 1, ..., |P|, by considering only the inputs and outputs of the system that belong to Φ_ℓ. In order to do so, the inputs and outputs of each I/O vector of p_j that do not belong to Φ_ℓ must be replaced with the symbol −, which represents that their values are not relevant, so that they are not considered in each vector ν_{j,z} of p_j, leading to a new vector ν^ℓ_{j,z}, called partial vector.

Example 4.2. Consider two partial models M_1 and M_2, where Φ_1 = {1, 3, 4} and Φ_2 = {2, 4}. Figure 4.2 represents the partial vectors generated from the I/O vector ν_{j,t} = [1 0 0 1]^T.
Note that partial model M_1 does not observe the second I/O, since 2 ∉ Φ_1, and thus it becomes − (don't care symbol). Therefore, any transition characterized by an exclusive change in the second I/O is not observed by the partial model M_1.

In the following, Algorithm 4.1 creates the partial paths, composed of partial vectors, from the original observed paths.
In lines 1 and 2 of Algorithm 4.1, we start the loop for each path p_j ∈ P. In line 3, we introduce the variable t, which indicates which I/O vector ν_{j,t} ∈ p_j, t ∈ {1, 2, ..., l_j}, is being analyzed. In line 5, we introduce the variable m, which

Figure 4.2: Partial vectors: ν^1_{j,t} = [1 − 0 1]^T (Φ_1 = {1, 3, 4}), ν_{j,t} = [1 0 0 1]^T, and ν^2_{j,t} = [− 0 − 1]^T (Φ_2 = {2, 4}).

Algorithm 4.1 Creation of partial paths.

Input: P = {p_1, p_2, ..., p_|P|}, Φ_ℓ
Output: P_ℓ

 1: j ← 1
 2: while j ≤ |P| do
 3:     t ← 1
 4:     while t ≤ l_j do
 5:         m ← 1
 6:         while m ≤ |ν_{j,t}| do
 7:             if m ∈ Φ_ℓ then
 8:                 ν^ℓ_{j,t}[m] ← ν_{j,t}[m]
 9:             else
10:                 ν^ℓ_{j,t}[m] ← − (don't care symbol)
11:             m ← m + 1
12:         t ← t + 1
13:     if ∃ν^ℓ_{j,t} such that ν^ℓ_{j,t} = ν^ℓ_{j,t+1} then
14:         Remove ν^ℓ_{j,t+1}
15:     π_{j,ℓ} = (ν^ℓ_{j,1}, ν^ℓ_{j,2}, ..., ν^ℓ_{j,l^ℓ_j}), where l^ℓ_j ≤ l_j
16: P_ℓ = ⋃_{j=1}^{|P_ℓ|} π_{j,ℓ}

is responsible for indexing each I/O of the vector ν_{j,t}, where ν_{j,t}[m] denotes the m-th element of ν_{j,t}. In lines 6-10, the vectors observed by the partial model M_ℓ are computed. If the index m is observed by the partial model, i.e., m ∈ Φ_ℓ, the corresponding I/O is not modified. However, if m ∉ Φ_ℓ, then the I/O is modified, becoming − (don't care). In lines 13-14, after adapting all the vectors of a path, a search is made to find, and eliminate, identical adjacent vectors.
The process carried out in lines 6-10 of Algorithm 4.1 can be defined as a projection operation which transforms the original observed I/O vector into the partial vector. Let the projection operation P_ℓ : Z^n_2 → Z^n_{2,−}, where Z_{2,−} = Z_2 ∪ {−}, be defined as P_ℓ(u) = u^ℓ, where u^ℓ[m] = u[m], if m ∈ Φ_ℓ, and u^ℓ[m] = −, if m ∉ Φ_ℓ. Then, ν^ℓ_{j,z} = P_ℓ(ν_{j,z}). According to Algorithm 4.1, the projection operation P_ℓ can be defined for a sequence of I/O vectors recursively as P_ℓ(su) = P_ℓ(s)P_ℓ(u), for every sequence of I/O vectors s ∈ (Z^n_2)^* and every I/O vector u ∈ Z^n_2.
Note that by reducing the number of I/Os observed in an I/O vector it is possible
that the new computed partial path has consecutive equal I/O vectors. Thus, in
order to obtain path πj,ℓ , which corresponds to the partial observation of path pj by
the partial model Mℓ , it is necessary to merge all consecutively equal I/O vectors
into a single one, which is done in lines 13-14 of Algorithm 4.1. Thus, the length of
path πj,ℓ can be smaller than the length of pj . In addition, two different paths in
P may lead to the same corresponding path πj,ℓ , which implies that the number of
distinct paths πj,ℓ , can be smaller than |P|. Thus, each partial model has its own
set of partial paths, denoted as Pℓ .
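A minimal Python sketch of the projection P_ℓ and of the partial-path construction of Algorithm 4.1 is given below (helper names are ours; '-' stands for the don't-care symbol). The usage lines apply it to path p_1 and the I/O vectors A, B, C, D of Example 4.3 below, yielding the partial paths π_{1,1} and π_{1,2} of Figure 4.4.

def project(u, phi):
    """Projection P_l: keep the I/Os whose (1-indexed) positions are in phi,
    and replace the others by the don't-care symbol '-'."""
    return tuple(u[m - 1] if m in phi else "-" for m in range(1, len(u) + 1))

def partial_paths(paths, phi):
    """Sketch of Algorithm 4.1: project every I/O vector of every observed
    path and merge consecutive identical partial vectors."""
    result = []
    for p in paths:
        proj = [project(u, phi) for u in p]
        merged = [proj[0]]
        for v in proj[1:]:
            if v != merged[-1]:        # drop identical adjacent partial vectors
                merged.append(v)
        result.append(merged)
    return result

# I/O vectors of Example 4.3 (defined in the sequel):
A, B, C, D = (0, 0, 0, 0), (0, 1, 0, 0), (1, 1, 0, 0), (1, 1, 1, 1)
p1 = [A, B, A, D, C, A]
print(partial_paths([p1], {1, 2}))      # partial path pi_{1,1}, 5 vectors
print(partial_paths([p1], {1, 3, 4}))   # partial path pi_{1,2}, 4 vectors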
After obtaining the set of partial paths π_{j,ℓ} ∈ P_ℓ for each partial observation, the models M_ℓ can be computed using Algorithms 3.6 and 3.7, presented in Section 3.2.
In [3], a composition of partial NDAAO models, called cross product, is proposed. The main problem with the composition proposed in [3] is that the partial models may not be correctly reinitialized for k > 1. As a consequence, the corresponding composed model may be incorrect or lead to a large exceeding language. In the sequel, we propose a composition for M-NDAAO models, called modular synchronous composition, leading to a new composed model M_c. As shown in Figure 4.3, the fault detection architecture proposed in this work is based on the composed system model, which observes the I/O vectors generated by the system and detects a fault when an observed I/O vector is different from the expected I/O vectors according to the current state of the model.
In order to illustrate the distributed identification method, consider the following example.

Example 4.3. Let path p1 = (A, B, A, D, C, A) be the fault-free behavior of a system,


where A, B, C, and D are the following I/O vectors:

Figure 4.3: Fault detection architecture.
         
Figure 4.4: Partial paths from Example 4.3: π_{1,1} = ([0 0 − −]^T, [0 1 − −]^T, [0 0 − −]^T, [1 1 − −]^T, [0 0 − −]^T) and π_{1,2} = ([0 − 0 0]^T, [1 − 1 1]^T, [1 − 0 0]^T, [0 − 0 0]^T).
       
A = [0 0 0 0]^T, B = [0 1 0 0]^T, C = [1 1 0 0]^T, D = [1 1 1 1]^T,

and consider the identification of two partial models M_1 and M_2, such that Φ_1 = {1, 2} and Φ_2 = {1, 3, 4}. Thus, the partial paths π_{1,1} and π_{1,2}, computed according to Algorithm 4.1, are shown in Figure 4.4.
Note that the partial paths π_{1,1} and π_{1,2} have smaller length than the original path p_1, since the partial models do not observe all I/O vector changes. For instance, M_1 does not observe the difference between I/O vectors C and D, while M_2 does not distinguish I/O vector A from I/O vector B.
If we choose k = 2, the modified partial paths π^2_{1,1} and π^2_{1,2}, observed by partial models M_1 and M_2, respectively, are computed according to Algorithm 3.6 and depicted in Figure 4.5.

       
Figure 4.5: Modified partial paths π^2_{1,1} and π^2_{1,2} from Example 4.3: π^2_{1,1} = (([0 0 − −]^T [0 0 − −]^T), ([0 0 − −]^T [0 1 − −]^T), ([0 1 − −]^T [0 0 − −]^T), ([0 0 − −]^T [1 1 − −]^T)) and π^2_{1,2} = (([0 − 0 0]^T [0 − 0 0]^T), ([0 − 0 0]^T [1 − 1 1]^T), ([1 − 1 1]^T [1 − 0 0]^T)).
       
Figure 4.6: Partial model M_1 (states x_0-x_3, with λ_1(x_0) = λ_1(x_2) = [0 0 − −]^T, λ_1(x_1) = [0 1 − −]^T, λ_1(x_3) = [1 1 − −]^T).

After the computation of the modified partial paths, we compute the identified
M-NDAAO models according to Algorithm 3.7. Models M1 and M2 are presented
in Figures 4.6 and 4.7, respectively.

Distributed identification aims to reduce the observation time required to identify a complete model by reducing the number of I/Os observed by each partial model. However, the use of partial models for the purpose of fault diagnosis can present other issues. The parallel behavior of the partial models can generate an increase in their identified language, thus generating an increase in the exceeding language. Therefore, in the following section, we introduce a methodology to compute a composition of partial models, where, by constructing a monolithic model from the identified language of the partial models, we identify and eliminate sequences that cannot belong to the original system, thus reducing the exceeding language of the composition.

     
Figure 4.7: Partial model M_2 (states y_0-y_2, with λ_2(y_0) = [0 − 0 0]^T, λ_2(y_1) = [1 − 1 1]^T, λ_2(y_2) = [1 − 0 0]^T).

           
Figure 4.8: Join function examples: J([− 1 − 1]^T, [1 − 0 1]^T) = [1 1 0 1]^T and J([0 1 − −]^T, [1 − 0 1]^T) = [c 1 0 1]^T.

4.3 The modular synchronous composition

In this section, we propose a composition for partial M-NDAAO models, called modular synchronous composition, inspired by the cross product proposed in [3]. For k = 1, we use an adaptation of the cross product presented in [3], applying the trim operation to the resulting automaton. However, for k > 1, the cross product must not be used, since it does not consider the reinitialization of the models. In addition, differently from [3], the modular synchronous composition, for k > 1, uses the reset transitions to synchronize the end of the paths in the partial models. Thus, when a complete path p_j is executed in the system, generating the sequence of I/O vectors s_j, its partial observations P_ℓ(s_j) are all synchronized, leading to the initial states of all partial models. This reduces the exceeding language that can be generated by the partial models running in parallel.
Let us first define the join function of two partial vectors, J : Z^n_{2,−} × Z^n_{2,−} → (Z_{2,−} ∪ {c})^n, where c is a symbol used in the join function to represent contradiction [3].

Definition 4.2 (Join Function). Let u_i, u_j ∈ Z^n_2 denote I/O vectors, where i is not necessarily different from j, and let u^{ℓ_1}_i = P_{ℓ_1}(u_i) and u^{ℓ_2}_j = P_{ℓ_2}(u_j), where ℓ_1, ℓ_2 ∈ {1, ..., r} and ℓ_1 ≠ ℓ_2. The join of u^{ℓ_1}_i with u^{ℓ_2}_j is a vector with n elements such that its m-th element, denoted by J(u^{ℓ_1}_i, u^{ℓ_2}_j)[m], is defined, for m = 1, ..., n, as:

J(u^{ℓ_1}_i, u^{ℓ_2}_j)[m] = u^{ℓ_1}_i[m], if u^{ℓ_1}_i[m] = u^{ℓ_2}_j[m];
J(u^{ℓ_1}_i, u^{ℓ_2}_j)[m] = u^{ℓ_1}_i[m], if u^{ℓ_1}_i[m] ≠ − ∧ u^{ℓ_2}_j[m] = −;
J(u^{ℓ_1}_i, u^{ℓ_2}_j)[m] = u^{ℓ_2}_j[m], if u^{ℓ_1}_i[m] = − ∧ u^{ℓ_2}_j[m] ≠ −;
J(u^{ℓ_1}_i, u^{ℓ_2}_j)[m] = c, if u^{ℓ_1}_i[m] ≠ u^{ℓ_2}_j[m], with u^{ℓ_1}_i[m] ≠ − and u^{ℓ_2}_j[m] ≠ −.

Note, according to Definition 4.2, that the symbol c is used to represent that the m-th elements of the partial vectors cannot be synchronized, since u^{ℓ_1}_i[m] ≠ u^{ℓ_2}_j[m] and both are different from −, which means that both values are observed by their corresponding partial models. Note also that, when the symbol − appears in u^{ℓ_1}_i[m] or in u^{ℓ_2}_j[m] and the other element is different from −, this value is assigned to J(u^{ℓ_1}_i, u^{ℓ_2}_j)[m]. Figure 4.8 shows some examples of the join function applied to two partial vectors.
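A minimal Python sketch of the join function is given below ('-' denotes the don't-care symbol and 'c' the contradiction symbol; names are ours). The two calls reproduce the examples of Figure 4.8.

def join(u1, u2):
    """Sketch of Definition 4.2: element-wise join of two partial vectors."""
    out = []
    for a, b in zip(u1, u2):
        if a == b:
            out.append(a)
        elif b == "-":
            out.append(a)
        elif a == "-":
            out.append(b)
        else:
            out.append("c")       # contradiction: both observed and different
    return tuple(out)

print(join(("-", 1, "-", 1), (1, "-", 0, 1)))   # -> (1, 1, 0, 1)
print(join((0, 1, "-", "-"), (1, "-", 0, 1)))   # -> ('c', 1, 0, 1)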
In the sequel, we present the definition of the modular synchronous composition for two cases: (i) k = 1 and (ii) k > 1. The difference lies in the fact that, for k = 1, the composed model naturally resets, since each state x ∈ X of the composed model corresponds to a unique I/O vector λ(x), while, for k > 1, it is necessary to force the reset of all partial models when a complete task is executed by the system.

Definition 4.3 (Modular Synchronous Composition for k = 1). Let M_ℓ = (X_ℓ, Ω_ℓ, f_{M_ℓ}, λ_ℓ, x_{0,ℓ}, {x_{0,ℓ}}), for ℓ = 1, 2, be two M-NDAAO partial models obtained for k = 1. The Modular Synchronous Composition of M_1 and M_2 is defined as:

M_c = M_1 || M_2 = Trim(X, Ω, f_{M_c}, λ, (x_{0,1}, x_{0,2}), X_m),

where:
X = {(x_1, x_2) ∈ X_1 × X_2 : J(λ_1(x_1), λ_2(x_2))[m] ≠ c, ∀m ∈ {1, 2, ..., n}};
Ω = {J(λ_1(x_1), λ_2(x_2)) : (x_1, x_2) ∈ X};
f_{M_c}(x_1, x_2) = {(x'_1, x'_2) ∈ X : (x'_1, x'_2) ∈ ({x_1} ∪ f_{M_1}(x_1)) × ({x_2} ∪ f_{M_2}(x_2)) ∧ (x'_1, x'_2) ≠ (x_1, x_2)};
λ(x_1, x_2) = J(λ_1(x_1), λ_2(x_2)), ∀(x_1, x_2) ∈ X;
X_m = {(x_{0,1}, x_{0,2})}.

According to Definition 4.3, the state (x_1, x_2) belongs to X only if the associated I/O vectors λ_1(x_1) and λ_2(x_2) do not lead to a contradiction, indicated by the symbol c in at least one of the elements of the join vector J(λ_1(x_1), λ_2(x_2)). Set Ω is formed of all possible join vectors obtained from the elements of X. The basic idea of the composition transition function f_{M_c}, for k = 1, is that, for each pair of transitions x_1 → x'_1 and x_2 → x'_2 of the two partial models M_1 and M_2, three possibilities are created: (i) partial model M_1 plays its transition and partial model M_2 does not, reaching the state (x'_1, x_2); (ii) partial model M_2 plays its transition and partial model M_1 does not, reaching the state (x_1, x'_2); and (iii) both partial models play their respective transitions, reaching the state (x'_1, x'_2). Then, it is analyzed whether the reached states belong to X, i.e., whether the transitions do not lead the model to a contradiction. If the reached state belongs to X, then the transition that leads to it is modeled. Figure 4.9 illustrates the idea of the transition function of the composition, where a dashed line represents a transition that is not modeled.
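The generation of candidate successors described above can be sketched as follows (an illustration of the idea only, using the dictionary-based representation of the partial models and a compact version of the join function sketched earlier; it does not include the trim operation of Definition 4.3).

def join(u1, u2):
    # element-wise join (Definition 4.2); '-' = don't care, 'c' = contradiction
    return tuple(a if (a == b or b == "-") else (b if a == "-" else "c")
                 for a, b in zip(u1, u2))

def compose_successors(x1, x2, fM1, fM2, lam1, lam2):
    """Candidate successors of the composed state (x1, x2) for k = 1:
    (x1', x2), (x1, x2'), and (x1', x2'), kept only if the joined output
    contains no contradiction symbol 'c'."""
    successors = set()
    for y1 in {x1} | fM1[x1]:
        for y2 in {x2} | fM2[x2]:
            if (y1, y2) == (x1, x2):
                continue                       # at least one model must move
            if "c" not in join(lam1[y1], lam2[y2]):
                successors.add((y1, y2))
    return successors

For k > 1, Definition 4.4 additionally removes from the candidate set the pairs in which exactly one of the partial models executes a reset transition.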
It is important to remark that the difference between the modular synchronous composition proposed in Definition 4.3 and the cross product proposed in [3] is that the initial state of M_c is marked, and a trim operation is performed to obtain the composed model. The trim operation can be executed in M_c since it is a monolithic model, and therefore sequences that lead to non-coaccessible states can be computed

Figure 4.9: Possible transitions created by f_{M_c}, according to Definition 4.3 (for partial transitions x_1 → x'_1 and x_2 → x'_2, the candidate composed transitions (x_1, x_2) → (x'_1, x_2), (x_1, x_2) → (x_1, x'_2), and (x_1, x_2) → (x'_1, x'_2); candidates whose joined output contains the contradiction symbol c are not modeled).

This reduces the exceeding language in comparison with
the language formed of the sequences that are accepted as fault-free by the method
proposed in [3].
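For reference, a minimal sketch of the trim operation on the composed automaton is given below, assuming the transition structure is stored as a dictionary of successor sets; it only illustrates the accessible and coaccessible computations and is not the implementation used in this work.

def trim(transitions, initial, marked):
    # Keep only states reachable from the initial state (accessible) and from
    # which a marked state can be reached (coaccessible).
    # transitions: dict mapping each state to its set of successor states.
    accessible, stack = {initial}, [initial]
    while stack:
        for nxt in transitions.get(stack.pop(), ()):
            if nxt not in accessible:
                accessible.add(nxt)
                stack.append(nxt)
    reverse = {}
    for state, succs in transitions.items():
        for nxt in succs:
            reverse.setdefault(nxt, set()).add(state)
    coaccessible, stack = set(marked), list(marked)
    while stack:
        for prev in reverse.get(stack.pop(), ()):
            if prev not in coaccessible:
                coaccessible.add(prev)
                stack.append(prev)
    keep = accessible & coaccessible
    return {s: {t for t in transitions.get(s, set()) if t in keep} for s in keep}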
In Chapter 3, it is shown that the reset of partial models identified for a parameter k > 1 is forced. Let π_{j,ℓ} = (ν^ℓ_{j,1}, . . . , ν^ℓ_{j,l_j−1}, ν^ℓ_{j,l_j}) be the partial observation of path pj by the partial model Mℓ , according to Algorithm 4.1. In addition, consider that Mℓ is identified for a parameter k > 1. Then, if Pℓ(ν_{j,l_j−1}) ≠ Pℓ(ν_{j,l_j}), where Pℓ(ν_{j,l_j−1}) = ν^ℓ_{j,l_j−1} and Pℓ(ν_{j,l_j}) = ν^ℓ_{j,l_j}, the partial model Mℓ resets simultaneously with the system after completing path pj. When the inequality Pℓ(ν_{j,l_j−1}) ≠ Pℓ(ν_{j,l_j}) is valid for every partial model Mℓ , ℓ = 1, . . . , r, and every path pj, j = 1, . . . , P, all resets of the partial models occur simultaneously with the system, which leads to Assumption 4.1.

Assumption 4.1. All partial models Mℓ complete every observed path pj simultaneously with the system, i.e., ∀ℓ ∀j (Pℓ(ν_{j,l_j−1}) ≠ Pℓ(ν_{j,l_j})).
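Assumption 4.1 can also be checked directly on the logged data; a minimal sketch is given below, where paths and index_sets are illustrative names for the list of observed paths and the sets Φℓ (1-based component indexes).

def assumption_41_holds(paths, index_sets):
    # paths: list of observed paths, each a list of complete I/O vectors;
    # index_sets: list of the sets Phi_l of component indexes observed by each
    # partial model. The assumption holds when, for every path and every model,
    # the last two I/O vectors of the path differ in at least one observed component.
    for path in paths:
        before_last, last = path[-2], path[-1]
        for phi in index_sets:
            if all(before_last[i - 1] == last[i - 1] for i in phi):
                return False     # projections of the last two vectors coincide
    return True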
When a partial model identified for a parameter k > 1 plays a reset transition, the system has returned to its initial state; hence, all partial models identified for the same parameter k and observing the same system must return to their respective initial states simultaneously. Thus, the modular synchronous composition for partial models identified for a parameter k > 1 synchronizes the reset transitions of all partial models. In the sequel, the modular synchronous composition of two M-NDAAO models computed for k > 1 is defined.
Definition 4.4 (Modular Synchronous Composition for k > 1). Let Mℓ =
(Xℓ , Ωℓ , fMℓ , λℓ , x0,ℓ , {x0,ℓ }), for ℓ = 1, 2 be two M-NDAAO partial models obtained
for k > 1. The Modular Synchronous Composition of M1 and M2 is defined as:

Mc = M1 ||M2
= Trim(X, Ω, fMc , λ, (x0,1 , x0,2 ), Xm ),

Figure 4.10: Possible transitions created by fMc , according to Definition 4.4.

where,
X = {(x1 , x2 ) ∈ X1 × X2 : c ̸= J(λ1 (x1 ), λ2 (x2 ))[m], ∀m ∈ {1, 2, . . . , n}}.
Ω = {J(λ1 (x1 ),λ2 (x2 )) : (x1 , x2 ) ∈ X}.

 {(x′1 , x′2 ) ∈ X : (x′1 , x′2 ) ∈ ({x1 } ∪ fM1 (x1 )) × ({x2 } ∪ fM2 (x2 ))

′ ′
 ∧(x1 , x2 ) ̸= (x1 , x2 )}, if (x1 = x0,1 ) ∨ (x2 = x0,2 )



fMc (x1 , x2 ) = {(x′1 , x′2 ) ∈ X : (x′1 , x′2 ) ∈ [({x1 } ∪ fM1 (x1 )) × ({x2 } ∪ fM2 (x2 ))]

\[{x0,1 } × (fM2 (x2 ) \ {x0,2 }) ∪ (fM1 (x1 ) \ {x0,1 }) × {x0,2 }]∧





 ′ ′
(x1 , x2 ) ̸= (x1 , x2 )}, otherwise.
λ(x1 , x2 ) = J(λ1 (x1 ), λ2 (x2 )), ∀(x1 , x2 ) ∈ X.
Xm = {(x0,1 , x0,2 )}.

Note, according to Definition 4.4, that the set fMc (x1 , x2 ) is formed of all possible
pairs of states that do not lead to a contradiction, reached in both models after a
transition, except if one of the transitions is a reset transition, and the other is not.
Thus, both models can only be reset at the same time, indicating that the associated
system path has been concluded. It is important to remark that the synchronization at the end of the observed paths in each model is possible using only the M-NDAAO model, since in the M-NDAAO a reset transition to the initial state is forced after the complete observation of a path. It is also important to remark that the strategy of forcing the reset transitions of all partial models works only when at least one input or output of each partial model is altered when the complete observed path reinitializes. This guarantees that the synchronization between reset transitions correctly represents the system behavior and that Assumption 4.1 is valid. Figure 4.10 illustrates the restriction imposed on the composition computed for k > 1, where a dashed line represents a transition that is not modeled and x′2 ̸= x0,2 .
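The reset-synchronization rule of Definition 4.4 can be sketched as follows, reusing the names of the k = 1 sketch; again, this is only an illustration of the definition, not the implementation used in this work.

def successors_kgt1(x1, x2, f1, f2, lam1, lam2, x0_1, x0_2):
    # Candidate successors of (x1, x2) for k > 1, reusing successors_k1() above.
    # The extra rule removes pairs in which exactly one of the partial models
    # plays a reset transition to its initial state x0_1 or x0_2.
    candidates = successors_k1(x1, x2, f1, f2, lam1, lam2)
    if x1 == x0_1 or x2 == x0_2:
        return candidates                      # first case of the definition: no restriction
    allowed = set()
    for (x1n, x2n) in candidates:
        one_sided_reset = (
            (x1n == x0_1 and x2n in f2(x2) and x2n != x0_2)    # model 1 resets, model 2 does not
            or (x2n == x0_2 and x1n in f1(x1) and x1n != x0_1) # model 2 resets, model 1 does not
        )
        if not one_sided_reset:
            allowed.add((x1n, x2n))
    return allowed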
Since, as in Definition 4.3, only the initial state of Mc in Definition 4.4 is marked, then, after executing the trim operation, only accessible and coaccessible states, i.e., states from which both models can be reinitialized, are kept in the model.

The trim operation reduces the exceeding language and avoids reaching states without outgoing transitions, which do not belong to the fault-free system behavior since, according to Assumption 2.1, all tasks executed by the system are cyclic.
Note that, for any value of k, we can define the composition of M-NDAAOs for a higher number of models by applying the associative rule: Mc = M1 ||M2 ||M3 = ((M1 ||M2 )||M3 ).
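A sketch of this associative construction, assuming a binary routine compose implementing the modular synchronous composition of Definition 4.3 or 4.4, is:

from functools import reduce

def compose_all(models, compose):
    # compose(M1, M2) stands for the binary modular synchronous composition,
    # assumed available; the associative rule gives ((M1 || M2) || M3) || ... || Mr.
    return reduce(compose, models)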
Now, let LnIden,Mc denote the language formed of all sequences of I/O vectors of length n that are possible to be played in the model computed from the modular synchronous composition of all partial models Mℓ , ℓ = 1, 2, . . . , r, and let Φ = Φ1 ∪ . . . ∪ Φr . In the sequel, we show that if the partial models Mℓ are complete, i.e., the structure of the M-NDAAO models Mℓ does not change even after observing the system behavior for an infinitely long time, then LnOrig ⊆ LnIden,Mc .

Theorem 4.1. Let Mℓ , ℓ = 1, 2, . . . , r, denote complete M-NDAAO partial models computed for k > 1, and let Φ = Φ1 ∪ . . . ∪ Φr . Let Mc = ∥rℓ=1 Mℓ . Then, LnOrig ⊆ LnIden,Mc .

Proof. Let s = ν1 ν2 . . . νn ∈ LnOrig be a sequence of I/O vectors of length n associated


with an observed path p. Since Mℓ are complete partial models, then Pℓ (s) ∈
LnIden,Mℓ , where LnIden,Mℓ is the set formed of all sequences of I/O vectors of length
one up to n generated by the partial model Mℓ . Consider, without loss of generality, the composition of two partial models. Note, according to the definition of X in Definitions 4.3 and 4.4, and since the models Mℓ are complete, that there exists (x1 , x2 ) ∈ X for each I/O vector νk , k = 1, . . . , n, such that J(λ1 (x1 ), λ2 (x2 )) = νk .
In addition, note, according to the definition of fMc in Definition 4.4, that all states
(x′1 , x′2 ) in ({x1 } ∪ fM1 (x1 )) × ({x2 } ∪ fM2 (x2 )) that do not lead to a contradiction
in the join function J(λ(x′1 ), λ(x′2 )) can be reached from a state (x1 , x2 ) ∈ X, except
from the states that are reached after a reset transition in only one of the partial
models. Thus, there is a transition between states (x1 , x2 ), (x′1 , x′2 ) ∈ X such that
J(λ1 (x1 ), λ2 (x2 )) = νk and J(λ1 (x′1 ), λ2 (x′2 )) = νk+1 , for all k = 1, . . . , n − 1. Let us
consider that p does not represent a complete task executed by the system. Then,
since the initial states of all partial models are synchronized and are used to form the initial state of the composed model Mc , and since the models are complete, the sequence of I/O vectors ν1 ν2 . . . νn can be played by the composed model Mc . Let
us consider now that p is formed by the concatenation of paths p′j ∈ P ′ , i.e., a
system task may be completed while executing p. Then, since the unique marked
state of Mc is its initial state and Mc is coaccessible, and since the model is reset
only when all partial models are reset allowing a new system task to be played in
the models, then s ∈ LnIden,Mc .

In the sequel, we present an example that shows the reduction in the exceeding language of the composed model when reset transitions are synchronized in all partial models.
π1,1 = ([0 1 0 − −]T , [1 0 1 − −]T , [0 1 0 − −]T , [1 0 1 − −]T , [1 1 1 − −]T , [0 0 1 − −]T , [0 1 0 − −]T ),
π1,2 = ([− − 0 0 0]T , [− − 1 1 0]T , [− − 1 1 1]T , [− − 0 0 0]T , [− − 1 1 1]T , [− − 1 1 0]T , [− − 0 0 0]T ).

Figure 4.11: Partial paths π1,1 and π1,2 from Example 4.4.


Example 4.4. Let p = (A, B, C, A, C, E, D, A) be the system observed path, where


A, B, C, D, and E represent the following I/O vectors:

A = [0 1 0 0 0]T , B = [1 0 1 1 0]T , C = [1 0 1 1 1]T , D = [0 0 1 1 0]T , E = [1 1 1 1 1]T ,
and consider the identification of two partial models M1 and M2 , such that Φ1 = {1, 2, 3} and Φ2 = {3, 4, 5}. The partial paths π1,1 and π1,2 , observed according to Φ1 and Φ2 , respectively, are depicted in Figure 4.11.
The structure of the partial models was chosen to simplify the observed path, so that the M-NDAAOs that simulate each partial path are smaller than the monolithic model that would represent the original path p. As in Example 4.3, the partial paths π1,1 and π1,2 have smaller length than the original path p, since M1 does not observe the difference between I/O vectors B and C, and M2 does not observe the difference between I/O vectors C and E. We then choose a free parameter k = 2, and the modified partial paths π²1,1 and π²1,2 are computed according to Algorithm 3.6 and presented in Figure 4.12.
The identified M-NDAAO partial models can be computed from the modified par-
tial paths, according to Algorithm 3.7. Models M1 and M2 are presented in Figures
4.13 and 4.14, respectively.
After the computation of the partial models, we can obtain the composed system
model by using the modular synchronous composition. Since all inputs and outputs
are observed by the partial models, i.e., Φ1 ∪ Φ2 = Φ, the resulting M-NDAAO is
a monolithic representation of the complete system behavior. The composed system
model Mc , depicted in Figure 4.15, is obtained according to Definition 4.4.
Figure 4.12: Modified partial paths π²1,1 and π²1,2 from Example 4.4.

Figure 4.13: Partial Model M1 from Example 4.4.

Figure 4.14: Partial Model M2 from Example 4.4.

Note that Mc is a cyclic automaton and simulates the observed path p. It is important to


remark that, since a reset transition can only be executed in the composed model Mc
when both partial models M1 and M2 execute it simultaneously, some I/O vectors
that do not represent a contradiction are not modeled, which, according to Theorem
4.1, leads to a reduction in the exceeding language of the composed model. Note that
there does not exist a transition in Mc from (x1 , y5 ) to state (x2 , y0 ), even though (x2 , y0 ) does not have a contradiction in its associated I/O vector J(λ1 (x2 ), λ2 (y0 )). This occurs since, if state y0 is reached, then M2 has observed a sequence in which the system has finished a task. On the other hand, if we reach x2 in M1 , then the task has not finished. As a consequence, this sequence does not belong to the original system behavior, and thus, must not belong to the language generated by the composed model.
In order to show the need for the trim operation, in Figure 4.16, we present
the accessible automaton obtained before taking the coaccessible part of the trim
operation, denoted as Mac . Note that there are several states that are eliminated
after the trim operation that are not coaccessible and, therefore, cannot be reached
in the fault-free system behavior. Thus, by taking the coaccessible part, the exceeding
language of the model is reduced.

Note that, according to Theorem 4.1, if the partial models Mℓ , ℓ = 1, 2, . . . , r, are


complete, then the composed model Mc = ∥rℓ=1 Mℓ simulates the original language
of the system. However, to be certain that a complete partial model has been
obtained, it is necessary to observe the system for an infinitely long time. Thus, in
practice, it is necessary to specify a criterion to consider that the partial model is sufficiently close to be complete and suitable for fault diagnosis.
Figure 4.15: Composed model Mc .

Figure 4.16: Automaton Mac .

This leads to the definition of η-convergence.

Definition 4.5 (η-convergence). A model is said to have converged if, after observing η new paths executed by the system and updating the model following the steps of Algorithms 3.6 and 3.7, no new transition is added to the model, where η is a free parameter.

In this work, we assume that the model is sufficiently close to be complete when
the model converges after the observation of η paths. In order to do so, the number
of transitions after each new observed path is computed and compared with the
last number of transitions of the model. If the number of transitions does not grow
after η new observed paths, then the model has converged and the identification
procedure stops. Parameter η has been chosen in the example presented in Chapter
5 of this work as a percentage of the total number of observed paths. Other methods
to choose the free parameter η can be derived, and will be studied in a future work.
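A minimal sketch of this stopping criterion is given below; observe_next_path, update_model and count_transitions are illustrative placeholders for the data acquisition, for Algorithms 3.6 and 3.7, and for reading the current model size.

def identify_until_convergence(model, eta, observe_next_path, update_model, count_transitions):
    # Stop the identification when the number of transitions of the model has
    # not grown over the last eta observed paths.
    paths_without_growth = 0
    last_size = count_transitions(model)
    while paths_without_growth < eta:
        path = observe_next_path()          # new cyclic path observed from the system
        update_model(model, path)           # Algorithms 3.6 and 3.7
        size = count_transitions(model)
        if size == last_size:
            paths_without_growth += 1       # no new transition was added
        else:
            paths_without_growth = 0        # the model grew: restart the counter
            last_size = size
    return model                            # considered sufficiently close to complete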
In [2] it is shown that the exceeding language of a monolithic NDAAO model is
reduced by increasing the value of the free parameter k. In the sequel, we show the
influence of the free parameter k in the reduction of the exceeding language of the
composed model Mc obtained from the modular synchronous composition of the
partial M-NDAAO models.

Theorem 4.2. Let k and k ′ be free parameters such that k > k ′ , and let Mℓ and M′ℓ , for ℓ = 1, . . . , r, be identified partial models computed for k and k ′ , respectively. Let Mc = ∥rℓ=1 Mℓ and M′c = ∥rℓ=1 M′ℓ . Then, LnIden,Mc ⊆ LnIden,M′c .

Proof. Note that if k > k ′ , then LIden,Mℓ ⊆ LIden,M′ℓ since when the free parameter
is increased, the number of states is also increased, leading to a reduction in the
language generated by the model.
Without loss of generality, let us consider that the composed model is computed
by making the modular synchronous composition of only two partial M-NDAAO
models. Let us consider a sequence of I/O vectors s = λ(x0 )λ(x1 ) . . . λ(xn−1 ) ∈
LnIden,Mc . Thus, each output λ(xη ), η = 1, . . . , n − 1, is created from the join
function of two partial vectors, i.e., each vector λ(xη ) = J(u1η , u2η ), where u1η ∈ Ω1
and u2η ∈ Ω2 . Let s1 = P1 (s) and s2 = P2 (s), and let s̃1 and s̃2 be sequences of I/O
vectors obtained from s1 and s2 , respectively, after eliminating consecutive equal
I/O vectors. Then, there are in Mℓ , ℓ = 1, 2, paths of states πs,ℓ , where each state
of πs,ℓ is associated with a sequence of k I/O vectors computed from s̃ℓ , such that
s̃ℓ ∈ LIden,Mℓ . Since, for k > k ′ , LIden,Mℓ ⊆ LIden,M′ℓ , then s̃ℓ ∈ LIden,M′ℓ , ℓ = 1, 2,
which implies that there are also in M′ℓ paths of states π′s,ℓ , with the same length as πs,ℓ , where each state of π′s,ℓ is associated with a sequence of k ′ I/O vectors computed from s̃ℓ , such that s̃ℓ ∈ LIden,M′ℓ . Thus, since the i-th state in path π′s,ℓ of M′ℓ has the same output as the i-th state in path πs,ℓ of Mℓ , then, according to the definition of the modular synchronous composition, s ∈ LnIden,M′c .
In order to show that LnIden,M′c may not be equal to LnIden,Mc , we present a counterexample in which LnIden,M′c \ LnIden,Mc ̸= ∅.
Consider a system that can execute a unique path p = ([0 0 0]T , [1 0 1]T , [0 0 0]T , [0 0 1]T , [1 1 1]T , [0 0 0]T ),

and let the system be identified by two partial models such that Φ1 = {1, 2} and
Φ2 = {2, 3}. For a free parameter value k ′ = 2, we compute the partial models M′1
and M′2 according to Algorithms 3.6 and 3.7, where both models are presented in
Figure 4.17. Note that the partial model M′1 generates only the partial observation
of the original path p, while partial model M′2 generates the partial observation of
path p and an exceeding language. Now, let k = 3 and compute the partial models
M1 and M2 , depicted in Figure 4.18, using the new parameter value. Note that,
for k = 3, both partial models M1 and M2 generate only the corresponding partial
observations of the original path p. It is not difficult to see that the sequence of I/O
vectors      
0 1 1 1 1
s = 0 0 0 0 0 ∈ LIden,M′c ,
     

0 1 0 1 0
where s corresponds to the following path of states in M′c :
((x0 , y0 ), (x1 , y1 ), (x1 , y2 ), (x1 , y1 ), (x1 , y2 )). However, s ∉ LIden,Mc , since there does not exist in Mc a path of states associated with s, which concludes the proof.

According to Theorem 4.2, an increase in the value of the free parameter k may
reduce the language generated by the composed model Mc , which also reduces its
exceeding language. However, it is important to remark that an increase in the value of k also leads to more states and transitions in the partial models, reducing their generated languages, and the convergence of the partial models may require more path observations to be achieved. Thus, there is a trade-off between the value of parameter k and the convergence of the partial models, since convergence is necessary to assume that the partial models are close to being complete and can be used for fault detection without generating a large number of false alarms.
Therefore, the modular synchronous composition provides a monolithic representation of an identified system from a distributed identification.
Figure 4.17: Partial Models M′1 and M′2 .

Figure 4.18: Partial Models M1 and M2 .

This model represents the original language of the system, eliminating exceeding behaviors and
making it more reliable. Furthermore, since in Chapter 3 we developed a model ca-
pable of reinitializing for values of k > 1, we can use the parameter k as a trade-off
between exceeding language and model size in the computation of the composition.
In the following chapter, we present a practical example of an industrial distribution system, which is identified as partial models using the algorithms presented in Section 3.2. The composition of these models is computed for different values of k, according to the definitions presented in Section 4.3. Finally, the efficiency of the
identified models in detecting simulated faults in the original plant is verified.

Chapter 5

Practical Example

In this chapter, a virtual plant of a sorting unit system is presented. The system is simulated using the 3D simulation software Factory IO and controlled by a virtual PLC. This plant is treated as a black-box system, where the only available information is the system inputs and outputs. Thus, we identify
the partial M-NDAAO models for different values of the free parameter k and per-
form the modular synchronous composition between the identified partial models.
Using the identified monolithic model, we compute the language generated by it
and verify its efficiency to diagnose faults. In addition, we show the reduction of the
exceeding language by increasing the free parameter k used to identify the partial
models.
Let us consider the sorting unit system depicted in Figure 5.1, first presented
in [1]. The system is composed of a feeder conveyor (F C), a distribution conveyor
(DC), Pushers 1 and 2 (P 1 and P 2, respectively), and sensors ki , i = 1, . . . , 8. Thus,
the complete system has 4 actuators and 8 sensors, and the I/O vector u is given
by:
u = [k7 k8 k5 k6 k2 k1 k4 k3 P 2 P 1 F C DC]T .

The objective of the system is to sort high boxes in the second slide and small
boxes in the first slide. Only one box can be on the distribution conveyor at a
time. Thus, if there is a box in the distribution conveyor and another box arrives
at sensor k1 , then the feeder conveyor is stopped, and it is turned on again only
after observing the rising edge of sensors k6 or k8 , indicating that pushers P 1 or P 2,
respectively, have been retracted and the box has already been sorted. Sensor k2 is
used to indicate if the box is high, and sensors k3 and k4 are used to indicate that
the box is in front of pushers P 1 and P 2, respectively. After observing the falling
edge of sensor k3 (resp. k4 ), the box is in the position to be sorted by pusher P 1
(resp. P 2), and the distribution conveyor is stopped. Sensors k5 and k7 indicate
that Pushers P 1 and P 2 are completely extended, respectively.

Figure 5.1: Sorting unit system.

In order to separate the system into partial models the algorithms presented in
[24] are used, which first separate the actuators that work concurrently, and then
establish a causal relationship between sensors and actuators. According to the
method proposed in [24], two partial models have been obtained, where the first par-
tial model observes the following set of inputs and outputs {k8 , k5 , k6 , k2 , k1 , k3 , F C},
and the second partial model observes {k7 , k8 , k5 , k6 , k2 , k4 , k3 , P2 , P1 , DC}. Thus,
based on the indexes of the elements of the I/O vector u, we have that the sets of
indexes for the first and second partial models are given by Φ1 = {2, 3, 4, 5, 6, 8, 11}
and Φ2 = {1, 2, 3, 4, 5, 7, 8, 9, 10, 12}, respectively.
In order to identify the system models, we have observed continuously 2577 I/O
vectors generated by the system, which corresponds to 197 cyclic paths, with 13
distinct paths. With this data, we have identified, using Algorithms 3.6 and 3.7,
two partial models M1 and M2 , and the monolithic model M, for k = 1, 2, 3, 4.
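A minimal sketch of the data acquisition step is given below; read_signals is an illustrative placeholder for the interface with the virtual PLC, and only changes of the I/O vector are recorded.

def record_io_vectors(read_signals, num_samples):
    # read_signals() is assumed to return the current values of the 12 inputs
    # and outputs in the order of the I/O vector u. A new vector is stored only
    # when at least one signal changes.
    observed = []
    last = None
    for _ in range(num_samples):
        u = tuple(read_signals())
        if u != last:
            observed.append(u)
            last = u
    return observed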
In Figures 5.2 and 5.3, we show the number of transitions of M1 and M2 , respectively, versus the number of observed system paths. As can be seen, for k = 1, the number of transitions of M1 and M2 reaches a maximum value after observing 76 cyclic paths and 2 cyclic paths, respectively. For k > 1, the number of transitions of M1 still reaches its maximum after 76 cyclic paths, but the number of transitions of M2 reaches its maximum after 73 cyclic paths. This shows that increasing the value of k may require observing more paths to reach the maximum size of the partial models. In Figure 5.4, the number of transitions of the monolithic model M versus the number of observed paths is presented.
Figure 5.2: Convergence of partial model M1 : number of transitions in M1 versus the number of observed paths pj , for k = 1, 2, 3, 4.

Note that the number of transitions of the monolithic model reaches its maximum after
146 cyclic paths, which shows, as expected, that the monolithic identification needs
much more observations than the distributed identification. In this case, if we choose
η for the convergence equal to half of the total observed paths, i.e., 99 cyclic paths,
then we conclude that the partial models converge after 175 cyclic paths and the
monolithic model does not converge after 197 observed cyclic paths.
Since the partial models have converged, then we can compute the composed
model Mc for different values of k. In Table 5.1, we present the number of sequences
in the identified language of Mc , LnIden,Mc , for different values of n. Note that there
is a huge reduction for k = 2 in comparison with k = 1 in the cardinality of
the identified language. In Table 5.2, we present the reduction of the exceeding
language as long as the free parameter k is increased. To show the reduction, we
have computed the difference between the cardinalities of the identified language for
k = k ′ + 1 and k = k ′ , k ′ = 1, 2, 3. Note that there is a significant reduction of the
exceeding language with the value of k ′ = 1, i.e., making the difference between the
identified languages for k = 2 and k = 1. For k ′ = 2, there is no reduction in the
exceeding language, and for k ′ = 3, the reduction is small.
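For illustration, cardinalities such as those reported in Table 5.1 can be computed by a search over state estimates of Mc; the sketch below assumes, for the sake of the example, that a sequence may start at any state of the model.

def count_sequences(states, transitions, output, n):
    # Count the distinct sequences of I/O vectors of length n that can be played
    # in the composed model. Since the automaton is nondeterministic, the search
    # runs over state estimates so that equal output sequences are counted once.
    def group_by_output(candidates):
        groups = {}
        for s in candidates:
            groups.setdefault(output[s], set()).add(s)
        return groups

    stack = [(frozenset(g), 1) for g in group_by_output(states).values()]
    total = 0
    while stack:
        estimate, length = stack.pop()
        if length == n:
            total += 1
            continue
        successors = set()
        for s in estimate:
            successors |= set(transitions.get(s, ()))
        for g in group_by_output(successors).values():
            stack.append((frozenset(g), length + 1))
    return total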
Finally, we check the efficiency of the identified models for fault detection. In order to do so, we have simulated 45 intermittent and permanent faults in all sensors and actuators of the system. Each simulated fault consists of forcing one of the sensor or actuator signals to be stuck at one or at zero. Table 5.3 presents the number of faults detected by the composed model Mc for values of k = 1, 2, 3, 4. Note that for k = 2 or higher, the number of detected faults is equal to 37, which corresponds to approximately 82% of the total simulated faults.
Figure 5.3: Convergence of partial model M2 : number of transitions in M2 versus the number of observed paths pj , for k = 1, 2, 3, 4.

Figure 5.4: Convergence of the monolithic model M: number of transitions in M versus the number of observed paths pj , for k = 1, 2, 3, 4.

Table 5.1: Identified language of Mc .
LnIden,Mc k=1 k=2 k=3 k=4
n=1 12 5 5 5
n=2 50 12 12 12
n=3 209 21 21 21
n=4 822 35 35 30
n=5 3.218 61 61 39
n=6 12.501 112 112 52
n=7 48.521 189 189 77
n=8 188.244 288 288 122
n=9 730.446 414 414 188
n = 10 2.834.416 592 592 296

Table 5.2: Reduction of the exceeding language of Mc .


LnExc,Mc k′ = 1 k′ = 2 k′ = 3
n=1 7 0 0
n=2 38 0 0
n=3 188 0 0
n=4 787 0 5
n=5 3.157 0 22
n=6 12.389 0 60
n=7 48.332 0 112
n=8 187.956 0 166
n=9 730.032 0 226
n = 10 2.833.824 0 296

Table 5.3: Fault detection for different values of k.
k=1 k=2 k=3 k=4
Faults detected 27 37 37 37

Thus, in this example, the proposed method has a high efficiency using the composed model for k = 2.
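A possible form of the detection test, sketched below under the assumption that a fault is signaled as soon as the observed sequence of I/O vectors cannot be reproduced by Mc, tracks the set of states of the composed model consistent with the observations; it is a sketch of this idea, not the exact implementation used in this work.

def detect_fault(observed_vectors, transitions, output, x0):
    # Track the set of states of Mc consistent with the observed I/O vectors and
    # signal a fault as soon as this set becomes empty.
    estimate = {x0}                               # current state estimate in Mc
    for vector in observed_vectors:
        consistent = set()
        for state in estimate:
            if output[state] == vector:
                consistent.add(state)             # the I/O vector has not changed
            consistent |= {nxt for nxt in transitions.get(state, ())
                           if output[nxt] == vector}
        if not consistent:
            return True                           # observation not reproducible: fault detected
        estimate = consistent
    return False                                  # sequence consistent with the identified model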
This chapter aimed, through a practical example, to show the benefits of using the M-NDAAO model for the partial models, along with its modular synchronous composition. Since the M-NDAAO allows the synchronization of the subsystems for values of k > 1, it is possible to improve the accuracy of the model by increasing the parameter k, while keeping the observation time required for the partial models to converge smaller than that of the monolithic identification. Furthermore, building a monolithic model through synchronization eliminates exceeding behaviors, improving the model's effectiveness in detecting faults.

Chapter 6

Conclusion and future works

In this chapter, we summarize the contributions of this work and propose future works.

6.1 Conclusion
In this work, we present an effective methodology for building a monolithic model
for complex systems, whose monolithic identification is unfeasible due to the large
observation time required for model convergence. This problem occurs when the
system is composed of several concurrent behaviors.
In Chapter 3, we propose an identification model called modified nondeterministic autonomous automaton with outputs (M-NDAAO), which is suitable for cyclic systems. Using a parameter k, it is possible to modify the number of I/O vectors associated with each output, thus modifying the model's accuracy in representing the observed behavior and reducing the exceeding behavior. In addition, the model is able to represent the original language of a system that executes paths affected by the reinitialization problem.
In Chapter 4, we introduce the modular synchronous composition to obtain the
composed system model that is used in the fault detection scheme.
Finally, in Chapter 5, a digital twin controlled by a virtual PLC was used to show the efficiency of the proposed method. The composed model, for a free parameter k = 2, is able to detect approximately 82% of the simulated faults, which shows the high efficiency of the proposed approach.

6.2 Future works


To circumvent the problems presented in monolithic identification, it is necessary
to separate, with few observations, the concurrent behaviors present in the system.
When we have a black-box system, this task can be difficult since we do not have prior knowledge of the system.
In [24], a method for detecting concurrent behaviors was proposed, which was
used in Chapter 5 of this work. However, the algorithms presented by [24] have
several free parameters that modify the inputs and outputs that are observed by
each partial model. These parameters were modified and tested manually in order
to identify the best partition. Therefore, as future work, we propose to develop a
methodology for computing the parameters presented in [24], in an automated way,
seeking an optimal solution, considering the trade-off between convergence time and
exceeding language generated by the composition model.
Another possible work is to develop a timed M-NDAAO model. The time in-
formation can reduce the exceeding language created in the modular synchronous
composition.

References

[1] MOREIRA, M., LESAGE, J. “Discrete Event System Identification with the
Aim of Fault Detection”, Discrete Event Dynamic Systems, v. 29, n. 2,
pp. 191–209, 2019.

[2] KLEIN, S., LITZ, L., LESAGE, J. “Fault Detection of Discrete Event Systems
Using an Identification Approach”, IFAC Proceedings, v. 38, n. 1, pp. 92–
97, 2005.

[3] ROTH, M., LESAGE, J., LITZ, L. “Black-box Identification of Discrete Event
Systems with Optimal Partitioning of Concurrent Subsystems”, Proceed-
ings of the 2010 American Control Conference, pp. 2601–2606, 2010.

[4] LI, Z., WANG, Y., WANG, K.-S. “Intelligent predictive maintenance for fault
diagnosis and prognosis in machine centers: Industry 4.0 scenario”, Ad-
vances in Manufacturing, v. 5, pp. 377–387, 2017.

[5] ZAYTOON, J., LAFORTUNE, S. “Overview of fault diagnosis methods for


discrete event systems”, Annual Reviews in Control, v. 37, n. 2, pp. 308–
320, 2013.

[6] CASSANDRAS, C. G., LAFORTUNE, S. Introduction to Discrete Event Sys-


tems. 2nd ed. New York, Springer, 2008.

[7] SAMPATH, M., SENGUPTA, R., LAFORTUNE, S., et al. “Diagnosability of


discrete-event systems”, IEEE Transactions on Automatic Control, v. 40,
n. 9, pp. 1555–1575, 1995.

[8] DEBOUK, R., LAFORTUNE, S., TENEKETZIS, D. “Coordinated decentralized


protocols for failure diagnosis of discrete event systems”, Discrete Event
Dynamic Systems, v. 10, n. 1, pp. 33–86, 2000.

[9] MOREIRA, M., JESUS, T., BASILIO, J. “Polynomial time verification of de-
centralized diagnosability of discrete event systems”, IEEE Transactions
on Automatic Control, v. 56, n. 7, pp. 1679–1684, 2011.

[10] CABRAL, F. G., MOREIRA, M. V. “Synchronous Diagnosis of Discrete-Event
Systems”, IEEE Transactions on Automation Science and Engineering,
v. 17, n. 2, pp. 921–932, 2019.

[11] CABRAL, F. G., MOREIRA, M. V., DIENE, O., et al. “A Petri Net Diagnoser
for Discrete Event Systems Modeled by Finite State Automata”, IEEE
Transactions on Automatic Control, v. 60, n. 1, pp. 59–71, 2015. doi:
10.1109/TAC.2014.2332238.

[12] VIANA, G. S., BASILIO, J. C. “Codiagnosability of discrete event systems


revisited: A new necessary and sufficient condition and its applications”,
Automatica, v. 101, pp. 354–364, 2019.

[13] VIANA, G. S., MOREIRA, M. V., BASILIO, J. C. “Codiagnosability analysis


of discrete-event systems modeled by weighted automata”, IEEE Trans-
actions on Automatic Control, v. 64, pp. 4361–4368, 2019.

[14] SCHNEIDER, S. Automatic modeling and fault diagnosis of timed concur-


rent discrete event systems. Logos Verlag, École Normale Supérieure de
Cachan, (Phd. Thesis), 2015.

[15] DE SOUZA, R., MOREIRA, M., LESAGE, J. “Fault detection of Discrete-


Event Systems based on an identified timed model”, Control Engineering
Practice, v. 105, pp. 104638, 2020.

[16] MOREIRA, M., LESAGE, J. “Fault Diagnosis Based on Identified Discrete-


Event Models”, Control Engineering Practice, v. 91, pp. 104101, 2019.

[17] ROTH, M., LESAGE, J., LITZ, L. “Fault diagnosis based on identified discrete-event models”, Control Engineering Practice, v. 19, pp. 978–988, 2011.

[18] BASILE, F., FERRARA, L. “Residuals-based fault diagnosis of industrial au-


tomation systems using timed and untimed Interpreted Petri nets”, Con-
trol Engineering Practice, v. 129, pp. 105361, 2022.

[19] ZHU, G., LI, Z., WU, N., et al. “Fault Identification of Discrete Event Systems
Modeled by Petri Nets With Unobservable Transitions”, IEEE Transac-
tions on Systems, Man, and Cybernetics: Systems, v. 49, pp. 333–345,
2019.

[20] DOTOLI, M., FANTI, M. P., MANGINI, A. M. “Real time identification of


discrete event systems using Petri nets”, Automatica, v. 44, n. 5, pp. 1209–
1219, 2008.

[21] BASILE, F., CHIACCHIO, P., COPPOLA, J. “IdentifyTPN: a tool for the
identification of Time Petri nets”, IFAC-PapersOnLine, v. 50, pp. 5843–
5848, 2017.

[22] SAIVES, J., FARAULT, G., LESAGE, J. “Automated Partitioning of Concur-


rent Discrete-Event Systems for Distributed Behavioral Identification”,
IEEE Transactions on Automation Science and Engineering, v. 15, n. 2,
pp. 832–841, 2018.

[23] MACHADO, T. H., VIANA, G. S., MOREIRA, M. V. “Event-Based Automa-


ton Model for identification of discrete-event systems for fault detection”,
Control Engineering Practice, v. 134, pp. 105474, 2023. ISSN: 0967-0661.

[24] CASTRO, J. G. V., VIANA, G. S., MOREIRA, M. V. “Distributed Iden-


tification of Discrete-Event Systems with the Aim of Fault Detection”,
IFAC-PapersOnLine, 2022.

[25] KHAN, M. E., KHAN, F. “A Comparative Study of White Box, Black Box
and Grey Box Testing Techniques”, Int. Journal of Advanced Computer
Science and Applications, v. 3, pp. 12–15, 2012.

