
Neural Networks in Automated Measurement Systems: State of the Art and New Research Trends


Octavian POSTOLACHE
Technical University of Iasi, Electrical Engineering
B-dul D. Mangeron 53, Iasi, 6600, ROMANIA
poctav@alfa.ist.utl.pt

Pedro GIRÃO
Instituto de Telecomunicações
Av. Rovisco Pais, 1049-001, Lisboa, PORTUGAL
psgirao@alfa.ist.utl.pt

Miguel PEREIRA
Instituto Politécnico de Setúbal, EST
Vale de Chaves, Estefanilha, 2910 Setúbal, PORTUGAL
joseper@est.ips.pt

Abstract
The application of artificial neural network (ANN) data processing in measuring systems is reviewed. The neural network types best suited for different kinds of applications in that domain are briefly described, with identification of the particular characteristics that make each type suitable for each kind of application. Several aspects concerning the optimization and the virtual and hardware implementation of ANNs are also examined.

1 Introduction
Artificial neural networks are an attempt at modeling the information processing capabilities of the nervous system. The model of artificial neurons as components of the ANN was proposed in 1943 by Warren McCulloch [1]. Since then, ANNs have been widely applied in disciplines ranging from mathematics and physics to engineering. In metrology, ANNs are essentially related to applications where measurements are obtained with little or no intervention from a human operator (automated measurement systems, AMSs), for two sorts of reasons: (a) because those systems naturally have the means to implement the network in software or hardware; (b) because the added intelligence due to the inclusion of ANNs allows a considerable increase in system performance (intelligent AMSs) and, in particular, allows the use of low-cost, robust, but poorly performing measuring transducers. AMSs are designed and implemented for many different applications. The constitution of such measuring systems highly depends on the application and can range from a personal computer and general-purpose instrumentation to the so-called smart sensors (most of the time of reduced dimensions) that are incorporated into equipment and systems such as home appliances, moving vehicles, machine tools, or medical apparatus.


Even if ANNs are used to process data at the measuring system level, it is precisely at the transducer level that they are mostly utilized, in functions such as transducer characteristic linearisation, prediction and correction of errors due to quantities of influence, and fault detection and isolation.
The present paper is an attempt to summarize the
contribution of ANNs to AMSs. It starts with a brief
presentation of the most applied architectures in AMSs
applications and includes the presentation and discussion
of several ANN applications in the AMSs domain and a
brief presentation of ANN implementation solutions. The paper concludes with some considerations regarding future trends in ANN design and application in metrology-related activities.
2 ANN Architectures in AMSs
2.1 Network Architectures
An artificial neural network can be defined as a data processing system containing a large number of simple, highly interconnected processing units capable of storing knowledge and making it available for use [2]. Based on stored knowledge, functions such as linearisation [3-4], compensation [5-7], and fault detection and isolation [8-10] can be performed.
Referring to network architectures, three classes of ANN can be distinguished: the single-layer feedforward network, the multilayer feedforward network, and the recurrent network. In the AMSs domain, multilayer feedforward networks are mainly applied. Thus, the core of neural processing applications in AMSs uses ANN architectures based on multilayer perceptrons (MLP) or radial basis functions (RBF).
2.1.1 Multilayer Perceptron-ANN. This type of fully
connected feedforward artificial neural network consists
of a set of source nodes (input layer), one or more
intermediate layers of computation nodes (hidden layers)

and a final layer of computation nodes (output layer). The number of neurons in the input layer is equal to the number of input variables, and the number of output neurons is equal to the number of output variables.
The model of each neuron, represented in Figure 1,
includes non-linear activation functions typically in the
hidden layers.

[Figure 1: Neuron model: xki - input i (1 ≤ i ≤ m); yk - output; wki - weights; bk - bias; φ(·) - activation function]
Commonly used non-linear functions, differentiable everywhere, are the sigmoid and tan-sigmoid functions [2][11].
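As an illustration only, the following Python sketch (assuming NumPy is available; the paper itself prescribes no implementation language) computes the output of the neuron model of Figure 1, yk = φ(Σ wki·xki + bk), with the two activation functions named above:

```python
import numpy as np

def sigmoid(v):
    # Logistic sigmoid: differentiable everywhere, output in (0, 1)
    return 1.0 / (1.0 + np.exp(-v))

def neuron_output(x, w, b, activation=np.tanh):
    # Neuron model of Figure 1: weighted sum of the inputs plus bias,
    # passed through a non-linear activation function phi(.)
    return activation(np.dot(w, x) + b)

# Example: one neuron with m = 3 inputs (weights and bias are arbitrary)
x = np.array([0.5, -1.2, 0.3])
w = np.array([0.8, 0.1, -0.4])
print(neuron_output(x, w, b=0.2, activation=sigmoid))  # sigmoid output
print(neuron_output(x, w, b=0.2))                      # tan-sigmoid output
```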
The number of hidden layer neurons must be large enough to avoid underfitting, a situation where no set of weights and biases can produce outputs reasonably close to the targets. However, an excessive number of hidden layer neurons can generate the opposite problem, overfitting, a situation where the network loses its generalization capabilities, giving large errors for inputs outside the training set.
Figures 2(a) and 2(b) represent the results of underfitting and overfitting in the ANN modeling of the characteristic of a negative temperature coefficient (NTC) thermistor, when a reduced and an excessive number of neurons are used in the hidden layer, respectively.
Determining the proper number of hidden layer neurons is often accomplished by experimentation. Generally, there is a wide range in the number of neurons that can be used successfully. Its optimal value depends on the specific application and can be obtained using different optimization algorithms, such as genetic algorithms (GA) [12].
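As a sketch of such experimentation, the snippet below sweeps the hidden layer size of a single-hidden-layer MLP and compares training and test errors. It uses scikit-learn's MLPRegressor as a modern stand-in (not the authors' tooling) and a synthetic NTC-like characteristic with arbitrary coefficients, not the ON400 data discussed later in the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Synthetic NTC-like characteristic R(T) = R0*exp(B*(1/T - 1/T0));
# coefficients are arbitrary stand-ins, not the paper's thermistor data
T = np.linspace(273.0, 373.0, 40)                     # kelvin
R = 10e3 * np.exp(3435.0 * (1.0 / T - 1.0 / 298.15))  # ohm
Tn = ((T - T.mean()) / T.std()).reshape(-1, 1)        # normalized input
Rn = R / R.max()                                      # normalized target
Xtr, ytr, Xte, yte = Tn[::2], Rn[::2], Tn[1::2], Rn[1::2]

for nh in (1, 2, 4, 8, 32, 128):
    net = MLPRegressor(hidden_layer_sizes=(nh,), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(Xtr, ytr)
    # Few neurons -> both errors high (underfitting); too many ->
    # low training error but larger test error (overfitting)
    print(f"nh={nh:4d}"
          f"  train MSE={mean_squared_error(ytr, net.predict(Xtr)):.2e}"
          f"  test MSE={mean_squared_error(yte, net.predict(Xte)):.2e}")
```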
2.1.2 Radial Basis Functions-ANN. The radial basis function ANN (RBF-ANN) is also a fully connected feedforward artificial neural network architecture. In its most common form, it includes three layers: an input, a hidden and an output layer. In AMSs applications, the input layer includes the source nodes, usually related to sensor variables; the hidden layer includes a variable number of neurons with Gaussian activation functions [1-2]; and the output layer includes a set of neurons with linear activation functions connected to the output variables.
The main goal of this type of ANN is curve fitting in high-dimensional spaces. In this case, learning is equivalent to finding a surface in a multidimensional space that provides the best fit to a given set of data (training set).

[Figure 2: NTC thermistor characteristic modeling anomalies using an MLP-ANN, plotted as R(Ω) versus Temperature (°C): (a) underfitting case; (b) overfitting case (continuous line - experimental characteristic; o - training set; dashed line - interpolated characteristic)]

The individual activation function of each hidden layer neuron is given by:

    φ(X) = exp(−‖X − C‖² / (2σ²))    (1)

where the vector X represents the input values of the neuron, C is the vector of neuron center coordinates, and σ is the width of the radial function.
The model of each hidden layer neuron is represented in Figure 3.


[Figure 3: Hidden layer neuron model of an RBF-ANN: xki - input i (1 ≤ i ≤ m); yk - output; Ck - neuron center coordinates; σk - width of the radial function]

The argument of the activation function of each hidden neuron corresponds to the Euclidean norm between the input vector, Xk, and the center coordinates of each neuron, Ck, divided by the width of the radial function (σk). This activation function has a maximum value equal to 1, at the center, and tends to 0 far from it.
The output neurons of an RBF-ANN simply sum the
weighted outputs of the hidden layer neurons without
using any activation function.
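A minimal sketch of the complete RBF-ANN forward pass described above, in Python with NumPy (all parameter values are arbitrary placeholders, for illustration only):

```python
import numpy as np

def rbf_forward(x, centers, sigmas, weights):
    # Hidden layer: Gaussian activations phi_k = exp(-||x - C_k||^2 / (2*sigma_k^2)),
    # as in equation (1); maximum 1 at the center, decaying towards 0
    d2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * sigmas ** 2))
    # Output layer: plain weighted sum, no activation function
    return np.dot(weights, phi)

# Example: 3 hidden neurons, 2-dimensional input (parameters arbitrary)
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
sigmas = np.array([0.5, 0.7, 0.6])
weights = np.array([1.0, -0.5, 2.0])
print(rbf_forward(np.array([0.8, 0.9]), centers, sigmas, weights))
```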
These networks create a local approximation of a nonlinear input-output function. The local approximation, as opposed to the global approximation performed by the MLP-ANN, may require a higher number of neurons for the same degree of accuracy, but an RBF-ANN can be designed in less time and its performance is higher when a large number of training vectors is available.
Besides, the use of radial basis activation functions requires a careful choice of the number of hidden neurons and an adequate size of the training set, in order to cover the whole input space while overlapping in just the right way, especially when good generalization is needed.
2.2 Learning Processes
The learning process (also referred to as network training)
requires learning and test data sets to adjust the ANN
internal parameters. This adjustment can be established by
the application of different learning algorithms.
2.2.1 Redundancy of Input Learning Data. To obtain the weights and biases of an ANN, a set of values including the input and output values is required. Experience has shown that a high degree of redundancy in the learning data usually has an adverse influence on the results of ANN modeling. All that is needed to successfully train an ANN is an adequate set of data representative of the information that is important to solve the problem. If inadequate data are used, correlations become difficult to find and the training time may become excessive. This often happens with backpropagation algorithms (for MLP-ANNs) when they use an excessive number of hidden layer neurons: the networks train well but test poorly, due to the memorization of the individual training set elements.
2.2.2 MLP-ANN Learning Process. This type of ANN uses a supervised learning mode, during which the weights and biases of the neurons are adjusted based on a given training set of pairs [X(ti), T(ti)], where X(ti) represents an instance of the input vector and T(ti) is the corresponding target vector for the ANN output (Y(ti)). The learning rule calculates the updated values of the neuron weights and biases based on the difference between the target and the ANN output. Backpropagation, also known as the generalized delta rule, is the most popular algorithm used for training purposes. In this case, weights and biases are adjusted based on the error derivative vector back-propagated through the network.


Other frequently used MLP-ANN learning algorithms are
backpropagation with variable learning rate, backpropagation with momentum and Levenberg-Marquardt.
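The following sketch implements plain gradient-descent backpropagation (the generalized delta rule) for a one-hidden-layer MLP in Python with NumPy; the architecture, learning rate and training data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, T, nh=6, lr=0.1, epochs=5000):
    """Plain gradient-descent backpropagation (generalized delta rule)
    for a 1-hidden-layer MLP with tanh hidden units and a linear output."""
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.5, (nh, n_in)); b1 = np.zeros(nh)
    W2 = rng.normal(0, 0.5, (1, nh));    b2 = np.zeros(1)
    for _ in range(epochs):
        # Forward pass
        H = np.tanh(X @ W1.T + b1)            # hidden activations
        Y = H @ W2.T + b2                     # network output
        E = Y - T                             # output error
        # Backward pass: propagate the error derivatives through the layers
        dW2 = E.T @ H / len(X); db2 = E.mean(axis=0)
        dH = (E @ W2) * (1.0 - H ** 2)        # tanh derivative
        dW1 = dH.T @ X / len(X); db1 = dH.mean(axis=0)
        # Delta rule: move weights and biases against the error gradient
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

# Example: learn y = sin(x) on [0, pi] from 20 training pairs [X(ti), T(ti)]
X = np.linspace(0, np.pi, 20).reshape(-1, 1)
T = np.sin(X)
W1, b1, W2, b2 = train_mlp(X, T)
Y = np.tanh(X @ W1.T + b1) @ W2.T + b2
print("training MSE:", np.mean((Y - T) ** 2))
```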
The evaluation of network performance must take into
consideration not only the degree of approximation
obtained for the training set (generally measured by the
mean square error, MSE), but also the fit obtained for a
different set (validation or test set). The test set range lies inside the training set range when the ANN generalization capabilities are evaluated. When the extrapolation capabilities of an ANN are targeted, the test set must also include values outside the training set.
2.2.3 RBF-ANN Learning Process. The training of RBF-ANNs differs substantially from the training used in MLP-ANNs. It consists of two separate phases. During the first phase, the parameters of the radial basis functions, centers and widths, are set using an unsupervised training mode until their values stabilize. In a second phase, the weights of the connections between hidden and output neurons are established using a supervised training mode that minimizes the errors between the ANN outputs, Yi, and the corresponding targets, Ti, for a given set of input training vectors, Xi.
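A possible sketch of this two-phase procedure, assuming a simple k-means loop for the unsupervised phase and a least-squares solve for the supervised phase (the specific center and width heuristics below are our assumptions, not prescribed by the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_rbf(X, T, nh=10, iters=50):
    # Phase 1 (unsupervised): place the centers with a simple k-means loop
    C = X[rng.choice(len(X), nh, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        for k in range(nh):
            if np.any(nearest == k):
                C[k] = X[nearest == k].mean(axis=0)
    # Heuristic shared width derived from the maximum inter-center distance
    dmax = np.max(np.linalg.norm(C[:, None, :] - C[None, :, :], axis=2))
    sigma = dmax / np.sqrt(2 * nh)
    # Phase 2 (supervised): solve the linear output weights by least squares
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) ** 2
                 / (2 * sigma ** 2))
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return C, sigma, W

# Example: fit y = sin(x) from 30 training vectors Xi with targets Ti
X = np.linspace(0, np.pi, 30).reshape(-1, 1)
T = np.sin(X).ravel()
C, sigma, W = train_rbf(X, T)
Phi = np.exp(-np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) ** 2
             / (2 * sigma ** 2))
print("training MSE:", np.mean((Phi @ W - T) ** 2))
```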
In AMSs, it is particularly important to establish the stop
condition for the training process that leads to the best
trade-off between the performance and the complexity of
an RBF-ANN. Several methods on this topic have been
reported [13] with successful results.
3 Applications of ANNs in AMSs

3.1 Linearisation and Compensation
One of the most powerful uses of ANNs is function approximation. The advantages of neural networks as an approximation tool for single and multivariable functions are: (a) the capability to operate based on a multivariate, intrinsically noisy or error-prone, reduced training data set; (b) the potential to conveniently model nonlinear characteristics; (c) lower approximation errors than other classical methods, such as polynomial interpolation; (d) good generalization and extrapolation capabilities.
The main architectures applied in the interpolation or linearisation of AMSs elements (e.g. sensors, conditioning circuits) are the MLP-ANN [14-15] and the RBF-ANN [16].
3.1.1 Linearisation. Referring to MLP-ANNs and RBF-ANNs as solutions for the linearisation of single-variable AMS transfer characteristics, the following aspects must be considered: (a) an RBF-ANN has a single hidden layer, whereas an MLP-ANN may have one or more hidden layers; (b) an MLP-ANN constructs global approximations to the nonlinear input-output mapping, while an RBF-ANN, using exponentially decaying localized nonlinearities (e.g. Gaussian functions), constructs local approximations.
Regarding the number of MLP-ANN hidden layers for the particular case of non-linear AMSs characteristic modeling, it must be underlined that one hidden layer represents the optimal solution for a large number of applications. Several simulation results for the particular case of a temperature sensor (ON400 thermistor) are presented in Table 1. In the table, nh_layers represents the number of MLP-ANN hidden layers, nh represents the total number of hidden neurons, L1 and L2 are the numbers of neurons in the first and second hidden layers, fop is the number of floating point operations in the training phase, MSE represents the mean square error associated with the training phase, and er is the approximation error associated with the validation phase (test set).
Table 1: MLP-ANN interpolation results for an ON400 temperature sensor.

nh_layers   nh   L1   L2   fop       MSE       er [%]
2           6    5    1    3.18E+6   2.05E-6   3.33
2           6    3    3    7.42E+5   8.93E-7   2.98
2           6    1    5    3.54E+6   2.07E-6   2.56
1           6    6    0    3.97E+5   1.75E-6   1.05
The results obtained for the non-linear temperature sensor in question show that the multiple-hidden-layer MLP-ANN is not the best solution, since its interpolation errors are higher than those of a single-hidden-layer ANN.
For the same number of layers (an input layer, a hidden layer and an output layer), the MLP-ANN and the RBF-ANN are characterized by different levels of complexity, expressed by the number of hidden neurons and the neuron activation functions.
Referring to the number of hidden neurons nh required for the same goal (e.g. er ≤ 1%), the MLP-ANN requires fewer hidden neurons. An example of this behavior for the particular case of the PN(dN) characteristic associated with the bifurcated fiber bundle displacement sensor [17] is synthesized in Table 2. The PN values represent the normalized received power and dN the displacement of the reflective surface [18].
Table 2: MLP-ANN and RBF-ANN performances for a bifurcated fiber bundle displacement sensor.

ANN   nh   fop       er [%]
MLP   7    2.42E+7   1.04
RBF   25   3.5E+4    7.83
Analyzing the results, one can conclude that, for the same training stop condition (SSE ≤ 1E-4) and a learning set that includes 25 pairs of (PN, dN) values, the obtained MLP-ANN is less complex and better performing than the RBF-ANN. However, the RBF-ANN has lower computational requirements for training purposes.
3.1.2 Compensation. In compensation applications, the number of ANN inputs is greater than one. One of the inputs is related to the main acquired value that characterizes the process, for example the voltage delivered by a pressure transducer [19], and the other inputs can be associated with quantities of influence (disturbance factors), such as temperature. The network is then trained to obtain a temperature-compensated value of the pressure. Similar successful works have been reported in this area [5-6][15].
Referring to ANN architectures, MLP-ANNs prove to be
a good solution for multivariable modeling with
applications in AMSs error compensation [5-6].
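A toy sketch of such a compensation scheme, using a synthetic pressure transducer with an invented temperature drift model (coefficients are illustrative only) and scikit-learn's MLPRegressor as a stand-in trainer:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic transducer: the output voltage depends on the pressure but
# drifts with temperature (the model and coefficients are invented)
p = rng.uniform(0.0, 10.0, 500)                  # true pressure, bar
t = rng.uniform(0.0, 80.0, 500)                  # temperature, deg C
v = 0.5 * p * (1.0 + 0.002 * (t - 25.0)) + 0.01 * (t - 25.0)  # volts

# Two-input network: [voltage, temperature] -> compensated pressure
X = np.column_stack([v, t])
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, p)

# Reading at 70 degC vs. a naive inversion that ignores the temperature
v_test = 0.5 * 5.0 * (1.0 + 0.002 * 45.0) + 0.01 * 45.0
print("naive estimate:", v_test / 0.5)                    # biased by drift
print("ANN estimate:", net.predict([[v_test, 70.0]])[0])  # close to 5.0 bar
```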
3.2 Fault Detection and Diagnosis
The prompt detection of anomalous conditions of AMSs elements involves the implementation of fault detection and diagnosis routines. The use of neural networks represents an important solution in the fault detection area [20-21]. Several results concerning instrument fault detection and isolation (IFDI) in AMSs have been reported [22-23]. The considered AMSs are of the virtual type (based on data acquisition or GPIB instruments), with the signals acquired from the AMSs sensors applied at the ANN inputs. The information is processed by the ANN, which delivers outputs associated with undesired events. IFDI architectures including a set of ANNs (MLP-ANNs or RBF-ANNs), alternated with pre- and post-processing layers, are reported in [24].
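As an illustrative sketch only (the fault signatures, features and classifier settings below are invented, not taken from the cited IFDI works), a small MLP classifier can map features of the acquired sensor signals to fault classes:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic acquisition windows summarized by [mean, std, drift] features;
# classes: 0 = healthy, 1 = stuck sensor, 2 = excessive noise
def window(kind):
    s = np.full(100, 2.0) + 0.05 * rng.standard_normal(100)  # healthy signal
    if kind == 1:
        s[:] = s[0]                            # fault: stuck-at value
    elif kind == 2:
        s += 0.5 * rng.standard_normal(100)    # fault: noise burst
    return [s.mean(), s.std(), s[-1] - s[0]]

y = rng.integers(0, 3, 600)
X = np.array([window(k) for k in y])

# The MLP maps signal features to fault classes (the IFDI decision)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```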
Research trends in the area include the optimization of the ANNs applied in IFDI schemes using genetic algorithms (GA) [22] and the implementation of neuro-fuzzy networks [25].
4 ANN Implementation
The implementation of neural networks in AMSs includes two different alternatives [1]. The first alternative, more widely used, is software simulation in microprocessor-based systems (PC, DSP or microcontroller) [26]. The second alternative is hardware implementation, which includes analog and digital solutions [27-28].
4.1 Software Implementations
Software solutions for ANN implementation are also known as virtual networks, considering that the ANN elements are not physically mapped. This implies that different architectures with different internal parameters can be successively implemented using the same hardware support. The ANN parameters (weights and biases) can be obtained after an off-line or on-line learning procedure and sent to the processor (e.g. a digital signal processor, DSP) [16]. Figure 4 represents an ANN implementation based on a DSP with off-line or on-line ANN learning capabilities.
Programs implementing the ANN operational phase consist mainly of matrix products. For a PC-based AMS, matrix operations are easy to implement using different programming languages (e.g. C++, LabVIEW, Visual Basic).
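For instance, once the weights and biases have been obtained off-line, the whole operational phase reduces to a few matrix products, as in this minimal NumPy sketch (all weight values are placeholders):

```python
import numpy as np

# Weights and biases obtained off-line (values are placeholders); in a
# real AMS they would be downloaded to the PC, DSP or microcontroller
W1 = np.array([[1.2, -0.7], [0.3, 0.9], [-1.1, 0.4]])  # hidden layer, 3 x 2
b1 = np.array([0.1, -0.2, 0.0])
W2 = np.array([[0.5, -1.0, 0.8]])                      # output layer, 1 x 3
b2 = np.array([0.05])

def ann_operational_phase(x):
    # The operational phase is just two matrix products interleaved
    # with the element-wise activation function
    return W2 @ np.tanh(W1 @ x + b1) + b2

print(ann_operational_phase(np.array([0.4, -0.3])))
```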
The implementation of the ANN in a microcontroller or in a DSP requires the use of assembler or compiler tools, followed by code optimization procedures that take into account the practical limits of the support system in terms of memory and computational complexity.

[Figure 4: ANN implementation based on a DSP with off-line (--) or on-line ANN learning capabilities]
In terms of operational times, software-implemented
networks are characterized by a higher computational
time than hardware-implemented networks.
4.2 Hardware Implementations
In hardware implementations, the signals through the network are coded using an analog or a digital model. In the analog approach, a signal is represented by the magnitude of a current or a voltage difference. One of the advantages of analog hardware implementations of ANNs [29] is that they can be easily interfaced to the physical system without requiring A/D and D/A converters. Another advantage that analog implementations have over digital ones is that each weight can be coded by a single analog element, such as a resistor, and very simple circuit rules, like Kirchhoff's laws, can be used to carry out the addition of input signals. Although the analog hardware solution is attractive, current technology restricts the application of this type of implementation, especially in the AMSs domain. Regarding digital hardware implementations of ANNs, the most usual solutions are based on FPGAs [30] and on specialized microprocessors [31].
5 New Research Trends
One of the directions of future trends is based on the usage of GA to guide a backpropagation-based ANN in finding the optimal set of neural connections that enhance the training speed and reduce the MSE of a given testing set.
Genetic algorithms can also be used to guide the design of the ANN structure (number of inputs, type of activation function for each neuron) and the selection of learning algorithm parameters. The advantage of this approach, compared to classical optimization approaches, is that it allows the exploration of large regions of the design space that could otherwise be left unexplored.
Two types of ANN applications that we did not mention in this paper and that have a metrological side are the recovery of signals buried in noise and the classification of signals (pattern recognition). In the latter case, examples are abundant; refer, for instance, to [32] and the references it includes. For signals whose detection and classification require both time and frequency analysis, some authors have proposed and successfully used ANNs with a preprocessing wavelet block [33]. This is a trend that we expect to develop in the near future.
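A minimal sketch of the first idea, with a toy GA (truncation selection plus integer mutation, our own simplifications) searching for the number of hidden neurons that minimizes the MSE of a testing set; scikit-learn's MLPRegressor stands in for the backpropagation-based ANN:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Toy data set: noisy samples of a non-linear characteristic
X = np.linspace(0, np.pi, 60).reshape(-1, 1)
y = np.sin(X).ravel() + 0.02 * rng.standard_normal(60)
Xtr, ytr, Xte, yte = X[::2], y[::2], X[1::2], y[1::2]

def fitness(nh):
    # Fitness = MSE on the testing set after training with nh hidden neurons
    net = MLPRegressor(hidden_layer_sizes=(int(nh),), solver="lbfgs",
                       max_iter=3000, random_state=0).fit(Xtr, ytr)
    return mean_squared_error(yte, net.predict(Xte))

# Minimal GA: keep the fittest half of the population, mutate to refill it
pop = rng.integers(1, 40, size=8)                  # candidate hidden sizes
for generation in range(10):
    scores = np.array([fitness(nh) for nh in pop])
    best = pop[np.argsort(scores)][:4]             # truncation selection
    children = np.clip(best + rng.integers(-3, 4, 4), 1, 60)  # mutation
    pop = np.concatenate([best, children])
print("best hidden size:", pop[0], "test MSE:", fitness(pop[0]))
```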

6 Conclusion
Neural networks are useful tools for data processing and, consequently, they have been increasingly used in metrology, mainly in automated measuring systems.
Nowadays, and thanks to the work of many researchers, the selection of the type of network and its design can be performed more objectively, taking into consideration basically the type of application, the required performance and the implementation constraints.
7 Acknowledgements
This work was supported in part by the Portuguese Science and Technology Foundation, PRAXIS XXI program FCT/BPD/2203/99, and by the Project FCT PNAT/1999/EEI/15052. This support is gratefully acknowledged. We would also like to thank the Centro de Electrotecnia Teórica e Medidas Eléctricas, IST Lisboa, for their important technical support.
8 References
[1] R. Rojas, Neural Networks - A Systematic Introduction, Springer-Verlag, Berlin Heidelberg New York, 1996.
[2] S. Haykin, Neural Networks - A Comprehensive Foundation, Prentice Hall, 1999.
[3] J. Patra, G. Panda, R. Baliarsingh, "Artificial neural network-based nonlinearity estimation of pressure sensors", IEEE Trans. Instr. Meas., Vol. 43, pp. 874-881, Dec. 1994.
[4] V. Ferrari, A. Flammini, P. Maffezoni, D. Marioli, M. Sansoni, A. Taroni, "Application of neural algorithm based on Radial Basis Functions to sensor data processing", Proc. IMEKO World Congress, Vol. V, pp. 34-39, Tampere, Finland, 1997.
[5] J.M.D. Pereira, O. Postolache, P. Girão, M. Cretu, "Minimizing Temperature Drift Errors of Conditioning Circuits Using Artificial Neural Networks", IEEE Trans. Instr. Meas., Vol. 49, No. 5, pp. 1122-1127, Oct. 2000.
[6] O. Postolache, M. Pereira, P. Girão, M. Cretu, C. Fosalau, "Application of Neural Structures in Water Quality Measurements", Proc. IMEKO World Congress, Vol. IX, pp. 353-358, Wien, Austria, Sept. 2000.
[7] J. Patra, A. Kot, G. Panda, "An Intelligent Pressure Sensor Using Neural Networks", IEEE Trans. Instr. Meas., Vol. 49, No. 4, pp. 829-834, Aug. 2000.
[8] G. Betta, M. Dell'Isola, C. Liguori, A. Pietrosanto, "An artificial intelligence-based instrument with fault detection and isolation capability", Proc. VIII IMEKO TC-4, pp. 334-337, Budapest, Hungary, Sept. 1996.
[9] B. Koppen-Selinger, P.M. Frank, "Fault detection and isolation in technical processes with neural networks", Proc. 34th IEEE Conf. Decision and Control, pp. 2414-2419, New Orleans, USA, 1995.
[10] A. Bernieri, G. Betta, A. Pietrosanto, C. Sansone, "A Neural Network Approach to Instrument Fault Detection and Isolation", IEEE Trans. Instr. Meas., Vol. 44, pp. 747-750, June 1995.
[11] H. Demuth, M. Beale, Neural Network Toolbox for Use with MATLAB - User's Guide, The MathWorks Inc., Sept. 1993.
[12] O. Postolache, J.M. Dias Pereira, M. Cretu, P. Silva Girão, "An ANN Fault Detection Procedure Applied in Virtual Measurement Systems Case", Proc. IMTC/98, Vol. 1, pp. 257-260, St. Paul, Minnesota, USA, May 1998.
[13] C. Alippi, V. Piuri, F. Scotti, "A Methodology to Solve Performance/Complexity Trade-off in RBF Neural Networks", Proc. IEEE International Workshop on Virtual and Intelligent Measurement Systems, pp. 147-150, Annapolis, USA, 2000.
[14] S.W. Moore, J.W. Gardner, E. Hines, W. Gopel, U. Weimar, "A modified multilayer perceptron model for gas mixture analysis", Sens. Actuators B, Vol. 15-16, pp. 344-348, 1993.
[15] J.M. Dias Pereira, O. Postolache, P. Silva Girão, "A Temperature Compensated System for Magnetic Field Measurements Based on Artificial Neural Networks", IEEE Trans. Instr. Meas., Vol. 47, No. 2, pp. 494-498, April 1998.
[16] A. Flammini, D. Marioli, B. Pinelli, A. Taroni, "A DSP Implementation of a Simple Neural Network for Sensor Data Processing", Proc. IMEKO TC-4 Symp., pp. 585-590, Naples, Italy, 1998.
[17] J.A. Brandão Faria, O. Postolache, J.M. Dias Pereira, P. Silva Girão, "Automated Characterization of a Bifurcated Optical Fiber Bundle Displacement Sensor Taking into Account Reflector Tilting Perturbation Effects", Microwave and Optical Technology Letters, John Wiley & Sons Inc., Vol. 26, No. 4, pp. 242-247, Aug. 2000.
[18] J.M. Dias Pereira, O. Postolache, P. Silva Girão, J.A. Brandão Faria, M. Cretu, "An Optical Temperature Transducer Based on a Bimetallic Sensor", Proc. ISDDMI'98, Vol. 2, pp. 661-664, Naples, Sept. 1998.
[19] J. Patra, A. Kot, G. Panda, "An Intelligent Pressure Sensor Using Neural Networks", IEEE Trans. Instr. Meas., Vol. 49, No. 4, pp. 829-834, Aug. 2000.
[20] G. Betta, M. Dell'Isola, C. Liguori, A. Pietrosanto, "An artificial intelligence-based instrument with fault detection and isolation capability", Proc. VIII IMEKO TC-4, pp. 334-337, Budapest, Hungary, Sept. 1996.

[21] A. Bernieri, G. Betta, A. Pietrosanto, C. Sansone, "A neural network approach to instrument fault detection and isolation", IEEE Trans. Instr. Meas., Vol. 44, pp. 747-750, June 1995.
[22] O. Postolache, J. Pereira, M. Cretu, P. Girão, "An ANN Fault Detection Procedure Applied in Virtual Measurement System Case", Proc. IMTC/98, Vol. I, pp. 257-260, St. Paul, Minnesota, May 1998.
[23] O. Postolache, P. Girão, H. Ramos, M. Pereira, "A Temperature Fault Detection as an Artificial Neural Network", Proc. IEEE Melecon'98, Vol. I, pp. 678-681, Tel-Aviv, Israel, May 1998.
[24] G. Betta, A. Pietrosanto, "Instrument Fault Detection and Isolation: State of the Art and New Research Trends", IEEE Trans. Instr. Meas., Vol. 49, No. 1, pp. 100-107, Feb. 2000.
[25] C. von Altrock, Fuzzy Logic and Neurofuzzy Applications, Prentice Hall, 1995.
[26] R. Eckmiller, G. Hartmann, G. Hauske, Parallel Processing in Neural Systems and Computers, Elsevier Science Publishers, 1990.
[27] C. Mead, Analog VLSI and Neural Systems, Addison-Wesley, 1989.
[28] J.B. Theeten, M. Duranton, N. Mauduit, J.A. Sirat, "The LNeuro Chip: A Digital VLSI with an On-Chip Learning Mechanism", Proc. International Conference on Neural Networks, Vol. 1, pp. 593-596, Paris, July 1990.
[29] G. Cauwenberghs, "An Analog VLSI Neural Network Learning a Continuous-Time Trajectory", IEEE Trans. on Neural Networks, Vol. 7, No. 2, pp. 346-361, July 1996.
[30] M. Costa, D. Palmisano, E. Pasero, L. Bovio Ferassa, A. Di Lello, "A High Performance and High Versatility Reconfigurable System for Fast Prototyping of Digital Neural Networks Based on FPGA", Proc. 10th Italian Workshop on Neural Nets (WIRN VIETRI '98), pp. 297-300, Salerno, Italy, May 1998.

[31] H.T. Kung, C.E. Leiserson, "Systolic Arrays (for VLSI)", Sparse Matrix Proceedings, Academic Press, 1979.
[32] S. Marco, A. Ortega, A. Pardo, J. Samitier, "Gas Identification with Tin Oxide Sensor Array and Self-Organizing Maps: Adaptive Correction of Sensor Drifts", IEEE Trans. Instr. Meas., Vol. 47, No. 1, pp. 316-320, Feb. 1998.
[33] L. Angrisani, P. Daponte, M. D'Apuzzo, "A Method Based on Wavelet Networks for the Detection and Classification of Transients", Proc. IEEE Instr. Meas. Technology Conference, pp. 903-908, St. Paul, Minnesota, USA, May 1998.

