
EURASIP Journal on Advances in Signal Processing

Signal Processing in Advanced Nondestructive Materials Inspection

Guest Editors: João Manuel R. S. Tavares and João Marcos A. Rebello

Copyright © 2010 Hindawi Publishing Corporation. All rights reserved.


This is a special issue published in volume 2010 of EURASIP Journal on Advances in Signal Processing. All articles are open access
articles distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.

Editor-in-Chief
Phillip Regalia, Institut National des Télécommunications, France

Associate Editors
Adel M. Alimi, Tunisia
Yaşar Becerikli, Turkey
Kostas Berberidis, Greece
Enrico Capobianco, Italy
A. Enis Çetin, Turkey
Jonathon Chambers, UK
Mei-Juan Chen, Taiwan
Liang-Gee Chen, Taiwan
Satya Dharanipragada, USA
Kutluyıl Doğançay, Australia
Florent Dupont, France
Frank Ehlers, Italy
Sharon Gannot, Israel
Samanwoy Ghosh-Dastidar, USA
Norbert Goertz, Austria
M. Greco, Italy
Irene Y. H. Gu, Sweden
Fredrik Gustafsson, Sweden
Sangjin Hong, USA
Jiri Jan, Czech Republic
Magnus Jansson, Sweden
S. Jayaweera, USA
Søren Holdt Jensen, Denmark

Mark Kahrs, USA


Moon Gi Kang, South Korea
Walter Kellermann, Germany
Lisimachos P. Kondi, Greece
A. Kot, Singapore
Ercan E. Kuruoğlu, Italy
Tan Lee, China
Geert Leus, The Netherlands
T.-H. Li, USA
Husheng Li, USA
Mark Liao, Taiwan
Y.-P. Lin, Taiwan
Shoji Makino, Japan
Stephen Marshall, UK
C. Mecklenbräuker, Austria
Gloria Menegaz, Italy
Ricardo Merched, Brazil
Marc Moonen, Belgium
Christophoros Nikou, Greece
Sven Nordholm, Australia
Patrick Oonincx, The Netherlands
Douglas O'Shaughnessy, Canada
B. Ottersten, Sweden

Jacques Palicot, France


Ana Pérez-Neira, Spain
Wilfried R. Philips, Belgium
Aggelos Pikrakis, Greece
Ioannis Psaromiligkos, Canada
Athanasios Rontogiannis, Greece
Gregor Rozinaj, Slovakia
M. Rupp, Austria
William Sandham, UK
B. Sankur, Turkey
Erchin Serpedin, USA
Ling Shao, UK
Dirk Slock, France
Yap-Peng Tan, Singapore
J. Tavares, Portugal
George S. Tombras, Greece
Dimitrios Tzovaras, Greece
Bernhard Wess, Austria
Jar-Ferr Yang, Taiwan
Azzedine Zerguine, Saudi Arabia
Abdelhak M. Zoubir, Germany

Contents
Signal Processing in Advanced Nondestructive Materials Inspection, João Manuel R. S. Tavares and
João Marcos A. Rebello
Volume 2010, Article ID 954623, 2 pages
Geometrical Feature Extraction from Ultrasonic Time Frequency Responses: An Application to
Nondestructive Testing of Materials, Soledad Gómez, Ramón Miralles, Valery Naranjo, and Ignacio Bosch
Volume 2010, Article ID 706732, 10 pages
Cyclic Biaxial Stress Measurement Method Using the Grain Growth Direction in Electrodeposited
Copper Foil, Yuichi Ono, Cheng Li, and Daisuke Hino
Volume 2010, Article ID 928216, 8 pages
Automatic Determination of Fiber-Length Distribution in Composite Material Using 3D CT Data,
Matthias Teßmann, Stephan Mohr, Svitlana Gayetskyy, Ulf Haßler, Randolf Hanke, and Günther Greiner
Volume 2010, Article ID 545030, 9 pages
Digital Radiography Using Digital Detector Arrays Fulfills Critical Applications for Offshore Pipelines,
Edson Vasques Moreira, José Maurício Barbosa Rabello, Marcelo dos Santos Pereira, Ricardo Tadeu Lopes,
and Uwe Zscherpel
Volume 2010, Article ID 894643, 7 pages
Applying a Novel Cost Function to Hopfield Neural Network for Defects Boundaries Detection of Wood
Image, Dawei Qi, Peng Zhang, Xuefei Zhang, Xuejing Jin, and Haijun Wu
Volume 2010, Article ID 427878, 8 pages
Attenuation Analysis of Lamb Waves Using the Chirplet Transform, Florian Kerber, Helge Sprenger,
Marc Niethammer, Kritsakorn Luangvilai, and Laurence J. Jacobs
Volume 2010, Article ID 375171, 6 pages
Flexible Riser Monitoring Using Hybrid Magnetic/Optical Strain Gage Techniques through RLS Adaptive
Filtering, Daniel Pipa, Sergio Morikawa, Gustavo Pires, Claudio Camerini, and João Márcio Santos
Volume 2010, Article ID 176203, 14 pages
Analysis of Approximations and Aperture Distortion for 3D Migration of Bistatic Radar Data with the
Two-Step Approach, Luigi Zanzi and Maurizio Lualdi
Volume 2010, Article ID 192378, 9 pages
ICA Mixtures Applied to Ultrasonic Nondestructive Classification of Archaeological Ceramics,
Addisson Salazar and Luis Vergara
Volume 2010, Article ID 125201, 11 pages
On the Evaluation of Texture and Color Features for Nondestructive Corrosion Detection,
Fátima N. S. Medeiros, Geraldo L. B. Ramalho, Mariana P. Bento, and Luiz C. L. Medeiros
Volume 2010, Article ID 817473, 7 pages
Fluctuation Analyses for Pattern Classification in Nondestructive Materials Inspection, A. P. Vieira,
E. P. de Moura, and L. L. Gonçalves
Volume 2010, Article ID 262869, 12 pages

A Study of Concrete Hydration and Dielectric Relaxation Mechanism Using Ground Penetrating Radar
and Short-Time Fourier Transform, W. L. Lai, T. Kind, and H. Wiggenhauser
Volume 2010, Article ID 317216, 14 pages
Strain and Cracking Surveillance in Engineered Cementitious Composites by Piezoresistive Properties,
Jia Huan Yu and Tsung Chan Hou
Volume 2010, Article ID 402597, 6 pages
Heuristic Enhancement of Magneto-Optical Images for NDE, Matteo Cacciola, Giuseppe Megali,
Diego Pellicanò, Salvatore Calcagno, Mario Versaci, and Francesco Carlo Morabito
Volume 2010, Article ID 485695, 11 pages
A Machine Learning Approach for Locating Acoustic Emission, N. F. Ince, Chu-Shu Kao, M. Kaveh,
A. Tewfik, and J. F. Labuz
Volume 2010, Article ID 895486, 14 pages

Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 954623, 2 pages
doi:10.1155/2010/954623

Editorial
Signal Processing in Advanced Nondestructive Materials
Inspection
João Manuel R. S. Tavares1 and João Marcos A. Rebello2

1 Department of Mechanical Engineering, Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
2 Department of Metallurgy and Materials, Faculty of Engineering, Federal University of Rio de Janeiro, Technological Center, Room F-210, Ilha do Fundão, RJ, Brazil

Correspondence should be addressed to João Manuel R. S. Tavares, tavares@fe.up.pt


Received 31 October 2010; Accepted 31 October 2010
Copyright © 2010 J. M. R. S. Tavares and J. M. A. Rebello. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.

Over the last decades, a large number of new and improved nondestructive testing (NDT) techniques have been successfully developed. For example, NDT techniques have proven to be especially effective and reliable in crack detection and sizing, driven by the more stringent design requirements of current structures and equipment. Nuclear power plants, offshore platforms for deep-water oil extraction, and aircraft and aerospace engines are some prominent examples to which NDT inspection techniques can be applied, aiming to assure that defect detection and characterization is successfully achieved.
One of the major problems faced by NDT techniques is the difficulty of efficient and reliable processing and analysis of the acquired signals and images. Examples are easily found in situations in which the defects either are so closely spaced that it becomes extremely difficult to separate them or are located in environments that introduce corrupted data into the acquired signals.
Digital signal processing concepts have been successfully applied in NDT for detecting, conditioning, and automatically classifying a large variety of defects, as well as in the characterization of materials. Digital signal processing allows, for example, the improvement of the time resolution and signal-to-noise ratio, and also exposes details that could hardly be ascertained from the raw signals. Additionally, pattern recognition strategies are very often employed in NDT for automatically classifying the findings.
Furthermore, NDT characterization has been capable of detecting alterations in material properties. In this case, some physical properties, such as frequency-dependent sound velocity and attenuation, are used. However, the practical difficulty of extracting the needed information also demands the use of proficient signal processing methods.
This issue of the EURASIP Journal on Advances in Signal Processing constitutes the first special issue related to signal processing in advanced nondestructive materials inspection, and covers numerous outstanding topics, such as pattern recognition using fractal analyses applied to several specific NDT techniques. In the same vein, a classifier based on independent component analysis mixture modelling of ultrasonic signals was used in the characterization of materials of archaeological interest, a modified Hopfield neural network with a novel cost function was presented for detecting the boundaries of wood defects, and a neural network-based machine learning approach was employed for locating acoustic emission signals.
Additionally, the characterization of materials received special attention in several papers: the short-time Fourier transform of ground penetrating radar waveforms was used to determine concrete hydration properties; attenuation analysis of Lamb waves in aluminum plates used the Chirplet transform; the piezoresistive properties of engineered cementitious composites were used as their own sensors to quantify their resistivity-strain relationship; and a geometrical estimator of the time-frequency ultrasonic response of some dispersive materials was adopted. Ground penetrating radar was also the focus of a paper in which a fast algorithm for 3D migration was developed, aiming to investigate its application in a medium that is vertically heterogeneous, as is the case of layered structures such as walls, floors, and pavements.
With respect to image analysis, computed tomography was used to determine the fiber-length distribution in fiber-reinforced polymer components, and corrosion in carbon steel storage tanks was evaluated using Fisher linear discriminant analysis of visual digital images. Furthermore, the grain growth direction in electrodeposited copper foil was measured by image processing and used to determine cyclic biaxial stress.
Moreover, defect detection and sizing in pipeline welds were performed by digital radiography, and cracks in rivets were studied by the magneto-optic technique. A novel inspection method, which employs hybrid magnetic/optical strain-gage techniques, was used for the automatic monitoring of the armor layers of flexible risers in the oil and gas industry.
For this special issue, 22 works from 10 countries (Algeria, Brazil, China, France, Germany, Italy, Japan, Slovenia, Spain, and the USA) were submitted. Of these, 15 works were accepted for publication after being thoroughly reviewed by international experts on NDT.

Acknowledgments

The guest editors would like to express their deep gratitude to the Editor-in-Chief and Associate Editors of the EURASIP Journal on Advances in Signal Processing for this opportunity, to all authors who shared their excellent works with us, and to all members of the Scientific Committee of this special issue, who helped us in the review process.
João Manuel R. S. Tavares
João Marcos A. Rebello

Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 706732, 10 pages
doi:10.1155/2010/706732

Research Article
Geometrical Feature Extraction from Ultrasonic Time Frequency
Responses: An Application to Nondestructive Testing of Materials
Soledad Gómez,1 Ramón Miralles (EURASIP Member),1 Valery Naranjo,2 and Ignacio Bosch1

1 Departamento de Comunicaciones, Instituto de Telecomunicaciones y Aplicaciones Multimedia (iTEAM), Universidad Politécnica de Valencia, Camino de Vera S/N, 46022 Valencia, Spain
2 Instituto de Bioingeniería y Tecnología Orientada al Ser Humano, Universidad Politécnica de Valencia, Camino de Vera S/N, 46022 Valencia, Spain

Correspondence should be addressed to Ramón Miralles, rmiralle@dcom.upv.es
Received 30 December 2009; Revised 1 March 2010; Accepted 17 March 2010
Academic Editor: João Manuel R. S. Tavares

Copyright © 2010 Soledad Gómez et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Signal processing is an essential tool in nondestructive material characterization. Pulse-echo inspection with ultrasonic energy provides signals (A-scans) that can be processed in order to obtain parameters related to the physical properties of the inspected materials. Conventional techniques are based on the use of a short-term frequency analysis of the A-scan, obtaining a time-frequency response (TFR), to isolate the evolution of the different frequency-dependent parameters. The application of geometrical estimators to TFRs provides an innovative way to complement conventional techniques based on the one-dimensional evolution of an A-scan-extracted parameter (central or centroid frequency, bandwidth, etc.). This technique also provides an alternative method of obtaining estimators of similar meaning and lower variance. A comparative study of conventional versus newly proposed techniques is presented in this paper. The comparative study shows that working with binarized TFRs and the use of shape descriptors provide estimates with lower bias and variance than conventional techniques. Real scattering materials, with different scatterer sizes, were measured in order to demonstrate the usefulness of the proposed estimators to distinguish among scattering soft tissues. Superior results were obtained with the proposed estimators on real measurements when classifying according to mean scatterer size.

1. Introduction

Signal processing is an essential tool in nondestructive material characterization. Modern technologies can benefit from more sophisticated algorithms that allow materials to be classified and characterized precisely. One of the techniques that takes advantage of all these advances is nondestructive testing (NDT) using ultrasound. Thanks to the advances in signal processing, it is now easy to find applications of NDT using ultrasonics in materials that, some years ago, were very hard to find [1–3].
The Signal Processing Group (GTS) of the Universidad Politécnica de Valencia published a technique [2] that makes it possible to characterize dispersive materials by means of pulse-echo inspection with ultrasonic energy. The aforementioned technique was based on extracting time-of-flight-dependent parameters from the ultrasonic A-scan, and involves assuming a Linear Time Varying (LTV) model for the ultrasonic inspection of the dispersive material. The extracted parameters were affected by the physical properties of the material, so automatic classifiers could be used.
In this paper we introduce a novel technique to extract parameters, based on the shape analysis of time-frequency responses, that complements or, in some situations, improves the performance of the previously published methods.
This work is structured as follows. In Section 2 we describe a simple model that demonstrates how the physical properties of scattering materials affect the time-frequency representation (TFR) of the A-scan. In Section 3 we briefly review the traditional parameter estimators presented in [2]. In Section 4 a new technique based on computing geometrical descriptors from the TFR is introduced. A comparative study of the traditional versus the newly proposed technique is presented in Section 5; an example of application to the characterization of mean scatterer size in soft dispersive materials is also shown in that section. Finally, in Section 6, conclusions are presented.

2. Ultrasonic Pulse Modeling in the Frequency Domain

The use of Gaussian envelope pulses is very common for the modeling of ultrasonic echoes [4, 5]. Ping He [4] demonstrates in his study that a Gaussian pulse propagating in soft tissues can also be modeled as a Gaussian pulse with parameters changing as the pulse propagates deep within the tissue. Let us assume that the ultrasonic pulse can be modeled in the frequency domain as

S(\omega) = A \, e^{-(\omega - \omega_c)^2 / B^2},   (1)

with A, \omega_c, and B being the pulse amplitude, transducer central pulsation, and transducer bandwidth. When the ultrasonic pulse propagates deep within the material (z-axis), it was demonstrated in [4] that, for an attenuation law of the form a(\omega) = e^{-\alpha_0 \omega^y z} (with y = 1 or y = 2), the previous expression can be reformulated as

S(\omega, z) = A \, e^{-(\omega - \omega_c)^2 / B^2} \, e^{-\alpha_0 \omega^y z} = A'(z) \, e^{-(\omega - \omega_c'(z))^2 / B'^2(z)}.   (2)

The new parameters A'(z), \omega_c'(z), and B'(z) are the new amplitude, central pulsation, and bandwidth as the ultrasonic pulse travels deep inside the material. These parameters provide new information about the tissue-dependent attenuation parameters (\alpha_0 and y). Two of these parameters are shown in (3) and (4) for y = 2; for a complete derivation see [4]:

\omega_c'(z) = \frac{\omega_c}{1 + \alpha_0 z B^2},   (3)

B'(z) = \frac{B}{\sqrt{1 + \alpha_0 z B^2}}.   (4)

This idea can be extended to most of the attenuation phenomena that an ultrasonic pulse suffers as it travels through a material (absorption, scattering, etc.). If we take into account that most of these phenomena can be modeled by power laws [6], their individual effects can be accumulated and the final envelope of the pulse can still be modeled with a Gaussian expression. We are going to illustrate this idea with the example of attenuation due to stochastic scattering. Stochastic scattering is frequently modeled by

a_s(\omega) = e^{-S_0 D \omega^2 z},   (5)

where S_0 is the attenuation due to stochastic scattering and D is the mean scatterer size. Without loss of generality, and in order not to obtain long equations, we will assume that stochastic scattering is the only source of attenuation of the ultrasonic pulse. Under this assumption, and similarly to what happens in (2), the effect of the stochastic scattering can be obtained using the previous equations for y = 2:

S(\omega, z) = A \, e^{-(\omega - \omega_c)^2 / B^2} \, e^{-S_0 D \omega^2 z} = A'(z) \, e^{-(\omega - \omega_c'(z))^2 / B'^2(z)}.   (6)

The new parameters A'(z), \omega_c'(z), and B'(z) that take into account attenuation due to stochastic scattering can be derived from equations (7), (8), and (9):

\omega_c'(z) = \frac{\omega_c}{1 + S_0 D z B^2},   (7)

B'(z) = \frac{B}{\sqrt{1 + S_0 D z B^2}},   (8)

A'(z) = A \, e^{-(\omega_c^2 / B^2)\left(S_0 D z B^2 / (1 + S_0 D z B^2)\right)}.   (9)
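For illustration, the depth laws (7)-(9) and the Gaussian TFR model (6) can be evaluated numerically. The following Python sketch is our own illustration, not code from the paper; in particular, the scattering constant S0 is an arbitrary value chosen so that the attenuation is visible over the simulated depth range.

```python
import numpy as np

# Pulse parameters mirroring the Figure 1 simulation; S0 is an assumed
# attenuation constant chosen only so the effect is visible.
fc = 500e3                   # central frequency (Hz)
wc = 2 * np.pi * fc          # central pulsation (rad/s)
B = 0.75 * wc                # bandwidth from the 75% fractional bandwidth
S0 = 1e-7                    # stochastic scattering constant (assumed)
D = 1.5e-3                   # mean scatterer size (m)

z = np.linspace(0, 1.5e-3, 200)      # depth axis (m)
w = np.linspace(0, 2.5 * wc, 400)    # pulsation axis (rad/s)

# Depth-dependent parameters, equations (7)-(9), with A = 1
den = 1.0 + S0 * D * z * B**2
wc_z = wc / den                               # central pulsation, eq. (7)
B_z = B / np.sqrt(den)                        # bandwidth, eq. (8)
A_z = np.exp(-(wc**2 / B**2) * (S0 * D * z * B**2) / den)  # amplitude, eq. (9)

# Gaussian TFR model, equation (6): one Gaussian spectrum per depth
W, Z = np.meshgrid(w, z, indexing="ij")
TFR = A_z[None, :] * np.exp(-((W - wc_z[None, :]) ** 2) / (B_z[None, :] ** 2))
print(TFR.shape)   # (n_freq, n_depth)
```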

Equations (7) and (8) predict a downshift of the central frequency and a pulse narrowing due to stochastic scattering attenuation. Similarly, and as expected, (9) predicts faster attenuation in depth for materials with larger mean grain size. All these behaviors affect the shape of the A-scan TFR and can be used to design algorithms for material classification based on the amplitude, frequency, or bandwidth profiles. This shape dependence can also be used for mean scatterer size estimation. The TFR of a register obtained in NDT of scattering materials can be modeled using (6); the parameters A'(z), ω_c'(z), and B'(z) will affect the shape of the TFR.
Figure 1 shows the aspect of the TFR as described by (6). This figure was simulated for an fc = 500 kHz central frequency ultrasonic pulse with initial fractional bandwidth B = 75% that propagates according to the law defined in (6) up to z = 1.5 × 10⁻³ m, for three different mean scatterer sizes: 0.5, 1, and 1.5 mm. Figure 1 also shows the superimposed parameters ω_c'(z) (dashed line) and B'(z) (solid line).

Figure 1: Simulations of the proposed model for stochastic scattering attenuation (pulse central frequency and bandwidth (Hz) versus depth (m)). Simulation parameters of the ultrasonic pulse were central frequency fc = 500 kHz, fractional bandwidth B = 75%, and mean scatterer size D = {0.5, 1, 1.5} mm. (a) D = 0.5 mm, (b) D = 1 mm, and (c) D = 1.5 mm.

3. Conventional Parameter Extraction for Material Characterization: The Ultrasonic Signature (US) Concept

As already mentioned, information about the material is included in the A-scan. Among other possibilities for extracting information about the materials [5, 7], the analysis of the variant impulse response (or, equivalently, the variant frequency response) of the LTV system is a feasible alternative. The time-variant characteristic of the model leads naturally to a nonstationary analysis of the recorded signal.
This technique proposes the use of a short-term frequency analysis of the signal to isolate the evolution of the different frequency components. This can be done by means of an explicit implementation of a bank of filters or, more usually, by means of some type of linear or nonlinear time-frequency transformation, including non-constant bandwidth analysis such as the wavelet transform. From the time-frequency signal we obtain the US, which is a one-dimensional signal hopefully encompassing the relevant

information needed for every particular purpose. The US [2] is obtained by computing, for every time instant along a finite discrete time interval, a spectral parameter; some possible alternatives are:

(i) the centroid frequency (normalized first moment),

\omega_c(z) = \frac{\int_{\omega_1}^{\omega_2} \omega \, |S(\omega, z)| \, d\omega}{\int_{\omega_1}^{\omega_2} |S(\omega, z)| \, d\omega},   (10)

where |S(\omega, z)| is the magnitude of the time-frequency transformation and [\omega_1, \omega_2] defines the integration band;

(ii) the fractional bandwidth,

\mathrm{BW}_{\%}(z) = \frac{\mathrm{BW}_{-3\,\mathrm{dB}}(z)}{\omega_c(z)} \cdot 100\%;   (11)

(iii) the central pulsation,

\omega_{\max}(z) = \arg\max_{\omega} |S(\omega, z)|;   (12)

(iv) many other depth-dependent (z) parameters that can be obtained from |S(\omega, z)| (higher-order statistics, median, etc.).
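As a sketch of how these US profiles can be computed from a discretized TFR (our own illustration, assuming the TFR is stored as a 2-D array over a known pulsation axis; the −3 dB width normalized by the centroid pulsation is one reading of (11)):

```python
import numpy as np

def ultrasonic_signature(tfr, w):
    """US profiles of (10)-(12) from a TFR.

    tfr : 2-D array |S(w, z)|, shape (n_freq, n_depth)
    w   : 1-D pulsation axis (rad/s), length n_freq
    """
    mag = np.abs(tfr)
    # (10) centroid: normalized first moment of each depth column
    centroid = (w[:, None] * mag).sum(axis=0) / mag.sum(axis=0)
    # (12) central pulsation: location of the spectral maximum
    w_max = w[mag.argmax(axis=0)]
    # (11) fractional bandwidth from the -3 dB width around the maximum
    bw = np.empty(mag.shape[1])
    for k in range(mag.shape[1]):
        above = w[mag[:, k] >= mag[:, k].max() / np.sqrt(2)]  # -3 dB level
        bw[k] = above.max() - above.min()
    bw_percent = 100.0 * bw / centroid
    return centroid, bw_percent, w_max
```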

4. An Alternative Technique to Conventional Parameter Extraction for Material Characterization

2-D shape analysis can be applied to the TFR of ultrasonic A-scans for material characterization. The motivation for this idea is the observation that the mean scatterer diameter, D, affects the TFR shape (see Figure 1). It is expected that, using shape-related parameters applied to the TFR, we will be able to classify materials with different scatterer sizes. Additionally, if the TFR is binarized prior to the extraction of the geometrical parameters, the obtained parameters should be less affected by noise.
The application of geometrical parameters to TFR diagrams provides an innovative way to complement classical techniques based on the one-dimensional US, and an alternative way of obtaining estimators of similar meaning and lower variance (as we will show in Section 5).
After computing the TFR of the ultrasonic A-scan, binarization with an adequate threshold is performed. If we assume that the recorded ultrasonic signal is contaminated with additive white Gaussian noise (AWGN), the binarized TFR will exhibit some sort of two-dimensional jitter. This jitter will affect the shape of the binarized TFR and, of course, the geometrical parameters derived from it. What we propose in this work is to choose a depth-adaptive threshold located at the maximum slope of the Gaussian pulse in the region of the TFR where the Gaussian pulse is higher than the AWGN; this minimizes the effect of noise on the binarized shape. In the zone of the TFR where the amplitude of the Gaussian pulse is comparable to the AWGN power, we use a constant threshold. An example of a binarized TFR using the adaptive threshold is shown in Figure 2.

Figure 2: An example of a binarized TFR diagram with the mixed threshold. Simulation parameters of the ultrasonic pulse were central frequency fc = 500 kHz, AWGN of variance 0.15, and D = 1.5 mm.

Shape-related parameters are obtained from the binarized TFR matrix. These geometrical parameters depend on the physical properties of the inspected material. An example of how this can be mathematically modeled is given as follows. Let I(ω, z) be the binarized TFR of Figure 2, generated with the mixed threshold previously described. This representation can be mathematically formulated, in a first approximation, as in (13), if the threshold is properly selected:

I(\omega, z) = \operatorname{rect}\left(\frac{\omega - \omega_c'(z)}{B'(z)}\right).   (13)

The parameters ω_c'(z) and B'(z) take into account the material-related parameters (α₀, y, S₀, and D), as derived in (7) and (8). If we take into account (for simplicity) only the stochastic scattering, we obtain that the area of the binarized TFR, up to a given depth z₀, is given by (14); note that, to compute the area, the shift term ω_c'(z) can be omitted:

\operatorname{Area}\{I(\omega, z, S_0, D)\} = \iint \operatorname{rect}\left(\frac{\omega}{B'(z)}\right) d\omega \, dz = \int_{z=0}^{z=z_0} B'(z) \, dz = \frac{2}{S_0 D B}\left(\sqrt{1 + S_0 D z_0 B^2} - 1\right).   (14)

For an arbitrary threshold selection, the equality of (13) and (14) does not hold; however, the proportionality relationship makes these expressions equally valid for classification purposes. Equation (14) shows that the higher D or S₀ is, the lower the final area of the binarized response. This simple demonstration confirms that basic geometrical descriptors can provide important information to compare material characteristics such as attenuation or mean scatterer size.
We are going to see in the following subsection the set of geometrical parameters that allow us to classify materials according to scatterer size.
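Before moving to the individual descriptors, here is a minimal sketch of the mixed-threshold binarization just described (our own reading of the procedure: the exp(−1/2) level is the maximum-slope point of a Gaussian profile, while the constant 3σ noise floor is an assumed choice, not a value from the paper):

```python
import numpy as np

def binarize_tfr(tfr, noise_power):
    """Binarize a TFR with a mixed, depth-adaptive threshold (sketch).

    Columns whose peak dominates the noise get a threshold at the
    maximum-slope level of a Gaussian, exp(-1/2) of the column peak;
    columns buried in noise get a constant floor instead.
    """
    mag = np.abs(tfr)
    peaks = mag.max(axis=0)                        # column maxima ~ A'(z)
    adaptive = np.exp(-0.5) * peaks                # max-slope level of a Gaussian
    floor = 3.0 * np.sqrt(noise_power)             # constant threshold (assumed)
    thr = np.where(peaks > 2 * floor, adaptive, floor)
    return (mag >= thr[None, :]).astype(np.uint8)  # binarized I(w, z)
```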
4.1. Geometrical Descriptors. From I(ω, z), the binarized TFR generated with the mixed threshold, we can calculate many geometrical descriptors [8, 9]. Our contribution, at this point, is to work with shape or geometrical parameters having a physical meaning related to the expected changes produced in the TFR. It is expected that geometrical descriptors will provide a more intuitive representation of the model in comparison with the classical signal-processing parameters. For example, we can establish visual relations between the orientation parameters and the physical variations of attenuation or frequency along depth.
The most representative geometrical descriptors that have proven to give good classification results are given below.

4.1.1. Area. For a generic discrete function in two variables, the moments are defined as

m_{pq} = \sum_{\omega} \sum_{z} z^p \omega^q \, I(\omega, z),   (15)

where I(ω, z) is the binarized TFR at coordinates (ω, z). The area is related to the attenuation parameters and mean scatterer size, as was demonstrated in (14). The area can be obtained as the zero-order moment, m_{00} = \sum\sum I(\omega, z) \equiv \mathrm{Area}.
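A direct transcription of (15) (our own sketch; the array layout I[w, z], pulsation along rows and depth along columns, is an assumed convention):

```python
import numpy as np

def moment(I, p, q):
    """Raw geometric moment m_pq of a binarized TFR, eq. (15)."""
    w_idx, z_idx = np.indices(I.shape)     # row index = pulsation, column = depth
    return float((z_idx**p * w_idx**q * I).sum())

# The area is the zero-order moment m00:
# area = moment(I, 0, 0)
```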

Figure 3: Description of the BS and eccentricity parameters.

Figure 4: Centroid estimation using decomposition of the binarized TFR into small rectangles.

If we use the area parameter to distinguish between materials with similar attenuation coefficient but including scatterers of different sizes, it is expected that materials with larger scatterer sizes get a lower value of the area descriptor.

4.1.2. Center of Gravity. By using first-order moments, the center of gravity, or centroid, of a binary representation can be calculated. With m_{10} = \sum\sum z \, I(\omega, z) and m_{01} = \sum\sum \omega \, I(\omega, z), the center of mass can be defined as (c_z, c_\omega), where

c_z = \frac{m_{10}}{m_{00}}, \qquad c_\omega = \frac{m_{01}}{m_{00}}.   (16)

By dividing the binary representation into smaller regions along the horizontal z-axis, we are able to study the central frequency evolution with depth. Moreover, the center of gravity is used in the definition of the second-order moments, as described in (17); note the invariance with respect to response scaling:

\mu_{pq} = \frac{1}{m_{00}} \sum\sum (z - c_z)^p (\omega - c_\omega)^q \, I(\omega, z).   (17)

4.1.3. Orientation. The object orientation θ can be calculated using second-order moments. It is geometrically described as the angle between the major axis of the object and the z axis. By minimizing the function S(\theta) = \sum\sum \left[(z - c_z)\cos\theta - (\omega - c_\omega)\sin\theta\right]^2, we get the following expression for the orientation:

\theta = \frac{1}{2} \arctan\left(\frac{2\mu_{11}}{\mu_{20} - \mu_{02}}\right).   (18)

4.1.4. Eccentricity. An important parameter, which is also dependent on the TFR shape, is the eccentricity ε. The eccentricity allows us to estimate how similar to a circle an object is. Its value ranges from 0 to 1 (0 ≤ ε ≤ 1): for circular objects ε = 0, and for elliptical objects 0 < ε < 1. To compute the eccentricity we have used the expression

\epsilon = \sqrt{1 - \left(\frac{b}{a}\right)^2},   (19)

where a and b are, respectively, the major and minor axes of the object, as depicted in Figure 3.

We can use the eccentricity parameter to distinguish between materials with similar attenuation coefficient but including scatterers of different sizes. For smaller scatterer sizes the eccentricity is expected to be higher than for larger scatterer sizes. Looking at the TFR diagrams depicted in Figure 1, we can notice how materials with a higher value of D have more circular shapes than those with lower D. So, it is expected that the higher D is, the lower the eccentricity value.
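The descriptors of Sections 4.1.1-4.1.4 can be gathered in a few lines of Python (our own sketch; obtaining the axis lengths a and b from the eigenvalues of the second-order moment matrix is an assumption of this sketch, since the paper measures them on the shape itself):

```python
import numpy as np

def shape_descriptors(I):
    """Area, centroid, orientation, and eccentricity of a binarized TFR,
    following (15)-(19). I is a 2-D 0/1 array indexed as I[w, z]."""
    w_idx, z_idx = np.indices(I.shape)
    m00 = I.sum()                                    # area, eq. (15)
    cz = (z_idx * I).sum() / m00                     # centroid, eq. (16)
    cw = (w_idx * I).sum() / m00
    # second-order central moments, eq. (17)
    mu20 = ((z_idx - cz) ** 2 * I).sum() / m00
    mu02 = ((w_idx - cw) ** 2 * I).sum() / m00
    mu11 = ((z_idx - cz) * (w_idx - cw) * I).sum() / m00
    # orientation of the major axis w.r.t. the z axis, eq. (18)
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    # semi-axes from the eigenvalues of the moment matrix (assumed route)
    common = np.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
    a = np.sqrt(2 * (mu20 + mu02 + common))
    b = np.sqrt(2 * (mu20 + mu02 - common))
    ecc = np.sqrt(1.0 - (b / a) ** 2)                # eccentricity, eq. (19)
    return m00, (cz, cw), theta, ecc
```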

4.1.5. Boundary Signature (BS). The BS is a 1-D representation of an object boundary. One of the simplest ways to generate the BS of a region is to plot the distance from the center of gravity of the region to the boundary as a function of the angle θ; Figure 3 illustrates this concept. Changes in the size of the binarized TFR result in changes in the amplitude values of the corresponding BS. It is expected that the higher the value of D is, the lower the amplitude of the corresponding BS. Moreover, the BS not only provides information about area changes but also provides the angular direction of such changes. To compute the BS we need to compute, for each angle θ, the Euclidean distance between the center of gravity and the boundary of the region. As will be demonstrated, it is expected that different values of D, for the model or material under test, will correspond to different BSs for the binarized TFR.
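A simple ray-binning sketch of the BS computation (our own illustration; taking the farthest region pixel in each angular bin as the boundary point assumes a star-shaped region, which holds for TFR shapes such as the one in Figure 2):

```python
import numpy as np

def boundary_signature(I, n_angles=360):
    """Distance from the centroid to the region boundary versus angle."""
    w_idx, z_idx = np.indices(I.shape)
    m00 = I.sum()
    cz = (z_idx * I).sum() / m00                 # centroid of the region
    cw = (w_idx * I).sum() / m00
    ww, zz = w_idx[I > 0], z_idx[I > 0]          # coordinates of region pixels
    ang = np.degrees(np.arctan2(ww - cw, zz - cz)) % 360
    dist = np.hypot(ww - cw, zz - cz)
    width = 360.0 / n_angles
    sig = np.zeros(n_angles)
    for k in range(n_angles):
        sel = (ang >= k * width) & (ang < (k + 1) * width)
        if sel.any():
            sig[k] = dist[sel].max()             # farthest pixel = boundary
    return sig
```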
4.1.6. Frequency-Derived Parameters. Some frequency-derived parameters have also been tested, such as the centroid frequency, central frequency, and bandwidth.
To compute the central frequency we work with the TFR in gray scale (not binarized). We divide the TFR diagram into small rectangles along the z-axis (horizontal) (see Figure 4), and then compute the maximum of each rectangle. The final result is the evolution of the central frequency along depth.
To compute the centroid frequency evolution with depth, we divide the binarized TFR into small rectangles along the z-axis and then compute the center of gravity (described above) for every rectangle; the final result is the evolution of the centroid frequency along the horizontal direction.
The process to compute the bandwidth evolution is similar to the centroid frequency computation. In the case of bandwidth, the width of each rectangle is computed, thus obtaining the evolution of bandwidth with depth.
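The block-wise centroid and bandwidth branch of this procedure might look as follows (our own sketch; the block length is an arbitrary choice):

```python
import numpy as np

def profiles_from_blocks(I, block=16):
    """Centroid-frequency and bandwidth evolution with depth from a
    binarized TFR, by slicing it into small rectangles along z."""
    n_w, n_z = I.shape
    cents, widths = [], []
    for z0 in range(0, n_z - block + 1, block):
        blk = I[:, z0:z0 + block]
        rows = np.nonzero(blk.any(axis=1))[0]     # occupied pulsation rows
        if rows.size == 0:
            cents.append(np.nan); widths.append(0)
            continue
        w_idx, _ = np.indices(blk.shape)
        cents.append((w_idx * blk).sum() / blk.sum())  # per-block centroid
        widths.append(rows.max() - rows.min() + 1)     # per-block width
    return np.array(cents), np.array(widths)
```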

Figure 5: Linear time-variant model structure used for simulations (grain microstructure of the material with a K-type distribution of the M parameter, ARMA(4,4) pulse model with parameters ω_c'(z), B'(z), and A'(z), and AWGN observation noise n with E{n} = 0 and variance σ_n²).

5. Experiments

5.1. Application of Conventional and Geometrical Descriptors to Simulated Signals. Simulated signals have been generated according to the model presented in Section 2.
The transducer response (ARMA order) was estimated using phantom data, based on the final prediction error and residual time series methods [10]. The best results for the employed transducer, which will be used later in this section, were achieved with an ARMA(4, 4) model. The LTV system was modeled according to (20), with the expressions for A'(z), ω_c'(z), and B'(z) given by (7), (8), and (9), respectively:

y(n) = b_0(z)\,x(n) + b_1(z)\,x(n-1) + b_2(z)\,x(n-2) + b_3(z)\,x(n-3) + b_4(z)\,x(n-4) - a_1(z)\,y(n-1) - a_2(z)\,y(n-2) - a_3(z)\,y(n-3) - a_4(z)\,y(n-4).   (20)
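The LTV recursion (20) translates directly into a sample-by-sample filter with depth-dependent coefficients; in the following sketch (our own illustration) the coefficient trajectories b_k(z) and a_k(z) are assumed to be given, for example fitted so that the filter response tracks (7)-(9):

```python
import numpy as np

def ltv_arma44(x, b, a):
    """Time-varying ARMA(4,4) recursion of (20).

    x : input sequence (e.g. reflectivity plus noise), length N
    b : array (N, 5) holding b0(z)..b4(z) per output sample
    a : array (N, 4) holding a1(z)..a4(z) per output sample
    """
    N = len(x)
    y = np.zeros(N)
    for n in range(N):
        acc = 0.0
        for k in range(5):               # feed-forward part
            if n - k >= 0:
                acc += b[n, k] * x[n - k]
        for k in range(1, 5):            # feedback part (note the minus signs)
            if n - k >= 0:
                acc -= a[n, k - 1] * y[n - k]
        y[n] = acc
    return y
```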
The block diagram of the simulated signal generator is represented in Figure 5. Several signals (A-scans) were generated using this model. The purpose of these simulations was to compare conventional signal-processing estimators with the geometrical estimators described in Section 4.1. The results are presented in the form of bias/variance graphs for each estimator as the amount of observation noise (AWGN) increases. The variance is presented in the graphs with vertical color bars, whereas the bias is presented with a marker that distinguishes between conventional and geometrical estimators.
Figure 6 was generated using D = 0.5 mm and AWGN variance varying from 0.05 to 0.5. The figure shows the variance (red bars) and bias (markers) of the conventional estimators: central frequency, centroid frequency, and fractional bandwidth, as described in Section 3. Superimposed, Figure 6 also represents the variance (green bars) and bias (marker o) of the shape analysis operators: central frequency,

centroid frequency, and fractional bandwidth, as described in Section 4.
Figure 7 was generated for D = 1.5 mm and AWGN variance varying from 0.05 to 0.5. It represents the same information described above, but with D = 0.5 mm replaced by D = 1.5 mm.
From the observation of Figures 6 and 7 we can conclude that both methods are equivalent when estimating the central frequency. This is quite obvious if we take into account that the central frequency estimator is computed using the nonbinarized TFR diagram and computes the maximum of each block along the z-axis (2-D shape analysis is not applied). However, the estimator behavior changes when extracting 2-D geometrical parameters from the binarized diagrams (centroid and fractional bandwidth estimators). If we compare the fractional bandwidth estimator computed using the conventional technique with the fractional bandwidth estimator computed over binarized TFR diagrams, a lower variance (represented by shorter vertical bars) but higher bias (represented by higher-valued markers) can be appreciated. If we compare the centroid frequency parameter computed with both presented techniques, we observe lower variance (vertical bars) and bias (markers) when the geometrical estimator is employed. For the centroid frequency parameter, superior performance of the geometrical estimator is obtained in high-noise conditions.
It is also worth mentioning that, comparing the centroid estimator in Figures 6 and 7, the benefits of using the proposed geometrical estimators grow as the mean scatterer diameter (D) increases. Note that the bias of the conventional centroid estimator increases with D.

5.2. Application of Conventional and Geometrical Descriptors to Distinguish Variable Size Scatterers in an Agar-Agar Matrix. Real measurements were performed on a set of 8 test pieces. The 8 test pieces were created at the laboratory of the group and were composed of a uniform matrix of Agar-Agar and

3 Å porosity molecular sieves of different sizes. All the pieces were made with the same concentration of molecular sieves and Agar-Agar. With this homogeneous matrix of Agar-Agar and molecular sieves, we can simulate soft tissues containing scatterers resembling the theoretical model proposed in Section 2. The detailed composition of the test pieces is given in Table 1. Note that, in order to check the repeatability of the process, two test pieces were created for each scatterer size.
The molecular sieves (scatterers) were homogeneously distributed in the uniform Agar-Agar matrix. Figure 8 shows the aspect of a test piece.

Figure 6: Variance and deviation from the theoretical result (bias) of the central frequency (a), centroid frequency (b), and fractional bandwidth (c) operators. Simulation parameters for the LTV were mean scatterer diameter D = 0.5 mm and AWGN variance varying from 0.05 to 0.5. Bias (markers) and variance (vertical bars) for the conventional and geometrical estimators.

Table 1: Composition of the test pieces.

Test piece | Agar-Agar concentration | Number of scatterers  | Mean D
1 and 2    | 2% in distilled water   | 1000 molecular sieves | 0.5 mm
3 and 4    | 2% in distilled water   | 1000 molecular sieves | 0.7 mm
5 and 6    | 2% in distilled water   | 1000 molecular sieves | 1.3 mm
7 and 8    | 2% in distilled water   | 1000 molecular sieves | 1.8 mm

Figure 7: Variance and deviation from the theoretical result of the central frequency (a), centroid frequency (b), and fractional bandwidth (c) operators. Simulation parameters for the LTV were mean scatterer diameter D = 1.5 mm and AWGN variance varying from 0.05 to 0.5. Bias (markers) and variance (vertical bars) for the conventional and geometrical estimators.

The measurement equipment was a PC with an IPR-100 ultrasonic board (Physical Acoustics) working in pulse-echo mode with 400 V of attack voltage, 40 dB in the receiver amplifier, and a damping impedance of 2000 Ohms. The transducer frequency was chosen to be 1 MHz (K1SC transducer probe from Krautkramer and Branson). The received signal was acquired with a Tektronix 3000 oscilloscope (fs = 50 MSamples/s).
The set of 8 test pieces was separated into two subsets: the odd subset (composed of test pieces 1, 3, 5, and 7) and the even subset (composed of test pieces 2, 4, 6, and 8). Both subsets were measured separately, and individual estimators were computed and compared between subsets. The measurement procedure was as follows: uniformly distributed A-scans were obtained around each test piece contour. Individual A-scan TFRs were obtained using the spectrogram (by means of the short-time Fourier transform). The final TFR for each test piece was obtained by averaging the individual A-scan TFRs. After thresholding the final TFR, the geometrical descriptors presented in Section 4 were calculated for each subset. The parameters and graphs obtained after processing each subset were similar; for that reason (and for representation purposes), all parameters and graphs presented in this section were averaged over the even and odd subsets, thus representing a single value for each parameter for every value of D.
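A sketch of this averaging pipeline (our own illustration using SciPy's spectrogram; the window length and overlap are illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.signal import spectrogram

def mean_tfr(ascans, fs=50e6, nperseg=256):
    """Average the spectrograms of several A-scans of one test piece."""
    acc = None
    for a in ascans:                      # one A-scan per contour position
        f, t, S = spectrogram(a, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
        acc = np.abs(S) if acc is None else acc + np.abs(S)
    return f, t, acc / len(ascans)        # averaged |S(f, t)|
```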

Figure 8: Test piece.

Table 2: 2-D shape analysis: area, orientation, and eccentricity descriptors of the test pieces.

D      | Test piece | Area | Orientation | Eccentricity
0.5 mm | 1 and 2    | 651  | −0.0953     | 0.8762
0.7 mm | 3 and 4    | 383  | −0.0344     | 0.8329
1.3 mm | 5 and 6    | 258  | −0.3297     | 0.6965
1.8 mm | 7 and 8    | 279  | −0.1041     | 0.6423

Table 2 shows the area, orientation, and eccentricity parameters obtained from the test pieces created in the experiment.
The area values obtained in Table 2 agree with the expected behavior described in Section 4. It can be noticed that larger scatterer sizes get a lower value of the area descriptor. This trend is coarsely maintained among all scatterer diameters (D).
The orientation parameter values presented in Table 2 also agree with the expected behavior described by (7), since it predicts a downshift in the TFR shape (see Figure 1). The physical explanation is based on the fact that the higher the value of D, the higher the attenuation of the ultrasonic energy at high frequencies. As a result, higher D values get a higher negative slope (with respect to the horizontal axis). The orientation parameter allows us to distinguish coarsely between the small-scatterer test pieces (D = 0.5 and 0.7 mm) and the large ones (D = 1.3 and 1.8 mm).
However, there are geometrical parameters that allow a precise classification of the test pieces according to D: eccentricity, centroid frequency, and BS are the main ones.
The eccentricity parameter values presented in Table 2 show that the higher D is, the lower the eccentricity value. This behavior agrees with the theoretical equations and allows all the test pieces to be classified.
Figure 9 represents the centroid frequency evolution with depth (time of flight). The parameter has been estimated using both techniques presented: the left panel was obtained using the conventional estimator (see (10)) and the right panel using the geometrical estimator (Figure 4). Results were averaged over both subsets (even and odd). Both estimators give results of the same order of magnitude, as can be verified. However, as the ultrasonic pulse travels deep into the agar-agar matrix (increasing time axis) it

Figure 9: Centroid frequency evolution with depth (time of flight): comparison between the same parameter computed with the conventional estimator (a) and the geometrical estimator (b), for scatterer diameters 0.5, 0.7, 1.3, and 1.8 mm.

suffers from attenuation, whereas the grain noise remains constant. As a consequence, late-time estimates require lower-variance estimators in order to distinguish among the categories (grain diameter D). Figure 9(a) only allows discrimination among mean scatterer diameters at the very beginning of the centroid frequency profile (from sample 1000 to 1150). Figure 9(b) allows discrimination over a wider

range (from sample 1000 to 1750 and from sample 3600 to 4400). This experiment confirms once more the superior bias/variance performance predicted by the simulations (Figures 6 and 7).
Promising results are also obtained using the BS parameter (see Figure 10). Note that the amplitude of the BS increases as the scatterer size decreases. This result is coherent with the eccentricity result (see Table 2), where pieces with lower D have higher eccentricity than pieces with higher D.

Figure 10: BS descriptor (signature amplitude versus angle for scatterer diameters 0.5, 0.7, 1.3, and 1.8 mm).

To sum up, from Table 2 and Figures 9(b) and 10, it is important to stress that the area and orientation parameters can classify test pieces into two categories (large and small scatterer sizes), whereas eccentricity, centroid frequency, and BS provide better results, since they are able to distinguish among the four different scatterer sizes.

6. Conclusions

In this paper we show that parameters extracted from the TFR of ultrasonic A-scans can be used for material characterization/classification. The novelty of this work is based on the use of TFRs as input information in 2D-shape analysis algorithms, specifically geometrical descriptors. This technique complements traditional classification parameters (attenuation, longitudinal ultrasonic velocity, etc.) with shape-related parameters. Additionally, for some parameters, the new technique allows lower-variance estimators to be obtained.
When binarized TFRs are processed and the 2-D geometrical modeling inherent in our approach is used, a new set of estimators can be derived. The proposed geometrical estimators can provide better estimates and, moreover, are less sensitive to noise than conventional estimators. Thanks to this superior performance, in terms of bias and variance, a better classification of scattering materials can be achieved. This behavior has been validated through simulations.
The results were applied to real test pieces created at the laboratory. Traditional estimators could hardly be used to classify according to mean scatterer size. However, estimators based on geometrical descriptors of the binarized A-scan TFR could easily distinguish among the different scatterer sizes. Concretely, the area and orientation parameters can classify test pieces into two categories (large and small scatterer sizes), while eccentricity, centroid frequency, and BS provide better results, since they are able to distinguish among the four different scatterer sizes.

Acknowledgment
This work was supported by the Spanish national R&D program under Grant TEC2008-02975, the FEDER programme, and the Generalitat Valenciana under PROMETEO 2010/040.

References

[1] M. Edwards, Ed., Detecting Foreign Bodies in Food, Woodhead, Cambridge, UK; CRC Press, Boca Raton, Fla, USA, 2004.
[2] L. Vergara, J. Gosalbez, J. V. Fuente, R. Miralles, and I. Bosch, "Measurement of cement porosity by centroid frequency profiles of ultrasonic grain noise," Signal Processing, vol. 84, no. 12, pp. 2315–2324, 2004.
[3] J. Gosalbez, A. Salazar, I. Bosch, R. Miralles, and L. Vergara, "Application of ultrasonic nondestructive testing to the diagnosis of consolidation of a restored dome," Materials Evaluation, vol. 64, no. 5, pp. 492–497, 2006.
[4] P. He, "Simulation of ultrasound pulse propagation in lossy media obeying a frequency power law," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 45, no. 1, pp. 114–125, 1998.
[5] R. Demirli and J. Saniie, "Model-based estimation of ultrasonic echoes—part I: analysis and algorithms," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 48, no. 3, pp. 787–802, 2001.
[6] M. Karaoguz, N. Bilgutay, and B. Onaral, "Modeling of scattering dominated ultrasonic attenuation using power-law function," in Proceedings of the IEEE Ultrasonics Symposium, vol. 1, pp. 793–796, October 2000.
[7] L. Vergara, J. Gosalbez, J. V. Fuente, et al., "Ultrasonic nondestructive testing on marble rock blocks," Materials Evaluation, vol. 62, no. 1, pp. 73–78, 2004.
[8] I. Pitas, Digital Image Processing Algorithms and Applications, Wiley-Interscience, New York, NY, USA, 1st edition, 2000.
[9] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, USA, 2007.
[10] A. K. Nandi, Blind Estimation Using Higher-Order Statistics, Kluwer Academic Publishers, Boston, Mass, USA, 1999.

Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 928216, 8 pages
doi:10.1155/2010/928216

Research Article
Cyclic Biaxial Stress Measurement Method Using the Grain
Growth Direction in Electrodeposited Copper Foil
Yuichi Ono, Cheng Li, and Daisuke Hino
Department of Mechanical and Aerospace Engineering, Tottori University, 4-101 Koyama-cho minami, Tottori-shi,
Tottori 680-8552, Japan
Correspondence should be addressed to Yuichi Ono, ono@mech.tottori-u.ac.jp
Received 28 December 2009; Accepted 11 April 2010
Academic Editor: João Marcos A. Rebello
Copyright © 2010 Yuichi Ono et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A method that uses the grain growth direction in electrodeposited copper foil to measure cyclic biaxial stress is examined in this paper. The grain growth direction is measured by image processing software after cyclic loading tests for various biaxial stress ratios are carried out. Since the grain growth occurs in two directions, and these directions correspond closely to the directions of maximum shearing stress when the biaxial stress ratio is negative, the principal stress can be measured using Mohr's stress circle. On the other hand, when the biaxial stress ratio is positive, the above-mentioned feature does not occur; in that case, the first principal stress can be measured based on the grain growth density. The number of grains necessary to measure the biaxial stress is estimated by a statistical approach.

1. Introduction
The copper electroplating method is used to measure cyclic
stress that causes metal fatigue [13]. If copper foil adhered
to a machine element is subjected to repeated loads, grain
growth occurs in the copper foil. Since the grain growth
density is controlled by the maximum shearing stress and
the number of cycles, the maximum shearing stress can
be measured based on the grain growth density in the
prescribed number of cycles [4]. This method has the
advantage of detecting stress in microscopic regions like the
stress concentration region. Moreover, this method can be
easily applied to rotating machines and machine elements in
sealed casings, since it does not need an output line like an
electrical resistance strain gauge.
Since the principal stresses that are important for
evaluating metal fatigue cannot be detected by this method,
a new method using copper foil with circular holes has been
developed [4, 5]. However, this new method is somewhat
complex, because the grain growth length at hole edges as
well as the grain growth density in the copper foil must be
measured. This also means that two kinds of copper foils
(foil with and without circular holes) are necessary for the
principal stress measurement.

From the above viewpoint, we examined principal stress measurement using only one piece of foil without circular holes. To do this, we focused on the grain growth direction, since the growth direction of an individual grain is expected to correspond closely to the direction of maximum shearing stress. The principal stress measurement becomes possible using this feature, as described in the next section. First, we proposed the principal stress measurement method based on the grain growth direction. Second, we investigated the relative frequency distribution of the grain growth direction for various biaxial stress conditions. Finally, the number of grains necessary to measure the principal stress was estimated by regarding the relative frequency distribution as a normal distribution.

2. Biaxial Stress Measurement Method

Figure 1 shows the stress state of the copper foil depicted by Mohr's stress circle: Figures 1(a) and 1(b) show the conditions when the biaxial stress ratio C (= the second principal stress σ2 / the first principal stress σ1) is negative and positive, respectively. Since there is usually no vertical load on the surface of a machine element, the principal stress σ3 perpendicular to this surface is zero. If the copper foil is adhered to this surface, the third principal stress in the foil, σ3, is also zero, since the foil is very thin. Therefore, the principal stresses σ1 and σ2 are expressed by the following equations, using the maximum shearing stress τmax and the biaxial stress ratio C:

\sigma_1 = \frac{2}{1 - C}\,\tau_{\max}, \qquad \sigma_2 = C \sigma_1 \quad (C \le 0),   (1)

\sigma_1 = 2\tau_{\max}, \qquad \sigma_2 = C \sigma_1 \quad (C \ge 0).   (2)

Figure 1: Stress state in copper depicted by Mohr's stress circle. (a) C ≤ 0, (b) C ≥ 0.

Table 1: Mechanical properties of the Ti-6Al-4V alloy.

Proof stress [MPa] | Tensile strength [MPa] | Elongation [%]
946                | 1033                   | 15.2

We focus on the first principal stress σ1 because it is the most important in metal fatigue evaluation. Since the maximum shearing stress τmax can be measured by the conventional method based on the grain growth density in the copper foil, the first principal stress σ1 can be obtained easily when C ≥ 0. However, it is necessary to measure the biaxial stress ratio C in addition to τmax in the case of C ≤ 0. Namely, determining the sign of the biaxial stress ratio is required to measure the first principal stress, since the basic equation used to obtain the first principal stress is different. Figure 2 illustrates grain growth in copper. The grain growth caused by cyclic stress in copper is considered to be a kind of thermal recrystallization [6]. The dislocation movement, caused by mechanical rather than thermal energy, results in the grain growth. Since the shearing stress is responsible for the dislocation movement, the grain growth is considered to be controlled by the shearing stress. τmax occurs in the direction that bisects the σ1 and σ2 directions when C ≤ 0, as shown in Figure 1(a); therefore, the plane in which the shearing stress becomes maximum is the xy-plane, as shown in Figure 2(a). On the other hand, τmax occurs in the direction that bisects the σ1 and σ3 directions when C ≥ 0, as shown in Figure 1(b), so the plane in which the shearing stress becomes maximum is the xz-plane, as shown in Figure 2(b). In addition, there are two directions in which the shearing stress reaches its maximum, and the interval between the two is 90°, as understood from Mohr's stress circle shown in Figure 1. As a result, the grain growth observed after electrochemical polishing occurs in two directions with an interval of 90° between them when C ≤ 0. On the other hand, the grains do not show the above-mentioned features on the xy-plane when C ≥ 0, since the grains grow in the thickness direction. Therefore, the sign of the biaxial stress ratio can be determined by using these characteristics.

If the sign of the biaxial stress ratio is determined, σ1 can be obtained easily when C ≥ 0 using (2). Since the biaxial stress ratio C becomes negative in the combined stress state of bending and torsion, which exists widely in actual machine elements, principal stress measurement under this condition is important. From (1), the measurement of the biaxial stress ratio is necessary in this condition; therefore, the method using copper foil with circular holes was developed to measure the biaxial stress ratio C [4, 5]. A new method using the grain growth direction is examined here to measure the principal stress using only a copper foil without circular holes. Figure 3 shows a machine element that receives a bending moment MB and a torsional moment MT at both longitudinal ends. A small element is in the combined stress state of a bending stress σx and a shearing stress τxy, as shown in Figure 3. Mohr's stress circle for this small element is shown in Figure 4. Since the first principal stress and the second principal stress have opposite signs, the biaxial stress ratio is negative. The angle θ between the axial direction and the direction of principal stress is expressed as

\theta = \frac{1}{2}\cos^{-1}\left(\frac{1 + C}{1 - C}\right).   (3)

Moreover, the relationship between the direction of principal stress and the direction φ of maximum shearing stress is expressed as

\varphi = \theta + \frac{1}{4}\pi, \; \theta + \frac{3}{4}\pi.   (4)

Since the growth direction of individual grains can be expected to correspond with φ, the biaxial stress ratio C can be obtained from (3) and (4) when φ is measured using the grain growth direction. The principal stress can then be measured from (1). Since this new method uses only one piece of foil, the principal stress can be measured more easily than with the conventional method.
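As a small numeric illustration of this measurement chain for C ≤ 0 (our own sketch, not code from the paper): invert (4) to recover the principal direction θ from a measured grain growth direction φ, obtain C from (3), and then σ1 from (1), given a τmax measured from the grain growth density.

```python
import numpy as np

def principal_stress_from_grain_direction(phi_deg, tau_max):
    """Estimate C and the first principal stress for C <= 0.

    phi_deg : measured grain growth direction (deg), one of the two
              maximum-shear directions of eq. (4)
    tau_max : maximum shearing stress (MPa) from the grain growth density
    """
    theta = np.radians(phi_deg) - np.pi / 4      # invert eq. (4), first branch
    # invert eq. (3): cos(2*theta) = (1 + C) / (1 - C)
    c2t = np.cos(2 * theta)
    C = (c2t - 1.0) / (c2t + 1.0)
    sigma1 = 2.0 * tau_max / (1.0 - C)           # eq. (1)
    return C, sigma1

# Example: pure torsion (phi = 90 deg -> theta = 45 deg -> C = -1)
print(principal_stress_from_grain_direction(90.0, 55.0))  # C = -1, sigma1 = 55 MPa
```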

3. Experimental Procedures

3.1. Test Specimen and Testing Machine. The copper foil was obtained as follows. A stainless steel plate (200 mm × 100 mm × 1 mm) was electroplated with copper sulfate solution [1]. Since the stainless steel plate is polished by buffing before plating, the deposited layer can easily be stripped from the stainless steel plate. This deposited layer is called a copper foil. All subsequent experiments were carried out by cutting this single foil into small pieces. The copper foil was about 20 μm thick and the initial grain size was about 1 μm [4]. This grain size is considerably smaller than the grown grain size.
A titanium alloy (Ti-6Al-4V) was used as the specimen material. The mechanical properties are shown in Table 1.

Figure 2: Schematic diagram of grain growth in copper foil: (a) C ≤ 0 (grains grow in the xy-plane in two directions 90° apart), (b) C ≥ 0 (grains grow in the thickness direction).

Figure 3: Biaxial stress state of bending and torsion (MB: bending moment, MT: torsional moment).

Figure 4: Stress state depicted by Mohr's stress circle (C < 0).

Figure 5: Geometry and dimensions of the test specimens: (a) plate-type (C < 0), (b) disk-type (C > 0); thickness t = 4 mm.

Figure 5 shows the geometry and dimensions of the test specimens. A plate-type specimen was used for C < 0, and a disk-type specimen was used for C > 0. When C < 0, a Schenck-type fatigue testing machine was used, and tests were performed at a frequency of 60 Hz by mounting the bending-torsion apparatus shown in Figure 6 in the test machine. This apparatus can produce various biaxial stress ratios by changing the attachment angle. Table 2 shows the relationship between the biaxial stress ratio C and the attachment angle obtained by a strain gauge rosette. On the other hand, when C > 0, a closed-loop servo-hydraulic testing machine was used, and tests were carried out at the same frequency by mounting the disk-bending apparatus shown in Figure 7 in the test machine. If the punch, which has a convex shape with a large curvature, compresses the side of the specimen surface opposite to the side where the copper foil is adhered, the biaxial stress ratio of the copper foil becomes positive [7]. Since the stress state differs with the location on the disk, the copper foil is adhered to the central portion. The biaxial stress ratio C measured by a strain gauge rosette was 1.0. Images of grown grains were captured on a personal computer from a digital camera installed on an optical microscope (200x magnification), and image processing software was used to measure the grain growth direction.
Table 2: Biaxial stress ratio obtained by a strain gauge rosette.

Machine type      Attachment angle [°]   C
Schenck           30                     −0.16
Schenck           45                     −0.33
Schenck           60                     −0.52
Schenck           75                     −0.72
Schenck           90                     −1.0
Servo-hydraulic   —                      1.0

Figure 6: Bending-torsion apparatus.

Figure 7: Disk-bending apparatus (specimen fixing block, punch block, and pinhole for positioning).

3.2. Experimental Procedure. A piece of copper foil was adhered to the central portion of a specimen using cyanoacrylate-based strain gauge cement. After the specimen was installed in the apparatus, cyclic loading tests were carried out under τmax = 55 MPa. The image was captured such that the longitudinal direction of the plate-type specimen corresponded to the horizontal direction of the image. For the disk-type specimen, an arbitrary direction was chosen as the horizontal direction of the image. The growth direction of individual grains was measured after electrochemical polishing and etching. φ was defined as the angle between the horizontal direction of the image and the principal axis of the grown grain (the axis about which the moment of inertia of area is minimized), as shown in Figure 8(b). Namely, φ is expressed as

φ = (1/2) tan⁻¹[2Ixy/(Ix − Iy)].  (5)

Ix, Iy, and Ixy are the moment of inertia of area about the x axis, the moment of inertia of area about the y axis, and the product of inertia of area, respectively. Image processing software can automatically calculate the grain growth direction based on (5) using a binary image of the grown grains. The calculation was carried out for grains that did not coalesce with each other.
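As an illustration of (5), the Python sketch below computes the growth direction of one grain from a binary mask; this is our own sketch, not the software used in the experiments, and the sign convention of the returned angle depends on how the image axes are oriented. The atan2 form is used so that the quadrant of 2φ is resolved.

import numpy as np

def grain_growth_direction(mask):
    # Centered pixel coordinates of the grain (2D boolean mask).
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    Ix = np.sum(y**2)      # moment of inertia of area about the x axis
    Iy = np.sum(x**2)      # moment of inertia of area about the y axis
    Ixy = np.sum(x * y)    # product of inertia of area
    # Eq. (5), evaluated with atan2 to resolve the quadrant of 2*phi.
    phi = 0.5 * np.arctan2(2.0 * Ixy, Ix - Iy)
    return np.rad2deg(phi) % 180.0

# Example: an elongated diagonal grain returns 45 degrees.
mask = np.eye(20, dtype=bool)
print(grain_growth_direction(mask))

In practice, the grains would first be labeled (for example with scipy.ndimage.label) and the calculation applied only to grains that do not touch each other, as stated above.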

4. Results and Discussion


4.1. Statistical Distribution of the Grain Growth Direction. Figure 8 shows microphotographs of three examples of grain growth due to the cyclic stress. The directions of maximum shearing stress are also shown in the figure when C < 0. As predicted, most grains grew in the two directions of maximum shearing stress, and the interval between the two is 90° in the case of C < 0. On the other hand, these features are not recognized in the case of C = 1.0. Figure 9 shows some examples of the relative frequency distribution of the grain growth direction φ. In these figures, m means the number of samples. The maximum shearing stress directions φ obtained by substituting the values of Table 2 into (3) and (4) are also shown in Figures 9(a) and 9(b). When C < 0, the peaks of the distribution correspond well with the directions of maximum shearing stress, and the interval from one peak to the next is almost 90°. These tendencies occurred for all experimental conditions with C < 0. However, they are not recognized in the case of C = 1.0. From these results, it can be concluded that grain growth occurs readily in the direction of the maximum shearing stress.


4.2. Biaxial Stress Measurement Using the Grain Growth Direction. The following periodic function is applied to approximate the relative frequency distribution, since the peaks occur periodically when C < 0:

fsin(φ) = j sin(2πφ/k + l) + f0,  (6)

where fsin is the relative frequency of φ, j is a constant related to the height of the peaks (amplitude), k is a constant that expresses the interval of the peaks (period), l is a constant that denotes the phase at φ = 0 (initial phase), and f0 is a constant that expresses the amount of upward translation. The values of each constant are listed in Table 3.
Table 3: Biaxial stress ratio in each experimental condition.

C        j [%]   k [°]   l [rad]   f0 [%]
−0.16    3.11    99.3    2.99      2.98
−0.33    4.13    91.9    2.69      2.82
−0.52    4.50    94.6    2.48      2.96
−0.72    4.78    96.1    2.22      3.09
−1.0     4.85    94.7    1.88      3.03
1.0      1.99    53.1    —         —

Figure 8: Grown grains in the copper foil: (a) C = −1.0, (b) C = −0.52, (c) C = 1.0 (scale bars: 100 μm; the maximum shearing stress directions are indicated for C < 0).

The sine curve in Figure 9 graphs (6). The approximation curve corresponds well with the experimental results when C < 0. From Table 3, k is almost 90° in the case of C < 0. Moreover, since the grains clearly grow in the xy-plane, as shown in Figure 2(a), the j values are large compared with the case of C = 1.0. Therefore, the sign of the biaxial stress ratio can be decided by using these parameters. Namely, we first pay attention to the k value. If this value is almost 90°, the biaxial stress ratio is probably negative. However, the k value might be about 90° even when C > 0 as a result of the approximation. Therefore, we next check the j value. If it is large, as shown in Table 3 (j > 3), it can be concluded that the biaxial stress ratio is negative. If not, the biaxial stress ratio is positive.
Next, we discuss the measurement of the biaxial stress ratio when C < 0. Since the approximating equation (6) corresponds well with the experimental results, the peak value of φ can be obtained by differentiating this equation; the resultant expression is

φpeak = (k/2π)[(4n + 1)π/2 − l],  (n = 1, 2, 3, . . .).  (7)
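The fitting of (6) and the extraction of φpeak via (7) are straightforward to reproduce; the sketch below (our own, using SciPy's curve_fit with arbitrary initial guesses and purely illustrative histogram values) mirrors the procedure.

import numpy as np
from scipy.optimize import curve_fit

def f_sin(phi, j, k, l, f0):
    # Eq. (6): phi and k in degrees, l in radians.
    return j * np.sin(2.0 * np.pi * phi / k + l) + f0

# Illustrative relative-frequency histogram (bin centers in degrees).
phi_bins = np.arange(7.5, 180.0, 15.0)
freq = np.array([4., 6., 9., 7., 5., 3., 2., 3., 6., 9., 7., 4.])

(j, k, l, f0), _ = curve_fit(f_sin, phi_bins, freq, p0=[3.0, 90.0, 2.0, 5.0])

# Eq. (7): peak positions of the fitted curve, n = 1, 2, ...
n = np.arange(1, 3)
phi_peak = k / (2.0 * np.pi) * ((4.0 * n + 1.0) * np.pi / 2.0 - l)
print(phi_peak % 180.0)

Successive peaks from (7) are exactly one period k apart, so a fitted k near 90° reproduces the two-peak pattern described above for C < 0.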

Figure 10 shows φpeak obtained by the above equation for each biaxial stress ratio C, together with the theoretical curve of φ obtained by substituting (3) into (4). In this figure, φpeak is obtained from the approximation curve at m = 200. φpeak obtained by the new method using the grain growth direction agrees well with the theoretical curve of φ. Therefore, it is possible to measure the biaxial stress ratio C when C < 0 by substituting φpeak for φ in (4) and using (3). Since the maximum shearing stress τmax can be measured by the conventional method based on the grain growth density in a microscopic region, the principal stress can be obtained by (1).


4.3. Estimation of the Number of Grains Necessary to Measure the Biaxial Stress. Since this method has the advantage of enabling stress measurements in a microscopic region, it is preferable to reduce the number of measured grains m as much as possible. The number of grains necessary to determine the sign of the biaxial stress ratio is only about 50, since the features shown in Figure 9 are the same within the range of m from 50 to 150. Therefore, we pay attention to the number of grains m necessary to measure the biaxial stress ratio. Namely, the m necessary to keep a prescribed accuracy in the stress measurement can be estimated statistically.
It is thought that the distribution shown in Figures 9(a) and 9(b) consists of two groups, since there are two directions of maximum shearing stress in the xy-plane. Therefore, the data within the range of ±45° from one peak of the distribution can be assumed to form one group. The cumulative relative frequency of one group in Figures 9(a) and 9(b) is plotted on normal probability paper in Figure 11. Since most data can be plotted as straight lines for each biaxial stress ratio, the grain growth direction of one group can be considered to follow the normal distribution expressed by the following probability density function fnormal:

fnormal(φ) = [1/(s√(2π))] exp[−(φ − μ)²/(2s²)],  (8)


Figure 9: Relative frequency distribution diagrams of the grain growth direction φ (m = 50, 100, 200): (a) C = −1.0, with peaks at φ = 74.7° and 164.7°, 90° apart; (b) C = −0.33; (c) C = 1.0 (m = 50, 80).

where μ is the population mean and s² is the population variance. When m samples are extracted from a population with a normal distribution, the distribution of the average of the m samples, φ̄m, also becomes a normal distribution, and the standard deviation of the m samples becomes s/√m [8]. Therefore, when an error margin of δ% with respect to μ is permitted for φ̄m, the probability that φ̄m is in the range of μ(1 ± 0.01δ) can be calculated statistically [9]. Table 4 shows the values of m obtained for various δ, and Figure 12 shows the first principal stress σ1 obtained by substituting the mean value φ̄m, calculated by extracting the m samples shown in Table 4, for φ in (4). The error range of σ1 obtained from μ(1 ± 0.01δ) is also shown in the figure. Moreover, the solid curve in the figure shows the theoretical curve obtained by substituting τmax = 55 MPa into (1). The first principal stress σ1 calculated by using m samples was within the specified error range. Moreover, σ1 obtained by the new method agrees well with the theoretical curve. The area necessary for the principal stress measurement is only about 5 mm², even when a condition requires many samples (C = −0.16, δ = 3%). Therefore, this new method can detect the cyclic biaxial stress in a small area using only one piece of copper foil.
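The required m follows from standard normal-theory bounds on the sample mean; a minimal sketch (our own, assuming the two-sided 95% confidence level used in Table 4):

import math

def grains_needed(s, mu, delta_percent, z=1.96):
    # Smallest m with z * s / sqrt(m) <= (delta_percent/100) * mu,
    # i.e. the sample mean stays within +/- delta% of mu at ~95% confidence.
    tol = 0.01 * delta_percent * mu
    return math.ceil((z * s / tol) ** 2)

# Illustrative values only (not the experimental ones).
print(grains_needed(s=15.0, mu=90.0, delta_percent=3.0))

Halving the tolerated error roughly quadruples the required number of grains, which explains the steep growth of m toward δ = 3% in Table 4.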

Figure 10: Relationship between φpeak (°) and C (theoretical curve shown for comparison).

Figure 12: Relationship between σ1 and C (δ = 3%; solid curve: theoretical curve; error bars: specified error range).

Table 4: Number of necessary grains m (confidence value: 95%).

δ [%]   C = −0.16   C = −0.33   C = −0.52   C = −0.72   C = −1.0
3       330         182         71          48          44
5       117         66          26          18          16
7       61          34          13          9           8

Figure 11: Grain growth direction plotted on normal probability paper (cumulative frequency of one group for C = −0.16, −0.33, −0.52, −0.72, and −1.0).

5. Conclusions

We examined a method that uses the growth direction of grains in copper foil to measure cyclic biaxial stress. The number of grains necessary to measure the biaxial stress was also estimated statistically.
The results obtained are summarized as follows.

(1) When the biaxial stress ratio is negative, the peaks of the relative frequency distribution of the grain growth direction corresponded well with the directions of maximum shearing stress, and the interval from one peak to the next was almost 90°.
(2) The above-mentioned features are not recognized when the biaxial stress ratio is positive. Therefore, the sign of the biaxial stress ratio can be determined by using these features.
(3) The principal stress was obtained with Mohr's stress circle and the peak of the sine curve obtained by approximating the relative frequency distribution when the biaxial stress ratio is negative.
(4) The grain growth direction within the range of ±45° from one peak of the distribution followed the normal distribution. Therefore, the number of grains necessary for the principal stress measurement could be estimated for the demanded accuracy.
(5) The first principal stress σ1 obtained by this new method agreed well with the result obtained by a strain gauge rosette. The area necessary for the principal stress measurement was only about 5 mm².
(6) Since this method can measure the principal stress with only one piece of foil, it is more efficient than conventional methods.

References
[1] H. Ohkubo, "Copper electroplating method of stress analysis," Memoirs of the School of Engineering, Nagoya University, vol. 2021, p. 1, 1968.
[2] A. Kato and T. Mizuno, "Stress concentration factors of grooved shaft in torsion," Journal of Strain Analysis for Engineering Design, vol. 20, no. 3, pp. 173-177, 1985.
[3] Y. Nagase and T. Yoshizaki, "Fatigue gage utilizing slip-initiation phenomenon in electroplated copper foil," Experimental Mechanics, vol. 33, no. 1, pp. 49-54, 1993.
[4] S. Kitaoka and Y. Ono, "Cyclic biaxial stress measurement by electrodeposited copper foil with circular holes," Strain, vol. 42, pp. 49-56, 2006.
[5] S. Kitaoka, J.-Q. Chen, N. Egami, and J. Hasegawa, "Measurement of biaxial stress using electrodeposited copper foil with a microcircular hole," JSME International Journal, Series A, vol. 39, no. 4, pp. 533-539, 1996.
[6] A. Kato, "Stress measurement by copper electroplating aided by a personal computer," Experimental Mechanics, vol. 27, no. 2, pp. 132-137, 1987.
[7] S. Timoshenko and S. Woinowsky-Krieger, Theory of Plates and Shells, McGraw-Hill, New York, NY, USA, 2nd edition, 1959.
[8] D. C. Montgomery and G. C. Runger, Applied Statistics and Probability for Engineers, John Wiley & Sons, New York, NY, USA, 3rd edition, 2003.
[9] D. C. Montgomery and G. C. Runger, Applied Statistics and Probability for Engineers, John Wiley & Sons, New York, NY, USA, 4th edition, 2003.

Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 545030, 9 pages
doi:10.1155/2010/545030

Research Article
Automatic Determination of Fiber-Length Distribution in
Composite Material Using 3D CT Data
Matthias Teßmann,1 Stephan Mohr,2 Svitlana Gayetskyy,2 Ulf Haßler,2 Randolf Hanke,2 and Günther Greiner1
1 Computer Graphics Group, University of Erlangen-Nuremberg, Am Wolfsmantel 33, 91058 Erlangen, Germany
2 Fraunhofer EZRT, Dr.-Mack-Str. 81, 90762 Fuerth, Germany
Correspondence should be addressed to Matthias Teßmann, matthias.tessmann@informatik.uni-erlangen.de


Received 31 December 2009; Accepted 24 March 2010
Academic Editor: Joao Manuel R. S. Tavares
Copyright 2010 Matthias Teßmann et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Determining the fiber length distribution in fiber reinforced polymer components is a crucial step in quality assurance, since fiber length has a strong influence on the overall strength, stiffness, and stability of the material. The approximate fiber length distribution is usually determined early in the development process, as conventional methods require a destruction of the sample component. In this paper, a novel, automatic, and nondestructive approach for the determination of fiber length distribution in fiber reinforced polymers is presented. For this purpose, high-resolution computed tomography is used as the imaging method, together with subsequent image analysis for evaluation. The image analysis consists of an iterative process in which single fibers are detected automatically in each iteration step after image enhancement algorithms have been applied. Subsequently, a model-based approach is used together with a priori information in order to guide a fiber tracing and segmentation process. Thereby, the length of the segmented fibers can be calculated and a length distribution can be deduced. The performance and the robustness of the segmentation method are demonstrated by applying it to artificially generated test data and selected real components.

1. Introduction
Fiber reinforced polymers (FRPs) are used increasingly in the aerospace and automotive industry, since such components facilitate the cost-effective building of lightweight but rigid components. One manufacturing method for the construction of fiber reinforced polymers is the long fiber reinforced thermoplastics (LFRP-D) process, where a matrix material, for example consisting of polypropylene and additives, is heated and mixed with fibers, for example carbon or glass fibers. This process happens directly, that is, without the usage of an intermediate semifinished part. Thereby, components can be manufactured that are capable of acting as supporting elements with respect to rigidity and stability. Due to these properties, many parts made of LFRP are already used in the automotive industry, for example for frontends, underbody casings, supporting elements, or parts of an engine compartment.
The length of the fibers has a strong influence on the strength, stiffness, and impact resistance of the component [1]. From measurements, it is known that 95% of the maximal possible material stiffness is reached at a fiber length of 1 mm. However, the desirable length of fibers within a certain component is strongly dependent on the purpose of the product [2]. Moreover, since fibers can be damaged and divided by the production device during the creation process, there is not one single fiber length but a distribution of fiber lengths within a component. Furthermore, the opposite can happen if the cutting of the fibers by the cutting unit does not work well, which results in longer fibers than expected [3]. Because of this uncertainty about the actual fiber lengths within the component, it is important to determine the distribution of the fibers and their lengths after the production process in order to estimate the quality of the resulting product.
In this paper, a novel method for the determination of the fiber length distribution is proposed, using high-resolution computed tomography as the scanning method. The 3D CT image is processed in order to segment the fibers. Subsequently, the length of each segmented fiber can be calculated and the distribution of the lengths can be quantified. As a consequence, the presented method avoids destruction of the test sample and makes an inspection of in-use products feasible.
Figure 1: CT scan of a LFRP component with an isotropic voxel size of 4.36 μm.

Figure 2: CT scan of a sample component. The tight packing of fibers in the material can be clearly seen.

Figure 3: Preprocessing of the CT data. In (a) the original image can be seen. In (b) the result of subtracting the eroded image from the original image is shown.

The remaining sections of this paper are structured as
follows. Section 2 briefly discusses the current methods
in fiber length distribution determination. Following in
Section 3, our algorithm for automatic fiber segmentation
and length calculation on high-resolution CT images of
components is explained. The results achieved by applying
this method to artificial and real datasets are presented in
Section 4. The concluding Section 5 contains a discussion of
the presented method and gives an outlook to planned future
research topics in this area.

2. Previous Work
Figure 4: The model of the ideal fiber including the coordinate system (eigenvectors v1, v2, v3) around a center point.

The established method for the determination of a fiber length distribution within a component is to pyrolyze its matrix [4, 5]. For this purpose, the component is put into an oven which is heated to about 450 °C. After about 90 minutes, the matrix component is reduced to ashes and the skeletons of the fibers remain. In order to determine their length distribution, several methods are possible and in use. One mechanical process is to sieve the fibers with different sieve sizes. However, this method does not work satisfactorily with long fibers that are strongly felted. Another common method

is the usage of scanners or high-resolution CCD cameras and flash devices [5]. For this method, the fibers have to be singularized in order to reduce measurement errors caused by fiber crossings and fiber entanglements. Subsequently, the fibers are segmented with a digital image analysis system and the length of the segmented fibers is determined.
The main disadvantage of these approaches is that the component has to be destroyed in order to determine fiber lengths. Consequently, these methods can only be applied during the development process, for first article inspection or the evaluation of spot samples. Therefore, it would be strongly desirable to have methods that allow for the non-destructive evaluation of in-use products or parts thereof for quality assessment. However, currently there are only limited non-destructive evaluation technologies available or under active research [6].
The use of micro-CT (μCT) technology allows for three-dimensional imaging of structures with a very high spatial resolution (up to 700 nanometers). Therefore, it is possible to acquire high-quality images of the inside of components

Figure 5: Scheme of the tracing procedure along a single fiber. New center points are determined by tracing along the minimum eigenvector in two directions. Additionally, a circular area is filled using the other eigenvectors as a basis with a previously known radius.

Figure 6: Schematic view of the fiber crossing problem. The currently used minimum eigenvector points into a different direction than the previous one. This can lead to a wrong segmentation result.

Figure 7: Schematic view of the fiber gap problem. Due to image noise or crossing fibers, tracing may terminate early and produce gaps in the segmentation. This can lead to multiple segmentations of the same fiber and therefore to wrong length distributions.

Figure 8: Schematic view of the solution to the gap problem. During the tracing, critical voxels are detected. In a subsequent pass, different fibers containing the same critical voxels are merged into a single fiber.

built with fiber reinforced polymers, since these fibers exhibit diameters of well below 0.1 mm. A corresponding 3D view of a CT scan of a LFRP sample is shown in Figure 1.
Due to high-resolution scanning technology combined with a high image quality, the development of non-destructive evaluation algorithms becomes more and more practical. In the following section, a model-based approach for the automatic detection and segmentation of fibers from such image data is presented.

3. Fiber Segmentation

A model of an ideal fiber is the foundation of the following segmentation approach. It is reasonable to assume that a general fiber is cylindrically shaped. Furthermore, in the acquired high-resolution scans, the grey value profile of the fibers exhibits a clear maximum at their centers. Moreover, all fibers of a common class usually have constant and previously known diameters [5]. As a consequence, these characteristic features can be exploited by using a model-based segmentation approach. The segmentation algorithm itself is modeled as a multistep process. Firstly, the whole image is filtered and reduced by a closing operation in order to achieve a good fiber separation in the image data. Then,

Figure 9: The three artificially generated datasets. (a) Very short fibers, wide spacing. (b) Medium length fibers, tighter spacing. (c) Varying length fibers, very tight spacing.

the image is scanned for fiber-center voxels by means of a discrimination function based on eigenvalue analysis. The detected center points are then used as starting seeds for the tracing algorithm. Model-based segmentation of a center-axis representation is performed by tracing fiber-center points along the direction determined during the eigenvalue analysis.
Using the a priori known radius of the fibers, a segmentation mask is generated in order to remove the corresponding fiber from the input data. These steps, as explained in detail below, are repeated until the input dataset is fiber-free, that is, until the algorithm cannot find any remaining cylindrical structures in the image data. Finally, the length of each fiber can be determined and a distribution graph can be created.
Figure 10: Real product sample one. Fiber length is high and the fibers are packed very tightly, including some overlap. However, the shape of the fibers is quite clear, approximating the ideal case.

3.1. Preprocessing. One major problem for the automatic segmentation method is that the fibers contained within a

scanned product sample are usually tightly packed (Figure 2). Therefore, it is difficult to apply standard segmentation algorithms, such as region growing [7], since fiber borders may be partly unclear and unwanted flooding may occur. It is, however, possible to preprocess the data so that the edges of the single fibers can be greatly enhanced.
In order to enhance the fiber borders, a morphological erosion filter [7] is applied to the original image I, which yields the image Ie. As a consequence, dark image areas, such as fiber borders, become enlarged in Ie. Subsequently, the work image Iw is created by subtracting the eroded image from the original image, that is,

Iw = I − Ie.  (1)

The result of this operation is shown in Figure 3. It can be observed that single fiber borders show more contrast after the filtering procedure. However, another result of this operation is that fibers which lie close together are now merged into a single fiber in the image. Thereby, fibers are lost during the preprocessing step. Since the original image is processed repeatedly, however, the lost fibers will be detected in a subsequent pass. All of the following operations are carried out exclusively on the work image Iw.
Figure 12: Resulting fiber-length distribution of dataset one. As expected from the visual inspection, the majority of the detected fibers are rather long.

3.2. Center Point Determination. For the segmentation of fiber structures from the image, starting seed points have to be determined. Therefore, image voxels have to be checked as to whether they belong to a fiber or not. Frangi et al. [8] presented a discrimination function for model-based shape extraction on 3D data, which allows for the association of voxel grey values with estimated shapes.
Originally applied to magnetic resonance image data, this model turns out to apply well to the extraction of cylindrical shapes from μCT data. The discrimination function is based on an eigenvalue analysis of the structure that is to be segmented. For each voxel, the Hessian matrix containing the partial second-order derivatives of the data is computed as

H(x, y, z) = | ∂²I/∂x∂x   ∂²I/∂x∂y   ∂²I/∂x∂z |
             | ∂²I/∂y∂x   ∂²I/∂y∂y   ∂²I/∂y∂z |
             | ∂²I/∂z∂x   ∂²I/∂z∂y   ∂²I/∂z∂z |.  (2)
Figure 11: Result of the automatic segmentation algorithm on dataset one. (a) Slice image with the segmentation mask overlaid. (b) 3D view of the extracted fibers. It can be seen that almost all fibers have been segmented accordingly.

Since the Hessian is a symmetric matrix, it can be rearranged to yield


H(x, y, z) = | ∂²I/∂x²    ∂²I/∂x∂y   ∂²I/∂x∂z |
             | ∂²I/∂x∂y   ∂²I/∂y²    ∂²I/∂y∂z |
             | ∂²I/∂x∂z   ∂²I/∂y∂z   ∂²I/∂z²  |.  (3)
This order facilitates the calculation of the eigenvalues and eigenvectors of this matrix. We have

H(x, y, z) = V diag(λ1, λ2, λ3) V^T,  (4)

where V is the matrix containing the eigenvectors v1, v2, and v3 as its columns and λ1, λ2, λ3 are the eigenvalues corresponding to these eigenvectors. For the numerical solution of these equations, the reader is referred to [9].
The eigenvalues of the Hessian calculated from the image voxel data contain information about the grey value change in the image neighborhood. Since for cylindrically shaped objects a specific change pattern is likely, this analysis allows for the determination of the starting seed points. Moreover, since the eigenvectors belonging to the eigenvalues of the voxel span an orthogonal coordinate frame, the direction and extent of the fiber can be estimated.
In [8], an overview of different shape interpretations based on the computed eigenvalues is given. For example, finding all eigenvalues to be of high magnitude and positive value indicates a spherical shape. For the purpose of fiber extraction, the most practical combination is the one given for bright tubular structures and is denoted as {L, H−, H−} for the three eigenvalues λ1, λ2, λ3. Hence, λ1 is expected to be of low magnitude, whereas λ2 and λ3 should expose high magnitude and should be negative.
The relation of the cylindrical structure to the eigenvectors of the Hessian is shown in Figure 4. It can be seen that high values for the eigenvalues λ2 and λ3 are associated with a high grey value change in the direction of the cylinder boundary. Consequently, for the image data this indicates a high gradient magnitude towards the borders, which is typical for the expected shape.
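A voxelwise version of (2)-(4) is compact in NumPy; the sketch below is our own illustration (in practice one would pre-smooth the volume, for example with a Gaussian filter, before differentiation; np.linalg.eigh is applicable because the Hessian is symmetric).

import numpy as np

def hessian_eigensystem(volume):
    # Finite-difference Hessian of a 3D image, eqs. (2)/(3).
    gx, gy, gz = np.gradient(volume)
    Hxx, Hxy, Hxz = np.gradient(gx)
    Hyy, Hyz = np.gradient(gy)[1:]
    Hzz = np.gradient(gz)[2]
    H = np.stack([np.stack([Hxx, Hxy, Hxz], axis=-1),
                  np.stack([Hxy, Hyy, Hyz], axis=-1),
                  np.stack([Hxz, Hyz, Hzz], axis=-1)], axis=-2)
    # Eigendecomposition, eq. (4); eigh exploits the symmetry.
    vals, vecs = np.linalg.eigh(H)
    # Reorder per voxel so that |lambda1| <= |lambda2| <= |lambda3|.
    order = np.argsort(np.abs(vals), axis=-1)
    vals = np.take_along_axis(vals, order, axis=-1)
    vecs = np.take_along_axis(vecs, order[..., None, :], axis=-1)
    return vals, vecs  # vecs[..., :, 0] is the minimum eigenvector v1

vals, vecs = hessian_eigensystem(np.random.rand(32, 32, 32))

The minimum-magnitude eigenvector v1 is the one later used as the tracing direction along the fiber axis.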

Figure 13: Real product sample two. Fiber length is medium to short. The fibers are packed more loosely than in dataset one, but exhibit a higher curvature and show a high amount of overlap.

Figure 14: Result of the automatic segmentation algorithm on dataset two. (a) Slice image with the segmentation mask overlaid. (b) 3D view of the extracted fibers. Despite the high curvature of the fibers and the high overlap rate, most of the fibers have been segmented correctly.

Figure 15: Resulting fiber-length distribution of dataset two. As expected from the visual inspection, many short fibers are detected.



In order for a voxel to be located inside a cylindrical structure, the following criteria have to be met:
(1) if |λ1| ≤ |λ2| ≤ |λ3|, then λ1 should be as small as possible (0 would be ideal);
(2) λ2 and λ3 should have great magnitude and should be almost equal.
These properties can be combined into the discrimination function introduced in [8]:

F(x) = 0 if λ2 > 0 or λ3 > 0, and F(x) = D(x) else,  (5)

with

D(x) = (1 − exp(−RA²/(2α²))) · exp(−RB²/(2β²)) · (1 − exp(−S²/(2c²))),  (6)

where

RA = |λ2|/|λ3|,   RB = |λ1|/√(|λ2 λ3|),   S = √(Σj λj²).  (7)

The parameters α, β, and c can be used to tune the sensitivity of the function to deviations. For the ideal fiber, the maximum of F(x) is reached at the fiber center, while its value decays smoothly towards the border of the cylindrical structure. Consequently, this discriminator can be used to detect the image voxels which exhibit the greatest likelihood of belonging to a fiber center. These points are then used as starting seeds for the model-based fiber tracing.
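A direct transcription of (5)-(7) (our own sketch; the parameter defaults are illustrative, not values from the paper) can operate on the eigenvalue arrays produced by the previous sketch:

import numpy as np

def discrimination(l1, l2, l3, alpha=0.5, beta=0.5, c=0.25):
    # Eq. (5): zero wherever lambda2 or lambda3 is not negative.
    F = np.zeros_like(l1)
    ok = (l2 < 0) & (l3 < 0)
    # Eq. (7): shape ratios and second-order structure measure.
    Ra = np.abs(l2[ok]) / np.abs(l3[ok])
    Rb = np.abs(l1[ok]) / np.sqrt(np.abs(l2[ok] * l3[ok]))
    S = np.sqrt(l1[ok]**2 + l2[ok]**2 + l3[ok]**2)
    # Eq. (6): high only for bright tube-like voxels.
    F[ok] = ((1.0 - np.exp(-Ra**2 / (2.0 * alpha**2)))
             * np.exp(-Rb**2 / (2.0 * beta**2))
             * (1.0 - np.exp(-S**2 / (2.0 * c**2))))
    return F

For example, F = discrimination(vals[..., 0], vals[..., 1], vals[..., 2]); thresholding F, or taking its local maxima, yields the seed points for the tracing stage described next.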
Since performing the eigenvalue analysis and evaluating the discrimination function for each voxel is computationally very expensive, our implementation includes some optimizations. Firstly, only relevant voxels are taken into account when carrying out the calculations, that is, voxels whose grey values exceed a predefined threshold. This approach is useful, as every dataset contains many elements which cannot be part of a fiber, for example background voxels and low-valued image noise. Furthermore, since the eigenvalue analysis of the image data is an independent process for each individual volume element, this part of the algorithm is ideally suited for a parallel implementation. Consequently, it was implemented to run on a high-end consumer graphics card using NVIDIA's CUDA technology [10].
3.3. Fiber Tracing. Having detected possible fiber center point candidates, the algorithm starts the tracing process in order to extract a centerline from the fibers. Since the fiber shape is known a priori, a model-based cylinder approximation scheme is used. For this purpose, one candidate is selected as a starting point and the fiber is traced along the minimum eigenvector v1 in both the positive and the negative direction. As known from the previous analysis, this vector is directed along the central axis of the cylindrical structure (Figure 4).
As a result of the fiber tracing, a center point list is generated. Subsequently, the remaining two eigenvectors and the generated center points can be used in order to segment a circular shape (Figure 5). Using the remaining eigenvectors as a coordinate frame basis, all voxels within the circle centered at the center point with the a priori known fiber radius are segmented.
Since the radius of the fibers is material dependent and known beforehand, this segmentation approach is very robust to the presence of image noise on the fiber borders. Once a fiber is fully segmented, it is used as a mask on the original data in order to remove it from the image entirely. The process is repeated until no more fibers can be found in the image. However, due to the nature of the data, there are a few special cases that have to be dealt with, namely crossing fibers and partial fiber segmentation.
3.3.1. Crossing Fibers. When two or more fibers overlap within the image data (Figure 6), one or more voxels usually belong to several different fibers. As a consequence, a sudden change of direction during tracing along the minimum eigenvector is very likely. Hence, this results in a wrong fiber segmentation. In order to solve this issue, the angle between two consecutive direction vectors of fiber center points is restricted to be lower than 45°. This ensures an approximate C¹-continuity of the extracted fiber centerline. If the angle between consecutive center point candidates is found to lie above this threshold, the neighborhood of this voxel is searched for a better fitting one and the tracing is continued in the detected direction.
Furthermore, if a voxel is encountered that allows more than one propagation direction, it is again added to the seed point list. Thereby, the voxel is not marked as already segmented and can be reused while tracing the crossing fiber. If no adequate continuation direction is found, the tracing process stops. This can lead to partial fiber segmentation, another possible problem.
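The direction bookkeeping behind this restriction is small enough to show in full; the following self-contained Python sketch (our own) also handles the sign ambiguity of eigenvectors, which point along the axis but with arbitrary orientation.

import numpy as np

def continue_direction(prev_dir, new_dir, max_angle_deg=45.0):
    # Normalize both directions.
    prev = prev_dir / np.linalg.norm(prev_dir)
    new = new_dir / np.linalg.norm(new_dir)
    # Eigenvectors have no inherent sign: flip if pointing backwards.
    if np.dot(prev, new) < 0.0:
        new = -new
    # Accept only deviations below the 45 deg continuity threshold.
    if np.dot(prev, new) >= np.cos(np.deg2rad(max_angle_deg)):
        return new
    return None  # the caller then searches the neighborhood for a better fit

d0 = np.array([1.0, 0.0, 0.0])
print(continue_direction(d0, np.array([0.9, 0.3, 0.0])))  # accepted
print(continue_direction(d0, np.array([0.3, 0.9, 0.0])))  # rejected -> None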
3.3.2. Partial Fiber Segmentation. Due to the imposed continuity restriction or image noise, it is possible that a fiber is only partially segmented (Figure 7). In that case, during the tracing process the condition of the discrimination function is no longer satisfied and the algorithm terminates. As a direct result, gaps within a single fiber may occur. Another possibility is that a previously partially segmented fiber gets completely segmented when a subsequent iteration starts at a different seed point. However, this would lead to a double detection of the fiber, which is also not desirable.
In order to solve this problem, a binary volume is created during fiber tracing. For each successfully segmented voxel, the bit at the corresponding position in the binary volume is set. Furthermore, during the tracing process, each new center voxel which is about to be segmented is checked against this binary volume. If it is already contained in the volume, the corresponding position is marked as a critical point and


tracing in the current direction terminates. Once a fiber has been extracted, it is checked for critical points. If it contains a critical point, a search over the already extracted fibers is started for the ones containing the same critical point. If one or more fibers containing equal critical points are found, they are merged in order to create a single fiber mask (Figure 8).
Having extracted all fibers from the dataset, their length computation is straightforward. Since the number of center points is known and the voxel size is isotropic, in the case of axis-aligned center points a simple calculation yields

Li = n · s,  (8)

where n is the number of voxels of the fiber i and s is the voxel size.
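Equation (8) covers the axis-aligned case; for a general traced centerline the length is the sum of the segment lengths between consecutive center points, a straightforward generalization (our own, not stated in the paper; it reduces to (8) up to the end-voxel convention).

import numpy as np

def fiber_length(center_points, voxel_size):
    # Polyline length of the traced centerline, scaled to physical units.
    p = np.asarray(center_points, dtype=float)
    segments = np.linalg.norm(np.diff(p, axis=0), axis=1)
    return voxel_size * segments.sum()

# Example: six collinear center points at a 4.36 um voxel size.
pts = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0), (4, 0, 0), (5, 0, 0)]
print(fiber_length(pts, 4.36))  # 5 steps -> 21.8 um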

4. Results
For the evaluation of the presented method, two types of data were used. First of all, artificial test datasets were created. They contained varying types of fibers with different densities and lengths. The major advantage of this data type is that the length of each individual fiber is known beforehand, which allows an exact evaluation of the produced results.
Moreover, the algorithm was also evaluated on CT scans of real plastic components. Two of them are presented in this paper. The first component was built of straight and long cylindrical fibers, approximating the ideal fiber very closely. The second dataset, however, was more difficult to deal with. It contained mostly short, curved, and heavily overlapping fibers. Nevertheless, good segmentation results were achieved in all cases.
4.1. Artificial Test Data. Three datasets containing ideal fiber
models were created for evaluation (Figure 9). In order to
simulate a real CT scan, random noise was added to the
image data before processing.
During the detection phase, all fibers from the test data were found and segmented correctly. The correlation with the known results is very high in all cases, as shown in Table 1.

Table 1: Correlation between the real length of the artificial fibers and the automatic detection results.

Dataset   Correlation in %
1         97.29
2         98.03
3         97.87

In summary, the accuracy of the detection process over the three test cases was about 98%. Due to the presence of the artificial image noise, the small detection error is tolerable. Tests without the added noise yielded a fiber-length correlation of 100%.
4.2. Real Test Data. From the evaluation on real fiber reinforced polymer components, two selected examples are presented. 3D and slice views are shown in Figures 10 and 13, respectively. Dataset one had dimensions of 66 × 51 × 135 voxels in the X, Y, and Z directions and an isotropic voxel size of 3.96 μm. As can be seen from the figures, the fibers in the component exhibit a mostly straight cylindrical structure. Moreover, the fibers are very long with respect to the size of the dataset.
The dimensions of dataset two were 138 × 413 × 129 voxels with an isotropic voxel size of 8.73 μm. The fibers in this component are less tightly packed and shorter with respect to the dataset size, and they exhibit a curved structure, which indicates a more problematic segmentation task for the presented algorithm.
The segmentation results of the real product samples have been evaluated by manual inspection of the dataset, since pyrolyzing the sample components was not yet possible. However, the results of applying our algorithm to dataset one were found to be very good, as can be seen in Figure 11. All relevant fibers have been segmented correctly in terms of size and radius. Therefore, it is reasonable to assume that the resulting length distribution is correct within acceptable statistical deviations. Figure 12 shows a plot of the calculated length distribution. As expected, the majority of the detected fibers are very long (450-550 μm), with only a few fibers in the low to mid-sized range.
On dataset two, most of the fibers were also segmented correctly (Figure 14). This indicates that the model-based segmentation approach is robust even in the presence of curved fibers. However, the number of gaps between the fibers on this type of dataset is higher than for the straightforward case. The resulting length distribution of dataset two is shown in Figure 15. As expected, the majority of the detected fibers are very short with respect to the dataset size (up to 0.1 mm).

5. Conclusion
In this paper, a novel, model-based approach for the automatic detection, segmentation, and length distribution calculation of fibers in CT data of fiber reinforced polymers was presented. Since the fiber length distribution within the material is essential for the stability of an assembly, having a non-destructive evaluation method is highly desirable.
The presented approach uses a segmentation scheme which was shown to be robust even in the presence of curved fibers and image noise. The algorithm is also able to handle tightly packed and crossing fibers, though the accuracy suffers in these cases, as not all fibers may be detected fully or gaps may occur within single fibers. In order to estimate a systematic error in these situations, more datasets have to be investigated and the automatic results have to be compared to the outcome of a pyrolysis analysis. Moreover, current μCT scanning devices are still restricted to scanning small sample sizes only, which currently limits the practical applicability of this method.
However, the results show that the presented algorithm can achieve a reasonably good segmentation and thus can act as a basis for further research on this topic. Further research will include the acquisition of reference data by pyrolyzing the investigated sample components. Moreover, distribution statistics and error measurements of real samples have to be included in the evaluation of our material testing approach in order to devise a systematic error measurement. These steps are important for an intensive evaluation and will be carried out in the near future.
With the current advent of high-resolution CT scanning devices that are capable of taking images of bigger structures, the presented method could become a valuable tool for the broad inspection of varying materials. This could be especially useful in industries where material function is vital and undetected wearout could have severe impacts.

Acknowledgments
The authors are thankful to T. Potyra and M. Reif (Fraunhofer ICT) for providing the samples used for evaluation.
This paper was cofinanced by the European Union and the
Free State of Bavaria, Germany.

References
[1] A. M. A. Hug and J. Azaiez, "Effects of length distribution on the steady shear viscosity of semiconcentrated polymer-fiber suspensions," Polymer Engineering and Science, vol. 45, no. 10, pp. 1357-1368, 2005.
[2] M. Neitzel and P. Mitschang, Handbuch Verbundwerkstoffe, Carl Hanser, München, Germany, 2004.
[3] S.-Y. Fu, C.-Y. Yue, X. Hu, and Y.-W. Mai, "Characterization of fiber length distribution of short-fiber reinforced thermoplastics," Journal of Materials Science Letters, vol. 20, no. 1, pp. 31-33, 2001.
[4] R. Smallman and R. Bishop, Modern Physical Metallurgy and Materials Engineering, Butterworth-Heinemann, 1999.
[5] G. Erhard, Designing with Plastics, Hanser Gardner, 2006.
[6] G. Washer and F. Blum Jr., "Raman spectroscopy for the nondestructive testing of carbon fiber," Research Letters in Materials Science, vol. 2008, Article ID 693207, 3 pages, 2008.
[7] W. Pratt, Digital Image Processing, 2007.
[8] A. F. Frangi, W. J. Niessen, R. M. Hoogeveen, T. van Walsum, and M. A. Viergever, "Model-based quantitation of 3-D magnetic resonance angiographic images," IEEE Transactions on Medical Imaging, vol. 18, no. 10, pp. 946-956, 1999.
[9] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes: The Art of Scientific Computing, 2007.
[10] M. Fatica, D. Luebke, I. Buck, et al., "High-performance computing with CUDA," Supercomputing Tutorial, 2007.

Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 894643, 7 pages
doi:10.1155/2010/894643

Research Article
Digital Radiography Using Digital Detector Arrays Fulfills
Critical Applications for Offshore Pipelines
Edson Vasques Moreira,1,2 José Maurício Barbosa Rabello,3 Marcelo dos Santos Pereira,4 Ricardo Tadeu Lopes,5 and Uwe Zscherpel6
1 Nondestructive Testing Laboratory of TenarisConfab, 475 Av. Dr. Gastão Vidigal Neto, Pindamonhangaba, 12414-900, SP, Brazil
2 Universidade Estadual Paulista (UNESP)-FEG, 333 Av. Doutor Ariberto Pereira da Cunha, Guaratinguetá, 12516-410, SP, Brazil
3 Petrobras/Engenharia/SL/SEQUI/CI, Km 143 Rodovia Presidente Dutra, São José dos Campos, 12220-840, SP, Brazil
4 Department of Materials and Technology, UNESP-FEG, 333 Av. Doutor Ariberto Pereira da Cunha, Guaratinguetá, 12516-410, SP, Brazil
5 Nuclear Instrumentation Laboratory, COPPE, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
6 BAM, Federal Institute for Materials Research and Testing, Radiological Methods, 87 Unter den Eichen, 12205 Berlin, Germany
Correspondence should be addressed to Edson Vasques Moreira, edsonvasques@uol.com.br


Received 11 November 2009; Revised 10 February 2010; Accepted 27 April 2010
Academic Editor: Joao Manuel R. S. Tavares
Copyright 2010 Edson Vasques Moreira et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
The use of digital radiography for the inspection of welded pipes to be installed in deep-water offshore gas and oil pipelines, such as the pre-salt fields in Brazil, is investigated in this paper. The aim is to use digital radiography for the nondestructive testing of welds, as it is already used in the medical, aerospace, security, automotive, and petrochemical sectors. Among the current options, the DDA (Digital Detector Array) is considered one of the best solutions to replace industrial films, as well as to increase the sensitivity and to reduce the inspection cycle time. This paper shows the results of this new technique, comparing it to radiography with industrial film systems. In this paper, 20 test specimens of longitudinally welded pipe joints, specially prepared with artificial defects like cracks, lack of fusion, lack of penetration, porosities, and slag inclusions of varying dimensions and in six different base metal wall thicknesses, were tested and a comparison of the techniques was made. These experiments verified the proposed rules for parameter definition and selection to control the required digital radiographic image quality as described in the draft international standard ISO/DIS 10893-7. This draft is the first standard establishing the parameters for digital radiography of the weld seams of welded steel pipes for pressure purposes to be used in gas and oil pipelines.

1. Introduction
Industrial radiographic films have been utilized for many years in quality control by NDT for a variety of products; however, digital radiography has recently been implemented in several sectors, for example, the medical, aerospace, security, automotive, and petrochemical sectors. In addition to the technological trend, it has been demonstrated that digital radiography sometimes offers a series of benefits in terms of productivity, sensitivity, environmental aspects, image treatment tools, cost reduction, security, POF improvement [1], and so forth.
Among the current options, the digital detector array (DDA, Varian 2520V, 127 μm) employed in this paper is considered one of the best solutions for online use in plants that produce pieces in series, both for obtaining digital images in place of films and for reducing the inspection cycle time thanks to its high degree of automation [2].
Therefore, the work reported here involved the testing and evaluation of results achieved with this new technique, comparing them with those obtained by conventional film radiography. In this paper, test specimens of pipes longitudinally welded by the submerged-arc welding process, specially prepared with artificial defects of the most varied dimensions, were tested, and a comparison was made of the sensitivity of the techniques employed.
After conducting several experiments to evaluate the highest contrast sensitivity using the wire-type Image Quality Indicator (IQI), the Basic Spatial Resolution (BSR), and the Signal-to-Noise Ratio (SNR) normalized by the Basic Spatial Resolution, and comparing artificial defects, the digital method showed better results and advantages compared with the conventional film technique. These experiments were carried out to support the voting and the development of the first ISO document applicable to digital radiography using DDAs for weld seam inspection of welded pipes for pressure purposes, the ISO/DIS 10893-7 specification [3].

2. Digital Radiography
Digital radiography systems offer the possibility of obtaining images with much less strict exposure requirements than those of conventional film systems. Exposure imprecision normally leads to radiographs that are dark, light, or show little contrast, which are easily improved and enhanced using digital techniques.
Some of the advantages of digital radiographic systems include: image display, reduction of X-ray doses, image processing, automated acquisition, partially or completely automated evaluation, and image storage and retrieval with significantly reduced effort.
The entire operation is simplified, from obtaining the image to the cycle time involved in obtaining, evaluating, and storing each image with ensured traceability [4], as illustrated in Figure 2.
Differently from industrial films, a fully integrated environment for digital radiographic images adds still other advantages [5] to those of the DDAs, for example: productivity and sensitivity are increased, resulting in fast decisions using remote access, meetings, training, Level 3 supervision, process control monitoring, and so forth.

3. Materials and Method


For this investigation a Digital Detector Array, PaxScan 2520V from Varian, was used, with a 25 × 20 cm² input screen with a DRZ Plus scintillator (Gd2O2S) and 127 μm pixel size, resulting in a Basic Spatial Resolution (SRb) of 130 μm. The data transfer to the computer via a GBit Ethernet interface allows an image transfer rate of 10 frames per second at full resolution. The software Image 3500DD from YXLON was used for data acquisition, image integration, DDA calibration, and data storage [6].
The image evaluation was done using the BAM software ISee! (see http://www.kb.bam.de/ic). Figure 3(a) shows an inside photo of the DDA and Figure 3(b) shows details of the DDA construction, which consists of a matrix of millions of light-sensitive photo diodes in direct contact with the scintillator screen (not shown).
3.1. Materials Involved. The pipes are manufactured from laminated carbon steel plates, according to the requirements established by the API 5L and ISO 3183 specifications [7, 8]. For high-strength pipes, microalloyed steels are produced with a high level of control of the fundamental parameters throughout the manufacturing process, comprising a specific set of steels whose chemical composition and other parameters are specially developed to attain high values of mechanical properties.
The samples of steel API 5L, grade X65, were manually welded by the SMAW (Shielded Metal Arc Welding) process. The manufacturing and selection of samples was done at TenarisConfab; the samples contain a large number of introduced critical artificial welding flaws such as longitudinal and transversal cracks, lack of penetration, lack of side-wall fusion, porosities, and slag inclusions.
3.2. Technique. The currently applied conventional technique using industrial films of class 1, in accordance with ASTM E 1815, was evaluated and compared with the digital technique using the described DDA. For these investigations a high-power X-ray tube Y.TU 225-D04 was used, with a maximum of 225 kV, a small focal spot of 0.4 mm at 800 W, and a large focal spot of 1 mm at a maximum of 1.8 kW (certified according to EN 12543-2), an anode angle of 11°, and a 4 mm inherent aluminum filter at the tube exit window [9].
The setup for digital radiography is shown in Figure 4; the source-to-detector distance (SDD) and the object-to-detector distance (ODD) were varied to change the magnification factor (a maximum of 1.2 was used) in accordance with the focal spot size used (0.4 or 1.0 mm) and the wall thickness to be inspected.
The diaphragm at the tube port was adjusted so that only the length to be inspected was exposed to X-rays, in order to reduce the amount of scattered radiation.
3.3. Compensation Principle. In accordance with ISO/DIS 10893-7, it is possible to apply the compensation principle if the duplex IQI value required by the standard cannot be achieved by the detector system used. An increase in the visibility of the single-wire contrast IQI can compensate for too-high unsharpness values. If ISO/DIS 10893-7 requires, for example, the duplex wire number D12 and the contrast sensitivity wire number W14, but they are not achieved at the same time for a specific detector setup, an increased contrast sensitivity of W16 at a larger unsharpness of D10 provides equal detection sensitivity (compensation principle) [10, 11].
For a DDA, the contrast sensitivity depends on the integration time and the X-ray tube settings used for the acquisition of the radiographic images. An increased exposure time or dose at the DDA allows an increase of the contrast sensitivity to values higher than those reachable with industrial films [3, 10].
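Read as a rule, the example above says that each duplex-wire step given up must be offset by at least one single-wire step gained; the small sketch below encodes exactly this reading (our own interpretation of the worked example, not the normative text of the standard).

def compensation_ok(d_required, w_required, d_achieved, w_achieved):
    # Duplex wire steps lost (e.g. D12 -> D10 is 2) versus
    # contrast sensitivity steps gained (e.g. W14 -> W16 is 2).
    lost = d_required - d_achieved
    gained = w_achieved - w_required
    return lost <= 0 or gained >= lost

print(compensation_ok(12, 14, 10, 16))  # True, the example from the text
print(compensation_ok(12, 14, 10, 15))  # False, only one step gained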
3.4. Magnification Technique. The pixel size of this DDA system is large (127 μm) compared with the small grain size of the film [9]. As a result, the Basic Spatial Resolution is limited to 130 μm, which corresponds to the duplex wire D9. This difficulty was circumvented by the following two possible approaches:
(1) increase the Signal-to-Noise Ratio (SNR) in the image for higher wire sensitivity, to compensate for the reduced duplex wire resolution;
(2) increase the X-ray geometric magnification. In these experiments the magnifications were between 1.1 and 1.2.

Figure 1: Pipeline in deep water. Pre-salt in Brazil.

Figure 2: Fully integrated environment for digital radiographic images [4] (remote access for meetings and video conferences, R&D, Level 3 supervision, process control monitoring, training, welding repairs, Level 2 interpreters, outside inspectors, customers, auditors, backup, and laboratory).


4. Results
The basic parameters for the evaluation of the image quality are the following: the normalized Signal-to-Noise Ratio (SNRn) at the base material, the Basic Spatial Resolution (SRb), and the Contrast Sensitivity (CS) by the wire-type IQI. Finally, the defect visibilities obtained with the DDA were compared with those obtained with digitized films.
4.1. Normalized Signal-to-Noise Ratio, SNRn. The normalized SNRn (see ASTM E 2597 for details) for the DDA system is a function of the number of image frames integrated during the exposure time. This is a basic difference to film exposures, with their limited density range of 2.3 < D < 4.2 and a fixed exposure time resulting from the film sensitivity (ISO film speed) of the selected film system class and the density requirements.
DDAs allow varying the SNRn through the overall image integration time in the computer over a much wider range; the maximum achievable SNRn is limited only by the quality of the detector calibration. In Figure 5, curves of SNRn are shown as a function of the integration time at different points.
The normalized SNRn measurement was made on the base material near the wire IQI. The ROI (Region of Interest) size for the calculation of SNRn was 20 × 55 pixels, that is, 1100 points.
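To make the SNRn evaluation concrete, the following minimal Python sketch (not part of the original study) computes the SNR as mean over standard deviation in a homogeneous ROI of the base material and applies the SNRn normalization of ASTM E 2597 with the measured basic spatial resolution; the image data and ROI coordinates are synthetic stand-ins chosen to mirror the 20 × 55 pixel geometry described in the text.

```python
import numpy as np

def snr_normalized(image, roi, srb_um):
    """SNRn = (mean/std over a homogeneous ROI) * 88.6 um / SRb (ASTM E 2597)."""
    r0, r1, c0, c1 = roi
    patch = image[r0:r1, c0:c1].astype(np.float64)
    return patch.mean() / patch.std(ddof=1) * 88.6 / srb_um

rng = np.random.default_rng(0)
img = rng.normal(20000.0, 150.0, size=(256, 256))   # synthetic base material
roi = (100, 120, 100, 155)                          # 20 x 55 pixels = 1100 points
print(f"SNRn = {snr_normalized(img, roi, srb_um=130.0):.1f}")
```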



(a) Photo of DDA from backside
(b) Scheme of the light-sensitive detector matrix (TFT switches and photodiodes)

Figure 3: Digital Detector Array (DDA).

Figure 4: Arrangement: (a) basic scheme showing the X-ray tube (Y.TU 225-D04), diaphragm, sample, and DDA (Varian PaxScan 2520 V), with the distances SDD and ODD indicated; (b) detailed view [9].

For short exposure times the SNRn is proportional to the square root of the exposure time (number of integrated frames), because the SNRn is limited by the quantum noise generated by the X-ray photons [10, 12]. For longer exposure times, the noise in the calibration data limits the achievable SNRn in the integrated image. It is therefore necessary to use at least twice as many frames for the detector calibration as are used later during the actual image acquisition of the inspected samples. The dependence of SNRn and wire perceptibility at 32.3 mm steel, 225 kV and 8 mA tube settings, and a geometric setup of 700 mm SDD/80 mm ODD in the central region of the sample is shown in Figure 6. The integration time was varied between 1 s and 512 s (with 1 s frame time); the gain calibration was done with 250 s integration time and 32.3 mm steel, and the offset image with 500 s.
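The square-root growth and the calibration-limited plateau described above can be illustrated with a short synthetic simulation; all noise levels below are invented for illustration and do not correspond to the measured detector.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 1000.0
pixels = 10_000
# residual fixed-pattern (calibration) noise does not average out over frames
fixed_pattern = rng.normal(0.0, 2.0, size=pixels)

for frames in (1, 4, 16, 64, 256):
    stack = signal + fixed_pattern + rng.normal(0.0, 30.0, size=(frames, pixels))
    integrated = stack.mean(axis=0)          # frame averaging in the computer
    snr = integrated.mean() / integrated.std(ddof=1)
    print(f"{frames:4d} frames: SNR = {snr:7.1f}")
```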

4.2. SRb and Contrast IQI. In Figures 7 and 8 the performance of digital radiography is shown in terms of contrast sensitivity (single-wire IQI read-out) as a function of integration times of 1, 2, 4, 8, 16, and 32 seconds. The requirement of ISO/DIS 10893-7, using the compensation principle as indicated, is shown as a blue line. Figure 7 shows that the minimum requirements of ISO/DIS 10893-7 can be reached with minimum integration times of 4 s (W18, diameter 63 μm), 2 s (W17, diameter 80 μm), and 1 s (W15, diameter 130 μm) for the wall thicknesses of 4.9 mm to 9.7 mm [10]. In addition, the recognized single-wire IQI is given for all exposure times. The single wire number W11 is recognized for SNRn values above 100, as required by ISO/DIS 10893-7; the maximum contrast sensitivity reachable with this calibration is W14.
In Figure 8 it can be seen that the requested sensitivities were obtained with integration times of 1 s (W14, diameter 160 μm), 2 s (W12, diameter 250 μm), and 4 s (W11, diameter 320 μm) for the wall thicknesses of 19.2 mm, 25.3 mm, and 32.3 mm, respectively [10].
For the integration time of 32 s, the following wire sensitivities were obtained: W19 for the wall thicknesses of 4.9 mm and 6.4 mm, W18 for the wall thickness of 9.7 mm, W16 (wire diameter 100 μm) and W15 for the wall thickness of 19.2 mm, W14 for the wall thickness of 25.3 mm, and W13 (wire diameter 200 μm) for the wall thickness of 32.3 mm. All these sensitivities are two wires higher than requested by ISO/DIS 10893-7.

Figure 5: SNRn for WT 25.3 mm, image quality class B.

Figure 6: SNRn and wire perceptibility [10].

Figure 7: IQI wire sensitivity, 4.9 mm to 9.7 mm.

Figure 8: IQI wire sensitivity, 19.2 mm to 32.3 mm.
4.3. Defect Visibilities. In this section, comparisons are made between radiographic images obtained from digitized films and the corresponding images from digital radiography using the DDA. The critical defects shown in the welds were artificially generated during the welding process for the purpose of comparing indications.
The images shown in Figures 9 and 10 were generated with the program ISee! and are displayed in negative mode (like film). They are the results of high-pass filtering using the filter "Enhance Detail" in ISee! [6]. This special 2D-FFT filter does not require any adjustable parameter and is optimized for optimum presentation of welds on 8-bit displays with only very weak filter artifacts [10]. In none of the shown examples does the reduced total image unsharpness of the DDA system limit the visibility of fine indication details when compared to the film images. Quite the contrary: the detail visibility with the DDA is even improved by the suppression of the high-frequency image noise observed on the digitized film images. The minimum time indicated is the specific integration time that fulfilled the defect visibilities compared with film.
Figure 9 shows a performance comparison for a base metal wall thickness of 4.9 mm: (a) digitized AGFA D4 film, (b) and (c) digital radiography with 1 s and 32 s integration time. In terms of defect visibility, the DDA images are better. In this case of 4.9 mm wall thickness, the requirement of ISO/DIS 10893-7 for a minimum SNRn > 100 for class B was fulfilled already with an integration time of 1 s, and the IQI requirement was fulfilled with 4 s integration time.
In Figure 10 the performance of digital radiography is compared for a wall thickness of 25.3 mm, base metal. The DDA with 8 s (b) and 32 s (c) integration time is shown in comparison to the AGFA D4 film (a) in terms of defect visibility. Independent of the noise, it is possible to see better details on the digital radiographs (b) and (c) than on the film (a). In this case of 25.3 mm wall thickness, the requirement of ISO/DIS 10893-7 for a minimum SNRn > 100 for class B was fulfilled with an integration time of 8 s, and the IQI requirement was fulfilled with 4 s integration time, as reported previously.

Figure 9: WT of 4.9 mm: (a) film D4, DDA with (b) 1 s and (c) 32 s of integration time (crack).

Figure 10: WT of 25.3 mm: (a) film D4, DDA with (b) 8 s and (c) 32 s of integration time (lack of penetration).


5. Integration Time of DDA Versus Exposure Time of Films

Figure 11: Film KODAK M100 and AGFA D4 in comparison with the DDA: exposure/integration time versus wall thickness, with the requirements of ISO 10893-7, DNV, API 5L, and ISO 3183 indicated.

The integration time of digital radiography and the exposure time of the conventional film technique using two different films are compared in Figure 11. The advantage of the reduced inspection time of the DDA compared to films is clearly shown over the complete wall thickness range tested. It is important to note that the integration times used for the digital technique are very short in comparison with the practicable exposure times used with traditional films, while fulfilling all requirements of the investigated standards. For KODAK M100 film, class 1, the reduction for the wall thickness of 4.9 mm is a factor of 7, and for the remaining wall thicknesses the factor is between 22 and 40.
For AGFA D4 film, class 1, the reduction for the wall thickness of 4.9 mm is a factor of 5, and for the remaining wall thicknesses the factor is between 15 and 28.


6. Conclusions
Based on the above results, it can be concluded that the direct digital radiographic technique using DDAs is more sensitive than the conventional film technique, both in terms of visible wires of the Image Quality Indicators and in the detection of small real defects in the welds [13].
Hence, as foreseen in the proposed ISO/DIS 10893-7, digital radiography using DDAs can be employed directly on the production lines of oil and gas pipelines, with important advantages over the conventional technique.
This digital technique therefore represents an advance in the quality of radiographic testing currently employed, in addition to its high degree of automation, which will allow for improved productivity and greater environmental friendliness.

Acknowledgments
The authors are indebted to the company XYLON International for carrying out the tests, as well as to the staff responsible for the postgraduate program at UNESP, Universidade Estadual Paulista-FEG. The authors would also like to thank TenarisConfab for its support in terms of technical and financial resources, which enabled this work to be carried out.

References
[1] L. Pick and O. Kleinberger, "Technical highlights of digital radiography for NDT," Materials Evaluation, vol. 67, no. 10, pp. 1111–1116, 2009.
[2] E. V. Moreira, H. R. Simões, J. M. B. Rabello, J. R. De Camargo, and M. Dos Santos Pereira, "Digital radiography to inspect weld seams of pipelines: better sensitivity," Soldagem e Inspeção, vol. 13, no. 3, pp. 227–236, 2008.
[3] ISO/DIS 10893-7, "Non-destructive testing of steel tubes, part 7: digital radiographic testing of the weld seam of welded steel tubes for the detection of imperfections," Geneva, Switzerland, 2009.
[4] A. G. Farman, C. M. Levato, D. Gane, and W. C. Scarfe, "In practice: how going digital will affect the dental office," Journal of the American Dental Association, vol. 139, 2008.
[5] R. Pincu and O. Kleinberger, "Portable X-ray in the service of art," Materials Evaluation, vol. 68, no. 3, pp. 311–318, 2010.
[6] U. Ewert, U. Zscherpel, C. Bellon, G. R. Jaenish, J. Beckmann, and M. Jechow, "Flaw size dependent contrast reduction and additional unsharpness by scattered radiation in radiography: film and digital detectors in comparison," in Proceedings of the 17th World Conference on Non-Destructive Testing, Shanghai, China, 2008.
[7] API 5L, Specification for Line Pipe, American Petroleum Institute, Washington, DC, USA, 2007.
[8] ISO 3183, "Petroleum and natural gas industries: steel pipes for pipeline transportation systems," Geneva, Switzerland, 2007.
[9] E. Moreira, M. C. Fritz, H. R. Simões, J. M. B. Rabello, and J. R. Camargo, "Flat-panel detectors are accepted for digital radiography in place of conventional radiography in pipeline weld inspection," in Proceedings of the 4th Conferencia Panamericana de END, Buenos Aires, Argentina, 2007.
[10] E. Moreira, R. Lopes, M. Pereira, J. M. B. Rabello, U. Zscherpel, and D. Oliveira, "Real application stage of DR in weld seam of pipes for gas and oil linepipes," in 10a COTEQ, Conferência de Tecnologia, Salvador, Brazil, 2009.
[11] U. Ewert, K. Bavendiek, J. Robbins, et al., "New compensation principles for enhanced image quality in industrial radiology with digital detector arrays," Materials Evaluation, vol. 68, no. 2, pp. 163–168, 2010.
[12] D. F. Oliveira, "Análise da Radiografia Computadorizada em Condições de Águas Profundas," Dissertação (Mestrado em Engenharia Nuclear), COPPE, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil, 2007.
[13] U. Zscherpel and K. Bavendiek, "High quality radiography with digital detector arrays," in Digital Imaging VIII Conference, Foxwoods, Conn, USA, 2005.

Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 427878, 8 pages
doi:10.1155/2010/427878

Research Article
Applying a Novel Cost Function to Hopfield Neural Network for
Defects Boundaries Detection of Wood Image
Dawei Qi, Peng Zhang, Xuefei Zhang, Xuejing Jin, and Haijun Wu
College of Science, Northeast Forestry University, Harbin 150040, China
Correspondence should be addressed to Dawei Qi, qidw9806@yahoo.com.cn
Received 31 December 2009; Revised 14 April 2010; Accepted 13 May 2010
Academic Editor: Joao Manuel R. S. Tavares
Copyright 2010 Dawei Qi et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A modified Hopfield neural network with a novel cost function is presented for detecting the boundaries of wood defects in images. Different from traditional methods, the boundary detection problem in this paper is formulated as an optimization process that seeks the boundary points minimizing a cost function. An initial boundary is first estimated by the Canny algorithm. The pixel gray value is described as a neuron state of the Hopfield neural network, and the states are updated until the cost function reaches its minimum value. The designed cost function ensures that few neurons are activated except those corresponding to actual boundary points, and that the activated neurons are positioned at the points with the greatest change in gray value. The tools of Matlab were used to implement the experiment. The results show that the noise in the image is effectively removed and that our method obtains a more noiseless and vivid boundary than the traditional methods.

1. Introduction
X-ray wood nondestructive testing is an effective method for accessing internal information of wood. Compared with other conventional wood nondestructive testing methods, such as appearance judgment, acoustic emission testing, ultrasonic testing, microwave testing, and stress wave testing, this method can acquire distinct images of the internal wood structure with an X-ray imaging system. From the wood images, the positions of wood defects can be easily identified and the scales of the defects can be roughly estimated. Furthermore, computer technology can be used to automatically extract wood defect information from the images and to identify defect characteristics such as areas, types, and severity, which can help in making the optimal sawing solution. However, extracting accurate defect information depends on accurate boundary detection. There are many edge detection algorithms. Most previous edge detection algorithms used first-order derivative operators such as the Sobel edge operator [1, 2], the Roberts edge operator, and the Prewitt edge operator [3]. If a pixel point is on the boundary, its neighborhood will be a zone of transition. The Laplacian operator [4] is a second-order derivative operator and is used to detect boundaries at the locations of its zero crossings. The Canny operator [5, 6], another gradient operator, is used to determine a class of optimal filters for different types of boundaries. All these operators detect boundary points by the gray-gradient change of the image pixels in the neighborhood; the disadvantage of these methods is their sensitivity to noise.
Compared with traditional edge detection methods, the Hopfield neural network, which regards edge detection as an optimization process, has been applied in recent years to the low-level image processing task of boundary detection. Chao and Dhawan [7] used a Hopfield neural network to perform edge detection on a gray-level image; the results were found to be comparable to a Sobel operator on noisy gray-level images. Chang [8] applied a contextual-based Hopfield neural network to medical image edge detection and designed a specific energy function for medical images; the results showed that the method can obtain better edge points than the conventional methods. The active contour model (snake) [9] has also been used in image processing in recent years [10–13]. Zhu and Yan [10] attempted to combine a Hopfield neural network with an active contour model for brain image boundary detection. That method showed results comparable to those of standard snake-based algorithms, but it requires less computing time.
In this paper, we present a novel approach to automatically detect wood defect boundaries using a modified Hopfield neural network with a specific cost function designed for wood defect images. The boundary detection problem is regarded as an optimization process that seeks the boundary points minimizing a cost function, and the Hopfield neural network is used as a computational network for solving this optimization problem. Because of its highly interconnected structure of neurons, the network is not only very effective in terms of computational complexity but also very fault tolerant. In consideration of the accuracy of the detection, an initial boundary must be estimated before using the Hopfield neural network. Every pixel in the image with an initial boundary is represented by a neuron which is connected to all other neurons but not to itself. The image is considered as a dynamic system which is completely described by a cost function. The states of the neurons are updated according to the cost function until convergence; the result image is then given by the states of the neurons. The tools of Matlab were applied to implement the experiment in this paper. The results show that our method can obtain more continuous and more accurate boundary points than the traditional methods of boundary detection.
The remainder of this paper is organized as follows. In Section 2, the basic imaging principle of X-rays and a wood nondestructive detection imaging system are described. Hopfield neural network theory and its application to solving optimization problems are illustrated in Section 3. Section 4 discusses how to implement the boundary detection algorithm using a Hopfield network; this section is divided into four phases: we first discuss how to initiate defect boundaries, then how to map the boundary detection problem onto a Hopfield neural network, then a novel cost function for wood defect boundary detection is described, and finally the algorithm is summarized. In Section 5, experimental results and a discussion are given. Conclusions and perspectives are presented in Section 6.

2. X-Ray Wood Nondestructive Detection Theory
The X-ray detection method has been widely applied in the field of wood nondestructive detection in recent decades. In the major application of this method, a wood defect image is first acquired by an X-ray imaging system; wood defects and other internal structure features are then detected by subsequent evaluation methods.
2.1. Basic Imaging Principle of X-Ray. X-ray is a kind of electromagnetic wave with a shorter wavelength than visible light. It can penetrate opaque bodies of a certain thickness. After penetrating the body, the intensity of the X-ray is related to the properties and thickness of the body and to the energy of the X-ray.

Figure 1: Attenuation diagram of the X-ray imaging law.

For a monochromatic narrow-beam X-ray (which has a single wavelength), when it penetrates a thin layer of a homogeneous substance of thickness ΔT, the decay in intensity of the X-ray is proportional to the incident ray intensity and to the thickness of the layer, ΔI = −μI ΔT. Therefore, after an X-ray of intensity I_0 penetrates a homogeneous substance of thickness T, the intensity of the X-ray is

I = I_0 e^{−μT}, (1)

where I_0 is the intensity of the incident ray, I is the intensity of the transmitted ray, T is the thickness of the substance, and μ is the attenuation coefficient. This is the basic attenuation principle of a monochromatic narrow X-ray beam [14]. An attenuation diagram of the X-ray imaging law is shown in Figure 1. In practical testing, the X-ray from the source is a broad beam of continuous-spectrum radiation, which includes photons of different energies, so the attenuation formula is more complex. The attenuation coefficient of a broad, continuous-spectrum beam changes with increasing thickness of the penetrated substance. When the thickness reaches a threshold value, the attenuation coefficient becomes nearly fixed; in this case, the continuous-spectrum ray can be approximately regarded as a monochromatic ray.
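As a quick numerical illustration of (1), the following sketch evaluates the transmitted intensity for a few thicknesses; the attenuation coefficient is a hypothetical value chosen only for illustration.

```python
import math

I0 = 1.0    # incident intensity (arbitrary units)
mu = 0.5    # attenuation coefficient in 1/cm (illustrative value)

for T in (0.0, 1.0, 2.0, 5.0):    # thickness in cm
    I = I0 * math.exp(-mu * T)    # equation (1)
    print(f"T = {T:3.1f} cm -> I/I0 = {I:.3f}")
```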
2.2. X-Ray Wood Nondestructive Detection Imaging System. The block diagram of the X-ray wood nondestructive detection imaging system is shown in Figure 2. The system used in our experiment is capable of producing wood defect images. The log is placed between the X-ray source and the image intensifier. The X-ray source gives off X-rays, which are partly absorbed by the wood material as they penetrate the object. The absorbed quantity is related to the types and the density of the log defects. The attenuation of the X-rays in the log reduces their energy, which is reflected in different degrees of activation of the image intensifier screen. The visual information from the image intensifier is transmitted to a computer by a CCD camera. The digital signals, converted from the analog signals by the A/D converter circuit, are deposited in the image storage system for wood defect image detection.

3. Hopfield Neural Networks
3.1. Basic Theory of Hopfield Neural Networks. The Hopfield neural network is one of the most famous artificial neural network models. As a recurrent neural network, it is constructed from a single layer of neurons, each of which has feedback connections to all other neurons but not to itself. Figure 3 shows a diagram of a Hopfield neural network structure with four neurons.

Figure 2: The block diagram of the X-ray wood nondestructive detection imaging system (X-ray source, rotating plate with log specimen, image intensifier, and CCD camera).

In a Hopfield neural network, a neuron can be used not only as an input neuron but also as an output neuron. Every Hopfield neural network has a so-called cost function (or energy function), which measures the stability of the network. Signals are transmitted circularly in the whole network, and the operation can be regarded as a process of recovering and strengthening an input signal. In this course the network gradually approaches a stable state in which the cost function is minimized. If a problem can be mapped to the task of minimizing a cost function, a Hopfield neural network can be implemented to obtain an optimal (or near-optimal) solution.

Figure 3: The diagram of a Hopfield neural network with four neurons (z^{-1} denotes the unit delay operator).

Every neuron in a Hopfield neural network has computational capabilities: it processes an input and gives a relevant output. The ith neuron is described by two variables, its input u_i and its output v_i. The output is the state computed by a given activation function f. The transformation is described as

v_i = f(u_i). (2)

In a discrete model, v_i is a discrete variable with a value of zero or one. The input u_i of the ith neuron is related to the weighted sum of the outputs of the other neurons, with the weights describing interconnection strengths. The interconnection strength between the ith neuron and the jth neuron is represented by T_ij. To ensure network convergence, the interconnection strengths are constrained to be symmetrical, that is, T_ij = T_ji. In addition, each neuron has a bias I_i fed to its input. The input, or current state, of the ith neuron is updated according to

u_i = Σ_{j≠i} T_ij v_j + I_i. (3)

3.2. Hopfield Neural Networks for Solving Optimization Problems. Hopfield neural networks have been used successfully for solving optimization problems such as the traveling salesman problem (TSP) [15–17]. In recent years, taking advantage of their optimization capabilities, Hopfield neural networks have been applied in image processing [18–20]. Mapping a practical problem to an energy function is the key step for Hopfield neural networks to solve optimization problems. The basic form of the energy function was described in the literature [21] as

E = −(1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} T_ij v_i v_j − Σ_{i=1}^{N} I_i v_i. (4)

The general steps for solving optimization problems are described as follows.
First of all, the objective function of the problem is formulated by the penalty function method. The designed optimization problem is

min φ(v_1, v_2, . . . , v_N),
subject to p_i(v_1, v_2, . . . , v_N) ≤ 0, i = 1, 2, . . . , k, (5)

where k is the number of restrictions. The objective function is

J = φ(v_1, v_2, . . . , v_N) + Σ_{i=1}^{k} λ_i F[p_i(v_1, v_2, . . . , v_N)], (6)

where λ_i is a sufficiently large constant and F may take different forms. By comparing each term in (6) with the corresponding terms in (4), we can determine the network parameters, the interconnection strength T_ij and the bias I_i of each neuron. Secondly, the dynamic equation of the network is written out. For a continuous network, the dynamic equation can be calculated by

du_i/dt = −k_i ∂E(v_1, v_2, . . . , v_N)/∂v_i, k_i > 0. (7)

For a discrete network, the dynamic equation can be calculated by

Δu_i = −k_i ∂E(v_1, v_2, . . . , v_N)/∂v_i, k_i > 0. (8)

After the dynamic equation is obtained, the original inputs drive the network until it reaches a stable state; the optimization result is then read out.
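The following Python sketch (the paper's own experiments were carried out in Matlab) illustrates the discrete scheme of (3) and (4) on a toy network: asynchronous threshold updates with symmetric, zero-diagonal weights drive the energy downward until a stable state is reached. The weights and biases are random toy values, not a mapped problem.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8
T = rng.normal(size=(N, N))
T = (T + T.T) / 2.0          # symmetric weights, T_ij = T_ji
np.fill_diagonal(T, 0.0)     # no self-feedback
I = rng.normal(size=N)
v = rng.integers(0, 2, size=N).astype(float)

def energy(v):
    return -0.5 * v @ T @ v - I @ v    # equation (4)

for sweep in range(20):
    changed = False
    for i in range(N):
        u = T[i] @ v + I[i]            # equation (3)
        new = 1.0 if u > 0 else 0.0
        if new != v[i]:
            v[i] = new
            changed = True
    if not changed:                    # stable state reached
        break
print("final state:", v.astype(int), " energy:", round(energy(v), 3))
```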

Figure 4: An original wood image and processed wood images: (a) an original wood image; (b) the image processed by the Canny algorithm; (c) the image obtained by merging (a) and (b).

4. Hopfield Neural Networks for Wood Defects Boundary Detection
The boundary detection problem in this paper is regarded as an optimization process that seeks the boundary points minimizing a cost function. The Hopfield neural network is used as a computational network for solving this optimization problem.
4.1. Initiate Boundary. An initial boundary must be estimated before using our Hopfield neural network boundary detection method. The Canny detector was selected to implement this initiation. The Canny detector is the most powerful edge detector provided by the function edge. In the Canny algorithm, an image is smoothed using a Gaussian filter with a specified standard deviation σ to reduce noise. The local gradient, g(x, y) = (G_x² + G_y²)^{1/2}, and the edge direction, α(x, y) = tan^{−1}(G_y/G_x), are computed at each point. An edge point is defined as a point whose strength is locally maximal in the direction of the gradient. The algorithm then tracks along the top of these ridges and sets to zero all pixels that are not actually on a ridge top, so as to give a thin line in the output. Finally, the algorithm performs edge linking by incorporating the weak pixels that are 8-connected to the strong pixels. Figure 4(b) shows a wood defect image processed by the Canny algorithm. Figure 4(a) is an original wood image with a crack defect, and Figure 4(c) shows the image obtained by merging Figure 4(a) with Figure 4(b). Effective edge detection can be implemented using the Canny algorithm: we obtain a good detection result, little noise, and single lines. However, the edge points do not exactly match the actual boundary of the crack, although this edge can be regarded as the initial boundary for the Hopfield neural network boundary detection.
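A sketch of this boundary-initiation step follows, using scikit-image's Canny detector in place of the Matlab function edge used by the authors; the test image and the smoothing parameter sigma are illustrative.

```python
import numpy as np
from skimage import feature

rng = np.random.default_rng(3)
# stand-in for an X-ray wood image: a dark elliptical "defect" plus noise
y, x = np.mgrid[0:128, 0:128]
img = 0.8 - 0.5 * ((((x - 64) / 30.0) ** 2 + ((y - 64) / 12.0) ** 2) < 1.0)
img += rng.normal(0.0, 0.05, img.shape)

initial_boundary = feature.canny(img, sigma=2.0)   # boolean edge map
print("initial boundary points:", int(initial_boundary.sum()))
```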

4.2. Boundary Detection with a Novel Cost Function. Once we have found the initial boundary of a defect in a wood image, we determine an approximate region where the actual boundary is most likely to be located. A slight adjustment can then be made to seek the actual boundary, which is implemented by a Hopfield neural network.
Designing such a neural network with an energy function for an entire image is impossible and impractical. However, the influence between two distant elements is small. Thus, a small window is applied to the image. The neurons inside the window are fully connected to each other, and the correlation between the central element and elements outside the window can be ignored without affecting the final result [22].
In this M × N window, every pixel in the image is represented by a neuron. As shown in Figure 5, a two-dimensional (2-D) binary Hopfield neural network is constructed. All the initial boundary points estimated by the Canny operator are mapped onto the 2-D network. The number of rows equals the number of rows of the initial boundary image, while the number of columns equals the number of columns of the initial boundary image. Each neuron is denoted as a point (i, j), where 1 ≤ i ≤ N and 1 ≤ j ≤ M. A binary output, 0 (resting) or 1 (active), is assigned to each neuron, representing the absence or presence of a boundary element. According to (4), we can define the energy function of the 2-D Hopfield neural network as

E = −(1/2) Σ_{i=1}^{N} Σ_{j=1}^{M} Σ_{k=1}^{N} Σ_{l=1}^{M} T_{i,j;k,l} v_{i,j} v_{k,l} − Σ_{i=1}^{N} Σ_{j=1}^{M} I_{i,j} v_{i,j}, (9)

where v_{i,j} is the binary state of the neuron in row i and column j, and T_{i,j;k,l} is the interconnection weight between the neuron in row i and column j and the neuron in row k and column l. A neuron (i, j) in the network receives weighted inputs T_{i,j;k,l} v_{k,l} from each neuron (k, l) and a bias input I_{i,j} from outside. The total input to neuron (i, j) is computed as

u_{i,j} = Σ_{k=1}^{N} Σ_{l=1}^{M} T_{i,j;k,l} v_{k,l} + I_{i,j}. (10)

The output of each neuron is computed as


v_{i,j} = f(u_{i,j}), (11)

and the activation function f in the network is defined by

f(u_{i,j}) = 1 if u_{i,j} > θ, and 0 otherwise, (12)

where θ is a threshold.

Figure 5: The diagram of the relationship between the wood image and the Hopfield neural network structure. Every pixel in the left image is represented by a neuron in the right network structure.

Figure 6: Original wood image.

The states of the neurons corresponding to the initial boundary points are activated. As the network runs, the operating rule drives the network in the direction that minimizes the energy function, and the neurons representing the actual boundary points are activated gradually. Therefore, the energy function should be designed such that the energy of the network is minimal when the neurons corresponding to the actual boundary points are activated. An objective function meeting the above conditions is described as
E = λ Σ_{i=1}^{m} ( Σ_{j=1}^{n} v_{i,j} − Σ_{j=1}^{n} a_{i,j} )² + η Σ_{i=1}^{m} Σ_{j=1}^{n} v_{i,j} / ( |G_{i,j} − G_{i,j+1}| + |G_{i,j} − G_{i,j−1}| ), (13)

where v_{i,j} is the output of the neuron in row i and column j, a_{i,j} is the initial value of the neuron in row i and column j, G_{i,j} is the gray value of the original wood defect image, and λ and η are constant coefficients.
The first term of the energy function ensures that few neurons are activated in each row except the neurons corresponding to the actual boundary points. The second term ensures that the activated neurons are positioned at the points which have the greatest change in gray value.
By expanding (13) and comparing each term with the corresponding terms in (9), we can determine the network parameters, the interconnection weights T_{i,j;k,l} and the bias inputs I_{i,j}, as

T_{i,j;k,l} = −2λ δ_{i,k},
I_{i,j} = 2λ Σ_{l=1}^{n} a_{i,l} − η / ( |G_{i,j} − G_{i,j+1}| + |G_{i,j} − G_{i,j−1}| ), (14)

where δ_{i,k} = 1 if i = k and zero otherwise. Once the parameters T_{i,j;k,l} and I_{i,j} are obtained using (14), each neuron can evaluate and adjust its state according to (10) and (12).

Once the initial states of the neurons have been set, the Hopfield neural network works continuously until the energy function of the network stops decreasing. Through the network evolution, the optimal (or near-optimal) boundary points are detected; the positions of the activated neurons indicate the detected boundary locations.
4.3. Summary of the Algorithm. The algorithm of wood defects boundary detection using the Hopfield neural network can be summarized as follows.
Step (1) Set the initial states of the neurons based on the initial boundary points detected by the Canny edge detection algorithm.
Step (2) Calculate the input of each neuron, u_{i,j}, using (10).
Step (3) Calculate the output of each neuron, v_{i,j}, using (11).
Step (4) Check the states of the neurons; if no state changed compared with the last iteration, stop; otherwise, go back to Step (2).
Step (5) The final states of the neurons are the output of the network and represent the final boundary points.
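The following sketch strings these steps together in Python (rather than the authors' Matlab), with the weights and biases of (14); the function name detect_boundary, the parameter values, and the toy image are illustrative. The row-wise update excludes the self-term, a common stabilization choice not spelled out in the paper.

```python
import numpy as np

def detect_boundary(G, a, lam=0.5, eta=0.05, theta=0.0, max_sweeps=50):
    """Toy implementation of steps (1)-(5) with T and I from (14)."""
    Gp = np.pad(G.astype(float), ((0, 0), (1, 1)), mode="edge")
    grad = np.abs(Gp[:, 1:-1] - Gp[:, 2:]) + np.abs(Gp[:, 1:-1] - Gp[:, :-2])
    grad = np.maximum(grad, 1e-6)                  # avoid division by zero
    I = 2.0 * lam * a.sum(axis=1, keepdims=True) - eta / grad   # bias (14)
    v = a.astype(float).copy()                     # initial states (step 1)
    for _ in range(max_sweeps):
        # T_{i,j;k,l} = -2*lam*delta_{i,k}: row-wise inhibition, self-term excluded
        u = -2.0 * lam * (v.sum(axis=1, keepdims=True) - v) + I   # (10)
        new_v = (u > theta).astype(float)          # (11) and (12)
        if np.array_equal(new_v, v):               # step 4: stable state
            break
        v = new_v
    return v

G = np.tile(np.linspace(0, 255, 16), (8, 1))       # toy gray-value image
a = np.zeros_like(G); a[:, 8] = 1                  # toy initial boundary
print(detect_boundary(G, a).sum(axis=1))           # active neurons per row
```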

5. Experimental Results and Discussion
The purpose of our boundary detection approach is to detect the boundaries of wood defects in an image and separate them from the normal wood structure. Once isolated, the detected defect can be further processed for recognition of the defect type and other defect characteristics. To show that the proposed method has a good boundary detection capability, it is compared with conventional methods, namely the Sobel edge operator, Roberts edge operator, Prewitt edge operator, Laplacian operator, and Canny operator.
Matlab is a high-level technical computing language with which technical computing problems can be solved faster than with traditional programming languages such as C and C++. It has an image processing toolbox that provides traditional image processing functions such as Sobel, Roberts, Prewitt, Laplacian, and Canny, so the traditional image processing methods can be conveniently implemented with a few simple commands. M-files are macros of Matlab commands that are stored as ordinary text files. An M-file can

Figure 7: Image after edge detection using the Sobel operator.

Figure 8: Image after edge detection using the Roberts operator.

Figure 9: Image after edge detection using the Prewitt operator.

Figure 10: Image after edge detection using the LoG operator.

be either a function with input and output variables or a list of commands. All macros of image processing commands in Matlab are stored in M-files. We can program the proposed commands using M-files to implement image processing tasks, including the Hopfield neural network method.
A computer program coded in M-files of Matlab 7.0 was used to implement the proposed method. The values of the parameters λ and η were determined by experiment: λ was set to 0.5 and η to 0.05. The initial boundary points were estimated using the Canny edge algorithm, and the resulting image is shown in Figure 4(b). The conventional methods were implemented with the image processing toolbox of Matlab 7.0. The original wood images used for evaluating the method were acquired from the X-ray wood nondestructive detection imaging system. Figure 6 shows an original X-ray wood image with a crack. Figures 7, 8, 9, 10, and 11 show the boundary detection images using the Sobel edge operator, Roberts edge operator, Prewitt edge operator, Laplacian operator, and Canny operator, respectively. Figures 12, 13, and 14 show the boundary detection images using our method with thresholds θ of 0.006, 0.005, and 0.004, respectively.
Compared with conventional boundary detection methods, this approach converts the boundary detection problem into an optimization process that seeks the boundary points minimizing a cost function. The gray value of an image pixel is described as the neuron state of a Hopfield neural network, and the states are updated until the cost function reaches its minimum value.

Figure 11: Image after edge detection using the Canny operator.

Figure 12: Image after boundary detection using our method with threshold of 0.006.


Figure 13: Image after boundary detection using our method with
threshold of 0.005.

Figure 14: Image after boundary detection using our method with
threshold of 0.004.

The final states of the neurons form the result image of boundary detection. Taking advantage of the collective computational ability and energy convergence capability of the Hopfield network, the noise is effectively removed. The experimental results show that our method can obtain more noiseless and more vivid boundary points than the traditional methods of boundary detection.

6. Conclusion
An X-ray imaging technique was applied to wood nondestructive detection. Through wood images acquired by this technique, wood defect information such as locations, scales, and types becomes visible. The detected defects can be further processed for recognition of defect types and other defect characteristics.
A Hopfield neural network was applied to the boundary detection of wood images. We designed a novel cost function for a Hopfield neural network that detects a defect boundary by solving an optimization problem. After the boundary initiation using the Canny edge algorithm, a slight adjustment is made to seek the actual boundary, implemented by a Hopfield neural network with the designed cost function. Points that decreased the network energy were detected as boundary points. Taking advantage of the collective computational ability and energy convergence capability of the Hopfield neural network, the experiment produced good results. As shown in Figures 6–14, the method based on the Hopfield neural network was effective in detecting the boundaries of wood defects: the noise was effectively removed, and a more noiseless and vivid wood defect boundary was obtained. Thus, a promising method of wood boundary detection based on a Hopfield neural network with a novel cost function is provided. All the image processing and the construction of the Hopfield neural network in this paper were implemented using the tools of Matlab, which are well suited to the study of images.

References
[1] L. S. Davis, "A survey of edge detection techniques," Computer Graphics and Image Processing, vol. 4, pp. 248–270, 1975.
[2] D.-S. Lu and C.-C. Chen, "Edge detection improvement by ant colony optimization," Pattern Recognition Letters, vol. 29, no. 4, pp. 416–425, 2008.
[3] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB, Publishing House of Electronics Industry, Beijing, China, 2004.
[4] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Publishing House of Electronics Industry, Beijing, China, 2nd edition, 2002.
[5] J. Canny, "Computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.
[6] X. Xu, Z. Yang, and Y. Wang, "A method based on rank-ordered filter to detect edges in cellular image," Pattern Recognition Letters, vol. 30, no. 6, pp. 634–640, 2009.
[7] C. H. Chao and A. P. Dhawan, "Edge detection using Hopfield neural network," in Proceedings of Conference on Applications of Artificial Neural Networks, vol. 2243, pp. 242–251, 1994.
[8] C.-Y. Chang, "A contextual-based Hopfield neural network for medical image edge detection," in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '04), pp. 1011–1014, June 2004.
[9] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: active contour models," International Journal of Computer Vision, vol. 1, no. 4, pp. 321–331, 1988.
[10] Y. Zhu and H. Yan, "Computerized tumor boundary detection using a Hopfield neural network," IEEE Transactions on Medical Imaging, vol. 16, no. 1, pp. 55–67, 1997.
[11] A. K. Hamou and M. R. El-Sakka, "Optical flow active contours with primitive shape priors for echocardiography," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 836753, 10 pages, 2010.
[12] Y. Zheng, G. Li, X. Sun, and X. Zhou, "A geometric active contour model without re-initialization for color images," Image and Vision Computing, vol. 27, no. 9, pp. 1411–1417, 2009.
[13] Y. Yang and X. Gao, "Remote sensing image registration via active contour model," AEU - International Journal of Electronics and Communications, vol. 63, no. 4, pp. 227–234, 2009.
[14] Z. Liu, Modern Ray Detection Technology, China Standard Press, Beijing, China, 1999.
[15] J. F. Rößler and W. H. Gerstacker, "On the convergence of iterative receiver algorithms utilizing hard decisions," EURASIP Journal on Advances in Signal Processing, vol. 2009, Article ID 803012, 8 pages, 2009.
[16] U.-P. Wen, K.-M. Lan, and H.-S. Shih, "A review of Hopfield neural networks for solving mathematical programming problems," European Journal of Operational Research, vol. 198, no. 3, pp. 675–687, 2009.
[17] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.
[18] G. Pajares, M. Guijarro, and A. Ribeiro, "A Hopfield Neural Network for combining classifiers applied to textured images," Neural Networks, vol. 23, no. 1, pp. 144–153, 2010.
[19] R. Cierniak, "A 2D approach to tomographic image reconstruction using a Hopfield-type neural network," Artificial Intelligence in Medicine, vol. 43, no. 2, pp. 113–125, 2008.
[20] R. Sammouda and M. Sammouda, "Improving the performance of Hopfield neural network to segment pathological liver color images," International Congress Series, vol. 1256, pp. 232–239, 2003.
[21] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, pp. 3088–3092, 1984.
[22] S. Lu, Z. Wang, and J. Shen, "Neuro-fuzzy synergism to the intelligent system for edge detection and enhancement," Pattern Recognition, vol. 36, no. 10, pp. 2395–2409, 2003.


Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 375171, 6 pages
doi:10.1155/2010/375171

Research Article
Attenuation Analysis of Lamb Waves Using
the Chirplet Transform
Florian Kerber,1 Helge Sprenger,2 Marc Niethammer,3 Kritsakorn Luangvilai,4
and Laurence J. Jacobs4
1 Institute of Mathematics and Computing Science, University of Groningen, 9700 AV Groningen, The Netherlands
2 Institute of Applied and Experimental Mechanics, University of Stuttgart, Pfaffenwaldring 9, 70569 Stuttgart, Germany
3 Computer Science Department, University of North Carolina, Chapel Hill, NC 27599-3175, USA
4 School of Civil and Environmental Engineering and G.W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA

Correspondence should be addressed to Florian Kerber, f.j.kerber@rug.nl


Received 22 December 2009; Revised 26 March 2010; Accepted 10 June 2010
Academic Editor: Joao Marcos A. Rebello
Copyright 2010 Florian Kerber et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Guided Lamb waves are commonly used in nondestructive evaluation to monitor plate-like structures or to characterize properties of composite or layered materials. However, the dispersive propagation and multimode excitability of Lamb waves complicate their analysis. Advanced signal processing techniques are therefore required to resolve both the time and frequency content of the time-domain wave signals. The chirplet transform (CT) has been introduced as a generalized time-frequency representation (TFR) incorporating more flexibility to adjust the window function to the group delay of the signal when compared to the more classical short-time Fourier transform (STFT). Exploiting this additional degree of freedom, this paper applies an adaptive algorithm based on the CT to calculate mode displacement ratios and attenuation of Lamb waves in elastic plate structures. The CT-based algorithm has a clear performance advantage when calculating mode displacement ratios and attenuation for numerically simulated Lamb wave signals. For experimental data, the CT retains an advantage over the STFT, although measurement noise and parameter uncertainties lead to larger overall deviations from the theoretically expected solutions.

1. Introduction
Ultrasonic waves are often used in nondestructive testing to evaluate the integrity of structural components, as well as to determine material properties of composite or layered materials. In various disciplines such as civil or aerospace engineering, multimode, dispersive guided waves such as Lamb waves have been applied; see Chimenti [1] for an overview. However, complicated signal analysis is the trade-off for their versatility. In fact, the main challenges in processing Lamb wave signals are due to their very characteristics. Firstly, dispersion phenomena require a resolution of the frequency content of a Lamb wave signal over time which is inherently compromised by the uncertainty principle. Secondly, Lamb waves are multimodal, which means that interferences between individual modes complicate the allocation of energy- and displacement-related quantities to a specific mode of excitation. A powerful technique to address these difficulties is time-frequency representations; see Niethammer et al. [2] for an overview. To improve results obtained with conventional methods like the STFT or the WT, Hong et al. [3] developed an advanced algorithm based on the STFT using window functions that approximate the group delay of each mode of propagation individually. Kuttig et al. [4] further refined this approach by using the chirplet transform as a generalized TFR, which allows for higher-order approximations of the group delay.
Encouraged by these advances in signal processing, this paper further explores the potential of CT-based methods for dispersive wave analysis. The problem considered is to extract displacement- and energy-related quantities of individual Lamb wave modes in elastic plates, a problem which is relevant for several NDE applications. Variations of the energy associated with a particular mode can, for example,

EURASIP Journal on Advances in Signal Processing

be used to localize notches in plates by means of a correlation


technique, see [5]. Ultrasonic attenuation describes the
amplitude decay of wave modes due to energy leakage or
the geometry of a specimen. Geometric spreading of Lamb
waves in plate structures was examined by Luangvilai et al.
[6] using the STFT. Energy leakage in absorbing plates was
studied by Luangvilai et al. [7] to determine attenuation
coecients using a refined STFT algorithm. While STFTbased techniques analyze multimode time-domain signals
as a whole, the CT-based algorithm uses basis functions
specially adjusted to the dispersion relation of each mode of
propagation. Physical quantities like displacement or energy
can thus be allocated more consistently to individual modes.
Since the shaping of the basis functions depends on the
knowledge of the dispersion relation for a given set-up,
this work considers both numerically simulated ([6]) and
experimentally generated ([5]) time-domain signals of Lamb
waves in an aluminum plate to evaluate the robustness of the
CT-based algorithm as well as its performance.
The paper is organized as follows: first a general definition of the CT is given before describing its use in
NDE applications to resolve the time-frequency content of
dispersive wave signals by means of an adaptive model-based
algorithm. Section 3 contains a description of the candidate
NDE problem. The results for mode displacement ratios and
geometric spreading of both theoretically and experimentally
generated wave signals are presented in Section 4. The
concluding remarks of Section 5 outline possibilities to apply
the presented technique to other NDE applications.

2. The Chirplet Transform and Its Use in Dispersive Wave Analysis

t0 , 0 , s, q, p =
=

1
2

x(t)gt0 ,0 ,s,q,p (t)dt

X()Gt0 ,0 ,s,q,p ()d,

where denotes complex conjugation. The basis function


g(t) as well as its Fourier transform G() belongs to a family
of chirp signals,
Gt0 ,0 ,s,q,p () = Tt0 F0 Ss Qq P p H(),
gt0 ,0 ,s,q,p (t) = Tt0 F0 Ss Qq P p h(t),

(3)

F0 H() = H( 0 ).

(4)

Frequency shift:
F0 h(t) = ei0 t h(t),
Scaling:

1
Ss h(t) = h(t/s),
s

Ss G() = sG(s).

Time shear:

 1/2


(6)

p
P p H() = exp i 2 H().
2
Frequency shear:

q
Qq h(t) = exp i t 2 h(t),
2


1/2

Qq H() = (iq)

22
exp i
 H() .
q

(7)

Higher order time shear P p1 ,p2 ,... H() can also be applied
resulting in
p1 2 p2 3
+ +
2
3

H(),

(8)

and similarly, higher order frequency shear is given by




Qq1 ,q2 ,... h(t) = exp i

q1 2 q2 3
t + t +
2
3

h(t).

(9)

The energy density P ct of the chirplet transform at every


point (t0 , 0 , s, q, p) in the five dimensional parameter space
is given by

 
 2
P ct t0 , 0 , s, q, p = C ct t0 , 0 , s, q, p .

(10)

By comparison, the short-time Fourier transform only


allows to shift the window function in time and frequency,
Gt0 ,0 () = Tt0 F0 H(),

stft

C (t0 , 0 ) =

gt0 ,0 (t) = Tt0 F0 h(t),

1
=
2

(11)

x(t)gt0 ,0 (t)dt

(12)

X()Gt0 ,0 ()d.

The energy density of the STFT, the spectrogram, is given by



P stft (t0 , 0 ) = C stft (t0 , 0 ) .


(2)

(5)

t2
exp i
 h(t),
2p

P p h(t) = ip

to obtain
(1)

Tt0 H() = eit0 H().

Tt0 h(t) = h(t t0 ),

2.1. Definition of the Chirplet Transform. The standard


definition of the chirplet transform is given by the inner
product of a basis function g(t) and the signal x(t),


Time shift:

P p1 ,p2 ,... H() = exp i

The chirplet transform has been introduced as a generalized


time-frequency representation by Mann and Haykin [8].
The basis function can be adjusted by means of shift,
shear, and scaling operators, resulting in a five-dimensional
parameter space for the energy density which comprises as
projections the respective densities obtained from a shorttime Fourier transform (time and frequency shift) and a
wavelet transform (time shift and scaling).


ct

where the operators Tt0 , F0 , S s , Qq , and P p act in the


following manner on the window function h(t) or its Fourier
transform H(), respectively,

(13)

A more detailed discussion of TFRs used for dispersive wave


analysis can be found, for example, in [2].
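As a minimal illustration of (1) and (12), the Python sketch below correlates a linear chirp with a Gaussian atom placed on its time-frequency ridge, once as a plain STFT atom and once with a matched quadratic phase (the frequency shear Q_q of (7)); all signal and window parameters are invented for the demonstration, and the chirp-rate-matched atom should yield the larger coefficient magnitude.

```python
import numpy as np

fs = 10.0e6                                  # sampling rate (Hz), illustrative
t = np.arange(0, 2e-3, 1 / fs)
x = np.sin(2 * np.pi * (50e3 * t + 2e7 * t ** 2))   # linear chirp test signal

def atom(t, t0, w0, s=2.2e-4, q=0.0):
    """Gaussian atom shifted to (t0, w0); q adds the quadratic phase of Q_q."""
    g = (np.pi * s ** 2) ** -0.25 * np.exp(-((t - t0) ** 2) / (2 * s ** 2))
    return g * np.exp(1j * (w0 * (t - t0) + 0.5 * q * (t - t0) ** 2))

def coeff(x, g):
    return np.sum(x * np.conj(g)) / fs       # discrete inner product <x, g>

t0 = 1e-3
w0 = 2 * np.pi * (50e3 + 2 * 2e7 * t0)       # instantaneous frequency at t0
c_stft = coeff(x, atom(t, t0, w0))                    # q = 0: STFT atom, (12)
c_ct = coeff(x, atom(t, t0, w0, q=2 * np.pi * 4e7))   # chirp-rate-matched atom
print(f"|C_stft| = {abs(c_stft):.3e}, |C_ct| = {abs(c_ct):.3e}")
```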

2.2. Adaptive Algorithm Based on the Chirplet Transform. For ease of visualization, only subspaces of the five-dimensional parameter space of the chirplet transform are considered. In a fashion analogous to the STFT and its energy density representation, the spectrogram, the time-frequency plane is chosen to analyze dispersive waves. According to the definition of the energy density P^i, i ∈ {stft, ct}, the squared amplitude of a time-domain signal recording particle displacement is proportional to the energy of the incident Lamb wave, but comprises contributions of all modes of propagation. The objective is to identify energy- or displacement-related components of individual modes in regions of sufficient mode separation. To that end, the energy content of the time-domain signal is averaged in the time-frequency plane over a region around every point of the dispersion curve of a particular mode using a specially designed window function. In the case of the STFT, time and frequency shift operations result in a region of averaging that approximates the group delay of an individual mode of propagation to zeroth order, whereas the CT-based algorithm as described by Kuttig et al. [4] additionally uses time shearing, resulting in higher-order approximations. Note that the dispersion curves for a given system depend on the material properties (in the example of a single aluminum plate, its elastic modulus and density as well as its thickness), which determines the robustness of the CT-based algorithm.
For this research, the CT is calculated with a normalized Gaussian window

g(t) = (π s_0²)^{−1/4} exp(−(t − t_0)²/(2 s_0²)), (14)

with a default value of s_0 = 2.2 μs. The scaling S_s is 1 by default unless the 3σ-isopleths of (14), described by ellipses with half-axes of 3s_0 in time and 1/(3s_0) in frequency, intersect with the dispersion curve of another mode. Thus, at least 99.9% of the energy of the window function is concentrated around the mode of interest. The dispersion curves are approximated by a fifth-order polynomial around every point (t_0, ω_0),

t(ω) = t_0 + p_1(ω − ω_0) + p_2(ω − ω_0)² + ··· + p_5(ω − ω_0)^5. (15)

The group delay τ_g(ω) of a signal H(ω) = A(ω) exp[iφ(ω)] in the frequency domain is given by

τ_g(ω) = −(d/dω) φ(ω). (16)

The group delay of the window function G_{t_0,ω_0,p_1,p_2,...,p_5}(ω) = T_{t_0} F_{ω_0} P_{p_1,p_2,...,p_5} H(ω) can thus be fitted to (15) for every mode of propagation by a fifth-order time shear (8) with parameters p_1, . . . , p_5. Figure 1 depicts the 3σ-regions of the window functions adjusted by the adaptive algorithm for the first symmetric mode s_0. The CT is not calculated in frequency regions of interference with other modes, for example, around 2 MHz at the intersection of the a_0- and s_0-modes. More details about the adaptive algorithm can be found in [4]. The same Gaussian window function (14) was also used for the STFT-based analysis.
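The window adaptation of this section can be sketched as a least-squares fit of the polynomial (15) to sampled group-delay values; the dispersion data below is a synthetic curve, not the Rayleigh-Lamb solution of Section 3, and the frequency scaling is merely a conditioning choice for the fit.

```python
import numpy as np

w = np.linspace(1.0, 5.0, 200) * 2 * np.pi * 1e6   # angular frequency (rad/s)
tau = 3.0e-4 + 4.0e-11 * (w - w[0]) + 1.0e-18 * (w - w[0]) ** 2  # synthetic group delay (s)

i0 = 100                                           # expansion point (t0, w0)
t0, w0 = tau[i0], w[i0]
dw = (w - w0) / 1e6               # scale to Mrad/s to keep the fit well conditioned
A = np.vander(dw, 6, increasing=True)[:, 1:]       # columns dw^1 .. dw^5
p, *_ = np.linalg.lstsq(A, tau - t0, rcond=None)   # p1..p5 of (15), scaled units
print("p1..p5 (scaled):", np.array2string(p, precision=3))
```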


10

Figure 1: CT basis functions adjusted to the s0 -mode using 5thorder time shear. The dispersion curves are the solution of the
Rayleigh-Lamb equations (17) for an aluminum 3003 plate of
thickness 0.99 mm and source-receiver distance of 90 mm.

3. Problem Setting
In this paper, Lamb waves traveling in aluminum plate
structures are considered. Due to the relatively simple
geometry of the plate, it is possible to compute dispersion
curves based on the analysis of the Rayleigh-Lamb equations
for stress-free boundaries as derived in Achenbach [9],


4k2 pq
tan qh
  = 
2 ,
tan ph
q2 k 2

(14)

with a default value of s0 = 2.2 s. Scaling Ss is 1 by default


unless the 3-isopleths of (14) described by ellipses with
half-axis of 3s0 in time and 1/(3s0 ) in frequency intersect with
the dispersion curve of another mode. Thus, at least 99.9%
of the energy of the window function is concentrated around
the mode of interest. The dispersion curves are approximated
by a fifth-order polynomial around every point (t0 , 0 ),

g () =

800

Time (s)


where

p² = ω²/c_L² − k², q² = ω²/c_T² − k², (18)

and 2h is the plate thickness. The numerical solution of (17) for an aluminum plate is shown in Figure 1 up to frequencies of 10 MHz. The a_0- and s_0-modes are well separated in the frequency range 0–1.8 MHz and again between about 2–3 MHz. The same holds for the a_1-mode between 2–3 MHz and 4–5 MHz and for the s_1-mode between 3.5–5 MHz, so the evaluation will be restricted to the first two symmetric (s_i) and antisymmetric (a_i) modes in these frequency ranges.
Consider two different time-domain signals. Firstly, numerically simulated data was taken from Luangvilai et al. [6] to have an undisturbed signal for performance evaluation. The authors used normal mode expansion to simulate the out-of-plane displacement field for particles on the plate surface excited by a point-like source for source-receiver distances between 50 mm and 90 mm. Secondly, real measurement data of the out-of-plane velocity field at the surface of an aluminum plate acquired by Benz et al. [5] was available to determine the robustness of the proposed signal processing technique. The experimental setup in this case consisted of


Geometric spreading d2 /d1

Displacement ratio a0 /s0

0.8
0.7
0.6
0.5

1.4
1.3
1.2
1.1
0.5

0.4

1.5
2
Frequency (MHz)

2.5

(a)
0.3

0.2

Geometric spreading d2 /d1

Normalized mode displacement

0.9

CT-based results for mode a0


1.5

0.1
0
0

4
5
6
Frequency (MHz)

10

Theoretical
STFT
CT

Figure 2: Out-of-plane displacement ratio of mode s0 normalized


to mode a0 for the synthetic signal.

an aluminum plate of dimensions 100 mm × 100 mm × 1 mm and a noncontact, point-like laser measurement and detection system. A laser source was used to generate Lamb waves in the aluminum plate for different source-receiver distances ranging from 50 to 150 mm.
For each of the two data sets, the mode displacement ratios for selected modes are determined as a means to detect material irregularities, for example, for notch localization as in [5]. Apart from that, the amplitude decay over time of individual modes is analyzed, as it contains information about the geometry of the specimen. Geometric spreading is given by the quotient √(d_2/d_1) for two propagation distances d_1 < d_2. Such a normalized measure for geometric spreading is chosen since the effect of the excitation source on the energy density cancels out.
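A small sketch of the normalization argument: under a cylindrical (1/√d) spreading model, the amplitude ratio between two propagation distances equals √(d2/d1) independently of the source strength; the numbers below are illustrative.

```python
import numpy as np

def amplitude(d, source_strength):
    return source_strength / np.sqrt(d)    # cylindrical spreading model

d1, d2 = 50.0, 90.0                        # propagation distances in mm
for strength in (1.0, 7.3):                # the source effect cancels out
    ratio = amplitude(d1, strength) / amplitude(d2, strength)
    print(f"A(d1)/A(d2) = {ratio:.4f}  (expected sqrt(d2/d1) = {np.sqrt(d2 / d1):.4f})")
```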

4. Results
First consider the results obtained for the numerically
simulated signal. The particle displacement associated with a
particular mode is extracted from the modulus |C_i(t0, ω0)|,
i ∈ {stft, ct}, of each transform. To eliminate the effect of the
excitation source, these values are normalized to a particular
mode; Table 1 contains the results for normalization with
respect to a0 and s0, obtained by taking the point-by-point quotient
of the respective moduli at every frequency ω0. Figure 2
shows the ratio s0/a0 as obtained from the STFT- and CT-based
algorithms versus the exact theoretical solution. The
CT results are very close to the theoretical solution, while the
amplitude ratio extracted from the STFT deviates especially
at frequencies where individual modes are highly dispersive,
such as the s0-mode for frequencies between 2 and 3 MHz.
Since the STFT does not use window functions adjusted to
the dispersion behavior of individual modes, drastic changes
in the group delay can lead to inconsistent values using the
STFT, whereas the CT can keep a high level of accuracy.
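A zeroth-order extraction of this kind can be sketched as follows; the group-delay function would come from the dispersion curves of (17), and the STFT settings and window size are illustrative choices rather than the authors' parameters.

import numpy as np
from scipy.signal import stft

def mode_amplitude(sig, fs, group_delay, half_width=10):
    # Average |STFT| over a short time interval centred on the predicted group
    # delay of one mode at every frequency; the CT-based variant additionally
    # shears the window along the dispersion curve.
    f, t, Z = stft(sig, fs=fs, nperseg=256)
    amps = np.empty(len(f))
    for i, f0 in enumerate(f):
        j = int(np.argmin(np.abs(t - group_delay(f0))))   # nearest time bin
        segment = Z[i, max(0, j - half_width): j + half_width + 1]
        amps[i] = np.abs(segment).mean()
    return f, amps

A displacement ratio such as s0/a0 is then the point-by-point quotient of two such amplitude curves.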

[Figure 3: Geometric attenuation for the a0-mode of the synthetically
generated signal determined with the STFT (b) and CT (a).
Dashed lines represent the theoretically expected solution sqrt(d2/d1);
dash-dotted lines are the results obtained from the CT- and STFT-based
algorithms, respectively, for the distances 40 mm (pink),
50 mm (green), 60 mm (black), 70 mm (blue), and 80 mm (red)
related to 90 mm of propagation distance.]
In order to quantify the level of accuracy of each method,
a simple metric p is introduced that maps a function x(t) onto
a positive real number,

    p(x(t)) = (1/L) sqrt(⟨x(t), x(t)⟩),    (19)

where L = ∫ dt and ⟨·, ·⟩ is defined as the inner product for
functions [10], ⟨x(t), y(t)⟩ = ∫ x*(t) y(t) dt. This metric
will be used to measure the mean absolute deviation of
quantities extracted with the introduced signal processing
techniques from the theoretical solution. Note that the
adaptive CT-based algorithm only computes energy densities
in frequency regions where individual modes are sufficiently
separated, that is, when the 3σ-region of averaging does not
intersect with any other mode. The performance measure for
both the STFT- and the CT-based method will therefore be
restricted to these regions only. Table 1 confirms that the CT-based
results for the numerically simulated signal deviate
much less from the theoretical solution compared to the ones
obtained from the STFT.
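A discrete counterpart of metric (19) is straightforward; the trapezoidal evaluation of the inner product on sampled data is our choice for illustration.

import numpy as np

def p_metric(x, t):
    # p(x) = sqrt(<x, x>)/L with <x, y> = integral of conj(x(t)) * y(t) dt,
    # evaluated on samples x over the time axis t with the trapezoidal rule.
    L = t[-1] - t[0]
    inner = np.trapz(np.conj(x) * x, t)
    return float(np.sqrt(inner.real)) / L

Applied to the difference between an extracted curve and its theoretical counterpart, this yields deviations of the kind reported in Tables 1 and 2.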
Table 1: Average deviation from the theoretical mode displacement ratio (%).

Displacement ratio | Numerical signal, CT | Numerical signal, STFT | Experimental signal, CT | Experimental signal, STFT
a1/a0 |   9.49 |  18.60 | 173.73 | 115.36
s0/a0 |  67.89 | 686.36 |  46.96 |  83.06
s1/a0 |   8.46 |  14.65 | 343.88 | 223.07
a0/s0 |  10.54 |  51.33 |  35.75 |  52.99
a1/s0 |   4.00 |  66.28 | 272.29 | 289.76
s1/s0 |  12.19 |  10.24 | 484.40 | 385.70

Table 2: Average deviation (%) from the theoretical geometric attenuation.

Distance | a0 CT | a0 STFT | a1 CT | a1 STFT | s0 CT | s0 STFT | s1 CT | s1 STFT
Synthetically generated signal:
80/90 mm | 0.31 |  3.64 | 1.65 | 2.54 | 1.71 |  9.08 |   5.61 | 19.96
70/90 mm | 0.43 |  6.40 | 2.80 | 3.62 | 1.99 | 11.78 |   8.39 | 24.37
60/90 mm | 0.49 |  8.48 | 4.65 | 5.20 | 6.23 | 24.37 |  13.19 | 24.09
50/90 mm | 0.24 | 10.12 | 1.79 | 7.80 | 6.56 | 28.91 |  38.22 | 32.26
40/90 mm | 0.28 | 14.04 | 3.14 | 7.28 | 6.33 | 28.26 | 132.4  | 40.03
Experimentally generated signal:
120/150 mm | 1.91 |  4.16 | 15.03 | 22.78 | 17.01 | 20.91 | 32.92 | 18.7
90/150 mm  | 7.26 | 14.95 | 15.83 | 22.08 |  9.63 | 20.63 | 24.63 | 16.38
60/150 mm  | 2.50 | 16.48 |  9.71 | 26.91 | 13.56 | 32.68 | 67.88 | 14.46
50/150 mm  | 3.44 | 19.21 |  3.12 | 14.56 | 10.78 | 31.37 | 76.49 |  5.29

A similar observation also holds true when calculating
geometric spreading for individual modes. Figure 3 depicts
the results for the first antisymmetric mode a0 for frequencies
up to 3 MHz. The dashed lines indicate the theoretically
expected ratio sqrt(d2/d1) for different source-receiver distances
normalized to the longest distance 90 mm. The theoretical
solution is compared to the quotient of the moduli
|C_i(t0, ω0)|, i ∈ {stft, ct}, which represent the amplitudes
calculated at a frequency ω0 for a particular mode. The CT-based
algorithm almost exactly predicts geometric spreading
over the frequency range from about 0.3 to 1.5 MHz. In
contrast, the STFT results differ from the theoretically
expected amplitude decay even in those regions where the
a0-mode is separated. This is confirmed by earlier results of
Luangvilai et al. [6], who reported that the amplitude decay
due to the propagation pattern cannot be recalculated exactly
using the spectrogram. A similar observation can be made
for modes a1, s0, and s1 as well; that is, the relative error for
the CT is up to ten times smaller than for the STFT, cf.
Table 2. Only in frequency regions in which the group delay
is almost constant, for example, at about 1-2 MHz for the a0-mode,
do both transforms produce similar amplitude ratios. In
general, longer propagation distances improve the resolution
due to better mode separation in the time-frequency plane.
The analysis of the experimentally obtained data yields
smaller differences between the two methods, as shown by
the mean relative deviations from the exact solution for
both the mode displacement ratios and geometric spreading;
see Tables 1 and 2. The CT produces results closer to the
theoretically expected solution, especially if source-receiver
distances are large enough, as can be seen in Figure 4 when
comparing geometric spreading for propagation distances of
50 mm (green curve) and 120 mm (red curve) relative to
150 mm. However, the level of accuracy drops considerably
compared to the previous results. The extraction of both
displacement and energy related quantities associated with
individual modes from a time-frequency representation
depends on the dispersion relation, which in turn is determined
by the material properties of the specimen. Parameter
variations as well as measurement noise therefore influence
the accuracy of the STFT-based approach, and even more so
the CT-based algorithm, since in the latter case the basis
functions are adjusted for the dispersion, too. When the
theoretical dispersion curves are closely matched, as for the
numerically simulated signal, the performance advantage of
the CT-based algorithm becomes apparent.

5. Conclusions
The main goal of this paper is to evaluate the potential of the
chirplet transform for dispersive wave analysis. The problem
of associating displacement- or energy-related quantities with
individual modes of propagation is of interest in nondestructive
evaluation. The theoretical advantage of the proposed
method, that is, tailoring regions of averaging to individual
modes based on the dispersion relation, becomes apparent
when analyzing numerically simulated Lamb wave signals
traveling in an aluminum plate. Extracting displacement
ratios and geometric spreading for individual modes of
propagation succeeds with high accuracy in regions with
sufficient mode separation. This strongly indicates that
the CT-based algorithm can achieve a better performance
than more conventional approaches like the spectrogram.
The potential to extract displacement- and energy-related
quantities associated with a particular mode of a dispersive
wave therefore qualifies it as a versatile tool in NDE
applications. As a model-based approach, the CT-based
algorithm uses information about the dispersion relation.

[Figure 4: Geometric attenuation for the a0-mode of the experimentally
generated signal determined with the STFT (b) and CT (a).
Dashed lines represent the theoretically expected solution sqrt(d2/d1);
dash-dotted lines are the results obtained from the CT- and STFT-based
algorithms, respectively, for the distances 50 mm (green),
60 mm (black), 90 mm (blue), and 120 mm (red) related to 150 mm
of propagation distance.]

Since the dispersion relation in turn depends on the
material properties and geometry of the specimen, precise
knowledge about the experimental set-up is a prerequisite to
obtain reliable results with this technique. Consequently,
the level of accuracy is considerably lower when applied
to the experimentally generated data, also for the STFT-based
approach. Improving robustness properties as well
as algorithmic efficiency remains a goal of future research
to make the CT-based technique more easily available and
applicable for quantitative nondestructive evaluation.

Acknowledgment
The Deutscher Akademischer Austausch Dienst (DAAD)
provided partial support to F. Kerber.

References
[1] D. E. Chimenti, "Guided waves in plates and their use in materials characterization," Applied Mechanics Reviews, vol. 50, no. 5, pp. 247-284, 1997.
[2] M. Niethammer, L. J. Jacobs, J. Qu, and J. Jarzynski, "Time-frequency representations of Lamb waves," Journal of the Acoustical Society of America, vol. 109, no. 5, pp. 1841-1847, 2001.
[3] J.-C. Hong, K. H. Sun, and Y. Y. Kim, "Dispersion-based short-time Fourier transform applied to dispersive wave analysis," Journal of the Acoustical Society of America, vol. 117, no. 5, pp. 2949-2960, 2005.
[4] H. Kuttig, M. Niethammer, S. Hurlebaus, and L. J. Jacobs, "Model-based signal processing of dispersive waves with chirplets," Journal of the Acoustical Society of America, vol. 119, no. 4, pp. 2122-2130, 2006.
[5] R. Benz, M. Niethammer, S. Hurlebaus, and L. J. Jacobs, "Localization of notches with Lamb waves," Journal of the Acoustical Society of America, vol. 114, no. 2, pp. 677-685, 2003.
[6] K. Luangvilai, L. J. Jacobs, and J. Qu, "Far-field decay of laser-generated, axisymmetric Lamb waves," in Review of Progress in Quantitative Nondestructive Evaluation, D. O. Thompson and D. E. Chimenti, Eds., vol. 700 of AIP Conference Proceedings, pp. 158-164, 2003.
[7] K. Luangvilai, L. J. Jacobs, P. D. Wilcox, M. J. S. Lowe, and J. Qu, "Broadband measurement for an absorbing plate," in Review of Progress in Quantitative Nondestructive Evaluation, D. O. Thompson and D. E. Chimenti, Eds., vol. 24 of AIP Conference Proceedings, pp. 297-304, 2005.
[8] S. Mann and S. Haykin, "The chirplet transform: a generalization of Gabor's logon transform," in Proceedings of the Vision Interface, pp. 205-212, 1991.
[9] J. D. Achenbach, Wave Propagation in Elastic Solids, Elsevier, New York, NY, USA, 1st edition, 1999.
[10] P. K. Jain, O. P. Ahuja, and K. Ahmed, Functional Analysis, John Wiley & Sons, New York, NY, USA, 1st edition, 1995.

Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 176203, 14 pages
doi:10.1155/2010/176203

Research Article
Flexible Riser Monitoring Using Hybrid Magnetic/Optical Strain
Gage Techniques through RLS Adaptive Filtering
Daniel Pipa, Sergio Morikawa, Gustavo Pires, Claudio Camerini, and Joao Marcio Santos
Materials, Equipments and Corrosion Department (TMEC), Petrobras Research and Development Center (CENPES),
Av. Horacio Macedo, 950. Cidade Universitaria, 21941-915 Rio de Janeiro, RJ, Brazil
Correspondence should be addressed to Daniel Pipa, danielpipa@gmail.com
Received 30 November 2009; Revised 5 April 2010; Accepted 7 May 2010
Academic Editor: Joao Manuel R. S. Tavares
Copyright © 2010 Daniel Pipa et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Flexible riser is a class of flexible pipe which is used to connect subsea pipelines to floating offshore installations, such as FPSOs
(floating production/storage/off-loading units) and SS (semisubmersible) platforms, in oil and gas production. Flexible risers are
multilayered pipes typically comprising an inner flexible metal carcass surrounded by polymer layers and spiral wound steel
ligaments, also referred to as armor wires. Since these armor wires are made of steel, their magnetic properties are sensitive to
the stress they are subjected to. By measuring their magnetic properties in a nonintrusive manner, it is possible to compare the
stress in the armor wires, thus allowing the identification of damaged ones. However, one encounters several sources of noise
when measuring electromagnetic properties contactlessly, such as movement between specimen and probe, and magnetic noise.
This paper describes the development of a new technique for automatic monitoring of armor layers of flexible risers. The proposed
approach aims to minimize these current uncertainties by combining electromagnetic measurements with optical strain gage data
through a recursive least squares (RLS) adaptive filter.

1. Introduction
Flexible risers are an important component of offshore production systems for oil and gas. They are used to link subsea
pipelines to floating installations, such as FPSOs (floating
production/storage/off-loading units). Flexible risers have
been one of the preferred deepwater riser solutions in many
regions of the world due to their good dynamic behavior and
reliability [1].
Petrobras is a Brazilian multinational petroleum company whose businesses include oil and gas exploration,
production, transportation, refining, and distribution. Since
most Brazilian oil reserves are located offshore and often
under deep water, Petrobras oil production is highly dependent on platforms and offshore equipment such as flexible
risers. Integrity management of flexible risers is essential to
ensure the safe operation of a production unit.
The main failure mode of flexible risers, when operating
in deep waters, occurs at the riser's top section close to the end
fitting due to fatigue in tensile armor wires. It is known,
however, that riser failure only happens after the rupture

of a significant number of wires. Therefore, the structural
integrity of a riser-end fitting connection may be assessed
through the monitoring of wire rupture. Close to the end
fitting, wires are subjected to tensile stress at 30% to 50% of
the yield point. Considering that rupture reduces stress to zero,
the structural integrity of the end fitting connection may also be
assessed through monitoring tensile armor wire stress [2].
By identifying an unstressed wire among stressed ones,
it is possible to infer that the remnant wires are subjected to
higher loads than their operational design. This implies that
riser integrity is uncertain and insecure. MAPS-FR (MAPS
is a registered trade mark of MAPS Technology Ltd.) is an
equipment capable of magnetically comparing the stress in
the wires through the polymer layers, thus indicating broken
wires and assessing riser integrity.
However, one encounters several sources of noise when
measuring electromagnetic properties contactlessly, such as
movement between specimen and probe, and magnetic
noise. This paper describes the development of a new
technique for automatic monitoring of armor layers of
flexible risers. The proposed approach aims to minimize
these current uncertainties by combining electromagnetic
measurements with optical strain gage data through a
recursive least squares (RLS) adaptive filtering technique.
This paper is organized as follows. Section 2 introduces
the flexible riser and comments on its main failure modes.
Section 3 presents the proposed method as well as each
of its components. Finally, Section 4 presents the results
obtained in a laboratory trial, which attest the potential of
the method. A conclusion is drawn in Section 5. Flexible
pipe is a general term and denotes a type of pipe, whereas
flexible riser designates the vertical segment of a pipe
which is usually connected to an offshore production unit.
This article deals with signal processing algorithms,
rather than the physical phenomena underlying the correspondence between mechanical load in ferromagnetic materials
and their electromagnetic properties. The idea is to show
that this relation does not need to be fully determined and
understood if one uses a global load reference. Additionally,
if there exists an unknown or unstable gap between probe
and sample, this relation can be so complex that a
nonreferenced measurement of stress can be difficult. On
the other hand, a global load estimate can enhance the
results.



Table 1: Layer functions.

Layer | Function
Carcass | Prevent collapse
Internal pressure sheath | Internal fluid integrity
Interlocked pressure armor | Hoop stress resistance
Back-up pressure armor | Hoop stress resistance
Antiwear layer | Reduce friction between layers
Inner layer of tensile armor | Crosswound armor wires used for tensile stress resistance, balanced with outer layer
Antiwear layer | Reduce friction between layers
Outer layer of tensile armor | Crosswound armor wires used for tensile stress resistance, balanced with inner layer
Outer sheath | External fluid integrity

2. Flexible Risers

Flexible risers are flexible pipes which are generally used to
link subsea pipelines to floating offshore installations, such
as FPSOs (floating production/storage/off-loading units). In
deepwater oil and gas exploration, flexible risers are used for
oil and gas production, water and gas injection, and oil well
control and monitoring [3]. Flexible risers are also used for
oil and gas exportation to the shore or to a storage unit, such
as FSOs (floating storage/off-loading units).
A flexible pipe is made up of several different layers.
The main components are leakproof thermoplastic barriers
and corrosion-resistant steel wires. The helically wound steel
wires give the structure its high-pressure resistance and
excellent bending characteristics, thus providing flexibility
and superior dynamic behavior. This modular construction,
where the layers are independent but designed to interact
with one another, means that each layer can be made fit-for-purpose
and independently adjusted to best meet a
specific field development requirement [4]. Figure 1 shows
an example of a flexible riser and Table 1 summarizes the
function of each layer.
This paper focuses on the integrity monitoring of the
outer layer of tensile armor, which supports axial load and
the riser weight. Its integrity is important to maintain a
reliable connection between riser and floating unit (i.e.,
FPSO or platform).
2.1. Riser Failure Modes. Failure modes denote possible
processes which cause the failure of a flexible pipe. In
practice, a failure constitutes a loss of ability to transport
product safely and effectively. This may be catastrophic
(the pipe ruptures or breaks) or may constitute a minor,
uncontrolled loss of pipe integrity or pipe blockage [5].

Table 2: Main failure modes of flexible pipes.

No. | Failure mode | Description
1 | Collapse | Collapse of carcass and/or pressure armor due to excessive tension, excessive external pressure, or installation overloads
2 | Burst | Rupture of tensile or pressure armors due to excess internal pressure
3 | Tensile failure | Rupture of tensile armors due to excess tension
4 | Compressive failure | Birdcaging of tensile armor wires
5 | Overbending | Rupture or crack of external or internal sheaths
6 | Torsional failure | Failure of tensile armor wires
7 | Fatigue failure | Tensile armor wire fatigue
8 | Erosion | Of internal carcass
9 | Corrosion | Of internal carcass or tensile/pressure armor exposed to seawater or diffused product

Table 2 describes the main failure modes of flexible
pipes. The inspection and monitoring techniques suggested
to detect and/or predict each failure mode are described in
Section 2.2.
Periodic inspections have detected a considerable incidence of damage in the top section of risers (i.e., at the end-fitting,
Figure 2), which may affect their structural integrity
and eventually induce different failure mechanisms. These
include mostly external sheath damage, corrosion, and/or
fatigue-induced damage to the tensile armors and torsional
instability. These flaws generally originate during installation
or, more frequently, during operation due to contact
with another riser or the platform structure [2, 6]. Figure 3
shows an example of a failure where rupture of tensile wires
occurred inside the end-fitting.

[Figure 1: Unbonded flexible pipe, showing (from outside in): outer
sheath, outer layer of tensile armor, antiwear layer, inner layer of
tensile armor, antiwear layer, back-up pressure armor, interlocked
pressure armor, internal pressure sheath, and carcass.]

[Figure 2: End-fitting (pontoon, I-tube, bellmouth, bend stiffener,
and flexible riser).]

[Figure 3: End-fitting failure example.]

[Figure 4: MAPS-FR probe.]

2.2. Flexible Riser Inspection and Monitoring. The Recommended
Practice for Flexible Pipe [7], also known as API
17B, from the American Petroleum Institute, recommends
some inspection and monitoring methods for in-service
flexible pipes. Table 3 lists the monitoring methods as well
as the failure modes that are covered by each method. Visual
inspection and periodic pressure testing have been, to date,
the most common forms of in-service monitoring used for
the demonstration of continued fitness for purpose.
Several methods for managing the integrity of flexible
pipes have been proposed in the literature, depending on the
failure mode aimed at [8-18]. As this work focuses on the
detection of damage to tensile armor wires, the following
survey of state-of-the-art methods will concentrate on
techniques which directly or indirectly estimate the number
of broken wires in armor layers.
Automated visual inspection has been employed by
Petrobras for torsion monitoring. The method focuses on
small-angle deformation detection and on online data
acquisition, in order to provide immediate identification of
nonconformities. It consists of attaching a target to the riser
and observing its behavior through a video camera installed
above the end fitting. The rupture of wires in the inner or
outer layer can lead the riser to an unbalanced condition,
thus generating torsion [2, 19]. However, if the number of
broken wires is not significant, torsion might not occur and
broken wires might not be detected.
[Figure 5: MAPS-FR ring.]

[Figure 6: General adaptive filter configuration: input x(n), adaptive
filter output y(n), desired signal d(n), error e(n), and the adaptive
algorithm that adjusts the filter.]

[Figure 7: Hybrid adaptive filter: MAPS signals X(n), filter coefficients
W(n), output y(n), load reference d(n), error e(n), and display.]

[Figure 8: Pure MAPS-FR signals (frequencies f1 to f4) aim to estimate
the riser load; however, better results are achieved when the four
signals are combined by the proposed method.]

Acoustic emission has been applied to detect the instant
of rupture of armor wires. An acoustic emission scheme

was developed in [20] based on laboratory tests and field
experience. The idea is that when a tensile armor wire
rupture takes place, a strong sound signal is generated. This
high-amplitude, high-energy sound wave can be distinguished
from environmental noise, making acoustic emission
a potential wire rupture detection technique. In [20], a
procedure was designed to filter relevant acoustic events from
spurious noisy emissions. The filtering scheme was applied
to real data from a riser installed in the field. The riser
was monitored for 11 months and then a dissection was
performed. As no failure was found during the dissection,
the filter parameters were adjusted to match the observed
results. A drawback of this method is the need for continuous
monitoring; that is, a rupture is not detected if the system
is momentarily off. Also, other acoustic noises from the
platform can cause false indications.
Fiber-optic Bragg grating (FBG) sensor technology has
also been used to monitor flexible risers. In [21], two methodologies
to monitor strain in flexible risers are developed.
In the first approach, permanent FBG strain gages were
installed on all wires of the armor layer. This approach
allows identification of abrupt changes in the strain states of
the wires, which may provide instantaneous detection of
failure in one or more wires. The problem is that this method
requires that the outer sheath be partially removed to access
the wires. This is not always permitted in in-service risers.
In the second methodology, a thin steel collar instrumented
with FBG strain gages was placed around the riser
outer layer, measuring circumferential strains and changes in
its diameter. Wire failures can be detected as they can cause
variation in the external diameter of the polymeric jacket
covering the riser. The disadvantage of this technique is that
the number of broken wires needed to cause a detectable
variation in the external diameter can be significant.
Another scheme using FBG strain gages was proposed
in [1]. It is based on a retrofit clamp that monitors axial
elongation and torsion of a flexible riser. The clamp is
instrumented with FBG strain gages. Like the previously
presented methods, it suffers from limited sensitivity; that is, a
single broken wire is unlikely to be detected as its effects on
external geometry are minimal.
In [22], a technology that integrates FBG sensors along
grooves in the tensile wires during manufacturing of the pipe
is described. Thus, strain and temperature can be monitored
along several meters of the wires and ruptures are easily
detected. Although new flexible pipes can be manufactured
with this feature, the technology cannot be applied to existing
pipes.
The electromagnetic tool MAPS-FR, on which the proposed
method is based, is described in [3]. This equipment
can estimate the stress on armor wires in a noninvasive
manner. Additionally, it is sensitive to a single broken wire

[Figure 9: Trial setup, with connector A and connector B at the two
ends of the sample.]

Table 3: Flexible riser inspection and monitoring techniques suggested by API 17B and the failure modes (FM) covered by each method.

Monitoring method | Description | FM covered
Visual inspection (internal and external) | Assessment of leakage or visible deformation or damage to pipe or outer sheath | 1, 4, 5, 6, 8, 9
Pressure test | Pressure applied to pipe and decay measured as a function of time; leakages or anomalies identified | 1, 2, 5
Destructive analysis of removed samples | Prediction of the state of aging or degradation | 8, 9
Load, deformation and environment monitoring | Measured parameters include wind, wave or current environment, vessel motions, product temperature, pressure and composition, and structural (or flexible pipe) loads and deformations | 2, 3, 4, 6, 7
Nondestructive testing of pipes in service | Radiography to establish the condition of steel tensile armor and pressure armor layers in service | 1, 8, 9
Gaging operations | Gaging pigs to check for damage to the internal pipe profile | 8, 9
Spool piece (test pipe) | Use of a flexible test pipe in series or in parallel with the flow which is periodically removed for destructive or nondestructive testing, to predict the state of aging or degradation of the internal pressure sheath | 8, 9
Annulus monitoring | Measurement of annulus fluid (pH, chemical composition, volume); prediction of degradation of the steel pressure armor or tensile armor layers or the aged condition of the internal pressure sheath or susceptibility of annulus environment to such degradation | 7, 9

since it does not depend on geometrical deformations of the
external sheath. Section 3.1 is devoted to describing MAPS-FR.
Basically, most techniques that directly estimate the
number of broken wires are intrusive, whereas nonintrusive
techniques are not precise. It will be shown that the
proposed methodology, on the other hand, combines both
advantages, detecting a single wire break in a nonintrusive
manner. Moreover, the proposed method produces a graphical
representation of the stress distribution on the wires which can be
effectively used for break detection.

3. Proposed Method
This section describes the proposed method. In Section 3.1,
the electromagnetic equipment used to measure internal
tensile stress in the wires is briefly presented. The RLS
filter technique is deduced in Section 3.2. Finally, the hybrid
approach, which combines MAPS-FR signals with optical
strain gage data through RLS filtering, is presented in
Section 3.3.

3.1. MAPS-FR: Stress Measurement Technology. The tool
used as a nonintrusive stress gage for tensile armors is MAPS-FR.
This equipment was developed by MAPS Technology
in partnership with Petrobras. At the end of the development
process, Petrobras acquired the tool and has been developing
its own signal processing algorithms, which are the main
objective of this document. In the next lines, a brief
presentation of the MAPS-FR [23] tool is made. For a more
complete description of MAPS-FR, see [3].
In service, the axial armor wires are subjected to tensile
stress. In a failed wire, however, the applied tensile stress
will be zero at the point of failure and will increase over
some distance along the ligament from the break. The length
over which the stress is increased depends on the amount of
frictional load transfer to adjacent ligaments. If the length
over which the stress reduction occurs is sufficiently long and
the stress in the armor layer wires can be monitored, this
would offer a method for detecting armor failure remotely
from the actual failure location [3].
Most stress measurement techniques are not appropriate
for monitoring riser armor layers. Some, such as hole drilling,
are clearly not satisfactory as they are not nondestructive and
require access to the armor wires. Others, including neutron
diffraction and X-ray diffraction, are not suited to installed
operation or, like ultrasonic methods, also need to be directly
coupled to the material being measured [3].
Magnetic methods, on the other hand, do have the
necessary attributes for an appropriate technology, as intimate
contact with the material being measured is not
necessary [24, 25]. Stress measurement is possible as it
is known that the magnetic properties of ferromagnetic
materials are sensitive to internal stress. However, there
are important issues to overcome, as it is also known that
mechanical hardness, grain size, texture, and other material
properties also affect magnetic parameters. MAPS [23] stress
measurement technology has been adapted to perform stress
measurement in flexible pipes. This technique involves a
number of low-frequency electromagnetic measurements,

[Figure 10: Normal cyclic loading: optical strain gage (FO) signals (a)
and processed MAPS-FR signals (b).]

[Figure 11: Wire 37 break.]

some of which monitor material variations, whilst others are
mainly stress sensitive [3].
3.1.1. MAPS-FR Tool Description. The basic component of
the current MAPS-FR equipment is the so-called probe,
shown in Figure 4. Each probe contains an excitation coil,
which generates the electromagnetic field that propagates
through the riser's wires, and three sensing coils, which read the
response of a wire or group of wires to the excitation field.
As previously mentioned, the value read by the sensing coils
depends on the stress that the wires are subjected to.
Five probes are grouped together to form a ring, as
shown in Figure 5. This assembly can be mounted and
fixed around the outer layer of the riser. As each probe
has three sensing coils, a ring has fifteen sensing coils. The
complete MAPS-FR equipment is composed of three rings,
comprising 45 sensing coils. Hence, the current MAPS-FR
set permits monitoring of approximately 45 wires on the
external armor layer, although this can be altered to suit
requirements.
3.1.2. MAPS-FR Data. The goal achieved by the current MAPS-FR
technology is to compare the tensile stress present in
armor wires. Nonetheless, the interpretation of raw data
requires an analysis by MAPS-FR experts. As a result, an
indication of a possible wire rupture is signalized, including
its circumferential localization.
During the development of the MAPS-FR system, Petrobras
and MAPS Technology jointly performed several controlled
laboratory tests. In these tests, specific wires were induced to
failure by the introduction of notches on their surfaces. Blind
tests were also performed, where only the Petrobras team was
[Figure 12: Wire 35 break.]

[Figure 13: Wire 30 break.]

acquainted with the damaged wires and the MAPS Technology
team had to give indications of broken wires based only on
MAPS-FR's signals. In the final blind test, MAPS-FR experts
correctly indicated 100% of wire breaks, with 1 false positive
indication over 9 correct indications.
The raw MAPS-FR signals exhibit a slow time drift,
probably due to accommodations of the riser's internal layers
during a load variation. This drift must be carefully considered
in order not to be misinterpreted as a wire break.
Automatic break detection algorithms have to compensate
for these phenomena, avoiding false calls. In a nonreferenced
monitoring, that is, when MAPS-FR operates without a
global riser load estimate, one cannot say whether this drift
is actually a load change or only drift behavior.
When continuously operating in an offshore environment,
MAPS-FR can generate huge amounts of data, making
human-based interpretation arduous and unfeasible. An
automatic approach is essential as a preliminary analysis,
signalizing only important events to be reviewed by experts.
The method proposed in this document is the first step
towards an automatic wire break detection system.
3.2. RLS Adaptive Filtering. Filters are a particularly important
class of linear time-invariant systems [26]. Strictly speaking,
the term frequency-selective filter suggests a system that passes
certain frequency components and totally rejects all others,
but in a broader context any system that modifies certain
frequencies relative to others is also called a filter [27].
Another meaningful definition is that a filter is a device that
maps its input signal to another output signal, facilitating the
extraction of the desired information contained in the input
[Figure 14: Wire 6 break.]

[Figure 15: Wire 5 break.]

signal [28]. The latter definition is particularly interesting in
the context of this document.
Adaptive filters are, in turn, time-varying systems which
adapt their parameters to a more suitable condition or
operation point in order to achieve a specified behavior. In
other words, the filter coefficients are changed so that an input
signal is transformed into an output signal which is as close as
possible to a desired signal.
The RLS adaptive filter class aims at the minimization of the
sum of the squares of the differences between the desired signal
and the filter output signal. When new samples of the incoming
signals are received at every iteration, the solution of the
least-squares problem can be computed in recursive form,
resulting in the recursive least-squares (RLS) algorithms
[28].
Let x(n) be the input signal, let y(n) be the output signal,
and let d(n) be the desired signal, with n representing the
time. That is, d(0) is the value of the desired signal at time 0. The
input vector is formed by the last N + 1 values of the input
signal and is given by

    x(n) = [x(n)  x(n-1)  · · ·  x(n-N)]^T.    (1)

The filter, which transforms the input signal x(n) into the
output y(n), is given by

    w(n) = [w_0(n)  w_1(n)  · · ·  w_N(n)]^T,    (2)

where N is the filter order. Note that due to its adaptive
nature, the filter coefficients w(n) are time-varying, denoted
[Figure 16: Wire 17 break.]

[Figure 17: Wire 7 break.]

by the letter n. The output signal at any instant n can be obtained
by

    y(n) = x^T(n) w(n-1).    (3)

The prediction error at instant n is given by

    e(n) = d(n) - y(n).    (4)

Figure 6 depicts the general scheme of an adaptive filter. An
adaptive algorithm adjusts the main filter coefficients based
on some metric applied to the output error e(n). In general,
the adaptive algorithm will choose the main filter parameters
so that the output error e(n) is minimized.
The goal of RLS methods is to minimize not only the last
error but also the sum of all past output errors. Thus, the
objective function is given by

    ξ(n) = Σ_{i=0}^{n} λ^{n-i} ε^2(i) = Σ_{i=0}^{n} λ^{n-i} [d(i) - x^T(i) w(n)]^2,    (5)

where 0 ≪ λ < 1 is an exponential weighting factor, also
referred to as the forgetting factor. The forgetting factor permits
putting more significance and weight on recent output errors
than on distant past errors. The smaller the forgetting factor,
the less important old output errors are to the coefficient
update.
By differentiating ξ(n) with respect to w(n) in (5)
and performing some algebraic manipulations, the final
algorithm, shown in Algorithm 1, can be deduced.



[Figure 18: Wire 13 break.]

[Figure 19: Wire 27 break.]

The RLS algorithms are known to pursue fast convergence and
have excellent performance when working in time-varying
environments. See [28, 29] for more information on adaptive
filters and RLS algorithms.
3.3. Hybrid Approach. The current MAPS-FR tool uses 4
excitation frequencies during the acquisition, yielding 4
signals per sensing coil. Each frequency shows a different
sensitivity to wire stress depending on wire size, wire depth,
and so forth. It will be shown that a proper combination
of the 4 signals per sensing coil gives a better estimate of
the stress than the one obtained by considering each signal
independently.
The idea of the hybrid approach is to find a set of linear
systems which map each of the MAPS-FR sensing coil signals
into realistic load values. These linear systems are continuously
recalculated at every iteration to compensate for the slow
time drift exhibited by MAPS-FR signals. Although the magnetic
properties of metals vary nonlinearly with mechanical load,
linear systems can be used to do this mapping if some
adaptation is permitted. That is, the correspondence holds
(i.e., the mapping becomes linear) in a small region surrounding
a given operation point. Once the operation point changes,
the adaptive filter recalculates its coefficients. The new filter
coefficients are valid within this new region.
The hybrid approach needs an estimate of the riser global
load to be used as the desired signal d(n). Indeed, if all wires
are unbroken, the riser global load is approximately equally
divided among the wires and can be used as an estimate of the stress
in each wire. Since only differences between wire stresses are


important to detect a broken wire, the riser global load is
taken as the desired signal for each wire (Section 3.2).

Initialization:
  S(-1) = δI
  w(-1) = [0 0 · · · 0]^T
Do for n ≥ 0:
  e(n) = d(n) - x^T(n) w(n-1)
  ψ(n) = S(n-1) x(n)
  S(n) = (1/λ) [ S(n-1) - ψ(n) ψ^T(n) / (λ + ψ^T(n) x(n)) ]
  w(n) = w(n-1) + e(n) S(n) x(n)
  y(n) = x^T(n) w(n)
  ε(n) = d(n) - y(n)

Algorithm 1: Complete RLS algorithm.
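A minimal Python transcription of Algorithm 1 is given below; the initialization constant delta and the forgetting factor lam are illustrative choices, not values from the paper.

import numpy as np

class RLS:
    # Recursive least-squares filter following Algorithm 1.
    def __init__(self, order, lam=0.99, delta=100.0):
        self.w = np.zeros(order)         # w(-1) = [0 ... 0]^T
        self.S = delta * np.eye(order)   # S(-1) = delta * I
        self.lam = lam                   # forgetting factor

    def predict(self, x):
        # a priori output x^T(n) w(n-1); this is the y_k(n) displayed in Algorithm 2
        return float(x @ self.w)

    def update(self, x, d):
        e = d - x @ self.w               # a priori error e(n)
        psi = self.S @ x                 # psi(n) = S(n-1) x(n)
        self.S = (self.S - np.outer(psi, psi) / (self.lam + psi @ x)) / self.lam
        self.w = self.w + e * self.S @ x # coefficient update
        return d - x @ self.w            # a posteriori error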

3.3.1. Problem Statement. MAPS-FR signals are arranged in
matrix form. Writing x_q^(p)(n) for the value of the qth MAPS-FR
sensing coil at excitation frequency p at instant n, the qth
column stacks the current and N past samples at each of the
4 frequencies, followed by a constant for bias correction,

    x_q(n) = [x_q^(1)(n) · · · x_q^(1)(n-N)  x_q^(2)(n) · · · x_q^(2)(n-N)
              x_q^(3)(n) · · · x_q^(3)(n-N)  x_q^(4)(n) · · · x_q^(4)(n-N)  1]^T,

and the full signal matrix is

    X(n) = [x_1(n)  x_2(n)  · · ·  x_45(n)].    (6)

The matrix X(n) is of size M × K, where M = 4(N + 1) + 1
counts the current and N past values of the MAPS-FR signal
for each of the 4 frequencies plus the constant for bias correction,
and K = 45 is the number of sensing coils.
The filter coefficients are given by

    W(n) = [ w_{1,1}(n)  w_{1,2}(n)  · · ·  w_{1,45}(n)
             w_{2,1}(n)  w_{2,2}(n)  · · ·  w_{2,45}(n)
             ...
             w_{M,1}(n)  w_{M,2}(n)  · · ·  w_{M,45}(n) ].    (7)
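For illustration, one column x_k(n) of X(n) can be assembled as follows; the layout of the raw data array is an assumption made for this sketch.

import numpy as np

def regressor(maps_signals, k, n, N):
    # maps_signals: array of shape (4, n_coils, n_samples) holding the signal of
    # every sensing coil at the 4 excitation frequencies.  The column stacks the
    # current and N past samples per frequency, plus a constant 1 for bias
    # correction, matching the structure of (6).
    taps = [maps_signals[p, k, n - i] for p in range(4) for i in range(N + 1)]
    return np.array(taps + [1.0])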

Initialization:
  Do for 1 ≤ k ≤ 45:
    w_k(-1) = [1 1 · · · 1]^T
Do for n ≥ 0:
  Do for 1 ≤ k ≤ 45:
    y_k(n) = x_k^T(n) w_k(n-1)
    Update w_k(n) for each k as in Algorithm 1

Algorithm 2: Complete hybrid algorithm.

Table 4: Hybrid algorithm variables.

Variable | Meaning
d(n) = [d(n) d(n) · · · d(n)]^T | Desired signal vector: global riser load estimate obtained from optical strain gages
X(n) = [x_1(n) · · · x_45(n)] | MAPS-FR signals matrix organized as in (6)
y(n) = [y_1(n) y_2(n) · · · y_45(n)]^T | Output vector, which is plotted for wire break detection
W(n) = [w_1(n) · · · w_45(n)] | Adaptive filter coefficient matrix as in (7), updated at every iteration

A wire's stress estimate is given by

    y_k(n) = x_k^T(n) w_k(n-1),    (8)

where y_k(n) is the estimate of the stress in the kth wire at instant n,
x_k(n) is the kth column of matrix X(n), and w_k(n-1) is the
kth column of matrix W(n-1).
Arranging y_k(n) in vector form, one can write y(n) =
[y_1(n) y_2(n) · · · y_45(n)]^T. As the only reference available
for the riser load is d(n), it can be written as a vector d(n) =
[d(n) d(n) · · · d(n)]^T = d(n)[1 1 · · · 1]^T. The estimate error
can thus be written in vector form as

    e(n) = y(n) - d(n).    (9)
Figure 7 shows the block diagram of the hybrid adaptive
filter. A summary of variables is given in Table 4 and the
complete algorithm is shown in Algorithm 2.
Notice that y_k(n) is calculated with the filter coefficients
w_k(n-1) obtained in the previous iteration. Consequently,
if a wire break occurs between times n-1 and n at, say, wire
k = k_0, the signal y_{k0}(n) tends to diverge abruptly from the
other y_k(n) at instant n, indicating the rupture. As time
passes, w_{k0}(n) is recalculated for the new condition and the
divergence between k_0 and the other wires vanishes.
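Putting the pieces together, the complete hybrid loop of Algorithm 2 can be sketched as follows, reusing the RLS class and regressor() introduced above; the data shapes, filter memory N, and forgetting factor are assumptions.

import numpy as np

def run_hybrid(maps_signals, gage_signals, N=2, lam=0.995):
    n_coils, n_samples = maps_signals.shape[1], maps_signals.shape[2]
    order = 4 * (N + 1) + 1                  # rows of X(n), including the bias term
    filters = [RLS(order, lam=lam) for _ in range(n_coils)]
    d = gage_signals.mean(axis=0)            # global load estimate d(n)
    y = np.zeros((n_samples, n_coils))
    for n in range(N, n_samples):
        for k in range(n_coils):
            xk = regressor(maps_signals, k, n, N)
            y[n, k] = filters[k].predict(xk) # y_k(n) computed with w_k(n-1)
            filters[k].update(xk, d[n])      # then adapt to w_k(n)
    return y                                 # per-wire load estimates for display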

The vanishing behavior is explained next. Suppose that
there are as many sensing coils as wires in the external
armor layer. Even if each sensing coil is located exactly above
each wire, due to the gap between sensing coil
and wire (i.e., the external polymer layer thickness), the magnetic
field from adjacent wires leaks laterally and affects the
measurements of each other. Therefore, there exists some
portion of the magnetic field surrounding a broken wire that
contains signals from unbroken wires. The adaptive filter
parameters w_{k0}(n) are recalculated based on this portion
of the signal, which is coherent with the loading d(n). Although
this seems to be a problem, detectability at the exact instant of
rupture is unaffected, as will be shown in Section 4.
The units of stress and magnetic fields are irrelevant in
this context. The whole system works by comparison; that
is, the goal is to determine whether there is an inactive
wire among active ones. Nevertheless, it is possible to
establish a calibration procedure which would give rise to
consistent measurements, though this is out of the scope of this
document.

4. Results
A trial has been carried out to evaluate the performance of
the proposed method. The trial took place at the riser fatigue
test rig of the Physical Metallurgy Laboratory (LAMEF) in Porto
Alegre, Brazil. The facility allows the application of static and
dynamic tensile loads exceeding 220 tons. The test sample
was a section of 6″ nominal bore new flexible production
riser, rated for 3000 psi, of approximately 5 m length. One end
of the riser was fixed and subjected to axial load (nominally
connector A), whereas the other was free to rotate during
loading (nominally connector B). The loading was cyclic
and sinusoidal, from 160 to 220 tons and at a frequency of
0.0167 Hz.
In the tests, the riser loadings were chosen to simulate
the field conditions as accurately as possible, namely,
approximately 30% to 50% of the yield point. However, other
field conditions such as internal pressure, arbitrary load
instead of cyclic load, and riser orientation (vertical instead
of horizontal) were not considered. The influence of these
circumstances on the results is intended to be studied in
future tests.
Two windows were opened on the external sheath to
access the wires. The first window was near connector B
and had a circumferential shape, giving access to all wires.
This window (right side of Figure 9) was used to cut the
wires during loading, simulating a real rupture. The second
window was close to connector A and was used to instrument
all wires with optical strain gages. The signals from these
strain gages were used as references (i.e., the real stress of the wires).
The global riser load needed for the hybrid processing was
estimated by averaging all strain gage signals.
The MAPS-FR was installed in the middle of the sample.
This configuration was chosen to ensure that the strain
gages measured tension values similar to those sensed by the
MAPS-FR.



Table 5: Mean-square error between the load signal and the pure
MAPS-FR signals or the proposed method's signal.

Signal | MSE
MAPS-FR frequency 1 | 1.1164
MAPS-FR frequency 2 | 0.9781
MAPS-FR frequency 3 | 0.7956
MAPS-FR frequency 4 | 0.5291
Proposed method | 0.2676

Table 6: Trial details.

Laboratory | LAMEF at UFRGS University
Local | Porto Alegre, RS, Brazil
Date | 2009/06/24 and 25
Sample | 5 m long, 6″ riser sample
Total external diameter | 250.7 mm
External sheath thickness | 7 mm
Number of external sheaths | 2
MAPS-FR mounted diameter | 236.7 mm (1 sheath removed)
External armor layer | 37 flat wires, 15 mm × 5 mm, at 30 degrees
Reference instrumentation | Optical strain gages on every wire
Global riser load estimate d(n) | Average of optical strain gages
Loading | Sinusoidal, 160 to 220 tons, frequency 0.0167 Hz
Magnetic stress measuring tool | MAPS-FR
Signal processing algorithm | Hybrid MAPS-FR/optical strain gage

Figure 8 shows a comparison between the pure MAPS-FR
signals and the signal obtained by the proposed method. Pure
MAPS-FR signals do not represent the load well. However,
when properly combined, it is possible to obtain a signal
which is similar to the load that the riser is subjected to.
Table 5 summarizes the mean-square error between the pure
MAPS-FR signals and the loading. The phase difference
between the signals was corrected prior to the error
computation. The proposed method's signal is the one that
best represents the riser load.
Figure 9 illustrates the setup and Table 6 summarizes the
trial resources and details.
The main events of the trial are listed in Table 7. As
already mentioned, the wire breaks were produced by cutting
the wires at the first window, close to connector B. In all events,
the optical strain gages almost instantaneously detected the
respective wire rupture. In some cases, the hybrid algorithm
indicated the rupture seconds before the optical strain gages.
Figure 10 illustrates a series of cyclic loads. Each of the
following figures is composed of two graphs: the optical
strain gage signals are shown on the left, and processed
MAPS-FR signals are shown on the right. The abscissae



Table 7: Important events.

Date | Time | Time stamp | Event
2009/06/24 | 15h15 | 250 | Wire 37 break
2009/06/24 | 16h30 | 439 | Wire 35 break
2009/06/24 | 17h12 | 589 | Wire 30 break
2009/06/24 | 17h38 | 690 | Wire 6 break
2009/06/25 | 13h34 | 330 | Wire 5 break
2009/06/25 | 14h36 | 523 | Wire 17 break
2009/06/25 | 16h14 | 823 | Wire 7 break
2009/06/25 | 17h52 | 1160 | Wire 13 break
2009/06/25 | 18h21 | 1258 | Wire 27 break

represent the time stamp, whereas the ordinates represent the wire
number (a) or sensing coil number (b). Given a wire number
and a time instant, the corresponding color indicates the
stress level. In both graphs, there is a scale bar on the right
which relates the color to a statistically normalized stress
parameter.
Figures 11 to 19 show the rupture moment of several
wires. It is clear in the left graphs which wire was cut. The
color difference indicates that the damaged wire diverges
from the unbroken ones; that is, cut wires tend to loosen.
Although there is some noise in the right graphs, it is
possible to determine the rupture instant in most cases.
Moreover, the scale range of the processed MAPS-FR signals
remains between -3 and +3 during normal operation. When
a rupture occurs, the excursions reach from 3.5 up to 20 in
magnitude. This could be used as a criterion for automatic
detection.
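That criterion can be expressed directly; the threshold is the value suggested by the trial observations, applied here to magnitudes as a hedge against sign conventions.

import numpy as np

def detect_breaks(y, threshold=3.5):
    # y: (n_samples, n_wires) statistically normalized outputs; values stay within
    # about -3 to +3 in normal operation, so larger magnitudes flag a candidate
    # rupture together with its time index and wire number.
    t_idx, wire_idx = np.where(np.abs(y) > threshold)
    return list(zip(t_idx.tolist(), (wire_idx + 1).tolist()))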

5. Conclusion
Flexible risers are multilayered pipes used in the oil and gas
industry. Their complex geometry imposes difficulties (i.e.,
unknown wire arrangement) when assessing stress in internal
layers through the outer polymeric sheath. Unknown
wire arrangement and in-service wire reaccommodation
introduce uncertainties while measuring internal stress.
This article proposes a new estimation method for the
internal stress distribution by combining electromagnetic
measurements with optical strain gage data. Electromagnetic
measurements are converted into load values through adaptive
filters. The optical strain gage signal is used as an estimate
of the riser global load. This signal is used as the desired signal in the
adaptive context. In other words, it is assumed that in normal
conditions the riser global load is equally divided between the wires.
A set of adaptive linear filters is calculated so that each
of the MAPS-FR signals is converted into load. The filter
inputs are electromagnetic signals, and the filter outputs are
load estimates of the corresponding wires. When a wire rupture
occurs, the filter produces indications that the load changed,
and the break can be detected.
The main advantage of the proposed technique is that
it does not need the external sheath to be removed; that
is, it is a nonintrusive technique. Yet, it can detect single-wire
ruptures. The riser global load estimate, required by
the proposed method, can be obtained through strain gages
installed on the external sheath or by the FBG strain gage
collar mentioned in Section 2.2 and described in [21]. The
collar can detect and estimate the riser global load in a
nonintrusive manner.
The presented results showed that the proposed technique
produces graphical representations on which visual
detection of wire breaks can be effectively performed in
most cases. The proposed method can be straightforwardly
extended to automatically detect wire ruptures. A simple
fixed threshold or a statistically variable threshold could be
employed for this purpose.

Acknowledgments
The authors would like to acknowledge the Petrobras Research
and Development Center (CENPES) for its incentive and
financial support. The authors would also like to acknowledge
MAPS Technology and the MAPS team for the partnership,
collaboration, corrections, and exchange of experience during
the production of this work.

References
[1] H. Corrignan, R. T. Ramos, R. J. Smith, S. Kimminau, and L. El Hares, "New monitoring technology for detection of flexible armor wire failure," in Proceedings of the Offshore Technology Conference (OTC '09), 2009.
[2] M. G. Marinho, C. S. Camerini, S. R. Morikawa, D. R. Pipa, G. P. Pires, and J. M. Santos, "New techniques for integrity management of flexible riser end-fitting connection," in Proceedings of the 27th International Conference on Offshore Mechanics and Arctic Engineering, Estoril, Portugal, June 2008.
[3] J. C. McCarthy and D. J. Buttle, "Non-invasive magnetic inspection of flexible riser," in Proceedings of the Offshore Technology Conference (OTC '09), Houston, Tex, USA, May 2009.
[4] Technip, "Flexible pipe brochure," 2008, http://www.technip.com/pdf/Flexible Pipe.pdf.
[5] Health and Safety Executive, "Guidelines for integrity monitoring of unbonded flexible pipe," Tech. Rep., Health and Safety Executive, 1998.
[6] M. G. Marinho, J. M. dos Santos, and R. D. O. Carneval, "Integrity assessment and repair techniques of flexible risers," in Proceedings of the 25th International Conference on Offshore Mechanics and Arctic Engineering (OMAE '06), Hamburg, Germany, June 2006.
[7] American Petroleum Institute, API RP 17B: Recommended Practice for Flexible Pipe, API Publishing Services, Washington, DC, USA, 2002.
[8] A. Berg and N. J. Rishøj-Nielsen, "Integrity monitoring of flexible risers by optical fibres," in Proceedings of the 21st International Conference on Offshore Mechanics and Arctic Engineering (OMAE '02), vol. 3, pp. 47-52, 2002.
[9] L. A. Mesquita, J. M. Santos, P. Loureiro, and A. L. Carvalho, "Monitoramento das válvulas de despressurização de gás percolado no espaço anular de risers de produção e exportação de óleo e gás," in Proceedings of the Rio Pipeline Conference and Exposition, 2005.
[10] A. Felix-Henry and P. Lembeye, "Flexible pipes in-service monitoring," in Proceedings of the 23rd International Conference on Offshore Mechanics and Arctic Engineering (OMAE '04), vol. 3, pp. 149-154, 2004.
[11] J. Marsh, P. Duncan, and I. MacLeod, "Offshore pipeline and riser integrity: the big issues," in Proceedings of the Offshore Technology Conference (OTC '09), 2009.
[12] C. Saunders and T. O'Sullivan, "Integrity management and life extension of flexible pipe," in Proceedings of the Offshore Technology Conference (OTC '07), 2007.
[13] J. W. Picksley, K. Kavanagh, S. Garnham, and D. Turner, "Managing the integrity of flexible pipe field systems: industry guidelines and their application," in Proceedings of the Annual Offshore Technology Conference, pp. 609-618, 2002.
[14] J. Picksley, "State of the art flexible riser integrity issues: study report," Tech. Rep., MCS International, 2001.
[15] N. Weppenaar, A. Kosterev, L. Dong, D. Tomazy, and F. Tittel, "Fiberoptic gas monitoring of flexible risers," in Proceedings of the Offshore Technology Conference (OTC '09), 2009.
[16] R. Roberts, S. Garnham, and B. D'All, "Fatigue monitoring of flexible risers using novel shape-sensing technology," in Proceedings of the Offshore Technology Conference (OTC '07), 2007.
[17] R. Thethi, H. Howells, S. Natarajan, and C. Bridge, "A fatigue monitoring strategy and implementation on a deepwater top tensioned riser," in Proceedings of the Offshore Technology Conference (OTC '05), 2005.
[18] E. Binet, P. Tuset, and S. Mjøen, "Monitoring of offshore pipes," in Proceedings of the Offshore Technology Conference (OTC '03), 2003.
[19] M. G. Marinho, C. S. Camerini, J. M. Santos, and G. P. Pires, "Surface monitoring techniques for a continuous flexible riser integrity assessment," in Proceedings of the Offshore Technology Conference (OTC '07), Houston, Tex, USA, 2007.
[20] S. D. Soares, C. S. Camerini, and J. M. C. de Santos, "Development of flexible risers monitoring methodology using acoustic emission technology," in Proceedings of the Offshore Technology Conference (OTC '09), 2009.
[21] S. R. K. Morikawa, C. S. Camerini, D. R. Pipa et al., "Monitoring of flexible oil lines using FBG sensors," in Proceedings of the 19th International Conference on Optical Fibre Sensors, vol. 7004 of Proceedings of SPIE, pp. 70046F-1 to 70046F-4, April 2008.
[22] M. Andersen, A. Berg, and S. Saevik, "Development of an optical monitoring system for flexible risers," in Proceedings of the Offshore Technology Conference (OTC '01), 2001.
[23] MAPS Technology, http://www.maps-technology.com/.
[24] D. J. Buttle, "Emerging technologies for in-situ stress surveys," in Proceedings of the 6th International Conference on Residual Stresses (ICRS '00), 2000.
[25] D. J. Buttle and C. B. Scruby, "Residual stresses: measurement using magnetoelastic effects," in The Encyclopaedia of Materials: Science and Technology, 2001.
[26] P. S. R. Diniz, E. A. B. da Silva, and S. L. Netto, Digital Signal Processing: System Analysis and Design, Cambridge University Press, Cambridge, UK, 2002.
[27] A. V. Oppenheim, R. W. Schafer, and J. R. Buck, Discrete-Time Signal Processing, Prentice-Hall, Upper Saddle River, NJ, USA, 2nd edition, 1997.
[28] P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementations, Springer, Boston, Mass, USA, 3rd edition, 2008.
[29] S. Haykin, Adaptive Filter Theory, Prentice-Hall, Upper Saddle River, NJ, USA, 3rd edition, 1996.


Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 192378, 9 pages
doi:10.1155/2010/192378

Research Article
Analysis of Approximations and Aperture Distortion for
3D Migration of Bistatic Radar Data with the Two-Step Approach
Luigi Zanzi and Maurizio Lualdi
Dipartimento di Ingegneria Strutturale, Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano, Italy
Correspondence should be addressed to Luigi Zanzi, luigi.zanzi@polimi.it
Received 31 December 2009; Accepted 7 June 2010
Academic Editor: Joao Manuel R. S. Tavares
Copyright 2010 L. Zanzi and M. Lualdi. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
The two-step approach is a fast algorithm for 3D migration originally introduced to process zero-offset seismic data. Its application to monostatic GPR (Ground Penetrating Radar) data is straightforward. A direct extension of the algorithm for the application to bistatic radar data is possible provided that the TX-RX azimuth is constant. As for the zero-offset case, the two-step operator is exactly equivalent to the one-step 3D operator for a constant velocity medium and is an approximation of the one-step 3D operator for a medium where the velocity varies vertically. Two methods are explored for handling a heterogeneous medium; both are suitable for the application of the two-step approach, and they are compared in terms of accuracy of the final 3D operator. The aperture of the two-step operator is discussed, and a solution is proposed to optimize its shape. The analysis is of interest for any NDT application where the medium is expected to be heterogeneous, or where the antenna is not in direct contact with the medium (e.g., NDT of artworks, humanitarian demining, radar with air-launched antennas).

1. Introduction
In 1983, Gibson et al. [1] introduced the fast two-step migration technique for 3D poststack seismic data. In a companion paper, Jakubowicz and Levin [2] showed that in a constant velocity medium the method is equivalent to the classical one-step 3D migration. In their paper, Gibson et al. performed a detailed analysis of the differences between the two-step approach and the one-step approach when the velocity varies within the medium. They showed that in normal conditions these differences are negligible, so that the method was suggested as a quite attractive solution for fast 3D migration of poststack seismic data. The extension of the two-step approach to prestack 3D migration is not straightforward, although achievable as shown by Canning and Gardner [3]. They proposed a scheme composed of 3D DMO, cross-line 2D PSI, inline 2D DMO⁻¹, velocity analysis, and 2D inline depth migration. A variation of this scheme was proposed by Meinardus et al. [4].
Here, it is shown that under a very restrictive condition, that is, when the source-receiver azimuth is constant, the two-step approach can be directly extended to non-zero-offset data. In seismics, this would be the case of 3D marine data collected by a single ship, equipped with a single cable, and shooting along parallel lines. Of course this is not very interesting for the seismic community, where 3D acquisitions are designed aiming at a balanced azimuth distribution to get a good picture of 3D structures. Instead, the result is interesting for GPR applications, where 3D experiments are normally executed by maintaining a constant orientation of the antenna box, that is, a constant TX-RX azimuth. With the present hardware technology, this approach is what is needed by GPR users to achieve the goal of real-time visualization of 3D migrated volumes. Thus, the following sections discuss the non-zero-offset extension of the two-step method, the approximations resulting from the application to vertically variable velocity fields, and finally the distortion effects on the aperture of the migration operators. The quantitative results are derived assuming the usage of an ultra high-frequency radar with air-launched antennas. This type of hardware is normally preferred to ground-coupled antennas to speed up the NDT acquisitions on highways and bridges. It

Figure 1: Travel time error [ns] for the rms approximation (a) and for the LLNL approximation (b) as a function of z0 (target depth) and r − r0 (lateral distance of the antenna from the vertical above the target). The system is monostatic. The antenna-medium distance is 10 cm, and the radar wave velocity into the medium is 12 cm/ns.

Figure 2: Migration aperture (r − r0) for the rms and the LLNL approximations as a function of z0 (target depth). The antenna-medium distance is 10 cm, and the radar wave velocity into the medium is examined in the range from 8 to 20 cm/ns. The aperture is limited to the contributions affected by a phase error at the highest frequency (6 GHz) lower than π/2.


is also preferred for humanitarian demining to prevent mine


activation and for diagnostic inspections on cultural heritage
and artworks to prevent damages to delicate decorations,
paintings, precious materials, and so forth. The air gap that
separates the antenna from the medium generates a situation
where the migration velocity field varies vertically even if the
medium is homogeneous. Nevertheless, the discussion that
follows is also of interest for radars with ground-coupled
antennas when they are used to investigate a medium that is
vertically heterogeneous. This is a frequent situation when a
GPR is applied to NDT inspections of layered structures such
as walls, floors, and pavements.

2. The Two-Step Approach


The two-step approach originally introduced by Gibson et
al. [1] is exact in a constant velocity medium explored with

Figure 3: Migration aperture with the one-step approach for the rms and the LLNL approximations at four different target depths. The antenna-medium distance is 10 cm; the radar wave velocity into the medium is 12 cm/ns; the TX-RX distance is 14 cm with the azimuth oriented in the y-direction. A small distortion of the circular shape is observed on the migration aperture as a result of the TX-RX separation.

a zero-offset experiment, for example, with a monostatic radar system. In this case, a scattering point located at P0(x0, y0, z0) in the model space will produce a 3D diffraction surface in the data space given by

$t^2 = T_0^2 + \frac{4(x - x_0)^2}{v^2} + \frac{4(y - y_0)^2}{v^2}$,  (1)

EURASIP Journal on Advances in Signal Processing

50

50
vsoil = 12 cm/ns
H = 10 cm
Oset = 14 cm

45

40

35

35

30

30

y y0 (cm)

y y0 (cm)

40

25
20
rms

15

20
15
10

5
0

LLNL

25

10

vsoil = 12 cm/ns
H = 10 cm
Oset = 14 cm

45

10

20
30
x x0 (cm)

z0 = 5 cm
z0 = 10 cm

40

50

z0 = 15 cm
z0 = 20 cm

10

20
30
x x0 (cm)

z0 = 5 cm
z0 = 10 cm

40

50

z0 = 15 cm
z0 = 20 cm

(a)

(b)

Figure 4: Migration aperture with the two-step approach for the rms (a) and the LLNL (b) approximations at four dierent target depths.
The antenna-medium distance is 10 cm; the radar wave velocity into the medium is 12 cm/ns; the TX-RX distance is 14 cm with the azimuth
oriented in the y-direction.

where $T_0 = 2z_0/v$. The diffraction surface is a hyperboloid, and its intersection with a vertical plane, for example, with a plane parallel to the y-axis, is a hyperbola given by

$t^2 = t_0^2 + \frac{4(y - y_0)^2}{v^2}$,  (2)

where

$t_0^2 = T_0^2 + \frac{4(x - x_0)^2}{v^2}$.  (3)

The two-step approach consists of performing a 2D migration in the y-direction according to (2), followed by a 2D migration in the x-direction according to (3). Note that in the monostatic case any summation order is valid.

Let us consider now a bistatic system where the TX-RX separation is 2d, and let us assume that the 3D experiment is executed by keeping a constant TX-RX azimuth. If we rotate the coordinate system in such a way that the y-direction is the azimuth direction, the 3D diffraction surface will now be given by

$t = \sqrt{\left(\frac{T_0}{2}\right)^2 + \frac{(x - x_0)^2}{v^2} + \frac{(y - y_0 - d)^2}{v^2}} + \sqrt{\left(\frac{T_0}{2}\right)^2 + \frac{(x - x_0)^2}{v^2} + \frac{(y - y_0 + d)^2}{v^2}}$.  (4)

The diffraction surface is no longer a hyperboloid, and its intersection with a vertical plane parallel to the y-axis is

$t = \sqrt{\left(\frac{t_0}{2}\right)^2 + \frac{(y - y_0 - d)^2}{v^2}} + \sqrt{\left(\frac{t_0}{2}\right)^2 + \frac{(y - y_0 + d)^2}{v^2}}$,  (5)

where

$\left(\frac{t_0}{2}\right)^2 = \left(\frac{T_0}{2}\right)^2 + \frac{(x - x_0)^2}{v^2}$.  (6)

The extension of the two-step approach consists of performing a 2D non-zero-offset migration in the y-direction according to (5), followed by a 2D zero-offset migration in the x-direction according to (6). Note that in the bistatic case the summation order is relevant, that is, the first step must be in the azimuth direction. The conclusion is that the extension of the two-step approach to a homogeneous medium investigated with a bistatic radar is possible, and the algorithm is totally equivalent to an exact one-step 3D migration.
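To make the separability of (1)–(3) concrete, the following minimal sketch (Python; the function and array names are ours, and nearest-sample interpolation is used for brevity) performs a 3D zero-offset migration as two cascaded 2D Kirchhoff-style diffraction summations, first along y according to (2) and then along x according to (3). It is an illustration of the idea under a constant velocity assumption, not the implementation used for the examples below.

    import numpy as np

    def migrate_2d(data, dx, dt, v):
        # One 2D zero-offset diffraction summation. data has shape
        # (trace position, two-way time); each image sample (x0, t0) sums
        # the data over the hyperbola t^2 = t0^2 + 4 (x - x0)^2 / v^2.
        n_x, n_t = data.shape
        out = np.zeros_like(data, dtype=float)
        x = np.arange(n_x) * dx
        trace = np.arange(n_x)
        for ix0 in range(n_x):                    # image position x0
            for it0 in range(n_t):                # image time t0
                t = np.sqrt((it0 * dt)**2 + 4.0 * (x - x[ix0])**2 / v**2)
                it = np.rint(t / dt).astype(int)  # nearest time sample
                ok = it < n_t
                out[ix0, it0] = data[trace[ok], it[ok]].sum()
        return out

    def two_step_migration(cube, dx, dy, dt, v):
        # Two-step 3D migration of a zero-offset cube with axes (x, y, t):
        # a 2D migration along y for every x line, then along x for every y line.
        step1 = np.stack([migrate_2d(cube[ix], dy, dt, v)
                          for ix in range(cube.shape[0])], axis=0)
        return np.stack([migrate_2d(step1[:, iy], dx, dt, v)
                         for iy in range(cube.shape[1])], axis=1)

In the bistatic extension the only change is that the first (azimuth-direction) pass sums over the non-zero-offset curve (5) instead of the hyperbola (2), while the second pass is unchanged.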

3. Approximations for the Vertically Heterogeneous Medium

The standard method to migrate the diffractions observed in a medium where the velocity varies vertically consists of using the rms velocity function to extend the use of the equations derived for the constant velocity medium. Another
Figure 5: Different forms that can be used to shape the migration operator aperture in order to resemble the real radar footprint; (a) and (b) are applicable to the one-step 3D operator to simulate a monostatic and a bistatic footprint, respectively; (c) is the aperture of the two-step operator obtained with a trivial 2D aperture limitation applied to both the x- and y-direction steps; (d) is the aperture of the two-step operator that can be obtained with a smooth weighting approach to resemble a shape similar to (b).

approach was successfully demonstrated at the Lawrence Livermore National Laboratories by Johansson and Mast [5]. This approach is applicable when the radar measurements are performed with air-launched horn antennas such as those often used for nondestructive testing of highways and bridges, for humanitarian demining, for diagnostic investigations on artworks, and so forth. The air gap that separates the antenna from the medium generates a situation where the migration velocity field varies vertically even if the medium is homogeneous. Johansson and Mast proposed a method based on an approximate estimation of the inflection point, that is, the point on the medium surface where the antenna-target raypath is bent according to Snell's law. Both methods preserve the property discussed above, that is, the possibility to split the 3D migration operation into a sequence of two bidimensional migrations provided that a proper order is followed when the system is bistatic. Let us examine both approximations, indicated briefly in the following as the rms and LLNL solutions, and let us perform a kinematical analysis of the expected errors with respect to the exact 3D migration. We will see that the final errors are the combination of the errors induced by the approximation adopted to estimate the diffraction surface plus the additional errors induced by the application of the two-step approach. Thus, let us discuss first the errors for the monostatic case and the bistatic case when the migration is performed in one step, and then let us consider the additional errors introduced by the two-step approach.
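The order of magnitude of these travel-time errors is easy to reproduce numerically. The sketch below (Python; the function names and the air velocity value are our assumptions, and the geometry matches Figure 1(a)) compares the exact two-layer travel time, obtained by minimizing over the inflection point on the air/soil interface as Snell's law requires, with the straight-ray rms-velocity approximation.

    import numpy as np
    from scipy.optimize import minimize_scalar

    C_AIR = 30.0  # radar wave velocity in air, cm/ns (assumed)

    def t_exact(r, H, z0, v_soil):
        # One-way travel time by Fermat's principle: minimise over the
        # inflection point xi on the interface (equivalent to Snell's law).
        path = lambda xi: np.hypot(H, xi) / C_AIR + np.hypot(z0, r - xi) / v_soil
        if r <= 0.0:
            return path(0.0)
        return minimize_scalar(path, bounds=(0.0, r), method='bounded').fun

    def t_rms(r, H, z0, v_soil):
        # Straight-ray travel time using the rms velocity of the two layers.
        t0 = H / C_AIR + z0 / v_soil          # one-way vertical travel time
        v_rms = np.sqrt((C_AIR**2 * (H / C_AIR) + v_soil**2 * (z0 / v_soil)) / t0)
        return np.sqrt(t0**2 + (r / v_rms)**2)

    # Two-way (monostatic) error for H = 10 cm, z0 = 10 cm, v_soil = 12 cm/ns
    H, z0, v_soil = 10.0, 10.0, 12.0
    for r in (5.0, 10.0, 20.0):               # lateral distance r - r0 in cm
        err = 2.0 * (t_rms(r, H, z0, v_soil) - t_exact(r, H, z0, v_soil))
        print(f"r - r0 = {r:5.1f} cm -> two-way travel time error = {err:+.3f} ns")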
Starting with the monostatic system, Figure 1(a) shows an example of the travel time error, that is, the difference

Figure 6: 3D reconstruction of a plastic antipersonnel mine obtained by applying the two-step migration algorithm to real laboratory data. The mine is buried 10 cm below the surface of a sand box. The frequency range is from 2 to 6 GHz; the TX-RX distance is 14 cm; the antenna is moved 13 cm above the soil. The 3D reconstruction is a 3D contour of a selected amplitude of the migrated signal that emphasizes the energy reflected by the sand surface and the energy scattered by the mine.

between the exact and the approximated diffraction surface, obtained with the rms approach. In the same way we can explore the LLNL approximation and, as claimed by Johansson and Mast [5], a lower error level is to be expected (Figure 1(b)). As a result, we also expect that a larger aperture of the migration operator can be selected for the LLNL method. For a quantitative comparison, let us conventionally limit the migration aperture to the circular area where the phase errors of the highest frequency contributions do not exceed π/2. The aperture comparison is shown in Figure 2, assuming, as an example, that the highest frequency is 6 GHz.

Let us now consider a bistatic system. The travel time error is still expected to be a function of the target depth and of the lateral displacement of the antenna box from the vertical above the target, but a further parameter will influence the error: the azimuth direction. As a result, the limit of the migration aperture varies with the direction from which the contributions come. This is shown in Figure 3, where the migration aperture for the one-step approach is plotted on horizontal planes at four different target depths.

Finally, let us extend the error analysis to the two-step approach. With respect to the one-step approach, we have to include a further error, that is, the difference between the 3D approximation of the diffraction surface and the actual diffraction surface over which the contributions are taken when the two-step approach is applied. For the rms approximation, the difference comes from the velocities that are applied to perform step 1 with (5) and step 2 with (6): in principle, both of these velocities should be equal to the rms velocity observed at the zero-offset time T0, whereas in practice the velocity applied for the first step is the rms velocity observed at a higher zero-offset time given by t0. In other words, the problem is due to the fact that, when the velocity varies vertically, for a given t0 the intersection of the one-step 3D diffraction surface with a vertical plane parallel to the y-axis, (5), is not only a function of t0 but also depends on the zero-offset time T0 (see (6)). A similar comment is applicable to the LLNL approach. Again the problem comes from the fact that the first step should collapse in t0 contributions that belong to different intersection curves depending on the final T0 where the diffraction is going to be focused.

As an example, Figure 4 shows how the migration aperture of Figure 3 is further reduced when the two-step approach is applied. Again we see that the LLNL approximation is more accurate than the rms approximation, but the gap is now less remarkable than in Figure 3, meaning that the rms approximation is particularly robust with respect to the degradation introduced by the two-step approach.

Finally, let us remind the reader that the migration aperture we are discussing here is the aperture that we would like to select to perform a constructive interference of the summed contributions. As we are going to see in the next section, the actual shape of the migration aperture that we can obtain with the two-step approach might be very different. Besides, we want to stress that the aperture discussed in this section has nothing to do with the real footprint of the radar system, that is, with the area actually illuminated by the antenna, which depends on many other factors related to the antenna distance from the medium, the medium absorption, the medium permittivity, and so forth. Nevertheless, the results of the analysis are encouraging because the real footprint measured on experimental data with physical parameters similar to those assumed in the above examples is seldom wider than the conservative aperture of the two-step operator suggested by Figure 4 to prevent destructive interference of unfocused data.

4. Design of the Operator Aperture


When 3D migration is performed in one step, the aperture of the operator can be designed according to any desired shape, for example, as a circle or an ellipse (Figure 5(a)) to resemble a monostatic footprint, or as an intersection of circles or ellipses to resemble a bistatic footprint (Figure 5(b)). Instead, the two-step approach is strongly limited in this respect. Gibson et al. [1] pointed out that the operator aperture cannot be circular or elliptical; rather, a rectangular shape would result if a constant limit is applied to both the x and the y migration steps. Furthermore, if this limit is time variant in following the expected footprint increase with depth, a characteristic distortion of the rectangular shape is expected (Figure 5(c)). A low-cost solution that we propose in order to return to pseudo-elliptical apertures consists of applying a smooth weighting function rather than an on-off function to select the data that will contribute to focusing a point in the migrated space. For example, if the weighting function W(x, y) is obtained as W(x, y) = Wx(x) · Wy(y), so that it is suited for the two-step approach, and both Wx and Wy are designed as smooth functions, for example, Hanning functions, the 3D smooth aperture of the two-step operator will appear as in Figure 5(d).
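A minimal sketch of such a separable smooth aperture (Python; the half-widths are illustrative values, not parameters from the analysis above):

    import numpy as np

    def hanning_taper(coord, center, half_width):
        # Smooth 1D weight: 1 at the center, decaying to 0 at +/- half_width.
        u = np.clip((coord - center) / half_width, -1.0, 1.0)
        return 0.5 * (1.0 + np.cos(np.pi * u))

    # Separable weight W(x, y) = Wx(x) * Wy(y): each factor can be applied
    # in its own 2D migration step, yielding the pseudo-elliptical smooth
    # aperture of Figure 5(d) instead of the distorted rectangle of 5(c).
    x = np.linspace(-50.0, 50.0, 201)   # cm
    y = np.linspace(-50.0, 50.0, 201)
    Wx = hanning_taper(x, center=0.0, half_width=30.0)
    Wy = hanning_taper(y, center=0.0, half_width=40.0)
    W = np.outer(Wx, Wy)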

Figure 7: 3D representations through cross-sections (a, c) and through iso-amplitude plots (b, d) of the focused data over a sector of the clay box of the JRC mine field. The sector, explored with a 1 GHz antenna, contains three mines buried at 10 cm and 15 cm. M3B indicates mines with low metal content, while M3A indicates mines with high metal content.

5. Application Examples
A few examples are briefly presented to illustrate situations where the vertical heterogeneity of the medium is successfully handled by using the two-step migration approach with the rms velocity approximation to focus the data collected with high-frequency bistatic radar systems.

Figure 6 presents the 3D image of a nonmetal dummy mine (diameter 11 cm, thickness 6 cm) buried 10 cm deep in a sand box. The target was explored with a stepped frequency radar prototyped by RST. The area was manually scanned by executing parallel profiles with a TX-RX aperture of 14 cm and keeping the air-launched antennas at an approximate height of 13 cm above the sand surface. The final scanning grid was approximately 0.8 cm in the profile direction and 3 cm in the orthogonal direction. The frequency range was 2–6 GHz with a frequency step of 16 MHz. Despite the very low permittivity contrast between the sand and the mine material, the weak signal scattered by the target was successfully collapsed by the migration operator, producing a final focused image where the mine energy is sufficiently higher than the noise.

The second example is taken from a test performed at the landmine field prepared at the Joint Research Center (JRC) in Ispra (Italy). The test site was designed to evaluate the performance of any new equipment proposed for humanitarian demining. The field consists of seven boxes with different soils; every box is 6 m long and 5.7 m wide and is prepared by reproducing exactly the same scheme of buried mines or objects. Here we collected some 3D data with a 1 GHz commercial antenna controlled by a pulse radar unit from Malå. The final goal was to explore the effect of a 2 cm layer of ballistic material used to cover a small sector of the investigation area (e.g., 1 × 1 m) in order to protect the operator and the radar antenna in case of a landmine activation. The GPR profiles were manually performed by sliding the antenna over the ballistic protection layer. Of course, the ballistic material was chosen among materials with good dielectric properties and with low density so that the radar signal is not absorbed, and the weight of the

Figure 8: 3D reconstruction of the joints that were used to connect the marble fragments when the monument was rebuilt in the 20th century. Data collected by sliding a 1 GHz antenna over a cardboard in order to preserve the integrity of the delicate carvings that decorate the explored wall. The focused data are displayed by showing, from two different perspectives, the 3D contour of an iso-amplitude surface. Dimensions are indicated in cm.

armored pad plus the weight of the antenna are not expected to trigger a landmine. The 3D processed data are shown in Figure 7 using two different representation methods. The quality of the final results is quite good considering that these data were collected on the unfavorable clay soil box. The result was also compared with data collected without the protection pad, validating the expectation that the ballistic material does not introduce any significant degradation. This is also a demonstration that the two-step migration operator was properly dealing with the heterogeneity of the medium consisting of two different materials: the ballistic layer and the clay soil.

The third example is a 3D survey of a wall of a marble monument built in Rome in 13 BC. The fragments of the monument were found during archeological excavations, and the monument was rebuilt in the third decade of the 20th century. Unfortunately, some details about the reconstruction are missing. The data presented in Figure 8 were collected with GSSI pulse radar equipment by scanning the wall with a 1 GHz antenna. A cardboard was interposed between the antenna and the wall in order to create a flat surface and preserve the delicate carvings that decorate the wall. As a result, the antenna is partially detached from the wall. Again, this situation creates a material heterogeneity that can be roughly assimilated to a two-layer structure, where the first layer is a few centimeters thick and consists mainly of voids, and thus is very fast for the radar signal, while the second layer consists of marble. The focused data reveal the existence of a few metallic bars that were used to connect the marble fragments.

The last example is taken from an investigation performed on a historical Palace in Venice in order to map the position and length of the hidden iron connection devices that were used until the nineteenth century in Venetian buildings to link the wooden floors to the masonry external facades. These metal joints were called "fiube" and were either nailed to the floor planks or nailed to the timber beams supporting the wooden floor [6]. The heads of these metal elements are sometimes visible on the external facade; other times they are hidden inside the stone masonry structure or are hidden by the plaster or by ornamental elements decorating the facade. This was the case of the investigated Palace, where a high-frequency (2 GHz) GPR system was used to locate these elements by surveying the floors of the Palace, especially close to the external walls and in the corners of the building. Figure 9 shows an example of a 3D radar image after the application of the 3D migration algorithm. The floor in this room consists of a sort of ceramic pavement deposited over a timber floor consisting of planks supported by beams. As a result, the radar signal diffracted by the fiube travels in a heterogeneous material consisting of ceramics, timber, and voids. The image presented in Figure 9 is a depth slice taken at 12.5 cm below the floor surface. At this depth the radar survey was intersecting two fiube with different headings, the survey being close to a building corner. The radar profiles were run in the direction perpendicular to the shorter fiuba. Thus, the profiles were intersecting the longer fiuba at an angle of about 45°. The well-focused images obtained for both metal elements, regardless of their orientation and regardless of the heterogeneity of the material, demonstrate the effectiveness of the 3D migration algorithm based on the two-step approach.

6. Conclusions
A direct extension of the two-step approach for fast 3D
migration of bistatic GPR data is possible provided that the

Figure 9: Depth slice at 12.5 cm below the floor surface extracted from a 3D radar survey performed with a 2 GHz antenna in proximity of a building corner of a historical palace in Venice. Data were processed with the two-step 3D migration algorithm. Two fiube (one coming from the building corner) appear in the radar image.

source-receiver azimuth is constant. As for the zero-offset case, the two-step operator is exactly equivalent to the one-step 3D operator for a constant velocity medium and is an approximation of the one-step 3D operator for a medium where the velocity varies vertically.

Two methods have been considered (rms and LLNL) for migrating data collected with air-launched antennas, where the air gap that separates the antennas from the medium generates a situation that requires a vertically variable migration velocity even if the medium is homogeneous. Both methods are suitable for the application of the two-step approach, and they have been compared in terms of accuracy of the final 3D operator. The result of the analysis is that both the rms and the LLNL methods can be applied with the two-step approach while producing a negligible degradation of the migration accuracy. A solution has also been proposed for an optimal shaping of the two-step operator aperture.

The impact of the two-step algorithm on the CPU cost of the 3D migration is quite interesting, as 3D images such as the mine reconstruction of Figure 6 can be produced in a few seconds, that is, in real time, with a standard personal computer. The advantage is of great interest if we consider that currently the GPR suppliers are producing multichannel GPR equipment with more and more antennas mounted in a cart to increase productivity. These systems generate huge amounts of data that cannot be migrated in real time by a single computer unless a very effective algorithm is used.

Finally, the rms method and the accuracy discussion are also of interest when radars with ground-coupled antennas are used to investigate a medium that is vertically heterogeneous. This is a frequent situation when the GPR is applied to NDT inspections of layered structures such as walls, floors, and pavements.

Acknowledgments
The authors are grateful to RST GmbH that developed
the stepped frequency radar prototype for humanitarian
demining, to the Joint Research Center in Ispra that gave free
access to the mine test field, to Dr. G. Lenzi of ISMES S.p.A.
who performed the acquisitions on the marble monument in
Rome, and to IDS S.p.A. that supplied the 2 GHz system for
the experiments in the Venetian Palace.


References
[1] B. Gibson, K. Larner, and S. Levin, "Efficient 3-D migration in two steps," Geophysical Prospecting, vol. 31, no. 1, pp. 1–33, 1983.
[2] H. Jakubowicz and S. Levin, "A simple exact method of 3-D migration – theory," Geophysical Prospecting, vol. 31, no. 1, pp. 34–56, 1983.
[3] A. Canning and G. H. F. Gardner, "A two-pass approximation to 3-D prestack migration," Geophysics, vol. 61, no. 2, pp. 409–421, 1996.
[4] H. Meinardus, C. Nieto, A. Chaveste, and J. Castaneda, "Efficient, target-oriented 3-D prestack depth migration in two steps," Leading Edge, vol. 19, no. 2, pp. 138–144, 2000.
[5] E. M. Johansson and J. E. Mast, "Three-dimensional ground-penetrating radar imaging using synthetic aperture time-domain focusing," in Advanced Microwave and Millimeter-Wave Detectors, vol. 2275 of Proceedings of SPIE, pp. 205–214, July 1994.
[6] G. Mirabella-Roberti, L. Zanzi, and F. Trovo, "Detecting hidden ties in historic Venetian palaces by means of GPR," in Proceedings of the International RILEM Conference on Site Assessment of Concrete, Masonry and Timber Structures (SACoMaTiS '08), pp. 965–974, Varenna, Italy, September 2008.

Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 125201, 11 pages
doi:10.1155/2010/125201

Research Article
ICA Mixtures Applied to Ultrasonic Nondestructive Classification
of Archaeological Ceramics
Addisson Salazar and Luis Vergara
Grupo de Tratamiento de Señal, Instituto de Telecomunicaciones y Aplicaciones Multimedia (iTEAM), Universidad Politécnica de Valencia, Camino de Vera s/n, 46022 Valencia, Spain
Correspondence should be addressed to Luis Vergara, lvergara@dcom.upv.es
Received 23 December 2009; Revised 7 May 2010; Accepted 7 June 2010
Academic Editor: Joao Manuel R. S. Tavares
Copyright 2010 A. Salazar and L. Vergara. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
We consider a classifier based on Independent Component Analysis Mixture Modelling (ICAMM) to model the feature joint-probability density. This classifier is applied to a challenging novel application: classification of archaeological ceramics. ICAMM gathers relevant characteristics that are of general interest for material classification. It can deal with arbitrary forms of the underlying probability densities in the feature vector space, as nonparametric methods can do. Mutual dependences among the features are modelled in a parametric form, so that ICAMM can achieve good performance even with a training set of relatively small size, which is characteristic of parametric methods. Moreover, in the training stage, ICAMM can incorporate probabilistic semisupervision (PSS): labelling by an expert of a portion of the whole available training set of samples. These properties of ICAMM are well suited to the problem considered: classification of ceramic pieces coming from four different periods, namely, Bronze Age, Iberian, Roman, and Middle Ages. A feature set is obtained from the processing of the ultrasonic signal that is recorded in through-transmission mode using an ad hoc device. A physical explanation of the results is obtained by comparison with classical methods used in archaeology. The results obtained demonstrate the promising potential of ICAMM for material classification.

1. Introduction
Determining the historical period of archaeological ceramic shards is important for many archaeological applications, particularly to reconstruct human activities of the past. In fact, the standardization of an efficient and nondestructive testing (NDT) method for ceramic characterization could become an important contribution for archaeologists. Chemical, thermoluminescence, and other analyses have been shown to measure the age of ceramics accurately, but they are expensive, time-consuming, and involve some destruction of the analyzed pieces [1]. Relative dating by comparison with ceramic collections is nondestructive but very inaccurate [1].

Ultrasound has been used in archaeological applications such as ocean exploration to detect wrecks, imaging of archaeological sites, and cleaning archaeological objects [2–4]. In this paper, we consider a method to sort archaeological ceramic shards based on ultrasonic nondestructive evaluation. This method aims to be economic, fast, precise, and innocuous for the ceramic pieces. It consists of three steps: measuring by the through-transmission technique, extracting features from the measured ultrasonic signals, and classifying the feature set into classes corresponding to historic or protohistoric periods.

The estimation of the chronological period of an archaeological fragment is not a straightforward task, especially if we consider that the fragment might have been moved from its context of origin due to migrations, wars, trade exchanges, and so forth. In addition, some external features used for classification of archaeological objects, such as particular shapes and decorations, might not be evident in the fragments, and thus these aspects would not provide information for a correct classification of the fragments.

Through-transmission was selected because the ceramic produces large attenuation of the propagating ultrasound,

so the pulse-echo technique cannot be implemented at the required operating frequency. Time, frequency, and statistical features (to be described later) were extracted using standard signal processing techniques. The characteristics of the classification problem offer a good case study for testing advanced classifiers, like those based on modelling the underlying statistical densities of the feature space as mixtures of independent component analyzers.

In consequence, we dedicate Section 2 to presenting the ultrasound through-transmission model from a linear system perspective and to defining the selected features. Then, in Section 3 we present the rationale for these classifiers and describe them based on mixtures of independent component analyzers. Section 4 presents the experiments and the results obtained in the sorting of ceramic pieces from four different periods: Bronze Age, Iberian, Roman, and Middle Ages. Section 5 presents the conclusions and future lines of work.

We reported some preliminary results related to this archaeological application in a conference paper [5]. The following significant new contributions are presented in this paper: the rationale and selection of new ultrasonic features; the use of a classifier that is based on probabilistic semisupervision of independent component analyzer (ICA) mixture models, which is suitable for handling expert uncertainty; the implementation of an ad hoc device designed to avoid the uncontrolled conditions of a totally manual measurement procedure; and a demonstration of the physical interpretation of the results obtained by the proposed method in comparison with classical methods used in archaeology. Therefore, this work provides the foundations to implement a practical method to complement or even replace some of the destructive and time-consuming techniques that are currently employed in archaeology.

2. Through-Transmission Model and Features Definition

A simplified model of ultrasonic through-transmission analysis is to consider that the recorded signal is the convolution of the material reflectivity with a linear time-variant (LTV) system (see Figure 1). The time-varying impulse response of the LTV system is the injected ultrasonic pulse travelling through the material, which bears the effects of attenuation and dispersion that affect both its amplitude and frequency content. Actually, some nonlinearity may be incorporated into this simple model in some specific cases; however, in general, the linear assumption is adequate for a large number of situations, or is at least enough to be able to obtain practical solutions yielding reasonable performance. Thus, the received signal x(t) looks similar to the one shown in Figure 1.
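A minimal simulation of this model (Python; all waveform parameters are illustrative assumptions, not values from the paper) convolves a white reflectivity with a pulse whose amplitude and centre frequency decay with travel time:

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 100e6                                  # sampling frequency, Hz
    t = np.arange(4096) / fs
    refl = rng.standard_normal(t.size)          # white Gaussian reflectivity

    def pulse(tau, t):
        # Time-varying impulse response: a Gabor pulse whose amplitude
        # decays (attenuation) and whose centre frequency shifts down
        # (dispersion) as the travel time tau increases.
        f0 = 1.05e6 * np.exp(-2e3 * tau)
        a = np.exp(-3e3 * tau)
        return a * np.exp(-((t - tau) * 2e6)**2) * np.cos(2 * np.pi * f0 * (t - tau))

    # LTV "convolution": superpose a different pulse for each reflector,
    # then add white Gaussian observation noise.
    x = sum(r * pulse(tau, t) for r, tau in zip(refl[::64], t[::64]))
    x = x + 0.01 * rng.standard_normal(t.size)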
If we consider that x(t) is a realization of a nonstationary stochastic process $\{\tilde{x}(t)\}$ having instantaneous power spectral density $P_x(f,t)$, different ultrasonic signatures us(t) can be computed, like those included in the following:

Centroid frequency $f_c$:

$us(t) = f_c(t) = \dfrac{\int_{f_1}^{f_2} f\, P_x(f,t)\, df}{\int_{f_1}^{f_2} P_x(f,t)\, df}$.

Maximum frequency $f_{\max}$:

$us(t) = f_{\max}(t) = \arg\max_f P_x(f,t)$.

Bandwidth (BW):

$us(t) = \mathrm{BW}(t) = \left[\dfrac{\int_{f_1}^{f_2} \left(f - f_c(t)\right)^2 P_x(f,t)\, df}{\int_{f_1}^{f_2} P_x(f,t)\, df}\right]^{1/2}$.

Maximum frequency amplitude $A_{f_{\max}}$:

$us(t) = A_{f_{\max}}(t) = \max_f P_x(f,t)$.  (1)
These signatures are measures of the spectral content variations that are affected by the ultrasonic pulse travelling inside the material. They can be estimated by means of well-known smoothing techniques of time-frequency spectral analysis [6].

From us(t), we can obtain features in different forms. For example, the time average value $(1/(t_1 - t_0)) \int_{t_0}^{t_1} us(t)\, dt$ or the instantaneous value at one particular time instant, $us(t_0)$, can be elements of the feature vector in the observation space. Other time-domain features, such as the parameters A and α corresponding to an exponential model of the signal attenuation $\hat{x}(t) = A e^{-\alpha t}$, or the total signal power received, $P = \int_0^T |x(t)|^2\, dt / T$, are also possible to complement the frequency-domain features.

More features can be defined considering special conditions of the through-transmission model. For example, higher-order statistics can be used to detect the possible degree of non-Gaussianity of the reflectivity by measuring higher-order moments of the received signal like $\mathrm{HOM} = E[x(nT_s) \cdot x((n-1)T_s) \cdot x((n-2)T_s)]$ [7], where $E[\cdot]$ means statistical expectation and $1/T_s$ is the sampling frequency. Departures from the linear model of Figure 1 can be tested in different forms, for example, using the so-called time-reversibility [8], which is defined by $\mathrm{TR} = E[(dx(t)/dt)^3]$.

3. Independent Component Analysis Mixture Model

Let us consider a probabilistic classification context where some selected features are organized as elements of vectors belonging to an observation space to be divided into K classes $\{C_k\}$, $k = 1, \ldots, K$. Given an observed feature vector x, we want to determine the most probable class. More formally, we want to determine the class $C_k$ that maximizes the conditional probability $p(C_k/x)$. Since classes are not directly observed, Bayes' theorem is used to express $p(C_k/x)$ in terms of the class-conditioned observation probability density $p(x/C_k)$ in the form $p(C_k/x) = p(x/C_k) p(C_k) / p(x)$. Note that $p(x)$ is a scaling factor that is irrelevant to the maximization of $p(C_k/x)$, and that the a priori probability $p(C_k)$

Figure 1: The through-transmission linear time-variant model. [Block diagram: the material reflectivity (Gaussian or non-Gaussian white noise) excites a linear time-variant system H(t, τ) representing the ultrasound pulse with attenuation and dispersion effects; the recorded signal x(t) contains grain noise (Gaussian or non-Gaussian coloured noise) plus white Gaussian observation noise.]

is assumed to be known (or equal to 1/K for all classes). Hence, the key problem focuses on the estimation of $p(x/C_k)$. A nonparametric classifier tries to estimate $p(x/C_k)$ from a training set of observation vectors, but this becomes progressively intractable as the dimension of the observation space (the number of features) increases, because the required size of the training set becomes prohibitive. On the other hand, a parametric classifier assumes a given form for $p(C_k/x)$ and, thus, tries to estimate the required parameters from the training observation set [9]. Most of the classifiers following parametric approaches consider Gaussian densities to simplify the problem in the absence of other information that could lead to better choices. Moreover, both parametric and nonparametric classifiers become much more complicated in semisupervised scenarios, that is, when part of the observed vectors belonging to the training set have unknown classes [10].

Therefore, procedures that would be of interest in the general area of classification should combine the following characteristics: the versatility of the nonparametric approach (from the point of view of the assumed form of $p(x/C_k)$); the simplicity of the parametric methods (in the sense that most effort will concentrate on the estimation of a finite set of parameters); and the ability to operate in semisupervised scenarios. This is especially remarkable in the area of nondestructive classification of materials. On one hand, the prediction of the joint density of some selected features is almost impossible (Gaussianity is an assumption that is too restrictive in many cases). On the other hand, there are some applications where the available set of specimens used to obtain the training set can hardly be classified. This happens, for example, when the specimen cannot be destroyed to find the true inner state or when the definition of the K classes is not clearly known a priori.

Hence, it is advisable not to assume particular parametric distributions (like the normal density) in the design of the classifier. On the other hand, very often the archaeologist does not know the period of all the available specimens that could be used to form the training set of observations: semisupervised training is a requirement in this application. Even more interesting is that the expert archaeologist can assign some class probabilities (ranging from 0 to 1) to part of the pieces of the training set, a scenario that we will call probabilistic semisupervision (PSS). Most semisupervised classifiers are not capable of dealing with PSS; they can handle labelled feature vectors only if the assigned probabilities are 0 or 1.

In this paper, we consider the application of a classification method based on the independent component analysis mixture model (ICAMM) [11–15]. ICAMM has the two required conditions: versatile modelling and the possibility of PSS training. In ICAMM, it is assumed that every class satisfies an independent component analysis (ICA) model [16, 17]: vectors $x_k$ corresponding to a given class $C_k$, $k = 1, \ldots, K$, are the result of applying a linear transformation $A_k$ to a (source) vector $s_k$, whose elements are independent random variables, plus a bias or centroid vector $b_k$, that is, $x_k = A_k s_k + b_k$, $k = 1, \ldots, K$. This implies that the overall density of the observation vectors may be expressed as the mixture $p(x) = \sum_{k=1}^{K} p(x/C_k)\, p(C_k)$, which gives the name to the model. Moreover, it also implies that $p(x/C_k) = |\det A_k^{-1}|\, p(s_k)$, where $s_k = A_k^{-1}(x - b_k)$. Thus, estimation of $p(x/C_k)$ means estimation of $A_k^{-1} = W_k$ and $b_k$ (like in a parametric method) plus estimation of $p(s_k)$. However, this problem is simpler than the original one, since the joint density of the elements of $s_k$ can be expressed as the product of the marginals $p(s_k) = p(s_{k1}) p(s_{k2}) \cdots p(s_{kN})$. Therefore, a very complex N-dimensional problem (where N is the number of features) is broken down into N one-dimensional problems that are more tractable. Actually, many different types of densities can be assumed for the marginals, thus relaxing the Gaussianity constraint and allowing a nonparametric estimation. In this sense, ICAMM can be considered a hybrid method that combines the advantages of nonparametric and parametric models. There are a few references on the application of this method in NDT [18].


In summary, given one measured feature vector x, the assigned class is given by

$C(x) = \arg\max_{C_k} p(C_k/x) = \arg\max_{C_k} |\det W_k|\, p(s_k)\, p(C_k)$, $k = 1, \ldots, K$,  (2)

where $s_k = W_k(x - b_k)$, and $W_k$, $b_k$, and $p(s_k)$ are estimated by means of PSS training. This is achieved using an iterative algorithm that we briefly describe below (a more detailed description can be found in [15]). A relevant concept in ICAMM learning is the embedded ICA algorithm. As ICAMM is a set of multiple ICA models, learning of the ICAMM parameters is essentially equivalent to simultaneous learning of a set of ICA parameters. Thus, any ICA algorithm can be used as part of the global ICAMM learning algorithm, as we describe below.

Let us consider that the set of training feature vectors is formed by $x_m$, $m = 1, \ldots, M$. We divide the set into two subsets. The first subset is formed by $x_m$, $m = 1, \ldots, M_1$, $M_1 \le M$, vectors such that the expert archaeologist is capable of assigning some $p^{(0)}(C_k/x_m)$ ranging between 0 and 1 for $k = 1, \ldots, K$. The second subset is formed by $M_2 = M - M_1$ vectors where no knowledge exists about the possible class they belong to. The learning algorithm proceeds in the following manner.

Initialization. For $k = 1, \ldots, K$, compute:

$b_k^{(0)} = \sum_{m=1}^{M_1} x_m\, p^{(0)}(C_k/x_m)$ (if $M_1 = 0$, then select the initial centroids randomly);

$W_k^{(0)}$ (randomly);

$p^{(0)}(s_k)$ (in a form depending on the selected embedded ICA algorithm) using $s_{km}^{(0)} = W_k^{(0)}(x_m - b_k^{(0)})$.

Updating. For $i = 1, \ldots, I$ and for $k = 1, \ldots, K$, compute:

for the probabilistically labelled vectors,

$p^{(i)}(C_k/x_m) = p^{(0)}(C_k/x_m)$, $m = 1, \ldots, M_1$;  (3)

for the unlabelled vectors,

$p^{(i)}(C_k/x_m) = \dfrac{\left|\det W_k^{(i-1)}\right| p^{(i-1)}\!\left(s_{km}^{(i-1)}\right) p(C_k)}{\sum_{k'=1}^{K} \left|\det W_{k'}^{(i-1)}\right| p^{(i-1)}\!\left(s_{k'm}^{(i-1)}\right) p(C_{k'})}$, $m = M_1 + 1, \ldots, M$;  (4)

$b_k^{(i)} = \sum_{m=1}^{M} x_m\, p^{(i)}(C_k/x_m)$;

$W_k^{(i)} = W_k^{(i-1)} + \Delta W_k^{(i-1)}$, with $\Delta W_k^{(i-1)} = \sum_{m=1}^{M} \Delta W_{km(\mathrm{ICA})}^{(i-1)}\, p^{(i-1)}(C_k/x_m)$, where $\Delta W_{km(\mathrm{ICA})}^{(i-1)}$ is the update due to training sample $x_m$ in the selected embedded ICA algorithm;

$p^{(i)}(s_k)$ (in a form depending on the selected embedded ICA algorithm) using $s_{km}^{(i)} = W_k^{(i)}(x_m - b_k^{(i)})$.
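For concreteness, the classification rule (2) can be sketched as follows (Python; the parameter containers are our own naming, and the learned marginals p(sk) are passed in as callables):

    import numpy as np

    def icamm_classify(x, W, b, marginal_pdfs, priors):
        # Assign x to the class maximizing |det Wk| p(sk) p(Ck), cf. (2).
        # W, b: lists of K unmixing matrices and centroids;
        # marginal_pdfs: for each class, a list of N callables p(sk_n);
        # priors: the K values p(Ck).
        scores = []
        for Wk, bk, pdfs, pk in zip(W, b, marginal_pdfs, priors):
            sk = Wk @ (x - bk)                 # source estimate for class k
            p_sk = np.prod([pdf(s) for pdf, s in zip(pdfs, sk)])
            scores.append(abs(np.linalg.det(Wk)) * p_sk * pk)
        return int(np.argmax(scores))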

4. Experiments and Results


Two identical transducers (one in emitter mode and the other one in receiver mode) with a nominal operating frequency of 1.05 MHz were used to obtain the through-transmission signals. This operating frequency was selected, after performing different tests, as the most appropriate to achieve small ultrasound attenuation with enough resolution to separate different kinds of ceramics. The sampling frequency was 100 MHz and the observation time was 0.1 ms (10000 samples) for every acquisition. To reduce observation noise, 16 acquisitions were averaged. The size of the transducers was also important since the ceramic pieces were small (a few centimetres in height and length, and less than one centimetre in width, see Figure 2).

The ceramic pieces were measured using a device where the ceramic piece is placed between two cases that adjust to the curved surfaces of the piece (see Figure 3). This device was implemented to perform controlled and repeatable measurements, thereby improving on the manual recording. A rubber adaptor was used as coupling medium to match the acoustical impedance of the transducer to the piece. The adaptor provides good coupling to the surface of the material and is innocuous to the piece. The emitter is located in a case on the lower side of the piece and the receiver is located in a case on the upper side of the piece. Note that the transducers are embedded into a case that has a pressure control that allows the force applied to the material to be the same for each measurement. Since the propagation velocity is an important feature for classification, the device has a mechanism that allows the piece thickness to be measured and transmitted to the signal processing system simultaneously with the ultrasound measurement.

The distribution of the pieces was: 47 Bronze Age, 155 Iberian, 138 Roman, and 140 Middle Ages. Thus, a total of

Figure 2: Images of typical ceramic pieces (Bronze Age, Iberian, Roman, and Middle Ages).

480 pieces from deposits in the Valencian Community in Spain were used in the experiments. The features were selected from those defined in Section 2. A total of 11 features were considered. The first 4 were the time averages, over the whole acquisition interval, of the 4 ultrasonic signatures defined in (1). The squared magnitude of the Short-Term Fourier Transform was used to estimate Px(f, t).

Feature number 5 was fc(t0), the instantaneous value of the centroid frequency at a specific time instant. The parameters A, α, P, HOM, and TR that are defined in Section 2 were also included in the feature vector. Finally, the velocity of propagation v of the ultrasound, which was measured by dividing the piece thickness by the pulse arrival delay, was also considered, since it is a standard variable in the ultrasonic characterization of materials. Figure 4 shows examples of the time record, spectrum, and histogram for each period. It also shows the eleven features obtained for each example. Note the significant differences (in general) among the feature values corresponding to different periods, which provide the opportunity for good classification performance.

First, the signal features were preprocessed with Principal Component Analysis (PCA) [19] to reduce the dimension of the problem as much as possible and to detect redundancies among the selected features. This resulted in only 6 significant features (components), which were linear combinations of the original ones. These 6 components explained a total of 90% of the data variance.
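A sketch of this preprocessing step (Python; the matrix name is ours, with feature vectors as rows):

    import numpy as np

    def pca_reduce(X, n_components=6):
        # Project the feature vectors onto the leading principal components;
        # returns the component scores and the explained-variance fraction.
        Xc = X - X.mean(axis=0)                        # center each feature
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        explained = (s ** 2) / (s ** 2).sum()
        scores = Xc @ Vt[:n_components].T              # linear combinations
        return scores, explained[:n_components].sum()  # e.g. about 0.90 here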
We had a total of 480 × 0.75 = 360 original samples for training. By adding spherical Gaussian noise to the original samples, three replicates were generated to obtain a total of 1440 samples for training. We performed 100 runs varying the sets of 360 samples used for training and 120 used for testing. The percentage of success in determining the correct class was then evaluated for a total of 120 × 100 testing samples.

Different alternative ICAMM classifiers were implemented together with other classical classifiers. We considered four embedded ICA algorithms: nonparametric ICA
Table 1: Classification accuracy (percentage) obtained with the different variants of ICAMM.

PSS ratio   NP-ICA   JADE   TDSEP   fastICA
1           0.83     0.81   0.79    0.81
0.8         0.79     0.75   0.69    0.71
0.6         0.72     0.67   0.66    0.60
0.4         0.65     0.64   0.55    0.59

Table 2: Classification accuracy (percentage) obtained with the other methods.

LDA    RBF    LVQ    MLP    kNN
0.73   0.64   0.59   0.67   0.64

(NP-ICA) [15, 20]; JADE [21]; TDSEP [22]; and fastICA [23]. Several PSS ratios were also tested (the PSS ratio is defined as the proportion between probabilistically labelled and unlabelled data in the training stage). The Linear Discriminant Analysis (LDA) classifier [19] was also verified, as it is representative of a supervised classifier that is optimum under Gaussianity assumptions. Some other classifiers based on neural network schemes were also implemented: Radial Basis Function (RBF), Learning Vector Quantization (LVQ), and Multilayer Perceptron (MLP) [24]. The k-nearest neighbor (kNN) classifier was tested as well [19].

Table 1 shows the overall percentage of classification accuracy achieved by the different ICAMM variants. Table 2 shows the overall percentage of classification accuracy achieved by the other methods implemented. Note that different values of the fitting variables required in each method (e.g., the value of k in kNN) were tested, and the results shown are the best ones obtained.

The best performance in classification was obtained using ICAMM NP-ICA at a PSS ratio of 1 (total probabilistic supervision), achieving a classification accuracy of 83%, which is much better than the rest of the supervised methods (LDA, RBF, LVQ, MLP, and kNN). As the PSS ratio in

Figure 3: Measurement device employed in ultrasonic signal acquisition; a detail of the ultrasound transducer case is included. [Labelled parts: piece thickness measure, coupling medium, ultrasound transducer, receiver case, pressure control, ceramic shard, emitter case, transducer carcass, and connections to the ultrasound system.]

ICAMM is reduced, the performance gets worse. However, for a PSS ratio of 0.8, ICAMM NP-ICA is still the best one, with a classification accuracy of 79%. For a PSS ratio of 0.6, only LDA gives a slightly better result. This confirms the convenience of not assuming any parametric model of the underlying probability density, as is assumed in LDA and in the parametric ICAMM variants. Besides, other supervised nonparametric methods (RBF, LVQ, MLP, and kNN) cannot compete with NP-ICA, since it is a hybrid method with an implicit parametric model (ICA), which allows a training set of relatively small size.

To gain more insight into the classifier performance, we include Table 3, which contains the confusion matrix obtained by NP-ICA for a PSS ratio of 1. The Roman and Iberian categories are not very difficult to classify, but they are often confused with each other. The pieces from the Middle Ages are confused with Bronze Age pieces 14% of the time, and Roman pieces cause misclassification of some pieces from the Bronze Age and the Middle Ages.

5. Discussion
In order to draw a physical interpretation of the results obtained by ultrasound, a diversity of morphological and physiochemical characterization analyses were carried out using conventional instrumental techniques. A stratified random sampling analysis was made using data from the physical analysis of the pieces: open porosity and apparent
Table 3: Confusion matrix (percentages) by NP-ICA with a PSS ratio of 1.

              Bronze Age   Iberian   Roman   Middle Ages
Bronze Age    0.79         0         0.05    0.02
Iberian       0            0.89      0.19    0
Roman         0.07         0.09      0.69    0.05
Middle Ages   0.14         0.02      0.07    0.93

density [25, 26]. Thus, a sample of the ceramic pieces for the different periods was obtained. The raw material composition of the selected pieces was analyzed using an optical microscope and a scanning electron microscope (SEM) [27, 28], and the processing methods of the pieces were also studied. From those analyses, the differences in the ceramic physical properties for the different periods and in the ultrasound propagation are discussed.

5.1. Open Porosity and Apparent Density. A sample of the pieces was selected for morphological and physiochemical characterization based on open porosity and apparent density analyses of the pieces. For stratified random sampling, the values of these physical properties for the different periods were considered as random variables that follow a Gaussian distribution. First, an estimation of the variable variance for the different periods (statistical strata) was made. This estimation was obtained from 45 representative
Figure 4: Some examples of time signals, spectra, histograms, and corresponding features extracted from ultrasonic signals for archaeological ceramic pieces from different periods (Bronze Age, Iberian, Roman, and Middle Ages). Units are: time axis: sample number; frequency axis: MHz; statistics axis: bins of signal values; P (dB); v (m/s); fc, fmax, BW, fc(t0) (Hz).

Table 4: Porosity and density statistics of the prior study.


Open porosity (%) Apparent density (gr/cm3 )
Standard
Standard
Period i
Mean

Mean

deviation
i
deviation
i
(1) Bronze Age 28.20
1.80
3.7794
0.0676
(2) Iberian
22.70
1.85
3.3320
0.0663
(3) Roman
31.06
1.79
8.3532
0.1607
(4) Middle Ages 22.69
1.84
5.3441
0.0949

pieces that were physically tested for open porosity and


apparent density. The results of this prior study are shown
in Table 4.
The objective of the sampling was to provide estimators with small variances at the lowest cost possible (considering that morphological and physicochemical characterization are costly). To estimate the fraction of the total sample size n corresponding to stratum i, we applied the so-called Neyman allocation [29],

n_i/n = N_i \sigma_i / \sum_{i=1}^{L} N_i \sigma_i,

where L is the number of strata (4 periods for this application), N_i is the number of pieces in stratum i, and \sigma_i is the standard deviation for stratum i (the estimates of Table 4 were applied; the 45 pieces of the prior study comprised 3, 15, 15, and 12 pieces from the Bronze Age, Iberian, Roman, and Middle Ages periods, respectively). The results for the strata i = 1, ..., 4 were n_1/n = 6.85%, n_2/n = 19.90%, n_3/n = 44.42%, and n_4/n = 28.83% for open porosity; and n_1/n = 6.5%, n_2/n = 21.01%, n_3/n = 45.34%, and n_4/n = 27.16% for apparent density, respectively.
We specified that the estimate of the sample mean should lie within B units of the population mean, with probability equal to .95. This is equivalent to imposing that the mean estimate should lie in the interval ±2σ, that is, B = 2σ. From the analysis of the variable means of Table 4, we chose B = 1.1% and B = 0.02 gr/cm3 as the bounds on the error of estimation of the population mean for open porosity and apparent density, respectively. These bounds allowed the stratum means of the sampling to be separated adequately.

Figure 5: Bits taken from the ceramic fragments included in the test probes prepared for the scanning electron microscope.
The total number of samples was estimated using [29],

n = \left( \sum_{i=1}^{L} N_i \sigma_i \right)^2 / \left( N^2 D + \sum_{i=1}^{L} N_i \sigma_i^2 \right), \quad D = B^2/4.

Thus, we obtained the total number of samples n = 79 and n = 83 for open porosity and apparent density, respectively.
These were the numbers of pieces to which the morphological and physicochemical characterization analyses were applied. Using the estimated fractions n_i/n for the strata and the total number of samples n, we obtained the sampling population for each stratum. The final results of the stratified random sampling for an error margin of .05 are in Table 5. The estimates of the population mean for open porosity and apparent density for each stratum are shown with an approximate two-standard-deviation bound on the error of estimation.
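To make the computation above concrete, the following minimal Python sketch (our own illustration, not the authors' code; it assumes the stratum sizes N_i of Table 5 and the standard deviations of Table 4) reproduces the allocation fractions and the totals n = 79 and n = 83:

import numpy as np

# Stratum populations (Table 5) and standard deviation estimates (Table 4),
# ordered as Bronze Age, Iberian, Roman, Middle Ages.
N = np.array([47, 155, 138, 140])
sigma_porosity = np.array([3.7794, 3.3320, 8.3532, 5.3441])  # open porosity (%)
sigma_density = np.array([0.0676, 0.0663, 0.1607, 0.0949])   # density (gr/cm3)

def neyman(N, sigma, B):
    """Neyman allocation fractions n_i/n and total sample size n for bound B."""
    D = B ** 2 / 4.0
    fractions = N * sigma / np.sum(N * sigma)
    n = np.sum(N * sigma) ** 2 / (N.sum() ** 2 * D + np.sum(N * sigma ** 2))
    return fractions, n

frac_p, n_p = neyman(N, sigma_porosity, B=1.1)   # ~[0.0685 0.1990 0.4442 0.2883], n ~ 79
frac_d, n_d = neyman(N, sigma_density, B=0.02)   # ~[0.0650 0.2101 0.4534 0.2716], n ~ 83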
Table 5 shows that the samples of the different strata (chronological periods) can be clearly separated by open porosity, since the bounds of the distributions leave most of the densities disjoint. The separation of the samples by apparent density is more difficult because there is a degree of overlap between the densities of Roman and Bronze Age pieces, and a higher overlap between the densities of Iberian and Middle Ages pieces. However, the joint densities of these two collections of pieces are well separated from each other. In conclusion, the physical properties of the ceramics show that a separation of the pieces into the different chronological periods of this study is possible. The different porosities and densities of the pieces are determined by the material composition and the processing methods employed in the ceramic manufacturing. These issues are studied in the next section.
5.2. Ceramic Composition and Processing. The selected pieces were observed, photographed, and then analyzed using an optical microscope and a scanning electron microscope (SEM). Some of the test probes prepared for SEM are shown in Figure 5.

The data provided by the optical microscope and SEM show that there are clear differences at a morphological level between the different groups of processed fragments. In particular, the ceramic pieces corresponding to the Bronze Age exhibited a dark brown tone and the presence of many dark-toned ferrous-composition spots that are associated with magnetite, as well as reddish ferrous iron oxide nuclei. The Iberian ceramic pieces had varying shades between
orange and black. The quartz temper consisted of large or very large grains, and abundant ferrous iron oxide nuclei as well as more isolated dark magnetite spots were found. This was an iron-rich ceramic (up to 7.45% of Fe2O3) with a high content of calcium (up to 6.30% of CaO). The fragments of Roman ceramic had variable characteristics depending on the typology (sigillata, common, and amphora). In all of these, the pieces were made of an orange-toned paste with small-size porosity and a small quantity of temper that increased from the amphora to the sigillata typology. Roman ceramic showed Fe2O3 contents of 5.71%, 6.36%, and 9.24%, and CaO contents of 0.67%, 2.92%, and 1.29% for sigillata, common, and amphora, respectively. Finally, the ceramic from the Middle Ages had a bright orange to brown colour that indicates it is made of ferrous paste. This ceramic contains abundant small to very small nuclei of red ferrous iron oxide as well as dark-toned magnetite spots and quartz temper of large or very large grains. Also, limestone aggregates of white tone associated with a high content of CaO (around 8%) were observed.
With regard to the methods used to manufacture the ceramics, they differed according to the evolution in time of the processing techniques. The set of ceramic fragments was from three regions (Requena, Enguera, and Liria) of the Valencia Community in the east of Spain. The pieces of the Bronze Age were from Requena (XXX-XX centuries B.C.). They were handmade using basic tools, with a very coarse, rudimentary appearance and an irregular texture, and were intended for household use. Manufacturing was local and particular to each town; it was related to women's domestic activities. From the dark tone of the Bronze Age ceramics, it can be inferred that they were made in a reducing atmosphere, that is, in a closed oven at low temperatures. Iberian fragments corresponded to vessels that were either brush-decorated with geometric, zoomorphic, and human motifs or nondecorated. These pieces have been dated at about the V-III centuries B.C., and they were from three different deposits. The paste of the Iberian ceramics was much finer and more elaborate than the Bronze Age ceramic paste. The technological innovation in the processing of the pieces was the use of the lathe.
The Roman fragments of the three groups (sigillata, common, and amphora) showed technical perfection of manufacture using different techniques: lathe, handmade, and mold. They were from the I-III centuries. In this period, the application of molds by potters allowed mass production of ceramics. Sigillata ceramic features a bright red varnish that is obtained by applying a clay solution to the ceramic surface and firing at high temperatures in an open oven (oxidizing atmosphere). Sigillata pieces were decorated with reliefs of different motifs and were luxury ceramics. The common and amphora types of Roman ceramic were made using the lathe. They had a rough appearance, without decoration, and were for household and/or storage or transport use. The Middle Ages pieces were of two subperiods: Islamic and Christian (around the VIII-X centuries). The Islamic pieces were from caliphate vessels of simply elaborated paste, without decoration or special treatment. The Christian pieces were of white coarse paste and diverse typologies, some without decoration and
some with incisions or decorations painted in black with manganese oxide.

Table 5: Statistics of the stratified random sampling for open porosity and apparent density. For each stratum, the estimated mean \bar{y}_i is shown together with the bounds \bar{y}_i \pm 2 \sqrt{((N_i - n_i)/N_i)(\sigma_i^2/n_i)}.

Open porosity        N_i   n_i   Mean    Lower   Upper
(1) Bronze Age       47    5     29.30   27.70   30.90
(2) Iberian          155   16    22.50   21.71   23.29
(3) Roman            138   35    32.00   30.78   33.22
(4) Middle Ages      140   23    23.80   22.78   24.82

Apparent density     N_i   n_i   Mean    Lower   Upper
(1) Bronze Age       47    5     1.85    1.82    1.88
(2) Iberian          155   17    1.77    1.75    1.79
(3) Roman            138   38    1.87    1.85    1.89
(4) Middle Ages      140   23    1.78    1.76    1.80
5.3. Ceramic Physical Properties and Ultrasound Propagation. The differences in physical properties, composition, and processing of the ceramic pieces, presented above, suggest the possibility of devising nondestructive techniques for archaeological ceramic classification. In Section 5.1, it was shown that the pieces could be separated by chronological periods using measures of their open porosity and apparent density. Besides, it is well known that the porosity and density of a material have a definite influence on the propagation of ultrasound [30, 31]. Thus, it is clear that there should be a correlation between the results obtained by the proposed ultrasound-based method (Section 4) and the differences in physical properties of the pieces for the different chronological periods.
There are several factors that can determine the porosity and density of ceramics, such as the raw material composition and the processing method employed to manufacture the pieces. However, in the case of archaeological ceramics, the original physical properties after manufacturing can be altered by other factors, such as the use of the ceramic (e.g., overheating during cooking) and, in general, the passage of time (e.g., fractures, loss of cover layers). Thus, an exhaustive analysis of the physical properties and of how these properties were derived for archaeological ceramics becomes a very complex problem that would need an amount of information that is outside the scope of this work. Note that the objective of this work is to provide a new NDT procedure to classify archaeological ceramics on the basis of training with a set of pieces of known class, with the intervention of an expert. Correct training will determine the ability of the procedure to classify ceramics of unknown class.
The analysis of the results obtained by ultrasound provided here assumes correct (or at least probabilistic) labelling by the expert and is based on the available data on the composition, processing, and physical features of the ceramics shown in Sections 5.1 and 5.2. Let us explain the misclassifications in the confusion matrix of the ultrasound-based classification of Table 3. Misclassification results from similar responses to ultrasound of pieces from different periods. Table 3 shows that Roman ceramics is the most misclassified group. Confusion between Roman and
Iberian pieces (19% and 9%) can be explained from ceramic composition and processing. The amphora and common Roman pieces were made from iron-rich paste and using the lathe, as were the Iberian pieces. Thus, the mechanical and physical properties of these two groups were similar.

The confusion between Roman and Bronze Age pieces (5% and 7%) can be explained by changes in the structure of some of the Roman pieces of the sigillata subgroup that had lost the cover varnish. The high value of porosity shown by the fragments of sigillata is associated with very small and highly connected pores, which allow considerable water absorption once the varnish is removed. Thus, these two groups of pieces show similar physical properties due to accidental changes over time. Regarding the confusion between Bronze Age and Middle Ages pieces (14% and 2%), this also can be explained from composition and processing. The Islamic subgroup of Middle Ages pieces was from the paleoandalusí period (early centuries of the Islamic period in Spain). During this period, the productive strategy for household ware intentionally simplified the production process. Simple methods of ceramic manufacture and firing were employed to obtain kitchen vessels with thermal shock resistance. Thus, ceramics were manually made from little-decanted clays and fired at low temperatures. The result was coarse pieces from the Middle Ages with physical properties comparable to those of the Bronze Age pieces [32].
Let us analyze the ultrasound-based results from the point of view of porosity and density. We observed that the porosity and density of the Bronze Age pieces are relatively close to the porosity and density of the pieces from the Roman period and the Middle Ages. This explains why 7% and 14% of the Bronze Age pieces were assigned to the Roman and Middle Ages periods in Table 3. Similarly, the pieces from the Iberian period and the Middle Ages have similar porosities and densities, so this may justify why 2% of the Iberian pieces were assigned to the Middle Ages.
The 9% of pieces of the Iberian period that should have been assigned to the Roman period were incorrectly assigned because the Iberian ceramic is very close to one of the three kinds of Roman ceramics (sigillata, common, and amphora), namely, the common kind. This also explains why the corresponding 19% of pieces of the Roman period were incorrectly assigned to the Iberian period. No clear explanation exists for the lack of symmetry in the confusion matrix of Table 3; however, it must be taken into account that the training process introduced some degree of arbitrariness because of the probabilistic labelling of the expert. Thus, it seems that the expert was able to clearly identify the pieces from the Iberian period and the Middle Ages, but had more difficulties with the Bronze Age and Roman ones. This uncertainty may have been transmitted to the classifier during the training stage.
The experiments with classical methods of ceramic characterization used in archaeology not only show that there are correlations between the parameters extracted from the ultrasound signals and the physical properties of the materials; they also demonstrate some advantages of the proposed ultrasound method. The equipment required for nondestructive evaluation by ultrasound is, in general, less costly, and the experiments are easier to perform. The pieces are not damaged in any way during testing, nor is it necessary to alter or destroy any of the material that is analyzed. Very significant differences in the time required to analyze the pieces were demonstrated: the ultrasound analysis (measuring, processing, and automatic classification) for 480 pieces took only 6 hours; the SEM analysis (probe preparation and electron microscope analysis) for 80 pieces took 274 hours; and the porosity and density analyses (immersion and weighing of the pieces) for 80 pieces took 288 hours.
There are limitations to the application of this procedure due to the fact that the training of the classifier must be performed on a specific set of data. Thus, the classifier must be adapted to a specific data model, and its efficiency is restricted by the fact that the new data to be classified must follow the same data model. Nevertheless, the training of the classifier could be progressively improved by increasing the number of pieces for each known chronological period. With proper training, the classifier would be able to provide a prediction of the chronological period for pieces that do not have clear chronological markers. In addition, the semisupervised training mode could be used to model the uncertainty that expert archaeologists may have about the chronological period to which the pieces belong.

6. Conclusions

We have presented the results of applying ICAMM to a challenging application in the area of nondestructive testing of materials: the classification of archaeological ceramic pieces into different historic periods. We have demonstrated the interest of using methods that are able to consider non-Gaussian models of the underlying probability densities in the feature vector space. Thus, an ICAMM classifier was tested using different variants depending on the embedded ICA algorithm. ICAMM has the additional merit of allowing PSS labelling, which is of practical interest in the considered application. Note that in any ICAMM variant, the mutual dependence among features is modelled in a parametric form; also note that in nonparametric ICAMM, the estimated marginals are nonparametric. This confirms that nonparametric ICAMM shares the good general modelling capability of nonparametric classifiers and can also work with a training set of relatively small size, which is a relevant property of parametric techniques. This explains the fact that nonparametric ICAMM has shown the best results and is able to produce acceptable performance even for low PSS ratios.

The experiments show promising results towards defining a standardised method that could complement or replace destructive, costly, and time-consuming techniques which are currently being used by archaeologists in the area of ceramic characterization. Extensions of the procedures presented in this work to other emergent material applications are planned for future work.

Acknowledgments
This paper has been supported by the Spanish Administration and the FEDER Programme of the European Community under Grant TEC 2008-02975, the Generalitat Valenciana under Grant GV-ACOMP/2009/340, and Grant PROMETEO/2010/040.

References
[1] R. E. Taylor and M. J. Aitken, Chronometric Dating in Archaeology, vol. 2 of Advances in Archaeological and Museum Science Series, Springer, New York, NY, USA, 1997.
[2] R. Cribbs and F. Saleh, "An ultrasonic based system used for non-destructive imaging of archaeological sites," in Proceedings of Informatica ed Egittologia all'Inizio degli Anni '90, pp. 97–108, Rome, Italy, 1996.
[3] A. Murray, M. F. Mecklenburg, C. M. Fortenko, and R. E. Green, "Detection of delaminations in art objects using air-coupled ultrasound," in Proceedings of the Materials Issues in Art and Archaeology III Symposium, pp. 371–378, San Francisco, Calif, USA, 1992.
[4] W. I. Sellers and A. T. Chamberlain, "Ultrasonic cave mapping," Journal of Archaeological Science, vol. 25, no. 9, pp. 867–873, 1998.
[5] A. Salazar, R. Miralles, A. Parra, L. Vergara, and J. Gosálbez, "Ultrasonic signal processing for archaeological ceramic restoration," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '06), pp. III-1160–III-1163, Toulouse, France, May 2006.
[6] L. Cohen and P. Loughlin, Recent Developments in Time-Frequency Analysis, Springer, New York, NY, USA, 1998.
[7] R. Miralles, L. Vergara, and J. Gosálbez, "Material grain noise analysis by using higher-order statistics," Signal Processing, vol. 84, no. 1, pp. 197–205, 2004.
[8] R. Miralles, L. Vergara, A. Salazar, and J. Igual, "Blind detection of nonlinearities in multiple-echo ultrasonic signals," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 55, no. 3, pp. 637–647, 2008.
[9] A. K. Jain, R. P. W. Duin, and J. Mao, "Statistical pattern recognition: a review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 4–37, 2000.
[10] O. Chapelle, B. Schölkopf, and A. Zien, Semi-Supervised Learning, MIT Press, Cambridge, Mass, USA, 2006.
[11] T.-W. Lee, M. S. Lewicki, and T. J. Sejnowski, "ICA mixture models for unsupervised classification of non-Gaussian classes and automatic context switching in blind signal separation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 10, pp. 1078–1089, 2000.
[12] R. A. Choudrey and S. J. Roberts, "Variational mixture of Bayesian independent component analyzers," Neural Computation, vol. 15, no. 1, pp. 213–252, 2003.
[13] C.-T. Lin, W.-C. Cheng, and S.-F. Liang, "An on-line ICA-mixture-model-based self-constructing fuzzy neural network," IEEE Transactions on Circuits and Systems I, vol. 52, no. 1, pp. 207–221, 2005.
[14] C. A. Shah, P. K. Varshney, and M. K. Arora, "ICA mixture model algorithm for unsupervised classification of remote sensing imagery," International Journal of Remote Sensing, vol. 28, no. 8, pp. 1711–1731, 2007.
[15] A. Salazar, L. Vergara, A. Serrano, and J. Igual, "A general procedure for learning mixtures of independent component analyzers," Pattern Recognition, vol. 43, no. 1, pp. 69–85, 2010.
[16] A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis, John Wiley & Sons, New York, NY, USA, 2001.
[17] P. Comon, "Independent component analysis, a new concept?" Signal Processing, vol. 36, no. 3, pp. 287–314, 1994.
[18] A. Salazar, L. Vergara, and R. Llinares, "Learning material defect patterns by separating mixtures of independent component analyzers from NDT sonic signals," Mechanical Systems and Signal Processing, vol. 24, no. 6, pp. 1870–1886, 2010.
[19] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, Wiley-Interscience, New York, NY, USA, 2nd edition, 2000.
[20] R. Boscolo, H. Pan, and V. P. Roychowdhury, "Independent component analysis based on nonparametric density estimation," IEEE Transactions on Neural Networks, vol. 15, no. 1, pp. 55–65, 2004.
[21] J. F. Cardoso and A. Souloumiac, "Blind beamforming for non-Gaussian signals," IEE Proceedings, Part F, vol. 140, no. 6, pp. 362–370, 1993.
[22] A. Ziehe and K. R. Müller, "TDSEP - an efficient algorithm for blind separation using time structure," in Proceedings of the 8th International Conference on Artificial Neural Networks (ICANN '98), pp. 675–680, Skövde, Sweden, September 1998.
[23] A. Hyvärinen, "Fast and robust fixed-point algorithms for independent component analysis," IEEE Transactions on Neural Networks, vol. 10, no. 3, pp. 626–634, 1999.
[24] C. M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, Oxford, UK, 2004.
[25] P. M. Rice, Pottery Analysis: A Sourcebook, The University of Chicago Press, Chicago, Ill, USA, 1989.
[26] K. G. Harry and A. Johnson, "A non-destructive technique for measuring ceramic porosity using liquid nitrogen," Journal of Archaeological Science, vol. 31, no. 11, pp. 1567–1575, 2004.
[27] A. M. Pollard and C. Heron, Archaeological Chemistry, The Royal Society of Chemistry, Cambridge, UK, 2008.
[28] S. L. Olsen, Scanning Electron Microscopy in Archaeology, British Archaeological Reports, Institute of Physics, Oxford, UK, 1998.
[29] S. K. Thompson, Sampling, Wiley-Interscience, New York, NY, USA, 2nd edition, 2002.
[30] J. D. Cheeke, Fundamentals and Applications of Ultrasonic Waves, CRC Press, Boca Raton, Fla, USA, 2002.
[31] J. Krautkrämer, Ultrasonic Testing of Materials, Springer, Berlin, Germany, 4th edition, 1990.
[32] M. Alba-Calzado and S. Gutiérrez-Lloret, "Las producciones de transición al Mundo Islámico: el problema de la cerámica paleoandalusí (siglos VIII y IX)," in Cerámicas Hispanorromanas: Un Estado de la Cuestión, B. Casasola and A. Ribera i Lacomba, Eds., pp. 585–613, Universidad de Cádiz, Cádiz, Spain, 2008.


Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 817473, 7 pages
doi:10.1155/2010/817473

Research Article
On the Evaluation of Texture and Color Features for
Nondestructive Corrosion Detection
Fátima N. S. Medeiros,1 Geraldo L. B. Ramalho,2 Mariana P. Bento,1 and Luiz C. L. Medeiros3
1 Teleinformatics Engineering Department, Federal University of Ceará, Campus of Pici, 6007 Fortaleza, CE, Brazil
2 Federal Institute of Education, Science, and Technology of Ceará, Campus of Maracanaú, Av. Contorno Norte 10, 61925-315 Maracanaú, CE, Brazil
3 Petróleo Brasileiro S.A., LUBNOR/IE, Av. Leite Barbosa s/n, 60180-420 Fortaleza, CE, Brazil

Correspondence should be addressed to Fátima N. S. Medeiros, fsombra@deti.ufc.br


Received 2 January 2010; Accepted 15 June 2010
Academic Editor: João Marcos A. Rebello
Copyright 2010 Fátima N. S. Medeiros et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
We present a methodology for automatic corrosion detection in digital images of carbon steel storage tanks and pipelines from a petroleum refinery. The database consists of optical digital images taken from equipment exposed to the marine atmosphere during its operational life. This new approach focuses on color and texture descriptors to accomplish the discrimination of corroded and noncorroded surface areas. The performance of the proposed corrosion descriptors is evaluated by using Fisher linear discriminant analysis (FLDA). This approach presents two main advantages: no refinery stoppages are required, and potential corrosion-related catastrophes can be prevented.

1. Introduction

Corrosion is the destructive attack of a metal by chemical or electrochemical reaction with its environment [1]. The exposure of metallic surface structures to rust degradation during their operational life is a known problem, and it affects storage tanks, steel bridges, pipelines, and ships [2].

Storage tanks and pipelines are commonly made of carbon steel and low alloy steel that are not resistant to corrosion in natural environments. Corrosion-resistant paints and coatings are used in almost all applications. Despite this protection, some steels will rust quite rapidly in humid air even though condensation is not evident [3].
Atmospheric corrosion causes economic losses, usually due to production interruptions, replacement of expensive materials, and contamination of products. A good deal of attention should be paid to the safety risks and environmental pollution due to corrosion [4]. Therefore, corrosion monitoring is a relevant task to detect corroded regions before failures occur by using inspection methods, so that appropriate decisions can be taken to avoid any untoward incidents.
There are different corrosion analysis methods, such as mechanical measurements, chemical analysis, and visual inspection by experts. Inspection usually refers to the evaluation of attributes in relation to a specification. For economic and security reasons, the petroleum and gas industry requires nonintrusive or nondestructive measurement techniques that avoid disturbing the properties and performance of the equipment [5].

Visual inspection of metallic surfaces is a common practice employed to identify and detect sources of failures. A tough problem associated with this task is its tedious and subjective nature. Significant refinement can be achieved if specialists have access to automatic computerized inspection methods.
In many materials, the corrosion process produces a typically rough surface. Therefore, texture analysis is highly recommended to discriminate specific surface roughness. Texture is defined as the repetition of a pattern over a region as a set of small variations, generally described by a spatial function [6]. Another issue is the random aspect of texture, because the size, shape, and orientation of pattern elements can vary over a region.
The potentiality of image processing techniques for automatic rust steel detection is investigated in [2]. The methodology introduced an iterative multivariate data analysis to examine the effects of rust steel descriptors, that is, texture and color distribution, on a set of classifier algorithms. In this analysis, a selector of classifiers indicates the algorithm that provides good classification results (high sensitivity) and an acceptable time response for the automation of the system [2].

In 1981, Itzhak et al. [7] employed computer image processing techniques for the statistical evaluation of pitting corrosion in a plate of AISI 304L stainless steel exposed to a corrosive water solution containing 10% FeCl3. The purpose of this work was to introduce and to evaluate new tools for analyzing the effects of the pitting corrosion process [7]. The algorithm was capable of estimating the number and area of pits in the binary image and therefore provided a better evaluation of pitting corrosion damage.
A popular image processing algorithm for texture analysis extracts features from the gray level co-occurrence matrix (GLCM) [8]. In this paper, we explore the power of these features to deal with the stochastic pattern of corrosion for damage detection on metallic surfaces. Parameters extracted from the GLCM can be used to define similarity properties for corrosion detection in image segmentation methods based on a region approach. This approach consists in determining the regions of the image that contain neighboring pixels with similar properties, that is, gray level and spatial relationship [9]. Two GLCM parameters, namely contrast and energy, are considered to be the most efficient for discriminating different textural patterns [10].

A wide variety of works in the literature [6, 8, 10, 11] have reported that texture features are appropriate for characterizing corroded surfaces. In addition, typical color changes of metallic surfaces are often related to corrosion. Thus, color attributes carry relevant information for the design of corrosion detection systems. Moreover, some works have reported that feature combination carries more discriminant power in applications designed on small database image samples [12]. Methods based on neural networks and feature selection are able to handle high data dimensionality while maintaining a good generalization level [13, 14].
This paper proposes a robust feature set for the reliable detection of atmospheric corrosion on metallic surfaces using optical images acquired by charge-coupled device (CCD) cameras. A total of 13 attributes per image sample were computed using color and texture models: HSI (hue, saturation, and intensity) color histogram statistics and GLCM probabilities. The GLCM probabilities measure the roughness, and the HSI statistics characterize the color of metallic surface samples. A sequential bottom-up feature selection procedure [15] was applied as a result of the small sample size and the high data dimensionality. We use Fisher linear discriminant analysis (FLDA) and the receiver operating characteristic (ROC) curve to investigate the performance of
the proposed approach when combining texture and color feature subsets.

Figure 1: Image database. The three columns on the left illustrate metallic surfaces attacked by atmospheric corrosion. The three columns on the right illustrate a variety of smooth and rough noncorroded surfaces.
The outline of the paper is as follows. In the next section, we describe the image database and give a brief overview of the texture and color attributes used for corrosion characterization. Section 3 introduces the proposed methodology and aspects of feature set evaluation. The experimental results and performance analysis are covered in Section 4. In Section 5, we draw the concluding remarks.

2. Materials and Methods

2.1. Image Database. A set of 33 high-resolution images was collected from storage tanks of a petroleum plant. The images were obtained under different acquisition conditions of illumination and magnification. Some images show a large number of corrosion defects, while others give a detailed view of a single defect. An expert selected 84 regions of interest (ROI), each one resulting in a small image of 128 × 128 pixels containing true corrosion, corrosion-like, or noncorrosion samples. A subset of 43 ROI images represents different corrosion damages. The remaining 41 ROI images contain noncorroded surfaces or corrosion-like surfaces. Figure 1 illustrates the image database.
2.2. Texture Attributes. Texture is formally defined as the set of local neighborhood properties of the gray levels of an image region [16]. It captures intuitive attributes such as roughness, granulation, and regularity. There are four different families of methods for texture analysis in the literature: statistical, structural, model-based, and transform-based methods.

The gray level intensity distribution of an image is based on the assumption that texture information is contained in the spatial relationship between the intensities of a pixel and its neighbor [8]. This information is condensed in the

GLCM. The gray level intensity distribution can be specified by a matrix of the relative frequencies with which two neighboring texture elements labeled i and j, separated by a distance d in an orientation θ, occur in the image, one with property i and the other with property j.
GLCM encompasses at least 14 texture attributes [8], although, for simplicity's sake, we adopt an optimized subset of 4 attributes, that is, contrast, correlation, energy, and homogeneity [10], given by

\mathrm{Contrast} = \sum_{k=0}^{N_g - 1} k^2 \sum_{|i-j|=k} S(i, j), \quad (1)

\mathrm{Correlation} = \frac{\sum_{i,j} i j S(i, j) - \mu_i \mu_j}{\sigma_i \sigma_j}, \quad (2)

\mathrm{Energy} = \sum_{i,j} S(i, j)^2, \quad (3)

\mathrm{Homogeneity} = \sum_{i,j} \frac{S(i, j)}{1 + (i - j)^2}. \quad (4)

The matrix S represents the GLCM, and the sum index k in (1) runs from zero to the GLCM size N_g minus one. The parameters \mu_i, \mu_j, \sigma_i, and \sigma_j in (2) represent, respectively, the mean value and standard deviation of line i and column j of the GLCM.
Contrast measures the intensity dissimilarity between a pixel and its neighbor over the whole image. Correlation represents how a pixel is related to its neighbor over the whole image. Energy is the sum of the squared elements of the GLCM, also known as uniformity of energy. Homogeneity stands for the similarity between the gray level values of image pixels.

Homogeneity and contrast identify organized structures in the image. Energy and correlation characterize the complexity and nature of the gray level transitions in the image. Even though these attributes contain information about image texture, it is difficult to identify which specific texture characteristic is represented by each attribute. Hence, the texture attributes are stored in a feature database for further characterization by a classification process.
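As an illustration of this step, the attributes of (1)-(4) can be computed with scikit-image, whose graycoprops definitions match the formulas above; this is a minimal sketch under that assumption (function and variable names are ours), not the authors' original implementation:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_attributes(roi):
    """Contrast, correlation, energy, and homogeneity of (1)-(4) for one ROI.

    roi: 2-D uint8 array (e.g., a 128 x 128 grayscale sample).
    Distance 1 and orientation 90 degrees follow the setting of Section 3.1.
    """
    glcm = graycomatrix(roi, distances=[1], angles=[np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    return [graycoprops(glcm, p)[0, 0] for p in props]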
2.3. Color Attributes. Color is the visual perception of the spectral distribution of light. Optical imaging uses three color channels, usually associated with red (R), green (G), and blue (B), sufficient for the visual interpretation of spectra [16]. In applications of corrosion detection by digital image processing and pattern recognition algorithms, it is relevant to identify the best color model to represent the color attributes.

The HSI system constitutes the model that best describes how humans naturally respond to color. Thus, the HSI color space is appropriate for this purpose, since it allows chrominance characteristics to be described separately from brightness [11].

The hue, saturation, and intensity are obtained from the RGB color space by using the following transformations:

H = \cos^{-1}\left( \frac{[(R - G) + (R - B)]/2}{\sqrt{(R - G)^2 + (R - B)(G - B)}} \right), \quad (5)

S = 1 - \frac{3}{R + G + B} \min(R, G, B), \quad (6)

I = \frac{R + G + B}{3}. \quad (7)

Hue (H) is proportional to the color frequency, as (5) describes. For a corroded surface, H lies between the yellow and red wavelengths.

Saturation (S) refers to the dominance of hue in the color and is given by (6). A corroded surface is normally more saturated than other areas, because metallic surfaces are often painted in light colors such as gray and white.

Intensity (I) is given by (7) and describes the strength of the light. As explained before, the color of a noncorroded surface tends toward the white wavelengths (high intensity).
Color attributes are obtained by using statistical moments extracted from each HSI channel histogram. We adopted the histogram definition as a frequency x_n for each pixel value, where n = 1, ..., N refers to the imaging quantization.

Each statistical moment provides a different meaning. The first moment (8) indicates where the individual color generally lies in the HSI color space. The second moment (9) incorporates information on the spread or scale of the color distribution. Noncorroded surfaces are often homogeneous, and they imply low variance. The third moment (10) measures the asymmetry of the data around the sample mean and indicates whether the HSI values lie toward the maximum or the minimum of the scale. The fourth moment (11) measures the flatness or peakedness of the color distribution, as follows:

E(X) = \frac{1}{N} \sum_{n=1}^{N} x_n, \quad (8)

E\left[ (X - E(X))^2 \right] = \frac{1}{N} \sum_{n=1}^{N} (x_n - E(X))^2, \quad (9)

E\left[ (X - E(X))^3 \right] = \frac{1}{N} \sum_{n=1}^{N} (x_n - E(X))^3, \quad (10)

E\left[ (X - E(X))^4 \right] = \frac{1}{N} \sum_{n=1}^{N} (x_n - E(X))^4. \quad (11)
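A minimal sketch of this color feature extraction, assuming RGB values normalized to [0, 1] (the helper names are ours, not from the original paper):

import numpy as np

def rgb_to_hsi(rgb):
    """H, S, I channels of an RGB image via the transformations (5)-(7)."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-12                                    # guards against division by zero
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    H = np.arccos(np.clip(num / den, -1.0, 1.0))   # (5)
    S = 1.0 - 3.0 * np.minimum(np.minimum(R, G), B) / (R + G + B + eps)  # (6)
    I = (R + G + B) / 3.0                          # (7)
    return H, S, I

def histogram_moments(channel):
    """First four moments (8)-(11) of one flattened HSI channel."""
    x = channel.ravel()
    m = x.mean()
    return [m] + [np.mean((x - m) ** k) for k in (2, 3, 4)]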

3. Methodology for Corrosion Characterization


3.1. Corrosion Descriptors. Automatic corrosion damage
detection on metallic surfaces is a complex task which
requires a multistep procedure. Figure 2 summarizes the
feature extraction step used to design our proposed NDE
(nondestructive evaluation) system. In our approach, we
perform a discriminant analysis based on digital image
features encompassing texture and color.

Table 1: Confusion matrix for corrosion characterization based on texture features.

                           Real
Predicted          Corroded    Non corroded
Corroded           0.4167      0.1071
Non corroded       0.0953      0.3809

[Figure 2 appears here: block diagram of the feature extraction step (digital image, ROIs, GLCM texture attributes and HSI-statistics color attributes, feature combination, corrosion descriptors).]

Figure 2: Overview of the corrosion descriptors design.

We propose a corrosion descriptor database organized into three feature subsets: texture attributes, color attributes, and a combination of texture and color attributes. The GLCM has been computed by setting the distance between pixels to 1 and the orientation of neighboring pixels to 90°. The attributes defined in (8)-(11) have been calculated for the histograms of the hue, saturation, and intensity components of the HSI color space.
3.2. Discriminant Analysis. Figure 3 illustrates the discriminant analysis of the attributes to identify corroded and noncorroded surfaces. Principal component analysis (PCA) is a mathematical procedure that optimizes the feature set by eliminating redundant attributes. The result is a smaller number of uncorrelated attributes called principal components.

Fisher linear discriminant analysis (FLDA) [17] is applied to the three feature subsets in order to compare the robustness of the attributes for corrosion detection. The discriminant fits a multivariate normal density to each data group, with a pooled estimate of covariance S_W defined by

S_W = \frac{1}{n - 2} \left( n_1 \hat{\Sigma}_1 + n_2 \hat{\Sigma}_2 \right), \quad (12)

which maximizes the criterion

J = \frac{\left( w^T (m_1 - m_2) \right)^2}{w^T S_W w}, \quad (13)

where m_i is the mean and n_i the sample size of class \omega_i, i = 1, 2; \hat{\Sigma}_i is the maximum likelihood estimate of the covariance matrix of class \omega_i; and w represents the new space that maximizes the criterion. According to (13), the criterion proposed by Fisher is the ratio of the between-class to the within-class variance.

The decision function w^T x + w_0 = 0 obtained by FLDA is used to assess the discrimination performance of each feature subset. The separability of the corroded and noncorroded classes is investigated for different subset sizes.
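For illustration, a minimal sketch of the two-class Fisher discriminant of (12) and (13) (the names and the midpoint threshold are our own choices, not taken from the paper):

import numpy as np

def flda(X1, X2):
    """Fisher direction and threshold for two classes (rows are feature vectors)."""
    n1, n2 = len(X1), len(X2)
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Maximum likelihood covariance estimates, pooled as in (12).
    S1 = np.cov(X1, rowvar=False, bias=True)
    S2 = np.cov(X2, rowvar=False, bias=True)
    Sw = (n1 * S1 + n2 * S2) / (n1 + n2 - 2)
    w = np.linalg.solve(Sw, m1 - m2)        # maximizes the criterion (13)
    w0 = -0.5 * w @ (m1 + m2)               # decision function: w.T x + w0 = 0
    return w, w0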

The confusion matrix and the receiver operating characteristic (ROC) curve are used to evaluate class separability, robustness, and reliability. This matrix consists of two rows and two columns that report the numbers of true negatives, false positives, false negatives, and true positives estimated by the model in comparison with the known labels of the validation data. True values refer to correct model estimations, while false values correspond to incorrectly estimated results. The error rate is quantified by the sum of the false values, while the accuracy is the sum of the true values.

The confusion matrix provides the general discriminant performance for each feature subset. In order to determine the most appropriate subset, we used a receiver operating characteristic (ROC) curve analysis [17] to estimate the expected performance of a discriminant function under a varying criterion. Sensitivity, specificity, and the area under the curve (AUC) are properties used to assess the performance for different numbers of attributes. The area under the ROC curve (AUC) gauges the ranking of correct class separation. When dealing with a reduced database sample, the area under the ROC convex hull (ROCCH) provides the finest grade of separability for the corrosion classes [17].
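These quantities are straightforward to compute from the FLDA scores; the following is a minimal sketch using scikit-learn (our illustration only; the original prototype was written in a different scripting environment):

import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve, auc

def evaluate(scores, labels):
    """Confusion matrix and AUC for FLDA scores w.T x + w0 (labels: 1 = corroded)."""
    cm = confusion_matrix(labels, scores > 0)
    fpr, tpr, _ = roc_curve(labels, scores)
    return cm, auc(fpr, tpr)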

4. Experimental Results

We performed both qualitative and quantitative discriminant analyses in order to evaluate the effectiveness of the separability of the different feature subsets.

A set of 13 attributes per image sample was computed from color and texture models using HSI statistics and gray level co-occurrence matrix (GLCM) probabilities.

The discriminant performance of the feature subsets was compared in terms of classification error and execution time on a 13-dimensional (two feature subsets), 2-class (ω1 and ω2) image database. For the sake of simplicity, the two class-conditional densities were assumed to be Gaussian-like, with mean vectors m1 and m2 and a pooled covariance matrix S_W as defined in (12).

The experimental results are summarized in Tables 1–4 and Figures 4, 5, and 6. The correct class discrimination for texture features is computed as the sum of the highlighted values in the main diagonal of the confusion matrices. The values in Tables 1 and 2 reveal that about 79% hits were obtained for texture features, as much as for color features. The false negative and false positive rates are of the same order (20%) for both subsets.
Combining feature sets leads to more discriminant power without loss of generality. We achieved over 90% hits and an appreciable reduction of false positives to
less than 4%. The combination of texture and color features reduced the number of false negatives and false positives by 10%. The values are reported in Table 3.

[Figure 3 appears here: block diagram of the proposed methodology (corrosion descriptors; feature subset: texture, color, or texture and color; PCA; FLDA discriminant analysis; corroded or noncorroded pattern).]

Figure 3: Proposed corrosion detection methodology.

[Figure 4 appears here: ROC convex hulls for feature sets of 3, 6, 9, and 13 attributes.]

Figure 4: ROC convex hulls for different sizes of feature set.

[Figure 5 appears here: ROC graph for the texture feature subset, with AUC = 0.8486 and AUC_CH = 0.8763.]

Figure 5: ROC analysis for corrosion characterization based on texture features. The black dot over the curve represents poor specificity and sensitivity. The solid line represents the ROC graph and the dashed line the convex hull.

Table 2: Confusion matrix for corrosion characterization based on color features.

                           Real
Predicted          Corroded    Non corroded
Corroded           0.3809      0.0833
Non corroded       0.1309      0.4049

Table 3: Confusion matrix for corrosion characterization based on texture and color features.

                           Real
Predicted          Corroded    Non corroded
Corroded           0.4118      0.0392
Non corroded       0.0393      0.5099
In Figure 4, we compare the ROC convex hulls obtained for different numbers of attributes. We noticed that the discriminant function aggregates more separability power as the feature set grows.

The ROC curves in Figures 5–7 show that satisfactory results were achieved when texture and color features are combined. The ROC curve presents graphically the statistical estimates of the false positive and true positive corrosion detection rates. We have observed from the tests that the false positive rate (FPR) is minimized, while the true positive rate (TPR) is maximized, when texture and color attributes are provided together, thus aggregating more discriminant information. This effect is noticed in the position of the black dot, which represents the specificity (equivalently, 1 − false positive rate) and the sensitivity (true positive rate) for a given feature subset. The AUC evidences whether one subset is more separable than another regardless of the relative costs of misclassification. Table 4 gives the values for AUC and AUC_CH; the feature set with the best separability is the combined texture and color subset.

The methodology was prototyped in a well-known numerical mathematics scripting environment. The execution time of the scripts is reported in terms of processor ticks spent on a 2.66 GHz Core 2 Duo processor. Ten randomly generated feature subsets, each with 84 image patterns, were tested, and the averages of the runs are reported. The

attributes are sequentially added one by one, starting with the color subset. Then the texture attributes are added until all 13 available attributes are included. Figure 8 shows that the error rate decays as the increasing dimensionality adds more discriminant information. Although the execution time tends to increase with the dimensionality of the feature set, the decrease of the error rate (from 30% to 10%) when using a larger number of attributes is noteworthy.

[Figure 6 appears here: ROC graph for the color feature subset, with AUC = 0.8826 and AUC_CH = 0.9039.]

Figure 6: ROC analysis for corrosion characterization based on color features. The black dot represents modest specificity and sensitivity. The dashed line indicates an ROC convex hull.

[Figure 7 appears here: ROC graph for the joint texture and color feature subset, with AUC = 0.9115 and AUC_CH = 0.9311.]

Figure 7: ROC analysis for corrosion characterization based on joint texture and color features. The black dot close to the upper-left corner indicates high specificity and sensitivity. The dashed line indicates an ROC convex hull.

[Figure 8 appears here: error rate and execution time (×100) versus feature subset size, from 1 to 13 attributes.]

Figure 8: Performance evaluation for different feature subset sizes.

Table 4: Area under curve comparison for each feature subset.

Feature subset        AUC      AUC_CH
Texture               0.8486   0.8763
Color                 0.8826   0.9039
Texture and Color     0.9115   0.9311

5. Conclusion

In this paper, we have investigated image texture and color descriptors with the objective of addressing nondestructive atmospheric corrosion detection on petroleum plant equipment exposed to the marine atmosphere. The idea of developing computer systems to assist specialists in corrosion inspection is to provide a tool to prevent risks to human life and the environment, in addition to minimizing economic losses.

Our approach integrates texture and color features to describe the roughness and typical color changes of metallic surfaces. Thus, to address corrosion description, GLCM probabilities and HSI color space statistics are extracted from optical images of metallic surfaces, regardless of lighting variations. Although the texture and color feature subsets are likely to characterize corrosion individually, we demonstrated that combining both of these subsets with a suboptimal sequential feature selection procedure outperformed each individual subset. Moreover, a linear discriminant analysis revealed that the combination minimized false positives and false negatives in corrosion detection.

Further work will investigate these corrosion descriptors for image segmentation in atmospheric corrosion detection systems.

Acknowledgment
The authors would like to thank ANP PRH31 for providing
financial assistance.


References
[1] W. D. Callister, Fundamentals of Materials Science and Engineering, John Wiley & Sons, New York, NY, USA, 5th edition, 2001.
[2] M. Trujillo and M. Sadki, "Sensitivity analysis for texture models applied to rust steel classification," in Electronic Imaging Science and Technology, vol. 5303 of Proceedings of SPIE, pp. 161–169, San Jose, Calif, USA, January 2004.
[3] M. Kutz, Handbook of Environmental Degradation of Materials, William Andrew, New York, NY, USA, 2007.
[4] E. Bardal, Corrosion and Protection, Springer, Berlin, Germany, 2003.
[5] P. R. Roberge, Corrosion Inspection and Monitoring, John Wiley & Sons, New York, NY, USA, 2007.
[6] A. Lucieer, A. Stein, and P. Fisher, "Multivariate texture-based segmentation of remotely sensed imagery for extraction of objects and their uncertainty," International Journal of Remote Sensing, vol. 26, no. 14, pp. 2917–2936, 2005.
[7] D. Itzhak, I. Dinstein, and T. Zilberberg, "Pitting corrosion evaluation by computer image processing," Corrosion Science, vol. 21, no. 1, pp. 17–22, 1981.
[8] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Transactions on Systems, Man and Cybernetics, vol. 3, no. 6, pp. 610–621, 1973.
[9] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Applied Signal Processing, vol. 2006, Article ID 96306, pp. 1–12, 2006.
[10] A. Baraldi and F. Parmiggiani, "An investigation of the textural characteristics associated with gray level cooccurrence matrix statistical parameters," IEEE Transactions on Geoscience and Remote Sensing, vol. 33, no. 2, pp. 293–304, 1995.
[11] K. Y. Choi and S. S. Kim, "Morphological analysis and classification of types of surface corrosion damage by digital image processing," Corrosion Science, vol. 47, no. 1, pp. 1–15, 2005.
[12] D. F. A. Lopes, G. L. B. Ramalho, F. N. S. de Medeiros, R. C. S. Costa, and R. T. S. Araújo, "Combining features to improve oil spill classification in SAR images," in Structural, Syntactic, and Statistical Pattern Recognition, vol. 4109 of Lecture Notes in Computer Science, pp. 928–936, Springer, Berlin, Germany, 2006.
[13] G. L. B. Ramalho and F. N. S. Medeiros, "Using boosting to improve oil spill detection in SAR images," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 1066–1069, August 2006.
[14] G. L. B. Ramalho and F. N. S. de Medeiros, "Improving reliability of oil spill detection systems using boosting for high-level feature selection," in Proceedings of the International Conference on Image Analysis and Recognition, vol. 4633 of Lecture Notes in Computer Science, pp. 1172–1181, August 2007.
[15] A. Jain and D. Zongker, "Feature selection: evaluation, application, and small sample performance," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 2, pp. 153–158, 1997.
[16] S. Livens, Image analysis for material characterisation, Ph.D. dissertation, Universitaire Instelling Antwerpen, Antwerp, Belgium, 1998.
[17] A. R. Webb, Statistical Pattern Recognition, John Wiley & Sons, London, UK, 2nd edition, 2002.

Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 262869, 12 pages
doi:10.1155/2010/262869

Review Article
Fluctuation Analyses for Pattern Classification in
Nondestructive Materials Inspection
A. P. Vieira,1 E. P. de Moura,2 and L. L. Gonçalves2

1 Instituto de Física, Universidade de São Paulo, 05508-090 São Paulo, SP, Brazil
2 Departamento de Engenharia Metalúrgica e de Materiais, Universidade Federal do Ceará, 60455-760 Fortaleza, CE, Brazil

Correspondence should be addressed to L. L. Gonçalves, lindberg@fisica.ufc.br

Received 30 December 2009; Accepted 25 June 2010
Academic Editor: João Marcos A. Rebello
Copyright 2010 A. P. Vieira et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We review recent work on the application of fluctuation analyses of time series for pattern classification in nondestructive materials
inspection. These analyses are based on the evaluation of time-series fluctuations across time intervals of increasing size, and were
originally introduced in the study of fractals. A number of examples indicate that this approach yields relevant features allowing the
successful classification of patterns such as (i) microstructure signatures in cast irons, as probed by backscattered ultrasonic signals;
(ii) welding defects in metals, as probed by TOFD ultrasonic signals; (iii) gear faults, based on vibration signals; (iv) weld-transfer
modes, as probed by voltage and current time series; (v) microstructural composition in stainless steel, as probed by magnetic
Barkhausen noise and magnetic flux signals.

1. Introduction
Many nondestructive materials-inspection tools provide information about material structure in the form of time series. This is true of ultrasonic probes, acoustic emission, and magnetic Barkhausen noise, among others. Ideally, signatures of material structure are contained in any of those time series, and extracting that information is crucial for building a reliable automated classification system which is as independent as possible from the operator's expertise.

As in any pattern classification task, finding a set of relevant features is a key step. Common in the literature are attempts to classify patterns from time series by directly feeding the time series into neural networks, by measuring statistical moments, or by employing Fourier or wavelet transforms. These last two approaches are hindered by the presence of noise and by the nonstationary character of many time series. Sometimes, however, relevant information is hidden in the noise itself, as this can reflect memory effects characteristic of the underlying physical processes. Analysis of the statistical properties of the time series can reveal such effects, although global calculations of statistical moments miss important local details. Here, we show that properly defined local fluctuation measures of time series can yield relevant features for pattern classification.


Such fluctuation measures, which are sometimes referred to as fractal analyses, were introduced in the study of mathematical fractals, objects having the property of scale invariance. It turns out that they can also be quite useful in the study of general time series. Early applications [1–3] of fluctuation analyses to defect or microstructure recognition relied on extracting exponents and scaling amplitudes expected to characterize memory effects in various systems. The approach reviewed here, on the other hand, is based on more general properties of the fluctuation measures.

The remainder of this paper is organized as follows. In Section 2, we define mathematically the fluctuation (or fractal) analyses used to extract relevant features from the various time series. In Section 3, we review the tools used in the proper pattern-classification step, illustrated by several applications in Section 4. We close the paper by presenting our conclusions in Section 5.

2. Fluctuation Analyses
All techniques of fluctuation analysis employed here start by dividing the signal into time intervals containing \tau points. Each technique then involves the calculation of the average of some fluctuation measure Q(\tau) over all intervals, for different values of \tau, thus gathering local information across different time scales. For a signal with genuine fractal features, Q(\tau) should scale as a power of \tau,

Q(\tau) \sim \tau^{\gamma}, \quad (1)

at least in an intermediate range of values of \tau, corresponding to 1 \ll \tau \ll L, L being the signal length.

In general, the exponent \gamma is related to the so-called Hurst exponent H of the time series [4, 5]. This exponent is expected to gauge memory effects which somehow reflect the underlying physical processes influencing the signal. A simple example is provided by fractional Brownian motion [5–7], in which correlated noise is postulated, leading to persistent or antipersistent memory, and to a standard deviation \sigma(t) following

\sigma(t) = (2 K_f t)^{H}, \quad (2)

where t is the time elapsed since the motion started, and K_f is a generalized diffusion coefficient. A Hurst exponent equal to 1/2 corresponds to regular Brownian motion, while values of H different from 1/2 indicate the presence of long-range memory mechanisms affecting the motion; H > 1/2 (H < 1/2) corresponds to persistent (antipersistent) behavior of the time series.

Real-world time series, however, originate from a much more complex interplay of processes, acting at different characteristic time scales, which therefore compete to induce memory effects whose nature may change as a function of time. As the series is probed at time intervals of increasing size, the effective Hurst exponent can vary. In that case, any other exponent related to H would likewise vary. This variation of \gamma with the size of the time interval is precisely what the present approach exploits.

Once the relevant features are obtained from the variation of \gamma with \tau, the different patterns can be classified with the help of statistical tools available in the pattern-recognition literature. Here, as discussed in Section 3, we make use of principal component analysis (PCA) and Karhunen-Loève transformations. (See, e.g., [8] for a thorough account of statistical pattern classification.)
2.1. Hurst (R/S) Analysis. The rescaled-range (R/S) analysis was introduced by Hurst [4] as a tool for evaluating the persistency or antipersistency of a time series. The method works by calculating, inside each time interval, the average ratio of the range (the difference between the maximum and minimum values of the accumulated series) to the standard deviation. The size of each interval is then varied.
Mathematically, the R/S analysis is defined in the following way. Given an interval $I_k$ of size $\tau$, we calculate $\bar{z}_{\tau,k}$, the average of the series $z_i$ inside that interval,
$$\bar{z}_{\tau,k} = \frac{1}{\tau} \sum_{i \in I_k} z_i. \qquad (3)$$
We then define an accumulated deviation from the mean as
$$Z_{i,k} = \sum_{j=k}^{i} \left(z_j - \bar{z}_{\tau,k}\right), \qquad (4)$$
($k$ labelling the left end of $I_k$), and from this accumulated deviation we extract the range
$$R_{\tau,k} = \max_{i \in I_k} Z_{i,k} - \min_{i \in I_k} Z_{i,k}, \qquad (5)$$
while the standard deviation is calculated from the series itself,
$$S_{\tau,k} = \sqrt{\frac{1}{\tau} \sum_{i \in I_k} \left(z_i - \bar{z}_{\tau,k}\right)^2}. \qquad (6)$$
Finally, we calculate the rescaled range $R_{\tau,k}/S_{\tau,k}$, and take its average over all nonoverlapping intervals, obtaining
$$\left\langle \frac{R}{S} \right\rangle\!(\tau) = \frac{1}{n_\tau} \sum_{k} \frac{R_{\tau,k}}{S_{\tau,k}}, \qquad (7)$$
in which $n_\tau = \lfloor L/\tau \rfloor$ is the (integer) number of nonoverlapping intervals of size $\tau$ that can be fit onto a time series of length $L$.
For a purely stochastic curve, with no underlying trends, the rescaled range should satisfy the scaling form
$$\left\langle \frac{R}{S} \right\rangle\!(\tau) \sim \tau^{H}, \qquad (8)$$
where $H$ is the Hurst exponent.
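As an illustration of how the above definitions translate into practice, the following minimal NumPy sketch (our own illustrative implementation, not the code used in the original studies) computes the rescaled range of (7) for a list of interval sizes:

import numpy as np

def rs_analysis(z, taus):
    # Rescaled-range (R/S) analysis following Eqs. (3)-(7).
    z = np.asarray(z, dtype=float)
    L = len(z)
    out = []
    for tau in taus:
        ratios = []
        for k in range(L // tau):                    # nonoverlapping intervals
            seg = z[k * tau:(k + 1) * tau]
            dev = np.cumsum(seg - seg.mean())        # accumulated deviation, Eq. (4)
            R = dev.max() - dev.min()                # range, Eq. (5)
            S = seg.std()                            # standard deviation, Eq. (6)
            if S > 0:
                ratios.append(R / S)
        out.append(np.mean(ratios))                  # average over intervals, Eq. (7)
    return np.array(out)

The Hurst exponent is then estimated as the slope of $\log\langle R/S\rangle$ versus $\log\tau$, following (8).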


2.2. Detrended-Fluctuation Analysis. The detrended-fluctuation analysis (DFA) [9] aims at improving the evaluation of correlations in a time series by eliminating trends in the data. In particular, when a global trend is superimposed on a noisy signal, DFA is expected to provide a more precise estimate of the Hurst exponent than R/S analysis.
The method consists initially in obtaining a new integrated series $Z_i$,
$$Z_i = \sum_{j=1}^{i} \left(z_j - \langle z \rangle\right), \qquad (9)$$
the average $\langle z \rangle$ being taken over all points,
$$\langle z \rangle = \frac{1}{L} \sum_{i=1}^{L} z_i. \qquad (10)$$
After dividing the series into intervals, the points inside a given interval $I_k$ are fitted by a polynomial curve of degree $l$. One usually considers $l = 1$ or $l = 2$, corresponding to first- and second-order fits. Then, a detrended variation function $\delta_{i,k}$ is obtained by subtracting from the integrated data the local trend as given by the fit. Explicitly, we define
$$\delta_{i,k} = Z_i - Z^{f}_{i,k}, \qquad (11)$$
where $Z^{f}_{i,k}$ is the value associated with point $i$ according to the fit inside $I_k$. Finally, we calculate the root-mean-square fluctuation $F_{\tau,k}$ inside an interval as
$$F_{\tau,k} = \sqrt{\frac{1}{\tau} \sum_{i \in I_k} \delta_{i,k}^{2}}, \qquad (12)$$

and average over all intervals, obtaining
$$F(\tau) = \frac{1}{n_\tau} \sum_{k} F_{\tau,k}. \qquad (13)$$
For a true fractal curve, $F(\tau)$ should behave as
$$F(\tau) \sim \tau^{\alpha}, \qquad (14)$$
where $\alpha$ is a scaling exponent. If the trend is correctly identified, one should expect $\alpha$ to be a good approximation to the Hurst exponent $H$ of the underlying correlated noise.
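A corresponding DFA sketch, under the same caveats (an illustration of ours rather than the authors' code), reads:

import numpy as np

def dfa(z, taus, order=1):
    # Detrended-fluctuation analysis following Eqs. (9)-(13).
    z = np.asarray(z, dtype=float)
    Z = np.cumsum(z - z.mean())                      # integrated series, Eqs. (9)-(10)
    L = len(Z)
    out = []
    for tau in taus:
        t = np.arange(tau)
        F_k = []
        for k in range(L // tau):
            seg = Z[k * tau:(k + 1) * tau]
            trend = np.polyval(np.polyfit(t, seg, order), t)   # local fit, Eq. (11)
            F_k.append(np.sqrt(np.mean((seg - trend) ** 2)))   # rms fluctuation, Eq. (12)
        out.append(np.mean(F_k))                     # Eq. (13)
    return np.array(out)

The exponent $\alpha$ is again obtained as the slope of $\log F(\tau)$ versus $\log\tau$ in the scaling region.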
2.3. Box-Counting Analysis. This is a well-known method of estimating the fractal dimension of a point set [7], and it works by counting the minimum number $N(\delta)$ of boxes of linear dimension $\delta$ needed to cover all points in the set. For a real fractal, $N(\delta)$ should follow a power law whose exponent is the box-counting dimension $D_B$,
$$N(\delta) \sim \delta^{-D_B}. \qquad (15)$$
For stochastic Gaussian processes, the box-counting and the Hurst exponents are related by
$$D_B = 2 - H. \qquad (16)$$
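A simple grid-based sketch of the box count, assuming the signal has been rescaled so that amplitude and sample index are expressed in comparable units, is:

import numpy as np

def box_count(z, deltas):
    # Number of boxes of side delta covering the graph of the signal, Eq. (15).
    z = np.asarray(z, dtype=float)
    L = len(z)
    counts = []
    for d in deltas:
        total = 0
        for c in range(int(np.ceil(L / d))):         # columns of width delta
            seg = z[int(c * d):int((c + 1) * d)]
            if seg.size:                             # boxes stacked vertically over the column
                total += int(np.floor(seg.max() / d) - np.floor(seg.min() / d)) + 1
        counts.append(total)
    return np.array(counts)

$D_B$ is then minus the slope of $\log N(\delta)$ versus $\log\delta$.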

2.4. Minimal-Cover Analysis. This recently introduced method [10] relies on the calculation of the minimal area necessary to cover a given plane curve at a specified scale given by the window size $\delta$.
After dividing the series, we can associate with each interval $I_k$ a rectangle of height $H_k$, defined as the difference between the maximum and minimum values of the series $z_i$ inside the interval,
$$H_k = \max_{i_0 \le i \le i_0 + \delta - 1} z_i - \min_{i_0 \le i \le i_0 + \delta - 1} z_i, \qquad (17)$$
in which $i_0$ corresponds to the left end of the interval. The minimal area is then given by
$$A(\delta) = \delta \sum_{k} H_k, \qquad (18)$$
the summation running over all cells.
Ideally, in the scaling region, $A(\delta)$ should behave as
$$A(\delta) \sim \delta^{2 - D_\mu}, \qquad (19)$$
where $D_\mu$ is the minimal-cover dimension, which is equal to 1 when the signal presents no fractality. For genuine fractal curves, it can be shown that, in the limit of infinitely many points, the box-counting and minimal-cover dimensions coincide [10].
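The minimal-cover area is straightforward to evaluate; a sketch in the same illustrative spirit as the previous ones:

import numpy as np

def minimal_cover_area(z, deltas):
    # Minimal-cover area A(delta) of Eqs. (17)-(18).
    z = np.asarray(z, dtype=float)
    L = len(z)
    A = []
    for d in deltas:
        heights = [z[k * d:(k + 1) * d].max() - z[k * d:(k + 1) * d].min()
                   for k in range(L // d)]           # rectangle heights, Eq. (17)
        A.append(d * np.sum(heights))                # Eq. (18)
    return np.array(A)

The minimal-cover dimension $D_\mu$ then follows from the scaling of Eq. (19).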

2.5. Detrended Cross-Correlation Analysis. This is a recently introduced [11] extension of DFA, based on detrended covariance calculations, and is designed to investigate power-law correlations between different simultaneously recorded time series $\{x_i\}$ and $\{y_i\}$.
The first step of the method involves building the integrated time series
$$X_j = \sum_{i=1}^{j} x_i, \qquad Y_j = \sum_{i=1}^{j} y_i. \qquad (20)$$
Both series are then divided into $N - \tau + 1$ overlapping intervals of size $\tau$, and, inside each interval $I_k$, local trends $X^{f}_{j,k}$ and $Y^{f}_{j,k}$ are evaluated by least-square linear fits. The detrended cross-correlation $C_{\tau,k}$ is defined as the covariance of the residuals in interval $I_k$,
$$C_{\tau,k} = \frac{1}{\tau} \sum_{j \in I_k} \left(X_j - X^{f}_{j,k}\right)\left(Y_j - Y^{f}_{j,k}\right), \qquad (21)$$
which is then averaged to yield a detrended cross-correlation function
$$C(\tau) = \frac{1}{N - \tau + 1} \sum_{k} C_{\tau,k}. \qquad (22)$$
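A minimal sketch of the procedure, with the local linear trends obtained by least squares as in the definition above, is:

import numpy as np

def dcca(x, y, taus):
    # Detrended cross-correlation analysis following Eqs. (20)-(22).
    X = np.cumsum(np.asarray(x, dtype=float))        # integrated series, Eq. (20)
    Y = np.cumsum(np.asarray(y, dtype=float))
    N = len(X)
    out = []
    for tau in taus:
        t = np.arange(tau)
        covs = []
        for k in range(N - tau + 1):                 # overlapping intervals of size tau
            xs, ys = X[k:k + tau], Y[k:k + tau]
            xr = xs - np.polyval(np.polyfit(t, xs, 1), t)    # residuals about trend
            yr = ys - np.polyval(np.polyfit(t, ys, 1), t)
            covs.append(np.mean(xr * yr))            # covariance of residuals, Eq. (21)
        out.append(np.mean(covs))                    # Eq. (22)
    return np.array(out)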

3. Pattern-Classification Tools
Having obtained curves of different fluctuation estimates $Q(\tau)$ as functions of the time interval size $\tau$, we make use of standard pattern-recognition tools in order to group the signals according to relevant classes. The first step towards classification is to build feature vectors from one or more fluctuation analyses of a given signal. In the simplest case, a set of $d$ fixed interval sizes $\{\tau_j\}$ is selected, and the values of the corresponding functions $Q(\tau_j)$ at each $\tau_j$, as calculated for the $i$th signal, define the feature (column) vector $\mathbf{x}_i$ of that signal,
$$\mathbf{x}_i = \begin{pmatrix} Q(\tau_1) \\ Q(\tau_2) \\ \vdots \\ Q(\tau_d) \end{pmatrix}. \qquad (23)$$
In our studies, unless stated otherwise, we select as interval sizes the nearest integers obtained from powers of $2^{1/4}$, starting with $\tau_1 = 4$ and ending with $\tau_d$ equal to the length of the shortest series available.
It is also possible to concatenate vectors obtained from more than one fluctuation analysis to obtain feature vectors of larger dimension. This usually leads to better classifiers.
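For concreteness, a sketch of the feature-vector construction described above (again illustrative only) is:

import numpy as np

def interval_sizes(max_tau, tau0=4, ratio=2 ** 0.25):
    # Nearest integers to powers of 2**(1/4), from tau0 up to max_tau.
    taus, tau = [], float(tau0)
    while round(tau) <= max_tau:
        t = int(round(tau))
        if not taus or t != taus[-1]:                # skip repeated integers
            taus.append(t)
        tau *= ratio
    return taus

# A feature vector then stacks one or more analyses of the same signal,
# e.g., using the rs_analysis and dfa sketches given earlier:
# x_i = np.concatenate([rs_analysis(z, taus), dfa(z, taus)])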
The following subsections discuss different methods designed to group feature vectors into relevant classes. All methods initially select a subset of the available vectors as a training group in order to build the classifier, whose generalizability is then tested with the remaining vectors. This procedure has to be repeated for many distinct choices of training and testing vectors, as a way to evaluate the average efficiency of the classifier. One can then study the resulting confusion matrices, which report the percentage of vectors of a given class assigned to each of the possible classes.


3.1. Principal-Component Analysis. Given a set of $N$ feature vectors $\{\mathbf{x}_i\}$, principal-component analysis (PCA) is based on the projection of those vectors onto the directions defined by the eigenvectors of the covariance matrix
$$S = \frac{1}{N} \sum_{i=1}^{N} (\mathbf{x}_i - \mathbf{m})(\mathbf{x}_i - \mathbf{m})^{T}, \qquad (24)$$
in which $\mathbf{m}$ is the average vector,
$$\mathbf{m} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{x}_i, \qquad (25)$$
and $T$ denotes the vector transpose. If the eigenvalues of $S$ are arranged in decreasing order, the projections along the first eigenvector, corresponding to the largest eigenvalue, define the first principal component, and account for the largest variation of any linear function of the original variables. In general, the $n$th principal component is defined by the projections of the original vectors along the direction of the $n$th eigenvector. Therefore, the principal components are ordered in terms of the (decreasing) amount of variation of the original data for which they account.
Thus, PCA amounts to a rotation of the coordinate system to a new set of orthogonal axes, yielding a new set of uncorrelated variables, and a reduction in the number of relevant dimensions, if one chooses to ignore principal components whose corresponding eigenvalues lie below a certain limit.
A classifier based on PCA can be built by using the first few principal components to define modified vectors, whose class averages are determined from the vectors in the training group. Then, a testing vector $\mathbf{x}$ is assigned to the class whose average vector lies closest to $\mathbf{x}$ within the transformed space. This is known as the nearest-class-mean rule, and would be optimal if the vectors in different classes followed normal distributions.
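A compact sketch of a PCA-based nearest-class-mean classifier (our own illustration, with X storing one feature vector per column) is:

import numpy as np

def pca_projection(X, n_components):
    # Principal components of the columns of X, following Eqs. (24)-(25).
    m = X.mean(axis=1, keepdims=True)                # average vector, Eq. (25)
    S = (X - m) @ (X - m).T / X.shape[1]             # covariance matrix, Eq. (24)
    _, U = np.linalg.eigh(S)                         # eigenvalues in ascending order
    U = U[:, ::-1][:, :n_components]                 # keep leading eigenvectors
    return m, U                                      # project with U.T @ (x - m)

def nearest_class_mean(z, class_means):
    # Assign z to the class whose (projected) mean vector lies closest.
    return min(class_means, key=lambda c: np.linalg.norm(z - class_means[c]))

Class means are computed from the projected training vectors, after which each projected testing vector is assigned with nearest_class_mean.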
3.2. Karhunen-Loève Transformation. Although very helpful in visualizing the clustering of vectors, PCA ignores any available class information. The Karhunen-Loève (KL) transformation, in its general form, although similar in spirit to PCA, does take class information into account. The version of the transformation employed here [8, 12] relies on the compression of discriminatory information contained in the class means.
The KL transformation consists in first projecting the training vectors along the eigenvectors of the within-class covariance matrix $S_W$, defined by
$$S_W = \frac{1}{N} \sum_{k=1}^{N_C} \sum_{i=1}^{N} y_{ik} (\mathbf{x}_i - \mathbf{m}_k)(\mathbf{x}_i - \mathbf{m}_k)^{T}, \qquad (26)$$
where $N_C$ is the number of different classes, $N_k$ is the number of vectors in class $k$, and $\mathbf{m}_k$ is the average vector of class $k$. The element $y_{ik}$ is equal to one if $\mathbf{x}_i$ belongs to class $k$, and zero otherwise. We also rescale the resulting vectors by a diagonal matrix built from the eigenvalues $\lambda_j$ of $S_W$. In matrix notation, this operation can be written as
$$X' = \Lambda^{-1/2} U^{T} X, \qquad (27)$$
in which $X$ is the matrix whose columns are the training vectors $\mathbf{x}_i$, $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots)$, and $U$ is the matrix whose columns are the eigenvectors of $S_W$. This choice of coordinates makes sure that the transformed within-class covariance matrix corresponds to the unit matrix. Finally, in order to compress the class information, we project the resulting vectors onto the eigenvectors of the between-class covariance matrix $S_B$,
$$S_B = \sum_{k=1}^{N_C} \frac{N_k}{N} (\mathbf{m}_k - \mathbf{m})(\mathbf{m}_k - \mathbf{m})^{T}, \qquad (28)$$
where $\mathbf{m}$ is the overall average vector. The full transformation can be written as
$$X'' = V^{T} \Lambda^{-1/2} U^{T} X, \qquad (29)$$
$V$ being the matrix whose columns are the eigenvectors of $S_B$ (calculated from $X'$).
With $N_C$ possible classes, the fully-transformed vectors have at most $N_C - 1$ relevant components. We then classify a testing vector $\mathbf{x}_i$ using the nearest-class-mean rule.
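The whole transformation can be condensed into a single matrix, as in the following sketch (it assumes $S_W$ is nonsingular; in practice a small regularization term may be added before the whitening step):

import numpy as np

def kl_matrix(X, labels):
    # Full KL transformation matrix of Eq. (29); X is d x N and
    # labels (a NumPy array) holds one class identifier per column of X.
    classes = np.unique(labels)
    N = X.shape[1]
    SW = np.zeros((X.shape[0], X.shape[0]))
    for k in classes:                                # within-class covariance, Eq. (26)
        Xk = X[:, labels == k]
        D = Xk - Xk.mean(axis=1, keepdims=True)
        SW += D @ D.T
    SW /= N
    lam, U = np.linalg.eigh(SW)
    W = U @ np.diag(lam ** -0.5)                     # whitening, Eq. (27)
    Xp = W.T @ X
    mp = Xp.mean(axis=1, keepdims=True)
    SB = np.zeros_like(SW)
    for k in classes:                                # between-class covariance, Eq. (28)
        dk = Xp[:, labels == k].mean(axis=1, keepdims=True) - mp
        SB += (np.sum(labels == k) / N) * (dk @ dk.T)
    _, V = np.linalg.eigh(SB)
    V = V[:, ::-1][:, :len(classes) - 1]             # at most N_C - 1 components
    return V.T @ W.T                                 # apply to training and testing vectors

The returned matrix is applied to both training and testing vectors, after which the nearest-class-mean rule is used as before.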

4. Applications
4.1. Cast-Iron Microstructure from Ultrasonic Backscattered Signals. An early application of the ideas described in this review aimed at distinguishing microstructures in graphite cast iron through Hurst and detrended-fluctuation analyses of backscattered ultrasonic signals.
As detailed in [2], backscattered ultrasonic signals were captured with a 5 MHz transducer, at a sampling rate of 40 MHz, from samples of vermicular, lamellar, and spheroidal graphite cast iron. Double-logarithmic plots of the resulting R/S and DFA calculations, shown in Figure 1, reveal that in all cases two regimes can be identified, reflecting short- and long-time structure of the signals, respectively. From the discussion in Sections 2.1 and 2.2, this implies that one can define two sets of exponents, related to the short- and long-time fractal dimensions of the signals, as estimated from the corresponding values of the Hurst exponent $H$ and the DFA exponent $\alpha$; see (16).
Lamellar cast iron is readily identified as having a smaller short- than long-time fractal dimension, contrary to both vermicular and spheroidal cast irons. These latter types, in turn, can be identified on the basis of the relative values of $H$ and $\alpha$ in the different regimes.
As discussed in the following subsections, this fortunately clear distinction on the basis of a very small set of exponents is not possible in more general applications. Nevertheless, a set of relevant features can still be extracted from fluctuation or fractal analyses by using tools from the pattern-recognition literature.


Figure 1: Double-logarithmic plots of the curves obtained from Hurst (R/S) and detrended-fluctuation (DF) analyses of backscattered ultrasonic signals propagating in lamellar (a), vermicular (b), and spheroidal (c) cast iron. The values of $\langle\alpha\rangle$ and $\langle H\rangle$ are obtained by averaging the slopes of all curves in the corresponding intervals, as shown by the solid lines.

4.2. Welding Defects in Metals from TOFD Ultrasonic Inspection. The TOFD (time-of-flight diffraction) technique aims at estimating the size of a discontinuity in a material by measuring the difference in time between ultrasonic signals scattering off the opposite tips of the discontinuity. For welding-joint inspection, the conventional setup consists of one emitter and one receiver transducer, aligned on either side of the weld bead. (Longitudinal rather than transverse waves are used, for a number of reasons, among which is higher propagation speed.)
In the case studied in [13], 240 signals of ultrasound amplitude versus time were captured, with a TOFD setup, from twelve test samples of steel plate AISI 1020, welded by the shielded process. (Details on materials and methods can be found in [14].) The signals used in the study were extracted from sections with no visible defects in the welding, and from sections exhibiting lack of penetration, lack of fusion, and porosities. Each of the four classes was represented by 60 signals, each one containing 512 data points, with 8-bit resolution. Examples of signals from each class are shown in Figure 2.
By combining curves obtained from Hurst, linear detrended-fluctuation, minimal-cover, and box-counting analyses into single vectors representing each ultrasonic signal, a very efficient classifier is built using features extracted from a Karhunen-Loève transformation and the nearest-class-mean rule. The average confusion matrix obtained from 500 sets of 48 testing vectors is shown in Table 1. A maximum error of about 27% is obtained, corresponding to the misclassification of porosities. A slightly poorer performance is obtained by first building feature vectors from each of the four fluctuation analyses, performing provisional classifications, and then deciding on the final classification by means of a majority vote (with ties randomly resolved). In this


Figure 2: Typical examples of signals obtained from samples with (a) lack-of-fusion defects, (b) lack-of-penetration defects, (c) porosities,
and (d) no defects. The horizontal axes correspond to the time direction, in units of the inverse sample rate of the equipment.

Table 1: Average percentage confusion matrix for testing vectors built from a combination of fluctuation analyses. The possible classes are lack of fusion (LF), lack of penetration (LP), porosity (PO), and no defects (ND). Figures in parentheses indicate the standard deviations, calculated over 500 sets. (Notice that in [13] these figures were erroneously reported.) The value in row i, column j indicates the percentage of vectors belonging to class i which were associated with class j.

        LF             LP             PO             ND
LF      91.07 (0.37)   1.69 (0.16)    6.88 (0.33)    0.35 (0.08)
LP      2.61 (0.37)    83.96 (0.45)   12.14 (0.41)   1.28 (0.14)
PO      6.43 (0.32)    13.99 (0.47)   72.66 (0.58)   6.92 (0.34)
ND      1.01 (0.15)    2.55 (0.20)    6.92 (0.32)    89.51 (0.40)

case, as shown in Table 2, the overall error rate is somewhat increased, although the classification error of samples associated with lack of penetration decreases. In any case, both of these approaches yield considerably better performance than classifiers based on either correlograms or Fourier spectra of the signals, and at a smaller computational cost.

4.3. Gear Faults from Vibration Signals. As detailed in [15], vibration signals were captured by an accelerometer attached to the upper side of a gearbox containing four gears, one of which was sometimes replaced by a gear either containing a severe scratch over 10 consecutive teeth, or missing one tooth.
Several working conditions were studied, consisting of different choices of rotation frequency (from 400 rpm to 1400 rpm) and of the presence or absence of a fixed external load. For each working condition, 54 signals containing 2048 points were captured (with a sampling rate of 512 Hz), 18 signals corresponding to each of the three possible classes of gear (normal, scratched, or toothless). Linear DFA was then performed on the signals, and feature vectors were built from curves corresponding to 13 interval sizes $\tau$ ranging from 4 to 32. Figure 3 shows representative signals obtained under load, at a rotation frequency of 1400 rpm, along with the corresponding DFA curves.
Principal-component analysis was applied to the resulting vectors, and a nearest-class-mean classifier was built from the first three principal components of 36 randomly chosen training vectors. With averages taken over 100

Table 2: The same as in Table 1, but now for a majority vote involving classifications based on each fluctuation analysis separately.

        LF             LP             PO             ND
LF      87.11 (0.40)   0.64 (0.10)    6.96 (0.33)    5.28 (0.27)
LP      2.04 (0.18)    90.06 (0.40)   5.88 (0.34)    2.01 (0.18)
PO      7.13 (0.34)    19.16 (0.52)   65.18 (0.61)   8.53 (0.35)
ND      2.26 (0.19)    1.38 (0.17)    7.81 (0.34)    88.54 (0.41)

Table 3: Average percentage of correctly classified testing signals coming from toothless and normal gears working in the absence of load.

rpm         400          600          800          1000         1200         1400
Toothless   69.4 ± 1.9   86.3 ± 1.5   96.2 ± 0.7   49.2 ± 2.9   68.8 ± 2.1   48.2 ± 2.5
Normal      69.3 ± 1.8   100          100          64.1 ± 2.4   91.5 ± 1.2   45.1 ± 2.5

choices of training and testing vectors, the classifier was always capable of correctly identifying scratched gears, while the classification error of testing vectors corresponding to normal or toothless gears, although unacceptably high for two working conditions in the absence of load, lay below 6% for most conditions under load; see Tables 3 and 4. Although a similar classifier based on Fourier spectra yields superior performance, this comes at a much higher computational cost, since feature vectors now have 1024 points [15].
4.4. Weld-Transfer Mode from Current and Voltage Time Series. As detailed in [16], voltage and current data were captured during Metal Inert/Active Gas welding of steel workpieces, with simultaneous high-speed video footage allowing identification of the instantaneous metal-transfer mode. The sampling rate was 10 kHz, and a collection of nine voltage and current time series was built, with three series corresponding to each of three metal-transfer modes (dip, globular, and spray). The typical duration of each series was 4.5 seconds, and examples are shown in Figure 4.
A systematic classification study was performed by first dividing each time series into smaller series containing L points (L being 512, 1024, 2048, or 4096). These smaller series were then processed with Hurst, linear detrended-fluctuation, and detrended-cross-correlation analyses. Figure 5 shows example curves. Selecting 80% of the obtained feature vectors for training (with averages over 100 random choices of training and testing sets), classifiers were built from voltage or current signals separately processed with Hurst or detrended-fluctuation analyses, as well as from voltage and current signals simultaneously processed with detrended-cross-correlation analysis. A Karhunen-Loève transformation was finally employed along with the nearest-class-mean rule. In the poorest performance, obtained from signals with L = 512 points subject to Hurst analysis, the maximum classification error was 27% for signals corresponding to spray transfer mode, with 100% correctness achieved for globular transfer mode.
Table 5 shows the average classification error of each classifier, for different series lengths L. The overall performance of classifiers with L = 1024 and L = 2048 is better


than with the other two lengths. This can be traced to the fact that, as illustrated by Figure 5, distinguishing features (such as average slopes and discontinuities) between curves corresponding to different transfer modes tend to appear at intermediate time scales. For a given length, detrended-cross-correlation analysis of voltage and current signals yields an intermediate classification efficiency as compared to either voltage or current signals analyzed separately. The best classifier is obtained with the Hurst analysis of signals containing L = 2048 points, yielding a negligible classification error of 0.1%.
In contrast, as shown in the bottom two rows of Table 5, similar classifiers in which feature vectors are defined by the full Fourier spectra of the various signals yield much larger classification errors, and at a much higher computational cost (since the size of feature vectors scales as L, whereas for fluctuation analyses it scales as log L).
4.5. Stainless Steel Microstructure from Magnetic Measurements. Barkhausen noise is a magnetic phenomenon produced when a variable magnetic field induces magnetic domain-wall movements in ferromagnetic materials. These movements are discrete rather than continuous, and are caused by defects in the material microstructure, generating magnetic pulses that can be measured by a coil placed on the material surface.
Magnetic Barkhausen noise (BN) and magnetic flux (MF) measurements were performed on samples of stainless-steel steam-pressure vessels, as detailed in [17]. These presented coarse ferritic-pearlitic phases (named stage A) before degradation. Owing to temperature effects, two different microstructures were obtained from pearlite that had partially (stage BC) or completely (stage D) transformed to spheroidite. Measurements were performed by using a sinusoidal magnetic wave of frequency 10 Hz, each signal consisting of 40 000 points, with a sampling rate of 200 kHz. A total of 144 signals were captured, 40 signals corresponding to stage A, 88 to stage BC, and 16 to stage D. Typical signals are shown in Figure 6. Notice that, as regards the magnetic flux, the difference between signals from the various stages seems to lie in the intensity of the peaks and troughs,



[Figure 3 appears here: (a) signal from normal gear, (b) DFA from normal gear, (c) signal from toothless gear, (d) DFA from toothless gear, (e) signal from scratched gear, (f) DFA from scratched gear.]

Figure 3: Representative signals and DFA curves obtained from the three types of gear, working under load at a rotation frequency of
1400 rpm. In the signal plots, time is measured in units of the inverse sampling rate.

Table 4: The same as in Table 3, but now for gears working under load.

rpm         400          600          800          1000         1200         1400
Toothless   100          100          100          100          100          100
Normal      94.8 ± 0.8   97.5 ± 0.7   98.5 ± 0.5   95.6 ± 0.7   81.3 ± 1.7   100


Figure 4: Examples of voltage (left) and current (right) time series obtained during the welding process under dip (top), globular (center),
and spray (bottom) metal-transfer modes.


Figure 5: Examples of curves obtained from Hurst (top), detrended-fluctuation (center), and detrended-cross-correlation (bottom) analyses applied to current (I) and voltage (V) sample signals obtained under dip, globular, and spray metal-transfer modes. Logarithms are in base 10, and the time window size $\tau$ is measured in tenths of a millisecond.


Figure 6: Typical signals of (a) magnetic flux and (b) Barkhausen noise obtained from stainless-steel samples at different stages of microstructural degradation. Plots in (b) have been vertically shifted for clarity.

Table 5: Average percentage classification errors of testing voltage (V) and current (I) signals containing L points, produced by classifiers based on Hurst, detrended-fluctuation (DF), or detrended-cross-correlation (DCC) analyses. Also shown are results for classifiers based on Fourier spectra.

L            512          1024         2048         4096
DF, V        3.1 ± 0.2    2.2 ± 0.4    3.6 ± 0.7    5.3 ± 1.3
Hurst, V     6.5 ± 0.4    3.1 ± 0.5    0.1 ± 0.1    0.7 ± 0.7
DF, I        2.1 ± 0.2    0.6 ± 0.2    0.5 ± 0.3    1.6 ± 1.1
Hurst, I     14.5 ± 0.5   5.4 ± 0.6    4.0 ± 0.9    2.7 ± 1.3
DCC, V + I   3.2 ± 0.3    1.5 ± 0.3    2.4 ± 0.7    7.7 ± 1.3
Fourier, V   23.6 ± 0.9   21.8 ± 0.8   18.7 ± 1.2   36.7 ± 2.3
Fourier, I   22.7 ± 2.5   27.5 ± 1.9   8.7 ± 0.9    14.5 ± 1.9

Table 6: Average percentage of correctly classified testing signals coming from stainless-steel samples in different degradation stages. Classifiers employed detrended-fluctuation (DFA), Hurst (RS), or Fourier spectral (FS) analyses on either Barkhausen noise (BN) or magnetic flux (MF).

           DFA/BN       RS/BN        DFA/MF       RS/MF        FS/MF
Stage A    54.8 ± 1.9   34.2 ± 1.6   83.0 ± 1.3   90.5 ± 1.0   67.8 ± 1.7
Stage BC   57.6 ± 1.2   49.5 ± 1.5   87.2 ± 0.8   92.5 ± 0.6   77.0 ± 1.1
Stage D    68.4 ± 2.9   31.0 ± 2.7   96.4 ± 1.5   98.0 ± 1.4   78.6 ± 2.9

although there is also a fine structure in the curves which is not visible at the scale of the figure.
Results from classifiers based on detrended-fluctuation and Hurst analyses, with a KL transformation as the final step, are shown in Table 6, for both BN and MF signals, with averages over 100 sets of training and testing vectors. Also shown for comparison are results from classifiers based


on Fourier spectral analysis (making use of magnetic-flux signals with 512 points extracted from the original signals by selecting every 78th point, in order to build feature vectors with a manageable number of dimensions). The performance of classifiers based on Barkhausen noise is much inferior to that of classifiers based on magnetic-flux signals, which is now discussed.

The best performance is obtained by the Hurst classifier, with a maximum error of about 10%, followed by the DFA classifier, with a maximum error around 17%. Somewhat surprisingly, in view of the long-time regularity of the magnetic-flux signals evident in Figure 6, the Fourier-spectral classifier shows the worst performance, with an average classification error of 25%.

5. Conclusions
We have reviewed and supplemented recent work on the application of fluctuation analysis as a pattern-classification tool in nondestructive materials inspection. This approach has been shown to lead to very efficient classifiers, with a performance comparable, and usually quite superior, to more traditional approaches based, for instance, on Fourier transforms. The present approach also requires less computational effort to achieve a given efficiency, which would be an important issue when building automated inspection systems for field work.
An extension of the present approach to defect recognition from radiographic or ultrasonic images can be achieved based on generalizations of the fluctuation analyses to measure surface roughness [18, 19]. Given any two-dimensional image, a corresponding surface can be built by a color-to-height conversion procedure, and the same mathematical analyses can then be performed.

Acknowledgments
The authors acknowledge financial support from the Brazilian agencies FUNCAP, CNPq, CAPES, FINEP (CT-Petro),
and Petrobras (Brazilian oil company).

References
[1] P. Barat, "Fractal characterization of ultrasonic signals from polycrystalline materials," Chaos, Solitons & Fractals, vol. 9, no. 11, pp. 1827-1834, 1998.
[2] J. M. O. Matos, E. P. de Moura, S. E. Kruger, and J. M. A. Rebello, "Rescaled range analysis and detrended fluctuation analysis study of cast irons ultrasonic backscattered signals," Chaos, Solitons & Fractals, vol. 19, no. 1, pp. 55-60, 2004.
[3] F. E. Silva, L. L. Gonçalves, D. B. B. Fereira, and J. M. A. Rebello, "Characterization of failure mechanism in composite materials through fractal analysis of acoustic emission signals," Chaos, Solitons & Fractals, vol. 26, no. 2, pp. 481-494, 2005.
[4] H. E. Hurst, "Long-term storage capacity of reservoirs," Transactions of the American Society of Civil Engineers, vol. 116, pp. 770-799, 1951.
[5] J. Feder, Fractals, Plenum Press, New York, NY, USA, 1988.
[6] B. B. Mandelbrot and J. W. van Ness, "Fractional Brownian motions, fractional noises and applications," SIAM Review, vol. 10, pp. 422-437, 1968.
[7] P. S. Addison, Fractals and Chaos, IOP, London, UK, 1997.
[8] A. R. Webb, Statistical Pattern Recognition, John Wiley & Sons, West Sussex, UK, 2nd edition, 2002.
[9] C.-K. Peng, S. V. Buldyrev, S. Havlin, M. Simons, H. E. Stanley, and A. L. Goldberger, "Mosaic organization of DNA nucleotides," Physical Review E, vol. 49, no. 2, pp. 1685-1689, 1994.
[10] M. M. Dubovikov, N. V. Starchenko, and M. S. Dubovikov, "Dimension of the minimal cover and fractal analysis of time series," Physica A, vol. 339, no. 3-4, pp. 591-608, 2004.
[11] B. Podobnik and H. E. Stanley, "Detrended cross-correlation analysis: a new method for analyzing two nonstationary time series," Physical Review Letters, vol. 100, no. 8, Article ID 084102, 4 pages, 2008.
[12] J. Kittler and P. C. Young, "A new approach to feature selection based on the Karhunen-Loève expansion," Pattern Recognition, vol. 5, no. 4, pp. 335-352, 1973.
[13] A. P. Vieira, E. P. de Moura, L. L. Gonçalves, and J. M. A. Rebello, "Characterization of welding defects by fractal analysis of ultrasonic signals," Chaos, Solitons & Fractals, vol. 38, no. 3, pp. 748-754, 2008.
[14] E. P. de Moura, M. H. S. Siqueira, R. R. da Silva, J. M. A. Rebello, and L. P. Caloba, "Welding defect pattern recognition in TOFD signals Part 1. Linear classifiers," Insight, vol. 47, no. 12, pp. 777-782, 2005.
[15] E. P. de Moura, A. P. Vieira, M. A. S. Irmao, and A. A. Silva, "Applications of detrended-fluctuation analysis to gearbox fault diagnosis," Mechanical Systems and Signal Processing, vol. 23, no. 3, pp. 682-689, 2009.
[16] A. P. Vieira, H. H. M. Vasconcelos, L. L. Gonçalves, and H. C. de Miranda, "Fractal analysis of metal transfer in mig/mag welding," in Review of Progress in Quantitative Nondestructive Evaluation, vol. 1096 of AIP Conference Proceedings, pp. 564-571, 2009.
[17] L. R. Padovese, F. E. da Silva, E. P. de Moura, and L. L. Gonçalves, "Characterization of microstructural changes in coarse ferritic-pearlitic stainless steel through the statistical fluctuation and fractal analyses of Barkhausen noise," in Review of Progress in Quantitative Nondestructive Evaluation, vol. 1211 of AIP Conference Proceedings, pp. 1293-1300, 2010.
[18] J. A. Tesser, R. T. Lopes, A. P. Vieira, L. L. Gonçalves, and J. M. A. Rebello, "Fractal analysis of weld defect patterns obtained from radiographic tests," in Review of Progress in Quantitative Nondestructive Evaluation, vol. 894 of AIP Conference Proceedings, pp. 539-545, 2007.
[19] G.-F. Gu and W.-X. Zhou, "Detrended fluctuation analysis for fractals and multifractals in higher dimensions," Physical Review E, vol. 74, no. 6, Article ID 061104, 2006.

Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 317216, 14 pages
doi:10.1155/2010/317216

Research Article
A Study of Concrete Hydration and Dielectric Relaxation
Mechanism Using Ground Penetrating Radar and Short-Time
Fourier Transform
W. L. Lai, T. Kind, and H. Wiggenhauser
BAM, Federal Institute for Materials Research and Testing, Unter den Eichen 87, 12205 Berlin, Division VIII.2, Germany
Correspondence should be addressed to W. L. Lai, wai-lok.lai@bam.de
Received 7 January 2010; Revised 6 June 2010; Accepted 5 July 2010
Academic Editor: Joao Marcos A. Rebello
Copyright 2010 W. L. Lai et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Ground penetrating radar (GPR) was used to characterize the frequency-dependent dielectric relaxation phenomena of ordinary Portland cement (OPC) hydration in concrete changing from the fresh to the hardened state. The study was conducted by measuring the changes of GPR A-scan waveforms over a period of 90 days, and processing the waveforms with the short-time Fourier transform (STFT) in the joint time-frequency analysis (JTFA) domain rather than in the conventional time or frequency domain alone. The signals of the direct wave traveling at the concrete surface and of the wave reflected from an embedded steel bar were transformed with STFT, in which the changes of peak frequency over the ages were tracked. The peak frequencies were found to increase with age, and the patterns were found to match closely with, primarily, the well-known OPC hydration process and, secondarily, the evaporation effect. The close match is attributed to the simultaneous effects of converting free to bound water over time on both the conventional OPC hydration and the dielectric relaxation mechanisms.

1. Introduction
1.1. Study of Materials Using GPR. Nondestructive testing methods have attracted much attention in civil engineering applications in recent years [1, 2]. One of the most widely used methods is ground penetrating radar (GPR), which images and sees through material structures [3, 4]. Applications to concrete structures such as bridges [4], railway ballast [5, 6], dams [3], and brick [7] have been reported extensively. Principles of GPR are widely documented and reviewed [1, 2, 8, 9]. While most published works and field applications focus on locating internal characteristics (such as concrete reinforcement, voids, and layer thickness), there is comparatively little effort to study the properties of the host material itself, that is, the concrete. From a materials perspective, the mechanism converting water from free to bound form in OPC concrete was characterized with GPR by measuring the change in the real part of the dielectric permittivity/velocity in the time domain [10]. In this paper, we attempt to characterize the same mechanism by measuring the changes of the frequency-

dependent absorption and relaxation spectra, which are affected by the imaginary part of the material's permittivity.
Research on the material properties of concrete using dielectric properties is normally conducted at a localized scale and on a small volume of concrete, for example with a microwave terminal [11, 12] or a dielectric sensor [13]. Larger-scale investigations should involve GPR, which has been used to assess the mechanism of hydration [10], concrete composition [14], moisture contents [15, 16], and pore size distribution [17].
1.2. GPR Signal Processing Methods to Evaluate Material Properties. Most well-established GPR signal processing methods (such as time-zero correction, filtering, gain control, deconvolution, etc.) put emphasis on enhancing signal quality and the understanding of internal characteristics. In most cases, the ultimate goal of these methods is to reconstruct understandable signals in the time domain which reveal the internal and spatial concrete structure. The first step is to stack one-dimensional GPR waveforms (A-scans) laterally to construct a two-dimensional (B-scan) radargram, followed by building three-dimensional cube views

(C-scans) after alignment of multiple B-scans [4]. B-scans and C-scans have been frequently used to delineate and interpret the internal characteristics of spatial material structures, provided that the features exhibit contrasting electric properties compared to the host material. However, the B- and C-scans operated in the time domain are unable to reveal the frequency features of the time-varying GPR signals, which are largely affected by the spatial or temporal changes of the concrete properties. In other words, when the interpretation of GPR signals is extended from object location and spatial characterization to the study of material properties, the time-domain method may not be adequate, because the characteristic of the time-varying frequency is not taken into account. As such, this paper adopts a two-dimensional joint time-frequency analysis (JTFA) to transform and evaluate the signal. This method is based on, and goes back to, the most fundamental A-scan waveforms.
In the JTFA domain, we migrated the A-scans (obtained from 1.5 and 2.6 GHz GPR antennae) from one dimension (time) to two dimensions (time and frequency). In this domain, the localized frequency changes are analyzed jointly with the localized amplitude changes registered along the radar time axis (i.e., the A-scan). The short-time Fourier transform (STFT) was used as the computation algorithm because it allows the frequency spectra to be revealed in preselected, stepped, short-time windows, in contrast to the traditional Fourier transform. Sweeping this short-time window over the radar time axis stacks the localized frequency spectra centered at each radar time point, and ultimately compiles a 2D time-frequency plot with time and frequency as the x- and y-axes. In this paper, the usefulness of this methodology is tested with the well-known cement hydration mechanism in concrete. It is well known that concrete hydrates free water over the curing process, which changes the wave propagation velocity, the real part of the permittivity, and the dielectric contrast [10]. In this paper, we measured the temporal changes of the A-scans over a period from an initial fresh state to a hardened state up to 90 days. Then the A-scans were processed and transformed into 2D time-frequency plots to observe the lateral frequency changes for both the direct wave and an embedded steel-bar reflector. These signals were further characterized and quantified by determination of the peak frequency, and the results were justified by the dielectric relaxation and OPC hydration theories.

2. Theories
2.1. Ground Penetrating Radar (GPR). A GPR is an instrumentation tool that produces high-frequency electromagnetic waves propagating through the host materials. These waves are subjected to a combined effect of transmission, reflection, refraction, scattering, absorption, and attenuation by the materials, which are usually dielectrics, such as concrete, soils, and rocks. The reflected energy is then captured, recorded, and digitized by a receiving antenna, and is registered as an amplitude record over a time window on a scale of nanoseconds, simply known as an A-scan.

At frequencies in the GHz range, the GPR wave propagation velocity and attenuation in a medium are primarily governed by the dielectric permittivity ($\varepsilon$) of the composite medium, as illustrated by the GPR plateau in [18]. The $\varepsilon$ of any composite material is determined by the volumetric fraction of each individual $\varepsilon$, namely of air, water, and solids, and may be mathematically formulated by different dielectric mixing models [19]. In a dielectric, water is recognized as the single most dominant factor changing $\varepsilon$ [10, 14-17, 20]. Other factors are (1) electromagnetic (EM) frequency [21-24], (2) water to cement ratio [10, 14, 25], (3) porosity [17, 25], (4) ions in the pore solution, which make the material behave as a lossy dielectric [26], and (5) clay minerals with a wide range of porosities and specific surfaces [27]. Because of the dominant effect of water on GPR wave propagation, GPR becomes a good candidate to solve the inverse problems of material properties, in particular the characterization of the changes of state of water in any porous cultural and natural material with low conductivity, such as concrete, asphalt, and soil.
2.2. Cement Hydration. The change of water from the free to the bound state in concrete is best realized in the cement hydration process. For concrete after fresh mixing, curing starts to allow continued absorption of water by cement to develop and form dense solid calcium silicate hydrates (C-S-H). As the volume of C-S-H increases, the gel and capillary channels are blocked, segmented, and isolated. Free water is reduced, absorbed, and bound, because water is hydrated to change from free to bound form [28]. For OPC concrete, this mechanism is quick at the initial stage and slows down afterwards. This process is vital for the development of concrete strength and durability, which determine the structural integrity and serviceability.
2.3. Frequency-Dependent Dielectric Relaxation in Porous Materials. When an electrical field is applied to a nonconductive material, the frequency dependence of the polarization process is described by permittivity relaxation phenomena and is characterized by the relaxation frequency ($f_{rel}$) of the material. Below $f_{rel}$, the particles are capable of being polarized instantaneously by the applied E-field and stay in phase with the E-field. Above $f_{rel}$, frictional and inertial effects cause this polarization to lag behind the E-field. It is no longer instantaneous [19] and therefore manifests a phase difference between the applied E-field and the response polarization. Hence the dipole orientation as a bulk fails to fully contribute to the total polarization [29]. This phenomenon is known as dielectric dispersion of the dipolar/orientation polarization [30]. In the GPR frequency range (10 to 3000 MHz), most materials exhibit a permittivity relaxation mechanism marked by a decrease in the real component of the complex permittivity, as formulated by the Debye model shown in Figure 1. At the same frequency, the imaginary component rises until reaching the peak frequency ($f_{rel}$) and drops afterwards. $f_{rel}$ is governed by the relaxation time $\tau_{rel}$, where $f_{rel} = 1/(2\pi\tau_{rel})$ [29, 31]. At $f_{rel}$, the absorption mechanism of the material is most prominent.


Figure 1: Graphical presentation of the Debye model in liquid water (real and imaginary parts of the relative dielectric permittivity versus frequency, from 10^8 to 10^12 Hz; the relaxation frequency marks the peak of the imaginary part).

The effect of the above mechanism was mathematically derived by the Debye model [21] and by Cole and Cole's (1941) model [32]. The former is used to model a pure material with a single and well-defined polarization mechanism (Figure 1), while the latter models composite materials manifesting different regions of dielectric dispersion and a spectrum of relaxation times. Ranges of $f_{rel}$ for different materials have been extensively studied, and that of free water is the highest compared to other composite materials containing water across the microwave region [33]. Its value is reported to be 17.1 GHz at 25°C [34]. Within the permittivity spectrum as a function of frequency, the permittivity loss of a composite material containing free water starts to be prominent above 500 MHz [35].
The range of dielectric relaxation of composite materials with bound water (such as clayey soil) is much smaller than that with free water (such as sandy soil) [36-38]. This is because the bound water is limited in motion by electrostatic interaction with neighbouring solid particles, while the free water in the latter is not restricted [37]. It follows that the dielectric dispersion and relaxation frequency of a material are governed by the form of water (i.e., bound or free) in the material [35]. This well-known dielectric phenomenon is manifested in any porous material with water saturation, where the loss mechanism and the associated relaxation frequency are shifted to a frequency much lower than that of free water (i.e., 17.1 GHz). It has been observed in soils, clays, and rocks [39-43], but is still not well understood in concrete. The effect depends on the degree of saturation and on the form and distribution of the mineral phases (percentage of clay and rock particles). In concrete, it depends on the mix design and on the rate at which the majority of the water turns from free to bound form.
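To make the Debye picture concrete, the following short sketch evaluates the model of Figure 1 for liquid water; the static and high-frequency permittivities used here are typical literature values and are assumptions of ours, while the 17.1 GHz relaxation frequency is the value quoted above:

import numpy as np

def debye_water(f, eps_s=81.0, eps_inf=5.6, f_rel=17.1e9):
    # Complex relative permittivity of free water versus frequency f (Hz);
    # eps_s and eps_inf are assumed static and high-frequency values.
    tau_rel = 1.0 / (2 * np.pi * f_rel)              # relaxation time
    eps = eps_inf + (eps_s - eps_inf) / (1 + 1j * 2 * np.pi * f * tau_rel)
    return eps.real, -eps.imag                       # dispersion and loss parts

f = np.logspace(8, 12, 200)                          # 10^8 to 10^12 Hz, as in Figure 1
eps_re, eps_im = debye_water(f)
# eps_im peaks at f_rel, where the absorption mechanism is most prominent.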
2.4. Fourier Transform (FT) and Short-Time Fourier Transform (STFT). A signal processed with the classical Fourier transform is transformed to a plot showing the frequency distribution over the entire signal window. The signal is compared to sinusoidal functions, which spread over the entire signal in the time domain and are not concentrated at any particular time. Therefore, the classical Fourier transform does not explicitly reveal how the frequency contents evolve with time when the signal is nonstationary [44]. To overcome this deficiency, JTFA is required to transform the one-dimensional signals into a two-dimensional time-frequency plot. STFT is one of the most widely used algorithms in JTFA, based on a detailed Fourier transform centered at each time point. In STFT, the signal is compared with window functions that are concentrated in both the time and frequency domains. The spectra at any particular time are then stacked to reflect the lateral variation of the signal behavior in both time and frequency in JTFA. The STFT algorithm and the window function can be mathematically represented as (1) [45]:
$$\mathrm{STFT}[x(t)] \equiv X(\tau, \omega) = \int x(t)\, \gamma(t - \tau)\, e^{-j\omega t}\, dt, \qquad (1)$$
where $\gamma(t)$ is the window function (e.g., a Hanning window), which has a user-defined short-time duration, and $x(t)$ is the signal in the time domain. The length of this window is of vital importance and subject to the wavelength of the GPR frequencies. If the window length is too short, spectral leakage of the low-frequency components appears; when it is too long, the target of interest would be blurred. $X(\tau, \omega)$ is the Fourier transform of $x(t)\gamma(t - \tau)$, a complex function representing the phase and magnitude of the signal over time and frequency. The right equality of (1) is an inner product which reflects the similarity between the signal $x(t)$ and the elementary function $\gamma(t - \tau)e^{j\omega t}$. If the time duration and frequency bandwidth of $\gamma(t)$ are $\Delta_t$ and $\Delta_\omega$, respectively, $\mathrm{STFT}[x(t)]$ in (1) indicates the signal's behavior in the vicinity of $[\tau - \Delta_t, \tau + \Delta_t]$ and $[\omega - \Delta_\omega, \omega + \Delta_\omega]$, where $\Delta_t$ is governed by the width of the window function.
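For readers wishing to reproduce the transformation, the sketch below uses scipy.signal.stft with a Hanning window; the sampling interval matches the 39.1 ps time step quoted in Section 4.1, while the A-scan itself and the window length are placeholders of ours:

import numpy as np
from scipy.signal import stft

fs = 1.0 / 39.1e-12                 # sampling rate from the 39.1 ps time step
x = np.random.randn(256)            # stand-in for a recorded A-scan
nperseg = 64                        # short-time (Hanning) window length, in samples

f, t, Zxx = stft(x, fs=fs, window='hann', nperseg=nperseg,
                 noverlap=nperseg - 1)               # slide one sample at a time
spectrogram = np.abs(Zxx)                            # 2D time-frequency plot
peak_freq = f[spectrogram.argmax(axis=0)]            # peak frequency at each time step

Tracking peak_freq along the radar time axis yields the quantity used later to follow the hydration process.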

3. Specimens and Instrumentation
Normal concrete with ordinary ASTM Type I OPC (CEM 32.5) was used in the experiment. The mix proportion includes 285 kg/m³ cement, 170 kg/m³ water (yielding a water to cement ratio of 0.6), 1867 kg/m³ well-graded fine and coarse aggregate, and 0.91% lignosulfonate superplasticizer for enhancement of the workability. The average 7-day and 28-day strengths, measured on three cubes each, were 31.3 MPa and 45.8 MPa, respectively. A concrete specimen 1.5 m long, 500 mm wide, and 500 mm high is shown in Figures 2 and 3. Twelve other 150 mm cubes were cast to measure the 7-day and 28-day strengths, and for regular measurements of weight in cover-cured and water-cured conditions (3 for each condition). The 1.5 m long specimen was enclosed and sealed with a thick plastic sheet to isolate the concrete from the environment throughout the measurement period and to minimize the evaporation effect; within it, a steel bar (25 mm diameter) was positioned at 100 mm below the concrete surface. There are some other bars inside the specimen installed for other purposes, but the remaining bars are at least 160 mm apart from the target bar and do not impose any effect on the measured GPR signal because of the small radiation


Figure 2: Plan view (a) and section (b) of the concrete specimen.

Figure 3: Overview of the specimen.

footprint illuminated by the high-frequency antennae. The concrete was cured in air over the test period. The GPR antenna array was aligned perpendicularly on top of the concrete. At the initial fresh state (just after mixing), the antennae were over-hung in position to avoid sinking into the surface concrete. The antenna arrays operated at nominal frequencies of 1.5 and 2.6 GHz and were connected to a Geophysical Survey Systems Inc. (GSSI) SIR-20 dual-channel GPR system. The selection of these GPR frequencies (the highest among the whole GPR frequency range) was justified by their close proximity to the relaxation frequency (Figure 1) of free water, which makes the relaxation effect more readily observable.

4. Data Processing
4.1. Dewow Filtering, Direct Current Drift Correction, and Time Zero Correction. All GPR A-scan signals (with time step 39.1 ps and time window 10 ns) were firstly processed with an adjustment of the direct current (DC) shift and a dewow filter in the time domain. These two procedures filtered the DC bias and low-frequency energy [9]. The second step is to assign a correct time-zero position at the first reflected wave. The time zero was defined by first differentiating the A-scan twice to obtain the second derivative of the original A-scan, as shown in Figure 4. Then, over the first-arrival direct wave (or direct wave, because high-frequency GPR was used), the position where the sign changes from positive to negative in the 2nd derivative (i.e., the inflection point of the A-scan) was determined and then fed back to the original A-scan. This point was defined as the real time zero because it demarcates the change from low frequency (being concave upwards) to high frequency (being concave downwards) when the direct wave enters from air into a dielectric medium, as illustrated in Figure 5. It is important to note that this low-frequency component was not filtered by the DC shift and dewow filters. Also, in the context of low-frequency antennae, the first wave is normally referred to as a summation of the wave traveling from the transmitter to the receiver and the wave traveling over the surface concrete.
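A sketch of the inflection-point search (our own illustration; in practice the search would be restricted to the neighbourhood of the first-arrival direct wave) is:

import numpy as np

def time_zero_index(ascan):
    # First positive-to-negative sign change of the 2nd derivative,
    # i.e., the inflection point taken as time zero.
    d2 = np.diff(np.asarray(ascan, dtype=float), n=2)
    idx = np.flatnonzero((d2[:-1] > 0) & (d2[1:] <= 0))
    return int(idx[0]) + 1 if idx.size else None     # +1 offsets the diff shift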
The above method is justified by an independent experiment using a concrete specimen (same mix as described


Figure 6: Subtracted 2.6 GHz waveforms (with and without bars) for bars at 25 mm and 50 mm.


Figure 4: (a) 1.5 GHz A-scan and its first and second derivative. (b)
2.6 GHz A-scan and its first and second derivative (close-up view).

Figure 5: Change of low to high frequency of a wave incident from air into a dielectric medium.

in this paper) with two steel bars at two cover depths (25 mm and 50 mm). A-scans collected on top of the two bars were directly subtracted from those collected without bars. The result is shown in Figure 6. The position of the zero-crossing at the bar reflection was defined as the position of the reflector [45]. It was determined, and then back-calculated linearly to the true time zero corresponding to the surface position. Comparison between the calculated time-zero position and the inflection-point method yields a difference of less than 20 ps. This accuracy justifies the credibility of the inflection-point method for correcting the time zero. Also note in Figure 6 that the minor reflections before the zero-crossing of the bars are due to the slight phase shift between the A-scans captured with and without bars.
4.2. Fourier Transform (FT) and Short-Time Fourier Transform (STFT). The Fourier transform was applied to every A-scan to observe the distributions of frequency components (arising from clutter/signal ringing, the direct wave, and steel bars) over the original A-scan function, after zero-padding to 4096 points; no windowing function or filtering was applied at this stage. But for signal transformation with STFT, a temporal finite impulse response (FIR) high-pass filter was applied to filter the prevalent clutter/ringing, which is located in the low-frequency regime. Also, every one-dimensional GPR A-scan was windowed with a short Hanning window, which has low aliasing. We adopted a Hanning window size of eight times the time taken from the first peak to the first valley (or 1/2 wavelength) of the direct wave. This is an objective measure independent of manual selection of the time window, which alleviates and reconciles the trade-offs between spectral leakage (due to too small a time window) and an unclear target of interest (due to too large a time window) in the STFT spectrogram, as well as the influence of other neighbouring frequency components. Within a reasonable range of time windows, a larger window size in our program yields a smaller peak frequency, though the difference is minimal. For example, with an input signal centered at 2 GHz, a window size spanning from 1 ns to 3 ns in STFT yields a center frequency ranging from 2070 to 1970 MHz, respectively. The windowed mathematical function was then transformed according to (1) and Figure 7. The frequency spectrum was then stacked to plot a 2D STFT spectrogram in the JTFA domain (Figure 8(a)). At every frequency step, a Gaussian gain function centered at the peak


Figure 7: Frequency spectrum at 2.1 ns (a), 2.6 GHz STFT spectrogram (b), and GPR A-scan (c), at day 7.

location of the bar reflection (Figure 8(c)) was multiplied onto the spectrogram (Figure 8(a)) to make the bar signals stand out, as they were originally much weaker than the high-intensity direct wave (Figure 8(b)). The Gaussian functions were determined specifically for each dataset, in which the maximum amplitudes of the functions were obtained by dividing the maximum amplitudes of the direct wave by those of the bar reflections in the JTFA domain.
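A sketch of this enhancement step, with hypothetical argument names of ours, is given below; spec is the STFT magnitude (frequency rows by time columns), and the same gain profile is applied at every frequency bin:

import numpy as np

def apply_gaussian_gain(spec, t, t_bar, width, direct_max, bar_max):
    # Gaussian gain centered at the bar-reflection time t_bar; its peak value
    # is the ratio of direct-wave to bar-reflection maximum amplitudes.
    peak = direct_max / bar_max
    gain = 1.0 + (peak - 1.0) * np.exp(-0.5 * ((t - t_bar) / width) ** 2)
    return spec * gain[np.newaxis, :]                # same gain at every frequency bin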

5. Findings
5.1. Time Domain A-Scans. Figure 9 shows the A-scans collected at different days after fresh mixing. Data collection took place once every day from the fresh state to day 7, and then once at days 14, 28, 56, and 90. Positions of the bar reflections were identified by comparing the reflections captured with the antenna on top of the bar and on top of other locations with plain concrete only. The bar reflections were ill defined in the time domain from the fresh state to day 4, because the high water content caused more attenuation suffered by the high-frequency GPR antennae. This improved after day 5, when the reflections started to be well defined, and therefore only data from days 5 to 7, 14, 28, 56, and 90 are presented in Figures 9 and 10. With age, the reflections from the bar traveled much faster (shorter time) and exhibited a larger intensity. This is because the majority of the free water was progressively consumed and bound to form the structure of calcium silicate hydrate,
5.2. Fourier Transform (FT) Over the Entire Time Window.
Time domain A-scans were transformed to frequency spectra
through FT, as illustrated in Figures 11 and 12. The frequency
amplitude was normalized by the maximum amplitude of
day 90s data, so that the tendency of the growth of the
frequency component becomes immediately prominent. For
both 2.6 GHz and 1.5 GHz data, the one and wide-band
frequency appeared at the fresh initial state dispersed into
three distinct frequency localities along with ages. The lowest
locality (below 1100 MHz at 2.6 GHz and below 800 MHz at
1.5 GHz FT) is arisen by the clutter and signal ringing. This
clutter and ringing declined in amplitude in 2.6 GHz data,
but increased in 1.5 GHz data along with ages. This reverse
trend may possibly be due to dierent antenna design which
is not of significant importance.
The middle and high-frequency localities were from
the direct wave or bar reflection, which was however not
possible to be distinguished according to FT plots alone.
With the aid of STFT spectrogram in Figures 13 and 14,
the nonstationary nature of the signals was revealed along
the time and frequency axis. In the STFT spectrogram, the
middle frequency locality refers to the range from 1.2 G to

Figure 8: Original STFT spectrogram (a) and the enhanced STFT spectrogram (b) after applying Gaussian gain in the JTFA domain at every frequency bin (c).

Figure 9: 2.6 GHz A-scans in time domain (reflections captured when the antenna was on top of the bar).

The localities beyond the upper threshold of the middle locality are termed the highest frequency locality. The FT middle frequency locality corresponds to the bar reflection in the 2.6 GHz FT (Figure 11): it increased in intensity and shifted to higher frequency with age, whereas the same locality in the 1.5 GHz FT corresponds to the direct wave (Figure 12). On the contrary, the highest locality in the 2.6 GHz FT is attributed to the direct wave.
Figure 10: 1.5 GHz A-scans in time domain (reflections captured when the antenna was on top of the bar).

Figure 11: Frequency spectra of the entire A-scan captured by the 2.6 GHz antenna (localities: clutter and signal ringing; direct wave; steel bar).

Figure 12: Frequency spectra of the entire A-scan captured by the 1.5 GHz antenna (localities: clutter and signal ringing; direct wave; steel bar).
Similar to the middle locality, the highest locality also increased in intensity and shifted to higher frequency with age, but the same locality in the 1.5 GHz FT came from the bar reflection.
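A sketch of this normalization step, under the assumption of equal-length A-scans and with hypothetical names (the paper does not publish its code), could read:

import numpy as np

def normalized_spectra(ascans, dt, ref_day='day90'):
    # FT over the entire time window (Section 5.2): FFT each full A-scan
    # and normalize every spectrum by the maximum of the day-90 spectrum,
    # as in Figures 11 and 12. Equal trace lengths are assumed.
    spectra = {day: np.abs(np.fft.rfft(trace)) for day, trace in ascans.items()}
    n = len(next(iter(ascans.values())))
    freqs = np.fft.rfftfreq(n, d=dt)
    ref_max = spectra[ref_day].max()
    return freqs, {day: s / ref_max for day, s in spectra.items()}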

5.3. Short-Time Fourier Transform over a Short Time Window. In Figures 13 and 14, the FT low-frequency localities (due to clutter and ringing) were filtered out by applying a high-pass FIR filter to the 2.6 GHz and 1.5 GHz data. The high-pass filter frequency for each age was set where the lowest locality (clutter/signal ringing) in the FT reached its maximum, according to Figures 11 and 12. The tendency of the frequency shift shown in Figures 13 and 14 was studied more closely by plotting the STFT frequency spectra centered at the peak locations of the direct wave and bar reflections, as reported in Figures 15 and 16. Depending on the age, the peaks of the direct wave are centered at around 0.2 to 0.25 ns (time domain), and the peaks of the bar reflections at approximately 2 to 2.3 ns (time domain); both ranges depend on the age of the concrete, as illustrated in Figures 9 and 10. The peak frequencies were then extracted to track the changes of the reflection peak frequency over the ages, and are plotted in Figure 17.
Figure 13: 2.6 GHz STFT spectrograms from the initial state to day 90. The x-axis is time (s), the y-axis is frequency (Hz), and the z-axis is the signal intensity, with the same scale for all STFT spectrograms. The broken vertical line marks the location of time zero. Gaussian gains were applied at the peak position of the bar reflections, which is approximately at 2.0 ns.
There are two important points to note. Firstly, the spectra measured in Figures 15 and 16 are indicative of, and proportional to, but not equal to, the actual spectra of the imaginary part of the permittivity. This is because the spectra were not determined directly with permittivity measurement tools, but with signal processing methods such as the STFT algorithm; different STFT settings (window types and lengths) can change the absolute values of the spectra. Secondly, it is clear from the FT plots that the increase of frequency intensity at the middle and highest localities is due to the increase of amplitude in the time domain, as discussed in Section 5.1. Most importantly, however, the direct wave and the bar reflections shifted to a higher-frequency regime, which can be justified by the dielectric relaxation theory described in this paper.
5.4. Increase of STFT Peak Frequency with Age. Concrete can be regarded as a low-pass filter for the input GPR signals, because the frequency of a reflector's signal returned from depth is always lower than that of the direct wave at the surface [46], as depicted in Figures 13 and 14. On the other hand, when the peak frequencies from the bar reflections are compared across ages, the initially small peak frequency is found to shift towards a higher-frequency regime. This increase over the ages is attributed to the absorption of different frequency components at different concrete ages, and to the associated change of relaxation frequency. In fresh and early-age concrete, the water was in free form and its imaginary permittivity tended to absorb the high-frequency components of the GPR wave (which are close to the water relaxation frequency [21, 33]), as shown in Figure 1. Therefore, the frequency spectra shown in Figures 15 and 16 started with values towards the low end. As the concrete hardened with time, free water became bound and was restricted from being polarized under the effect of the high-frequency E-field. As a result, the high-frequency components were less attenuated, shifting the spectra to the higher end and yielding a higher peak frequency and a wider bandwidth. The mechanism is illustrated in Figure 18. The rate of change of this process was proportional to the well-known cement hydration process, which turns water from free to bound form. In fact, three of the four plots (2.6 GHz direct wave and bar reflection; 1.5 GHz bar reflection) in Figure 17 show sharp increases before day 7 and a gradual increase afterwards. The only exception is the 1.5 GHz direct wave, because it covers a longer radar time window and a longer wavelength than its 2.6 GHz counterpart, as shown in Figures 9 and 10. The long wavelength made

Figure 14: 1.5 GHz STFT spectrograms from the initial state to day 90. The x-axis is time (s), the y-axis is frequency (Hz), and the z-axis is the signal intensity, with the same scale for all STFT spectrograms. The broken vertical line marks the location of time zero. Gaussian gains were applied at the peak position of the bar reflections, which is approximately at 2.5 ns.

Figure 15: (a) 2.6 GHz STFT frequency spectra centered at the peak of the direct wave; (b) 2.6 GHz STFT frequency spectra centered at the peak of the steel bar reflection. The "cement hydration" arrows in the panels indicate the spectral shift with curing age (normalized amplitude versus frequency (MHz), initial state to day 90).

Figure 16: (a) 1.5 GHz STFT frequency spectra centered at the peak of the direct wave; (b) 1.5 GHz STFT frequency spectra centered at the peak of the steel bar reflection. In the time (Figure 10) and JTFA (Figure 14) domains, the low-frequency component rises because the bar reflection approaches the trails of the low-frequency direct wave, which is stronger in later than in early days.

Figure 17: Change of peak frequency with the early-age curing of concrete (peak frequency (MHz) versus days, for the 2.6 GHz ground wave, 2.6 GHz bar, 1.5 GHz ground wave, and 1.5 GHz bar).

it insensitive to changes in the state of water, so it exhibited unremarkable frequency changes over the elapsed time. Also, in Figure 16(b), the low-frequency component rose with age because the bar reflections in the time domain (Figure 10) and in the JTFA domain (Figure 14) approached the low-frequency trails of the direct wave, which is stronger in later than in early ages.

These curve shapes (except that of the 1.5 GHz direct wave) in Figure 17 are very close to the well-known hydration curves for OPC concrete, which show a rapid increase in the first 7 days and a gradual increase afterwards. This is also evidenced experimentally by the rapid development of compressive strength (a direct indicator of hydration) in the first 7 days (31.3 MPa at day 7) and its gradual development afterwards (45.8 MPa at day 28). It is therefore concluded that the change of the STFT peak frequency in the high GPR frequency regime can be used to characterize the cement hydration process nondestructively. A special remark about evaporation must also be made. The water turned from free to bound form may be estimated by the following approximation, assuming that the change of weight is due solely to the state of the water: free-to-bound (hydrated) water = initial water content - evaporated free water - unhydrated free water, where the terms are as follows.

(1) Initial water content. The design water content for fresh mixing is 170 kg/m³. For a fixed 150 mm cube, this amounts to 573.8 g (= 170 kg/m³ × 0.15 m × 0.15 m × 0.15 m).

(2) Evaporated free water. From weight measurements of the 150 mm cubes throughout the 90 days, and based on the difference between cover-cured and water-cured concrete, 83.8 g (14.6% of the initial water content) was lost to evaporation by day 90 (Figure 19 and Table 1). This loss would be much larger if the concrete were totally exposed to air; covering the concrete with a plastic sheet therefore minimizes the effect of evaporation.

(3) Unhydrated free water. Standard 105 °C oven-drying of the 90-day specimen shows that 62.9 g (11% of the initial water content) of the water was still unhydrated, free, or absorbed in capillaries, but not yet bound.

Following this approximation, the free-to-bound (hydrated) water equals 573.8 g (100%) - 83.8 g (14.6%) - 62.9 g (11%) = 427.1 g (74.4%).
Figure 18: Spectrum change due to the change from free to bound water in early-aged concrete. Case 1: all water is in free form. Case 2: water is hydrated and bound; free water still remains, absorbed in capillaries. Case 3: more water is hydrated and bound, but a smaller amount of free water still exists in the capillaries.

Table 1: Changes of concrete weight due to different curing conditions.

Water-cured in curing tank
Day                   Weight (g)   Difference compared to initial (g)
Initial fresh state   7857.0       0.0
2                     7850.5       -6.5
3                     7853.9       -3.1
4                     7856.7       -0.3
5                     7858.3       +1.3
6                     7860.1       +3.1
7                     7861.0       +4.0
14                    7861.8       +4.8
28                    7862.5       +5.5
56                    7863.1       +6.1
90                    7863.8       +6.8

Cover-cured in room temperature and humidity
Day                   Weight (g)   Difference compared to initial (g)
Initial fresh state   7857.0       0.0
2                     N/A          N/A
3                     7909.3       +52.3
4                     7880.3       +23.3
5                     7867.0       +10.0
6                     7860.4       +3.4
7                     7854.0       -3.0
14                    7830.0       -27.0
28                    7810.5       -46.5
56                    7786.8       -70.2
90                    7780.0       -77.0

Figure 19: Changes of concrete weight due to different curing conditions (cover-cured concrete in room temperature and humidity versus water-cured at 100% humidity); concrete weight gain and loss (g) versus time (days).

This computation, shown above, assumes that the weight change depends solely on the change of the state of the water (bound or evaporated). This majority, together with the evaporated free water (the 14.6% minority), contributed to the absorption of less high-frequency content (manifesting as a shift towards the high-frequency regime) in the frequency spectra. The transformation of water from free to bound form is certainly minor in the long term, but it is definitely significant at early age (especially from the initial state to day 7). A one-to-one relationship has yet to be developed, but the trend is nevertheless obvious.
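The arithmetic of this balance can be checked directly; the snippet below is a hypothetical helper reproducing the numbers above, not code from the paper:

def water_balance(initial_g, evaporated_g, unhydrated_g):
    # Free-to-bound (hydrated) water by the Section 5.4 approximation:
    # hydrated = initial - evaporated - unhydrated, with percentages
    # expressed relative to the initial water content.
    hydrated = initial_g - evaporated_g - unhydrated_g
    pct = lambda x: 100.0 * x / initial_g
    return hydrated, pct(evaporated_g), pct(unhydrated_g), pct(hydrated)

# 170 kg/m^3 design water in a 150 mm cube = 573.8 g
print(water_balance(573.8, 83.8, 62.9))
# -> (427.1, 14.6, 11.0, 74.4) approximately, matching the text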

6. Conclusions

The STFT method reported in this paper was used to analyze the time-varying GPR signals. This was achieved by measuring the changes of the frequency spectra centered at each time step over the radar time window, rather than applying the classical Fourier transform, which depicts the frequency spectrum over the whole time window. With an appropriate signal processing method, GPR can be used to study the hydration properties of OPC concrete, because the free or bound form of water has a different but simultaneous effect on both the dielectric relaxation and cement hydration mechanisms. The method can be further extended to study the hydration mechanisms of different concretes nondestructively, and on a much larger volume of material compared with the small transmission line method. Most importantly, the method is not limited to OPC hydration, but may also be applied to various types of cement, water-to-cement ratios, and materials replacing part of the OPC (such as fly ash and silica fume) that exhibit pozzolanic reactions.

References

[1] J. H. Bungey, "Sub-surface radar testing of concrete: a review," Construction and Building Materials, vol. 18, no. 1, pp. 1-8, 2004.
[2] V. M. Malhotra and N. J. Carino, Eds., Handbook on Nondestructive Testing of Concrete, CRC Press, Boca Raton, Fla, USA, 2nd edition, 2006.
[3] H. C. Rhim, "Condition monitoring of deteriorating concrete dams using radar," Cement and Concrete Research, vol. 31, no. 3, pp. 363-373, 2001.
[4] C. Kohl and D. Streicher, "Results of reconstructed and fused NDT-data measured in the laboratory and on-site at bridges," Cement and Concrete Composites, vol. 28, no. 4, pp. 402-413, 2006.
[5] T. Kind, "Study of the behaviour of a multi-offset antenna array on railway ballast," in Proceedings of the 12th International Conference on Ground Penetrating Radar (GPR '08), Birmingham, UK, 2008.
[6] I. Al-Qadi, W. Xie, and R. Roberts, "Optimization of antenna configuration in multiple-frequency ground penetrating radar system for railroad substructure assessment," NDT and E International, vol. 43, no. 1, pp. 20-28, 2010.
[7] Ch. Maierhofer and J. Wöstmann, "Investigation of dielectric properties of brick materials as a function of moisture and salt content using a microwave impulse technique at very high frequencies," NDT and E International, vol. 31, no. 4, pp. 259-263, 1998.
[8] D. M. McCann and M. C. Forde, "Review of NDT methods in the assessment of concrete and masonry structures," NDT and E International, vol. 34, no. 2, pp. 71-84, 2001.
[9] A. P. Annan, "Electromagnetic principles of ground penetrating radar," in Ground Penetrating Radar: Theory and Applications, H. M. Jol, Ed., chapter 1, Elsevier, Amsterdam, The Netherlands, 2009.
[10] W. L. Lai, S. C. Kou, W. F. Tsang, and C. S. Poon, "Characterization of concrete properties from dielectric properties using ground penetrating radar," Cement and Concrete Research, vol. 39, no. 8, pp. 687-695, 2009.
[11] K. Gorur, M. K. Smit, and F. H. Wittmann, "Microwave study of hydrating cement paste at early age," Cement and Concrete Research, vol. 12, no. 4, pp. 447-454, 1982.
[12] M. A. Rzepecka, M. A. L. Hamid, and A. H. Soliman, "Monitoring of concrete curing process by microwave terminal measurements," IEEE Transactions on Industrial Electronics, vol. 19, no. 4, pp. 120-125, 1972.
[13] V. A. Beek, Dielectric properties of young concrete, Ph.D. thesis, Delft University, 2000.
[14] M. N. Soutsos, J. H. Bungey, S. G. Millard, M. R. Shaw, and A. Patterson, "Dielectric properties of concrete and their influence on radar testing," NDT and E International, vol. 34, no. 6, pp. 419-425, 2001.
[15] G. Klysz and J.-P. Balayssac, "Determination of volumetric water content of concrete using ground-penetrating radar," Cement and Concrete Research, vol. 37, no. 8, pp. 1164-1171, 2007.
[16] S. Laurens, J. P. Balayssac, J. Rhazi, G. Klysz, and G. Arliguie, "Non-destructive evaluation of concrete moisture by GPR: experimental study and direct modeling," Materials and Structures/Matériaux et Constructions, vol. 38, no. 283, pp. 827-832, 2005.
[17] W. L. Lai and W. F. Tsang, "Characterization of pore systems of air/water-cured concrete using ground penetrating radar (GPR) through continuous water injection," Construction and Building Materials, vol. 22, no. 3, pp. 250-256, 2008.
[18] J. L. Davis and A. P. Annan, "Ground-penetrating radar for high-resolution mapping of soil and rock stratigraphy," Geophysical Prospecting, vol. 37, no. 5, pp. 531-551, 1989.
[19] M. D. Knoll, A petrophysical basis for ground-penetrating radar and very early time electromagnetics, Ph.D. thesis, The University of British Columbia, 1996.
[20] G. C. Topp, J. L. Davis, and A. P. Annan, "Electromagnetic determination of soil water content: measurements in coaxial transmission lines," Water Resources Research, vol. 16, no. 3, pp. 574-582, 1980.
[21] P. Debye, Polar Molecules, Chemical Publication Company, 1929.
[22] P. Hoekstra and A. Delaney, "Dielectric properties of soils at UHF and microwave frequencies," Journal of Geophysical Research, vol. 79, no. 11, pp. 1699-1708, 1974.
[23] Concrete Society, "Guidance on radar testing of concrete structures," Tech. Rep. 48, Concrete Society, 1997.
[24] W. L. Lai, W. F. Tsang, H. Fang, and D. Xiao, "Experimental determination of bulk dielectric properties and porosity of porous asphalt and soils using GPR and a cyclic moisture variation technique," Geophysics, vol. 71, no. 4, pp. K93-K102, 2006.
[25] P. Gu and J. J. Beaudoin, "Dielectric behaviour of hardened cementitious materials," Advances in Cement Research, vol. 9, no. 33, pp. 1-8, 1997.
[26] D. J. Daniels, Ed., Ground Penetrating Radar, The Institution of Electrical Engineers, London, UK, 2nd edition, 2004.
[27] N. R. Peplinski, F. T. Ulaby, and M. C. Dobson, "Dielectric properties of soils in the 0.3-1.3-GHz range," IEEE Transactions on Geoscience and Remote Sensing, vol. 33, no. 3, pp. 803-807, 1995.
[28] A. M. Neville, Properties of Concrete, Longman, Harlow, UK, 1995.
[29] J. Ph. Poley, J. J. Nooteboom, and P. J. de Waal, "Use of V.H.F. dielectric measurements for borehole formation analysis," Log Analyst, vol. 19, no. 3, pp. 8-30, 1978.
[30] A. Von Hippel, Ed., Dielectric Materials and Applications, Technology Press of MIT, Cambridge, Mass, USA, 1995.
[31] J. Q. Shang and J. A. Umana, "Dielectric constant and relaxation time of asphalt pavement materials," Journal of Infrastructure Systems, vol. 5, no. 4, pp. 135-142, 1999.
[32] K. S. Cole and R. H. Cole, "Dispersion and absorption in dielectrics I. Alternating current characteristics," The Journal of Chemical Physics, vol. 9, no. 4, pp. 341-351, 1941.
[33] G. P. de Loor, "Dielectric properties of wet materials," IEEE Transactions on Geoscience and Remote Sensing, vol. 21, no. 3, pp. 364-369, 1982.
[34] J. B. Hasted, Aqueous Dielectrics, Chapman and Hall, London, UK, 1973.
[35] N. J. Cassidy, "Electrical and magnetic properties of rocks, soils and fluids," in Ground Penetrating Radar: Theory and Applications, H. M. Jol, Ed., chapter 2, Elsevier, Amsterdam, The Netherlands, 2009.
[36] J. R. Wang and T. J. Schmugge, "An empirical model for the complex dielectric permittivity of soils as a function of water content," IEEE Transactions on Geoscience and Remote Sensing, vol. 18, no. 4, pp. 288-295, 1980.
[37] J. A. Huisman, S. S. Hubbard, J. D. Redman, and A. P. Annan, "Measuring soil water content with ground penetrating radar: a review," Vadose Zone Journal, vol. 2, pp. 476-491, 2003.
[38] L. J. West, K. Handley, Y. Huang, and M. Pokar, "Radar frequency dielectric dispersion in sandstone: implications for determination of moisture and clay content," Water Resources Research, vol. 39, no. 2, pp. 1026-1037, 2003.
[39] P. Hoekstra and W. T. Doyle, "Dielectric relaxation of surface absorbed water," Journal of Colloid and Interface Science, vol. 79, no. 11, pp. 1699-1708, 1971.
[40] M. C. Dobson, F. T. Ulaby, M. T. Hallikainen, and M. A. El-Rayes, "Microwave dielectric behavior of wet soil, part II: dielectric mixing models," IEEE Transactions on Geoscience and Remote Sensing, vol. 23, no. 1, pp. 35-46, 1985.
[41] M. T. Hallikainen, F. T. Ulaby, M. C. Dobson, M. A. El-Rayes, and L. Wu, "Microwave dielectric behavior of wet soil, part I: empirical models and experimental observations," IEEE Transactions on Geoscience and Remote Sensing, vol. 23, no. 1, pp. 25-34, 1985.
[42] M. A. Fam and M. B. Dusseault, "High-frequency complex permittivity of shales (0.02-1.30 GHz)," Canadian Geotechnical Journal, vol. 35, no. 3, pp. 524-531, 1998.
[43] S. P. Friedman, "A saturation degree-dependent composite spheres model for describing the effective dielectric constant of unsaturated porous media," Water Resources Research, vol. 34, no. 11, pp. 2949-2961, 1998.
[44] S. Qian and D. Chen, Joint Time-Frequency Analysis: Methods and Applications, Prentice Hall, Upper Saddle River, NJ, USA, 1996.
[45] A. P. Annan, "Ground penetrating radar," in Near-Surface Geophysics, D. K. Butler, Ed., chapter 11, Society of Exploration Geophysics, 2005.
[46] A. Shaari, S. G. Millard, and J. H. Bungey, "Modelling the propagation of a radar signal through concrete as a low-pass filter," NDT and E International, vol. 37, no. 3, pp. 237-242, 2004.

Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 402597, 6 pages
doi:10.1155/2010/402597

Research Article
Strain and Cracking Surveillance in Engineered Cementitious
Composites by Piezoresistive Properties
Jia Huan Yu1 and Tsung Chan Hou2

1 School of Civil Engineering, ShenYang Jianzhu University, LiaoNing 110168, China
2 Department of Civil and Environmental Engineering, University of Michigan, Ann Arbor, Michigan 48105, USA

Correspondence should be addressed to Jia Huan Yu, yrudy@vip.sina.com


Received 1 January 2010; Revised 29 June 2010; Accepted 3 August 2010
Academic Editor: Joao Marcos A. Rebello
Copyright 2010 J. H. Yu and T. C. Hou. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
Engineered Cementitious Composites (ECCs) are novel cement-based ultraductile materials which are crack resistant and undergo strain hardening when loaded in tension. In particular, the material is piezoresistive, with changes in electrical resistance correlated with mechanical strain. The unique electrical properties of ECC render it a smart material capable of measuring strain and the evolution of structural damage. In this study, the conductivity of the material prior to loading was quantified. The piezoresistive property of ECC structural specimens is then exploited to directly measure cracking patterns and levels of tensile strain. Changes in ECC electrical resistance are measured using a four-probe direct-current (DC) resistance test as specimens are monotonically loaded in tension. The change in piezoresistivity correlates with the cracking and strain in the ECC matrix and results in a nonlinear change in the material conductivity.

1. Introduction

Engineered Cementitious Composite (ECC) is an ultraductile fiber-reinforced cement-based composite which has metal-like features when loaded in tension and exhibits tough, strain-hardening behavior in spite of a low fiber volume fraction. The uniaxial stress-strain curve shows a yield point followed by strain hardening up to several percent of strain, resulting in a material ductility at least two orders of magnitude higher than normal concrete or standard fiber-reinforced concrete [1]. ECC keeps crack widths below 100 μm even when deformed to several percent tensile strain (Figure 1). Fiber breakage is prevented and pullout from the matrix is enabled instead, leading to a tensile strain capacity in excess of 6% for PVA-ECC containing 2% by volume of polyvinyl alcohol (PVA) fiber, a unique implementation by Yu and Dai [2].

Cracking in cementitious composites can result from a variety of factors including externally applied loads, shrinkage, and poor construction methods. Identification of cracks can be used to evaluate the long-term sustainability of structural elements made of cementitious composites. For
example, small cracks affecting only the external aesthetics of the structure should be differentiated from those that reduce its strength, stiffness, and long-term durability. Priority should be given to cracks that are deemed critical to the structure's functionality (e.g., safety, stability).
After suspicious cracks are encountered, nondestructive
(e.g., ultrasonic inspection) and partially destructive (e.g.,
core holes) testing can be performed by trained inspectors
to determine crack features (e.g., location and severity)
below the structural surface. Perhaps the best approach
for automated structural health monitoring of concrete
structures entails the adoption of the sensors available in
the nondestructive testing (NDT) field. In particular, passive
and active stress wave approaches have been proposed for
NDT evaluation of concrete structures. Acoustic emission
(AE) sensing is foremost amongst the passive stress wave
methods. AE employs piezoelectric elements to capture the
stress waves generated by cracks [3]; while AE has played
a critical role in the laboratory, its success in the field
has been limited to only a handful of applications [4]. In
contrast, active stress wave methods have been proven more
accurate for crack detection in the field. This approach entails

Figure 1: Typical stress-strain-crack width relationship and saturated crack pattern of PVA-ECC (stress (MPa) and crack width (μm) versus strain (%)).

Figure 2: Mixing process of ECC: (a) adding water; (b) adding superplasticizer; (c) mixture without fiber; (d) adding fiber.

Figure 3: Electrode instrumentation for the 4-point probe method of resistivity measurement.

the use of a piezoelectric transducer to introduce a pulsed ultrasonic stress wave into a concrete element, using the same transducer or another one to measure the pulse after it has propagated through the element. A direct extension of the active stress wave approach is the electromechanical impedance spectra method. This approach measures the electromechanical impedance spectrum of a piezoelectric transducer to detect cracking in the vicinity of the surface-mounted transducer [5]. With digital photography rapidly maturing, many researchers have also adopted charge-coupled device (CCD) cameras to take photographic images of concrete structural elements; subsequent application of digital image processing techniques automates the identification of crack locations and widths [6].
Compared to other NDT methods, utilization of the electrical properties of cement-based materials for crack detection has gained less attention from the civil engineering community. In fact, the unique electrical properties of cementitious composites render them a smart material capable of measuring strain and the evolution of structural damage [7]. The measurement of the electrical properties of cementitious composites has proved capable of detecting serious as well as minor cracks. In particular, ECC is piezoresistive, with changes in electrical resistance correlated with mechanical strain. When ECC materials are mechanically strained, they experience multiple saturated cracking and a change in their electrical resistance.

In this paper, the piezoresistive property of cementitious materials is proposed as a novel approach for sensing strain and cracking in PVA-ECC by utilizing their electrical resistance. The exploration of ECC piezoresistivity sets a scientific foundation for the use of the material as a self-sensing material for structural health monitoring in the future.

Figure 4: Resistivity measurement of ECC specimens by 4-point probing (resistivity (kOhm-cm) versus time (s), at ages of 1, 7, 14, 21, 28, and 35 days).

2. Production of PVA-ECC (Selection of Constituents and Mixing Process)

In this research, the PVA-ECC mixture consists of cement, sand, fly ash, fiber, and superplasticizer; the proportions are given in Table 1. Proportioning of each component with the correct mechanical and geometric properties is necessary to attain the unique ductile behaviour.

High-modulus polyvinyl alcohol fiber (12 mm Kuralon-II REC-15 fibers supplied by Kuraray Company) was used as the reinforcing fiber. Ordinary Portland type I cement, Class F normal fly ash, and silica sand were used as the major ingredients of the matrix. Silica sand with a 110 μm average grain size was used as the fine aggregate. Melamine formaldehyde sulfate was applied as the superplasticizer (SP) to control the rheological properties of the fresh matrix. SP neutralizes the different surface charges of cement particles and thus disperses the aggregates formed by electrostatic attraction. However, it has been reported that SP fails to preserve the initial flowability with time due to the high ionic strength of the dispersing medium [8]. The appropriate weight and adding sequence of the constituents must be determined, because

very little difference results in a considerable change of the properties of the acquired PVA-ECC mixture. Coarse aggregates are not used, as they tend to increase fracture toughness, which adversely affects the unique ductile behaviour of the composite. In addition, the absence of coarse aggregates renders the material electrically homogeneous.

The sand and cement are first dry-mixed for approximately 30-60 seconds until the mixture becomes homogeneous (Figure 2(a)). Then water, fly ash, and SP are added in order (Figure 2(b)). SP is used only when the mixer cannot mix further (Figure 2(c)). At the end, the fibers are added, but the mixture may be mixed for only about 30 s more, otherwise it becomes very clumpy (Figure 2(d)). The wet ECC mixture is placed in molds that cast ECC plate specimens. After 7 days, the specimens are removed from their molds and continue curing until mechanical testing occurs after the 28th day.

Table 1: Material mixing proportions of PVA-ECC.

Cement   Silica Sand   Fly Ash   Water   Superplasticizer   Fiber Volume Fraction (%)
1.0      0.8           1.2       0.66    0.013%             2.0

Figure 5: ECC plate element for piezoresistivity quantification: (a) plate dimensions (30 cm long, Section A-A of 7.5 × 1.25 cm², with copper electrodes and aluminum grip plates); (b) plate loaded in MTS load frame.

3. Electrical Resistivity Measurement of ECC Specimens

In this section, ECC test specimens roughly 7.5 × 1.25 × 1.25 cm in size are cast for electrical resistivity measurement. The resistivity of the ECC test specimens is investigated using the four-point probe method with direct current (DC). As the name suggests, the four-point probe method employs four independent electrodes along the length of a specimen.

Before the piezoresistivity of ECC can be characterized, the conductivity of the material prior to loading should be quantified. Time dependency is a direct result of the measurement technique and of the dielectric properties of the material itself under an applied steady (static) electric field. The change in electrical conductivity is often viewed as an intrinsic feature of the material and has been used to understand the material's chemical, rheological, and mechanical properties.

After 1 day of curing, electrodes made of copper tape are applied to the specimen surface using silver paste; the copper tape is applied around all four sides of the specimen, roughly 4 cm apart, as shown in Figure 3. The two outermost electrodes are used to drive a direct current I (DC) into the medium, while the two inner electrodes are responsible for measuring the electrical potential and the corresponding voltage drop V developed over the length L. Electrodes must be in intimate contact with the

cement-based specimen to induce an ionic current within the specimen. Metallic electrodes can be surface mounted using conductive gels and pastes.
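For reference, the resistivity implied by this configuration follows the standard relation rho = (V/I)(A/L); the helper below is a sketch under that assumption, with hypothetical names and illustrative numbers (not values from the paper):

def four_point_resistivity(v_volts, i_amps, area_cm2, length_cm):
    # 4-point DC probe: rho = (V / I) * (A / L), where V is measured
    # across the two inner electrodes separated by L, I is driven
    # through the two outer electrodes, and A is the specimen
    # cross-section (here 1.25 cm x 1.25 cm).
    return (v_volts / i_amps) * (area_cm2 / length_cm)  # Ohm-cm

# e.g., 0.5 V over L = 4 cm at I = 5 uA through a 1.25 x 1.25 cm^2 section
rho = four_point_resistivity(0.5, 5e-6, 1.25 * 1.25, 4.0)
print(f"{rho / 1e3:.0f} kOhm-cm")  # -> 39 kOhm-cm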
Current is applied to the specimen using a DC current source (Keithley 6221), while voltage measurements are made using a digital multimeter (Agilent 34401A). The resistivity of ECC specimens at multiple degrees of hydration, namely at 1, 7, 14, 21, 28, and 35 days after casting, was monitored by 4-point probe resistivity measurement. The magnitude of the direct current (DC) used during 4-point probing varied from 500 nA to 5 μA. Figure 4 shows the resistivity measurement of ECC specimens over the first 600 seconds of data collection. For the specimens tested on the first day, the initial resistivity is about 158 kOhm-cm and grows to around 200 kOhm-cm after 600 seconds of DC charging. The initial resistivity of the specimens at 14 days is about 524 kOhm-cm and exponentially increases to about 720 kOhm-cm after 600 seconds of DC measurement. For specimens tested 35 days after casting, the initial resistivity is 652 kOhm-cm and increases to about 880 kOhm-cm after 600 seconds of polarization. It should be noted that the initial resistivities reported in this study were obtained under 100% relative humidity (RH) curing. For cementitious materials that are cured naturally in air rather than in a 100% RH environment, the initial resistivity and polarization may vary due to variations in moisture content over the test period.

Figure 6: Piecewise piezoresistive behavior of an ECC specimen: (a) stress-strain curve (stress (MPa) versus strain (%)); (b) resistivity-strain curve (resistivity (kOhm-cm) versus strain (%)).

Figure 7: (a) Photo of the specimen after crack localization (at point G); (b)-(g) cracking patterns at loading points B through G, respectively.

The higher initial resistivity encountered as the specimens cure is easily explained: as more and more ions are trapped by the hardening hydration byproducts, it becomes harder to mobilize the ions, which is consistent with a higher resistivity. The electric properties of the cementitious material are characterized chiefly by their initial resistivity at an early stage.

4. Strain Sensing of ECC Plates in Tension

ECC is piezoresistive, with its resistivity changing in relation to strain. To investigate the piezoresistive properties of the ECC material, ECC plates were constructed for axial loading. The dimensions of each plate are 30 × 7.5 × 1.25 cm, as shown in Figure 5(a). Prior to axial loading, copper tape is wrapped around the specimen at the four locations shown in Figure 5(b). These four copper tape pieces serve as the current and voltage electrodes for the 4-point probe resistivity measurement. When ready for testing, the specimens are clamped in an MTS load frame for application of uniaxial loading. ECC specimens are loaded at very low loading rates, ranging from 0.013 to 0.064 mm/second. The stroke of the load frame is recorded so that strain measurements can be made.

Table 2: Gage factors of ECC based on 4-point DC probe measurement.

Specimen   A-B    B-C    C-D     D-E     E-F    F-G
ECC        6.55   9.53   13.39   11.64   8.32   12.58

Figure 6 shows the piecewise piezoresistive behavior of an ECC specimen. Distinct regions where the resistivity-strain plot is linear are denoted by dots A through G; each linear segment corresponds to a given crack state. The associated gage factors (the percent change in resistivity divided by strain) for each segment of the piecewise linear resistivity-strain curve are summarized in Table 2. As can be observed, the gage factors of ECC are lower and consistent, at about 6.5, during the elastic regime (A-B). This elastic gage factor is about half the value of those encountered in the strain-hardening range. It should be noted that these gage factors are well above those associated with traditional metal foil strain gages, which typically have gage factors of 2 to 3, as reported by Perry and Lissner [9].
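A sketch of the gage-factor computation for one linear segment, assuming a least-squares slope (the paper does not state its fitting procedure):

import numpy as np

def gage_factor(resistivity, strain_pct):
    # Gage factor of one linear segment: percent change in resistivity
    # divided by strain (strain in %, matching Table 2's convention).
    d_rho_pct = 100.0 * (resistivity - resistivity[0]) / resistivity[0]
    slope, _ = np.polyfit(strain_pct, d_rho_pct, 1)
    return slope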
Figures 7(c) through 7(g) show the cracking patterns of the ECC specimen at loading points B through G, respectively. By observing Figure 7(c), it is evident that prior to the first cracking, changes of resistivity are mainly due to the elastic deformation of the ECC specimen. During the strain-hardening stage, resistivity changes are caused by the development of new microcracks as well as the opening of existing cracks along the ECC specimen. Once damage localization occurs (at point F), resistivity changes are induced by the growth (i.e., widening) of the localized crack. The dependency of the gage factor on the damage state could potentially be used to approximately estimate component health based on electrical resistivity measurements, if the strain is known.

5. Conclusion

This study exploits the piezoresistive properties of Engineered Cementitious Composites (ECCs) so that they can serve as their own sensors, quantifying the resistivity-strain relationship. ECC plate specimens were monotonically loaded in axial tension to induce strain-hardening behavior in the material. As a result of linear changes in electrical resistance due to tensile strain, ECC specimens could potentially self-measure their strain in the field. The resistivity of ECC specimens at different times after casting was monitored by 4-point probe resistivity measurement; the initial resistivity changes with the degree of hydration and increases with DC polarization. An interesting feature of the material lies in the detectable change in resistance-strain sensitivity when strain hardening initiates. The change in piezoresistivity correlates with the cracking in the ECC matrix and results in a nonlinear change in the material conductivity. Additional work is underway exploring the theoretical foundation of ECC piezoresistive behavior.

Acknowledgment

Financial support from the Laboratory of Novel Building Materials Manufacturing and Inspection at Shenyang Jianzhu University is gratefully acknowledged. The authors would like to express their gratitude to Professors V. C. Li and J. P. Lynch, University of Michigan, for their helpful discussions on the properties of ECC.

References

[1] J. H. Yu and V. C. Li, "Research on production, performance and fibre dispersion of PVA engineering cementitious composites," Materials Science and Technology, vol. 25, no. 5, pp. 651-656, 2009.
[2] J. H. Yu and L. Dai, "Strain rate and interfacial property effects of random fibre cementitious composites," Journal of Strain Analysis for Engineering Design, vol. 44, no. 6, pp. 417-425, 2009.
[3] S. P. Shah and S. Choi, "Nondestructive techniques for studying fracture processes in concrete," International Journal of Fracture, vol. 98, no. 3-4, pp. 351-359, 1999.
[4] S. Mindess, "Acoustic emission methods," in Handbook on Nondestructive Testing of Concrete, V. M. Malhotra and N. J. Carino, Eds., CRC Press, Boca Raton, Fla, USA, 2004.
[5] G. Park, H. H. Cudney, and D. J. Inman, "Impedance-based health monitoring of civil structural components," Journal of Infrastructure Systems, vol. 6, no. 4, pp. 153-160, 2000.
[6] D. Lecompte, J. Vantomme, and H. Sol, "Crack detection in a concrete beam using two different camera techniques," Structural Health Monitoring, vol. 5, no. 1, pp. 59-68, 2006.
[7] D. D. L. Chung, "Damage in cement-based materials, studied by electrical resistance measurement," Materials Science and Engineering R, vol. 42, no. 1, pp. 1-40, 2003.
[8] H. J. Kong, S. G. Bike, and V. C. Li, "Effects of a strong polyelectrolyte on the rheological properties of concentrated cementitious suspensions," Cement and Concrete Research, vol. 36, no. 5, pp. 851-857, 2006.
[9] C. C. Perry and H. R. Lissner, The Strain Gage Primer, McGraw-Hill, New York, NY, USA, 1962.

Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 485695, 11 pages
doi:10.1155/2010/485695

Research Article
Heuristic Enhancement of Magneto-Optical Images for NDE
Matteo Cacciola, Giuseppe Megali, Diego Pellicanò, Salvatore Calcagno, Mario Versaci, and Francesco Carlo Morabito
DIMET Department, Faculty of Engineering, University Mediterranea of Reggio Calabria, Via Graziella Feo di Vito,
89100 Reggio Calabria, Italy
Correspondence should be addressed to Matteo Cacciola, matteo.cacciola@unirc.it
Received 31 December 2009; Revised 15 May 2010; Accepted 26 August 2010
Academic Editor: Joao Marcos A. Rebello
Copyright 2010 Matteo Cacciola et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
The quality of measurements in nondestructive testing and evaluation plays a key role in assessing the reliability of different inspection techniques. Each technique, like the magneto-optic imaging treated here, is affected by special types of noise related to the specific device used for acquisition. Therefore, the design of ever more accurate image processing is often required by relevant applications, for instance, in implementing integrated solutions for flaw detection and characterization. The aim of this paper is to propose a preprocessing procedure based on independent component analysis (ICA) to ease the detection of rivets and/or flaws in the specimens under test. A comparison of the proposed approach with some other advanced image processing methodologies used for denoising magneto-optic images (MOIs) is carried out, in order to show the advantages and weaknesses of ICA in improving the accuracy and performance of rivet/flaw detection.

1. Introduction

For both improving manufacturing quality and guaranteeing safety, devices, components, and structures are usually inspected to detect the presence of defects or faults which may threaten their integrity. Nondestructive testing and evaluation (NDT&E) are industrial methodologies which couple the ability of detecting defects with that of characterizing them, usually by means of noninvasive procedures [1]. In experimental NDT&E, the measurements can give details on the structural properties of the sample [2], like cracks, flaws, and phase transformations that may develop into discontinuous deformations. Typically, the main challenge is to detect and characterize flaws starting from experimental measurements, through the solution of suitably formulated inverse problems. Because of the limitations of the measurement recording and the presence of noise, the problem to be solved through inversion is commonly ill-posed, as well as ill-conditioned in its numerical counterpart. A critical decision for a pattern recognition system is the selection of appropriate features to be extracted from the image for classification. These features should be unique, informative, and invariant to translation and rotation of the image. In aircraft structures the rivets are of different sizes, so it is important that the features are also size invariant. Therefore, emphasizing the informative patterns in the data set by filtering out the noise is strictly required in order to obtain suitable input data for the inverse problem solution. Within this framework, the main concern is the implementation of automated solutions able to support scientists in characterizing defects by using nondestructive analysis. This difficulty can be overcome by using recent advances in computing power and signal processing techniques. A number of different approaches to interpreting the eddy current sensor response using advanced signal processing techniques have been proposed [3, 4]. In this paper, we propose some advanced image processing approaches able to solve a particular problem in enhancing MOI. It is also critical to keep the information unchanged and unperturbed by the applied noise filtering techniques; otherwise, the inverse problem solution will produce ambiguous results [5].

MOI [6] is a relatively new technology, based in part on the Faraday rotation effect and used in nondestructive inspection. MOI is being used extensively in the inspection of
aircraft fuselage skin for detecting subsurface corrosion and cracks at rivet sites. In aircraft inspection, data (images) from a large number of rivets need to be analysed; currently this is performed manually by scanning the MO imager over the inspection surface. The main advantage of MOI (see [7, 8] about the MOI designed by Physical Research Instruments (PRI)) is that it can inspect large areas at high speed compared with classical eddy current methods. Other advantages are its fast and easy inspection capabilities in comparison with other conventional nondestructive inspection instruments. An automated real-time method for evaluating the MO images for structural defects can reduce human error and increase the accuracy and speed of the inspections. Such a method requires the elimination of background (image) noise, due to the presence of magnetic domain walls in the MO sensor, and enhancement of the image for subsequent interpretation. This paper presents an image processing and automated classification algorithm that classifies the MO images for both surface and subsurface cracks in aircraft skins under various excitation frequencies. Within this framework, progress has been made in developing the MOI for detection of subsurface cracks and corrosion through improvements in instrument design. The key to the instrument's capability in detecting the relatively weak magnetic fields associated with subsurface defects is the sensitivity of the MO sensor. The goal of this work is to introduce an enhanced MOI-based approach for imaging magnetic fields in NDT&E. Within this framework, tests have been carried out and the presented methods have been evaluated with experimental results. The paper is organized as follows. Section 2 briefly describes the basic principles of MOI, with a particular overview of domain noise in MO images. Section 3 introduces and explores the denoising of MO images using two image processing techniques: the former is based on ICA [9-11], the latter is a self-implemented adaptive homomorphic filter (AHF). Both were compared with motion-based filtering (MBF) [12, 13], a well-known technique able to reduce static noise in images. Finally, in Section 4, the performances of the proposed approaches are discussed.

2. Principles of the MOI Technique

The MOI technology relies on exciting the aircraft skin by eddy current induction and measuring the normal magnetic field component. The time-varying magnetic field of the AC current passing through the planar induction foil induces a sheet of eddy current in uniform flow and hence generates a magnetic field perpendicular to the surface of the aircraft skin. The normal magnetic field component is measured using the Faraday rotation effect, whereby linearly polarized light transmitted through an MO garnet sensor is rotated by an amount proportional to the magnetic field H. Devices use an induction coil for inducing eddy currents in a conducting sample and an MO sensor for detecting the magnetic flux density associated with the induced eddy currents. In a defect-free specimen, the induced current is uniform and the associated magnetic field lies in the plane of the sensor. Anomalies in the specimen, such as fasteners or surface and subsurface cracks, result in a magnetic flux density normal to the sensor plane. If linearly polarized light is incident on the sensor, the polarization plane of the light will be rotated by an angle θ that is proportional to the sensor thickness, given approximately by [7]

    θ ≈ θf (K · M) / |K · M|,    (1)

where K is the wave vector of the incident light, M is the local magnetization of the epitaxial layer (sensor) at the point where the incident light passes through the layer, and θf is the incident angle of light. The sign of the scalar product K · M determines the direction of the rotation. For more information about MOI, the interested reader may refer to [12, 13] and the references therein. An MO sensor generates a real-time analog image of the magnetic field associated with the induced eddy current interacting with structural anomalies. The resulting images are formed by distortion within the magnetic domains of the MO sensor, as a reaction to the external field. Images are relatively insensitive to the field intensity and can be reproduced by a charge-coupled device (CCD) camera on a monitor. MO imaging can be used to detect defects in both ferromagnetic and nonferromagnetic materials. The sensitivity of the resulting images depends on the level of the induced eddy current, the operating frequency, and the sensor parameters. The technique, jointly with the key features of magnetic materials, generates domain noise, such as mazes, bubble lattices, and strips [14]. A schematic of the MOI instrument is shown in Figure 1. A copper foil is used to produce uniform sheet currents at low frequency (1.5-100 kHz), which induce eddy currents in the conducting test specimen. Under normal conditions, the associated magnetic flux is tangential to the specimen surface. Cracks in the specimen generate a normal component of the magnetic flux density. When linearly polarized light is incident normally on the sensor, the plane of polarization of the light is rotated by the angle θ. This angle is determined by the normal component of the magnetic flux density applied to the sensor. When the reflected light is viewed through the analyser, a local occurrence of normal magnetic flux is seen as a dark or light area in the magneto-optic image, depending on the direction of magnetization.

Usually, MOI inspection is conducted by a human operator by scanning the surface of the sample with the device. The MOI data is interpreted by the operator in real time, or recorded as a video sequence for later interpretation. This leads to variability in flaw detection according to the experience of the inspector. Here we propose some methods exploiting sequences of MOI for enhancing the quality of MO images and the corresponding inspection capabilities. Figure 2 shows three consecutive frames from an MOI scanning video. The dark disks represent rivets of 0.3 cm in diameter, and the dark bands show a substrate seam in the airplane structure. The dark areas on the left are due to the leakage of uncompensated magnetic field at the edge of the inspected area. Figure 2 shows that the background components are stationary from frame to frame while objects in the sample are moving. The scanning direction goes to the left, thus

Figure 1: Schematic of the magneto-optic imaging system (light source, polarizer, analyzer, bias coil, sensor, induction foil, and lap joint).

the rivets move to the right during the scanning. Generally speaking, MO image data can be divided into two components: a dynamic foreground and a stationary background. Usually, the foreground is considered the signal and the background the noise to be suppressed.
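A minimal sketch of this foreground/background split, assuming a per-pixel temporal-median background model (a simpler stand-in for the MBF/ICA methods discussed below; names are hypothetical):

import numpy as np

def split_foreground(frames):
    # Split a stack of MO frames (T, H, W) into a stationary background
    # and a moving foreground. The sensor's domain noise is stationary
    # with respect to the sensor, so a per-pixel temporal median
    # estimates it; rivets and cracks, which move from frame to frame,
    # survive the subtraction.
    frames = np.asarray(frames, dtype=float)
    background = np.median(frames, axis=0)          # (H, W)
    foreground = frames - background[np.newaxis]    # (T, H, W)
    return background, foreground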
2.1. Domain Noise in MOI. Magnetic materials, as previously mentioned, exhibit many types of domain structures, such as mazes, bubble lattices, and strips, which are affected by various factors: the magnetic anisotropy and magnetization of the material, its shape, the presence of defects, the external magnetic field, temperature, surface treatment, and the previous history of the sample [14]. In MOI, the domain structures introduce complex artifacts that are hardly removable by naive techniques, like simple image thresholding and filtering. Furthermore, passivating techniques (i.e., anodizing and alodizing) respond in different ways to the external excitation. Another source of noise and degradation in MO data is due to the magnetic domain boundaries in the garnet sensor. The domain boundaries appear as small, filament-like structures in MO images (Figure 3) and can severely mask the presence of cracks and corrosion.

The presence of this textured background in MO images due to domain structures makes the detection of cracks and corrosion difficult. Due to the magnetic domain wall structure of the sensor, the MO images can be corrupted by a characteristic noise, which can decrease the MOI inspection capabilities. The domain walls generally produce a serpentine pattern noise, which can be reduced by improving the sensor or by image processing methods [13]. The noise hinders the detection of small cracks and of corrosion located in the second and third layers, limiting the capability of inspection. This leads to the need for an image processing algorithm to reduce this background noise. In MO images, the noise associated with the domain structures in the sensor is overall stationary, since it is related to the stationary background of the images. In fact, as the sample is scanned using the MOI device, the domain noise resident in the sensor moves with the sensor and hence is stationary with respect to the sensor. In contrast, rivets and cracks in the sample appear to move from frame to frame. Therefore, the algorithms presented in this paper aim to separate the moving parts from the stationary parts in the sequence of images without generating distortions. Bubbles, mazes, and other static background noise, due to the tape itself, can be thought of as non-Gaussian distributions convolved with the useful signal. In this way, the challenge of denoising MOI can be approached as a problem of blind source separation (BSS) exploiting higher-order statistical features, distinguishing the non-Gaussian distributions that comprise the useful signals. The raw images, resulting from the MO image acquisition system, have been processed by a BSS technique such as ICA and by means of an adaptive filtering technique such as AHF. The performance of these techniques has been compared with MBF, a well-known technique reputed to yield good detection performance.

3. Removing Background Noise in MOI

Generally speaking, any measurement device is disturbed by parasitic phenomena. These include electronic noise and also external events that affect the measured phenomenon, depending on what is measured and on the sensitivity of the device. It is often possible to reduce the noise by controlling the environment; otherwise, when the characteristics of the noise are known or differ from those of the signal, it is possible to filter it or to process the signal. In particular, in our case we exploited a Matlab code, following Young et al. [15], in order to calculate the image SNR before and after the filtering step.
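For illustration, a minimal Python sketch of such an SNR measurement is given below; it does not reproduce the original Matlab code, and the convention of estimating the noise over a user-supplied background region is our assumption, since [15] defines several SNR variants.

```python
import numpy as np

def image_snr_db(img, background_mask):
    """Sketch of an image SNR estimate in the spirit of [15] (assumed
    convention): signal taken as the peak-to-peak range of the image,
    noise as the standard deviation over a background region."""
    noise_std = np.std(img[background_mask])
    signal = float(img.max()) - float(img.min())  # peak-to-peak amplitude
    return 20.0 * np.log10(signal / max(noise_std, 1e-12))
```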

Figure 2: Three consecutive MOI images (frames 125-127), with the scanning direction of the MOI instrument indicated; the dark disks represent rivets, which are moving to the right while the sensor is moving to the left. Dark bands show a substrate seam in the airplane structure.

Figure 3: Frames 33-37 grabbed from one of the available experimental sets.

Figure 4: Sample MO image with an annotated crack, a rivet with defect, and a normal rivet. On the left, a normal rivet with a circular shape; on the right, a rivet with a crack at its site.

Within this framework, we process a set of various MO images that were collected with a commercial MOI 301 apparatus at the NDE Lab, Michigan State University, USA [12].

3.1. Presentation of the Collected Data. The data exploited for our experimentation were collected on a set of fatigue crack lap splice samples consisting of 36 panels with 720 rivet sites containing first-layer fatigue cracks of various sizes. The induction frequency used was 50 kHz. The MOI data were directly recorded on a tape during the acquisition, subsequently acquired into a personal computer system, and saved as an AVI file. The movie has the characteristics summarized in Table 1.
We selected six different sets, each composed of five different but consecutive frames, in order to evaluate and compare the different exploited techniques: ICA, a self-implemented AHF, and MBF. Our aim is to propose signal-processing methods for eliminating the noise due to domain boundaries and enhancing the signals due to the objects of interest. Figure 4 shows a particular MO image with two rivets. The left rivet is unflawed, whilst the rivet on the right shows a radial crack. We can see that the normal rivet image is roughly circular in shape while the abnormal rivet is noncircular.

Figure 5 depicts the exported sets. The reason of composing each set by 5 dierent frames will be described within
the next subsection. As it is possible to denote, the sets cover
the whole time length of the movie, in order to give as much
generality as possible to the proposed approach. Moreover,
let us remark how, on each set, the images composing the
set itself show the same number of objects, that is, rivets,
and the same objects, except for the 5th set: frames 127,
128, and 129 show, in fact, dierent number of objects
and/or just dierent objects. The 5th set has been voluntarily
added to the experimental set, in order to evaluate the ICAs
performance in similar applicative contexts.
3.2. ICA for Enhancing MOI. The problem of source separation has been deeply analysed in electrical engineering; many algorithms exist, depending also on the nature of the mixed signals. The problem faced by BSS is more difficult because it is not possible to design appropriate preprocessors to optimally separate mixed signals without any knowledge about them. Even in NDT&E it is possible to encounter problems involving mixed signals and BSS. For instance, let us consider the problem of noisy measurements in MOI: in this framework, noise is an additive effect with respect to the useful information. In many cases of practical relevance, often in the presence of nonlinear phenomena, or when a noise source is not strictly Gaussian (e.g., the lift-off effect in eddy current testing), it is very difficult to separate the informative signal from the uninformative one. In this case, BSS and ICA can be very helpful to practitioners in recovering the unknown independent sources, and very useful for enhancing the quality of MOI [16].

Figure 5: The sets of magneto-optic images exploited in this experimentation: (a) 1st set, frames 33-37; (b) 2nd set, frames 50-54; (c) 3rd set, frames 75-79; (d) 4th set, frames 115-119; (e) 5th set, frames 125-129; (f) 6th set, frames 166-170.

Table 1: Characteristics of the AVI movie file exploited in this experimentation.

  Time length:  17 s
  Rate:         367 kbps, 10.0 fps
  Resolution:   320 x 240 (4:3)
  Codec:        MS-MPEG standard (MPEG-4)
  Quality:      20%

Figure 6: Independent components showing the informative signals ((a)-(f): useful components of the 1st-6th sets), extracted from the in-study sets of magneto-optic images by the proposed approach.

The aim is
bringing out the rivets against the background, in order to ease the application of a further morphological operator able to isolate the rivets themselves. In fact, we considered five sources of signals to be separated within the sets of images: a source of mazes, a source of bubbles, a source of measurement noise, a source of environmental random noise, and finally a source of useful signal. In the general framework of ICA [9], a signal x(·) in the time or space domain is the result of mixing the records of j different sources. Denoting by s(·) = {s_1(·), s_2(·), ..., s_j(·)} the set of the j unknown source signals, x(·) can be written as

$$x = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1j} \\ a_{21} & a_{22} & \cdots & a_{2j} \\ \vdots & \vdots & \ddots & \vdots \\ a_{j1} & a_{j2} & \cdots & a_{jj} \end{pmatrix}^{\!T} s. \qquad (2)$$
Figure 7: Cross-section of the homomorphic filter function H(u, v) (parameters γ_L and D(u, v) indicated).

Under some general hypotheses, it is possible to recover the set of j sources by calculating a suitable mixing matrix A, that is, the matrix with elements a_kh. Once the matrix A is
calculated, it is possible to obtain its inverse A⁻¹ and retrieve the independent components (ICs) having non-Gaussian distributions [17]. The fixed-point algorithm [17] has been exploited in order to calculate and extract the independent components from each one of the in-study sets. Figure 6 shows the results of our experimentations. Our analysis has been based on SNR evaluation, since the SNR relates the power of the useful signal to the noise of the acquisition system. It can be considered as a measure of sensitivity performance, and it has a remarkable importance in many NDE applications. Within this framework, the SNR is a key parameter for the evaluation of the proposed image processing, and its importance cannot be overstated: the SNR directly affects the error probability and ultimately the detection of flaws in NDE. Accordingly, Table 2 reports the SNR increment with respect to the original SNR of the proposed sets. The averaged SNR is roughly the same for the different sets, with a small decrement for the last set. The SNR is usually taken to indicate an average signal-to-noise ratio, as (near) instantaneous signal-to-noise ratios may be considerably different; the concept can be understood as normalizing the noise level to 1 (0 dB) and measuring how far the signal stands out. Finally, let us note that, according to the method used for calculating the SNR [15], the SNR depends on the range of values of the filtered images: thus, what is important is not the absolute SNR values but their increment due to the preprocessing procedure.
The performances shown by the application of ICA are generally remarkable in terms of SNR increment, but the reliability of the proposed method must be evaluated by jointly considering the SNR increment and the assessment provided by a visual inspection of the useful independent components. In fact, the 5th set shows a result which is assessed as an irregularity by visual inspection, in spite of the SNR increment. This failure was expected, since the rivets depicted within the selected frames, as explained above, are different or vary in number. This introduces an irregularity in the evaluation of the mixing matrix, and therefore of its inverse demixing matrix, during the calculation of the independent components. The result is a sort of mirroring of the depicted rivets within the useful component. Actually, it is the effect of the superimposition of the different objects visualized within the different frames.

3.3. AHF for Enhancing MOI. The aim of image enhancement is to improve the interpretability or the perception of the information in images for human viewers, or to provide a better input for other automated image processing techniques. Unfortunately, there is no general rule or mathematical criterion for determining what good image enhancement is when it comes to human perception: if the image looks good, it is good. This section considers homomorphic filtering for the enhancement of images in NDT&E. Classical filtering theory makes use of linear filters for the improvement of the SNR; our implementation regards a nonlinear system based on a generalized principle of linearity. Black and white images can be represented by means of a two-variable system. Images are formed by the reflection of light from physical objects, so the formation of an image can be modelled as the product of an illumination ($f_i$) and a reflectance ($f_r$) function [18]:

$$f(x, y) = f_i(x, y)\, f_r(x, y). \qquad (3)$$

This equation cannot be used to operate separately and directly on the frequency components of illumination and reflectance, because the Fourier transform of a product of functions is not separable; that is, (3) cannot be expressed as

$$\mathcal{I}\{f(x, y)\} = \mathcal{I}\{f_i(x, y)\}\, \mathcal{I}\{f_r(x, y)\}, \qquad (4)$$

where $\mathcal{I}$ is the Fourier transform operator. Now, let us suppose that from the image $f(x, y)$ we define $z(x, y) = \ln f(x, y)$, so that

$$\mathcal{I}\{z(x, y)\} = Z(u, v) = \mathcal{I}\{\ln f(x, y)\} = \mathcal{I}\{\ln f_i(x, y)\} + \mathcal{I}\{\ln f_r(x, y)\}. \qquad (5)$$

For the sake of simplicity, let us denote $F_i(u, v) = \mathcal{I}\{\ln f_i(x, y)\}$ and $F_r(u, v) = \mathcal{I}\{\ln f_r(x, y)\}$ [18]. The function $Z(u, v)$ can be processed by means of a filter function $H(u, v)$, which can be expressed as

$$S(u, v) = H(u, v)\, Z(u, v) = H(u, v)\, F_i(u, v) + H(u, v)\, F_r(u, v), \qquad (6)$$

where $S(u, v)$ is the Fourier transform of the result. In the spatial domain, $s(x, y) = \mathcal{I}^{-1}\{S(u, v)\} = \mathcal{I}^{-1}\{H(u, v)\, F_i(u, v)\} + \mathcal{I}^{-1}\{H(u, v)\, F_r(u, v)\}$; by letting $\tilde{f}_i(x, y) = \mathcal{I}^{-1}\{H(u, v)\, F_i(u, v)\}$ and $\tilde{f}_r(x, y) = \mathcal{I}^{-1}\{H(u, v)\, F_r(u, v)\}$, we finally obtain

$$s(x, y) = \tilde{f}_i(x, y) + \tilde{f}_r(x, y), \quad g(x, y) = e^{s(x, y)} = e^{\tilde{f}_i(x, y)}\, e^{\tilde{f}_r(x, y)} = f_i'(x, y)\, f_r'(x, y), \qquad (7)$$

where $f_i'(x, y) = e^{\tilde{f}_i(x, y)}$ and $f_r'(x, y) = e^{\tilde{f}_r(x, y)}$ are the illumination and the reflectance components of the output image.

Figure 8: Block diagram of the AHF: the input image Img is log-transformed, FFT-transformed, filtered by H(u, v), inverse-FFT-transformed, and exponentiated to give the filtered image Img_f.

Table 2: Results after processing with ICA.

  Inspected set   Visual comparison   Averaged SNR of original images (dB)   SNR of useful IC (dB)   SNR increment (dB)
  1               Fair                14.33                                  23.64                   9.31
  2               Good                14.43                                  25.49                   11.06
  3               Optimal             14.67                                  24.29                   9.62
  4               Sufficient          14.31                                  21.47                   7.34
  5               Failure             14.27                                  22.12                   7.85
  6               Sufficient          13.73                                  18.88                   5.15

Table 3: Results of enhancing the sets of MO images.

  Inspected set   Averaged SNR of original images (dB)   SNR MBF (dB)   SNR AHF (dB)   SNR ICA (dB)
  1               14.33                                  18.08          18.27          23.64
  2               14.43                                  16.31          17.40          25.49
  3               14.67                                  15.62          17.29          24.29
  4               14.31                                  19.50          17.91          21.47
  5               14.27                                  15.54          17.73          22.12
  6               13.73                                  18.07          18.27          18.88

This method is based on a special class of systems known as homomorphic systems. The filter transfer function $H(u, v)$ is known as the homomorphic filter function:

$$H(u, v) = \gamma_L + \frac{\gamma_H - \gamma_L}{1 + \left[ D_0 / \sqrt{u^2 + v^2}\, \right]^{2n}}, \qquad (8)$$

where $\gamma_L$ and $\gamma_H$ are the lower and the higher frequency components, respectively, $D_0$ is the cut-off frequency, and $n$ defines the order of the filter. A good choice of the lower and higher frequency components provides dynamic range compression and enhancement [18]. $H(u, v)$ acts separately on the illumination and the reflectance components of the input image. The illumination component of an image is generally characterized by slow spatial variations, while the reflectance component varies abruptly, particularly at the junctions of dissimilar objects. These characteristics lead to associating the low frequencies of the Fourier transform of the logarithm of an image with illumination and the high frequencies with reflectance [19].

This process of enhancement can be expressed by the block diagram shown in Figure 8. Homomorphic filters use the discrete Fourier transform (DFT) as the core transform; presently, in digital images, more efficient tools are used for the transformations, such as the fast Fourier transform (FFT) [19].
The homomorphic filter gives good control over the illumination and the reflectance. In our algorithm, for a homomorphic filter of order 2, the cut-off frequency $D_0$ is adaptively calculated by inspecting the Fourier spectrum of the considered image and finding the maximum value of the spectrum. Then, we take the highest frequency showing

a 3 dB loss as the final cut-off frequency. The input of our processor is the image img to be filtered; the order of the homomorphic filter is 2; the output is the filtered image img_f.
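A minimal sketch of this adaptive homomorphic filter is given below. The gain values gamma_l and gamma_h are illustrative (the paper does not state them), and the 3 dB rule is implemented as the highest frequency whose spectral magnitude is within a factor of sqrt(2) of the spectrum maximum, which is our reading of the procedure.

```python
import numpy as np

def adaptive_homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, n=2):
    """Sketch of the AHF: log -> FFT -> H(u, v) -> inverse FFT -> exp.
    gamma_l and gamma_h are assumed, illustrative gains."""
    rows, cols = img.shape
    Z = np.fft.fftshift(np.fft.fft2(np.log1p(img.astype(np.float64))))

    # Radial frequency grid centred on the DC component.
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2) + 1e-9

    # Adaptive cut-off: highest frequency whose spectral magnitude stays
    # within 3 dB of the spectrum maximum (our interpretation).
    mag = np.abs(Z)
    mask = mag >= mag.max() / np.sqrt(2.0)
    D0 = max(D[mask].max(), 1.0)

    # Homomorphic transfer function, cf. (8).
    H = gamma_l + (gamma_h - gamma_l) / (1.0 + (D0 / D) ** (2 * n))
    s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))
    return np.expm1(s)   # undo the initial log1p
```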
A visual comparison of the proposed approaches is shown in Figure 9: for each of the collected image sets, the AHF has been applied to the central frame. The performance of the proposed AHF in terms of SNR is summarized in Table 3, which allows a quantitative comparison of the proposed approaches with MBF and ICA. Depending on the original image, the test value and the evaluation are not always correlated with the impression of quality given by a subjective observation.

4. Conclusions

In this paper, the application of ICA for enhancing the quality of magneto-optic images has been discussed and compared with AHF and MBF. The MOI inspection technique is subject to a special kind of measurement noise, as well as bubbles, mazes, and other background static noise due to the tape itself, which invariably influence the quality of the acquired images. These noise sources can be thought of as disturbing signals, with Gaussian and/or non-Gaussian probability density distributions, convolved with the useful signal. Accordingly, the practically relevant need of suitably denoising MO images can be approached as a problem of BSS. To deal with it, we decided to separate the underlying components of the signal by making use of a well-known
algorithmic implementation of ICA. Comparing the performances of the different algorithms, as reported in Table 3, we find that the performance of ICA is higher than, or at least comparable with, that of MBF.

Figure 9: Comparison of the performance of MBF, AHF, and ICA: for the central frame of each set (frames 37, 54, 79, 119, 129, and 170), the source image and the results of MBF, AHF, and ICA filtering are shown.

ICA retrieves highly
denoised images, in which the rivets are well defined and highlighted. We claim that this is a noteworthy result, considering that MBF is a filtering technique known to be particularly efficient in enhancing the quality of magneto-optic images. On average, ICA provided better results than AHF. Moreover, the comparison with the other image processing methodologies showed ICA to be successful in increasing the SNR of the source images, so ICA filtering can help the human operator to detect defects more efficiently. Indeed, whereas MBF and AHF achieved an average SNR improvement of about 4 dB, ICA was able to enhance the quality of the images with an averaged SNR increment of about 8 dB (refer to Table 3 for comparisons). Therefore, ICA can be considered a useful and reliable method for MOI preprocessing, also in view of its use as an image filtering technique able to give a direct representation of the separated components. The weak point of ICA appears when the images included in the available set portray a different number of objects. Nevertheless, the presented results suggest the possibility of using ICA as a preprocessing method alternative to other image filtering procedures. Evaluation of the proposed algorithms using additional data (tests over a large range of scanning velocities) is under way. Future work includes testing real-time processing via a hardware implementation; such a system could be directly connected to the MO imager and process the data as they are acquired. Finally, the use of classification algorithms to identify regions with cracks is under investigation.

Nomenclature

  θ:         Angle of the rotating light
  θ_f:       Incident angle of light
  K:         Wave vector of the incident light
  M:         Local state of magnetization of the sensor
  l:         Sensor thickness
  s(·):      Signal in the time or spatial domain
  H(u, v):   Transfer function of the homomorphic filter
  γ_L:       Lower-frequency component of the homomorphic filter
  γ_H:       Higher-frequency component of the homomorphic filter
  D_0:       Cut-off frequency of the homomorphic filter
  n:         Order of the homomorphic filter.

References

[1] Y. Yin, G. Y. Tian, G. F. Yin, and A. M. Luo, "Defect identification and classification for digital X-ray images," Applied Mechanics and Materials, vol. 10–12, pp. 543–547, 2008.
[2] S. Yamada, M. Katou, M. Iwahara, and F. P. Dawson, "Eddy current testing probe composed of planar coils," IEEE Transactions on Magnetics, vol. 31, no. 6, pp. 3185–3187, 1995.
[3] F. C. Morabito, "Independent component analysis and feature extraction techniques for NDT data," Materials Evaluation, vol. 58, no. 1, pp. 85–92, 2000.
[4] L. Udpa and S. S. Udpa, "Application of signal processing and pattern recognition techniques to inverse problems in NDE," International Journal of Applied Electromagnetics and Mechanics, vol. 8, no. 1, pp. 99–117, 1997.
[5] M. Cacciola, A. Gasparics, F. C. Morabito, M. Versaci, and V. Barrile, "Advances in signal processing to reduce lift-off noise in eddy current tests," PIERS Online, vol. 3, no. 4, pp. 517–521, 2007.
[6] S. Simms, "MOI: magneto-optic/eddy current imaging," Materials Evaluation, vol. 51, no. 5, pp. 529–532, 1993.
[7] G. L. Fitzpatrick, "Flaw Imaging in Ferrous and Nonferrous Materials Using Magneto-Optic Visualization," US Patent no. 4,755,752, 1988.
[8] G. L. Fitzpatrick, D. K. Thome, R. L. Skaugset, E. Y. C. Shih, and W. C. L. Shih, "Magneto-optic/eddy current imaging of aging aircraft: a new NDI technique," Materials Evaluation, vol. 51, no. 12, pp. 1402–1407, 1993.
[9] A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis, John Wiley & Sons, New York, NY, USA, 2001.
[10] G. Yang, G. Y. Tian, P. W. Que, and T. L. Chen, "Independent component analysis-based feature extraction technique for defect classification applied for pulsed eddy current NDE," Research in Nondestructive Evaluation, vol. 20, no. 4, pp. 230–245, 2009.
[11] G. Y. Tian, A. Sophian, D. Taylor, and J. Rudlin, "Wavelet-based PCA defect classification and quantification for pulsed eddy current NDT," IEE Proceedings: Science, Measurement and Technology, vol. 152, no. 4, pp. 141–148, 2005.
[12] P. Ramuhalli, F. Yuan, U. Park, J. Slade, and L. Udpa, "Enhancement of magneto-optic images," in Proceedings of the International Workshop on Electromagnetic Nondestructive Evaluation, p. 199, 2003.
[13] U. Park, L. Udpa, and G. C. Stockman, "Motion-based filtering of magneto-optic imagers," Image and Vision Computing, vol. 22, no. 3, pp. 243–249, 2004.
[14] K. Zvezdin and V. Kotov, "Modern magnetooptics and magnetooptical materials," Journal of the Optical Society of America B, vol. 22, no. 1, article 187, 1997.
[15] I. T. Young, J. J. Gerbrands, and L. J. van Vliet, Fundamentals of Image Processing, Delft University of Technology, Delft, The Netherlands, 1995.
[16] T.-W. Lee, M. Girolami, and T. J. Sejnowski, "Independent component analysis using an extended infomax algorithm for mixed subgaussian and supergaussian sources," Neural Computation, vol. 11, no. 2, pp. 417–441, 1999.
[17] A. Hyvärinen, "Fast and robust fixed-point algorithms for independent component analysis," IEEE Transactions on Neural Networks, vol. 10, no. 3, pp. 626–634, 1999.
[18] V. I. Ponomarev and A. B. Pogrebniak, "Image enhancement by homomorphic filters," in Applications of Digital Image Processing XVIII, vol. 2564 of Proceedings of SPIE, pp. 153–159, San Diego, Calif, USA, July 1995.
[19] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, San Francisco, Calif, USA, 1993.

Hindawi Publishing Corporation


EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 895486, 14 pages
doi:10.1155/2010/895486

Research Article
A Machine Learning Approach for Locating Acoustic Emission
N. F. Ince,¹ Chu-Shu Kao,² M. Kaveh,¹ A. Tewfik (EURASIP Member),¹ and J. F. Labuz²

¹Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA
²Department of Civil Engineering, University of Minnesota, Minneapolis, MN 55455, USA

Correspondence should be addressed to N. F. Ince, ince firat@yahoo.com


Received 18 January 2010; Revised 26 July 2010; Accepted 20 October 2010
Academic Editor: Joao Marcos A. Rebello
Copyright 2010 N. F. Ince et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper reports on the feasibility of locating microcracks using multiple-sensor measurements of the acoustic emissions (AEs) generated by crack inception and propagation. Microcrack localization has obvious application in non-destructive structural health monitoring. Experimental data was obtained by inducing cracks in rock specimens during a surface instability test, which simulates failure near a free surface such as a tunnel wall. Results are presented on the pair-wise event correlation of the AE waveforms, and these characteristics are used for hierarchical clustering of AEs. By averaging the AE events within each cluster, "super AEs" with higher signal-to-noise ratio (SNR) are obtained and used in the second step of the analysis for calculating the time of arrival information for localization. Several feature extraction methods, including wavelet packets, autoregressive (AR) parameters, and discrete Fourier transform coefficients, were employed and compared to identify crucial patterns related to P-waves in the time and frequency domains. Using the extracted features, an SVM classifier fused with probabilistic output is used to recognize the P-wave arrivals in the presence of noise. Results show that the approach has the capability of identifying the location of AE in noisy environments.

1. Introduction
Rapidly changing environmental conditions and harsh mechanical loading are sources of damage to structures. The resulting damage can be examined based on local identification, such as the presence of small cracks (microcracks) in a component, or global identification, such as changes in the natural frequency of the structure. A continuous health monitoring process may involve both global and local identification. Generally, local damage, such as cracks in critical components, is inspected visually. This type of inspection is slow and prone to human error; therefore, automated, fast, and accurate techniques are needed to detect the onset of local damage in critical components to prevent failure.
In this scheme, nondestructive testing and monitoring should be employed so that the damage can be inferred through analysis of the signals obtained from inspection. Acoustic emission (AE) events can serve as a source of information for locating the damage, particularly as caused by the initiation and propagation of microcracks [1–3]. The spatial distribution of AE locations can provide clues about the position and extent of the damage [4]. In practice, the location of AE is estimated from the primary wave (P-wave), the first part of the signal to arrive at the sensor (see Figure 2(c)). However, the use of AE waveforms is often obscured by noise and spurious events, which may cause misinterpretation of the data. Even in controlled laboratory settings, it is difficult to account for all the sources of noise. Therefore, an AE system that automatically learns crucial patterns from the total AE data, as well as particular P-wave arrivals, may provide clues for distinguishing between real events and extraneous signals, thus improving the spatial accuracy of AE locations and reducing false alarms. Accurate detection of these events with appropriate signal processing and machine learning techniques may open new possibilities for monitoring the health of critical components; this offers the possibility of raising alarms in an automated manner if the degradation of structural integrity is severe.

In this paper, we describe a novel combination of signal processing and machine learning techniques, based on hierarchical clustering and support vector machines, to process multi-sensor AE data generated by the inception and propagation of microcracks in rock specimens during a surface instability test. The effectiveness of the approach is validated by laboratory-based experimental results. Fundamental to the proposed technique are the experimentally observed, highly correlated AE waveforms that are generated by the propagation of microcracks [3]. A similar phenomenon was also reported in [5] by exploring the use of coherence functions in the frequency domain. Thus, the signal processing framework we present in this study focuses on the capture and processing of such correlated events as representing signals of interest for damage localization. The correlated nature of these events is expected to be different from extraneous interfering signals within the same measurement bandwidth that may be generated by other mechanisms with random characteristics. Several features were extracted from the time and frequency domains using autoregressive modeling, wavelet packets (WP), and the discrete Fourier transform. These features were used in conjunction with a maximum margin support vector machine (SVM) classifier coupled with probabilistic output [6] to recognize the P-waves in the presence of noise for accurate time of arrival (TOA) calculation. The classification step is followed by the use of the TOA information of the identified waves of interest for estimating the location of the microcracks. The feasibility of the proposed techniques in determining the location of a fracture is presented by examining AE events recorded by eight sensors attached to a structure with localized microcracks. A block diagram summarizing the overall signal processing system is given in Figure 1.

The remainder of the paper is organized as follows. In the next section, the experiments and the AE data sets recorded from two specimens during controlled failure tests are described. Next, the signal preprocessing techniques used for enhancing the measured AE signals in the presence of noise and data acquisition imperfections are presented. This is followed by a description of a novel hierarchical clustering technique to group the AE events. The feature extraction and machine learning techniques for detecting P-waves are described in Section 4. Finally, the experimental results on the spatial distributions of AE events are provided and compared to the actual fracture locations.

Figure 1: Schematic diagram of the signal processing and classification system (median filtering, hierarchical clustering, averaging, envelope detection, feature extraction, SVM-based P-wave detection, and location estimation with TOA). The AE signals were preprocessed with a median filter. In the following step, they were grouped with a hierarchical clustering procedure. An averaging step was implemented in each cluster to improve the SNR. This was followed by a feature extraction procedure in the time and frequency domains. On the test data, the feature extraction and classification steps were executed when the signal envelope exceeded a predefined threshold. The TOA is calculated by detecting the P-waves with an SVM classifier.

2. Acoustic Emission Recordings


AE events were recorded during a surface instability test that is used to examine failure near a free surface such as a tunnel wall. A photo representing the experimental setup is given in Figure 2.

Figure 2: (a) Experimental setup for recording the AE events in a surface instability test. (b) Coordinate axes of the setup. (c) AE event recorded from the first sensor, which triggers the data acquisition process; the P-wave, indicated with an arrow, is the first component arriving at the sensor and is used for time-of-arrival detection.

A prismatic rock specimen, wedged between two rigid vertical side walls and a rigid vertical rear wall, is subjected to an axial load applied in the Y-direction through displacing rigid platens. The specimen is supported in the Z-direction such that compressive stress is generated passively. The rear wall in the X-direction ensures that lateral deformation and failure (cracks) are promoted to take place on the front, exposed face of the specimen.
Four acoustic emission (AE) sensors were attached to the exposed face using cyanoacrylate glue, and their positions (x, y, z) were measured. Four other AE sensors were fastened to the side walls of the apparatus. The AE data were collected with high-speed, CAMAC-based data acquisition equipment.

Figure 3: (a) Original signal corrupted with spikes; (b) the signal corrected with a median filter.
The equipment consisted of four two-channel modular transient recorders (LeCroy model 6840) with 8-bit analog-to-digital converter (ADC) resolution and a sampling rate of 20 MHz. The data acquisition system was interfaced with eight piezoelectric transducers (Physical Acoustics model S9225), and eight preamplifiers with bandpass filters from 0.1 to 1.2 MHz and 40 dB gain were used for conditioning the raw AE signals. The frequency response of these transducers ranged from 0.1 to 1 MHz, with a diameter of approximately 3 mm. All channels were triggered when the signal amplitude exceeded a certain threshold on the first sensor; this sensor is referred to as the anchor sensor. AE data were acquired in a more or less continuous fashion until 128 Kbytes of digitizer memory were filled; then the AE data were transferred to the host computer, with approximately four seconds of downtime. The entire waveforms were stored automatically and sequentially with a time stamp. The experiment was repeated twice using two very similar rock specimens with dimensions of 62 mm (X) × 93 mm (Y) × 80 mm (Z), labeled SR1 and SR2. A sample AE signal recorded with the system is presented in Figure 2(c). In total, 2176 and 1536 AE events were recorded in experiments SR1 and SR2, respectively. This number includes both real AE and spurious (noise) events.
Several events contained spikes (Figure 3), which probably originated from ADC sign errors. Consequently, a median filter was employed to remove the spikes from the AE recordings. The median filter is a nonlinear digital filtering technique that has found widespread application in image processing. In this study, each sample was replaced with the median value of a window covering three samples before and after it. A representative corrupted signal and the median filter output are shown in Figure 3. The median filter successfully corrected the events with consecutive spikes.
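A minimal sketch of this despiking step is shown below, assuming a NumPy environment; the seven-sample window follows the description above (three samples before and after each sample).

```python
import numpy as np

def despike_median(x, half_window=3):
    """Sketch of the despiking step: replace each sample with the median
    of a window covering three samples before and after it (7 samples)."""
    x = np.asarray(x, dtype=float)
    padded = np.pad(x, half_window, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, 2 * half_window + 1)
    return np.median(windows, axis=1)
```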

3. Clustering of AE Events

In practice, the crack locations are inspected visually by projecting onto a plane the locations of individual AE events, which are estimated from the TOA information at the sensors [7]. The TOA is determined by comparing the signal amplitude to a predefined threshold, where the earliest arrival is due to the P-wave, as shown in Figures 2 and 4(a). This type of method produces misleading TOA information if the signal is noisy, which is usually the case in actual structures. For instance, the data set we recorded contained several records with a corrupted baseline (Figure 4(b)) or pseudo-AE events. Therefore, before applying the amplitude threshold, the SNR of the signal was increased by capturing correlated recordings and averaging the grouped events. For this purpose, a hierarchical clustering approach, which uses the cross-correlation function computed between different events, was applied.
As a first step, the normalized cross-correlation function $R_{xy}[k]$ was computed, for only 256 shifts, between pairs of events represented by the preprocessed signals $x[n]$ and $y[n]$ acquired at the anchor sensor:

$$R_{xy}[k] = \frac{1}{(N - |k|)\, \sigma_x \sigma_y} \sum_{n} x[n]\, y[n + k], \qquad |k| \le 256. \qquad (1)$$

A correlation matrix was then constructed using the maximum value of the absolute cross-correlation function between all event pairs. The lag indices of maximum correlation between paired events were saved to align the associated events in further steps of the analysis. The correlation matrices of the two data sets are shown in Figure 5. These correlation matrices were used to build a hierarchical clustering [8]. The average linkage method was used to build the dendrogram, which represents the nested correlation structure of all AE events. The dendrogram was cut at level 0.2 in order to cluster those events that have average cross-correlations equal to or larger than 0.8. At this level, 105 and 80 clusters with two or more members were obtained for SR1 and SR2, respectively.
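The following is a minimal sketch of this clustering step, assuming `events` holds equal-length anchor-sensor records; SciPy's average-linkage implementation stands in for the tool actually used, and the correlation normalization is simplified relative to (1).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_ae_events(events, max_lag=256, cut=0.2):
    """Sketch: pairwise maximum normalized cross-correlation within
    +/- 256 lags -> distance = 1 - |corr| -> average-linkage dendrogram
    cut at 0.2 (i.e., average correlation >= 0.8 within a cluster)."""
    n = len(events)
    C = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            x = (events[i] - events[i].mean()) / events[i].std()
            y = (events[j] - events[j].mean()) / events[j].std()
            # Simplified normalization (1/N instead of 1/(N - |k|)).
            full = np.correlate(x, y, mode='full') / len(x)
            mid = len(full) // 2
            C[i, j] = C[j, i] = np.abs(full[mid - max_lag: mid + max_lag + 1]).max()
    D = squareform(1.0 - C, checks=False)      # similarity -> distance
    Z = linkage(D, method='average')
    return fcluster(Z, t=cut, criterion='distance'), C
```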
The AE events belonging to a particular cluster with four members are shown in Figure 5. This step was followed by computing the average of each cluster to obtain "super AE" signals. In this scheme, averaging is expected to reduce the uncorrelated noise relative to the repetitive AE signal component across the records of a given cluster, resulting in an amplitude SNR increase of at best √C, where C is the number of events in a cluster. A similar approach has been utilized for processing gene expression profiles in [9]; there, it was shown that averaged gene expression data within clusters have more predictive power than individual gene expressions. Thus, by increasing the SNR of the waveforms, the AE locations will be more accurate.

Figure 4: Sample AE recordings. (a) High SNR with clear baseline. (b) Corrupted baseline. (c) Pseudo-AE (noise).

In order to improve the amplitude SNR by a factor of two or more, clusters with at least four members were used in estimating the location of AE. Clusters with large numbers of members increase the reliability of the location estimation step. We emphasize that the key assumption here, and one that has been observed experimentally, is that it is very unlikely in practice for noise to be highly correlated across multiple measurement records. Hence, it is expected that highly correlated signals (events) can only originate from a source such as microcracks.

4. P-Wave Detection with SVM

The spatial distribution of AE is estimated from the TOA information, which is extracted from the waveforms. The detection of P-waves by using a simple threshold becomes difficult in the presence of noise or local peaks in the data. With lower amplitude thresholds, the rate of false positives (FP) increases rapidly due to the noise in the baseline; increasing the amplitude threshold may decrease the false positive rate along with the true positive (TP) rate. Consequently, an intelligent algorithm is needed to distinguish between real and pseudo-P-waves (noise). In this paper, the use of a maximum margin classifier with input features extracted from time and frequency domain analysis of the AE data was investigated for the detection of the P-waves. In order to determine the TOA accurately, the time and frequency domain properties of the AE data in short windows around the P-wave arrival were examined. The energy of P-waves was generally found to be located in the lower frequency bands. This wave was followed by large oscillations with similar spectral characteristics (the 1st row in Figure 6(a)). Sample waveforms and spectra related to a typical P-wave (center frame in the 1st row, Figure 6(a)) and to the windows preceding and following this wave are presented in frames 1 and 3 of Figure 6(a). The same analysis related to a segment that may be recognized as a pseudo-P-wave is also given (Figure 6(b)). It is observed that the pseudo-P-waves were not followed by large oscillations. In addition, their frequency spectrum indicates that these waveforms had a certain amount of energy in the mid-frequency bands. In the following, we describe three approaches for determining features to be used in a classifier. The identification of the features was implemented on a training set by selecting around 20 multichannel super AE events from each data set. The effectiveness of these features and their combinations is examined on the testing datasets in Section 5.
4.1. Discrete Fourier Transform-Based Features. Based on the above observations on the frequency characteristics of P-waves and noise, and within the spirit of [10], so-called Mel scale subband energy features were extracted from the spectrum of each time window using a fast Fourier transform. A Blackman-Tukey window was used during the estimation of the spectra of the segments. In total, five subbands were extracted.

Figure 5: Correlation matrices of (a) SR1 and (b) SR2. (c) Overlap plot of the AE events belonging to a particular cluster with four members (channels Ch-1 to Ch-8).

The widths of the subbands were not uniform


and had a dyadic structure: the lowest two bands had the same bandwidth, and each following subband was twice as wide as the preceding one. This setup focused more on the lower frequency bands, since the energy of the signal was concentrated in this range. By concatenating the Mel scale subband features from all three windows, a 15-dimensional feature vector was constructed. Generally, the noise (pseudo-P-waves) had jagged spectra; in contrast, the spectra of the P-waves were smooth. The variance of the derivative of the spectrum of each time window was therefore also computed as another feature to capture this difference.
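A minimal sketch of these features is given below; the simple Blackman taper stands in loosely for the Blackman-Tukey spectral estimate mentioned above, and the exact band-edge placement is an assumption consistent with the dyadic structure described.

```python
import numpy as np

def dyadic_subband_features(window, n_bands=5):
    """Sketch of the DFT-based features: five subband energies with a
    dyadic structure (two equal low bands, then doubling widths), plus
    the variance of the log-spectrum derivative as a smoothness measure."""
    spec = np.abs(np.fft.rfft(window * np.blackman(len(window)))) ** 2
    # Band widths proportional to 1, 1, 2, 4, 8 for five bands.
    widths = np.array([1, 1] + [2 ** i for i in range(1, n_bands - 1)], float)
    edges = np.concatenate(([0], np.cumsum(widths)))
    edges = (edges / edges[-1] * len(spec)).astype(int)
    energies = [spec[a:b].sum() for a, b in zip(edges[:-1], edges[1:])]
    smoothness = np.var(np.diff(np.log(spec + 1e-12)))
    return np.array(energies + [smoothness])
```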
4.2. Discriminatory Wavelet Packet Analysis-Based Features. In addition to the energies computed in predefined Mel scale subbands, we also considered selecting the subbands adaptively with a discriminant wavelet packet (WP) analysis technique [11]. In more detail, the signals belonging to the noise and P-wave classes are decomposed into WP coefficients over a pyramidal tree structure. In the following step, the expansion coefficients at each position in the tree structure are squared and averaged within each class. Then the Euclidean distance between the averaged expansion coefficients of noise and P-waves is computed at each node of the WP tree. The corresponding binary tree structure is pruned from bottom to top to select the most discriminatory frequency subbands; this is achieved by comparing the estimated distances of the children and mother nodes. The energy in each selected band is used as a feature for the recognition of P-waves. The reader is referred to [11, 12] for a detailed description of discriminatory wavelet packet analysis and its derivations. Since short data segments are inspected, we used a four-tap Daubechies wavelet filter while analyzing the signals. A tree depth of four was selected, where at the finest level the available bandwidth was divided into 16 subbands. In Figure 7, we present the selected WP subbands for the datasets SR1 and SR2. We note that the obtained segmentations were somewhat similar in both datasets. Wider subbands were selected in the left window preceding the P-wave.

Figure 6: (a) Waveforms and log power spectra of a 64-sample-long time window preceding the P-wave, a window centered around the P-wave, and a 128-sample-long window after the P-wave; (b) raw data and spectra of noise segments that may be recognized as a pseudo-P-wave.

Figure 7: The WP subband tiling for datasets SR1 (a) and SR2 (b). Each selected subband is weighted with the corresponding log-scaled Euclidean distance between classes; the darker nodes have higher discrimination power.

We note that the entire high-frequency band was selected as one feature in the left window. The discriminative power of the high band in the left window was higher than that of the high subbands in the center and right windows, whereas the discriminatory power of the center and right windows in the lower bands was much higher than that of the left window. Interestingly, finer levels were selected in the center and right windows.
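A minimal sketch of this discriminant WP selection, using PyWavelets, is shown below; the exact pruning rule of [11] may differ in detail (here a parent replaces its children when its class-distance score exceeds the sum of theirs), and 'db2' is the four-tap Daubechies filter mentioned above.

```python
import numpy as np
import pywt

def discriminant_wp_bands(noise, pwaves, wavelet='db2', depth=4):
    """Sketch of discriminant WP selection [11]: squared expansion
    coefficients are averaged within each class at every tree node;
    nodes are scored by the Euclidean distance between class averages,
    and the tree is pruned bottom-up."""
    def avg_energy(signals):
        maps = {}
        for s in signals:
            wp = pywt.WaveletPacket(s, wavelet, maxlevel=depth)
            for level in range(1, depth + 1):
                for node in wp.get_level(level, order='freq'):
                    maps.setdefault(node.path, []).append(node.data ** 2)
        return {p: np.mean(v, axis=0) for p, v in maps.items()}

    e0, e1 = avg_energy(noise), avg_energy(pwaves)
    score = {p: np.linalg.norm(e0[p] - e1[p]) for p in e0}

    def prune(path, level):
        # At the finest level a node always represents itself.
        if level == depth:
            return [path], score[path]
        kept, total = [], 0.0
        for child in (path + 'a', path + 'd'):
            k, s = prune(child, level + 1)
            kept += k
            total += s
        # Keep the parent only if it beats the sum of its children.
        if path and score.get(path, 0.0) >= total:
            return [path], score[path]
        return kept, total

    bands, _ = prune('', 0)
    return bands   # node paths such as 'aad' identify selected subbands
```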
4.3. AR Model-Based Features. The AE data were also analyzed in the left, center, and right windows using an autoregressive (AR) model. Since the P-waves and the oscillations following them are more structured, it is expected that the AE waveforms can be well predicted by a linear combination of past samples; for noise, however, such a prediction is expected to fail due to the lack of correlation and/or structure between consecutive samples. With this motivation, the prediction error of the AR (alternatively, the linear prediction) model in each time window was used as another feature for detecting the P-waves. Prior to the AR modeling, the data in each window were normalized to zero mean and unit variance in order to eliminate the energy differences between different events. Since short data segments are analyzed, the order of the AR model was investigated with the corrected Akaike information criterion (AICc) of [13]:

$$\mathrm{AIC} = 2\log(e) + 2p, \qquad \mathrm{AICc} = \mathrm{AIC} + \frac{2p(p + 1)}{N - p - 1}, \qquad (2)$$

where $p$ is the model order, $N$ is the sample size, and $e$ is the prediction error of the model. The AICc has a second-order correction for small sample sizes; as the number of samples gets large, the AICc converges to the AIC, so it can be employed regardless of sample size [13]. In Figure 8, we present the AICc computed in all windows for both datasets SR1 and SR2 and then averaged. The AICc criterion indicated a model order between 6 and 8. To get an idea of the discriminative power of the selected model order, receiver operating characteristic (ROC) curves computed on the training data were also constructed in the three consecutive time windows for each model order. The area between the ROC curve and the diagonal "no decision" line (AUC) was used as a measure to quantify the discrimination performance of the extracted features. We also inspected the change in discriminatory information as a function of model order in each analysis window (see Figure 8(b)); the AUC plot suggested lower model orders, with the model order p = 6 providing the maximum discriminatory information.

The ROC curves of the different time windows for both datasets are given in Figure 9. It was observed that the area under the curve was maximal in the time window following the P-wave, followed by the window covering the P-wave. Specifically, the prediction error of the model was smaller in the last two windows for real P-waves and provided better discrimination. This is an expected outcome, since the signals in these windows have higher SNR and are more structured compared to the signals in the first window.
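A minimal sketch of this AR feature is given below; the least-squares fit is one of several standard ways to estimate the AR coefficients (the paper does not state which estimator was used).

```python
import numpy as np

def ar_prediction_error(window, order=6):
    """Sketch of the AR-based feature: normalize the window to zero mean
    and unit variance, fit an AR model of order p = 6 by least squares,
    and return the mean squared one-step prediction error."""
    x = (window - window.mean()) / window.std()
    # Lagged design matrix: predict x[n] from x[n-1], ..., x[n-order].
    X = np.column_stack([x[order - i - 1: len(x) - i - 1] for i in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ coeffs
    return np.mean(residual ** 2)
```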
For each time point, computing the features described above could be a demanding process. To reduce the number of candidate time points that need to be tested for P-wave arrival, the signal was first normalized, and then the envelope of the signal was computed with the Hilbert transform. When the envelope of the signal exceeded a predefined threshold, that time point was tested for P-wave

Figure 8: (a) The corrected Akaike information criterion, computed for both datasets SR1 and SR2 and then averaged; the AICc indicated a model order between 6 and 8, with the minimum at p = 8. (b) The area under the ROC curve related to the prediction error of the AR model on the training data, computed in the center and right windows and averaged over both datasets SR1 and SR2.

Figure 9: The ROC curves related to the model order p = 6, computed on the training data in the left, center, and right windows for SR1 (top row) and SR2 (bottom row). Note that the discrimination in the center and right windows is better than in the left window.

arrival. It was found that a threshold value of 0.5 was good enough to determine most of the P-waves. The feature vectors for each method presented above were individually fed into a linear support vector machine classifier for the final decision [6]. The main motivation for using an SVM classifier is its robustness against outliers and its generalization capacity in higher dimensions, which is the result of its large margin. Furthermore, the output of the SVM classifier was postprocessed by a sigmoid function to map the SVM output into probabilities; this was accomplished by minimizing the cross-entropy error function, as suggested in [14]. By using this procedure, we were able to assign posterior probabilities to the SVM output, which were later used as a confidence level to detect the P-wave arrival. The SVM classifier was trained by selecting around 20 multichannel super AE events from each data set. Since each event includes AE data from 8 channels, this resulted in 160 P-waves to be tested in each dataset. This number included those clusters with a low number of members. However, due to poor SNR, we were unable to visually identify the location of all P-waves in these data sets; consequently, we selected those events which have a visible P-wave. The training feature vectors for the P-wave and noise sets were constructed from this subset by manually marking the P-wave arrivals and the noise events that exceeded the predefined threshold in each channel. The numbers of visually identified P-waves were 100 and 78 in datasets SR1 and SR2, respectively; the numbers of noise events were 155 and 162 for SR1 and SR2, respectively. The SVM classifier was trained on the features using the data set of one of the experiments and applied to the other dataset. In this way, it was guaranteed that no test samples were used in training the classifier. In addition, using such a training strategy, it was investigated whether both data sets share similar patterns; the success of such a strategy also validates the generalization capability of the constructed classification system.
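A minimal sketch of the detection stage is given below; scikit-learn's probability option stands in for the Platt-style sigmoid fitting described above, and the signal normalization convention in the gating step is our assumption.

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.svm import SVC

def candidate_points(signal, threshold=0.5):
    """Gate candidate time points: normalize the signal, take its
    Hilbert envelope, and keep samples whose envelope exceeds 0.5.
    (Normalization convention assumed.)"""
    x = signal / np.abs(signal).max()
    return np.flatnonzero(np.abs(hilbert(x)) > threshold)

def train_pwave_svm(features, labels):
    """Linear maximum-margin SVM with probabilistic output: with
    probability=True, scikit-learn fits the sigmoid mapping (Platt
    scaling, cf. [14]) internally."""
    clf = SVC(kernel='linear', probability=True)
    clf.fit(features, labels)
    return clf

def first_arrival(clf, cand_features, cand_times, p_min=0.9):
    """Decision rule: the TOA is the first candidate point whose
    posterior P-wave probability exceeds 0.9 (label 1 = P-wave assumed)."""
    probs = clf.predict_proba(cand_features)[:, 1]
    hits = np.flatnonzero(probs > p_min)
    return cand_times[hits[0]] if hits.size else None
```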

5. Results

As a first step, on each training set, the decision characteristics of the SVM classifiers were examined by visualizing the ROC curves related to their outputs. We individually investigated the ROC curves of each feature extraction method described above and computed the area between the ROC curve and the diagonal line. In addition, we also considered the classification performance of the SVM when the raw AE data in the three consecutive windows are used as input. The ROC curves related to the training data for SR1 and SR2 are depicted in Figure 10. We note that the maximum area in both datasets was obtained with the WP method (0.496 for dataset SR1 and 0.481 for SR2). The second most discriminative features were the Mel scale subband energies obtained with the FFT (AUC = 0.489 and 0.477 for datasets SR1 and SR2, respectively). On both datasets, the adaptive selection of frequency subbands provided better performance. We note that the SVMs trained with the 256-dimensional raw AE data had quite poor performance, with AUCs of 0.39 and 0.31 for datasets SR1 and SR2.

We also examined the performance of combinations of feature sets. Interestingly, the features computed with the WP method did not provide any better discrimination performance when combined with other features. For dataset SR1, the best performance was obtained with the WP features alone, while the best separation performance on dataset SR2 was obtained with the combination of Mel scale, AR model error, and spectrum variance features (AUC = 0.483). Based on these observations, we trained the SVM classifiers either with only the WP features or with the combination of Mel scale, AR model error, and spectrum variance features; these classifiers were applied to the test samples described below.

In this study, it is desirable to have a system with low false positive rates, since several peaks exist in the baseline preceding the P-waves that could potentially be recognized as a P-wave. For this purpose, we used the probability output of the SVM classifier and accepted a point as a P-wave arrival only when the posterior probability exceeded a threshold of 0.9. The threshold could be moved to more stringent levels; however, this may result in the classifier missing the P-waves, which would yield low TP rates. One could also select as the P-wave arrival the time point where the posterior probability of the SVM classifier is maximal over the whole AE signal; however, this caused the system to miss the P-waves and to identify regions in the post-P-wave portion, as they share similar characteristics. Therefore, we selected as the P-wave the first point where the posterior probability exceeded the 0.9 threshold.

As indicated in earlier sections, the SVM classifier was trained on the features using the data set of one of the experiments and applied to the other dataset. Using this strategy, we evaluated the generalization capacity of the system on similar specimens. It is difficult to numerically quantify the classification accuracies on both datasets due to the lack of true labels for the test data. True labels can be obtained by manually marking the P-waves; however, several clusters with a low number of members had poor SNR, and it was difficult to visually identify the P-waves in these records. Consequently, we elected to study the classification accuracy on those clusters with four or more members. The algorithm identified 13 and 9 clusters with four or more members in the datasets SR1 and SR2, respectively. The super AEs obtained from these clusters had much higher SNR, and the P-waves were mostly visually observable. We manually marked the locations of the P-waves, and a detection was accepted when the classifier identified a region within 10 samples around the marked location. We allowed such a tolerance region because the P-wave location was not clearly visible on a small number of records due to low SNR, and the expert manually marked these positions as possible P-wave locations. The success of the system in recognizing the P-waves with WP features was 97.1% when SR2 was used as the training set and SR1 as the testing set; using SR1 for training and SR2 for testing, the success in recognizing the P-waves was 94.5%. The combination of features yielded classification accuracies of 93.3% and 94.5% using the same training and testing procedure for these datasets, respectively. We note that similar recognition accuracies were obtained with both techniques, and the performances were in accordance with the training data characteristics.

Figure 10: The training classification performance of the different feature sets (wavelet packets, Mel subbands, AR, spectrum variance, and raw data) on datasets SR1 (a) and SR2 (b). The best performance was obtained with the WP approach; the performance of the raw AE data was quite poor compared to the other methods.

Figure 11: Sample cluster average and detected arrivals from the eight sensors (Ch-1 to Ch-8) of SR1. The TOA is marked with a vertical line on each channel. Note that the SVM classifier was trained on SR2.

Sample TOA estimates
detected by the tuned SVM classifier for a particular cluster are visualized in Figure 11. The horizontal dashed lines represent the predefined threshold; the time points where the envelope of the signal exceeded the threshold were tested for P-wave arrival. The vertical blue lines represent the detected P-wave arrivals. Note that, although several other time points exceeded the threshold, the algorithm successfully eliminated them. Recall that the SVM classifiers were trained on a different data set; it was observed that the SVM classifier successfully recognized the P-waves, showing that the classifier can generalize to similar specimens. This may provide a great advantage in the deployment of the system in real-life applications.
After calculating the arrival information for each sensor,
the iterative algorithm in [15] was used to estimate the
3D hypocenter of the source. For the iterative localization
method, the location errors were described by the symmetric
covariance matrix. The algorithm was executed in a two-step
procedure to improve estimation accuracy. In the first step,
the iterative method computed an optimized AE position
while the covariance matrix that contains spatial variance of
arrival times was examined. The two channels that produced the largest estimated location errors, computed from the residual times, were disregarded. Then, in the second step, the source
location was estimated with the remaining channels. If no
noticeable reduction was observed, the location estimation
was implemented using all available channels. With this
strategy, we evaluated arrival information from the combination of AE sensors. It should be noted that the AE
location error for the iterative algorithm tested with synthetic
data is generally between 0.5 and 3.0 mm if the P-wave
arrivals can be located within 10 samples. Figure 12 shows the estimated locations of all clusters and of those with at least four members, and Figure 13 presents photos of the deformed specimens.
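A hedged sketch of the two-step procedure follows. The inner solver here is a generic least-squares TOA fit with an assumed constant P-wave velocity v, standing in for the iterative method of [15]; the 10% residual-reduction criterion used to decide whether the pruning gave a "noticeable reduction" is our assumption:

```python
import numpy as np
from scipy.optimize import least_squares

def locate(sensors, toas, v):
    """Fit source position (x, y, z) and origin time t0 to the arrivals.
    sensors: (n, 3) array of sensor positions; toas: (n,) arrival times."""
    def residuals(p):
        x0, t0 = p[:3], p[3]
        return toas - (t0 + np.linalg.norm(sensors - x0, axis=1) / v)
    p0 = np.r_[sensors.mean(axis=0), toas.min()]   # crude initial guess
    sol = least_squares(residuals, p0)
    return sol.x, residuals(sol.x)

def two_step_locate(sensors, toas, v):
    sol, res = locate(sensors, toas, v)            # step 1: all channels
    worst = np.argsort(np.abs(res))[-2:]           # two largest residuals
    keep = np.setdiff1d(np.arange(len(toas)), worst)
    sol2, res2 = locate(sensors[keep], toas[keep], v)  # step 2: pruned set
    # fall back to all channels if pruning gives no noticeable reduction
    if np.sum(res2**2) < 0.9 * np.sum(res[keep]**2):
        return sol2[:3]
    return sol[:3]
```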

Figure 12: Estimated locations of the AE events for SR1 (first row) and SR2 (bottom row); X-, Y-, and Z-axes in mm. Each blue circle represents the location of a particular cluster. The diameter of the circle is proportional to the number of AE events in the cluster: (a) the locations of all clusters; (b) the locations of those clusters with at least four members (note that the locations are very close to the free surface); (c) the 3D view of the locations given in the second column.

The locations were estimated using the
WP features for SR1 and the combined feature set for SR2. Note that the clusters with at least four members have an SNR that is two times larger than that of individual recordings (averaging m records improves the amplitude SNR by roughly a factor of sqrt(m), so four or more members gives at least a factor of two). The positions of the AE sensors are marked with gray squares. Each blue circle represents the location of a particular cluster. The size of each circle is proportional to the number of AE events within the cluster. The locations of the AE events were in accordance with the visible crack locations. Most of the events were localized towards the free surface on both specimens. Interestingly, the largest clusters were localized a few millimeters away from the free surface, which matched well with the observed cracks on the deformed specimens in both tests (Figures 12(b), 13(a), and 13(b)). Several cracks developed on or adjacent to the frontal surface in the X-Y plane in both tests (Figures 13(a) and 13(b)). Especially for SR2, most clusters were located in the region of X > 45, Y > 60, Z < 40 mm, which precisely matched the heavily cracked zone observed on the Y-Z plane (lower row in Figures 12(c) and 13(c)).
The locations of all detected clusters in SR1 spread over the specimen with a tendency towards the free surface (Figure 12(a)). This is expected, since those clusters with a low number of members have lower SNR. It is also possible to capture noise by chance with a low number of members. In order to get around this problem, one can construct another decision system in order to discriminate
between AE and noise. Observations indicate that keeping those clusters with a large number of members automatically eliminates recordings that are noisy or random in nature.

Figure 13: (a) Photos of the observed cracks at the upper part of the free surface (X = 62 mm). (b) The observed cracks mapped onto the X-Y plane, where the free surface is on the right-hand side (X = 62 mm). Note that the cracks on the SR2 sample are hairline thin. (c) Photos of the observed cracks on the Z = 80 mm surface.
One can also increase the correlation threshold for identifying the clusters. However, a high correlation threshold may erase all possible clusters in data where the SNR is low. On the other hand, setting it very low relaxes the constraints and increases the chance of obtaining clusters with noise members. The threshold can be adjusted depending on the quality of the available data.
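A minimal sketch of such correlation-threshold clustering, using zero-lag normalized correlation and a greedy grouping in place of the full hierarchical scheme; the threshold rho and all names are illustrative:

```python
import numpy as np

def cluster_by_correlation(records, rho=0.8):
    """records: (n_events, n_samples) array. Groups records whose
    pairwise zero-lag normalized correlation exceeds rho; returns a
    list of member-index lists."""
    n = len(records)
    norm = records / np.linalg.norm(records, axis=1, keepdims=True)
    corr = norm @ norm.T                       # pairwise correlations
    unassigned, clusters = set(range(n)), []
    while unassigned:
        seed = unassigned.pop()
        members = [seed] + [j for j in list(unassigned) if corr[seed, j] > rho]
        unassigned -= set(members)
        clusters.append(members)
    return clusters

def super_ae(records, members):
    # averaging m member waveforms boosts the SNR by roughly sqrt(m)
    return records[members].mean(axis=0)
```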
In order to assess the improvement in estimating the AE locations with our technique, we compared our results to the AE locations estimated using the classic
threshold algorithm. The traditional algorithm uses an
amplitude threshold method to examine P-wave arrivals. The
threshold is determined from the mean signal noise (i.e., the pretrigger signal) plus/minus four times the standard deviation, or a minimum of 2 mV. In order to qualify a picked time mark as a P-wave arrival, two criteria have to be satisfied; a code sketch of this picker follows the list.
(i) Once the signal exceeds the threshold, it has to surpass the threshold at least 3 times in the subsequent 40-sample (40 × 50 ns = 2 μs) window.
(ii) After 120 samples (i.e., 6 μs) from the picked time mark, the signal has to exceed the threshold at least once.
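A sketch of this classic picker under the stated parameters (50 ns sampling period, a 4-sigma threshold applied to the rectified signal, and a 2 mV floor); array names are ours:

```python
import numpy as np

def classic_pick(signal_mv, pretrigger_mv):
    """Amplitude-threshold P-wave picker. signal_mv and pretrigger_mv
    are amplitudes in mV; returns the picked sample index or None."""
    thr = max(pretrigger_mv.mean() + 4.0 * pretrigger_mv.std(), 2.0)
    above = np.abs(signal_mv) > thr
    for t in np.nonzero(above)[0]:
        # (i) at least 3 more crossings in the next 40 samples (2 us)
        if np.count_nonzero(above[t + 1:t + 41]) < 3:
            continue
        # (ii) at least one crossing after 120 samples (6 us) from the pick
        if not above[t + 120:].any():
            continue
        return t                               # accepted P-wave pick
    return None
```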
The threshold method has been studied and proven reliable [16] and was chosen due to its simplicity and efficiency in processing thousands of AE events. In Figure 14, we provide the locations estimated with the traditional threshold method. We note that the threshold method resulted in a
very scattered pattern of the AE events and did not provide clear information on crack locations.

Figure 14: AE locations calculated with the classic algorithm on SR1 (a) and SR2 (b), without the clustering analysis and SVM technique; X and Y axes in mm.

This was due to the
raw AE signals being quite noisy and the TOA not being precisely detected by the simple threshold-passing criterion.
The proposed machine learning approach, however, proved
its strength and potential to filter out the noise and enhance
the SNR to correctly identify the position of major cracks.

6. Conclusions

Novel approaches based on hierarchical clustering and support vector machines (SVM) are introduced for clustering
AE signals and detecting P-waves for microcrack location
in the presence of noise. Prior to feature extraction and classification, spikes in the AE data are removed by employing a median filter. Clusters of AE events are identified by inspecting their pairwise correlations. After identifying the clusters, an averaging step is implemented to obtain super AEs with improved SNR. Characteristic
features were extracted from the data in time and frequency
domains to identify P-waves for time of arrival (TOA). SVM
classifiers with probabilistic outputs were trained with these
features to recognize P-waves for TOA determination. The
location of each AE cluster was estimated accordingly.
The proposed machine learning technique with clustering analysis and SVM showed that the estimated clusters
can successfully indicate the location of failure observed in
surface instability tests, in which the cracks were promoted
to occur close to the front free surface of the specimen. Compared to the classic AE algorithm, which gave a very dispersed pattern that was not indicative of the region of failure, this approach also offers the capability of filtering noisy signals and enhancing the SNR to obtain more reliable AE cluster locations. The preliminary results show that the method
has the potential to be a component of a structural health
monitoring system.

Acknowledgments

Partial support was provided by the National Science Foundation, Grant no. CMMI-0825454. The authors express their appreciation for the constructive comments provided by the referees, which served to considerably improve the paper.

References

[1] C. Grosse, S. D. Glaser, and M. Krüger, "Wireless acoustic emission sensor networks for structural health monitoring in civil engineering," in Proceedings of the European Conference on Non-Destructive Testing (ECNDT '06), pp. 1-8, Berlin, Germany, 2006.
[2] L. Golaski, P. Gebski, and K. Ono, "Diagnostics of reinforced concrete bridges by acoustic emission," Journal of Acoustic Emission, vol. 20, pp. 83-98, 2002.
[3] V. Emamian, M. Kaveh, A. H. Tewfik, Z. Shi, L. J. Jacobs, and J. Jarzynski, "Robust clustering of acoustic emission signals using neural networks and signal subspace projections," EURASIP Journal on Applied Signal Processing, vol. 2003, no. 3, pp. 276-286, 2003.
[4] Z. Gong, E. O. Nyborg, and G. Oommen, "Acoustic emission monitoring of steel railroad bridges," Materials Evaluation, vol. 50, no. 7, pp. 883-887, 1992.
[5] C. U. Grosse, F. Finck, J. H. Kurz, and H. W. Reinhardt, "Improvements of AE technique using wavelet algorithms, coherence functions and automatic data analysis," Construction and Building Materials, vol. 18, no. 3, pp. 203-213, 2004.
[6] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, Springer, New York, NY, USA, 2001.
[7] N. Iverson, C.-S. Kao, and J. F. Labuz, "Clustering analysis of AE in rock," Journal of Acoustic Emission, vol. 25, pp. 364-372, 2007.
[8] S. Theodoridis and K. Koutroumbas, Pattern Recognition, Academic Press, New York, NY, USA, 2nd edition, 2003.
[9] M. Y. Park, T. Hastie, and R. Tibshirani, "Averaged gene expressions for regression," Biostatistics, vol. 8, no. 2, pp. 212-227, 2007.
[10] A. E. Cetin, T. C. Pearson, and A. H. Tewfik, "Classification of closed and open shell pistachio nuts using principal component analysis of impact acoustics," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), pp. 677-680, May 2004.
[11] N. Saito, Local feature extraction and its applications using a library of bases, Ph.D. thesis, Department of Mathematics, Yale University, New Haven, Conn, USA, December 1994.
[12] N. F. Ince, F. Goksu, A. H. Tewfik, I. Onaran, A. E. Cetin, and T. Pearson, "Discrimination between closed and open-shell (Turkish) pistachio nuts using undecimated wavelet packet transform," Biological Engineering Journal, American Society of Agricultural and Biological Engineers (ASABE), vol. 1, no. 2, pp. 159-172, 2008.
[13] K. P. Burnham and D. R. Anderson, Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, Springer, New York, NY, USA, 2nd edition, 2002.
[14] J. C. Platt, "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods," in Advances in Large Margin Classifiers, A. Smola, P. Bartlett, B. Schölkopf, and D. Schuurmans, Eds., MIT Press, Cambridge, Mass, USA, 1999.
[15] J. H. Kurz, S. Köppel, L. Linzer, B. Schechinger, and C. U. Grosse, "Source localization," in Acoustic Emission Testing: Basics for Research-Applications in Civil Engineering, C. U. Grosse and M. Ohtsu, Eds., chapter 6, Springer, Berlin, Germany, 2008.
[16] K. R. Shah and J. F. Labuz, "Damage mechanisms in stressed rock from acoustic emission," Journal of Geophysical Research, vol. 100, no. 8, pp. 15527-15539, 1995.
