
15 July 2000

Optics Communications 181 (2000) 239–259

www.elsevier.com/locate/optcom

Multi-layer neural network applied to phase and depth recovery from fringe patterns

F.J. Cuevas *, M. Servin, O.N. Stavroudis, R. Rodriguez-Vera

Centro de Investigaciones en Optica, A.C., Apdo. Postal 1-948, Leon, Guanajuato, Mexico

Received 22 November 1999; received in revised form 27 April 2000; accepted 17 May 2000

Abstract

A multi-layer neural network (MLNN) is used to carry out calibration processes in fringe projection profilometry in which explicit knowledge of the experimental set-up parameters is not required. The MLNN is trained using the fringe-pattern irradiance and the height directional gradients provided by a calibration object. After the MLNN has been trained, profilometric height data are estimated from the fringe patterns projected onto the test object. The MLNN method works adequately on an open fringe pattern, but it can be extended to closed fringe patterns. In the proposed technique, edge effects do not appear when the field of view is limited in the fringe pattern. To show the application of the MLNN method, three different experiments are presented: (a) shape determination of a spherical optical surface; (b) optical phase calculation from a computer-simulated closed fringe pattern; and (c) height determination of a real surface target. An analysis is also made of how noise, spatial carrier frequencies, and different training sets affect the MLNN performance. © 2000 Elsevier Science B.V. All rights reserved.

PACS: 06.20.F; 84.35; 42.30.R; 42.30.T

Keywords: Profilometry; Calibration; Neural network; Fringe analysis; Phase measurement; Phase retrieval

1. Introduction

This paper is concerned with the application of neural networks to calibration processes in profilometry; the approach can be extended to other optical metrology techniques in which fringe images are used for measurement tasks. In these cases the fringe images are recorded digitally. The detected irradiance of the fringes is given by:

C(x, y) = a(x, y) + b(x, y) cos(ω_x x + φ(x, y)),   (1)

where a(x, y) is the background irradiance; b(x, y) is the reflectivity of the underlying surface; and ω_x is the fundamental carrier frequency of the fringe pattern. Both a(x, y) and b(x, y) are considered to vary slowly. To be determined is the phase term φ(x, y). The purpose of optical metrology techniques such as the spatial synchronous method [1,2], the

* Corresponding author. Tel.: +52-47-731018; fax: +52-47-175000; e-mail: fjcuevas@foton.cio.mx

0030-4018/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved.
PII: S0030-4018(00)00765-3

Fourier method [3–7], and the phase-locked loop method [8–13], is to analyze fringe patterns to recover the phase term φ(x, y) in radians. These techniques analyze the digitized fringe pattern and detect the spatial phase variations due to the physical quantity being measured.
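Later in the paper (Eqs. (25) and (29)) the simulations take the background and modulation as the constants a = 128 and b = 127. Under that simplification, a fringe pattern following Eq. (1) can be synthesized in a few lines; the sketch below is illustrative only (the function name and parameter defaults are ours, not from the paper):

```python
import numpy as np

def fringe_pattern(phase, omega_x=0.85, a=128.0, b=127.0):
    """Synthesize the irradiance of Eq. (1): C = a + b*cos(omega_x*x + phi).

    `phase` is a 2-D array phi(x, y); a and b are taken constant here,
    although Eq. (1) allows them to vary slowly over the image.
    """
    ny, nx = phase.shape
    x = np.arange(nx)[np.newaxis, :]   # the column index plays the role of x
    return a + b * np.cos(omega_x * x + phase)

# Example: a parabolic phase over a 256 x 256 grid (cf. Eqs. (25)-(26))
yy, xx = np.mgrid[0:256, 0:256]
phi = 25 - ((xx - 128.0) ** 2 + (yy - 128.0) ** 2) * 0.0025
C = fringe_pattern(phi, omega_x=0.1)
```

With a = 128 and b = 127 the synthesized irradiance stays within the 8-bit range [1, 255], matching a digitized camera image.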
To accomplish the calibration process, that is, the conversion of phase (in radians) into physical measurements, the parameters of the optical experimental set-up must be known, which is a difficult task. For example, in the profilometric case, where a projected fringe pattern is analyzed, the conversion process must use optical set-up parameters such as the focal distance, the reference-plane location, the projected fringe-pattern frequency, and the camera-projector angle, among others. In addition, the geometric aberrations of the optical components, such as spherical aberration, coma, astigmatism, and radial distortion, should be considered.
We applied a multi-layer neural network (MLNN) to carry out the calibration process and determine the physical measurements directly from the irradiance of the fringe image, without requiring explicit knowledge of the set-up parameters. To accomplish the calibration, the MLNN is trained with a fringe pattern obtained from a previously measured object. The MLNN technique is appropriate in profilometric applications where controlled structured light is used. In optical profilometry the object shape is determined from structured light projected over a test object. In this case the MLNN inputs are the irradiance values obtained through a two-dimensional window, and the desired MLNN outputs are the height gradients of the target object. The learning process consists of adjusting the parameters of the MLNN (the weights) to minimize a cost function in the least-squares sense; the calibration is thus carried out during the training process. After the MLNN has been trained, it can be used to recover depth from fringe patterns of test objects.
Several techniques exist for determining physical measurements from a fringe pattern. These techniques normally involve a phase-detection step. Among them are the spatial synchronous method [1,2], the Fourier method [3–7], and the phase-locked loop method [8–13]. They have been used in several applications [14–20]. The most important step is to map phase to physical quantities in a flexible manner. The phase information is then used with a calibration process to obtain physical measurements such as surface shape, temperature distribution, refractive index, and ray deflections.
In spatial synchronous detection [1,2] the phase is calculated by first multiplying the fringe pattern by the sine and cosine of the carrier signal. A low-pass filter is then applied to both signals to remove the high-frequency terms. The ratio of these two signals is the tangent of φ(x, y). The phase given by this ratio is 2π-wrapped because of the arctangent function involved [21–26]. The cut-off frequency of the low-pass filter is a difficult parameter to determine, which is quite a problem with wide-spectrum, noisy fringe patterns.
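As a rough illustration of these steps (not the authors' code), the sketch below multiplies a fringe pattern by the carrier's sine and cosine, applies a simple moving-average filter as a stand-in for the low-pass filter, and recovers a wrapped phase. The filter `size` plays the role of the hard-to-choose cut-off:

```python
import numpy as np

def box_lowpass(img, size):
    """Separable moving-average filter, used here as a simple low-pass."""
    k = np.ones(size) / size
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, out)

def synchronous_demodulation(C, omega_x, size=15):
    """Spatial synchronous detection: multiply by sin/cos of the carrier,
    low-pass both products, and take minus the arctangent of their ratio
    (the minus sign follows from cos(wx+phi)*sin(wx) -> -(b/2)*sin(phi)
    after filtering).  The result is wrapped to [-pi, pi].
    """
    nx = C.shape[1]
    x = np.arange(nx)[np.newaxis, :]
    s = box_lowpass(C * np.sin(omega_x * x), size)
    c = box_lowpass(C * np.cos(omega_x * x), size)
    return -np.arctan2(s, c)
```

Away from the image borders the recovered phase approximates φ(x, y); the residual ripple comes from imperfect suppression of the carrier and its second harmonic by the box filter.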
In the Fourier transform method [3–7] the phase is detected using a Fast Fourier Transform (FFT). The fringe pattern is first transformed and then processed by a bandpass quadrature filter centered at the spatial carrier frequency, thus isolating the fundamental frequency term. The inverse FFT is then applied to obtain φ(x, y). This phase is also wrapped in the range [−π, π] due to the arctan(·) function involved in the phase estimation process.
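A minimal sketch of this procedure (our illustrative reading, with an assumed half-width for the bandpass filter) is:

```python
import numpy as np

def fourier_demodulation(C, omega_x):
    """Fourier-method sketch: FFT along x, keep a one-sided band around the
    carrier (a crude bandpass quadrature filter), inverse-FFT, and take the
    angle after removing the carrier.  Returns a phase wrapped to [-pi, pi].
    """
    ny, nx = C.shape
    F = np.fft.fft(C, axis=1)
    freqs = 2 * np.pi * np.fft.fftfreq(nx)      # angular frequency (rad/pixel)
    # one-sided band centred on +omega_x; the half-width is an assumed choice
    band = np.abs(freqs - omega_x) < omega_x / 2
    analytic = np.fft.ifft(F * band[np.newaxis, :], axis=1)
    x = np.arange(nx)[np.newaxis, :]
    return np.angle(analytic * np.exp(-1j * omega_x * x))
```

Keeping only the positive-frequency lobe turns the real fringe term into a complex exponential proportional to e^{i(ω_x x + φ)}, whose angle, after the carrier is removed, is the wrapped phase.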
In the phase-locked loop technique [8–13], the phase is estimated by following the phase changes of the input signal, varying the phase of a computer-simulated voltage-controlled oscillator (VCO) so that the phase error between the fringe pattern and the VCO signal vanishes. The phase-locked loop technique requires an appropriate value for the loop gain parameter, which is occasionally also difficult to determine.
In the conventional techniques mentioned above, a calibration process is required to determine the physical measurements. For example, in an ideal profilometric case, as is traditionally assumed, where all experimental set-up parameters are well known, a pinhole camera is used, and there are neither aberrations nor optical distortions, it can easily be analyzed how to convert (map) phase to height. These conditions do not hold in a real situation. This is illustrated by the profilometric experimental set-up shown in Fig. 1. Structured monochromatic light (a grating) is projected

Fig. 1. A fringe projection experimental set-up.

over a test object located at the reference plane, and the scene is frame-grabbed using a pinhole camera. The reference plane is considered to be located a distance d₀ from the pinhole camera and fringe projector.

Fig. 2. Structure of the artificial neuron used in the multi-layer neural network (MLNN).

Fig. 3. Three-layer neural network topology used to recover the phase and/or depth.

The optical axes of the projector and the pinhole camera are parallel and separated by a distance d₁. Both optical axes are normal to the reference plane. The coordinate origin of the system can be located at the pinhole camera, with the z-axis placed along its optical axis.

The reference plane is considered to have height h(x, y) = 0 for all x and y. The projected ruling can be considered a regular, constant fringe pattern if the distance d₀ is large compared with d₁. The phase (in radians) of the projected fringe pattern can then be obtained using the conventional phase-demodulation techniques mentioned above (the Fourier, spatial synchronous, and phase-locked loop methods).

To carry out the phase-to-height conversion, Fig. 1 can be analyzed. The phase φ_C at an object point C has the same value as the phase φ_A at point A. Moreover, point C on the object and point B on the reference plane are recorded by the CCD detector array at the same point B′. The distance AB is then given by [4–6]:

AB = (φ_B − φ_C) / (2π f),   (2)

where f is the spatial frequency of the projected grating on the reference plane and φ_B is the phase value at point B. From Fig. 1, it can be seen that triangles

Fig. 4. An example of an irradiance window located at (x, y) in the fringe-pattern image, used to train the MLNN.

Fig. 5. (a) Interferogram obtained from the WYKO interferometer, used in the MLNN training process. (b) Optical path difference used in the MLNN training process.

FPC and ABC are similar; the height of the analyzed point C can be expressed as:

h(x, y) = (φ_B − φ_C) d₀ / (2π f d₁ + φ_B − φ_C).   (3)

It should be stressed that the above equation can be used to accomplish the calibration process (the phase-to-height conversion) when a pinhole camera is assumed and the test object is located far from the projector-camera system. However, when a real experimental set-up with optical components is used, several problems arise. Under these conditions, three main factors give rise to calibration problems: (1) the diverging fringe projection; (2) the video-camera perspective distortion; and (3) the use of crossed optical-axes geometry. These circumstances have non-linear effects on the recovered phase [27].
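Under the ideal pinhole assumptions, Eq. (3) reduces to a one-line conversion. A hypothetical helper (names and argument conventions are ours) might look like:

```python
import math

def phase_to_height(phi_B, phi_C, f, d0, d1):
    """Ideal pinhole-model conversion of a phase difference to height,
    Eq. (3): h = (phi_B - phi_C) * d0 / (2*pi*f*d1 + phi_B - phi_C).

    f is the spatial frequency of the projected grating on the reference
    plane, d0 the camera-to-plane distance, d1 the camera-projector
    separation.  All these set-up parameters must be known exactly, which
    is precisely the requirement the MLNN calibration avoids.
    """
    dphi = phi_B - phi_C
    return dphi * d0 / (2 * math.pi * f * d1 + dphi)
```

For instance, when the phase difference equals 2πf d₁ the formula gives h = d₀/2, and it tends to d₀ as the phase difference grows without bound.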
Here we propose a calibration technique using an MLNN that computes the local height gradient directly from the local irradiance of the fringe pattern. The MLNN is used to demodulate open fringe

Fig. 6. (a) Interferogram used to test the MLNN system. (b) Optical path difference approximated by the MLNN.

patterns, but it can be extended to work with closed fringe patterns. Because the MLNN system estimates the gradient, it must also include an integration procedure as a final step in the height-recovery process. The MLNN technique does not produce edge effects if a mask is placed over the fringe pattern. The technique is demonstrated in three different applications: (a) testing a spherical optical surface; (b) optical phase calculation from a computer-simulated fringe pattern with closed fringes; and (c) height determination in a profilometric application.

Fig. 7. (a) Computer-simulated interferogram with closed fringes, generated using Eqs. (25) and (26). (b) The phase of the fringe pattern of Fig. 7a.


2. The neural network model

An MLNN is composed of several layers of interconnected artificial neurons. The structure of an artificial neuron [28–37] is shown in Fig. 2. Each input I_k is multiplied by a corresponding weight w_k. The neuron's output is obtained by adding all weighted inputs and passing the sum through a non-linear activation function F.

Fig. 8. (a) Simulated interferogram generated by the addition of two Gaussians (Eq. (27)). (b) Phase recovery obtained by the trained MLNN from the fringe pattern in (a). (c) Absolute error in radians between the computer-generated phase and the MLNN phase recovery.

Fig. 8 (continued).

In general, F is a sigmoid function, although it is possible to employ another kind of activation function [35]. Therefore, the output of a neuron is given by the following equations:

O = Σ_{k=1}^{N} w_k I_k,   (4)

and the activation function is

O′ = F(O) = 1 / (1 + e^{−O}).   (5)

The sigmoid function O′ varies from a near-zero value for large negative excitations to a value approaching unity for large positive excitations. The sigmoid function has the advantage of having a simple derivative and of providing the automatic gain control used in implementing the back-propagation algorithm [29,30]. The derivative of the sigmoid function is:

∂F(O)/∂O = F(O)(1 − F(O)).   (6)

Obviously, one neuron cannot solve complicated problems; therefore we use the MLNN topology shown in Fig. 3. A three-layer neural network is usually employed. The first layer (the input layer) is used only to distribute the R inputs to the neurons in the next layer, the hidden layer, which is composed of J sigmoid neurons (Eqs. (4) and (5)). The output layer has K sigmoid neurons and gives the output of the neural network.

Using an MLNN requires a more general notation among the neurons in the network. The output of a neuron is given by the following equations:

O_q^p = Σ_{k=1}^{R} w_kq^p I_k^p,   (7)

and

I_k^p = F(O_k^{p−1}) = 1 / (1 + e^{−O_k^{p−1}}),   (8)

where I_k^p = F(O_k^{p−1}) is the input to the p-th layer from the k-th neuron of the (p−1)-st layer of the multi-layer neural network, w_kq^p is the weighting factor connecting the k-th neuron in the (p−1)-st layer with the q-th neuron in the p-th layer, and O_q^p is the intermediate (unthresholded) output of the q-th neuron in the p-th layer.
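A minimal forward pass matching Eqs. (4)–(8) and the three-layer topology of Fig. 3 can be sketched as follows; the class name, the bias-free weighting, and the weight-array shapes are our illustrative choices:

```python
import numpy as np

def sigmoid(o):
    # Eq. (5): F(O) = 1 / (1 + exp(-O))
    return 1.0 / (1.0 + np.exp(-o))

class MLNN:
    """Minimal three-layer network: R inputs -> J hidden sigmoid neurons
    -> K output sigmoid neurons (Eqs. (4)-(8)).  Bias terms are omitted,
    following the equations as printed; a practical implementation would
    normally add them.
    """
    def __init__(self, R=25, J=25, K=2, rng=None):
        rng = np.random.default_rng(rng)
        # initial weights uniform in [-0.5, 0.5], as in the training section
        self.w_hidden = rng.uniform(-0.5, 0.5, size=(R, J))
        self.w_out = rng.uniform(-0.5, 0.5, size=(J, K))

    def forward(self, inputs):
        # Eq. (7): weighted sums into the hidden layer, squashed by Eq. (8)
        hidden = sigmoid(inputs @ self.w_hidden)
        # the K output neurons are the same sigmoid units (Eqs. (4)-(5))
        return sigmoid(hidden @ self.w_out), hidden

# a 5x5 irradiance window flattened to R = 25 inputs
window = np.random.default_rng(1).uniform(0, 1, 25)
outputs, _ = MLNN(rng=0).forward(window)
```

With K = 2, the two outputs play the role of the estimated height gradients Δh′_x and Δh′_y described below.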
In neural systems, when the training process is executed, the weights are adjusted to minimize the squared error between the target outputs and the MLNN outputs (the back-propagation algorithm [29,30]). To

Fig. 9. (a) Simulated interferogram generated by the subtraction of two Gaussians (Eq. (28)). (b) Phase recovery obtained by the trained MLNN from (a). (c) Absolute error in radians between the computer-generated phase and the MLNN phase recovery.

Fig. 9 (continued).

accomplish the training process, in which the weights are adjusted, a training set S must be formed. The training set S in this case is formed by training pair vectors s_{x,y}, each containing a small region of the image (W(x, y)), obtained by projecting fringes over the calibrated object, together with its local directional height gradients (ΔH(x, y)). Each point (x, y) in the fringe image then has a related training pair vector s_{x,y}, that is:

S = ⋃_{x=1}^{X} ⋃_{y=1}^{Y} s_{x,y} m(x, y),   (9)

where

s_{x,y} = {W(x, y), ΔH(x, y)},   (10)

and

ΔH(x, y) = (∂h(x, y)/∂x, ∂h(x, y)/∂y).   (11)

The fringe pattern is digitized using X rows and Y columns, and the window W(x, y) is formed from the irradiance values of an M × M neighborhood centered on pixel (x, y) of the fringe-pattern image. The function m(x, y) is a binary mask that marks the valid area of the fringe pattern used to form the training set: m(x, y) is set to 1 if the pixel at (x, y) is valid (there is a valid fringe window W(x, y) that can be considered in the training process) and 0 otherwise. The height directional gradients are approximated by differences in the x and y directions:

∂h(x, y)/∂x ≈ Δh_x(x, y) = Δh₁(x, y) = [h(x, y) − h(x − 1, y)] / r,   (12)

∂h(x, y)/∂y ≈ Δh_y(x, y) = Δh₂(x, y) = [h(x, y) − h(x, y − 1)] / r,   (13)

where h(x, y) represents the height obtained from the calibration object. The different fields have been normalized to the range [0,1] using the constant r in Eqs. (12) and (13). In order to use indexed variables in the back-propagation algorithm [29,30], we have made the equivalences Δh_x(x, y) = Δh₁(x, y) and Δh_y(x, y) = Δh₂(x, y), which represent the two target outputs in the learning process (see Fig. 3).
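The construction of the training pairs of Eqs. (9)–(13) can be sketched as below; this is an illustrative reading in which the mask is simply "the full M × M window, and the backward differences, fit inside the image":

```python
import numpy as np

def build_training_set(I, h, M=5, r=1.0):
    """Form training pairs per Eqs. (9)-(13): for every pixel whose M x M
    irradiance window W(x, y) fits inside the image, pair the flattened
    window with the backward-difference height gradients
    (h(x,y) - h(x-1,y))/r and (h(x,y) - h(x,y-1))/r.
    `r` is the normalization constant of Eqs. (12)-(13).
    """
    X, Y = I.shape
    half = M // 2
    windows, targets = [], []
    for x in range(half + 1, X - half):      # +1 so the differences exist
        for y in range(half + 1, Y - half):
            W = I[x - half:x + half + 1, y - half:y + half + 1]
            dhx = (h[x, y] - h[x - 1, y]) / r
            dhy = (h[x, y] - h[x, y - 1]) / r
            windows.append(W.ravel())
            targets.append((dhx, dhy))
    return np.array(windows), np.array(targets)
```

Each returned row pairs one flattened 5 × 5 irradiance window (the 25 MLNN inputs) with its two target gradients (the 2 MLNN outputs).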
We established the following error function, which is to be minimized by adjusting the weights:

U_ΔH = Σ_{x=1}^{X} Σ_{y=1}^{Y} [ (Δh_x(x, y) − Δh′_x(x, y))² + (Δh_y(x, y) − Δh′_y(x, y))² ] m(x, y),   (14)

where Δh′_x(x, y) and Δh′_y(x, y) are the two outputs of the MLNN, which estimate the height gradients at the point (x, y) when the irradiance window W(x, y) is propagated through the MLNN (see Figs. 3 and 4), and m(x, y) is the binary mask where data are valid. As with the target outputs, we have made the equivalences Δh′₁(x, y) = Δh′_x(x, y) and Δh′₂(x, y) = Δh′_y(x, y) to facilitate the use of indexes in the back-propagation algorithm. The neural network training process modifies the weights until the quadratic error function U_ΔH is minimized. We use a fixed-step gradient descent method to optimize the function U_ΔH. The weights in the output layer, w_jk^s, are given by:

w_jk^s(t + 1) = w_jk^s(t) + η ∂U_ΔH/∂w_jk^s,   j ∈ [1, J], k ∈ [1, 2],   (15)

where

∂U_ΔH/∂w_jk^s = (Δh_k − Δh′_k) (∂Δh′_k/∂w_jk^s) sigm(O_j^q),   (16)

and where

∂Δh′_k/∂w_jk^s = sigm(O_k^s) (1 − sigm(O_k^s)).   (17)

The values O_j^q and O_k^s are the neural outputs in the hidden (superscript q) and output (superscript s) layers, respectively. The parameter η is the learning rate, which determines the size of the step in the minimization algorithm. Its value should be around 0.1 to achieve good error reduction and precise height fitting in the recovery procedure. We use a sigmoid as activation function, although it is possible to employ a different function [35].

The evolution of the weights of the hidden layer, w_ij^q, can be expressed as:

w_ij^q(t + 1) = w_ij^q(t) + η ∂U_ΔH/∂w_ij^q,   i ∈ [1, R], j ∈ [1, J],   (18)

with

∂U_ΔH/∂w_ij^q = (∂ sigm(O_j^q)/∂w_ij^q) Σ_{k=1}^{2} I_i δ_k^s w_jk^s,   (19)

and where

δ_k^s = (Δh_k − Δh′_k) ∂Δh′_k/∂w_jk^s.   (20)

As a first step in the MLNN training process, the weights are initialized with random numbers in the range [−0.5, 0.5], and the binary mask m(x, y) is set to 1 if the pixel at (x, y) is valid (there is a valid fringe pattern that can be considered in the training process) and 0 otherwise. An initial valid location (x, y) (where m(x, y) = 1) is then selected randomly over the fringe pattern. A window sample W(x, y) of the fringe pattern is the input to the MLNN, and the weights are adjusted depending on the MLNN output errors (Eqs. (15) and (18)). The training process is repeated, selecting another valid location over the fringe pattern (where m(x, y) = 1), and is finished when an average error μ of about 0.5% is obtained in the normalized outputs.

The value of μ is calculated with

μ = (1/T) Σ_{x=1}^{X} Σ_{y=1}^{Y} ( |Δh_x(x, y) − Δh′_x(x, y)| / |Δh_x(x, y)| + |Δh_y(x, y) − Δh′_y(x, y)| / |Δh_y(x, y)| ) × m(x, y) × 100%,   (22)

where T is the total number of valid pixels (where m(x, y) = 1) within the image. After the MLNN has been trained, it is used to approximate phase or other physical measurements from a test fringe pattern. The approximation is derived from the MLNN's output by sequentially scanning the fringe pattern.
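The stochastic update loop of Eqs. (14)–(20) is standard on-line back-propagation. A compact sketch, using the usual delta-rule signs (which the paper's "+η" convention absorbs into the error derivative), is:

```python
import numpy as np

def sigmoid(o):
    return 1.0 / (1.0 + np.exp(-o))

def train_mlnn(windows, targets, J=25, eta=0.1, iters=20000, seed=0):
    """On-line back-propagation sketch for the two-output network:
    pick a random training sample, propagate it forward (Eqs. (7)-(8)),
    and update output- and hidden-layer weights with learning rate eta
    (Eqs. (15)-(20)).  Bias terms are omitted, as in the equations.
    """
    rng = np.random.default_rng(seed)
    R, K = windows.shape[1], targets.shape[1]
    w_h = rng.uniform(-0.5, 0.5, (R, J))   # hidden-layer weights
    w_o = rng.uniform(-0.5, 0.5, (J, K))   # output-layer weights
    for _ in range(iters):
        i = rng.integers(len(windows))
        x, t = windows[i], targets[i]
        hid = sigmoid(x @ w_h)
        out = sigmoid(hid @ w_o)
        # delta terms: error times sigmoid derivative (Eqs. (16)-(17), (20))
        delta_o = (t - out) * out * (1 - out)
        # back-propagated hidden deltas (Eq. (19))
        delta_h = (w_o @ delta_o) * hid * (1 - hid)
        w_o += eta * np.outer(hid, delta_o)   # Eq. (15)
        w_h += eta * np.outer(x, delta_h)     # Eq. (18)
    return w_h, w_o
```

In the paper the loop would run over randomly chosen valid pixels of the calibration image until the average error μ of Eq. (22) falls to about 0.5%; here a fixed iteration count stands in for that stopping rule.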

3. Relaxation method to integrate the estimated height gradients

Given the approximated discrete gradients ΔH(x, y) = (∂h(x, y)/∂x, ∂h(x, y)/∂y), the height can be estimated using a path-independent integration. We use a least-squares optimization to find the surface that best fits the noisy height gradient. We can then minimize the error function:

U_f = Σ_{(x,y)} { [f(x, y) − f(x − 1, y) − Δh′_x(x, y)]² + [f(x, y) − f(x, y − 1) − Δh′_y(x, y)]² } m(x, y),   (23)

where Δh′_x(x, y) and Δh′_y(x, y) are the discrete gradients estimated at (x, y) when the window W(x, y) is propagated through the MLNN (the two MLNN outputs), and m(x, y) is the binary mask where data are valid. The function f(x, y) represents the surface that best fits the estimated gradient. We use fixed-step gradient descent to optimize the function U_f:

f_{t+1}(x, y) = f_t(x, y) − ξ ∂U_f/∂f(x, y),   (24)

where ξ is the step of the optimization process. We use this recursive equation to obtain the estimated height f(x, y) from the MLNN outputs (Δh′_x(x, y), Δh′_y(x, y)). It is iterated until a pre-established tolerance condition is met.
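A sketch of this relaxation integration follows; the step size and iteration count are illustrative (the step must be small enough for the descent to be stable), and the boundary handling is our assumed reading of Eq. (23):

```python
import numpy as np

def integrate_gradients(dhx, dhy, mask, xi=0.05, iters=2000):
    """Least-squares integration of estimated gradients by fixed-step
    gradient descent on U_f (Eqs. (23)-(24)).  The recovered surface is
    determined only up to an additive constant.
    """
    f = np.zeros_like(dhx, dtype=float)
    for _ in range(iters):
        # residuals of the two backward differences, inside the mask
        rx = np.zeros_like(f)
        ry = np.zeros_like(f)
        rx[1:, :] = (f[1:, :] - f[:-1, :] - dhx[1:, :]) * mask[1:, :]
        ry[:, 1:] = (f[:, 1:] - f[:, :-1] - dhy[:, 1:]) * mask[:, 1:]
        # dU_f/df: each residual pulls on the two pixels it involves
        g = 2 * (rx + ry)
        g[:-1, :] -= 2 * rx[1:, :]
        g[:, :-1] -= 2 * ry[:, 1:]
        f -= xi * g            # Eq. (24)
    return f
```

Because only differences of f appear in U_f, comparisons against a reference surface should be made after removing the mean (the free constant of integration).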

4. Experiments

The MLNN model was used in three different applications: (a) testing a spherical optical surface by calculating optical path differences from an interferogram on which a mask is superimposed; (b) calculating phase from computer-simulated closed-fringe interferograms; and (c) calculating depth in a profilometric application in which the experimental set-up parameters are not required. In addition, in Section 4.4 the MLNN recovery error is analyzed when three different factors are varied: the noise, the carrier frequency, and the training set. In all cases we use a three-layer topology with 25 input neurons, 25 hidden neurons, and 2 output neurons (the height gradients in the x and y directions). The input was a 5 × 5 pixel window (see Fig. 4) that samples the irradiance of the neighborhood pixels around the pixel of interest. The two output neurons yield the x and y gradients of the height. The learning rate η used in the training process was 0.1. The training process was terminated when the average error μ reached 0.5%, which is achieved in the range of 2 000 000 to 3 000 000 iterations. The training time depends on the fringe-image complexity. One iteration is counted each time one pair vector s_{x,y} is used to adjust the weights. This implies that a 256 × 256 full image must be passed around 30 to 45 times through the MLNN to arrive at the established average error (0.5%).

4.1. Testing a spherical optical surface

In this case we used a WYKO interferometer, model 6000-633-NM, to obtain two interferograms from two spherical optical surfaces, digitized using a 256 × 256 pixel array. One interferogram, shown in Fig. 5a, was used to train the three-layer neural network; the interferogram displayed in Fig. 6a was used to test the neural network model.

4.1.1. The training process

The training pair, target interferogram and target optical path difference (OPD), was obtained from the associated interferogram and the OPD data files of the WYKO interferometer. The target OPD, shown in Fig. 5b, was obtained from the training interferogram illustrated in Fig. 5a. With the WYKO software the target OPD was calculated using a five-step phase-shifting technique. The required number of learning iterations was 2 900 000, which is equivalent to propagating the full fringe image 44 times through the MLNN (72 min on a Pentium 200 MHz computer).

4.1.2. The OPD recovery

The trained MLNN was then used to obtain the OPD from the second interferogram (see Fig. 6a). The OPD calculated from the test interferogram is shown in Fig. 6b. Notice that there are no edge effects in the calculated OPD, due to the mask superimposed on the interferogram. The average retrieval error was 0.015 wavelength. The time taken to estimate the phase from the 256 × 256 image was 23 s.

4.2. Closed-fringe interferograms

4.2.1. The training process

The MLNN was used to recover phase from a 256 × 256 computer-simulated interferogram with closed fringes. The simulated interferogram of a parabolic phase function, with a footprint having a diameter of 200 pixels, was used to train the neural network (see Fig. 7a). The fringe pattern was calculated by the following mathematical relationship:

I(x, y) = 128 + 127 cos(0.1 x + h(x, y)),   (25)

where

h(x, y) = 25 − [(x − 128)² + (y − 128)²] · 0.0025.   (26)

The simulated interferogram and its associated phase are shown in Fig. 7a and b, respectively.

4.2.2. The phase recovery

Next, we used the MLNN, trained with the parabolic phase function (Eqs. (25) and (26)), to retrieve the phase from the following two computer-simulated interferograms, generated by:

h₁(x, y) = 25 exp(−[(x − 78)² + (y − 78)²] / 50²) + 25 exp(−[(x − 178)² + (y − 178)²] / 50²),   (27)

and

h₂(x, y) = 25 exp(−[(x − 178)² + (y − 178)²] / 50²) − 25 exp(−[(x − 78)² + (y − 78)²] / 50²).   (28)

The surfaces were generated in the computer by a linear combination of two Gaussian functions that differ only in sign. The fringe patterns related to Eqs. (27) and (28) are shown in Fig. 8a and Fig. 9a, respectively. Figs. 8b and 9b show the MLNN phase recovery from these fringe patterns. Note in Figs. 8b and 9b that the correct relative sign was found by the MLNN from the fringe patterns. The plots of absolute error are illustrated in Figs. 8c and 9c. A total of 2 100 000 iterations was used to obtain this error rate. The training process took 51 min with a learning rate of η = 0.1 on a Pentium 200 MHz computer, while the MLNN retrieval time was only 18 s for a 256 × 256 image.

4.3. Fringe projection profilometry

Another interesting application of the MLNN system is finding the surface profile from projected fringe patterns. A 100 lines/inch Ronchi grating was mounted in a Kodak Ektagraphic slide projector with an f/3.5 zoom lens. The angular frequency of the fringes projected over the reference plane was 2.15 rad/mm. The projected fringe pattern was imaged by a COHU-4815 camera equipped with a Computar TV zoom lens. The fringe image was then digitized with a VS-100-AT card from Imaging Technology at a resolution of 256 × 256 pixels.

4.3.1. The training process

Fig. 10a illustrates the fringe pattern projected over the target object, which was used to train the MLNN. The target was a hemispherical object whose profile was measured mechanically using a ZEISS CMM C400 machine. Fig. 10b shows the mechanically measured surface onto which the fringe pattern of Fig. 10a was projected.

Fig. 10. (a) Linear grating projected over a hemispherical calibration object for the profilometry application. (b) Mechanical measurement from the ZEISS C400 machine.

Both the measured surface and the fringe pattern were used to train the MLNN. Notice that in the training process (the calibration), details of the experimental set-up were not required. The MLNN was trained until a 0.5% average output error was reached; in this case, 2 500 000 iterations were required to reach the established average error.
4.3.2. The depth recovery

The MLNN was then used to determine the surface of a pyramid (Fig. 11b) from the fringe-projection image shown in Fig. 11a. In Fig. 11c the measurements of the mechanical and MLNN systems are compared. The average error was 0.088 cm. The MLNN training time was 62 min, while the recovery time was only 26 s for a 256 × 256 image on a Pentium 200 MHz computer.

4.4. The MLNN recovery error analysis

Although the MLNN performance is high in the experiments shown in the previous sections, it deteriorates with noise, spatial carrier-frequency variations, and an improper training set. This section shows how these factors influence the object-height recovery process in the MLNN technique.

Fig. 11. (a) Linear grating projected over the test pyramidal object. (b) Depth recovery by the MLNN from (a). (c) Comparison between the mechanical measurement (solid line) and the MLNN depth recovery (dotted line).

Fig. 11 (continued).

4.4.1. Noise level

To check how noise affects the height (or phase) recovery, the MLNN was trained using a simulated parabolic object whose related fringe pattern can be expressed by:

I(x, y) = 128 + 127 cos(0.85 x + h(x, y)),   (29)

where

h(x, y) = 60 − [(x − 128)² + (y − 128)²] · 0.006.   (30)

A noisy fringe pattern from a conic test object was then used to recover the height term, which can be expressed as:

I(x, y) = 128 + 127 cos(0.85 x + h(x, y) + n(x, y)),   (31)

where

h(x, y) = [√((x − 128)² + (y − 128)²) / 100] · 60,   (32)

n(x, y) is noise with a uniform distribution, and the carrier frequency is ω_x = 0.85 rad/pixel. Fig. 12 shows how the noise level influences the average error when the simulated conic object is recovered.

Fig. 12. The MLNN recovery error when uniform phase noise is added.

4.4.2. Carrier frequency variation

The purpose of the following experiment was to analyze how the estimation error varies with the carrier frequency ω_x. The MLNN was trained with the fringe image related to the parabolic object (Eqs. (29) and (30)) and was used to recover the conic object (Eq. (32)) from fringe images with different carrier frequencies ω_x (the fringes appearing finer and finer). Fig. 13 shows the estimation errors obtained when the carrier frequency ω_x is varied. The minimum is obtained when the test object has the same carrier frequency as the training object (ω_x = 0.85 rad/pixel).

Fig. 13. The MLNN recovery error when the carrier frequency ω_x is varied.

4.4.3. Using different training sets

The estimation error was calculated when the simulated conic object (Eq. (32)) was recovered using different training sets provided by fringe images of four different calibration objects. These learning objects can be expressed by the following relationships:

h₁(x, y) = 60 − [(x − 128)² + (y − 128)²] · 0.006,   (33)

h₂(x, y) = 60 exp(−[(x − 128)² + (y − 128)²] / 50²),   (34)

h₃(x, y) = 60 exp(−[(x − 128)² + (y − 128)²] / 100²),   (35)

and

h₄(x, y) = 60 {exp(−[(x − 78)² + (y − 78)²] / 50²) − exp(−[(x − 178)² + (y − 178)²] / 50²)}.   (36)

Table 1

Learning object    % Error (μ)
h₁(x, y)           1.416
h₂(x, y)           0.980
h₃(x, y)           2.086
h₄(x, y)           1.338

Table 1 shows how the recovery of the test (conic) object varies with the training object. Notice that the smallest error is obtained for the function h₂(x, y), because its geometrical shape is the most similar to that of the conic object of Eq. (32). On the other hand, when the neural network is trained with h₃(x, y), the maximum error is obtained, since this function is an extended Gaussian that differs most from the conic object expressed by Eq. (32). These experiments summarize how the MLNN technique is affected by noise, carrier frequency, and the use of different training sets.
5. MLNN limitations and further considerations
Three different applications where the MLNN system could be used were described in the last section; in addition, the MLNN recovery error was analyzed when the noise, the fringe carrier frequency, and the training set were varied. It is clear that training images should be selected to fit the requirements of the specific problem to which the MLNN is to be applied, and they should have similar fringe carrier frequencies. In profilometry this drawback is overcome, since the structured-light projection characteristics can be controlled in the experimental set-up and the fringe carrier frequency can be easily adjusted by the user.
A point of interest in the MLNN learning process is overfitting [36,37]. There is an optimal number of iterations and tolerance error needed to obtain the best fit of the test images. Fig. 14 shows the comparative average error between the training image (solid line) and the test image (dotted line) as a function of the number of iterations over the training set for the profilometric application. It can be observed that there is an optimal number of iterations after which the process should be terminated, which was 2 100 000 iterations (fixing η = 0.1). From this point on, the average error increases when the test object height is estimated (see dotted line in Fig. 14).
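The stopping rule just described can be sketched as a patience-based early-stopping loop. The error curves below are toy stand-ins (not the paper's measured curves), shaped only so that the test-image error turns upward while the training error keeps falling:

```python
def train_error(t):
    # toy training-image error: keeps decreasing with iteration count t
    return 1.0 / (t + 1)

def val_error(t):
    # toy test-image error: training error plus a generalization gap
    # that grows with t, so the curve has a minimum and then rises
    return train_error(t) + 0.001 * t

best, best_t, bad, patience = float("inf"), 0, 0, 5
for t in range(500):
    v = val_error(t)
    if v < best:                  # test-image error still improving
        best, best_t, bad = v, t, 0
    else:                         # no improvement for `patience` steps:
        bad += 1                  # terminate the training early
        if bad >= patience:
            break
print(best_t)  # iteration count with minimal test-image error
```

In the paper's experiment the analogous minimum occurred at about 2 100 000 iterations; the point of the sketch is only the stopping logic.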
The number of hidden neurons is an important parameter. If too few are used, the approximation error can be high. On the other hand, if there are too many hidden neurons, the training time can increase considerably and overfitting can occur.

Fig. 14. Plots of error as a function of the number of iterations for a given training image (solid line) and test image (dotted line).

Fig. 15. Error evolution as the number of hidden neurons is increased in the MLNN.

Fig. 15 illustrates how the error evolves as the number of hidden neurons is increased in the profilometry application. The training process was stopped after 2 000 000 iterations (45 min). It can be seen that 25 hidden neurons is optimum, since the average error in the recovered test object height increases as more neurons are added (see dotted line in Fig. 15). It is important to stress that although the training process is slow, the MLNN estimation process takes a much shorter time (less than 30 s for a 256 × 256 image).
If we want to increase the MLNN performance, discrete Laplacians can additionally be used as outputs:

Δh_xx(x, y) = [h(x + 1, y) − 2 h(x, y) + h(x − 1, y)] / r²,   (37)

and

Δh_yy(x, y) = [h(x, y + 1) − 2 h(x, y) + h(x, y − 1)] / r²,   (38)

where r² is a normalization constant. Although the Laplacians are not used in the data estimation process in this paper, they supply additional data that make the training process more robust.
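Eqs. (37) and (38) are ordinary second central differences and vectorize directly. The sketch below evaluates them on the interior of a height map; the function name and the interior-only handling are choices of this example, not the paper's code:

```python
import numpy as np

def laplacian_outputs(h, r2=1.0):
    """Second central differences of Eqs. (37)-(38), evaluated on the
    interior points of the height map h; r2 is the normalization
    constant."""
    dxx = (h[2:, 1:-1] - 2.0 * h[1:-1, 1:-1] + h[:-2, 1:-1]) / r2
    dyy = (h[1:-1, 2:] - 2.0 * h[1:-1, 1:-1] + h[1:-1, :-2]) / r2
    return dxx, dyy

# sanity check on h = x^2, whose exact second x-derivative is 2
# and whose second y-derivative is 0
x, y = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
h = (x ** 2).astype(float)
dxx, dyy = laplacian_outputs(h)
print(dxx[0, 0], dyy[0, 0])  # 2.0 0.0
```

The stencil is exact for quadratics, which makes the check above a convenient unit test for any reimplementation.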

6. Conclusions
A multi-layer neural network (MLNN) was used to carry out the calibration process in profilometry, where fringe patterns are used for measurement tasks. Calibration was done by a training process, achieved by minimizing a cost function that compared the MLNN outputs with the target outputs determined by the training set. The irradiance of the fringe pattern was the input to the MLNN; the outputs were the directional differences of the target (learning) object. Explicit knowledge of the experimental set-up parameters, often not available, is not required. Because the MLNN system estimates the gradient of the unknown height, it must also include an integration step as the final stage of height recovery. The MLNN method is appropriate in interferometry and profilometry when the spatial carrier frequency can be controlled by the user.
The MLNN method can be extended to fringe images containing closed fringes. Unlike the Fourier and spatial synchronous methods, this method introduces no edge effects when the fringe pattern is bounded due to the finite extent of the object under analysis. To demonstrate its effectiveness, the MLNN was applied to three different examples: (a) testing a spherical optical surface by calculating optical path differences from a masked interferogram; (b) calculating phase from computer-simulated closed-fringe interferograms; and (c) calculating depth in a profilometric application in which the parameters of the experimental set-up are not explicitly needed. The MLNN recovery error is affected by noise, by carrier frequency variation, and by using different target objects in the training process.
An additional advantage is the substantial increase in the speed of phase and dimensional measurements achievable with neural-network hardware. It could be used in transient events (real-time interferometry and optical metrology), where speed is essential.

Acknowledgements
We are indebted to M.C. Carlos Perez, Dorle Stavroudis, Gonzalo Paez, and M.C. Martha Gutierrez for enlightening and useful discussions during the development of this work. We also acknowledge the support of the Consejo Nacional de Ciencia y Tecnología of Mexico.

References
[1] Y. Ichioka, M. Inuiya, Appl. Opt. 11 (1972) 1507.
[2] K.H. Womack, Opt. Eng. 23 (1984) 391.
[3] M. Takeda, H. Ina, S. Kobayashi, J. Opt. Soc. Am. 72 (1981) 156.
[4] M. Takeda, K. Mutoh, Appl. Opt. 22 (1983) 3977.
[5] W. Zhou, X. Su, J. Mod. Opt. 41 (1994) 89.
[6] J. Lin, X. Su, Opt. Eng. 34 (1995) 3297.
[7] J. Yi, S. Huang, Opt. Lasers Eng. 27 (1997) 493.
[8] M. Servin, R. Rodriguez-Vera, J. Mod. Opt. 40 (1993) 2087.
[9] M. Servin, D. Malacara, R. Rodriguez-Vera, Appl. Opt. 33 (1994) 2589.
[10] R. Rodriguez-Vera, M. Servin, Opt. Laser Technol. 26 (1994) 393.
[11] M. Servin, D. Malacara, F.J. Cuevas, Opt. Eng. 33 (1994) 1193.
[12] M. Servin, R. Rodriguez-Vera, D. Malacara, Opt. Lasers Eng. 23 (1995) 355.
[13] J. Kozlowski, G. Serra, Opt. Eng. 36 (1997) 2025.
[14] X. Zhang, R. Mammone, Opt. Eng. 33 (1994) 4079.
[15] J.H. Saldner, J. Huntley, Opt. Eng. 36 (1997) 610.
[16] C. Joenathan, B.M. Khorana, J. Mod. Opt. 39 (1992) 2075.
[17] W. Chen, Y. Tan, H. Zhao, Opt. Lasers Eng. 25 (1996) 111.
[18] W. Nadeborn, P. Andra, W. Osten, Opt. Lasers Eng. 24 (1996) 245.
[19] D.W. Robinson, G.T. Reid, Interferogram Analysis: Digital Fringe Measurement Techniques, Institute of Physics Publishing, London, England, 1993.
[20] F.J. Cuevas, M. Servin, R. Rodriguez-Vera, Opt. Commun. 163 (1999) 270.
[21] D.J. Bone, Appl. Opt. 30 (1991) 3627.
[22] T.R. Judge, P.J. Bryanston-Cross, Opt. Lasers Eng. 21 (1994) 199.
[23] M. Servin, R. Rodriguez-Vera, A.J. Moore, J. Mod. Opt. 41 (1994) 119.
[24] F. Brémand, Opt. Lasers Eng. 21 (1994) 49.
[25] M. Takeda, T. Abe, Opt. Eng. 35 (1996) 2345.
[26] M. Servin, J.L. Marroquin, D. Malacara, F.J. Cuevas, Appl. Opt. 37 (1998) 1917.
[27] R. Rodriguez-Vera, D. Kerr, F. Mendoza, J. Opt. Soc. Am. A 9 (1992) 2000.
[28] D.E. Rumelhart, G.E. Hinton, R.J. Williams, Learning internal representations by error propagation, in: Parallel Distributed Processing, vol. 1, MIT Press, Cambridge, MA, 1986, pp. 318–362.
[29] P. Wasserman, Backpropagation, in: Neural Computing, Van Nostrand Reinhold, New York, 1989, pp. 43–58.
[30] J.A. Freeman, Neural Networks: Algorithms, Applications and Programming Techniques, Addison-Wesley Publishing, 1998.
[31] M. Servin, F.J. Cuevas, Revista Mexicana de Física 39 (1993) 235.
[32] G. An, Neural Computation 8 (1996) 643.
[33] G. Thimm, P. Moerland, E. Fiesler, Neural Computation 8 (1996) 451.
[34] H. Mills, D.R. Burton, M.J. Lalor, Opt. Lasers Eng. 23 (1995) 331.
[35] S.K. Kenue, Proc. SPIE 1608 (1991) 450.
[36] T. Mitchell, Machine Learning, McGraw-Hill, New York, 1997, pp. 108–112.
[37] R.J. Schalkoff, Artificial Neural Networks, McGraw-Hill, New York, 1997, pp. 194–196.
