4 authors, including:
Manuel Servin
Ramon Rodriguez-Vera
15 July 2000
Abstract
A multi-layer neural network (MLNN) is used to carry out calibration processes in fringe projection profilometry in which explicit knowledge of the experimental set-up parameters is not required. The MLNN is trained using the fringe pattern irradiance and the height directional gradients obtained from a calibration object. After the MLNN has been trained, profilometric height data are estimated from the fringe patterns projected onto the test object. The MLNN method works adequately on an open fringe pattern, but it can be extended to closed fringe patterns. In the proposed technique, edge effects do not appear when the field of view is limited in the fringe pattern. To show the application of the MLNN method, three different experiments are presented: (a) shape determination of a spherical optical surface; (b) optical phase calculation from a computer-simulated closed fringe pattern; and (c) height determination of a real surface target. An analysis is also made of how noise, spatial carrier frequencies, and different training sets affect the MLNN performance.
© 2000 Elsevier Science B.V. All rights reserved.
PACS: 06.20.F; 84.35; 42.30.R; 42.30.T
Keywords: Profilometry; Calibration; Neural network; Fringe analysis; Phase measurement; Phase retrieval
1. Introduction
This paper is concerned with the application of neural networks to calibration processes in profilometry; the approach can be extended to other metrological optical techniques in which fringe images are used for measurement tasks. In these cases the fringe images are
Corresponding author. Tel.: +52-47-731018; fax: +52-47175000; e-mail: fjcuevas@foton.cio.mx
0030-4018/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved.
PII: S0030-4018(00)00765-3
considered that the reference plane is located a distance d_0 from the pinhole camera and fringe projector.
Fig. 2. Structure of the artificial neuron used in the multi-layer neural network (MLNN).
Fig. 3. Three-layer neural network topology used to recover the phase and/or depth.
$$\overline{AB} = \frac{\phi_B - \phi_C}{2\pi f}. \qquad (2)$$
Fig. 4. An example of an irradiance window located at (x, y) in the fringe pattern image used to train the MLNN.
Fig. 5. (a) Interferogram obtained from the WYKO interferometer used in the MLNN training process. (b) Optical path difference used in the MLNN training process.
Triangles FPC and ABC are similar; the height of the analyzed point C can be expressed as:

$$h(x,y) = \frac{\big(\phi_B - \phi_C\big)\, d_0}{2\pi f d_1 + \phi_B - \phi_C}. \qquad (3)$$
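As a numeric check, the triangulation relation of Eq. (3) can be transcribed directly; this is an illustrative sketch (the function name is ours, and d_0, d_1, and f are the set-up geometry symbols used above):

```python
import math

def height_from_phase(phi_b, phi_c, d0, d1, f):
    """Height of the analyzed point C from the phase difference, Eq. (3)."""
    dphi = phi_b - phi_c
    return dphi * d0 / (2.0 * math.pi * f * d1 + dphi)
```

When the two phases coincide the height is zero, and for small phase differences the height grows approximately linearly with the difference.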
Fig. 6. (a) Interferogram used to test the MLNN system. (b) Optical path difference approximated by the MLNN.
Fig. 7. (a) Computer-simulated interferogram with closed fringes using Eqs. (25) and (26). (b) The phase of the fringe pattern of Fig. 7(a).
Fig. 8. (a) Simulated interferogram generated by the addition of two Gaussians (Eq. (27)). (b) Phase recovery obtained by the trained MLNN from the fringe pattern in (a). (c) Absolute error in radians between the computer-generated phase and the MLNN phase recovery.
Fig. 8 (continued).
$$O = \sum_{k} w_k I_k, \qquad (4)$$

which is thresholded by the sigmoid function

$$F(O) = \frac{1}{1 + e^{-O}}, \qquad (5)$$

whose derivative is

$$\frac{\partial F(O)}{\partial O} = F(O)\,\big[1 - F(O)\big]. \qquad (6)$$
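As a minimal sketch (not the authors' code; NumPy is assumed), the single-neuron model of Eqs. (4)-(6) can be written as:

```python
import numpy as np

def sigmoid(o):
    """Sigmoid activation F(O) = 1 / (1 + exp(-O)), Eq. (5)."""
    return 1.0 / (1.0 + np.exp(-o))

def neuron_output(weights, inputs):
    """Thresholded neuron output F(sum_k w_k I_k), Eqs. (4)-(5)."""
    o = np.dot(weights, inputs)      # Eq. (4): weighted sum of the inputs
    return sigmoid(o)

def sigmoid_derivative(o):
    """dF/dO = F(O) (1 - F(O)), Eq. (6); reused later by back-propagation."""
    f = sigmoid(o)
    return f * (1.0 - f)
```

The derivative of Eq. (6) is what makes the sigmoid convenient: it is expressed in terms of the already-computed activation.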
Obviously, one neuron cannot solve complicated problems; we therefore use the MLNN topology shown in Fig. 3. A three-layer neural network is usually employed. The first layer (the input layer) is
$$O_q^p = \sum_{k} w_{kq}^p\, I_k^p, \qquad (7)$$

and

$$I_k^p = F\big(O_k^{p-1}\big) = \frac{1}{1 + e^{-O_k^{p-1}}}, \qquad (8)$$

where I_k^p = F(O_k^{p-1}) is the input to the p-th layer from the k-th neuron of the (p-1)-st layer of the multi-layer neural network, w_{kq}^p is the weighting factor between the k-th neuron in the (p-1)-st layer and the q-th neuron in the p-th layer, and O_q^p is the intermediate (unthresholded) output of the q-th neuron in the p-th layer.
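The layer-by-layer propagation of Eqs. (7) and (8) amounts to alternating a weighted sum with a sigmoid; a minimal sketch follows, in which the weight shapes and the random initialization are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(o):
    """Sigmoid thresholding of Eq. (8)."""
    return 1.0 / (1.0 + np.exp(-o))

def forward(window, w_hidden, w_out):
    """Propagate one flattened irradiance window through a three-layer MLNN.

    w_hidden: (J, R) weights from the R input neurons to the J hidden neurons;
    w_out:    (2, J) weights from the hidden neurons to the two output neurons.
    Each stage is the weighted sum of Eq. (7) followed by the sigmoid of Eq. (8)."""
    hidden = sigmoid(w_hidden @ window)   # hidden-layer activations
    return sigmoid(w_out @ hidden)        # two outputs, each in (0, 1)

# illustrative sizes taken from Section 4: 25 inputs, 25 hidden, 2 outputs
rng = np.random.default_rng(0)
outputs = forward(rng.random(25), rng.normal(size=(25, 25)), rng.normal(size=(2, 25)))
```

The two outputs play the role of the estimated height gradients once the network has been trained.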
In neural systems, when the training process is executed, the weights are adjusted to minimize the squared error between target outputs and MLNN outputs (the back-propagation algorithm [29,30]). To carry out this process, a training set S is built from the calibration data:
Fig. 9. (a) Simulated interferogram generated by the subtraction of two Gaussians (Eq. (28)). (b) Phase recovery obtained by the trained MLNN from (a). (c) Absolute error in radians between the computer-generated phase and the MLNN phase recovery.
Fig. 9 (continued).
$$S = \bigcup_{x,y} \big\{\, s_{x,y}\, m(x,y) \,\big\}, \qquad (9)$$

where

$$s_{x,y} = \big( W(x,y),\; \Delta H(x,y) \big), \qquad (10)$$

and

$$\Delta H(x,y) = \left( \frac{\partial h(x,y)}{\partial x},\; \frac{\partial h(x,y)}{\partial y} \right), \qquad (11)$$

with the partial derivatives approximated by the directional differences

$$\frac{\partial h(x,y)}{\partial x} \approx \Delta h_x(x,y) = \Delta h_1(x,y) = h(x,y) - h(x-1,y), \qquad (12)$$

$$\frac{\partial h(x,y)}{\partial y} \approx \Delta h_y(x,y) = \Delta h_2(x,y) = h(x,y) - h(x,y-1). \qquad (13)$$
$$U_{\Delta H} = \sum_{x=1} \sum_{y=1} \Big[ \big( \Delta h_x(x,y) - \overline{\Delta h}_x(x,y) \big)^2 + \big( \Delta h_y(x,y) - \overline{\Delta h}_y(x,y) \big)^2 \Big]\, m(x,y), \qquad (14)$$

where $\overline{\Delta h}_x(x,y)$ and $\overline{\Delta h}_y(x,y)$ are the two outputs of the MLNN, which estimate the height gradients at point (x,y) when the irradiance window W(x,y) is propagated through the MLNN (see Figs. 3 and 4), and m(x,y) is the binary mask where data are valid. As with the target outputs, we have made the equivalences $\overline{\Delta h}_1(x,y) = \overline{\Delta h}_x(x,y)$ and $\overline{\Delta h}_2(x,y) = \overline{\Delta h}_y(x,y)$ to facilitate the use of indexes in the back-propagation algorithm. The neural network training process modifies the weights until the quadratic error function $U_{\Delta H}$ is minimized. We use a fixed-step gradient descent method to optimize $U_{\Delta H}$. The weights in the output layer, $w_{jk}^s$, are given by:

$$w_{jk}^s(t+1) = w_{jk}^s(t) + \eta\, \frac{\partial U_{\Delta H}}{\partial w_{jk}^s}, \qquad j \in [1,J],\; k \in [1,2], \qquad (15)$$

where

$$\frac{\partial U_{\Delta H}}{\partial w_{jk}^s} = \big( \Delta h_k - \overline{\Delta h}_k \big)\, \frac{\partial \overline{\Delta h}_k}{\partial w_{jk}^s}, \qquad (16)$$

and where

$$\frac{\partial \overline{\Delta h}_k}{\partial w_{jk}^s} = \mathrm{sigm}\big(O_k^s\big)\, \big[ 1 - \mathrm{sigm}\big(O_k^s\big) \big]\, \mathrm{sigm}\big(O_j^q\big). \qquad (17)$$

The weights in the hidden layer, $w_{ij}^q$, are updated as:

$$w_{ij}^q(t+1) = w_{ij}^q(t) + \eta\, \frac{\partial U_{\Delta H}}{\partial w_{ij}^q}, \qquad i \in [1,R],\; j \in [1,J], \qquad (18)$$

with

$$\frac{\partial U_{\Delta H}}{\partial w_{ij}^q} = \frac{\partial\, \mathrm{sigm}\big(O_j^q\big)}{\partial w_{ij}^q} \sum_{k=1}^{2} \delta_k^s\, w_{jk}^s = \mathrm{sigm}\big(O_j^q\big)\, \big[ 1 - \mathrm{sigm}\big(O_j^q\big) \big]\, I_i \sum_{k=1}^{2} \delta_k^s\, w_{jk}^s, \qquad (19)$$

and where

$$\delta_k^s = \big( \Delta h_k - \overline{\Delta h}_k \big)\, \frac{\partial \overline{\Delta h}_k}{\partial O_k^s} = \big( \Delta h_k - \overline{\Delta h}_k \big)\, \mathrm{sigm}\big(O_k^s\big)\, \big[ 1 - \mathrm{sigm}\big(O_k^s\big) \big]. \qquad (20)$$
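A compact sketch of one training update following Eqs. (15)-(20); the vectorized layout, the sign convention of standard back-propagation, and the learning-rate value are our assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(o):
    return 1.0 / (1.0 + np.exp(-o))

def backprop_step(x, target, w_q, w_s, eta=0.1):
    """One fixed-step gradient-descent update in the spirit of Eqs. (15)-(20).

    x:      flattened irradiance window (R inputs)
    target: the two target gradients (dh_x, dh_y)
    w_q:    (J, R) hidden-layer weights; w_s: (2, J) output-layer weights."""
    # forward pass, Eqs. (7)-(8)
    hidden = sigmoid(w_q @ x)
    out = sigmoid(w_s @ hidden)
    # output deltas, Eq. (20): (target - output) times the sigmoid derivative
    delta_s = (target - out) * out * (1.0 - out)
    # hidden deltas, Eq. (19): output deltas back-propagated through w_s
    delta_q = (w_s.T @ delta_s) * hidden * (1.0 - hidden)
    # weight updates, Eqs. (15) and (18)
    w_s = w_s + eta * np.outer(delta_s, hidden)
    w_q = w_q + eta * np.outer(delta_q, x)
    return w_q, w_s, out
```

In training, this update is repeated over all pairs s_{x,y} of the training set until the error function is sufficiently small.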
The average output error is measured as

$$\mu = \frac{1}{T} \sum_{x=1} \sum_{y=1} \left( \frac{\big| \Delta h_x(x,y) - \overline{\Delta h}_x(x,y) \big|}{\big| \Delta h_x(x,y) \big|} + \frac{\big| \Delta h_y(x,y) - \overline{\Delta h}_y(x,y) \big|}{\big| \Delta h_y(x,y) \big|} \right) \times m(x,y) \times 100\%. \qquad (22)$$

The height (or phase) f(x,y) is then recovered by integrating the estimated gradients, minimizing the cost function

$$U_f = \sum_{x,y} \Big\{ \big[ f(x,y) - f(x-1,y) - \overline{\Delta h}_x(x,y) \big]^2 + \big[ f(x,y) - f(x,y-1) - \overline{\Delta h}_y(x,y) \big]^2 \Big\}\, m(x,y), \qquad (23)$$

by means of gradient descent with respect to f(x,y):

$$f_{t+1}(x,y) = f_t(x,y) - \eta\, \frac{\partial U_f}{\partial f(x,y)}. \qquad (24)$$
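The integration step of Eqs. (23)-(24) can be sketched as a gradient-descent loop over the field f; the step size, iteration count, and boundary handling below are illustrative assumptions:

```python
import numpy as np

def integrate_gradients(dhx, dhy, mask, eta=0.2, iters=2000):
    """Recover f(x, y) (up to an additive constant) from estimated directional
    differences by descending the quadratic cost U_f of Eq. (23)."""
    f = np.zeros_like(dhx)
    for _ in range(iters):
        # masked residuals of the two first-difference constraints in Eq. (23)
        rx = np.zeros_like(f)
        ry = np.zeros_like(f)
        rx[1:, :] = (f[1:, :] - f[:-1, :] - dhx[1:, :]) * mask[1:, :]
        ry[:, 1:] = (f[:, 1:] - f[:, :-1] - dhy[:, 1:]) * mask[:, 1:]
        # dU_f/df(x, y): each f(x, y) enters up to four residual terms
        grad = rx + ry
        grad[:-1, :] -= rx[1:, :]
        grad[:, :-1] -= ry[:, 1:]
        f -= eta * grad                  # Eq. (24): fixed-step descent
    return f
```

Because only differences of f are constrained, the recovered field is determined up to a constant offset, which is irrelevant for shape measurement.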
4. Experiments

The MLNN model was used in three different applications: (a) to test a spherical optical surface by calculating optical path differences from an interferogram on which a mask is superimposed; (b) to calculate phase from computer-simulated closed-fringe interferograms; and (c) to calculate depth in a
profilometric application in which experimental set-up parameters are not required. In addition, in Section 4.4 the MLNN recovery error is analyzed when three different factors are varied: noise, carrier frequency, and the training set. In all cases we use a three-layer topology with 25 input neurons, 25 hidden neurons, and 2 output neurons (the height gradients in the x and y directions). The input was a 5 × 5 pixel window (see Fig. 4) that samples the irradiance of the neighbourhood pixels around the pixel of interest. The two output neurons yield the x and y gradients of the height. The learning rate η used in the training process was 0.1. The training process was terminated when the average error μ reached 0.5%, which is achieved in the range of 2,000,000 to 3,000,000 iterations. The training time depends on the fringe image complexity. One iteration is considered to be the use of one pair vector s_{x,y} to adjust the weights. This implies that a 256 × 256 full image must be passed around 30 to 45 times through the MLNN to arrive at the established average error (0.5%).
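The sampling described above (a 5 × 5 irradiance window paired with the two target gradients at its central pixel) can be sketched as follows; the function name and data layout are illustrative, not the authors' code:

```python
import numpy as np

def training_pairs(irradiance, dhx, dhy, mask, half=2):
    """Build training pairs s_xy = (W(x, y), (dh_x, dh_y)) from a calibration
    image: a (2*half + 1)^2 window of irradiance values around each valid
    pixel, paired with the target height gradients at that pixel."""
    pairs = []
    rows, cols = irradiance.shape
    for x in range(half, rows - half):
        for y in range(half, cols - half):
            if mask[x, y]:               # keep only pixels where data are valid
                window = irradiance[x - half:x + half + 1,
                                    y - half:y + half + 1].ravel()
                pairs.append((window, np.array([dhx[x, y], dhy[x, y]])))
    return pairs
```

With half = 2, each window flattens to the 25 inputs of the topology used in the experiments.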
(25)

where

(26)

The simulated interferogram and its associated phase are shown in Fig. 7(a) and (b), respectively.

4.2.2. The phase recovery

Next, we used the MLNN, trained by the parabolic phase function (Eqs. (25) and (26)), to retrieve the phase of the following two computer-simulated interferograms, generated by:

$$h_1(x,y) = 25 \exp\!\left( \frac{-(x-78)^2 - (y-78)^2}{50^2} \right) + 25 \exp\!\left( \frac{-(x-178)^2 - (y-178)^2}{50^2} \right), \qquad (27)$$

and

$$h_2(x,y) = 25 \exp\!\left( \frac{-(x-178)^2 - (y-178)^2}{50^2} \right) - 25 \exp\!\left( \frac{-(x-78)^2 - (y-78)^2}{50^2} \right). \qquad (28)$$

Another interesting application of the MLNN system is finding the surface profile from projected fringe patterns. A 100 lines/inch Ronchi grating was mounted in a Kodak Ektagraphic slide projector with an f/3.5 zoom lens. The angular frequency of the projected fringes over the reference plane was 2.15 rad/mm. The projected fringe pattern was imaged by a COHU-4815 camera equipped with a Computar TV zoom lens. The fringe image was then digitized with a VS-100-AT card from Imaging Technology at a resolution of 256 × 256 pixels.
Fig. 10. (a) Linear grating projected over a hemispherical calibration object for the profilometry application. (b) Mechanical measurement from the ZEISS C400 machine.
jected the fringe pattern in Fig. 10(a). Both (the measured surface and the fringe pattern) were used to train the MLNN. Notice that in the training process (calibration) details of the experimental set-up were not required. The MLNN was then trained until a 0.5% average output error was reached; in this case, 2,500,000 iterations were required to obtain the established average error.
4.3.2. The depth recovery
Then the MLNN was used to determine the surface of a pyramid (Fig. 11(b)) from the fringe projection image shown in Fig. 11(a). In Fig. 11(c) the measurements of the mechanical and MLNN systems are compared. The average error was 0.088 cm. The MLNN training time was 62 min, while the recovery time was only 26 s for a 256 × 256 image using a Pentium 200 MHz computer.
Fig. 11. (a) Linear grating projected over the test pyramidal object. (b) Depth recovery by the MLNN from Fig. 11(a). (c) Comparison between the mechanical measurement (solid line) and the MLNN depth recovery (dotted line).
Fig. 11 (continued).
(29)

where

(30)
Then a noisy fringe pattern obtained from a conic test object was used to recover the height term, which can be expressed as:

$$I(x,y) = 128 + 127 \cos\big( 0.85x + h(x,y) + n(x,y) \big), \qquad (31)$$

where

$$h(x,y) = 60\, \frac{\sqrt{(x-128)^2 + (y-128)^2}}{100}, \qquad (32)$$

(33)
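The simulated fringe patterns of this section all share the form of Eq. (31): a bias, a cosinusoidal carrier along x, the height term, and additive phase noise. A generation sketch follows; the uniform noise model and function name are our assumptions:

```python
import numpy as np

def fringe_pattern(h, omega_x=0.85, noise_amp=0.0, seed=0):
    """Simulate an interferogram I = 128 + 127 cos(w_x * x + h + n), following
    the form of Eq. (31); n is uniform phase noise in [-noise_amp, noise_amp]."""
    rows, cols = h.shape
    x = np.broadcast_to(np.arange(rows)[:, None], (rows, cols))  # carrier axis
    rng = np.random.default_rng(seed)
    n = rng.uniform(-noise_amp, noise_amp, size=h.shape)
    return 128.0 + 127.0 * np.cos(omega_x * x + h + n)
```

Varying noise_amp and omega_x reproduces the kinds of noise and carrier-frequency sweeps analyzed in this section.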
Fig. 12. The MLNN recovery error when uniform phase noise is added.
$$h_2(x,y) = 60 \exp\!\left( -\frac{(x-128)^2 + (y-128)^2}{50^2} \right), \qquad (34)$$

and

$$h_3(x,y) = 60 \exp\!\left( -\frac{(x-128)^2 + (y-128)^2}{100^2} \right), \qquad (35)$$

$$h_4(x,y) = 60 \left[ \exp\!\left( -\frac{(x-78)^2 + (y-78)^2}{50^2} \right) - \exp\!\left( -\frac{(x-178)^2 + (y-178)^2}{50^2} \right) \right]. \qquad (36)$$

Fig. 13. The MLNN recovery error when the carrier frequency ω_x is varied.

Training surface    % Error (μ)
h_1(x,y)            1.416
h_2(x,y)            0.980
h_3(x,y)            2.086
h_4(x,y)            1.338
The MLNN recovery error was analyzed when noise, fringe carrier frequency, and training set were varied. It is clear that training images should be selected to fit the requirements of the specific problem to which the MLNN is to be applied, and should have similar fringe carrier frequencies. In profilometry this drawback is overcome, since the structured-light projection characteristics can be controlled in the experimental set-up and the fringe carrier frequency can be easily adjusted by the user.
A point of interest in the MLNN learning process is overfitting [36,37]. There is an optimal number of iterations and tolerance error needed to obtain the best fit of the test images. Fig. 14 shows the comparative average error between the training image (solid line) and the test image (dotted line) as a function of the number of iterations over the training set for the profilometric application. It can be observed that there is an optimal number of iterations after which the process should be terminated, which was 2,100,000 iterations (fixing η = 0.1). From this point on, the average error increases when the test object height is estimated (see the dotted line in Fig. 14).
The number of hidden neurons is an important parameter. If a small number is used, the approximation error could be high. On the other hand, if there are too many hidden neurons, the training time increases considerably and overfitting can occur. Fig. 15 illustrates how the error evolves with the number of hidden neurons.
Fig. 14. Plots of error as a function of iterations for a given training image (solid line) and test image (dotted line).
Fig. 15. Error evolution as the number of hidden neurons is increased in the MLNN.
The second directional differences (Laplacians) are given by

$$\Delta h_{xx}(x,y) = \frac{h(x+1,y) - 2h(x,y) + h(x-1,y)}{r^2}, \qquad (37)$$

and

$$\Delta h_{yy}(x,y) = \frac{h(x,y+1) - 2h(x,y) + h(x,y-1)}{r^2}, \qquad (38)$$

where r² is a normalization constant. The Laplacians are not used in the data estimation process.
6. Conclusions

A multi-layer neural network (MLNN) was used to carry out the calibration process in profilometry, where fringe patterns are used for measurement tasks. Calibration was done by means of a training process, achieved by minimizing a cost function that compares the MLNN outputs with the target outputs determined by the training set. The irradiance of the fringe pattern was input to the MLNN; the outputs were the directional differences of the target (learning) object. Explicit knowledge of the experimental set-up parameters, often not available, is not required. Because the MLNN system estimates the gradient of the unknown height, it must also include a provision for integration as a final step in the height recovery. The MLNN method is appropriate in interferometry and profilometry when the spatial carrier frequency can be controlled by the user.
The MLNN method can be extended to the case where the fringe image contains closed fringes. Unlike the Fourier and the spatial synchronous methods, this method introduces no edge effects when the fringe pattern is bounded due to the finite extent of the object under analysis. To demonstrate its effectiveness, the MLNN was applied to three different examples: (a) to test a spherical optical surface by calculating optical path differences from a masked interferogram; (b) to calculate phase from computer-simulated closed-fringe interferograms; and (c) to calculate depth in a profilometric application in which the parameters of the experimental set-up are not explicitly needed. The MLNN recovery error is affected by noise, by carrier frequency variation, and by the use of different target objects in the training process.

An additional advantage is the important increase in the speed of phase and dimensional measurements when neural network hardware is used. It could be used in transient events (real-time interferometry and optical metrology), where speed is essential.
Acknowledgements
We are indebted to M.C. Carlos Perez, Dorle
Stavroudis, Gonzalo Paez, and M.C. Martha Gutierrez for enlightening and useful discussions during
the development of this work. We also acknowledge
the support of the Consejo Nacional de Ciencia y
Tecnología
of Mexico.
References
[1] Y. Ichioka, M. Inuiya, Appl. Opt. 11 (1972) 1507.
[2] K.H. Womack, Opt. Eng. 23 (1984) 391.
[3] M. Takeda, H. Ina, S. Kobayashi, J. Opt. Soc. Am. 72 (1982) 156.
[4] M. Takeda, K. Mutoh, Appl. Opt. 22 (1983) 3977.
[5] W. Zhou, X. Su, J. Mod. Opt. 41 (1994) 89.
[6] J. Lin, X. Su, Opt. Eng. 34 (1995) 3297.
[7] J. Yi, S. Huang, Opt. Lasers Eng. 27 (1997) 493.
[8] M. Servin, R. Rodriguez-Vera, J. Mod. Opt. 40 (1993) 2087.
[9] M. Servin, D. Malacara, R. Rodriguez-Vera, Appl. Opt. 33 (1994) 2589.
(1993) 235.
[32] G. An, Neural Computation 8 (1996) 643.
[33] G. Thimm, P. Moerland, E. Fiesler, Neural Computation 8 (1996) 451.
[34] H. Mills, D.R. Burton, M.J. Lalor, Opt. Lasers Eng. 23 (1995) 331.
[35] S.K. Kenue, Trans. SPIE 1608 (1991) 450.
[36] T. Mitchell, Machine Learning, McGraw-Hill, New York, 1997, pp. 108-112.
[37] R.J. Schalkoff, Artificial Neural Networks, McGraw-Hill, New York, 1997, pp. 194-196.