
A SPATIAL AND SPECTRAL MODELING OF AGRONOMIC IMAGES

G. Jones and C. Gée

UP-GAP
ENESAD
Quétigny, France

F. Truchetet

LE2I
Université de Bourgogne
Le Creusot, France

ABSTRACT

To measure and compare the effectiveness of crop/weed discrimination
algorithms under different, controlled conditions (point of view, crop
characteristics, weed distribution and infestation rate), we propose a new method
based on the modeling of a virtual field viewed through a virtual camera. Initially,
a black and white model dedicated to crop/weed discrimination algorithms using
spatial information was developed, and two algorithms exploiting crop rows
(Hough transform) or crop frequency (Fourier transform) were tested, revealing
that an accuracy better than 90% is achievable. The main limitation of these
algorithms is their inability to detect weeds within the crop rows. To overcome
this limit, a spectral approach is proposed for this model, using BRDF
measurements coupled with a radiative transfer model. For each object of the
scene (i.e. crop, weed and soil), several parameters are calculated to approximate
the spectral response under a new angle from real spectral data taken under
different angles (light and observer). The estimated responses are then transposed
to an RGB color system. The new possibilities of crop/weed discrimination using
spectral information are discussed.

Keywords: Spectral modeling, BRDF, RGB color space, field modeling, weed
discrimination.
INTRODUCTION

For site-specific weed management, many online systems using different
optical sensors have been developed, enabling the specific spraying of weed-
infested areas (Felton and McCloy, 1992; Felton, 1995; Tian et al., 1999). In this
context, an efficient image processing procedure for crop/weed discrimination is
required in order to quantify weed infestation rates. However, a manual evaluation
of the weed infestation rate (WIR) is a tricky task: either a manual segmentation
of the image or a manual counting of weed plants in the field takes a very long
time, and in practice an evaluation of method accuracy can only be very
approximate, based on statistical tests over a few ground samples. Very few
articles have reported on the evaluation of the robustness of crop/weed
discrimination algorithms validated on real images with natural weed patterns
taken by a camera under natural outdoor lighting conditions (Andreasen et al.,
1997; Onyango and Marchant, 2005). Some algorithms have been developed in
our lab and tested on real data under in-field conditions, but assessing and
comparing them proved difficult and uncertain (Vioix et al., 2002; Bossu et al.,
2006).
In this context, we developed a new, controlled and original method to test
and validate the effectiveness of any algorithm aiming at estimating the weed
infestation rate (Jones et al., 2007a). We propose to model photographs taken by
a virtual camera placed in a virtual crop field with different, precisely known
weed infestation rates. Indeed, a simulated image under various conditions, with
every parameter known (weed and crop density and localization), is a perfect tool
for evaluating the accuracy of any algorithm aiming at discriminating between
crop and weeds. This spatial modeling has already been used to compare and
validate spatial crop/weed discrimination algorithms on large databases of
simulated pictures (Jones et al., 2007b). These series of tests gave us the
opportunity to choose the best algorithm for a dedicated task, but with one
limitation: weeds present within the row cannot be detected by such algorithms.
To overcome this limitation, the use of spectral plant properties has been
proposed, raising the same issue: a discrimination algorithm based on the spectral
differences between crops and weeds cannot be exhaustively tested with standard
means (manual counting or image segmentation) or with the current modeling
process.
To allow this, a new layer taking spectral information into account has to be
added to the model. This requires a model that gives the spectral response of a
plant based on the viewing angle, the light position and the object characteristics.
Different models to approximate plant or soil reflectance exist; they calculate the
object parameters from a set of samples and estimate the response in unobserved
situations. Transformation algorithms are then used to create RGB pictures from
the spectral information.
As we have experienced difficulties in obtaining real reflectance data, we
present the theoretical part of the model and explain how we intend to create
RGB pictures.
FROM AN OBJECT TO ITS SPECTRUM

Choice of the model

To model a spectral response, multiple parameters have to be considered. The
first one is the object type: a wheat leaf does not have the same spectral signature
as another plant or a type of soil. The light and observer positions relative to the
object are also determinant in characterizing this response. The spatial modeling
was developed with the idea of proposing an exhaustively configurable field of
view: the camera location and orientation are not bounded. The addition of a
spectral layer should not restrict these possibilities; nevertheless, storing a data
set containing all possible measurements under different inclinations for a large
number of plants is not feasible. A model that estimates the spectral response of
an object is therefore required. Jacquemoud et al. (Jacquemoud and Baret, 1990;
Jacquemoud et al., 1992) have adapted existing models to describe plant and soil
reflectance: plant reflectance is obtained by PROSPECT, derived from Allen's
radiative transfer model, and soil reflectance by SOILSPECT, derived from
Hapke's radiative transfer model.
Both models are based on the same principles: the spectral response of an
object is measured under different viewing and lighting angles, then a set of
parameters is calculated by an inversion of the model. The obtained parameters
are then used to estimate the object's spectral response in an unobserved position.
Zenithal angles are necessary for both light and observer, whereas an azimuthal
angle is only required for the observer, because the observer's azimuthal angle is
calculated with respect to the light position (Fig. 1).
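
As an illustration of this inversion principle, the following minimal Python sketch recovers generic parameters from multi-angular reflectance measurements by least squares; model_brdf is a hypothetical stand-in for the actual PROSPECT or SOILSPECT equations, which are given in the cited references.

import numpy as np
from scipy.optimize import least_squares

def model_brdf(params, theta_light, theta_obs, phi_obs):
    # Hypothetical parametric BRDF; a real implementation would evaluate
    # the PROSPECT or SOILSPECT equations here.
    a, b, c = params
    return a + b * np.cos(theta_light) * np.cos(theta_obs) + c * np.cos(phi_obs)

def invert_brdf(measured, theta_light, theta_obs, phi_obs, x0=(0.1, 0.1, 0.0)):
    # Fit the parameters so that the modeled reflectance matches the
    # measurements taken under the different geometries.
    residuals = lambda p: model_brdf(p, theta_light, theta_obs, phi_obs) - measured
    return least_squares(residuals, x0).x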
An interesting point about both of these models is that their parameters
depend on spectral or physical characteristics. The three parameters of the
PROSPECT model are based on natural leaf characteristics: a structure parameter,
a pigment concentration and a water content. For the SOILSPECT model, two
kinds of parameters can be distinguished: the first varies spectrally (the single
scattering albedo) whereas the other ones are wavelength independent (such as
the roughness parameter).
Fig. 1. BRDF parameters: observer (zenithal θR and azimuthal φR) and light
(zenithal θD) angles.

Angle calculation

As the spatial model was developed in two dimensions, it is important to note
that the plants are considered flat. Nevertheless, to take into account the spectral
variations brought by different plant inclinations, each plant is virtually and
stochastically oriented. As a starting point, plant orientations are limited to ten
degrees in every direction, but further results should help to determine more
realistic bounds. We can also imagine that plant orientations are dictated by other
parameters (for example, sunflowers always face the sun). A minimal sketch of
this random orientation step is given below.
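
The sketch assumes uniform draws of the tilt direction and of the tilt magnitude within the ten-degree bound; both distributional choices are ours, for illustration.

import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed for reproducible fields

def random_orientation(max_tilt_deg=10.0):
    # Tilt direction is uniform over the circle; tilt magnitude is uniform
    # within the configured bound (both returned in radians).
    azimuth = rng.uniform(0.0, 2.0 * np.pi)
    tilt = np.radians(rng.uniform(0.0, max_tilt_deg))
    return azimuth, tilt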
Soil is also given orientation variations to simulate its stochastic structure. It
is important to note that the aim is not to reproduce the soil structure exactly but
to bring variations into the spectral responses. This step is performed by adding
circles of different sizes all over the virtual field; these circles are stochastically
oriented, following the same process as the plants.
This yields a field described in three dimensions: the first dimension contains
the object type, associated with a set of BRDF parameters; the second concerns
the object orientation; and the third its inclination. The PROSPECT and
SOILSPECT models are then used during the field-to-picture transformation
process.
Both models need light and observer angles; as these angles are related to the
object's normal, they have to be computed for each pixel of the picture. As the
considered light is the sun, its position is considered stationary due to its distance
from the scene. The observer's angle is computed for every pixel of the resulting
picture using the pinhole model properties. Once these angles are known, they are
recalculated to take the object's inclination into account. This information,
coupled with the parameters of one of the BRDF models, allows the spectral
response of the object to be calculated by the PROSPECT or SOILSPECT model.
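
The sketch below illustrates how such per-pixel angles can be derived from the pinhole model; the pixel coordinates (u, v) relative to the principal point, the focal length in pixels and the sun direction vector are assumed inputs, and the conventions may differ from our actual implementation.

import numpy as np

def observer_angles(u, v, focal_px, normal, sun_dir):
    # Ray from the optical centre towards pixel (u, v) in the camera frame.
    view = np.array([u, v, focal_px], dtype=float)
    view /= np.linalg.norm(view)
    n = normal / np.linalg.norm(normal)
    s = sun_dir / np.linalg.norm(sun_dir)
    # Zenithal angles with respect to the object's normal.
    theta_obs = np.arccos(np.clip(view @ n, -1.0, 1.0))
    theta_sun = np.arccos(np.clip(s @ n, -1.0, 1.0))
    # Observer azimuth relative to the light: project both directions onto
    # the plane tangent to the object and measure the angle between them.
    v_t = view - (view @ n) * n
    s_t = s - (s @ n) * n
    denom = np.linalg.norm(v_t) * np.linalg.norm(s_t) + 1e-12
    phi_obs = np.arccos(np.clip((v_t @ s_t) / denom, -1.0, 1.0))
    return theta_obs, theta_sun, phi_obs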
To create colored pictures, the reflectance spectrum must be transformed into
a color space; this is the subject of the following section.

FROM REFLECTANCE SPECTRUM TO COLOR

The spectral response obtained for each field pixel does not take the light or
the action of a filter into account. These two steps, essential to create a picture
that simulates reality, are performed using simple calculation processes.

Filter simulation

Optical filters are often used in vision imaging dedicated to precision
agriculture. As an example, soil/vegetation discrimination is made much easier
by an infra-red filter. A filter is defined by its transmittance, which is its ability
to let light of specific wavelengths pass through it. For a particular filter, the
transmittance is known for every wavelength and determines the fraction of light
that will pass through.
To simulate the action of a given filter on a spectral signal, it is necessary,
for each wavelength, to weight the spectral signal by the corresponding filter
transmittance. An example of a leaf signal viewed through a virtual filter is
shown in Fig. 2, and a minimal code sketch follows the figure caption.

Fig. 2. Coupling of a leaf reflectance spectrum and a green pass-band filter.
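
The weighting itself reduces to an element-wise product, as in this minimal sketch (the Gaussian pass-band is a hypothetical stand-in for a measured green filter curve):

import numpy as np

wavelengths = np.arange(380, 701, 5)                # nm, example sampling grid
reflectance = np.ones(wavelengths.shape)            # placeholder leaf spectrum
# Hypothetical green pass-band filter centred at 550 nm:
transmittance = np.exp(-0.5 * ((wavelengths - 550) / 30.0) ** 2)

filtered = reflectance * transmittance              # wavelength-wise weighting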


Reflectance spectrum to RGB

As a different spectrum is obtained for each pixel, it is necessary to transform
them into a color space in order to visualize them. There are many color spaces,
each with different uses. As the RGB color space is the one used in photography,
we transform the obtained spectra into this color space. Considering that there
are multiple RGB color spaces with slight computational differences, we leave
the user the ability to choose a particular RGB space.

RGB color spaces are specified by their primary colors and a white point
(often D65). They principally differ in terms of gamut, the set of colors that can
be represented by the color space. Two color spaces are mostly used: Adobe RGB
and sRGB (Adobe RGB is used in the following examples).

To transform a spectrum into a particular RGB space, an intermediate space is
necessary: the CIE XYZ color space. This color space describes a color with a
principal component Y representing the luminance (the perceived brightness) and
two components (X and Z) representing the chromaticity (the perceived color).
Three color matching functions ($\bar{x}(\lambda)$, $\bar{y}(\lambda)$ and
$\bar{z}(\lambda)$) allow the transformation of a spectrum into the CIE XYZ
color space using equation (1).
$$X = \frac{1}{k} \int_{380}^{700} D(\lambda)\, R(\lambda)\, \bar{x}(\lambda)\, d\lambda$$
$$Y = \frac{1}{k} \int_{380}^{700} D(\lambda)\, R(\lambda)\, \bar{y}(\lambda)\, d\lambda$$
$$Z = \frac{1}{k} \int_{380}^{700} D(\lambda)\, R(\lambda)\, \bar{z}(\lambda)\, d\lambda \qquad (1)$$
$$\text{with } k = \int_{380}^{700} D(\lambda)\, \bar{y}(\lambda)\, d\lambda$$

where λ is the wavelength in nanometers, D is the light irradiance and R is the
reflectance spectrum.

The X, Y and Z values are then transformed into an RGB color space using a
matrix specific to the chosen RGB space. Many resources (matrices, conversion
formulae, etc.) and details about these operations can be found on Bruce
Lindbloom's website (Lindbloom).
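
A minimal sketch of this two-step conversion follows; the matching functions, illuminant spectrum and XYZ-to-RGB matrix are assumed to be supplied from published tables (such as those on Lindbloom's site), and the gamma exponent is only an approximation of the Adobe RGB companding.

import numpy as np

def spectrum_to_rgb(wavelengths, R, D, xbar, ybar, zbar, M):
    # All spectral arrays are sampled on the same grid `wavelengths` (nm).
    k = np.trapz(D * ybar, wavelengths)             # normalisation constant
    X = np.trapz(D * R * xbar, wavelengths) / k     # equation (1)
    Y = np.trapz(D * R * ybar, wavelengths) / k
    Z = np.trapz(D * R * zbar, wavelengths) / k
    rgb_linear = M @ np.array([X, Y, Z])            # XYZ to linear RGB
    # Approximate gamma companding (Adobe RGB uses an exponent near 1/2.2).
    return np.clip(rgb_linear, 0.0, 1.0) ** (1.0 / 2.2)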

Fig. 3 shows an example of the spectrum of the green GretagMacbeth
ColorChecker patch (number 14) transformed into Adobe RGB (1998). The
computed RGB values (102, 146, 78) are very close to the original ones (101,
148, 78); the error is due to the fact that the patch spectrum, the D65 irradiance
and the CIE XYZ data are discrete values whereas natural light is a continuous
spectrum.
Fig. 3. Reflectance spectrum to RGB transformation of the green
GretagMacbeth ColorChecker patch (n°14).

RESULTS

RGB values are now available for every field pixel; these values are further
modified by the integration involved in the world-to-picture transform.
Nevertheless, the lack of multi-angular spectra makes it impossible to create
pictures that correctly simulate a field. As a consequence, the field presented in
Fig. 4 was obtained with inappropriate PROSPECT and SOILSPECT parameters:
the data used to estimate the parameters were not multi-angular.

Fig. 4. Simulation of a sunflower field infested with two different weed
species and distributions, giving a weed infestation rate of 30%.
INTRODUCING SPECTRAL DISCRIMINATION

Using spectral information to discriminate crop from weeds is a very
promising field of research; good results have been obtained (Hahn and Muir,
1994) with the use of a few spectral wavelengths. Nevertheless, even though
spectral information is a promising way to discriminate crop from weeds, it is
important to exploit the results provided by spatial discrimination.
Many classification methods based on spectral data need a learning step (such
as those based on neural networks (Vioix et al., 2006)); this learning step is very
tedious and cannot be performed by a standard user. Based on this, the idea is to
propose a discrimination algorithm that does not require any learning step.
Removing this step should also avoid the problem of facing new situations (one
major issue of algorithms with a learning step).
Creating an algorithm able to discriminate crop from weeds using spectral
information without prior knowledge raises one major issue: how to put labels on
the different classes? As the crop and weed classes are not defined, even if they
are correctly segmented, their classification remains an issue. Spatial
discrimination may be the solution: it gives two different classes, one with a very
large majority of weeds and the other with a very large majority of crops. These
results could serve as a learning step to enable classification and to guide the
segmentation itself, providing an automatic discrimination method that uses
spatial discrimination results to support and optimize spectral discrimination, as
sketched below.
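
As a minimal sketch of this idea, assuming per-pixel spectral features and the crop/weed labels produced by the spatial algorithm, a simple nearest-class-mean rule could be bootstrapped as follows (the classifier choice is ours, for illustration only):

import numpy as np

def train_class_means(spectra, spatial_labels):
    # spectra: (n_pixels, n_bands); spatial_labels: 0 = crop, 1 = weed,
    # as provided by the spatial (row-based) discrimination.
    return np.array([spectra[spatial_labels == c].mean(axis=0) for c in (0, 1)])

def classify(spectra, means):
    # Assign each pixel to the spectrally closest class mean.
    d = np.linalg.norm(spectra[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)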

ACKNOWLEDGEMENTS

The authors are grateful for the financial support provided by Tecnoma
(trademark of the EXEL Industries group: http://www.tecnoma.com) and the
Regional Council of Burgundy.

CONCLUSION AND FURTHER WORK

In this paper, we present the basis for adding a spectral layer to the spatial
field modeling previously developed by the authors. This approach uses validated
BRDF models, one dedicated to simulating vegetation reflectance (PROSPECT)
and the other to simulating soil reflectance (SOILSPECT). These models account
for the variation of a reflectance spectrum depending on the viewing and lighting
angles and allow the characterization of a plant (or a soil) by a set of parameters
linked to physical properties. The ability to reproduce optical filter effects is also
considered, allowing the model to fit a larger number of experimental devices.
The resulting pictures are expressed in an RGB color space, with the ability to
choose a particular one. The transformation from spectra to RGB is done in a
two-step process: a first step integrates the spectrum into the CIE XYZ space and
a second transforms the XYZ tristimulus values into RGB tristimulus values.

At this moment, the whole process is functional, from the parameter
estimation to the RGB picture creation. The main issue is the lack of real
reflectance data: multi-angular spectra are not very common and their acquisition
requires a particular experimental device (a spectrogoniophotometer).

Data acquisition could be one of the future tasks to complete this spectral
approach; a comparison between real and virtual scenes would then be very
interesting to validate the modeling. Another interesting direction is the
development of a crop/weed discrimination algorithm that uses both spatial and
spectral information to overcome the limitations of the spatial approach on its
own.

REFERENCES

Andreasen, C., Rudemo, M. and Sevestre, S. (1997). "Assessment of weed density
at an early stage by use of image processing." Weed Research 37: 5-18.

Bossu, J., Gée, C., Guillemin, J. P. and Truchetet, F. (2006). Development of
methods based on double Hough transform and Gabor filtering to
discriminate crop and weeds in agronomic images. SPIE 18th Annual
Symposium Electronic Imaging Science and Technology, San Jose, USA,
15-19 January.

Felton, W. L. and McCloy, K. R. (1992). "Spot spraying." Agricultural
Engineering 11: 26-29.

Felton, W. L. (1995). Commercial progress in spot spraying weeds. Brighton
Crop Protection Conference.

Hahn, F. and Muir, A. Y. (1994). "Spectral sensing for crops and weed
discrimination." Acta Hort. (ISHS) 372: 179-186.

Jacquemoud, S. and Baret, F. (1990). "PROSPECT: A model of leaf optical
properties spectra." Remote Sensing of Environment 34(2): 75-92.

Jacquemoud, S., Baret, F. and Hanocq, J. F. (1992). "Modeling spectral and
bidirectional soil reflectance." Remote Sensing of Environment 41: 123-132.

Jones, G., Gée, C. and Truchetet, F. (2007a). Simulation of perspective agronomic
images for an automatic weed detection by Hough transform. 6th European
Conference on Precision Agriculture, Skiathos, Greece, 3-6 June.

Jones, G., Gée, C. and Truchetet, F. (2007b). Simulation of agronomic images for
an automatic evaluation of crop/weed discrimination algorithms. Eighth
International Conference on Quality Control by Artificial Vision, Le
Creusot, France, 23-25 May, SPIE.

Lindbloom, B. J. (20 Apr 2003). Retrieved 04/29, 2008, from
http://www.brucelindbloom.com/.

Onyango, C. and Marchant, J. (2005). "Image processing performance assessment
using crop weed competition models." Precision Agriculture: 182-192.

Tian, L., Reid, J. F. and Hummel, J. W. (1999). "Development of a precision
sprayer for site-specific weed management." Transactions of the ASAE
42(4): 893-900.

Vioix, J., Sliwa, T. and Gée, C. (2006). An automatic inter and intra-row weed
detection in agronomic images. EurAgEng, Germany, 3-7 September.

Vioix, J. B., Douzals, J. P., Truchetet, F., Assemat, L. and Guillemin, J. P. (2002).
"Spatial and spectral methods for weed detection and localization." Eurasip
Journal on Applied Signal Processing 7: 679-685.
