
Proceedings of the 2010 International Conference on Wavelet Analysis and Pattern Recognition, Qingdao, 11-14 July 2010

A MACHINE VISION BASED CROP ROWS DETECTION FOR AGRICULTURAL ROBOTS
GUO-QUAN JIANG1, CUI-JUN ZHAO2, YONG-SHENG SI3
1
School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, China
2
School of Resources and Environment Engineering, Henan Polytechnic University, Jiaozuo 454000, China
3
College of Information Science & Technology, Agricultural University of Hebei, Baoding 071001, China
E-MAIL: jiangguoquan@hpu.edu.cn, zhaocuijun@hpu.edu.cn, siyongsheng@hotmail.com

Abstract:
One approach to navigating agricultural robots performing operations such as weeding, spraying and fertilizing is a machine vision based row detection system. This paper presents a new method for robust recognition of crop rows. First, image pre-processing was used to obtain a binarized image; second, the binarized image was divided into several horizontal segments, which reduced the number of data points while preserving the crop row information; third, a vertical projection method was used to estimate the position of the crop rows within each image strip; finally, the crop rows were detected with the Hough transform. The algorithm requires 70 ms to determine all the crop rows. Experimental results show that this approach can quickly and accurately find the crop rows even under different lighting conditions.

Keywords:
Machine vision; Hough transform; Guidance; Crop rows

1. Introduction

Researchers have been interested in developing automated guidance for agricultural machinery since the early days of the tractor; this interest has increased recently with the advent of precision agriculture, whose emergence has been attributed mainly to advances in computers and sensors [1].

Many studies on finding the guidance directrix with image processing techniques have been reported. Reid and Searcy [2] used a Bayes classifier to find the optimal threshold for segmenting near-infrared images of crop rows to obtain guidance information. Kusano et al. [3] used principal component analysis and the HSI transform on the binary image to extract the crop rows. Yu and Jain [4] used the Hough transform to detect the line boundaries of a road in images captured by a camera on an agricultural vehicle. Rovira-Mas et al. [5] detected two crop rows in five steps: definition of a region of interest (to limit the computational load), image binarization with an adaptive threshold, a search for the midpoints of the rows, computation of the Hough transform, and a connectivity analysis. Tijmen Bakker [6] selected three rectangular sections of crop row spacing, summed the grey values of the sections, and used the grey-scale Hough transform to find the row.

However, the crop row detection rate of these methods is not fully satisfactory, and when a line detection algorithm is used to find crop rows, the computational requirements are one of the main limitations for autonomous navigation. The goal of the present work is therefore to develop a new method that can detect crop rows in real time, to guide agricultural robots even under a wide range of illumination conditions.

2. Material and method

2.1. Material

A Daheng DH-CG300 image acquisition card and a KOKO colour CCD camera were used to obtain sample wheat images. The image resolution was 400×300 pixels and the focal length of the camera lens was 16 cm. Image processing was performed on a personal computer with a Pentium 1.8 GHz CPU and 256 MB RAM.

Sampling was performed at the experimental farm of the Chinese Academy of Agricultural Sciences. The camera was mounted 110 cm above the ground and tilted at an angle of 30° to the vertical. The wheat height ranged from 5 to 12 cm, and the area covered by one image was 2.6 m long in the row direction and 1.0 m wide.

2.2. Grey scale transform

The colour image was transformed into a grey image by emphasizing the green value and suppressing the red and blue values. In a wheat image, the green value is larger than the red and blue values.

978-1-4244-6531-6/10/$26.00 2010 IEEE


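The 2G-R-B grey-scale transform with clamping to [0, 255] (given as formula (1) below) can be sketched as follows. This is a minimal illustration assuming an 8-bit H×W×3 RGB array, not the authors' implementation:

```python
import numpy as np

def excess_green_grey(rgb):
    """2G - R - B grey transform of formula (1), clamped to [0, 255]."""
    rgb = rgb.astype(np.int32)                 # widen to avoid uint8 overflow/underflow
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    grey = np.clip(2 * g - r - b, 0, 255)      # 0 where 2G <= R+B, 255 where 2G >= R+B+255
    return grey.astype(np.uint8)

# A green pixel maps to a high grey value; a neutral grey pixel maps to 0.
img = np.array([[[10, 200, 10], [100, 100, 100]]], dtype=np.uint8)
print(excess_green_grey(img))  # [[255   0]]
```

Widening to a signed integer type before the subtraction matters: computing 2G - R - B directly on uint8 data would wrap around instead of clamping.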

The 2G-R-B transform model is therefore used; its principle is shown in formula (1):

    pixel(x, y) = 0            if 2G ≤ R + B
    pixel(x, y) = 2G - R - B   otherwise                  (1)
    pixel(x, y) = 255          if 2G ≥ R + B + 255

where G, R and B are the green, red and blue values of point (x, y) in the colour image, and pixel(x, y) denotes the grey value of point (x, y) in the grey image, confined to [0, 255].

Compared with other possible colour channel combinations, this method is reported to yield good results (Woebbecke et al., 1995), and it has proved to work well for the present images (Figure 2) over a wide range of illumination conditions, ranging from bright sunlight to a totally overcast sky.

Figure 1. Original image    Figure 2. Grey scale image
Figure 3. Binarization      Figure 4. Image division

2.3. Image binarization

In order to eliminate noise and reduce the number of pixels to be processed, a binarization step was applied. Among the global thresholding techniques, the study of Sahoo et al. (1988) concluded that the Otsu method is one of the best threshold selection methods for general real-world images with respect to uniformity and shape measures. The basic principle of Otsu's method is to look for the optimal threshold value that divides the grey-level histogram of an image into two parts such that the between-cluster variance is maximal [7]. Figure 3 shows the result of binarization with the Otsu algorithm.

2.4. Image division and estimation of centre points of the crop row

The Hough transform has often been used to determine rows, but the standard HT has difficulty meeting real-time needs because of its drawback in dealing with complex data. To reduce the data involved, this study adopted an image division method: the binarized image is divided into a number of horizontal strips, as shown in Figure 4. In principle (except where crop is seriously absent), the number of strips may equal the number of pixel rows in the image, but in order to reduce the amount of subsequent computation it is feasible to let each strip consist of more than one pixel row [8]. A vertical projection method is then used to obtain the centre of the row within each image strip.

Several statistical formulas are used here. Suppose that the resolution of the image is M × N pixels and the strip size is M × h. Let I(i, j) be the grey value of the pixel at position (i, j), s(j) the sum of column j within a strip, and m the mean grey value of the whole image strip. Then

    s(j) = Σ_{i=1}^{h} I(i, j),   j = 1, 2, …, M          (2)

    m = (1/M) Σ_{j=1}^{M} s(j)                            (3)

Figure 5b shows the grey value distribution of each column. It is clear that the grey values in the neighbourhood of the crops are higher than those of the background. Hence the up-points and down-points can be detected by a differential method; relying on a concrete threshold rule, the left and right edges of each crop row can be obtained, and the midpoints of these edges then indicate the centres of the crop rows. The concrete algorithm is as follows:

(1) Compute the vertical grey value sum s(j) of each image strip;
(2) Select an appropriate grey threshold Tg: if s(j) ≥ Tg, then s(j) = h; otherwise s(j) = 0. Here the threshold Tg is replaced by the mean m;
(3) Calculate the differential Diff(j) of the result of step (2);
(4) Judge the crops' left and right edges. If Diff(j) > 0, then column j is an up-point (the crop's left edge); record the value of j. On the contrary, if Diff(j) < 0, then column


j is a down-point (the crop's right edge); record the value of j as well;
(5) Set a distance threshold Td, and let Dud denote the distance between an up-point and a down-point. If Dud ≥ Td, this pair of up and down points is treated as a pair of localization points, and the midpoint of the two points gives the localization's abscissa; the vertical coordinate is taken as yi = (i - 1/2)·h, i = 1, 2, …, N. In this way the centres of the crop rows are obtained. Otherwise, if Dud < Td, the point pair is not a localization pair and is discarded. This step ends when all up-points and down-points have been judged;
(6) Compute the midpoint of each pair of localization points; the midpoint is the centre of the crop row;
(7) End the process when the last strip has been processed.
The detailed process can be seen clearly from Figure 5.
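Steps (1)-(7) can be sketched for a single binarized strip as follows. This is a sketch only, with an illustrative synthetic strip, foreground value 255, and a hypothetical default for the width threshold Td:

```python
import numpy as np

def strip_row_centres(strip, Td=3):
    """Locate crop-row centres in one binarized strip of shape (h, M).
    Follows steps (1)-(7): vertical projection, thresholding by the strip
    mean, differencing, edge pairing and midpoint computation."""
    h, M = strip.shape
    s = (strip > 0).sum(axis=0) * 1.0      # step (1): projection s(j), cf. formula (2)
    m = s.mean()                           # strip mean, cf. formula (3)
    s = np.where(s > m, h, 0)              # step (2): threshold with Tg = m
    diff = np.diff(s)                      # step (3): differential Diff(j)
    ups = np.flatnonzero(diff > 0) + 1     # step (4): up-points (left edges)
    downs = np.flatnonzero(diff < 0) + 1   #           down-points (right edges)
    centres = []
    for u in ups:                          # steps (5)-(6): pair edges, keep wide pairs
        right = downs[downs > u]
        if right.size and right[0] - u >= Td:
            centres.append(int((u + right[0]) // 2))  # midpoint = row centre abscissa
    return centres                         # step (7): all pairs processed

# Synthetic 4 x 20 strip with two "rows" spanning columns 2-5 and 10-14.
strip = np.zeros((4, 20), dtype=np.uint8)
strip[:, 2:6] = 255
strip[:, 10:15] = 255
print(strip_row_centres(strip))  # [4, 12]
```

Using the strip mean as Tg makes the threshold adapt to each strip, as the paper specifies; the Td check discards spurious narrow edge pairs such as isolated weed pixels.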
Figure 5. Estimation of the target points according to vertical projection. (a) Strip No. 4 from the binarized image. (b) Vertical sum of grey values in the strip. (c) Grey values after thresholding. (d) Differential processing of (c). (e) Result of removing the false localization points. (f) Detection result of the localization points (marked *).

The flow chart of the algorithm is shown in Figure 6.

Figure 6. Flow chart of localization of the centre of the row: the binarized image is divided into strips; the vertical grey value sum s(j) is computed for each strip; s(j) is set to h where s(j) > Tg and to 0 elsewhere; the differential Diff(j) marks column j as an up-point where Diff(j) > 0 and as a down-point where Diff(j) < 0; if the distance Dud between an up-point and a down-point exceeds Td, the midpoint of the pair gives the crop row's abscissa and yi = (i - 1/2)·h its vertical coordinate; otherwise the pair is discarded.

2.5. Crop rows detection

The Hough transform (HT) is a popular tool for line detection due to its robustness to noise and missing data [10]. As described above, the data set has been reduced considerably by the use of vertical projection. Here the HT is employed to detect multiple crop rows.

It is assumed that the orientation of the agricultural robot with respect to the crop rows will never exceed 60°. Therefore, the implemented Hough transform calculates the accumulator values only for lines with θ between 30° and 150°, in steps of 1°.
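A Hough accumulation restricted to θ ∈ [30°, 150°] in 1° steps, as described above, can be sketched as follows. The (x, y) point list, the accumulator resolution and the peak picking are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def hough_peak(points, rho_res=1.0):
    """Vote rho = x*cos(theta) + y*sin(theta) for theta in 30..150 deg
    (1-degree steps) and return the strongest (rho, theta_deg) peak."""
    thetas = np.deg2rad(np.arange(30, 151))
    pts = np.asarray(points, dtype=float)
    rhos = pts[:, :1] * np.cos(thetas) + pts[:, 1:] * np.sin(thetas)  # (n_pts, n_thetas)
    rho_max = np.abs(rhos).max() + rho_res
    acc = np.zeros((int(2 * rho_max / rho_res) + 1, thetas.size), dtype=int)
    idx = ((rhos + rho_max) / rho_res).astype(int)    # quantize rho into accumulator bins
    for j in range(thetas.size):                      # one vote per point per theta
        np.add.at(acc, (idx[:, j], j), 1)
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r * rho_res - rho_max, np.degrees(thetas[t])

# Localization points lying on the line y = x (normal angle 135 deg, rho ~ 0).
pts = [(i, i) for i in range(0, 50, 5)]
rho, theta = hough_peak(pts)
print(round(theta))  # 135
```

In practice the voting runs over the strip localization points rather than all foreground pixels, which is what makes the image division step pay off; multiple rows would be read off as the several strongest accumulator peaks instead of only the maximum.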


3. Results and discussion

3.1. Results

A series of wheat field images at different growth stages and under different light conditions was used to test the algorithm. Figure 7 shows a typical image obtained on a cloudy day. The image size is 400×300 pixels. Practical experience has shown that dividing the image into approximately 10 strips is suitable, so each strip is 400×30 pixels.

Using the algorithm described above, the detected crop rows are shown in Figure 7. The corresponding equations in image coordinates are:

    y_left = 4.7046x + 428.0664;
    y_middle = 14.3007x + 3067.8000;
    y_right = 7.1154x - 217.0000.

Figure 7. Detection of multiple crop rows

3.2. Discussion

If the rows and inter-row spaces can be segmented clearly, the crop rows can be detected easily. As the wheat grows, however, the rows become overlapped and more and more canopies grow together. Sometimes the field has narrow inter-row spaces, which makes it difficult to reliably discriminate all of the rows in view. Under these circumstances, the crop rows cannot be detected satisfactorily. In addition, a large amount of weeds between the crop rows will disturb the row detection algorithm.

4. Conclusions

(1) For green crops, the green value is larger than the red and blue values, so the 2G-R-B transform model can convert colour images into grey-scale images efficiently and effectively.
(2) In order to increase the image-processing speed, image division was first applied: the binarized image was divided into several horizontal image strips, and the centres of the crop rows in each strip were then obtained by grey-level accumulation in the vertical direction.
(3) The HT was used for crop row detection because it is a robust algorithm even where crop is absent.
(4) The detection algorithm took 70 ms to process a 400×300-pixel colour image on a PC with a 1.8 GHz CPU.

Experiments showed that the algorithm can detect all the centrelines under different weather conditions. However, if the wheat canopies become too close together or the inter-row space is too narrow, the detected centrelines of the rows are not satisfactory. Therefore, how to use other crop information to extract crop rows effectively is important future work.

Acknowledgements

This research was supported by the doctoral fund (Grant No. B2010-27).

References

[1] J. E. Reid, Q. Zhang, N. Noguchi, M. Dickson, "Agricultural automatic guidance research in North America", Computers and Electronics in Agriculture, Vol. 25, No. 1-2, pp. 155-167, 2000.
[2] T. Hague, J. A. Marchant, N. D. Tillett, "Ground based sensing systems for autonomous agricultural vehicles", Computers and Electronics in Agriculture, Vol. 25, No. 1-2, pp. 11-28, 2000.
[3] J. N. Wilson, "Guidance of agricultural vehicles: a historical perspective", Computers and Electronics in Agriculture, Vol. 25, No. 1-2, pp. 3-9, 2000.
[4] F. Rovira-Mas, Q. Zhang, J. F. Reid, J. D. Will, "Hough-transform-based vision algorithm for crop row detection of an automated agricultural vehicle", Journal of Automobile Engineering, Vol. 219, No. 88, pp. 999-1010, 2005.
[5] Tijmen Bakker, Hendrik Wouters, Kees van Asselt, et al., "A vision based row detection system for sugar beet", Computers and Electronics in Agriculture, Vol. 60, No. 1, pp. 87-95, 2008.
[6] J. E. Reid, S. W. Searcy, "An algorithm for computer vision sensing of a row crop guidance directrix", Transactions of the SAE, Vol. 100, No. 2, pp. 93-105, 1991.
[7] Hui Fuang Ng, "Automatic thresholding for defect detection", Pattern Recognition Letters, Vol. 27, No. 14, pp. 1644-1649, 2006.
[8] H. T. Sogaard, H. J. Olsen, "Determination of crop rows by image analysis without segmentation", Computers and Electronics in Agriculture, Vol. 38, No. 2, pp. 141-158, 2003.
[9] Zuoyun Yuan, Zhihuai Mao, Qing Wei, "Orientation technique of crop rows based on computer vision", Journal of China Agricultural University, Vol. 10, No. 3, pp. 69-72, 2005.
[10] Leandro A. F. Fernandes, Manuel M. Oliveira, "Real-time line detection through an improved Hough transform voting scheme", Pattern Recognition, Vol. 41, No. 1, pp. 299-314, 2008.
