Abstract With increased expectations for food products of high quality and safety
standards, the need for accurate, fast, and objective quality determination of these
characteristics in food products continues to grow. Machine vision systems are auto-
mated, nondestructive and cost-effective, and ideally suited for routine inspection
and quality assurance tasks which are common in the food and agro-products indus-
tries. Machine vision is a technology that allows the automation of visual inspec-
tion and measurement tasks using digital cameras and image analysis techniques.
A machine vision system generally consists of five basic components: a light source, an image-capturing device, an image capture board (frame grabber), and the appropriate computer hardware and software. The potential of computer vision in the food
industry has long been recognized and the food industry is now ranked among the
top ten industries using this technology. Traditional visual quality inspection per-
formed by human inspectors has the potential to be replaced by computer vision
systems for many tasks. There is increasing evidence that machine vision is being adopted at the commercial level. This chapter highlights the construction and image processing involved in online detection by machine vision. First, an introduction to the image acquisition system, including the lighting system, camera, and lens, is given. Then, image processing, which includes image segmentation, interpretation, and classification, is discussed. Finally, three examples of online food quality detection are introduced.
Abbreviations
ADO ActiveX Data Objects
ANN Artificial Neural Network
BP-ANN Back-propagation artificial neural network
CCD Charge-coupled device
CMOS Complementary metal-oxide semiconductor
DCT Discrete cosine transform
DSP Digital signal processor
FOV Field of View
FP Feature parameters
HSI Hue-saturation-intensity
HSV Hue, saturation, value
IBPGR International Board for Plant Genetic Resources
LDA Linear discriminant analysis
LED Light-emitting diode
MIR Mid-infrared
MV Machine vision
NIR Near infrared
NTSC National Television Standards Committee
PAL Phase Alteration Line
PC Personal computer
PCA Principal component analysis
RAM Random access memory
RGB Red, green, and blue
ROI Region of interest
SVM Support vector machine
TV Television
2.1 Introduction
Machine vision (MV) comprises the technology and methods used to provide imaging-based automatic inspection and analysis for applications such as automatic inspection, process control, and robot guidance in the food industry. The field of MV, or computer vision, has been growing at a fast pace, aided by recent advances in hardware and software that have provided low-cost, powerful solutions [13].
The technology aims to duplicate the effect of human vision by electronically per-
ceiving and understanding an image. Table 2.1 illustrates the benefits and draw-
backs associated with this technology.
In the food industry, when consumers buy food, their perception of it is largely limited to visual perception. This visual sensation is often the only direct information the consumer receives from the product. The appearance, together with the former experiences and the cultural background of the consumer, directs the decision to purchase the product. The visual sensation is a mix of the color, the shape, and the size
of the product. Therefore, image processing is an important tool in quantifying the
external appearance of food. Imaging techniques have been developed as an inspec-
tion tool for quality and safety assessment of a variety of agricultural food products.
Imaging is generally nondestructive, reliable, and rapid, depending on the specific
technique used. These techniques have been successfully applied to fruit [4], meat
[5, 6], poultry [7, 8], and grain [9, 10].
Perception theory assumes that the human vision system is able to estimate the size of an object independently of the distance between the eye and the object when enough distance cues are available; nevertheless, this size constancy is reduced if less environmental information is provided. For example, in wholesale stores,
apples are presented in boxes without size cues. Marketing numbers show that, for a given color quality, the largest purchase volume is obtained for apples with a maximal diameter between 75 and 80 mm. Consequently, the farmer gets the highest price for apples graded into the 75–80 mm size class. Although machines mechanically sort the apples by weight, a feature strongly correlated with apple size [2, 3, 11], the questions arose of how well people can distinguish apples by size and how quality grading by size can incorporate human visual perception abilities. It is
the consumer at the end of the commercial chain that assigns quality to the products
and evaluates whether or not he will purchase the product. As a result, automated
visual inspection is undergoing substantial growth in the food industry because of
its cost-effectiveness, consistency, superior speed, and accuracy.
2.2 Image Acquisition System
The grading of food such as apples using MV can be broadly divided into an image acquisition system and an image processing system. The image acquisition system, as shown in Fig. 2.1, is composed of a lighting system, camera, lens, computer, controller, and conveyor. The design of the conveyor and controller should be adapted to the food being inspected. The lighting system, camera, and lens are introduced as follows.
2.2.1 Lighting System
The purpose of the lighting system is to provide radiant light with suitable spectral characteristics and a uniform spatial distribution. As with the human eye, vision systems are affected by the level and quality of illumination. By adjusting the lighting, the appearance of an object can be radically changed, with the feature of interest clarified or blurred. Therefore, the performance of the il-
lumination system can greatly influence the quality of image and plays an important
role in the overall efficiency and accuracy of the system [12]. It should be noted that
a well-designed illumination system can help to improve the success of the image
analysis by enhancing image contrast. Good lighting can reduce reflections, shadows, and some noise, thereby decreasing processing time. Various aspects of illumination,
including location, lamp type, and color quality need to be considered when design-
ing an illumination system for applications in the food industry [12].
Most lighting arrangements can be grouped as either front or back lighting. Front lighting (reflective illumination) is used in situations where surface feature extraction is required. In contrast, back lighting (transmitted illumination) is employed to produce a silhouette image for critical edge dimensioning or for subsurface feature analysis. Light sources also differ but
may include incandescent, fluorescent, lasers, X-ray tubes, and infrared lamps. The
choice of lamp affects quality and image analysis performance. The elimination of
natural light effects from the image collection process is considered of importance
with most modern systems having built in compensatory circuitry [12].
The illumination system, along with its associated optical components, is the principal determinant of contrast. There are two principles for the illumination system: (1) provide stable and symmetrical lighting and (2) make the object stand out from the background. The lighting type could be an incandescent lamp, a high-frequency
fluorescence lamp, a fiber halogen lamp, and a light-emitting diode (LED) light, as shown in Fig. 2.2. Advances in solid-state lighting have led to the increasing use of LEDs.
The illumination system was calibrated by taking the image of a color pattern that
had different regions painted with solid colors (red, green, blue, and yellow). Using
the vision system, the average red, green, and blue (RGB) values of each region were
calculated and stored. The color pattern was presented to the vision system before
each experiment in order to check whether the calibration of the color camera was
necessary.
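The calibration check described above can be sketched as follows; the region coordinates, tolerance, and function names are illustrative assumptions, not from the text:

```python
import numpy as np

def mean_rgb(image, region):
    """Average R, G, B values inside a rectangular region (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = region
    return image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)

def needs_recalibration(image, regions, stored_rgb, tol=5.0):
    """Compare each patch's mean RGB with the stored reference values.

    Returns True if any channel drifts by more than `tol` gray levels,
    signaling that the color camera should be recalibrated.
    """
    for region, reference in zip(regions, stored_rgb):
        drift = np.abs(mean_rgb(image, region) - np.asarray(reference, float))
        if np.any(drift > tol):
            return True
    return False
```

Before each run, an image of the solid-color pattern would be passed to `needs_recalibration` against the RGB values stored at calibration time.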
2.2.2 Camera
The camera is the key component of an apple-sorting machine, playing the role of the human eye. Many different sensors can be used to generate an image, such as ultrasound, X-ray, and near-infrared (NIR) sensors. Images can also be obtained using displacement devices and document scanners. Typically, the image sensors used in MV are based on solid-state charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) technology.
Fig. 2.2 Common visible lighting types. a Incandescent lamp; b High-frequency fluorescence
lamp; c Fiber halogen lamp; and d LED. LED light-emitting diode
Interlaced scanning and progressive scanning are the two techniques available today for reading and displaying information produced by image sensors. Interlaced scanning is
used mainly in CCDs. Progressive scanning is used in either CCD or CMOS sen-
sors. Interlaced scanning is a transfer of data in which the odd-numbered lines of
the source are written to the destination image first, then the even-numbered lines
are written (or vice versa). Progressive scanning is a transfer of data in which the
lines of the source are written sequentially into the destination image. Each line of
an image is put on the screen one at a time in perfect order.
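The tearing mechanism can be illustrated with a toy NumPy simulation; the `scene_at` model and the one-field-time delay between fields are illustrative assumptions, not from the text:

```python
import numpy as np

def capture_interlaced(scene_at):
    """Capture one interlaced frame of a moving scene.

    `scene_at(t)` returns the full image at time t: the even lines are
    sampled at t = 0 and the odd lines one field-time later (t = 1),
    which is what produces the comb/tearing artifact on moving objects.
    """
    frame = np.empty_like(scene_at(0))
    frame[0::2] = scene_at(0)[0::2]  # first field: even lines
    frame[1::2] = scene_at(1)[1::2]  # second field: odd lines, captured later
    return frame

def capture_progressive(scene_at):
    """Capture one progressive frame: every line sampled at the same instant."""
    return scene_at(0).copy()
```

Feeding in a scene with a bar that moves between the two field times shows the bar split across columns in the interlaced frame, while the progressive frame stays intact.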
When interlaced video is shown on progressive scan monitors, such as computer monitors, which scan the lines of an image consecutively, the artifacts become noticeable. These artifacts, which can be seen as tearing, are caused by the slight delay between odd and even line refreshes, as only half the lines keep up with a moving image while the other half waits to be refreshed. This is especially noticeable when the video is stopped and a freeze frame of the video is analyzed. Figure 2.3 shows an interlaced scan image on a progressive (computer) monitor (left) and a progressive scan image on a computer monitor (right). Moving objects are, therefore, better presented on computer screens using the progressive scan technique. In an online sorting MV system, this can be critical for viewing the details of a moving subject (e.g., a fruit carried along by the conveyor).
Finally, CCD cameras are either of the array type or the line scan type. Array or area-type cameras consist of a matrix of minute photosensitive elements (photosites) from which the complete image of the object is obtained, based on an output proportional to the amount of incident light. Alternatively, line scan cameras use a single line of photosites which are repeatedly scanned, up to 2000 times per second, to provide an accurate image of the object as it moves under the sensor.
2.2.3 Lens
The lens is also very important for online MV detection, yet it is often overlooked in the literature. If a camera offers an exchangeable lens, it is important to select a lens suitable for the camera. A lens (or, better, an objective containing several lenses) is always designed for certain parameters. It is always a compromise between magnification, field of view (FOV), focal number (F-number), spectral range, image size, aberrations, and, finally, cost.
First, the size of a lens should be considered. A lens made for a 1/2-in image sen-
sor will work with 1/2-, 1/3-, and 1/4-in image sensors, but not with a 2/3-in image
sensor. Figure 2.4 shows different lenses mounted onto a 1/3-in image sensor. If a
lens is made for a smaller image sensor than the one that is actually fitted inside
the camera, the image will have black corners (see left-hand illustration below). If
a lens is made for a larger image sensor than the one that is actually fitted inside
the camera, the field of view will be smaller than the lens capability since part of
the information will be lost outside the image sensor (see right-hand illustration).
This situation creates a telephoto effect as it makes everything look zoomed in.
2.2Images Acquisition System 17
Second, it is also important to know what type of lens mount the camera has. There are two main standards used on cameras: CS-mount and C-mount. Both have a 1-in thread and they look the same. What differs is the distance from the lens to the sensor when fitted on the camera:
CS-mount: The distance between the sensor and the lens should be 12.5 mm.
C-mount: The distance between the sensor and the lens should be 17.526 mm.
It is possible to mount a C-mount lens on a CS-mount camera body by using a 5-mm spacer (C/CS adapter ring). If it is impossible to focus a camera, it is likely that the wrong type of lens is used.
Third, in low-light situations, particularly in indoor environments, an important
factor to look for in a camera is the lens light-gathering ability. This can be deter-
mined by the lens f-number, also known as f-stop. An f-number defines how much
light can pass through a lens. An f-number is the ratio of the lens focal length to the
diameter of the aperture or iris diameter, that is,
f-number = focal length/aperture diameter
The smaller the f-number (either a short focal length relative to the aperture, or a large aperture relative to the focal length), the better the lens light-gathering ability; i.e., more light can pass through the lens to the image sensor. In low-light situations, a smaller f-number generally produces better image quality. (There may be some sensors,
however, that may not be able to take advantage of a lower f-number in low-light
situations due to the way they are designed.) A higher f-number, on the other hand,
increases the depth of field, which is explained below. A lens with a lower f-number
is normally more expensive than a lens with a higher f-number.
F-numbers are often written as F/x. The slash indicates division. An F/4 means
that the iris diameter is equal to the focal length divided by 4; so if a camera has an
8-mm lens, light must pass through an iris opening that is 2mm in diameter.
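The worked example above amounts to a single division; a trivial helper (the function name is illustrative):

```python
def iris_diameter(focal_length_mm, f_number):
    """Iris (aperture) diameter implied by f-number = focal length / aperture."""
    return focal_length_mm / f_number
```

For the F/4, 8-mm lens above, `iris_diameter(8.0, 4.0)` gives the 2-mm opening.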
While lenses with an automatically adjustable (auto) iris have a range of f-numbers, often only the maximum light-gathering end of the range (smallest f-number) is specified.
A lens's light-gathering ability, or f-number, and the exposure time (i.e., the length
of time an image sensor is exposed to light) are the two main elements that control
how much light an image sensor receives. A third element, the gain, is an amplifier
that is used to make the image brighter. However, increasing the gain also increases
the level of noise (graininess) in an image, so adjusting the exposure time or iris
opening is preferred.
Fourth, limits to the exposure time and gain can be set in some online detection environments. The longer the exposure time, the more light an image sensor
receives. Bright environments require shorter exposure time, while low-light condi-
tions require longer exposure time. It is important to be aware that increasing the
exposure time also increases motion blur, while increasing the iris opening has the
downside of reducing the depth of field, which is explained in the section below.
When deciding upon the exposure, a shorter exposure time is recommended
when rapid movement or when a high-frame rate is required. A longer exposure
time will improve the image quality in poor lighting conditions, but it may increase
motion blur and lower the total frame rate since a longer time is required to expose
each frame.
There are three main types of lenses:
Fixed Lens Such a lens offers a focal length that is fixed; that is, only one field of
view (either normal, telephoto, or wide angle). A common focal length of a fixed
network camera lens is 4mm.
Varifocal Lens This type of lens offers a range of focal lengths, and hence, differ-
ent fields of view. The field of view can be manually adjusted. Whenever the field
of view is changed, the user has to manually refocus the lens.
Zoom Lens Zoom lenses are like varifocal lenses in that they enable the user to
select different fields of view. However, with zoom lenses, there is no need to refocus the lens if the field of view is changed. Focus can be maintained within a range of focal lengths, for example, 6–48 mm. Lens adjustments can be either manual or motorized for remote control. When a lens states, for example, 3× zoom capability, it is referring to the ratio between the lens's longest and shortest focal lengths.
Fifth, the spectral range of the camera should also be taken into account. Basler cameras cover a spectral range from 400 to 1000 nm. This is more than the human eye is able to see; human eyes detect roughly 400–800 nm. Color cameras usually have a Bayer pattern in front of the sensor. Note that the effective resolution of the chip then has to be divided by two in each direction. The blue channel is sensitive from 400 to 500 nm, the green from 500 to 600 nm, and the red above 600 nm. Unfortunately, the NIR opens all three channels above 700 nm. To avoid incorrect colors (e.g., green leaves appearing yellow or orange), an infrared (IR) cut filter is required. For C-mount cameras, it can be mounted in front of the sensor. Some lenses are corrected for the visible range; some include correction for the NIR.
Finally, a criterion that may be important, for example to a video surveillance application, is depth of field. Depth of field refers to the distance in front of and beyond the point of focus in which objects appear to be sharp simultaneously. Depth of field is affected by three factors: the focal length, the iris diameter, and the distance of the camera to the subject. A long focal length, a large iris opening, or a short distance between the camera and the subject will limit the depth of field. Figure 2.5 illustrates the depth of field for different f-numbers at a focal distance of 2 m (7 ft). A large f-number (smaller iris opening) enables objects to be in focus over a longer range. (Depending on the pixel size, very small iris openings may blur an image due to diffraction.)
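The three factors can be made concrete with the standard thin-lens depth-of-field formulas; the circle-of-confusion value is an assumed parameter tied to pixel size, and none of this notation comes from the text:

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.01):
    """Near and far limits of acceptable sharpness (thin-lens approximation).

    `coc_mm` is the circle-of-confusion diameter; 0.01 mm is an assumed
    value, roughly matching small machine-vision pixels.
    """
    # Hyperfocal distance: focusing here makes everything to infinity sharp.
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    if subject_mm >= h:
        return near, float("inf")
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far
```

Increasing the f-number (closing the iris) widens the near-far interval, matching the statement above that a large f-number brings objects into focus over a longer range.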
2.3 Image Processing
Image processing and image analysis are recognized as being the core of computer
vision. Image analysis and MV have a common goal of extracting information from
digital images. They differ mostly in what objects or parts they are applied to and
the type of information extracted. Both use image processing: computations that modify an input image to make image elements more obvious. Image processing can be divided into four main steps: image acquisition, segmentation, interpretation, and, finally, classification. For example, the grading of
apples into quality classes is a complex task involving different stages. The first step is image acquisition, which was performed by CCD cameras during the motion of the fruit on an adapted commercial machine. It was followed by a first segmentation to locate the fruit against the background and a second one to find possible defects. Once the defects were located, they were characterized by a set of features including color, shape, and texture descriptors, as well as the distance of the defects to the nearest calyx or stem end. These data were accumulated for each fruit and summarized in order to transform the dynamic table into a static table. The grading was performed using quadratic discriminant analysis. Image processing/
analysis can be broadly divided into three levels: low-level processing, interme-
diate-level processing, and high-level processing as described in reference [12].
Low-level processing includes image acquisition and preprocessing. Intermediate-
level processing involves image segmentation, image representation, and descrip-
tion. High-level processing involves recognition and interpretation, typically using
statistical classifiers or multilayer neural networks of the region of interest. These
steps provide the information necessary for the process/machine control for quality
sorting and grading.
2.3.1 Image Segmentation
The images resulting from the acquisition step present from one to four planes. The
two most common configurations are the monochrome images (one plane) and the
color images (three planes, the red, green, and blue channels). The result of the image
segmentation can be expressed as a monochrome image with the different regions
having different gray levels. Image segmentation is one of the most important steps
in the entire image processing technique, as subsequent extracted data are highly
dependent on the accuracy of this operation. Its main aim is to divide an image into
regions that have a strong correlation with objects or areas of interest. Segmentation can be achieved by three different techniques, as shown in Fig. 2.6: thresholding, edge-based segmentation, and region-based segmentation [28]. Thresholding
is a simple and fast technique for characterizing image regions based on constant
reflectivity or light absorption of their surfaces. Edge-based segmentation relies
on edge detection by edge operators. Edge operators detect discontinuities in gray
level, color, texture, etc. Region segmentation involves the grouping together of similar pixels to form regions representing a single object within the image. The cri-
teria for like-pixels can be based on gray level, color, and texture. The segmented
image may then be represented as a boundary or a region. Boundary representation
is suitable for analysis of size and shape features while region representation is used
in the evaluation of image texture and defects. Image description (measurement)
deals with the extraction of quantitative information from the previously segmented
image regions. Various algorithms are used for this process with morphological,
textural, and photometric features quantified so that subsequent object recognition
and classifications may be performed.
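The thresholding route can be sketched with Otsu's method, a standard automatic threshold choice (the specific method is not named in the text; this is an illustrative pick):

```python
import numpy as np

def otsu_threshold(gray):
    """Pick a global threshold by maximizing between-class variance (Otsu)."""
    prob = np.bincount(gray.ravel(), minlength=256) / gray.size
    levels = np.arange(256)
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue  # one class empty: no valid split at this level
        mu0 = (levels[:t] * prob[:t]).sum() / w0
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def segment(gray):
    """Binary mask separating bright foreground from dark background."""
    return gray >= otsu_threshold(gray)
```

On a fruit-against-background image with good contrast, this kind of global threshold yields the fruit mask directly.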
For fruit grading images, these regions are the background, the healthy tissues
of the fruits, the calyx and the stem ends, and possibly some defects. The contrast
between the fruit and the background should be high to simplify the localization of
the fruit. This is usually carried out by a simple threshold. Nevertheless, as defects
or the calyx and the stem ends can present luminances comparable with that of the background, defect detection remains a challenging research topic. It is necessary to
distinguish the defects from the calyx and stem ends, which may present similarities
in terms of luminance and shape. This step is the separation of the defects from the
healthy tissue. On monochrome images, the apple appears in light gray, the mean
luminance of the fruit varies with its color and decreases from the center of the fruit
to the boundaries. The lenticels look like unevenness, which can be mistaken for noise. The defects are usually darker than the healthy tissue, but their contrast, shape, and size may vary strongly. For these reasons, simple techniques such as thresholding or background subtraction give poor results, while naive pattern recognition techniques are unusable.
The next steps extract the relevant information from the previously segmented regions and synthesize it for a whole fruit, i.e., over several images. Notably, most researchers (except the most recent) did not consider how to manage several images representing the whole surface of the fruit. It seems that each image was treated separately and that the fruit was classified according to the worst result of the set of representative images. Studies such as those on apples used global measurements (computed on the whole fruit, without segmentation of the defects) to evaluate fruit quality, but these techniques seem too simple to be efficient if the reflectance of the fruit is uneven, as for bicolor apples or for apples randomly presented to the camera [29].
Computer-generated artificial classifiers that are intended to mimic human deci-
sion making for product quality have recently been studied intensively. The opera-
tion and effectiveness of intelligent decision making is based on the provision of a
2.4 Applications of Machine Vision in Food and Agricultural Products
2.4.1 Applications
Computer vision systems are being used increasingly in the food industry for qual-
ity assurance purposes. The system offers the potential to automate manual grading
practices, thus standardizing techniques and eliminating tedious human inspection
tasks. From vegetables and fruit to meat and fish, from poultry carcasses to prepared consumer foods and containers, MV has been meeting the ever-expanding requirements of the food industry, as described in Table 2.2.
Computer vision has proven successful for the objective, online measurement of
several food products with applications ranging from routine inspection to the
Table 2.2 (continued)
Foods                    Quality indices                                        Accuracy (%)     Reference
Prepared consumer food
Bread                    Height and slope of the top                                             [72]
                         Internal structure                                                      [73]
Chocolate chip cookies   Size, shape, baked dough color                                          [74]
Muffin                   Color                                                  96 (pregraded),  [75]
                                                                                79 (ungraded)
Meat and fish
Pork                     Pork loin chops                                        90               [76]
                         Evaluation of fresh pork loin color                    R = 0.75         [77]
                         Prediction of color scores                             86               [76]
Beef                     Prediction of color scores                             R² = 0.86        [78]
                         Prediction of sensory color responses                  100              [79]
Fish                     Fish species recognition                               95               [80]
                         Prediction of color score assigned by a sensory panel  R = 0.95         [81]
                         Detection of bones in fish and chicken                 99               [82]
complex vision-guided robotic control. Table 2.3 shows the online applications of MV in food industries.
Visual inspection is used extensively for the quality assessment of meat and fish
products applied to processes from the initial grading to consumer purchases. Color,
marbling, and textural features were extracted from meat and fish images, and an-
alyzed using statistical regression and neural networks. Textural features were a
good indicator of tenderness [88]. MV has been used in the analysis of pork loin
chop images. More than 200 pork loin chops were evaluated using color MV [76].
Agreement between the vision system and the panelists was as high as 90% at a
speed of 1 sample per second. Storbeck and Daan [80] also measured a number
of features of different fish species as they passed on a conveyor belt at a speed of 0.21 m/s perpendicular to the camera. A neural network classified the species from
the input data with an accuracy of 95%. Jamieson [82] used an X-ray vision system
for the detection of bones in chicken and fish fillets. This commercial system oper-
ates on the principle that the absorption coefficients of two materials differ at low
energies allowing the defect to be revealed. The developed system has a throughput
of 10,000 fillets per hour and can correctly identify remaining bones with an ac-
curacy of 99%.
External quality is considered of paramount importance in the marketing and
sale of fruit and some vegetables. The appearance, i.e., size, shape, color, and the
Table 2.3 Online applications of machine vision in food and agricultural industries
Area of use                              Speed/processing time   Accuracy (%)   Reference
Pork loin chops                          1 sample/s              90             [76]
Fish identification                      0.21 m/s conveyor       95             [80]
Detection of bones in fish and chicken   10,000/h                99             [82]
Estimation of cabbage head size          2.2 s/sample                           [83]
Location of stem root joint in carrots   10/s                                   [84]
Apple defect sorting                     3,000/min               94             [48]
Sugar content of apples                  3.5 s/fruit             78             [85]
Pinhole damage in almonds                66 nuts/s               81             [86]
Bottle inspection                        60,000/h                               [87]
Characterization of apple features includes the presence of defects, the size, the shape, and the color. Descriptive variables include roundness, diameter, average green color on the surface, and the color properties of defect spots. Many attempts have been made to implement these algorithms in online sorting machines [90]. Size grading is very popular around the world. The color and shape grading of apples has posed a serious problem, because misjudgment occurs frequently owing to seasonal fluctuations in grading criteria and differences among production areas.
Image analysis can be used to extract external quality properties from digitized video images. Identifying the shapes and colors of fruit is easy for human eyes and brains, but difficult for a computer. Human descriptions of shape and color are often abstract or artistic rather than quantitative. Researchers have therefore developed image processing algorithms to measure objectively the shape and color features of horticultural products.
2.5 Machine Vision for Apples Grading
Fig. 2.7 Schematic of the apple-grading machine vision system: CCD camera, frame grabber, computer, chamber, and cone-shape apple roller
The shape uniformity of fruit and vegetables is important whether they are to be fresh marketed or processed. To achieve the desired uniformity, fruit must be inspected and classified. To date, most of the research on describing fruit shape has been two-dimensional (2D), and this section focuses on 2D shape analysis.
Shape is an inherent characteristic of the phenotypic appearance of apples and is affected by many factors, such as the conditions during production, the market situation, and the attitudes of consumers. Today, shape evaluation is still performed in a merely subjective way by human graders.
In early research, most shape algorithms quantified the roundness, rectangularity, triangularity, or elongation of the product by calculating ratios of the projected area to the width of the product. After Segerlind and Weinberg applied the Fourier expansion to the identification of different grain kernels, more and more studies focused on shape characterization by Fourier transformation and inverse Fourier transformation [33, 91–94]. Fourier transformation and principal component analysis (PCA) were used to characterize different types of apple shape according to the International Board for Plant Genetic Resources (IBPGR); however, this approach could not be used directly for apple shape grading. More recently, results demonstrated that Fourier transformation combined with an ANN could distinguish different grades of Huanghua pears according to their shapes. However, it is difficult to select the number of hidden units and hidden layers of an ANN, and the learning procedure is lengthy [33]. Therefore, an image processing algorithm was developed to characterize the apple shape objectively and to identify different grades. Here, we introduce a Fourier expansion for shape feature extraction.
Horizontal line image scanning and detection of the minimum and maximum x
coordinate at each yth row resulted in about 1000 edge points. The apple shape char-
acterization was based on the extraction of the apple profile from digitized images,
as illustrated in Fig. 2.8b. For the boundary of an apple in an image, the most important information is the positions of the pixels that constitute the boundary; other information, such as the brightness of the boundary pixels, can be ignored. The coordinates of the centroid (point O: xo, yo) can be found from the boundary information alone. The edge points were centered around the centroid (xo, yo) of all (x, y) coordinates:
Fig. 2.8 Edge extraction and transformations. a Apple image. b Edge extraction and transformation
$$x_o = \frac{\sum_{k=0}^{n}\left[x_k^2\,(y_k - y_{k-1}) - y_k\,(x_k^2 - x_{k-1}^2)\right]}{2\sum_{k=0}^{n}\left[y_k\,(x_k - x_{k-1}) - x_k\,(y_k - y_{k-1})\right]} \quad (2.1)$$

$$y_o = \frac{\sum_{k=0}^{n}\left[y_k^2\,(x_k - x_{k-1}) - x_k\,(y_k^2 - y_{k-1}^2)\right]}{2\sum_{k=0}^{n}\left[y_k\,(x_k - x_{k-1}) - x_k\,(y_k - y_{k-1})\right]} \quad (2.2)$$
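The centroid can be computed from the ordered boundary pixels alone. A sketch using the equivalent shoelace (Green's theorem) centroid form, as an illustrative implementation rather than the authors' code:

```python
def boundary_centroid(xs, ys):
    """Centroid of the region enclosed by a closed boundary (shoelace form).

    Only the ordered boundary coordinates are needed; pixel brightness
    plays no role, as noted in the text.
    """
    n = len(xs)
    area2 = cx = cy = 0.0
    for k in range(n):
        j = (k + 1) % n  # wrap around to close the contour
        cross = xs[k] * ys[j] - xs[j] * ys[k]
        area2 += cross    # accumulates twice the signed area
        cx += (xs[k] + xs[j]) * cross
        cy += (ys[k] + ys[j]) * cross
    return cx / (3.0 * area2), cy / (3.0 * area2)
```

For a unit square traversed counterclockwise, the function returns the expected center (0.5, 0.5).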
This resulted in a shift of the origin of the (x, y) vector space to the centroid (xo, yo). In the following step, the Cartesian (x1, y1) coordinates were transformed into polar (r, θ) coordinates. The polar vector space was rotated by assigning the smallest angle to the smallest radius. Finally, the (r, θ) coordinates were normalized to a constant average radius of 3 cm to exclude size effects:
r_1 = 3.0 r / r_av    (2.3)
where r1 is the normalized radius and rav is the average radius. Thus, the shape of an apple can be mathematically described as a periodic function with a period of 2π: r1(θ + 2π) = r1(θ). A periodic function can be expressed as a combination of trigonometric functions with different frequencies using Fourier series. Fourier expansion was used to characterize the shape of objects by writing the normalized radius r1 as a function of the angle θ, using a sum of sine and cosine functions with a period of 2π [95]. Only the first period was considered, implying that the fundamental frequency ω is equal to 1. Fourier expansion describes the apple shape as follows:
r_1 = f(θ) = a_0/2 + Σ_{m=1}^{∞} (a_m cos(mθ) + b_m sin(mθ))    (2.4)
2.5 Machine Vision for Apples Grading 29
The Fourier coefficients were calculated by the fast Fourier transform algorithm. Only the first 16 coefficients of the cosine terms am and sine terms bm were calculated, because this greatly reduces the computation while still describing the shape of an apple adequately. For apples, this study verified through experiments the conclusion that the first two principal components of the first 16 cosine terms am and the first 16 sine terms bm represent the height-to-width ratio and how conical the shape is.
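As a hedged illustration of the pipeline above (centroid shift, Cartesian-to-polar transform, normalization to an average radius of 3, FFT), the following sketch extracts a0 and the first 16 cosine and sine coefficients from an ordered boundary. The centroid is approximated here by the mean of the boundary points rather than Eqs. 2.1–2.2, and all names are illustrative, not from the original study.

```python
import numpy as np

def fourier_shape_features(x, y, n_coeffs=16, r_av_target=3.0):
    """Return a0 plus the first n_coeffs cosine/sine Fourier coefficients
    of the normalized radius profile r1(theta)."""
    # Centroid of the boundary points (simple mean as a stand-in for the
    # contour-integral formulas in Eqs. 2.1-2.2).
    xo, yo = x.mean(), y.mean()
    # Cartesian -> polar, with the origin shifted to the centroid.
    r = np.hypot(x - xo, y - yo)
    theta = np.arctan2(y - yo, x - xo)
    # Sort by angle, then rotate the profile so that the smallest radius
    # is assigned the smallest angle, as described in the text.
    order = np.argsort(theta)
    r = r[order]
    r = np.roll(r, -int(np.argmin(r)))
    # Normalize to a constant average radius to exclude size effects.
    r1 = r_av_target * r / r.mean()
    # FFT of the periodic radius profile, rescaled to Fourier-series terms.
    F = np.fft.rfft(r1) / len(r1)
    a0 = 2.0 * F[0].real
    am = 2.0 * F[1:n_coeffs + 1].real
    bm = -2.0 * F[1:n_coeffs + 1].imag
    return a0, am, bm

# Toy example: a slightly elongated elliptical "apple" boundary.
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
x = 4.0 * np.cos(t)
y = 3.0 * np.sin(t)
a0, am, bm = fourier_shape_features(x, y)
print(a0 / 2, am[:3])
```

By construction a0/2 equals the normalized average radius (3.0), so the shape information sits entirely in the am and bm terms.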
Leemans et al. [96] concluded that the amplitudes of F(h) have a precise physical meaning that can be used to quantify the shape of apples. For a Golden Delicious apple to be classified as class I (the best category contemplated in that work), and considering a side view of the apple in upright position, F(2) should not be too high, since high values of F(2) imply an excessive fruit elongation. Analogously, F(3) should be high enough, since low values of F(3) imply lack of conicity or triangularity. F(4) should be high enough, since this implies that the apple can be inscribed in a square. Regarding the stem view, i.e., the view in which an observer would watch the apple from above when the apple lies on a horizontal surface in upright position, F(1) should be low, since high values of F(1) entail an excessively elliptical apple cross section. Abdullah et al. [93] observed that four-pointed, five-pointed, and six-pointed star fruits peaked in F(4), F(5), and F(6), respectively. Following the rationale in Leemans et al. [96], it follows that the four-pointed star fruit can be inscribed in a square, while the five-pointed and six-pointed ones inscribe in a pentagon and a hexagon, respectively. Xiaobo Zou [97] used 33 coefficients, a0, the first 16 cosine terms am (a1, a2, …, a16), and the first 16 sine terms bm (b1, b2, …, b16), to identify the shape of apples; the grade judgment ratios for extra, category II, and reject were high, but the ratio for category I was not.
The strong correlation between fruit color and maturity makes it feasible to evalu-
ate the maturity level based on color. Among all the image analysis based methods,
color image processing techniques played an important role in inspections for many
different fruits. Some color-based techniques for fruit inspection extract features from the RGB or hue, saturation, value (HSV) images of the fruit along with other features, e.g., size and texture, and classify fruits with machine learning or artificial intelligence algorithms.
The composite video signal of an apple collected by an image processor was quantized into 256 levels for each of the three primary colors in each pixel. Then, the average color values (R̄, Ḡ, B̄), the variances (VR, VG, VB), and the color coordinates (r, g, b) were calculated from the three primary colors in the following manner [14, 20, 30, 98–102]. For example, for red:
R̄ = R/n    (2.7)

V_R = Σ_{i=1}^{n} (R_i − R̄)^2 / n    (2.8)

r = R / (R + G + B)    (2.9)

where R is equal to Σ_{i=1}^{n} R_i, and n is the number of total pixels in the image data.
Therefore, nine color characteristic data were obtained for each entire apple [51–55].
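The nine color characteristics (Eqs. 2.7–2.9 applied to each channel) can be sketched as follows; the random pixel array is only a placeholder for the segmented apple pixels of a real camera image.

```python
import numpy as np

# Placeholder for the n RGB pixels of one segmented apple.
rng = np.random.default_rng(1)
pixels = rng.integers(0, 256, (5000, 3)).astype(float)

n = len(pixels)
R, G, B = pixels.sum(axis=0)                 # channel sums, as in Eq. 2.7

means = np.array([R, G, B]) / n                       # R-bar, G-bar, B-bar
variances = ((pixels - means) ** 2).sum(axis=0) / n   # VR, VG, VB (Eq. 2.8)
coords = np.array([R, G, B]) / (R + G + B)            # r, g, b (Eq. 2.9)

features = np.concatenate([means, variances, coords])
print(features.shape)  # the nine color characteristic data for one apple
```

The chromaticity coordinates (r, g, b) sum to one by construction, so only two of them are independent.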
Color representation in hue, saturation, intensity (HSI) provides an efficient scheme for statistical color discrimination. These attributes are the closest approximation to human interpretation of color, so the RGB signals of the apple were transformed to HSI for color discrimination. For a digitized color image, the hue histogram represents the color components and the amount of area of each hue in the image. Therefore, color evaluation of apples can be achieved by analyzing the hue histogram. The hue values of Fuji apple images lie mainly between 0 and 100. The hue field 0–80 can be divided into eight equal intervals. The number of pixels in each interval divided by 100 was treated as the apple's color feature ci (i = 1, …, 8). Thus, eight color features were obtained. The hue curves of the different apple classes are presented in Fig. 2.9. The maximum feature appeared in 0–20 for extra Fuji apples, 20–40 for class I apples, and 40–60 for the substandard degree. There is no maximum feature for class II apples [26]. Four images, one for every rotation of 90°, were taken of each apple. Seventeen color feature parameters (FPs) were extracted from each apple in the image processing: the average color values (R̄, Ḡ, B̄), the variances (VR, VG, VB), the color coordinates (r, g, b), and c1, c2, …, c8.
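A minimal sketch of the eight hue-interval features c1–c8: the hue field 0–80 is split into eight equal intervals and each pixel count is divided by 100, as in the text. The hue scale (degrees, via the standard RGB-to-HSV conversion) is an assumption here, since the original scale is not stated.

```python
import colorsys
import numpy as np

def hue_features(rgb_pixels):
    """Eight hue-interval color features c1..c8 over the hue field 0-80.
    Assumes hue in degrees (0-360); the division by 100 follows the text."""
    hues = np.array([colorsys.rgb_to_hsv(*(p / 255.0))[0] * 360.0
                     for p in rgb_pixels])
    counts, _ = np.histogram(hues, bins=8, range=(0.0, 80.0))
    return counts / 100.0

# Placeholder pixels standing in for a segmented apple region.
rng = np.random.default_rng(2)
pixels = rng.integers(0, 256, (1000, 3)).astype(float)
c = hue_features(pixels)
print(c)
```

In practice the interval holding the largest ci would then indicate the color class, as Fig. 2.9 suggests for extra, class I, and substandard apples.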
Three hundred and eighteen apples used in this study were sent directly to our laboratory from a farmer. Classification experiments were done under controlled circumstances, in a room illuminated by halogen lamps with the apples placed against a black background. The color of each apple was graded by a trained quality inspector according to the grading standards in China. The quality grades for the external appearance of apples are classified into four categories: class extra, of which more than 66% of the surface is deep red, with orange in the background; class I, of which 50–66% of the surface is red, and the background is yellowish orange; class II, of which 25–50% of the surface is red, and the background is yellowish
Table 2.4 Three hundred and eighteen apples in training set and test set were classified into four classes

Class                          Training set   Test set
Accepted apple   Class extra   50             20
                 Class I       50             41
                 Class II      50             40
Rejected apple   The reject    50             17
green; and the reject, of which less than 25% of the surface is red, the background is light green or unevenly red colored, and an injured part can be seen on the apple's surface. The 318 Fuji apples were divided into two sets. An initial experiment was conducted with 200 fruits (training set). The samples were inspected by the MV system, and a reference measurement for color was then taken. An independent set of 118 samples (test set) was fed into the robotic device to assess the efficiency and test the precision of the online MV procedure. The apples in the training set and test set were classified into class extra, class I, class II, and the reject, as Table 2.4 shows.
Although many methods have been proposed for apple color grading, we have been unable to investigate the performance of all of them. However, one example is that a three-layer back-propagation ANN (BP-ANN) has been considered for apple color grading [18]. As a comparison, a BP-ANN was built for apple color grading.
The 17 normalized apple color FPs were chosen as the input values for the neural network. The apples' four color grades were coded to serve as the output layer of the neural network: extra (1,0,0,0), class I (0,1,0,0), class II (0,0,1,0), and reject (0,0,0,1). The other parameters of the BP-ANN were activation: logistic, learning rate: 0.02, momentum: 0.9. The ANN was trained with the 200 samples in the training set for 20,000 cycles. It was then used to classify the test set, which consisted
Table 2.5 The BP-ANN total error and classification accuracy as the number of nodes in the hidden layer changed. BP-ANN back-propagation artificial neural network

Structure                Total error (training   Classification accuracy   Classification accuracy
(input–hidden–output)    20,000 times)           of training set (%)       of test set (%)
17–4–4                   1.374                   66                        56.8
17–6–4                   1.333                   67.5                      59.4
17–8–4                   1.306                   68.5                      63.6
17–10–4                  1.295                   69                        65.3
17–12–4                  1.273                   72.5                      71.2
17–14–4                  1.260                   73.5                      72.9
17–16–4                  1.205                   75.5                      74.6
17–18–4                  1.198                   79                        76.3
17–20–4                  1.187                   82.5                      77.9
17–22–4                  1.189                   83                        76.3
17–24–4                  1.190                   83                        76.3
of 118 Fuji apples with different color grades. The data in Table 2.5 show the training error, the classification accuracy for the training set, and the classification accuracy for the test set as the structure of the ANN changed. It can be seen from Table 2.5 that the training error decreased as the number of nodes in the hidden layer increased, whereas the classification accuracy increased at first and then did not change significantly as the hidden layer nodes increased further. Obviously, more nodes in the hidden layer result in a longer computation time. Therefore, the network with a structure of 17–20–4 was selected in this study because it yielded the highest accuracy with a relatively small network structure.
It can be seen that the construction of the neural network (the number of layers and neurons) is an empirical process similar to the conventional approaches and requires considerable trial and error. Furthermore, the ANN is prone to overfitting; that is, its classification accuracy on the training set is very high, while its classification accuracy on the test set is unacceptable.
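A minimal numpy sketch of the 17–20–4 BP-ANN described above (logistic activation, learning rate 0.02, momentum 0.9). The random arrays below merely stand in for the 17 normalized color FPs and the one-hot grade codes; they are not the study's data, and training is cut to 500 cycles instead of 20,000 for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.random((200, 17))                 # 200 training samples, 17 FPs
y = np.eye(4)[rng.integers(0, 4, 200)]    # grades coded as one-hot vectors

W1 = rng.normal(0.0, 0.1, (17, 20)); b1 = np.zeros(20)
W2 = rng.normal(0.0, 0.1, (20, 4));  b2 = np.zeros(4)
lr, mom = 0.02, 0.9
vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)

losses = []
for cycle in range(500):
    h = sigmoid(X @ W1 + b1)              # hidden layer (20 nodes)
    out = sigmoid(h @ W2 + b2)            # output layer (4 grades)
    err = out - y
    losses.append(0.5 * np.mean(np.sum(err ** 2, axis=1)))
    d2 = err * out * (1.0 - out)          # back-propagated output deltas
    d1 = (d2 @ W2.T) * h * (1.0 - h)      # hidden-layer deltas
    # Gradient-descent updates with momentum.
    vW2 = mom * vW2 - lr * (h.T @ d2) / len(X); W2 += vW2
    vb2 = mom * vb2 - lr * d2.mean(axis=0);     b2 += vb2
    vW1 = mom * vW1 - lr * (X.T @ d1) / len(X); W1 += vW1
    vb1 = mom * vb1 - lr * d1.mean(axis=0);     b1 += vb1

print(losses[0], losses[-1])              # training error should decrease
```

Repeating this for each candidate hidden-layer size and tabulating the final error and accuracies reproduces the kind of comparison shown in Table 2.5.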
On the common systems, the fruits placed on rollers are rotating while moving.
They are observed from above by one camera. In this case, the parts of the fruit near
the points where the rotation axis crosses its surface (defined as rotational poles)
are not observed [15, 16]. This can be overcome by placing mirrors on each side of
the fruit lines oriented to reflect the pole images to the camera. Another system used
three cameras observing the fruit rolling freely on ropes. On more sophisticated
systems, two robot arms were used to manipulate the fruit [103]. The study stated
that it was possible to observe 80% of the fruit surface with four images, but the
classification rate remained limited to 0.25 fruit per second.
Traditional mechanical, image processing, and structured lighting methods have proved unable to solve this problem because of their limitations in accuracy, speed, and so on. The cameras used by different researchers were mainly CCD cameras [104]. Moreover, the detection of apple defects is still a problem because it is hard to distinguish apple stem ends and calyxes from defects by image processing [13, 14, 18, 30, 33, 92, 95, 103, 105–136].
A machine vision sorting system was developed that utilizes the difference in
light reflectance of fruit surfaces to distinguish the defective and good apples [29].
To accommodate to the spherical reflectance characteristics of fruit with curved
surface like apple, a spherical transform algorithm was developed that converts the
original image to a nongradient image without losing defective segments on the
fruit. To prevent high-quality dark-colored fruit from being classified into the defec-
tive class and increase the defect detection rate for light-colored fruit, an intensity
compensation method using maximum propagation was used. Leemans et al. [103] presented a method based on color information to detect defects on Golden Delicious apples. In a first step, a color model based on the variability of the normal color is described. To segment the defects, each pixel of an apple image is compared with the model; if the pixel matches the model, it is considered as belonging to healthy tissue, otherwise as a defect. Two further steps refine the segmentation, using either parameters computed on the whole fruit or values computed locally. Wen
and Yang [16] developed a method based on dual-wavelength infrared imaging using both NIR and mid-infrared cameras. This method enables a quick and accurate discrimination between true defects and stem ends/calyxes. The results obtained are of significant value for automated apple defect detection and sorting. A novel
adaptive spherical transform was developed and applied in a machine vision apple
defect sorting system [90]. The image transformation compensates the reflectance
intensity gradient on curved objects and provides flexibility in coping with fruits
natural variations in brightness and size. Guyer and Yang used genetic ANNs and spectral imaging for defect detection on cherries [137]. ANN classifiers successfully separated apples with defects from nondefective apples without confusing the
stem/calyx with defects [33]. Wen and Tao [110] developed a novel method which
incorporates an NIR camera and a mid-infrared (MIR) camera for simultaneous im-
aging of the fruit being inspected. The NIR camera is sensitive to both the stem-end/
calyx and true defects; whereas the MIR camera is only sensitive to the stem-end
and calyx. True defects can be quickly and reliably extracted by logical comparison
between the processed NIR and MIR images.
More recently, multispectral and hyperspectral imaging were used for fruit defect detection. Aleixos et al. developed a multispectral camera able to acquire visible and NIR images of the same scene, together with specific algorithms implemented on a dedicated board based on two digital signal processors (DSPs) working in parallel, which allows the inspection tasks to be divided between the processors, saving processing time [64]. The MV system was mounted on a commercial conveyor and is able to inspect the size, color, and presence of defects in citrus at a minimum rate of 5 fruits/s. The hardware improvements needed to increase the inspection speed to 10 fruits/s were also described. Mehl et al. [121]
applied hyperspectral image analysis to the development of multispectral tech-
niques for the detection of defects on three apple cultivars: Golden Delicious, Red
Delicious, and Gala. Two steps were performed: (1) hyperspectral image analysis to
characterize spectral features of apples for the specific selection of filters to design
the multispectral imaging system and (2) multispectral imaging for rapid detection
of apple contaminations. Good isolation of scabs, fungal, soil contaminations, and
bruises was observed with hyperspectral imaging using either principal component
analysis (PCA) or the chlorophyll absorption peak. This hyperspectral analysis al-
lowed the determination of three spectral bands capable of separating normal from
contaminated apples. These spectral bands were implemented in a multispectral im-
aging system with specific band-pass filters to detect apple contaminations. Spatial
and transform features were evaluated for their discriminating contributions to fruit
classification based on bruise defects [116]. Stepwise discriminant analysis was used
for selecting the salient features. Spatial edge features detected using Roberts edge
detector, combined with the selected discrete cosine transform (DCT) coefficients
proved to be good indicators of old (one month) bruises. Separate ANN classifiers
were developed for old (one month) and new (24h) bruises. An NIR transmission
system was developed to inspect defects and ripeness of moving citrus fruits [138]. The system consisted of a light source and an NIR transmission spectrophotometer. Four 100 W halogen lamps were used as the light source and an NIR spectrometer was
used to measure NIR transmission spectra of the citrus fruits. Ripeness inspection
results of the NIR transmission spectrum system for 100 Unshiu citrus fruits were
compared with results of the visual inspection. Analysis of the spectra showed that
ripeness could be evaluated using the peak near the 710 nm wavelength band. Spectra of the ripe fruits had a peak at 710 nm and those of immature fruits had a peak at 713 nm. The wavelength shift of the peak was assumed to be caused by variations in chlorophyll content, which absorbs light near 678 nm. A ripeness inspection model was developed using the wavelength difference as a ripeness criterion.
Leemans and Destain presented a hierarchical grading method applied to Jonagold apples [125]. Several images covering the whole surface of the fruits were acquired thanks to a prototype grading machine. These images were then segmented and the features of the defects were extracted. During a learning procedure, the objects were classified into clusters by k-means clustering. The classification probabilities of the objects were summarized, and on this basis the fruits were graded using quadratic discriminant analysis. Bennedsen and Peterson [139] developed a system for apple surface defect identification in NIR images using two optical filters at 740 and 950 nm. A multispectral vision system including four wavelength bands in the visible/NIR range was developed [126]. Multispectral images of sound and defective fruits were acquired, intended to cover the whole color variability of this bicolor apple variety. Defects were grouped into four categories: slight defects, more serious defects, defects leading to the rejection of the fruit, and recent bruises. Stem ends/calyxes were detected using a correlation pattern matching algorithm. The efficiency
of this method depended on the orientation of the stem-end/calyx according to the
optical axis of the camera. Defect segmentation consisted of a pixel classification procedure based on Bayes' theorem and nonparametric models of the sound and
defective tissue. Fruit classification tests were performed in order to evaluate the
efficiency of the proposed method. No error was made on rejected fruits and high
classification rates were reached for apples presenting serious defects and recent
bruises. Fruits with slight defects presented a more important misclassification rate
but those errors fitted, however, the quality tolerances of the European standard.
An integrated approach using multispectral imaging in reflectance and fluorescence
modes was used to acquire images of three varieties of apples [136]. Eighteen im-
ages from a combination of filters ranging from the visible region through the NIR
region and from three different imaging modes (reflectance, visible-light-induced fluorescence, and ultraviolet (UV)-induced fluorescence) were acquired for each apple as a basis for pixel-level classification into normal or disordered tissue. ANN
classification models were developed for two classification schemes: a two class
and a multiple class. In the two-class scheme, pixels were categorized into normal
or disordered tissue, whereas in the multiple-class scheme, pixels were categorized
into normal, bitter pit, black rot, decay, soft scald, and superficial scald tissues. A
tenfold cross validation technique was used to assess the performance of the neural
network models. The integrated imaging model of reflectance and fluorescence was
effective on Honeycrisp variety, whereas single imaging models of reflectance or
fluorescence was effective on Redcort and Red Delicious. AdaBoost and support
vector machine (SVM) were also used to improve pecan defect classification ac-
curacy [140]. Kavdir and Guyer evaluate different pattern recognition techniques
for apple sorting [127]. The features used in classification of apples were hue angle
(for color), shape defect, circumference, firmness, weight, blush percentage (red
natural spots on the surface of the apple), russet (natural netlike formation on the
surface of an apple), bruise content, and the number of natural defects. Different
feature sets including four, five, and nine features were also tested to find out the
best classifier and feature set combination for an optimal classification success. The
effects of using different feature sets and classifiers on classification performance
were investigated.
2.5.2.2 The Hardware
The lighting and image acquisition system was designed to be fitted to an existing single-row grading machine (prototype from Jiangsu University, China). Six lighting tubes (18 W, type 33 from Philips, Netherlands) were placed at the inner side of a lighting box, while three cameras (color 3CCD uc610 from Uniq, USA), two inclined at about 60° and one above, observed the grading line in the box, as shown in Fig. 2.10. The lighting box is 1000 mm in length and 1000 mm in width.
Fig. 2.10 Hardware system of apple in-line detection. a System hardware. b Schematic of three
cameras system
The distance between apple and camera is 580 mm; thus there are three apples in the field of view of each camera, with a resolution of 0.4456 mm per pixel. The images were grabbed using three Matrox Meteor-II frame grabbers (Matrox, Canada) in three computers. The standard image treatment functions were based on the Matrox libraries (Matrox, Canada) and the other algorithms were implemented in C++. A local network was built among the computers in order to communicate the results data. The central processing unit of each computer was a Pentium 4 (Intel, USA) clocked at 3 GHz. The fruits placed on rollers rotate while moving. The rotational speed of the rollers was adjusted in such a way that a spherical object with a diameter of 80 mm made exactly one rotation every three images. The moving speed, in the range 0–15 apples per second, could be adjusted by the stepping motor.
2.5.2.3 Image Preprocessing
Fig. 2.12 Sequential image and the single child image representation
processing design of the sequential images is based on the sequence in which the three different positions appear one by one (there may or may not be an apple at a given position). A 2D array R was used to represent the information of the three single child images, as shown in Fig. 2.12. Three conclusions can be drawn from Fig. 2.12:
First, among the three child images, the left child image represents an apple's first image, the middle image its second image, and the right image its third image; this rule does not change as the number of trigger grabs increases.
Second, the subscripts of array R for apple No. 6 (trigger grab count I = 6) are the same as those for apple No. 3 (I = 3), so a new cycle begins. The cycle variable is X = I mod 3.
Third, the cases I = 1 and I = 2 are special; there the cycle variable should be X = (I − 1) mod 3.
The information of an apple in the array R must be saved once the apple has appeared three times; otherwise, it will be overwritten by the information of the following apples. ActiveX Data Objects (ADO) is used to save the information into a database.
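The bookkeeping above can be sketched as follows. This is a hedged reconstruction, not the original code: the function and record names are illustrative, the left/middle/right convention follows the text (left = an apple's first view), and a plain list stands in for the ADO database. The slot formula is taken directly from the text: X = I mod 3, with X = (I − 1) mod 3 for the special cases I = 1 and I = 2.

```python
saved = []                 # stands in for the ADO database in the text
R = [None, None, None]     # array R: one slot per apple currently tracked

def cycle_slot(i):
    """Cycle variable X for trigger grab count i, as given in the text."""
    return (i - 1) % 3 if i in (1, 2) else i % 3

def on_trigger(i, left, middle, right):
    """Handle the three child images delivered at trigger grab i."""
    if i >= 3:             # right image completes apple i-2: save it first
        rec = R[cycle_slot(i - 2)]
        rec["images"].append(right)
        saved.append(rec)
    if i >= 2:             # middle image is apple i-1's second view
        R[cycle_slot(i - 1)]["images"].append(middle)
    # Left image starts apple i; its slot held apple i-3, already saved,
    # so overwriting it loses no information.
    R[cycle_slot(i)] = {"apple": i, "images": [left]}

for i in range(1, 8):      # seven trigger grabs
    on_trigger(i, f"L{i}", f"M{i}", f"R{i}")
print(saved[0])
```

Running this, apple No. 1 is completed at trigger 3 with its three views, apple No. 2 at trigger 4, and so on; apples No. 3 and No. 6 indeed reuse the same slot, matching the cycle described above.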
There exist several image analysis methods for blemish detection, such as global gray-level or gradient thresholding, simple background subtraction, statistical classification, and color classification [13]. Blemish segmentation is a difficult problem in image analysis, because various types of blemishes with different sizes and extents of damage may occur on fruit surfaces. If a blemish appears as a very dark mark on a fruit surface, a simple thresholding of the gray-level intensity of reflected light may allow a direct segmentation of the blemish. However, in most
cases, the light reflectance from both blemished and nonblemished surfaces varies
considerably, and it is impossible to set a single threshold value for the segmenta-
tion. For example, a patch of good surface with a relatively dark color can have
similar reflectance as a slightly discolored blemish on a light colored surface. In this
case, the thresholding method will fail.
An image analysis scheme for accurate detection of fruit blemishes proposed by Qingsheng Yang [141] is used in this study. The detection procedure consists of two steps: initial segmentation and refinement. In the first step, blemishes are coarsely segmented out with a flooding algorithm, and in the second step an active contour model, i.e., a snake algorithm, is applied to refine the segmentation so that the localization and size accuracy of the detected blemishes are improved. However, Yang's algorithms were tested on monochrome images of mono-color fruits. Here, the images are color images of bicolor fruit.
The appearances of calyxes and stem ends resemble patch-like defects; these patches were defined as regions of interest (ROIs). The ROIs are generally darker than the surrounding nondefective surface, and in the image gray-level landscape they usually appear as significant concavities in the topographic representation. The median filtering mentioned in the image preprocessing improved the success of the flooding algorithm. This smoothing naturally distorts the gray-level surface and thus has the drawback that the segmented areas are larger than the true ones. Since the size of an ROI is important for grade decision making, a refinement of the defect detection is necessary. A closed-loop snake was implemented to improve the boundary localization of detected ROIs. Then, the minimum enclosing rectangle of each single ROI was used to measure the size of the ROI. If the dimensions of the rectangle exceed 5 pixels (0.4456 mm per pixel), the measured ROI area is taken into account. The R channel signals were used to detect the defects, because tests on sample apple R channel images showed better results than the other channels.
The defect recognition steps are as follows:
First, the number of ROIs is counted in each single child apple image.
Second, logical recognition rules were developed. Since the calyx and stem end cannot appear in a single child image at the same time, an apple is defective if any one of its nine images has two or more ROIs. Figure 2.13 shows an example of an apple image that has two ROIs.
Third, the defect detection mentioned above is based on data acquisition using three computers; consequently, an apple's characteristic parameters are formed by integration into a single source. One of the three computers is the server; the other two are clients. Figure 2.14 shows the data exchange and synchronization of online grading.
Since nine images are sufficient to encompass the whole surface of the apple, any defect on the surface can be detected by this method. The disadvantage of this method is that it cannot distinguish different defect types: defects such as bruising, scab, fungal growth, and disease are treated as equivalent. The apples were then graded as reject or accept.
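The logical recognition rule above reduces to a one-line decision once the ROI counts of the nine child images are available; the sketch below is illustrative, with a hypothetical input (one ROI count per child image).

```python
def grade_apple(roi_counts_per_image):
    """Accept/reject decision from the ROI counts of the nine child images.

    A single ROI per image may be a stem end or a calyx, which cannot both
    appear in one child image; two or more ROIs in any image therefore
    imply a true defect, and the apple is rejected.
    """
    return "reject" if any(c >= 2 for c in roi_counts_per_image) else "accept"

print(grade_apple([1, 0, 1, 1, 0, 0, 1, 0, 1]))  # at most one ROI per image
print(grade_apple([1, 0, 2, 1, 0, 0, 1, 0, 1]))  # one image with two ROIs
```

As the text notes, this rule flags that a defect exists but cannot say which kind; bruises, scab, fungal growth, and disease all trigger the same rejection.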
2.5.2.5 Fruits Grading
All the fruits used in this experiment were selected and came from the same grower.
Three hundred and eighteen fruits were used in only one experiment and each fruit
was thus presented only once to the machine, to avoid any additional bruises. The
apples were classified into two classes: accepted (199 apples) and rejected (with
blemish, 119 apples).
The proposed system was tested with a laboratory three-CCD-camera system for Fuji apples. The results obtained by the three color cameras grading line are given in Table 2.6. The total error rate reached 11%, mostly occurring in the accepted batch. When these errors were analyzed, half of them were apples with over-segmentation of healthy tissue, especially tissue near the boundaries, in the defect segmentation processing. The other half was attributed to two reasons. First, a spot blush on the surface of a good apple is segmented as defective and the apple is classified into the rejected class. As the flooding algorithm used
Fig. 2.14 Three computers image processing and synchronization online grading
Table 2.6 The results obtained by the three color cameras grading line

True group             Accepted (199 apples)   Rejected (with defects, 119 apples)
Graded as accepted     169                     5
Graded as rejected     30                      114
Classification error   15.07%                  4.2%
Global classification error: 11%
by Yang [142] was designed to detect catchment basins, i.e., areas with a lower luminance, large spot blushes were easily segmented as ROIs. Second, errors occur because apples with defects are accepted, i.e., false positives. These errors were due to defects that are difficult to segment, such as russet and bruises, present near the stem ends and calyxes of apples. They have almost the same appearance as the russet around the stem end and, because of the proximity in position and appearance, were probably confused with the latter. The defect is localized together with the stem end and counted as one ROI. Therefore, such an apple was classified as a good one.
Comparing different configurations, the results of a sorting line with only one camera (the overhead camera) and of a sorting line with the two inclined cameras are shown in Tables 2.7 and 2.8. With one camera, 21.8% of the apples with defects are misclassified (i.e., they are accepted), whereas this number reduces significantly from 14.3% with two cameras to 4.2% with three cameras. However, at the same time, the classification error for good apples increases from 11% for one camera (three images), via 13.56% for two cameras (six images), to 15.07% for three cameras (nine
Table 2.7 The results obtained by the two inclined color cameras grading line

True group             Accepted (199 apples)   Rejected (with defects, 119 apples)
Graded as accepted     172                     17
Graded as rejected     25                      102
Classification error   12.5%                   14.3%
Global classification error: 13.2%
Table 2.8 The results obtained by the single camera (the overhead camera) grading line

True group             Accepted (199 apples)   Rejected (with defects, 119 apples)
Graded as accepted     177                     26
Graded as rejected     22                      93
Classification error   11%                     21.8%
Global classification error: 15.1%
images). This is mainly caused by information loss. A statistical test was carried out on the loss of information when different numbers of cameras were utilized in the sorting line. Fifteen to twenty percent of the apple's surface cannot be observed in the three images obtained by the single overhead camera. Five to ten percent of the apple surface information is lost using two inclined cameras. After statistical analysis of the individual child images obtained by the three CCD cameras, the probability for a defect to be present in only one child image was 28.4% after testing 318 apples (318 × 9 = 2862 child images). However, the nine images obtained by the three cameras could cover the whole surface of the apple.
With defective apples, more images provide more opportunities to detect the defects, thus leading to a lower classification error. With good apples, more images mean a greater chance of classifying a spot blush as a defect, and hence more will be misclassified. This is caused by the defect detection algorithm: there are defects that are not darker than their surroundings and thus cannot be recognized, while, on the other hand, some sound parts of the fruit are darker than their surroundings. There are also other reasons for errors. Fewer defective apples in the accepted bin give higher prices, which can compensate for the slightly increased loss of good apples. With three cameras, the class of accepted apples has 174 apples, of which five still have defects (i.e., some 2.87%), whereas with one camera the accepted bin has 203 apples, but with
26 defective ones (i.e., some 13%). Compared with many former works in articles
[135, 143, 144], several images representing the whole surface of the fruit are con-
sidered in this work, and the defect recognition algorithm is simpler and faster.
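The per-class and global error rates quoted in the tables follow directly from the confusion-matrix counts. A minimal sketch of the computation (illustrative Python, not the chapter's implementation; the recomputed rates differ slightly from the rounded figures in the table because the accepted column sums to 197 rather than the stated 199):

```python
import numpy as np

def classification_errors(confusion):
    """Per-class and global error rates from a confusion matrix.

    Rows = graded class (accepted, rejected);
    columns = true class (accepted, rejected)."""
    confusion = np.asarray(confusion, dtype=float)
    col_totals = confusion.sum(axis=0)           # true-class sizes
    errors = col_totals - np.diag(confusion)     # misclassified per true class
    per_class = errors / col_totals
    global_err = errors.sum() / confusion.sum()
    return per_class, global_err

# Three-camera grading line: graded/true counts from the table above
per_class, global_err = classification_errors([[172, 17],
                                               [25, 102]])
print(np.round(per_class, 3))   # [0.127 0.143]
print(round(global_err, 3))     # 0.133
```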
42 2 Machine Vision Online Measurements
A cherry tomato is a smaller garden variety of tomato. With its high nutritional value and good appearance, the cherry tomato has become one of the most popular fruits in the world. Nowadays, cherry tomatoes are sorted by hand on many farms. However, the manual inspection process is not only labor intensive and tedious, but also subject to human error, which results in poor quality. Farmers want an automated grading device to facilitate this work. The cherry tomato cultivar Little Angel was selected for the experiment. The samples were hand harvested on 23 November 2007 from the experimental orchard of the Jinrui Institute of Agriculture, Zhenjiang. Cherry tomatoes were selected in a completely randomized manner from the same plants, with each fruit as an experimental unit. All fruits of each sample were individually numbered. Without any pretreatment, five assessors with previous experience in tomato assessment were invited to classify the cherry tomatoes into three maturity states (immature, half-ripe, and full-ripe), each with 30 samples. A total of 90 MV measurements were performed. For validation, 414 cherry tomatoes of the same variety, Little Angel, were inspected.
The MV system, as shown in Fig. 2.15, was composed of a CCD color camera (SenTec STC-1000) and a frame grabber (GRABLINK Value) connected to a compatible personal computer (Pentium 2.8 GHz, 512 MB random access memory (RAM)). The system provides images of 768 × 576 pixels. The frame grabber digitized and decoded the composite video signal from the camera into three user-defined buffers in RGB coordinates. The lighting system was composed of two ring-shaped LEDs inside a chamber, with a hole in the top for the camera.
The vision system was part of the robotic system for automatic inspection and sorting. Before entering the inspection chamber, the fruits were lined up one by one. Each fruit is then presented to the camera in three different, nonoverlapping positions, in order to inspect as much of the fruit surface as possible. The entire system, as shown in Fig. 2.15, is made up of four parts: (1) mechanical conveyor, (2) CCD camera combined with a PC, (3) executive mechanism, and (4) electronic device.
2.6.2 Image Analysis
Figure 2.16 shows the flowchart of the online grading software. It mainly includes image acquisition, segmentation, and feature extraction.
Online operation started with the acquisition of the first image. Three images from different angles are obtained from each cherry tomato, allowing the inspection of approximately 90% of the fruit surface (Fig. 2.17a).
The second step consisted of image segmentation using a fixed threshold T:

    f_t(x, y) = { 0,   f(x, y) < T
                { 255, f(x, y) ≥ T          (2.11)
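In NumPy, the fixed-threshold rule of Eq. (2.11) is a single vectorized comparison; a minimal sketch (the threshold value 128 is illustrative; the chapter does not state the value of T):

```python
import numpy as np

def fixed_threshold(image, T):
    """Binarize per Eq. (2.11): 0 where f(x, y) < T, 255 otherwise."""
    return np.where(image < T, 0, 255).astype(np.uint8)

# Tiny synthetic gray-level image
img = np.array([[10, 200],
                [130, 90]], dtype=np.uint8)
print(fixed_threshold(img, 128))
# [[  0 255]
#  [255   0]]
```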
Fig. 2.17 Results of image processing. a Raw image of cherry tomato. b Result of image segmentation
the specified classes, while the LDA calculation uses the class information that was
given during training. The LDA utilizes information about the distribution within
classes and the distances between them. Therefore, the LDA is able to collect infor-
mation from all sensors in order to improve the resolution of classes.
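The PCA and LDA projections of Fig. 2.18 can be sketched with plain NumPy: PCA rotates the data onto directions of maximum variance without using labels, while Fisher's LDA uses the class labels to maximize between-class scatter relative to within-class scatter. A minimal two-class sketch on synthetic data (the features and class structure here are invented, not the chapter's tomato data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D "feature" data for two ripeness classes
X0 = rng.normal([0, 0], 0.5, size=(30, 2))
X1 = rng.normal([2, 1], 0.5, size=(30, 2))
X = np.vstack([X0, X1])

# PCA: project onto the leading right-singular vector of the centered data
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                      # first principal component scores

# Fisher LDA (two classes): w ∝ Sw^{-1} (m1 - m0)
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
w = np.linalg.solve(Sw, m1 - m0)
ld1 = X @ w                           # discriminant scores

# Class means should be well separated along LD1
print(ld1[:30].mean() < ld1[30:].mean())  # True
```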
2.6.3 Sorting Results
Fig. 2.18 PCA and LDA analysis for tomato ripeness. a PCA. b LDA. PCA principal component analysis, LDA linear discriminant analysis
Soft capsules are produced in a single production step: filled and then closed off. The name soft capsule is used because the shell of the capsule contains plasticizers in addition to the gelatine. The actual degree of softness and elasticity depends on the type and amount of plasticizer used, the residual moisture, and the thickness of the capsule shell. Soft capsule shells are generally somewhat thicker than hard capsule shells. Glycerol, sorbitol, or a combination of both are common plasticizers. Soft capsules are generally manufactured by the so-called rotary die process, invented by Robert Pauli Scherer at the end of the 1920s: two dyed, highly elastic gelatine bands are fed through two counter-rotating drums in opposite directions. A film is formed, capsules are shaped, and these are then filled with the pharmaceutical active ingredient.
In China, soft capsules are a new kind of capsule in which oil-based functional material, liquor, suspension mash, or even powder is sealed. The soft capsule industry is developing very fast: more than 60,000 million soft capsules, worth US$400 million, are produced worldwide every year, of which 300 million are produced in China. These capsules are exported to Japan, southeast Asia, the USA, Europe, Singapore, etc. As most soft capsule contents are viscous, a fraction of the content adheres to the injector and filling pump as it flows into the wedge injector and is pushed into the two pieces of colloidal film by the filling pump of the automatic rotary capsule machine. This process causes fluctuation of the soft capsule weight, which is closely correlated with efficacy. Therefore, soft capsule weight needs to be measured in order to keep it uniform for dose control. Nowadays, many companies use workers trained to estimate soft capsule weight according to size. The grading accuracy and repeatability are low, because the grading process is based on the workers' personal experience. Hand grading is also labor intensive, expensive, and inefficient, and cannot meet industrial production demands. To our knowledge, this is the first time soft capsule grading equipment has been developed. Mimicking the human grading process, MV is proposed to grade the capsules.
A soft capsule online grading system was developed as shown in Fig. 2.19. It consisted of a feeding unit, an MV system, a grading unit, and an electric control unit. The basic feeding conveyor transported the soft capsules to the uniform-spacing conveyor. Then the capsules were fed to the MV system for defect inspection one by one. Finally, the automatic sorting unit accomplished the soft capsule grading operation.
2.7 Machine Vision Online Detection Quality of Soft Capsules
The MV system included a lighting chamber providing the desired spectrum and light distribution for soft capsule illumination, a CCD camera, and an image grabbing card with four input channels (provided by the Euresys company) inserted in a microcomputer (processor speed: 1.66 GHz).
2.7.2 Image Processing
First, the image background is removed. There are many ways to remove the background of an image [145]. According to the histogram of the soft capsule images, the gray-level distribution is bimodal, so the Otsu method (maximization of interclass variance) was chosen to remove the background. Figure 2.20a is the source image and Fig. 2.20b is the result processed by the Otsu method; from the image we can see that the soft capsule was segmented completely.
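Otsu's method scans all candidate thresholds of the gray-level histogram and keeps the one maximizing the between-class (interclass) variance. A minimal NumPy sketch (not the chapter's implementation):

```python
import numpy as np

def otsu_threshold(image):
    """Return the gray level maximizing between-class variance."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # gray-level probabilities
    omega = np.cumsum(p)                   # class-0 probability per threshold
    mu = np.cumsum(p * np.arange(256))     # cumulative mean
    mu_t = mu[-1]                          # global mean
    # Between-class variance for every candidate threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[np.isnan(sigma_b)] = 0
    return int(np.argmax(sigma_b))

# Bimodal synthetic image: dark background mode, bright capsule mode
img = np.concatenate([np.arange(30, 51).repeat(20),
                      np.arange(190, 211).repeat(10)]).astype(np.uint8)
T = otsu_threshold(img)
print(30 < T < 190)  # True: the threshold falls between the two modes
```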
Second, noise is removed. Following the background removal, the image still contains some noise that would influence further processing. There are many methods to remove noise from an image, such as mean smoothing, low-pass filtering, and median filtering. In this research, the 3 × 3 weighted mean smoothing filter

             | 1 2 1 |
      (1/16) | 2 4 2 |
             | 1 2 1 |

a low-pass filter, and a median filter were investigated to remove noise [146]. The results of these smoothing filters are shown in Fig. 2.21. Comparing these results, Fig. 2.21d was the best image for further processing.
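The 3 × 3 weighted mean kernel and the median filter differ in how they treat impulse noise: averaging spreads a spike out, whereas the median discards it. A small sketch, assuming a plain nested-loop neighborhood scan (illustrative only; real MV software uses optimized filtering routines):

```python
import numpy as np

# The chapter's 3 x 3 weighted mean (Gaussian-like) smoothing kernel
KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]]) / 16.0

def filter3x3(image, fn):
    """Apply fn to every 3x3 neighborhood (edges handled by replication)."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    out = np.empty(image.shape, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = fn(padded[i:i + 3, j:j + 3])
    return out

img = np.array([[10, 10, 10],
                [10, 255, 10],   # a single bright noise pixel
                [10, 10, 10]], dtype=float)

smoothed = filter3x3(img, lambda win: (win * KERNEL).sum())
median = filter3x3(img, np.median)
print(median[1, 1])    # 10.0  -- the median filter removes the spike entirely
print(smoothed[1, 1])  # 71.25 -- the mean filter only attenuates it
```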
Third, image features are extracted. In order to keep the whole soft capsule region in the background removal step, some background pixels with similar gray values were retained. Before feature extraction, region labeling [146] must be performed to find the correct soft capsule region in the image. Most MV software packages provide a region-labeling algorithm; the blob analysis function included in the eVision software was chosen for this work. The result is shown in Fig. 2.22. In the image, the soft capsule region is the biggest one; in this research, a region with more than 50,000 pixels is taken as the soft capsule region.
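Region labeling assigns each connected foreground region its own label so that the largest region can be kept as the capsule. A minimal 4-connected flood-fill sketch, standing in for the blob analysis function of the eVision software used in the chapter:

```python
import numpy as np

def label_regions(mask):
    """4-connected labeling of a binary mask; returns a label image."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue                      # already part of a labeled region
        current += 1
        stack = [(i, j)]                  # iterative flood fill
        while stack:
            y, x = stack.pop()
            if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                    and mask[y, x] and not labels[y, x]):
                labels[y, x] = current
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels

mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 1],
                 [0, 0, 0, 1]])
labels = label_regions(mask)
sizes = np.bincount(labels.ravel())[1:]   # pixels per labeled region
print(sizes)                              # [4 2] -- region 1 is the biggest blob
```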
Fourth, after the soft capsule region is found in the image, its features are extracted. In this research, area, girth, altitude diameter, and latitude diameter were used to represent the soft capsule. Their definitions are shown in Fig. 2.23:
1. Area (S), as shown in Fig. 2.23: the number of pixels whose gray value is 0.
2. Girth (L), as shown in Fig. 2.23: the number of edge pixels of the soft capsule region.
3. Altitude diameter (H): the distance between the leftmost and rightmost pixels.
4. Latitude diameter (W): the distance between the topmost and bottommost pixels.
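Given the capsule's binary mask, the four features can be computed directly. A hedged sketch (here foreground pixels are 1 rather than gray value 0, and girth is approximated as the count of foreground pixels having a background 4-neighbor; the chapter does not define its edge-counting rule):

```python
import numpy as np

def capsule_features(mask):
    """Area, girth, altitude (left-right) and latitude (top-bottom) extent."""
    ys, xs = np.nonzero(mask)
    area = len(ys)
    # Interior pixels have all four 4-connected neighbors in the foreground
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    girth = int((mask & ~interior.astype(bool)).sum())  # edge pixel count
    H = xs.max() - xs.min() + 1   # altitude diameter: leftmost to rightmost
    W = ys.max() - ys.min() + 1   # latitude diameter: topmost to bottommost
    return area, girth, H, W

mask = np.ones((4, 6), dtype=int)   # toy 4 x 6 rectangular "capsule"
print(capsule_features(mask))       # (24, 16, 6, 4)
```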
2.7.3 Sorting Results
Five hundred and forty soft capsules (180 unqualified and 360 qualified) were chosen, and their area, girth, altitude diameter, and latitude diameter were extracted to build a linear regression model. Figure 2.24 shows the relationship between area and weight. Fifteen thousand four hundred and sixty soft capsules produced by the Hengshun company were then tested by the online grading system based on the linear regression model; the grading accuracy is shown in Table 2.10. The soft capsules were first weighed manually using an electronic scale (FA1604) and sorted into two classes: accepted and rejected (Fig. 2.24).
The detection accuracy of the regression model was 94.1%, as shown in Table 2.10. Compared with manual detection by human eyes (detection accuracy 74.9%), the machine detection accuracy is much higher.
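The grading model is an ordinary least-squares line relating pixel area to capsule weight; grading then thresholds the predicted weight. A minimal sketch with invented calibration data (the chapter's actual coefficients, target weight, and tolerance are not given):

```python
import numpy as np

# Hypothetical calibration data: pixel area vs. measured weight (g)
area = np.array([52000, 55000, 58000, 61000, 64000], dtype=float)
weight = np.array([0.72, 0.76, 0.80, 0.84, 0.88])

# Fit weight = a * area + b by least squares
a, b = np.polyfit(area, weight, 1)

def grade(pixel_area, target=0.80, tol=0.03):
    """Accept a capsule if its predicted weight is within tol of target."""
    predicted = a * pixel_area + b
    return "accepted" if abs(predicted - target) <= tol else "rejected"

print(grade(58500))  # near the target weight -> accepted
print(grade(70000))  # predicted overweight  -> rejected
```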
Summary
Over the past decade, MV has been applied much more widely, uniformly and sys-
tematically in the food industry. This chapter presents the recent developments and
applications of MV in the food industry, and highlights the construction and image processing of online detection by MV.

Fig. 2.21 Effect of mean smoothing, low-pass filter, and median filter. a Source image. b Processed by mean smoothing. c Processed by low-pass filter. d Processed by median filter

The basic components and technologies associated with MV and three examples of online food detection were introduced. The
automated, objective, rapid, and hygienic inspection of diverse raw and processed
foods can be achieved by the use of computer vision systems.
Computer vision has the potential to become a vital component of automated food processing operations, as increasing computer capabilities and greater algorithm processing speeds continue to develop to meet the necessary online speeds. This has been ensured by continual developments in the constituent
methodologies, namely image processing and pattern recognition. At the same time,
advances in computer technology have permitted viable implementations to be
achieved at lower cost. The flexibility and nondestructive nature of this technique
also help to maintain its attractiveness for application in the food industry. To some
extent, progress is now being held up by the need for tailored development in each
application: Hence, future algorithms will have to be made trainable to a much
greater extent than is currently possible.
Fig. 2.24 The relation between the area (s) and weight (w) of capsules
References
1. Alfatni MSM, Shariff ARM, Abdullah MZ, Marhaban MHB, Saaed OMB. The application of internal grading system technologies for agricultural products—a review. J Food Eng. 2013;116:703–25.
2. Ying Y, Zhang W, Jiang Y, Zhao Y. Application of machine vision technique in automatic
harvesting and processing of agricultural products. Nongye Jixie Xuebao/Trans Chin Soc
Agric Mach. 2000;31:1125.
3. Brosnan T, Sun DW. Inspection and grading of agricultural and food products by computer vision systems—a review. Comput Electron Agric. 2002;36:193–213.
4. Xu H, Ying Y. Detection of citrus in a tree canopy using infrared thermal imaging. Providence, RI, United States, 2004; The International Society for Optical Engineering: Providence, RI, United States, p. 321–327.
5. Daley WD, Doll TJ, McWhorter SW, Wasilewski AA. Machine vision algorithm generation
using human visual models. Proc SPIE—Int Soc Opt Eng. 1999;3543:65–72.
6. Purnell G, Brown T. Equipment for controlled fat trimming of lamb chops. Comput Electron Agric. 2004;45:109–24.
7. Pellerin C. Machine vision in experimental poultry inspection. Sens Rev. 1995;15:234.
8. Chao K, Chen Y-R, Hruschka WR, Gwozdz FB. On-line inspection of poultry carcasses by a dual-camera system. J Food Eng. 2002;51:185–92.
9. Igathinathane C, Pordesimo LO, Columbus EP, Batchelor WD, Methuku SR. Shape identification and particle size distribution from basic shape parameters using ImageJ. Comput Electron Agric. 2008;63:168–82.
10. Zapotoczny P. Discrimination of wheat grain varieties using image analysis and neural net-
works. Part i. Single kernel texture. J Cereal Sci. 2011;54:608.
33. Kavdir I, Guyer DE. In Artificial neural networks, machine vision and surface reflectance
spectra for apple defect detection, Milwaukee, WI., United States, 2000; American Society
of Agricultural Engineers: Milwaukee, WI., United States, p.937953.
34. Tao Y, Heinemann PH, Sommer HJ. Machine vision for colour inspection of potatoes and apples. Trans ASAE. 1995;5:949–57.
35. Guizard CGJM. Automatic potato sorting system using colour machine vision, vol.98. In-
ternational Workshop on Sensing Quality of Agricultural Products, Motpellier, France 1998,
p.20310.
36. Wooten JR, White JG, Thomasson JA, Thompson PG. In 2000 ASAE annual international meeting, vol. 98, paper no. 001123. St. Joseph, Michigan, USA: ASAE. 2000.
37. Noordam JC, Otten GW, Timmermans TJ, Zwol BHv. High-speed potato grading and quality
inspection based on a color vision system, electronic imaging. Int Soc Opt Photonics. 2000;
20617.
38. ElMasry G, Cubero S, Molto E, Blasco J. In-line sorting of irregular potatoes by using automated computer-based machine vision system. J Food Eng. 2012;112:60–8.
39. Elmasry G, Kamruzzaman M, Sun DW, Allen P. Principles and applications of hyper-
spectral imaging in quality evaluation of agro-food products: a review. Crit Rev Food Sci.
2012;52:999–1023.
40. Zhang BH, Huang WQ, Li JB, Liu CL, Huang DF. Research of in-line sorting of irregular
potatoes based on i-relief and svm method. J Jilin University Eng Technol. 2014.
41. Heinemann PH, Hughes R, Morrow CT, Sommer HJ, Beelman RB, Wuest PJ. Grading of
mushrooms using a machine vision system. Transactions of the ASAE. 1994;37:16711.
42. Vizhanyo T, Felfoldi J. Enhancing colour differences in images of diseased mushrooms.
Comput Electron Agr. 2000;26:18798.
43. Gowen AA, O'Donnell CP, Taghizadeh M, Cullen PJ, Frias JM, Downey G. Hyperspectral imaging combined with principal component analysis for bruise damage detection on white mushrooms (Agaricus bisporus). J Chemom. 2008;22:259–67.
44. Gowen AA, Taghizadeh M, O'Donnell CP. Identification of mushrooms subjected to freeze
damage using hyperspectral imaging. J Food Eng. 2009;93:712.
45. Howarth MS, Searcy SW. In Inspection of fresh market carrots by machine vision, Proceed-
ings of the 1992 Conference on Food Processing Automation II, May 46 1992, Lexington,
KY, USA, 1992; Publ by ASAE: Lexington, KY, USA, p.106106.
46. Qiu W, Shearer SA. Maturity assessment of broccoli using the discrete fourier transform.
Trans Am Soc Agric Eng. 1992;35:205762.
47. Tollner EW, Shahin MA, Maw BW, Gitaitis RD, Summer DR. In 1999 asae annual interna-
tional meeting, vol.26, paper no. 993165. S t. Joseph, Michigan, USA: ASAE, 1999.
48. Tao Y, Wen Z. An adaptive spherical image transform for high-speed fruit defect detection.
Trans Am Soc Agric Eng. 1999;42:2416.
49. Xing J, Saeys W, De Baerdemaeker J. Combination of chemometric tools and image process-
ing for bruise detection on apples. Comput Electron Agr. 2007;56:113.
50. ElMasry G, Wang N, Vigneault C. Detecting chilling injury in red delicious apple using hy-
perspectral imaging and neural networks. Postharvest Biol Technol. 2009;52:18.
51. Kim S, Schatzki TF. Apple watercore sorting system using x-ray imagery: I. Algorithm development. Trans ASAE. 2000;43:1695–1702.
52. Leemans V, Magein H, Destain MF. On-line fruit grading according to their external quality
using machine vision. Biosyst Eng. 2002;83:397404.
53. Chauhan APS, Singh AP. Intelligent estimator for assessing apple fruit quality. Int J Comput
Appl. 2012;60:3641.
54. Mendoza F, Aguilera JM. Application of image analysis for classification of ripening ba-
nanas. J Food Sci. 2004;69:E471E7.
55. Mendoza F, Dejmek P, Aguilera JM. Calibrated color measurements of agricultural foods
using image analysis. Postharvest Biol Technol. 2006;41:28595.
80. Storbeck F, Daan B. Fish species recognition using computer vision and a neural network.
Fish Res. 2001;51:115.
81. Quevedo RA, Aguilera JM, Pedreschi F. Color of salmon fillets by computer vision and
sensory panel. Food Bioprocess Tech. 2010;3:63743.
82. Jamieson V. Physics raises food standards. Phys World. 2002;15,212.
83. Hayashi S, Kanuma T, Ganno K, Sakaue O. In Cabbage head recognition and size estima-
tion for development of a selective harvester, 1998.
84. Batchelor MM, Searcy SW. Computer vision determination of the stem/root joint on pro-
cessing carrots. J Agric Eng Res. 1989;43:25969.
85. Steinmetz V, Roger JM, Molt E, Blasco J. On-line fusion of colour camera and spectro-
photometer for sugar content prediction of apples. J Agric Eng Res. 1999;73:20716.
86. Kim S, Schatzki T. Detection of pinholes in almonds through x-ray imaging. Trans Asae.
2001;44:9971003.
87. Anon. Focus on container inspection. Int Bottler Packag. 1995;69,2231.
88. Li J, Tan J, Martz FA. In Predicting beef tenderness from image texture features, Proceed-
ings of the 1997 ASAE Annual International Meeting. Part 1 (of 3), August 10, 1997 Au-
gust 14, 1997, Minneapolis, MN, USA, 1997; ASAE: Minneapolis, MN, USA.
89. Ilea DE, Whelan PF. Image segmentation based on the integration of colour–texture descriptors—a review. Pattern Recognit. 2011;44:2479–501.
90. Tao Y, Wen Z. Adaptive spherical image transform for high-speed fruit defect detection.
Trans ASAE. 1999;42:2416.
91. Ying Y-B, Gui J-S, Rao X-Q. Fruit shape classification based on zernike moments. Ji-
angsu Daxue Xuebao (Ziran Kexue Ban)/J Jiangsu University (Natural Science Edition).
2007;28:13.
92. Paulus I, Schrevens E. Shape characterization of new apple cultivars by fourier expansion
of digitized images. J Agric Eng Res. 1999;72:1138.
93. Abdullah MZ, Mohamad-Saleh J, Fathinul-Syahir AS, Mohd-Azemi BMN. Discrimination
and classification of fresh-cut starfruits (averrhoa carambola l.) using automated machine
vision system. J Food Eng. 2006;76:50623.
94. Abdullah MZ, Fathinul-Syahir AS, Mohd-Azemi BMN. Automated inspection system for
colour and shape grading of starfruit (averrhoa carambola l.) using machine vision sensor.
Trans Inst Meas Control. 2005;27:6587.
95. Paulus I, De Busscher R, Schrevens E. Use of image analysis to investigate human quality
classification of apples. J Agric Eng Res. 1997;68:34153.
96. Leemans V, Magein H, Destain MF. Defects segmentation on golden delicious apples by
using colour machine vision. Comput Electron Agr. 1998;20:11730.
97. Zou XB, Zhao JW, Li YX, Shi JY, Yin XP Apples shape grading by fourier expansion and
genetic program algorithm. In Icnc 2008: Fourth international conference on natural com-
putation, vol4, proceedings, Guo, M.Z.; Zhao, L.; Wang, L.P., Eds. 2008; p.8590.
98. Weeks AR, Gallagher A, Eriksson J. Detection of oranges from a color image of an orange
tree. Proc SPIEInt Soc Opt Eng. 1999;3808:34657.
99. Pydipati R, Burks TF, Lee WS. Identification of citrus disease using color texture features
and discriminant analysis. Comput Electron Agric. 2006;52:4959.
100. Lee DJ, Archibald JK, Chang YC, Greco CR. Robust color space conversion and color
distribution analysis techniques for date maturity evaluation. J Food Eng. 2008;88:36472.
101. Lee D-J. In Color space conversion for linear color grading, Boston, USA, 2000; Soci-
ety of Photo-Optical Instrumentation Engineers, Bellingham, WA, USA: Boston, USA,
p.358366.
102. Abdullah MZ, Guan LC, Mohamed AMD, Noor MAM. Color vision system for ripeness
inspection of oil palm elaeis guineensis. J Food Process Preserv. 2002;26:21335.
103. Leemans V, Magein H, Destain MF. Defects segmentation on golden delicious apples by
using colour machine vision. Comput Electron Agric. 1998;20:11730.
104. Bulanon DM, Burks TF, Alchanatis V. In Study on fruit visibility for robotic harvesting,
Minneapolis, MN, United States, 2007; American Society of Agricultural and Biological
Engineers, St. Joseph, MI 49085 9659, United States: Minneapolis, MN, United States,
p.12.
105. Zou XB, Zhao JW. Apple quality assessment by fusion three sensors. IEEE Sensors. 2005;1
& 2,38992.
106. Zhu B, Jiang L, Tao Y. Three-dimensional shape enhanced transform for automatic apple
stem-end/calyx identification. Opt Eng. 2007;46.
107. Yoruk R, Yoruk S, Balaban MO, Marshall MR. Machine vision analysis of antibrown-
ing potency for oxalic acid: a comparative investigation on banana and apple. J Food Sci.
2004;69:E281E9.
108. Xiuqin R, Yibin Y, YiKe C, Haibo H. In Laser scatter feature of surface defect on apples,
Boston, MA, United States, 2006; International Society for Optical Engineering, Belling-
ham WA, WA 98227-0010, United States: Boston, MA, United States, p638113.
109. Xing J, Jancsok P, De Baerdemaeker J. Stem-end/calyx identification on apples using con-
tour analysis in multispectral images. Biosystems Eng. 2007;96:2317.
110. Wen Z, Tao Y. Dual-camera nir/mir imaging for stem-end/calyx identification in apple de-
fect sorting. Trans ASAE. 2000;43:44952.
111. Upchurch BL, Throop JA. Effects of storage duration on detecting watercore in apples us-
ing machine vision. Trans ASAE. 1994;37:4836.
112. Upchurch BL, Throop JA. In Considerations for implementing machine vision for detect-
ing watercore in apples, Boston, MA, USA, 1993; Publ by Int Soc for Optical Engineering,
Bellingham, WA, USA: Boston, MA, USA, p.291297.
113. Unay D, Gosselin B. Stem and calyx recognition on jonagold apples by pattern recogni-
tion. J Food Eng. 2007;78:597605.
114. Unay D, Gosselin B. Automatic defect segmentation of jonagold apples on multi-spectral
images: a comparative study. Postharvest Biol Technol. 2006;42:2719.
115. Unay D, Gosselin B. In Artificial neural network-based segmentation and apple grading
by machine vision, Genova, Italy, 2005; Institute of Electrical and Electronics Engineers
Computer Society, Piscataway, NJ 08855 1331, United States: Genova, Italy, p.630633.
116. Shahin MA, Tollner EW, McClendon RW, Arabnia HR. Apple classification based on sur-
face bruises using image processing and neural networks. Trans Asae. 2002;45:161927.
117. Safren O, Alchanatis V, Ostrovsky V, Levi O. Detection of green apples in hyperspectral
images of apple-tree foliage using machine vision. Trans ASABE. 2007;50:230313.
118. Rao XQ, Ying YB, Cen YK, Huang HB. Laser scatter feature of surface defect on apples
art. No. 638113. Opt Nat Resour Agric Foods. 2006;6381:381133.
119. Peirs A, Scheerlinck N, De Baerdemaeker J, Nicolai BM. Starch index determination of
apple fruit by means of a hyperspectral near infrared reflectance imaging system. J Infrared
Spectrosc. 2003;11:37989.
120. Narayanan P, Lefcourt AM, Tasen U, Rostamian R, Kim MS. In Tests of the ability to orient
apples using their inertial properties, Minneapolis, MN, United States, 2007; American So-
ciety of Agricultural and Biological Engineers, St. Joseph, MI 49085 9659, United States:
Minneapolis, MN, United States, p12.
121. Mehl PM, Chao K, Kim M, Chen YR. Detection of defects on selected apple cultivars using
hyperspectral and multispectral image analysis. Appl Eng Agric. 2002;18:21926.
122. Li QZ, Wang MH, Gu WK. Computer vision based system for apple surface defect detec-
tion. Comput Electron Agric. 2002;36:21523.
123. Lefcout AM, Kim MS, Chen Y-R, Kang S. Systematic approach for using hyperspectral
imaging data to develop multispectral imagining systems: Detection of feces on apples.
Comput Electron Agric. 2006;54:2235.
124. Lefcourt AM, Narayanan P, Tasch U, Rostamian R, Kim MS, Chen Y-R. Algorithms
for parameterization of dynamics of inertia-based apple orientation. Appl Eng Agric.
2008;24:1239.
125. Leemans V, Destain MF. A real-time grading method of apples based on features extracted from defects. J Food Eng. 2004;61:83–9.
126. Kleynen O, Leemans V, Destain MF. Development of a multi-spectral vision system for the
detection of defects on apples. J Food Eng. 2005;69:419.
127. Kavdir I, Guyer DE. Evaluation of different pattern recognition techniques for apple sort-
ing. Biosystems Eng. 2008;99:2119.
128. Kavdir I, Guyer DE. Bulanik mantik kullanarak elma siniflama apple grading using fuzzy
logic. Turk J Agric For. 2003;27,37582.
129. Kaewapichai W, Kaewtrakulpong P, Prateepasen A. A real-time automatic inspection sys-
tem for pattavia pineapples. Key Eng Mater. 2006;321323 II,118691.
130. Huang X-Y, Lin J-R, Zhao J-W. Detection on defects of apples based on support vector
machine. Jiangsu Daxue Xuebao (Ziran Kexue Ban)/J Jiangsu University (Natural Science
Edition). 2005;26:4657.
131. ElMasry G, Wang N, Vigneault C, Qiao J, ElSayed A. Early detection of apple bruises
on different background colors using hyperspectral imaging. Lwt-Food Sci Technol.
2008;41:33745.
132. Cheng X, Tao Y, Chen Y-R, Luo Y. Nir/mir dual-sensor machine vision system for online
apple stem-end/calyx recognition. Trans Am Soc Agric Eng. 2003;46:5518.
133. Bulanon DM, Kataoka T, Ota Y, Hiroma T. Segmentation algorithm for the automatic rec-
ognition of Fuji apples at harvest. Biosystems Eng. 2002;83:40512.
134. Bulanon DM, Kataoka T, Okamoto H, Hata S. In Development of a real-time machine vi-
sion system for the apple harvesting robot, Sapporo, Japan, 2004; Society of Instrument and
Control Engineers (SICE), Tokyo, 113, Japan: Sapporo, Japan, p.25312534.
135. Bennedsen BS, Peterson DL, Tabb A. Identifying defects in images of rotating apples.
Comput Electron Agric. 2005;48:92102.
136. Ariana D, Guyer DE, Shrestha B. Integrating multispectral reflectance and fluorescence
imaging for defect detection on apples. Comput Electron Agric. 2006;50:14861.
137. Guyer D, Yang X. Use of genetic artificial neural networks and spectral imaging for defect
detection on cherries. Comput Electron Agric. 2000;29:17994.
138. Kim G, Lee K, Choi K, Son J, Choi D, Kang S. In Defect and ripeness inspection of citrus
using nir transmission spectrum, Jeju Island, South Korea, 2004; Trans Tech Publications
Ltd, Zurich-Ueticon, CH-8707, Switzerland: Jeju Island, South Korea, p.10081013.
139. Bennedsen BS, Peterson DL. Performance of a system for apple surface defect identifica-
tion in near-infrared images. Biosystems Eng. 2005;90:41931.
140. Mathanker SK, Weckler PR, Bowser TJ, Wang N, Maness NO. Adaboost classifiers for
pecan defect classification. Comput Electron Agric. 2011;77:608.
141. Yang Q, Marchant JA. Accurate blemish detection with active contour models. Comput Electron Agric. 1996;14:77–89.
142. Yang Q. An approach to apple surface feature detection by machine vision. Comput Elec-
tron Agr. 1994;11:24964.
143. Leemans V, Destain M-F. A real-time grading method of apples based on features extracted from defects. J Food Eng. 2004;61:83–9.
144. Blasco J, Aleixos N, Molto E. Computer vision detection of peel defects in citrus by means
of a region oriented segmentation algorithm. J Food Eng. 2007;81:53543.
145. Sonka M, Bosch JG, Lelieveldt BPF, Mitchell SC, Reiber JHC. Computer-aided diagnosis
via model-based shape analysis: cardiac MR and echo. Int Congr Ser. 2003;1256:10138.
146. Zhang Y, Yin X, Xu T, Zhao J. On-line sorting maturity of cherry tomato by machine vision. In: Li D, Zhao C, editors. Computer and computing technologies in agriculture II. vol. 3. New York: Springer; 2009. p. 2223–2229.