
Published in IET Computer Vision
Received on 29th November 2009
Revised on 10th December 2010
doi: 10.1049/iet-cvi.2010.0046

ISSN 1751-9632

Vanishing point detection in corridors: using Hough transform and K-means clustering

R. Ebrahimpour 1,2, R. Rasoolinezhad 3, Z. Hajiabolhasani 3, M. Ebrahimi 3

1 Department of Electrical and Computer Engineering, Brain and Intelligent Systems Research Lab, Shahid Rajaee Teacher Training University, P.O. Box 16785-163, Tehran, Iran
2 School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Niavaran, P.O. Box 19395-5746, Tehran, Iran
3 Islamic Azad University, South Tehran Branch, No. 3, 9th Nayestan, Pasdaran Avenue, Tehran, Iran
E-mail: ebrahimpour@ipm.ir; rebrahimpour@gmail.com

Abstract: One of the main challenges in steering a vehicle or a robot is the detection of an appropriate heading. Many solutions have been proposed during the past few decades to overcome the difficulties of intelligent navigation platforms. In this study, the authors introduce a new procedure for finding the vanishing point based on visual information and K-means clustering. Unlike other solutions, the authors do not need to find the intersection of lines to extract the vanishing point, which reduces the complexity and the processing time of the algorithm to a large extent. The authors import the minimum possible information into the Hough space by using only two pixels of each line (the start point and the end point) instead of the hundreds of pixels that form the line. This reduces the mathematical complexity of the algorithm while maintaining very efficient operation. The most important and unique characteristic of the algorithm is that the processed data can be reused for other important navigation tasks such as mapping and localisation.

1 Introduction

Under perspective projection, parallel lines in three-dimensional (3D) space project to converging lines in the image plane. The common point of intersection is called the vanishing point and may belong to the line at infinity of the image plane in the case of 3D lines parallel to the image plane. Even though concurrence in the image plane does not necessarily imply parallelism in 3D (it only implies that all the 3D lines intersect the line defined by the focal point and the vanishing point), counter-examples to this implication are extremely rare in real images, and the problem of finding parallel lines in 3D is reduced to finding vanishing points in the image plane. Knowledge of the vanishing point is an important step towards 3D interpretation, allowing meaningful information to be obtained about the real scene (such as depth, object dimensions etc.). Vanishing point detection in images plays a leading role in various tasks of artificial vision, such as camera calibration, image rectification and navigation of autonomous vehicles. Using the vanishing point is one of the most widely used techniques for indoor vision-based navigation. Many parallel lines are present in corridors, such as the edges of the side walls with the floor and the ceiling. Using the vanishing point determined from these lines, the heading of the robot can be estimated.
Several approaches exist to detect vanishing points, but they are generally developed around the concept of the Hough transform and differ in the parameter space used. The accumulator spaces used for vanishing point detection are the Gaussian sphere, the Hough space and the image space.
One of the most widely used Gaussian sphere solutions was introduced by Barnard [1]; other methods build on the Gaussian sphere, some of them using a Bayesian model. Barnard proposed the Gaussian sphere as a bounded space onto which lines in the 2D image space are projected, so that both finite and infinite vanishing points can be detected effectively. The complexity of Barnard's algorithm lies in the number of lines processed, whereas the vanishing points are located only within the boundaries of a histogram bucket. This makes the approach ill suited for accurately estimating the true vanishing point location. This drawback can be overcome by using a more sophisticated hierarchical apparatus, as in ref. [2]. Magee and Aggarwal [3] computed the intersections of pairs of line segments directly, using cross-product operations. Later work by Brillault-O'Mahoney [4] started acknowledging the existence of errors in image measurements, but reduced this to a method that was sensitive to line lengths only. Collins and Weiss [5] formulated vanishing point computation as a statistical estimation problem on the unit sphere.
Some authors used the Hough space for this task. Tuytelaars et al. [6] proposed a space-bounding technique for the unbounded parameter space of a cascaded Hough transform to detect vanishing points and lines. Local descriptors of the image surface and their distributions are attributes used in many computer vision applications; using additional information measuring their uncertainty, a more accurate estimate of their probabilistic distribution can be modelled. Dahyot [7] illustrated this concept by proposing two kernel-based representations of the distribution of the Hough variables that were more robust than the standard Hough transform.
In methods based on the image space, votes for the vanishing points are accumulated directly in the image space, avoiding the camera calibration requirements and the information loss of other parameter spaces. Rother [8] used the image space as an accumulator space to detect the vanishing point without any information loss.
Clustering has been used to find the vanishing point since the early 1990s. McLean and Kotturi [9] suggested a method for the detection of vanishing points based on sub-pixel line descriptions and thresholds. An unlimited search space and points at infinity are important problems in finding vanishing points. He et al. [10] found the vanishing point by clustering on the normalised unit sphere. In comparison with the Gaussian sphere method, this method transforms the non-homogeneous space into a normalised homogeneous space in which finite points and points at infinity are treated equally.
The method we develop here differs from previous works in that we do not estimate the intersection of lines, neither in the image plane nor in the Hough space. It is less complicated, and the obtained features can be used for localisation and mapping purposes. Normally, illumination and other harmful visual events hurt the accuracy of edge detection and lead to a poor understanding of the image. We counter this difficulty by reducing the dependency of our algorithm on the accuracy of the edge detection stage. This is possible because we deal with a set of lines instead of a single line; unlike other solutions, we do not require the exact location of a line to estimate an intersection point.
This paper includes a brief description of the main characteristics of the Hough transform (Section 2), our proposed method (Section 3), experimental results (Section 4) and conclusions and future works (Section 5).
2 Hough transform

There are a variety of segmentation criteria that involve models at a larger scale. Typically, one wants to decompose an image or a set of tokens (which could be pixels, isolated points, sets of edge points etc.) into components that belong to one or another simple family. For example, we might want to cluster tokens together because they form a circle. Finding such groups is often called fitting. We see fitting as part of the segmentation process, because it uses a model to produce compact representations that emphasise the relevant image structures. Generally, fitting involves determining what possible structures could have given rise to a set of tokens observed in an image. In our case, we have a set of edge points (the tokens) and wish to determine which lines fit them best. The Hough transform is designed to detect lines, using the parametric representation of a line

r = x cos(θ) + y sin(θ)

The variable r is the distance from the origin to the line along a vector perpendicular to the line, and θ is the angle between the x-axis and this vector. The Hough transform generates a parameter space matrix whose rows and columns correspond to these r and θ values, respectively.
We used the standard Hough transform (in the MATLAB environment) for our work, but there are other choices, such as the fast, hierarchical, cascaded and stochastic variants. The following papers provide good information about the various types of Hough transform: Hough [11], Duda and Hart [12], Aggarwal and Karl [13], Bandera et al. [14], Goldenshluger and Zeevi [15], Kosaka and Kak [16] and Cheshkov [17].
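To make this parameterisation concrete, the following minimal NumPy sketch (an illustration only, not the authors' MATLAB code) builds the (r, θ) accumulator described above and recovers the single line supported by three collinear test points; the image size, angular resolution and test coordinates are arbitrary choices.

import numpy as np

def hough_accumulator(edge_mask, n_theta=180):
    """Build the (r, theta) parameter-space matrix for a binary edge image."""
    h, w = edge_mask.shape
    thetas = np.deg2rad(np.arange(-90, 90, 180.0 / n_theta))  # angle of the line's normal vector
    r_max = int(np.ceil(np.hypot(h, w)))                      # largest possible |r|
    acc = np.zeros((2 * r_max + 1, len(thetas)), dtype=np.int64)
    cols = np.arange(len(thetas))
    ys, xs = np.nonzero(edge_mask)                            # coordinates of the edge pixels
    for x, y in zip(xs, ys):
        # each edge pixel votes once per theta bin for the line r = x*cos(theta) + y*sin(theta)
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + r_max, cols] += 1
    return acc, thetas, r_max

# Three points on the line y = x vote most strongly for a single (r, theta) cell.
edges = np.zeros((50, 50), dtype=bool)
edges[10, 10] = edges[20, 20] = edges[30, 30] = True
acc, thetas, r_max = hough_accumulator(edges)
r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
print('peak at r =', r_idx - r_max, 'theta =', round(np.rad2deg(thetas[t_idx])), 'degrees')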

3 Our proposed method

We decided to develop a navigation theory based on comparing and monitoring information gathered from the side walls. If the robot can draw an approximate model of the corridor, the planning of the navigation process and the accuracy of travelling improve considerably. We tried to model the variation of the straight lines in corridors using the Hough transform and K-means clustering. In our theory, the left clusters represent the left wall, the right clusters represent the right wall, and the gap between them represents the space available for navigation. The theory will be explained in the following sections. It was soon recognised that if the robot stands in the centre of the corridor, the density of non-vertical and non-horizontal lines is almost equally shared between the left and right sides of the image. If we could model the areas with higher line densities, the shape of the corridor, including the walls and the gap between them, would be easily distinguishable in pictures. In order to navigate accurately, we also had to choose a heading correction criterion. We decided to use the vanishing point to correct the heading; this was the motivation to start working on vanishing points. Our goal was to introduce a new and simple procedure for vanishing point extraction based on visual information. We use the same data for the next steps, including modelling of the environment and forming a navigation strategy. In this paper, we present our achievements in vanishing point estimation; the second part of our research, visual navigation in corridors, will be presented in a separate paper in the near future.
We will introduce our theory step by step, and the result of each step will be shown on a difficult and complex corridor (Fig. 1). Moving in this corridor is difficult because there are elevation differences, varying slopes, steps, equipment, unbalanced lighting conditions, changing illumination, a reflective floor and working personnel. Dealing with each of these difficulties has been a challenge for robots since the beginning.

Fig. 1 Input image
As a first step, the input image was resized to minimise the processing time. Change in illumination conditions is one of the biggest problems associated with vision-based techniques. To reduce the effect of this problem, the RGB image was converted to greyscale and then histogram equalisation was performed on the greyscale image. This ensured sufficient contrast within the image to extract the required edges (Fig. 2).

Fig. 2 Result of applying histogram equalisation to Fig. 1

Fig. 3 Result of applying the 45° filter to Fig. 2

Fig. 4 Result of applying the Canny edge detector to Fig. 3

Fig. 5 Result of applying the Hough transform to Fig. 4; curves in the parameter space are shown in a hot colour map and the Hough peaks are indicated by green squares

Fig. 6 Result of Hough line detection based on the results shown in Figs. 4 and 5

Fig. 7 Result of finding four clusters; the blue, red, magenta and yellow points show the distribution of data for each cluster and circle markers indicate the centroid of each cluster

Fig. 8 Final clustering and the indication of the vanishing point; the circle marker shows the vanishing point and the green points show the locations of the centroids obtained in Fig. 7

Fig. 9 Classification of the data with an even number of clusters
a and b Two clusters are found for the data set; two triangles show the centroid of each cluster and the centroid of the final clustering is shown as a circle
c and d Four clusters are found for the data set; four triangles show the centroid of each cluster and the centroid of the final clustering is shown as a circle

Fig. 10 Experimental results 1

We applied the following 45° filter to the resulting image before performing Canny edge detection, to decrease processing time while increasing the accuracy of the Canny edge detector by finding only the lines required for our algorithm (Fig. 3). We chose a 45° filter because 45° lines always exist in all types of corridors (a consequence of the perspective effect) and it provided the best results in our experiments:

-1  -1   2
-1   2  -1
 2  -1  -1

Since it is very difficult to extract the right edges (edges with 45° orientation) under different conditions, a layered filtering approach was used, in which spurious lines or edges are eliminated at each level. The Canny edge detector thresholds were adjusted by hysteresis thresholding (Fig. 4). Lines with 45° orientation are very reliable features, regardless of the shape and the size of the corridor (see Section 4). The standard Hough transform was used to extract the desired lines from the detected edges. The Hough transform is normally a time-consuming task; this is why we applied the 45° filter before the edge detection step. As a result, only a small number of edges have to be imported into the Hough space, which automatically decreases the processing time. Fig. 5 shows the result of applying the Hough transform to Fig. 4, with the Hough peaks indicated by green squares, and Fig. 6 shows the resulting Hough line detection; the yellow lines are our desired lines.
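The feature-extraction front end just described (resizing, greyscale conversion, histogram equalisation, the 45° mask, Canny with hysteresis thresholding and line extraction) can be sketched roughly in Python with OpenCV as below. The authors' implementation is in MATLAB and uses the standard Hough transform with explicit peak extraction; for brevity this sketch uses the probabilistic variant cv2.HoughLinesP, which directly returns the start and end points needed later. Every threshold and the working resolution are illustrative assumptions, not the paper's values.

import cv2
import numpy as np

# The 45-degree line-detection mask given above
MASK_45 = np.array([[-1, -1,  2],
                    [-1,  2, -1],
                    [ 2, -1, -1]], dtype=np.float32)

def extract_diagonal_segments(bgr_image):
    small = cv2.resize(bgr_image, (320, 240))          # shrink to cut processing time (size assumed)
    grey = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)     # colour -> greyscale
    grey = cv2.equalizeHist(grey)                      # histogram equalisation for contrast
    filtered = cv2.filter2D(grey, -1, MASK_45)         # emphasise 45-degree structure
    edges = cv2.Canny(filtered, 50, 150)               # hysteresis thresholds (assumed values)
    # Each detected line is returned as (x1, y1, x2, y2), i.e. exactly the start/end
    # points that are clustered in the next stage of the method.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=30, maxLineGap=5)
    return [] if segments is None else [tuple(int(v) for v in s[0]) for s in segments]

# Usage with a hypothetical image file:
# segments = extract_diagonal_segments(cv2.imread('corridor.jpg'))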
We had to choose a classification criterion to properly separate the left and right features (the reason will be explained in the following paragraphs). K-means clustering was selected as the partitioning method: it partitions the data into k mutually exclusive clusters and returns the index of the cluster to which each observation has been assigned.
Fig. 11 Experimental results 2


Fig. 12 Experimental results 3

We found that if we choose only the beginning and ending pixel of each line, instead of importing all the pixels located between them, we can decrease the processing time of the K-means algorithm to almost 10% of that of the first trial. The number of required iterations was also reduced. For example, in Fig. 6 we needed 20 iterations to reach well-separated clusters when using all the pixels representing the lines; this was reduced to four iterations when only the beginning and ending pixels of each line were used.
We will call the beginning pixel of each line the 'start point' and its ending pixel the 'end point'. We found four clusters for the set of data that included both start points and end points (Fig. 7); the reason will be explained later. As shown in Fig. 8, the centroids of these clusters are used to form a new set of data for a final clustering. We consider the final centroid to be the vanishing point because its location coincides with the location of the vanishing point in the image. As shown in Sections 2 and 4, the final centroid has all the characteristics of a vanishing point.
Since we want to model the side walls of the corridor, to navigate safely between the walls while correcting the heading by finding the vanishing point, we need an even number of clusters, as we need at least one cluster for each wall. Fig. 9 shows the reason for choosing four clusters for the classification of our dataset, which includes the start points and end points. If we classify the dataset into two clusters (one for each wall), the result of the final clustering on these centroids (green points) will be as in Figs. 9a and b. The final centroid (marked circle) in Fig. 9a represents the vanishing point, but the same centroid in Fig. 9b does not have the characteristics of a vanishing point, since it is not the point at which parallel lines not parallel to the image plane appear to converge. This shows that finding two clusters and applying the final clustering to them fails in some cases, like the one illustrated in Fig. 9b. Next, we tried a larger even number of clusters: four, i.e. two clusters for each wall. The results are shown in Figs. 9c and d.
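A minimal sketch of this two-stage clustering, written with scikit-learn rather than the authors' MATLAB code: the start and end points of the detected segments are grouped into four clusters (two per wall), and a second one-cluster pass over the four centroids (i.e. their mean) gives the vanishing point estimate. The (x1, y1, x2, y2) segment format follows the earlier sketch and is an assumption.

import numpy as np
from sklearn.cluster import KMeans

def estimate_vanishing_point(segments):
    """segments: iterable of (x1, y1, x2, y2) line endpoints."""
    points = np.array([(x1, y1) for x1, y1, x2, y2 in segments] +
                      [(x2, y2) for x1, y1, x2, y2 in segments], dtype=float)
    if len(points) < 4:
        return None, None                                  # too few lines to form four clusters
    # First stage: four clusters over all start/end points, i.e. two clusters per side wall
    side_centroids = KMeans(n_clusters=4, n_init=10,
                            random_state=0).fit(points).cluster_centers_
    # Second stage: a single cluster over the four centroids; its centre is the estimate
    vanishing_point = KMeans(n_clusters=1, n_init=10,
                             random_state=0).fit(side_centroids).cluster_centers_[0]
    return vanishing_point, side_centroids

# Usage, chained with the front-end sketch given earlier (hypothetical file name):
# vp, centroids = estimate_vanishing_point(extract_diagonal_segments(cv2.imread('corridor.jpg')))

Because only the two endpoints of each line are clustered, the input to K-means stays small, which is the source of the iteration and runtime savings reported in this paper.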

The resulting centroid is the vanishing point in both corridors. We have tested this procedure in several corridors and the results were as strong and accurate as in Figs. 9c and d. This is the reason why we use four clusters in our algorithm; the experimental results in Section 4 show the effectiveness of this decision.
There is a chance of an incorrect estimate of the vanishing point in cases of severe occlusion or in the presence of many cluttered objects. Hence, another check was performed to verify the validity of the estimated vanishing point. Since the pitch of the camera does not change during the robot's motion and the optical axis is perpendicular to the vertical axis in the world frame, the vanishing point in the image must always have its y-coordinate close to half of the image height. By performing this simple check, the validity of the vanishing point can be established.
When the robot is moving in a straight line within the corridor, parallel to the walls, the x-coordinate of the vanishing point within the image is close to the image centre Cx. As the heading of the robot changes, the x-coordinate of the vanishing point changes. The robot then employs a simple controller to maintain a straight heading. As the robot deviates from its straight heading, the controller appropriately varies the two wheel velocities to bring the robot back to its required heading.
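The validity check and the heading correction can be sketched as below. The paper only states that a simple controller varies the two wheel velocities; the proportional rule, the tolerance, the gain and the sign convention here are assumptions made for illustration, not the authors' design.

def validate_vanishing_point(vp, image_height, tol=0.15):
    # With a fixed camera pitch, a valid vanishing point keeps its y-coordinate close to
    # half of the image height; the tolerance fraction is an arbitrary choice.
    return abs(vp[1] - image_height / 2.0) < tol * image_height

def wheel_speeds(vp_x, image_width, base_speed=0.3, gain=0.002):
    # When the robot is aligned with the corridor, vp_x sits near the image centre Cx.
    # A lateral offset is fed back as a speed differential between the two wheels,
    # assuming the vanishing point drifts right in the image when the robot yaws left.
    cx = image_width / 2.0
    turn = gain * (vp_x - cx)
    return base_speed + turn, base_speed - turn   # (left wheel, right wheel), arbitrary units

# Example: vanishing point detected 40 px right of centre in a 320 px wide frame;
# the left wheel speeds up and the right wheel slows down, so the robot yaws right.
print(wheel_speeds(200.0, 320))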

Fig. 13 Experimental results 4

A brief review of the steps mentioned is given in the following algorithm:

Algorithm:
1. Obtain a new image
2. Resize the image
3. Convert it from RGB to greyscale
4. Perform histogram equalisation
5. Apply the 45° mask filter
6. Apply the Canny edge detector
7. Perform the Hough transform
8. Extract the Hough peaks
9. Extract the Hough lines
10. Determine the start points and the end points
11. Apply K-means clustering to the data set (start points and end points)
12. Extract the four cluster centroids
13. Put these four cluster centroids in a new matrix
14. Apply K-means clustering again
15. Extract one cluster centroid
16. Validate the point
17. Register the vanishing point
18. Correct the heading
4 Experimental results

In this section, we present some experimental results obtained by applying the proposed method to real images. One of the most complex images in our set has already been processed in the prior sections. Generally, in vision applications environmental conditions, such as illumination, can change the characteristics of the scene and result in wrong detections. The examples given in this section overcome many of these difficulties while maintaining very robust outputs. We have tested our algorithm on several images and the results were very promising. The results of some of our experiments in different environments (corridors) can be seen in Figs. 10-17.

Fig. 14 Experimental results 5


Fig. 15 Experimental results 6

Fig. 18 shows one of our attempts to test the algorithm in an outdoor environment. According to the results shown in Fig. 18, we need only two clusters to find the vanishing point in outdoor cases. More work is required to adapt the algorithm to outdoor scenes, but the experiments presented in this section show the effectiveness of our algorithm in corridors.

5 Conclusions and future works

In this paper, we have summarised a new approach for finding the vanishing point in corridors. Generally, there are considerable similarities between the two sides of a corridor which can be used by the robot to correct its heading. In the following paragraphs, we address some of the major advantages of our algorithm in comparison with prior works.
Normally, illumination and other harmful visual events hurt the accuracy of edge detection and lead to a poor understanding of the image. We counter this difficulty by reducing the dependency of our algorithm on the accuracy of the edge detection stage. This is possible because we deal with a set of lines instead of a single line; unlike other solutions, we do not require the exact location of a line to estimate an intersection point. According to our discussion in the Introduction, almost all existing methods need to find the intersections of the lines to extract the vanishing point. Avoiding this step has increased the robustness of our algorithm even when the accuracy of the Canny edge detection is low or the input image is noisy.
Using the 45° filter prior to the edge detection step enabled us to import only a small number of edges into the Hough space, which automatically decreases the processing time. The amount of time saved depends on the number of unwanted edges (lines with an orientation other than 45°), which varies across pictures and across corridors, but generally we save more time in more cluttered environments. It is an important capability for an algorithm to discard useless information and avoid extra processing.
We used the standard Hough transform for our work, which is the simplest type within this family; it is less complicated and easy to perform. One of the prior works in the field of vanishing point detection is the research of Tuytelaars et al. [6], which used a cascaded Hough transform and required three levels of Hough transform to find the vanishing point. Other researchers have created their own types of Hough transform, such as Dahyot [7], who proposed a statistical Hough transform to find the vanishing point. Some of these newer transforms have better accuracy, but generally more complexity and a higher cost than our standard Hough transform.


Fig. 16 Experimental results 7

We decreased the processing time of the K-means algorithm to almost 10% by choosing only the beginning and ending pixel of each line, instead of importing all the pixels located between them. The number of required iterations was also reduced: as mentioned in earlier sections, we required 20 iterations to reach well-separated clusters in Fig. 6 when using all the pixels representing the lines, but only four iterations when using the beginning and ending pixels of each line. Most of the available literature that uses clustering for this task performs the clustering in the Hough space or other accumulation spaces, whereas we perform clustering on the results of the Hough transform, outside the Hough space. The Hough transform is a time-consuming transform, and adding further tasks such as clustering increases its cost unpleasantly (moving information from the image space to the Hough space and back to the image space after processing is one of the reasons). The rate of saving varies from case to case, but in the best condition the algorithm worked 20% faster over 10 trials.
In our procedure, external objects in corridors have a minimal effect on the accuracy of the vanishing point location, whereas they normally cause failures in most other solutions. This capability is achieved because of the unique way of feature extraction we chose for our algorithm: we ignore the general shape of the object and consider only the resulting 45° lines. The nature of the clustering, which is based on the minimum distance to the specified number of centroids (four in our case), decreases the effect of these new lines (the lines caused by an object in the corridor) and helps to find the location of the vanishing point accurately. Please note that our procedure assumes there is enough free space available in the corridor to move in (we have presented a vanishing point detection algorithm, not an obstacle avoidance procedure). Fig. 16 shows the corresponding results.
We do not need any a priori information or a calibrated camera to find the vanishing point. This capability alone is an important advantage of our algorithm in comparison with a large number of prior works.
Dynamic changes in the environment will not affect the algorithm unless the changes seriously limit the view of the camera. The same holds when people are present in the corridor: the robot can move until it is so close to people that the algorithm can no longer find the proper lines. It will stop in this situation until normal conditions are available again. Fig. 17 shows the related results.


Fig. 17 Experimental results 8

Fig. 18 Experimental result for outdoors

In addition to all the possible applications of the vanishing point, as a continuation of this research we are currently working on a system that uses real-time Extended Kalman filter simultaneous localisation and mapping (EKF-SLAM). The robot is equipped with a single camera as a bearing-only sensor and uses a unique feature detection (the clusters resulting from this work) and mapping principle in corridors. The vanishing point is used for heading correction, and the side clusters (the left and right clusters that were required to find the vanishing point) are employed to perform localisation and mapping. SLAM, in its naive form, scales quadratically with the number of landmarks in a map. For real-time implementation, this scaling is potentially a substantial limitation on the use of SLAM methods. We are trying to decrease the computational complexity and simplify the environment representation by using different landmark detection and mapping principles. This can be an alternative solution to the problems caused by the quadratic nature of the EKF. Fig. 19 shows the result of the mapping process. SLAM is an extensive field of research in itself and requires space and detailed discussion that is beyond our scope here; we will address the next part of our work in a separate paper in the near future. Outdoor usage is another challenge to be addressed in future works.

Fig. 19 Result of mapping in a corridor
a Robot is moving in a straight corridor
b Robot has reached a right turn in the corridor
c Robot has reached a left turn
d Robot has reached a T-junction
The dashed line (– – – –) shows the variation of the left centroids
The dash-dot line (– . –) shows the variation of the right centroids
The dotted line (........) shows the variation of the vanishing point, and C represents the centre of the corridor
(Scales are manipulated for better indication)

6 References

1 Barnard, S.T.: 'Interpreting perspective images', Artif. Intell., 1983, 21, pp. 435-462
2 Quan, L., Mohr, R.: 'Determining perspective structures using hierarchical Hough transform', Pattern Recognit. Lett., 1989, 9, (4), pp. 279-286
3 Magee, M., Aggarwal, J.: 'Determining vanishing points from perspective images', Comput. Vis. Graph. Image Process.: Image Underst., 1984, 26, pp. 256-267
4 Brillault-O'Mahoney, B.: 'New method for vanishing point detection', Comput. Vis. Graph. Image Process.: Image Underst., 1991, 54, (2), pp. 289-300
5 Collins, R., Weiss, R.: 'Vanishing point calculation as a statistical inference on the unit sphere'. Proc. Third Int. Conf. on Computer Vision, 1990, pp. 400-403
6 Tuytelaars, T., Proesmans, M., Van Gool, L.: 'The cascaded Hough transform'. Proc. Int. Conf. on Image Processing, October 1997, vol. 2, pp. 736-739
7 Dahyot, R.: 'Statistical Hough transform', IEEE Trans. Pattern Anal. Mach. Intell., 2009, 31, (8), pp. 1502-1509
8 Rother, C.: 'A new approach to vanishing point detection in architectural environments', Image Vis. Comput., 2002, 20, pp. 647-655
9 McLean, G.F., Kotturi, D.: 'Vanishing point detection by line clustering', IEEE Trans. Pattern Anal. Mach. Intell., 1995, 17, (11), pp. 1090-1095
10 He, Q., Chu, C.-H.H.: 'An efficient vanishing point detection by clustering on the normalized unit sphere'. IEEE Int. Conf. on Application-Specific Systems, Architectures and Processors (ASAP), 2007, pp. 203-207
11 Hough, P.: 'Method and means for recognizing complex patterns'. US patent 3069654, 1962
12 Duda, R.O., Hart, P.E.: 'Use of the Hough transformation to detect lines and curves in pictures', Commun. ACM, 1972, 15, pp. 11-15
13 Aggarwal, N., Karl, W.C.: 'Line detection in images through regularized Hough transform', IEEE Trans. Image Process., 2006, 15, (3), pp. 582-591
14 Bandera, A., Pérez-Lorenzo, J.M., Bandera, J.P., Sandoval, F.: 'Mean shift based clustering of Hough domain for fast line segment detection', Pattern Recognit. Lett., 2006, 27, pp. 578-586
15 Goldenshluger, A., Zeevi, A.: 'The Hough transform estimator', Ann. Stat., 2004, 32, (5), pp. 1908-1932
16 Kosaka, A., Kak, A.C.: 'Fast vision-guided mobile robot navigation using model-based reasoning and prediction of uncertainties', Comput. Vis. Graph. Image Process.: Image Underst., 1992, 56, (3), pp. 271-329
17 Cheshkov, C.: 'Fast Hough transform algorithm'. HLT Workshop, 6-9 June 2004
