
Indoor Space 3D Visual Reconstruction Using Mobile Cart with

Laser Scanner and Cameras


Prince Dukundane Gashongore1, Kikuhito Kawasue1, Kumiko Yoshida1 and Ryota Aoki1
Faculty of Engineering, University of Miyazaki, 1-1 Gakuen Kibanadai Nishi, Miyazaki, Japan
kawasue@cc.miyazaki-u.ac.jp

ABSTRACT
Indoor space 3D visual reconstruction has many applications and, when performed accurately, it enables a variety of indoor activities to be conducted efficiently. For example, an effective and efficient emergency rescue response can be mounted in a fire disaster by using 3D visual information of the damaged building. We have therefore developed an accurate indoor space 3D visual reconstruction system that can operate in any given environment without GPS, using a human-operated mobile cart equipped with a laser scanner, a CCD camera, an omnidirectional camera and a computer. Using the system, accurate indoor 3D visual data are reconstructed automatically. The obtained 3D data can be used for rescue operations, for guiding blind or partially sighted persons, and so forth.
Keywords: 3D Visual Reconstruction, Indoor, Interior Building Structures, Point Cloud, 3D Laser scanning,
Surface, Calibration, Pattern matching, Measurement system, Computer vision.

1. INTRODUCTION
Indoor space 3D visual reconstruction has interested researchers in the robotics community for decades. Although many researchers focus on the development of navigation and mapping systems for autonomous mobile robots [1], indoor space 3D visual reconstruction has many applications of its own, such as the efficient deployment of rescue teams on emergency response duty and the guiding and assisting of blind people so that they can move easily in public buildings [2]. The importance of these assistive applications has motivated many researchers to develop indoor 3D visual reconstruction systems such as those in [3], [4], [5], [6], [7], [8] and [9]. The system in [3] uses multiple laser scanners and relies on robust loop-closure detection techniques, which makes it prone to errors when operated in an unconditioned environment. Most of the cited systems operate only in conditioned environments; for example, the system in [3] operates only when the environment is composed of vertically oriented planes (i.e., walls). While that system is accurate in its assumed environment, it is limited and slow in other environments, and the same is true for many other systems. The development of an improved 3D visual reconstruction system that works in any given environment is therefore desirable.
A road measurement system was developed in our previous study [10], in which the shape of the road is reconstructed on a computer by taking the movement of the system into account. In this paper, we develop a mobile-cart indoor space 3D visual reconstruction system. In addition to the road measurement system's ability to measure the movement of the cart, the new system measures its own rotation so that indoor corners can be detected. The system is human-operated and is equipped with a laser scanner, a CCD camera, an omnidirectional camera and a computer. A Microsoft Kinect sensor could be used to acquire a dense set of 3D points [11, 12, 13, 14], but it was not adopted here because the area it can capture is small and the point cloud data it generates must be registered frame by frame while the sensor is moving. The three-dimensional indoor space is reconstructed by considering the movement and direction of the mobile cart. The system enables us to measure the target indoor space with high accuracy without the use of GPS. In addition, with the proposed localization method the system works in any given environment, without requiring the assumption of vertically oriented planes made in [3]. Experimental results show the feasibility of the proposed system.

2. MOBILE CART INDOOR SPACE 3D VISUAL RECONSTRUCTION SYSTEM

2.1 System Configuration
Figure 1 shows the CAD model of the mobile cart. The cart is composed of a computer, a laser scanner, an omnidirectional camera, a tilt sensor and a CCD camera that detects the movement of the system. The target space is measured three-dimensionally by rolling the cart through the indoor space.
Figure 1. CAD model of the mobile cart used to collect data (labeled components: omnidirectional camera, tilt sensor, laser scanner, PC, localization camera, battery)

2.2 System operation and reconstruction process
The measurement (reconstruction) is achieved by arranging the cross-sectional shapes of the indoor space perpendicular to the changing travel direction of the cart. The arrangement of the cross sections is executed by considering the direction and the magnitude of the movement of the cart, which are detected by the CCD camera attached at the lower position of the cart: the optical flow of the texture of the path surface is analyzed to calculate the movement. With this method, the movement of the cart can be detected accurately regardless of the indoor space configuration.
The omnidirectional camera is attached near the laser scanner and its view is aligned in such a way that it includes the scan area of the laser scanner. In this manner, 3D point cloud data are obtained with color information allocated to each point.
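
To make the arrangement step concrete, the sketch below places one laser cross-section into the global frame given a cart pose estimated from the optical flow. It is a minimal illustration only, assuming a scan plane perpendicular to the cart heading; the function name, the frame conventions and the profile layout are our own and not part of the actual system code.

```python
import numpy as np

def place_cross_section(profile, pose):
    """Place one scanner cross-section into the global frame.

    profile : (N, 2) array of (lateral, height) points measured in the
              cart frame; the scan plane is assumed perpendicular to
              the cart's travel direction.
    pose    : (x, y, theta) of the cart in the global frame.
    Returns an (N, 3) array of global (X, Y, Z) points.
    """
    x, y, theta = pose
    lateral, height = profile[:, 0], profile[:, 1]
    # Rotate the lateral offset of each profile point by the cart
    # heading, then translate by the cart position.
    gx = x - lateral * np.sin(theta)
    gy = y + lateral * np.cos(theta)
    return np.column_stack([gx, gy, height])
```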

3. IMAGE PROCESSING FOR 3D VISUAL RECONSTRUCTION

3.1 Calibration
The relationship between the camera coordinate system (u, v) and the global coordinate system (x, y, z) is given by the following equation, where the $h_{ij}$ form a transformation matrix as described in [15], $\lambda$ is a scale factor and $h_{34}$ is normalized to 1:

$$
\lambda \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} =
\begin{pmatrix}
h_{11} & h_{12} & h_{13} & h_{14} \\
h_{21} & h_{22} & h_{23} & h_{24} \\
h_{31} & h_{32} & h_{33} & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
\qquad (1)
$$

This equation is transformed as follows:

$$
\begin{aligned}
u &= h_{11}x + h_{12}y + h_{13}z + h_{14} - h_{31}ux - h_{32}uy - h_{33}uz \\
v &= h_{21}x + h_{22}y + h_{23}z + h_{24} - h_{31}vx - h_{32}vy - h_{33}vz
\end{aligned}
\qquad (2)
$$
A scale board is set as shown in Figure 2 and the CCD camera records the scale on the board. The laser scanner, which is set on the upper position of the cart, detects the z position of the scale board. More than six non-coplanar points are selected by successively changing the z position of the scale board. The camera coordinates (u, v) are read with the mouse on the computer, the global coordinates (x, y) are read from the scale on the board, and the z coordinate is detected by the laser scanner. The coordinate pairs between the camera coordinate system (u, v) and the global coordinate system (x, y, z) are substituted into (2) in order to determine the calibration parameters $h_{ij}$ of the transformation matrix.
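
As an illustration, the calibration parameters can be estimated by stacking (2) for each correspondence and solving the resulting linear system in the least-squares sense. The following is a minimal sketch in Python with NumPy; the function and variable names are ours and not part of the original system.

```python
import numpy as np

def calibrate(points_global, points_image):
    """Estimate the eleven DLT parameters h11..h33 (h34 = 1) from at
    least six non-coplanar (x, y, z) <-> (u, v) correspondences by
    stacking equation (2) into a linear least-squares system."""
    A, b = [], []
    for (x, y, z), (u, v) in zip(points_global, points_image):
        A.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z])
        b.append(u)
        A.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z])
        b.append(v)
    # Least-squares solution for (h11, h12, h13, h14,
    # h21, h22, h23, h24, h31, h32, h33).
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return h
```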
Following the above-described procedure, the conversion from the camera coordinate system (u, v) to the global coordinates (x, y) at a known z can be written as follows:

$$
\begin{pmatrix}
h_{11} - h_{31}u & h_{12} - h_{32}u \\
h_{21} - h_{31}v & h_{22} - h_{32}v
\end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix} =
\begin{pmatrix}
u - h_{14} + (h_{33}u - h_{13})z \\
v - h_{24} + (h_{33}v - h_{23})z
\end{pmatrix}
\qquad (3)
$$
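
Equation (3) is a 2x2 linear system, so recovering (x, y) for a pixel at a known height z is direct. A small sketch under the same illustrative naming as the calibrate() snippet above:

```python
import numpy as np

def image_to_global(h, u, v, z):
    """Recover the global (x, y) of a pixel (u, v) at known height z
    by solving the 2x2 linear system of equation (3)."""
    h11, h12, h13, h14, h21, h22, h23, h24, h31, h32, h33 = h
    A = np.array([[h11 - h31 * u, h12 - h32 * u],
                  [h21 - h31 * v, h22 - h32 * v]])
    b = np.array([u - h14 + (h33 * u - h13) * z,
                  v - h24 + (h33 * v - h23) * z])
    x, y = np.linalg.solve(A, b)
    return x, y
```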

Figure 2. Camera calibration

3.2 Calculation of the displacement and orientation of the mobile cart
The direction and the magnitude of the cart movement are detected by the CCD camera attached at the lower position of the cart. An image correlation method is used to estimate the optical flow. Small interrogation regions (i.e., templates) are selected in the images recorded by the CCD camera; examples are the rectangular regions in Figure 3. The intensities of the pixels are used to find the matched interrogation region in consecutive images. The correlation function is given by the following equation.
$$
R_{NCC}(a,b) =
\frac{\displaystyle\sum_{v=0}^{H_t-1}\sum_{u=0}^{W_t-1} I(a+u,\,b+v)\,T(u,v)}
{\sqrt{\displaystyle\sum_{v=0}^{H_t-1}\sum_{u=0}^{W_t-1} I(a+u,\,b+v)^2}\;
\sqrt{\displaystyle\sum_{v=0}^{H_t-1}\sum_{u=0}^{W_t-1} T(u,v)^2}}
\qquad (4)
$$
$H_t$ and $W_t$ in (4) are the height and width of the interrogation region, and (u, v) are image coordinates within the original interrogation region. I is the intensity of the pixels in the input image and T is the intensity of the pixel at (u, v) in the original interrogation region. $R_{NCC}$ is the normalized correlation value and (a, b) is the interrogation region displacement vector. The (a, b) that maximizes $R_{NCC}$ is taken as the displacement of the mobile cart. The vectors in Figure 3 are examples of optical flow vectors in one interval; the $R_{NCC}$ correlation value is used to match the interrogation region in consecutive images.
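
The correlation of (4) is plain normalized cross-correlation, which OpenCV exposes as the cv2.TM_CCORR_NORMED method. A minimal tracking sketch, assuming consecutive grayscale frames as NumPy arrays; the function name and the region geometry are illustrative, not the system's actual code:

```python
import cv2

def template_displacement(prev_img, next_img, top_left, size):
    """Track one interrogation region between consecutive frames using
    the normalized cross-correlation of equation (4)."""
    u, v = top_left          # top-left corner of the region in prev_img
    w, h = size              # (W_t, H_t) of the interrogation region
    template = prev_img[v:v + h, u:u + w]
    # cv2.TM_CCORR_NORMED evaluates the normalized correlation of (4)
    # at every candidate position in next_img.
    response = cv2.matchTemplate(next_img, template, cv2.TM_CCORR_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(response)
    a, b = max_loc[0] - u, max_loc[1] - v   # displacement vector (a, b)
    return a, b
```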

Figure 3. Detection of the movement (the laser scanning line is indicated in the image)

Figure 4. The movement and direction detection

In order to calculate the direction and the movement, two image points are utilized for each interrogation region: the pre-movement point $(u_i, v_i)$ and the post-movement point $(u'_i, v'_i)$ obtained by the matched interrogation region (i.e., template matching). Equation (3) then gives the global coordinates corresponding to those camera coordinates. Figure 4 shows the movement of the center coordinates of the interrogation regions in consecutive images captured before and after the mobile cart movement.
Expression (5) is the rotation and translation that carries a pre-movement point $(x_i, y_i)$ to its post-movement point $(x'_i, y'_i)$:

$$
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix}
\cos\theta & -\sin\theta & \Delta x \\
\sin\theta & \cos\theta & \Delta y \\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
\qquad (5)
$$

Taking $c = \cos\theta$ and $s = \sin\theta$, (5) is deduced into (6):

$$
\begin{aligned}
x' &= cx - sy + \Delta x \\
y' &= sx + cy + \Delta y
\end{aligned}
\qquad (6)
$$

By deducing $c$, $s$, $\Delta x$ and $\Delta y$ from (6) over all $n$ matched point pairs, we get (7); $(c, s)$ and $(\Delta x, \Delta y)$ represent the rotation and the translation, respectively:

$$
\begin{pmatrix} c \\ s \\ \Delta x \\ \Delta y \end{pmatrix} =
\begin{pmatrix}
x_1 & -y_1 & 1 & 0 \\
y_1 & x_1 & 0 & 1 \\
\vdots & \vdots & \vdots & \vdots \\
x_n & -y_n & 1 & 0 \\
y_n & x_n & 0 & 1
\end{pmatrix}^{+}
\begin{pmatrix} x'_1 \\ y'_1 \\ \vdots \\ x'_n \\ y'_n \end{pmatrix}
\qquad (7)
$$

where $(\cdot)^{+}$ denotes the pseudo-inverse, giving the least-squares solution.

The direction and the magnitude of the cart movement are calculated using equation (7) and the optical flow captured by the CCD camera attached at the lower position of the cart.
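
A minimal sketch of this least-squares motion estimate, under the same illustrative naming as the earlier snippets:

```python
import numpy as np

def estimate_motion(pts_before, pts_after):
    """Fit c, s, dx, dy of equation (7) from matched global point
    pairs (x_i, y_i) -> (x'_i, y'_i) in the least-squares sense."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts_before, pts_after):
        A.append([x, -y, 1.0, 0.0]); b.append(xp)
        A.append([y,  x, 0.0, 1.0]); b.append(yp)
    # np.linalg.lstsq applies the pseudo-inverse of equation (7).
    (c, s, dx, dy), *_ = np.linalg.lstsq(np.asarray(A, float),
                                         np.asarray(b, float), rcond=None)
    theta = np.arctan2(s, c)   # cart rotation over the interval
    return theta, dx, dy
```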

4. EXPERIMENTAL RESULTS

4.1 Mobile cart direction detection accuracy


Figure 5 shows the experimental setup used to measure the accuracy of the direction detection of the mobile cart. Experiments were carried out by rotating the laser scanner and the CCD camera with a precision rotary stage. Figure 6 shows the results of the experiment.

Figure 5. Cart orientation detection setup (laser scanner and CCD camera mounted on a precision rotary stage)

Figure 6. Orientation detection results

Mobile cart orientation detection errors increase when the rotational speed is above 0.4 rad/s or below 0.2 rad/s. Therefore, for accurate orientation detection, the rotational speed of the mobile cart must be kept between 0.2 and 0.4 rad/s.
4.2 3D Visual Reconstruction of the indoor space
A test space was used to check the feasibility of the system, as shown in Figure 7. The cart was rolled across the floor surface as seen in the figure. The reconstructed space is shown in Figure 8. The maximum distance of the measured data is about 10 m and about one million points were measured in this trial. Once the measurement process is complete, 3D visualization on the computer allows the user to view the space from any perspective.

Figure 7. Indoor space

Figure 8. 3D visualization of captured data

5. CONCLUSION
In this study, a mobile cart indoor space 3D visual reconstruction system that does not use GPS was developed. The system is capable of functioning in any given environment, without requiring the assumption that the environment consists only of vertically oriented planes. The laser scanner measures the cross-sectional shape of the indoor surface, and the image data recorded by the omnidirectional camera are allocated to the three-dimensional shape data captured by the laser scanner. The movement and direction of the mobile cart relative to the indoor surface are detected by analyzing the optical flow in consecutive images captured by a CCD camera. The three-dimensional indoor space is reconstructed and visualized on the computer with accurate color and detail. The proposed measurement system can therefore be used under varying conditions and is suitable for applications such as emergency rescue response.

REFERENCES
[1] Vosselman, G., Design of an indoor mapping system using three 2D laser scanners and 6 DOF SLAM, ISPRS Annals, Vol. II-3, 2014, pp. 173-179, Zurich.
[2] Yogesh Rajendra, R. D., Interior renovation of an urban building using 3D terrestrial laser, International Journal of Advanced Research in Computer Science and Software Engineering, 2013, pp. 533-538.
[3] Nicholas Corso, A. Z., Indoor localization algorithms for an ambulatory human operated 3D mobile mapping system, Remote Sensing, 2013, pp. 6611-6646.
[4] Feng, Y., Ren, J.C., Jiang, J.M., Halvey, M., Jose, J.M., Effective venue image retrieval using robust feature extraction and model constrained matching for mobile robot localization, MVA(23), No. 5, September 2012, pp. 1011-1027.
[5] Bacca Cortes, B., Cufi Sole, X., Salvi, J., Vertical edge-based mapping using range-augmented omnidirectional vision sensor, IET-CV(7), No. 2, 2013, pp. xx-yy.
[6] Sareen, K.K., Knopf, G.K., Canas, R., Hierarchical data clustering approach for segmenting colored three-dimensional point clouds of building interiors, OptEng(50), No. 7, 2011, p. 077003.
[7] Pintore, G., Gobbetti, E., Effective mobile mapping of multi-room indoor structures, VC(30), No. 6-8, June 2014, pp. 707-716.
[8] Keller, F., Sternberg, H., Multi-sensor platform for indoor mobile mapping: system calibration and using a total station for indoor applications, RS(5), No. 11, 2013, pp. 5805-5824.
[9] Bigun, J., Granlund, G., Wiklund, J., Multidimensional orientation estimation with applications to texture analysis and optical flow, PAMI(13), 1991, pp. 775-790.
[10] Kawasue, K., Futami, R., Kobayashi, H., Three-dimensional visual reconstruction of path shape using a cart with a laser scanner, Proc. of VISAPP, 2014, pp. 600-604.
[11] Mahoney, J., Testing the goods: Xbox Kinect, 2010.
[12] Bigdelou, A., Benz, T., Schwarz, L., Navab, N., Simultaneous categorical and spatio-temporal 3D gestures using Kinect, Proc. 3D User Interfaces, 2012, pp. 53-60.
[13] Mutto, C., Zanuttigh, P., Cortelazzo, G., Time-of-Flight Cameras and Microsoft Kinect, Springer, 2012.
[14] Zhang, Z., Microsoft Kinect sensor and its effect, IEEE MultiMedia, Vol. 19, 2012, pp. 4-10.
[15] Wei, G., Ma, S., Implicit and explicit camera calibration: theory and experiments, PAMI(16), 1994, pp. 469-480.
[16] Park, S., Chung, M., 3D world modeling using 3D laser scanner and omni-direction camera, FCV13, 2013, pp. 285-288.

AUTHORS' BACKGROUND

Name                           Title           Research Field
Prince Dukundane Gashongore    Master student  Computer Vision
Kikuhito Kawasue               Full Professor  Computer Vision
Kumiko Yoshida                 PhD candidate   Computer Vision
Ryota Aoki                     Master student  Computer Vision
