
P4-14

Vision-based Virtual Touch Screen Interface


Eunjin Koh1,2, Jongho Won2, and Changseok Bae2
1University of Science and Technology, Daejeon, South Korea
2Digital Home Research Division, Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
Abstract-To use augmented reality in portable environments, a human-computer interaction interface that satisfies mobility and convenience as well as accuracy is required. In this paper, we propose a vision-based virtual touch screen interface that meets these requirements. The virtual touch screen is implemented on a see-through head-mounted display equipped with stereo cameras, using an adaptive gesture recognition algorithm. Since it provides a natural way to interact with computers through vision-based select, click, and drag&drop operations, it can be applied to various augmented reality environments without restricting mobility or causing inconvenience.

I. INTRODUCTION
It is inarguable that augmented reality (AR) enriches the real world by adding virtual objects, and its application areas are growing rapidly. This has led to intensive research in the area of human-computer interaction (HCI). However, existing interfaces have intrinsic limitations for ubiquitous service environments, which require convenience and mobility. Examples include environments where AR can be applied only in restricted places, such as a particular table, the requirement of too many markers, and the need for markers inconveniently attached to the human body [1]-[3].
Fine interaction to control mobile equipment also requires convenience, accuracy, and naturalness. Although much research has recently been devoted to fulfilling such requirements, the resulting interfaces are still not satisfactory for mobile environments. For example, [4] and [5] use mechanical devices that directly measure the motions and spatial positions of a hand. They require wearing glove-type devices that are directly wire-connected, which constrains the naturalness and comfort of the user interacting with a computer.
To resolve these problems, we propose a virtual touch screen interface that uses a see-through head-mounted display (HMD) equipped with stereo cameras and an adaptive gesture recognition algorithm. Since it provides a natural way to interact with machines or computers through vision-based select, click, and drag&drop operations on the virtual touch screen, it ensures mobility and convenience. It allows users to control portable devices such as personal digital assistants (PDAs) or ultra mobile personal computers (UMPCs) with one hand during their daily life.

This work was supported by the IT R&D program of MIC/IITA [2006-S032-02, Development of an Intelligent Service Technology based on the Personal Life Log].


II. VISION-BASED VIRTUAL TOUCH SCREEN INTERFACE


To ensure the mobility of interfaces with AR, we adopt a see-through HMD equipped with stereo cameras to display the virtual touch screen, as shown in Fig. 1. Camera A in Fig. 1 captures the real patch image attached to the front of the camera and sends it to ARToolkit [6]. ARToolkit recognizes the patch and calculates its relative coordinates. The coordinates of the virtual touch screen are obtained by calculating coordinate transformation matrices, and the virtual touch screen is translucently displayed in the air at an adjustable distance, as shown in Fig. 1.
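
As an illustration of this patch-recognition step, the following Python sketch uses OpenCV's ArUco marker module as a stand-in for ARToolkit to estimate the relative pose of the patch seen by camera A. The camera intrinsics, marker size, and marker dictionary are assumptions for illustration, not values from the paper.

import cv2
import numpy as np

# Hedged sketch of the patch-recognition step. The paper uses ARToolkit;
# here OpenCV's aruco contrib module (pre-4.7 API) is used as a stand-in.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])   # assumed intrinsics of camera A
dist_coeffs = np.zeros(5)                     # assume negligible lens distortion
marker_side = 0.08                            # assumed patch size in metres

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

cap = cv2.VideoCapture(0)                     # camera A
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is not None:
        # Relative pose of the patch with respect to camera A (rotation and
        # translation vectors), analogous to ARToolkit's marker transform.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, marker_side, camera_matrix, dist_coeffs)
        print("patch position relative to camera A:", tvecs[0].ravel())
cap.release()
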
Fig. 1. A see-through HMD equipped with stereo cameras (camera A and camera B) and the virtual touch screen display algorithm

The concept of calculating the coordinate transformation matrices is shown in Fig. 2. The black cube described in [6] is used as the real patch image, as shown in Fig. 2. Let x, y, and z be the points of the real patch image, and x', y', and z' be the destination points of the virtual touch screen in the relative coordinate system. After recognizing the position of the real patch image, the position of the virtual touch screen is calculated as follows:
[x', y', z', 1] = [x, y, z, 1] Ms Mr Mt

where Ms, Mr, and Mt are the scale, rotation, and translation matrices, respectively. Even though the matrices change with the relative positions among the HMD, camera A, and the real patch image, users do not need to pay attention to them because the matrices are determined at arrangement time.
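A minimal numpy sketch of this placement step is given below, using the paper's row-vector form [x', y', z', 1] = [x, y, z, 1] Ms Mr Mt. The concrete patch size, scale factors, rotation angle, and offset are illustrative assumptions.

import numpy as np

def scale_matrix(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def rotation_matrix_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[0, 0], R[0, 2] = c, -s      # row-vector convention: p' = p @ R
    R[2, 0], R[2, 2] = s, c
    return R

def translation_matrix(tx, ty, tz):
    T = np.eye(4)
    T[3, 0:3] = [tx, ty, tz]      # translation sits in the last row for row vectors
    return T

# Corners of the real patch image in its own coordinate frame (metres, assumed).
patch_corners = np.array([
    [-0.04, -0.04, 0.0, 1.0],
    [ 0.04, -0.04, 0.0, 1.0],
    [ 0.04,  0.04, 0.0, 1.0],
    [-0.04,  0.04, 0.0, 1.0],
])

# Illustrative transformation: enlarge the patch to screen size, keep its
# orientation, and push the virtual screen 0.5 m away along the z axis.
Ms = scale_matrix(8.0, 6.0, 1.0)
Mr = rotation_matrix_y(0.0)
Mt = translation_matrix(0.0, 0.0, 0.5)

screen_corners = patch_corners @ Ms @ Mr @ Mt
print(screen_corners[:, :3])      # corner positions of the virtual touch screen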

Fig. 2. Concept of calculating coordinate transformation matrices (the real patch image with axes x, y, z and the virtual touch screen with axes x', y', z')

To detect and track a user's hand, we build hand-shape and hand-color models [7] before the virtual touch screen interface is used. This offers a fully natural interface because it does not require any cumbersome devices.
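The sketch below illustrates colour-based hand tracking in the spirit of the hand-colour model of [7], with a simple HSV skin-colour range and OpenCV's meanShift standing in for the GMM classifier and tracker. All thresholds and the initial search window are assumptions.

import cv2
import numpy as np

SKIN_LOW = np.array([0, 40, 60], dtype=np.uint8)      # assumed skin-colour bounds
SKIN_HIGH = np.array([25, 180, 255], dtype=np.uint8)

def skin_mask(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)

cap = cv2.VideoCapture(0)
track_window = (200, 150, 80, 80)                      # assumed initial hand region
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = skin_mask(frame)
    # Shift the search window towards the densest skin-coloured region.
    _, track_window = cv2.meanShift(mask, track_window, term_crit)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("hand tracking sketch", frame)
    if cv2.waitKey(1) & 0xFF == 27:                    # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
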
The concept and an example of interaction with the virtual touch screen are depicted in Fig. 3. The user controls mobile equipment by touching the virtual screen, as shown in Fig. 3(a). To be aware that the user has virtually touched the screen, the system has to know the distance between the user's hand and the HMD, because the proposed interface already knows the distance between the virtual touch screen and the HMD through the matrices. The stereo vision technique enables the proposed interface to calculate the disparities between the stereo images obtained from the two cameras and to assess the distance (depth) from those disparities [8].
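For illustration, the following sketch shows the standard depth-from-disparity relation, depth = f * B / d, that such a stereo rig relies on. The focal length and baseline values are placeholders, not measurements of the actual HMD.

# Minimal sketch of depth recovery from stereo disparity, as used to judge
# whether the fingertip has reached the plane of the virtual touch screen.
FOCAL_LENGTH_PX = 700.0    # assumed focal length of the HMD cameras, pixels
BASELINE_M = 0.06          # assumed distance between camera A and camera B, metres

def depth_from_disparity(disparity_px: float) -> float:
    """Return the distance (m) of a point with the given stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# Example: a fingertip whose image positions differ by 84 px between the two
# cameras lies roughly 0.5 m in front of the HMD.
print(depth_from_disparity(84.0))   # -> 0.5
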
An example of interaction is presented in Fig. 3(b). The real patch image is in the top-right area of the view. Because the patch image is attached to the camera and the position of the virtual touch screen is tied to the position of the real patch image, the virtual touch screen is displayed at the same position in the view regardless of where the user looks. Therefore, the interface ensures the mobility and naturalness of the HCI, and the user can control mobile equipment while walking or doing other work.

III. IMPLEMENTATION

We use a UMPC, a VAIO VGN-UX50, for the implementation. This device has a 1.06 GHz Intel Core Solo CPU and 512 MB of main memory.
The implemented system detects and tracks the user's hand position and recognizes the index fingertip. It offers the select, click, and drag&drop operations. If the distance between the fingertip and the HMD equals the distance between the virtual touch screen and the HMD for a short time (shorter than 1 second), it is regarded as the select operation. If the selected location is on an icon, the UMPC executes the icon, which constitutes the click operation. If the select operation lasts for a long time (longer than 1 second), it is regarded as the drag&drop operation, and the selected icon follows the movement of the hand in real time. These operations cover most mouse operations, and the scene of the virtual touch screen in Fig. 3(b) is a Windows XP desktop image, which is familiar to consumers.
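A hedged sketch of this select / click / drag&drop decision logic is given below. The depth tolerance and the exact timing rule are assumptions consistent with the one-second threshold described above, not the authors' implementation.

DEPTH_TOLERANCE_M = 0.02        # assumed tolerance between fingertip and screen depth
DRAG_THRESHOLD_S = 1.0          # contacts held longer than this become drag&drop

class TouchClassifier:
    def __init__(self):
        self.contact_start = None

    def update(self, fingertip_depth_m, screen_depth_m, t_s):
        """Feed one tracked frame; returns 'select', 'drag', 'release' or None."""
        touching = abs(fingertip_depth_m - screen_depth_m) < DEPTH_TOLERANCE_M
        if touching:
            if self.contact_start is None:
                self.contact_start = t_s
                return "select"                  # icon under the fingertip is clicked
            if t_s - self.contact_start >= DRAG_THRESHOLD_S:
                return "drag"                    # selected icon follows the hand
            return None
        if self.contact_start is not None:
            self.contact_start = None
            return "release"                     # drop the icon, end the gesture
        return None

# Example frame sequence: a brief touch, then a long touch that becomes a drag.
clf = TouchClassifier()
for t, depth in [(0.0, 0.80), (0.1, 0.50), (0.3, 0.80), (1.0, 0.50), (2.2, 0.50)]:
    print(t, clf.update(depth, 0.50, t))
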
IV. CONCLUSION AND FUTURE WORK
This paper has proposed a virtual touch screen interface that uses a see-through HMD equipped with stereo cameras to interact with machines or computers. The two most important contributions of the proposed vision-based virtual touch screen interface are as follows.
First, the interface offers mobility. Most AR applications have an intrinsic mobility limitation because they usually rely on prepared AR environments such as a particular table, many marker patches, or cumbersome markers attached to the user's body. In contrast, since the proposed interface needs only the real patch attached to the front of the camera, users do not require any additional environment.
Second, the interface offers a natural and comfortable HCI because it does not use any awkward devices for interaction. Because the system recognizes the color, shape, and movement of the hand, users do not need any additional devices.
In our future work, we plan to employ differential-image and optical-flow techniques to improve the gesture recognition accuracy, especially against complex backgrounds. We further intend to realize a vision-based interface that allows users to use both hands without any real patch images. This will naturally reinforce the mobility and convenience of our existing interface.
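
As a rough illustration of the two techniques mentioned for this future work, the sketch below computes a differential image and dense Farneback optical flow between two consecutive frames with OpenCV; the parameter values are assumptions.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Differential image: pixels that changed between consecutive frames.
diff = cv2.absdiff(gray, prev_gray)
_, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

# Dense optical flow (Farneback): per-pixel motion vectors between the frames.
flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])

print("moving pixels:", int(np.count_nonzero(motion_mask)))
print("mean flow magnitude:", float(magnitude.mean()))
cap.release()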

Fig. 3. Concept and example of interaction with the virtual touch screen: (a) concept; (b) example showing the real patch image and the virtual touch screen

REFERENCES
[1] H. Kato, M. Billinghurst, I. Poupyrev, K. Imamoto, and K. Tachibana, "Virtual object manipulation on a table-top AR environment," IEEE and ACM Int'l Symposium on Augmented Reality (ISAR 2000), pp. 111-119, 2000.
[2] I. Poupyrev, D. S. Tan, M. Billinghurst, H. Kato, H. Regenbrecht, and N. Tetsutani, "Developing a generic augmented-reality interface," Computer, vol. 35, pp. 44-50, 2002.
[3] J. M. S. Dias, P. Santos, and P. Nande, "In your hand computing: tangible interfaces for mixed reality," IEEE International Conference on Augmented Reality Toolkit Workshop, pp. 29-31, 2003.


[4] S. S. Fels and G. E. Hinton, "Glove-Talk: a neural network interface between a data-glove and a speech synthesizer," IEEE Trans. Neural Networks, pp. 2-8, 1993.
[5] S. Dominguez, T. Keaton, and A. Sayed, "A vision-based wearable system to support 'web-on-the-web' applications," 6th Australasian Conf. on DICTA 2002, pp. 92-97, 2002.
[6] I. Poupyrev, H. Kato, and M. Billinghurst, "ARToolkit user manual, version 2.33," Human Interface Technology Lab, University of Washington, 2000.
[7] T. Kurata, T. Okuma, M. Kourogi, and K. Sakaue, "The hand mouse: GMM hand-color classification and mean shift tracking," 2nd Int'l Workshop on RATFG-RTS 2001, pp. 119-124, 2001.
[8] M. Xie, C. M. Lee, and Z. Q. Li, "Depth assessment by using qualitative stereo-vision," IEEE International Conference on Intelligent Processing Systems, pp. 1446-1449, 1997.
