Project Report
Group Members
Daniel P. Dye (5322792797)
Thiraphong Chawla (5322793555)
Advisors:
Dr. Cholwich Nattee
Dr. Nirattaya Khamsemanan
Table of Contents
1. Introduction
2. Background
   2.1. Oculus Rift
   2.2. Unity
   2.3. Google Street View
   2.4. Processing
3. Motivation
4. Objectives
5. Outputs and Expected Benefits
   5.1. Outputs
   5.2. Benefits
6. Literature Review
   6.1. Google Maps with Oculus Rift and Leap Motion
   6.2. Oculus Street View
   6.3. Oculus Rift and NASA's virtual reality of Mars
7. Methodology
   7.1. Approach
      7.1.1. Overview
      7.1.2. Obstacles
   7.2. Tools and Techniques
      7.2.1. Tools
      7.2.2. Techniques
   7.3. Technical Specifications
      7.3.1. Oculus Rift
      7.3.2. Google Street View
      7.3.3. Google Geocoding
8. Project Schedule
9. Project Progress
   9.1. Research and Understanding
   9.2. Obstacles and Solutions
   9.3. Completion Steps
10. Technical Description
   10.1. Overview
   10.2. Implementation
      10.2.1. Unity
School of ICT, SIIT
Statement of Contribution
By submitting this document, all students in the group agree that their contribution in the
project so far, including the preparation of this document, is as follows:
Daniel P. Dye
(5322792797) 50%
Thiraphong Chawla
(5322793555) 50%
1. Introduction
We have all dreamt of visiting places around the world, but most of the time this is not
possible for reasons such as family commitments or money. We have therefore decided to
make it possible to visit these places in a virtual reality with the help of a device known as
the Oculus Rift. This virtual reality will be a faithful image of the physical reality in which
we live, achieved by using services provided by Google. The project we planned was to
create a software system that uses only the Oculus Rift to manoeuvre around Google Street
View's virtual world.
Google provides services known as Google Maps and Google Earth, which map the streets
and locations of places all around the world. These services have been integrated with a
feature known as Google Street View [1], which shows the panoramic view of a location.
The panoramic view allows users to navigate 360 degrees horizontally and 290 degrees
vertically, and is produced by combining many panoramic images of the street [2].
The Oculus Rift is a virtual reality headset that allows your head movements to interact with
video games [3]. The Oculus Rift provides rotational input on three axes: x, y, and z. It also
provides an almost human-like field of view on a high-resolution display seen through two
lenses in the headset. The extremely low latency of the device synchronises input and output
at rates that appear close to actual head movement, with which we naturally direct our field
of vision.
We first decided to use the Unity Engine to create an application for this project because the
Oculus Rift has a software development kit (SDK) for the Unity Engine, but due to the
limitations of textures in the Unity Engine we decided to migrate our project to Processing
[4], which has comparatively far fewer limitations. The limitations of the Unity Engine are
described in further detail in the Progress (Section 9) and Technical Description (Section 10)
sections.
We planned to create an application that allows users to look around as they would with a
mouse; to make this convenient, the system uses the Oculus Rift to look around and move
with simple, natural head movements. Since head movement is synchronised with the
display, the Oculus Rift is very effective for viewing the panoramic images of Google Street
View. The main focus of this project is to allow users not only to look around but also to
move around within Google Street View using only the Oculus Rift. These head movements
will be further researched and experimented with to find the most comfortable way to move.
We have also decided to make the system able to overlay animated images at certain
geographical locations, making this not only a new idea but also an improvement over
existing software. These dynamic overlays will allow animated objects to be displayed at
pre-specified locations. An overlay may display explanations and descriptions of a location,
short animation loops of cultural activities, or details about the culture and traditions
performed at these places. These dynamic overlays will make the system more interesting to
view than static images alone.
When complete, this project will provide a portable and far cheaper way for people to see the
sights of the world. People with disabilities will be able to use the system to visit places they
never thought would be possible.
Section 2 provides the background of our topic and related technologies. Section 3 explains
our motivation towards working on this project. Section 4 states our aims and objectives of
this project. Section 5 lists the items to be implemented for our system and the final
outcome, and explains in greater detail whom this project will benefit.
Section 6 shows research and projects that have already been/are being done that influence
our project. Section 7 enumerates the steps, tools, and technical specifications used to
complete our objectives in order to achieve our aims. Section 8 illustrates the time schedule.
Section 9 presents our progress towards the completion of this project. Section 10 describes
the technical aspects of this project. Section 11 indexes the references.
2. Background
Technology has continuously improved and is now a part of our daily life. The use of
technology has substantially increased in the past decade. The development of technology has
also explicitly increased in that period of time, making most people from all around the world
dependent on it. The development of virtual realities has been expanded to a certain extent
that it can now be used do many things. The concept of virtual reality is known to be one of
the new most exciting computer technologies. Virtual reality can be described as a computer
simulated environment used for the displaying of graphical images or models of the real or
imaginary world [5]. It is often associated with the term three dimensional environment. It is
generally displayed on computer screens or stereoscopic displays.
2.1. Oculus Rift
The virtual reality device that we have been using in this project is the Oculus Rift. The
Oculus Rift team has been developing their device with many of the great minds of the
gaming industry, including David Helgason, CEO of Unity, and Gabe Newell, president and
owner of Valve.
We are using the Oculus Rift as our virtual reality interface because of its functionality and
its available specifications. The Oculus Rift can be used to view the panoramic images
retrieved from Google Street View with ease. Although the Oculus Rift was designed with a
focus on video gaming, it is still a very effective device for this purpose.
Currently the Oculus Rift is available for development purposes, but they hope to release their
consumer version with more impressive specifications. They have developed an SDK to
provide easy integration and development. Their SDK currently works with 2 game
development engines: Unity and Unreal Engine.
Although we had decided to use the Unity engine to develop our system, we changed our
minds due to the engine's limitations. We have decided to try out other development
environments, such as Processing.
2.2. Unity
Figure 2: Unity3d
Unity is a 3D game development engine [8]. It provides exceptionally powerful rendering
with multiple tools to assist with development. It allows you to publish your creation on
multiple platforms without any hassle. It provides an easy method towards controls and game
rules using scripts embedded into Game Objects. It also has a vast community and plentiful
Assets for use in their Asset Store.
We chose Unity over Unreal Engine for development after a brief period of trial and error
with Unity. We have learned some of the basic controls and functions of the engine and
intend to expand our knowledge to develop our system.
The system that we will be developing is computer software for displaying images on the
Oculus Rift device; therefore the Unity engine will be a very powerful tool to
use for the development of this project. It is designed to be used with a three dimensional
environment, so it will be very convenient for us to use this engine to create a virtual reality
from the Google Street View. The software development kit, provided by the Oculus Rift
development team to be used with the Unity engine, makes it possible to configure the display
to be viewed on the Oculus Rift device.
Since we have decided to implement the animated images as an overlay on the panoramic
images of Google Street View, the Unity engine makes this possible: it is designed
specifically for game development, and a system that supports the display of animated
overlays can be developed with it.
2.4. Processing
Figure 4: Processing
Processing is a programming language and integrated development environment which was
initially designed to help teach programming through visual context but it was later developed
to a powerful development tool. It was design to help non-programmers get started with
programming. It builds on Java language but with much simpler syntax. Processing provides a
sketchbook that is derived from the PApplet, a java class that implements most of the
Processings features, for organizing projects. Processing is open source and works across
multiple platforms such as GNU/Linux, Windows, and Mac OS X.
3. Motivation
The motivation to work on this project is to work with the virtual reality environment. The
capabilities and extents of working with the Oculus are huge. We decided to work with the
Google Street View because we wanted to create a virtual reality of our real physical world
without having to 3D sculpture everything.
This software will also allow people to visit places without the difficulties of travelling with
their conditions. This is a step towards enabling them with alternative sightseeing plans. It is a
positive advancement towards cheaper and easier approaches of seeing the world.
We believe this project is feasible for us, as we have a strong programming background and
have dealt with many technical obstacles. We are both Computer Science students
surrounded by many great-minded professors. We also have experience with other
controllers, such as the Leap Motion [9] and the Microsoft Kinect [10]. The concept of using
a device to control a virtual reality that is a replica of our physical world is fascinating. It is
quite intriguing that we can use this software to explore the world we live in with just simple
head movements.
4. Objectives
The aim of this project is to create a system that will allow us to view our physical reality in
the form of a virtual reality using the Oculus Rift. The virtual reality will be an integration of
Google Street View and animated images.
In order to achieve this aim, there are 3 objectives:
1. Create a system for displaying a virtual reality on Oculus Rift using Oculus Rift SDK.
2. Integrate Oculus Rift with Google Street View on the system using the Google Street
View API.
3. Implement the dynamic overlays for animated images.
5.2. Benefits
The benefit of this project is that it will bring us one step closer to what many people have
been dreaming about: the experience of life in virtual reality.
Many people want to go to places around the world, but actually visiting those places can be
quite expensive, so the main benefit of this system is that it allows users to experience them
visually as they are in real life. Visualizing a place can make a person feel its atmosphere.
This software, with its integration of the Oculus Rift and Google Street View, will allow us
to see the world with our very own eyes without having to physically go to those places.
As a short-term benefit, regarding the development of software for virtual reality, this
program will be useful to people wanting to develop software that uses no hardware other
than the Oculus Rift. It will encourage people to enter the world of virtual reality. The virtual
reality provided by our project is not a simple one but a more advanced one, with dynamic
overlays that could help other development projects trying to achieve overlays in a 3D
environment.
As a long-term benefit, this software may form the basis for integrating virtual reality with
our physical reality. As seen in many movies, there are technologies that allow a person to
look through other people's eyes; such technology could become possible with the help of
this program, which could receive transmitted information from a source such as a server or
a person's eye. It could also be integrated with social networking sites to allow people to
upload videos of certain events at certain locations and share them with everyone around the
world.
6. Literature Review
We have come across various documents, videos, and websites related to this topic. Here are
some of the projects that have either been completed or are in progress:
6.1. Google Maps with Oculus Rift and Leap Motion
Figure 5: First photo of Google Maps with Oculus Rift and Leap Motion
Google previewed its integration of the Oculus Rift and Leap Motion with its own new
Google Maps during the Google I/O 2013 event [11], suggesting that it will support the
Oculus Rift, although it also requires the Leap Motion to send navigation input.
The integration of Google Maps with the Oculus Rift and Leap Motion uses Google Chrome
as a mediator. Since Google Maps already has an API for Google Chrome, they used this
along with the Google Chrome APIs for the Oculus Rift and Leap Motion to make it fully
functional.
Although Google has not made it official, the appearance of this particular integration of the
Oculus Rift and Leap Motion with Google Street View suggests the idea of turning reality
into a virtual representation, which may become common in the near future.
The advantages and disadvantages of this particular system are not easily described, as
Google released no information beyond the preview. However, we can point out features
available in our system that this one lacks: for example, using the Oculus Rift as the only
controller. Our system also has an animated overlay layer, which makes for an exciting
integration.
6.2. Oculus Street View
The major limitation of this system is that the head movement of the Oculus Rift can only be
used on the Windows operating system. The server that enables it can also cause delays in
navigation with respect to head movement, a problem that may be caused by internet
connectivity.
The system we are proposing will allow us not only to look around but also to move around.
This can have a major impact on the feeling one gets while navigating and will take the user
experience to a whole new level. This usability is also supported by the pre-cache system
that we will design to avoid the delay problem that occurs in Oculus Street View.
6.3. Oculus Rift and NASA's virtual reality of Mars
Figure 7: Oculus Rift and Virtuix Omni with NASA's virtual reality of Mars
The employees of NASA's Jet Propulsion Laboratory have taken the virtual reality of the
physical world to the next level by not limiting it to the confines of the Earth. They combined
the stereoscopic 360-degree panoramic views of Mars taken by the Curiosity rover, along
with satellite images, with the developer version of the Oculus Rift available on the market
to map the surface terrain of Mars [15].
Initially, they used an Xbox 360 controller to move around while the Oculus was used to
look around. Later, they replaced the Xbox 360 controller with the virtual reality treadmill
Virtuix Omni [16], which made it feel as if you were actually walking or running on Mars.
The limitation is that the system is not available for public use. They have put together a very
interesting system, but it is still like other systems that use additional controllers to move
around. Although the devices used are affordable to some, it would be better if the same
thing could be done using just one single device.
7. Methodology
7.1. Approach
In order to complete such a project, choosing an appropriate approach is very important. The
approach we will be taking is to integrate the Oculus Rift software development kit for the
Unity engine, provided by the Oculus Rift development team, with our system for loading
and displaying images from Google Street View. Our system will also have the ability to
display animated overlays over the Google Street View images.
7.1.1. Overview
We are going to make a system that can display the images from the Google Street View on
the Oculus Rift virtual display device and also add overlays at specific locations.
The project we plan to make requires proper planning and a lot of effort. To make this project
a success, we need to accomplish many tasks. These tasks will be divided amongst members
of the team according to their areas of expertise. Some tasks may also require every member of
the team to collaborate.
Our work process towards development has been divided into the following steps:
1. Understand the workings of Unity Engine, Oculus Rift, and Google Street View.
1.1.
1.2.
1.3.
1.4.
Learn the specifications and integration of the Google Street View API.
3.2. Use a search query to change the current location; mainly used for the starting
location.
8.1.1. Experiment with different display methods and different animation file
types, e.g., .GIF, .MOV, or .AVI.
8.2.
8.2.1. Use the discovered methods to display overlays onto the canvas, above the
Street View base.
9. Revise the system's functionality and improve code where applicable.
9.1. Revise user interface and user experience. Use surveys to determine necessary
changes.
9.2.
9.3. Revise storage methods. Consider compression, bulk loading and other
techniques.
10. Add functionality to add more overlays to the system.
10.1. Create an interface to add more overlays, given the following information:
coordinates, position, size, name, and animation file (possibly with an optional
caption).
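The overlay information listed in step 10.1 can be sketched as a simple data type. This is an illustration only: the class and field names below are our own assumptions, not part of the actual system.

```java
// Illustrative sketch of an overlay entry as described in step 10.1.
// All names here are hypothetical.
class OverlayEntry {
    final double lat, lng;       // geographic coordinates of the overlay
    final float posX, posY;      // position on the panorama canvas
    final float width, height;   // size of the drawn overlay
    final String name;           // display name
    final String animationFile;  // e.g. a path to a .gif
    final String caption;        // optional, may be null

    OverlayEntry(double lat, double lng, float posX, float posY,
                 float width, float height, String name,
                 String animationFile, String caption) {
        // Reject coordinates outside the valid lat/lng ranges.
        if (lat < -90 || lat > 90 || lng < -180 || lng > 180)
            throw new IllegalArgumentException("invalid coordinates");
        this.lat = lat; this.lng = lng;
        this.posX = posX; this.posY = posY;
        this.width = width; this.height = height;
        this.name = name;
        this.animationFile = animationFile;
        this.caption = caption;
    }
}
```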
7.1.2. Obstacles
While working on a large project such as this, we knew that we would face many obstacles.
Most of these obstacles have been overcome by the amount of research and the effort that we
have put in to ensure the success of this project.
One of the major obstacles while we were developing this project with Unity was that it had
many limitations as to what could be done to the game objects that we created as a base for
Google Street View panoramic images. Since Unity focuses more on 3D, it was difficult to
stitch the images we retrieved from Google Street View as textures and display them on the
game objects. After days of research on the topic we realized that this was one of the major
limitations of Unity. Therefore we decided to look for other development environments that
would not have such limitations. We came across Processing, a programming language and
development environment. It has very few limitations and can perform as well as Unity for
the tasks that we need it to perform.
With the use of Processing we could no longer use the SDK provided by the Oculus Rift
development team; therefore we had to find a way to display images on the Oculus Rift and
to retrieve the Oculus Rift's sensor values. Many hours of research led us to discover that, in
order for the image to be displayed on the Oculus Rift with the correct settings, we needed a
shader that distorts the image into separate left and right views for the left and right eyes
respectively. The shader provided by ixd-hof [17] helped make our aim of displaying images
on the Oculus Rift a success.
The next major obstacle was that we no longer had the game objects to use as the base for
the images; therefore we had to create our own way of turning the retrieved images into
panoramic images. To solve this, we decided to write our own code to stitch images, using a
scene as the base so that the images could be combined into a panoramic image.
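The stitching step can be sketched as follows. This is a minimal illustration in plain Java, assuming the retrieved tiles arrive as an equally sized grid; the class and method names are hypothetical, not the project's actual code.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

class TileStitcher {
    // Paste a grid of equally sized Street View tiles into one panorama.
    static BufferedImage stitch(BufferedImage[][] tiles) {
        int rows = tiles.length, cols = tiles[0].length;
        int tw = tiles[0][0].getWidth(), th = tiles[0][0].getHeight();
        BufferedImage pano = new BufferedImage(cols * tw, rows * th,
                                               BufferedImage.TYPE_INT_RGB);
        Graphics2D g = pano.createGraphics();
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                // Each tile is drawn at its column/row offset in the panorama.
                g.drawImage(tiles[r][c], c * tw, r * th, null);
        g.dispose();
        return pano;
    }
}
```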
While working with the Java wrapper called JRift [18], we faced an obstacle regarding the
compilation of the Oculus SDK against the Java Native Interface (JNI) [19]. The problem
was due to the differences between Java on the 32-bit and 64-bit Windows platforms, and it
was resolved by compiling on a Mac OS X platform.
The greatest obstacle we faced when creating this system was finding methods to retrieve
Google Street View's data quickly and efficiently, so that the user can walk around freely
without delays caused by internet data transfer. The other main obstacle is discovering the
most natural head movements to trigger horizontal and vertical movement within the virtual
reality.
During the process of development, we found various other restrictions and difficulties. To
keep the problems we encountered as few as possible, we performed code revisions at
milestones and adjusted our code and designs accordingly.
7.2.2. Techniques
Image Caching: Image caching is done by temporarily saving images on the local drive.
Cached images are retrieved whenever a searched location already has its images on the
local drive, so that the images need not be downloaded each time the user visits the place.
The images are deleted once the session is destroyed, i.e., once the user closes the
application. They may also be deleted when they exceed the set storage limit. We also plan
to save the images of frequently visited places so that the system can load them as quickly as
possible. This method saves both the time and the effort required by the system to load
images from Street View.
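The eviction policy described above (delete cached entries once they exceed a set limit) can be sketched with a least-recently-used map. This is an illustrative sketch only; in the real system the cached values would be image files on the local drive rather than in-memory entries, and the capacity is an assumption.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache sketch: once more than maxEntries are stored, the entry
// used least recently is evicted, mirroring the storage-limit rule.
class PanoCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    PanoCache(int maxEntries) {
        super(16, 0.75f, true); // access-order iteration gives LRU behaviour
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict when over the set limit
    }
}
```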
Predictive Caching: Predictive caching is an algorithm that the system runs while it is idle
and the user is visiting a searched location. The algorithm preloads images into a cache
depending on the user's current heading, so the system can display them as soon as forward
movement is triggered.
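A minimal sketch of the prefetch decision follows, assuming each panorama exposes the compass bearings of its linked neighbours; the names and the bearing convention are assumptions for illustration, not the actual implementation.

```java
// Choose which linked panorama to prefetch: the one whose bearing is
// closest to the user's current heading (all angles in degrees).
class Prefetcher {
    // Smallest absolute difference between two compass angles.
    static double angleDiff(double a, double b) {
        double d = Math.abs(a - b) % 360.0;
        return d > 180.0 ? 360.0 - d : d;
    }

    static int nextToPrefetch(double heading, double[] linkBearings) {
        int best = -1;
        double bestDiff = Double.MAX_VALUE;
        for (int i = 0; i < linkBearings.length; i++) {
            double d = angleDiff(heading, linkBearings[i]);
            if (d < bestDiff) { bestDiff = d; best = i; }
        }
        return best; // index of the link to preload
    }
}
```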
User Experience Survey: For the branches stated in the Methodology section, we shall
conduct a series of surveys and evaluate the results to report the best and most natural
methods. The eye distance, or inter-pupillary distance (IPD), set initially by the system may
be determined by these surveys. Head gestures, such as nodding or shaking the head, that
would trigger forward movement may also be determined by the surveys. The findings will
contribute to research on natural movements and their relationship with computer inputs.
The Geocoding API can return results in two output formats: json [29] and xml [30].
The parameters used in the API to get the coordinates are as follows:
Required Parameters:
Optional Parameters:
bounds: the area within which geocode results are given more prominence.
key: identifies your application for quota purposes and enables reports in the APIs
Console.
language: the language of the result.
region: the region code, specified as a two-character value.
components: filters that restrict the results returned by the geocoder.
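Putting the endpoint and parameters together, a request can be assembled as below. The endpoint and the `address`/`key` parameter names follow Google's documented Geocoding API; the key value is a placeholder and the class name is our own.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch: build a Geocoding API request URL for the json output format.
class GeocodeRequest {
    static String build(String address, String key) {
        return "https://maps.googleapis.com/maps/api/geocode/json"
             + "?address=" + URLEncoder.encode(address, StandardCharsets.UTF_8)
             + "&key=" + URLEncoder.encode(key, StandardCharsets.UTF_8);
    }
}
```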
8. Project Schedule
Task  Description  Person   Duration  Deadline    Status
1                  DD, TC   2m        1 Oct 13    100%
2                  DD, TC   2w        15 Sep 13   100%
3                  DD, TC   1m        25 Sep 13   100%
4                  DD, TC   1w        30 Oct 13   100%
5                  DD, TC   1w        1 Oct 13    100%
6                  DD, TC   2d        13 Oct 13   100%
7                  DD, TC   3w        13 Oct 13   100%
8                  DD, TC   1m        30 Nov 13   100%
9                  DD, TC   2w        15 Dec 13   100%
10                 DD, TC   1m        10 Feb 14   100%
11                 DD, TC   1m        20 Feb 14   100%
12                 DD, TC   1w        22 Feb 14   100%
13    Testing      DD, TC   5m        9 Mar 14    100%
14                 DD, TC   2w        9 Mar 14    100%
15                 DD, TC   3w        9 Mar 14    100%
9. Project Progress
9.1. Research and Understanding
Since the start of this project we have researched some of the tools that will be used in it. We
have researched Unity and Google Street View's API, along with programming languages
such as C# and JavaScript.
We have gained knowledge of the basics of the Unity engine, which allows us to integrate
programming scripts with Game Objects. We have created multiple scenes with different
properties and objects to test the functionality of the script on different objects under both
similar and different circumstances. In one of the scenes, we tested the supplied camera
control script to navigate in a two dimensional plane. In another scene, we created our own
control script with the same game object. These were some of the scenes we created to
understand the basics of Unity (See Figures 10, 11, 12, 13, 14, and 15).
Apart from the research into and understanding of our tools, we have implemented loading
Google Street View images onto a Game Object. As mentioned in the methodology, we will
be implementing two branches for displaying images, using a 2D canvas and a sphere as the
Game Object. We have compared these two methods of displaying images while using the
control script we wrote for looking around, but we have yet to determine the best way to
display the images.
We discovered that in order to re-create our project in Processing, we would possibly need a
shader to replicate the barrel shader used with the Oculus Rift SDK. We found the shader
online, along with an example. We took the example and modified the code to fit our needs:
to display a panoramic image, created from the stitching of multiple resized Google Street
View images.
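The barrel distortion that such a shader applies can be summarised as a radial scaling of each pixel's position away from the lens centre. The sketch below shows the idea only; the coefficients are illustrative assumptions, not necessarily those used by the shader we found or by the Oculus SDK.

```java
// Sketch of radial barrel distortion: points further from the lens axis
// are pushed outwards so the image looks correct through the lens.
class BarrelWarp {
    // Distortion coefficients: illustrative values, not the SDK's.
    static final double K0 = 1.0, K1 = 0.22, K2 = 0.24;

    // Map a point (x, y), centred on the lens axis, to its warped position.
    static double[] warp(double x, double y) {
        double r2 = x * x + y * y;                  // squared radius
        double scale = K0 + K1 * r2 + K2 * r2 * r2; // radial scale factor
        return new double[]{x * scale, y * scale};
    }
}
```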
We created a function that loads all the necessary images for an input location (later to be
generated from the search functionality). We applied the function to the existing code and
added a simple way to test looking around. The following is the result:
Figure 22: Applying shader to display the image on left and right eye 1
Figure 23: Applying shader to display the image on left and right eye 2
The necessary step towards completion of this project is the implementation of a method to
read sensor values from the Oculus Rift. This method can be implemented using the Oculus
Software Development Kit (OculusSDK) made available by the Oculus Rift team, but since
it is written in C++, we needed a Java wrapper that exposes the C++ library to Java through
the Java Native Interface (JNI). The Java wrapper called JRift, made available by 38leinaD,
made it possible for us to easily get the sensor values from the Oculus Rift.
Since we could now retrieve Euler angles from the Oculus Rift, we were able to display the
images on the Oculus Rift according to its orientation. The main problem that then arose was
calibrating the eye distance so that it feels most natural to the user; we therefore created
functionality that allows the user to calibrate the eye distance.
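Converting the retrieved Euler angles into the heading and pitch that select the visible part of the panorama can be sketched as follows. The sign and zero conventions (zero yaw facing north, positive pitch looking up) are assumptions for illustration, not the actual conventions used.

```java
// Sketch: turn the Rift's yaw/pitch readings (radians) into the
// heading and pitch, in degrees, used to pick the visible panorama view.
class LookAt {
    // Compass heading in [0, 360): assumes zero yaw means facing north.
    static double headingDegrees(double yawRad) {
        double deg = Math.toDegrees(-yawRad) % 360.0;
        return deg < 0 ? deg + 360.0 : deg;
    }

    // Pitch clamped to the panorama's vertical range.
    static double pitchDegrees(double pitchRad) {
        return Math.max(-90.0, Math.min(90.0, Math.toDegrees(pitchRad)));
    }
}
```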
Figure 25: Vergence and focal distance with real stimuli and stimuli presented on
conventional 3D displays.
The availability of sensor values from the Oculus Rift also made the look-at and movement-trigger functions fully workable. When the user rotates his or her head, the view is updated to the corresponding pitch and heading at the current location. The movement-trigger functionality was then programmed on top of the same sensor values; a survey was conducted to find the gesture that feels most natural to users.
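The two pieces of logic can be sketched as follows. This is our own illustration: the heading-wrapping rule matches the Street View API's 0-360 range, but the gesture (a downward head tilt) and its threshold are placeholders, not the gesture chosen by the report's survey:

```java
// Sketch (gesture and threshold are placeholders, not the surveyed
// gesture): map head-tracker Euler angles to a Street View heading and
// detect a simple movement-trigger gesture.
public class HeadTracking {

    // Wrap a yaw angle (degrees, possibly negative) into the Street View
    // heading range [0, 360).
    public static float toHeading(float yawDegrees) {
        float h = yawDegrees % 360f;
        return h < 0 ? h + 360f : h;
    }

    // Treat a head pitch below -thresholdDegrees (a downward tilt) as a
    // request to move forward along the street.
    public static boolean isMoveGesture(float pitchDegrees, float thresholdDegrees) {
        return pitchDegrees <= -thresholdDegrees;
    }
}
```

In the real system the heading computed this way selects which cached panorama column to show, and the gesture test is evaluated on every sensor update.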
Upon completing the movement-trigger functionality, we created the last of the system's main features: displaying animated images as an overlay on the Street View images. We used a database library available for Processing to store the path of each image and the location at which it should be displayed. After retrieving the images via the database, we displayed them on a layer on top of the Street View images at the specified locations. Animated images in the Graphics Interchange Format (GIF) cannot be rendered directly by Processing; an external library is required to decode and play them as an animation. We used the contributed Processing library GifAnimation for this purpose.
With the main functionalities of the system complete, we then focused on implementing the User Interface (UI). The UI was designed to give the user access to the system's functionality. It also lets the user calibrate the eye distance for more precise viewing, add overlay images and animations, and perform interactive searches.
10.2. Implementation
We began the implementation of our project in Unity but had to migrate the project to Processing due to the limitations of Unity.
10.2.1. Unity
Unity, being a powerful 3D engine and a user-friendly application, helped us get off to a quick start.
After thorough research into Unity and testing of the individual parts needed to start the project, we created a Unity project with two main scenes: one for the flat panoramic branch and the other for the spherical branch.
Following the flow of the first branch, the GameObject we selected was a Sphere. We set the center of the sphere at coordinates x=0, y=0, z=0 and the radius to 400, the approximate size of the image that would be displayed inside the sphere. The next step was to place a camera at the center of the sphere, so the camera's coordinates were also x=0, y=0, z=0. The camera is controlled by the camera-controller script shown in figure 27. The camera-controller script for the sphere rotates the camera about the x, y, and z axes in response to key input from the computer.
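The controller's core logic can be sketched as follows. The key bindings and step size here are our assumptions, not the bindings used in the report's script:

```java
// Sketch (key bindings and step size are assumptions): a minimal
// camera controller that accumulates yaw and pitch rotation from
// keyboard input, as the sphere-scene controller script does.
public class CameraController {
    float yaw = 0f;   // rotation about the vertical axis, degrees
    float pitch = 0f; // rotation about the horizontal axis, degrees

    static final float STEP = 2.0f; // degrees per key press

    public void onKey(char key) {
        switch (key) {
            case 'a': yaw -= STEP; break;   // look left
            case 'd': yaw += STEP; break;   // look right
            case 'w': pitch += STEP; break; // look up
            case 's': pitch -= STEP; break; // look down
        }
    }
}
```

The accumulated angles are applied to the camera's transform each frame, rotating the view around the centre of the sphere.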
With this, the texture was displayed on the outside of the sphere. The final step was to display it on the inside. During our research we came across a snippet of code that reverses the sphere's triangle winding order so that the main texture is rendered on the inside of the sphere, and with it we succeeded in displaying the image inside the sphere, thanks to BPPHarv [31].
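The essence of that trick can be shown in a few lines. This sketch demonstrates the winding-flip idea in plain Java on an index array; it is our illustration of the concept from [31], not the exact Unity code:

```java
// Sketch of the idea behind the fix in [31]: reversing each triangle's
// vertex order flips its winding, so faces that were culled as
// back-facing from inside the sphere become front-facing, and the
// texture is rendered on the inside instead of the outside.
public class FlipNormals {

    // triangles holds vertex indices in groups of three, as in a Unity
    // Mesh. Swapping the first and last index of each group reverses
    // the winding of every triangle.
    public static int[] flipWinding(int[] triangles) {
        int[] flipped = triangles.clone();
        for (int i = 0; i < flipped.length; i += 3) {
            int tmp = flipped[i];
            flipped[i] = flipped[i + 2];
            flipped[i + 2] = tmp;
        }
        return flipped;
    }
}
```

In Unity the flipped array is assigned back to the mesh's triangle list (normals are usually negated as well, so lighting remains correct).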
The next step in our development was to integrate this with the Oculus Rift SDK and build the project to run on the Oculus Rift device. The Oculus Rift SDK provides the OVRCameraController and OVRPlayerController prefabs. By using the OVRCameraController prefab, we could use the Oculus Rift device to control the camera movement in the sphere.
The second branch of the Unity implementation is the flat panoramic display of the image, using a Cube as the GameObject. Although a cube is used as the game object, we set one of its dimensions to 0 and the remaining two to 400, making it effectively two-dimensional.
The camera is placed at a distance from the cube, facing it, and is controlled by a separate camera-controller script, shown in figure 30. In this case the camera is translated rather than rotated.
10.2.2. Processing
Processing is a Java-based programming language and development environment. The implementation had to be completely redesigned for Processing, since no SDK is provided for it by the Oculus Rift development team. The main process of loading and displaying the images remains the same, but we concluded that the flat panoramic branch of development was the better choice here, because Processing's support for two-dimensional graphics is quite efficient.
10.2.2.1. Retrieve and display images from Street View API
We load the images from Google using their Street View API. The images are retrieved from a URL by passing GET parameters. The key parameters are the location (either latitude/longitude values or a location string, e.g. Manhattan, NY), the heading, and the pitch. The images retrieved from Street View are stored to allow local loading thereafter; this acts as a permanent cache for almost instant loading. The Processing script that fetches and stores the images is shown in figure 31.
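A sketch of the request construction is shown below. The parameter names follow the Street View Image API; the `size` value, `sensor` flag, and the cache-file naming scheme are our assumptions, not necessarily what the script in figure 31 uses:

```java
// Sketch (size, sensor flag, and cache naming are assumptions): build
// the Street View Image API request URL from the three key GET
// parameters, and derive a local file name so the downloaded tile can
// be cached permanently and reloaded without another network request.
public class StreetViewRequest {

    public static String imageUrl(String location, int heading, int pitch) {
        return "http://maps.googleapis.com/maps/api/streetview"
             + "?size=640x640"
             + "&location=" + location.replace(" ", "%20")
             + "&heading=" + heading
             + "&pitch=" + pitch
             + "&sensor=false";
    }

    // One cache file per (location, heading, pitch) triple.
    public static String cacheFileName(String location, int heading, int pitch) {
        return location.replace(" ", "_").replace(",", "_")
             + "_" + heading + "_" + pitch + ".jpg";
    }
}
```

On each lookup the system first checks whether the cache file exists and only contacts the API when it does not.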
10.2.2.5. Shader
The implementation of the shader for the Oculus Rift is based on the calculations in the
following snippets of code.
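The core of the barrel-distortion calculation can be sketched as a radial scaling polynomial. The coefficients below are the commonly cited Oculus DK1 defaults; the shader actually used in the project may use different values:

```java
// Sketch of the barrel-distortion calculation an Oculus Rift shader is
// based on. A pixel at distance r from the lens centre is pushed
// outward by a polynomial in r^2; the lens optics then undo this
// pre-distortion. Coefficients are the commonly cited DK1 defaults
// (an assumption, not the report's exact shader constants).
public class BarrelDistortion {
    static final float K0 = 1.0f;
    static final float K1 = 0.22f;
    static final float K2 = 0.24f;
    static final float K3 = 0.0f;

    // Returns the factor by which the radial distance of a pixel is
    // scaled, given the squared radius rSquared.
    public static float distortionScale(float rSquared) {
        return K0
             + K1 * rSquared
             + K2 * rSquared * rSquared
             + K3 * rSquared * rSquared * rSquared;
    }
}
```

At the centre of the lens (r = 0) the scale is 1, so the image is undistorted there and increasingly stretched towards the edges.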
Figure 36: Connecting to the Oculus Rift and getting sensor values via Processing
Figure 37: Snippet of code for looking around after implementing the JRift library
10.2.2.9. Overlays
The additional-information overlay functionality is implemented by retrieving the path, location, and heading of each overlay from a local SQLite database accessed through the BezierSQLib library. The animated images are then loaded from local storage via the retrieved paths. These GIF animations, rendered with the GifAnimation library, are displayed on top of the Street View images.
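The selection of which overlay to draw can be sketched as a nearest-location match. This is our own simplification in plain Java, bypassing BezierSQLib; the class and field names are assumptions, and a squared lat/lng distance is adequate at street scale:

```java
import java.util.List;

// Sketch (our simplification, not the report's BezierSQLib code): pick
// the overlay whose stored location is nearest the current position,
// using plain squared lat/lng distance with a cutoff.
public class OverlayLookup {

    public static class Overlay {
        public final String gifPath;
        public final double lat, lng;

        public Overlay(String gifPath, double lat, double lng) {
            this.gifPath = gifPath;
            this.lat = lat;
            this.lng = lng;
        }
    }

    // Returns the overlay closest to (lat, lng), or null if none lies
    // within sqrt(maxDistSq) degrees of the current position.
    public static Overlay nearest(List<Overlay> overlays,
                                  double lat, double lng, double maxDistSq) {
        Overlay best = null;
        double bestD = maxDistSq;
        for (Overlay o : overlays) {
            double d = (o.lat - lat) * (o.lat - lat)
                     + (o.lng - lng) * (o.lng - lng);
            if (d < bestD) {
                bestD = d;
                best = o;
            }
        }
        return best;
    }
}
```

The matched overlay's GIF is then loaded from its stored path and drawn on the layer above the Street View image.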
Figure 39: Snippet of code for displaying Overlay on top of the Street View on left eye
Figure 40: Snippet of code for displaying Overlay on top of the Street View on right eye
10.3. Interface
10.3.1. Unity
The interface of the system built on Unity is based on the Oculus Rift SDK, which provides an interface that splits the screen into two views, one for the left eye and one for the right.
Figure 42: Prototype interface for the Oculus Rift, built in Processing using the Barrel shader
The final user interface of the system is shown in figures 43 to 49.
School of ICT, SIIT
11. References
1. Street View. The homepage of Google Street View. [Online]
Available: https://www.google.com/maps/views/home?gl=us&hl=en-us
2. Whatis.techtarget (March 2009). Definition of Google Street View. [Online]
Available: http://whatis.techtarget.com/definition/Google-Street-View
3. OculusVR. The Oculus Rift by Oculus VR. [Online]
Available: http://www.oculusvr.com/
4. Processing. Processing: Open source programming language and development environment. [Online]
Available: http://processing.org/
5. Wikipedia (December 2010). Description of Virtual Reality. [Online]
Available: http://en.wikipedia.org/wiki/Virtual_reality
6. Kickstarter. The homepage of Kickstarter website. [Online]
Available: http://www.kickstarter.com/
7. Kickstarter project: Oculus Rift (September 2012). Oculus Rift: Step into the Game by
Oculus. [Online]
Available: http://www.kickstarter.com/projects/1523379957/oculus-rift-step-into-the-game
8. Unity3d. Unity: Game engine, tools and multiplatform. [Online]
Available: http://unity3d.com/unity/
9. Leap Motion. [Online]
Available: http://www.leapmotion.com/
10. Microsoft Kinect. A device that gives computer eyes, ears, and a brain. [Online]
Available: http://www.microsoft.com/en-us/kinectforwindows/
11. Roadtovr (March 2013). Article on Google Maps with Oculus Rift and Leap Motion by
Ben Lang. [Online]
Available: http://www.roadtovr.com/google-io-2013-first-photos-of-google-maps-with-oculus-rift-and-leap-motion/
12. Oculus Street View. The website that displays the Oculus Street View. [Online]
Available: http://oculusstreetview.eu.pn
13. Github. The profile page of troffmo5. [Online]
Available: https://github.com/troffmo5
14. troffmo5/OculusStreetView. The rift server files and other Oculus Street View files.
[Online]
Available: https://github.com/troffmo5/OculusStreetView
15. Gizmodo (June 2013). Article on Oculus Rift and NASA's Virtual Reality of Mars by Eric Limer. [Online]
Available: http://gizmodo.com/oculus-rift-nasa-s-simple-vr-rig-can-let-you-explore
16. Virtuix Omni. The treadmill used to control the movement in a virtual reality. [Online]
Available: http://www.virtuix.com
17. Barrel by ixd-hof. Shader for Oculus Rift. [Online]
Available: https://github.com/ixd-hof/Processing/blob/master/Examples/Oculus%20Rift/OculusRift_Basic/OculusRift_Basic.pde
18. JRift by 38leinaD. [Online]
Available: https://github.com/38leinaD/JRift
19. Java Native Interface (JNI). [Online]
Available: http://docs.oracle.com/javase/6/docs/technotes/guides/jni/
20. Oculus SDK. [Online] Available: https://developer.oculusvr.com/
21. Street View API. [Online]
Available: https://developers.google.com/maps/documentation/streetview/
22. Geocoding API. [Online]
Available: https://developers.google.com/maps/documentation/geocoding/
23. C Sharp. [Online]
Available: http://en.wikipedia.org/wiki/C_Sharp_(programming_language)
24. JavaScript. [Online]
Available: http://en.wikipedia.org/wiki/JavaScript
25. BezierSQLib by F. Jenett. A Processing library that acts as a JDBC driver wrapper.
[Online]
Available: http://bezier.de/processing/libs/sql/
26. SQLite. [Online]
Available: http://www.sqlite.org/
27. GifAnimation. [Online]
Available: http://extrapixel.github.io/gif-animation/
28. Google API Key. API key for Google Street View. [Online]
Available: https://developers.google.com/maps/documentation/streetview/#api_key
29. JSON. [Online]
Available: http://json.org/
30. XML. [Online]
Available: http://en.wikipedia.org/wiki/XML
31. Flip triangles to draw inside sphere. [Online]
Available: http://answers.unity3d.com/questions/330025/flip-normals-unity-lightwave-spheres.html