
Software Engineering Department

Accelerometer Control

Fulfillment of Requirements for Project in Software Engineering,


Course 61401, Karmiel - January 2010

Omer Ben Yosef 038066809, Slava Kamzen 307065086

Supervisor: Dr. Peter Soreanu.


Content

1. Theoretical Background

1.1 Cellular OS

1.2 Android

1.3 Accelerometer

1.4 Accelerometer in Smart Phones

2. Project Description

2.1 Project review

2.2 Project's Math

2.3 Project optimization

2.4 Project infrastructure

2.5 Project components

2.6 Detection Algorithm

2.7 Main Problems

3. Software Engineering Documents

Abstract: The project defines and implements a quasi-generic motion-gesture shortcut generator
for cellular phones based on the Android OS. The application exploits the 3-axis accelerometer
already embedded in the phone. Although the accelerometer's primary use is to adjust the
portrait/landscape aspect of the display, the project uses it to generate, save, and later edit
motion-based shortcuts. The project implements shortcuts for calling contacts, opening installed
applications, switching sound profiles, and a few other functions. The shortcuts are a suite of
3-dimensional gestures executed by the user. The learning process is adapted to each user and to
the complexity of the movements.

Keywords: Android, gestures, accelerometer, Smartphone.

1. THEORETICAL BACKGROUND
1.1 Cellular OS:

At the beginning of the age of cellular phones, all phones possessed a fairly simple OS (operating system)
that focused mostly on dialing and storing contacts. As users' demands grew and miniaturization
technology improved, the computing power of the hardware grew, the hardware had to support more
features, and at a later stage it had to possess an advanced OS to support the phone's basic
operations as well as third-party software; hence the birth of the Smartphone.

The Smartphones of this day and age are practically miniature computers, capable of fully running Java
software, supporting 3D games, and boasting a powerful processor such as the iPhone 3GS's 600 MHz ARM
Cortex-A8 [1], and high memory such as the HTC Hero's 512 MB of ROM and 288 MB of RAM [2].
Such a device must possess a powerful OS capable of fully utilizing the machine's advantages; hence the
modern cellular phone OS was invented.
At the time of writing this article, the most common operating systems are Symbian, BlackBerry, iPhone
and Android [3], as seen in (1). Each operating system possesses its own programming methodology, and
programs written directly for a specific operating system are not able to execute on a different
machine.

Global Smartphone Market by Operating System


Symbian 50.3%
RIM BlackBerry 20.9%
Apple iPhone 13.7%
Microsoft Windows Mobile 9.0%
Google Android 2.8%
Other (Palm, Linux) 3.3%
(1)

Even in this highly saturated market, new operating systems rise, such as Samsung's Bada [4],
scheduled for release in 2010, which is open source and has an application store similar to the Google and
Apple stores.

Programming methodology for a Smartphone depends on the chosen system. If programming for the
Apple iPhone, you must write the code on a Mac, purchase a developer ID, and have Apple verify your
software, a process that takes up to a month. Opposed to that is programming for the Google Android,
where you may use any Java development environment with a free SDK, and upload the software directly to
the target machine or the application store.

Despite all the differences between the various OSes, all must still supply the basic needs of the users
without requiring the user to install additional software: basic telephony, messaging, contacts,
camera, and Internet surfing. Beyond that, most phone developers add common features to draw even
more customer attention, such as touch screens, advanced sensors, and voice recognition.

1.2 Android:

Android is an open-source operating system designed for cellular phones, based on the Linux kernel
and designed to be flexible and easily upgradeable, using a JVM (Java Virtual Machine) to run
installed programs.

The Android operating system was unveiled in November 2007, at the same time as the founding of the
Open Handset Alliance [5]; in a mere two years it managed to reach a market share of 3.5% of the total
Smartphone market.
At the early development stages, the SDK supplied to developers was lacking: it offered inadequate
documentation, contained bugs, and lacked infrastructure for QA.

The first official non-beta release of the Android SDK was version 1.0, in September 2008; however, the
first version to become common on the general public's phones was Android 1.5, code-named Cupcake,
which featured several high-profile additions to the software.

Software for Android is developed in the Java language using the API released by Google to access
the device, making veteran programmers capable of starting to develop immediately using only the
basic documentation supplied, while still being able to use most of the Android features.
The most common development environment is Eclipse, with add-ons for the XML designer, the
API and the emulator.

Currently, Android has the following features [6]:

 Capable of supporting large VGA screens, and possesses a 2D graphics library and a 3D library based
on OpenGL, enabling developers to create visual effects.
 Uses the SQLite software for data storage purposes.
 Common connectivity technologies are supported, including GSM/EDGE, CDMA, EV-DO and
UMTS for carrier support; Bluetooth and Wi-Fi are supported for Internet and device
connections.
 For texting, the common SMS and MMS are included in the base of the operating system.
The factory-installed browser is based on the open-source WebKit engine and is
considered one of the most standards-accurate cellular browsers.
 Applications written in Java run on a JVM specialized for mobile hardware, in order to limit
memory and processor usage.
 Android supports the following media formats without the use of third-party software:
Video: H.263, H.264, MPEG-4.
Audio: AMR (a format designed for speech coding), AAC, MP3, MIDI, OGG Vorbis, WAV.
Images: JPEG, PNG, GIF, BMP.
Many other formats are supported via third-party software.
 It possesses support for most common hardware used in Smartphones, including but not limited to
cameras, touch screens (including multi-touch), GPS, accelerometers, magnetometers, and
hardware-accelerated 3D graphics.
 Support for over-the-air purchase of applications using the Android Market, and installation
without the need for a PC.

Android uses a specific JVM that does not follow the Java standard, but is designed to limit
resource consumption for mobile use, at the cost of several graphics libraries. Thanks to this
design, each application runs on its own JVM instance, preventing a single point of failure where one
program might crash the entire system or compromise the security of another,
making multi-processing much safer.

Developers are able to use an emulator that acts as close as possible to a real phone. Thanks to the
fact that the entire system is open source, creating an emulator and integrating it into the IDE takes
very little time, and the emulator behaves with high similarity to a real phone, so developers can see
the effects of their software close to the real result.
In (2) we see the default emulator supplied. The emulator takes the form of a real phone, capable of
mimicking one, thus making development on the emulator as close as possible to working with a real
working phone.

In order to invite programmers to develop for Android and provide a stream of applications while
Android was still lacking in deployment, Google held several competitions, promising large rewards,
to draw developers to its operating system.

In addition, the Android Market allows developers to put applications up for sale, with 70% of the
price going to the developer and 30% to the carriers, and no further costs imposed on them by Google;
unlike the iPhone's App Store, which demands that the developer pay large sums for the SDK and
digital signature. This enables low-budget firms and small teams to develop commercial software.

(2)
Android is an up-and-coming operating system with growing popularity in the cell phone market;
its market share grows fast with each release, and its simplicity for developers assists in creating a
wide bank of software for users.

1.3 Accelerometer:

The accelerometer is a sensor designed to measure the acceleration it experiences relative to free fall;
thus the known local gravity must be supplied for a reading to have meaning.

The basic mechanism of an accelerometer is a mass attached to a damping mechanism (spring, fluid
or gas); the displacement of the mass is measured, either optically or mechanically. Several
accelerometers are usually combined to make multi-dimensional measurements.

Accelerometers measure acceleration, from which speed and displacement can also be derived
(using a calibrated zero value), making them important instruments in the engineering fields, for
example measuring a car's 0-100 km/h acceleration.

Thanks to miniaturization, accelerometers became common household items used in plenty of home
electronics such as gaming consoles, with the Wii's remote serving as a good example of their
capability: games such as tennis are played with the remote, and the console reads the movements into
the game accurately.
Accelerometers are also used in most recent smart phones, currently with fairly limited usage, such
as switching between portrait and landscape viewing modes, but developers are writing new software
that uses more and more of the potential.

A typical accelerometer used in the small-devices industry has the specifications stated in (3) [7]
[8]:

Measurement range: ±6 g
Nonlinearity: ±0.2%
Cross-axis sensitivity: ±1%
Noise density: 250 μg/√Hz rms
Response frequency: 1600 Hz
Supply current: 350 μA

(3)

1.4 Accelerometer in Smart Phones:

In the recent generations of smart phones, the accelerometer became an integral part of the machine,
with usage ranging from sensing the machine's tilt in order to change the screen orientation, to games
using the phone itself as a steering wheel.

With the use of additional sensors such as a magnetometer and GPS, smart phones can have a great
sense of the machine's location and orientation.

Using the excellent on-board sensors, developers can create a wide variety of programs that use the
phone physically as an input device instead of the GUI, thus revolutionizing the common norm of user
input; for example, the cell phone can be used in the same manner as a Wii remote to supply input for
games [9], or as a steering wheel for driving games [10].

The accelerometer present in smart phones is capable of measuring very minute changes. In (3) you
can see that even at a standstill it shows small fluctuations, and in (4) we see the readings from a
phone being moved on a surface and then moved back to the start position: while the movement was
meant to be lateral, the x and z sensors picked up changes far exceeding the normal noise, showing
that even on a surface, the sensors were able to pick up minute changes in altitude.

However, by far the most interesting motion is one performed in the real 3-dimensional world, such as
seen in (5), which shows the phone being dropped. It starts from a standing position, seen as the
x-axis sitting at -1 g before the drop, followed by a freefall, seen as all three sensors indicating
close to no acceleration difference compared to freefall (calibrated as 0 g for the sensors), followed
by a great quick shock as the phone hits the floor, followed by rebounds and aftershocks.

(3) (4) (5)

(6)

These graphs show the capability of the sensors installed in smart phones: they are capable of
capturing elaborate motions with great accuracy, and may be used for a wide variety of applications,
ranging from games to actual monitoring of the human holding them, for various purposes.

A project named TiltRacer [9], using the Nokia 5500 as a gaming controller, has shown great promise,
making the phone a remote that controls a car: the phone's Bluetooth communicates the sensor readings
to a computer, which moves the car accordingly.
In (6) we can see the project at an early development stage, where the user is holding the Nokia phone
and moving it to control the car.

(7)

The TiltRacer project uses a Symbian phone, and the majority of the processing is done on the computer.
However, Android has sufficient resources to actually run complete games using the motion captured
by the highly accurate sensors, or even to use the data for detecting gestures that operate the phone
instead of the GUI.

2. PROJECT DESCRIPTION
2.1 Project review

The project's purpose is to enable the user to define a series of Motions, having the program read
each Motion and match it to a pre-defined Movement Rule that is set to an Action.

A Movement Rule is a Motion together with the Action the Motion is set to perform; it is created in
the learning process.

A Motion is defined as a physical movement of the phone, be it a single movement or a set of them,
read by the installed accelerometer sensors and matched against a motion that was recorded at an
earlier stage, during the learning process.

An Action is defined as the operation that was assigned to the Motion during the learning process.
An Action can range from calling a contact that was defined at the creation of the Movement Rule,
through changing the ring profile to the one set in the Movement Rule, to activating installed
software or answering the phone.

The learning process is the part of the program used to teach the program Motions and bind them to
Movement Rules, which will in turn activate Actions.
This process is initiated actively through the GUI (Graphical User Interface), to prevent accidental
learning of movement rules: the user simply selects the "learn new rule" option from the GUI, then
selects the type of action and the specific action, and performs the Motion.
While the program is in listening mode, it monitors the telemetry data sent by the sensors. It then
attempts to compare the motion's data to a known Motion, using a comparison algorithm developed by us
in the Detection Engine, with a margin of error to accommodate human and sensory irregularity.

The program attempts to refrain from recognizing Motions that were not meant to be directed at the
program's Detection Engine, which would effectively run unintended Actions and might cause the user
discomfort.

The Detection Engine receives the telemetry data and, using a unique algorithm the team has developed,
detects similarity to a known Motion.
One of the driving motivations for the algorithm is to minimize resource usage, due to the fact that
the program runs constantly and the machine's computing power is highly limited, all while minimizing
the running of unwanted Actions.
The main principle is dividing the Motion into several individual movements of the phone and storing
them in a tree for easy retrieval, much like the Trie data type (prefix tree) [11], making complex
motions that start the same way share a path down the tree, as seen in (7), an example of a Trie.

(8)

For our purposes, instead of dividing a string into characters, we divide a complex Motion into atomic
movements. We then attempt to generalize each atomic movement and add it to the tree. When reading
sensor telemetry, the same generalization algorithm is executed and compared to the root; then the
next atomic movement is compared with the child. If a match is found, we use the linked leaf as the
Motion and, using the Movement Rule, run the desired Action.
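The lookup structure described above can be sketched as follows; the class and method names here are our illustration, not the project's exact code:

```java
// Illustrative sketch: a prefix tree keyed by atomic movements, mapping a
// complete Motion (sequence of atomic movements) to the Action stored at its
// final node. Two motions with a common prefix share the same path.
import java.util.HashMap;
import java.util.Map;

public class MotionTrie {
    static class Node {
        Map<String, Node> children = new HashMap<>();
        String action; // non-null only where a complete Motion ends
    }

    private final Node root = new Node();

    // Stores a Motion and the Action it should trigger.
    public void addRule(String[] motion, String action) {
        Node cur = root;
        for (String atomic : motion)
            cur = cur.children.computeIfAbsent(atomic, k -> new Node());
        cur.action = action;
    }

    // Walks the tree one atomic movement at a time; null if no rule matches.
    public String match(String[] motion) {
        Node cur = root;
        for (String atomic : motion) {
            cur = cur.children.get(atomic);
            if (cur == null) return null;
        }
        return cur.action;
    }

    public static void main(String[] args) {
        MotionTrie t = new MotionTrie();
        t.addRule(new String[] { "left", "up" }, "call Alice");
        t.addRule(new String[] { "left", "down" }, "silent profile");
        System.out.println(t.match(new String[] { "left", "up" })); // call Alice
    }
}
```

Note that both insertion and lookup visit one node per atomic movement, which is what makes the cost independent of the number of stored rules.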

The generalization in the Detection Engine is required both to improve the processing of atomic
movements and to make detection of motions possible at all, mostly due to the inconsistency of human
actions as well as the sensors' background noise. The generalization's sensitivity is one of the main
ways to control the percentage of correct Motions remaining undetected (false negatives) as opposed to
detecting a Motion where the user didn't intend one (false positives). The latter should be considered
worse, as it might cause the user humiliation, whereas the former will most likely only cause the user
to reattempt the Motion.

2.2 Project's Math

The software receives information from the sensors in the form of three arrays: orientation,
acceleration, and direction relative to Earth's North Pole. The information we seek, the acceleration,
includes the movement of the cellular device as well as Earth's gravitational pull, which we must
remove to receive the actual acceleration of the device relative to the user. To do so we use rotation
matrices built from the orientation data, multiplied by Earth's gravitational constant.

The rotation itself is done using the standard basic rotation matrices, where yaw, pitch, and roll are α, β, and γ:

Rz(α) = [[cos α, −sin α, 0], [sin α, cos α, 0], [0, 0, 1]]
Ry(β) = [[cos β, 0, sin β], [0, 1, 0], [−sin β, 0, cos β]]
Rx(γ) = [[1, 0, 0], [0, cos γ, −sin γ], [0, sin γ, cos γ]]

(9)

An improvement implemented by the team uses the fact that the screen of the cellular phone is directed
up on the new axis, and therefore reduces the amount of calculation required of the CPU. This is an
especially important optimization due to the share of these calculations in the overall running time,
as the transformation occurs with every sensor update, which happens several dozen times per second
while the system is in listening mode.

The gravity vector is calculated by the following functions; additional optimization in the code makes
the process even faster:

x = 9.81 ∗ cos(β) ∗ sin(γ)
y = 9.81 ∗ sin(β)
z = −9.81 ∗ cos(β) ∗ cos(γ)

(10)
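A minimal sketch of this gravity removal, following equation (10); the variable and method names are ours, and angles are assumed to be in radians, with β the pitch and γ the roll:

```java
// Sketch: compute the gravity vector on the device axes from the orientation
// angles, then subtract it from a raw accelerometer reading to leave only the
// user-induced acceleration.
public class GravityRemoval {
    static final double G = 9.81;

    // Gravity components on the device axes, per equation (10).
    static double[] gravity(double beta, double gamma) {
        double gx =  G * Math.cos(beta) * Math.sin(gamma);
        double gy =  G * Math.sin(beta);
        double gz = -G * Math.cos(beta) * Math.cos(gamma);
        return new double[] { gx, gy, gz };
    }

    // Subtracts the gravity vector from a raw reading (x, y, z).
    static double[] removeGravity(double[] raw, double beta, double gamma) {
        double[] g = gravity(beta, gamma);
        return new double[] { raw[0] - g[0], raw[1] - g[1], raw[2] - g[2] };
    }

    public static void main(String[] args) {
        // Phone lying flat (beta = gamma = 0): gravity is (0, 0, -9.81), so a
        // resting reading of (0, 0, -9.81) yields ~zero real acceleration.
        double[] real = removeGravity(new double[] { 0, 0, -9.81 }, 0, 0);
        System.out.printf("%.2f %.2f %.2f%n", real[0], real[1], real[2]);
    }
}
```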

Overall, the goal of this project is to enable the user to teach the program several gestures and set
them to activate desired features; after the initial learning, the phone must remember them.
Later, as the user uses the phone in its listening mode, he makes physical gestures with the cellular
device. The phone in turn follows this order in the detection process:
first, as the user moves the phone, the Detection Engine breaks the movement into atomic movements and
compares them to the saved Motions. If it detects a Motion, it checks the Motion's Action in the
Movement Rule, and finally it runs the Action.

2.3 Project optimization

During the testing of the project we found that most atomic motions had a shadow motion: the
deceleration part of the movement was registered as an atomic motion of its own. This made
false-negative misses abundant, as the user did not always decelerate at a rate sufficient for the
sensor threshold to register the deceleration as an atomic motion.
The stripping process removes atomic motions that are simply the deceleration part of the motion
prior; by removing them, false-negative misses go down immensely. False positives rise as a result
when motions are too close to each other, but the rate of the rise makes this tactic still beneficial.
The stripping process occurs both when creating a new rule and when the system is in listening mode:
once it has finished reading atomic motions, it strips the movement prior to comparing it to the saved
motions. This process's complexity is O(n), where n is the length of the motion prior to the stripping.

Additional stripping mechanisms are in use, such as requiring a minimum length of the motion and
removing motions that were too short, drastically preventing spikes, bumps, and noise from registering
as parts of the motion itself.
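The shadow-stripping idea can be sketched as a single O(n) pass; this is our simplified reconstruction, not the project's exact code, and a production version would also need to distinguish genuine back-and-forth gestures from deceleration shadows:

```java
// Sketch: an atomic motion that is exactly the reverse of the motion before
// it is treated as that motion's deceleration ("shadow") and dropped.
import java.util.ArrayList;
import java.util.List;

public class MotionStripper {
    // Returns the opposite atomic movement ("left" <-> "right", etc.).
    static String reverse(String atomic) {
        switch (atomic) {
            case "left":  return "right";
            case "right": return "left";
            case "up":    return "down";
            case "down":  return "up";
            default:      return atomic;
        }
    }

    // O(n) single pass: drop each atomic motion that directly undoes the
    // previous one (the shadow left by decelerating the phone).
    static List<String> strip(List<String> motion) {
        List<String> out = new ArrayList<>();
        for (String atomic : motion) {
            if (!out.isEmpty() && atomic.equals(reverse(out.get(out.size() - 1)))) {
                continue; // shadow of the previous movement: skip it
            }
            out.add(atomic);
        }
        return out;
    }

    public static void main(String[] args) {
        // "left, right, up": the "right" is just braking after the "left".
        System.out.println(strip(List.of("left", "right", "up"))); // [left, up]
    }
}
```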

Overall, the complexity of adding a new movement depends only on the length of the motion, not on the
number of motions already saved; this process is accomplished in O(n), where n is the length of the
motion. The complexity of detecting a motion from the background is also O(n): after the motion has
been read, it is stripped in O(n) and compared against the tree in O(n). Therefore, this process does
not depend on the number of motions in place, and the maximum length of a motion is hard-coded to 25.

2.4 Project infrastructure

The Project is divided into two modes:

 UI mode

When the application operates in UI mode, it allows the user to define new rules and select
application options. The application's Activity is responsible for this part.

 Run-in-background mode

When the application operates in listening mode, it monitors user movements and matches them
against the movement tree. This mode runs in the background, is required to be very stable, and
must take the minimum amount of resources possible. The application's main Service is responsible
for this part.

Activity:

An activity is a single, focused module that the user can interact with. Almost all activities interact with
the user, so the Activity class takes care of creating a window for you in which you can place your UI.
While activities are often presented to the user as full-screen windows, they can also be used in other ways:
as floating windows or embedded inside of another activity.
The AccelControl.java class is the project's activity.

Service:

A Service is an application component representing either an application's desire to perform a
longer-running operation while not interacting with the user, or to supply functionality for other
applications to use. Services, like other application objects, run in the main thread of their hosting
process. This means that if your service is going to do any CPU-intensive work (such as MP3 playback)
or network-heavy communication, it should spawn its own thread in which to do that work.
BackgroundMotionListener.java is the project's service.

2.5 Project components:

The project is created from a combination of packages, each containing classes and XML files:

AccelControl Package:

 AccelControl.java
The project's main activity. Responsible for the user interface, application rules and settings.
This class runs the GUI and adds new rules to the tree and the rules list, beginning with reading
the required action and reading the motion that will be saved for that action, as well as saving
and loading the settings.

 AppRule.java
A class responsible for an application rule; it extends the Rule class and keeps all the information
about the application the user set to launch, including the application's name, package, and activity name.

 CallingRule.java
A class responsible for a calling rule; it extends the Rule class and keeps the information about
the remote user's phone number.

 EventReceiver.java
A class responsible for the application's auto-run feature. It gets the boot-completed intent from
the BroadcastReceiver class (a class included in the SDK) and the user's auto-run option (from shared
preferences saved in the phone's flash memory). If the user chose the auto-run option, the application
will run the AccelControl activity on the phone's startup.

 ProfileRule.java
A class responsible for the profile-select rule; it extends the Rule class and keeps the information
about the selected profile.

 Rule.java
Rule is an abstract super-class that keeps the type of the extended rule and declares the methods
its child classes will use. AppRule, CallingRule and ProfileRule extend this class for the specific
Action they are meant to run.

AccelerometerMath Package:

 Detector.java
The Detector class is responsible for detecting a single atomic movement; it must work as fast as
possible, as it runs all the time. The class is also responsible for stripping a full movement
down to a more basic one, making it more stable and reducing noise.

 Trie.java
This class is the parent class of the Trie data type; it is connected to the root of the Trie and
runs several of the simpler functions.

 TrieNode.java
The TrieNode class holds a single node of the Trie: the information inside the node and the node's
children, as well as all the functions required for adding nodes to the tree, finding a specific
node, and reading or writing a rule that is assigned or needs to be assigned.

Logic Package:

 MyPhoneStateListener.java
A class that listens to the phone's call state (using the SDK's PhoneStateListener), supporting
actions that depend on it, such as answering the phone.

 RullesCollection.java
This class holds the rules as a list, making it possible to interact with the rules directly rather
than through the tree.

Service Package:

 BackgroundMotionListener.java
This class is the service of the project; it is responsible for running in the background while the
user has no visual interface to interact with. Its main responsibility is to detect the Motions and
run the Rules, making it the most important class in terms of running time, as it runs at all
times while the user's phone is on. Therefore, it is crucial to keep its complexity at the bare
minimum.
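The rule hierarchy described above can be sketched as follows; the method names and fields are our assumptions, and the real classes interact with the Android telephony, package, and audio APIs rather than returning strings:

```java
// Sketch: an abstract Rule with one subclass per Action type, mirroring the
// AppRule / CallingRule / ProfileRule classes described above.
public class RuleDemo {
    abstract static class Rule {
        final String name;
        Rule(String name) { this.name = name; }
        abstract String execute(); // each subclass runs its own kind of Action
    }

    static class CallingRule extends Rule {
        final String phoneNumber;
        CallingRule(String name, String phoneNumber) { super(name); this.phoneNumber = phoneNumber; }
        String execute() { return "dialing " + phoneNumber; }
    }

    static class AppRule extends Rule {
        final String packageName;
        AppRule(String name, String packageName) { super(name); this.packageName = packageName; }
        String execute() { return "launching " + packageName; }
    }

    static class ProfileRule extends Rule {
        final String profile;
        ProfileRule(String name, String profile) { super(name); this.profile = profile; }
        String execute() { return "profile -> " + profile; }
    }

    public static void main(String[] args) {
        Rule[] rules = { new CallingRule("call home", "+972-0000000"),
                         new AppRule("open browser", "com.android.browser"),
                         new ProfileRule("mute", "Silent") };
        // The service only needs the Rule supertype to run any saved Action.
        for (Rule r : rules) System.out.println(r.name + ": " + r.execute());
    }
}
```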


2.6 Detection Algorithm

The main detection of a motion is performed by the following algorithm.

1. Even before the user has pressed the record-motion button, the system must align itself using the
orientation sensors, so that when recording starts, the data is on the correct axes.

1.1. Whenever the sensors detect a change in the information, a function called
"void onSensorChanged(int sensor, float[] values)" runs. This function then sees which sensor has
called it; if the sensor is the orientation sensor, it proceeds to 1.2.

1.2. The function sends the sensor data to the detector, using the detector's
"int update(int sensor, float[] values)" function.

1.3. If the update function detects that the sensor is orientation, it starts by taking the values
from the values array and putting them into the yaw, pitch and roll variables.

1.4. The next step is calculating the gravity vector from the orientation, using the functions in (10),
and saving it to a static variable in the detector.

Overall, this stage runs in O(1), as there are only a few calculations, thanks to the optimization of
these functions.

When the user presses "record motion", the next algorithm begins.

2. Beginning to seek acceleration.

2.1. Set the record mode to true.
2.2. Whenever the sensor senses a change in its readings, the function "int update(int sensor, float[]
values)" activates. This function first checks the type of sensor: if it is orientation, it is handled
as in 1.3; if it is acceleration, it continues to 2.3.
2.3. Clear the motion variable and get a new atomic motion from the detector, by running
"detector.update(sensor, values);".
2.3.1. The detector receives the sensor type and readings, and enters the acceleration-handling part.
2.3.2. It subtracts the gravity vector saved from the orientation handling from the acceleration data,
effectively getting the real acceleration relative to the phone's axes.
2.3.3. A first compensation for axis movement attempts to detect axis misalignment due to bad
orientation data, which occurs at high angles of tilt.
2.3.4. Detect on each axis whether there is a motion that crosses the threshold; if there is, apply
minute fixes to its priority, as some axes are more sensitive than others.
2.3.5. A primary stripping process detects whether a motion is strong on one axis but much weaker on
the others, effectively eliminating the additional axis movements when there is a wide swing on one
axis and the other axes carry only the noise of that swing.
2.3.6. Return to the caller (AccelControl) with the atomic movement that was detected.
2.4. If no motion happened, do nothing.
2.5. If a real motion was detected (not a static motion after real motions):
2.5.1. The stochastic algorithm: if the atomic movement contains a movement on a single axis while
the last atomic movement was on the same axis, increase the counter for that axis; if the last atomic
movement contained a movement in that direction but this one does not, zero that direction's counter.
2.5.2. If one or more of the axes exceeded the minimal distance, it will be set as the next atomic
motion.
2.5.3. Set the directions each axis is taking (to prevent shaking, such as (left, right, left, right),
from exceeding the minimal distance).
2.5.4. Set the sleep counter to zero.
2.6. If the motion was a static motion (no atomic movement detected), while there were real motions
before it:
2.6.1.
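Steps 2.3-2.5 can be sketched, in heavily simplified form, as a per-axis threshold with run counters; the threshold and minimum run length below are illustrative values, not the project's:

```java
// Sketch: threshold each axis of the gravity-free acceleration, keep a
// per-axis run counter, and emit an atomic movement only for an axis that
// stayed above the threshold long enough (a crude stand-in for the
// stochastic minimum-length rule described above).
public class AtomicDetector {
    static final double THRESHOLD = 1.5; // m/s^2, assumed value
    static final int MIN_RUN = 3;        // consecutive samples, assumed value

    private final int[] run = new int[3]; // per-axis consecutive-sample counters

    // Feeds one gravity-free sample {x, y, z}; returns e.g. "+x" when an axis
    // has just reached MIN_RUN samples above threshold, else null.
    public String update(double[] accel) {
        String detected = null;
        for (int axis = 0; axis < 3; axis++) {
            if (Math.abs(accel[axis]) > THRESHOLD) {
                if (++run[axis] == MIN_RUN) {
                    detected = (accel[axis] > 0 ? "+" : "-") + "xyz".charAt(axis);
                }
            } else {
                run[axis] = 0; // axis went quiet: reset its counter
            }
        }
        return detected;
    }

    public static void main(String[] args) {
        AtomicDetector d = new AtomicDetector();
        double[][] samples = { { 2, 0, 0 }, { 2, 0, 0 }, { 2, 0, 0 } };
        for (double[] s : samples) {
            String m = d.update(s);
            if (m != null) System.out.println("atomic movement: " + m);
        }
    }
}
```

Resetting a counter the moment its axis drops below threshold is what discards short shocks, mirroring the minimum-length requirement the text describes.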

2.7 Main Problems:

During the creation of the project, the development crew encountered many hardships, ranging from
unsupportive components in the SDK and the lack of third-party components to support development, to
the Android's design and protection.

One of the main problems while working on the emulator was actually getting sensor data, as the
emulator does not supply sensor data to the program. The solution was SensorSimulator, a free,
open-source third-party application that simulates sensors: it controls an Android application that
sends the data to the applications, while being controlled by an easy-to-use Java application that
runs on the computer.
This solution provided us with an easy-to-use sensor source; however, the program itself had plenty of
problems that could only be solved using a real cellular phone, among them the fact that the
application is designed for Android 1.1, with updates made for Android 1.3, making it barely
functional on Android 1.5 and incorrect for applications designed for Android 2.1.
The team solved the problems that arose from this only when they went to testing on the physical
machine, requiring them to revise many functions that were designed for the simulator but were no
longer appropriate.

An additional problem that arose when migrating to the actual physical machine was that noise and
human error became a real interference to the attempted motions.
To solve these problems, the team extended the detection rules to relax the limitations, and
introduced a stochastic algorithm that requires a minimum length of the motion, in milliseconds. Each
motion can be composed of up to three movements, each on a different axis, and the minimum length is
required of each axis. For example, if you move the phone left for half the minimum time, and then
left and up for half the minimum time, the detector will detect the left, as it ran for the full
minimum time, but not the up, for it only ran half the time.
This helps clear shock motions that occur for a short time but might otherwise overrun the existing
motions, making them invalid.

Another problem we encountered was saving a pointer to the rule in the tree. The problem stems from
the fact that we add to the tree at an early stage, and only much later do we actually assign the rule
itself to the node. This was solved by changing the tree to create the location for the rule and
create the node, but save a pointer to it as a global object; only when the user presses the save
button is the rule created and assigned to the node in question.
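The deferred assignment can be sketched as follows (our illustration; Node stands in for the project's TrieNode):

```java
// Sketch: the tree node is created when the motion is recorded, a reference
// to it is kept, and the rule is attached only when the user presses "save".
public class DeferredAssignment {
    static class Node { String rule; }   // stands in for the project's TrieNode
    private Node pendingNode;            // global reference kept until "save"

    // Early stage: the node is created in the tree, but carries no rule yet.
    public Node recordMotion() {
        pendingNode = new Node();
        return pendingNode;
    }

    // Later stage: the user presses "save" and the rule is finally attached.
    public void onSavePressed(String rule) {
        if (pendingNode != null) {
            pendingNode.rule = rule;
            pendingNode = null;
        }
    }

    public static void main(String[] args) {
        DeferredAssignment d = new DeferredAssignment();
        Node n = d.recordMotion();       // node exists, rule still null
        d.onSavePressed("call Alice");   // rule attached on save
        System.out.println(n.rule);      // call Alice
    }
}
```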


3. SOFTWARE ENGINEERING DOCUMENTS

3.1 UML Use Case

3.2 GUI

The project UI will be divided into 3 different interfaces:

 Main Screen:
The main screen of the application contains the application logo and menu
buttons which allow the user to switch between application screens.

 Settings Screen:
 Application auto-start option.
 Clear movement tree button.
 New Rule Screen:
 Call contact:
o Displays a list of all phone contacts.
 Run application:
o Displays a list of all installed applications.
 Select audio profile:
o Silent.
o General.
o Vibrate.

Main window: Settings window:

New rule windows:

3.3 Requirements Document

System Creators
The system will be created by Slava Kamzen and Omer Ben Yosef under the supervision of Dr. Peter
Soreanu.

System Designation
The system is to be a graduation project in the field of software engineering, and is to meet the
standards of equivalent commercial software.

System Essence
The system should be a base platform for operating a cell phone through physical motions instead of the
GUI.
The actions will be pre-defined by the user using a highly flexible learning system.

Functional Demands
1. The system's platform shall be the new Android OS, specifically targeting the HTC Hero running Android
1.5 or above, chosen for its reliability and sensor precision.
2. For each action the user wishes the system to learn, he must first feed it using the system’s learning
routine.
2.1. The possible operations are divided into 3 types:
2.1.1. Dialing a specific contact – a specific motion causes the system to dial a contact specified
during the learning process.
2.1.2. Operating specific external software – a specific motion causes the system to activate a
program specified during the learning process.
2.1.3. Changing the active profile – the system changes the active ringing profile according to the
user's selection.
2.2. Learning the motions will take place during the action learning process, when the user will be
prompted to perform the actions several times.
2.3. The system will inform the user about any attempt to learn a new motion that is too similar to an
existing motion.
3. The system will be able to switch between two operation-modes, a listening mode and a background
mode.
3.1. Moving between the two operation-modes will be done by the user himself.
3.2. Performing a known motion while the system is in listening mode will result in the activation of the
saved action.
3.2.1. While the system is in listening mode the system will only react to motions the user has pre-defined
and will attempt to minimize the number of incorrect or undesired actions it performs.
3.3. While in background mode (not listening) the system will not react to any action.
4. The system's functionality will extend beyond that of existing programs on the platform.
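A minimal sketch of the two operation-modes in requirement 3, using hypothetical names: in the actual Android implementation, switching modes would register or unregister the accelerometer listener via SensorManager, but the dispatch logic reduces to a guard that fires only pre-defined motions, and only while listening.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of requirement 3: listening vs. background mode.
class ModeController {
    private boolean listening = false;
    private final Map<String, Runnable> knownMotions = new HashMap<>();

    // Motions are pre-defined by the user during the learning process (req. 2).
    void define(String motion, Runnable action) {
        knownMotions.put(motion, action);
    }

    // The mode switch is performed by the user himself (req. 3.1).
    void setListening(boolean on) {
        listening = on;
    }

    // Returns true if the motion triggered its saved action (req. 3.2).
    boolean onMotion(String motion) {
        if (!listening) return false;              // background mode ignores everything (3.3)
        Runnable action = knownMotions.get(motion);
        if (action == null) return false;          // unknown motions are ignored (3.2.1)
        action.run();
        return true;
    }
}
```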

User Interface
1. The interface language will be English.
2. The system will be simple and easy to navigate, requiring a minimal number of user clicks.
3. The system must be easy for a new user to learn and present as few obstacles as possible to first-time
users, while maintaining a wide variety of features.
4. The graphical user interface should look as close as possible to the standard of the system's OS.
5. While the program is in listening or background mode it will operate in the background and take up no
space on the device's screen.

System Constraints
1. Due to limitations originating in the operating system's base design, it is impossible to perform actions
inside other third-party software.
2. The system is under heavy precision constraints:
2.1. The internal sensing hardware does not supply sufficiently accurate sensor readings.
2.2. Because the motions are performed by a human user, they cannot be reproduced exactly.
2.3. As a result of the previous constraints, each action must be given a motion range, which decreases
the number of possible motions and prevents the system from identifying motions that fall outside
this range.
3. The system is bound by limitations imposed by the operating system and the development language.

Possible Future Additions


1. The system must be as modular as possible, so that additional features can easily be added later on.
2. The system will integrate more deeply with the operating system, become capable of performing
macros inside third-party software, and become an integral part of cellular usage.
3. The system will include internal programs, reducing the need for many external ones.

3.4 UML Class Diagram

3.5 Test Cases

Adding a motion that already exists


Purpose Testing whether a second motion with an identical movement
can be added.
PreReq System is in listening mode; no action is set to the Movement.
Test Data Movement = {[up,down]}
Action = {run a random application}
Steps 1. Activate the add-motion function
2. Select the Action
3. Perform the Movement
4. Finish adding the Movement
5. Repeat steps 1-3
Notes Steps 1-4 should work; step 5 should return an error.

Activating a saved movement


Purpose Testing whether a motion set in the system runs correctly,
and whether a motion not set does not run.
PreReq System is in listening mode; no action is set to the Movement
set.
Test Data Movement = {[up,down],[up,left]}
Action = {run a random application}
Steps 1. Activate the add-motion function
2. Select the Action
3. Perform Movement{1}
4. Finish adding the Movement
5. Go to minimized mode
6. Perform Movement{2}
7. Wait 10 seconds
8. Perform Movement{1}
Notes Step 6 should not activate anything; step 8 should activate the
application set in Action.

Testing the "delete tree" function
Purpose Checking that the delete-tree function deletes the tree but
leaves the program in a running state.
PreReq System is in listening mode; no action is set to the Movement
set.
Test Data Movement = {[up,down]}
Action = {run a random application}
Steps 1. Activate the add-motion function
2. Select the Action
3. Perform the Movement
4. Finish adding the Movement
5. Go to minimized mode
6. Perform the Movement
7. Return to the phone's main menu
8. Reopen the application GUI
9. Open the menu, then Settings
10. Click "delete tree" and apply
11. Activate the add-motion function
12. Select the Action
13. Perform the Movement
Notes At step 6 the application should run.
At step 13 the program should change the button to "Save"; if
"delete tree" did not work, an error will appear.


4. RESULTS AND CONCLUSIONS
As the main goal of this project is not to serve as a research program or a mathematical tool but as a
commercial application for a growing market, a solid conclusion about the program's success cannot be
reached before market release. However, during the development phase the team devised algorithms to
solve several obstacles for which no solutions were available online, ranging from acquiring the
acceleration of the machine itself to splitting the program into two communicating, cooperating modes.
The fact that no solution was publicly available suggests that whoever has solved these problems did not
release the solution to the open-source community; in that sense, these algorithms are of commercial value.
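One of the obstacles mentioned, acquiring the acceleration of the machine itself, is commonly solved by low-pass filtering the raw accelerometer samples to estimate the gravity component and subtracting it from each reading. The sketch below shows this standard technique as an illustration only; the class name and smoothing constant are hypothetical, not the project's actual code or values.

```java
// Standard gravity-separation technique (illustrative names and constants).
class GravityFilter {
    private final float alpha;                   // smoothing factor in (0, 1)
    private final float[] gravity = new float[3];

    GravityFilter(float alpha) {
        this.alpha = alpha;
    }

    // raw = one accelerometer sample {x, y, z}; returns the linear acceleration,
    // i.e. the motion of the machine with the gravity estimate removed.
    float[] filter(float[] raw) {
        float[] linear = new float[3];
        for (int i = 0; i < 3; i++) {
            gravity[i] = alpha * gravity[i] + (1 - alpha) * raw[i];  // low-pass -> gravity
            linear[i] = raw[i] - gravity[i];                          // remainder -> motion
        }
        return linear;
    }
}
```

Feeding a stationary reading such as {0, 0, 9.81} repeatedly drives the returned linear acceleration toward zero, leaving only genuine motion in the output.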

As for the Android operating system and its development tools, working with both while attempting to
use as many of the functions provided by Google's SDK as possible led us to several conclusions
regarding Android's future as a client OS and as a developer's platform.
Our conclusions are divided into two parts. First, as a client operating system designed for cellular
phones and light tablet PCs, the system is bulky, not user friendly, and tends to crash or fail to load on
minor errors. In user friendliness and GUI design the system pales in comparison to its competition, the
iPhone; however, what the system loses in stability it gains in diversity and openness, giving the user far
more options to explore, whereas the iPhone, while UI friendly, does not allow software the same degree
of access to the phone.

As a development platform, the system suffers from having no dedicated programming environment; the
only viable platform is Eclipse, which has many faults, among them the lack of a graphical designer and of
plug-ins that simplify working with Android. The SDK itself is fairly comprehensive and allows access to
most of the machine's capabilities, a refreshing approach considering that the operating system is designed
for a wide variety of machines, ranging from phones to tablet PCs and laptops. However, the SDK supplies
too few of the functions programmers need in order to exploit the access they are given, and many
functions had reliability problems or a non-standard structure.
The emulator provided was exceptionally good and performed as close to the original device as possible,
with only a minor CPU-usage leak.

For our project we needed accelerometer simulation, which Google does not provide, so the team had to
seek a third-party simulator that suited our needs; the one we found was an open-source program named
SensorSimulator.

The simulator is very light and lets the user control the sensor in a way reasonably close to the original.
It requires installing an application on the phone and entering the internet address of the controlling
computer. Using internet protocols, the software on the computer sends sensor data to the application on
the phone; this application passes the information to static objects it owns, and by pointing the sensor
manager at these objects, you can use the program's data instead of the phone's. When using the emulator,
this means you can actually control and change the data, as the emulator is not capable of sending sensor
data to the applications on the phone by itself.
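The data path described above, a desktop program streaming sensor values to a small on-phone receiver that exposes them through static objects, can be sketched roughly as below. This is an illustration of the mechanism only: the class name and the wire format ("x,y,z" per line) are hypothetical, and the real SensorSimulator protocol differs.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

// Hypothetical sketch of a simulator-style sensor feed over a socket.
class SimulatedSensorSource {
    // Static "latest sample" object; the app reads this instead of the real sensor.
    static volatile float[] latest = new float[3];

    // Parses one hypothetical "x,y,z" line into a 3-element sample.
    static float[] parseSample(String line) {
        String[] parts = line.trim().split(",");
        float[] v = new float[3];
        for (int i = 0; i < 3; i++) {
            v[i] = Float.parseFloat(parts[i]);
        }
        return v;
    }

    // Connects to the desktop simulator and keeps updating 'latest'.
    static void listen(String host, int port) throws IOException {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new Socket(host, port).getInputStream()))) {
            for (String line; (line = in.readLine()) != null; ) {
                latest = parseSample(line);
            }
        }
    }
}
```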

The program runs very smoothly and takes very little memory and processing power from the phone.
However, the simulator was written when Android was in its infancy and is designed for Android 1.3. It
works fairly well up to Android 1.5, but beyond that point it supplies incorrect data, making it unsuitable
for simulation. The program has not been updated past that version and there is no viable alternative;
until Google creates a simulator of its own, we must assume this is the only third-party simulator we can
use. This makes migrating from the emulator to real machines a tedious task that sometimes requires
rewriting most of the code handling the sensor data. It is therefore not recommended if an alternative
exists, such as using an actual phone for the entire testing period.


Overall, the Android system is an excellent development environment compared to what the market has to
offer. However, with the impending release of Microsoft's Windows Mobile 7, which has excellent
development kits and environments plus the bonus of the much more refined C# language, Google must
improve Android to remain in the market between the UI-friendly iPhone and the much-refined
Windows Mobile 7.

As for the project itself, after refining it further the team plans to upload a much more user-friendly
version to the Android app store, hoping for commercial success, considering that to this day there is no
program available that can accomplish what the team has accomplished.


5. ANNEX I - ACCELEROMETER SPECS
MEMS motion sensor - LIS331DL

3-axis - ±2g/±8g smart digital output “nano” accelerometer

Features

■ 2.16 V to 3.6 V supply voltage

■ 1.8 V compatible IOs

■ <1 mW power consumption

■ ±2g / ±8g dynamically selectable full-scale

■ I²C/SPI digital output interface

■ Programmable interrupt generator

■ Embedded click and double click recognition

■ Embedded free-fall and motion detection

■ Embedded high pass filter

■ Embedded self test

■ 10000 g high shock survivability

■ ECOPACK® RoHS and “Green” compliant

Applications

■ Free-Fall detection

■ Motion activated functions

■ Gaming and virtual reality input devices

■ Vibration monitoring and compensation

Description

The LIS331DL, belonging to the “nano” family of ST motion sensors, is the smallest consumer low-power
three axes linear accelerometer. The device features digital I2C/SPI serial interface standard output and
smart embedded functions.

The sensing element, capable of detecting the acceleration, is manufactured using a dedicated process
developed by ST to produce inertial sensors and actuators in silicon.

The IC interface is manufactured using a CMOS process that allows the design of a dedicated circuit
trimmed to better match the sensing element's characteristics.

The LIS331DL has dynamically user selectable full scales of ±2g/±8g and it is capable of measuring
accelerations with an output data rate of 100 Hz or 400 Hz.

A self-test capability allows the user to check the functioning of the sensor in the final application.

The device may be configured to generate inertial wake-up/free-fall interrupt signals when a programmable
acceleration threshold is crossed in at least one of the three axes. Thresholds and timing of the interrupt
generators are programmable by the end user on the fly.

The LIS331DL is available in plastic Land Grid Array package (LGA) and it is guaranteed to operate over
an extended temperature range from -40 °C to +85 °C.

6. REFERENCES
[1] – Anand Lal Shimpi, “The iPhone 3GS Hardware Exposed & Analyzed”,
“http://www.anandtech.com/gadgets/showdoc.aspx?i=3579&p=2”

[2] - HTC hero specifications, “http://www.htc.com/www/product/hero/specification.html”

[3] – Prince McLean, Canalys: iPhone outsold all Windows Mobile phones in Q2 2009,
“http://www.appleinsider.com/articles/09/08/21/canalys_iphone_outsold_all_windows_mobile_phones_in_q2_2009.html”

[4] – Electronista Staff, “Samsung launches bada open mobile OS”,
“http://www.electronista.com/articles/09/11/09/samsung.bada.to.rival.android.linux/”

[5] – Erin Fors, “Industry Leaders Announce Open Platform for Mobile Devices”,
“http://www.openhandsetalliance.com/press_110507.html”

[6] – developer.android, “What is Android?”, “http://developer.android.com/guide/basics/what-is-android.html”

[7] – Crossbow, CXL TG-Series High Performance Accelerometer,
“http://www.xbow.com/Products/Product_pdf_files/Accel_pdf/TG_Series_Datasheet.pdf”

[8] – ANALOG DEVICES, ADXL325 Accelerometer,
“http://www.analog.com/static/imported-files/data_sheets/ADXL325.pdf”

[9] – Tamas Vajk, Paul Coulton, Will Bamford, Reuben Edwards, “Using a Mobile Phone as a “Wii-like” Controller
for Playing Games on a Large Public Display” , International Journal of Computer Games Technology volume 2008.

[10] – YouTube, K850Wii, “http://www.youtube.com/watch?v=QtIY0bijuZc”

[11] – Wikipedia, Trie data type, “http://en.wikipedia.org/wiki/Prefix_tree”

