Smart Car
discuss three potential applications, including a visibility restoration application, a nighttime contrast enhancement application, and a driving environment understanding application, to enhance operational safety for Smart Cars when driving in urban street scenarios.

II. Smart Car Platform

The system architecture of the Smart Car demonstration platform is divided into three major units (see Fig. 3): 1) the interaction unit, with a see-through windshield monitor to display content quickly instead of a

Figure 2 Exterior and interior views of the Smart Car demonstration platform.

Figure 3 System architecture of the Smart Car demonstration platform. (The diagram links the application unit: automobile companies, government and private utility providers, and insurance companies; the computing and communication unit: communication management over Wi-Fi, 3G/3.5G/4G/LTE, NFC, Bluetooth, etc., device management, and the operating system; and the interaction unit: image, voice, eye, and gesture sensors plus the transparent windshield display.)

A. Interaction Unit

In our Smart Car demonstration platform (see Fig. 4), there are two classes of input sensors, which convey information about the environment outside the vehicle and the user's intended actions, and one class of output devices, which conveys feedback from the computing and communication unit.

Input sensors (vehicle surround sensors): There are six image sensors plus a GPS sensor around the Smart Car demonstration platform for capturing road scenes and locating the car's position. The configuration of these image sensors, which jointly offer the driver a 360-degree view out of the vehicle, is given in the left side of Fig. 4.

Input sensors (user behavioral sensors): In our Smart Car demonstration platform, there are three kinds of motion sensors (including a gesture sensor, a voice sensor, and an eye-tracking sensor) mounted on the dashboard of the vehicle, which receive the user's commands and transmit them to the computing and communication unit, as shown in the right side of Fig. 4.

Output devices (transparent windshield display): The Smart Car demonstration platform has a transparent display built into its windshield. Hence, the windshield acts as a display that allows users to not only see through the screen but also show on-demand information with a real-time, augmented-reality display through this visual interface. Here, the on-demand information refers to the driver's requests regarding the content of the Smart Car's
Figure 4 Sensor configuration of the Smart Car demonstration platform: image sensors capture the road scene from the front view and from the view of the left-hand side pillar; the gesture and voice sensors receive the user's commands and transmit them; the GPS sensor locates the current position and transmits it to the in-vehicle computing system, which provides access to and execution of both external and internal services with Internet access capability.
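The command flow described around Fig. 4, in which the behavioral sensors receive the user's commands and transmit them to the in-vehicle computing system, and the system returns feedback for the windshield display, can be sketched as a simple dispatch loop. This is only an illustrative sketch, not the platform's actual software; the `CommandEvent` structure and the sensor tags are our own assumptions.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class CommandEvent:
    source: str   # "gesture", "voice", or "eye" (hypothetical tags)
    payload: str  # e.g., a recognized gesture or spoken phrase

class ComputingUnit:
    """Stand-in for the in-vehicle computing and communication unit."""
    def handle(self, event: CommandEvent) -> str:
        # Feedback would be rendered on the transparent windshield display.
        return f"display[{event.source}]: {event.payload}"

def dispatch(events: Queue, unit: ComputingUnit) -> list:
    """Drain queued sensor commands and collect the unit's feedback."""
    feedback = []
    while not events.empty():
        feedback.append(unit.handle(events.get()))
    return feedback

if __name__ == "__main__":
    q = Queue()
    q.put(CommandEvent("voice", "show navigation"))
    q.put(CommandEvent("gesture", "swipe left"))
    print(dispatch(q, ComputingUnit()))
```

A queue between the sensors and the computing unit decouples command capture from processing, which matches the transmit-then-process flow the figure describes.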
B. Computing and Communication Unit

The computing and communication unit of the Smart Car should provide real-time access and execution of external/internal services and process multiple data sets acquired from the varied sensors of the interaction unit. Hence, the growing complexity of in-vehicle systems requires: integration and cooperation of the software modules; multi-sensor data fusion; stable storage;

Hardware specification of the in-vehicle computing system:
Embedded Multi-core Processor: Base Frequency 2.2 GHz / Max Turbo Frequency 3.1 GHz / Cache 4 MB
Internal Memory: Memory Interface DDR3L 1600 / Memory Size 16 GB
Solid State Disk: Capacity 1 TB / Form Factor 2.5-Inch / Interface SATA III
Communications: Bluetooth/WLAN/FM Transmitter/Receiver (802.11a/b/g/n, Bluetooth V2.1+EDR, 65 nm)
Global Positioning System: Tracking Sensitivity -163 dBm / NMEA 0183 Data Protocol / Time To First Fix / Built-in SuperCap
Graphics Processing Unit: Memory Size 4 GB / Memory Clock 2500 MHz / Memory Interface GDDR5 / CUDA Cores 1536
Power Management: Output 19.5 V DC, 9.23 A, 230 W
Mobile Operating Systems: Windows 8.1 Enterprise
where $v_c$ is the gain factor for the $c$th color channel and can be produced by

$$v_r = \arg\max_{\forall l} \mathrm{PMF}(I_r), \qquad (5)$$

$$v_g = \frac{1}{2}\left(\arg\max_{\forall l} \mathrm{PMF}(I_r) + \arg\max_{\forall l} \mathrm{PMF}(I_g)\right), \qquad (6)$$

Challenges and corresponding solutions:
Local lights: Hybrid dark channel prior [9]
Colorcast: Gray world assumption [9], White patch-Retinex theory [10]
Gray road: No-black-pixel constraint [11]
Deep depth of field: Bi-histogram modification [12]
Planar surface: Flat-world assumption [13]
Complex architecture: Nonlinear filtering [14]
$$I_i \leftarrow f(I_p). \qquad (9)$$
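Numerically, Eqs. (5) and (6) take each gain from the peak of a channel's probability mass function, i.e., its normalized intensity histogram. The following is a minimal sketch assuming 8-bit channels; the helper names are ours, not from the paper.

```python
import numpy as np

def pmf(channel: np.ndarray, levels: int = 256) -> np.ndarray:
    """Probability mass function (normalized histogram) of an intensity channel."""
    hist = np.bincount(channel.ravel(), minlength=levels).astype(float)
    return hist / hist.sum()

def gain_factors(I_r: np.ndarray, I_g: np.ndarray):
    """Eq. (5): v_r is the intensity level l maximizing PMF(I_r).
    Eq. (6): v_g averages the PMF peaks of the red and green channels."""
    v_r = float(np.argmax(pmf(I_r)))
    v_g = 0.5 * (np.argmax(pmf(I_r)) + np.argmax(pmf(I_g)))
    return v_r, v_g

# Toy channels whose histograms peak at 200 (red) and 100 (green).
I_r = np.full((4, 4), 200, dtype=np.uint8)
I_g = np.full((4, 4), 100, dtype=np.uint8)
print(gain_factors(I_r, I_g))  # → (200.0, 150.0)
```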
CMK framework, we need to reformulate the problem. First, we extend Eq. (16) from 2-D to 3-D space:

$$J_i(X) = \sum_{k=1}^{N_i} w_{ik} J_{ik}(X), \quad X \in \mathbb{R}^{3 \times N_i}. \qquad (16)$$

This equation is regarded as the local optimization for each individual target $i$ with multiple ($N_i$) kernels, each of which is weighted by $w_{ik}$. Second, considering the depth information, we assign the visibility of each target as a weight $v_i$ to deal with the global optimization. In other words, the total cost function becomes:

$$J(X) = \sum_{i=1}^{M} v_i J_i(X) = \sum_{i=1}^{M} v_i \sum_{k=1}^{N_i} w_{ik} J_{ik}(X), \quad X \in \mathbb{R}^{3 \times M \times N_i}, \qquad (17)$$

where $M$ is the number of targets in one video frame, and $X = [(X_1^1)^T, \ldots, (X_i^k)^T]^T$, in which $X_i^k$ is for the $i$th target and the $k$th kernel.

Necessarily, the constraint functions $C(X) = 0$ must be considered to maintain the relative locations of the kernels. In [32], two-kernel and four-kernel layouts are proposed to describe a human. Unlike the constraints used in [32], which are mainly based on 2-D geometry, we set the constraints based on 3-D geometry [45]. Hence, for each target $i$, the movement vector $\Delta x$ can now be iteratively solved by using the projected gradient method [32]. The computational complexity is relatively high, due to the use of a human detector on every frame of video, as well as the ground plane estimation. The CMK tracking in 3-D space based on the projected gradient itself can be very fast; at this moment, however, a high-end desktop CPU still requires a couple of seconds to complete one frame of video. This complexity can be relieved with cloud computing and GPU speedup, in which case real-time processing can be expected.
As examples, representative performances of human tracking for videos obtained from four separate car recorders are shown in the left column of Fig. 9. Moreover, a 3-D visualization of dynamic on-road scenes can also be reconstructed. Its purpose is not only to visualize the pedestrians' paths and movements in a 3-D environment, but also to avoid issues of privacy invasion by using avatar-like 3-D models. When effectively integrated with a 3-D map service, such as Google Earth, we can treat this new 3-D augmented reality visualization as a dynamic 3-D GPS

Figure 8 A moving-platform-based human tracking system [47]. (The pipeline runs human detection and depth/CMK tracking on the video; SfM estimates the camera motions, 2-D locations, and depth map; ground plane and pose estimation plus depth map construction then hypothesize the 3-D locations that feed the tracking results.)

Figure 9 3-D visualization of the scene recorded by four driving recorders. Each row belongs to one driving recorder; the leftmost shows the video frames, the middle shows the corresponding view of the 3-D visualization, and the right shows the scene visualized from different aspects [47].
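The stage in Fig. 8 that converts 2-D image locations plus a depth map into 3-D locations can be illustrated with a standard pinhole back-projection. The intrinsics below are made-up values for a hypothetical 640x480 recorder; the actual system additionally accounts for the camera motions and the estimated ground plane [47].

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with depth Z (meters)
    into 3-D camera coordinates (X, Y, Z)."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.array([X, Y, depth])

# Made-up intrinsics for a hypothetical 640x480 driving recorder.
fx = fy = 500.0
cx, cy = 320.0, 240.0
# A pedestrian detected at pixel (420, 240) with 10 m estimated depth
# lands about 2 m to the right of the optical axis, 10 m ahead.
print(backproject(420.0, 240.0, 10.0, fx, fy, cx, cy))
```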
...
Figure 10 A system of human tracking across multiple moving car cameras [47]. (Each camera's pedestrian tracking produces 3-D locations; BTFs are constructed across the cameras, and the results are rendered as a 3-D visualization on Google Map/Earth.)

Figure 11 Visual tracking results, where the top rows are the recorded frames and the bottom rows are the corresponding 3-D visualizations [47]. (a) Two human-tracked frames from driving recorder 1. (b) Two human-tracked frames from driving recorder 3.
Finally, the application platform of the Smart Car is designed based on the platforms of the smartphone and smart TV, is opened up for third-party developers, and can be downloaded from [7]. This Smart Car demonstration platform was tested on three proposed applications. Additionally, we have demonstrated the potential capability of a Smart Car to effectively