Outline
Introduction
Background
System Modeling
Policy Optimization
Experimental Results
Conclusion
Introduction
Employ Wireless Remote Processing to Save Energy
" Voice recognition
" Large-scale numerical computations
" Simulation
" Compilation
[Figure: Energy consumption (J) of the mobile client, local vs. remote execution, for problem sizes 500x500, 800x800, and 1000x1000]
Prior Work
A remote processing framework that supports process
migration at the operating system level [Othman-98]
An adaptive decision-making policy based on CPU
measurements for a repetitive task [Rudenko-99]
A compilation framework for remote processing [Kremer-01]
An economics-based computation distribution protocol [Shang-02]
Discussion
These prior works do not
! consider any task timing constraints
! discuss how to combine remote processing with power
management techniques to achieve further energy
savings
Background I
Dynamic Power Management (DPM)
! Selectively shuts down idle components or slows
down underutilized components
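The shutdown idea can be illustrated with a minimal timeout policy sketch. This is only an illustration of DPM in general, not the stochastic policy developed later in the talk; the state names and the timeout value are invented:

```python
# Sketch of a timeout-based dynamic power manager: once a component
# has been idle longer than a fixed timeout, command it to sleep.
# State names and the 0.5 s threshold are illustrative only.

def dpm_state(idle_time_s, timeout_s=0.5):
    """Return the commanded power state given the current idle time."""
    if idle_time_s == 0.0:
        return "busy"   # component is actively serving a request
    if idle_time_s < timeout_s:
        return "wait"   # idle, but kept ready (cheap to wake up)
    return "sleep"      # idle long enough: shut the component down

assert dpm_state(0.0) == "busy"
assert dpm_state(0.1) == "wait"
assert dpm_state(2.0) == "sleep"
```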
Background II
Constructing the CTMDP Model of the System
! State space
" The Cartesian product of the state spaces of the
components, minus invalid states
! Generator matrix
" Independent components: tensor-sum method
" Correlated components: each entry must be
computed separately
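For independent components, the joint generator is the tensor (Kronecker) sum of the per-component generators. A small numpy sketch; the two 2-state generators below are invented for illustration, not taken from the system model:

```python
import numpy as np

def kron_sum(A, B):
    """Tensor (Kronecker) sum: joint generator of two independent CTMC components."""
    return np.kron(A, np.eye(B.shape[0])) + np.kron(np.eye(A.shape[0]), B)

# Illustrative 2-state generators (each row of a CTMC generator sums to zero).
G_sp = np.array([[-2.0, 2.0], [1.0, -1.0]])   # e.g. a service provider
G_sr = np.array([[-0.5, 0.5], [3.0, -3.0]])   # e.g. a service requestor

G = kron_sum(G_sp, G_sr)                       # 4x4 joint generator
assert G.shape == (4, 4)
assert np.allclose(G.sum(axis=1), 0.0)         # still a valid generator
```

The tensor sum grows the state space multiplicatively, which is why invalid states must be pruned afterward.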
[Figure: system architecture and state diagrams — the Power Manager exchanges state information and commands with the Service Provider (SP), Service Queue (SQ), and Service Requestor (SR) on the Mobile Client, which communicates over the Wireless Channel with the Server (with its own client queue CQ and processor CP); SP states include Busy, Wait, and Sleep; server states include Idle, Busy, and Sleep; an RPR may be Accepted (task migration) or Rejected; edges distinguish data flow from state transitions]
Expected time to transmit n packets over the lossy channel
(t0 per attempt, per-attempt error rate PER):
t = n t0 Σ_{m=0}^{∞} (m+1)(1 − PER) PER^m = n t0 / (1 − PER)
RPR rejection probability p_rej(λs, μs): a closed-form
expression in λs, μs, E(te), and the server capacity c
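Modeling each packet as a geometric retransmission process (retransmit until success, per-attempt failure probability PER) gives the standard expected-time identity t = n·t0/(1 − PER). A quick Monte Carlo check of that identity; all parameter values are invented:

```python
import random

def expected_tx_time(n, t0, per):
    """Closed form: each packet needs 1/(1 - PER) attempts on average."""
    return n * t0 / (1.0 - per)

def simulate_tx_time(n, t0, per, rng):
    """Retransmit each packet until it gets through; sum the attempt times."""
    total = 0.0
    for _ in range(n):
        while True:
            total += t0
            if rng.random() >= per:   # this attempt succeeded
                break
    return total

rng = random.Random(0)
trials = 20000
avg = sum(simulate_tx_time(10, 0.01, 0.2, rng) for _ in range(trials)) / trials
# n=10, t0=0.01, PER=0.2 -> expected 10*0.01/0.8 = 0.125
assert abs(avg - expected_tx_time(10, 0.01, 0.2)) < 0.005
```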
Special Cases
! c = 1: the rejection probability equals the server
utilization (as for an M/M/1 server):
p_rej(λs, μs) = λs / μs = 1 − p0
! General c: p_rej(λs, μs) given by the corresponding
closed-form expression in λs, μs, and c
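The c = 1 case rests on the standard M/M/1 fact that the long-run probability of finding the server busy is the utilization ρ = λ/μ, i.e. 1 − p0 = λ/μ. A short discrete-event check, using the 16 and 24 per-second rates from the experimental section as example values:

```python
import random

def mm1_busy_fraction(lam, mu, n_events=200000, seed=0):
    """Simulate an M/M/1 queue; return the fraction of time the server is busy."""
    rng = random.Random(seed)
    t, q = 0.0, 0                 # elapsed time, jobs in system
    busy_time = 0.0
    for _ in range(n_events):
        r_arr = rng.expovariate(lam)                          # next arrival
        r_dep = rng.expovariate(mu) if q > 0 else float("inf")  # next departure
        dt = min(r_arr, r_dep)
        if q > 0:
            busy_time += dt       # server was busy during this interval
        t += dt
        q = q + 1 if r_arr < r_dep else q - 1
    return busy_time / t

rho = 16.0 / 24.0                 # lambda / mu
assert abs(mm1_busy_fraction(16.0, 24.0) - rho) < 0.03
```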
Online Policy
Based on a 2-D decision table computed off-line
! Key: (PER, Preject)
! Value: policy
Parameter prediction
! PER: predicted packet error rate, updated by
exponential smoothing:
PER(n) = α · PER_A(n) + (1 − α) · PER(n − 1)
where PER_A(n) is the PER actually observed in period n
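The two pieces of the on-line policy, an exponentially weighted PER predictor and a lookup into a decision table keyed by (PER, Preject), can be sketched as follows. The table contents, the smoothing factor, and the bin width are placeholders, not the values used in the experiments:

```python
# Sketch of the on-line policy machinery. Table entries, alpha, and
# bin_width are illustrative placeholders.

def predict_per(measured_per, prev_estimate, alpha=0.3):
    """PER(n) = alpha * PER_A(n) + (1 - alpha) * PER(n - 1)."""
    return alpha * measured_per + (1.0 - alpha) * prev_estimate

def lookup_policy(table, per, p_reject, bin_width=0.1):
    """Quantize the predicted (PER, Preject) pair and fetch the stored policy."""
    key = (round(per / bin_width) * bin_width,
           round(p_reject / bin_width) * bin_width)
    return table.get(key, "local")   # fall back to local execution

table = {(0.0, 0.0): "remote", (0.1, 0.0): "remote", (0.4, 0.2): "local"}
per = predict_per(measured_per=0.2, prev_estimate=0.1)   # 0.3*0.2 + 0.7*0.1 = 0.13
assert abs(per - 0.13) < 1e-9
assert lookup_policy(table, per, 0.02) == "remote"
```

Quantizing the key is what makes an off-line table of manageable size usable at run time.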
Experimental Setup
Component Parameters
! SP: StrongARM SA1110

State | Power (mW)
Busy  | 600
Wait  | 100
Sleep | 0.2

Transition    | Time
Wait to Busy  | 10 us
Busy to Wait  | 90 us
Sleep to Busy | 160 ms

! Wireless interface

State    | Power (mW)
Transmit | 1400
Receive  | 900
Sleep    | 50

Transition times: 34, 62
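From power and wake-up numbers like these one can compute the classic break-even time: the shortest idle interval for which entering the sleep state actually saves energy. A sketch using the SA1110 figures above, assuming (for illustration only) that the 160 ms wake-up transition is spent at the 600 mW busy power:

```python
def break_even_time(p_on, p_sleep, t_tr, p_tr):
    """Smallest idle length T for which sleeping saves energy, from
    p_on * T = p_tr * t_tr + p_sleep * (T - t_tr)."""
    return t_tr * (p_tr - p_sleep) / (p_on - p_sleep)

# Wait 100 mW, Sleep 0.2 mW, Sleep-to-Busy wake-up 160 ms;
# assumed transition power: 600 mW (Busy).
t_be = break_even_time(p_on=100.0, p_sleep=0.2, t_tr=0.160, p_tr=600.0)
assert 0.9 < t_be < 1.0   # roughly 0.96 s
```

Idle intervals shorter than this make shutdown counterproductive, which is exactly why a predictive policy such as the one above is needed.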
Experimental Results
Offline Optimal Policy
! Non-varying wireless channel and server behavior
" M1: No RPR; M2: Always try RPR first
[Figure: average power (W) of M1, M2, and MDPBP
for PER = 0 and PER = 0.4]
Experimental Results
Offline Optimal Policy
! Server: queuing model
! Characteristics of the wireless channel and server
changed stochastically
PER1 | PER2 | v(1,2)  | v(2,1)
0%   | 20%  | 1/10000 | 1/20000

λs,1        | λs,2        | v(1,2)  | v(2,1)
16 per sec. | 24 per sec. | 1/15000 | 1/20000

! Results

Policy | Average Power (W) | MDPBP Improvement
M1     | 0.2742            | 12.0%
M2     | 0.2746            | 12.2%
MDPBP  | 0.2412            | --
Experimental Results
Online Policy
! Server: queuing model
! Wireless channel: slowly and randomly
changing
Policy | Average Power (W) | MDPBP Improvement
M1     | 0.2742            | 15.8%
M2     | 0.2510            | 10.1%
MDPBP  | 0.2310            | --
Conclusions
A new mathematical framework for extending the lifetime
of a mobile host in a client-server wireless network by using
remote processing was proposed
The client-server system was modeled based on the theory
of continuous-time Markovian decision processes
The DPM problem was formulated as a policy optimization
problem and solved exactly by using a linear programming
approach
Based on the off-line optimal policy computation, an on-line
adaptive policy was developed and employed in practice
Experimental results demonstrated the effectiveness of the
proposed methods
Future work will focus on ad-hoc mode wireless networks