Performance Evaluation
In system design
Selection of algorithms
Setting parameter values
In procurement decisions
Value for money
Meet usage goals
Its implementation
(What we teach in programming courses)
Research agenda
In the context of parallel job scheduling
Example #1
Gang What?!?
Time slicing parallel jobs with coordinated
context switching
Ousterhout matrix
Optimization:
Alternative
scheduling
Ousterhout, ICDCS 1982
Packing Jobs
Use a buddy system for allocating processors
The Question:
The buddy system leads to internal
fragmentation
But it also improves the chances of
alternative scheduling, because processors
are allocated in predefined groups
Which effect dominates the other?
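The buddy-system allocation described above can be sketched as follows. This is a minimal model, not any real scheduler's API: the `free` table mapping block size to a count of free blocks, and the function name, are illustrative assumptions.

```python
def buddy_alloc(free, size):
    """Allocate `size` processors from a buddy system (sketch).
    `free` maps block size (a power of 2) to the count of free blocks.
    Returns the size of the block actually allocated, or None."""
    # Round the request up to the next power of 2: this is the source
    # of internal fragmentation mentioned in the text.
    block = 1
    while block < size:
        block *= 2
    # Find the smallest free block that is large enough.
    avail = block
    while avail not in free or free[avail] == 0:
        avail *= 2
        if avail > max(free, default=0):
            return None
    # Split larger blocks down to the requested size; each split
    # yields two buddies of half the size.
    while avail > block:
        free[avail] -= 1
        avail //= 2
        free[avail] = free.get(avail, 0) + 2
    free[block] -= 1
    return block
```

A request for 5 processors is served by an 8-processor block, so 3 processors are wasted, but all allocations fall on predefined power-of-2 group boundaries.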
Verification
Example #2
Variable Partitioning
Each job gets a dedicated partition for the
duration of its execution
Resembles 2D bin packing
Packing large jobs first should lead to better
performance
But what about correlation of size and
runtime?
Scaling Models
Constant work
Parallelism for speedup: Amdahl's Law
Large first ≈ SJF
Constant time
Size and runtime are uncorrelated
Memory bound
Large first ≈ LJF
Full-size jobs lead to blockout
Worley, SIAM JSSC 1990
Scan Algorithm
Keep jobs in separate queues according to
size (sizes are powers of 2)
Serve the queues Round Robin, scheduling
all jobs from each queue (they pack
perfectly)
Assuming constant work model, large jobs
only block the machine for a short time
But the memory bound model would lead to
excessive queueing of small jobs
Krueger et al., IEEE TPDS 1994
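The queue structure of the Scan algorithm can be sketched for a static set of jobs. This abstracts away time slicing and machine capacity and only shows the serving order; the function name is an illustrative assumption.

```python
from collections import deque

def scan_schedule(jobs):
    """Sketch of the Scan algorithm: jobs (size, runtime) with
    power-of-2 sizes are kept in per-size queues; the queues are
    served in turn, and all jobs in a queue are scheduled before
    moving on (equal-size jobs pack the machine perfectly).
    Returns the order in which jobs are started."""
    queues = {}
    for job in jobs:
        queues.setdefault(job[0], deque()).append(job)
    order = []
    for size in sorted(queues):   # one sweep over the size queues
        q = queues[size]
        while q:                  # drain the whole queue before moving on
            order.append(q.popleft())
    return order
```

Draining each queue completely is what lets small jobs starve under the memory-bound model: a long queue of one size blocks all the others for its full duration.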
The Data
Conclusion
Parallelism used for better results, not for
faster results
Constant work model is unrealistic
Memory bound model is reasonable
Scan algorithm will probably not perform
well in practice
Example #3
Backfilling and
User Runtime Estimation
Backfilling
Variable partitioning can suffer from
external fragmentation
Backfilling optimization: move jobs
forward to fill in holes in the schedule
Requires knowledge of expected job
runtimes
Variants
EASY backfilling
Make reservation for first queued job
Conservative backfilling
Make reservation for all queued jobs
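One pass of EASY backfilling can be sketched as below. This is a deliberately simplified model (it assumes the queue head does not currently fit, and tracks only a single reservation); the data layout and names are illustrative assumptions, not any real scheduler's interface.

```python
def easy_backfill(queue, free, now, running):
    """One backfilling pass of EASY (sketch).  Assumes the head of the
    FCFS queue does not fit in the `free` processors.  A reservation is
    made for the head, and later jobs may jump ahead if they fit now
    and do not delay that reservation.  queue: list of
    (size, estimated_runtime); running: list of (end_time, size).
    Returns the list of jobs started (they are removed from queue)."""
    head_size, _ = queue[0]
    # Shadow time: when enough processors free up for the head job.
    avail, shadow, spare = free, now, 0
    for end, size in sorted(running):
        avail += size
        if avail >= head_size:
            shadow, spare = end, avail - head_size
            break
    started = []
    for job in list(queue[1:]):
        size, est = job
        fits_now = size <= free
        # Harmless if it ends before the reservation, or uses only
        # processors that remain spare even after the head starts.
        harmless = (now + est <= shadow) or (size <= spare)
        if fits_now and harmless:
            queue.remove(job)
            free -= size
            if now + est > shadow:
                spare -= size
            started.append(job)
    return started
```

Note that the decision depends entirely on the *estimated* runtimes, which is why the accuracy of user estimates matters for the evaluation.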
They Aren't
Surprising Consequences
Inaccurate estimates actually lead to
improved performance
Performance evaluation results may depend
on the accuracy of runtime estimates
Example: EASY vs. conservative
Using different workloads
And different metrics
EASY
Verification
Run CTC workload with accurate estimates
No Data
Innovative unprecedented systems
Wireless
Hand-held
Serendipitous Data
Data may be collected for various reasons
Accounting logs
Audit logs
Debugging logs
Just-so logs
Degree of Multiprogramming
System Utilization
Job Arrivals
Distribution of Runtimes
User Activity
Repeated Execution
Application Moldability
Recurring Findings
Instrumentation
Passive: snoop without interfering
Active: modify the system
Collecting the data interferes with system
behavior
Saving or downloading the data causes
additional interference
Partial solution: model the interference
Data Sanitation
Strange things happen
Leaving them in is safe and faithful to
the real data
But it risks letting a nonrepresentative situation dominate the evaluation results
3:30 AM
Nearly every day, a set of 16 jobs is run by the same user
Most probably the same set, as they
typically have a similar pattern of runtimes
Most probably these are administrative jobs
that are executed automatically
Two Aspects
In workload modeling, should you include
this in the model?
In a general model, probably not
Conduct separate evaluation for special
conditions (e.g. DOS attack)
Automation
The idea:
Cluster the daily data based on various workload attributes
Remove days that appear alone in a cluster
Repeat
The problem:
Strange behavior often spans multiple days
Cirne & Berman, Wkshp. Workload Charact. 2001
Workload Modeling
Statistical Modeling
Identify attributes of the workload
Create empirical distribution of each
attribute
Fit empirical distribution to create model
Synthetic workload is created by sampling
from the model distributions
Fitting by Moments
Calculate model parameters to fit moments
of empirical data
Problem: does not fit the shape of the
distribution
Problem: very sensitive to extreme data
values
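Moment matching can be illustrated with a gamma distribution, whose shape and scale follow directly from the sample mean and variance. A sketch (the function name is an illustrative assumption):

```python
def fit_gamma_moments(data):
    """Fit a gamma distribution by matching the first two moments:
    shape = mean^2 / variance, scale = variance / mean.
    A sketch of moment matching; see the text for its drawbacks."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    shape = mean * mean / var
    scale = var / mean
    return shape, scale
```

The sensitivity to extreme values is easy to demonstrate: adding a single outlier to a small sample collapses the fitted shape parameter, changing the character of the whole distribution.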
Goodness of fit
Kolmogorov-Smirnov: difference in CDFs
Anderson-Darling: added emphasis on tail
May need to sample observations
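The Kolmogorov-Smirnov statistic itself is simple to compute: it is the largest vertical distance between the empirical CDF and the model CDF. A self-contained sketch (Anderson-Darling, with its tail emphasis, is not shown):

```python
def ks_statistic(data, cdf):
    """Kolmogorov-Smirnov statistic: the maximal vertical distance
    between the empirical CDF of `data` and the model `cdf`.
    The ECDF jumps at each sorted sample, so both the value just
    before and just after each jump are checked."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        fx = cdf(x)
        d = max(d, abs((i + 1) / n - fx), abs(i / n - fx))
    return d
```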
Correlations
Correlation can be measured by the
correlation coefficient
It can be modeled by a joint distribution
function
Both may not be very useful
Correlation Coefficient
$CC = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}$
system        CC
CTC SP2      -0.029
KTH SP2       0.011
SDSC SP2      0.145
LANL CM-5     0.211
SDSC Paragon  0.305
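The Pearson correlation coefficient used in the table can be computed directly from paired samples, e.g. job sizes and runtimes:

```python
import math

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length samples:
    covariance normalized by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)
```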
Distributions
A restricted version
of a joint distribution
Modeling Correlation
Divide range of one attribute into subranges
Create a separate model of other attribute
for each sub-range
Models can be independent, or model
parameter can depend on sub-range
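The sub-range approach can be sketched as follows: job sizes are partitioned at given boundaries, and a separate (here trivially simple: the mean) runtime model is built per sub-range. The names and the choice of mean as the per-range model are illustrative assumptions.

```python
def subrange_models(jobs, boundaries):
    """Model runtime conditioned on job size (sketch): partition sizes
    at the given `boundaries` and record the mean runtime observed in
    each sub-range.  jobs: list of (size, runtime)."""
    buckets = [[] for _ in range(len(boundaries) + 1)]
    for size, runtime in jobs:
        i = sum(size > b for b in boundaries)   # index of sub-range
        buckets[i].append(runtime)
    return [sum(b) / len(b) if b else None for b in buckets]
```

In a fuller model each sub-range would get its own fitted distribution, or one distribution family whose parameters depend on the sub-range.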
Stationarity
Problem of daily/weekly activity cycle
Not important if unit of activity is very small
(network packet)
Very meaningful if unit of work is long
(parallel job)
Add users
Heavy Tails
Tail Types
When a distribution has mean m, what is the
distribution of samples that are larger than x?
Light: expected to be smaller than x+m
Memoryless: expected to be x+m
Heavy: expected to be larger than x+m
Formal Definition
Tail decays according to a power law
$\bar{F}(x) = \Pr[X > x] \propto x^{-a}$, with $0 < a < 2$
$\log \bar{F}(x) \approx -a \log x$
Consequences
Large deviations from the mean are realistic
Mass disparity
small fraction of samples responsible for large
part of total mass
Most samples together account for negligible
part of mass
Infinite moments
For a ≤ 1 the mean is undefined
For a ≤ 2 the variance is undefined
Crovella, JSSPP 2001
Pareto Distribution
With parameter a = 1 the density is proportional to $x^{-2}$
The expectation is then
$E[x] = \int_1^\infty x \cdot c\,x^{-2}\,dx = c \ln x \,\Big|_1^\infty = \infty$
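The mass disparity of heavy tails is easy to see in simulation. The sketch below draws Pareto samples by inverse-transform sampling and measures what fraction of the total mass the top 1% of samples carries; the function names are illustrative.

```python
import random

def pareto_sample(a, n, seed=0):
    """Draw n Pareto(a) samples (x >= 1) by inverse transform:
    if U ~ Uniform(0,1], then U**(-1/a) has tail Pr[X > x] = x**-a."""
    rng = random.Random(seed)
    # use 1 - random() so the base lies in (0, 1], avoiding zero
    return [(1.0 - rng.random()) ** (-1.0 / a) for _ in range(n)]

def top_mass_fraction(xs, frac=0.01):
    """Fraction of the total mass held by the top `frac` of samples."""
    xs = sorted(xs, reverse=True)
    k = max(1, int(len(xs) * frac))
    return sum(xs[:k]) / sum(xs)
```

With a = 1, the top 1% of 100,000 samples typically accounts for more than half of the total mass, while the bulk of the samples together contribute comparatively little.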
Pareto Samples
In analysis:
Average long-term behavior may never happen
in practice
Real Life
Data samples are necessarily bounded
The question is how to generalize to the
model distribution
Arbitrary truncation
Lognormal or phase-type distributions
Something in between
Solution 1: Truncation
Solution 3: Dynamic
Place an upper bound on the distribution
Location of bound depends on total number
of samples required
Example: truncate at the point x where $\bar{F}(x) = \frac{1}{2N}$
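The truncation idea can be sketched by bounding a Pareto distribution so that the exceedance probability at the bound is 1/(2N) when N samples are needed, and resampling any value beyond it. A minimal sketch under those assumptions:

```python
import random

def truncated_pareto(a, n, seed=0):
    """Pareto(a) samples with an upper bound chosen so that the
    probability of exceeding it is 1/(2n), where n is the number of
    samples required (a sketch of the truncation approach)."""
    bound = (2 * n) ** (1.0 / a)   # solves x**-a = 1/(2n)
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = (1.0 - rng.random()) ** (-1.0 / a)
        if x <= bound:             # resample values beyond the bound
            out.append(x)
    return out
```

Tying the bound to N makes it dynamic: larger experiments admit larger extreme values, rather than fixing one arbitrary cutoff.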
Self Similarity
The Phenomenon
The whole has the same structure as certain
parts
Example: fractals
In workloads: burstiness at many different
time scales
Note: relates to a time series
Long-Range Correlation
A burst of activity implies that values in the
time series are correlated
A burst covering a large time frame implies
correlation over a long range
This is contrary to assumptions about the
independence of samples
Aggregation
Replace each subsequence of m consecutive
values by their mean
If self-similar, the new series will have
statistical properties that are similar to the
original (i.e. bursty)
If independent, will tend to average out
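Aggregation is easy to express directly, and the averaging-out of independent samples is visible in the variance of the aggregated series (this is the basis of the variance-time test). A sketch:

```python
def aggregate(series, m):
    """Aggregate a time series: replace each block of m consecutive
    values by their mean."""
    return [sum(series[i:i + m]) / m
            for i in range(0, len(series) - m + 1, m)]

def variance(xs):
    """Population variance of a sample."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)
```

For independent samples the variance of the m-aggregated series drops roughly like 1/m; for a self-similar series it decays markedly more slowly, so burstiness survives aggregation.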
Poisson Arrivals
Tests
Essentially based on the burstiness-retaining
nature of aggregation
Rescaled range (R/s) metric: the range
(sum) of n samples as a function of n
R/s Metric
Tests
Variance-time metric: the variance of an
aggregated time series as a function of the
aggregation level
Research Areas
Effect of Users
Workload is generated by users
Human users do not behave like a random
sampling process
Feedback based on system performance
Repetitive working patterns
Feedback
User population is finite
Users back off when performance is
inadequate
Negative feedback
Better system stability
Need to explicitly model this behavior
Locality of Sampling
Users display different levels of activity at
different times
At any given time, only a small subset of
users is active
Active Users
Locality of Sampling
Users display different levels of activity at
different times
At any given time, only a small subset of
users is active
These users repeatedly do the same thing
Workload observed by system is not a
random sample from long-term distribution
Growing Variability
Locality of Sampling
The questions:
How does this affect the results of
performance evaluation?
Can this be exploited by the system, e.g. by
a scheduler?
A Small Problem
We don't have data for these models
Especially for user behavior such as
feedback
Need interaction with cognitive scientists
Final Words
We like to think
that we design
systems based
on solid
foundations
But beware:
the foundations
might be
unbased
assumptions!
Acknowledgements
Students: Ahuva Mualem, David Talby,
Uri Lublin
Larry Rudolph / MIT
Data in Parallel Workloads Archive