
Chapter 4

Exponential Smoothing Methods


Based on Material and Data from Forecasting Methods and
Applications by Makridakis et al., 3rd Edition, Wiley

Dr. Sunil D. Lakdawala

1
Contents

Various Smoothing Methods

Approach for appraising smoothing methods for
forecasting
Averaging Methods
SES: Single Exponential Smoothing
ARRSES: Adaptive-Response-Rate SES
Holt's Linear Method
Holt-Winters Trend and Seasonality Method
General Aspects of Smoothing Methods

2
Various Smoothing Methods

Stationary Time Series

Averaging Methods

Simple Average

Moving Average

Single Exponential Smoothing

One Parameter

Adaptive Parameter

Trend
Holt's Linear Method

3
Various Smoothing Methods (Cont)

Trend and Seasonality

Holt-Winters' Method

Pegels' Classification (see Fig 4.1):

                          1: No Seasonal   2: Additive Seasonal   3: Multiplicative Seasonal
A: No Trend
B: Additive Trend
C: Multiplicative Trend

4
Approach - Smoothing Methods for
Forecasting
Divide data into training and test sets
Plot the time series and identify its components
Choose appropriate smoothing methods
Use the training data to build the model
Apply the model to the test data for forecasting
Measure MAPE, MSE, etc. using the test data
Optimize (minimize) MAPE and MSE
Decide on a final smoothing method
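The appraisal loop above can be sketched in code. A minimal sketch, using a small hypothetical demand series and a simple-average baseline model; the MSE and MAPE functions follow the standard definitions:

```python
# Error measures used to appraise a forecasting method on test data.
def mse(actual, forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical monthly demand, split into training and test sets.
series = [200.0, 135.0, 195.0, 197.5, 310.0, 175.0, 155.0, 130.0, 220.0, 277.5]
train, test = series[:7], series[7:]

# Baseline: forecast every test period with the training mean,
# then score the forecasts on the held-out test data.
forecasts = [sum(train) / len(train)] * len(test)
test_mse = mse(test, forecasts)
test_mape = mape(test, forecasts)
```

Competing smoothing methods would be scored the same way, and the one with the lowest test error kept.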

5
Averaging Methods

Simple Average
Take the simple average of all observed data (Eq 4.1)

Applicable when there is no trend and no seasonality
(Cell A-1 in Pegels' table)
Moving Average MA(k)
Different from the one discussed in an earlier chapter:
here the k most recent records are chosen, and the
objective is forecasting
Deals with only the latest k data points

As time goes on, the number of past data points used
for forecasting does not change
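A minimal sketch of MA(k) forecasting (hypothetical data; the function name is illustrative):

```python
def moving_average_forecast(y, k):
    """Forecast the next period as the mean of the k most recent observations."""
    if len(y) < k:
        raise ValueError("need at least k observations")
    return sum(y[-k:]) / k

# Only the latest k values are used; as new data arrive, the oldest value
# drops out of the window and the window size stays fixed at k.
demand = [200.0, 135.0, 195.0, 197.5, 310.0]
f_ma3 = moving_average_forecast(demand, 3)   # mean of the last 3 values
```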


6
Averaging Methods (Cont)

Moving Average MA(k) (Cont)

Does not handle trend or seasonality, but better than
Simple Average
Do the Table 4.2 exercise (using MA(3) and MA(5) on
July to November data [periods 7 to 11])

Find Mean Error and MAPE

Forecast for December [period 12]

Time plot

Find the value of Theil's U statistic

MA(1) is the same as NF1

7
Averaging Methods (Cont)

If the time series changes suddenly, averaging methods
do not give very good results (they break down; in fact,
the larger the k, the more time it takes to catch up)
Exponential smoothing methods are better for
forecasting

8
SES: Single Exponential Smoothing

Weighted average with exponentially decreasing weights

Ft+1 = Ft + α(Yt − Ft) = Ft + α·Et
New forecast is old forecast plus an adjustment for the error

α varies from 0 to 1; α nearer to 0 implies very little
adjustment, while α nearer to 1 implies a huge
adjustment (α = 1 becomes NF1)
Ft+1 = αYt + (1−α)Ft
Ft+1 = αYt + α(1−α)Yt−1 + α(1−α)²Yt−2 + ...
Plot the weights for various values of α. All of them
die down exponentially (see p. 149)
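A minimal sketch of SES, assuming the common initialization F1 = Y1:

```python
def ses(y, alpha):
    """Single exponential smoothing: F[t+1] = F[t] + alpha * (Y[t] - F[t])."""
    f = y[0]                          # initialize with F1 = Y1
    forecasts = [f]
    for obs in y:
        f = f + alpha * (obs - f)     # equivalently alpha*Y[t] + (1-alpha)*F[t]
        forecasts.append(f)
    return forecasts                  # length len(y)+1; last entry is F[n+1]
```

With alpha = 1 each forecast equals the previous observation, i.e. SES reduces to NF1.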

9
Single Exponential (Cont)

F1 needs to be decided
Typically one takes F1 = Y1, as time increases, weight
age of F1 will decrease. However if is nearer to 0, then
initialization will have considerable effect on forecasting
Other methods will take average of first few values of Yt
Calculate MSE, MAPE, Theils U-Statisitcs for Table 4.2
Electronic Cane Opener by forecasting values from 2nd
to 11th month (Feb to Dec) for values of = 0.1, 0.5 and
0.9 (Use F1 = Y1). Plot them
Larger implies less smoothening (see plot in previous
case)

10
Single Exponential (Cont)

For longer-range forecasts,

Ft+h = Ft+1, h = 1, 2, ...
If there is a trend, forecasts lag behind it, farther
behind for smaller α
See example: Table 4.4 Inventory Demand

11
ARRSES: Single Exponential Adaptive Approach

Allows the value of αt to be modified in a controlled manner
as changes in the pattern of the data occur
May give forecasts inferior to single exponential
smoothing with an optimal α, but reduces the risk of serious
error and keeps administrative effort minimal when hundreds
of items require forecasting
(See equations 4.7 to 4.11)
Ft+1 = αt·Yt + (1 − αt)·Ft
αt+1 = |At / Mt|
At = β·Et + (1 − β)·At−1
Mt = β·|Et| + (1 − β)·Mt−1
Et = Yt − Ft
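A sketch of ARRSES per Eqs 4.7 to 4.11, assuming β = 0.2 and the initialization suggested on the next slide (F2 = Y1, A1 = M1 = 0, initial α = 0.2):

```python
def arrses(y, beta=0.2, alpha0=0.2):
    """Adaptive-response-rate SES: alpha adapts to the recent error pattern."""
    f, a, m, alpha = y[0], 0.0, 0.0, alpha0   # F2 = Y1, A1 = M1 = 0
    forecasts = [f]
    for t in range(1, len(y)):
        e = y[t] - f                          # Et = Yt - Ft
        a = beta * e + (1 - beta) * a         # smoothed error At
        m = beta * abs(e) + (1 - beta) * m    # smoothed absolute error Mt
        f = f + alpha * e                     # Ft+1 = alpha*Yt + (1-alpha)*Ft
        forecasts.append(f)
        if m != 0:                            # alpha(t+1) = |At / Mt|
            alpha = abs(a / m)
    return forecasts
```

When the errors are consistently one-signed, |At/Mt| moves toward 1, so α rises and the forecasts catch up faster.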
12
Single Exponential Adaptive Approach (Cont)

Initialization is more complex

One might use F2 = Y1; α2 = α3 = α4 = β = 0.2;
A1 = M1 = 0
Do example: Table 4.5 Electric Can Opener
Shipments

13
Holt's Linear Method
Used for data having a trend
Also called Double Exponential Smoothing
Two parameters α and β (range: 0 to 1)
Lt = αYt + (1−α)(Lt−1 + bt−1) = αYt + (1−α)Ft (for m = 1)
bt = β(Lt − Lt−1) + (1−β)bt−1
Ft+m = Lt + bt·m, m = 1, 2, ...
Lt denotes the level of the series, while bt denotes the slope
Initialization:
L1 = Y1; b1 = Y2−Y1, or L1 = Y1; b1 = (Y4−Y1)/3, or
use least squares regression on the first few values of Yt
Problem with initialization: if the trend is upward but Y2−Y1 is
negative, it will take a long time to overcome the influence
Do example: Table 4.6 Inventory Demand Data
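A sketch of Holt's linear method using the first initialization above (L1 = Y1, b1 = Y2 − Y1):

```python
def holt(y, alpha, beta, m=1):
    """Holt's linear method; returns the m-step-ahead forecast F[n+m]."""
    level, trend = y[0], y[1] - y[0]          # L1 = Y1, b1 = Y2 - Y1
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend * m                  # F[t+m] = L[t] + b[t]*m
```

On a perfectly linear series the recursions recover the slope exactly, so the forecast extrapolates the line without lag.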

14
Holts Linear Method (Cont)
To find the optimal values of α, β (minimize error), either use a
non-linear optimization method or a grid search
approach (i.e. try out different values)

15
Holt-Winters Trend & Seasonality Method
See example in Table 4.7
Multiplicative
Level: Lt = α·Yt/St−s + (1−α)·(Lt−1 + bt−1)

Trend: bt = β·(Lt − Lt−1) + (1−β)·bt−1

Seasonal: St = γ·Yt/Lt + (1−γ)·St−s

Forecast: Ft+m = (Lt + bt·m)·St−s+m

Initialization
Ls = (Y1 + Y2 + ... + Ys)/s
bs = [(Ys+1 − Y1) + (Ys+2 − Y2) + ... + (Y2s − Ys)]/s²
S1 = Y1/Ls, S2 = Y2/Ls, ..., Ss = Ys/Ls
Optimization: Just like previous method
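The multiplicative recursions and initialization above can be sketched as follows (assumes at least two full seasons of data, season length s, and a horizon m ≤ s):

```python
def holt_winters_mult(y, s, alpha, beta, gamma, m=1):
    """Multiplicative Holt-Winters; returns the m-step-ahead forecast."""
    level = sum(y[:s]) / s                                      # Ls
    trend = sum(y[s + i] - y[i] for i in range(s)) / (s * s)    # bs
    seasonal = [y[i] / level for i in range(s)]                 # S1..Ss
    for t in range(s, len(y)):
        prev_level = level
        level = alpha * y[t] / seasonal[t - s] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonal.append(gamma * y[t] / level + (1 - gamma) * seasonal[t - s])
    # F[n+m] = (L[n] + b[n]*m) * S[n-s+m]; valid for m <= s
    return (level + trend * m) * seasonal[len(y) - s + m - 1]
```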

16
Holt-Winters Trend & Seasonality Method (Cont)
See example in Table 4.7
Additive
Level: Lt = α·(Yt − St−s) + (1−α)·(Lt−1 + bt−1)

Trend: bt = β·(Lt − Lt−1) + (1−β)·bt−1

Seasonal: St = γ·(Yt − Lt) + (1−γ)·St−s

Forecast: Ft+m = (Lt + bt·m) + St−s+m

Initialization
Ls = (Y1 + Y2 + ... + Ys)/s
bs = [(Ys+1 − Y1) + (Ys+2 − Y2) + ... + (Y2s − Ys)]/s²
S1 = Y1 − Ls, S2 = Y2 − Ls, ..., Ss = Ys − Ls
Optimization: Just like previous method
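The additive variant differs from the multiplicative one only in subtracting and adding the seasonal term rather than dividing and multiplying by it; a sketch under the same assumptions (two full seasons of data, horizon m ≤ s):

```python
def holt_winters_add(y, s, alpha, beta, gamma, m=1):
    """Additive Holt-Winters; returns the m-step-ahead forecast."""
    level = sum(y[:s]) / s                                      # Ls
    trend = sum(y[s + i] - y[i] for i in range(s)) / (s * s)    # bs
    seasonal = [y[i] - level for i in range(s)]                 # S1..Ss
    for t in range(s, len(y)):
        prev_level = level
        level = alpha * (y[t] - seasonal[t - s]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonal.append(gamma * (y[t] - level) + (1 - gamma) * seasonal[t - s])
    # F[n+m] = L[n] + b[n]*m + S[n-s+m]; valid for m <= s
    return level + trend * m + seasonal[len(y) - s + m - 1]
```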

17
General Aspects of Smoothing Methods
Initialization, optimization and prediction intervals are
the major issues
Initialization
How you initialize becomes less and less important the
further into the future we move
Back-forecasting: reverse the data series, start the estimation
procedure from the latest (most recent) value and obtain the
first value. Use this as the initial value
Least squares estimates: fit the first few values (say 10) to a
straight line and get the initial value
Decomposition: use decomposition methods
Others: for the initial period, use high values of α, β, γ

18
General Aspects of Smoothing Methods (Cont)
Optimization
Grid: vary the smoothing parameters in increments of 0.1
(0.1, 0.2, ..., 0.9), then vary them in increments of 0.01
(minimize MSE or MAPE)
Non-linear optimization
Get good initial estimates of Lt, bt and St, and then choose small
values of α, β, γ. This gives a slow-response system, but a stable one
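The coarse grid step can be sketched for SES: score each candidate α by the in-sample MSE of its one-step forecasts and keep the best; the winner can then be refined on a 0.01 grid around it.

```python
def ses_mse(y, alpha):
    """In-sample MSE of one-step SES forecasts, with F1 = Y1."""
    f, sse = y[0], 0.0
    for obs in y[1:]:
        sse += (obs - f) ** 2
        f = f + alpha * (obs - f)
    return sse / (len(y) - 1)

def grid_search(y):
    """Coarse grid: try alpha = 0.1, 0.2, ..., 0.9 and keep the best."""
    grid = [i / 10 for i in range(1, 10)]
    return min(grid, key=lambda a: ses_mse(y, a))
```

On a steadily trending series the search picks the largest α on the grid, since a higher α lets the forecast lag the trend less.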
Prediction Intervals
Since smoothing methods are not based on statistical
models, giving an interval and interpreting it is difficult. For
example, making a prediction such as "sales will be between
23 and 27 with 66% probability" is difficult.
MSE and MAPE give some idea.
Only if the errors are random (no autocorrelation) and normally
distributed can some interpretation be given to MSE
19
