
ARIMA

Kelompok 10
Agnes Ona Bliti Puka (1315201010)
Farida Apriani (1315201026)
1. Concept of ARIMA
A homogeneous nonstationary time series can be reduced to a stationary time series by taking a
proper degree of differencing. Autoregressive moving average models are useful for describing
stationary time series, so in this section we use the effect of differencing to build a larger class
of time series models, the autoregressive integrated moving average models, which are useful for
describing various homogeneous nonstationary time series.
Thus, we have:

φ_p(B) (1 − B)^d Z_t = θ_0 + θ_q(B) a_t

The resulting homogeneous nonstationary model above has been referred to as the autoregressive
integrated moving average model of order (p,d,q) and is denoted as the ARIMA(p,d,q) model.
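A quick sketch (an illustration added here, not part of the original assignment) of why differencing works: a random walk is a simple homogeneous nonstationary series, and a single difference recovers the stationary white-noise shocks that drive it.

```python
import random

random.seed(1)

# White noise shocks a_t and the random walk Z_t = Z_{t-1} + a_t built from them
noise = [random.gauss(0, 1) for _ in range(500)]
walk = []
for a in noise:
    walk.append((walk[-1] if walk else 0.0) + a)

# One difference, (1 - B) Z_t = Z_t - Z_{t-1}, recovers the stationary shocks
diff = [walk[t] - walk[t - 1] for t in range(1, len(walk))]
assert all(abs(d - a) < 1e-9 for d, a in zip(diff, noise[1:]))
```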
2. Steps of ARIMA in Minitab
The data used in this observation are shown below:
Period  Data | Period  Data | Period  Data | Period  Data | Period  Data
  1     2526 |   16    1474 |   31    1280 |   46    1347 |   61    1154
  2     2344 |   17    1433 |   32    1227 |   47     968 |   62     751
  3     2004 |   18    1470 |   33    1212 |   48    1191 |   63    1054
  4     2428 |   19    2096 |   34    1118 |   49    1160 |   64     943
  5     2518 |   20    1916 |   35     911 |   50     754 |   65    1238
  6     1696 |   21    1903 |   36     780 |   51     900 |   66    1280
  7     2734 |   22    1861 |   37    1345 |   52     817 |   67    1509
  8     2593 |   23    1596 |   38     861 |   53    1038 |   68     883
  9     2035 |   24    2067 |   39    1101 |   54     989 |   69     977
 10     2435 |   25    1054 |   40    1206 |   55    1529 |   70     957
 11     1724 |   26     884 |   41    1223 |   56    1483 |   71     804
 12     1569 |   27    1406 |   42    1321 |   57     926 |   72     972
 13     1567 |   28    1074 |   43    1804 |   58     852 |
 14     1097 |   29    1279 |   44    1360 |   59     790 |
 15     1065 |   30     999 |   45    1235 |   60    1127 |

1) Identification
Plot the time series data and choose a proper transformation.
Command: Stat > Time Series > Time Series Plot > Simple > OK
Result:

[Time series plot of the data, index 1 to 72; values range roughly from 750 to 2750 with a clear downward trend]

Figure 1: Time series plot


The plot of the data above indicates that the series is nonstationary in the mean. To
check stationarity in the variance we can use the Box-Cox plot. Because the data are
not stationary, we can use a transformation to correct this.
Command: Stat>Control Chart>Box-Cox Transformation
Result:
[Box-Cox Plot of data (using 95.0% confidence): Estimate = 0.22, Lower CL = -0.44, Upper CL = 0.86, Rounded Value = 0.00]

Figure 2: Box-Cox plot


The Box-Cox plot shows that the rounded value is 0.00; since this differs from 1, the data
are not stationary in variance, and the rounded λ = 0 corresponds to a natural log transformation.
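The Box-Cox family can be sketched in a few lines (an illustration added here, not Minitab output); the λ = 0 case is exactly the natural log transform implied by the rounded value above.

```python
import math

def box_cox(y, lam):
    """Box-Cox power transformation; lambda = 0 is defined as the natural log."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1) / lam

# A rounded lambda of 0.00, as in the plot above, means: model log(data).
# log(2526) is close to 7.83439, the first transformed value in the SAS datalines later on.
print(round(box_cox(2526, 0), 5))
```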
Because variance-stabilizing transformations such as the power transformation require
nonnegative values, and differencing may create some negative values, we should
always apply the variance-stabilizing transformation before taking differences.
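This ordering can be illustrated with a minimal Python sketch (hypothetical, using the first few observations from the data table above):

```python
import math

data = [2526, 2344, 2004, 2428]  # first observations from the table above

# Correct order: stabilize the variance first (log), then take differences
logged = [math.log(v) for v in data]
datadiff = [b - a for a, b in zip(logged, logged[1:])]

# Differencing first produces negative values, on which log() would fail
raw_diff = [b - a for a, b in zip(data, data[1:])]
assert any(d < 0 for d in raw_diff)
```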
Command: Stat > Time Series > Differences > OK, then Stat > Time Series > Time Series
Plot > OK
Result:

[Time series plot of datadiff, index 1 to 72; the differenced series fluctuates around zero between about -0.75 and 0.50]

Figure 3: Time series plot of the differenced data


The pattern of the plot indicates that the data are now stationary in mean and
variance. Therefore we can directly use the data to build an ARIMA model.
Note: to check stationarity we can also use the ACF and PACF plots, by computing and
examining the sample ACF and sample PACF of the original series to further
confirm the necessary degree of differencing so that the differenced series is stationary (if
the sample ACF decays very slowly, or the individual ACF values are not large while the
sample PACF cuts off after lag 1, this indicates that differencing is needed).
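The sample ACF mentioned in the note can be computed directly; a minimal pure-Python sketch (illustrative, not the Minitab implementation):

```python
def sample_acf(x, nlags):
    """Sample autocorrelations r_k = sum_t (x_t - m)(x_{t+k} - m) / sum_t (x_t - m)^2."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    return [sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / c0
            for k in range(1, nlags + 1)]

# A trending (nonstationary) series: the ACF starts near 1 and decays very slowly
trend = list(range(50))
acf_trend = sample_acf(trend, 3)
assert acf_trend[0] > 0.9
```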
2) Parameter estimation, diagnostic checking and model selection
a. Parameter estimation and diagnostic checking
Compute and examine the sample ACF and PACF of the properly transformed and
differenced series to identify the orders p and q.
Command: Stat > Time Series > Autocorrelation and Stat > Time Series > Partial
Autocorrelation > OK
Result:
[Autocorrelation Function for datadiff, with 5% significance limits; the lag-1 autocorrelation clearly exceeds the limits]

Figure 4: ACF plot


Autocorrelation Function: datadiff

Lag      ACF        T     LBQ
1 -0.388561 -3.27 11.18
2 -0.049807 -0.37 11.37
3 0.161572 1.19 13.36
4 -0.215349 -1.56 16.94
5 0.038344 0.27 17.06
6 0.036184 0.25 17.16
7 -0.161467 -1.13 19.27
8 -0.006124 -0.04 19.28
9 0.004263 0.03 19.28
10 0.001547 0.01 19.28
11 0.016569 0.11 19.30
12 0.247161 1.69 24.67
13 -0.160267 -1.06 26.96
14 0.087133 0.57 27.65
15 -0.077373 -0.50 28.21
16 -0.027936 -0.18 28.28
17 0.001158 0.01 28.28
18 -0.085544 -0.55 29.00
[Partial Autocorrelation Function for datadiff, with 5% significance limits; the lag-1 partial autocorrelation clearly exceeds the limits]

Figure 5: PACF plot


Partial Autocorrelation Function: datadiff

Lag     PACF       T
 1  -0.388561  -3.27
 2  -0.236492  -1.99
 3   0.057079   0.48
 4  -0.160386  -1.35
 5  -0.108241  -0.91
 6  -0.056742  -0.48
 7  -0.178498  -1.50
 8  -0.227945  -1.92
 9  -0.208696  -1.76
10  -0.140796  -1.19
11  -0.161259  -1.36
12   0.189432   1.60
13   0.015827   0.13
14   0.084939   0.72
15  -0.131318  -1.11
16  -0.051094  -0.43
17  -0.122711  -1.03
18  -0.153190  -1.29

Based on the ACF plot above, the autocorrelation at lag 1 is significantly
different from zero (the ACF cuts off after lag 1), while the PACF plot shows a
significant partial autocorrelation at lag 1. Therefore the possible models are
ARIMA(1,1,1), ARIMA(0,1,1) and ARIMA(1,1,0).
ARIMA(1,1,1)
Command: Stat > Time Series > ARIMA > enter transformasi in Series > enter the
value 1 in Autoregressive, Difference and Moving average
Result:
ARIMA Model: transformasi
Estimates at each iteration
Iteration      SSE   Parameters
0 5.36492 0.100 0.100 0.078
1 3.91257 -0.050 0.250 -0.019
2 3.80836 0.031 0.400 -0.016
3 3.71498 0.083 0.550 -0.013
4 3.66015 0.168 0.671 -0.011
5 3.56585 0.294 0.804 -0.009
6 3.47244 0.394 0.919 -0.008
7 3.45792 0.400 0.939 -0.007
8 3.44655 0.415 0.955 -0.007
9 3.43445 0.427 0.969 -0.007
10 3.43151 0.425 0.972 -0.007
11 3.43120 0.423 0.972 -0.007
12 3.43113 0.423 0.972 -0.007
Relative change in each estimate less than 0.0010
Final Estimates of Parameters
Type          Coef   SE Coef      T      P
AR 1        0.4226    0.1229   3.44  0.001
MA 1        0.9722    0.0585  16.63  0.000
Constant -0.006859  0.001187  -5.78  0.000
Differencing: 1 regular difference
Number of observations: Original series 72, after differencing 71
Residuals: SS = 3.37811 (backforecasts excluded)
MS = 0.04968 DF = 68
Modified Box-Pierce (Ljung-Box) Chi-Square statistic
Lag            12     24     36     48
Chi-Square   14.9   28.3   52.3   60.1
DF              9     21     33     45
P-Value     0.094  0.133  0.018  0.066
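The Ljung-Box statistics reported by Minitab can be reproduced from the sample autocorrelations of the differenced series; a small Python sketch (using the ACF values from the earlier table, n = 71):

```python
def ljung_box(acf_values, n):
    """Cumulative Ljung-Box statistic Q_m = n(n+2) * sum_{k<=m} r_k^2 / (n - k)."""
    q, out = 0.0, []
    for k, r in enumerate(acf_values, start=1):
        q += n * (n + 2) * r * r / (n - k)
        out.append(round(q, 2))
    return out

# ACF of datadiff at lags 1 and 2, taken from the Minitab table above (n = 71)
print(ljung_box([-0.388561, -0.049807], n=71))  # ≈ [11.18, 11.37], matching the LBQ column
```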
Normality Check

[Probability Plot of RESI1 (Normal): Mean = 0.003538, StDev = 0.2196, N = 71, KS = 0.055, P-Value > 0.150]

Figure 6: Normality plot


ARIMA(0,1,1)
ARIMA Model: transformasi
Estimates at each iteration
Iteration SSE Parameters
0 5.20938 0.100 0.087
1 4.18703 0.250 0.027
2 3.79239 0.400 -0.007
3 3.72356 0.503 -0.015
4 3.71796 0.530 -0.014
5 3.71691 0.541 -0.014
6 3.71666 0.547 -0.014
7 3.71660 0.550 -0.014
8 3.71658 0.551 -0.014
9 3.71657 0.552 -0.014
10 3.71657 0.552 -0.014
Relative change in each estimate less than 0.0010
Final Estimates of Parameters
Type         Coef  SE Coef      T      P
MA 1       0.5524   0.1004   5.50  0.000
Constant -0.01356  0.01239  -1.09  0.278
Differencing: 1 regular difference
Number of observations: Original series 72, after differencing 71
Residuals: SS = 3.71440 (backforecasts excluded)
MS = 0.05383 DF = 69
Modified Box-Pierce (Ljung-Box) Chi-Square statistic
Lag            12     24     36     48
Chi-Square   24.7   44.4   70.7   83.4
DF             10     22     34     46
P-Value     0.006  0.003  0.000  0.001

Normality Check

[Probability Plot of RESI2 (Normal): Mean = -0.0006562, StDev = 0.2304, N = 71, KS = 0.094, P-Value = 0.119]

Figure 7: Normality plot of ARIMA(0,1,1)


ARIMA(1,1,0)
ARIMA Model: transformasi
Estimates at each iteration
Iteration SSE Parameters
0 5.63711 0.100 0.078
1 4.70206 -0.050 0.041
2 4.16485 -0.200 0.010
3 3.95400 -0.350 -0.016
4 3.94523 -0.390 -0.019
5 3.94521 -0.392 -0.020
6 3.94521 -0.392 -0.020
Relative change in each estimate less than 0.0010
Final Estimates of Parameters
Type         Coef  SE Coef      T      P
AR 1      -0.3924   0.1113  -3.52  0.001
Constant -0.01952  0.02838  -0.69  0.494
Differencing: 1 regular difference
Number of observations: Original series 72, after differencing 71
Residuals: SS = 3.94473 (backforecasts excluded)
MS = 0.05717 DF = 69
Modified Box-Pierce (Ljung-Box) Chi-Square statistic
Lag            12     24     36     48
Chi-Square   18.8   32.1   58.0   67.4
DF             10     22     34     46
P-Value     0.042  0.076  0.006  0.021
Normality Check

[Probability Plot of RESI3 (Normal): Mean = -0.0002040, StDev = 0.2374, N = 71, KS = 0.087, P-Value > 0.150]

Figure 8: Normality plot of ARIMA(1,1,0)


b. Model Selection
Based on the diagnostic checks, we select the best forecasting model by its MSE
value: the model with the smallest MSE is preferred. Look at the table below:

Table 1: Model Selection

Model           MSE
ARIMA(1,1,1)    0.04968
ARIMA(0,1,1)    0.05383
ARIMA(1,1,0)    0.05717

(Each model was also checked for parameter significance, residual normality and white noise; see the outputs above.)

From these results we choose ARIMA(1,1,1), which has the smallest MSE.


3) Forecasting
Command: Stat > Time Series > ARIMA > enter transformasi on the Series >
Forecast > Lead 1 > Origin 72 > OK
Result:
Forecasts from period 72

                     95% Limits
Period  Forecast    Lower    Upper  Actual
    73   6.79922  6.36228  7.23617
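Because the rounded Box-Cox value of 0 means the model was fitted to the log of the data, the forecast and its limits can be mapped back to the original scale with the exponential (a sketch added for illustration, not part of the Minitab output):

```python
import math

# Forecast and 95% limits on the log scale, from the Minitab output above
lower, forecast, upper = 6.36228, 6.79922, 7.23617

# The rounded Box-Cox value of 0 means the model was fitted to log(data),
# so exp() maps the interval back to the original scale of the series
back = [round(math.exp(v), 1) for v in (lower, forecast, upper)]
print(back)
```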

3. Steps of ARIMA in SAS

Based on the models identified in the Minitab results, we can run the ARIMA process in SAS.

We can use this syntax:


data test;
input t y;
datalines;
1 7.83439
2 7.75961
3 7.60290
4 7.79482
5 7.83122
6 7.43603
7 7.91352
8 7.86057
9 7.61825
10 7.79770
11 7.45240
12 7.35819
13 7.35692
14 7.00033
15 6.97073
16 7.29574
17 7.26753
18 7.29302
19 7.64779
20 7.55799
21 7.55119
22 7.52887
23 7.37526
24 7.63385
25 6.96035
26 6.78446
27 7.24850
28 6.97915
29 7.15383
30 6.90675
31 7.15462
32 7.11233
33 7.10003
34 7.01930
35 6.81454
36 6.65929
37 7.20415
38 6.75809
39 7.00397
40 7.09506
41 7.10906
42 7.18614
43 7.49776
44 7.21524
45 7.11883
46 7.20564
47 6.87523
48 7.08255
49 7.05618
50 6.62539
51 6.80239
52 6.70564
53 6.94505
54 6.89669
55 7.33237
56 7.30182
57 6.83087
58 6.74759
59 6.67203
60 7.02731
61 7.05099
62 6.62141
63 6.96035
64 6.84907
65 7.12125
66 7.15462
67 7.31920
68 6.78333
69 6.88449
70 6.86380
71 6.68960
72 6.87936
;
proc arima data=test;
identify var=y nlag=12;
run;
identify var=y(1) nlag=12;
run;
estimate p=1;
run;
estimate p=1 q=1;
run;
forecast lead=1 out=results;
run;
quit;
RESULT:

The SAS System
The ARIMA Procedure

Name of Variable = y
Mean of Working Series    7.168118
Standard Deviation        0.340049
Number of Observations          72

Autocorrelations

Lag  Covariance  Correlation  Std Error
  0    0.115633      1.00000          0
  1    0.079556      0.68801   0.117851
  2    0.068329      0.59091   0.164431
  3    0.062328      0.53901   0.191669
  4    0.044010      0.38060   0.211677
  5    0.038666      0.33438   0.220978
  6    0.036615      0.31665   0.227897
  7    0.026989      0.23340   0.233928
  8    0.028003      0.24217   0.237140
  9    0.030400      0.26290   0.240551
 10    0.031128      0.26919   0.244509
 11    0.033411      0.28894   0.248591
 12    0.037133      0.32113   0.253212


Inverse Autocorrelations

Lag  Correlation
  1     -0.31618
  2     -0.08114
  3     -0.17029
  4      0.12494
  5      0.00329
  6     -0.09725
  7      0.09206
  8     -0.00242
  9      0.01584
 10     -0.03559
 11      0.03098
 12     -0.05611
Partial Autocorrelations

Lag  Correlation
  1      0.68801
  2      0.22322
  3      0.13916
  4     -0.15166
  5      0.04041
  6      0.08192
  7     -0.04162
  8      0.07093
  9      0.09221
 10      0.09292
 11      0.03678
 12      0.07739
Autocorrelation Check for White Noise

To Lag  Chi-Square  DF  Pr > ChiSq  Autocorrelations
     6      112.87   6      <.0001  0.688 0.591 0.539 0.381 0.334 0.317
    12      150.74  12      <.0001  0.233 0.242 0.263 0.269 0.289 0.321
The ARIMA Procedure

Name of Variable = y
Period(s) of Differencing                  1
Mean of Working Series              -0.01345
Standard Deviation                  0.256051
Number of Observations                    71
Observation(s) eliminated by differencing  1

Autocorrelations

Lag   Covariance  Correlation  Std Error
  0     0.065562      1.00000          0
  1    -0.025475     -0.38856   0.118678
  2   -0.0032656     -0.04981   0.135416
  3     0.010593      0.16158   0.135674
  4    -0.014119     -0.21535   0.138357
  5    0.0025137      0.03834   0.143000
  6    0.0023727      0.03619   0.143145
  7    -0.010586     -0.16147   0.143274
  8   -0.0004015     -0.00612   0.145814
  9   0.00027973      0.00427   0.145818
 10   0.00010112      0.00154   0.145820
 11    0.0010863      0.01657   0.145820
 12     0.016205      0.24717   0.145847
Inverse Autocorrelations

Lag  Correlation
  1      0.69054
  2      0.52455
  3      0.39861
  4      0.38844
  5      0.32583
  6      0.31691
  7      0.31445
  8      0.23908
  9      0.16486
 10      0.04224
 11     -0.03210
 12     -0.08408
Partial Autocorrelations

Lag  Correlation
  1     -0.38856
  2     -0.23649
  3      0.05708
  4     -0.16039
  5     -0.10824
  6     -0.05674
  7     -0.17850
  8     -0.22795
  9     -0.20869
 10     -0.14080
 11     -0.16126
 12      0.18944
Autocorrelation Check for White Noise

To Lag  Chi-Square  DF  Pr > ChiSq  Autocorrelations
     6       17.16   6      0.0087  -0.389 -0.050 0.162 -0.215 0.038 0.036
    12       24.67  12      0.0165  -0.161 -0.006 0.004 0.002 0.017 0.247
The ARIMA Procedure

Conditional Least Squares Estimation

Parameter  Estimate  Standard Error  t Value  Approx Pr > |t|  Lag
MU         -0.01409         0.02046    -0.69           0.4934    0
AR1,1      -0.39207         0.11134    -3.52           0.0008    1

Constant Estimate    -0.01961
Variance Estimate    0.057185
Std Error Estimate   0.239134
AIC                   0.29687
SBC                   4.82223
Number of Residuals        71
* AIC and SBC do not include log determinant.

Correlations of Parameter Estimates

Parameter     MU  AR1,1
MU         1.000  0.009
AR1,1      0.009  1.000

Autocorrelation Check of Residuals

To Lag  Chi-Square  DF  Pr > ChiSq  Autocorrelations
     6        6.60   5      0.2521  -0.092 -0.169 0.096 -0.199 -0.021 -0.012
    12       18.82  11      0.0643  -0.209 -0.086 0.003 0.008 0.146 0.266
    18       21.22  17      0.2167  -0.070 0.011 -0.073 -0.064 -0.054 -0.090
    24       32.14  23      0.0972  0.029 -0.045 0.066 0.062 0.043 0.295

Model for variable y

Estimated Mean             -0.01409
Period(s) of Differencing         1

Autoregressive Factors
Factor 1: 1 + 0.39207 B**(1)

The ARIMA Procedure

Conditional Least Squares Estimation

Parameter  Estimate  Standard Error  t Value  Approx Pr > |t|  Lag
MU         -0.01334       0.0049805    -2.68           0.0093    0
MA1,1       0.89830         0.07190    12.49           <.0001    1
AR1,1       0.36980         0.14543     2.54           0.0133    1

Constant Estimate    -0.00841
Variance Estimate     0.05174
Std Error Estimate   0.227463
AIC                  -5.84469
SBC                   0.94335
Number of Residuals        71
* AIC and SBC do not include log determinant.

Correlations of Parameter Estimates

Parameter      MU   MA1,1   AR1,1
MU          1.000  -0.252  -0.148
MA1,1      -0.252   1.000   0.631
AR1,1      -0.148   0.631   1.000

Autocorrelation Check of Residuals

To Lag  Chi-Square  DF  Pr > ChiSq  Autocorrelations
     6        4.96   4      0.2910  -0.044 0.082 0.168 -0.160 -0.017 -0.043
    12       15.93  10      0.1016  -0.195 -0.064 -0.005 0.042 0.105 0.271
    18       18.15  16      0.3154  -0.054 0.077 -0.068 -0.050 -0.033 -0.082
    24       32.75  22      0.0655  0.058 -0.016 0.105 0.136 0.032 0.316

Model for variable y

Estimated Mean             -0.01334
Period(s) of Differencing         1

Autoregressive Factors
Factor 1: 1 - 0.3698 B**(1)

Moving Average Factors
Factor 1: 1 - 0.8983 B**(1)
The ARIMA Procedure

Forecasts for variable y

Obs  Forecast  Std Error  95% Confidence Limits
 73    6.8271     0.2275     6.3813    7.2729

From these results we can summarize as follows:

Table 2: Summary of the SAS Results

Model           AIC
ARIMA(1,1,1)    -5.84469
ARIMA(0,1,1)    -3.89294
ARIMA(1,1,0)     0.29687

(Each model was also checked for parameter significance, residual normality and white noise.)

Based on the diagnostic checks, we select the best forecasting model by its AIC value: the
model with the smallest AIC is preferred. From these results we choose ARIMA(1,1,1).
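The selection rule is simply the minimum AIC; a trivial sketch using the values from the table above:

```python
# AIC values for the three candidate models, from the SAS outputs summarized above
aic = {"ARIMA(1,1,1)": -5.84469, "ARIMA(0,1,1)": -3.89294, "ARIMA(1,1,0)": 0.29687}
best = min(aic, key=aic.get)
print(best)  # → ARIMA(1,1,1)
```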

4. Steps of ARIMA in SPSS:

a. Identification
1. Enter the data into SPSS.
2. Run the analysis via Analyze > Forecasting > Sequence Charts > move the data variable to
the Variables box > click OK; the plot then appears.
3. If the data are not yet stationary in mean and variance, proceed as in the second step: Analyze >
Forecasting > Sequence Charts > check Natural Log Transform > click OK; a plot of the
transformed data then appears.
4. Once the data are stationary in variance, apply differencing: as in the second and third steps,
click Analyze > Forecasting > Sequence Charts > check Natural Log Transform and also check
Difference > click OK.
5. If the data satisfy the stationarity assumptions in mean and variance, the next step is to
propose tentative models from the ACF and PACF: click Analyze > Forecasting >
Autocorrelation > move the data variable to the Variables box > check Natural Log Transform
and also Difference > then check Autocorrelation and Partial Autocorrelation > click OK. The
ACF and PACF plots that appear suggest the tentative models, which are then examined in the
model identification step.
b. Model Identification
6. Once the tentative models are obtained, test each model: click Analyze > Forecasting >
Create Models > move the data variable into the Dependent Variables box > under Method
choose ARIMA > click Criteria, then in the ARIMA Criteria box enter 1 for Autoregressive, 1
for Difference and 1 for Moving Average (or 1 for Autoregressive, 0 for Difference and 1 for
Moving Average, and so on for the other models) > check Include constant in model > click
Continue.
7. Still in the same dialog box, click the Statistics tab > check Parameter Estimates > check
Stationary R square > on the Save tab check Noise Residual > click OK. Once an ARIMA model
has been identified and satisfies all the criteria, the next step is the diagnostic check of the model
that will be used for forecasting.
c. Diagnostic Check
8. For the diagnostic check, usually known as the white noise check, click Analyze > Explore.
Because Noise Residual was checked on the Save tab when the tentative model was fitted, the
Noise Residual variable appears in the dataset; move it to the Dependent List > click Plots and
check Normality plots with tests > click OK; the result appears in the output file.
9. To check for no autocorrelation and for homoscedasticity, click Analyze >
Forecasting > Autocorrelations > move the Noise Residuals variable to the Variables box >
check Autocorrelations and Partial Autocorrelations > click OK and the result appears. If the
three assumptions are satisfied, the next step is to predict the next period.
d. Forecasting
10. Once all assumptions are satisfied, the next step is to forecast the next period: click
Analyze > Forecasting > Create Models > click the Options tab > choose First case after end of
estimation period through a specified date, and enter 73 in the Observation box (in this case the
value to be forecast is observation 73) > on the Statistics tab check Display Forecasts > click OK
and the result appears in the output file.
5. Steps of ARIMA in R
a. Loading the Data into R
1) Open R, then click Packages > Load package > choose tseries > click OK, or type the syntax
library(tseries) in the R console.
2) To read the data to be analyzed, type mydata <- read.csv("D:/dataR.csv") in the R console,
then type attach(mydata).
3) To define the variables used in this ARIMA time series analysis, the syntax is as follows:
Y <- ppi
d.Y <- diff(Y)
t <- yearqrt
4) The data are then converted into a time series object with the following syntax:
y <- ts(y, start = c(1996, 1), freq = 12)
b. Identification
5) After the data have been converted to a time series, plot them to see the trend:
ts.plot(y, col = "blue", main = "Time Series Plot")
6) After inspecting the plot, run a stationarity test with the following syntax:
adf.test(y)
win.graph()
par(mfrow = c(2, 1))
7) To inspect the ACF and PACF, use the following syntax:
acf(y, na.action = na.pass)
pacf(y, na.action = na.pass)
8) The next step is the difference transformation and its plot:
# difference transformation of the log series and its plot
plot(log10(y), ylab = "Log (y)")
adf.test(y)
y.Difflog1 <- diff(log(y), differences = 1)
# difference transformation and plot
y.Diff1 <- diff(y, differences = 1)
ts.plot(y.Diff1, col = "blue", main = "Time Series Plot")
9) Once the data are stationary in mean and variance, determine the tentative models with the
following syntax:
# ACF and PACF
par(mfrow = c(1, 2))
acf(ts(diff(log10(y))), main = "ACF")
pacf(ts(diff(log10(y))), main = "PACF")
10) To identify the best ARIMA model, use the following syntax:
ARIMAfit <- auto.arima(log10(y), approximation = FALSE, trace = FALSE)
summary(ARIMAfit)
With auto.arima, the model obtained is the best one, so the user does not have to test the
candidate models one by one; the next step is to test the model with a diagnostic check.
c. Model Diagnostic Check
11) To test the model, use the following syntax:
# diagnostic check
(fit1 <- arima(Y, c(0, 1, 1)))
tsdiag(fit1)
d. Forecasting
12) To forecast, use the following syntax:
predict(fit1, 1)
6. Strengths and Weaknesses of Minitab, SPSS, R and SAS
1. In Minitab it is easy to identify candidate models by inspecting the patterns of the ACF and
PACF, while in SAS we did not identify the model directly; we used the result from Minitab.
2. Minitab and SAS give similar forecast values.
3. SAS is simple, practical and easy to run on the data.
4. With its simple syntax, SAS shows all of the values we need to interpret.
5. In Minitab we can judge stationarity in the mean visually, but in the other software we can
test it with the Dickey-Fuller test.
6. In Minitab we cannot fit subset autoregressive or moving average models.
7. R provides the best model via auto.arima, so the user does not have to test the ARIMA
models one by one.
8. Minitab and SPSS do not provide the Dickey-Fuller test, so SAS or R must be used to test
stationarity in the mean.
9. SPSS has no subset models when more than one lag is significant; with R and SAS subset
models can be used.
10. The significant lags produced by Minitab, SPSS, R and SAS differ, so the researcher must
choose the software used for ARIMA carefully.
