A Test For The Random Walk Hypothesis Economics Essay

Published: November 21, 2015 Words: 4432

An exchange rate, determined by market supply and demand, fluctuates regularly, increasing the risk taken by participants in the foreign currency market. Many have attempted to predict the movement of exchange rates to help minimise risk and allow market participants to make wiser economic decisions. However, this has been a challenging task.

The purpose of this paper is to investigate whether the nominal exchange rate follows a random walk by carrying out a case study of the Swiss-US exchange rate. The data are daily returns from January 1971 to November 2012. The Swiss franc has recently been a popular currency, serving as a safe haven during the economic turmoil in Europe and the Eurozone, which makes the findings particularly interesting.

Many authors document that it is difficult to explain exchange rate fluctuations using macroeconomic fundamentals, and that a random walk forecast is a better indicator of future exchange rates (Rossi, 2005). Some recent research also supports the idea that exchange rates behave like the prices of financial assets, whose movements are driven by changes in expectations about economic fundamentals rather than by changes in current fundamentals (Engel and West, 2005). By carrying out various econometric tests, this paper reports its findings on whether the Swiss-US exchange rate follows a random walk.

Literature Review

Meese and Rogoff (1983) were the first to report that economic variables such as the money supply, the trade balance and national income are of little use in forecasting exchange rates, and that a random walk model forecasts exchange rates better than economic models. They compared existing models with alternatives in which economic fundamentals were removed and found that exchange rate changes appeared purely random. This finding suggested that if exchange rates were indeed random, all exchange rate analysis derived from models based on fundamentals was likely to be incorrect.

Burt et al. (1977) studied the behaviour of spot exchange rates for the British pound (GBP), German mark (DM) and Canadian dollar (CAD), testing for random walk patterns using serial correlation, stationarity and other tests. They used the USD as the vehicle currency and examined the period from 1 April 1973 to 27 April 1975. They found that the DM and GBP were valued at their intrinsic values, indicating that these two currencies support the efficient market hypothesis and the random walk model.

More recently, Diebold and Nason (1990) attempted to forecast the spot exchange rates of ten major currencies against the US dollar for the period after 1973. However, they were unable to do better than a simple random walk model. Similarly, Engel and Hamilton (1990) studied the Deutsche mark, the French franc and the UK pound over 1984 to 1988 using quarterly data and found that their model was outperformed by a simple random walk model.

Economists have been puzzled by these findings, and alternatives to the random walk model for exchange rates have been offered. Engel and West (2005) have argued that the exchange rate disconnect is consistent with the exchange rate being determined by fundamental variables. They showed that such models can be written in a present-value asset-pricing format, in which exchange rates are determined not only by current fundamentals but also by expectations of what the fundamentals will be in the future.

More recently, Cerrato, Crosby and Kaleem (2011) have suggested that statistical tests of forecast performance are not what matters most. In their view, what matters is how an exchange rate portfolio is managed, and that is one way to beat the random walk. They argue that a simple model with some theoretical foundation and very few variables is better than complicated statistical models.

Scholars also describe various variations of the random walk model. Two specifications referred to in this paper, the random walk with drift and the closely related deterministic trend model, are described below.

Random Walk with Drift (Yt = α + Yt-1 + εt )

The random walk with drift predicts that the value at time t equals the previous period's value plus a constant, or drift (α), and a white noise term (εt). The process does not revert to a long-run mean, and its variance grows with time.

Deterministic trend (Yt = α + βt + εt)

A random walk with drift is often confused with a deterministic trend. Both include a drift and a white noise component, but in a random walk the value at time t depends on the previous period's value (Yt-1), whereas under a deterministic trend it is regressed on a time trend (βt). A process with a deterministic trend is non-stationary in its mean, which grows along a fixed trend line, while its variance is constant and independent of time.
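As an illustration of the difference, the following R sketch simulates both processes with arbitrary parameter values (α = 0.05 and β = 0.05 are chosen purely for illustration and are not estimates from the data):

# Illustrative simulation only - not the Swiss/US series
set.seed(1)
n     <- 500
eps   <- rnorm(n)        # white noise term
alpha <- 0.05            # drift / intercept (arbitrary)
beta  <- 0.05            # slope of the deterministic trend (arbitrary)

rw_drift  <- cumsum(alpha + eps)          # Yt = alpha + Yt-1 + et
det_trend <- alpha + beta * (1:n) + eps   # Yt = alpha + beta*t + et

plot.ts(cbind(rw_drift, det_trend),
        main = "Random walk with drift vs deterministic trend")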

Figure 1 Random Walk with drift

Source: http://people.duke.edu/~rnau/411rand.htm

Figure 2: Random Walk without Drift

Source: http://people.duke.edu/~rnau/411rand.htm

Test for Autocorrelation

Autocorrelation measures the strength of association between the current and lagged values of the underlying variable. Similarly, the partial autocorrelation function measures the relationship between the current observation of a variable, Xt, and successive lagged values Xt-1, …, Xt-k, once the effects of intervening lags have been removed. To check for autocorrelation in the data, the acf() function in R is used to produce a correlogram, a graphical plot showing whether any lagged values of the returns are correlated with the current value.
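A minimal sketch of the commands, assuming the Swiss/US series has been imported into the object TS as in the appendix:

acf(TS)    # autocorrelations of TS against its own lagged values (correlogram)
pacf(TS)   # partial autocorrelations, with the effect of intervening lags removed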


Figure 3 - ACF - SZ/US

Test for Stationarity

In preliminary testing for stationarity, it is useful to examine the correlogram of autocorrelation coefficients. However, as graphical tests are subjective and can give ambiguous results, further tests are needed. A time series is stationary if it has a constant mean and variance, and autocovariances that depend only on the lag between observations, not on time.

As shown in Figure 3, we see that the correlation values decay slowly after 40 lags, showing that there may be non-stationarity in the Swiss-US exchange rate time series.

Augmented Dickey Fuller (ADF) test

This test, developed by Dickey and Fuller (1981), provides evidence of whether time series data follows a random walk process. The ADF tests for the presence of a unit root as well as stationarity. For our model, we can test for stationarity by forming the following equation:

Yt = ρYt-1 + vt

Ho: the data contain a unit root (non-stationarity exists), i.e. ρ = 1

H1: the data do not contain a unit root, i.e. ρ < 1

If the computed absolute value of the tau statistic exceeds the critical values, the null hypothesis of a unit root is rejected (Gujarati, 1995). The presence of a unit root indicates that the time series moves in a random fashion and cannot be predicted.
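The three variants of the test can be run with ur.df() from the urca package; a minimal sketch mirroring the calls listed in the appendix (lags = 5 covers one trading week of serial correlation):

library(urca)

adf_none  <- ur.df(y = TS, lags = 5, type = "none")   # no drift, no trend
adf_drift <- ur.df(y = TS, lags = 5, type = "drift")  # with drift
adf_trend <- ur.df(y = TS, lags = 5, type = "trend")  # with drift and trend

summary(adf_none)   # reports the tau statistic and its critical values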

Testing with No Drift or Trend

SWISS/US        Test statistic     1%        5%        10%
τ statistic     -3.7447            -2.58     -1.95     -1.62

The standard ADF test gives a tau statistic of -3.7447, which exceeds the critical values at all three significance levels (1%, 5% and 10%) in absolute terms, so the null hypothesis is rejected and no unit root is present.

Testing With a Trend

SWISS/US        Test statistic     1%        5%        10%
τ statistic     -3.6297            -3.96     -3.41     -3.12
φ2              7.9657             6.09      4.68      4.03
φ3              8.9183             8.27      6.25      5.34

However, when the ADF test with a trend is performed, the tau statistic of -3.6297 does not exceed the 1% critical value in absolute terms, although it does exceed the 5% and 10% critical values. Thus, at the 1% level of significance the null hypothesis of a unit root cannot be rejected, while at the 5% and 10% levels it is rejected.

The computed φ3 statistic of 8.9183 exceeds its critical values, so the joint null hypothesis of a unit root with no deterministic trend is rejected.

Similarly, the computed φ2 statistic of 7.9657 is larger than its critical values at all three significance levels, rejecting the joint null hypothesis of a unit root with neither drift nor trend.

Testing With a Drift

SWISS/US        Test statistic     1%        5%        10%
τ statistic     -3.8672            -3.43     -2.86     -2.57
φ1              10.5073            6.43      4.59      3.78

The tau statistic of -3.8672 exceeds all three critical values in absolute terms, again suggesting that no unit root is present. The computed φ1 statistic of 10.5073 is also larger than its critical values at all three significance levels, so the joint null hypothesis of a unit root with no drift is rejected.

Overall, the ADF tests imply that the Swiss-US exchange rate series does not have a unit root, suggesting that the series may be stationary and does not follow a random walk.

However, the Phillips-Perron (PP) test can be used to further validate these conclusions about the presence of a unit root.

Phillips-Perron Unit Root Test (PP)

                        Short lags     Short lags    Long lags      Long lags
                        (constant)     (trend)       (constant)     (trend)
Z-τ statistic           -3.8229        -3.6209       -3.7425        -3.6119
1% critical value       -3.43407       -3.964594     -3.43407       -3.964594
5% critical value       -2.86236       -3.412984     -2.86236       -3.412984
10% critical value      -2.567237      -3.12813      -2.567237      -3.12813

Ho: A unit root is present (the series is non-stationary)

H1: A unit root is not present

The test with an intercept and short lags gives a Z-tau statistic of -3.8229, which is more negative than all three critical values, again implying that the null hypothesis is rejected and that no unit root is present.

However, for the PP test done on short lags with a trend, the null hypothesis cannot be rejected at 1% (as the Z-tau value is bigger than -3.964594) but can be rejected at critical values of 5% and 10%.

For the PP test done on the series for just an intercept and long lags, the Z-tau value of -3.7425 is smaller than all three critical values and thus clearly indicates that the null hypothesis is to be rejected and that there is no unit root present in the data.

For the PP test done on long lags with an intercept and trend, the Z-tau computed value is larger than the 1% critical value, but smaller than the 5% and 10% values. This means that the null hypothesis that there is a unit root, can be rejected at 5% and 10% but will not be rejected at 1%.

The results of the four PP tests, run with short and long lags and with or without a trend, strongly suggest that the Swiss-US exchange rate series does not have a unit root and is therefore stationary.
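The four tests can be reproduced with ur.pp() from the urca package; a minimal sketch mirroring the calls listed in the appendix:

library(urca)

pp_const_short <- ur.pp(TS, type = "Z-tau", model = "constant", lags = "short")
pp_trend_short <- ur.pp(TS, type = "Z-tau", model = "trend",    lags = "short")
pp_const_long  <- ur.pp(TS, type = "Z-tau", model = "constant", lags = "long")
pp_trend_long  <- ur.pp(TS, type = "Z-tau", model = "trend",    lags = "long")

summary(pp_const_short)   # Z-tau statistic and its 1%, 5% and 10% critical values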

ARMA(p,q) model

ARMA(1,1)

Residuals:
       Min          1Q      Median          3Q         Max
 -0.145346   -0.005926    0.000334    0.006107    0.165930

Coefficients:
             Estimate   Std. Error    t value   Pr(>|t|)
ar1         0.9993096    0.0001774   5633.095     <2e-16 ***
ma1         0.0078210    0.0098182      0.797    0.42569
intercept   0.0008970    0.0003385      2.650    0.00805 **

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Fit:

sigma^2 estimated as 0.0001713, Conditional Sum-of-Squares = 1.8, AIC = -61381.32

ARMA(1,2)

Residuals:
        Min           1Q       Median           3Q          Max
 -0.1453422   -0.0059241    0.0003346    0.0060912    0.1657796

Coefficients:
             Estimate   Std. Error    t value   Pr(>|t|)
ar1         0.9993125    0.0001763   5668.225     <2e-16 ***
ma1         0.0078088    0.0097499      0.801    0.42318
ma2        -0.0069620    0.0097689     -0.713    0.47605
intercept   0.0008918    0.0003363      2.651    0.00801 **

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Fit:

sigma^2 estimated as 0.0001713, Conditional Sum-of-Squares = 1.8, AIC = -61378.92

ARMA(2,1)

Residuals:
        Min           1Q       Median           3Q          Max
 -0.1454586   -0.0059299    0.0003468    0.0060935    0.1662268

Coefficients:
             Estimate   Std. Error   t value   Pr(>|t|)
ar1         1.927e+00    3.853e-02    50.010     <2e-16 ***
ar2        -9.270e-01    3.851e-02   -24.076     <2e-16 ***
ma2        -9.175e-01    4.143e-02   -22.694     <2e-16 ***
intercept   7.016e-05    4.501e-05     1.559      0.119

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Fit:

sigma^2 estimated as 0.0001712, Conditional Sum-of-Squares = 1.8, AIC = -61384.14

ARMA(2,2)

Residuals:
        Min           1Q       Median           3Q          Max
 -0.1452983   -0.0059312    0.0003416    0.0061028    0.1659971

Coefficients:
             Estimate   Std. Error   t value   Pr(>|t|)
ar1         1.5748255           NA        NA         NA
ar2        -0.5751178           NA        NA         NA
ma1        -0.5677750           NA        NA         NA
ma2        -0.0021529    0.0081304    -0.265      0.791
intercept   0.0003796           NA        NA         NA

Fit:

sigma^2 estimated as 0.0001713, Conditional Sum-of-Squares = 1.8, AIC = -61376.36

The results show that the AR coefficients are significantly different from zero up to the second lag, and that the MA coefficient is also statistically significant in that specification. The ARMA(2,1) model also has the lowest AIC, indicating that it captures current returns best in comparison to the ARMA(1,1), ARMA(1,2) and ARMA(2,2) models.
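A minimal sketch of how the four specifications can be fitted and compared with arma() from the tseries package, mirroring the calls listed in the appendix; summary() reports the coefficient estimates and the AIC used for the comparison:

library(tseries)

orders <- list(c(1, 1), c(1, 2), c(2, 1), c(2, 2))
fits   <- lapply(orders, function(o) arma(x = TS, order = o))

# Each summary prints the coefficient table and the AIC;
# the specification with the lowest AIC (ARMA(2,1) here) is preferred.
lapply(fits, summary)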

GARCH(p,q) Model

In practice, a GARCH(1,1) model, with only three parameters in the conditional variance equation, is usually adequate to obtain a good fit for financial time series. A GARCH(1,1) is simply (Sharma, 2012):

σ²t = ω + α1ε²t-1 + β1σ²t-1

where σ²t is the conditional variance, ε²t-1 is the squared error from the previous period, and ω, α1 and β1 are the parameters estimated below.
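A minimal sketch of how such a model can be fitted with garchFit() from the fGarch package, mirroring the call listed in the appendix:

library(fGarch)

# sigma2_t = omega + alpha1 * eps2_{t-1} + beta1 * sigma2_{t-1}
fit <- garchFit(~ garch(1, 1), data = TS, trace = FALSE)
summary(fit)   # coefficient estimates, standard errors and residual diagnostics
coef(fit)      # mu, omega, alpha1, beta1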

The Table below shows the estimates of the GARCH (1,1) model for the SZ/US time series.

Coefficients:
         mu        omega       alpha1        beta1
 1.4869e+00   8.8463e-05   9.5187e-01   4.9165e-02

Std. Errors: (based on Hessian)

          Estimate   Std. Error    t value   Pr(>|t|)
mu       1.487e+00    1.027e-03   1447.792     <2e-16 ***
omega    8.846e-05           NA         NA         NA
alpha1   9.519e-01    7.738e-03    123.018     <2e-16 ***
beta1    4.916e-02    1.382e-02      3.557   0.000375 ***

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:

-361.1154 normalized: -0.03432003

The estimated mean, µ, is significantly different from zero, and the ARCH (α1) and GARCH (β1) coefficients are both highly significant. This result weighs further against the random walk hypothesis for the Swiss-US exchange rate.

Brock-Dechert-Scheinkman (BDS) Test

Under the random walk hypothesis, the forecast errors are assumed to be uncorrelated. The BDS test therefore checks whether the time series, or the residuals of a fitted model, are independently and identically distributed (iid). The hypotheses for the BDS test are:

Ho: Residuals are iid

H1: Alternative non-linear structure to be modeled
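A minimal sketch of the test using bds.test() from the tseries package, applied to the residuals of the lag-one regression (model2), as in the appendix:

library(tseries)

res <- resid(model2)   # residuals of the lag-one regression fitted earlier
bds.test(res)          # BDS statistics for embedding dimensions 2 and 3 by default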

Standard Normal =
        [0.0065]   [0.0131]   [0.0196]   [0.0262]
[2]      19.8898    20.8476    21.8879    22.4705
[3]      26.4518    26.3191    26.9905    27.5264

p-value =
        [0.0065]   [0.0131]   [0.0196]   [0.0262]
[2]            0          0          0          0
[3]            0          0          0          0

For both embedding dimensions, the p-values are below the 1%, 5% and 10% significance levels, so the null hypothesis of iid residuals is rejected. A linear random walk model is therefore not a suitable fit for this univariate time series.

Conclusion

This paper studied the validity of the random walk hypothesis by testing to see whether the Swiss-US exchange currency moves in a certain pattern or follows a random walk model. The time frame of the data is the daily returns from January 1971 to November 2012.

The first step was to check whether the Swiss-US series was stationary and whether a unit root was present. The first diagnostic was to examine the autocorrelation coefficients in a correlogram. The result suggested that the series was not stationary, pointing to the possible existence of a unit root.

However, to test this more formally, the ADF and Phillips-Perron tests were performed, and both yielded the same conclusion: at most critical values, the series does not have a unit root. This indicates that the Swiss-US exchange rate is stationary and does not follow a random walk model.

The ARMA and GARCH models showed that the key coefficients were significantly different from zero, further weighing against the notion that the Swiss-US exchange rate follows a random walk.

Furthermore, the BDS test provided evidence that the residuals and the time series were not independent and identically distributed further rejecting the suggestion that the Swiss-US exchange rate is purely random.

Appendices

Appendix I (Regression Models)

model1 = lm(formula = TS~TSLAG1)

Residuals:
        Min           1Q       Median           3Q          Max
 -5.710e-15   -1.100e-16    0.000e+00    6.000e-17    5.353e-13

Coefficients:
               Estimate   Std. Error      t value   Pr(>|t|)
(Intercept)  -1.330e-14    1.340e-16   -9.925e+01     <2e-16 ***
TSLAG1        1.000e+00    7.023e-17    1.424e+16     <2e-16 ***

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 5.222e-15 on 10520 degrees of freedom

Multiple R-squared: 1, Adjusted R-squared: 1

F-statistic: 2.028e+32 on 1 and 10520 DF, p-value: < 2.2e-16

model2 = lm(formula = TS~TSLAG1, data = TSALL1)

Residuals:
       Min          1Q      Median          3Q         Max
 -0.144050   -0.005913    0.000338    0.006088    0.165936

Coefficients:
              Estimate   Std. Error    t value   Pr(>|t|)
(Intercept)  0.0008938    0.0003359      2.661     0.0078 **
TSLAG1       0.9993110    0.0001760   5677.079     <2e-16 ***

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.01309 on 10519 degrees of freedom

Multiple R-squared: 0.9997, Adjusted R-squared: 0.9997

F-statistic: 3.223e+07 on 1 and 10519 DF, p-value: < 2.2e-16

Appendix II - Augmented Dickey-Fuller Tests

TSdf = ur.df(y = TS, lags = 5, type = 'none')

Augmented Dickey-Fuller Test Unit Root Test

Residuals:
       Min          1Q      Median          3Q         Max
 -0.146056   -0.005737   0.0004087    0.006194    0.165257

Coefficients:
               Estimate   Std. Error   t value   Pr(>|t|)
z.lag.1      -2.514e-04    6.715e-05    -3.745   0.000182 ***
z.diff.lag1   8.046e-03    9.751e-03     0.825   0.409302
z.diff.lag2  -6.742e-03    9.752e-03    -0.691   0.489317
z.diff.lag3   6.915e-03    9.752e-03     0.709   0.478254
z.diff.lag4  -2.263e-03    9.752e-03    -0.232   0.816497
z.diff.lag5   1.328e-02    9.751e-03     1.362   0.173304

SWISS/US        Test statistic     1%        5%        10%
τ statistic     -3.7447            -2.58     -1.95     -1.62

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.0131 on 10510 degrees of freedom

Multiple R-squared: 0.001714, Adjusted R-squared: 0.001144

F-statistic: 3.007 on 6 and 10510 DF, p-value: 0.006154

TSdf2 = ur.df(y=TS, lags = 5, type = 'drift')

Residuals:
       Min          1Q      Median          3Q         Max
 -0.145197   -0.005907    0.000341    0.006070    0.166004

Coefficients:
                Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)    0.0008899    0.0003367     2.643   0.008235 **
z.lag.1       -0.0006836    0.0001768    -3.867   0.000111 ***
z.diff.lag1    0.0078108    0.0097489     0.801   0.423034
z.diff.lag2   -0.0069718    0.0097492     0.685   0.493255
z.diff.lag3    0.0066798    0.0097492     0.685   0.493255
z.diff.lag4   -0.0024912    0.0097492    -0.256   0.798319
z.diff.lag5    0.0130439    0.0097488     1.338   0.180925

SWISS/US        Test statistic     1%        5%        10%
τ statistic     -3.8672            -3.43     -2.86     -2.57
φ1              10.5073            6.43      4.59      3.78

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.01309 on 10509 degrees of freedom

Multiple R-squared: 0.001775, Adjusted R-squared: 0.001205

F-statistic: 3.114 on 6 and 10509 DF, p-value: 0.004752

TSdf3 = ur.df(y=TS, lags = 5, type = 'trend')

Residuals:
       Min          1Q      Median          3Q         Max
 -0.144895   -0.005892    0.000400    0.006107    0.166263

Coefficients:
                Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)    2.276e-03    8.838e-04     2.576   0.010021 *
z.lag.1       -1.103e-03    3.039e-04    -3.630   0.000285 ***
tt            -1.228e-07    7.239e-08    -1.697   0.089794
z.diff.lag1    7.959e-03    9.748e-03     0.816   0.414257
z.diff.lag2   -6.821e-03    9.749e-03    -0.700   0.484151
z.diff.lag3    6.830e-03    9.749e-03     0.701   0.483571
z.diff.lag4   -2.341e-03    9.749e-03    -0.240   0.810270
z.diff.lag5   -2.341e-03    9.749e-03    -0.240   0.810270

SWISS/US        Test statistic     1%        5%        10%
τ statistic     -3.6297            -3.96     -3.41     -3.12
φ2              7.9657             6.09      4.68      4.03
φ3              8.9183             8.27      6.25      5.34

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.01309 on 10508 degrees of freedom

Multiple R-squared: 0.002048, Adjusted R-squared: 0.001384

F-statistic: 3.081 on 7 and 10508 DF, p-value: 0.003034

Appendix III

Phillips-Perron Tests

TSPP1 = ur.pp(TS, type = c("Z-tau"), model = c("constant"), lags = c("short"))

Residuals:
       Min          1Q      Median          3Q         Max
 -0.144050   -0.005913    0.000338    0.006088    0.165936

Coefficients:
              Estimate   Std. Error    t value   Pr(>|t|)
(Intercept)  0.0008938    0.0003359      2.661     0.0078 **
y.l1         0.9993110    0.0001760   5677.079     <2e-16 ***

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.01309 on 10519 degrees of freedom

Multiple R-squared: 0.9997, Adjusted R-squared: 0.9997

F-statistic: 3.223e+07 on 1 and 10519 DF, p-value: < 2.2e-16

aux. Z statistics

Z-tau-mu 2.62

TSPP2 = ur.pp(TS, type = c("Z-tau"), model = c("constant"), lags = c("long"))

Residuals:
       Min          1Q      Median          3Q         Max
 -0.144050   -0.005913    0.000338    0.006088    0.165936

Coefficients:
              Estimate   Std. Error    t value   Pr(>|t|)
(Intercept)  0.0008938    0.0003359      2.661     0.0078 **
y.l1         0.9993110    0.0001760   5677.079     <2e-16 ***

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.01309 on 10519 degrees of freedom

Multiple R-squared: 0.9997, Adjusted R-squared: 0.9997

F-statistic: 3.223e+07 on 1 and 10519 DF, p-value: < 2.2e-16

aux. Z statistics

Z-tau-mu 2.5857

TSPP4 = ur.pp(TS, type = c("Z-tau"), model = c("trend"), lags = c("short"))

Residuals:
       Min          1Q      Median          3Q         Max
 -0.143737   -0.005858    0.000397    0.006067    0.166182

Coefficients:
              Estimate   Std. Error    t value   Pr(>|t|)
(Intercept)  1.622e-03    5.496e-04      2.952    0.00316 **
y.l1         9.989e-01    3.028e-04   3298.568     <2e-16 ***
trend       -1.211e-07    7.228e-08     -1.675    0.09399

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.01309 on 10518 degrees of freedom

Multiple R-squared: 0.9997, Adjusted R-squared: 0.9997

F-statistic: 1.612e+07 on 2 and 10518 DF, p-value: < 2.2e-16

aux. Z statistics

Z-tau-mu 2.6703

Z-tau-beta -1.7199

TSPP3 = ur.pp(TS, type = c("Z-tau"), model = c("trend"), lags = c("long"))

Residuals:
       Min          1Q      Median          3Q         Max
 -0.143737   -0.005858    0.000397    0.006067    0.166182

Coefficients:
              Estimate   Std. Error    t value   Pr(>|t|)
(Intercept)  1.622e-03    5.496e-04      2.952    0.00316 **
y.l1         9.989e-01    3.028e-04   3298.568     <2e-16 ***

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.01309 on 10518 degrees of freedom

Multiple R-squared: 0.9997, Adjusted R-squared: 0.9997

F-statistic: 1.612e+07 on 2 and 10518 DF, p-value: < 2.2e-16

aux. Z statistics

Z-tau-mu 2.3998

Z-tau-beta -1.7678

                        Short lags     Short lags    Long lags      Long lags
                        (constant)     (trend)       (constant)     (trend)
Z-τ statistic           -3.8229        -3.6209       -3.7425        -3.6119
1% critical value       -3.43407       -3.964594     -3.43407       -3.964594
5% critical value       -2.86236       -3.412984     -2.86236       -3.412984
10% critical value      -2.567237      -3.12813      -2.567237      -3.12813

Appendix IV

ARMA MODELS

model3 = arma(x=TS, order = c(1,1))

ARMA(1,1)

Residuals:
       Min          1Q      Median          3Q         Max
 -0.145346   -0.005926    0.000334    0.006107    0.165930

Coefficients:
             Estimate   Std. Error    t value   Pr(>|t|)
ar1         0.9993096    0.0001774   5633.095     <2e-16 ***
ma1         0.0078210    0.0098182      0.797    0.42569
intercept   0.0008970    0.0003385      2.650    0.00805 **

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Fit:

sigma^2 estimated as 0.0001713, Conditional Sum-of-Squares = 1.8, AIC = -61381.32

model4 = arma(x=TS, order = c(1,2))

ARMA(1,2)

Residuals:
        Min           1Q       Median           3Q          Max
 -0.1453422   -0.0059241    0.0003346    0.0060912    0.1657796

Coefficients:
             Estimate   Std. Error    t value   Pr(>|t|)
ar1         0.9993125    0.0001763   5668.225     <2e-16 ***
ma1         0.0078088    0.0097499      0.801    0.42318
ma2        -0.0069620    0.0097689     -0.713    0.47605
intercept   0.0008918    0.0003363      2.651    0.00801 **

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Fit:

sigma^2 estimated as 0.0001713, Conditional Sum-of-Squares = 1.8, AIC = -61378.92

model5 = arma(x=TS, order = c(2,1))

ARMA(2,1)

Residuals:
        Min           1Q       Median           3Q          Max
 -0.1454586   -0.0059299    0.0003468    0.0060935    0.1662268

Coefficients:
             Estimate   Std. Error   t value   Pr(>|t|)
ar1         1.927e+00    3.853e-02    50.010     <2e-16 ***
ar2        -9.270e-01    3.851e-02   -24.076     <2e-16 ***
ma2        -9.175e-01    4.143e-02   -22.694     <2e-16 ***
intercept   7.016e-05    4.501e-05     1.559      0.119

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Fit:

sigma^2 estimated as 0.0001712, Conditional Sum-of-Squares = 1.8, AIC = -61384.14

model6 = arma(x=TS, order = c(2,2))

ARMA(2,2)

Residuals:
        Min           1Q       Median           3Q          Max
 -0.1452983   -0.0059312    0.0003416    0.0061028    0.1659971

Coefficients:
             Estimate   Std. Error   t value   Pr(>|t|)
ar1         1.5748255           NA        NA         NA
ar2        -0.5751178           NA        NA         NA
ma1        -0.5677750           NA        NA         NA
ma2        -0.0021529    0.0081304    -0.265      0.791
intercept   0.0003796           NA        NA         NA

Fit:

sigma^2 estimated as 0.0001713, Conditional Sum-of-Squares = 1.8, AIC = -61376.36

Appendix V

GARCH Test

garchFitoutput = garchFit(~garch(1,1), data = TS, trace = F)

Mean and Variance Equation:

data ~ garch(1, 1)

Conditional Distribution:

norm

Coefficients:
         mu        omega       alpha1        beta1
 1.4869e+00   8.8463e-05   9.5187e-01   4.9165e-02

Std. Errors: (based on Hessian)

          Estimate   Std. Error    t value   Pr(>|t|)
mu       1.487e+00    1.027e-03   1447.792     <2e-16 ***
omega    8.846e-05           NA         NA         NA
alpha1   9.519e-01    7.738e-03    123.018     <2e-16 ***
beta1    4.916e-02    1.382e-02      3.557   0.000375 ***

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:

-361.1154 normalized: -0.03432003

Standardised Residual Tests:
                                    Statistic      p-Value
Jarque-Bera Test    R      Chi^2     1251.991      0
Shapiro-Wilk Test   R      W               NA      NA
Ljung-Box Test      R      Q(10)     92955.16      0
Ljung-Box Test      R      Q(15)     136252.9      0
Ljung-Box Test      R      Q(20)     177488        0
Ljung-Box Test      R^2    Q(10)     22.89869      0.01112677
Ljung-Box Test      R^2    Q(15)     28.7552       0.01729239
Ljung-Box Test      R^2    Q(20)     33.16981      0.03232069
LM Arch Test        R      TR^2      26.62115      0.008757645

Information Criterion Statistics:

AIC BIC SIC HQIC

0.06940037 0.07216077 0.06940008 0.07033239

Appendix VI

BDS Test

Embedding dimension = 2 3

Epsilon for close points = 0.0065 0.0131 0.0196 0.0262

Standard Normal =
        [0.0065]   [0.0131]   [0.0196]   [0.0262]
[2]      19.8898    20.8476    21.8879    22.4705
[3]      26.4518    26.3191    26.9905    27.5264

p-value =
        [0.0065]   [0.0131]   [0.0196]   [0.0262]
[2]            0          0          0          0
[3]            0          0          0          0

Appendix VII

Commented R commands

# Call library 'fImport' to import data into R

>library("fImport")

> NAME = "DEXSZUS"

# Compose download URL

> URL <- composeURL( "research.stlouisfed.org/fred2/series/", NAME, "/downloaddata/",

+ NAME, ".csv")

# Download data

> download = read.csv(URL) # Download data

> TS = as.timeSeries(download)

> colnames(TS) = NAME

> SZUS = "DEXSZUS"

> SZUS

[1] "DEXSZUS"

# Autocorrelation plot of data

> acf(TS)

# Partial autocorrelation plot of data

> pacf(TS)

> TS = ts(TS, frequency = 365, start = 1971) # Setting a time series, beginning from year 1971

Regression Models

> TSLAG1 = lag(TS,-1) # One lagged variable for regression

> TSALL = ts.union(TS,TSLAG1) # adding one lagged variable to existing timeseries

> library(lmtest) # Call library 'lmtest' to do regression on time series

> model1 = lm(formula = TS~TSLAG1) # Regression model

> summary(model1) # Summary of regression model

> TSALL1 = na.omit(TSALL) # Omit NAs from combined data

> model2 = lm(formula = TS~TSLAG1, data = TSALL1) # Regression model without NAs

> summary(model2) # Summary of regression model without NAs

ADF TESTS

> library(urca) # Call library 'urca' to do ADF test

> TSdf = ur.df(y = TS, lags = 5, type = 'none')

# ADF test with no trend or drift (lags = 5 as there are 5 working days in a week)

> summary(TSdf) # Summary of ADF test

> TSdf2 = ur.df(y=TS, lags = 5, type = 'drift') # ADF test with drift

> summary(TSdf2) # Summary of ADF test 2

> TSdf3 = ur.df(y=TS, lags = 5, type = 'trend') # ADF test with trend

> summary(TSdf3) # Summary of ADF test 3

PHILLIP-PERRON TESTS

> TSPP1 = ur.pp(TS, type = c("Z-tau"), model = c("constant"), lags = c("short"))

# PP test with intercept and short lags

> summary(TSPP1) # Summary of PP test 1

> TSPP2 = ur.pp(TS, type = c("Z-tau"), model = c("constant"), lags = c("long"))

# PP test with intercept and long lags

> summary(TSPP2) # Summary of PP test 2

> TSPP3 = ur.pp(TS, type = c("Z-tau"), model = c("trend"), lags = c("long"))

# PP test with intercept and trend and long lags

> summary(TSPP3) # Summary of PP test 3

> TSPP4 = ur.pp(TS, type = c("Z-tau"), model = c("trend"), lags = c("short"))

# PP test with intercept and trend and short lags

> summary(TSPP4) # Summary of PP test 4

ARMA TESTS

> library(tseries) # call library 'tseries' to fit ARMA model

> model3 = arma(x=TS, order = c(1,1)) # ARMA model (1,1)

> summary(model3) # summary of ARMA model (1,1)

> model4 = arma(x=TS, order = c(1,2)) # ARMA model (1,2)

> summary(model4) # summary of ARMA model (1,2)

> model5 = arma(x=TS, order = c(2,1)) # ARMA model (2,1)

> summary(model5) # summary of ARMA model (2,1)

> model6 = arma(x=TS, order = c(2,2)) # ARMA model (2,2)

> summary(model6) # summary of ARMA model (2,2)

BDS TEST

> rmodel2 = resid(model2) # residual of regression model without NAs

> bds.test(rmodel2) # BDS test on residual of regression model

GARCH

# Call library 'fGarch' and 'fEcofin' to do GARCH modelling

> library(fGarch)

> library(fEcofin)

> garchFitoutput = garchFit(~garch(1,1), data = TS, trace = F) # Fitting GARCH model (1,1)

> summary(garchFitoutput) # Summary of output for GARCH model (1,1)