Using Volatility To Analyse Market Timing Decisions Finance Essay


In finance, volatility, i.e. the standard deviation of a variable (the spread of asset returns, for example), has become a very popular subject for academics, policy makers and practitioners (Yu, 2002). According to Das (1997, p. 8), the focus on risk is driven by the impact of volatility on the values of financial instruments. Moreover, financial data have become much more readily available, and understanding financial markets has become imperative in order to manage an efficient portfolio, forecast the future value of a derivative, or manage risk. The price of a security can change dramatically when volatility is high: the higher the volatility, the riskier the security. But we need to take into account the fact that high volatility can also generate high gains (and high losses as well).

Consequently, volatility is useful for the analysis of market timing decisions, the selection of securities in a portfolio, and the provision of estimates of variance for use in asset pricing models (Brailsford & Faff, 1996). Moreover, stocks are one of the most important sources of funding for individual and institutional investors, together with treasury bills, bonds and options. However, investing in stock markets can be dangerous due to the high level of uncertainty and to a factor which is very hard to determine: volatility. According to McMillan, Speight and ap Gwilym (2000), the stock market crash of 1987 was the starting point of empirical studies based on modelling and forecasting volatility. They explained that three aspects need to be taken into account. Firstly, McMillan (2000) stated that the introduction of automated trading and of trading in derivatives futures and options contracts "may have enhanced the likelihood of large swings in means stocks". Secondly, the use of ARCH and GARCH models has enabled researchers to model volatility with better precision, because these models take into account stylized facts such as volatility clustering (Mandelbrot, 1963). Thirdly, volatility forecasting is used more and more for portfolio selection and market timing (Gemmill, 1993).

Finally, we can emphasize the importance of volatility models with the work of Poon and Granger (2003). First, as we said previously, volatility has become more important due to the increasing use of derivative securities: trading volume has quadrupled, and in order to price an option the volatility of the underlying asset needs to be known. Second, with the introduction of the Basle Accord(s), financial risk management is now a priority; as a consequence, volatility forecasting is now a duty for financial institutions (with the use of value-at-risk, VaR). Third, market volatility can have an impact on the economy in general. Poon and Granger (2003) give the examples of the September 11 attacks and the financial reporting scandals in the United States (such as Enron); these events caused major turmoil in financial markets and in the affected countries in general. Consequently, modelling volatility is essential in order to build an efficient portfolio and make the best decisions possible.

Previous research on modelling volatility has been carried out by numerous authors. Stock market indexes of developed and developing countries have been analysed, but no single study has covered together the five most important stock indexes: the FTSE 100, CAC 40, DAX, S&P 500 and NIKKEI 225.

1.1 Aim:

As we saw previously, no single working paper takes into account together the FTSE 100, CAC 40, DAX, S&P 500 and NIKKEI 225. Modelling the volatility of these five stock indexes will enable us to find evidence that is useful for investors and fund managers. Hence, the aim of our thesis is to model the volatility of five stock market indexes, the FTSE 100, CAC 40, DAX, S&P 500 and NIKKEI 225, and to analyse the behaviour of their volatility.

1.2 Objectives

From the above identified aim, we defined the following objectives:

Model the volatility of the FTSE 100, CAC 40, DAX, S&P 500 and NIKKEI 225 using suitable volatility models.

Find evidence of stylized facts.

1.3 Structure of the thesis:

The dissertation is organised as follows. In Chapter 2, the literature review, we discuss modelling volatility in different stock market indexes, focusing on findings about stylised facts and, more especially, leverage effects. Chapter 3, data, reviews the data for the FTSE 100, CAC 40, DAX, S&P 500 and NIKKEI 225 and analyses the information given by the descriptive statistics. Chapter 4, methodology, explains the models used to model volatility. Chapter 5 describes the empirical results given by the volatility models presented in Chapter 4. Chapter 6 concludes the thesis.

CHAPTER 3

3 Data:

3.1 The stock market indexes:

In order to model volatility, we have chosen five of the most important stock market indexes: FTSE 100, CAC 40, DAX, S&P 500 and NIKKEI 225. Using these five indexes, we can make comparisons and look for common features, in particular leverage effects. We briefly present the five stock indexes used in the thesis.

Firstly, the FTSE 100 (the Financial Times Stock Exchange 100) is the most famous UK index. It was launched on 3 January 1984 with a base value of 1,000. The FTSE 100 constituents are all traded on the London Stock Exchange's SETS trading system.

The CAC 40 (Cotation Assistée en Continu 40) is the most famous French index. It comprises the 40 most highly capitalised blue-chip companies whose stocks are traded in Paris. The index was launched on 31 December 1987 with a base value of 1,000.

The DAX (Deutscher Aktienindex), created on 30 December 1987 with a base value of 1,000, is the German stock index. It comprises 30 companies whose stocks are traded in Frankfurt.

The most famous US index, the S&P 500, was launched on 4 March 1957; it is computed against a 1941-1943 base period with a base value of 10. The index comprises 500 of the most highly capitalised blue-chip US companies.

The Japanese stock index, the Nikkei 225 (Nikkei Heikin Kabuka), is a stock market index which was first calculated on 7 September 1950.

The FTSE 100, CAC 40, DAX and S&P 500 are market value-weighted (the NIKKEI 225, by contrast, is a price-weighted index). In a value-weighted index, a change in the price of one company can have a bigger effect than the same change in another, depending on the market capitalisation of the company: a price change in a company with a large market capitalisation has a bigger effect on the index than one in a company with a smaller market capitalisation.

The value-weighted indexes are calculated in the same way; the following formula is the common way to calculate the value of a market value-weighted stock index:

$$I_t = I_0 \times \frac{\sum_{j=1}^{N} n_{j,t} P_{j,t}}{\text{Base value}}$$

Source: (Watsham & Parramore, 1997, p. 78)

Where:

$I_t$: the value of the index at time t,

$I_0$: the initial value of the index (i.e. 1,000 for the FTSE 100),

$N$: the number of constituents of the index (i.e. 104 in May 2010),

$n_{j,t}$: the number of shares outstanding in company j at time t,

$P_{j,t}$: the price of one of the shares of company j at time t,

Base value: the actual base value (for example 5,428 on 3 September 2010 for the FTSE).
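As a concrete illustration of this formula, here is a minimal Python sketch of the calculation; the share counts and prices are hypothetical placeholders, not actual index data:

```python
import numpy as np

# Hypothetical constituents: shares outstanding and prices at the base
# date (t = 0) and at the current date t. Not real index data.
shares_base = np.array([1_000_000, 2_500_000, 800_000], dtype=float)
prices_base = np.array([4.20, 1.10, 9.75])
shares_now = np.array([1_000_000, 2_500_000, 800_000], dtype=float)
prices_now = np.array([4.55, 1.02, 10.40])

I_0 = 1000.0                                    # initial index value
base_value = np.sum(shares_base * prices_base)  # base market capitalisation

# I_t = I_0 * (current total market capitalisation) / (base value)
I_t = I_0 * np.sum(shares_now * prices_now) / base_value
print(f"Index value at time t: {I_t:.2f}")
```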

Finally, defining volatility is a good starting point for understanding the models used to model it.

3.2 Definition of volatility:

Basically, volatility measures the variability of a variable; it is a measure of uncertainty (González-Rivera, Lee and Mishra, 2004). Volatility refers to the notion of risk: the higher the volatility, the higher the risk.

From a mathematical point of view, we calculate volatility as the standard deviation ($\sigma$) of a variable (the returns of an asset, for example). However, volatility is not directly observable. As a consequence, we need to follow the following framework:

We compute the returns (of an asset or a stock index),

We calculate the variance of returns,

We calculate the standard deviation (i.e. the square root of the variance).

Most of the time, the standard deviation is used to provide a good indication of volatility; the formula is:

$$\sigma = \sqrt{\frac{1}{T-1} \sum_{t=1}^{T} (r_t - \bar{r})^2}$$

Where:

$T$: the number of days (a trading period, for example),

$r_t$: the return on day t,

$\bar{r}$: the average return over the T-day period.

Sometimes the variance ($\sigma^2$) is also used as a volatility measure.
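To make the three-step framework above concrete, a short Python sketch follows; the price series is a synthetic placeholder rather than data from one of the five indexes:

```python
import numpy as np

# Placeholder daily closing prices (in the thesis: FTSE 100, CAC 40,
# DAX, S&P 500 or NIKKEI 225 closing prices).
prices = np.array([100.0, 101.2, 100.5, 102.3, 101.9, 103.4])

# Step 1: compute the returns (log returns, as in Section 3.3).
returns = np.diff(np.log(prices))

# Step 2: compute the sample variance of the returns (T - 1 in the
# denominator, matching the formula above).
variance = returns.var(ddof=1)

# Step 3: the standard deviation is the square root of the variance.
sigma = np.sqrt(variance)
print(f"variance = {variance:.6e}, volatility = {sigma:.6f}")
```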

3.3 Data:

In this paper we use daily closing prices of the UK stock market index (FTSE 100), the French stock market index (CAC 40), the German stock market index (DAX), the US stock market index (S&P 500) and the Japanese stock market index (NIKKEI 225) from January 1, 1993 to June 1, 2010. The closing prices of the stock market indexes were obtained from uk.finance.yahoo.com. These data are not adjusted for dividends, and non-trading days are not taken into account.

In terms of the number of observations, there are 4396 observations for the FTSE 100, 4409 for the CAC 40, 4406 for the DAX, 4385 for the S&P 500 and 4279 for the NIKKEI 225.

Daily returns ($r_t$) are computed as logarithmic price differences (we take the first differences of each logarithmic series), i.e. $r_t = \ln(P_t) - \ln(P_{t-1})$, where $P_t$ is the daily closing price of each stock index at time t.
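In code, this computation is a one-liner; a sketch assuming the closing prices have been saved to a CSV file with a 'Close' column (the file name is hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical export of daily closing prices, e.g. downloaded from
# uk.finance.yahoo.com and saved locally.
close = pd.read_csv("ftse100.csv", index_col=0, parse_dates=True)["Close"]

# r_t = ln(P_t) - ln(P_{t-1}): first difference of the log price series.
returns = np.log(close).diff().dropna()
```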

3.4 Descriptive statistics:

Figure 1: Price and returns

Figure 2: Histogram of returns series

Table 1: Summary statistics of returns

| | FTSE 100 | CAC 40 | DAX | S&P 500 | NIKKEI 225 |
| Observations | 4396 | 4409 | 4406 | 4385 | 4279 |
| Mean | 0.000204 | 0.000250 | 0.000423 | 0.000280 | -1.26E-05 |
| Median | 0.000445 | 0.000391 | 0.000907 | 0.000582 | 4.38E-05 |
| Maximum | 0.098387 | 0.111762 | 0.114020 | 0.115800 | 0.141503 |
| Minimum | -0.088483 | -0.090368 | -0.071639 | -0.09035 | -0.114064 |
| Std. Dev. | 0.011847 | 0.014476 | 0.015060 | 0.012200 | 0.015366 |
| Skewness | 0.020199 | 0.162068 | 0.088899 | -0.007053 | 0.011353 |
| Kurtosis | 9.534910 | 8.194919 | 7.661839 | 11.99919 | 8.517612 |
| Jarque-Bera | 7822.441 | 4977.071 | 3995.573 | 14796.74 | 5428.012 |
| Probability | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |

Figure 1 depicts the statistical properties of the prices and returns of the five stock indexes used in this paper. Table 1 presents the descriptive statistics of all the stock market return series. First, the mean return is positive for the FTSE 100, the CAC 40, the DAX and the S&P 500, and negative for the NIKKEI 225. The means range from a minimum of -0.0000126 (NIKKEI 225) to a maximum of 0.000423 (DAX); as we can see, they are all close to 0.

The sample standard deviations show that the Japanese stock market index (NIKKEI 225) is the most volatile, with a standard deviation of 0.015366, while the UK stock market index (FTSE 100) is the least volatile, with a standard deviation of 0.011847. The graph of the daily closing price of the NIKKEI 225 in Figure 1 confirms this observation.

The skewness statistics indicate that four stock market return series (FTSE 100, CAC 40, DAX and NIKKEI 225) are skewed to the right (the skewness is positive), i.e. there is more mass in the right-hand tail than in the left-hand tail. Furthermore, a positive skewness indicates that returns are non-symmetric.

One stock market (the S&P 500) is skewed to the left (the skewness is negative). According to Tavares and Curto (2008), a negative skewness indicates that returns are asymmetric. With this statement, we can affirm that the returns of the S&P 500 are asymmetric, in contrast to the positively skewed FTSE 100, CAC 40, DAX and NIKKEI 225.

The kurtosis of the five stock market return series is greater than 3 (the value for a standard Gaussian distribution). This is typical of high-frequency financial return series and illustrates the fact that the data follow a leptokurtic distribution: the tails of the return series are thicker than those of the normal distribution. Thus, we can reject the Gaussian hypothesis (the null hypothesis of normality), because the five return series do not show the characteristics of a standard normal distribution (i.e. a skewness of 0 and a kurtosis equal to 3). Because of the high kurtosis, the daily returns of the five stock markets are not normal.

Last but not least, the Jarque-Bera statistic of each stock market index is higher than the critical value at the 5% level (i.e. 5.99), which confirms that we can reject the null hypothesis of normality.
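These diagnostics are easy to reproduce; a minimal sketch with scipy, using a fat-tailed placeholder series in place of the actual index returns:

```python
import numpy as np
from scipy import stats

# Placeholder fat-tailed "returns"; in the thesis these would be the
# daily log returns of one of the five indexes.
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=4000) * 0.01

skewness = stats.skew(returns)
kurt = stats.kurtosis(returns, fisher=False)   # equals 3 for a normal
jb_stat, jb_pvalue = stats.jarque_bera(returns)

# A Jarque-Bera statistic above the 5% critical value of 5.99 rejects
# the null hypothesis of normality.
print(f"skewness = {skewness:.4f}, kurtosis = {kurt:.4f}")
print(f"Jarque-Bera = {jb_stat:.2f} (p-value = {jb_pvalue:.4f})")
```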

Finally, the graphs in Figure 1 suggest that the daily returns of the five stock markets are not a random walk; rather, they exhibit volatility clustering, in other words large (small) returns tend to follow large (small) returns (Mandelbrot, 1963).

Figure 2 shows the histograms of the return series of the five stock market indexes. As we can see, the distribution of the index returns is peaked, which is further evidence of a non-normal distribution. Moreover, according to Kalu O (2010), a peaked distribution "is the sign of recurrent wide changes and an indication of uncertainty in the price discovery process".

CHAPTER 4

4 Methodology:

4.1 Volatility models:

First of all, the following statement needs to be taken into account for our thesis: "the best volatility models depend of the aim of the researcher" (González-Rivera, Lee and Mishra, 2004). So we need to find models which are able to model volatility and provide evidence of stylized facts.

In order to model volatility, two kinds of models are used: linear and non-linear models. Campbell, Lo and MacKinlay (1997) define them as follows:

When we deal with linear models, we assume that shocks are uncorrelated, but it is not compulsory that they be independently and identically distributed.

On the contrary, in the case of non-linear models, shocks are independently and identically distributed.

Linear models carry the following assumption: the variance of the errors is constant, i.e. the model is homoscedastic. Nevertheless, we can rule out linear models very quickly, because features like leptokurtosis, volatility clustering, long memory, volatility smiles or leverage effects are not explained by these models (our thesis aims to illustrate the presence of leptokurtosis, volatility clustering and leverage effects). Indeed, Brooks (2008, p. 380) stated that "linear models are unable to explain a number of important features common to much financial data". Thus, we will turn to non-linear models, where the variance of the errors is not constant; as a consequence, the models are heteroskedastic.

Hence, we will use the classification of Poon and Granger (2003). Their paper is a survey of 20 years of volatility forecasting research, covering 93 published studies. Focusing on 44 time series papers, they described two approaches to volatility modelling and forecasting. The first approach is time series volatility (i.e. the use of historical data); the second is the options-based volatility forecast. The second approach will not be explained, because our thesis deals with stock market modelling, not futures and options markets. However, another category of volatility models needs to be stated: the moving average family (MA family).

In their classification of time series volatility, Poon and Granger (2003) defined three families of time series volatility models. The first family is a group which uses past standard deviations. The second group, the ARCH family, uses conditional volatility. The third family is the stochastic volatility (SV) models.

The first family (using past standard deviations) contains the simplest volatility model, the random walk model (Brooks, 2008). In this model, in order to assess volatility at time t (i.e. $\sigma_t$), we use $\sigma_{t-1}$. Hence, the model suggests that the best way to model or forecast the volatility of month t is to use the volatility of the previous month (i.e. t-1).

The extensions are called the Moving Average (MA) method, the Weighted Moving Average (WMA) method, and the Exponentially Weighted Moving Average (EWMA) method. In the moving average method, the weights of past observations are equal, in contrast to the exponential smoothing method, where the weights of past observations decrease over time. The MA model suggests that in order to forecast the volatility of month t we use the volatility of the previous x months, equally weighted. In the WMA model, by contrast, the weight of older volatility is lower than that of newer volatility (i.e. volatility at time t-1 gets a bigger weight than volatility at time t-2). In the EWMA model, this month's volatility depends on the previous months, with recent observations receiving the biggest weights and older observations lower ones; a sketch of the recursion is given below.
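A minimal sketch of the EWMA recursion, assuming the decay factor λ = 0.94 popularised by RiskMetrics for daily data; the returns are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=1000)   # placeholder daily returns

lam = 0.94                   # decay factor: recent observations weigh most
var_t = returns[:30].var()   # initialise with an early sample variance

# EWMA recursion: sigma^2_t = lam * sigma^2_{t-1} + (1 - lam) * r^2_{t-1}
for r in returns[30:]:
    var_t = lam * var_t + (1.0 - lam) * r ** 2

print(f"EWMA volatility estimate: {np.sqrt(var_t):.6f}")
```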

But these models do not capture leverage effects. Therefore, we cannot use the random walk model or EWMA, because they are not helpful for our thesis.

The second family of time series volatility models is the ARCH family (in the next pages, we will give the formulas of the three volatility models used in this thesis). According to González-Rivera (2004), a volatility model should pick up the stylized facts; as a consequence, new models were created: the ARCH family. First of all, in order to understand ARCH models, we need to start with the following statement: large returns (of either sign) tend to be followed by more large returns (of either sign). As a consequence, the volatility of asset returns appears to be serially correlated (Campbell, Lo & MacKinlay, 1997, p. 482). In order to capture this serial correlation, Engle (1982) developed the ARCH model (Autoregressive Conditional Heteroskedasticity model).

In addition, according to Taylor (2004), the ARCH model is the most popular statistical modelling approach to volatility forecasting. This model was introduced by Engle (1982). The aim of the ARCH model is to simultaneously model the mean and the variance of a time series. Overall, the ARCH model family describes the changes in conditional variance as a function of past returns (De Gooijer and Hyndman, 2006). Finally, the motivation of the ARCH model was to account for the volatility clustering and fat-tail behaviour of the data (González-Rivera, Lee and Mishra, 2004).

Besides, according to Bera and Higgins (1993), ARCH models are very popular because:

They are simple and easy to handle,

They take into account the clustered errors,

They take into account non-linearity.

Last but not least, another way of modelling volatility is the stochastic volatility (SV) models. The difference between GARCH and SV models lies in the equation of the conditional variance: in SV models an error term is included in both the variance and the mean equations, whereas in GARCH models the error term appears only in the mean equation.

SV models refer back to the work of Black and Scholes (1973), in which volatility was assumed to be constant; however, this assumption (constant volatility) is not acceptable, or at least unrealistic, for our thesis. Indeed, we take the point of view of Watsham & Parramore (1997, p. 261): volatility is not constant but time-varying.

In conjunction with our previous criticism, we will also not use SV models because they are more commonly used for option pricing (in our thesis we use stock market data, not options data) and, according to Brooks (2008), they are hard to estimate. Moreover, we can refer to the work of Walsh and Yu-Gen Tsou (1998). In their literature review, they compared the performance of volatility measures in the most important stock markets: overall, when the ARCH/GARCH family (most often GARCH(1,1)), stochastic volatility models and EWMA are compared, the GARCH family is the best model for the US, UK, French and German stock markets. As a consequence, the ARCH/GARCH family is the most appropriate for modelling the volatility of the FTSE 100, CAC 40, DAX, S&P 500 and NIKKEI 225.

The survey of Poon and Granger (2003) does not conclude in favour of a single best model. However, the work of these two authors is clear about one thing: stochastic volatility models are outperformed by moving average models and the ARCH/GARCH family, with no clear winner between the latter two. Besides, in our thesis we do not forecast volatility but model it, and we want to find out whether leverage effects are present in the five stock market indexes. Given this purpose, we can rely on research showing that ARCH/GARCH models are appropriate for modelling volatility clustering and leverage effects (Bollerslev, 1987). As a result, we will use GARCH models for modelling volatility. Moreover, we will see in the next pages (with the presentation of the GARCH models) that they fit perfectly with the aim of our thesis: to find evidence of stylised facts in the five most famous stock markets. Last but not least, Kalu O. (2010) showed that the use of ARCH models is efficient for the purpose of modelling volatility. According to this author, EGARCH and TGARCH are asymmetric models which fit perfectly the aim of capturing volatility clustering and asymmetry.

The next part will explain some models of the ARCH/GARCH family: GARCH, EGARCH and TGARCH. As a first step, in order to understand these volatility models, we need to explain the first model, created by Engle in 1982.

4.1.1 ARCH model:

Before giving the formula of the Autoregressive Conditional Heteroskedasticity (ARCH) model, we need to define three concepts: autoregressive, conditional and heteroskedasticity.

Autoregressive, i.e. the model depends on past values;

Conditional, i.e. the variance depends on past information;

Heteroskedasticity, i.e. the variance is non-constant, in contrast to a homoscedastic model with a constant variance.

The first ARCH model, created by Engle (1982), allows us to simultaneously model the mean and the variance of a time series. As a result, when we use this volatility model we assume that the variable used (in our thesis, the daily returns of the FTSE 100, DAX, CAC 40, S&P 500 and NIKKEI 225) is linear in mean and non-linear in variance.

The ARCH(q) model is defined by the following equation:

$$\sigma_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2$$

Where:

$\sigma_t^2$: the conditional variance,

$q$: the number of lags included in the model,

$\alpha_0, \alpha_i$: the parameters to be estimated,

$\varepsilon_t$: the error term, which represents the "shock or news" about the price at time t.

The term $\varepsilon_{t-i}^2$ can be explained by the following statement: news about volatility from the previous periods is measured as lags of the squared residuals from the mean equation.

The mean equation is defined as follows:

$$r_t = \mu + \varepsilon_t, \qquad \varepsilon_t \mid \Omega_{t-1} \sim N(0, \sigma_t^2)$$

Moreover, the parameters are required to satisfy $\alpha_0 > 0$ and $\alpha_i \geq 0$. Hence, in this model the conditional variance gives a measure of volatility at time t (i.e. $\sigma_t^2$). Moreover, according to Taylor, in the ARCH model the conditional variance is expressed as "a linear function of lagged squared errors terms". Bollerslev (1986) extended this model with the GARCH model, where lagged conditional variance terms are included as well.

This measure of volatility is a function of the q past squared returns. Last but not least, the conditional means in the ARCH and GARCH models are given by the equation $r_t = \mu + \varepsilon_t$.
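To see how this specification generates volatility clustering, one can simulate a small ARCH(1) process; the parameter values below are assumptions chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
T, alpha0, alpha1 = 2000, 1e-5, 0.5   # assumed ARCH(1) parameters

eps = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = alpha0 / (1.0 - alpha1)   # unconditional variance

for t in range(1, T):
    # The conditional variance depends on the previous squared shock...
    sigma2[t] = alpha0 + alpha1 * eps[t - 1] ** 2
    # ...so a large shock raises the variance of the next shock.
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Squared shocks are serially correlated: volatility clustering.
print(np.corrcoef(eps[:-1] ** 2, eps[1:] ** 2)[0, 1])
```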

However, the ARCH model presents some weaknesses. Three negative points of the ARCH(q) model are commonly raised (Engle, 1982):

First, how do we find the value of q? In other words, how do we define the number of lags of the squared residuals?

Second, the required number of lags may be large; with a large number of lags, the estimated conditional variance may not be precise.

Third, the conditional variance needs to be positive, which imposes non-negativity constraints on the parameters.

4.1.2 GARCH model:

The extension of the ARCH model, the GARCH model, was developed by Bollerslev (1986) and Taylor (1986). In contrast to ARCH, the GARCH model enables "the conditional variance to be dependent upon previous own lags" (Brooks, 2008, p. 392). We can also refer to the working paper of Engle and Ng (1993), who stated that the relationship between stock returns and volatility is negative; that is, a rise in stock returns results in a decrease in volatility.

The GARCH(p,q) model equation is the following:

$$\sigma_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2$$

Hence, a new part is added to the ARCH equation, i.e. $\sum_{j=1}^{p} \beta_j \sigma_{t-j}^2$. This GARCH term consists of the previous periods' estimated variances.

Where:

$\sigma_t^2$: the conditional variance,

$\alpha_0, \alpha_i, \beta_j$: the parameters to be estimated,

$q$: the number of return innovation lags included in the model,

$p$: the number of past volatility lags included in the model.

The equation shows that the conditional variance ($\sigma_t^2$) is a linear function of past squared errors and past conditional variances. We will use the GARCH(1,1) model in this thesis; as a result, the equation of GARCH(1,1) is:

$$\sigma_t^2 = \alpha_0 + \alpha_1 \varepsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2$$

Hence, in the case of GARCH(1,1), $\varepsilon_t^2$ behaves like an ARMA(1,1) process; as a result, $\sigma_t^2$ is like a weighted average of:

the long-term average value,

the volatility in the previous period,

the fitted variance from the previous period.

A positive point of using the GARCH model is the fact that, according to Brooks and Burke (2003), a GARCH(1,1) specification is sufficient in order to find the presence of volatility clustering.
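For illustration, GARCH(1,1) can be estimated by maximum likelihood in a few lines with the third-party Python arch package (the thesis estimates were produced in EViews; the returns below are placeholders, and rescaling to percentages follows the package's recommendation):

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Placeholder returns standing in for the daily log returns of one of
# the five indexes (Section 3.3).
rng = np.random.default_rng(3)
returns = pd.Series(rng.standard_t(df=5, size=4000) * 0.01)

# Constant-mean GARCH(1,1) fitted by maximum likelihood.
model = arch_model(returns * 100, mean="Constant", vol="GARCH", p=1, q=1)
res = model.fit(disp="off")

alpha = res.params["alpha[1]"]
beta = res.params["beta[1]"]
print(f"persistence alpha + beta = {alpha + beta:.4f}")
```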

However, GARCH is not a model fitted for assessing the leverage effect, i.e. the fact that good news has less effect on volatility than bad news: positive and negative innovations (the coefficient on $\varepsilon_{t-1}^2$ in the GARCH equation) have the same effect on the conditional variance. In consequence, we need to discuss the volatility model proposed by Nelson in 1991: EGARCH.

4.1.3 Exponential-GARCH model:

The extension of GARCH, the Exponential GARCH (EGARCH) model introduced by Nelson (1991), was created in order to capture the asymmetric effect. The model specifies the log of the conditional variance, which implies that the leverage effect is exponential and that the conditional variance is guaranteed to be non-negative. The EGARCH(1,1) equation, in the form estimated in this thesis, is:

$$\ln(\sigma_t^2) = \omega + \alpha \left| \frac{\varepsilon_{t-1}}{\sigma_{t-1}} \right| + \gamma \frac{\varepsilon_{t-1}}{\sigma_{t-1}} + \beta \ln(\sigma_{t-1}^2)$$

One of the advantages of this model is the fact that, even if the estimated coefficients are negative, $\sigma_t^2$ will be positive; thus we do not need to specify restrictions (no negative coefficients) in order to use EGARCH. Another advantage is the fact that asymmetries are allowed for: good and bad news can have a different impact on volatility.

In this model, the leverage effect is captured when $\gamma < 0$.
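A corresponding estimation sketch with the same arch package and placeholder returns; in this package's parameterisation, a negative estimated gamma[1] signals the leverage effect:

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(4)
returns = pd.Series(rng.standard_t(df=5, size=4000) * 0.01)  # placeholder

# EGARCH(1,1) with one asymmetry term (o=1); the log specification
# means no sign restrictions are needed on the coefficients.
model = arch_model(returns * 100, vol="EGARCH", p=1, o=1, q=1)
res = model.fit(disp="off")
print(res.params["gamma[1]"])   # gamma < 0 would indicate leverage
```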

Last but not least, another asymmetric model is the TGARCH (threshold GARCH) model. The term GJR model is sometimes used to refer to TGARCH.

4.1.4 TGARCH/GJR-GARCH model:

Another extension of the GARCH model is the TGARCH/GJR-GARCH model proposed by Glosten, Jagannathan and Runkle (1993) and Zakoian (1994). An additional term is added in order to take asymmetries into account. By using this model, we make the following assumption: an unexpected variation in stock market returns generates a different impact on the conditional variance depending on its sign. In fact, good news (represented by the coefficient $\alpha$) has a different impact than bad news (the bad-news effect is found by adding the two TGARCH coefficients, i.e. $\alpha + \gamma$). The TGARCH(1,1) equation is:

$$\sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \gamma \varepsilon_{t-1}^2 I_{t-1} + \beta \sigma_{t-1}^2$$

Where:

$I_{t-1} = 1$ if $\varepsilon_{t-1} < 0$ and $I_{t-1} = 0$ in the other cases (i.e. $\varepsilon_{t-1} \geq 0$),

$\gamma$: the asymmetric term.

In this model, the leverage effect is captured when $\gamma > 0$. Unlike with the EGARCH model, for the TGARCH model we need to set up non-negativity conditions. These conditions are the following (the standard GJR conditions, stated here explicitly):

First condition: $\omega > 0$, $\alpha \geq 0$ and $\beta \geq 0$;

Second condition: $\alpha + \gamma \geq 0$.
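A GJR/TGARCH estimation sketch, again with the arch package and placeholder returns; here a positive gamma[1] signals leverage, and the bad-news impact is alpha[1] + gamma[1], matching the discussion above:

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(5)
returns = pd.Series(rng.standard_t(df=5, size=4000) * 0.01)  # placeholder

# GJR-GARCH(1,1): vol="GARCH" with o=1 adds the eps^2 * I(eps<0) term.
model = arch_model(returns * 100, vol="GARCH", p=1, o=1, q=1)
res = model.fit(disp="off")

alpha, gamma = res.params["alpha[1]"], res.params["gamma[1]"]
print(f"good-news impact: {alpha:.4f}")          # alpha
print(f"bad-news impact:  {alpha + gamma:.4f}")  # alpha + gamma
```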

CHAPTER 5

5 Empirical results:

First of all, the results are summarized in Table 2; the detailed results are given in Appendix A.

First, the estimated coefficients are significant at the 1% or 5% significance level, which implies important evidence of ARCH and GARCH effects.

In the GARCH equation, for each index the sum of the ARCH and GARCH coefficients (i.e. α + β) is nearly equal to one: 0.994096 for the FTSE 100, 0.993051 for the CAC 40, 0.990346 for the DAX, 0.995251 for the S&P 500 and 0.991480 for the NIKKEI 225. The closer this sum is to unity, the greater the volatility clustering; in other words, volatility shocks to the conditional variance are quite persistent (Floros, 2008). As a result, we can affirm that, for the five indexes chosen for our thesis, volatility shocks are quite persistent. In his paper modelling the stock market return volatility of the Nigerian Stock Exchange, Kalu (2010) stated that this result (a sum α + β near unity) is an indication that important changes in returns are followed by changes of the same magnitude.

Moreover, according to Brooks (2008, p. 403), when the sum of these two coefficients is near unity, "shocks to the conditional variance will be highly persistent". Furthermore, the value of β in the GARCH equation of the five stock market indexes is high (the maximum is 0.919156 for the S&P 500 and the lowest value is 0.893277 for the NIKKEI 225); this is an indication of long memory in the variance, i.e. of the persistence of volatility when a shock occurs. Hence, the larger this value, the longer the impact of old news persists.
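These persistence figures can be translated into the half-life of a volatility shock, h = ln(0.5) / ln(α + β), a standard transformation not computed in the thesis itself; a quick check with the estimated sums:

```python
import numpy as np

# Estimated alpha + beta from the GARCH(1,1) results in Table 2.
persistence = {"FTSE 100": 0.994096, "CAC 40": 0.993051, "DAX": 0.990346,
               "S&P 500": 0.995251, "NIKKEI 225": 0.991480}

for index, ab in persistence.items():
    # Number of trading days for a shock's effect on variance to halve.
    half_life = np.log(0.5) / np.log(ab)
    print(f"{index}: half-life of a shock = {half_life:.0f} days")
```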

The two other models will show us whether the five most important stock markets in the world present leverage effects, i.e. asymmetric effects.

In the EGARCH model, the leverage effect is captured by γ: it exists when γ < 0. This is the case for four stock market indexes: the CAC 40, DAX, S&P 500 and NIKKEI 225. By contrast, because the coefficient estimate for the FTSE 100 is positive (0.087223), this stock market index does not show a leverage effect. Following Brooks (2008), γ > 0 indicates that positive shocks generate a higher conditional variance than negative shocks, whereas the opposite effect is the one expected (i.e. negative shocks generating a higher conditional variance than positive shocks).

Moreover, among the CAC 40, DAX, S&P 500 and NIKKEI 225, the asymmetric relation found between stock market returns and volatility is strongest and most significant for the US index (the S&P 500), which has the largest asymmetric coefficient in absolute value.

Nevertheless, we have slightly different results with the TGARCH model. This volatility model captures the leverage effect when γ > 0. Because the coefficient γ (the leverage term) is greater than 0 for each stock index, the five stock market indexes present a leverage effect, and the level of significance is very high for all of them. Hence, the TGARCH model shows a leverage effect for all five stock market indexes, whereas the EGARCH model captures it for only four (CAC 40, DAX, S&P 500 and NIKKEI 225). In other words, the EGARCH and TGARCH models do not reach the same conclusion for the FTSE 100.

Furthermore, the TGARCH coefficients give us the impact of bad news on volatility when we sum α and γ. The bad-news impact is equal to 0.11785 (FTSE 100), 0.12361 (CAC 40), 0.13744 (DAX), 0.13420 (S&P 500) and 0.14327 (NIKKEI 225).

Last but not least, because γ > 0, the impact of a negative shock on the conditional variance (α + γ) is greater than that of a positive shock of the same magnitude (α). This situation occurs for all five stock indexes.

Overall, the TGARCH results reinforce the EGARCH results: leverage effects are present in the five most important stock markets.

Because the asymmetric coefficients of the EGARCH and TGARCH models are respectively negative (γ < 0, in the case of the CAC 40, DAX, S&P 500 and NIKKEI 225) and positive (γ > 0, for all five stock market indexes), this is an indication of higher volatility when news (or innovations) is negative. This finding is the same as that of Tavares, Curto & Tavares (2008), who found evidence of stylized facts in the FTSE 100 and S&P 500 with the use of EGARCH, TGARCH and APARCH models.

Table 2: GARCH-family models for volatility:

| INDEX / MODEL | ω | α | γ | β |
| FTSE 100 | | | | |
| GARCH(1,1) | 0.000000914 (4.748933)* | 0.082517 (12.37148)* | | 0.911579 (132.3866)* |
| EGARCH | -0.21614 (-10.70345) | 0.124773 (10.45487)* | 0.087223 (-12.88974) | 0.987271 (568.4699)* |
| TGARCH | 0.00000112 (0.000000162) | 0.00936 (1.285082) | 0.108487 (11.41805)* | 0.925937 (139.8817)* |
| CAC 40 | | | | |
| GARCH(1,1) | 0.00000169 (4.693845)* | 0.077456 (13.55332)* | | 0.915595 (140.6252)* |
| EGARCH | -0.243594 (-10.74026) | 0.134937 (12.96632) | -0.088163 (-14.77314) | 0.984138 (451.845)* |
| TGARCH | 0.0000022 (6.506947)* | 0.012702 (2.569484) | 0.110911 (12.39699)* | 0.920541 (151.5711)* |
| DAX | | | | |
| GARCH(1,1) | 0.00000222 (7.127912)* | 0.089013 (12.71937)* | | 0.901333 (121.8319)* |
| EGARCH | -0.297959 (-11.93805) | 0.163347 (13.13461)* | -0.080923 (-14.21457) | 0.980508 (446.9304)* |
| TGARCH | 0.00000277 (8.491101)* | 0.028093 (4.200547)* | 0.109344 (11.24568)* | 0.902293 (116.6303)* |
| S&P 500 | | | | |
| GARCH(1,1) | 0.000000855 (6.96757)* | 0.076095 (14.15273)* | | 0.919156 (161.4681)* |
| EGARCH | -0.267948 (-13.67741) | 0.129082 (12.66577)* | -0.110204 (-16.7932) | 0.981462 (630.027) |
| TGARCH | 0.0000012 (9.996835)* | -0.005437 (-0.934108) | 0.139636 (15.21553)* | 0.925616 (172.4502)* |
| NIKKEI 225 | | | | |
| GARCH(1,1) | 0.00000448 (7.91805)* | 0.088203 (13.04953)* | | 0.893277 (107.945)* |
| EGARCH | -0.358527 (-10.91628) | 0.167039 (13.40912)* | -0.088896 (-13.13873) | 0.973246 (310.8366)* |
| TGARCH | 0.0000045 (8.151523)* | 0.026404 (4.476885)* | 0.116867 (10.84497)* | 0.896669 (107.6376)* |

Note: Z-statistics in brackets. * represents 1% significance and ** 5% significance.

CHAPTER 6

6 Conclusion:

CHAPTER 7:

Books and journals:

CHAPTER 8:

Appendices:

Appendix A, results of GARCH, EGARCH and TGARCH:

CAC 40

GARCH

Dependent Variable: CAC_40_RETURN
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/05/1993 6/01/2010
Included observations: 4409 after adjustments
Convergence achieved after 9 iterations
Presample variance: backcast (parameter = 0.7)
GARCH = C(2) + C(3)*RESID(-1)^2 + C(4)*GARCH(-1)

| | Coefficient | Std. Error | z-Statistic | Prob. |
| C | 0.000623 | 0.000168 | 3.711113 | 0.0002 |
| Variance Equation | | | | |
| C | 0.00000169 | 0.000000359 | 4.693845 | 0 |
| RESID(-1)^2 | 0.077456 | 0.005715 | 13.55332 | 0 |
| GARCH(-1) | 0.915595 | 0.006511 | 140.6252 | 0 |

| R-squared | -0.000664 | Mean dependent var | 0.00025 |
| Adjusted R-squared | -0.001345 | S.D. dependent var | 0.014476 |
| S.E. of regression | 0.014486 | Akaike info criterion | -5.946304 |
| Sum squared resid | 0.92435 | Schwarz criterion | -5.940506 |
| Log likelihood | 13112.63 | Hannan-Quinn criterion | -5.944259 |
| Durbin-Watson stat | 2.043358 | | |

CAC 40

EGARCH

Dependent Variable: CAC_40_RETURN
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/05/1993 6/01/2010
Included observations: 4409 after adjustments
Convergence achieved after 12 iterations
Presample variance: backcast (parameter = 0.7)
LOG(GARCH) = C(2) + C(3)*ABS(RESID(-1)/@SQRT(GARCH(-1))) + C(4)*RESID(-1)/@SQRT(GARCH(-1)) + C(5)*LOG(GARCH(-1))

| | Coefficient | Std. Error | z-Statistic | Prob. |
| C | 0.000211 | 0.000162 | 1.304887 | 0.1919 |
| Variance Equation | | | | |
| C(2) | -0.243594 | 0.02268 | -10.74026 | 0 |
| C(3) | 0.134937 | 0.010407 | 12.96632 | 0 |
| C(4) | -0.088163 | 0.005968 | -14.77314 | 0 |
| C(5) | 0.984138 | 0.002178 | 451.845 | 0 |

| R-squared | -0.000299 | Mean dependent var | 0.00025 |
| Adjusted R-squared | -0.00098 | S.D. dependent var | 0.014476 |
| S.E. of regression | 0.014483 | Akaike info criterion | -5.972988 |
| Sum squared resid | 0.924014 | Schwarz criterion | -5.96719 |
| Log likelihood | 13171.45 | Hannan-Quinn criterion | -5.970943 |
| Durbin-Watson stat | 2.044103 | | |

CAC 40

TGARCH

Dependent Variable: CAC_40_RETURN
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/05/1993 6/01/2010
Included observations: 4396 after adjustments
Convergence achieved after 11 iterations
Presample variance: backcast (parameter = 0.7)
GARCH = C(2) + C(3)*RESID(-1)^2 + C(4)*RESID(-1)^2*(RESID(-1)<0) + C(5)*GARCH(-1)

| | Coefficient | Std. Error | z-Statistic | Prob. |
| C | 0.000249 | 0.000166 | 1.504494 | 0.1325 |
| Variance Equation | | | | |
| C | 0.0000022 | 0.000000338 | 6.506947 | 0 |
| RESID(-1)^2 | 0.012702 | 0.004943 | 2.569484 | 0.0102 |
| RESID(-1)^2*(RESID(-1)<0) | 0.110911 | 0.008947 | 12.39699 | 0 |
| GARCH(-1) | 0.920541 | 0.006073 | 151.5711 | 0 |

| R-squared | 0 | Mean dependent var | 0.00025 |
| Adjusted R-squared | -0.000908 | S.D. dependent var | 0.014476 |
| S.E. of regression | 0.014483 | Akaike info criterion | -5.971701 |
| Sum squared resid | 0.923737 | Schwarz criterion | -5.964453 |
| Log likelihood | 13169.62 | Hannan-Quinn criterion | -5.969145 |
| Durbin-Watson stat | 2.044714 | | |

DAX

GARCH

Dependent Variable: DAX_RETURNS
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/05/1993 6/01/2010
Included observations: 4409 after adjustments
Convergence achieved after 9 iterations
Presample variance: backcast (parameter = 0.7)
GARCH = C(2) + C(3)*RESID(-1)^2 + C(4)*GARCH(-1)

| | Coefficient | Std. Error | z-Statistic | Prob. |
| C | 0.000828 | 0.000165 | 5.029842 | 0 |
| Variance Equation | | | | |
| C | 0.00000222 | 0.000000312 | 7.127912 | 0 |
| RESID(-1)^2 | 0.089013 | 0.006998 | 12.71937 | 0 |
| GARCH(-1) | 0.901333 | 0.007398 | 121.8319 | 0 |

| R-squared | -0.000724 | Mean dependent var | 0.000423 |
| Adjusted R-squared | -0.001406 | S.D. dependent var | 0.01506 |
| S.E. of regression | 0.01507 | Akaike info criterion | -5.924527 |
| Sum squared resid | 0.99978 | Schwarz criterion | -5.918725 |
| Log likelihood | 13055.73 | Hannan-Quinn criterion | -5.922481 |
| Durbin-Watson stat | 2.034509 | | |

DAX

EGARCH

Dependent Variable: DAX_RETURNS
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/05/1993 6/01/2010
Included observations: 4409 after adjustments
Convergence achieved after 9 iterations
Presample variance: backcast (parameter = 0.7)
LOG(GARCH) = C(2) + C(3)*ABS(RESID(-1)/@SQRT(GARCH(-1))) + C(4)*RESID(-1)/@SQRT(GARCH(-1)) + C(5)*LOG(GARCH(-1))

| | Coefficient | Std. Error | z-Statistic | Prob. |
| C | 0.00045 | 0.000163 | 2.767371 | 0.0057 |
| Variance Equation | | | | |
| C(2) | -0.297959 | 0.024959 | -11.93805 | 0 |
| C(3) | 0.163347 | 0.012436 | 13.13461 | 0 |
| C(4) | -0.080923 | 0.005693 | -14.21457 | 0 |
| C(5) | 0.980508 | 0.002194 | 446.9304 | 0 |

| R-squared | -0.000003 | Mean dependent var | 0.000423 |
| Adjusted R-squared | -0.000912 | S.D. dependent var | 0.01506 |
| S.E. of regression | 0.015067 | Akaike info criterion | -5.942802 |
| Sum squared resid | 0.99906 | Schwarz criterion | -5.93555 |
| Log likelihood | 13096.99 | Hannan-Quinn criterion | -5.940244 |
| Durbin-Watson stat | 2.035975 | | |

DAX

TGARCH

Dependent Variable: DAX_RETURNS
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/05/1993 6/01/2010
Included observations: 4406 after adjustments
Convergence achieved after 11 iterations
Presample variance: backcast (parameter = 0.7)
GARCH = C(2) + C(3)*RESID(-1)^2 + C(4)*RESID(-1)^2*(RESID(-1)<0) + C(5)*GARCH(-1)

| | Coefficient | Std. Error | z-Statistic | Prob. |
| C | 0.000508 | 0.000167 | 3.049271 | 0.0023 |
| Variance Equation | | | | |
| C | 0.00000277 | 0.000000326 | 8.491101 | 0 |
| RESID(-1)^2 | 0.028093 | 0.006688 | 4.200547 | 0 |
| RESID(-1)^2*(RESID(-1)<0) | 0.109344 | 0.009723 | 11.24568 | 0 |
| GARCH(-1) | 0.902293 | 0.007736 | 116.6303 | 0 |

| R-squared | -0.000032 | Mean dependent var | 0.000423 |
| Adjusted R-squared | -0.000941 | S.D. dependent var | 0.01506 |
| S.E. of regression | 0.015067 | Akaike info criterion | -5.943008 |
| Sum squared resid | 0.999089 | Schwarz criterion | -5.935756 |
| Log likelihood | 13097.45 | Hannan-Quinn criterion | -5.94045 |
| Durbin-Watson stat | 2.035916 | | |

S&P 500

GARCH

Dependent Variable: S_P_RETURNS
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/05/1993 6/01/2010
Included observations: 4385 after adjustments
Convergence achieved after 11 iterations
Presample variance: backcast (parameter = 0.7)
GARCH = C(2) + C(3)*RESID(-1)^2 + C(4)*GARCH(-1)

| | Coefficient | Std. Error | z-Statistic | Prob. |
| C | 0.000558 | 0.000125 | 4.451988 | 0 |
| Variance Equation | | | | |
| C | 0.000000855 | 0.000000123 | 6.96757 | 0 |
| RESID(-1)^2 | 0.076095 | 0.005377 | 14.15273 | 0 |
| GARCH(-1) | 0.919156 | 0.005692 | 161.4681 | 0 |

| R-squared | -0.000521 | Mean dependent var | 0.00028 |
| Adjusted R-squared | -0.001206 | S.D. dependent var | 0.0122 |
| S.E. of regression | 0.01220 | Akaike info criterion | -6.448581 |
| Sum squared resid | 0.652815 | Schwarz criterion | -6.442756 |
| Log likelihood | 14142.51 | Hannan-Quinn criterion | -6.446526 |
| Durbin-Watson stat | 2.133723 | | |

S&P 500

EGARCH

Dependent Variable: S_P_RETURNS
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/05/1993 6/01/2010
Included observations: 4385 after adjustments
Convergence achieved after 14 iterations
Presample variance: backcast (parameter = 0.7)
LOG(GARCH) = C(2) + C(3)*ABS(RESID(-1)/@SQRT(GARCH(-1))) + C(4)*RESID(-1)/@SQRT(GARCH(-1)) + C(5)*LOG(GARCH(-1))

| | Coefficient | Std. Error | z-Statistic | Prob. |
| C | 0.000265 | 0.000119 | 2.224322 | 0.0261 |
| Variance Equation | | | | |
| C(2) | -0.267948 | 0.019591 | -13.67741 | 0 |
| C(3) | 0.129082 | 0.010191 | 12.66577 | 0 |
| C(4) | -0.110204 | 0.006562 | -16.7932 | 0 |
| C(5) | 0.981462 | 0.001558 | 630.027 | 0 |

| R-squared | -0.000526 | Mean dependent var | 0.00028 |
| Adjusted R-squared | -0.001211 | S.D. dependent var | 0.0122 |
| S.E. of regression | 0.012207 | Akaike info criterion | -6.481045 |
| Sum squared resid | 0.652818 | Schwarz criterion | -6.475219 |
| Log likelihood | 14213.69 | Hannan-Quinn criterion | -6.478989 |
| Durbin-Watson stat | 2.133713 | | |

S&P 500

TGARCH

Dependent Variable: S_P_RETURNS
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/05/1993 6/01/2010
Included observations: 4385 after adjustments
Convergence achieved after 13 iterations
Presample variance: backcast (parameter = 0.7)
GARCH = C(2) + C(3)*RESID(-1)^2 + C(4)*RESID(-1)^2*(RESID(-1)<0) + C(5)*GARCH(-1)

| | Coefficient | Std. Error | z-Statistic | Prob. |
| C | 0.000259 | 0.000126 | 2.049269 | 0.0404 |
| Variance Equation | | | | |
| C | 0.0000012 | 0.000000121 | 9.996835 | 0 |
| RESID(-1)^2 | -0.005437 | 0.00582 | -0.934108 | 0.3502 |
| RESID(-1)^2*(RESID(-1)<0) | 0.139636 | 0.009177 | 15.21553 | 0 |
| GARCH(-1) | 0.925616 | 0.005367 | 172.4502 | 0 |

| R-squared | -0.000003 | Mean dependent var | 0.00028 |
| Adjusted R-squared | -0.000916 | S.D. dependent var | 0.0122 |
| S.E. of regression | 0.012205 | Akaike info criterion | -6.481504 |
| Sum squared resid | 0.652477 | Schwarz criterion | -6.474222 |
| Log likelihood | 14215.7 | Hannan-Quinn criterion | -6.478935 |
| Durbin-Watson stat | 2.134829 | | |

NIKKEI 225

GARCH

Dependent Variable: NIKKEI_225RETURN
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/05/1993 6/01/2010
Included observations: 4279 after adjustments
Convergence achieved after 10 iterations
Presample variance: backcast (parameter = 0.7)
GARCH = C(2) + C(3)*RESID(-1)^2 + C(4)*GARCH(-1)

| | Coefficient | Std. Error | z-Statistic | Prob. |
| C | 0.000362 | 0.000189 | 1.914373 | 0.0556 |
| Variance Equation | | | | |
| C | 0.00000448 | 0.000000565 | 7.91805 | 0 |
| RESID(-1)^2 | 0.088203 | 0.006759 | 13.04953 | 0 |
| GARCH(-1) | 0.893277 | 0.008275 | 107.945 | 0 |

| R-squared | -0.000001 | Mean dependent var | -0.0000126 |
| Adjusted R-squared | -0.000468 | S.D. dependent var | 0.015366 |
| S.E. of regression | 0.01537 | Akaike info criterion | -5.744911 |
| Sum squared resid | 1.01012 | Schwarz criterion | -5.740451 |
| Log likelihood | 12294.24 | Hannan-Quinn criterion | -5.743335 |
| Durbin-Watson stat | 2.082219 | | |

NIKKEI 225

EGARCH

Dependent Variable: NIKKEI_225RETURN
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/05/1993 6/01/2010
Included observations: 4279 after adjustments
Convergence achieved after 12 iterations
Presample variance: backcast (parameter = 0.7)
LOG(GARCH) = C(2) + C(3)*ABS(RESID(-1)/@SQRT(GARCH(-1))) + C(4)*RESID(-1)/@SQRT(GARCH(-1)) + C(5)*LOG(GARCH(-1))

| | Coefficient | Std. Error | z-Statistic | Prob. |
| C | -0.0000403 | 0.000183 | -0.220319 | 0.8256 |
| Variance Equation | | | | |
| C(2) | -0.358527 | 0.032843 | -10.91628 | 0 |
| C(3) | 0.167039 | 0.012457 | 13.40912 | 0 |
| C(4) | -0.088896 | 0.006766 | -13.13873 | 0 |
| C(5) | 0.973246 | 0.003131 | 310.8366 | 0 |

| R-squared | -0.000003 | Mean dependent var | -0.0000126 |
| Adjusted R-squared | -0.000939 | S.D. dependent var | 0.015366 |
| S.E. of regression | 0.015373 | Akaike info criterion | -5.768817 |
| Sum squared resid | 1.010122 | Schwarz criterion | -5.761384 |
| Log likelihood | 12347.38 | Hannan-Quinn criterion | -5.766191 |
| Durbin-Watson stat | 2.082214 | | |

NIKKEI 225

TGARCH

Dependent Variable: NIKKEI_225RETURN
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/05/1993 6/01/2010
Included observations: 4279 after adjustments
Convergence achieved after 10 iterations
Presample variance: backcast (parameter = 0.7)
GARCH = C(2) + C(3)*RESID(-1)^2 + C(4)*RESID(-1)^2*(RESID(-1)<0) + C(5)*GARCH(-1)

| | Coefficient | Std. Error | z-Statistic | Prob. |
| C | 0.0000103 | 0.00019 | 0.05415 | 0.9568 |
| Variance Equation | | | | |
| C | 0.0000045 | 0.000000552 | 8.151523 | 0 |
| RESID(-1)^2 | 0.026404 | 0.005898 | 4.476885 | 0 |
| RESID(-1)^2*(RESID(-1)<0) | 0.116867 | 0.010776 | 10.84497 | 0 |
| GARCH(-1) | 0.896669 | 0.00833 | 107.6376 | 0 |

| R-squared | -0.000002 | Mean dependent var | -0.0000126 |
| Adjusted R-squared | -0.000938 | S.D. dependent var | 0.015366 |
| S.E. of regression | 0.015373 | Akaike info criterion | -5.766856 |
| Sum squared resid | 1.010121 | Schwarz criterion | -5.759422 |
| Log likelihood | 12343.19 | Hannan-Quinn criterion | -5.76423 |
| Durbin-Watson stat | 2.082216 | | |