Study on the ARCH and GARCH Models Finance Essay


This paper is divided into two parts. The first part describes the ARCH and GARCH models and their uses in finance studies and reports results for two country indices. The second part deals with the determinants of financial leverage using panel data models.

Part I: ARCH and GARCH Models

The concept of time-varying volatility is crucially important in finance. In the light of recent advancements, quantitative models are required to interpret investors' behaviour regarding "expected returns" as well as "risk" (Asteriou and Hall 2007:249). According to Brooks et al (2000:378), the "Autoregressive Conditional Heteroscedasticity" (ARCH) model, proposed by Engle (1982), and the "Generalised ARCH" (GARCH) model, developed by Bollerslev (1986), seem to be successful tools in modelling and forecasting volatility.

This part focuses on modelling the financial time series of Australian General Index and Egyptian CMA General Index and examines the volatility in these markets. It aims to understand the changes of prices over time and the "process by which financing decisions" are taken through "volatility modelling" (Floros 2008:32).

The first part is organised as follows: Section I briefly introduces the ARCH and GARCH models, section II presents the importance of modelling and forecasting volatility in finance, section III provides data information, section IV presents the analysis and the estimation of the two models for each index and section V concludes on the issues discussed above.

I. ARCH Model

The ARCH model suggests that "the variance of the residuals at time t depends on the squared error terms from past periods" (Asteriou and Hall 2007:250).

The form of ARCH(q) is:

\sigma_t^2 = \alpha_0 + \alpha_1 u_{t-1}^2 + \alpha_2 u_{t-2}^2 + \cdots + \alpha_q u_{t-q}^2

and requires \alpha_0 > 0 and \alpha_i \geq 0 (i = 1, \dots, q), so that the conditional variance is never negative.

According to Brooks (2008:387), this model arises from the "autoregressive model"

y_t = \beta' x_t + u_t,

where

x_t is the vector of the explanatory variables, which are lags of the dependent variable,

u_t is the error term, which follows u_t \mid \Omega_{t-1} \sim N(0, \sigma_t^2), and

\sigma_t^2 = \alpha_0 + \alpha_1 u_{t-1}^2 + \cdots + \alpha_q u_{t-q}^2, which is the ARCH model.

GARCH model

The GARCH model is a generalised ARCH model and suggests that the "conditional variance" of residuals at time t depends on the "squared error term" and the "conditional variance" during the previous period (Gujarati and Porter 2009:796).

The form of GARCH(q,p) is:

\sigma_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i u_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2,

where u_{t-i}^2 are the squared error terms and \sigma_{t-j}^2 the conditional variances of the previous periods (Gujarati and Porter 2009).

As Brooks (2008:393) argues, the GARCH model is "more parsimonious" and overcomes some problems of the ARCH model such as the violation of "non-negativity constraints". Moreover, he explains that the GARCH(1,1) captures a high-order of ARCH terms and it can be written as an "infinite-order ARCH model".
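To illustrate the last point, a short derivation (a sketch using the notation above, with \alpha_1 for the ARCH term and \beta_1 for the GARCH term) shows how repeated substitution for the lagged conditional variance expresses GARCH(1,1) as an infinite-order ARCH model:

\sigma_t^2 = \alpha_0 + \alpha_1 u_{t-1}^2 + \beta_1 \sigma_{t-1}^2
           = \alpha_0 + \alpha_1 u_{t-1}^2 + \beta_1 (\alpha_0 + \alpha_1 u_{t-2}^2 + \beta_1 \sigma_{t-2}^2)
           = \frac{\alpha_0}{1 - \beta_1} + \alpha_1 \sum_{i=1}^{\infty} \beta_1^{\,i-1} u_{t-i}^2, \qquad |\beta_1| < 1,

so a single GARCH term summarises the entire history of squared errors with geometrically declining weights.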

Volatility in Finance

There are several reasons why researchers apply these models to model and forecast volatility.

Firstly, "variability of inflation" brings difficulties for decision makers while high "volatility of stock prices" and in the "exchange rates means huge losses and profits" for investors and traders (Gujarati and Porter 200:791).

Moreover, risk managers and option traders need to know the minimum future decline of their "portfolio value" and the "expected volatility" of their contracts in order to reduce risk by hedging, while portfolio managers may "sell a stock or a portfolio" when they expect volatile periods (Mammadli 2004:1).

Finally, volatility estimation in the stock markets is used to price "financial derivatives" and options using the Black-Scholes formula (Koop 2008:197).
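To illustrate how a volatility estimate enters option pricing, the following minimal Python sketch prices a European call with the Black-Scholes formula; the inputs S, K, T, r and sigma are hypothetical, and sigma would in practice be a volatility estimate or forecast from a model such as GARCH.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Price of a European call; sigma is the annualised volatility estimate."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Illustrative inputs: an at-the-money call with six months to expiry
print(black_scholes_call(S=100, K=100, T=0.5, r=0.03, sigma=0.25))
```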

Data

The data includes 675 daily index prices for the Australian stock market (Australia General Index) and 675 for the Egyptian stock market (CMA General Index) over the period 01/07/2008-31/01/2011. Index prices were obtained from DataStream and are transformed into continuously compounded percentage returns using the formula r_t = 100 \ln(P_t / P_{t-1}), where P_t is the index price on day t and P_{t-1} the price on the previous trading day.
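As a minimal sketch, assuming the index prices have been read into a pandas Series called prices (the variable name is illustrative), the transformation can be computed as:

```python
import numpy as np
import pandas as pd

def log_returns_pct(prices: pd.Series) -> pd.Series:
    """Continuously compounded percentage returns: r_t = 100 * ln(P_t / P_{t-1})."""
    return (100 * np.log(prices / prices.shift(1))).dropna()

# e.g. returns = log_returns_pct(prices) gives 674 return observations from 675 prices
```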

Analysis of data and results

Figure 1A and 2A plot the daily returns of both indices.

In figure 1A, we observe that during the period between 50-100 days (August 2008) and 150-200 days (December 2008-January 2009) of the sample, there is high return volatility with large positive and negative returns. The periods between these volatile days appear to be more tranquil with lower volatility.

In figure 2A, during the period between 50-100 days (August-September 2008) and 350-400 days (July 2009) of the sample, we can observe periods with wide swings which are more volatile and riskier. In the last days of the sample (January 2011), there are high negative returns. The periods between these volatile days appear to be relatively tranquil, with lower levels of volatility.

The high volatile periods correspond to major economic events such as the stock market crash of 2008 (Gujarati and Porter 2009).

There is no evidence of "serial correlation in the returns", however, in both series "the volatility appears to cluster" (volatility clustering) which means that large returns are followed by large returns and small returns are followed by small returns (Diebold 2007:342).

Descriptive statistics

Table 1A reports descriptive statistics for the daily returns data. The mean daily return is slightly positive for the Australian Index (0.000302) and slightly negative for the Egyptian Index (-0.00157). The returns of both indices are approximately symmetric, with slight right skewness, and the kurtosis is higher than that of the standard Gaussian distribution.

In figures 3A and 4A, returns seem to be highly leptokurtic, many observations are around the mean and some outliers are far from the mean. The returns have high peak and fat tails compared to normal distribution.

Test for autocorrelation

The autocorrelation test in table 2A shows that there is no evidence of serial correlation in our sample.

Testing for stationarity

In order to investigate the stationarity of the return series, we use the Augmented Dickey-Fuller (ADF) test, where we test H_0: the series contains a unit root against H_1: the series is stationary (Asteriou and Hall 2007).

The results with no constant and no lagged term are shown in tables 3A and 4A. We can reject the null hypothesis which means that both of the series are stationary at 99% confidence level.
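A minimal sketch of this test in Python, assuming the returns series built above and the adfuller function from statsmodels (the regression="n" option, meaning no constant, is the name used in recent versions; older versions use "nc"):

```python
from statsmodels.tsa.stattools import adfuller

# ADF test with no constant and no lagged difference terms, mirroring tables 3A and 4A
result = adfuller(returns, maxlag=0, autolag=None, regression="n")
print("ADF statistic:", result[0])
print("p-value:", result[1])
print("critical values:", result[4])
```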

Testing for ARCH effects

According to Asteriou and Hall (2007), the absence of homoskedasticity in the residuals of an estimated model leads to inefficient coefficient estimates and t-statistics when we use the OLS (ordinary least squares) method. In order to get "fully efficient estimators", we need to "set up a model which recognises if there are ARCH effects", the ARCH model (Asteriou and Hall 2007:252).

Firstly, we estimate the AR(1) model and check for the possible presence of ARCH effects in the residuals.

For an ARCH(1) process, we regress the squared residuals of the AR(1) model on their own first lag:

\hat{u}_t^2 = \gamma_0 + \gamma_1 \hat{u}_{t-1}^2 + v_t

We test:

H_0: \gamma_1 = 0 (no ARCH effects) against H_1: \gamma_1 \neq 0 (ARCH effects are present).
From the results in tables 5A and 6A, we can reject the null hypothesis at 95% confidence level and we can conclude that ARCH effects are present in the residuals of the estimated model.
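A minimal sketch of this two-step check in Python, assuming the returns series from above and recent versions of statsmodels (AutoReg for the AR(1) mean equation and het_arch for Engle's ARCH LM test):

```python
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.stats.diagnostic import het_arch

# Step 1: estimate the AR(1) mean equation and keep its residuals
ar1 = AutoReg(returns, lags=1).fit()

# Step 2: Engle's LM test for ARCH(1) effects in the residuals
lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(ar1.resid, nlags=1)
print("ARCH LM statistic:", lm_stat, "p-value:", lm_pvalue)
```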

Estimating ARCH model

According to Floros (2008:35), because linear models cannot explain phenomena such as "volatility clustering, leptokurtosis, leverage effects and long memory", we consider ARCH and GARCH models appropriate to model the non-constant volatility parameters.

After the tests for stationarity and ARCH effects, we can conclude that ARCH models will provide us with better results for both returns series.

The first step is to estimate an ARCH(1) model based on the following formulation:

y_t = c + \phi y_{t-1} + u_t, \qquad u_t \sim N(0, \sigma_t^2), \qquad \sigma_t^2 = \alpha_0 + \alpha_1 u_{t-1}^2
However, the ARCH(1) model may fail "to capture all of the volatility dynamics", so "longer lags are needed" (Diebold 2007:345).

The second step is to test for higher-order ARCH effects and to estimate the corresponding ARCH model. We decide on the final lag length by examining, at each step, the p-value of the added lag coefficient in the estimated ARCH model; as shown in the sketch below, if the last lag is not significant at the 95% confidence level, we drop it and keep the previous model.
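A minimal sketch of this search, assuming the returns series from above and the third-party Python arch package (parameter labels such as "alpha[q]" follow that package's naming):

```python
from arch import arch_model

# Fit ARCH(q) models with an AR(1) mean equation for increasing q and check
# whether the highest-order ARCH coefficient is significant at the 5% level.
for q in range(1, 6):
    res = arch_model(returns, mean="AR", lags=1, vol="ARCH", p=q).fit(disp="off")
    p_last = res.pvalues[f"alpha[{q}]"]
    print(f"ARCH({q}): p-value of last lag = {p_last:.4f}")
```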

For the Australian Index returns, the estimated ARCH(1) model is reported in table 7A.

For the Egyptian Index returns, the estimated model is reported in table 8A.

In both cases, the p-value of the coefficient of lag 1 is less than 0.05, so both coefficients are statistically significant at the 95% level of confidence. This indicates that volatility today depends on the squared residuals of the previous day. The positive coefficients suggest that if volatility was high in the previous day, it will tend to remain high today, indicating volatility clustering.

We continue by trying higher-order ARCH models as described in the second step.

For the Australian Index (table 9A), we conclude that the coefficient of lag 5 is not significant at the 95% confidence level and does not add significant explanatory power to the model. We drop the fifth lag and end up with the ARCH(4) model.

For the Egyptian Index (table 10A), we conclude that the ARCH(2) model is the best because the lag 3 coefficient of the ARCH(3) model is not significant at the 95% confidence level.

All coefficients are statistically significant at the 95% confidence level. This indicates that for the Australian Index, volatility today depends on the squared residuals of the previous four days, while for the Egyptian Index, volatility today depends on the squared residuals of the previous two days.

Conditional standard deviation for ARCH

Figure 5A shows the conditional standard deviation for the ARCH(4) model of the Australian Index. It appears that the conditional volatility mostly ranges between 0 and 0.02%, although in some periods the volatility exceeds this range.

Figure 6A shows the same graph for the ARCH(2) model of the Egyptian Index. The conditional volatility ranges between 0 and 0.01%, which is an indication that the Egyptian stock market is less volatile and mature than the Australian stock market.

Estimating GARCH model

We can estimate the GARCH(1,1) model as an alternative to the long lag lengths of the ARCH(4) and ARCH(2) models, in order to have more parsimonious models which are easier to estimate.
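A minimal sketch of this estimation, again assuming the returns series from above and the Python arch package (parameter labels such as "alpha[1]" and "beta[1]" follow that package):

```python
from arch import arch_model

# GARCH(1,1) with an AR(1) mean equation as the parsimonious alternative
res = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1).fit(disp="off")
print(res.summary())

# Persistence: a sum of ARCH and GARCH coefficients close to one implies
# highly persistent conditional variance
print("persistence:", res.params["alpha[1]"] + res.params["beta[1]"])

# Conditional standard deviation series, analogous to figures 7A and 8A
cond_sd = res.conditional_volatility
```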

The estimated GARCH(1,1) model for the Australian Index is reported in table 11A, while that for the Egyptian Index is reported in table 12A.

From tables 11A and 12A, we can conclude that all coefficients are statistically significant at 95% confidence level and that the extension to a GARCH(1,1) model seems necessary in both cases.

The sum of GARCH coefficients is close to one in both cases which implies "persistence of the conditional variance" meaning that "a large positive or negative return will lead future forecasts of the variance to be high" (Floros 2008:39).

By changing the values of p and q in the GARCH model (table 13A), we obtain either insignificant parameters or some negative ARCH lagged terms. These results suggest that these are not appropriate models and that GARCH(1,1) is the preferred model for both financial series.

Conditional standard deviation for GARCH

In figures 7A and 8A, we plot the conditional variance series for GARCH(1,1) and we observe that they are quite similar to the corresponding series for the ARCH models (figures 5A, 6A).

This suggests that both models give similar results and that GARCH(1,1) is preferable to a higher-order ARCH model because it has only three parameters, which makes estimation easier and loses fewer degrees of freedom (Asteriou and Hall 2007:262).

Conclusion

In summary, our results suggest that daily returns can be characterised by symmetric GARCH models. GARCH models are able to capture one of the well-known empirical regularities of asset returns, namely volatility clustering (Bologna and Cavallo 2002).

This study may be useful to financial managers and modellers dealing with stock markets, as well as to investors who hold a market portfolio and wish to forecast its volatility.

However, numerous extensions based on the ARCH and GARCH models have been developed in recent years in order to explain "asymmetric effects", such as the leverage effect in stock market time series (Mammadli 2004:1). Moreover, "as more and more volatility securities are priced and traded in financial markets, the demand for good models, process and forecasts pushes the research forward" (Engle 1995:xi).

Part II: Determinants of financial leverage by using panel data models

Capital structure has been one of the most "enigmatic issues" in corporate finance since 1958, when the modern theory of capital structure was established by Modigliani and Miller (Reimoo 2008:5). According to Teker et al (2009:179), capital structure is a "combination of debt and equity" used by firms to finance their assets.

Financial leverage is a useful "indicator to determine the optimal capital structure of firms" where managers can minimise the cost of capital and maximise firm's value, so many studies focus on the "specific characteristics of firms" that influence the financial leverage in the capital structure (Çağlayan and Şak 2010:57).

This part analyses the determinants of leverage and consequently of the capital structure by using panel techniques and three estimation models.

This part is organised as follows: section I describes the data, section II introduces the methodology and discusses the specification of the models we use, section III compares the models and section IV provides analytical and critical discussion of the results. Section V summarises and concludes.

I. Data and descriptive statistics

The selected sample is a balanced panel of 349 companies for a period of 8 years, with a total of 2792 observations.

Table 1B presents the definitions of the dependent and independent variables employed in our analysis and the expected impact of the independent variables on leverage, based on empirical findings as shown by Garcia et al (2005).

Table 2B shows the descriptive statistics of the variables. The mean leverage ratio is around 19.77% for the sample. On average, 35.04% of the firms' assets are warrants and the cost of debt is 6.5%. The means of growth opportunities and firm size are 5.82 and 14.17 respectively.

Table 3B gives the correlation matrix which suggests that there is no problem of multicollinearity among the variables of our sample.

II. Methodology

According to Koop (2008:253), panel data is considered to be a popular statistical method which analyses "cross-sectional and time series aspects". By using panel data analysis, it is possible to control for "heterogeneity biasness" which arises when we omit some "unknown variables or variables" which may affect the dependent variable (Reimoo 2008:52). Therefore, the results are more accurate and unbiased.

We apply pooled ordinary least squares (OLS), fixed effects and random effects models to estimate the following model:

LEV_{it} = \beta_0 + \beta' x_{it} + u_{it},

where LEV_{it} is the leverage ratio of firm i in year t and x_{it} collects the explanatory variables defined in table 1B.
According to Reimoo (2008), each of the above models has limitations, and relying on only one method would be ineffective for the final results.

Pooled OLS model

As Shah and Khan (2007:274) describe, this model is the simplest panel data model; it assumes that the relationships between the variables are constant over time and ignores the "individual effects" in the sample.

According to Asteriou and Hall (2007), the form of the pooled OLS model is:

y_{it} = \alpha + \beta' x_{it} + u_{it},

where i denotes the companies and t the years, x_{it} is the explanatory variables matrix, and u_{it} is the error term, which includes the unobservable time and individual effects and follows u_{it} \sim N(0, \sigma^2).

An advantage of the pooled OLS model is that it can be applied to many cases with various time periods. However, this model assumes that there is no important "industry or time effect on leverage", makes no distinction between the companies and takes the "slope coefficients" of the variables to be "identical" for all companies (Shah and Khan 2007:274).

These assumptions do not correspond to our sample, which is not homogeneous: it includes companies from two different markets, and individual firms may have different characteristics. Therefore, the final results may be biased and the model may lead to wrong conclusions, so we need to work with individual effects models.

Fixed effects model

According to Reimoo (2008:54), the fixed effects model assumes "the slope to be homogeneous" and lets each company have its "own intercept", which absorbs its individual effects.

As Baltagi (2005:11) describes, the form of the fixed effects model is

y_{it} = \alpha + \beta' x_{it} + \mu_i + v_{it},

where \mu_i are the unobservable individual effects, which are treated as fixed parameters, and v_{it} is the remainder disturbance, which follows v_{it} \sim IID(0, \sigma_v^2).

In order to estimate this model, the "least squares dummy variable (LSDV) approach" can be used, which includes a dummy variable for each company (Brooks 2008:491). A drawback of this approach is that when we have a large number of companies (N), we lose up to N "degrees of freedom" and it is impractical to obtain precise estimates for so many parameters (Asteriou and Hall 2007:347).

However, in our analysis, in order to avoid this problem, we perform a transformation of the model. According to Verbeek (2008:360), we take the "deviations from individual time-means", which has the effect of removing the dummy variables. The estimates from this transformation are called "within estimators"; we get the same coefficient results as with the LSDV approach but different standard errors (Verbeek 2008:360).
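A minimal sketch of this within (de-meaning) transformation in Python, assuming the panel is held in a pandas DataFrame with a firm identifier column (the column names here are illustrative):

```python
import pandas as pd

def within_transform(panel: pd.DataFrame, firm_col: str, cols: list) -> pd.DataFrame:
    """Subtract each firm's time-mean from every variable in `cols`.

    Running OLS on the transformed data gives the within (fixed effects)
    estimator; time-invariant variables become identically zero and drop out.
    """
    return panel[cols] - panel.groupby(firm_col)[cols].transform("mean")

# e.g. demeaned = within_transform(panel, "firm", ["leverage", "size", "warrants"])
```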

As Gujarati and Porter (2009:600) state, a disadvantage of the within estimation approach is that it drops the "time-invariant variables", in our case the market variable, so we do not learn how leverage responds to them. Moreover, the within estimation may "distort the parameters values and can certainly remove any long run effects" (Asteriou and Hall 2007:347).

Random effects model

As Baltagi (2005) argues, when N is large we can avoid the loss of degrees of freedom by using the random effects model.

The form of the random effects model is:

y_{it} = \alpha + \beta' x_{it} + \mu_i + v_{it},

which assumes that \mu_i is "random" and follows \mu_i \sim IID(0, \sigma_\mu^2) (Baltagi 2005:14).

An advantage of the random effects model is that it has fewer parameters to estimate, avoids the dummy variables and is a "superior estimator compared to the fixed effect model because the former is the GLS estimator and the latter a limited case of the random effect model" (Asteriou and Hall 2007:348).

However, as Asteriou and Hall (2007:348) state, a drawback of the model is that we need to make "specific assumptions about the distribution of the random components" \mu_i and v_{it}.

According to Gujarati and Porter (2009:603), the random effects model assumes that the explanatory variables are not correlated with the individual effects \mu_i, while the fixed effects model does not make this assumption. Another drawback of the random effects model is that it is not valid, and its estimates are biased, when this assumption is violated (Koop 2008).

III. Analysis and comparison of the models

Estimation of the models

Before moving on to the results, we compare the statistical significance of the three models and interpret the results of the most significant models.

Firstly, we estimate the pooled OLS model (table 4B). After the autocorrelation and heteroskedasticity tests (tables 5B, 6B), we conclude that our data suffers from both autocorrelation and heteroskedasticity. In order to account for both problems and to get more accurate results, we estimate the FGLS (feasible generalised least squares) model in table 7B.

Secondly, we estimate the random effects and the fixed effects models (tables 8B, 9B) and compare them; a sketch of how all three panel estimators could be produced is given below.
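A minimal sketch of the three estimators in Python, assuming the third-party linearmodels package and a DataFrame df indexed by (firm, year) with a leverage column and the regressors of table 1B collected in exog_cols (all names here are illustrative):

```python
import statsmodels.api as sm
from linearmodels.panel import PooledOLS, PanelOLS, RandomEffects

# df is a pandas DataFrame with a MultiIndex of (firm, year)
y = df["leverage"]
X = sm.add_constant(df[exog_cols])

pooled = PooledOLS(y, X).fit()
# drop_absorbed=True lets the estimator drop time-invariant regressors
# (such as the market dummy) that the entity effects absorb
fixed = PanelOLS(y, X, entity_effects=True, drop_absorbed=True).fit()
random = RandomEffects(y, X).fit()

print(pooled.params, fixed.params, random.params, sep="\n\n")
```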

Fixed effects versus Random effects

In order to decide which of the fixed effects and random effects models is more suitable for our data, we use the Hausman test (Gujarati and Porter 2009).

The hypotheses of this test are H_0: the individual effects are uncorrelated with the regressors, so the random effects model is appropriate, against H_1: the fixed effects model is appropriate.

From table 10B, we reject the null hypothesis, and hence the random effects model, at the 95% level of confidence, and we prefer the fixed effects estimation.
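The statistic can also be computed directly from the two sets of estimates; a minimal sketch, assuming the fixed and random effects results objects from the linearmodels sketch above (which expose .params and .cov):

```python
import numpy as np
from scipy import stats

def hausman(fe_res, re_res):
    """Hausman statistic over the coefficients common to both estimators."""
    common = fe_res.params.index.intersection(re_res.params.index)
    b_diff = fe_res.params[common] - re_res.params[common]
    v_diff = fe_res.cov.loc[common, common] - re_res.cov.loc[common, common]
    stat = float(b_diff.T @ np.linalg.inv(v_diff) @ b_diff)
    dof = len(common)
    return stat, dof, stats.chi2.sf(stat, dof)

# e.g. stat, dof, pvalue = hausman(fixed, random); a small p-value favours fixed effects
```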

By using the Breusch and Pagan Lagrangian Multiplier test, we test the null hypothesis that the variance of the individual effects is zero (Gujarati and Porter 2009).

From table 11B, we reject the null hypothesis of constant variance at the 95% level of confidence and we conclude that the random effects model is not appropriate, so the fixed effects model is chosen.

Fixed effects versus pooled OLS

In order to test the fixed effects model against the pooled model, we use the F-test reported in the bottom line of the fixed effects output (table 9B) (Koop 2008). At the 95% level of confidence, we conclude that the fixed effects model should be preferred to pooled OLS.

Other tests for fixed effects model

From the above tests, we feel more confident in using the fixed effects model and explaining our results.

By testing for time-fixed effects (table 12B), we do not reject the null hypothesis that "all the years coefficients are jointly equal to zero"; therefore, no time-fixed effects are needed (Shaban 2011:8).

Moreover, in table 13B, the Wald test checks the null hypothesis of homoskedasticity. We conclude that heteroskedasticity is present in the fixed effects model at the 95% level of confidence, while in table 14B we find evidence of serial correlation. Therefore, we run the fixed effects model with the "cluster" option to account for these problems and to obtain unbiased results and robust standard errors (Hoechle 2007:6). As we can see, the coefficients of the clustered model are similar to those of the fixed effects model (table 15B).
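In the linearmodels sketch above, the analogue of this "cluster" option is a clustered covariance estimator; a minimal illustration, under the same assumptions about y and X:

```python
# Re-estimate the fixed effects model with standard errors clustered by firm,
# which is robust to heteroskedasticity and within-firm serial correlation
fixed_clustered = PanelOLS(y, X, entity_effects=True, drop_absorbed=True).fit(
    cov_type="clustered", cluster_entity=True
)
print(fixed_clustered.std_errors)
```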

IV. Discussion of the Results

Table 16B reports the overall results which seem to be logical and confirm our expectations.

The overall results under the robust fixed effects model suggest the following associations between leverage ratio and the independent variables.

The logarithm of firm size, the level of warrants and the company's reputation are significant and positively related with the leverage ratio. However, growth resources, the cost of debt and growth opportunities are statistically significant and negatively related to the leverage ratio of firms.

More specifically, we can conclude that a 1% increase in firm size, warrants and company's reputation will result in an increase in the leverage ratio of 0.0815%, 0.1296% and 0.1874% respectively.

As Garcia et al (2005:66) suggest, the larger a firm is, the more opportunities it has to penetrate new markets and to borrow more "financial resources for lenders". According to Rajan and Zingales, the more warrants a firm has, the more debt and the higher leverage ratio it has, because firms use the warrants as "collateral" for debt financing (cited in Sayeed 2011:25).

Moreover, as shown in our results, a 1% increase in growth opportunities and growing resources will decrease the leverage ratio by 0.0009% and 0.0363% respectively.

As Garcia et al (2005:62) explain, growth opportunities and resources "reduce the problem of under-investment", which would otherwise require more debt, and this is the reason why these factors have a negative relationship with the leverage ratio.

Finally, a 1% increase in the cost of debt will decrease the leverage ratio by 0.0229%, because when the costs of borrowing funds are high, firms try to reduce debt financing.

However, the fixed effects model omits the market variable, which is the same for many companies in the sample, so it does not provide the impact of the market on firms' leverage ratios. Using the results of the random effects model, we can conclude that firms which belong to market 1 have a lower leverage ratio than firms which belong to market 2.

Conclusion

The developments in panel data analysis allow us to use time series and cross section data concurrently and examine the determinants of financial leverage in a sample of 349 companies for 8 years.

Under the fixed effects model, our results confirm our predictions that firm size, warrants and company's reputation are positively related with the leverage ratio, while growth opportunities, growing resources and cost of debt are negatively related with the leverage ratio. Finally, we have seen that total leverage is affected not only by firms' own characteristics but also by the market to which the firms belong.

Therefore, financial managers can use these results to decide the mix of debt and equity in their firms in order to minimize the cost of capital, which is one of their main objectives.