Value at Risk (VaR) has long been one of the most widely used measures of risk among portfolio managers and financial institutions alike. It poses a straightforward question: "what loss level is such that we are X% sure it will not be exceeded in N business days?" VaR is popular because it captures risk in a single number and is easy to understand; however, it is not without its pitfalls. Standard VaR calculations assume that stock returns follow a normal distribution, which we know to be untrue.
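For concreteness, under this normal assumption a one-day VaR is simply a quantile of the fitted return distribution. A minimal sketch using only the Python standard library (the mean and volatility figures below are hypothetical, not drawn from our data):

```python
from statistics import NormalDist

def normal_var(mu, sigma, confidence=0.99):
    """One-day VaR under a normal-returns assumption.

    Returns the loss level (as a positive fraction of portfolio value)
    that should not be exceeded with the given confidence.
    """
    # The (1 - confidence) quantile of returns is the worst-case return
    # at this confidence; VaR is its magnitude expressed as a loss.
    z = NormalDist().inv_cdf(1 - confidence)
    return -(mu + z * sigma)

# Hypothetical daily return parameters: mean 0.05%, volatility 1.5%
var_99 = normal_var(0.0005, 0.015, confidence=0.99)
```

Because the normal distribution has thin tails, this quantile systematically understates the extreme losses that fat-tailed return data actually produce.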
It is evident that returns follow a distribution with far fatter tails, so by assuming a normal distribution we grossly underestimate the possibility of a large shock to the market. VaR also assigns all of its risk weight to a single quantile of the loss distribution, and tells us nothing about the potential for losses beyond that quantile. For this reason Expected Shortfall (also known as Conditional VaR or Expected Tail Loss) is often viewed as a more comprehensive risk measure.
ES can be defined as the expected loss given that the loss exceeds the equivalent VaR level: ES = E[L | L > VaR].
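As a quick illustration, given a sample of returns this conditional expectation can be estimated by averaging the losses beyond the VaR quantile. A sketch using the empirical (historical) distribution:

```python
def historical_es(returns, confidence=0.99):
    """Expected Shortfall estimated from a return sample: the average
    loss over the worst (1 - confidence) fraction of observations."""
    losses = sorted((-r for r in returns), reverse=True)  # biggest losses first
    n_tail = max(1, int(len(losses) * (1 - confidence)))  # size of the tail
    return sum(losses[:n_tail]) / n_tail
```

With 1,000 observations at 99%, this averages the 10 worst losses, whereas the corresponding historical VaR would be read off the single loss at that quantile.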
Unlike VaR, ES assigns equal weight to all quantiles beyond the specified quantile. The financial crisis of 2007-08 sent shockwaves through banking systems globally, and many U.S. and European institutions were hit with unforeseen losses. Many commentators believed that firms were not adequately measuring their risk levels. In 2013, in response to the poor handling of this period of extreme financial distress, the Basel Committee on Banking Supervision proposed a fundamental overhaul of bank trading-book rules. These proposals are commonly known as the Third Basel Accord, or "Basel III".
Basel III attempted to standardize the approach to measuring credit risk. One of the primary proposals of the accord was that banks should move from the VaR measurement method to Expected Shortfall, which is more effective at capturing extreme losses. Given that VaR is a cornerstone of risk calculation in both financial institutions and universities worldwide, we were intrigued to see how VaR could be efficiently and accurately modeled, and how these models behaved in comparison with the suggested improvement, ES.
Analysis

Unfortunately, selecting a single method for capturing and forecasting risk is not as straightforward as it sounds. Regardless of which method is chosen, there will always be an element or parameter that is not perfectly represented. We therefore take four of the most commonly used methods for calculating VaR and run a backtesting exercise on each of them. Backtesting is simply a procedure used to compare the performance of different risk measures.

1 "An Empirical Examination of Daily Stock Return Distributions for U.S. Stocks" – S. T. Rachev, S. V. Stoyanov, A. Biglova, F. J. Fabozzi.
We decided to focus on four primary methods, put forward by Jon Danielsson (2011), of estimating the volatility of returns, which is the key element in calculating useful VaR estimates:2

- Moving Average (MA): estimates volatility as the average of squared deviations from the mean, using a rolling window of consecutive data points.
- Historical Simulation (HS): creates a historical series of portfolio value changes and groups these into percentiles, then calculates VaR directly from this empirical distribution of returns.
- Exponentially Weighted Moving Average (EWMA): similar to the MA model, but attaches greater weight to the most recent data points, capturing current volatility more effectively.
- Generalized Autoregressive Conditional Heteroskedasticity (GARCH(1,1)): GARCH models address the issue of volatility clustering (heteroskedastic volatility) by accounting for time-varying volatility, through the use of an additional term in the standard regression model.

Each of these methods gives a different estimate of volatility. They estimate volatility based on a tangency portfolio we put together of five stocks: Johnson & Johnson (JNJ), McDonald's (MCD), International Business Machines (IBM), American Express (AXP) and Wal-Mart Stores (WMT).
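The two recursive estimators, EWMA and GARCH(1,1), can be sketched as follows. The EWMA decay of 0.94 is the standard RiskMetrics choice for daily data; the GARCH parameters shown are illustrative placeholders, not values fitted to our portfolio:

```python
def ewma_variance(returns, lam=0.94):
    """EWMA recursion: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2.
    Older observations decay geometrically, so recent data dominates."""
    sigma2 = returns[0] ** 2  # seed with the first squared return
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
    return sigma2

def garch_variance(returns, omega=0.000002, alpha=0.08, beta=0.90):
    """GARCH(1,1) filter: sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
    The constant omega pulls the forecast back toward a long-run variance,
    which EWMA (omega = 0, alpha + beta = 1) lacks."""
    sigma2 = omega / (1 - alpha - beta)  # start at the long-run variance
    for r in returns:
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
    return sigma2
```

Both return a variance; volatility is its square root, which then feeds the VaR quantile calculation.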
We believe this to be a reasonably well diversified, and therefore realistic, portfolio, although this is not intrinsically vital to our analysis. We constructed the tangency portfolio based on an in-sample period from 1995 through to the end of 2004. We then calculated a daily VaR level for the tangency portfolio using our four chosen methods of volatility estimation. Our aim was to measure the frequency with which the actual loss of the portfolio on a given day exceeded the accompanying VaR estimate. All of our estimates were calculated at the 1% significance level, i.e. we could be 99% certain that we would lose no more than the calculated VaR value. If the loss does exceed VaR, this is considered a "VaR violation". At the end of our assessment we can then analyse the frequency with which VaR was violated, giving us an idea of the accuracy of each method of estimation. The Violation Ratio (VR) for each method can be defined as:

VR = (observed number of violations) / (expected number of violations)

Ideally we hoped to obtain a violation ratio very close to 1 for at least one of our estimation techniques. A violation ratio of 1 indicates that risk is being forecast accurately, as the actual number of violations equals the expected number.
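The backtest loop itself is simple to express; a sketch (the loss and forecast series here are placeholders for the real portfolio data):

```python
def violation_ratio(losses, var_forecasts, p=0.01):
    """VR = observed violations / expected violations. A violation is a
    day whose realized loss exceeds that day's VaR forecast."""
    observed = sum(1 for l, v in zip(losses, var_forecasts) if l > v)
    expected = p * len(losses)  # e.g. 1% of 1,000 days = 10 violations
    return observed / expected
```

Over a 4-year window of roughly 1,000 trading days, the expected count at 1% is about 10, so observing only 2 violations would give a VR near 0.2.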
A figure greater than one indicates that the model understates risk, and less than one indicates an overstatement of risk. It should be noted that we did not engage in active management of our tangency portfolio: to compare VaR forecasts accurately, the portfolio must have a steady weight distribution over time. The length of our estimation window also had to be large enough to accommodate all four estimation techniques. For example, the historical method requires at least 300 days of data to calculate 1% VaR, whereas the EWMA method requires a far smaller minimum window.2 In order to capture performance over a long period of time we opted to take a ten-year snapshot for our analysis: the first six years were used as our estimation window, and the final four years for calculating our VaR models. Assuming 250 trading days in a year, a 1% VaR forecast should see 2.5 violations per year, or 10 violations over our 4-year test period. However, as mentioned before, this is based on the assumption of normally distributed returns.

2 "Financial Risk Forecasting" – J. Danielsson, p. 30.

Findings

The plot of our findings highlights some interesting results.
A cursory glance at the graph immediately shows that the HS method, whose estimate does not fluctuate much over time, produces extremely large VaR figures and appears to overestimate risk. We obtained a violation ratio for HS of approximately 0.197, which confirms that the estimate is very inaccurate and far too conservative. The MA method appears slightly less conservative than HS on the graph, but its VR of 0.197 shows that it was violated exactly the same number of times as the HS model.
We can also see that the EWMA method was equally poor in providing estimates, but in the direction of under-estimating risk: our EWMA model obtained a VR of 1.868, meaning it was violated far too often. The GARCH(1,1) model was the best performer of the four, with a VR of 1.18. Although both of these methods underestimate risk to varying extents, the GARCH method clearly provided the most accurate VaR figures. To analyse further which models performed best, we decided to perform the Kupiec test.
Paul Kupiec's test "attempts to determine whether the observed frequency of exceptions is consistent with the frequency of expected exceptions according to the VaR model and chosen confidence interval".3 The test operates under the following hypotheses: H0: the model is "correct", in terms of violations; H1: the model is incorrect. If the model is indeed correct, the violations should follow a binomial distribution, with the probability of observing x violations in n observations given by:

Pr(x) = C(n, x) * 0.01^x * 0.99^(n-x)

The Kupiec test statistic is calculated via the following likelihood ratio:

LR = -2 ln[ (1 - p)^(n-m) * p^m ] + 2 ln[ (1 - m/n)^(n-m) * (m/n)^m ]

where: p = specified significance level (1%), n = sample size, m = observed number of violations.

Upon performing the Kupiec test we obtained the following p-values for each model:

EWMA = 0.0130
MA = 0.0017
HS = 0.0017
GARCH(1,1) = 0.749

As the p-values for EWMA, MA and HS are all below 0.05, we can reject the null hypothesis for all three models at the 5% significance level. The GARCH model is the only one deemed to exhibit the correct number of violations.
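The statistic above is asymptotically chi-squared with one degree of freedom, so a p-value can be computed directly. A sketch (assumes 0 < m < n, and uses the identity that the chi-squared(1) survival function equals erfc(sqrt(x/2))):

```python
import math

def kupiec_pvalue(n, m, p=0.01):
    """Kupiec unconditional-coverage test: is the observed violation
    frequency m/n consistent with the model's target rate p?
    Assumes 0 < m < n."""
    phat = m / n
    log_l0 = (n - m) * math.log(1 - p) + m * math.log(p)        # likelihood under H0
    log_l1 = (n - m) * math.log(1 - phat) + m * math.log(phat)  # likelihood at the MLE
    lr = -2 * (log_l0 - log_l1)
    return math.erfc(math.sqrt(lr / 2))  # chi-squared(1) survival function
```

When m/n equals p exactly the statistic is zero and the p-value is 1; the further the observed frequency drifts from p in either direction, the smaller the p-value becomes.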
Having completed our investigation thus far at the 99% confidence level, we were intrigued to see what results we would obtain by running the same calculations at the 95% level. Our 95% VaR backtesting graph can be seen below.

3 "Backtesting VaR Models: Quantitative and Qualitative Tests" – C. Blanco, M. Oks.

Again, we computed the VR and the Kupiec test for each model. It is interesting to note that, at the 95% confidence level, the EWMA method provides the most reliable estimates according to the VR:

HS = 0.216
MA = 0.177
EWMA = 1.003
GARCH(1,1) = 0.863

95% Kupiec test p-values:

HS ≈ 0.00
MA ≈ 0.00
EWMA = 0.983
GARCH(1,1) = 0.314

As the p-values for MA and HS are below 0.05, we can reject the null hypothesis for both models at the 5% significance level. The EWMA and GARCH models are the only ones deemed to exhibit the correct number of violations. Given these findings, HS and MA consistently overestimate VaR (their violation ratios are well below 1), which is particularly undesirable. Overall, the GARCH model for estimating VaR appears to be an appropriate measure of risk to apply to real-world investment portfolios, but when the confidence level is relaxed to 95% the EWMA method also performs particularly strongly.
However, it has been shown that manipulation of VaR is all too easy to accomplish through strategic buying and selling of options.4 Manipulating the tails of the distribution can produce a seemingly low VaR figure which, in truth, masks the potential for losses of far greater magnitude. For this reason we wanted to test our best VaR model against the equivalent Expected Shortfall value, calculated with the same risk-estimation method. It must be noted, however, that backtesting ES is a far trickier prospect than backtesting VaR, as ES tests an expectation rather than a single quantile.

4 "Comparative Analyses of Expected Shortfall and Value-at-Risk under Market Stress" – Y. Yamai, T. Yoshiba.

Our ES backtesting followed a similar procedure to the VaR testing. In this scenario we investigated whether the mean of returns on days when VaR is breached equals the mean ES estimate for those days. Clearly, if all estimates are correct then this ratio should be exactly equal to 1. We performed this backtest at 99% on the GARCH and EWMA models only, as according to the Kupiec test these were the two best-performing VaR models.

99% ratio: GARCH(1,1) = 1.132; EWMA = 1.63

Our GARCH ES estimates were violated slightly less frequently than the VaR estimates, with the ratio down from approximately 1.180 to 1.132, while the EWMA estimates were breached significantly less than before. These results signify a solid improvement on all fronts. We then carried out the same procedure at 95% for comparison:

95% ratio: GARCH(1,1) = 1.077; EWMA = 1.164

At the 95% confidence level, the GARCH model is violated approximately as many times as expected. However, it should be noted that there is a significant flaw in the backtesting procedures for ES.
Backtesting ES is essentially a test built on top of a pre-existing VaR test, so any inaccuracies in estimating VaR will carry through and create further inaccuracies in our ES calculations. We can conclude that the reliability of our ES backtesting is heavily dependent on having exceptionally accurate VaR testing procedures. To mitigate this problem, ES backtesting should be carried out over a far larger sample period to ensure accurate results. As we can see from the graphs above, these models performed quite similarly (more so at the 99% confidence level), especially during periods of "average" volatility.
With this in mind, we were interested to discover how each of the models behaved in periods of extreme volatility, so we decided to analyse their performance on the day the portfolio realized its greatest loss (the 16th observation of the out-of-sample period). To do this we compared the expected monetary loss, as estimated by the respective ES models, with the realized portfolio loss on that day. Using a portfolio value of €1 million we obtained the following results:

Expected Shortfall: Realized Loss | EWMA Estimate | GARCH Estimate
99%: €74,550.21 | €74,393.6 | €90,804.73
95%: €50,576.22 | €70,277.31

Both models held firm at 95%. Yet on this day at the 99% confidence level the ES calculated using the EWMA model was exceeded, while the GARCH model held firm. This appears to be because the GARCH model adapts more quickly to changes in volatility than the EWMA model. This characteristic is very important for risk modeling: as volatility changes, we would like our risk measures to capture this information and adapt quickly, alerting us to sudden, sharp increases in its value.
In fact, by inspecting the 99% graph more closely, we can see that following periods of high volatility the EWMA model tends to overestimate ES for many days before correcting itself. We also calculated the mean relative error of the ES estimates for days on which the expected loss level was breached, to gauge the accuracy of each model in these situations. Relative error is defined as the gap between the realized loss and the ES estimate, expressed as a fraction of the estimate:

Relative Error = (Realized Loss - ES Estimate) / ES Estimate

The 99% mean relative errors for the EWMA and GARCH models were very similar, as we expected: 16.24% and 15.6% respectively (with standard deviations of 15.45% and 13.00%). At the 95% level these error values were 20.58% for EWMA and 18.59% for GARCH. These relative-error statistics are in line with our expectations for each model thus far.

Conclusion

To conclude, it is clear from our investigation that ES provides a more comprehensive measure of potential losses than VaR. Despite the additional estimation uncertainty involved in calculating ES, VaR provides no gauge of potential losses beyond a given quantile, and so is too easily manipulated to a misleading extent.
APRM Assignment. (2018, May 15). Retrieved from https://graduateway.com/aprm-assignment/