Extreme Conditional Value-at-Risk: A Coherent Scenario for Risk Management

The WritePass Journal
Contents

CHAPTER ONE
1. INTRODUCTION
1.1 BACKGROUND
1.2 RESEARCH PROBLEM
1.3 RELEVANCE OF THE STUDY
1.4 RESEARCH DESIGN
CHAPTER 2: RISK MEASUREMENT AND THE EMPIRICAL DISTRIBUTION OF FINANCIAL RETURNS
2.1 Risk Measurement in Finance: A Review of Its Origins
2.2 Value-at-Risk (VaR)
2.2.1 Definition and Concepts
2.2.2 Limitations of VaR
2.3 Conditional Value-at-Risk
2.4 The Empirical Distribution of Financial Returns
2.4.1 The Importance of Being Normal
2.4.2 Deviations from Normality
CHAPTER 3: EXTREME VALUE THEORY: A SUITABLE AND ADEQUATE FRAMEWORK?
3.1 The Block of Maxima Method
3.2 The Generalized Extreme Value Distribution
3.2.1 Extreme Value-at-Risk
3.2.2 Extreme Conditional Value-at-Risk (ECVaR): An Extreme Coherent Measure of Risk
CHAPTER 4: DATA DESCRIPTION
CHAPTER 5: DISCUSSION OF EMPIRICAL RESULTS
CHAPTER 6: CONCLUSIONS
References

CHAPTER ONE

1. INTRODUCTION

Extreme financial losses that occurred during the 2007-2008 financial crisis reignited questions of whether existing risk measurement methodologies, which are largely based on the normal distribution, are adequate and suitable for the purpose of risk measurement and management. The major assumptions employed in these frameworks are that financial returns are independently and identically distributed, and follow the normal distribution. However, weaknesses in these methodologies have long been identified in the literature. Firstly, it is now widely accepted that financial returns are not normally distributed; they are asymmetric, skewed, leptokurtic and fat-tailed. Secondly, it is a known fact that financial returns exhibit volatility clustering, so the assumption of independence is violated. The combined evidence concerning these stylised facts necessitates adapting existing methodologies, or developing new ones, that account for all the stylised facts of financial returns explicitly.

In this paper, I discuss two related measures of risk: extreme value-at-risk (EVaR) and extreme conditional value-at-risk (ECVaR). I argue that ECVaR is a better measure of extreme market risk than the EVaR utilised by Kabundi and Mwamba (2009), since it is coherent and captures the effects of extreme market events. In contrast, even though EVaR captures the effect of extreme market events, it is non-coherent.

1.1 BACKGROUND

Markowitz (1952), Roy (1952), Sharpe (1964), Black and Scholes (1973), and Merton's (1973) major toolkit in the development of modern portfolio theory (MPT) and the field of financial engineering consisted of the means, variances, correlations and covariances of asset returns. In MPT, the variance, or equivalently the standard deviation, was the central measure of risk. A major assumption employed in this theory is that financial asset returns are normally distributed. Under this assumption, extreme market events rarely happen; when they do occur, risk managers can simply treat them as outliers and disregard them when modelling financial asset returns.

The assumption of normally distributed asset returns is too simplistic for the financial modelling of extreme market events. During extreme market activity similar to the 2007-2008 financial crisis, financial returns exhibit behaviour that is beyond what the normal distribution can model.
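To make this concrete, the brief sketch below (my own illustration, not part of the original study) compares the probability of a four-standard-deviation daily loss under a normal model with the same probability under a fat-tailed Student-t distribution with three degrees of freedom; all figures are illustrative.

```python
# A minimal sketch, assuming scipy is available. It contrasts the chance of a
# "4-sigma" daily loss under normality with a unit-variance Student-t(3).
from scipy import stats

p_normal = stats.norm.cdf(-4.0)            # ~3e-05: once in roughly 125 years of daily data

v = 3                                      # degrees of freedom (illustrative)
scale = ((v - 2) / v) ** 0.5               # rescales the t to unit variance
p_t = stats.t.cdf(-4.0, df=v, scale=scale)

print(f"P(4-sigma loss) under normal:    {p_normal:.2e}")
print(f"P(4-sigma loss) under Student-t: {p_t:.2e}")   # about two orders of magnitude larger
```

Under the fat-tailed alternative, the "rare" event is roughly a hundred times more likely, which is the essence of the argument that follows.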
Starting with the work of Mandelbrot (1963), there is increasingly convincing empirical evidence suggesting that asset returns are not normally distributed: they exhibit asymmetric behaviour, "fat tails", and higher kurtosis than the normal distribution can accommodate. The implication is that extreme negative returns do occur, and are more frequent than predicted by the normal distribution. Therefore, measures of risk based on the normal distribution will underestimate the risk of portfolios and lead to huge financial losses, and potentially to the insolvency of financial institutions. To mitigate the effects of inadequate risk capital buffers stemming from the underestimation of risk by normality-based financial modelling, risk measures such as EVaR that go beyond the assumption of normally distributed returns have been developed. However, EVaR is non-coherent, just like the VaR from which it is derived. The implication is that, even though it captures the effects of extreme market events, it is not a good measure of risk since it does not reflect diversification, contradicting one of the cornerstones of portfolio theory. ECVaR naturally overcomes these problems since it is coherent and can capture extreme market events.

1.2 RESEARCH PROBLEM

The purpose of this paper is to develop extreme conditional value-at-risk (ECVaR), and to propose it as a better measure of risk than EVaR under conditions of extreme market activity, when financial returns exhibit volatility clustering and are not normally distributed. Kabundi and Mwamba (2009) have proposed EVaR as a better measure of extreme risk than the widely used VaR; however, it is non-coherent. ECVaR is coherent and captures the effect of extreme market activity. It is thus better suited to model extreme losses during market turmoil, and it reflects diversification, an important requirement for any risk measure in portfolio theory.

1.3 RELEVANCE OF THE STUDY

The assumption that financial asset returns are normally distributed understates the possibility of infrequent extreme events whose impact is more detrimental than that of more frequent events. The use of VaR and CVaR under this assumption underestimates the riskiness of assets and portfolios, and can eventually lead to huge losses and bankruptcies during times of extreme market activity. There are many adverse effects of using the normal distribution in the measurement of financial risk, the most visible being the loss of money due to underestimated risk. During the global financial crisis, a number of banks and non-financial institutions suffered huge financial losses; some went bankrupt and failed, partly because of inadequate capital allocation stemming from the underestimation of risk by models that assumed normally distributed returns.

Measures of risk that do not assume normality of financial returns have been developed. One such measure is EVaR (Kabundi and Mwamba (2009)). EVaR captures the effect of extreme market events; however, it is not coherent. As a result, EVaR is not a good measure of risk since it does not reflect diversification. In financial markets characterised by multiple sources of risk and extreme market volatility, it is important to have a risk measure that is coherent and can capture the effect of extreme market activity. ECVaR is advocated to fulfil this role of measuring extreme market risk while conforming to portfolio theory's wisdom of diversification.
1.4 RESEARCH DESIGN

Chapter 2 presents a literature review of the risk measurement methodologies currently used by financial institutions, in particular VaR and CVaR, together with the strengths and weaknesses of these measures. I also discuss EVaR, a less widely known advancement in risk measurement, and argue that it is not a good measure of risk since it is non-coherent. This leads to Chapter 3, which presents ECVaR as a better risk measure that is coherent and can capture extreme market events; ECVaR is developed as a modelling framework that naturally overcomes the normality assumption of asset returns in the modelling of extreme market events. This is followed by a comparative analysis of EVaR and ECVaR using financial data covering both the pre-crisis and crisis periods. Chapter 4 is concerned with data sources, preliminary data description, and the estimation of EVaR and ECVaR. Chapter 5 discusses the empirical results and their implications for risk measurement. Finally, Chapter 6 gives conclusions and highlights directions for future research.

CHAPTER 2: RISK MEASUREMENT AND THE EMPIRICAL DISTRIBUTION OF FINANCIAL RETURNS

2.1 Risk Measurement in Finance: A Review of Its Origins

The concept of risk was known for many years before Markowitz's portfolio theory (MPT). Bernoulli (1738) solved the St. Petersburg paradox and derived fundamental insights into risk-averse behaviour and the benefits of diversification. In his formulation of expected utility theory, Bernoulli did not define risk explicitly; however, he inferred it from the shape of the utility function (Butler et al. (2005: 134); Brachinger and Weber (1997: 236)). Irving Fisher (1906) suggested the use of variance to measure economic risk. Von Neumann and Morgenstern (1947) used expected utility theory in the analysis of games, and consequently deduced much of the modern understanding of decision making under risk or uncertainty. Therefore, contrary to popular belief, the concept of risk was known well before MPT.

Even though the concept of risk was known before MPT, Markowitz (1952) first provided a systematic algorithm to measure risk, using the variance in the formulation of the mean-variance model for which he won the Nobel Prize in 1990. The development of the mean-variance model inspired research in decision making under risk and the development of risk measures. The study of risk and decision making under uncertainty (which is treated the same as risk in most cases) stretches across disciplines. In decision science and psychology, Coombs and Pruitt (1960), Pruitt (1962), Coombs (1964), Coombs and Meyer (1969), and Coombs and Huang (1970a, 1970b) studied the perception of gambles and how preferences among them are affected by their perceived risk. In economics, finance and measurement theory, Markowitz (1952, 1959), Tobin (1958), Pratt (1964), Pollatsek and Tversky (1970), Luce (1980) and others investigated portfolio selection and the measurement of the risk of portfolios, and of gambles in general. Their collective work produced a number of risk measures that vary in how they rank the riskiness of options, portfolios, or gambles. Though the risk measures vary, Pollatsek and Tversky (1970: 541) recognise that they share the following:

(1) Risk is regarded as a property of choosing among options.
(2) Options can be meaningfully ordered according to their riskiness.

(3) As suggested by Irving Fisher in 1906, the risk of an option is somehow related to the variance or dispersion in its outcomes.

In addition to these basic properties, Markowitz regards risk as a "bad", implying something that is undesirable. Since Markowitz (1952), many risk measures, such as the semi-variance and the absolute deviation (see Brachinger and Weber (1997)), have been developed; however, the variance continued to dominate empirical finance. It was in the 1990s that a new measure, VaR, was popularised and became the industry-standard risk measure. I present this risk measure in the next section.

2.2 Value-at-Risk (VaR)

2.2.1 Definition and Concepts

Beyond these basic ideas concerning risk measures, there is no universally accepted definition of risk (Pollatsek and Tversky, 1970: 541); as a result, risk measures continue to be developed. J.P. Morgan and Reuters (1996) pioneered a major breakthrough in the advancement of risk measurement with the use of value-at-risk (VaR), and the subsequent Basel Committee recommendation that banks could use it for their internal risk management. VaR is concerned with measuring the risk of a financial position due to uncertainty regarding the future levels of interest rates, stock prices, commodity prices, and exchange rates. The risk resulting from movements in these market factors is called market risk. VaR is the expected maximum loss of a financial position with a given level of confidence over a specified horizon. VaR answers the question: what is the maximum amount that I can lose over, say, the next ten days with 99 percent confidence? Put differently, what is the loss that will be exceeded only one percent of the time over the next ten days?

I illustrate the computation of VaR using one of the available methods, namely parametric VaR. I denote by $r_t$ the rate of return and by $V_t$ the portfolio value at time $t$. Then $r_t$ is given by

$$r_t = \frac{V_t - V_{t-1}}{V_{t-1}} \qquad (1)$$

The actual loss $L_t$ (the negative of the profit, which is $V_t - V_{t-1}$) is given by

$$L_t = -(V_t - V_{t-1}) = -V_{t-1}\, r_t \qquad (2)$$

When $r_t$ is normally distributed with mean $\mu$ and standard deviation $\sigma$ (as is normally assumed), the variable $z_t = (r_t - \mu)/\sigma$ has a standard normal distribution with mean of 0 and standard deviation of 1. We can calculate VaR from the following equation:

$$P(z_t \le z_{1-c}) = 1 - c \qquad (3)$$

where $c$ is the confidence level. If we assume a 99% confidence level, we have

$$P(z_t \le -2.33) = 0.01 \qquad (4)$$

In (4), $-2.33$ is the standardised quantile behind our VaR at the 99% confidence level, and we will exceed this VaR only 1% of the time. From (4), it can be shown that the 99% confidence VaR is given by

$$\mathrm{VaR}_{0.99} = -V_{t-1}\left(\mu - 2.33\,\sigma\right) \qquad (5)$$

Generalising from (5), we can state the $\alpha$-quantile VaR of the distribution as follows:

$$\mathrm{VaR}_{\alpha} = -V_{t-1}\left(\mu + \sigma\,\Phi^{-1}(1-\alpha)\right) \qquad (6)$$

where $\Phi^{-1}$ denotes the inverse of the standard normal distribution function. VaR is an intuitive measure of risk that can be easily implemented. This is evident in its wide use in the industry.
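As a quick illustration of equation (6), the sketch below computes a one-day parametric VaR in Python; the portfolio value, mean and volatility are illustrative numbers only, not estimates from this study's data.

```python
# A minimal sketch of the parametric (normal) VaR of equation (6),
# assuming scipy is available; all input figures are illustrative.
from scipy import stats

def parametric_var(value, mu, sigma, alpha=0.99):
    """One-period VaR at confidence level alpha for a position worth
    `value`, with normally distributed returns N(mu, sigma^2)."""
    z = stats.norm.ppf(1.0 - alpha)        # e.g. -2.33 when alpha = 0.99
    return -value * (mu + sigma * z)       # reported as a positive loss

# Example: a 1,000,000 portfolio, 0.05% mean daily return, 2% daily volatility
print(parametric_var(1_000_000, 0.0005, 0.02))   # roughly 46,000
```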
However, is VaR an optimal measure? The next section addresses its limitations.

2.2.2 Limitations of VaR

Artzner et al. (1997, 1999) developed a set of axioms such that, if a risk measure satisfies them, the measure is "coherent". The implication of coherent measures of risk is that "it is not possible to assign a function for measuring risk unless it satisfies these axioms" (Mitra, 2009: 8). Risk measures that satisfy these axioms can be considered universal and optimal, since they are founded on the same generally accepted mathematical axioms. Letting $\rho$ be a risk measure defined on two portfolios $X$ and $Y$, the risk measure $\rho$ is coherent if it satisfies the following axioms:

(1) Monotonicity: if $X \le Y$, then $\rho(Y) \le \rho(X)$. We interpret the monotonicity axiom to mean that higher losses are associated with higher risk.

(2) Homogeneity: $\rho(\lambda X) = \lambda\,\rho(X)$ for $\lambda \ge 0$. Assuming that there is no liquidity risk, the homogeneity axiom means that risk scales in proportion to the quantity of a stock purchased; we cannot reduce or increase risk per unit invested by investing different amounts in the same stock.

(3) Translation invariance: $\rho(X + \alpha R) = \rho(X) - \alpha$, where $R$ is a riskless security. This means that investing an additional amount $\alpha$ in a riskless asset reduces the risk of the portfolio by that amount with certainty.

(4) Sub-additivity: $\rho(X + Y) \le \rho(X) + \rho(Y)$. Possibly the most important axiom, sub-additivity ensures that a risk measure reflects diversification: the combined risk of two portfolios is no greater than the sum of the risks of the individual portfolios.

VaR does not satisfy the most important axiom, sub-additivity, and is thus non-coherent. Moreover, VaR tells us what we can expect to lose if an extreme event does not occur; it does not tell us the extent of the losses we can incur if a "tail" event occurs. VaR is therefore not an optimal measure of risk. The non-coherence, and therefore non-optimality, of VaR as a measure of risk led to the development of conditional value-at-risk (CVaR) by Artzner et al. (1997, 1999), and Uryasev and Rockafellar (1999). I discuss CVaR in the next section.

2.3 Conditional Value-at-Risk

CVaR is also known as "expected shortfall" (ES), "tail VaR", or "tail conditional expectation", and it measures risk beyond VaR. Yamai and Yoshiba (2002) define CVaR as the conditional expectation of losses given that the losses exceed VaR. Mathematically, CVaR is given by the following:

$$\mathrm{CVaR}_{\alpha} = E\left[\,L \mid L > \mathrm{VaR}_{\alpha}\,\right] \qquad (7)$$

CVaR offers more insight concerning risk than VaR, in that it tells us what we can expect to lose if losses do exceed VaR. Unfortunately, the finance industry has been slow in adopting CVaR as its preferred risk measure. This is despite the fact that "the actuarial/insurance community has tended to pick up on developments in financial risk management much more quickly than financial risk managers have picked up on developments in actuarial science" (Dowd and Black (2006: 194)). Hopefully, the effects of the financial crisis will change this observation.

In much of the application of VaR and CVaR, returns have been assumed to be normally distributed. However, it is widely accepted that returns are not normally distributed. The implication is that VaR and CVaR, as currently used in finance, will not capture extreme losses. This leads to the underestimation of risk and inadequate capital allocation across business units; in times of market stress, when extra capital is required, it will be inadequate, and this may lead to the insolvency of financial institutions. Methodologies that can capture extreme events are therefore needed. In the next section, I discuss the empirical evidence on financial returns, and thereafter discuss extreme value theory (EVT) as a suitable framework for modelling extreme losses.
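Before moving on, a small sketch shows how equation (7) is estimated empirically: the historical VaR is a sample quantile of losses, and CVaR is the average of the losses beyond it. The fat-tailed Student-t sample below merely stands in for real return data.

```python
# A minimal empirical sketch of equation (7); the simulated Student-t losses
# are illustrative stand-ins for historical loss data.
import numpy as np

def var_cvar(losses, alpha=0.99):
    """Historical VaR (the alpha-quantile of losses) and CVaR
    (the mean loss conditional on exceeding that VaR)."""
    var = np.quantile(losses, alpha)
    cvar = losses[losses > var].mean()
    return var, cvar

rng = np.random.default_rng(42)
losses = rng.standard_t(df=3, size=100_000)   # fat-tailed illustrative losses

var99, cvar99 = var_cvar(losses)
print(f"VaR(99%) = {var99:.2f}, CVaR(99%) = {cvar99:.2f}")  # CVaR always exceeds VaR
```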
2.4 The Empirical Distribution of Financial Returns

Back in 1947, Geary wrote, "Normality is a myth; there never was, and never will be, a normal distribution" (as cited by Krishnaiah (1980: 279)). Today this remark is supported by a voluminous amount of empirical evidence against normally distributed returns; nevertheless, normality continues to be the workhorse of empirical finance. If the normality assumption fails to pass empirical tests, why are practitioners so obsessed with the bell curve? Could their obsession be justified? To uncover some possible responses to these questions, let us first look at the importance of being normal, and then at the dangers of incorrectly assuming normality.

2.4.1 The Importance of Being Normal

The normal distribution is the most widely used distribution in statistical analysis in all fields that use statistics to explain phenomena. When the normal distribution is assumed for a population, it delivers a rich set of tractable mathematical results that are easy to implement (Mardia, 1980: 279), and the population can be summarised simply by its mean and variance. The chief advantage is that the modelling process under the normality assumption is very simple. In fields that deal with natural phenomena, such as physics and geology, the normal distribution has unequivocally succeeded in explaining the variables of interest. The same cannot be said of finance. The normal probability distribution has been subject to rigorous empirical rejection: a number of stylised facts of asset returns, statistical tests of normality, and the occurrence of extreme negative returns dispute the normal distribution as the underlying data-generating process for asset returns. We briefly discuss these empirical findings next.

2.4.2 Deviations from Normality

Ever since Mandelbrot (1963), Fama (1963), and Fama (1965), among others, it has been known that asset returns are not normally distributed. The combined empirical evidence since the 1960s points to the following stylised facts of asset returns:

(1) Volatility clustering: periods of high volatility tend to be followed by periods of high volatility, and periods of low volatility tend to be followed by periods of low volatility.

(2) Autoregressive price changes: a price change depends on price changes in past periods.

(3) Skewness: positive price changes and negative price changes are not of the same magnitude.

(4) Fat tails: the probabilities of extreme negative (positive) returns are much larger than predicted by the normal distribution.

(5) Time-varying tail thickness: more extreme losses occur during turbulent market activity than during normal market activity.

(6) Frequency-dependent fat tails: high-frequency data tend to be more fat-tailed than low-frequency data.

In addition to these stylised facts, the extreme events of the 1974 German banking crisis, the 1978 banking crisis in Spain, the Japanese banking crisis of the 1990s, September 2001, and the 2007-2008 US experience (BIS, 2004) could not have happened under the normal distribution. Alternatively, we could simply have treated them as outliers and disregarded them; however, experience has shown that even those devoted to the Gaussian distribution could not ignore the detrimental effects of the 2007-2008 global financial crisis. With these empirical facts known to the quantitative finance community, what is the motivation for the continued use of the normality assumption? It could be that those who stick with the normality assumption know only how to deal with normally distributed data. It is their hammer; everything that comes their way seems like a nail! As Esch (2010) notes, those who do have other tools to deal with non-normal data often continue to use the normal distribution on the grounds of parsimony. However, "representativity should not be sacrificed for simplicity" (Fabozzi et al., 2011: 4).
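The stylised facts above are easy to check. The sketch below is an illustration with simulated fat-tailed data standing in for index returns; it computes the excess kurtosis and the Jarque-Bera normality test, which rejects normality just as decisively on real equity-index returns.

```python
# A minimal sketch of a normality check, assuming scipy/numpy are available;
# the Student-t(5) sample is an illustrative stand-in for daily index returns.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=5, size=5_000)   # fat-tailed "returns"

print("excess kurtosis:", stats.kurtosis(returns))  # ~0 for normal data, clearly > 0 here
jb_stat, p_value = stats.jarque_bera(returns)
print(f"Jarque-Bera = {jb_stat:.1f}, p-value = {p_value:.3g}")
# A p-value near zero rejects the null hypothesis of normally distributed returns.
```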
Better modelling frameworks have been developed to deal with the extreme values that characterise departures from normality. Extreme value theory is one such methodology; it has enjoyed success in fields outside finance, and has been used to model financial losses with success. In the next chapter, I present extreme value-based methodologies as a practical and better way to overcome non-normality in asset returns.

CHAPTER 3: EXTREME VALUE THEORY: A SUITABLE AND ADEQUATE FRAMEWORK?

Extreme value theory was developed to model extreme natural phenomena such as floods, extreme winds, and temperatures, and is well established in fields such as engineering, insurance, and climatology. It provides a convenient way to model the tails of distributions, thereby capturing behaviour that normality-based methods miss. Since it concentrates on the tails of distributions, it has been adopted to model asset returns in times of extreme market activity (see Embrechts et al. (1997); McNeil and Frey (2000); Danielsson and de Vries (2000)). Gilli and Kellezi (2003) point out two related ways of modelling extreme events. The first describes the maximum loss through a limit distribution known as the generalised extreme value (GEV) distribution, a family of asymptotic distributions that describe normalised maxima or minima. The second provides an asymptotic distribution for scaled excesses over high thresholds, known as the generalised Pareto distribution (GPD). The two limit distributions give rise to two approaches to EVT-based modelling: the block of maxima method and the peaks over threshold method, respectively.

3.1. The Block of Maxima Method

Let us consider independent and identically distributed (i.i.d.) random variables $X_1, X_2, \ldots$ with common distribution function $F$, and let $M_n = \max(X_1, \ldots, X_n)$ be the maximum of the first $n$ random variables. Let $x_F$ denote the upper end point of $F$. The corresponding results for minima can be obtained from the identity

$$\min(X_1, \ldots, X_n) = -\max(-X_1, \ldots, -X_n) \qquad (8)$$

Since $P(M_n \le x) = F^n(x)$, $M_n$ converges almost surely to $x_F$, whether it is finite or infinite. Following Embrechts et al. (1997), and Shanbhag and Rao (2003), the limit theory finds norming constants $c_n > 0$ and $d_n$, and a non-degenerate distribution function $H$, such that the distribution function of the normalised version of $M_n$ converges to $H$ as follows:

$$P\!\left(\frac{M_n - d_n}{c_n} \le x\right) = F^n(c_n x + d_n) \longrightarrow H(x), \qquad n \to \infty \qquad (9)$$

$H$ is an extreme value distribution function, and $F$ is in the domain of attraction of $H$ (written $F \in \mathrm{MDA}(H)$) if equation (9) holds for suitable values of $c_n$ and $d_n$. Two extreme value distribution functions $H_1$ and $H_2$ belong to the same family if $H_2(x) = H_1(ax + b)$ for some $a > 0$, $b$, and all $x$.

Fisher and Tippett (1928), De Haan (1970, 1976), Weissman (1978), and Embrechts et al. (1997) show that the limit distribution function $H$ belongs to one of the following three distribution functions, for some $\alpha > 0$:

$$\text{Fréchet:}\quad \Phi_\alpha(x) = \begin{cases} 0, & x \le 0 \\ \exp\!\left(-x^{-\alpha}\right), & x > 0 \end{cases} \qquad (10)$$

$$\text{Weibull:}\quad \Psi_\alpha(x) = \begin{cases} \exp\!\left(-(-x)^{\alpha}\right), & x \le 0 \\ 1, & x > 0 \end{cases} \qquad (11)$$

$$\text{Gumbel:}\quad \Lambda(x) = \exp\!\left(-e^{-x}\right), \qquad x \in \mathbb{R} \qquad (12)$$

Any extreme value distribution can be classified as one of the three types in (10), (11) and (12). $\Phi_\alpha$, $\Psi_\alpha$ and $\Lambda$ are the standard extreme value distributions, and the corresponding random variables are called standard extreme random variables. For alternative characterisations of the three distributions, see Nagaraja (1988), and Khan and Beg (1987).
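The block-of-maxima construction is straightforward to carry out in code. The sketch below is illustrative, with simulated fat-tailed losses rather than this study's data: it splits a daily loss series into yearly blocks, keeps each block's maximum, and fits the GEV of equation (13) that follows. Note that scipy parameterises the shape as $c = -\xi$.

```python
# A minimal sketch of the block-of-maxima method, assuming scipy/numpy are
# available; the Student-t(3) losses are an illustrative stand-in for data.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
losses = rng.standard_t(df=3, size=250 * 40)        # ~40 "years" of daily losses
block_maxima = losses.reshape(40, 250).max(axis=1)  # one maximum per 250-day block

c, loc, scale = genextreme.fit(block_maxima)        # maximum likelihood fit of the GEV
print(f"xi = {-c:.2f}, mu = {loc:.2f}, sigma = {scale:.2f}")
# Student-t(3) data lie in the Frechet domain with xi = 1/3, so the fitted
# xi should come out positive and near that value.
```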
3.2. The Generalized Extreme Value Distribution

The three distribution functions given in (10), (11) and (12) above can be combined into one three-parameter distribution, called the generalised extreme value (GEV) distribution:

$$H_{\xi;\mu,\sigma}(x) = \begin{cases} \exp\!\left[-\left(1 + \xi\,\dfrac{x-\mu}{\sigma}\right)^{-1/\xi}\right], & \xi \ne 0 \\[2ex] \exp\!\left[-e^{-(x-\mu)/\sigma}\right], & \xi = 0 \end{cases} \qquad \text{with } 1 + \xi\,\frac{x-\mu}{\sigma} > 0 \qquad (13)$$

We denote the GEV by $H_{\xi;\mu,\sigma}$; the values $\xi > 0$, $\xi < 0$ and $\xi = 0$ give rise to the three distribution functions in (10), (11) and (12). In equation (13), $\mu$, $\sigma$ and $\xi$ represent the location parameter, the scale parameter, and the tail-shape parameter respectively. $\xi > 0$ corresponds to the Fréchet distribution, $\xi < 0$ corresponds to the Weibull distribution, and the case $\xi = 0$ reduces to the Gumbel distribution.

To obtain the estimates of $(\xi, \mu, \sigma)$, we use the maximum likelihood method, following Kabundi and Mwamba (2009). To start with, we fit the sample of maximum losses $M_1, \ldots, M_m$ to a GEV. Thereafter, we use the maximum likelihood method to estimate the parameters of the GEV from the logarithmic form of the likelihood function, given by

$$\ln L(\xi, \mu, \sigma; M_1, \ldots, M_m) = -m \ln\sigma - \left(1 + \frac{1}{\xi}\right)\sum_{i=1}^{m}\ln\!\left[1 + \xi\,\frac{M_i - \mu}{\sigma}\right] - \sum_{i=1}^{m}\left[1 + \xi\,\frac{M_i - \mu}{\sigma}\right]^{-1/\xi} \qquad (14)$$

To obtain the estimates, we take partial derivatives of equation (14) with respect to $\xi$, $\mu$ and $\sigma$, and equate them to zero.

3.2.1. Extreme Value-at-Risk

EVaR, defined as the maximum likelihood $\alpha$-quantile estimator of $H_{\xi;\mu,\sigma}$, is by definition given by

$$\mathrm{EVaR}_{\alpha} = H^{-1}_{\hat\xi;\hat\mu,\hat\sigma}(\alpha) \qquad (15)$$

This quantity is the $\alpha$-quantile of $H_{\hat\xi;\hat\mu,\hat\sigma}$, and following Kabundi and Mwamba (2009), and Embrechts et al. (1997), I specify the $\alpha$-percent VaR as follows:

$$\mathrm{EVaR}_{\alpha} = \hat\mu - \frac{\hat\sigma}{\hat\xi}\left[1 - \left(-\ln\alpha\right)^{-\hat\xi}\right] \qquad (16)$$

Even though EVaR captures extreme losses, it inherits non-coherence from the VaR from which it is extended. As such, it cannot be used for the purpose of portfolio optimisation, since it does not reflect diversification. To overcome this problem, in the next section I extend CVaR to ECVaR so as to capture extreme losses coherently.

3.2.2. Extreme Conditional Value-at-Risk (ECVaR): An Extreme Coherent Measure of Risk

I extend EVaR to ECVaR in the same manner that CVaR extends VaR. ECVaR can therefore be expressed as follows:

$$\mathrm{ECVaR}_{\alpha} = E\left[\,L \mid L > \mathrm{EVaR}_{\alpha}\,\right] \qquad (17)$$
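Continuing the illustrative sketch from Section 3.1 (my own toy example, not this study's estimation), EVaR follows from equation (16) as a GEV quantile, and the ECVaR of equation (17) can be approximated by Monte Carlo as the mean of simulated maxima beyond EVaR.

```python
# A minimal sketch of equations (16) and (17), refitting the illustrative GEV
# from the previous block so this runs on its own. Valid only while xi < 1,
# since otherwise the conditional tail expectation is infinite.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
losses = rng.standard_t(df=3, size=250 * 40)
block_maxima = losses.reshape(40, 250).max(axis=1)
c, loc, scale = genextreme.fit(block_maxima)            # scipy shape c = -xi

alpha = 0.99
ev = genextreme.ppf(alpha, c, loc=loc, scale=scale)     # EVaR, equation (16)

# ECVaR, equation (17): mean of simulated maxima beyond EVaR
draws = genextreme.rvs(c, loc=loc, scale=scale, size=1_000_000, random_state=2)
ecv = draws[draws > ev].mean()

print(f"EVaR(99%) = {ev:.2f}, ECVaR(99%) = {ecv:.2f}")  # ECVaR exceeds EVaR
```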
In the following chapter, we describe the data and its sources.

CHAPTER 4: DATA DESCRIPTION

I will use the stock market indexes of five advanced economies (the United States, Japan, Germany, France, and the United Kingdom) and five emerging economies (Brazil, Russia, India, China, and South Africa). Possible sources of the data are I-Net Bridge, Bloomberg, and individual country central banks.

CHAPTER 5: DISCUSSION OF EMPIRICAL RESULTS

In this chapter, I will discuss the empirical results. Specifically, the adequacy of ECVaR will be assessed relative to that of EVaR. Implications for risk measurement will also be discussed.

CHAPTER 6: CONCLUSIONS

This chapter will give concluding remarks and highlight directions for future research.

References

1. Markowitz, H.M.: 1952, Portfolio Selection. Journal of Finance, vol. 7, no. 1, pp. 77-91.
2. Roy, A.D.: 1952, Safety First and the Holding of Assets. Econometrica, vol. 20, no. 3, pp. 431-449.
3. Sharpe, W.F.: 1964, Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk. The Journal of Finance, vol. 19, no. 3, pp. 425-442.
4. Black, F., and Scholes, M.: 1973, The Pricing of Options and Corporate Liabilities. Journal of Political Economy, vol. 81, no. 3, pp. 637-654.
5. Merton, R.C.: 1973, Theory of Rational Option Pricing. Bell Journal of Economics and Management Science, vol. 4, Spring, pp. 141-183.
6. Artzner, P., Delbaen, F., Eber, J.-M., and Heath, D.: 1997, Thinking Coherently. Risk, vol. 10, no. 11, pp. 68-71.
7. Artzner, P., Delbaen, F., Eber, J.-M., and Heath, D.: 1999, Coherent Measures of Risk. Mathematical Finance, vol. 9, no. 3, pp. 203-228.
8. Bernoulli, D.: 1954, Exposition of a New Theory on the Measurement of Risk. Econometrica, vol. 22, no. 1, pp. 23-36. (Translation of a paper originally published in Latin in St. Petersburg in 1738.)
9. Butler, J.C., Dyer, J.S., and Jia, J.: 2005, An Empirical Investigation of the Assumptions of Risk-Value Models. Journal of Risk and Uncertainty, vol. 30, no. 2, pp. 133-156.
10. Brachinger, H.W., and Weber, M.: 1997, Risk as a Primitive: A Survey of Measures of Perceived Risk. OR Spektrum, vol. 19, pp. 235-250.
11. Fisher, I.: 1906, The Nature of Capital and Income. Macmillan.
12. von Neumann, J., and Morgenstern, O.: 1947, Theory of Games and Economic Behavior, 2nd ed. Princeton University Press.
13. Coombs, C.H., and Pruitt, D.G.: 1960, Components of Risk in Decision Making: Probability and Variance Preferences. Journal of Experimental Psychology, vol. 60, pp. 265-277.
14. Pruitt, D.G.: 1962, Pattern and Level of Risk in Gambling Decisions. Psychological Review, vol. 69, pp. 187-201.
15. Coombs, C.H.: 1964, A Theory of Data. New York: Wiley.
16. Coombs, C.H., and Meyer, D.E.: 1969, Risk Preference in Coin-Toss Games. Journal of Mathematical Psychology, vol. 6, pp. 514-527.
17. Coombs, C.H., and Huang, L.C.: 1970, Polynomial Psychophysics of Risk. Journal of Mathematical Psychology, vol. 7, pp. 317-338.
18. Markowitz, H.M.: 1959, Portfolio Selection: Efficient Diversification of Investments. Yale University Press, New Haven.
19. Tobin, J.: 1958, Liquidity Preference as Behavior Towards Risk. Review of Economic Studies, vol. 25, pp. 65-86.
20. Pratt, J.W.: 1964, Risk Aversion in the Small and in the Large. Econometrica, vol. 32, pp. 122-136.
21. Pollatsek, A., and Tversky, A.: 1970, A Theory of Risk. Journal of Mathematical Psychology, vol. 7, pp. 540-553.
22. Luce, R.D.: 1980, Several Possible Measures of Risk. Theory and Decision, vol. 12, pp. 217-228.
23. J.P. Morgan and Reuters: 1996, RiskMetrics Technical Document. Available at http://riskmetrics.comrmcovv.html [Accessed ...].
24. Uryasev, S., and Rockafellar, R.T.: 1999, Optimization of Conditional Value-at-Risk. Available at gloriamundi.org.
25. Mitra, S.: 2009, Risk Measures in Quantitative Finance. Available online. [Accessed ...]
26. Geary, R.C.: 1947, Testing for Normality. Biometrika, vol. 34, pp. 209-242.
27. Mardia, K.V.: 1980, in P.R. Krishnaiah, ed., Handbook of Statistics, Vol. 1. North-Holland Publishing Company, pp. 279-320.
28. Mandelbrot, B.: 1963, The Variation of Certain Speculative Prices. Journal of Business, vol. 36, pp. 394-419.
29. Fama, E.: 1963, Mandelbrot and the Stable Paretian Hypothesis. Journal of Business, vol. 36, pp. 420-429.
30. Fama, E.: 1965, The Behavior of Stock Market Prices. Journal of Business, vol. 38, pp. 34-105.
31. Esch, D.: 2010, Non-Normality Facts and Fallacies. Journal of Investment Management, vol. 8, no. 1, pp. 49-61.
32. Stoyanov, S.V., Rachev, S., Racheva-Iotova, B., and Fabozzi, F.J.: 2011, Fat-Tailed Models for Risk Estimation. Journal of Portfolio Management, vol. 37, no. 2. Available at iijournals.com/doi/abs/10.3905/jpm.2011.37.2.107.
33. Embrechts, P., Klüppelberg, C., and Mikosch, T.: 1997, Modelling Extremal Events for Insurance and Finance. Springer.
34. McNeil, A., and Frey, R.: 2000, Estimation of Tail-Related Risk Measures for Heteroscedastic Financial Time Series: An Extreme Value Approach. Journal of Empirical Finance, vol. 7, no. 3-4, pp. 271-300.
35. Danielsson, J., and de Vries, C.: 2000, Value-at-Risk and Extreme Returns. Annales d'Economie et de Statistique, vol. 60, pp. 239-270.
36. Gilli, M., and Kellezi, E.: 2003, An Application of Extreme Value Theory for Measuring Risk. Department of Econometrics, University of Geneva, Switzerland. Available at gloriamundi.org/picsresources/mgek.pdf.
37. Shanbhag, D.N., and Rao, C.R.: 2003, Extreme Value Theory, Models and Simulation. In: Handbook of Statistics, Vol. 21. Elsevier Science B.V.
38. Fisher, R.A., and Tippett, L.H.C.: 1928, Limiting Forms of the Frequency Distribution of the Largest or Smallest Member of a Sample. Proceedings of the Cambridge Philosophical Society, vol. 24, pp. 180-190.
39. De Haan, L.: 1970, On Regular Variation and Its Application to the Weak Convergence of Sample Extremes. Mathematical Centre Tract, Vol. 32. Mathematisch Centrum, Amsterdam.
40. De Haan, L.: 1976, Sample Extremes: An Elementary Introduction. Statistica Neerlandica, vol. 30, pp. 161-172.
41. Weissman, I.: 1978, Estimation of Parameters and Large Quantiles Based on the k Largest Observations. Journal of the American Statistical Association, vol. 73, pp. 812-815.
42. Nagaraja, H.N.: 1988, Some Characterizations of Continuous Distributions Based on Regressions of Adjacent Order Statistics and Record Values. Sankhyā A, vol. 50, pp. 70-73.
43. Khan, A.H., and Beg, M.I.: 1987, Characterization of the Weibull Distribution by Conditional Variance. Sankhyā A, vol. 49, pp. 268-271.
44. Kabundi, A., and Mwamba, J.W.M.: 2009, Extreme Value at Risk: A Scenario for Risk Management. South African Journal of Economics (SAJE), forthcoming.