
Extreme Conditional Value at Risk: A Coherent Scenario for Risk Management (The WritePass Journal)

CONTENTS

CHAPTER 1: INTRODUCTION
1.1 BACKGROUND
1.2 RESEARCH PROBLEM
1.3 RELEVANCE OF THE STUDY
1.4 RESEARCH DESIGN
CHAPTER 2: RISK MEASUREMENT AND THE EMPIRICAL DISTRIBUTION OF FINANCIAL RETURNS
2.1 Risk Measurement in Finance: A Review of Its Origins
2.2 Value-at-Risk (VaR)
2.2.1 Definition and Concepts
2.2.2 Limitations of VaR
2.3 Conditional Value-at-Risk
2.4 The Empirical Distribution of Financial Returns
2.4.1 The Importance of Being Normal
2.4.2 Deviations from Normality
CHAPTER 3: EXTREME VALUE THEORY: A SUITABLE AND ADEQUATE FRAMEWORK?
3.1 The Block of Maxima Method
3.2 The Generalized Extreme Value Distribution
3.2.1 Extreme Value-at-Risk
3.2.2 Extreme Conditional Value-at-Risk (ECVaR): An Extreme Coherent Measure of Risk
CHAPTER 4: DATA DESCRIPTION
CHAPTER 5: DISCUSSION OF EMPIRICAL RESULTS
CHAPTER 6: CONCLUSIONS
References

CHAPTER ONE

1. INTRODUCTION

Extreme financial losses that occurred during the 2007-2008 financial crisis reignited questions of whether existing methodologies, which are largely based on the normal distribution, are adequate and suitable for the purpose of risk measurement and management. The major assumptions employed in these frameworks are that financial returns are independently and identically distributed, and follow the normal distribution. However, weaknesses in these methodologies have long been identified in the literature. Firstly, it is now widely accepted that financial returns are not normally distributed; they are asymmetric, skewed, leptokurtic and fat-tailed. Secondly, it is well established that financial returns exhibit volatility clustering, so the assumption of independently distributed returns is violated.
The combined evidence concerning the stylised facts of financial returns necessitates adapting existing methodologies, or developing new ones, so that all the stylised facts of financial returns are accounted for explicitly. In this paper, I discuss two related measures of risk: extreme value-at-risk (EVaR) and extreme conditional value-at-risk (ECVaR). I argue that ECVaR is a better measure of extreme market risk than the EVaR utilised by Kabundi and Mwamba (2009), since it is coherent and captures the effects of extreme market events. In contrast, even though EVaR captures the effect of extreme market events, it is non-coherent.

1.1 BACKGROUND

Markowitz (1952), Roy (1952), Sharpe (1964), Black and Scholes (1973), and Merton (1973) developed modern portfolio theory (MPT) and the field of financial engineering with a toolkit consisting of means, variances, correlations and covariances of asset returns. In MPT, the variance, or equivalently the standard deviation, was treated as the definitive measure of risk. A major assumption employed in this theory is that financial asset returns are normally distributed. Under this assumption, extreme market events rarely happen. When they do occur, risk managers can simply treat them as outliers and disregard them when modelling financial asset returns. The assumption of normally distributed asset returns is too simplistic for use in financial modelling of extreme market events. During extreme market activity similar to the 2007-2008 financial crisis, financial returns exhibit behaviour that is beyond what the normal distribution can model. Starting with the work of Mandelbrot (1963), there is increasingly convincing empirical evidence suggesting that asset returns are not normally distributed: they exhibit asymmetric behaviour, 'fat tails', and higher kurtosis than the normal distribution can accommodate.
The implication is that extreme negative returns do occur, and are more frequent than the normal distribution predicts. Measures of risk based on the normal distribution will therefore underestimate the risk of portfolios and lead to huge financial losses, and potentially to insolvencies of financial institutions. To mitigate the effects of inadequate risk capital buffers stemming from underestimation of risk by normality-based financial modelling, risk measures such as EVaR, which go beyond the assumption of normally distributed returns, have been developed. However, EVaR is non-coherent, just like the VaR from which it is developed. The implication is that, even though it captures the effects of extreme market events, it is not a good measure of risk since it does not reflect diversification, a contradiction of one of the cornerstones of portfolio theory. ECVaR naturally overcomes these problems since it is coherent and can capture extreme market events.

1.2 RESEARCH PROBLEM

The purpose of this paper is to develop extreme conditional value-at-risk (ECVaR), and to propose it as a better measure of risk than EVaR under conditions of extreme market activity, when financial returns exhibit volatility clustering and are not normally distributed. Kabundi and Mwamba (2009) proposed EVaR as a better measure of extreme risk than the widely used VaR; however, it is non-coherent. ECVaR is coherent and captures the effect of extreme market activity, so it is better suited to model extreme losses during market turmoil, and it reflects diversification, an important requirement for any risk measure in portfolio theory.

1.3 RELEVANCE OF THE STUDY

The assumption that financial asset returns are normally distributed understates the possibility of infrequent extreme events whose impact is more detrimental than that of more frequent events.
The use of VaR and CVaR under normality underestimates the riskiness of assets and portfolios, and eventually leads to huge losses and bankruptcies during times of extreme market activity. There are many adverse effects of using the normal distribution in the measurement of financial risk, the most visible being the loss of money due to underestimating risk. During the global financial crisis, a number of banks and non-financial institutions suffered huge financial losses; some went bankrupt and failed, partly because of inadequate capital allocation stemming from underestimation of risk by models that assumed normally distributed returns. Measures of risk that do not assume normality of financial returns have been developed. One such measure is EVaR (Kabundi and Mwamba (2009)). EVaR captures the effect of extreme market events; however, it is not coherent. As a result, EVaR is not a good measure of risk since it does not reflect diversification. In financial markets characterised by multiple sources of risk and extreme market volatility, it is important to have a risk measure that is coherent and can capture the effect of extreme market activity. ECVaR is advocated to fulfil this role of measuring extreme market risk while conforming to portfolio theory's wisdom of diversification.

1.4 RESEARCH DESIGN

Chapter 2 will present a literature review of risk measurement methodologies currently used by financial institutions, in particular VaR and CVaR. I also discuss the strengths and weaknesses of these measures. Another risk measure not widely known thus far is EVaR, which I discuss as an advancement in risk measurement methodologies. I argue that EVaR is not a good measure of risk since it is non-coherent. This leads to the next chapter, which presents ECVaR as a better risk measure that is coherent and can capture extreme market events.
Chapter 3 will be concerned with extreme conditional value-at-risk (ECVaR) as a convenient modelling framework that naturally overcomes the normality assumption of asset returns in the modelling of extreme market events. This is followed by a comparative analysis of EVaR and ECVaR using financial data covering both the pre-crisis and the financial crisis periods. Chapter 4 will be concerned with data sources, preliminary data description, and the estimation of EVaR and ECVaR. Chapter 5 will discuss the empirical results and their implications for risk measurement. Finally, chapter 6 will give conclusions and highlight directions for future research.

CHAPTER 2: RISK MEASUREMENT AND THE EMPIRICAL DISTRIBUTION OF FINANCIAL RETURNS

2.1 Risk Measurement in Finance: A Review of Its Origins

The concept of risk was known for many years before Markowitz's modern portfolio theory (MPT). Bernoulli (1738) solved the St. Petersburg paradox and derived fundamental insights into risk-averse behaviour and the benefits of diversification. In his formulation of expected utility theory, Bernoulli did not define risk explicitly; however, he inferred it from the shape of the utility function (Butler et al. (2005:134); Brachinger and Weber (1997:236)). Irving Fisher (1906) suggested the use of variance to measure economic risk. Von Neumann and Morgenstern (1947) used expected utility theory in the analysis of games and consequently derived much of the modern understanding of decision making under risk or uncertainty. Therefore, contrary to popular belief, the concept of risk was known well before MPT. Even so, Markowitz (1952) was the first to provide a systematic algorithm to measure risk, using the variance in the formulation of the mean-variance model, for which he won the Nobel Prize in 1990. The development of the mean-variance model inspired research in decision making under risk and the development of risk measures.
The study of risk and decision making under uncertainty (which is treated the same as risk in most cases) stretches across disciplines. In decision science and psychology, Coombs and Pruitt (1960), Pruitt (1962), Coombs (1964), Coombs and Meyer (1969), and Coombs and Huang (1970a, 1970b) studied the perception of gambles and how preference over gambles is affected by their perceived risk. In economics, finance and measurement theory, Markowitz (1952, 1959), Tobin (1958), Pratt (1964), Pollatsek and Tversky (1970), Luce (1980) and others investigated portfolio selection and the measurement of the risk of portfolios, and of gambles in general. Their collective work produced a number of risk measures that vary in how they rank the riskiness of options, portfolios, or gambles. Though the risk measures vary, Pollatsek and Tversky (1970:541) recognise that they share the following: (1) risk is regarded as a property of choosing among options; (2) options can be meaningfully ordered according to their riskiness; (3) as suggested by Irving Fisher in 1906, the risk of an option is somehow related to the variance or dispersion in its outcomes. In addition to these basic properties, Markowitz regards risk as a 'bad', implying something that is undesirable. Since Markowitz (1952), many risk measures, such as the semi-variance, the absolute deviation, and the lower semi-variance (see Brachinger and Weber (1997)), have been developed; however, the variance continued to dominate empirical finance. It was in the 1990s that a new measure, VaR, was popularised and became the industry standard as a risk measure. I present this risk measure in the next section.

2.2 Value-at-Risk (VaR)

2.2.1 Definition and Concepts

Despite these shared basic ideas concerning risk measures, there is no universally accepted definition of risk (Pollatsek and Tversky, 1970:541); as a result, risk measures continue to be developed.
J.P. Morgan and Reuters (1996) pioneered a major breakthrough in the advancement of risk measurement with the use of value-at-risk (VaR), and the subsequent Basel Committee recommendation that banks could use it for their internal risk management. VaR is concerned with measuring the risk of a financial position due to the uncertainty regarding the future levels of interest rates, stock prices, commodity prices, and exchange rates. The risk resulting from the movement of these market factors is called market risk. VaR is the expected maximum loss of a financial position with a given level of confidence over a specified horizon. VaR answers the question: what is the maximum amount that I can lose over, say, the next ten days with 99 percent confidence? Put differently, what is the maximum loss that will be exceeded only one percent of the time in the next ten days?

I illustrate the computation of VaR using one of the available methods, namely parametric VaR. Denote by r_t the rate of return and by W_t the portfolio value at time t. Then W_t is given by

W_t = W_{t-1}(1 + r_t).   (1)

The actual loss (the negative of the profit, which is W_t - W_{t-1}) is given by

L_t = -(W_t - W_{t-1}) = -W_{t-1} r_t.   (2)

When r_t is normally distributed with mean mu and standard deviation sigma (as is commonly assumed), the variable z = (r_t - mu)/sigma has a standard normal distribution with mean of 0 and standard deviation of 1. We can calculate VaR from the following equation:

Pr(r_t < -VaR) = 1 - c,   (3)

where c denotes the confidence level. If we assume a 99% confidence level, we have

Pr(z < -2.33) = 0.01.   (4)

In (4) we have -2.33 as the standard normal quantile corresponding to the 99% confidence level, and the associated VaR will be exceeded only 1% of the time. From (4), it can be shown that the 99% confidence VaR is given by

VaR = -(mu - 2.33 sigma).   (5)

Generalising from (5), we can state the alpha-quantile VaR of the distribution as

VaR_alpha = -(mu + z_alpha sigma),   (6)

where z_alpha is the alpha-quantile of the standard normal distribution (for example, z_0.01 = -2.33). VaR is an intuitive measure of risk that can be easily implemented, as is evident in its wide use in the industry. However, is it an optimal measure? The next section addresses the limitations of VaR.

2.2.2 Limitations of VaR

Artzner et al.
(1997, 1999) developed a set of axioms that, if satisfied by a risk measure, make that risk measure 'coherent'. The implication of coherent measures of risk is that "it is not possible to assign a function for measuring risk unless it satisfies these axioms" (Mitra, 2009:8). Risk measures that satisfy these axioms can be considered universal and optimal, since they are founded on the same generally accepted mathematical axioms. Artzner et al. (1997, 1999) put forward the first axioms of risk measures, and any risk measure that satisfies them is a coherent measure of risk. Let rho be a risk measure defined on portfolios X and Y. Then rho is coherent if it satisfies the following axioms:

(1) Monotonicity: if X <= Y, then rho(X) >= rho(Y). We interpret the monotonicity axiom to mean that higher losses are associated with higher risk.

(2) Homogeneity: rho(hX) = h rho(X) for h > 0. Assuming that there is no liquidity risk, the homogeneity axiom means that risk scales in proportion to the size of a position, so we cannot reduce or increase the risk per unit invested by investing different amounts in the same stock.

(3) Translation invariance: rho(X + n) = rho(X) - n, where n is an amount invested in a riskless security. Investing an additional amount in a riskless asset reduces risk by that amount with certainty.

(4) Sub-additivity: rho(X + Y) <= rho(X) + rho(Y). Possibly the most important axiom, sub-additivity ensures that a risk measure reflects diversification: the combined risk of two portfolios is no greater than the sum of the risks of the individual portfolios.

VaR does not satisfy the most important axiom of sub-additivity, thus it is non-coherent. Moreover, VaR tells us what we can expect to lose if an extreme event does not occur; it does not tell us the extent of the losses we can incur if a "tail" event occurs. VaR is therefore not an optimal measure of risk. The non-coherence, and therefore non-optimality, of VaR as a measure of risk led to the development of conditional value-at-risk (CVaR) by Artzner et al.
(1997, 1999), and Uryasev and Rockafeller (1999). I discuss CVaR in the next section.

2.3 Conditional Value-at-Risk

CVaR is also known as "expected shortfall" (ES), "tail VaR", or "tail conditional expectation", and it measures risk beyond VaR. Yamai and Yoshiba (2002) define CVaR as the conditional expectation of losses given that the losses exceed VaR. Mathematically, CVaR is given by the following:

CVaR_alpha = E[L | L > VaR_alpha].   (7)

CVaR offers more insight concerning risk than VaR, in that it tells us what we can expect to lose if the losses exceed VaR. Unfortunately, the finance industry has been slow in adopting CVaR as its preferred risk measure. This is despite the observation that "the actuarial/insurance community has tended to pick up on developments in financial risk management much more quickly than financial risk managers have picked up on developments in actuarial science" (Dowd and Blake (2006:194)). Hopefully, the effects of the financial crisis will change this observation. In much of the application of VaR and CVaR, returns have been assumed to be normally distributed. However, it is widely accepted that returns are not normally distributed. The implication is that VaR and CVaR, as currently used in finance, will not capture extreme losses. This will lead to underestimation of risk and inadequate capital allocation across business units. In times of market stress, when extra capital is required, it will be inadequate. This may lead to the insolvency of financial institutions. Methodologies that can capture extreme events are therefore needed. In the next section, I discuss the empirical evidence on financial returns, and thereafter discuss extreme value theory (EVT) as a suitable framework for modelling extreme losses.

2.4 The Empirical Distribution of Financial Returns

Back in 1947, Geary wrote, "Normality is a myth; there never was, and never will be, a normal distribution" (as cited in Krishnaiah (1980:279)).
Today this remark is supported by a voluminous amount of empirical evidence against normally distributed returns; nevertheless, normality continues to be the workhorse of empirical finance. If the normality assumption fails to pass empirical tests, why are practitioners so obsessed with the bell curve? Could their obsession be justified? To uncover some of the possible responses to these questions, let us first look at the importance of being normal, and then look at the dangers of incorrectly assuming normality.

2.4.1 The Importance of Being Normal

The normal distribution is the most widely used distribution in statistical analysis, in all fields that utilise statistics to explain phenomena. When the normal distribution is assumed for a population, it delivers a rich set of mathematical results (Mardia, 1980:279). In other words, the mathematical representations are tractable and easy to implement, and a population can be described simply by its mean and variance. The chief advantage is that the modelling process under the normality assumption is very simple. In fields that deal with natural phenomena, such as physics and geology, the normal distribution has unequivocally succeeded in explaining the variables of interest. The same cannot be said of finance. The normal probability distribution has been subject to rigorous empirical rejection: a number of stylized facts of asset returns, statistical tests of normality, and the occurrence of extreme negative returns dispute the normal distribution as the underlying data-generating process for asset returns. We briefly discuss these empirical findings next.

2.4.2 Deviations from Normality

Ever since Mandelbrot (1963), Fama (1963), and Fama (1965), among others, it has been well established that asset returns are not normally distributed.
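Before cataloguing the deviations, it helps to quantify what the normal model actually asserts: risk is a function of the mean and variance alone, and losses many standard deviations out are essentially impossible. The sketch below illustrates both points using only the Python standard library; the return moments are hypothetical, and the code is an illustration rather than a production risk model.

```python
from statistics import NormalDist

def parametric_var(mu, sigma, alpha=0.99):
    """Normal (parametric) VaR as a positive loss: -(mu + z_{1-alpha} * sigma).

    Under normality, risk is fully determined by the mean and variance alone.
    """
    z = NormalDist().inv_cdf(1.0 - alpha)   # about -2.33 for alpha = 0.99
    return -(mu + z * sigma)

# Hypothetical daily returns with mean 0 and volatility 2%: 99% VaR is about 4.65%.
var_99 = parametric_var(0.0, 0.02)

# The flip side of this simplicity: the normal model makes large-sigma losses
# essentially impossible.  With roughly 250 trading days per year, a 5-sigma
# daily loss should occur about once every ten thousand years under normality.
tail = {k: 1.0 - NormalDist().cdf(float(k)) for k in (3, 5, 10)}
years_between_5_sigma = 1.0 / (250 * tail[5])
```

The second computation previews the problem: events that a normal model prices at once-in-millennia frequencies have occurred repeatedly in real markets.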
The combined empirical evidence since the 1960s points to the following stylized facts of asset returns:

(1) Volatility clustering: periods of high volatility tend to be followed by periods of high volatility, and periods of low volatility tend to be followed by periods of low volatility.

(2) Autoregressive price changes: a price change depends on price changes in past periods.

(3) Skewness: positive price changes and negative price changes are not of the same magnitude.

(4) Fat tails: the probabilities of extreme negative (or positive) returns are much larger than predicted by the normal distribution.

(5) Time-varying tail thickness: more extreme losses occur during turbulent market activity than during normal market activity.

(6) Frequency-dependent fat tails: high-frequency data tend to be more fat-tailed than low-frequency data.

In addition to these stylized facts of asset returns, the extreme events of the 1974 German banking crisis, the 1978 banking crisis in Spain, the Japanese banking crisis of the 1990s, September 2001, and the 2007-2008 US experience (BIS, 2004) could not have happened under the normal distribution. Alternatively, we could simply have treated them as outliers and disregarded them; however, experience has shown that even those who are obsessed with the Gaussian distribution could not ignore the detrimental effects of the 2007-2008 global financial crisis. With these empirical facts known to the quantitative finance community, what is the motivation for the continued use of the normality assumption? It could be that those who stick with the normality assumption know only how to deal with normally distributed data. It is their hammer; everything that comes their way seems like a nail! As Esch (2010) notes, even those who do have other tools to deal with non-normal data continue to use the normal distribution on the grounds of parsimony.
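Stylized fact (4) is cheap to verify by simulation. The sketch below compares the sample excess kurtosis of a normal sample with that of a Student-t sample with 5 degrees of freedom, a standard fat-tailed stand-in for return data (its population excess kurtosis is 6, versus 0 for the normal); the sample size and seed are arbitrary choices for reproducibility.

```python
import math
import random

def excess_kurtosis(xs):
    """Sample excess kurtosis: near 0 for normal data, positive for fat tails."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3.0

random.seed(7)

normal = [random.gauss(0.0, 1.0) for _ in range(50_000)]

def student_t5():
    """Draw from a Student-t with 5 df: Z / sqrt(chi2_5 / 5)."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(5))
    return z / math.sqrt(chi2 / 5.0)

fat = [student_t5() for _ in range(50_000)]

k_normal = excess_kurtosis(normal)   # close to 0
k_fat = excess_kurtosis(fat)         # clearly positive
```

A two-parameter normal model is certainly parsimonious, but no choice of mean and variance can reproduce the excess kurtosis the second sample exhibits.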
However, â€Å"representativity should not be sacrificed for simplicity† (Fabozzi et al., 2011:4). Better modelling frameworks to deal with extreme values that are characteristic of departures from normality have been developed. Extreme value theory is one such methodology that has enjoyed success in other fields outside finance, and has been used to model financial losses with success. In the next chapter, I present extreme value-based methodologies as a practical and better methodology to overcome non-normality in asset returns. CHAPTER 3: EXTREME VALUE THEORY: A SUITABLE AND ADEQUATE FRAMEWORK? 1.3. Extreme Value Theory Extreme value theory was developed to model extreme natural phenomena such as floods, extreme winds, and temperature, and is well established in fields such as engineering, insurance, and climatology. It provides a convenient way to model the tails of distributions that capture non-normal activities. Since it concentrates on the tails of distributions, it has been adopted to model asset returns in time of extreme market activity (see Embrechts et al. (1997); McNeil and Frey (2000); Danielsson and de Vries (2000). Gilli and Kellezi (2003) points out two related ways of modelling extreme events. The first way describes the maximum loss through a limit distribution known as the generalised extreme value distribution (GED), which is a family of asymptotic distributions that describe normalised maxima or minima.   The second way provides asymptotic distribution that describes the limit distribution of scaled excesses over high thresholds, and is known as the generalised Pareto distribution (GPD). The two limit distributions results into two approaches of EVT-based modelling the block of maxima method and the peaks over threshold method respectively[2]. 3.1. The Block of Maxima Method Let us consider independent and identically distributed (i.i.d) random variable   with common distribution function â„ ±. Let be the maximum of the first random variables. 
Also, let x_F = sup{x : F(x) < 1} denote the upper end-point of F. The corresponding results for minima can be obtained from the identity

min(X_1, ..., X_n) = -max(-X_1, ..., -X_n).   (8)

M_n almost surely converges to x_F, whether it is finite or infinite, since Pr(M_n <= x) = F^n(x). Following Embrechts et al. (1997), and Shanbhag and Rao (2003), the limit theory finds norming constants c_n > 0 and d_n, and a non-degenerate distribution function H, such that the distribution function of a normalised version of M_n converges to H as follows:

Pr((M_n - d_n)/c_n <= x) = F^n(c_n x + d_n) -> H(x) as n -> infinity.   (9)

H is an extreme value distribution function, and F is in the domain of attraction of H (written F in D(H)) if equation (9) holds for suitable values of c_n and d_n. Two extreme value distribution functions H and H* belong to the same family if H*(x) = H(ax + b) for some a > 0, some b, and all x. Fisher and Tippett (1928), De Haan (1970, 1976), Weissman (1978), and Embrechts et al. (1997) show that the limit distribution function H belongs to one of the following three distribution functions, for some alpha > 0:

Frechet: Phi_alpha(x) = 0 for x <= 0, and exp(-x^(-alpha)) for x > 0.   (10)

Weibull: Psi_alpha(x) = exp(-(-x)^alpha) for x <= 0, and 1 for x > 0.   (11)

Gumbel: Lambda(x) = exp(-exp(-x)) for all real x.   (12)

Any extreme value distribution can be classified as one of the three types in (10), (11) and (12). Phi_alpha, Psi_alpha and Lambda are the standard extreme value distributions, and the corresponding random variables are called standard extreme random variables. For alternative characterisations of the three distributions, see Nagaraja (1988), and Khan and Beg (1987).

3.2 The Generalized Extreme Value Distribution

The three distribution functions given in (10), (11) and (12) above can be combined into one three-parameter distribution, called the generalised extreme value (GEV) distribution, given by

H_(xi, mu, sigma)(x) = exp{ -[1 + xi(x - mu)/sigma]^(-1/xi) }, with 1 + xi(x - mu)/sigma > 0.   (13)

We denote the GEV by H_(xi, mu, sigma); the cases xi > 0, xi < 0 and xi = 0 give rise to the three distribution functions in (10)-(12). In equation (13), mu, sigma and xi represent the location parameter, the scale parameter, and the tail-shape parameter respectively. xi > 0 corresponds to the Frechet distribution, and xi < 0 corresponds to the Weibull distribution. The limiting case xi = 0 reduces to the Gumbel distribution.
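Because the GEV distribution function has a closed form, its quantile function does too, and that extreme quantile, evaluated at fitted parameters, is what EVT-based risk measures are built on; averaging the losses beyond it gives an empirical version of the conditional tail expectation this paper advocates. The sketch below is illustrative only: the parameter values and loss sample are hypothetical, not fitted to data.

```python
import math

def gev_quantile(alpha, mu, sigma, xi):
    """alpha-quantile of the GEV distribution H_(xi, mu, sigma).

    For xi != 0 this is mu + (sigma/xi) * ((-ln alpha)^(-xi) - 1); the
    xi -> 0 limit gives the Gumbel case mu - sigma * ln(-ln alpha).
    """
    if abs(xi) < 1e-12:
        return mu - sigma * math.log(-math.log(alpha))
    return mu + (sigma / xi) * ((-math.log(alpha)) ** (-xi) - 1.0)

def empirical_tail_mean(losses, threshold):
    """Mean of the losses strictly beyond a high threshold (an empirical
    conditional tail expectation)."""
    tail = [x for x in losses if x > threshold]
    return sum(tail) / len(tail) if tail else float("nan")

# Illustrative parameters for monthly maximum losses (in %): a Frechet-type
# tail (xi > 0), location 2 and scale 1.
evar_99 = gev_quantile(0.99, mu=2.0, sigma=1.0, xi=0.3)   # about 11.9

# Conditional tail expectation over a hypothetical sample of block maxima:
sample = [1.0, 2.0, 3.0, 10.0, 20.0]
tail_mean = empirical_tail_mean(sample, 2.5)              # (3 + 10 + 20) / 3 = 11.0
```

Note how sensitive the extreme quantile is to the tail-shape parameter xi; this is why careful estimation of xi, discussed next, matters so much in practice.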
To obtain the estimates of (mu, sigma, xi), we use the maximum likelihood method, following Kabundi and Mwamba (2009). To start with, we fit the sample of maximum losses M_1, ..., M_m to a GEV. Thereafter, we estimate the parameters of the GEV from the logarithm of the likelihood function, given by

ln L(xi, mu, sigma) = -m ln sigma - (1 + 1/xi) * sum_{i=1..m} ln[1 + xi(M_i - mu)/sigma] - sum_{i=1..m} [1 + xi(M_i - mu)/sigma]^(-1/xi).   (14)

To obtain the estimates (mu-hat, sigma-hat, xi-hat), we take partial derivatives of equation (14) with respect to mu, sigma and xi, and equate them to zero.

3.2.1 Extreme Value-at-Risk

The EVaR, defined as the maximum likelihood alpha-quantile estimator of H_(xi-hat, mu-hat, sigma-hat), is by definition given by

EVaR_alpha = H^(-1)_(xi-hat, mu-hat, sigma-hat)(alpha).   (15)

This quantity is the alpha-quantile of the fitted GEV and, following Kabundi and Mwamba (2009), and Embrechts et al. (1997), it can be written explicitly as

EVaR_alpha = mu-hat + (sigma-hat/xi-hat) * [(-ln alpha)^(-xi-hat) - 1].   (16)

Even though EVaR captures extreme losses, by extension from VaR it is non-coherent. As such, it cannot be used for the purpose of portfolio optimisation, since it does not reflect diversification. To overcome this problem, in the next section I extend CVaR to ECVaR so as to capture extreme losses coherently.

3.2.2 Extreme Conditional Value-at-Risk (ECVaR): An Extreme Coherent Measure of Risk

I extend EVaR to ECVaR in the same manner that CVaR extends VaR. ECVaR can therefore be expressed as follows:

ECVaR_alpha = E[L | L > EVaR_alpha].   (17)

In the following chapter, we describe the data and its sources.

CHAPTER 4: DATA DESCRIPTION

I will use the stock market indices of five advanced economies (the United States, Japan, Germany, France, and the United Kingdom) and five emerging economies (Brazil, Russia, India, China, and South Africa). Possible sources of the data are I-Net Bridge, Bloomberg, and individual country central banks.

CHAPTER 5: DISCUSSION OF EMPIRICAL RESULTS

In this chapter, I will discuss the empirical results. Specifically, the adequacy of ECVaR will be discussed relative to that of EVaR. Implications for risk measurement will also be discussed in this chapter.
CHAPTER 6: CONCLUSIONS

This chapter will give concluding remarks and directions for future research.

References

Markowitz, H.M.: 1952, Portfolio Selection. Journal of Finance, vol. 7, pp. 77-91.
Roy, A.D.: 1952, Safety First and the Holding of Assets. Econometrica, vol. 20, no. 3, pp. 431-449.
Sharpe, W.F.: 1964, Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk. The Journal of Finance, vol. 19, no. 3, pp. 425-442.
Black, F., and Scholes, M.: 1973, The Pricing of Options and Corporate Liabilities. Journal of Political Economy, vol. 81, pp. 637-654.
Merton, R.C.: 1973, The Theory of Rational Option Pricing. Bell Journal of Economics and Management Science, Spring.
Artzner, Ph., Delbaen, F., Eber, J.-M., and Heath, D.: 1997, Thinking Coherently. Risk, vol. 10, no. 11, pp. 68-71.
Artzner, Ph., Delbaen, F., Eber, J.-M., and Heath, D.: 1999, Coherent Measures of Risk. Mathematical Finance, vol. 9, no. 3, pp. 203-228.
Bernoulli, D.: 1954, Exposition of a New Theory on the Measurement of Risk. Econometrica, vol. 22, no. 1, pp. 23-36. (Translation of a paper originally published in Latin in St. Petersburg in 1738.)
Butler, J.C., Dyer, J.S., and Jia, J.: 2005, An Empirical Investigation of the Assumptions of Risk-Value Models. Journal of Risk and Uncertainty, vol. 30, no. 2, pp. 133-156.
Brachinger, H.W., and Weber, M.: 1997, Risk as a Primitive: A Survey of Measures of Perceived Risk. OR Spektrum, vol. 19, pp. 235-250.
Fisher, I.: 1906, The Nature of Capital and Income. Macmillan.
von Neumann, J., and Morgenstern, O.: 1947, Theory of Games and Economic Behaviour, 2nd ed. Princeton University Press.
Coombs, C.H., and Pruitt, D.G.: 1960, Components of Risk in Decision Making: Probability and Variance Preferences. Journal of Experimental Psychology, vol. 60, pp. 265-277.
Pruitt, D.G.: 1962, Pattern and Level of Risk in Gambling Decisions. Psychological Review, vol. 69, pp. 187-201.
Coombs, C.H.: 1964, A Theory of Data. New York: Wiley.
Coombs, C.H., and Meyer, D.E.: 1969, Risk Preference in Coin-Toss Games. Journal of Mathematical Psychology, vol. 6, pp. 514-527.
Coombs, C.H., and Huang, L.C.: 1970a, Polynomial Psychophysics of Risk. Journal of Experimental Psychology, vol. 7, pp. 317-338.
Markowitz, H.M.: 1959, Portfolio Selection: Efficient Diversification of Investments. Yale University Press, New Haven, USA.
Tobin, J.: 1958, Liquidity Preference as Behavior Towards Risk. Review of Economic Studies, pp. 65-86.
Pratt, J.W.: 1964, Risk Aversion in the Small and in the Large. Econometrica, vol. 32, pp. 122-136.
Pollatsek, A., and Tversky, A.: 1970, A Theory of Risk. Journal of Mathematical Psychology, vol. 7, pp. 540-553.
Luce, R.D.: 1980, Several Possible Measures of Risk. Theory and Decision, vol. 12, pp. 217-228.
J.P. Morgan and Reuters: 1996, RiskMetrics Technical Document. Available at http://riskmetrics.comrmcovv.html [Accessed ...]
Uryasev, S., and Rockafeller, R.T.: 1999, Optimization of Conditional Value-at-Risk. Available at gloriamundi.org
Mitra, S.: 2009, Risk Measures in Quantitative Finance. Available online. [Accessed ...]
Geary, R.C.: 1947, Testing for Normality. Biometrika, vol. 34, pp. 209-242.
Mardia, K.V.: 1980, in P.R. Krishnaiah, ed., Handbook of Statistics, Vol. 1. North-Holland Publishing Company, pp. 279-320.
Mandelbrot, B.: 1963, The Variation of Certain Speculative Prices. Journal of Business, vol. 36, pp. 394-419.
Fama, E.: 1963, Mandelbrot and the Stable Paretian Hypothesis. Journal of Business, vol. 36, pp. 420-429.
Fama, E.: 1965, The Behavior of Stock Market Prices. Journal of Business, vol. 38, pp. 34-105.
Esch, D.: 2010, Non-Normality Facts and Fallacies. Journal of Investment Management, vol. 8, no. 1, pp. 49-61.
Stoyanov, S.V., Rachev, S., Racheva-Iotova, B., and Fabozzi, F.J.: 2011, Fat-Tailed Models for Risk Estimation. Journal of Portfolio Management, vol. 37, no. 2.
Available at iijournals.com/doi/abs/10.3905/jpm.2011.37.2.107
Embrechts, P., Klüppelberg, C., and Mikosch, T.: 1997, Modelling Extremal Events for Insurance and Finance. Springer.
McNeil, A., and Frey, R.: 2000, Estimation of Tail-Related Risk Measures for Heteroscedastic Financial Time Series: An Extreme Value Approach. Journal of Empirical Finance, vol. 7, no. 3-4, pp. 271-300.
Danielsson, J., and de Vries, C.: 2000, Value-at-Risk and Extreme Returns. Annales d'Économie et de Statistique, vol. 60, pp. 239-270.
Gilli, M., and Kellezi, E.: 2003, An Application of Extreme Value Theory for Measuring Risk. Department of Econometrics, University of Geneva, Switzerland. Available from gloriamundi.org/picsresources/mgek.pdf
Shanbhag, D.N., and Rao, C.R.: 2003, Extreme Value Theory, Models and Simulation. Handbook of Statistics, Vol. 21. Elsevier Science B.V.
Fisher, R.A., and Tippett, L.H.C.: 1928, Limiting Forms of the Frequency Distribution of the Largest or Smallest Member of a Sample. Proc. Cambridge Philos. Soc., vol. 24, pp. 180-190.
De Haan, L.: 1970, On Regular Variation and Its Application to the Weak Convergence of Sample Extremes. Mathematical Centre Tract, Vol. 32. Mathematisch Centrum, Amsterdam.
De Haan, L.: 1976, Sample Extremes: An Elementary Introduction. Statistica Neerlandica, vol. 30, pp. 161-172.
Weissman, I.: 1978, Estimation of Parameters and Large Quantiles Based on the k Largest Observations. J. Amer. Statist. Assoc., vol. 73, pp. 812-815.
Nagaraja, H.N.: 1988, Some Characterizations of Continuous Distributions Based on Regressions of Adjacent Order Statistics and Record Values. Sankhyā A, vol. 50, pp. 70-73.
Khan, A.H., and Beg, M.I.: 1987, Characterization of the Weibull Distribution by Conditional Variance. Sankhyā A, vol. 49, pp. 268-271.
Kabundi, A., and Mwamba, J.W.M.: 2009, Extreme Value at Risk: A Scenario for Risk Management. South African Journal of Economics (SAJE), forthcoming.

Saturday, February 22, 2020

To Build or Buy (Modell's Sporting Goods) Assignment

To Build or Buy (Modell's Sporting Goods) - Assignment Example

...ia, Connecticut, New Jersey, Rhode Island, New Hampshire, Delaware, Massachusetts, Maryland, Virginia, and the District of Columbia (Modell's Sporting Goods, 2013).

Exclusive Brand Offerings: the business will offer its customers high-quality goods at competitive prices, marketed under exclusive brands. The business will invest in procurement and development staff that sources performance-based goods targeted at sporting enthusiasts for sale under its own brands. The company's private-label products will present value to its customers at every price point and carry high gross margins.

Competitive Pricing: the business will position itself to be aggressive on price, but it will not endeavor to be a price leader. The business will maintain a strategy of matching its competitors' advertised prices. If a customer discovers that a competitor has a lower price on an item, the business will lower its price. In addition, under the "Right Price Promise," if within 30 days of buying an item from the company the buyer finds a lower price from a competitor, the company will refund the difference. The business will seek to offer value to customers and uphold a reputation as the main provider of value.

Broad Collection of Brand-Name Products: the business will carry a variety of popular brands including Columbia, Nike, North Face, Callaway, Under Armour, Adidas, and private-label products sold under names that include Walter Hagen and Ativa, which are found in its stores. The breadth of its product selection in every category of sporting goods offers customers a variety of price points.

Genuine Sporting Goods Retailer: the business will be a retailer of authentic athletic products, footwear and apparel, meaning it will offer athletic merchandise that is of high quality and intended to improve customers' performance.
The business will believe that its customers seek authentic, real product offerings, and

Thursday, February 6, 2020

Public health Essay Example

Public health - Essay Example

It would monitor progress continuously by setting measurable goals at the national, state and local levels. Its mission is also to incorporate the help of various government and non-government agencies for inputs to improve its policies and practices. It would also encourage research and data collection to improve and innovate healthcare deliverables. Healthy People 2010 had mainly focused on eliminating disparities in healthcare delivery and on improving the health of all citizens irrespective of race, class, culture and color. The vulnerable segment had become a vital area of focus. The program's major objective was to increase quality of life along with increased life expectancy. Risk reduction and creating awareness of preventive measures were critical aspects of the program. It was also a collaborative effort with a focus on research and development in the area. Healthy People 2020, on the other hand, emphasizes the elimination of disparities in healthcare deliverables by improving the social and physical environment and promoting equity and health for all people. Promoting healthy behavior and lifestyle changes are important aspects of the program, thought to be critical ingredients for improving quality of life in old age.

Tuesday, January 28, 2020

King of the castle tension Essay Example for Free

King of the castle tension Essay

"I'm the King of the Castle": Literature Coursework. Investigate the ways in which Susan Hill uses language to create tension and a sense of foreboding in "I'm the King of the Castle". Susan Hill implements several writing techniques to create tension in the novel. Tension in this sense simply means mental strain or excitement in the readers. One of the techniques used is her third-person narration. This narrator is omniscient: not one of the characters in the novel, yet knowing everything that runs through the characters' minds. Hill uses this technique to take the readers on a journey, moving freely in time and space, so that they know what any character is doing or thinking at any point in time. This is only possible because the narrator is not a character in the novel and so can be anywhere, at any time. Susan Hill uses many different techniques to put a point across, the most important being her use of imagery. However, her writing also has many other qualities, such as good structure and her ability to think like her characters. In addition, she manages to build up tension and uses different ways of emphasising words or phrases. All of these factors contribute to her unique evocative style and add to her reputation as a very talented writer. In chapter eleven, she describes vividly how Kingshaw feels sick with fright when Hooper locks him in the shed: "He retched, and then began to vomit, all over the sacks, the sick coming down his nose and choking him. It tasted bitter. He bent forwards, holding his stomach. When it finished he wiped his mouth on the sleeve of his shirt. He was shivering again." This passage is an example of her excellent use of imagery.
She conjures up a picture of the scene as well as expressing Kingshaw's fears and senses in an evocative style, using a scene that we can all relate to and understand. An example of Susan Hill's good structure comes at the very beginning of the novel: when Hooper and Kingshaw first meet, Hooper sends Kingshaw a note saying "I didn't want you to come here." This sets up the storyline from the beginning, leading us to expect events to come. Then at the very end of the novel, before Kingshaw commits suicide, Hooper sends him a final note saying "Something will happen to you, Kingshaw." She shows the ability to think like a child, which adds to the overall effect of the book because the main character, Kingshaw, is a child. This gives us a wider understanding of Kingshaw's character and his thoughts. Examples of her thinking like a child appear in many forms in the novel. One of them is her use of childish language and grammar: "Now, he thought, I know what Hooper is really like. He's a baby. And stupid. And a bully." Notice in this particular phrase that she uses childish words like "baby", "stupid" and "bully". The use of short, abrupt sentences emphasises the words and adds to the childish theme, because it is grammatically incorrect to start a sentence with a conjunction, which is what a child might do. Another form of her childish thinking is how she shows an understanding of children's fears and their reactions. An example of this is Kingshaw's fear of moths: "There are a lot of moths," Hooper said softly, "there always are, in woods. Pretty big ones, as well." Kingshaw's stomach clenched. In his nostrils, he could smell the mustiness of the Red Room. This passage shows how Hooper childishly taunts Kingshaw with his fear. She shows Kingshaw's reaction to his fear by saying his stomach "clenched".
She then continues with his memory of the Red Room, where he had been scared by the death moths, using her evocative style to describe how he associates moths with the musty smell of the Red Room. She uses the example of moths throughout the book, along with Kingshaw's other fears, such as birds. To keep the reader alert, Susan Hill tends to change from one scene to another very abruptly. A classic example is in chapter sixteen, when everyone is in the breakfast room on the day of Mrs. Helena Kingshaw and Mr. Hooper's wedding announcement. Suddenly the scene changes to them being in a muddy field. This can be quite confusing for the reader, but it does keep them alert. It is also in this scene that Susan Hill shows her ability to build up tension. This is done by Kingshaw expressing his fears about something that we do not know about, and Mrs. Helena Kingshaw talking about how he was scared by this thing when he was little. As the passage continues the writer gives us a clue that the unknown fear is of a certain place, and finally (after a page of writing) she tells us that the place in question is a circus. Susan Hill uses many different techniques to build up an atmosphere. In my opinion the most effective atmosphere she creates is in chapters twelve and thirteen, when Hooper falls off the castle wall. When Kingshaw reaches the top of the castle (without Hooper) he feels a sense of power. He shouts out "I'm the King of the Castle", which relates to the title of the book. To make us understand how Kingshaw really does feel like a king, she repeats the phrase "I am the King" three times. He feels so powerful that he thinks he could kill Hooper. When Kingshaw is in a rage with Hooper, telling him to come down, he swears at him; this shocks the reader, as he is only a child. When Hooper is falling off the castle wall Kingshaw commands "TAKE YOUR HANDS OFF THE WALL, HOOPER." The use of capital letters creates the effect that what he is saying is important.
When Hooper falls and is carried off on a stretcher, thunder rumbles in the background, which gives the ironic effect that this is not going to be a good thing for Kingshaw. Kingshaw is then made to get down from the castle, which can be seen as symbolic of his life: every time he reaches the top he is always forced to go back down, which is, once again, ironic. The whole book gives an immense sense of tension to the reader. The atmosphere is one of suspense and danger. The overall use of abrupt, simple dialogue accentuates the feeling of incoming peril. Susan Hill writes the novel in a way which causes the reader to be constantly alert, and to expect the sinister and foreboding to occur. Arsalan Abdullah

Monday, January 20, 2020

Fairness of the SAT

The Scholastic Aptitude Test (SAT) was created to test college-bound students on their mathematical and verbal aptitudes and thus to predict their ability to succeed academically in college. In the United States, the SAT is the oldest and most widely used college entrance test. It was first administered in June 1926 to only 8,040 high school students and is now taken by over 2 million students. Over the years, the SAT has become one of the most important tests of a teenager's life for admission to college. The test is administered seven times a year at thousands of testing centers throughout the United States. Most colleges consider the SAT to be a reliable predictor of academic success in college and therefore use it as a critical tool when selecting applicants. However, the question that has to be confronted is whether the test is fair to all students. Educators have been questioning the validity of using the SAT to determine college admission or to predict academic success because the test appears to be discriminatory and biased against women, minorities, and the poor (low income). The Educational Testing Service (ETS), which produces and administers the test, claims that the SAT in its current form "is an impartial and objective measure of student ability" (Owen 272). However, critics of the SAT argue "that tests like the SAT measure little more than the absorption of white upper-middle-class culture and penalize the economically disadvantaged" (Owen 10). The statistical reality of SAT scores is that students who take coaching/prep courses do better than those who are not coached; men do better than women; whites do better than blacks; and the rich do better than the poor. Based upon my research, the SAT appears to be discriminatory against women, minorities, and the poor, and a test this flawed should not be used as a key factor in college admission or as a predictor of academic success.
In March 2005, a "new and improved" SAT will be introduced to theoretically eliminate any questions deemed biased and discriminatory. This revised SAT would appear to be a concession to the outcry of criticism against the current test. However, since the new test will emphasize achievement rather than aptitude, it will once again favor the student who can afford coaching and attends a high school with a superior curriculum, i.e. the rich and white. An "equal opportunity" college entrance examination is virtually impossible because someone will always have or obtain an advantage.

Sunday, January 12, 2020

Understanding Man’s Power

In recent years, we have come to understand that relations between men and women are governed by a sexual politics that exists outside individual men's and women's needs and choices. It has taken us much longer to recognize that there is a systematic sexual politics of male-male relationships as well. Under patriarchy, men's relationships with other men cannot help but be shaped and patterned by patriarchal norms, though they are less obvious than the norms governing male-female relationships. A society could not have the kinds of power dynamics that exist between women and men in our society without certain kinds of systematic power dynamics operating among men as well. Men do not just happily bond together to oppress women. In addition to hierarchy over women, men create hierarchies and rankings among themselves according to criteria of "masculinity." Men at each rank of masculinity compete with each other, with whatever resources they have, for the differential payoffs that patriarchy allows men. Men in different societies choose different grounds on which to rank each other. Many societies use the simple facts of age and physical strength to stratify men. Our society stratifies men according to physical strength and athletic ability in the early years, but later in life focuses on success with women and ability to make money. In our society, one of the most critical rankings among men deriving from patriarchal sexual politics is the division between gay and straight men. This division has powerful negative consequences for gay men and gives straight men privileges. But in addition, this division has a larger symbolic meaning. Our society uses the male heterosexual-homosexual dichotomy as a central symbol for all the rankings of masculinity, for the division on any grounds between males who are "real men" and have power, and males who are not. Any kind of powerlessness or refusal to compete becomes imbued with imagery of homosexuality.
In the men's movement documentary film Men's Lives, a high school male who studies modern dance says that others often think he is gay because he is a dancer. When asked why, he gives three reasons: because dancers are "free and loose," because they are "not big like football players," and because "you're not trying to kill anybody." The patriarchal connection: if you are not trying to kill other men, you must be gay. Another dramatic example of men's use of homosexual insults as weapons in their power struggle with each other comes from a document which provides one of the richest case studies of the politics of male-male relationships to yet appear: Woodward and Bernstein's The Final Days. Ehrlichman jokes that Kissinger is queer, Kissinger calls an unnamed colleague a psychopathic homosexual, and Haig jokes that Nixon and Rebozo are having a homosexual relationship. From the highest ranks of male power to the lowest, the gay-straight division is a central symbol of all the forms of ranking and power relationships which men put on each other. MEN'S POWER WITH WOMEN The relationships between the patriarchal stratification and competition which men experience with each other, and men's patriarchal domination of women, are complex. Let us briefly consider several points of interconnection between them. First, women are used as SYMBOLS OF SUCCESS in men's competition with each other. It is sometimes thought that competition for women is the ultimate source of men's competition with each other. There is considerable reason, however, to see women not as the ultimate source of male-male competition, but rather as only symbols in a male contest whose real roots lie much deeper. Second, women often play a MEDIATING role in the patriarchal struggle among men. Women get together with each other, and provide the social lubrication necessary to smooth over men's inability to relate to each other non-competitively.
A modern myth, James Dickey's novel Deliverance, portrays what happens when men's relationships with each other are not mediated by women. According to Heilbrun, the central message of Deliverance is that when men get beyond the bounds of civilization, which really means beyond the bounds of the civilizing effects of women, men rape and murder each other. A third function women play in male-male sexual politics is that relationships with women provide men a REFUGE from the dangers and stresses of relating to other males. Traditional relationships with women have provided men a safe place in which they can recuperate from the stresses they have absorbed in their daily struggle with other men, and in which they can express their needs without fearing that these needs will be used against them. If women begin to compete with men and have power in their own right, men are threatened by the loss of this refuge. Finally, a fourth function of women in males' patriarchal competition with each other is to reduce the stress of competition by serving as an UNDERCLASS. As Elizabeth Janeway has written in Between Myth and Morning, under patriarchy women represent the lowest status, a status to which men can fall only under the most exceptional circumstances, if at all. Competition among men is serious, but its intensity is mitigated by the fact that there is a lowest possible level to which men cannot fall. One reason men fear women's liberation, writes Janeway, is that the liberation of women will take away this unique underclass status of women. Men will now risk falling lower than ever before, into a new underclass composed of the weak of both sexes. Thus, women's liberation means that the stakes of patriarchal failure for men are higher than they have been before, and that it is even more important for men not to lose. Thus, men's patriarchal competition with each other makes use of women as symbols of success, as mediators, as refuges, and as an underclass.
In each of these roles, women are dominated by men in ways that derive directly from men's struggle with each other. Men need to deal with the sexual politics of their relationships with each other if they are to deal fully with the sexual politics of their relationships with women. MEN'S POWER IN SOCIETY At one level, men's social identity is defined by the power they have over women and the power they can compete for against other men. But at another level, most men have very little power over their own lives. How can we understand this paradox? The major demand to which men must accede in contemporary society is that they play their required role in the economy. But this role is not intrinsically satisfying. The social researcher Daniel Yankelovich has suggested that about 80% of U. S. male workers experience their jobs as intrinsically meaningless and onerous. They experience their jobs and themselves as worthwhile only through priding themselves on the hard work and personal sacrifice they are making to be breadwinners for their families. Accepting these hardships reaffirms their role as family providers and therefore as true men. Linking the breadwinner role to masculinity in this way has several consequences for men. Men can get psychological payoffs from their jobs which these jobs never provide in themselves. By training men to accept payment for their work in feelings of masculinity, rather than in feelings of satisfaction, men will not demand that their jobs be made more meaningful. As a result, jobs can be designed for the more important goal of generating profits. Further, the connection between work and masculinity makes men accept unemployment as their personal failing as males, rather than analyze and change the profit-based economy whose inevitable dislocations make them unemployed or unemployable. Men's jobs are increasingly structured as if men had no direct roles or responsibilities in the family–indeed, as if they did not have families at all. 
But paradoxically, at the same time that men's responsibilities in the family are reduced to facilitate more efficient performance of their work role, the increasing dehumanization of work means that jobs give men only the satisfaction of fulfilling the family breadwinner role. The relative privilege that men get from sexism, and more importantly the false consciousness of privilege men get from sexism, play a critical role in reconciling men to their subordination in the larger political economy. This analysis does not imply that men's sexism will go away if they gain control over their own lives, or that men do not have to deal with their sexism until they gain this control. Rather, the point is that we cannot fully understand men's sexism or men's subordination in the larger society unless we understand how deeply they are related. Ultimately, we have to understand that patriarchy has two halves which are intimately related to each other. Patriarchy is a dual system, a system in which men oppress women, and in which men oppress themselves and each other. At one level, challenging one part of patriarchy inherently leads to challenging the other. This is one way to interpret why the idea of women's liberation led so soon to the idea of men's liberation, which ultimately means freeing men from the patriarchal sexual dynamics they now experience with each other. But because the patriarchal sexual dynamics of male-male relationships are less obvious than those of male-female relationships, men now face a real danger: while the patriarchal oppression of women may be lessened as a result of the women's movement, the patriarchal oppression of men may be untouched. The real danger for men posed by the attack that the women's movement is making on patriarchy is not that this attack will go too far, but that it will not go far enough. Ultimately, men cannot go any further in relating to women as equals

Saturday, January 4, 2020

Hesse's Siddhartha as it Parallels Maslow's Hierarchy of Needs

Hesse's Siddhartha as it Parallels Maslow's Hierarchy of Needs

Several parallels can be drawn between the psychologist Abraham Maslow's theoretical hierarchy of needs and the spiritual journey of Siddhartha, the eponymous main character in Herman Hesse's novel. Maslow's hierarchy of needs is somewhat of a pyramid that is divided into eight stages of need through which one progresses throughout one's entire life. During the course of his lifetime, Siddhartha's personality develops in a manner congruent with the stages of Maslow's hierarchy. Siddhartha's progress from each of the major sections of the hierarchy is marked by a sharp change in his life or behavior. Siddhartha is the story of a young man's journey in search of ... He specialized in the study of human personality and development. One of his major contributions was the development of a theoretical hierarchy of needs as a model for understanding the development of an individual's personality. The eight steps in Maslow's hierarchy of needs, from the most basic requirements for survival to the most abstract, are as follows:

Group One
- Physiological needs: hunger, thirst, bodily comforts, etc.
- Safety/security: out of danger
- Belongingness and love: affiliate with others, be accepted
- Esteem: to achieve, be competent, gain approval

Group Two
- Cognitive: to know, to understand, and explore
- Aesthetic: symmetry, order, and beauty

Group Three
- Self-actualization: to find self-fulfillment and realize one's potential
- Transcendence: to help others find self-fulfillment and realize their potential

Not every individual, according to Maslow, will progress through every stage, nor will an individual necessarily work through each stage in sequence or only once. Siddhartha rose through the early stages of development early in life. In fact, the book begins with Siddhartha already having mastered the first four stages. Because Siddhartha was born into a well-to-do upper-caste family, he was able to satisfy such