
Volatilities

 

Outline

1) Definition

2) How Volatilities Are Used In CTRM Systems

3) How to Calculate Volatilities

4) Assumption of Normal Distribution

5) Assumption of Lognormal Distribution

6) Considerations for CTRM Systems

7) EWMA (Exponentially Weighted Moving Average) and GARCH (Generalized Auto-Regressive Conditional Heteroscedasticity) Methodology For Calculating Volatilities

 

1) Definition

‘Volatility’ is the annualized standard deviation of percentage price changes for, in this case, a commodity.

 

It is a measure of how widely ‘things’ (commodity prices) move around.

 

Notes:

1.1) We’ll refine this definition a bit below.

1.2) Notice we said ‘price changes’ and not ‘prices’ in the definition. 

1.3) ‘Standard Deviation’ is a concept from the field of statistics, i.e., a generic math concept.  It is defined as the square root of the ‘variance’, another term/concept from statistics.  Those two terms are generic to all fields, e.g., physics, etc.  It is only ‘volatility’ that has a special definition for use with commodities markets (and financial markets in general).

 

 

2) How Volatilities Are Used In CTRM Systems

 

Two ways:

2.1) For use in option pricing models, i.e., as one of the inputs to, for example, the Black Scholes option pricing model.  (The other inputs are the time to expiration in years, the interest rate, the strike price of the option, and the market price.)

 

2.2) For calculating VaR (value at risk).  Only ‘parametric’ (closed form formula) and ‘Monte Carlo’ methodologies of VaR use volatilities.  They aren’t used/needed if you use the Historic VaR approach.

 

 

3) How to Calculate Volatilities

 

This shows how to calculate volatilities using historic prices.  For this section, we’ll assume you are planning to use the volatilities with VaR.

 

If you were going to use the volatilities for option pricing, you probably would not use the method described here.  Instead, you would derive volatilities (back-solve) using current market *prices* (i.e., option prices) to figure out what volatilities must be applicable, so that if you plug them into a given pricing model with the rest of the inputs (underlying price, option strike, interest rate, time in years to expiration), you get the same option price as the currently traded value.

 

3.1) The first step is to decide on your time frame.  Do you use 2 months of price data?  6 months?  1 year?  This decision will likely change the end result.  For example, if the most recent 2 months happened to be especially volatile, using just 2 months of price data may give a higher calculated volatility value than using a year’s worth of price data.

 

For this example, we’ll use 11 trading days.  That is probably never a value to use in practice, as it is too low, but it keeps things short for an intro, and the methodology is the same no matter how many days you use.

 

These are the prices we’ll use.  We are assuming today is 10-Jun-202X.

Day #   Date          Price
11      27-May-202X   52.53
10      28-May-202X   53.14
9       29-May-202X   52.12
8       30-May-202X   51.66
7       31-May-202X   53.42
6        3-Jun-202X   51.54
5        4-Jun-202X   52.66
4        5-Jun-202X   52.71
3        6-Jun-202X   52.43
2        7-Jun-202X   51.90
1       10-Jun-202X   52.13

 

The ‘Day #’ doesn’t mean anything functionally/mathematically; we show it just to help orient the reader.

 

 

3.2) The next step is to calculate the price changes.  With 11 days of prices, we’ll get one less than that, i.e., 10 days of price changes.

 

Note that for the ‘Percent Price Changes’, we are giving the ‘one plus’ version of the return (‘return’ = ‘price change’), so we show 101.1612% instead of 1.1612%.

 

The way you interpret the ‘one plus’ returns is as follows:

If you have 100%, that means no change, i.e., you have 100% of the prior value.  If you have a ‘one plus’ return of 101.1612% (as shown for the first return below), then you have 101.1612% of the prior day’s value.

 

If the ‘one plus’ return was 0 (zero), then that means you are down 100%, i.e., lost all value, i.e., the price went from, for example, 52.53 down to zero.  This means that ‘one plus’ returns can never be below zero, at least for commodity prices (i.e., things that are not spreads of two or more prices), or at least we can assume that commodity prices can’t go below zero.

 

Day #   Date          Price   Price Change   Percent Price Change
11      27-May-202X   52.53
10      28-May-202X   53.14    0.61           101.1612%
9       29-May-202X   52.12   -1.02            98.0805%
8       30-May-202X   51.66   -0.46            99.1174%
7       31-May-202X   53.42    1.76           103.4069%
6        3-Jun-202X   51.54   -1.88            96.4807%
5        4-Jun-202X   52.66    1.12           102.1731%
4        5-Jun-202X   52.71    0.05           100.0949%
3        6-Jun-202X   52.43   -0.28            99.4688%
2        7-Jun-202X   51.90   -0.53            98.9891%
1       10-Jun-202X   52.13    0.23           100.4432%

 

 

3.3) Now take the standard deviation of the ten percent-price-change values above.

 

In Excel, you use the ‘STDEV’ function.  Don’t use the ‘STDEVP’ function.  The ‘P’ is for ‘Population’ and the ‘P’ function should only be used when you know all of the possible outcomes and, especially, know the ‘true’ average of your values.  While we can calculate the average of the values above, that is just a ‘statistical sample’ and not assumed to be the ‘true’ (or real) average of the ‘distribution’ of price changes.  Hence the need for using STDEV.

 

This is the formula I used.  For me, the data is in column G, rows 5 to 14.

 

=STDEV(G5:G14)

 

And we get:

 

0.0200004328

Or, expressed as a percent:

2.00004328%

 

Side notes:

a) If you take the standard deviation of the non-‘one plus’ returns, i.e., [1.1612%, -1.9195%, -0.8826%, 3.4069%, -3.5193%, 2.1731%, 0.0949%, -0.5312%, -1.0109%, 0.4432%], then you’ll get the exact same value.  Try it and see.

b) As a reminder: we made up the prices, i.e., so don’t infer too much based on the calculated values.

 

3.4) So far, the standard deviation we have is of daily price changes.  We still need to annualize it.

 

To do this, we multiply by the square root of the number of trading days in the year.  We’ll assume 256 days in a year for this example, i.e., roughly 365 minus weekends and holidays.  Of course, the actual number of good business days in a year will vary year by year and based on local holidays.

 

What’s nice about using 256 is that the square root is 16, a nice whole number.

 

So, for the annualized volatility, you get:

 

32.0006924% = 2.00004328% * 16

 

In other words, about 32%.
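
Here is a minimal sketch in plain Python of the full calculation from sections 3.1 through 3.4, using the same made-up prices (nothing beyond the standard math module):

from math import sqrt

# The 11 made-up example prices, oldest first.
prices = [52.53, 53.14, 52.12, 51.66, 53.42, 51.54,
          52.66, 52.71, 52.43, 51.90, 52.13]

# 3.2) The 'one plus' percent price changes: today's price over yesterday's.
returns = [p1 / p0 for p0, p1 in zip(prices, prices[1:])]

# 3.3) Sample standard deviation, i.e., Excel's STDEV (n-1 denominator),
# not STDEVP (n denominator).
mean = sum(returns) / len(returns)
daily_stdev = sqrt(sum((r - mean) ** 2 for r in returns) / (len(returns) - 1))

# 3.4) Annualize by the square root of 256 trading days, i.e., 16.
print(f"daily stdev:    {daily_stdev:.8%}")        # ~2.00004328%
print(f"annualized vol: {daily_stdev * 16:.7%}")   # ~32.0006924%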

 

 

4) Assumption of Normal Distribution

For people familiar with the Normal Distribution, a.k.a., ‘the bell curve’, you may recall that:

Plus or minus one standard deviation covers about 2/3 of the values.

Plus or minus two standard deviations covers about 95% of the values.

Plus or minus three standard deviations covers about 99% of the values.

 

Using our example of 32% above, and assuming a starting point of $50, we get:

+/- one standard deviation is a range of 34 to 66.  That means we expect, if the price today is 50 and the volatility is 32%, that there is a 2/3 chance that the price will be between 34 and 66 in one year’s time *if the prices are normally distributed*.

+/- two standard deviations is a range of 18 to 82.  So, a 95% chance of prices within that range.

+/- three standard deviations is a range of 2 to 98.  So, a 99% chance of prices within that range.

 

The above info is given to help build intuition about what ‘32% annualized volatility’ means.  You might be thinking/reacting: why are we talking about crude oil at $2?!?  That can’t happen (or is very unlikely to happen).  Response: that is what you get when you assume a normal distribution.  You can improve on that, though, by making a different assumption about how to model the distribution of commodity price changes.
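
As a quick check of the arithmetic above (price 50, volatility 32%, normal distribution), a minimal sketch:

price, vol = 50, 0.32
for n in (1, 2, 3):
    low, high = price * (1 - n * vol), price * (1 + n * vol)
    print(f"+/- {n} standard deviation(s): {low:g} to {high:g}")
# +/- 1 standard deviation(s): 34 to 66
# +/- 2 standard deviation(s): 18 to 82
# +/- 3 standard deviation(s): 2 to 98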

 

5) Assumption of Lognormal Distribution

For commodity price changes (i.e., not prices), some will model VaR using the assumption that commodity price changes are lognormally distributed. 

 

This is usually a better assumption than assuming commodity price changes are normally (i.e., as opposed to lognormally) distributed.  A value is lognormally distributed if its log (the natural log, which is Excel function ‘LN’) is normally distributed.

 

The normal distribution is symmetric and goes from minus infinity to positive infinity.  The lognormal distribution, by the properties of logs, starts at zero and goes up to positive infinity.

 

This matches the properties of our ‘one plus’ returns.  E.g., prices can double or triple, e.g., from 50 to 100 or 150, but we assume they can’t go from 50 to -50 or -100.  I.e., they are asymmetric in that they can go up more than they can go down, which matches the lognormal distribution.

 

When we assume that prices for a particular price index are lognormally distributed (in the context of using volatilities in a VaR calculation), then we need to take the log (the natural log, which is Excel function ‘ln’) of the *price changes* (not the prices). 

 

Here is the same table from above, with an extra column for the log of the percent price changes.

 

Day #   Date          Price   Price Change   Percent Price Change   Log of Percent Price Change
11      27-May-202X   52.53
10      28-May-202X   53.14    0.61           101.1612%               1.1546%
9       29-May-202X   52.12   -1.02            98.0805%              -1.9381%
8       30-May-202X   51.66   -0.46            99.1174%              -0.8865%
7       31-May-202X   53.42    1.76           103.4069%               3.3501%
6        3-Jun-202X   51.54   -1.88            96.4807%              -3.5827%
5        4-Jun-202X   52.66    1.12           102.1731%               2.1498%
4        5-Jun-202X   52.71    0.05           100.0949%               0.0949%
3        6-Jun-202X   52.43   -0.28            99.4688%              -0.5326%
2        7-Jun-202X   51.90   -0.53            98.9891%              -1.0160%
1       10-Jun-202X   52.13    0.23           100.4432%               0.4422%

 

Note that the log of the percent change gives values that may match our intuition better.  E.g., seeing that the price changed by -1.9381% from one day to the next may be easier to understand than saying the price today was 98.0805% of the prior day’s price.

 

By using the Excel function STDEV on the new column, we get a calculated volatility of 2.00050152% for the log of the price changes, or an annualized value of 32.008% (multiply by 16).
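
Here is the same minimal Python sketch as before, adjusted to take the natural log of the ‘one plus’ returns first:

from math import log, sqrt

prices = [52.53, 53.14, 52.12, 51.66, 53.42, 51.54,
          52.66, 52.71, 52.43, 51.90, 52.13]  # oldest first

# Section 5: log of the 'one plus' percent price changes.
log_returns = [log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

mean = sum(log_returns) / len(log_returns)
daily_stdev = sqrt(sum((r - mean) ** 2 for r in log_returns)
                   / (len(log_returns) - 1))
print(f"annualized vol: {daily_stdev * 16:.3%}")  # ~32.008%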

 

Compare that new value to the 32.001% value we got from just using the simpler percent changes and we can see that it didn’t make much of a difference in this case.

 

As a rule, the bigger the price changes (as a percent), the more it will make a difference which approach you use, i.e., using just the percent change or the log of the percent change.

 

6) Considerations for CTRM Systems

 

6.1) CTRM systems should be able to automatically calculate volatilities using the settlement (‘closing’) prices already in the system.

6.1.1) There should be a ‘hook’ into the process so firms can customize the calculation, i.e., at the step in the process where the system has gathered up all of the prices needed to calculate the volatilities, users should be able to tactically modify the prices, e.g., to handle a special method of interpolating data for days missing prices.

6.1.2) One consideration in how systems calculate volatilities for commodities is to allow users to configure whether the system takes the prices for a specific contract month, e.g., December 2019, over a window of, e.g., 6 months, to calculate the volatility, or, instead, takes the prices for a rolling month that is a constant, for example, 4 months out.

 

6.2) CTRM systems should also allow for easy import and export of volatilities, i.e., should allow for:

6.2.1) CSV format (text)

6.2.2) XML format

6.2.3) Native Excel format

6.2.4) Web service, i.e., for loading (importing) or exporting volatilities.

 

6.3) On the topic of how volatilities are used with option valuation, CTRM systems need to support a lot of variations, i.e., be flexible with the design and implementation of how a particular volatility or set of volatilities is ‘looked up’ for valuation of an option.

 

6.3.1) First, talking about the structure of volatilities, you would need, for various users:

 

6.3.1.1) One volatility per month.  E.g., for NYMEX Crude oil contracts, allow firms to enter in a volatility value for August, another one for September and so on. 

Side note: As a general rule, the front months, i.e., those closest to today, have the highest volatility, and things drop off from there.  That is because price-influencing events, e.g., hurricanes, frosts, etc., tend to impact the front months more, with prices 1 or 2 years out barely moving, both because people figure the event will have sorted itself out by then and because the back months tend to trade less.

 

6.3.1.2) One volatility per month per strike.  This is called ‘two-dimensional’, with the dimensions being time (e.g., the contract months) and the strike price being the second dimension.

 

This then leads to the need to support different ways to define the strikes.  E.g., you might set up specific values, e.g., for crude these might be 48, 49, 50, 51, and so on.  Though that is less likely to be used.

More likely, people will want to set up relative strikes, i.e., relative to the market / the at-the-money strike.

So, for example, like this:

-3, -2, -1, 0, 1, 2, 3, etc.

So if the market is at 50 and your option has a strike price of 50, it would get its volatility value from the ‘0’ column, for the appropriate month (row).

 

Another variation is to set them up as percentages or ratios, e.g., like this:

80%, 90%, 100%, 110%, 120%, etc.

Or, set up an alternative way as:

-20%, -10%, 0%, +10%, +20%.

So if the market is at $50 and your option has a strike price of $55, then the system would look up the volatility from the 110% (or +10%) column for the appropriate month (row).

 

6.3.1.3) One volatility per month based on the delta of the option.  As a note, an at the money option has a delta near 0.5, though it won’t be precisely 0.5, and systems should account for that.  A user might want columns like:

.2, .3, .4, .5, .6, .7, etc.

With then rows by month.

Alternately, a user may want to set the precise value for an at the money option, so would want to specify values by:

.2, .3, .4, ATM, .6, .7, etc.

Where ‘ATM’ would be a special system feature that understands that it is the at the money value.

 

6.3.1.4) Interpolation, i.e., an interpolated value for options with strikes between the values specified by the ‘volatility matrix’ of rows and columns, is super important for getting correct option values, and CTRM systems should support (see the sketch after this list):

6.3.1.4.1) a wide variety of interpolation methods

6.3.1.4.2) ways for users to customize their own interpolation method

6.3.1.4.3) and, critically, a tool in the system for users to debug/diagnose/view interpolated values.  e.g., without having to enter in an actual option, users should be able to get the ‘output’ volatility values for any strike price, especially one where interpolation is needed.
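
As a minimal sketch of the simplest case, here is linear interpolation across the strike dimension for one month’s row of a hypothetical volatility matrix (the relative strikes and volatility values are made up):

from bisect import bisect_left

rel_strikes = [-3, -2, -1, 0, 1, 2, 3]                    # columns
vols        = [0.36, 0.34, 0.33, 0.32, 0.33, 0.35, 0.37]  # made-up values

def interp_vol(rel_strike: float) -> float:
    """Linearly interpolate a volatility; clamp outside the grid."""
    if rel_strike <= rel_strikes[0]:
        return vols[0]
    if rel_strike >= rel_strikes[-1]:
        return vols[-1]
    i = bisect_left(rel_strikes, rel_strike)
    x0, x1 = rel_strikes[i - 1], rel_strikes[i]
    v0, v1 = vols[i - 1], vols[i]
    return v0 + (v1 - v0) * (rel_strike - x0) / (x1 - x0)

print(interp_vol(0.5))  # halfway between the 0 and +1 columns: 0.325

A real system would, of course, also interpolate across the month dimension and support other methods (splines, etc.); a debugging view that prints these interpolated values for any input strike is what point 6.3.1.4.3 is asking for.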

 

6.3.2) Another thing CTRM systems need to get right is the ‘lookup’ of volatilities based on the correct date.  The date to use for the lookup might be:

a) the option expiration date, which may be, for example, November 26, 2019 or

b) the contract month of the option, which may be for the same example as above, the Dec-19 contract. 

This should be a setting per option volatility definition.  The key is to align what a firm is putting as intended inputs (for volatilities) with what the system will actually use for an option.

 

6.3.3) Special considerations for averaging.

6.3.3.1) Suppose you have an averaging option, e.g., one that has a (fixed) strike price versus the average of the business day settle prices, e.g., about 22 of them, over a calendar month for NYMEX crude oil.  Assuming the month is June, this would get its prices from the July and August contract months, with a ratio of about 2/3 to 1/3.

 

Though this option gets its prices in June, the June contract month would expire sometime at the end of May, so the option needs to get its prices from the first nearby contract, which would be the July contract at the beginning of the month, until it expires, and then the August contract.

 

To get the correct volatility for use with an averaging option, the CTRM system will need to pull the volatilities, as supplied by the user, for two months, the July and August months.  


CTRM systems should not only have that work correctly, they should be able to ‘show their work’ so as to show the details, i.e., one value per day, that make up the average volatility (for the set of days) that would be fed into the option pricing model.

 

Moreover, CTRM systems need to handle the case/situation of how prices are set (‘fixed’) during the pricing month, i.e., June in this case.  Prior to the start of the pricing month, the volatility used would be an average of, say, 14 days from July and 8 days from August (for 22 days in total).  At some point, e.g., with just 6 days left, all of the volatility values would come from the August month from the volatility matrix, because all of the July price values (in the overall average) would have already been set/known/fixed.
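
As a deliberately simplified sketch of that weighting (real systems may weight by variance and remaining time to expiry rather than a simple day-count average; the volatility values are made up):

def averaging_vol(unfixed_days, vols):
    """Weight each contract month's vol by its remaining unfixed pricing days."""
    total = sum(unfixed_days.values())
    return sum(days * vols[m] for m, days in unfixed_days.items()) / total

vols = {"Jul": 0.32, "Aug": 0.30}  # made-up volatility matrix values

# Before the pricing month starts: all 14 July + 8 August days are unfixed.
print(averaging_vol({"Jul": 14, "Aug": 8}, vols))  # ~0.3127
# With just 6 days left, all of the July days are already fixed/known.
print(averaging_vol({"Jul": 0, "Aug": 6}, vols))   # 0.30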

 

In summary, CTRM systems need to both handle the case of averaging options so as to supply the correct volatilities as appropriate, and make it easy for product control/risk managers to delve into the details of how things are working.

 

6.3.4) Call versus put.

Systems should allow for a different set of volatilities to be used for calls versus puts.  While in theory, the same volatility should be used for a call option as for a put option, all other things being equal, some firms have special reasons in some cases to use different volatilities.  So, at a minimum, a CTRM system should support that.

 

6.3.5) Smile volatilities.

This is another way of saying, a CTRM system should support a formula approach for the system to calculate which volatility value should be used for a particular option deal as an input to its pricing model.

 

Such a formula is often called a ‘smile formula’, due to its most common shape, which has the middle part be lower than the sides (the ‘wings’), i.e., so shaped like a smile.

 

CTRM systems should allow for formulas to be built-in and also for the formulas to be kept externally, i.e., the CTRM system would have just the inputs to the formula, and the mathematical part of the formula would be external, e.g., a DLL or a web service.

 

A common formula might have just three inputs:

a) ATM volatility – the volatility value to use when an option strike is considered at the money

b) a measure of how tilted (symmetric or asymmetric) the ‘smile’ curve is

c) a measure of how flat (or not flat) the ‘smile’ curve is

though some more complicated formulas may have 9 or more inputs to further define the shape of the curve to whatever model a firm wants to use.
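
As a minimal sketch of what a three-input smile formula might look like (a quadratic in moneyness is one common simple choice; this exact functional form and the parameter values are illustrative assumptions, not a standard):

def smile_vol(strike: float, forward: float,
              atm_vol: float, tilt: float, curvature: float) -> float:
    """Volatility as a function of strike: ATM level, tilt, and curvature."""
    x = strike / forward - 1.0  # zero at the money
    return atm_vol + tilt * x + curvature * x * x

# Market at 50: the wings sit higher than the middle, i.e., a 'smile'.
for strike in (40, 45, 50, 55, 60):
    print(strike, round(smile_vol(strike, 50.0, 0.32, -0.05, 1.5), 4))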

 

6.3.6) Three-dimensional volatilities

For things like swaptions, i.e., options on swaps, a CTRM should have support for 3-dimensional volatility inputs/outputs, sometimes called ‘volatility surfaces’.   Besides expiration date and strike price, the tenor (how long it is, e.g., 3 months long or 12 months long) can be used as a way for the system to lookup which specific volatility value should be used when pricing a particular option.

 

6.3.7) Front month issue

For markets like natural gas and power, where you might have daily prices published and projected for the remainder of the month (‘BOM’ or ‘balance of month’), you have an interesting issue with regard to volatilities for the month you are currently in.  The volatility value for the month is, in this case, an average. 

 

At the start of the month, you may have 22 days of prices that are part of that average.  Towards the end of the month, you’ll have at one point, for example, just 3 days.

 

All things being equal, the average of more things (days in this case) will have a lower volatility due to the mathematical properties of averages (or the central limit theorem, if you want to get statistical).  As an example, if you have a single 6-sided die (I internet-searched and found that both ‘die’ and ‘dice’ are acceptable for a single die from a set of dice), and you roll it twice, you would not be surprised to get an average of 1.5 (e.g., you roll a 1 and a 2).  However, if you roll it 1,000 times, you would really expect the average to be somewhere between 3 and 4.

 

I.e., the *average* of a set of random trials (die rolls) has a lower volatility the more ‘trials’ are in the average, i.e., it has a smaller range of most likely values.
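
A small simulation makes the point; the spread of the average shrinks roughly as one over the square root of the number of rolls:

import random
import statistics

random.seed(1)

def average_of_rolls(n_rolls: int) -> float:
    return sum(random.randint(1, 6) for _ in range(n_rolls)) / n_rolls

for n_rolls in (2, 22, 1000):
    averages = [average_of_rolls(n_rolls) for _ in range(10_000)]
    print(f"{n_rolls:5d} rolls: stdev of the average = "
          f"{statistics.stdev(averages):.4f}")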

 

For CTRM systems for natural gas or power, to handle this situation, users may want to use the 2nd month, i.e., the first full month out, to source the volatility value for the first month.  E.g., if today is 11-Jun-2019, we might want to use the July 2019 volatility value for June 2019 options (and for July 2019), because using a volatility based on the balance-of-month June 2019 prices may result in a volatility value that is theoretically too high, for the reason listed above.

 

 

7) EWMA (Exponentially Weighted Moving Average) and GARCH (Generalized Auto-Regressive Conditional Heteroscedasticity) Methodology for Calculating Volatilities

 

7.1) EWMA is another method to calculate volatilities.  Compared to the two approaches that were shown above, which were a) based on the percent change of daily prices and b) based on the natural log of the percent change of the daily prices, the EWMA approach is ‘just another method’ that will typically calculate a slightly different number.

 

The key being that any of these approaches would start with the same set of prices.  And one of the key decisions that will, perhaps materially, influence the volatility value that ultimately gets calculated is the time frame used, e.g., 2 months of prices, 6 months, a year, or other.

 

7.2) The idea behind EWMA is that the more recent prices, and therefore price changes, have a bigger weighting in the overall volatility calculation.  So if you are taking prices for the prior year and the prior two months were especially volatile, then we might expect EWMA to show a higher calculated volatility value than taking all of the price changes of the prior year and giving them an equal weight.
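
Here is a minimal sketch of the EWMA recursion (the RiskMetrics-style form; 0.94 is the classic RiskMetrics daily weighting factor, and the choice of that factor is exactly the judgment call discussed in 7.3.3 below):

from math import log, sqrt

def ewma_vol(prices, lam=0.94):
    """Annualized EWMA volatility from a price series (oldest first)."""
    returns = [log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    variance = returns[0] ** 2        # one common choice of seed
    for r in returns[1:]:
        variance = lam * variance + (1 - lam) * r ** 2
    return sqrt(variance) * 16        # annualize: sqrt(256) = 16

prices = [52.53, 53.14, 52.12, 51.66, 53.42, 51.54,
          52.66, 52.71, 52.43, 51.90, 52.13]
print(f"{ewma_vol(prices):.3%}")      # weights recent changes most heavily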

 

7.3) Should it be used by risk managers?

Note that we are talking in the context of calculating VaR (value at risk).  And, as a reminder, the ‘Historic VaR’ approach doesn’t require volatilities of any kind… it just needs the historic prices.

 

The negatives are:

7.3.1) This is more complicated.  In general, complicated is bad.  Even if the calculated value is marginally better, it needs to clear the hurdle of being worth the extra complexity in order to be used.  My thought challenge is to imagine you had to explain to an auditor why you chose this approach over a simpler one.  How would you do it?

 

7.3.2) The end result numbers wouldn’t necessarily be material, meaning meaningful in changing decision making, including things like trader bonuses, risk capital required, etc.  E.g., if the end result VaR changes by a few percent as a result of using EWMA, is that material? 

 

7.3.3) There isn’t necessarily any theoretical reason to use one particular weighting or another.  The weighting factor may have a value of 0.8 or 0.9.  How would you choose which one to use?  And feel that it is the best one?

 

7.3.4) As a comparison… I personally would prefer to run VaR twice, first with, say, 1 year of data, and then with 2 months of data.  From prior experience, I tend to get about the same values, so it isn’t material, i.e., doesn’t matter.

 

However, if I did see some material difference, e.g., VaR being 30% higher with two months of data, then I would want to know that.  And I would want to investigate. 

 

Part of what I am saying is that knowing whether it matters, i.e., whether using 2 months or 1 year of data changes the result, is itself perhaps important to know.  If you run VaR just one way, as EWMA, then you don’t get that knowledge.

 

7.3.5) For BlueCTRM, the nice thing about this is that people can build this in if they want it.  i.e., the EWMA method of calculating volatilities.  They could do this as:

a) Contributing to the code base, i.e., open source.

b) Build their own plugin/addin/extension to the system for personal use, if they feel this is a competitive advantage

c) Sell it as a licensable product, i.e., one based on the open standards of BlueCTRM.

 

7.4) Next up is GARCH.

Generalized Auto-Regressive Conditional Heteroscedasticity

 

7.5) Explaining what that means starts with an explanation of ‘Heteroscedasticity’.

Heteroscedasticity means that the volatility changes over time.  More technically/accurately, it is the standard deviation (or the variance) that is changing over time.

 

Let’s use two examples.

Example 1 is a die roll for a 6-sided die.  Imagine you roll a die once per business day for a year.

 

Example 2 is the price change in crude oil.

 

For Example 1, you can assume that the standard deviation of your rolls is the same for the full year.  i.e., assume the same die and the same person rolling it.  The distribution of values is still 1 to 6 with an assumed equal likelihood of each.

 

For Example 2, you can’t make that same assumption.  Some big event may have caused prices to be more volatile in the prior two months than in the 10 months before that, i.e., assume some big event happened 2 months ago and things have been different since then.

 

For commodity price changes, it is understood that the volatilities will change over time in practice.  Furthermore, it isn’t necessarily theoretically ‘OK’ to take price changes that span multiple ‘volatility regimes’, though in practice, there isn’t always a choice about that.

 

7.6) The ‘Auto-Regressive Conditional’ part of the name means this: volatilities tend to be mean-reverting.  Not prices, which can go up and up forever, but, rather, volatilities.  E.g., if there is a particularly busy/volatile time, e.g., a frost makes cocoa prices very volatile, then we can expect that to be short-lived.

A GARCH approach might produce a *lower* volatility number for use with VaR than the recent period would suggest, assuming that the recent period is unusually volatile, i.e., the expectation being that the risk should start to go back to normal.
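
As a minimal sketch, the GARCH(1,1) variance recursion adds a long-run (mean-reverting) variance term to an EWMA-style blend (the parameter values here are illustrative, not fitted):

def garch_next_variance(prev_variance: float, prev_return: float,
                        omega: float = 0.000002,
                        alpha: float = 0.09, beta: float = 0.89) -> float:
    """sigma^2_t = omega + alpha * r^2_(t-1) + beta * sigma^2_(t-1)"""
    return omega + alpha * prev_return ** 2 + beta * prev_variance

# The forecast reverts toward the long-run daily variance,
# omega / (1 - alpha - beta), which is the mean-reverting behavior.
long_run_variance = 0.000002 / (1 - 0.09 - 0.89)
print(f"long-run daily vol: {long_run_variance ** 0.5:.2%}")  # 1.00% here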

 

7.7) As with EWMA:

a) Similar criticisms, except that GARCH is even more complicated, so it would need to be that much more materially better to justify putting the extra effort/work in.

 

7.8) The biggest criticism of any standard VaR approach is the assumption that commodity price changes are lognormally (or normally) distributed.  That is demonstrably not true.  Commodity price changes are likely to have much bigger moves than you would predict from the lognormal/normal distribution.

 

These can be called ‘6-sigma’ moves (or 7-sigma, 8-sigma, etc.).  ‘Sigma’ is the spelled-out Greek letter used to represent ‘standard deviation’.  These are also sometimes said to come from a ‘fat-tailed’ distribution because the ‘wings’ (left/right sides) contain more area than a similar normal distribution (‘bell curve’).

 

The point here is that the precise way you calculate volatilities pales in comparison, i.e., is never going to be as material an issue, as an incorrect assumption about the distribution of commodity price changes.

 

7.9) As a more practical matter, have a think about this.  CTRM systems will typically assume some number of good business days in the year for the purposes of calculating volatilities.  Recall from above that you need to multiply the standard deviation of the daily (log percent) changes by the square root of the number of days in the year, to get the annualized number for use with VaR.

 

Imagine a comparison of using 250 good business days or 260.

 

The square root of 250 is: 15.8113

The square root of 260 is: 16.1245

 

That is about a 2% difference, which will carry over into the VaR calculations. 

 

Side note: It is probably best to use a hard coded assumption as to the number of days, rather than try to calculate the real number of days in a year, which will vary by holidays, and get especially complicated if you are trading in multiple locations around the world which have different holiday dates.

 

7.10) One thing to keep in mind for VaR calculations is the difference between a ‘confidence interval’, which is two-sided, and a ‘confidence level’, which is one-sided.

 

For a confidence interval, people may have learned that for a normal distribution, plus or minus 2 standard deviations is approximately 95% of the possible values.  They may remember a table like this:

+/- 1 standard deviation: ~66%

+/- 2 standard deviations: ~95%

+/- 3 standard deviations: ~99%

 

Using the Excel function ‘NORM.S.INV(N)’ (inverse normal), we can plug in percent probability values, e.g., ‘NORM.S.INV(0.025)’.

 

Plug in 0.025 and you get -1.95996 (about -2 standard deviations).

Plug in 0.975 and you get +1.95996 (about +2 standard deviations).

i.e., 0.975 - 0.025 = 95%, a two-sided ‘confidence interval’.

 

That value is *not* what you would use with VaR, if you want a 95% confidence level (one sided) VaR.  i.e., you want the threshold where 5% of the possible values are ‘to the left’ and 95% of the values are ‘to the right’ of the dividing line.

 

For that, by the way, you would have:

NORM.S.INV(0.05) = -1.6448

i.e., everything to the right of -1.6448 is 95%.
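
The same check in Python (statistics.NormalDist from the standard library is the equivalent of Excel’s NORM.S.INV):

from statistics import NormalDist

z = NormalDist().inv_cdf
print(z(0.025))  # -1.95996... : bound of the two-sided 95% confidence interval
print(z(0.975))  # +1.95996...
print(z(0.05))   # -1.64485... : one-sided 95% confidence level threshold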

 

Point here is: I have seen CTRM systems that may have the wrong number of standard deviations used in their VaR formula, e.g., 1.95996 instead of 1.6448 as a threshold.

 

I.e., check that this is correct before spending too much time comparing approaches for calculating volatilities from a set of historic prices.

 

7.11) Point here is that other assumptions/parameters besides the volatility calculation formula/approach can have the same or bigger implications on the end-result VaR number… and those other assumptions/parameters are easier to understand/explain versus something complicated like GARCH.

Such as:

7.11.1) Assumption that price changes are lognormal: huge implication on the VaR calc, but not much in the way of practical alternatives.

7.11.2) Date range of prices to use, e.g., prior 2 months or prior 1 year of prices: typically not a material change.  A good approach is to run VaR using more than one range of dates (of prices/price changes) for calculating volatilities.

7.11.3) Assumption about how many business days are in a year: may have an impact of a few percent on VaR.

 

 

 

 
