The great debate: Volatility as a risk management tool

A fear-driven headline is persuasive: ‘Volatility has spiked’, ‘The VIX hits its highest level in 10 years’, ‘Panic selling causes volatility to rise’. Yet, conceptually, treating volatility as a synonym for risk is something of a misnomer. While the Oxford dictionary defines risk as ‘a situation involving exposure to danger’, identifying risk at a portfolio level is a far more challenging task.

When making investment decisions, many investors confuse uncertainty (an unknown distribution of returns) with risk (a known distribution). This distinction matters for risk management, as it tends to result in investors becoming overconfident when they treat historical analytics as a guide to future outcomes. So, in this context, how does one go about building a systematic process to manage risk? The industry standard is volatility – a concept that is popular with market participants due, in part, to its simplicity.

What is volatility trying to achieve?

In a portfolio management sense, volatility is typically used to help understand downside risk, which generally relates to one of two problems:

  1. Experiencing a permanent loss of capital in an investment that is not recoverable within an investor’s timeframe
  2. Behavioural reactions that cause one to crystallise a temporary loss, thereby making it permanent when it was otherwise recoverable

Before delving into volatility and its usefulness, it is vital to consider the impact that different investment horizons have on risk management. Downside risk (and, to some extent, volatility) has greater relevance over shorter time periods, as it can result in a permanent loss of capital. For example, in Chart 1 we illustrate the relationship between drawdowns and volatility over a rolling five-year time frame (using the S&P 500 since 1900), showing the sensitivity to a short time frame.
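
To make the mechanics concrete, the sketch below shows one way to compute the two quantities behind a chart like Chart 1 – rolling annualised volatility and rolling maximum drawdown – assuming a pandas Series of monthly total returns. The 60-month window and the helper names are illustrative choices, not the exact calculation used for the chart.

```python
import numpy as np
import pandas as pd

def rolling_vol_and_drawdown(monthly_returns: pd.Series, window_months: int = 60) -> pd.DataFrame:
    """Annualised volatility and maximum drawdown over each rolling window.

    `monthly_returns` is assumed to be a Series of monthly total returns
    (e.g. built from the Shiller S&P 500 data); 60 months mirrors the
    five-year horizon used in Chart 1.
    """
    # Annualised standard deviation of monthly returns within each window
    vol = monthly_returns.rolling(window_months).std() * np.sqrt(12)

    # Worst peak-to-trough fall of the cumulative wealth path within each window
    def max_drawdown(r: pd.Series) -> float:
        wealth = (1 + r).cumprod()
        return (wealth / wealth.cummax() - 1).min()

    drawdown = monthly_returns.rolling(window_months).apply(max_drawdown, raw=False)
    return pd.DataFrame({"volatility": vol, "max_drawdown": drawdown})
```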

To the naked eye, Chart 1 may suggest that volatility is a decent, yet imperfect, assessment of risk. It helps an investor loosely understand the range of outcomes that may be expected from an asset – such as the difference in inherent risk between equities and other assets like cash or bonds – and may even be useful in building an asset allocation with a risk tolerance level in mind.

Chart 1: Ongoing relationship between volatility and drawdowns
Source: Morningstar Investment Management calculation, Shiller data as of 31/12/2017

However, in order to truly understand risk and build asset allocations holistically, the imperfections of volatility are worth investigating. For example, what are the in-built assumptions? Is portfolio risk sensitive to valuations? And could there be a better way to assess risk?

Getting to know volatility

There are several limitations to address regarding the use of volatility as a measure of risk. Central to these is the inherent assumption that returns are normally distributed. In this regard, there is an abundance of evidence showing that extreme downside returns empirically occur approximately ten times more often than the normal distribution predicts. This echoes James Xiong’s paper, ‘Nailing Downside Risk’, which argued that volatility measures are likely to underestimate downside risk.

In fact, overcoming this ‘fat tail’ is an area of academic interest, with derivatives of volatility such as value at risk (VaR) or, ideally, conditional value at risk (C-VaR) often cited as improvements to traditional volatility measures. The idea is to reduce the reliance on the normal distribution and express risk in a way that is far more realistic and intuitive.
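
To illustrate the idea (rather than any particular implementation), the sketch below estimates historical VaR and C-VaR directly from a sample of returns, so no normal distribution is imposed; the 95% confidence level and the fat-tailed Student-t sample are assumptions chosen purely for the example.

```python
import numpy as np

def historical_var_cvar(returns, alpha: float = 0.95):
    """Historical (non-parametric) VaR and conditional VaR (expected shortfall).

    No normal distribution is imposed, so fat tails present in the sample
    feed straight through to the risk estimate. `returns` is any array of
    periodic returns; losses are reported as positive numbers.
    """
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)       # loss exceeded only (1 - alpha) of the time
    cvar = losses[losses >= var].mean()    # average loss beyond the VaR threshold
    return var, cvar

# Purely illustrative fat-tailed sample (Student-t with 3 degrees of freedom):
# its C-VaR is noticeably worse than a normal distribution with the same
# volatility would imply.
rng = np.random.default_rng(0)
sample = 0.01 * rng.standard_t(df=3, size=10_000)
print(historical_var_cvar(sample, alpha=0.95))
```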

Yet, when attempting to quantify such risk, care must always be taken with historical inputs. Historical estimation is useful for putting risk in perspective; however, it fails to account for structural change in an asset class (e.g. shifting country exposures in emerging markets) or for assets vulnerable to leverage. This is most notable in fast-shifting landscapes, but it is also time-dependent – for instance, the US equity market was dominated by rail companies 100 years ago, so its history may have little bearing on the tech giants of today.

Within this, the key is to focus on the long term but build in a margin of safety that reflects the uncertainty. Therefore, rather than focusing on short-term measures such as the VIX, which is a 30-day implied volatility measure, we would suggest looking at 10- or even 50-year measures of risk and adopting a margin of safety that reflects the potential for structural change in an asset class. One of the many ways to do this is to support long-term analysis with deterministic processes (for example, using a factor model) as well as scenario analysis to arrive at more robust risk measures.
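
As a rough illustration of what such a deterministic, scenario-based check can look like, the sketch below pushes a hypothetical stress scenario through a simple linear factor model; the assets, factor exposures and shock sizes are invented for the example and do not represent Morningstar’s actual models.

```python
import numpy as np

# Illustrative assets, factor exposures and shock sizes - invented for the example.
assets = ["developed equities", "government bonds", "corporate credit"]
factors = ["equity", "duration", "credit spread"]
exposures = np.array([
    [1.0, 0.0, 0.1],   # equity beta, duration (years), spread duration (years)
    [0.0, 6.0, 0.0],
    [0.3, 3.0, 4.0],
])
weights = np.array([0.6, 0.3, 0.1])

# Hypothetical stress: equities -20%, yields +1%, credit spreads +2%.
# Each entry is the factor 'return' per unit of exposure, so a 1% rise in
# yields enters as -0.01 against the duration exposure.
shock = np.array([-0.20, -0.01, -0.02])

asset_impact = exposures @ shock             # approximate return of each asset
portfolio_impact = weights @ asset_impact    # approximate portfolio return
for name, impact in zip(assets, asset_impact):
    print(f"{name}: {impact:.1%}")
print(f"Estimated portfolio impact: {portfolio_impact:.1%}")
```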

Practical evidence to challenge the overuse of volatility

The point here is not to dismiss volatility-type measures in their entirety, but to recognise the dangers of over-relying on them as a risk management tool. While some of the theoretical problems are outlined above, we can further validate these limitations by checking the capabilities of volatility as a predictive tool within an asset class.

For this exercise, we take the average volatility of the S&P 500 over rolling ten-year periods and compare it to the maximum drawdown over the following ten years, using data from 1871 to 2017. In other words, we are checking whether low volatility can precede a high ‘risk of a permanent loss of capital’, or whether high volatility leads to a high risk of loss. We must stress that we are investigating the long-term relationship (i.e. ten years), which is naturally more stable than shorter time horizons, removing some of the noise but also giving it less opportunity to point to extremes (Chart 2).
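
A minimal sketch of this exercise, assuming a monthly S&P 500 total-return series (for example, one derived from the Shiller data), is shown below: it pairs each date’s trailing ten-year volatility with the maximum drawdown realised over the following ten years and reports their correlation.

```python
import numpy as np
import pandas as pd

def vol_vs_future_drawdown(monthly_returns: pd.Series, window_months: int = 120):
    """Pair each date's trailing ten-year volatility with the maximum drawdown
    realised over the following ten years, and report their correlation.

    `monthly_returns` is assumed to be a monthly S&P 500 total-return series
    (e.g. derived from the Shiller data, 1871-2017).
    """
    trailing_vol = monthly_returns.rolling(window_months).std() * np.sqrt(12)

    def max_drawdown(r: pd.Series) -> float:
        wealth = (1 + r).cumprod()
        return (wealth / wealth.cummax() - 1).min()

    future_drawdown = (
        monthly_returns.rolling(window_months)
        .apply(max_drawdown, raw=False)
        .shift(-window_months)   # align each window's drawdown with the date it starts from
    )
    pairs = pd.DataFrame(
        {"trailing_vol": trailing_vol, "future_drawdown": future_drawdown}
    ).dropna()
    return pairs, pairs["trailing_vol"].corr(pairs["future_drawdown"])
```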

Chart 2: Randomness of outcomes: Volatility is not a good predictor of future drawdowns
Axes: low to high volatility; high to low drawdown
Source: Morningstar Investment Management calculation, Shiller data as of 31/12/2017

What is the empirical evidence that supports drawdown analysis over volatility?

Let’s continue with the theme that risk is not a function of historical volatility, but rather a function of valuation – i.e. if markets have risen beyond their intrinsic worth, they are deemed to carry greater downside risk, and vice versa. This is illustrated simplistically in Chart 3.

Chart 3: Risk: A function of valuation
Source: Morningstar Investment Management

In practical terms, we want to use this context to look at portfolio sensitivity to drawdown risk based on different valuations. We do this in a multi-asset context by splitting the investment universe into three buckets and comparing the most attractively valued markets with the least attractive.

This is an imperfect science in itself, and is notably subject to hindsight bias; however, it shows empirically that when markets are expensive, downside risk grows (Chart 4).
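
For readers who want to reproduce the flavour of this test, the sketch below ranks markets into three valuation buckets at each date and compares the average subsequent drawdown of each bucket; the CAPE-style input, the 36-month horizon and the equal three-way split are illustrative assumptions rather than the exact methodology behind Chart 4.

```python
import pandas as pd

def drawdown_by_valuation_bucket(valuations: pd.DataFrame, returns: pd.DataFrame,
                                 horizon_months: int = 36) -> pd.Series:
    """Average subsequent drawdown per valuation bucket (0 = cheapest third,
    2 = most expensive third), ranking markets at each date.

    `valuations` and `returns` are assumed to be DataFrames indexed by month
    with one column per equity market (e.g. CAPE levels and monthly returns).
    """
    def max_drawdown(r: pd.Series) -> float:
        wealth = (1 + r).cumprod()
        return (wealth / wealth.cummax() - 1).min()

    records = []
    for date in valuations.index:
        future = returns.loc[returns.index > date].head(horizon_months)
        if len(future) < horizon_months:
            break  # not enough forward history left
        buckets = pd.qcut(valuations.loc[date].rank(), 3, labels=False)
        for market, bucket in buckets.items():
            records.append({"bucket": bucket, "drawdown": max_drawdown(future[market])})
    return pd.DataFrame(records).groupby("bucket")["drawdown"].mean()
```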

Chart 4: Valuation matters: Downside risk evolves over time
Source: Morningstar Investment Management calculation, Morningstar Direct, MSCI, IMF as of 31/12/2016

Taken together at a risk management level, these lessons support the notion that permanent losses are more likely to be driven by factors such as valuation (overpaying for an asset), fundamentals (deteriorating asset quality), or financing (gearing, redemptions, crowded trades). While some of this may coincide with higher volatility, it should by no means be considered a prerequisite.

Another element of the risk management process is understanding how well the individual investments fit together. This is most commonly measured via a correlation matrix, but can also include sensitivity and factor analysis to help identify exposure overlap. The goal is to establish a specific role for each asset in the overall investment mix, which goes beyond traditional correlation measures (a by-product of volatility) and involves fundamental analysis of risk along with valuation-conditional drawdown assessments to determine the risk of a permanent loss of capital within the overall framework.
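
As a small illustration of the correlation-matrix step only (not the broader fundamental and valuation-conditional analysis), the sketch below computes pairwise correlations between holdings and flags pairs that look like duplicated exposure; the 0.8 threshold is an assumption made for the example.

```python
import numpy as np
import pandas as pd

def flag_overlapping_holdings(returns: pd.DataFrame, threshold: float = 0.8):
    """Correlation matrix of holding returns, plus the pairs whose correlation
    is high enough to suggest they are playing the same role in the mix.

    `returns` is assumed to be a DataFrame of periodic returns, one column per
    holding; the 0.8 threshold is an illustrative cut-off, not a house rule.
    """
    corr = returns.corr()
    # Keep each pair once by masking to the upper triangle (diagonal excluded)
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    overlaps = upper.stack().sort_values(ascending=False)
    return corr, overlaps[overlaps > threshold]
```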

Volatility and behaviour

Underlying all of the above, sound portfolio management should always reflect objectives as a priority, with the portfolio construction and risk management process a derivative of that objective. It is therefore never as simple as saying, ‘The US market is expensive so we avoid that asset. Russia is cheap so we invest heavily in it.’ Instead, advisers should be concerned with ensuring that clients’ objectives are best reflected via a holistically considered portfolio that contemplates valuations and behaviour on a risk-adjusted basis.

This brings us back to how people may be using measures such as volatility, rather than the issues with the metrics themselves. For instance, in attempting to quantify risk, many investors subject themselves to behavioural deficiencies such as loss aversion, recency bias and overconfidence. Said another way, people are inclined to focus on fear-driven measures (for example, by watching the VIX) while placing a lot of confidence in historical analysis (e.g. expecting future volatility to match the past). These biases can ultimately lead to irrational assessments of risk, to the detriment of portfolio outcomes.

Ultimately, we find that there are no silver bullets when it comes to risk management. Yes, some metrics are better than others. And yes, focusing on drawdown metrics rather than volatility could improve the likelihood of success. However, a successful risk management process comes from the discipline of understanding what is knowable and what is not. Therefore, rather than overemphasising volatility, or any other quantitative measure, one of the best ways to control for risk is to buy fundamentally strong investments that are attractively valued. Beyond that, sound risk management is about being aware of the behavioural pitfalls that underpin decision-making and building a framework that helps us overcome these challenges.

1 Note, we rank equity markets by valuation metrics including the CAPE5, CAPE10, price-to-sales and price-to-book ratios, whereas we rank fixed income markets by real yields (nominal yield minus three-year trailing inflation) and currency by the five-year change in real exchange rates.

Commerzbank Disclaimer
The views expressed in this article are those of the author and may differ from the published views of Commerzbank Corporate Clients Research Department; this communication has been prepared separately from that department. No representations, guarantees or warranties are made by Commerzbank with regard to the accuracy, completeness or suitability of the data.