Measures of market volatility have traded well below their long-term historical averages in recent years, partly because extremely lax central bank monetary policy anaesthetised investors against the prospect of big market corrections. In early February, however, equity markets corrected sharply as attention focused once again on big upward moves in volatility.
The fall in equity markets in the first days of February clearly confirmed the view of the low-volatility Cassandras and prophets of doom. However, rather than demonising low volatility per se, one should pay closer attention to how the market deals with it.
A fear-driven headline is persuasive: ‘Volatility has spiked’, ‘The VIX hits its highest level in 10 years’, ‘Panic selling causes volatility to rise’. Yet, conceptually, equating volatility with risk is something of a misconception. While the Oxford dictionary defines risk as ‘a situation involving exposure to danger’, identifying risk at the portfolio level is a far more challenging task.
Smart beta ETFs have evolved since the first single-factor ETF appeared back in 2003. Investors today can choose between single-factor exposure and multifactor approaches that combine factors on an equal- or custom-weighted basis. We believe that a transparent, rules-based, multifactor smart-beta strategy with custom factor weightings may provide improved long-term, risk-adjusted returns with better downside protection.
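The difference between equal- and custom-weighted factor combinations can be sketched in a few lines. The factor z-scores and weights below are purely illustrative assumptions, not data from any actual strategy; the point is only how a composite score changes with the weighting scheme.

```python
import numpy as np

# Hypothetical factor z-scores for five stocks (rows) across three
# illustrative factors (columns): value, momentum, low volatility.
scores = np.array([
    [ 0.8, -0.2,  0.5],
    [-0.3,  1.1,  0.0],
    [ 0.1,  0.4, -0.6],
    [ 1.2, -0.5,  0.9],
    [-0.7,  0.3,  0.2],
])

equal_w = np.full(3, 1 / 3)           # equal-weighted factor blend
custom_w = np.array([0.5, 0.3, 0.2])  # assumed custom weights, summing to 1

# Composite score per stock under each scheme
equal_score = scores @ equal_w
custom_score = scores @ custom_w

# Rank stocks by composite score, highest first
ranking = np.argsort(-custom_score)
print(ranking)
```

Tilting the weights towards one factor (here, value at 0.5) reorders the cross-section relative to an equal blend, which is the sense in which custom weighting expresses a view rather than a neutral combination.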
It is of paramount importance that an investor considers returns in risk-adjusted terms rather than in isolation, as the return on any investment must be considered relative to the amount of risk taken. In the world of quantitative asset management, one of the main measures of risk is volatility. Consequently, quants seek to achieve the highest return-to-volatility ratio, so that investors are appropriately compensated for the amount of volatility their portfolio is expected to experience.
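A minimal sketch of the return-to-volatility ratio mentioned above: annualised mean return divided by annualised volatility, computed from a toy series of daily returns. The return figures and the 252-day annualisation convention are assumptions for illustration.

```python
import numpy as np

def return_to_vol(returns, periods=252):
    """Annualised mean return divided by annualised volatility."""
    ann_ret = returns.mean() * periods
    ann_vol = returns.std(ddof=1) * np.sqrt(periods)
    return ann_ret / ann_vol

# Toy daily returns, purely illustrative
returns = np.array([0.01, -0.005, 0.007, 0.002, -0.003])
print(return_to_vol(returns))
```

Holding the return constant, a portfolio with lower volatility scores a higher ratio, which is the formal sense in which quants view it as better compensating investors per unit of risk taken.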