First, note that we will soon be going to press with a new paper, entitled “Tactical Alpha: A Quantitative Case for Active Asset Allocation”. Here is the abstract for the paper:
Grinold linked investment alpha and Information Ratio to the breadth of independent active bets in an investment universe with his Fundamental Law of Active Management. Breadth is often misinterpreted as the number of eligible securities in a manager’s investment universe, but this ignores the impact of correlation. When correlation is considered, a small universe of uncorrelated assets may explain more than half the breadth of a large stock universe. Given low historical correlations between global asset classes in comparison with individual securities in a market, we make the case that investors may be well served by increasing allocations to Tactical Alpha strategies in pursuit of higher Information Ratios. This hypothesis is validated by a novel theoretical analysis, and bolstered by two empirical examples applied to a global asset class universe and U.S. stock portfolios.
UPDATE: THE PAPER IS PUBLISHED! We encourage those interested in global allocation strategies to give our new Tactical Alpha paper a read; we have yet to distribute it widely. We believe it provides a strong argument for investors to consider a larger allocation to active asset allocation strategies in general. You'll be granted immediate access to a pre-release copy here.
In the meantime, it’s no secret that we are big fans of active asset allocation, which is sometimes called Tactical Alpha. Our enthusiasm stems from the following observations from our own research, and from other published sources:
1) Asset allocation dominates portfolio outcomes. From an empirical standpoint, Ibbotson and Kaplan demonstrated that policy asset allocation explained, on average, 104% of institutional portfolio performance. In other words, portfolios would have been better off, by about 4% (note: not 4 percentage points), by sticking to passive exposures for each reference portfolio sleeve than by allocating to active managers. From a theoretical standpoint, Staub and Singer demonstrated that asset allocation explains 65% of orthogonal portfolio breadth, while security selection accounts for just 35%.
2) Asset risk premia are extremely time-varying, such that asset classes can underperform vs. long-term means for extended periods of time, sometimes decades.
3) Asset returns and covariances exhibit short-term persistence, which enables economically significant forecasting of one or both parameters over short and intermediate horizons. This means it's possible to systematically alter allocations to asset classes through time to take advantage of time-varying premia; a minimal persistence check is sketched below. (See AAA whitepaper here)
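As a loose illustration of point 3, volatility persistence is easy to check directly. Below is a minimal sketch, assuming `daily_returns` holds real daily asset-class returns; the random placeholder data will show roughly zero autocorrelation, whereas actual asset returns typically show strongly positive volatility autocorrelation at this horizon.

```python
import numpy as np

# Minimal persistence check. Swap the random placeholder below for real
# daily asset-class returns; the placeholder will show ~zero autocorrelation,
# while real data typically shows strongly positive values.
rng = np.random.default_rng(42)
daily_returns = rng.normal(0.0, 0.01, 2520)  # placeholder: ~10 years of data

window = 20
# Rolling 20-day realized volatility, annualized
vols = np.array([daily_returns[i - window:i].std(ddof=1) * np.sqrt(252)
                 for i in range(window, len(daily_returns) + 1)])

lag = 21  # roughly one trading month
rho = np.corrcoef(vols[:-lag], vols[lag:])[0, 1]
print(f"Autocorrelation of 20-day realized vol at a 21-day lag: {rho:.2f}")
```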
Note that we are specifically interested in products which allocate to a broad universe of global asset classes, as domestic balanced funds are not really balanced at all.
A variety of firms have launched strategies to take advantage of dynamic risk (volatility or covariance) forecasts, dynamic return forecasts, or both. Quite loosely, we might call global asset allocation strategies that only rely on dynamic risk estimates ‘risk parity’ products, while strategies that incorporate short-term tactical shifts are Global Tactical Asset Allocation (GTAA) products. Some products, like Invesco’s Balanced Risk product, are hybrids, in that they are primarily risk parity strategies, but engage in mild tactical tilts as well.
The question is, how should we evaluate the performance of these strategies? It hardly makes sense to benchmark them against the S&P 500, as these funds are responsible for allocating across global assets, not just US stocks. Should a global asset allocation mandate be considered a failure because it fails to keep up with US stocks when they are the best performing global asset class? What about a domestic balanced fund? Again, it's silly to benchmark a global mandate against a purely domestic asset universe.
To us, the only reasonable passive benchmark for a global asset allocation portfolio is the Global Market Portfolio, which we described in our post, A Global Market Benchmark with ETFs and Factor Tilts. You'll find a version of the proposed benchmark in Figure 1, with one small change: we replaced IGOV and BNDX with BWX in order to have a sufficient history of daily returns for our analysis. This removes the allocation to international corporate bonds altogether and replaces it with an allocation to global government bonds, but results should be quite close. As it is, we could only go back to mid-2011, but it will suffice for illustrative purposes.
Figure 1. Global Market Portfolio proxy with ETFs
Figure 2. Performance of ETF proxy Global Market Portfolio
Source: GestaltU, data sourced from CSI data and Bloomberg
A short history, to be sure, and we are certainly not suggesting the performance is indicative of what anyone should expect over the long term. We could (and might) backtest the portfolio using Global Financial Data indexes, but it isn’t relevant to this analysis.
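For readers who want to reproduce something like Figure 2, the mechanics are straightforward. Here is a minimal sketch, assuming `etf_returns` is a hypothetical pandas DataFrame of daily ETF returns; the weights shown are illustrative placeholders only, not the actual Figure 1 allocations.

```python
import pandas as pd

# Hypothetical weights for illustration only -- substitute the actual
# Figure 1 allocations. Tickers follow the post (BWX replacing IGOV/BNDX).
weights = pd.Series({
    "VTI": 0.25, "VEA": 0.15, "VWO": 0.05,  # equities (placeholder weights)
    "BND": 0.20, "BWX": 0.30,               # domestic and global gov't bonds
    "VNQ": 0.05,                            # real estate (placeholder)
})

def portfolio_returns(etf_returns: pd.DataFrame, w: pd.Series) -> pd.Series:
    """Daily returns of a portfolio rebalanced back to fixed weights each day."""
    return etf_returns[w.index].mul(w, axis=1).sum(axis=1)

# Cumulative growth of $1, as in Figure 2:
# growth = (1 + portfolio_returns(etf_returns, weights)).cumprod()
```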
Our interest is in quantifying the degree of out- or under-performance observed in global allocation strategies relative to this true passive benchmark. There are a number of ways to measure this, of course. The simplest method would be to compare returns, but this metric tells us nothing about the likelihood that the returns were due purely to random chance. For example, a strategy which delivered a total return of 40% over the past 3+ years, but with a volatility of 20%, could easily have just been lucky. After all, a strategy with an expected return of 0% and a standard deviation of 20% has a 90% confidence range of returns of +57% / -57% over any random 3-year period (±1.645 standard deviations of the cumulative return). In this context, the 40% total return observed over 3 years (1.12^3 – 1, or 12% annualized) would not prompt a statistician to reject the hypothesis that the mean return is actually zero. In fact, there is a roughly 15% chance that the true mean return of the strategy is actually negative, despite the apparent strength. Such is the counterintuitive nature of random processes.
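The arithmetic behind those figures is easy to verify. Here is a short sketch of the calculation under the stated assumptions (zero true mean, 20% annual volatility, 12% annualized observed return over three years):

```python
import numpy as np
from scipy import stats

sigma, years = 0.20, 3      # annual volatility, observation horizon
observed_annual = 0.12      # 1.12**3 - 1 ~= 40% total over 3 years

# Range of cumulative 3-year returns under a zero-mean process
# (+/-1.645 standard deviations, i.e. 90% two-sided coverage)
half_width = 1.645 * sigma * np.sqrt(years)
print(f"Range under a zero mean: +/-{half_width:.0%}")  # ~ +/-57%

# How surprising is a 12% annualized mean if the true mean is zero?
t = observed_annual / (sigma / np.sqrt(years))
p_negative = 1 - stats.norm.cdf(t)
print(f"Chance the true mean is actually negative: {p_negative:.0%}")  # ~15%
```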
The next most straightforward method is the Sharpe ratio, which describes the excess return (above the risk-free rate) per unit of risk, where risk is defined as volatility. At the very least, this ratio captures some of the randomness embedded in the return process, so it is much less likely to be a function of good luck than raw returns. However, the Sharpe ratio is not without flaws. First, Sharpe ratios measured over short, or even intermediate-term, periods rarely reflect the true nature of a passive strategy. That's because asset returns and volatilities are highly non-stationary: while an asset may exhibit very high returns with low volatility for a few years, this observation offers few clues about the true risk and return qualities of the asset. For example, the average 60-day volatility of returns for US stocks in the three years prior to 2008 was less than 15%, but in 2008 the 60-day realized volatility of the S&P 500 spiked above 80%.
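A minimal sketch of both calculations, assuming `daily` and `rf_daily` are hypothetical pandas Series of daily strategy and risk-free returns:

```python
import numpy as np
import pandas as pd

def sharpe_ratio(daily: pd.Series, rf_daily: pd.Series) -> float:
    """Annualized Sharpe ratio: mean daily excess return per unit of volatility."""
    excess = daily - rf_daily
    return excess.mean() / excess.std() * np.sqrt(252)

def rolling_vol(daily: pd.Series, window: int = 60) -> pd.Series:
    """Annualized rolling realized volatility (e.g. the 60-day figure above)."""
    return daily.rolling(window).std() * np.sqrt(252)
```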
Sharpe ratios are also problematic when used to evaluate relative performance across strategies in the short term. The current environment is a great case in point. The Vanguard US Balanced Fund ETF has a Sharpe ratio of 1.19 over the same period in which the GMP delivered a Sharpe ratio of 0.69. Does this mean a US Balanced Fund is the most efficient portfolio? Or does it mean it has been a lucky period for US-centric investors? Almost certainly it is the latter.
For the reasons outlined above (and others beyond the scope of this article), the investment industry has eschewed absolute measures like returns and Sharpe ratios when quantifying the value of active strategies. Rather, sophisticated Advisors are interested in the true measure of skill: alpha.
Alpha is perhaps the most misrepresented quantity in finance. It is most often cited as the absolute excess return above a benchmark index, so that if a US large-cap benchmark delivered 10% returns in a period, and a large-cap equity mutual fund delivered 12% in the same period, many people would say the manager had delivered 2% of alpha. This is (often egregiously) incorrect. The true definition of alpha is the excess return from a strategy that cannot be explained by the strategy's sensitivity to an underlying benchmark (or other factors):

$$ r_{strategy} - r_f = \alpha + \beta \, (r_{benchmark} - r_f) + \epsilon $$
The critical component is β, which is a function of the strategy's correlation with the benchmark and its volatility relative to the benchmark's: $\beta = \rho \, \sigma_{strategy} / \sigma_{benchmark}$. So a strategy's beta is high if it is highly correlated with the benchmark and has relatively high volatility. Conversely, beta is low if the strategy has a low correlation with the benchmark or a low relative volatility.
Beta is traditionally determined by regressing the returns to the strategy on the returns to the benchmark, because this method, quite conveniently, also yields the alpha: alpha is simply the intercept term in the linear regression. Of course, there is a random component to alpha, as with all analysis of financial time series, so it helps to know its statistical significance. This is measured with a standard t-score, from which we can derive the probability that the strategy's alpha is statistically distinguishable from zero. The t-score is a direct function of both the number of observations and the magnitude of alpha, so we can be more confident that alpha reflects skill rather than luck given a longer observation horizon, or if the magnitude of alpha is quite large.
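In code, the regression is a one-liner with statsmodels. This is a sketch only; `strat` and `bench` are assumed to be hypothetical arrays of daily excess returns (over the risk-free rate) for the strategy and the benchmark.

```python
import numpy as np
import statsmodels.api as sm

def alpha_beta(strat: np.ndarray, bench: np.ndarray) -> dict:
    """Regress strategy excess returns on benchmark excess returns."""
    X = sm.add_constant(bench)          # adds the intercept (alpha) column
    fit = sm.OLS(strat, X).fit()
    alpha_daily, beta = fit.params      # intercept and slope
    return {
        "alpha_annualized": alpha_daily * 252,  # crude annualization
        "beta": beta,
        "t_alpha": fit.tvalues[0],      # t-score on the intercept
        "p_alpha": fit.pvalues[0],      # probability alpha is just noise
    }
```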
Lastly, some might be concerned with a strategy's Information Ratio (IR). Recall that the IR is also measured against a benchmark, and tracks the strategy's excess return above the benchmark per unit of tracking error. It is essentially a relative Sharpe ratio, where the return series used in the calculation is the daily returns to the strategy minus the daily returns to the benchmark; a minimal calculation is sketched below.
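Again assuming `strat` and `bench` are hypothetical pandas Series of daily returns:

```python
import numpy as np
import pandas as pd

def information_ratio(strat: pd.Series, bench: pd.Series) -> float:
    """Annualized active return per unit of annualized tracking error."""
    active = strat - bench                   # daily active returns
    te = active.std() * np.sqrt(252)         # annualized tracking error
    return (active.mean() * 252) / te
```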
Many institutions that entertain adding global active asset allocation products focus narrowly on IR as their primary performance metric, but we are less sure of its utility. The institutions we have spoken to are interested in the IR of the active product relative to their reference portfolio, but this places a high degree of confidence in the reference portfolio as the optimal long-term asset allocation. As stated above, given the rather extreme time-varying nature of asset-class premia, and the wide range of possible future returns from any given reference portfolio, it seems silly to be too concerned with tracking error vs. this portfolio. If the reference portfolio turns out to be sub-optimal, an active global asset allocation strategy will be evaluated on its tracking error vs. an unpalatable benchmark. This seems counterintuitive to us. Nevertheless, it is worth examining.
In Measuring Tactical Alpha Part 2, we will continue to examine these performance measures and others, and look at how select Global Tactical Asset Allocation products stack up against the Global Market Portfolio. We’ll also make our new paper available. As we said, stay tuned.