Sunday, August 25, 2013

AQR's Quality at a Reasonable Price

Our intrepid equity researchers at AQR have come out with a new paper adding to the color on how to pick a strategy given value considerations.  In Asness, Frazzini and Pedersen's latest paper, Quality Minus Junk, they first try to create a 'quality' metric, and then try to meld it with value.

Quality is defined very clearly as the composite of 4 factors (each of which is made up of 3-5 ratios):

• Profitability (eg, Net Income/Assets)
• Growth (eg, change in Profitability)
• Safety (eg, volatility, leverage)
• Payout (eg, equity issuance, dividend payout)
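A minimal sketch of how such a composite could be built (the column names and the two-ratios-per-group simplification are my own illustration; the paper uses more ratios per group and more careful definitions):

```python
import pandas as pd

def zscore(s: pd.Series) -> pd.Series:
    """Cross-sectional z-score: demean and scale by the standard deviation."""
    return (s - s.mean()) / s.std()

def quality_score(df: pd.DataFrame) -> pd.Series:
    """Composite quality in the spirit of Asness, Frazzini and Pedersen:
    z-score each ratio, average within each of the four groups, then
    z-score the sum. Column names are hypothetical placeholders."""
    profitability = zscore(df["net_income"] / df["assets"])
    growth = zscore(df["profit_change"])
    # Safety: lower volatility and lower leverage are better, so flip signs
    safety = (zscore(-df["volatility"]) + zscore(-df["leverage"])) / 2
    # Payout: equity issuance is bad, dividend payout is good
    payout = (zscore(-df["equity_issuance"]) + zscore(df["dividend_payout"])) / 2
    return zscore(profitability + growth + safety + payout)
```

The appeal of the recipe is that any ratio can be swapped out without changing the framework, which is exactly what makes it easy to tweak.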

They find that

1) Stocks with higher 'quality' have higher market/book ratios (higher price ceteris paribus)
2) A long-short portfolio, where one goes long high quality, short low quality, generates significant, positive excess and total returns
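The long-short construction in 2) can be sketched as follows (a simplification: the paper value-weights within the top and bottom 30% and adjusts for size, whereas this toy version equal-weights a fixed fraction):

```python
import pandas as pd

def quality_minus_junk_weights(quality: pd.Series, frac: float = 0.3) -> pd.Series:
    """Equal-weight long the top `frac` of stocks by quality score,
    equal-weight short the bottom `frac`; zero net investment."""
    n = max(1, int(len(quality) * frac))
    ranked = quality.sort_values()
    w = pd.Series(0.0, index=quality.index)
    w[ranked.index[-n:]] = 1.0 / n   # long high quality
    w[ranked.index[:n]] = -1.0 / n   # short junk
    return w
```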

They assert that a value-quality portfolio that tries to balance quality with value has nice properties, and the Sharpe-maximizing combination is about 70% quality, 30% value.  This is coming from Asness, who is a pretty big value proponent, so I think this is rather telling (value losing its pre-eminence!).

Their quality metric has a kitchen-sink aspect to it, with about 20 ratios that go into those 4 different groupings.  I could imagine many people would find this an attractive framework to develop and tweak their own quality metric, substituting for various ratios, or subtle changes to the functional form.  Haugen and Baker's (2008) Case Closed, and Zack's Handbook of Investment Anomalies are good places to look for alternative ratios.

I would like to see how this QMJ factor compares to Analytic Investors' Volatile Minus Stable (VMS) factor...they seem similar, though obviously 1) they are negatively correlated and 2) the VMS factor is simply a vol factor, which is just one part of the 'quality' metric.

Lastly, I love the little note at the end:
Our results present an important puzzle for asset pricing: We cannot tie the returns of quality to risk
By construction their return-generating metric seems patently 'anti-risk', as quality implies 'low risk'. The risk-begets-return theory obviously has a lot of intuition behind it, yet empirically it's counterfactual when not irrelevant.  I think if you divided the data described by asset pricing theory into 'puzzles' and 'consistent', it's mainly puzzles.

Economath and the Drake Equation

There were several posts last week on the hypothesis that there's too much emphasis on mathematical modeling in modern economics.  Most said yes (David Henderson, Bryan Caplan, Noahpinion, Robin Hanson, The New York Times), though Krugman said no.

Krugman's experience is very pertinent, as his Nobel Prize winning model on increasing returns to scale is a good example of obtuse economodeling: its thesis was known long before Krugman, being the basis of the centuries-old infant industry argument, and after Krugman it was no easier to apply. Consider Detroit, a popular example of regional increasing returns applied to autos in the early 20th century: what were the key conditions that allowed it to enjoy increasing returns to scale then, but decreasing returns to scale later in the century? He doesn't say.

Krugman responded that his theory changed the debate because it showed--under certain parameterizations--that increasing returns to scale can be an argument for lower trade barriers. While true, this is a possibility, not a probability, and those who believe in increasing returns to scale are invariably more inclined to believe in selective tariffs; that is, they don't use Krugman's model to support free trade but rather increased protection. So his assertion that his New Trade Theory is "probably the main story" in arguments for decreasing trade restrictions doesn't hold up; his model has not changed the debate at all, merely added another obscure reference for the confabulators. Increasing returns to scale remains 1) a fringe argument and 2) used primarily to support trade restrictions, as it was in the 1900s before Krugman's New Trade Theory model.

Krugman is a very smart person, but the fact he can't see this highlights that the greatest lies we tell are the ones we tell ourselves, because he clearly has the capacity to see slight inconsistencies and flaws in others (he's a meticulous advocate against his opponents).

I think a lot of math in econ is like the cargo cult phenomenon, where people see correlations (planes and cargo) and suppose the essence of something is one of those correlations (eg, build models of planes, and cargo will show up). Thus, just as naive people think the essence of a good poem is rhyming, naive economists think that setting up a hypothesis as if one were deriving the Dirac equation or special relativity seems like the essence of a science. Unfortunately, economic equations rarely work out that way.

Consider the Drake equation.

$N = R_{\ast} \cdot f_p \cdot n_e \cdot f_{\ell} \cdot f_i \cdot f_c \cdot L$
Where
N = the number of civilizations in our galaxy with which communication might be possible
R* = the average rate of star formation per year in our galaxy
fp = the fraction of those stars that have planets
ne = the number of planets, per solar system, with an environment suitable for life
etc.

None of the terms can be known, and most cannot even be estimated. As a result, the Drake equation can have any value from a hundred billion to zero. An expression that can imply anything implies nothing. I mean, this formulation is worthy of writing down, but it's very different than the Dirac equation or Newton's laws, even though at some level there's a similarity.
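To make the "any value" point concrete, here is the Drake equation as code, evaluated at two sets of guesses (the specific numbers are illustrative; each input has been seriously defended by someone, and none is measurable):

```python
from math import prod

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * fp * ne * fl * fi * fc * L"""
    return prod([R_star, f_p, n_e, f_l, f_i, f_c, L])

# Optimistic guesses: a galaxy teeming with civilizations
optimistic = drake(R_star=7, f_p=1.0, n_e=2.0, f_l=1.0, f_i=1.0, f_c=0.2, L=10_000)
# Pessimistic guesses: effectively alone
pessimistic = drake(R_star=1, f_p=0.2, n_e=0.1, f_l=0.001, f_i=0.001, f_c=0.01, L=100)

print(round(optimistic))   # ~28000
print(pessimistic)         # ~2e-08, effectively zero
```

Same equation, answers eleven orders of magnitude apart: the structure is doing no work, only the unknowable inputs.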

I remember teaching a money and banking course, and a fun way to get the kids introduced to economic models is to show them the Baumol-Tobin money demand model.  This can be derived from some simple assumptions, and applies calculus to the maximization function individuals would apply, generating the equation:

$M= \left ( \frac {CY} {2i} \right )^{\frac {1} {2}}$

Where
M=money demand
C=cost of withdrawing money
Y=Total income
i=interest rate

All very rigorous and tidy.  Yet, it doesn't help predict interest rates, or the size of money aggregates.  It's empirically vacuous, because it simply doesn't fit the data.
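The formula is simple enough to compute directly (the numbers below are purely illustrative, not calibrated to any data):

```python
from math import sqrt

def money_demand(C, Y, i):
    """Baumol-Tobin: M = sqrt(C*Y / (2*i)), the optimal average cash
    balance when each withdrawal costs C, income is Y, and the
    interest rate is i."""
    return sqrt(C * Y / (2 * i))

print(round(money_demand(C=2.0, Y=50_000, i=0.05), 2))  # 1000.0
# Doubling income raises money demand by only sqrt(2): economies of
# scale in cash management, the model's main testable prediction --
# and the one the data don't support well.
```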

That's one of the more concrete equations.  Most equations are like the generic money demand function, $M^d = f(Y_p, i, \ldots)$: one merely argues what arguments should be in the function and the signs of the derivatives on those arguments.  Thus, the first argument is 'permanent income' Yp, and the first derivative here is positive.  Yet the parameters can vary wildly, and may even be endogenous themselves. At the end of the day, atheoretical vector autoregressions do a better job predicting any of these variables.

Yet, for all the insufficiency of mathematics in creating a good science, sociology shows that an absence of rigor doesn't do any better.  I think this highlights that there's no delusion greater than the notion that method can make up for a lack of common sense. Ultimately, there is no method but to be very intelligent.

Tuesday, August 13, 2013

Is The Low Vol Anomaly Really a Skew Effect?

The idea that low volatility stocks have higher returns than high volatility stocks is difficult for economists to digest, because it's so hard to square with standard theory.  It brings to mind Dostoyevsky's line "If God is dead, then everything is permitted." Similarly, when one sees their favored theory being abandoned, it seems like all explanation is lost and chaos reigns. Yet, when a wrong theory is adopted, well, as the ever-logical Bertrand Russell used to note, if 1+1=1, everything is both true and untrue.  We need a framework to evaluate reality, and it has to be consistent.

Alas, many frameworks are largely untrue, leading to inconsistencies and explanations that are transparently tendentious.  The sign of a bad Weltanschauung is that explanations for reality become more and more convoluted, like epicycles in Ptolemaic astronomy. I'll gladly enjoy the hypocrisy of those who don't share my worldview because, as the Detroit bankruptcy has reminded us (eg, its bankruptcy blamed on too much or too little gov't), people might admit tactical errors, but they'll go to their grave with their worldview (see Max Planck).

Consider the recent papers arguing that low volatility is really just a skew effect, in which case their worldview is safe. In the recent Journal of Economic Perspectives, longtime behavioral finance academic Nicholas Barberis wrote a paper on Kahneman and Tversky's prospect theory (that's Nobel prize winning Danny Kahneman, whose unimpeachability seems somewhere around that of Nelson Mandela).  It's helpful to note that this insight is 34 years old, because many seem to think these newfangled behavioral insights are going to revolutionize economics, as if they haven't been applied continuously over the past generation.

Barberis goes over his Barberis and Huang (2008) model, where prospect theory is used to motivate the hypothesis that the skewness in the distribution of a security's returns will be priced. A positively skewed security--a security whose return distribution has a right (upper) tail longer than its left tail--will be overpriced relative to the price it would command in an economy with standard investors. As a result, investors are willing to pay a high price for lottery-ticket type stocks.

Barberis references several papers, including Bali, Cakici, and Whitelaw (2011), and Conrad, Dittmar, and Ghysels (here's the 2009 version, though a more recent version was just published in the Journal of Finance).  He also finds it relevant to the underperformance of IPOs, the low average return of distressed stocks, of bankrupt stocks, of stocks traded over the counter, and of out-of-the-money options (all of these assets have positively skewed returns); the low relative valuations of conglomerates as compared to single-segment firms (single-segment firms have more skewed returns); and the lack of diversification in many household portfolios (households may choose to be undiversified in positively skewed stocks so as to give themselves at least a small chance of becoming wealthy).

It seems like an orthogonal way to address these puzzles compared to the constrained-rational approach offered by Betting Against Beta, but there's a problem: the well-known equity risk premium has a negative skew relative to what's considered less premium-worthy, long-term bonds. That is, equities in general have a lower (ie, more negative) skew than bonds, yet earn the most prominent 'risk premium'; the skew story gets the biggest premium in finance backwards, and that can't just be an exception to the rule.

US Monthly Data 1962-2013

           10-year US T-Bond    SP500 Index
AnnRet           7.05%             7.28%
AnnStdev         6.86%            15.05%
Skew            61.09%           -42.16%

Note that indices have negative skew while individual stocks have positive skew.  This is because correlations go up in down markets, and this predictable tendency creates a problem for idiosyncratic skew pricing models.  That is, in the CAPM and other asset pricing models, risk factors have prices that are linear in the covariances, otherwise there is arbitrage, the essence of the Arbitrage Pricing Theory: whatever risks are priced, they are based on additive moments, so risk and returns are linear functions.  Now we have priced risks that are not just diversifiable, but change sign depending on what else is in the portfolio.  If true, there is an implausible level of profit to be had from buying portfolios and selling the constituents.
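A stylized simulation makes the mechanism clear. The setup below is my own illustration, not the data above: stocks share a common component with occasional market-wide crashes (the "correlations go up in down markets" effect), plus positively skewed lognormal-style idiosyncratic noise. Individual stocks inherit the positive idiosyncratic skew, while the index diversifies that away and keeps the negatively skewed common part:

```python
import numpy as np

def sample_skew(x, axis=0):
    """Third standardized moment (sample skewness)."""
    m = x.mean(axis=axis, keepdims=True)
    s = x.std(axis=axis, keepdims=True)
    return (((x - m) / s) ** 3).mean(axis=axis)

rng = np.random.default_rng(0)
T, N = 100_000, 50

# Common part: normal most months, occasional crash hitting all stocks at once
crash = rng.random(T) < 0.02
market = rng.normal(0.01, 0.03, T) - 0.10 * crash

# Idiosyncratic part: lognormal-style positive skew, demeaned to zero
sigma = 0.15
idio = rng.lognormal(0.0, sigma, (T, N)) - np.exp(sigma**2 / 2)

stocks = market[:, None] + idio   # individual stock returns
index = stocks.mean(axis=1)       # equal-weighted index

print(sample_skew(index))                  # negative: crashes dominate
print(sample_skew(stocks, axis=0).mean())  # positive: idio skew dominates
```

So a model that prices each stock's own skew in isolation is pricing something that flips sign the moment the stock sits in a portfolio.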

As an ivy league confabulator Barberis deftly ignores this inconsistency and instead notes that the equity risk premium makes perfect sense given Benartzi and Thaler’s (1995) idea that if you focus only on the net changes in wealth (technically, U(x) vs. U(w+x)), you can get this to work in cumulative prospect theory, because losses hurt more than gains, so one gets paid to take risk in this case.

Alas, there's a limit to how much skew and variance can both be priced in the same universe, where people love positive skew and hate variance.  If skew explains most of the volatility anomaly, that implies people can't be globally risk averse, because they would like extreme up-moves too much, and these happen proportionally more for volatile stocks.  Yet if that's true there's no risk premium of any sort, because people would simply buy single assets or derivatives and have no incentive to mitigate risk via bundling and arbitrage.  This has been shown formally by Levy, Post, and van Vliet (2003), but it should be intuitive: skew is positively correlated with volatility for stocks with lognormal returns, so there's a point at which one's love of skew dominates one's fear of volatility, beyond which volatility is always less costly than skew is beneficial. This constrains the size of the skew-loving effect to be an order of magnitude less than the risk premium if global risk aversion exists. If global risk aversion does not exist, then the rest of the general framework presented is simply meaningless.
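The lognormal case makes the tension concrete. A standard result is that for a lognormal distribution with log-volatility $\sigma$, the skewness is

$\text{Skew} = \left ( e^{\sigma^{2}}+2 \right ) \sqrt{e^{\sigma^{2}}-1}$

which is zero at $\sigma = 0$ and strictly increasing in $\sigma$. So for lognormal-ish stocks, buying more skew mechanically means buying more volatility; a skew preference strong enough to overturn the volatility effect would have to overwhelm variance aversion for exactly the most volatile stocks, which is where the Levy, Post, and van Vliet bound bites.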

So we have prospect theory explaining the overpricing of high volatility stocks due to skew, and the underpricing of equity indices due to 'narrow framing.' One could add that prospect theory is used to explain why people overpay for longshots at the horse track, in that the 'decision weights' applied to payoffs in prospect theory are observationally equivalent to overoptimistic probability assessments (see Snowberg and Wolfers (2010)), and that Danny Kahneman is an admirer of Nassim Taleb's Black Swan theory, which argues that small probability events are generally underappreciated. In other words, whatever the probability density function and expected return, it's explained by prospect theory.

Skew also shows up in the recent publication of Conrad, Dittmar, and Ghysels (2013), who are incredibly meticulous in their analysis of how skew relates to future returns, highlighting what three top researchers over several years can do to data.  Yet they then ignore the elephant in the room: if volatility is negatively priced and skew is positively priced, how do both exist in equilibrium?  It should be hard for these authors to say they don't care, because they are very exhaustive in their analysis, noting at one point:
We use several methods to estimate [the stochastic discount function] Mt(τ) that allow for higher co-moments to influence required returns. These methods differ in the details of specific factor proxies, the number of higher co-moments allowed, and the construction of the SDF.
Alas, as usual in analysis of SDFs, there is no take-away input one can use to measure risk, no soon-to-be-indispensable tool, just a promise that this has all been vouchsafed against high-falutin theory and so 'it's all good.'  Consistency is a good thing, but only in certain dimensions. One of the authors, Dittmar (2002), wrote a very nice paper for the Journal of Finance noting that if you restrict a non-linear pricing kernel to obey the risk aversion needed to ensure that the market portfolio is the optimal portfolio, the explanatory power of higher moments goes away. With all the abstruse checks in this paper, one would think he might want to address that issue, but instead he ignores it.

I'm sure former JoF editor Cam Harvey read this while nodding approvingly throughout (he's referenced every other page, and he's a big believer that risk explains most everything in finance).  While understanding SDFs and their risk premiums won't help you get a job at a hedge fund, it will help you get published and be popular among publishing academics.

I agree that skew is important, as it measures the upside potential that delusional lottery-ticket-buying investors love, and because of relative wealth preferences, arbitrage is costly and their footprint remains.  That's a mathematically consistent story.  Skew-loving effects can't exist on a par with variance-hating effects in any consistent story about asset returns. Is this important?  Consistency can be overdone, but I don't think this point is foolish, because one tends to see what one believes rather than vice versa, and I think there's more power and predictability in viewing volatility as merely a desirable attribute for delusional investors, as opposed to something that pays you a premium.

Paradoxically, behavioral refinements such as prospect theory are preventing needed outside-the-box adjustments and are used to maintain a defective status quo, one that has been wrong on a profound empirical issue for 50 years (ie, the risk premium). These putative revolutionary insights allow academics to wax eloquent on how their complex paradigm handles subtleties such as any of those 50 behavioral quirks, and outside commentators are pleased to be part of a new vanguard, obliviously marching in basically the same, pointless, confabulating path.

Sunday, August 11, 2013

Now Not the Time to Value-Tilt Low Vol

Every week, a low volatility researcher has the same epiphany: tilt low volatility towards value.  This addresses two pressing issues simultaneously: avoiding overbought securities and adding value alpha.

A neat articulation of this view comes from Feifei Li of Research Affiliates, who first shows that lots of people are investing in low volatility (there's another such piece here by Dangle and Kashofer from Vienna UofT). Clearly assets in low volatility strategies are rising exponentially, and our intuition senses a Malthusian endgame that will be nasty and brutish.

That might seem scary, but to put it in perspective, there's now $80B in value ETFs alone, so low vol isn't anywhere close to value and size. Next, she shows some valuation metrics.  Three different types of low vol portfolios appear richly priced using two different value metrics, book/market and earnings yield.  That is, ten years ago low vol portfolios had higher earnings yields and higher book/market ratios than the market; now it's the reverse.

To put these into perspective, the relative difference in the book-to-price ratio moving from 0.3 to 0.6 is about like moving from the 15th percentile to the 45th percentile.  Li suggests adding a valuation criterion to low volatility to counteract this value-creep. The basic idea is: say the book/market ratio has a linear relation with expected return, where a higher book/market is associated with a higher return.  So if we take the universe of a set of low vol stocks, say the constituents of the ETF SPLV, which looks at the 100 least volatile stocks of the past year, and then take those stocks with the highest book/market ratios within that set, we simultaneously capture more of the value effect and avoid overbought stocks. That seems like a win-win improvement.
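The proposed two-stage screen is mechanical enough to sketch (column names are assumptions for illustration; SPLV's actual methodology uses trailing 12-month volatility but is otherwise its own):

```python
import pandas as pd

def value_tilted_low_vol(df: pd.DataFrame, n_low_vol: int = 100,
                         n_value: int = 50) -> pd.DataFrame:
    """First take the n_low_vol least volatile names, then keep the
    n_value highest book/market names within that low vol set."""
    low_vol = df.nsmallest(n_low_vol, "trailing_vol")
    return low_vol.nlargest(n_value, "book_to_market")
```

The rest of the post is about why this seemingly win-win filter does less than advertised.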

There are two problems with this approach.  First, the return to book/market is not linear.  Therefore, merely moving your average book/market ratio may make you feel better, but unless you pick the right stocks, you won't change much.  Here's the average return by book/market decile, for those stocks above the 20th percentile of the NYSE (all data here are from Ken French's excellent website; I use the 20th percentile cut-off because stocks below that aren't really investable in scale anyway, and so are potentially misleading).

Now, these are average monthly return premiums above the market average.  If we are looking at geometric returns, that sharp increase for the top decile isn't there, but forget that for now (I think the geometric average is more relevant given that in practice people don't rebalance monthly, but to each his own).  The key is, this relationship over the investable universe is basically all happening at the end deciles, not in between.  Thus, the average book/market decile can be misleading, because not much happens between the 30th and 90th percentiles.

Curiously, market cap is not allocated evenly across all ten book/market deciles because the cutoffs for the size and book/market sorts are constructed once a year using the NYSE.  For example, currently, there's 3 times as much market cap in book/market decile 1 than book/market decile 10.

Here's the market-cap-weighted average book/market decile over time (in blue).  I'm just calculating a number generated by French's data here, all the work is in this Excel spreadsheet (there's nothing proprietary going on here).  So here's that average number calculated each month, and the total return on French's value factor (aka, HML, or High-Minus-Low factor portfolio proxy).

Clearly the low average decile corresponds to big increases in the HML factor returns.  If I take that time series, and put the data into deciles, I get a pretty clear pattern for future HML returns:

Basically, the value (ie, HML) factor only pays off when the average book/market decile is in the bottom third of its distribution.  Alas, we're not there, we are around the 70th percentile right now.  So, here's the average return for the value factor, for that 50% of the time when the average book/market decile is above average (ie, now):
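The conditioning exercise itself is simple; a sketch of the calculation described above (the inputs would come from Ken French's data library, and the bucketing convention is my own simplification):

```python
import pandas as pd

def hml_by_valuation_bucket(avg_bm_decile: pd.Series, hml: pd.Series,
                            buckets: int = 10) -> pd.Series:
    """Put each month's cap-weighted average book/market decile into
    deciles of its own history, then average the NEXT month's HML
    return within each bucket (1 = cheapest months, 10 = richest)."""
    bucket = pd.qcut(avg_bm_decile.rank(method="first"), buckets, labels=False) + 1
    return hml.shift(-1).groupby(bucket).mean()
```

If the post's pattern holds, only the low buckets (cheap months) show a meaningful positive HML average, and we are currently in a high bucket.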

That line is sloping the wrong way if you are banking on a value premium.

In sum, loading up on the value factor to improve low volatility is dangerous because 1) the relation between book/market and returns is not linear, so simple portfolio averages can be misleading, and 2) the value premium is predictably time-varying given the distribution of the market across book/market deciles.

In practice, the value premium to passive indices has been about 1-2% since it was popularized around 1990.  The 2.8% HML premium from 1928-2013 owes a lot to shorting low book/market stocks, a premium of dubious feasibility, so this number is not a good rule of thumb for the value of tilting towards value.  Value ETFs like IWD arose fortuitously around 2000, so their 3% annual outperformance is all from the bursting of the internet bubble--if those value ETFs went back to 1990, the return premium would be less.  I would estimate there's 100 basis points in the value factor, but that's by itself.  When you try to use value to add to other strategies, it's not obviously beneficial, and most low vol practitioners are already doing this, so you really aren't thinking outside the box.

Monday, August 05, 2013

On the Inverse Correlation between Expected Risk and Return

Imagine a world where expected returns are solely a function of covariances as standard theory implies. Then for assets with specific covariances, the market should give them specific expected returns. People expect risk and return to be positively correlated in this theory.

Instead, Sharpe and Amromin find that people expect volatility and returns to be inversely correlated: when they are bullish they expect low volatility, and when they are bearish they expect high volatility. This is counter to standard theory, which is why it has been in 'working paper' hell for 6 years: referees find a lot to quibble with when results don't make sense (eg, the high vol-low return papers in the 1990s). If you generate a paper like this, it really helps to already have credibility (eg, Fama-French 1992), because otherwise there will be a thousand reasons not to publish it.

On vacation last week I read a great airplane book, You Are Not So Smart by blogger David McRaney, which highlights psychological biases in a succinct, interesting way.  He noted the work of Finucane, Alhakami, Slovic, and Johnson (2000), in a paper entitled The Affect Heuristic in Judgments of Risks and Benefits. Slovic is also a coauthor of the famous book on behavioral biases, Judgment Under Uncertainty: Heuristics and Biases. They asked a bunch of people about controversial issues like natural gas, food preservatives, and nuclear power, dividing subjects into groups where some read only about the risks while others read only about the benefits of the various technologies.  Needless to say, those exposed to the risk arguments estimated the risks of these technologies to be higher, and those primed by the benefits judged there to be higher benefits.  However, those who saw the elucidation of risks also judged there to be lower benefits, and those who read about benefits saw lower risks.

Logically, risk and return are separate, but intuitively, we see them as part of a whole, related in a totally antithetical way.  For example, Warren Buffett has famously written that the stocks with the highest returns had the lowest risk, because such stocks had the largest 'cushion' in their forecast error.  'Low risk' and 'high return' are both good, so they go together in most people's intuition, as do the bad qualities, 'high risk' and 'low return.'

This is part of the 'halo effect', where people see those who are handsome as smarter, because things with good qualities are seen as intrinsically good, so good in each and every way.  Think of a saint with a halo: he was probably good at everything.  Indeed, if you give people positive information on one attribute, they will tend to assume the other attributes are correlated.  Clearly this makes some sense, as I imagine 'fitness' in a person, in terms of their desirable traits, has a general factor the way IQ helps explain language, math and visual-spatial skills, but it has its limits.  This is also why it's hard for people to accept that a lout like Hitler really was nice to dogs and a decent painter, because it seems like that implies you liked his other attributes.

A big theory as to why low volatility stocks outperform high volatility stocks is Asness, Frazzini and Pedersen's Betting Against Beta theory.  I'm more in the 'no risk premium plus delusional lottery ticket demand' camp.  In my view, people buy high beta stocks incidentally, because these tend to have characteristics amenable to comforting delusions: big stories, potential for big gains.  In the Betting Against Beta view, people buy high beta stocks because of the higher return implied by this covariance, and they're constrained in their allocation to equities by rules of thumb and regulations. I think investors are focused on the return and underestimating the risk, but in any case are buying in spite of it.

The Betting Against Beta theory does follow more directly from the Capital Asset Pricing Model than my take, but Sharpe and Amromin, and now I learn, Finucane, Alhakami, Slovic, and Johnson, are more on my side.