
Modeling Cyclical Markets – Part 3

Originally Published November 28, 2016 in Advisor Perspectives

In Part 1, I introduced my Primary-ID model that identifies major price movements in the stock market. In Part 2, I presented Secondary-ID, a complementary model that tracks minor price movements. In this article, I combine these two rules- and evidence-based models into a composite called Cycle-ID and discuss the virtue of a single model.

I examine the efficacy of Cycle-ID from three separate but related perspectives. The first area is the utility of the composite. What are the benefits of moving from a binary scale (bullish or bearish) of Primary-ID and Secondary-ID to a ternary scale (bullish, neutral or bearish) of Cycle-ID? The second topic is synergy. Can the composite perform better than the sum of the parts? Finally, how can we use Cycle-ID to custom-design strategies similar to those of risk-parity for the purpose of matching market return and risk? Do these more complex investment strategies add value?

Cycle-ID – a composite model for cyclical markets

All investors need to achieve a dual objective. The first is capital preservation by minimizing losses in market downturns. The second is wealth accumulation by maximizing market exposure during market upturns. Not losing money should always be the main investment focus but making money is why we invest in the first place. Investors often achieve one objective at the expense of the other. For instance, many so-called secular bears avoided the 2000 dot-com crash and/or the 2008 sub-prime meltdown, but missed the 2003-2007 and the 2009-2016 bull markets. My cyclical-market models are aimed at preserving capital during stormy seasons as well as accumulating wealth in equity-friendly climates.

The signal scores of both Primary-ID and Secondary-ID are binary: +1 is bullish and -1 is bearish. Cycle-ID is the sum of the two models and therefore its scores are +2, 0 and -2. What do the three Cycle-ID scores mean? A Cycle-ID score of +2 indicates that both primary and secondary price movements are positive. In other words, the stock market is in a rally phase (a positive Secondary-ID) within a cyclical bull market (a positive Primary-ID). A Cycle-ID score of -2 indicates that both the primary and secondary price movements are negative. Put simply, the stock market is in a retracement phase (a negative Secondary-ID) within a cyclical bear market (a negative Primary-ID). When Cycle-ID is at zero, the stock market is either in a correction phase (a negative Secondary-ID) within a cyclical bull market (a positive Primary-ID), or in a rally phase (a positive Secondary-ID) within a cyclical bear market (a negative Primary-ID). Since the two cycle models are in conflict, one would naturally assume that the market is neutral. However, there is a counterintuitive interpretation of the zero Cycle-ID score that I will present later.
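The composite is simply the sum of the two binary signal scores. A minimal sketch in Python (the function name is mine, not the author's):

```python
def cycle_id(primary_id: int, secondary_id: int) -> int:
    """Combine the two binary signal scores (+1 bullish, -1 bearish)
    into the ternary Cycle-ID score of +2, 0 or -2."""
    assert primary_id in (+1, -1) and secondary_id in (+1, -1)
    return primary_id + secondary_id

# A rally phase within a cyclical bull market scores +2:
bull_rally = cycle_id(+1, +1)
# Conflicting readings (correction in a bull, or rally in a bear) score 0:
mixed = cycle_id(+1, -1)
```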

Figure 1A shows the monthly S&P 500 (in black) in logarithmic scale along with the Cycle-ID score (in blue) from 1900 to September 2016. Figure 1B is Robert Shiller's cyclically adjusted price-to-earnings ratio (CAPE), from which the vector-based indicators of Primary-ID and Secondary-ID are derived. Figure 1C is similar to Figure 5A in Part 1, and Figure 1D is the same chart as Figure 4D in Part 2, but updated to the end of September. All green segments in Figures 1C and 1D represent +1 scores and all red segments, -1. The blue Cycle-ID score in Figure 1A is the sum of the Primary-ID and Secondary-ID scores at either +2, 0 or -2.

Figures 1C and 1D show that green segments overwhelm red segments in both duration and quantity. History shows that corrections in cyclical bull markets are more prevalent than rallies in cyclical bear markets. Therefore a zero Cycle-ID score is not really neutral, but has a bullish bias. This subtle difference in the Cycle-ID score interpretation can make a huge impact on the investment outcomes over an extended time horizon.

part-3-1a-1b-1c-1d

For ease of visual inspection, the contents of Figures 1A to 1D are zoomed in for the period from 2000 to September 2016, as shown in Figures 2A to 2D, respectively. The Cycle-ID score hit -2 several times during the protracted 2000-2003 dot-com meltdown. Cycle-ID also reached -2 in July 2008, months before the collapse of Lehman Brothers and the global financial system. During the bulk of the two cyclical bull markets, from 2003 to 2007 and from 2009 to 2016, Cycle-ID was at +2 most of the time. Over the last 16 years, while market experts engaged in fruitless debates about whether we were in a secular bull or bear market, Cycle-ID identified all major and minor price movements objectively and without ambiguity. The rules-based clarity and the evidence-based credibility of Cycle-ID enabled investors and advisors to meet their dual objective – capital preservation in bad times and wealth accumulation in good times.

Let's examine the hypothetical performance statistics to see if Cycle-ID effectively met the dual objective in 116 years.

part-3-2a-2b-2c-2d

Cycle-ID performance stats

The ternary scale of Cycle-ID allows for many different combinations of investment strategies including the use of leverages and shorts. I intentionally selected a set of fairly aggressive strategies for the purpose of stress-testing Cycle-ID. This aggressive strategy set is used to demonstrate Cycle-ID's potential efficacy and is not an investment strategy recommendation.

The aggressive strategies translate into execution rules as follows. When the Cycle-ID score is at +2 (the market is in a rally mode within a bull market), exit the previous position and buy a 2x leveraged S&P 500 ETF (e.g. SSO) at the close in the following month. When the score is at -2 (the market is in a retracement phase within a bear market), exit the previous position and buy an inverse, unleveraged S&P 500 ETF (e.g. SH) at the close in the following month. When Cycle-ID is at zero, close the previous position (either leveraged long or unleveraged short) and buy an unleveraged long S&P 500 ETF (e.g. SPY) at the close in the following month. This mildly bullish interpretation of the zero score is based on the evidence that, over 100-plus years, cyclical bull markets significantly outnumber their cyclical bear counterparts, as shown in Figure 1C. Bull market corrections also outnumber bear market rallies, as shown in Figures 1C and 1D. As a result, I treat a zero Cycle-ID rating as a bullish call rather than a neutral market stance in the stress test.
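These execution rules reduce to a score-to-position lookup. A sketch under the article's assumptions (the ETF tickers are the article's examples; the mapping and function name are mine):

```python
# Hypothetical mapping from the Cycle-ID score to the aggressive
# strategy's target holding, executed at the following month's close.
POSITIONS = {
    +2: "SSO",  # 2x leveraged long S&P 500 (rally within a bull market)
     0: "SPY",  # unleveraged long (zero is read as mildly bullish)
    -2: "SH",   # 1x inverse S&P 500 (retracement within a bear market)
}

def target_position(score: int) -> str:
    """Return the ETF to hold for a given Cycle-ID score."""
    return POSITIONS[score]
```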

Figure 3A is the same as Figure 1A, with the aggressive set of strategies specified in blue on the upper left. Figure 3B shows that Cycle-ID compounded at 20.8% annually over 116 years, far above the compound annual growth rate (CAGR) of Secondary-ID at 12.8% and Primary-ID at 10.4%. The equity curves of Primary-ID and Secondary-ID are those presented in Parts 1 and 2, respectively; they are updated to the end of September and shown here as references. The S&P 500 compounded total return of 9.4% is the performance benchmark for comparison.
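The CAGR comparisons above use the standard compounding formula. A one-function sketch (the function name is mine):

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate: the constant yearly return that
    grows start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# A fourfold gain over two years compounds at 100% per year:
growth = cagr(100.0, 400.0, 2.0)
```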

part-3-3a-and-3b

Figures 4A and 4B display contents similar to those shown in Figures 3A and 3B except the base year for the equity curves is changed from 1900 to 2000. Despite two drastically different timeframes, 116 years in Figure 3 and 16 years in Figure 4, the relative CAGR gaps between Cycle-ID and the S&P 500 total return are nearly identical, roughly 1100 bps in both periods. Risk characteristics are also similar in the two periods. Cycle-ID has a lower maximum drawdown, but shows a 1.5x higher volatility (annualized standard deviation) than that of the S&P 500. The consistency in both CAGR edge and risk gap suggests that Cycle-ID has had a relatively stable value-adding ability for over a century.

part-3-4a-and-4b

Simulating leveraged and inverse indices

I now digress to describe briefly the methodology for computing both the leveraged and the inverse proxy indices used in the stress test presented above. The traditional ways of setting up such positions are borrowing capital from a margin account and selling borrowed equities. Leveraged and inverse ETF products did not exist in 1900 but are readily available today. Marco Avellaneda and Stanley Zhang developed the appropriate formulae for simulating leveraged and inverse time series from an underlying index. Their algorithms can be used to compute the leveraged and inverse S&P 500 proxies as follows. A 2x leveraged S&P 500 index is computed by multiplying each month's ending value of the index by one plus twice the month-to-month percentage change of the S&P 500. An inverse S&P 500 index is computed by multiplying each month's ending value of the index by one minus the month-to-month percentage change of the S&P 500. In my back tests, both time series are assumed to be rebalanced monthly.
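Under the monthly-rebalancing assumption, that construction reduces to a single recurrence per series. A sketch with a toy index (the values and function name are mine, for illustration only):

```python
def simulate(index, leverage):
    """Simulate a monthly-rebalanced leveraged (or inverse) proxy from
    an underlying index: each month the proxy moves by `leverage` times
    the index's month-to-month percentage change."""
    proxy = [index[0]]
    for prev, cur in zip(index, index[1:]):
        r = cur / prev - 1.0                 # month-to-month % change
        proxy.append(proxy[-1] * (1.0 + leverage * r))
    return proxy

sp500 = [100.0, 105.0, 99.75]                # toy monthly closes
two_x = simulate(sp500, 2.0)                 # 2x leveraged proxy
inverse = simulate(sp500, -1.0)              # 1x inverse proxy
```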

To check the accuracy of the Avellaneda and Zhang algorithms, I simulated two ETF proxies and compared them to the closing prices of two widely held ETFs: a 2x leveraged S&P 500 ETF (ticker: SSO) and an inverse S&P 500 ETF (ticker: SH). All time series are rebalanced daily. The results are shown in Figures 5A and 5B. The tracking errors, averaged over 10 years, are 0.2% and 0.1%, respectively. These errors may be small, but they could diverge over a longer time period. On the other hand, the two divergences have run in opposite directions, so the errors tend to cancel each other. The algorithms therefore appear adequate for testing Cycle-ID for illustration purposes.

part-3-5a-and-5b

Back to Cycle-ID: my intent is not to boast about the spectacular Cycle-ID CAGR of 20.8% shown in Figure 3B or to recommend a particular aggressive investment strategy. In fact, the high CAGR must be viewed with caution, because both fund costs/fees and tracking error could lower the hypothetical return. In addition, Figure 3B also shows that the higher hypothetical CAGR comes with higher risks: Cycle-ID has a maximum drawdown almost as deep as that of the S&P 500 total return index, and its annualized standard deviation (SD) is 1.6x that of the market benchmark. Nevertheless, this simple exercise does underscore the alpha-generating potential of Cycle-ID.

Return and risk tradeoffs

Besides the aggressive set of strategies of 2x long and 1x short, I also tested various combinations of leveraged and short positions. The best way to visualize absolute and risk-adjusted returns among different strategies is to plot CAGR against two risk measures – maximum drawdown (Figure 6A) and volatility (Figure 6B). In Figures 6A and 6B, the green curves represent long-only strategies (no shorts) with leverage levels ranging from unleveraged (1.0 L) to 2x leveraged (2.0 L). The blue curves show the effects of adding a short strategy (1x short) into the mix while varying the degree of long leverage. The preferred corner is at the top left, showing the highest return with the lowest risk. The red dots represent the S&P 500, which sits far away from the preferred corner.

part-3-6a-and-6b

When the short strategy is added (the blue lines), the rules are as follows. When the Cycle-ID score is zero, exit the previous position and buy the S&P 500 (e.g. SPY) at next month's close. When Cycle-ID is at +2, exit the previous position and buy the S&P 500 with various leverages (from 1.0 L to 2.0 L) at next month's close. When Cycle-ID is at -2, exit the previous position and buy an inverse S&P 500 (e.g. SH) at next month's close. When no short is used (the green lines), the rules are as follows. When the Cycle-ID score is zero, buy the unleveraged S&P 500 (e.g. SPY) at next month's close. When Cycle-ID is at +2, exit the previous position and buy the S&P 500 at various leverages (from 1.0 L to 2.0 L) at next month's close. When the Cycle-ID score is -2, exit the previous position and buy the 10-year U.S. Treasury bond (e.g. TLT) at next month's close. The return while holding long positions is the total return with dividends reinvested. The return while holding the bond is the sum of the bond coupon and the bond price changes caused by interest rate movements.

Several observations from Figures 6A and 6B are noteworthy. These characteristics are probably germane to leveraged long and short strategies in general and not specific to Cycle-ID.

  • First, when no leverage and no shorts are used (the bottom-left green dots in Figures 6A and 6B), the performance of Cycle-ID falls between that of Primary-ID and Secondary-ID. Hence, no synergy is gained from averaging the performances of the two models.
  • Second, the blue curves are to the right of the green curves indicating that adding short strategies increases risk more than return. It's inherently more challenging to profit from short positions because down markets are brief and volatile.
  • Third, the curves in Figure 6A are convex (higher marginal return with each added unit of drawdown) but the curves in Figure 6B are concave (lower marginal return with each added unit of volatility). The curvature disparity reflects the basic difference between these two types of risk. Volatility measures the uncertainty in the outcome of a bet. Maximum drawdown depicts the bloodiest outcome of a wrong bet.
  • With Cycle-ID, one can tailor strategies to match either market return or market risk. For instance, if one can endure an S&P 500 drawdown of -83%, one can supercharge CAGR from 9.7% to over 21% by using the 2L/1S strategy shown in Figure 6A. If one can tolerate a 17% market volatility, one can boost CAGR to over 14% by using the 2L/0S strategy in Figure 6B. Conversely, if one just wants to earn the market return of 9.7%, extrapolating the green lines in Figures 6A and 6B to intercept a horizontal line at 9.7% shows that one can reduce drawdown from -83% to below -30%, or calm volatility from 17% to less than 12%.
  • A widely known approach for engineering a portfolio with either market-matching return or market-matching volatility is risk parity, which budgets allocation weights by the inverse variances of all the assets in a portfolio. Cycle-ID achieves the same mission with a single index – the S&P 500 – and no risk-budgeting algorithm is needed. Looking for a robust risk-management tool in a chaotic, nonlinear and dynamic investment world, I would pick simplicity over complexity every time.
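The two risk measures plotted in Figures 6A and 6B can be computed directly from an equity curve and its monthly returns. A minimal sketch (function names are mine; the toy numbers are for illustration):

```python
import statistics

def max_drawdown(equity):
    """Largest peak-to-trough decline of an equity curve, as a
    (negative) fraction of the running peak."""
    peak, worst = equity[0], 0.0
    for value in equity:
        peak = max(peak, value)
        worst = min(worst, value / peak - 1.0)
    return worst

def annualized_vol(monthly_returns):
    """Annualized standard deviation of monthly returns."""
    return statistics.stdev(monthly_returns) * 12 ** 0.5

# A curve that peaks at 120 and falls to 60 has a -50% max drawdown:
dd = max_drawdown([100.0, 120.0, 60.0, 90.0])
```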

Concluding remarks

My cyclical-market models are relevant to both Modern Portfolio Theory (MPT) and Efficient Market Hypothesis (EMH) – the two pillars in the temple of modern finance.

Harry Markowitz introduced MPT in 1952 – the use of mutual cancellations in the variance-covariance matrices of uncorrelated asset classes to minimize portfolio risk. Jack Treynor was the first to note, in 1962, that the risk premium (the spread between risky and risk-free returns) of a stock is proportional to the covariance between that stock and the overall market. William Sharpe in 1964 simplified Markowitz's complex matrices into a single-index model – the Capital Asset Pricing Model (CAPM), which uses beta to represent stock or portfolio risk (price fluctuations relative to those of the market). The type of risk both MPT and CAPM focus on is volatility – uncertainty in the potential (expected) returns of one's bets. Neither MPT nor CAPM tackles the types of risk investors dread the most: temporary equity drawdowns and permanent capital losses from making the wrong bets. Volatility and beta are the types of risk financial theorists deliberate about in scholastic faculty lounges. Drawdowns and permanent losses are the types of risk that drain investors' wealth and prey on their emotions.

Furthermore, variance-covariance matrices and betas calculated from historical data could lose their anti-correlation magic or regression-line linearity when the next crisis hits. In 2008, for instance, the effectiveness of risk reduction by either MPT's diversification or CAPM's beta diminished just when capital protection was needed the most. Most importantly, even when MPT and CAPM work, they can only diversify away specific risk. Both models offer no solution for managing systematic or systemic risk. Company-specific risk is minuscule when compared to the risk from the overall stock market or from the collapse of the global financial system. Hardly any diversified portfolio could shelter one's wealth during the cyclical bear markets of the 1929 and 2008 financial meltdowns. MPT and CAPM are ill-equipped to mitigate these titanic financial shockwaves with tsunami-scale impacts that affect all asset classes around the globe.

Cycle-ID is a positive alternative to the traditional risk-hedging concepts. First, Cycle-ID eliminates company-specific risk by investing only in a broad market index – the S&P 500. There's no need to optimize an "efficient portfolio" of asset classes and hope that their anti-correlation attributes remain unchanged going forward. More importantly, Cycle-ID reduces both systematic risk (i.e., interest rate, inflation, currency or recession) and systemic risk (global financial systems, interlinked liquidity freezes, or geopolitical instability) by exposing one's capital only in fertile seasons, with market exposure objectively matched to the perennially changing market environment. This is accomplished by closely tracking the first and second derivatives of the Shiller CAPE – the aggregate market appraisal by all market participants.

Both the traditional risk hedging approaches (i.e., MPT and CAPM) and Cycle-ID employ the long-standing wisdom of diversification to manage risks. The difference is that the traditional approaches diversify in assets with uncorrelated covariances to hedge against company-specific risk. My model diversifies in market exposure in harmony with the investment climate to achieve a dual investment goal: (1) to minimize systematic and systemic risks in bad times; and (2) to increase market exposure in good times. The Chinese character for risk has an insightful duality – one pictogram for danger and the other for opportunity. While risk can harm us when we are exposed to it, risk is also a driver for higher returns if we exploit it to our advantage.

Let's turn to the Efficient Market Hypothesis, which was postulated by Eugene Fama in 1965. According to Richard Thaler, EMH has two separate but related theses: first, that the stock market is always right; and second, that the market is difficult to beat in the long run. Behavioral economists argue that the market is not always right because its pricing mechanism does not always function perfectly. Humans are not always rational beings, and financial bubbles are proof of market mispricing. Both traditional and behavioral economists, however, agree on the second thesis: the market is hard to beat in the long run. The market is hard to beat because it's quite efficient (discounting all known information that could affect market prices) most of the time. But the market is not totally efficient all the time. Market prices often diverge from the market's intrinsic value – an observation first articulated in 1938 by John Burr Williams, before the inception of behavioral economics. Hence, it's difficult but not impossible to beat the S&P 500 in the long run.

Primary-ID, Secondary-ID, and Cycle-ID along with a handful of legendary investors and some previously published models of mine (Holy Grail, Super Macro, and TR-Osc) demonstrate that it's possible to outperform the S&P 500 total return. They do so not by predicting the future. Price prediction is futile because both the amplitude of impact and the frequency of occurrence of the various price drivers are totally random. So what are the secret ingredients for a market-beating model?

To outperform the S&P 500, following my five criteria alone is not enough. The five criteria (simplicity, sound rationale, rule-based clarity, sufficient sample size, and relevant data) only increase the odds of model robustness; they do not offer an outperformance edge over the S&P 500. To beat the market, a model must also exhibit a quality of originality with a counterintuitive flavor. If a model is too intuitively obvious, many will have already discovered it and its edge will disappear. More importantly, to beat the market, a model must be relatively unknown so that it's not widely followed. If you can find the model on a Bloomberg terminal, it is already part of the market. The market cannot outperform itself.

My cyclical-market models are simple (a single metric – the CAPE), focused (one index – the S&P 500), logical (vectors over scalars) and, above all, transparent (all rules are disclosed). Should you worry that, now that my models are published, their future efficacy will diminish? I would argue that such concern is unwarranted. First, only a fraction of the total investor universe will read my articles. Even among those who read them, only a fraction will believe in the models. Even among those swayed by the rationale of the models, only a tiny fraction will internalize their conviction and have the discipline to follow through over time. These probabilities are multiplicative and protect the models from being homogenized by the masses.

Theodore Wong graduated from MIT with a BSEE and MSEE degree and earned an MBA degree from Temple University. He served as general manager in several Fortune-500 companies that produced infrared sensors for satellite and military applications. After selling the hi-tech company that he started with a private equity firm, he launched TTSW Advisory, a consulting firm offering clients investment research services. For almost four decades, Ted has developed a true passion for studying the financial markets. He applies engineering design principles and statistical tools to achieve absolute investment returns by actively managing risk in both up and down markets. He can be reached at ted@ttswadvisory.com.



Modeling Cyclical Markets – Part 1

Originally Published October 24, 2016 in Advisor Perspectives

The proverbial wisdom is that there are two types of stock market cycles – secular and cyclical. I argued previously that secular cycles not only lack a credible statistical basis, but their durations of 12 to 14 years are also impractical for most investors. We live in an internet age with a time scale measured in nanoseconds. Wealth managers often turn over their portfolios after only a few years. Simply put, secular cycles can last longer than financial advisors can retain their clients.

The second type of cycle is called a “cyclical market” and is believed to comprise both primary and secondary waves. Economic cycles are thought to drive primary waves. According to the National Bureau of Economic Research (NBER), the average economic cycle length is 4.7 years, which would be more suitable for the typical holding periods of most investors.

To succeed in accumulating wealth in bull markets and preserving capital in bear markets, we must first define and detect primary and secondary markets. In this article, I present a modeling approach to spot primary cycles. Modeling secondary market cycles will be the topic of Part 2.

Common flaws in modeling financial markets

Before presenting my model on primary markets, I must digress to discuss two common mistakes in modeling financial markets. For example, when modeling secular market cycles and market valuations, analysts use indicators such as the Crestmont P/E, the Alexander P/R and the Shiller CAPE (cyclically adjusted price-earnings ratio). By themselves, these indicators are fundamentally sound. It's the modeling approach using these indicators that is flawed.

Models on valuations and secular cycles cited above share two assumptions. First, they assume that the amplitude (scalar) of the indicators can be relied on to indicate market valuations and secular outlook. Extremely high readings are interpreted as overvaluations or cycle crests, and extremely low readings, undervaluation or cycle troughs. Second, it's assumed that mean reversion will always drive the extreme readings in the models back into line.

Figure 1A shows the S&P 500 from 1881 to mid-2016 in logarithmic scale. Figure 1B is the Shiller CAPE overlay. The solid purple horizontal line is the mean from 1881 to 1994 and has a value of 14.8. The upper and lower dashed purple lines represent one standard deviation above and below the mean of 14.8, respectively. The solid brown line to the right is the mean from 1995 to mid-2016 and has a value of 26.9. The upper and lower dashed brown lines are one standard deviation above and below the post-1995 mean of 26.9, respectively. One standard deviation above the pre-1995 mean is 19.4 and one standard deviation below the post-1995 mean is 20.4. The data regimes in the two adjoining timeframes do not overlap. The statistically distinct nature of the two regimes invalidates the claim by many secular cycle advocates and CAPE-based valuations practitioners that the elevated CAPE readings after 1995 are just transitory statistical outliers and will fall back down in due course.

figure-1a-and-1b

Let's examine the investment impacts of these two assumptions. The first assumption is that extreme amplitudes can be used to track cycle turning points. Figure 1B shows that both high and low extremes are arbitrary and relative. As such, they cannot be used as absolute valuation markers. For example, after 1995, the entire amplitude range shifted upward. Secular cyclists and value investors would have sold stocks in 1995 when the CAPE first pierced above 22, exceeding the major secular bull market crests of 1901, 1937 and 1964. They would have missed the 180% gain in the S&P 500 from 1995 to its peak in 2000. More recently, the CAPE dipped to 13 at the bottom of the sub-prime crash. Secular cycle advocates and value investors would consider a CAPE of 13 not cheap enough relative to previous secular troughs in 1920, 1933, 1942, 1949, 1975 and 1982. They would have asked clients to switch from stocks to cash, only to miss out on the 200% gain in the S&P 500 since 2010. These are examples of huge upside misses caused by the first flawed assumption used in these scalar-based models.

The second assumption is that mean reversion always brings the out-of-bound extremes back into line. This assumption falters on three counts. First, mean reversion is not mean regression: the former is a hypothesis, while the latter is a law for certain statistical distributions, such as the Gaussian (the bell curve). Second, mean regression is guaranteed only for distributions that resemble a bell curve; if the distributions follow power-law or Erlang statistics, even mean regression is not guaranteed. Finally, neither mean regression nor mean reversion is a certainty if the overshoots are not by chance but are the results of causation. An elevated CAPE will last as long as the causes (Philosophical Economics, Jeremy Siegel and James Montier) remain in place. The second assumption creates a false sense of security that could be very harmful to your portfolios.

The confusion caused by both of these false assumptions is illustrated in Figure 1B. For the 26.9 mean, reversion has already taken place in 2002 and 2009. But for the 14.8 mean, reversion has a long way to go. All scalar models that rely on arbitrary amplitudes for calibration and assume a certainty of mean reversion are doomed to fail.

A vector-based modeling approach

The issues cited above are the direct pitfalls of using scalar-based indicators. One can think of a scalar as an AM (amplitude modulation) radio in a car: the signals are easily distorted when the car goes under an overpass. A vector, on the other hand, is analogous to FM (frequency modulation) signals, which are encoded not in amplitude but in frequency. Solid objects can attenuate amplitude-coded signals but cannot corrupt frequency-coded ones. Likewise, vector-based indicators are immune to amplitude distortions caused by external interferences – such as Fed policies, demographics, or accounting rule changes – that might cause the overshoot in the scalar CAPE. Models using vector-based indicators are inherently more reliable.

Instead of creating a new vector-based indicator from scratch, one can transform any indicator from a scalar to a vector with the help of a filter. Two common signal-processing filters used by electronic engineers to condition signals are low-pass filters and high-pass filters. Low-pass filters improve lower frequency signals by blocking unwanted higher frequency chatter. An example of a low-pass filter is the moving average, which transforms jittery data series into smoother ones. High-pass filters improve higher frequency signals by detrending irrelevant low frequency noise commonly present in the physical world. The rate-of-change (ROC) operator is a simple high-pass filter. ROC is defined as the ratio of the change in a variable over a specific time interval. Common time intervals used in financial markets are year-over-year (YoY) or month-over-month (MoM). By differentiating (taking the rate-of-change of) a time series, one transforms it from scalar to vector. A scalar only shows amplitude, but a vector contains both amplitude and direction contents. Let me illustrate how such a transformation works.
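The scalar-to-vector transformation described above can be sketched in a few lines, assuming a monthly series and a 12-month (YoY) interval (the function name is mine):

```python
def yoy_roc(series, period=12):
    """Year-over-year rate of change: a simple high-pass filter that
    turns a scalar level series into a vector (magnitude + direction).
    Each output is the % change versus the value `period` steps earlier."""
    return [cur / series[i] - 1.0 for i, cur in enumerate(series[period:])]

# A flat series with one uptick a year later shows a +10% YoY change:
flat_then_up = [10.0] * 12 + [11.0]
roc = yoy_roc(flat_then_up)
```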

Figure 2A is identical to Figure 1B, the scalar version of the Shiller CAPE. Figure 2B is a vector transformation, the YoY-ROC of the scalar Shiller CAPE time series. There are clear differences between Figure 2A and Figure 2B. First, the post-1995 overshoot aberration in Figure 2A is no longer present in Figure 2B. Second, the time series in Figure 2B has a single mean, i.e. the mean from 1881 to 1994 and the mean from 1995 to present are virtually the same. Third, Figure 2B shows that the plus and minus one standard deviation bands from the two time periods completely overlap. This demonstrates statistically that the vector-based indicator is range-bound across its entire 135-year history. Finally, the cycles in Figure 2B are much shorter than those in Figure 2A. Shorter cycles are more conducive to mean reversion.

figure-2a-and-2b

It's clear that the YoY-ROC filter mitigates many calibration issues associated with the scalar-based CAPE. The vector-based CAPE is range-bound, has a single and stable mean and has shorter cycle lengths. These are key precursors for mean reversion. In addition, there are theoretical reasons from behavioral economics that vectors are preferred to scalars in gauging investors' sentiment. I will discuss the theoretical support a bit later.

The vector-based CAPE periods versus economic cycles

Primary market cycles are believed to be driven by economic cycles. Therefore, to detect cyclical markets, the indicator should track economic cycles. Figure 3A shows the S&P 500 from 1950 to mid-2016. The YoY-ROC GDP (gross domestic product) is shown in Figure 3B and the YoY-ROC CAPE in Figure 3C. The Bureau of Economic Analysis (BEA) has published quarterly U.S. GDP data only since 1947.

The waveform of the YoY-ROC GDP is noticeably similar to that of the YoY-ROC CAPE. In fact, the YoY-ROC CAPE has a tendency to roll over before the YoY-ROC GDP dips into recessions, often by as much as one to two quarters. The YoY-ROC GDP and the YoY-ROC CAPE are plotted as if the two curves were updated at the same time. In reality, the YoY-ROC CAPE is nearly real-time (the S&P 500 and earnings are at month-ends and the Consumer Price Index has a 15-day lag). GDP data, on the other hand, is not available until a quarter has passed and is revised three times. The YoY-ROC CAPE indicator is updated ahead of the final GDP data by as much as three months. Hence, the YoY-ROC CAPE is a true leading economic indicator.

figure-3a-3b-3c

Although the waveforms in Figures 3B and 3C look alike, they are not identical. How closely did the YoY-ROC CAPE track the YoY-ROC GDP over the past 66 years? The answer can be found with regression analysis. Figure 4 shows an R-squared of 29.2% between the GDP growth rate and the YoY-ROC CAPE. A single indicator that can explain close to one-third of the variation in the annual growth rate of GDP is remarkable, considering the simplicity of the YoY-ROC CAPE and the complexity of GDP and its components.

figure-4
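For readers who want to reproduce this kind of fit: in a one-variable regression, R-squared is just the squared Pearson correlation between the two series. A sketch with synthetic placeholder data (not the actual GDP and CAPE series):

```python
import numpy as np

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    # For simple linear regression, R^2 equals the squared Pearson correlation
    return float(np.corrcoef(x, y)[0, 1] ** 2)

# Illustrative quarterly series standing in for YoY-ROC CAPE (x) and YoY-ROC GDP (y)
rng = np.random.default_rng(0)
x = rng.normal(0.0, 10.0, 264)             # roughly 66 years of quarters
y = 0.2 * x + rng.normal(0.0, 3.0, 264)    # y partially explained by x, plus noise
r2 = r_squared(x, y)
```

Running the same two lines on the aligned quarterly YoY-ROC GDP and YoY-ROC CAPE series yields the 29.2% figure quoted above.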

Primary-ID – a model for primary market cycles

Finding an indicator that tracks economic cycles is only a first step. To turn that indicator into an investment model, we have to come up with a set of buy and sell rules based on that indicator. Primary-ID is a model I designed years ago to monitor major price movements in the stock market. In the next article, I will present Secondary-ID, a complementary model that tracks minor stock market movements. I now illustrate my modeling approach with Primary-ID.

A robust model must meet five criteria: simplicity, sound rationale, rule-based clarity, sufficient sample size, and relevant data. Primary-ID meets all five. First, Primary-ID is elegantly simple – only one adjustable parameter for "in-sample training." Second, the vector-based CAPE is fundamentally sound. Third, the buy and sell rules are clearly defined. Fourth, the Shiller CAPE is statistically relevant because it spans more than two dozen business cycles. Fifth, the Shiller database is quite sufficient because it provides over a century of monthly data.

Figure 5A shows both the S&P 500 and the YoY-ROC CAPE from 1900 to 1999. This is the training period to be discussed next. The curves are in green when the model is bullish and in red when bearish. Bullish signals are generated when the YoY-ROC CAPE crosses above the horizontal orange signal line at -13%. Bearish signals are issued when the YoY-ROC CAPE crosses below the same signal line. The signal line is the single adjustable parameter in the in-sample training.

figure-5a

Figure 5B compares the cumulative return of Primary-ID to the total return of the S&P 500, the benchmark for comparison. A $1 investment in Primary-ID in January 1900 would have hypothetically grown to $30,596 by the end of 1999, a compound annual growth rate (CAGR) of 10.9%. The same $1 in the S&P 500 over the same period would have grown to $23,345, a CAGR of 10.6%. The 30 bps CAGR gap may seem small, but it compounds to roughly 30% more cumulative wealth after 100 years. The other significant benefit of Primary-ID is that its maximum drawdown is less than two-thirds of that of the S&P 500. It trades on average once every five years, very close to the average business cycle of 4.7 years published by NBER.
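These figures follow from the standard compounding formula, CAGR = (ending wealth / starting wealth)^(1/years) − 1, which is easy to verify:

```python
def cagr(end_value: float, start_value: float, years: float) -> float:
    # Compound annual growth rate implied by start and end wealth over a period
    return (end_value / start_value) ** (1.0 / years) - 1.0

primary_id_cagr = cagr(30596, 1, 100)  # ~10.9% per year
sp500_cagr = cagr(23345, 1, 100)       # ~10.6% per year
```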

The in-sample training process

Figures 5A and 5B cover the period from 1900 to 1999, the back-test period used to find the optimal signal line for Primary-ID. The buy and sell rules are as follows: when the YoY-ROC CAPE crosses above the signal line, buy the S&P 500 (e.g. SPY) at the next month's close; when the YoY-ROC CAPE crosses below the signal line, sell the S&P 500 at the next month's close and park the proceeds in US Treasury bonds. The return while holding the S&P 500 is the total return with dividends reinvested. The return while holding bonds is the sum of the bond yield and the bond price percentage change caused by interest-rate moves.
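The crossing rules translate directly into code. A minimal sketch of the signal logic (helper names are mine, not the author's; monthly data is assumed, with the next-month execution implemented as a one-period shift):

```python
import pandas as pd

SIGNAL_LINE = -13.0  # percent; the single parameter found in training

def primary_id_positions(roc_cape: pd.Series) -> pd.Series:
    # +1 = hold the S&P 500 (total return), 0 = hold Treasury bonds.
    # Signals execute at the NEXT month's close, hence the one-period shift.
    above = roc_cape > SIGNAL_LINE
    return above.shift(1, fill_value=False).astype(int)

# Illustrative indicator path crossing the signal line (not actual CAPE data)
roc = pd.Series([-20.0, -15.0, -10.0, 5.0, -14.0, -18.0])
pos = primary_id_positions(roc)  # [0, 0, 0, 1, 1, 0]
```

Note the deliberate lag: the month the indicator crosses the line is not the month the trade occurs.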

Figure 6 shows the backtest results in two tradeoff spaces. The plot on the left is a map of CAGR versus maximum drawdown for various signal lines. The one on the right shows CAGR as a function of the position of the signal line. For comparison, the S&P 500 delivered a total return of 10.6% with a maximum drawdown of -83% over the same period. Most of the blue dots in Figure 6 beat that total return, and all have much smaller maximum drawdowns than the S&P 500.

figure-6
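A tradeoff map like Figure 6 can be produced by sweeping candidate signal lines and recording the CAGR and maximum drawdown of each backtest. A sketch under stated assumptions (the random series below are placeholders for the real monthly indicator and return data; function names are mine):

```python
import numpy as np
import pandas as pd

def max_drawdown(cum_wealth: pd.Series) -> float:
    # Largest peak-to-trough decline of a cumulative-wealth curve
    return float((cum_wealth / cum_wealth.cummax() - 1.0).min())

def backtest(roc_cape, equity_ret, bond_ret, signal_line):
    # Monthly strategy: hold equities when YoY-ROC CAPE is above the line
    # (with a one-month execution lag), Treasury bonds otherwise.
    in_equity = (roc_cape > signal_line).shift(1, fill_value=False)
    strat = equity_ret.where(in_equity, bond_ret)
    wealth = (1.0 + strat).cumprod()
    years = len(strat) / 12.0
    growth = wealth.iloc[-1] ** (1.0 / years) - 1.0
    return growth, max_drawdown(wealth)

# Placeholder data: 100 years of synthetic monthly observations
rng = np.random.default_rng(1)
n = 1200
roc = pd.Series(rng.normal(0.0, 12.0, n))    # stand-in for YoY-ROC CAPE (%)
eq = pd.Series(rng.normal(0.008, 0.04, n))   # stand-in for S&P 500 total return
bd = pd.Series(rng.normal(0.004, 0.01, n))   # stand-in for Treasury bond return
grid = {line: backtest(roc, eq, bd, line) for line in range(-19, -9)}
```

Plotting `grid` as CAGR versus drawdown (left) and CAGR versus signal line (right) reproduces the two panels of Figure 6.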

Figures 6A and 6B show only the range of signal lines that offers relatively high CAGR. What is not shown is that all signal lines above -10% underperform the S&P 500. The two blue dots marked by arrows in both charts mark neither the highest return nor the lowest drawdown; they sit in the middle of the CAGR sweet spot. I judiciously select a signal line at -13% rather than the one with the maximum CAGR. An off-peak parameter gives the model a better chance of living up to its back-tested performance, whereas picking the fully optimized parameter would create an unrealistic bias in the out-of-sample test results. Furthermore, an over-optimized model, even if it passes the out-of-sample test, is prone to underperform in real time: a parameter peaked during back-tests will likely lead to inferior out-of-sample results as well as inferior actual forecasts.

Why do all signal lines above -10% give lower CAGRs than those between -10% and -19%? There is a theoretical reason for this asymmetry, which I discuss a bit later.

The out-of-sample validation

The out-of-sample test guards against the risk of over-fitting during in-sample optimization. It's like a dry run before applying the model live with real money. Passing the out-of-sample test does not guarantee a robust model, but failing it is certainly grounds for rejecting the model.

Here’s how out-of-sample testing works. The signal line selected in the training exercise is applied to a new set of data from January 2000 to July 2016 with the same buy and sell rules. Figure 7A shows both the S&P 500 and the YoY-ROC CAPE.

Figure 7B compares the cumulative return of Primary-ID to the total return of the S&P 500. A $1 investment in Primary-ID in January 2000 would have hypothetically grown to $3.50 by mid-2016, a CAGR of 7.8%. The same $1 in the S&P 500 over the same period would have grown to $2.02, a CAGR of 4.3%. An added perk of Primary-ID is its maximum drawdown of -23%, less than half of the S&P 500's -51%. It trades on average once every five years, similar to the in-sample period, so profits are taxed at long-term capital-gains rates.

figure-7a-and-7b

Primary-ID sidestepped two infamous bear markets: the dot-com crash and the sub-prime meltdown. It was also fully invested in equities during the two mega bull markets of the last 16 years. The value of the YoY-ROC CAPE as a leading economic indicator and the efficacy of Primary-ID as a cyclical-market model are thus validated.

Theoretical support for Primary-ID

The theoretical support for Primary-ID can be found in prospect theory, proposed by Daniel Kahneman and Amos Tversky in 1979. Prospect theory offers three original axioms that lend support to Primary-ID. The first axiom shows that there is a two-to-one asymmetry between the pain of losses and the joy of gains – losses hurt twice as much as gains bring joy. Recall that the sweet spot for CAGR comes from signal lines located between -10% and -19% (Figure 6), more than one standard deviation below the indicator's mean near 0% (Figure 2B). Why is the sweet spot located that far off center? It could be the result of this asymmetry in investors' attitudes toward reward versus risk. Prospect theory explains an old Wall Street adage – let profits run, but cut losses short. Primary-ID adds a new meaning to this old motto – buy swiftly, but sell late. In other words, buy quickly once the YoY-ROC CAPE crosses above -13%, but don't sell until the YoY-ROC CAPE crosses below -13%.

The second prospect-theory axiom deals with scalars versus vectors. The authors wrote, "Our perceptual apparatus is attuned to the evaluation of changes or differences rather than to the evaluation of absolute magnitudes." In other words, it's not the level of wealth that matters; it's the change in the level of wealth that affects investors' behavior. This explains why the vector-based CAPE works better than the original scalar-based CAPE: the former captures human behavior better than the latter.

The third prospect-theory axiom proposed by Kahneman and Tversky is that "the value function is generally concave for gains and commonly convex for losses." Richard Thaler explains this statement in layman's terms in his 2015 book, "Misbehaving." The value function represents investors' attitudes toward reward and risk; the terms concave and convex refer to the curve shown in Figure 3 of the 1979 paper. A concave (or convex) value function simply means that investors' sensitivity to joy (or pain) diminishes as the level of gain (or loss) increases. This diminishing sensitivity is observed only in changes in investors' attitudes (vectors), not in the attitudes themselves (scalars). Investors' diminishing sensitivity toward both gains and losses is the reason the YoY-ROC CAPE indicator is range-bound and why it mean-reverts more regularly. The original Shiller CAPE is a scalar time series and does not benefit from the third axiom. Therefore the apparently range-bound, mean-reverting behavior of the scalar Shiller CAPE in the past is the exception, not the norm.
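The shape of the value function can be made concrete with the parametric form Tversky and Kahneman later estimated. Note the hedge: the exponents (0.88) and the loss-aversion coefficient (2.25) come from their 1992 follow-up study, not the 1979 paper, which only specifies the shape:

```python
def value(x: float, alpha: float = 0.88, beta: float = 0.88, lam: float = 2.25) -> float:
    # Prospect-theory value function: concave for gains, convex for losses,
    # with losses weighted lam (~2x) more heavily than gains.
    # Parameter estimates are from Tversky & Kahneman (1992), used illustratively.
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# Diminishing sensitivity: the second unit of gain (or loss) moves the value
# function by less than the first unit did.
gain_concave = value(2.0) < 2 * value(1.0)    # concave for gains
loss_convex = value(-2.0) > 2 * value(-1.0)   # convex for losses
```

The same three properties the code checks – loss aversion, concavity for gains, convexity for losses – are exactly the axioms cited above.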

Concluding remarks

The stock market is influenced by many driving forces, including economic cycles, credit cycles, Fed policies, seasonal/calendar factors, the equity-premium puzzle, shifts in risk aversion, and bubble/crash sentiment. At any point in time, the stock market is simply the superposition of the displacements of all these individual waves. The economic cycle is likely the dominant wave driving cyclical markets, but it is not the only one. That's why the R-squared is only 29.2%, and why not all bear markets were accompanied by recessions (e.g. 1962, 1966 and 1987).

The credibility of the Primary-ID model in gauging primary cyclical markets rests on several factors. First, it is based on a fundamentally sound metric – the Shiller CAPE. Second, its indicator (the YoY-ROC CAPE) is a vector, which is more robust than a scalar. Third, the model tracks the cycle dynamics between the market and the economy relatively well. Fourth, the excellent agreement between the five-year average signal length of Primary-ID (0.2 trades per year, shown in Figures 5B and 7B) and the average business cycle of 4.7 years reported by NBER adds credence to the model. Finally, Primary-ID has firm theoretical underpinnings in behavioral economics.

It's a widely held view that the stock market exhibits both primary and secondary waves. If primary waves are predominantly driven by economic cycles, what drives secondary waves? Can we model secondary market cycles with a vector-based approach similar to that in Primary-ID? Can such a model complement or even augment Primary-ID? Stay tuned for Part 2 where I debut a model called Secondary-ID that will address all these questions.

Theodore Wong graduated from MIT with BSEE and MSEE degrees and earned an MBA from Temple University. He served as general manager in several Fortune-500 companies that produced infrared sensors for satellite and military applications. After selling the hi-tech company that he started with a private equity firm, he launched TTSW Advisory, a consulting firm offering clients investment research services. For almost four decades, Ted has developed a true passion for studying the financial markets. He applies engineering design principles and statistical tools to achieve absolute investment returns by actively managing risk in both up and down markets. He can be reached at ted@ttswadvisory.com.