
Random Walk Part 4 – Can We Beat a Radically Random Stock Market?

Originally Published October 9, 2017 in Advisor Perspectives

This is the final article in my four-part series on the fallacy of the random-walk paradigm. In Part 1 and Part 2, I showed that asset prices do not follow a tidy bell curve but are instead radically random. In Part 3, I demonstrated that many bad risk-management practices are the direct result of equating volatility with risk. In this article, I offer a probability-based framework that captures the true nature of investment reward and risk.

The Efficient Market Hypothesis (EMH) argues that the market is hard to beat because very few people can make better forecasts than the collective market wisdom, which instantly discounts all available information. My new reward-risk framework reveals a little-known secret: market gains and losses have very different distribution profiles. We can beat the S&P 500 not by making better forecasts, but by exploiting the dual personality of Mr. Market.

The random walk theory has been the core of modern finance since Louis Bachelier wrote his 1900 PhD thesis. Economists define reward as the mean return (expected value) and risk as the standard deviation (volatility) of the returns. These mathematical terms may be convenient for academics formulating economic theories, but they make no sense to the average investor. Reward has a positive overtone, yet the mean can be negative. Risk has a negative undertone, yet the standard deviation weighs gains and losses equally. Investors view reward and risk as two sides of the same coin – reward comes from gains and risk comes from losses. My reward-risk framework quantifies this subtle diametrical symmetry.

The random-walk definitions of investment reward and risk

Modern finance adopted the mean-variance paradigm to frame reward and risk. Appendix A presents the mathematical definitions. Figure 1 illustrates reward and risk graphically with the annual returns of the S&P 500 from 1928 to 2016 (data sources: MetaStock and Yahoo Finance). The dark blue curve is the random walk probability density function (PDF). Reward (the mean or expected value) is computed with equation A1 in Appendix A by integrating the probability-weighted returns under the PDF curve. Risk (the square root of the variance), computed with equation A2, is one-half the width of the light blue central region bounded by ± one standard deviation. The random walk PDF roughly matches the data (the jagged gray area) in the central region except near the peak. Beyond ± one standard deviation, the data reside mostly above the PDF curve.

Figure 2 compares the random walk PDFs (blue curves) to the actual S&P 500 returns (gray areas) over one-, five- and 10-year horizons. The peaks of the PDFs denote the means. The red arrows mark ± one standard deviation. The random walk's notions of mean and volatility bear no resemblance to actual returns and risks in the real world, and the longer the return horizon, the larger the gap. This is why so many conventional risk-management practices derived from the mean-variance paradigm broke down during financial crises. The academics' bell-curve paradigm offers investors no protection against financial market risks.

My gain-loss framework for investment reward and risk

I offer a new probability-based framework for defining reward and risk. The formulas are presented in Appendix B. Figure 3 illustrates the concept. I define investment reward as the expected gain – the sum of all probability-weighted gains in a return histogram. It is computed by numerically integrating the total green area in Figure 3 using equation B1 in Appendix B. I define risk as the expected loss – the sum of all probability-weighted losses. It is computed by summing the total pink area in Figure 3 using equation B2.
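To make these definitions concrete, here is a minimal Python sketch (my own illustration, not the author's code) that computes the expected gain and expected loss from a set of observed returns; the sample numbers are placeholders, not S&P 500 data.

```python
import numpy as np

def expected_gain_loss(returns):
    """Probability-weighted gain and loss from observed returns.

    Each observation is weighted by its observed probability (1/N for an
    equally weighted histogram), so the results correspond to the green
    (gain) and pink (loss) areas described in the text.
    """
    returns = np.asarray(returns, dtype=float)
    prob = 1.0 / len(returns)                            # observed probability per return
    expected_gain = prob * returns[returns > 0].sum()    # Eqn (B1)
    expected_loss = prob * returns[returns < 0].sum()    # Eqn (B2), a negative number
    return expected_gain, expected_loss

# Illustrative annual returns (made-up numbers)
gain, loss = expected_gain_loss([0.12, -0.05, 0.25, -0.18, 0.07, 0.31, -0.02])
print(f"expected gain {gain:.1%}, expected loss {loss:.1%}, gain/loss ratio {gain / abs(loss):.1f}")
```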

The random-walk paradigm treats both positive and negative dispersions as risks. The expected gain-loss framework only counts losses (red bars) as risks and views the widely dispersed green bars (gains) as gainful opportunities. The bell curve does not account for all the data, especially the observations at the extremes. The new model accounts for all outlier gains and all tail risks, weighted by their observed probabilities.

The old paradigm versus the new framework

Besides being unrealistic and impractical, the random-walk paradigm has one more subtle and underreported fault. Mean and variance have different units of measure – the mean is in percent but the variance is in percent-squared. For unit compatibility, William Sharpe was compelled to use the standard deviation – the square root of the variance – in his Sharpe Ratio. Even so, mean and standard deviation still lack contextual uniformity. The mean signifies the most probable outcome and the standard deviation measures the spread of those outcomes. Comparing reward (mean) to risk (volatility) is like comparing apples and oranges. For instance, Figure 1 shows that the mean-to-volatility ratio of the S&P 500 is 0.48 (a mean of 7.7% divided by a volatility of 16%). Does this imply that the reward of investing in the S&P 500 is less than half the risk?

By contrast, "expected gain" and "expected loss" are two sides of the same coin – returns with opposite signs. Unlike the Sharpe Ratio that lacks clarity, the expected-gain-to-expected-loss ratio has absolute significance. For instance, Figure 3 shows that from 1928 to 2016, the S&P 500 index has an expected gain of 12.5% versus an expected loss of -4.3%. The expected gain of the S&P 500 is 2.9 times larger than the size of the expected loss.

Challenging the EMH's explanation of why the market is hard to beat

Why is the S&P 500 total return such a formidable challenge for many active managers and market timers? The Efficient Market Hypothesis (EMH) offers a two-part explanation. First, market prices instantly (efficiently) reflect the collective appraisals of all market participants. Second, one can only beat the market by outsmarting the collective wisdom. On the surface, both points appear logical, but neither withstands scrutiny.

The first point may explain why it is difficult for arbitrageurs to make a living, because any price gap is instantly exploited. This argument, however, does not apply to financial markets, where prices are not single-valued functions of information. The same news can have multiple meanings and price implications depending on the receivers. Different interpretations of the same news draw buyers and sellers to the table. Price is an equilibrium point at which the sellers believe the price is fair but high, while the buyers think it is reasonable but low. The market is not a super forecaster, but an efficient auction-clearing house that facilitates transactions between buyers and sellers with different appraisals.

The second EMH argument is self-inconsistent. It asserts that few can beat the market because outsmarting the collective forecast is hard to do. But no random-walk follower, including the EMH faithful, should endorse the practice of forecasting, because forecasting randomness is a contradiction in terms. Randomness, by definition, is unpredictable.

The real reason why the market is hard to beat

My gain-loss framework offers a painfully obvious explanation of why the market is hard to beat. Figure 4 parses the same data in Figure 2 in terms of gains and losses. It shows that the market offers investors abundant gainful opportunities (green bars), but that they are highly erratic. The probabilities inside the blue rectangle in the middle chart are nearly the same but the gains span from 0% to 80%. The bottom chart shows comparable probabilities for gains ranging from 0% to 200%. To time the market with virtually flat gain distributions is futile. That is why the buy-and-hold approach is unbeatable in the green zone.

The characteristics of market losses (pink bars) are very different. First, the pink zones are much narrower than the green areas. Second, while the expected gain (the green area) grows with time – from 12.5% in one year, to 46.6% in five years and 101.7% in 10 years – the expected loss (the pink area) is insensitive to the holding period. In fact, as the holding period extends from five to 10 years, the expected loss shrinks from -5.2% to -2.9%.

It is a fool's errand to time the market in the green areas, where the probabilities are almost flat and the distributions grow with the holding period. It is prudent to stay in the market and gather those wildly erratic gains. In contrast, the pink areas are confined and insensitive to time. Hence, mitigating losses in the pink areas is much more manageable. My gain-loss framework not only explains why it is hard to beat the market, but also reveals a clue on how to do it logically.

Unlock a little-known market-beating secret

How can we differentiate whether the current market is in the green or pink zone? I previously published five models that were designed to do just that. The five models are Holy Grail, Super Macro, TR-Osc, Primary-ID and Secondary-ID. They detect the green/pink market phases from five orthogonal perspectives – trend, the economy, valuations, major market cycles and minor price movements, respectively.

Figure 5 shows the annual return histograms of the five models against three investment benchmarks – the S&P 500 total return, the 10-year US Treasury bond, and the 60/40 mix (60% equity and 40% bond, rebalanced monthly) (data: Shiller). They were computed from the eight equity curves – the compound growth of each strategy from January 1900 to September 2017.

Figure 5 shows that my five models share two common features: their green bars are comparable to those of the S&P 500, but their pink areas are narrower. In other words, their expected gains are as good as the S&P 500's, but their expected losses are much lower. As a result, they offer much higher expected gain-to-loss ratios than the S&P 500. A model with a higher expected gain-to-loss ratio than the S&P 500 can surely beat the market return. I will quantify this point a bit later, but first I must challenge yet another modern finance doctrine.

Challenging the Capital Asset Pricing Model

The Capital Asset Pricing Model (CAPM) asserts, first, that no return of any asset mix between the S&P 500 and the Treasury bond or Treasury bill can exceed the Capital Market Line (CML); and second, that one can only increase return by taking on more risk via leverage. Figure 6 plots expected gains versus expected losses from the data in Figure 5. The dashed red line is the CML connecting the S&P 500 total return to the 10-year US Treasury bond total return (bond yield plus bond price changes caused by interest rate changes). Also shown are my five models (light blue dots) and the three benchmarks (red squares). I add a sixth model, Cycle-ID (black dot), to show the effect of leverage. All six models are counterexamples to the CAPM. The five unlevered models reside far above the CML. The levered Cycle-ID beats the S&P 500 expected gain by 68% with only 85% of the risk. Hence, both CAPM claims are untrue.

CAGR and maximum drawdown comparisons

It is simple math that investors can beat the S&P 500 total return if they achieve expected gains close to those of the S&P 500 while cutting their expected losses sizably relative to the index. Table 1 lists the compound annual growth rates (CAGRs) of all nine strategies across six sets of full bull-and-bear market cycles from January 1900 to September 2017. All six models have CAGRs consistently higher than the S&P 500 total return across the different full cycles over more than a century.

My six models not only outperform the total compound returns of the S&P 500 in all cycle sets, they also offer lower risks than the S&P 500 measured by maximum drawdown. Table 2 compares maximum drawdowns of the nine strategies in different market cycles.

A dynamic active-passive investment approach

Which investment approach is better, passive or active? This ongoing debate misses the point. There is a time to be passive and a time to be active. Passive investors underperform active managers in bear markets, and active investors are no match for buy-and-holders in bull markets.

Figure 4 reveals that Mr. Market has a dual personality. When he is content, he spreads his random gains over a wide green area. When he is mad, he directs his wrath at a narrow pink zone. Therefore, it is feasible to beat Mr. Market at his own game logically – be a passive investor in the green areas to gather the widely dispersed gains, but be an active risk manager in the pink areas to trim market losses.

Here is how investors can do that in practice. Do regular checkups on Mr. Market's health. If we detect a mood shift from good to bad, we reduce market exposure (actively preserve capital in the pink zones). Otherwise, we stay in the market (passively accumulate wealth in the green areas).

Market health checkups are not market forecasts. Doctors do not forecast our medical conditions at annual exams. They conduct routine diagnoses and look for symptoms. If some tests come back positive, the doctors actively treat those illnesses. Otherwise, patients passively count their blessings until the next checkup.

Similarly, in regular market health checkups, we do not forecast the market outlook; we conduct diagnoses and look for warning signs. For instance, my five models were designed to monitor subtle shifts in trend (Holy Grail), the economy (Super Macro), valuation (TR-Osc), major market cycles (Primary-ID) and minor price movements (Secondary-ID). When a medical test comes back positive, we do not panic but seek second or third opinions. Likewise, investors should not assess market health based on a single indicator, but use the weight of evidence from multiple orthogonal models.

Using this dynamic active-passive approach and developing an integrated market monitoring system, investors can achieve the dual objective of capital preservation in bad times and wealth accumulation in good times.

Concluding Remarks

The key findings from all four random-walk series are summarized below:

1. Modern finance assumes that all asset prices follow a random walk. The academics define reward as the mean return at the peak of a bell curve. Data taken from a variety of asset classes (Part 1 and Part 2) with return horizons from one day to 10 years are far too erratic to fit the random walk statistics.

2. The histograms of a variety of asset classes not only reject the bell curve, they do not fit any well-known analytical probability distribution. The types of randomness observed are akin to Frank Knight's "radical uncertainties", Donald Rumsfeld's "unknown unknowns" or Nassim Nicholas Taleb's "Black Swans".

3. In a multimodal histogram with no central tendency, mean and variance are ill-defined. The mean-variance paradigm is unfit to depict real-world prices.

4. Modern finance misreads risk as volatility (Part 3). Volatility reflects diversity in market views when the act of buying or selling does not affect the price. Volatility facilitates trades, lubricates liquidity and alleviates financial market risks. In contrast, a market with a single dominant view creates a buyer-seller imbalance. Risks come from uni-directional price movements that freeze liquidity and exacerbate bubbles or panics.

5. Investment risk comes in many forms – market risk, geopolitical events, inflation, currency, interest rate, recession, etc. Regardless of the source, all risks lead to the same outcome – an unacceptable loss of income or capital, or both.

6. High uncertainties and radically random gain distributions are not risks; they represent abundant opportunities and widely scattered investment rewards.

7. I define investment reward as the cumulative probability-weighted gain; and investment risk as the cumulative probability-weighted loss. My new framework accurately captures all observed data and is applicable to probability distributions of any shape and form. More importantly, it has an intuitive appeal to investors.

8. A random walk is a theory. A theory is supposed to describe and explain empirical observations and to provide analytical formulas that predict the future. However, theories that hypothesize causation but disregard any aberration that does not fit their paradigm are theoretical landmines for uninformed followers.

9. My gain-loss framework is not a theory, but a phenomenological model. It truthfully measures observations with statistical tools but offers no causal explanations or analytical formulas. As stated in Part 2, investors' adaptive behavioral dynamics render all analytical models in mathematical finance imprecise at best. An empirical model that objectively captures data with no theoretical bias is more practical for investors.

10. My new framework reveals that Mr. Market has a dual personality. He keeps his losses in confined, time-insensitive zones but lets his gains run wild and loose. We can exploit this asymmetry to beat Mr. Market at his own game.

11. The active-passive investment approach capitalizes on this gain-loss asymmetry and tilts the betting odds in our favor. We actively mitigate losses in the pink zones via regular market health checkups. Otherwise, we stay as passive investors and pick up the radically random profits Mr. Market leaves behind.

12. How can we detect Mr. Market's mood changes? My six rules-based market monitors demonstrate that early warning detection is possible.

13. Warren Buffett consistently beats the market. He could be a practitioner of the dynamic active-passive approach because his favorite holding period is "forever" (a passive investor) subject to his first and second rules of "don't lose money" (an active manager).

Theodore Wong graduated from MIT with a BSEE and MSEE degree and earned an MBA degree from Temple University. He served as general manager in several Fortune-500 companies that produced infrared sensors for satellite and military applications. After selling the hi-tech company that he started with a private equity firm, he launched TTSW Advisory, a consulting firm offering clients investment research services. For almost four decades, Ted has developed a true passion for studying the financial markets. He applies engineering design principles and statistical tools to achieve absolute investment returns by actively managing risk in both up and down markets. He can be reached at ted@ttswadvisory.com.

Appendix A: The Mean-Variance Paradigm

Modern finance defines reward as the mean return (also known as the expected value). The mean return is the cumulative probability-weighted return:

Mean = ∫ r · Prob(r) dr    (A1)

where r is the return (a continuous random variable) and Prob(r) is the Gaussian probability density function (PDF) of r. The integration limits are −∞ to +∞, and the cumulative sum of Prob(r) is normalized to 100%.

Modern finance defines risk as volatility (also known as the standard deviation). It is the square root of the cumulative probability-weighted squared deviation of each r from the Mean:

Risk = √[ ∫ (r − Mean)² · Prob(r) dr ]    (A2)

where Mean is given by Eqn (A1). The integration limits are −∞ to +∞.

The reward-to-risk ratio is therefore the mean divided by the standard deviation. If r in Eqn (A1) and Eqn (A2) is replaced by r minus the risk-free interest rate (a quantity also known as the equity risk premium), the reward-to-risk ratio becomes the Sharpe Ratio.

Appendix B: The Gain-Loss Framework

Investors view reward and risk from a gain-loss perspective. The best way to capture this intuitive view is with a pair of complementary formulas called "Expected Gain" and "Expected Loss". Investment reward is the expected gain – the sum of all probability-weighted positive returns:

Expected Gain = Σ r* · Prob*(r*), summed over all r* > 0    (B1)

where r* is the return (a discrete random variable) and Prob*(r*) is the observed probability of return r*. The cumulative sum of Prob*(r*) is normalized to 100%. The summation runs from zero to +∞, so only positive returns (gains) are counted. In Eqn (A1), r and Prob(r) are Gaussian function variables; in Eqn (B1), r* and Prob*(r*) are measured data.

Correspondingly, investment risk is defined as the expected loss – the sum of all probability-weighted negative returns:

Expected Loss = Σ r* · Prob*(r*), summed over all r* < 0    (B2)

The summation runs from −∞ to zero, so only negative returns (losses) are counted.

The reward-to-risk ratio is the expected gain divided by the expected loss, both of which are in percent.


Modeling Cyclical Markets – Part 3

Originally Published November 28, 2016 in Advisor Perspectives

In Part 1, I introduced my Primary-ID model that identifies major price movements in the stock market. In Part 2, I presented Secondary-ID, a complementary model that tracks minor price movements. In this article, I combine these two rules- and evidence-based models into a composite called Cycle-ID and discuss the virtue of a single model.

I examine the efficacy of Cycle-ID from three separate but related perspectives. The first area is the utility of the composite. What are the benefits of moving from a binary scale (bullish or bearish) of Primary-ID and Secondary-ID to a ternary scale (bullish, neutral or bearish) of Cycle-ID? The second topic is synergy. Can the composite perform better than the sum of the parts? Finally, how can we use Cycle-ID to custom-design strategies similar to those of risk-parity for the purpose of matching market return and risk? Do these more complex investment strategies add value?

Cycle-ID – a composite model for cyclical markets

All investors need to achieve a dual objective. The first is capital preservation by minimizing losses in market downturns. The second is wealth accumulation by maximizing market exposure during market upturns. Not losing money should always be the main investment focus but making money is why we invest in the first place. Investors often achieve one objective at the expense of the other. For instance, many so-called secular bears avoided the 2000 dot-com crash and/or the 2008 sub-prime meltdown, but missed the 2003-2007 and the 2009-2016 bull markets. My cyclical-market models are aimed at preserving capital during stormy seasons as well as accumulating wealth in equity-friendly climates.

The signal scores of both Primary-ID and Secondary-ID are binary: +1 is bullish and -1 is bearish. Cycle-ID is the sum of the two models and therefore its scores are +2, 0 and -2. What do the three Cycle-ID scores mean? A Cycle-ID score of +2 indicates that both primary and secondary price movements are positive. In other words, the stock market is in a rally phase (a positive Secondary-ID) within a cyclical bull market (a positive Primary-ID). A Cycle-ID score of -2 indicates that both the primary and secondary price movements are negative. Put simply, the stock market is in a retracement phase (a negative Secondary-ID) within a cyclical bear market (a negative Primary-ID). When Cycle-ID is at zero, the stock market is either in a correction phase (a negative Secondary-ID) within a cyclical bull market (a positive Primary-ID), or in a rally phase (a positive Secondary-ID) within a cyclical bear market (a negative Primary-ID). Since the two cycle models are in conflict, one would naturally assume that the market is neutral. However, there is a counterintuitive interpretation of the zero Cycle-ID score that I will present later.
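As a minimal sketch (my own illustration; the function name is hypothetical), the composite score is simply the sum of the two binary model scores:

```python
def cycle_id_score(primary_id: int, secondary_id: int) -> int:
    """Combine the two binary model scores (+1 bullish, -1 bearish)
    into the Cycle-ID composite score of +2, 0 or -2."""
    assert primary_id in (1, -1) and secondary_id in (1, -1)
    return primary_id + secondary_id
```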

Figure 1A shows the monthly S&P 500 (in black) on a logarithmic scale along with the Cycle-ID score (in blue) from 1900 to September 2016. Figure 1B is Robert Shiller's cyclically adjusted price-to-earnings ratio (CAPE), from which the vector-based indicators of Primary-ID and Secondary-ID are derived. Figure 1C is similar to Figure 5A in Part 1, and Figure 1D is the same chart as Figure 4D in Part 2 but updated to the end of September. All green segments in Figures 1C and 1D represent +1 scores and all red segments, -1. The blue Cycle-ID score in Figure 1A is the sum of the Primary-ID and Secondary-ID scores at either +2, 0 or -2.

Figures 1C and 1D show that green segments overwhelm red segments in both duration and quantity. History shows that corrections in cyclical bull markets are more prevalent than rallies in cyclical bear markets. Therefore, a zero Cycle-ID score is not really neutral, but has a bullish bias. This subtle difference in interpreting the Cycle-ID score can have a huge impact on investment outcomes over an extended time horizon.

part-3-1a-1b-1c-1d

For ease of visual inspection, the contents of Figures 1A to 1D are zoomed in for the period from 2000 to September 2016 in Figures 2A to 2D, respectively. The Cycle-ID score hit -2 several times during the protracted 2000-2003 dot-com meltdown. Cycle-ID also reached -2 in July 2008, months before the collapse of Lehman Brothers and the global financial system. During the bulk of the two cyclical bull markets, from 2003 to 2007 and from 2009 to 2016, Cycle-ID was at +2 most of the time. During the last 16 years, while market experts engaged in fruitless debates over whether we were in a secular bull or bear market, Cycle-ID identified all major and minor price movements objectively and without ambiguity. The rules-based clarity and the evidence-based credibility of Cycle-ID enable investors and advisors to meet their dual objective – capital preservation in bad times and wealth accumulation in good times.

Let's examine the hypothetical performance statistics to see if Cycle-ID effectively met the dual objective in 116 years.

part-3-2a-2b-2c-2d

Cycle-ID performance stats

The ternary scale of Cycle-ID allows for many different combinations of investment strategies including the use of leverages and shorts. I intentionally selected a set of fairly aggressive strategies for the purpose of stress-testing Cycle-ID. This aggressive strategy set is used to demonstrate Cycle-ID's potential efficacy and is not an investment strategy recommendation.

The aggressive strategies are translated into execution rules as follows. When the Cycle-ID score is at +2 (the market is in a rally mode within a bull market), exit the previous position and buy an S&P 500 ETF with 2x leverage (e.g. SSO) at the close in the following month. When the score is at -2 (the market is in a retracement phase within a bear market), exit the previous position and buy an inverse, unleveraged S&P 500 ETF (e.g. SH) at the close in the following month. When Cycle-ID is at zero, close the previous position (either leveraged long or unleveraged short) and buy an unleveraged long S&P 500 ETF (e.g. SPY) at the close in the following month. This mildly bullish interpretation of the zero score is based on the evidence that, over more than 100 years, cyclical bull markets significantly outnumber their cyclical bear counterparts, as shown in Figure 1C. Bull market corrections also outnumber bear market rallies, as shown in Figures 1C and 1D. As a result, I treat a zero Cycle-ID rating as a bullish call rather than a neutral market stance in the stress test.
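A Python sketch of this aggressive rule set follows (my own rendering; the ETF tickers come from the text, the function name is mine). Trades are assumed to execute at the close of the month after the signal.

```python
def aggressive_position(cycle_id_score: int) -> str:
    """Map a month-end Cycle-ID score to the holding for the next month
    under the aggressive stress-test rules described above."""
    if cycle_id_score == 2:
        return "SSO"   # 2x leveraged long S&P 500
    if cycle_id_score == -2:
        return "SH"    # unleveraged inverse S&P 500
    if cycle_id_score == 0:
        return "SPY"   # unleveraged long; zero is treated as mildly bullish
    raise ValueError("Cycle-ID score must be +2, 0 or -2")
```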

Figure 3A is the same as Figure 1A with the aggressive set of strategies specified in blue on the upper left. Figure 3B shows that the cumulative return of Cycle-ID is 20.8% compounded over 116 years, far above the compound annual growth rate (CAGR) of Secondary-ID at 12.8% and Primary-ID at 10.4%. The equity curves of Primary-ID and Secondary-ID are those presented in Parts 1 and 2, respectively. They are updated to the end of September and shown here as references. The S&P 500 compounded total return, the performance benchmark for comparison, is 9.4%.

part-3-3a-and-3b

Figures 4A and 4B display contents similar to those shown in Figures 3A and 3B except the base year for the equity curves is changed from 1900 to 2000. Despite two drastically different timeframes, 116 years in Figure 3 and 16 years in Figure 4, the relative CAGR gaps between Cycle-ID and the S&P 500 total return are nearly identical, roughly 1100 bps in both periods. Risk characteristics are also similar in the two periods. Cycle-ID has a lower maximum drawdown, but shows a 1.5x higher volatility (annualized standard deviation) than that of the S&P 500. The consistency in both CAGR edge and risk gap suggests that Cycle-ID has had a relatively stable value-adding ability for over a century.

part-3-4a-and-4b

Simulating leveraged and inverse indices

I now digress to describe briefly the methodology for computing the leveraged and inverse proxy indices used in the stress test presented above. The traditional ways of setting up such positions are borrowing capital from a margin account and selling borrowed equities. Leveraged and inverse ETF products did not exist in 1900 but are readily available today. To simulate leveraged and inverse time series from an underlying index, Marco Avellaneda and Stanley Zhang developed the appropriate formulae. Their algorithms can be used to compute the leveraged and inverse S&P 500 proxies as follows: a 2x leveraged S&P 500 proxy is computed by multiplying the proxy's prior month-end value by one plus twice the month-to-month percentage change of the S&P 500; an inverse S&P 500 proxy is computed by multiplying the proxy's prior month-end value by one minus the month-to-month percentage change of the S&P 500. In my back tests, both time series are assumed to be rebalanced monthly.
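Below is a Python sketch of these periodically rebalanced proxies (my own rendering of the formulas described above, not Avellaneda and Zhang's code); fees, financing costs and tracking error are ignored.

```python
import numpy as np

def simulate_proxy(index_levels, leverage):
    """Simulate a leveraged or inverse proxy rebalanced each period.

    leverage = 2.0 gives a 2x leveraged proxy; leverage = -1.0 an inverse proxy.
    Each period the proxy's prior value is multiplied by
    (1 + leverage * period return of the underlying index).
    """
    index_levels = np.asarray(index_levels, dtype=float)
    period_returns = index_levels[1:] / index_levels[:-1] - 1.0
    proxy = np.empty_like(index_levels)
    proxy[0] = index_levels[0]
    proxy[1:] = proxy[0] * np.cumprod(1.0 + leverage * period_returns)
    return proxy

# Example with made-up monthly S&P 500 closes
two_x_proxy = simulate_proxy([2000, 2040, 1980, 2100], 2.0)
inverse_proxy = simulate_proxy([2000, 2040, 1980, 2100], -1.0)
```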

To check the accuracy of the Avellaneda and Zhang algorithms, I simulated two ETF proxies and compared them to the closing prices of two widely held ETFs: a 2x leveraged S&P 500 ETF (ticker: SSO) and an inverse S&P 500 ETF (ticker: SH). All time series are rebalanced daily. The results are shown in Figures 5A and 5B. The tracking errors averaged over 10 years are 0.2% and 0.1%, respectively. These errors may be small, but they could diverge over a longer time period. On the other hand, the two divergences have run in opposite directions, so the errors tend to cancel each other. Nevertheless, the algorithms appear adequate for testing Cycle-ID for illustration purposes.

part-3-5a-and-5b

Returning to Cycle-ID, my intent is not to boast about the spectacular Cycle-ID CAGR of 20.8% shown in Figure 3B or to recommend a particular aggressive investment strategy. In fact, the high CAGR must be viewed with caution because fund costs/fees and tracking error could lower the hypothetical return. In addition, Figure 3B also shows that the higher hypothetical CAGR comes with higher risks. Cycle-ID has a maximum drawdown almost as deep as that of the S&P 500 total return index, and its annualized standard deviation (SD) is 1.6x higher than that of the market benchmark. Nevertheless, this simple exercise does underscore the alpha-generating potential of Cycle-ID.

Return and risk tradeoffs

Besides the aggressive set of strategies of 2x long and 1x short, I also tested various combinations of leveraged and short positions. The best way to visualize absolute and risk-adjusted returns among different strategies is to plot CAGR against two risk measures – maximum drawdown (Figure 6A) and volatility (Figure 6B). In Figures 6A and 6B, the green curves represent long-only strategies (no shorts) with leverage levels ranging from unleveraged (1.0 L) to 2x leveraged (2.0 L). The blue curves show the effects of adding a short strategy (1x short) into the mix while varying the degree of long leverage. The preferred corner is at the top left, showing the highest return with the lowest risk. The red dots represent the S&P 500, which is located far away from the preferred corner.

part-3-6a-and-6b

When a short strategy is added (the blue lines), the rules are as follows. When the Cycle-ID score is zero, exit the previous position and buy the S&P 500 (e.g. SPY) at next month's close. When Cycle-ID is at +2, exit the previous position and buy the S&P 500 at various leverage levels (from 1.0 L to 2.0 L) at next month's close. When Cycle-ID is at -2, exit the previous position and buy an inverse S&P 500 (e.g. SH) at next month's close.

When no short is used (the green lines), the rules are as follows. When the Cycle-ID score is zero, buy the unleveraged S&P 500 (e.g. SPY) at next month's close. When Cycle-ID is at +2, exit the previous position and buy the S&P 500 at various leverage levels (from 1.0 L to 2.0 L) at next month's close. When the Cycle-ID score is -2, exit the previous position and buy the 10-year U.S. Treasury bond (e.g. TLT) at next month's close.

The return while holding long positions is the total return with dividends reinvested. The return while holding the bond is the sum of the bond coupon and the bond price changes caused by interest rate movements.

Several observations from Figures 6A and 6B are noteworthy. These characteristics are probably germane to leveraged long and short strategies in general and not specific to Cycle-ID.

  • First, when no leverage and no shorts are used (the bottom-left green dots in Figures 6A and 6B), the performance of Cycle-ID is between that of Primary-ID and Secondary-ID. Hence, no synergy is expected when the performances of the two models are averaged.
  • Second, the blue curves are to the right of the green curves indicating that adding short strategies increases risk more than return. It's inherently more challenging to profit from short positions because down markets are brief and volatile.
  • Third, the curves in Figure 6A are convex (higher marginal return with each added unit of drawdown) but the curves in Figure 6B are concave (lower marginal return with each added unit of volatility). The curvature disparity reflects the basic difference between these two types of risk. Volatility measures the uncertainty in the outcome of a bet. Maximum drawdown depicts the bloodiest outcome of a wrong bet.
  • With Cycle-ID, one can tailor strategies to match either market risk or market gain. For instance, if one can endure an S&P 500 drawdown of -83%, one can supercharge CAGR from 9.7% to over 21% by using the 2L/1S strategy shown in Figure 6A. If one can tolerate a 17% market volatility, one can boost CAGR to over 14% by using the 2L/0S strategy in Figure 6B. Conversely, if one just wants to earn a market return of 9.7%, then by extrapolating the green lines in Figures 6A and 6B to intercept a horizontal line at 9.7%, one can reduce the drawdown from -83% to below -30% or calm the volatility from 17% to less than 12%.
  • A widely known approach for engineering a portfolio with either market-matching return or market-matching volatility is risk parity. It budgets allocation weights by the inverse variances of all the assets in a portfolio. Cycle-ID achieves the same mission using a single index – the S&P 500. No risk-budgeting algorithm is needed. Looking for a robust risk management tool in a chaotic, nonlinear and dynamic investment world, I would pick simplicity over complexity every time.

Concluding remarks

My cyclical-market models are relevant to both Modern Portfolio Theory (MPT) and Efficient Market Hypothesis (EMH) – the two pillars in the temple of modern finance.

Harry Markowitz introduced MPT in 1952 – the use of mutual cancellations in the uncorrelated variance-covariance matrices across asset classes to minimize portfolio risk. Jack Treynor was the first to note, in 1962, that the risk premium (the spread between risky and risk-free returns) of a stock is proportional to the covariance between that stock and the overall market. William Sharpe in 1964 simplified Markowitz's complex matrices into a single-index model – the Capital Asset Pricing Model (CAPM), which uses beta to represent stock or portfolio risk (price fluctuations relative to those of the market). The type of risk both MPT and CAPM focus on is volatility – uncertainty in the potential (expected) returns of one's bets. Neither MPT nor CAPM tackles the types of risk investors dread the most – temporary equity drawdowns and permanent capital losses from making the wrong bets. Volatility and beta are the types of risk financial theorists deliberate about in scholastic faculty lounges. Drawdowns and permanent losses are the types of risk that drain investors' wealth and prey on their emotions.

Furthermore, variance-covariance matrices and betas calculated from historical data can lose their anti-correlation magic or regression-line linearity when the next crisis hits. In 2008, for instance, the effectiveness of risk reduction by either MPT's diversification or CAPM's beta diminished when capital protection was needed the most. Most importantly, even when MPT and CAPM work, they can only diversify away specific risk. Neither model offers a solution for managing systematic or systemic risk. Company-specific risk is minuscule when compared to the risk from the overall stock market or from the collapse of the global financial system. Hardly any diversified portfolio could shelter one's wealth during the cyclical bear markets of the 1929 and 2008 financial meltdowns. MPT and CAPM are ill-equipped to mitigate these titanic financial shockwaves with tsunami-scale impacts that affect all asset classes around the globe.

Cycle-ID is a positive alternative to the traditional risk-hedging concepts. First, Cycle-ID eliminates company-specific risk by investing only in a broad market index – the S&P 500. There is no need to optimize an "efficient portfolio" of asset classes and hope that their anti-correlation attributes will remain unchanged going forward. More importantly, Cycle-ID reduces both systematic risk (e.g., interest rate, inflation, currency or recession) and systemic risk (global financial systems, interlinked liquidity freezes, or geopolitical instability) by exposing one's capital only in fertile seasons and with the appropriate market exposure levels. The market exposure level is objectively matched with the perennially changing market environment. This is accomplished by closely tracking the first and second derivatives of the Shiller CAPE – the aggregate market appraisal by all market participants.

Both the traditional risk hedging approaches (i.e., MPT and CAPM) and Cycle-ID employ the long-standing wisdom of diversification to manage risks. The difference is that the traditional approaches diversify in assets with uncorrelated covariances to hedge against company-specific risk. My model diversifies in market exposure in harmony with the investment climate to achieve a dual investment goal: (1) to minimize systematic and systemic risks in bad times; and (2) to increase market exposure in good times. The Chinese character for risk has an insightful duality – one pictogram for danger and the other for opportunity. While risk can harm us when we are exposed to it, risk is also a driver for higher returns if we exploit it to our advantage.

Let's turn to the Efficient Market Hypothesis, which was postulated by Eugene Fama in 1965. According to Richard Thaler, EMH has two separate but related theses. The first thesis is that the stock market is always right; the second is that the market is difficult to beat in the long run. Behavioral economists argue that the market is not always right because its pricing mechanism does not always function perfectly. Humans are not always rational beings, and financial bubbles are proof of market mispricing. Both traditional and behavioral economists, however, agree on the second thesis – that the market is hard to beat in the long run. The market is hard to beat because it is quite efficient (discounting all known information that could affect market prices) most of the time. But the market is not totally efficient all the time. Market prices often diverge from the market's intrinsic values – an observation first articulated in 1938 by John Burr Williams, before the inception of behavioral economics. Hence, it is difficult but not impossible to beat the S&P 500 in the long run.

Primary-ID, Secondary-ID, and Cycle-ID along with a handful of legendary investors and some previously published models of mine (Holy Grail, Super Macro, and TR-Osc) demonstrate that it's possible to outperform the S&P 500 total return. They do so not by predicting the future. Price prediction is futile because both the amplitude of impact and the frequency of occurrence of the various price drivers are totally random. So what are the secret ingredients for a market-beating model?

To outperform the S&P 500, following my five criteria alone is not enough. The five criteria (simplicity, sound rationale, rule-based clarity, sufficient sample size, and relevant data) only increase the odds of model robustness; they do not offer an outperformance edge over the S&P 500. To beat the market, a model must also exhibit a quality of originality with a counterintuitive flavor. If a model is too intuitively obvious, many will have already discovered it and its edge will disappear. More importantly, to beat the market, a model must be relatively unknown so that it is not widely followed. If you can find the model in a Bloomberg terminal, it is already part of the market. The market cannot outperform itself.

My cyclical-market models are simple (a single metric – the CAPE), focused (one index – the S&P 500), logical (vectors over scalars) and, above all, transparent (all rules are disclosed). Should you be worried that after publishing my models, their future efficacy will diminish? I would argue that such concern is unwarranted. First, only a fraction of the total investor universe will read my articles. Even if some have read them, only a fraction will believe in the models. Even if those who have read the articles are swayed by the rationale of the models, only a tiny fraction could internalize their conviction and have the discipline to follow through over time. These probabilities are multiplicative and protect the models from being homogenized by the masses.




Modeling Cyclical Markets – Part 2

Originally Published November 7, 2016 in Advisor Perspectives

In Part 1 of this series, I presented Primary-ID, a rules- and evidence-based model that identifies major price movements, which are traditionally called cyclical bull and bear markets. This article debuts Secondary-ID, a complementary model that objectively defines minor price movements, which are traditionally called rallies and corrections within bull and bear markets.

The traditional definitions of market cycles

Market analysts define market cycles by the magnitude of price movements. Sequential up and down price movements in excess of 20% are called primary cycles. Price advances of more than 20% are called bull markets; declines of more than 20% are called bear markets. Sequential up and down price movements within 20% are called secondary cycles. Price retracements milder than -20% are called corrections; advances shy of 20% are called rallies. Talking heads at CNBC frequently use this financial vernacular.

But has anyone bothered to ask how factual these fancy terms and lofty labels really are?

Experts also measure market cycles by their durations. They reported that since 1929, there have been 25 bull markets with gains over 20% with a median duration of 13.1 months, and 25 bear markets with losses worse than 20% with a median duration of 8.3 months.

But is "median duration" the proper statistical yardstick to measure stock market cycle lengths?

Fact-checking the 20% thresholds

Before presenting Secondary-ID, let's pause to fact-check these two market cycle yardsticks. The ±20% primary-cycle rules of thumb have little practical use in guiding investment decisions. If we wait for a +20% confirmation before entering the market, we would have missed the bulk of the upside. Conversely, if we wait for an official kick-off of a new cyclical bear market, our portfolios would have shrunk by -20%. The ±20% thresholds may be of interest to stock market historians, but they offer no real benefit to investors.

Besides being impractical, the ±20% demarcations are also baseless. This falsehood can be demonstrated by examining the historical evidence. Figures 1A and 1B show the daily closing prices of the S&P 500 from 1928 to 2015. The green bars in Figure 1A are price advances from an interim low to an interim high of over +5%. The red bars in Figure 1B are price retracements from an interim high to an interim low worse than -5%. Price movements less than ±5% are ignored as noise. There were a total of 198 advances and 166 retracements in 88 years. From the figures, it's not obvious why ±20% were picked as the thresholds for bull and bear markets. The distributions of green and red bars show no unique feature near the ±20% markers.

part-2-1a-and-1b

To determine how indistinct the ±20% markers are in the distributions, I plot the same daily data as histograms in Figures 2A and 2B. The probabilities of occurrence are displayed on the vertical axes for each price change (in percent) on the horizontal axes. For example, Figure 2A shows that a +20% rally has a 3% chance of occurring, and Figure 2B shows that a -20% retreat has nearly a 3.5% chance. There is no discontinuity at +20% in Figure 2A separating bull markets from rallies, nor at -20% in Figure 2B differentiating bear markets from corrections.

part-2-2a-and-2b

There are, however, two distinct distribution patterns in both up and down markets. Figure 2A shows an exponential drop in the probability distribution as rally sizes increase from +10% to +40%. Above +45%, the histogram is flat. Figure 2B shows a similar exponential decline in the probability distribution as retracements increase from -5% to -40%. Beyond -45%, the histogram is again flat. The reasons behind the exponential declines and the two-tier histogram pattern are beyond the scope of this paper. It is clear from Figures 2A and 2B, however, that there is no distinct inflection point at ±20%. In fact, it would be more statistically defensible to use ±45% as the thresholds for bull and bear markets, but such large thresholds for primary cycles would be worthless to investors.

Figures 2A and 2B also expose one other fallacy. It is often believed that price support and resistance levels are set by the Fibonacci ratios. One does not have to read scientific dissertations with advanced mathematical proofs to dispel the Fibonacci myth. A quick glance at Figure 2A or 2B would turn any Fibonacci faithful into a skeptic. If price tops and bottoms were set by the Fibonacci ratios, we would find such footprints at ±23.6%, ±38.2%, ±50.0%, ±61.8%, or ±100%. No Fibonacci pivot points can be found in 88 years of daily S&P 500 data.

Fact-checking the market duration yardstick

I now turn to the second cyclical-market yardstick – cycle duration. It has been reported that since 1929, the median duration of bull markets is 13.1 months and the median duration of bear markets is 8.1 months. The same report also notes that the spread in bull market durations spans from 0.8 to 149.8 months, and the dispersion among bear market durations extends from 1.9 to 21 months. When the data are so widely scattered, the notion of a median is meaningless. Let me explain why with the following charts.

Figures 3A and 3B show duration histograms for all rallies and retreats, respectively. The vertical axes are the probabilities of occurrence for each duration shown on the horizontal axes. The notions of median and average are only useful when the distributions have a central tendency. When the frequency distributions are skewed to the extent seen in Figure 3A, or are both skewed and dispersed as in Figure 3B, the median durations cited in those reports are meaningless.

part-2-3a-and-3b

Figures 3A and 3B also expose one other myth. We often hear market gurus warning us that the bull (or bear) market is about to end because it's getting old. Chair Yellen was right when she said that economic expansions don't die of old age. Cyclical markets don't follow an actuarial table. They can live on indefinitely until they get hit by external shocks. Positive shocks (pleasant surprises) end bear markets and negative shocks (abrupt panics) kill bull markets. These black swans follow Taleb distributions in which average and median are not mathematically defined. In my concluding remarks I further expand on the origin of cyclical markets.

Many Wall Street beliefs and practices are just glorified folklore decorated with Greek symbols and pseudo-scientific notation to puff up their legitimacy. Many widely followed technical and market-timing indicators are nothing but glamorized traditions and legends. Their theoretical underpinnings must be carefully examined and their claims empirically verified. It is unwise to put one's hard-earned money at risk by blindly following any strategy without fact-checking it first, no matter how well accepted and widely followed it may be.

Envisioning cyclical markets through a calculus lens

Now that I have shown how absurd these two yardsticks are in gauging market cycles, I return to the subject at hand – modeling cyclical markets. The methodology is as follows. First, start with a metric that is fundamentally sound; the Super Bowl indicator, for example, is an indicator with no fundamental basis. Next, transform the metric into a quasi range-bound indicator. Then devise a set of rational rules using the indicator to formulate a hypothesis. High correlation without causation is not enough; causation must be grounded in logical principles such as economics, behavioral finance, fractal geometry, chaos theory, game theory, number theory, etc. Finally, a hypothesis must be empirically validated with adequate samples to qualify as a model.

Let me illustrate my modeling approach with Primary-ID. The Shiller CAPE (cyclically adjusted price-earnings ratio) is a fundamentally sound metric. But when the CAPE is used in its original scalar form, it is prone to calibration error because it is not range-bound. To transform the scalar CAPE into a range-bound indicator, I compute the year-over-year rate-of-change of the CAPE (the YoY-ROC % CAPE). A set of logically sound buy and sell rules is devised to turn the indicator into actionable signals. After the hypothesis is validated empirically over a time period spanning multiple bull and bear cycles, Primary-ID is finally qualified as a model.
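A minimal sketch of this transformation in Python, assuming a monthly pandas Series of Shiller CAPE values called `cape` (the variable and function names are mine, not the author's):

```python
import pandas as pd

def yoy_roc_cape(cape: pd.Series) -> pd.Series:
    """Year-over-year rate of change of the Shiller CAPE, in percent.
    This is the first-order (velocity-like) indicator used by Primary-ID."""
    return cape.pct_change(periods=12) * 100.0   # 12 months = one year on monthly data
```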

This modeling approach can be elucidated with a calculus analogy. The scalar Shiller CAPE is analogous to "distance." The vector indicator YoY-ROC % CAPE is analogous to "velocity." When "velocity" is measured in the infinitesimal limit, it's equivalent to the "first derivative" in calculus. In other words, Primary-ID is similar to taking the first derivative of the CAPE. There are, however, some differences between the YoY-ROC % CAPE indicator and calculus. First, a derivative is an instantaneous rate-of-change of a continuous function. The YoY-ROC % CAPE indicator is not instantaneous, but with a finite time interval of one year. Also, the YoY-ROC % CAPE indicator is not a continuous function, but is based on a discrete monthly time series – the CAPE. Finally, a common inflection point of a derivative is the zero crossing, but the signal crossing of Primary-ID is at -13%.

Secondary-ID – a model for minor market movements

I now present a model called Secondary-ID. If Primary-ID is akin to "velocity" or the first derivative of the CAPE and is designed to detect major price movements in the stock market, then Secondary-ID is analogous to "acceleration/deceleration" or the second derivative of the CAPE and is designed to sense minor price movements. Secondary-ID is a second-order vector because it derives its signals from the month-over-month rate-of-change (MoM-ROC %) of the year-over-year rate-of-change (YoY-ROC %) in the Shiller CAPE metric.

Figures 4A to 4D show the S&P 500, the Shiller CAPE, Primary-ID signals and Secondary-ID signals, respectively. The indicator of Primary-ID (Figure 4C) is identical to that of Secondary-ID (Figure 4D), namely, the YoY-ROC % CAPE. But their signals differ. The signals in Figures 4C and 4D are color-coded – bullish signals are green and bearish signals are red. The details of the buy and sell rules for Primary-ID were described in Part 1. The bullish and bearish rules for Secondary-ID are presented below.

part-2-4a-4b-4c-4d

Bullish signals are triggered when the YoY-ROC % CAPE indicator is rising or when it is above 0%. For bearish signals, the indicator must be both falling and below 0%. "Rising" is defined as a positive month-over-month rate-of-change (MoM-ROC %) in the YoY-ROC % CAPE indicator; "falling", a negative MoM-ROC %. Because it is a second-order vector, Secondary-ID issues more signals than Primary-ID. It is noteworthy that the buy and sell signals of Secondary-ID often lead those of Primary-ID. The ability to detect acceleration and deceleration makes Secondary-ID more sensitive to changes than Primary-ID, which detects only velocity.
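A minimal Python sketch of these signal rules, assuming the YoY-ROC % CAPE series from the earlier sketch (the names and the handling of flat months are my own simplifications):

```python
import numpy as np
import pandas as pd

def secondary_id_signal(yoy_roc: pd.Series) -> pd.Series:
    """Return +1 (bullish) or -1 (bearish) for each month.
    Bearish only when the indicator is both falling month-over-month and below 0%;
    otherwise bullish (rising or above 0%)."""
    mom = yoy_roc.diff(1)                       # month-over-month change (MoM-ROC)
    bearish = (mom < 0) & (yoy_roc < 0)
    return pd.Series(np.where(bearish, -1, 1), index=yoy_roc.index)
```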

For ease of visual examination, Figure 5A shows the S&P 500 color-coded with Secondary-ID signals. Figure 5B is the same as Figure 4D, describing how those signals are triggered by the buy and sell rules. Since 1880, Secondary-ID has called 26 of the 28 recessions (a 93% success rate). The two misses were in 1926 and 1945, both of which were mild recessions. Secondary-ID turned bearish in 1917, 1941, 1962, 1966 and 1977, but no recessions followed. However, these bearish calls were followed by major and/or minor price retracements. If Mr. Market makes a wrong recession call and the S&P 500 plummets, it is pointless to argue with him while watching our portfolios tank. Secondary-ID is designed to detect accelerations and decelerations in market appraisal by the masses. Changes in appraisal often precede changes in market prices, regardless of whether those appraisals lead to actual economic expansions or recessions.

part-2-5a-and-5b

Secondary-ID not only meets my five criteria for robust model design (simplicity, sound rationale, rule-based clarity, sufficient sample size, and relevant data), it has one more exceptional merit – no overfitting. In the development of Secondary-ID, there is no in-sample training involved and no optimization done on any adjustable parameter. Secondary-ID has only two possible parameters to adjust. The first one is the time-interval for the second-order rising and falling vector. Instead of looking for an optimum time interval, I choose the smallest time increment in a monthly data series – one month. One month in a monthly time series is the closest parallel to the infinitesimal limit on a continuous function. The second possible adjustable parameter is the signal crossing. I select zero crossing as the signal trigger because zero is the natural center of an oscillator. The values selected for these two parameters are the most logical choices and therefore no optimization is warranted. Because no parameters are adjusted, there's no need for in-sample training. Hence Secondary-ID is not liable to overfitting.

Performance comparison: Secondary-ID, Primary-ID and the S&P 500

The buy and sell rules of Secondary-ID presented above are translated into executable trading instructions as follows. When the YoY-ROC CAPE is rising (i.e. a positive MoM-ROC %), buy the S&P 500 (e.g. SPY) at the close in the following month. When the YoY-ROC CAPE is below 0% and falling (i.e. a negative MoM-ROC %), sell the S&P 500 at the close in the following month and use the proceeds to buy the U.S. Treasury bond (e.g. TLT). The return while holding the S&P 500 is the total return with dividends reinvested. The return while holding the bond is the sum of the bond coupon and the bond price changes caused by interest rate movements.
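Assuming monthly total-return series for the stock index and the bond (`spx_tr`, `bond_tr`) and the +1/-1 signal from the previous sketch, the switching rule could be back-tested roughly as follows (a simplified sketch that ignores costs and taxes; names are mine):

```python
import pandas as pd

def switching_equity_curve(signal: pd.Series, spx_tr: pd.Series, bond_tr: pd.Series) -> pd.Series:
    """Hold the S&P 500 when last month's signal was bullish, the Treasury bond when bearish.

    signal  : +1 / -1 per month (e.g. Secondary-ID)
    spx_tr  : monthly total returns of the S&P 500 (dividends reinvested)
    bond_tr : monthly total returns of the Treasury bond (coupon plus price change)
    The signal is lagged one month to mimic trading at the following month's close.
    """
    position = signal.shift(1).fillna(1)                   # act one month after the signal
    monthly_returns = spx_tr.where(position > 0, bond_tr)  # stocks if bullish, else bonds
    return (1.0 + monthly_returns).cumprod()               # hypothetical growth of $1
```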

Figure 6A shows the S&P 500 total return index and the total return of the U.S. Treasury bond. Over 116 years, the return on stocks is nearly twice that of bonds. But in the last three decades, bond prices have risen dramatically thanks to a steady decline in inflation since 1980 and the protracted easy monetary policies since 1995. Figure 6B shows the cumulative total returns of Primary-ID, Secondary-ID and the S&P 500 on a $1 investment made in January 1900. The S&P 500 has a total return of 9.7% with a maximum drawdown of -83%. By comparison, Primary-ID has a hypothetical compound annual growth rate (CAGR) of 10.4% with a maximum drawdown of -50% and trades once every five years on average. The performance stats for Primary-ID are slightly different from those shown in Figure 5B in Part 1 because Figure 6B is updated from July to August 2016.

Secondary-ID delivers a hypothetical CAGR of 12.8% with a -36% maximum drawdown and trades once every two years on average. Note that Primary-ID and Secondary-ID are working in parallel to avoid most if not all bear markets. Secondary-ID offers an extra performance edge by minimizing the exposure to bull market corrections and by participating in selected bear market rallies.

part-2-6a-and-6b

I now apply the same buy and sell rules to the most recent 16 years to see how the model would have performed in a shorter and more recent sub-period. This is not an out-of-sample test, since there is no in-sample training; rather, it is a performance consistency check. Figure 7A shows the total return of the S&P 500 and the U.S. Treasury bond price index from 2000 to August 2016. The return on bonds in this period is higher than that of the S&P 500. Record easy monetary policies since 2003 and large-scale asset purchases by global central banks since 2010 pumped up bond prices, while two severe back-to-back recessions dragged down the stock market. Figure 7B shows the cumulative total returns of Primary-ID, Secondary-ID and the S&P 500 on a $1 investment made in January 2000.

part-2-7a-and-7b

Since 2000, the total return index of the S&P 500 has returned 4.3% compounded with a maximum drawdown of -51%. By comparison, Primary-ID has a CAGR of 7.7% with a maximum drawdown of -23% and trades once every five years on average. Again, the performance stats on Primary-ID shown in Figure 7B differ slightly from those shown in Figure 5B in Part 1 because Figure 7B is updated to August 2016. Secondary-ID delivers a hypothetical CAGR of 10.5% with a maximum drawdown of only -16% and trades once every 1.4 years on average. The edge of Secondary-ID over both Primary-ID and the S&P 500 total return index, in both return and risk, is remarkable. The consistency of these performance gaps across the full 116-year period and the most recent 16-year sub-period lends credence to Secondary-ID.
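The CAGR and maximum-drawdown figures quoted throughout can be reproduced from any monthly equity curve. The two helper functions below are a generic sketch of those calculations, assuming a pandas Series of cumulative portfolio values; they are not the author's code.

```python
import pandas as pd

def cagr(equity: pd.Series, periods_per_year: int = 12) -> float:
    """Compound annual growth rate implied by a cumulative equity curve."""
    years = (len(equity) - 1) / periods_per_year
    return (equity.iloc[-1] / equity.iloc[0]) ** (1.0 / years) - 1.0

def max_drawdown(equity: pd.Series) -> float:
    """Worst peak-to-trough decline, e.g. -0.51 for a -51% drawdown."""
    drawdown = equity / equity.cummax() - 1.0
    return float(drawdown.min())
```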

Theoretical support for both cyclical market models

The traditional concepts of "primary cycles" and "secondary cycles" rely on amplitude and periodicity yardsticks to track market cycles in the past and to predict market cycles in the future. Primary-ID and Secondary-ID do not deal with primary or secondary market cycles. Their focus is on cyclical markets – major and minor price movements. All market movements are driven by changes in investors' collective market appraisals. The Shiller CAPE is selected as the core metric because it is a fundamental valuation gauge – it appraises the inflation-adjusted S&P 500 price relative to its rolling 10-year average earnings. Although the scalar CAPE is prone to overshooting and to misreading valuations, the first- and second-order vectors of the CAPE are not. Primary-ID and Secondary-ID sense both major changes and minor shifts in investors' collective market appraisal that often precede market price action.

Like Primary-ID, Secondary-ID also finds support in behavioral economics. First, prospect theory shows that a -10% loss hurts investors roughly twice as much as the pleasure a +10% gain brings. Such reward-risk asymmetry is recognized by the asymmetrical buy and sell rules in both models. Second, both models use vector-based indicators, in line with the findings of Daniel Kahneman and Amos Tversky that investors are far more sensitive to relative changes in their wealth (vectors) than to its absolute level (scalars). Finally, the second-order vector in Secondary-ID parallels the second derivative of the concave and convex value function described by the two distinguished behavioral economists in 1979.

Concluding remarks – cyclical markets vs. market cycles

I developed rules- and evidence-based models to assess cyclical markets and not market cycles. The traditional notion of market cycles is defined with a prescribed set of pseudo-scientific attributes such as amplitude and periodicity that are neither substantiated by historical evidence nor grounded in statistics. Cyclical markets, on the other hand, are the outcomes of random external shocks imposing big tidal waves and small ripples on a steadily rising economic growth trend. Cyclical markets cannot be explained or predicted using the traditional cycle concepts because past cyclical patterns are the outcomes of non-Gaussian randomness. Let me illustrate with a simple but instructive narrative.

Cyclical markets can be visualized with a simple exercise. Draw an ascending line on graph paper with the y-axis on a logarithmic scale and the x-axis on a linear time scale. The slope of the line is the long-term compound growth rate of the U.S. economy. Next, disrupt this steadily rising trendline with sharp ruptures of various amplitudes at erratic time intervals. These abrupt ruptures represent man-made crises (e.g., recessions) or natural calamities (e.g., earthquakes). Amplify these shocks with overshoots in both directions to emulate the cascade-feedback loops driven by the herding mentality of human psychology. You now have a proxy for the S&P 500 historical price chart.
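This thought experiment can also be mimicked numerically. The sketch below is one possible illustration, not the author's specification: it compounds a constant log-growth trend, injects jump shocks at random times, and lets each shock decay to mimic overshoot. Every parameter value (trend, shock frequency, shock size, decay rate) is an arbitrary assumption chosen only to make the picture look plausible.

```python
import numpy as np

rng = np.random.default_rng(0)
months = 12 * 100                    # a century of monthly observations
trend = 0.06 / 12                    # assumed 6% annual log growth trend

log_price = np.zeros(months)
shock = 0.0
for t in range(1, months):
    if rng.random() < 0.01:                    # a rupture arrives at rare, erratic intervals
        shock += rng.normal(-0.15, 0.20)       # crisis-sized jump of random sign and size
    shock *= 0.97                              # the overshoot fades as panic or euphoria subsides
    log_price[t] = log_price[t - 1] + trend + 0.1 * shock + rng.normal(0, 0.03)

price = 100.0 * np.exp(log_price)    # plotted on a log scale: a rising trend with tidal waves and ripples
```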

This descriptive model of cyclical markets explains why conventional market-cycle yardsticks – the ±20% thresholds and median durations – will never work. Unpredictable shocks do not adhere to a prescribed set of amplitudes or durations, and non-Gaussian randomness cannot be captured by the formulae defining averages and medians. The conceptual framework of market cycles is flawed, and that is why it fails to explain cyclical markets.

Looking through the lens of Primary-ID and Secondary-ID, cyclical bull markets can last as long as the CAPE velocity is positive and/or accelerating, and cyclical bear markets can last as long as the CAPE velocity is negative and/or decelerating. Stock market movements are not constrained by ±20% thresholds or cycle life-expectancy statistics. Primary-ID detects the velocity of the collective market valuation assessment of all market participants, which drives bull and bear markets. Secondary-ID senses subtle accelerations and decelerations in that same collective assessment; these second-order waves manifest themselves as stock market rallies and corrections. Whether the market dips by less than -20% (what experts label a correction) or plunges by more than -20% (what they call a cyclical bear market), Primary-ID and Secondary-ID capture the price movements just the same.

Does synergy exist between Primary-ID and Secondary-ID? Would the sum of the two offer performance greater than that of its parts? A composite index of the two models enables the use of leverage and short strategies that pave the way for more advanced portfolio engineering and risk management tactics. Do these more complex strategies add value? For answers, please stay tuned for Part 3.

Theodore Wong graduated from MIT with a BSEE and MSEE degree and earned an MBA degree from Temple University. He served as general manager in several Fortune-500 companies that produced infrared sensors for satellite and military applications. After selling the hi-tech company that he started with a private equity firm, he launched TTSW Advisory, a consulting firm offering clients investment research services. For almost four decades, Ted has developed a true passion for studying the financial markets. He applies engineering design principles and statistical tools to achieve absolute investment returns by actively managing risk in both up and down markets. He can be reached at ted@ttswadvisory.com.

 

figure-3-super-macro-holy-grail-and-the-sp500-total-return

Super Macro – A Fundamental Timing Model

Originally Published April 10, 2012 in Advisor Perspectives

Buy-and-hold advocates cite two reasons why tactical investing should fail. It violates the efficient market hypothesis (EMH), they say, and it is nothing more than market-timing in disguise.

But they are wrong. Rather than endure losses in bear markets – as passive investors must – I have shown that a simple trend-following model dramatically improves results, most recently in an Advisor Perspectives article last month.  Now it’s time to extend my approach by showing how this methodology can be applied to fundamental indicators to further improve performance.

The EMH does not automatically endorse buy-and-hold, nor does it compel investors to endure losses in bear markets. Financial analysts forecast earnings and economists make recession calls routinely, yet academics ridicule market timers as fortunetellers, and market timers resort to labeling themselves as tactical investors to avoid the stigma. Why?

Perhaps what sparks resentment toward market timers is not their predictions, but how they make their predictions. Reading tea leaves is acceptable as long as the tea has a "fundamental analysis" label, but market timing is treated as voodoo because it offends the academic elite, whose devotion to the notion of random walk is almost religious.

I am not a market timer, because I can't foretell the future. But neither do I buy the random-walk theory, because my Holy Grail verifies the existence of trends. Timing is everything. When your religion commands you to hold stocks even when the market is behaving self-destructively, it's time to find a new faith.

Timing models that follow price trends are technical timing models; the "Holy Grail" is one example. Timing models that monitor the investment climate are fundamental timing models; my Super Macro model is a prime example of a fundamental timing model that works. Before presenting Super Macro, I will first disclose the details of my earnings-growth (EG) model. As one of the 18 components of Super Macro, the EG model illustrates my methodology in model design.

But first let’s look at the engineering science that makes these models possible.

Macroeconomics, an engineering perspective

table-1-super-macro-model

Engineers assess all systems by their inputs, outputs, feedbacks, and controls. From an engineering perspective, the economy is like an engine. It has inputs (the labor market and housing) and outputs (earnings and production). The engine analogy and the corresponding economic terms in parentheses are presented in Table 1. At equilibrium, the engine runs at a steady state, with balanced input and output. When aggregate demand exceeds aggregate supply, the engine speeds up to rebalance. This leads to economic expansions that drive cyclical bull markets. When output outpaces input, the engine slows down. This causes the economy to contract, leading to cyclical bear markets.

The economic engine has multiple feedback loops linking its output to its input. Feedback loops can amplify small input changes into massive output differentials. Financial leverage is a positive feedback loop for the economy, much as a turbocharger is for a car engine. Strong economic growth entices leverage expansion (credit demand), which in turn accelerates economic growth. This self-feeding frenzy can shift the engine into overdrive.

Deleveraging, on the other hand, is a negative feedback loop. It creates fear and panic that are manifested in a huge surge in risk premiums (credit spreads). The resulting lack of confidence among investors, consumers and businesses can choke an already sluggish economy into a complete stall.

In a free-market system, price is a natural negative feedback mechanism that brings input and output into equilibrium. When demand outpaces supply, price will rise (inflation) to curtail demand. When supply exceeds demand, price will fall (deflation).

The speed of an engine is controlled by the accelerator and the brakes. The central bank, attempting to fight inflation while maximizing employment, uses its monetary levers (interest rates) to control the supply of money and credit. Because of the complex feedback loops within the economic engine, the Fed often overshoots its targets. The unavoidable outcome has been business cycles, which are in turn the root causes of cyclical bull and bear markets.

A fundamental timing model

Models that monitor the economic engine are called fundamental timing models. One example is the EG model, which uses a four-year growth rate of S&P 500 earnings to generate buy and sell signals. (Four years was the average business cycle length in the last century.) The EG model meets my five criteria for a good working model.

  1. Simplicity: The EG model has only one input: S&P 500 earnings.
  2. Commonsense rationale: The EG model is based on a sound fundamental principle that earnings and earnings growth drive stock prices.
  3. Rule-based clarity: Its rules boil down to following trends when they are strong but being contrarian when growth rates are extremely negative.
  4. Sufficient sample size: There have been 29 business cycles since 1875.
  5. Relevant data: Earnings are relevant, as profits are the mother's milk of stocks.

figure-1-the-eg-model-1875-to-2012

The strategy is simple: buy the S&P 500 when the earnings-growth index is below -48% or when it is rising. The first buy rule is a contrarian play; the second is a trend follower. Sell signals must meet two conditions: the earnings-growth index must be falling, and it must be under 40%. The 40% threshold prevents selling the market prematurely while earnings growth remains strong.
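A minimal sketch of this buy/sell logic, assuming the four-year earnings-growth index is already computed as a pandas Series in percent, might look like the following (my illustration under those assumptions, not the author's code):

```python
import pandas as pd

def eg_signals(eg_index: pd.Series) -> pd.Series:
    """EG model position: 1 = long the S&P 500, 0 = out of the market."""
    rising = eg_index.diff() > 0                     # first month has no change and counts as not rising
    signal = pd.Series(index=eg_index.index, dtype=float)
    state = 1.0                                      # assumption: start invested
    for t in eg_index.index:
        if eg_index.loc[t] < -48 or rising.loc[t]:   # contrarian buy, or trend-following buy
            state = 1.0
        elif eg_index.loc[t] < 40:                   # falling (not rising) and under 40% -> sell
            state = 0.0
        # otherwise (falling but still above 40%) keep the prior position
        signal.loc[t] = state
    return signal
```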

Figure 1 shows the resulting bullish and bearish signals from 1875 to present.

Earnings growth is a key market driver, watched closely by both momentum players and value investors. The signals shown in Figure 1 demonstrate that the model avoided the majority of business-cycle-linked bear markets. The EG model, however, could not envision events that were not earnings-driven, such as the 1973–74 oil embargo and the 1987 program-trading crash.

Like the Holy Grail, my EG model outperforms buy-and-hold in both compound annual growth rate (CAGR) and risk (standard deviation and maximum drawdown). Since 1875, the CAGR of EG was 9.7% with an annualized standard deviation of 12.5% and a maximum drawdown of -42.6%. By comparison, the buy-and-hold strategy with dividend reinvestment delivered a CAGR of 9.0% with a standard deviation of 15.4% and a devastating maximum drawdown of -81.5%.

Since 2000, the EG model has issued only two sell signals. The first spanned January 30, 2001 to August 30, 2002 – during which time the dot-com crash obliterated one third of the S&P 500’s value. The second sell signal came on June 30, 2008, right before the subprime meltdown started, and it ended on March 31, 2009, three weeks after the market bottomed. Who says that market timing is futile? Both Holy Grail and EG worked not by predicting the future, but by steering investors away when the market trend and/or the fundamentals were hostile to investing.

Earnings growth is a yardstick that measures the health of the 500 U.S. corporations in the index. Stock prices, however, discount information beyond such microeconomic data. To gauge the well-being of the economy more broadly, I need a macroeconomic climate monitor.

But the economy is extremely complex. Meteorologists monitor the weather by measuring the temperature, pressure, and humidity. How do we monitor the economy?

My Super Macro model

Before investing, we should first find out how the economic engine is running. If one wants to know the operating conditions of an engine, one reads the gauges installed to track the engine's inputs, outputs, control valves, and feedback loops.

Table 1 lists the 18 gauges I watch to calibrate the economic engine, which I integrate into a monitoring system I call "Super Macro." The EG model is one of the sub-components of Super Macro. In this article, I have fully disclosed the design of the EG model. The details of the remaining models are proprietary, but I can assure you that they satisfy the five design criteria for a robust model.

Super Macro performance: January 1920 to March 2012

Figure 2 shows all Super Macro signals since 1920. The blue line is the Super Macro Index (SMI), which is the sum of all signals from the 18 gauges listed in Table 1. There are two orange "Signal Lines." Super Macro turns bullish when the blue line crosses above either one of the two signal lines and remains bullish until the blue line crosses below that signal line. Super Macro turns bearish when the blue line crosses below either signal line and remains bearish until the blue line crosses above that signal line. The color-coded S&P 500 curve depicts the timing of the bullish and bearish signals.
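Read literally, this dual-signal-line rule is a small state machine: a cross above either line turns the model bullish, a cross below either line turns it bearish, and otherwise the prior state carries forward. The sketch below is my interpretation of that rule; the two signal-line levels are assumed from Figure 2 (roughly 50% and -20%) rather than taken from published values, and the SMI series is an assumed input.

```python
import pandas as pd

def super_macro_state(smi: pd.Series, lines=(50.0, -20.0)) -> pd.Series:
    """Bullish/bearish state from the Super Macro Index (SMI) and two signal lines."""
    state = pd.Series(index=smi.index, dtype=object)
    current = "bullish"                  # assumed starting state before any crossing
    prev = smi.iloc[0]
    for t, value in smi.items():
        for line in lines:
            if prev <= line < value:     # SMI crossed above a signal line -> turn bullish
                current = "bullish"
            elif prev >= line > value:   # SMI crossed below a signal line -> turn bearish
                current = "bearish"
        state.loc[t] = current
        prev = value
    return state
```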

figure-2-super-macro-signals

The Super Macro Index has demonstrated its leading characteristics throughout history. While my EG model did not detect the oil-embargo recession of 1973–75, the SMI began declining in 1973 and crossed below the 50% signal line in November 1973, just before the market plunged by 40%. From 2005 to 2007, during a sustained market advance, the SMI was in a downward trend, warning against excessive credit and economic expansion. On September 30, 2008, at the abyss of the subprime meltdown, the SMI bottomed; it then surged above the -20% signal line on March 31, 2009, three weeks after the current bull market began.

Like the Holy Grail and EG models, Super Macro outperformed buy-and-hold in both CAGR and risk. From 1920 to March 2012, the CAGR of Super Macro was 10.1%, with an annualized standard deviation of 14.1% and a maximum drawdown of -33.2%. By comparison, the buy-and-hold strategy with dividend reinvestment delivered a CAGR of 9.9% with a standard deviation of 17.2% and a maximum drawdown of -81.5%.

Super Macro, Holy Grail and the buy-and-hold strategy

Let's compare Super Macro and the Holy Grail to the S&P 500 total return from 1966 to March 2012, the period most relevant to the current generation of investors. It covers two secular bear markets (from 1966 to 1981 and from 2000 to present) and one secular bull cycle (from 1982 to 1999). Secular markets, like cyclical markets, can be objectively defined; they will be the topic of a future article.

Figure 3 shows cumulative values for a $1,000 initial investment made in January 1966 in each of the three strategies. The Holy Grail outperformed the S&P 500 in the two secular bear cycles, but it underperformed during the 18-year secular bull market. As noted before, the buy-and-hold approach did not make sense in bear markets, but it worked in bull cycles. The cumulative value of Super Macro, depicted by the blue curve, beat the other two throughout the entire 46-year period.

figure-3-super-macro-holy-grail-and-the-sp500-total-return

The CAGR of the Super Macro model from 1966 to March 2012 was a spectacular 11.4%, with an annualized standard deviation of 12.5% and a maximum drawdown of -33.2%. The Holy Grail model in the same period had a CAGR of 9.5%, with a lower standard deviation of 11.2% and a smaller maximum drawdown of -23.2%. By comparison, the S&P 500 total-return index delivered a CAGR of 9.3% but with a higher standard deviation of 15.4% and a massive maximum drawdown of -50.9%.

The current secular bear market cycle, which began in 2000, highlights the key differences between Super Macro, the Holy Grail, and the buy-and-hold approach. The S&P 500 total return delivered a meager 1.5% compound rate, with a standard deviation of 16.3% and a maximum drawdown of -50.9%. The trend-following Holy Grail returned a compound rate of 6.2%, with a low standard deviation of 9.5% and a small maximum drawdown of only -12.6%. Super Macro timed market entries and exits by macroeconomic climate gauges. It incurred intermediate levels of risk (a standard deviation of 12.4% and a maximum drawdown of -33.2%), but it delivered a remarkable CAGR of 8.5% from January 2000 to March 2012.

The main difference between a macro model and a technical model is that the timing of fundamentals is often early, while a trend follower always lags. In the next article, I will present an original concept that turns the out-of-sync nature of these two types of timing models to our advantage in investing.

Rule-based models achieve the two most essential objectives in money management: capital preservation in bad times and capital appreciation in good times. If you are skeptical about technical timing models like the Holy Grail, I hope my fundamentals-based Super Macro model will persuade you to take a second look at market timing as an alternative to the buy-and-hold doctrine. Properly designed timing models, both technical and fundamental, can achieve both core objectives, while the buy-and-hold approach ignores the first one. Over the past decade, we saw how costly ignoring capital preservation can be.

Theodore Wong graduated from MIT with a BSEE and MSEE degree. He served as general manager in several Fortune-500 companies that produced infrared sensors for satellite and military applications. After selling the hi-tech company that he started with a private equity firm, he launched TTSW Advisory, a consulting firm offering clients investment research services. For over three decades, Ted has developed a true passion for the financial markets. He applies engineering and statistical tools to achieve absolute investment returns by actively managing risk in both up and down markets. He can be reached at ted@ttswadvisory.com.