
Modeling Cyclical Markets – Part 2

Originally Published November 7, 2016 in Advisor Perspectives

In Part 1 of this series, I presented Primary-ID, a rules- and evidence-based model that identifies major price movements, which are traditionally called cyclical bull and bear markets. This article debuts Secondary-ID, a complementary model that objectively defines minor price movements, which are traditionally called rallies and corrections within bull and bear markets.

The traditional definitions of market cycles

Market analysts define market cycles by the magnitude of price movements. Sequential up and down price movements in excess of 20% are called primary cycles: advances of more than 20% are called bull markets, and declines of more than 20% are called bear markets. Sequential up and down price movements within 20% are called secondary cycles: retracements milder than -20% are called corrections, and advances shy of 20% are called rallies. Talking heads at CNBC frequently use this financial vernacular.

But has anyone bothered to ask how factual these fancy terms and lofty labels really are?

Experts also measure market cycles by their durations. It has been reported that since 1929, there have been 25 bull markets (gains over 20%) with a median duration of 13.1 months, and 25 bear markets (losses worse than -20%) with a median duration of 8.3 months.

But is "median duration" the proper statistical yardstick to measure stock market cycle lengths?

Fact-checking the 20% thresholds

Before presenting Secondary-ID, let's pause to fact-check these two market cycle yardsticks. The ±20% primary cycle rules of thumb have little practical use in guiding investment decisions. If we wait for a +20% confirmation before entering the market, we would have missed the bulk of the upside. Conversely, if we wait for an official kick-off of a new cyclical bear market, our portfolios would have already shrunk by -20%. The ±20% thresholds may be of interest to stock market historians, but they offer no real benefit to investors.

Besides being impractical, the ±20% demarcations are also arbitrary, as the historical evidence demonstrates. Figures 1A and 1B show the daily closing prices of the S&P 500 from 1928 to 2015. The green bars in Figure 1A are price advances of more than +5% from an interim low to an interim high. The red bars in Figure 1B are price retracements of more than -5% from an interim high to an interim low. Price movements of less than ±5% are ignored as noise. There were a total of 198 advances and 166 retracements in 88 years. From the figures, it's not obvious why ±20% was picked as the threshold for bull and bear markets. The distributions of green and red bars show no unique feature near the ±20% markers.
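For readers who want to experiment with this kind of swing count on their own data, the segmentation into alternating advances and retracements can be sketched with a simple zigzag routine. This is an illustrative sketch, not the author's actual code; the function name is hypothetical, and the ±5% threshold follows the text.

```python
def swings(prices, threshold=0.05):
    """Split a price series into alternating advances and retracements.

    A swing is confirmed once price reverses by more than `threshold`
    (5% here, matching the article) from the latest interim extreme.
    Returns signed fractional moves, e.g. 0.12 for a +12% advance.
    """
    moves = []
    pivot = extreme = prices[0]   # last turning point / running extreme
    up = None                     # current swing: True=advance, False=retrace
    for p in prices[1:]:
        if up is None:            # wait for the first confirmed direction
            if p >= prices[0] * (1 + threshold):
                up, extreme = True, p
            elif p <= prices[0] * (1 - threshold):
                up, extreme = False, p
        elif up:
            if p > extreme:
                extreme = p       # extend the interim high
            elif p < extreme * (1 - threshold):
                moves.append((extreme - pivot) / pivot)   # record the advance
                pivot, extreme, up = extreme, p, False
        else:
            if p < extreme:
                extreme = p       # extend the interim low
            elif p > extreme * (1 + threshold):
                moves.append((extreme - pivot) / pivot)   # record the retracement
                pivot, extreme, up = extreme, p, True
    return moves
```

Feeding daily S&P 500 closes into such a routine and histogramming the resulting moves is, in essence, how distributions like those in Figures 2A and 2B can be built.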

part-2-1a-and-1b

To determine how indistinct the ±20% markers are in the distributions, I plot the same daily data as histograms in Figures 2A and 2B. The probabilities of occurrence are displayed on the vertical axes for each price change (in percent) on the horizontal axes. For example, Figure 2A shows that a +20% rally has a 3% chance of occurring, and Figure 2B shows that a -20% retreat has a chance of nearly 3.5%. There is no discontinuity at +20% in Figure 2A that separates bull markets from rallies, nor at -20% in Figure 2B that differentiates bear markets from corrections.

part-2-2a-and-2b

There are, however, two distinct distribution patterns in both up and down markets. Figure 2A shows an exponential drop in the probability distribution as rally sizes increase from +10% to +40%. Above +45%, the histogram is flat. Figure 2B shows a similar exponential decline in the probability distribution as retracements deepen from -5% to -40%. Beyond -45%, the histogram is again flat. The reasons behind the exponential declines and the two-tier histogram pattern are beyond the scope of this paper. It's clear from Figures 2A and 2B, however, that there is no distinct inflection point at ±20%. In fact, it would be more statistically defensible to use ±45% as the thresholds for bull and bear markets. But such large thresholds for primary cycles would be worthless to investors.

Figures 2A and 2B also expose one other fallacy. It's often believed that price support and resistance levels are set by the Fibonacci ratios. One doesn't have to read scientific dissertations using advanced mathematical proofs to dispel the Fibonacci myth. A quick glance at Figure 2A or 2B would turn any Fibonacci faithful into a skeptic. If price tops and bottoms are set by the Fibonacci ratios, we would have found such footprints at ±23.6%, ±38.2%, ±50.0%, ±61.8%, or ±100%. No Fibonacci pivot points can be found in 88 years of daily S&P 500 data.

Fact-checking the market duration yardstick

I now turn to the second cyclical market yardstick – cycle duration. It's been reported that since 1929, the median duration of bull markets is 13.1 months and the median duration of bear markets is 8.3 months. The same report also notes that bull market durations span from 0.8 to 149.8 months, and bear market durations extend from 1.9 to 21 months. When the data are so widely scattered, the notion of a median is meaningless. Let me explain why with the following charts.

Figures 3A and 3B show duration histograms for all rallies and retreats, respectively. The vertical axes are the probabilities of occurrence for each duration shown on the horizontal axes. The notions of median and average are only useful when a distribution has a central tendency. When frequency distributions are skewed to the extent seen in Figure 3A, or both skewed and dispersed as in Figure 3B, the median durations cited in those reports are meaningless.

part-2-3a-and-3b

Figures 3A and 3B also expose one other myth. We often hear market gurus warning us that the bull (or bear) market is about to end because it's getting old. Chair Yellen was right when she said that economic expansions don't die of old age. Cyclical markets don't follow an actuarial table. They can live on indefinitely until they get hit by external shocks. Positive shocks (pleasant surprises) end bear markets and negative shocks (abrupt panics) kill bull markets. These black swans follow Taleb distributions in which average and median are not mathematically defined. In my concluding remarks I further expand on the origin of cyclical markets.

Many Wall Street beliefs and practices are just glorified folklore decorated with Greek symbols and pseudo-scientific notations to puff up their legitimacy. Many widely followed technical and market-timing indicators are nothing but glamorized traditions and legends. Their theoretical underpinnings must be carefully examined and their claims empirically verified. It's unwise to put one's hard-earned money at risk by blindly following any strategy without fact-checking it first, no matter how well accepted and widely followed it may be.

Envisioning cyclical markets through a calculus lens

Now that I have shown how flawed these two yardsticks are for gauging market cycles, I return to the subject at hand – modeling cyclical markets. The methodology is as follows. First, start with a metric that is fundamentally sound (the Super Bowl indicator, by contrast, is one with no fundamental basis). Next, transform the metric into a quasi range-bound indicator. Then devise a set of rational rules using the indicator to formulate a hypothesis. High correlations without causation are not enough: causation must be grounded in logical principles such as economics, behavioral finance, fractal geometry, chaos theory, game theory or number theory. Finally, the hypothesis must be empirically validated with adequate samples to qualify as a model.

Let me illustrate this modeling approach with Primary-ID. The Shiller CAPE (cyclically adjusted price-earnings ratio) is a fundamentally sound metric. But when the CAPE is used in its original scalar form, it is prone to calibration error because it is not range-bound. To transform the scalar CAPE into a range-bound indicator, I compute its year-over-year rate of change (the YoY-ROC % CAPE). A set of logically sound buy and sell rules then turns the indicator into actionable signals. After the hypothesis was validated empirically over a period spanning multiple bull and bear cycles, Primary-ID qualified as a model.

This modeling approach can be elucidated with a calculus analogy. The scalar Shiller CAPE is analogous to "distance." The vector indicator YoY-ROC % CAPE is analogous to "velocity." When "velocity" is measured in the infinitesimal limit, it is equivalent to the "first derivative" in calculus. In other words, Primary-ID is akin to taking the first derivative of the CAPE. There are, however, some differences between the YoY-ROC % CAPE indicator and a true derivative. First, a derivative is an instantaneous rate of change of a continuous function; the YoY-ROC % CAPE is not instantaneous but uses a finite time interval of one year. Also, the YoY-ROC % CAPE is not computed on a continuous function but on a discrete monthly time series – the CAPE. Finally, the natural inflection point of a derivative is the zero crossing, whereas the signal crossing of Primary-ID is at -13%.
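The scalar-to-vector transformation itself is a one-liner. The sketch below is a minimal illustration, assuming a plain list of monthly CAPE values; the helper name `yoy_roc` is mine, not the author's.

```python
def yoy_roc(series, lag=12):
    """Year-over-year rate of change, in percent, of a monthly series.

    Entry i compares month i with month i-12; the first `lag` months
    have no year-ago reference, so they yield None.
    """
    return [None if i < lag else (series[i] / series[i - lag] - 1.0) * 100.0
            for i in range(len(series))]
```

Applied to the monthly Shiller CAPE, this produces the "velocity" indicator: a flat CAPE maps to 0%, and a CAPE that rises from 20 to 22 over twelve months maps to +10%.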

Secondary-ID – a model for minor market movements

I now present a model called Secondary-ID. If Primary-ID is akin to "velocity" or the first derivative of the CAPE and is designed to detect major price movements in the stock market, then Secondary-ID is analogous to "acceleration/deceleration" or the second derivative of the CAPE and is designed to sense minor price movements. Secondary-ID is a second-order vector because it derives its signals from the month-over-month rate-of-change (MoM-ROC %) of the year-over-year rate-of-change (YoY-ROC %) in the Shiller CAPE metric.

Figures 4A to 4D show the S&P 500, the Shiller CAPE, Primary-ID signals and Secondary-ID signals, respectively. The indicator of Primary-ID (Figure 4C) is identical to that of Secondary-ID (Figure 4D), namely, the YoY-ROC % CAPE. But their signals differ. The signals in Figures 4C and 4D are color-coded – bullish signals are green and bearish signals are red. The details of the buy and sell rules for Primary-ID were described in Part 1. The bullish and bearish rules for Secondary-ID are presented below.

part-2-4a-4b-4c-4d

Bullish signals are triggered when the YoY-ROC % CAPE indicator is rising or when it is above 0%. For bearish signals, the indicator must be both falling and below 0%. "Rising" is defined as a positive month-over-month rate of change (MoM-ROC %) in the YoY-ROC % CAPE indicator; "falling," a negative MoM-ROC %. Because it is a second-order vector, Secondary-ID issues more signals than Primary-ID. It's noteworthy that the buy and sell signals of Secondary-ID often lead those of Primary-ID. The ability to detect acceleration and deceleration makes Secondary-ID more sensitive to change than Primary-ID, which detects only velocity.
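These rules reduce to a single test per month: bearish only when the indicator is both falling and below zero, bullish otherwise. A minimal sketch, assuming a list of monthly YoY-ROC % CAPE values (the function name is hypothetical):

```python
def secondary_id_signals(yoy):
    """Classify each month as bullish or bearish under the stated rules.

    Bearish only when the YoY-ROC CAPE is both falling month-over-month
    and below 0%; otherwise (rising, or above 0%) bullish. Output starts
    at the second month, since 'rising/falling' needs a one-month lookback.
    """
    signals = []
    for prev, curr in zip(yoy, yoy[1:]):
        falling = curr < prev
        bearish = falling and curr < 0.0
        signals.append("bearish" if bearish else "bullish")
    return signals
```

Note how an indicator that is falling but still positive stays bullish, while one that is negative but turning up flips back to bullish: that is the acceleration/deceleration sensitivity described above.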

For ease of visual examination, Figure 5A shows the S&P 500 color-coded with Secondary-ID signals. Figure 5B is the same as Figure 4D, describing how those signals are triggered by the buy and sell rules. Since 1880, Secondary-ID has called 26 of the 28 recessions (a 93% success rate). The two misses were in 1926 and 1945, both mild recessions. Secondary-ID also turned bearish in 1917, 1941, 1962, 1966 and 1977 when no recessions followed; however, those bearish calls were still followed by major and/or minor price retracements. If Mr. Market makes a wrong recession call and the S&P 500 plummets, it's pointless to argue with him while watching our portfolios tank. Secondary-ID is designed to detect accelerations and decelerations in market appraisal by the masses. Changes in appraisal often precede changes in market prices, regardless of whether those appraisals lead to actual economic expansions or recessions.

part-2-5a-and-5b

Secondary-ID not only meets my five criteria for robust model design (simplicity, sound rationale, rule-based clarity, sufficient sample size, and relevant data), it has one more exceptional merit – no overfitting. In the development of Secondary-ID, there is no in-sample training involved and no optimization done on any adjustable parameter. Secondary-ID has only two possible parameters to adjust. The first one is the time-interval for the second-order rising and falling vector. Instead of looking for an optimum time interval, I choose the smallest time increment in a monthly data series – one month. One month in a monthly time series is the closest parallel to the infinitesimal limit on a continuous function. The second possible adjustable parameter is the signal crossing. I select zero crossing as the signal trigger because zero is the natural center of an oscillator. The values selected for these two parameters are the most logical choices and therefore no optimization is warranted. Because no parameters are adjusted, there's no need for in-sample training. Hence Secondary-ID is not liable to overfitting.

Performance comparison: Secondary-ID, Primary-ID and the S&P 500

The buy and sell rules of Secondary-ID presented above are translated into executable trading instructions as follows. When the YoY-ROC CAPE is rising (i.e., a positive MoM-ROC %), buy the S&P 500 (e.g., SPY) at the close in the following month. When the YoY-ROC CAPE is below 0% and falling (i.e., a negative MoM-ROC %), sell the S&P 500 at the close in the following month and use the proceeds to buy U.S. Treasury bonds (e.g., TLT). The return while holding the S&P 500 is the total return with dividends reinvested. The return while holding bonds is the sum of the bond coupon and bond price changes caused by interest rate movements.
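The stock-or-bond switching can be sketched as a compounding loop. This is a hypothetical, stripped-down illustration using synthetic monthly returns, not the author's actual backtest; it ignores dividends timing, taxes and transaction costs, and the one-index shift implements "act at the following month's close."

```python
def backtest(signals, stock_ret, bond_ret):
    """Compound monthly returns: hold stocks after a bullish month,
    bonds after a bearish month. The signal from month i-1 is applied
    to the return earned in month i (the one-month execution lag).
    """
    wealth = 1.0
    for i in range(1, len(signals)):
        r = stock_ret[i] if signals[i - 1] == "bullish" else bond_ret[i]
        wealth *= 1.0 + r
    return wealth
```

With real data, `signals` would come from the Secondary-ID rules and the two return series from total-return indices of the S&P 500 and Treasury bonds.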

Figure 6A shows the S&P 500 total return index and the total return of the U.S. Treasury bond. Over 116 years, the return on stocks is nearly twice that of bonds. But in the last three decades, bond prices have risen dramatically thanks to a steady decline in inflation since 1980 and protracted easy monetary policies since 1995. Figure 6B shows the cumulative total returns of Primary-ID, Secondary-ID and the S&P 500 on a $1 investment made in January 1900. The S&P 500 has a compound annual total return of 9.7% with a maximum drawdown of -83%. By comparison, Primary-ID has a hypothetical compound annual growth rate (CAGR) of 10.4% with a maximum drawdown of -50%, and trades once every five years on average. The performance stats on Primary-ID differ slightly from those shown in Figure 5B in Part 1 because Figure 6B is updated from July to August 2016.

Secondary-ID delivers a hypothetical CAGR of 12.8% with a -36% maximum drawdown and trades once every two years on average. Note that Primary-ID and Secondary-ID work in parallel to avoid most, if not all, bear markets. Secondary-ID offers an extra performance edge by minimizing exposure to bull market corrections and by participating in selected bear market rallies.

part-2-6a-and-6b

I now apply the same buy and sell rules to the most recent 16 years to see how the model would have performed in a shorter, more recent sub-period. This is not an out-of-sample test, since there is no in-sample training; rather, it's a performance consistency check over a much shorter and more recent period. Figure 7A shows the total return of the S&P 500 and the U.S. Treasury bond price index from 2000 to August 2016. The return on bonds in this period is higher than that of the S&P 500: record easy monetary policies since 2003 and large-scale asset purchases by global central banks since 2010 pumped up bond prices, while two severe back-to-back recessions dragged down the stock market. Figure 7B shows the cumulative total returns of Primary-ID, Secondary-ID and the S&P 500 on a $1 investment made in January 2000.

part-2-7a-and-7b

Since 2000, the total return index of the S&P 500 has returned 4.3% compounded with a maximum drawdown of -51%. By comparison, Primary-ID has a CAGR of 7.7% with a maximum drawdown of -23% and trades once every five years on average. Again, the performance stats on Primary-ID shown in Figure 7B differ slightly from those shown in Figure 5B in Part 1 because Figure 7B is updated to August 2016. Secondary-ID delivers a hypothetical CAGR of 10.5% with a maximum drawdown of only -16% and trades once every 1.4 years on average. The edge of Secondary-ID in both return and risk over Primary-ID and the S&P 500 total return index is remarkable. The consistency of the performance gaps over both the entire 116-year period and the most recent 16-year sub-period lends credence to Secondary-ID.

Theoretical support for both cyclical market models

The traditional concepts of "primary cycles" and "secondary cycles" rely on amplitude and periodicity yardsticks to track past market cycles and to predict future ones. Primary-ID and Secondary-ID do not deal with primary or secondary market cycles. Their focus is on cyclical markets – major and minor price movements. All market movements are driven by changes in investors' collective market appraisals. The Shiller CAPE is selected as the core metric because it is a fundamentally sound valuation gauge, appraising the inflation-adjusted S&P 500 price relative to its rolling 10-year average earnings. Although the scalar CAPE is prone to overshoot and valuation misinterpretation, the first- and second-order vectors of the CAPE are not. Primary-ID and Secondary-ID sense the major changes and minor shifts, respectively, in investors' collective market appraisal that often precede market price action.

Like Primary-ID, Secondary-ID finds support in several principles of behavioral economics. First, prospect theory shows that a -10% loss hurts investors twice as much as the pleasure a +10% gain brings. Such reward-risk disparities are recognized by the asymmetrical buy and sell rules in both models. Second, both models use vector-based indicators, a choice supported by the findings of Daniel Kahneman and Amos Tversky that investors are far more sensitive to relative changes (vectors) in their wealth than to its absolute level (a scalar). Finally, the second-order vector in Secondary-ID is equivalent to the second derivative of the concave and convex value function described by the two distinguished behavioral economists in 1979.

Concluding remarks – cyclical markets vs. market cycles

I developed rules- and evidence-based models to assess cyclical markets and not market cycles. The traditional notion of market cycles is defined with a prescribed set of pseudo-scientific attributes such as amplitude and periodicity that are neither substantiated by historical evidence nor grounded in statistics. Cyclical markets, on the other hand, are the outcomes of random external shocks imposing big tidal waves and small ripples on a steadily rising economic growth trend. Cyclical markets cannot be explained or predicted using the traditional cycle concepts because past cyclical patterns are the outcomes of non-Gaussian randomness. Let me illustrate with a simple but instructive narrative.

Cyclical markets can be visualized with a simple exercise. Draw an ascending line on a graph paper with the y-axis in a logarithmic scale and the x-axis in a linear time scale. The slope of the line is the long-term compound growth rate of the U.S. economy. Next, disrupt this steadily rising trendline with sharp ruptures of various amplitudes at erratic time intervals. These abrupt ruptures represent man-made crises (e.g., recessions) or natural calamities (e.g., earthquakes). Amplify these shocks with overshoots in both up and down directions to emulate the cascade-feedback loops driven by the herding mentality of human psychology. You now have a proxy of the S&P 500 historical price chart.
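This thought experiment can also be run numerically. The sketch below is purely illustrative: every parameter (growth rate, shock probability, shock size, overshoot factor) is an assumption chosen for demonstration, not a calibration to actual market data.

```python
import math
import random

def simulate_proxy(months=1392, annual_growth=0.065, shock_prob=0.02,
                   overshoot=1.5, seed=7):
    """Toy proxy for the narrative: a steady log-scale growth trend hit
    by random shocks at erratic intervals, each amplified by a herding
    'overshoot' factor. Returns a price series starting at 1.0.
    """
    random.seed(seed)
    drift = math.log(1.0 + annual_growth) / 12.0   # monthly trend slope
    log_price = [0.0]
    for _ in range(months):
        step = drift
        if random.random() < shock_prob:           # rare external shock
            shock = random.gauss(0.0, 0.15)        # crisis or windfall
            step += shock * overshoot              # herding amplification
        log_price.append(log_price[-1] + step)
    return [math.exp(x) for x in log_price]
```

Plotting the output on a log-scale y-axis yields a rising trendline punctuated by abrupt ruptures, qualitatively resembling the long-run S&P 500 chart described above.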

This descriptive model of cyclical markets explains why the conventional market cycle yardsticks – the ±20% thresholds and median durations – will never work. Unpredictable shocks do not adhere to prescribed amplitudes or durations, and non-Gaussian randomness cannot be captured by the mathematical formulae defining average and median. The conceptual framework of market cycles is flawed, and that's why it fails to explain cyclical markets.

Looking through the lens of Primary-ID and Secondary-ID, cyclical bull markets can last as long as the CAPE velocity is positive and/or accelerating, and cyclical bear markets can last as long as the CAPE velocity is negative and/or decelerating. Stock market movements are not constrained by ±20% thresholds or cycle life-expectancy statistics. Primary-ID detects the velocity of the collective stock market valuation assessment that drives bull and bear markets. Secondary-ID senses subtle accelerations and decelerations in that same collective assessment; these second-order waves manifest themselves as stock market rallies and corrections. It doesn't matter whether the market dips by less than -20% (what experts label a correction) or plunges by more than -20% (what they call a cyclical bear market): Primary-ID and Secondary-ID capture the price movements just the same.

Does synergy exist between Primary-ID and Secondary-ID? Would the sum of the two offer performance greater than those of the parts? A composite index of the two models enables the use of leverage and short strategies that pave the way for more advanced portfolio engineering and risk management tactics. Do these more complex strategies add value? For answers, please stay tuned for Part 3.

Theodore Wong graduated from MIT with a BSEE and MSEE degree and earned an MBA degree from Temple University. He served as general manager in several Fortune-500 companies that produced infrared sensors for satellite and military applications. After selling the hi-tech company that he started with a private equity firm, he launched TTSW Advisory, a consulting firm offering clients investment research services. For almost four decades, Ted has developed a true passion for studying the financial markets. He applies engineering design principles and statistical tools to achieve absolute investment returns by actively managing risk in both up and down markets. He can be reached at ted@ttswadvisory.com.

 


Modeling Cyclical Markets – Part 1

Originally Published October 24, 2016 in Advisor Perspectives

The proverbial wisdom is that there are two types of stock market cycles – secular and cyclical. I argued previously that secular cycles not only lack a statistical basis to be credible, but their durations of 12 to 14 years are also impractical for most investors. We live in an internet age with a time scale measured in nanoseconds. Wealth managers often turn over their portfolios after only a few years. Simply put, secular cycles can last longer than financial advisors can retain their clients.

The second type of cycle is called a “cyclical market” and is believed to comprise both primary and secondary waves. Economic cycles are thought to drive primary waves. According to the National Bureau of Economic Research (NBER), the average economic cycle length is 4.7 years, which would be more suitable for the typical holding periods of most investors.

To succeed in accumulating wealth in bull markets and preserving capital in bear markets, we must first define and detect primary and secondary markets. In this article, I present a modeling approach to spot primary cycles. Modeling secondary market cycles will be the topic of Part 2.

Common flaws in modeling financial markets

Before presenting my model on primary markets, I must digress to discuss two common mistakes in modeling financial markets. For example, when modeling secular market cycles and market valuations, analysts use indicators such as the Crestmont P/E, the Alexander P/R and the Shiller CAPE (cyclically adjusted price-earnings ratio). By themselves, these indicators are fundamentally sound. It's the modeling approach using these indicators that is flawed.

Models on valuations and secular cycles cited above share two assumptions. First, they assume that the amplitude (scalar) of the indicators can be relied on to indicate market valuations and secular outlook. Extremely high readings are interpreted as overvaluations or cycle crests, and extremely low readings, undervaluation or cycle troughs. Second, it's assumed that mean reversion will always drive the extreme readings in the models back into line.

Figure 1A shows the S&P 500 from 1881 to mid-2016 in logarithmic scale. Figure 1B is the Shiller CAPE overlay. The solid purple horizontal line is the mean from 1881 to 1994 and has a value of 14.8. The upper and lower dashed purple lines represent one standard deviation above and below the mean of 14.8, respectively. The solid brown line to the right is the mean from 1995 to mid-2016 and has a value of 26.9. The upper and lower dashed brown lines are one standard deviation above and below the post-1995 mean of 26.9, respectively. One standard deviation above the pre-1995 mean is 19.4 and one standard deviation below the post-1995 mean is 20.4. The data regimes in the two adjoining timeframes do not overlap. The statistically distinct nature of the two regimes invalidates the claim by many secular cycle advocates and CAPE-based valuations practitioners that the elevated CAPE readings after 1995 are just transitory statistical outliers and will fall back down in due course.

figure-1a-and-1b

Let's examine the investment impacts of these two assumptions. The first assumption is that extreme amplitudes can be used to track cycle turning points. Figure 1B shows that both high and low extremes are arbitrary and relative; as such, they cannot be used as absolute valuation markers. For example, after 1995 the entire amplitude range shifted upward. Secular cyclists and value investors would have sold stocks in 1995 when the CAPE first pierced above 22, exceeding the major secular bull market crests of 1901, 1937 and 1964. They would have missed the 180% gain in the S&P 500 from 1995 to its peak in 2000. More recently, the CAPE dipped to 13 at the bottom of the sub-prime crash. Secular cycle advocates and value investors would consider a CAPE of 13 not cheap enough relative to previous secular troughs in 1920, 1933, 1942, 1949, 1975 and 1982. They would have asked clients to switch from stocks to cash, only to miss out on the 200% gain in the S&P 500 since 2010. These are examples of huge upside misses caused by the first flawed assumption underlying these scalar-based models.

The second assumption is that mean reversion always brings the out-of-bound extremes back into line. This assumption falters on three counts. First, mean reversion is not mean regression. The former is a hypothesis and the latter, a law in certain statistics like Gaussian distributions (the bell curve). Second, mean regression is guaranteed only for distributions that resemble a bell curve. If the distributions follow the power-law or the Erlang statistics, even mean regression is not guaranteed. Finally, neither mean regression nor mean reversion is a certainty if the overshoots are not by chance, but are the results of causation. Elevated CAPE will last as long as the causes (Philosophical Economics, Jeremy Siegel and James Montier) remain in place. The second assumption creates a false sense of security that could be very harmful to your portfolios.

The confusion caused by both of these false assumptions is illustrated in Figure 1B. For the 26.9 mean, reversion has already taken place in 2002 and 2009. But for the 14.8 mean, reversion has a long way to go. All scalar models that rely on arbitrary amplitudes for calibration and assume a certainty of mean reversion are doomed to fail.

A vector-based modeling approach

The issues cited above are the direct pitfalls of using scalar-based indicators. One can think of a scalar as an AM (amplitude modulation) radio in a car: the signals are easily distorted when the car goes under an overpass. A vector, on the other hand, is analogous to FM (frequency modulation) signals, which are encoded not in amplitude but in frequency. Solid objects can attenuate amplitude-coded signals but cannot corrupt frequency-coded ones. Likewise, vector-based indicators are immune to amplitude distortions caused by external interferences such as Fed policies, demographics, or accounting rule changes that might cause the overshoot in the scalar CAPE. Models using vector-based indicators are inherently more reliable.

Instead of creating a new vector-based indicator from scratch, one can transform any indicator from a scalar to a vector with the help of a filter. Two common signal-processing filters used by electronic engineers to condition signals are low-pass filters and high-pass filters. Low-pass filters improve lower frequency signals by blocking unwanted higher frequency chatter. An example of a low-pass filter is the moving average, which transforms jittery data series into smoother ones. High-pass filters improve higher frequency signals by detrending irrelevant low frequency noise commonly present in the physical world. The rate-of-change (ROC) operator is a simple high-pass filter. ROC is defined as the ratio of the change in a variable over a specific time interval. Common time intervals used in financial markets are year-over-year (YoY) or month-over-month (MoM). By differentiating (taking the rate-of-change of) a time series, one transforms it from scalar to vector. A scalar only shows amplitude, but a vector contains both amplitude and direction contents. Let me illustrate how such a transformation works.
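The two filters described above can be sketched in a few lines. Both helpers below are hypothetical illustrations of the general technique, not the author's code: a trailing moving average as the low-pass filter, and the rate-of-change operator as the high-pass filter.

```python
def low_pass(series, window=12):
    """Trailing moving average: passes slow trends, blocks
    high-frequency chatter."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def high_pass(series, lag=12):
    """Rate of change over `lag` periods, in percent: passes fast
    changes, detrends the slow-moving drift (scalar -> vector)."""
    return [(series[i] / series[i - lag] - 1.0) * 100.0
            for i in range(lag, len(series))]
```

Running `high_pass` on a monthly scalar series with `lag=12` is exactly the YoY-ROC transformation used throughout these two articles.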

Figure 2A is identical to Figure 1B, the scalar version of the Shiller CAPE. Figure 2B is a vector transformation, the YoY-ROC of the scalar Shiller CAPE time series. There are clear differences between Figure 2A and Figure 2B. First, the post-1995 overshoot aberration in Figure 2A is no longer present in Figure 2B. Second, the time series in Figure 2B has a single mean, i.e. the mean from 1881 to 1994 and the mean from 1995 to present are virtually the same. Third, Figure 2B shows that the plus and minus one standard deviation bands from the two time periods completely overlap. This demonstrates statistically that the vector-based indicator is range-bound across its entire 135-year history. Finally, the cycles in Figure 2B are much shorter than those in Figure 2A. Shorter cycles are more conducive to mean reversion.

figure-2a-and-2b

It's clear that the YoY-ROC filter mitigates many calibration issues associated with the scalar-based CAPE. The vector-based CAPE is range-bound, has a single and stable mean and has shorter cycle lengths. These are key precursors for mean reversion. In addition, there are theoretical reasons from behavioral economics that vectors are preferred to scalars in gauging investors' sentiment. I will discuss the theoretical support a bit later.

The vector-based CAPE periods versus economic cycles

Primary market cycles are believed to be driven by economic cycles. Therefore, to detect cyclical markets, the indicator should track economic cycles. Figure 3A shows the S&P 500 from 1950 to mid-2016. The YoY-ROC GDP (Gross Domestic Product) is shown in Figure 3B and the YoY-ROC CAPE in Figure 3C. The Bureau of Economic Analysis (BEA) has published quarterly U.S. GDP data only since 1947.

The waveform of the YoY-ROC GDP is noticeably similar to that of the YoY-ROC CAPE. In fact, the YoY-ROC CAPE has a tendency to roll over before the YoY-ROC GDP dips into recessions, often by as much as one to two quarters. The YoY-ROC GDP and the YoY-ROC CAPE are plotted as if the two curves were updated at the same time. In reality, the YoY-ROC CAPE is nearly real-time (the S&P 500 and earnings are at month-ends and the Consumer Price Index has a 15-day lag). GDP data, on the other hand, is not available until a quarter has passed and is revised three times. The YoY-ROC CAPE indicator is updated ahead of the final GDP data by as much as three months. Hence, the YoY-ROC CAPE is a true leading economic indicator.

figure-3a-3b-3c

Although the waveforms in Figures 3B and 3C look alike, they are not identical. How closely did the YoY-ROC CAPE track the YoY-ROC GDP over the past 66 years? The answer can be found with the help of regression analysis. Figure 4 shows an R-squared of 29.2% between the GDP growth rate and the YoY-ROC CAPE. A single indicator that can explain close to one-third of the movement in the annual growth rate of GDP is remarkable, considering the simplicity of the YoY-ROC CAPE and the complexity of GDP and its components.
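For a one-variable regression like this, R-squared is just the squared Pearson correlation between the two series. A generic sketch (illustrative only; the 29.2% figure above comes from the author's own regression, not from this code):

```python
def r_squared(x, y):
    """Squared Pearson correlation between two equal-length series,
    i.e., the R-squared of a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov * cov / (var_x * var_y)
```

Feeding it the aligned YoY-ROC GDP and YoY-ROC CAPE series would yield the fraction of GDP-growth variance the indicator explains.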

figure-4

Primary-ID – a model for primary market cycles

Finding an indicator that tracks economic cycles is only a first step. To turn that indicator into an investment model, we have to come up with a set of buy and sell rules based on that indicator. Primary-ID is a model I designed years ago to monitor major price movements in the stock market. In the next article, I will present Secondary-ID, a complementary model that tracks minor stock market movements. I now illustrate my modeling approach with Primary-ID.

A robust model must meet five criteria: simplicity, sound rationale, rule-based clarity, sufficient sample size, and relevant data. Primary-ID meets all five. First, Primary-ID is elegantly simple – only one adjustable parameter for "in-sample training." Second, the vector-based CAPE is fundamentally sound. Third, the buy and sell rules are clearly defined. Fourth, the sample size is sufficient: the Shiller database provides over a century of monthly data. Fifth, the data is relevant: it spans more than two dozen business cycles, the very phenomenon being modeled.

Figure 5A shows both the S&P 500 and the YoY-ROC CAPE from 1900 to 1999. This is the training period to be discussed next. The curves are in green when the model is bullish and in red when bearish. Bullish signals are generated when the YoY-ROC CAPE crosses above the horizontal orange signal line at -13%. Bearish signals are issued when the YoY-ROC CAPE crosses below the same signal line. The signal line is the single adjustable parameter in the in-sample training.

figure-5a

Figure 5B compares the cumulative return of Primary-ID to the total return of the S&P 500, the benchmark for comparison. A $1 investment in Primary-ID in January 1900 hypothetically reached $30,596 at the end of 1999, a compound annual growth rate (CAGR) of 10.9%. The same $1 in the S&P 500 over the same period grew to $23,345, a CAGR of 10.3%. The CAGR gap may seem small, but compounded over a century it produces a substantially larger terminal wealth. The other significant benefit of Primary-ID is that its maximum drawdown is less than two-thirds that of the S&P 500. It trades on average once every five years, very close to the average business cycle of 4.7 years published by the NBER.
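The CAGR figures above follow from the standard compound-growth formula; a one-liner makes the arithmetic explicit (a generic sketch, not the author's code):

```python
def cagr(final_value, initial_value, years):
    """Compound annual growth rate: the constant yearly rate that
    grows initial_value into final_value over the given span."""
    return (final_value / initial_value) ** (1.0 / years) - 1.0
```

For instance, `cagr(30596, 1, 100)` returns roughly 0.109, consistent with the 10.9% quoted above.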

The in-sample training process

Figures 5A and 5B show the period from 1900 to 1999, the back-test period used to find the optimal signal line for Primary-ID. The buy and sell rules are as follows: when the YoY-ROC CAPE crosses above the signal line, buy the S&P 500 (e.g., SPY) at next month's close; when the YoY-ROC CAPE crosses below the signal line, sell the S&P 500 at next month's close and park the proceeds in U.S. Treasury bonds. The return while holding the S&P 500 is the total return with dividends reinvested. The return while holding bonds is the sum of the bond yield and the bond price percentage change caused by interest-rate movements.
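The crossing rules amount to a small state machine over the monthly YoY-ROC CAPE readings. A minimal sketch (my own illustration of the stated rules, not the author's code; execution at the next month's close and the bond-side returns are omitted):

```python
def primary_id_positions(roc, signal_line=-13.0):
    """Monthly positions from Primary-ID's crossing rules.

    roc: list of monthly YoY-ROC CAPE readings (%).
    Returns one boolean per month after the first:
    True = hold the S&P 500, False = hold Treasury bonds.
    """
    in_market = roc[0] > signal_line  # assumed starting state
    positions = []
    for prev, curr in zip(roc, roc[1:]):
        if prev <= signal_line < curr:    # crossed above: buy
            in_market = True
        elif prev >= signal_line > curr:  # crossed below: sell
            in_market = False
        positions.append(in_market)
    return positions
```

Note that months with no crossing simply carry the prior position forward, which is why the model trades so infrequently.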

Figure 6 shows the backtest results in two tradeoff spaces. The plot on the left maps CAGR against maximum drawdown for various signal lines. The one on the right plots CAGR as a function of the position of the signal line. For comparison, the S&P 500 delivered a total return of 10.6% and a maximum drawdown of -83% over the same period. Most of the blue dots in Figure 6 beat that total return, and all have maximum drawdowns far smaller than the S&P 500's.

figure-6
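Maximum drawdown (the worst peak-to-trough decline of an equity curve) can be computed in a single pass over the cumulative-return series. A generic sketch, not the author's code:

```python
def max_drawdown(equity):
    """Worst peak-to-trough decline of an equity curve, as a
    negative fraction (e.g., -0.83 for the S&P 500's -83%)."""
    peak = equity[0]
    worst = 0.0
    for value in equity:
        peak = max(peak, value)
        worst = min(worst, value / peak - 1.0)
    return worst
```

Each candidate signal line's backtested equity curve would be run through a function like this to produce the horizontal axis of the left-hand plot.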

Figures 6A and 6B show only the range of signal lines that offers relatively high CAGR. What is not shown is that all signal lines above -10% underperform the S&P 500. The two blue dots marked by arrows in both charts represent neither the highest return nor the lowest drawdown; they sit in the middle of the CAGR sweet spot. I deliberately select a signal line at -13% that does not have the maximum CAGR. An off-peak parameter gives the model a better chance of holding up beyond the backtest: picking the fully optimized parameter would build an unrealistic bias into the out-of-sample test results, and an over-optimized model, even if it passes the out-of-sample test, is prone to underperform both out of sample and in real-time forecasting.
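The off-peak selection idea can be made concrete: instead of taking the argmax of the parameter sweep, take a line from the middle of the high-CAGR band. A hypothetical sketch (the 95%-of-peak band is my own illustrative threshold, not the author's stated criterion):

```python
def pick_off_peak(sweep):
    """sweep: dict mapping candidate signal line (%) -> backtest CAGR (%).
    Returns the median line within the top band of CAGRs rather than
    the single best performer, to reduce over-fitting risk."""
    top = max(sweep.values())
    band = sorted(line for line, c in sweep.items() if c >= 0.95 * top)
    return band[len(band) // 2]
```

The point is the design choice, not the exact threshold: any rule that avoids the sharp peak of the in-sample objective will generalize better than the peak itself.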

Why do all signal lines above -10% deliver lower CAGRs than those between -10% and -19%? There is a theoretical reason for this asymmetry, discussed a bit later.

The out-of-sample validation

The out-of-sample test is a guard against the risk of over-fitting during in-sample optimization. It's like a dry run before deploying the model live with real money. Passing the out-of-sample test does not guarantee a robust model, but failing it is certainly grounds for rejection.

Here’s how out-of-sample testing works. The signal line selected in the training exercise is applied to a new set of data from January 2000 to July 2016 with the same buy and sell rules. Figure 7A shows both the S&P 500 and the YoY-ROC CAPE.

Figure 7B compares the cumulative return of Primary-ID to the total return of the S&P 500. A $1 investment in Primary-ID in January 2000 would hypothetically have grown to $3.50 by mid-2016, a CAGR of 7.8%. The same $1 in the S&P 500 over the same period would have grown to $2.02, a CAGR of 4.3%. An added perk of Primary-ID is its maximum drawdown of -23%, half the S&P 500's -51%. It trades on average once every five years, similar to the in-sample period, so profits are taxed at long-term capital-gains rates.

figure-7a-and-7b

Primary-ID sidestepped two infamous bear markets: the dot-com crash and the sub-prime meltdown. It also stayed fully invested in equities during the two mega bull markets of the last 16 years. The value of the YoY-ROC CAPE as a leading economic indicator and the efficacy of Primary-ID as a cyclical-market model are thus validated.

Theoretical support for Primary-ID

The theoretical support for Primary-ID can be found in prospect theory, proposed by Daniel Kahneman and Amos Tversky in 1979. Prospect theory offers three original axioms that lend support to Primary-ID. The first is that there is a two-to-one asymmetry between the pain of losses and the joy of gains – losses hurt twice as much as gains bring joy. Recall from Figure 6 that the sweet spot for CAGR comes from signal lines between -10% and -19%, more than one standard deviation below the indicator's mean near 0% (Figure 2B). Why is the sweet spot located that far off center? It could be the result of this asymmetry in investors' attitude toward reward versus risk. Prospect theory explains an old Wall Street adage – let profits run, but cut losses short. Primary-ID adds a new meaning to this old motto – buy swiftly, but sell late. In other words, buy promptly once the YoY-ROC CAPE crosses above -13%, but don't sell until it crosses below -13%.

The second prospect theory axiom deals with scalar and vector. The authors wrote, "Our perceptual apparatus is attuned to the evaluation of changes or differences rather than to the evaluation of absolute magnitudes." In other words, it's not the level of wealth that matters; it's the change in the level of wealth that affects investors' behavior. This explains why the vector-based CAPE works better than the original scalar-based CAPE. The former captures human behaviors better than the latter.

The third prospect theory axiom proposed by Kahneman and Tversky is that "the value function is generally concave for gains and commonly convex for losses." Richard Thaler explains this statement in layman's terms in his 2015 book "Misbehaving." The value function represents investors' attitudes toward reward and risk, and the terms concave and convex refer to the curve shown in Figure 3 of the 1979 paper. A concave (or convex) value function simply means that investors' sensitivity to joy (or pain) diminishes as the level of gain (or loss) increases. This diminishing sensitivity applies to changes in investors' attitude (a vector), not to the attitude itself (a scalar). Investors' diminishing sensitivity toward both gains and losses is the reason the YoY-ROC CAPE indicator is range-bound and why its mean reversion occurs more regularly. The original Shiller CAPE is a scalar time series and does not benefit from the third axiom. Therefore the apparent range-bound, mean-reverting behavior of the scalar Shiller CAPE in the past is the exception, not the norm.

Concluding remarks

The stock market is influenced by many driving forces, including economic cycles, credit cycles, Fed policy, seasonal/calendar factors, the equity premium puzzle, shifts in risk aversion and bubble/crash sentiment. At any point in time, the stock market is simply the superposition of the displacements of all these individual waves. The economic cycle is likely the dominant wave driving cyclical markets, but it is not the only one. That's why the R-squared is only 29.2% and why not all bear markets were accompanied by recessions (e.g., 1962, 1966 and 1987).

The credibility of the Primary-ID model in gauging primary cyclical markets is grounded in several factors. First, it is based on a fundamentally sound metric – the Shiller CAPE. Second, its indicator (the YoY-ROC CAPE) is a vector, which is more robust than a scalar. Third, the model tracks the cycle dynamics between the market and the economy relatively well. Fourth, the close agreement between Primary-ID's five-year average signal length (0.2 trades per year, shown in Figures 5B and 7B) and the 4.7-year average business cycle reported by the NBER adds credence to the model. Finally, Primary-ID has firm theoretical underpinnings in behavioral economics.

It's a widely held view that the stock market exhibits both primary and secondary waves. If primary waves are predominantly driven by economic cycles, what drives secondary waves? Can we model secondary market cycles with a vector-based approach similar to that in Primary-ID? Can such a model complement or even augment Primary-ID? Stay tuned for Part 2 where I debut a model called Secondary-ID that will address all these questions.

Theodore Wong graduated from MIT with BSEE and MSEE degrees and earned an MBA from Temple University. He served as general manager in several Fortune 500 companies that produced infrared sensors for satellite and military applications. After selling the hi-tech company that he started with a private-equity firm, he launched TTSW Advisory, a consulting firm offering clients investment research services. For almost four decades, Ted has developed a true passion for studying the financial markets. He applies engineering design principles and statistical tools to achieve absolute investment returns by actively managing risk in both up and down markets. He can be reached at ted@ttswadvisory.com.


Secular Market Cycles – Fact or Illusion?

Originally Published October 3, 2016 in Advisor Perspectives

Conventional wisdom dictates that equity markets adhere to long-term secular cycles and that investors should adjust their allocations based on whether valuation metrics, such as the Shiller CAPE, are relatively high or low. But what if the notion of secular market cycles is misguided because, for example, the sample size of past cycles is insufficient to attach any statistical significance?

I have been a student of the stock market for over 40 years. I rarely heard the term "secular market cycles" before the 1970s. In 1991, Angus Maddison used the term "long waves" to describe economic activities in 16 advanced capitalist countries since 1820. The term "secular cycles" has gained popularity since 2000 when Robert Shiller published the first edition of Irrational Exuberance. Figure 1 is taken from page 8 of his book, which shows the now famous Shiller CAPE (cyclically adjusted price-earnings ratio). Shiller's chart featured four major tops from 1881 to 2000. The last peak was spot-on in nailing the dot-com bubble.

Many believed that Shiller had deciphered the incoherent S&P 500 chart into a comprehensible rhythmic waveform with CAPE.

figure-1-reprint-from-robert-shiller-irrational-exuberance

Since then, the notion of "secular market cycles" has been increasingly accepted as an indisputable fact in both academic research and investment circles. Experts are busy giving meaning to such cyclical patterns. They rationalize causal connections between secular market cycles and socioeconomic shifts, attributing those cycles to structural factors such as technological advances, demographic waves, inflation trends, political reforms or wars.

The problem arises when analysts advise clients to deploy different investment strategies depending on whether the current phase is a secular bull or bear market. In order to know which strategy is appropriate, investors must first identify where they are in a secular market cycle. Unfortunately, the same experts who can explain past cycles are in total disarray regarding the current cycle. Since 2010, analysts have been debating if the secular bear market that started in 2000 is still in place or if a new secular bull market has already begun. During their six-year debate, the S&P 500 has melted up over 200%.

A deep dive into the secular market debate

Are we currently in a secular bull or secular bear market? You can find experts holding a wide dispersion of opinions – the bearish camp, the bullish camp and those on the agnostic fence.

Leading the bearish camp are many renowned analysts who believe that the secular bear market that started in 2000 continues today. Members include Ed Easterling, Michael Alexander, John Hussman, Jeremy Grantham, John Mauldin, Russell Napier, Joseph Calhoun, Van Tharp and Martin Pring (who might have turned bullish recently). Many of them justify their bearish stances with only one or two secular cycles of data supported by anecdotal evidence. Easterling and Alexander extended the database to over a century but could only increase the number of cycles to four or eight.

Such sample sizes are too small for any meaningful statistical analysis.

In the bullish camp, there is a contingent of prominent experts who believe that a new secular bull market began sometime after 2009. Doug Short, Jill Mislinski, Guggenheim Partners and others presented calendar tables depicting the periods of their secular cycles in the last century. Others include Chris Puplava, Liz Ann Sonders, Craig Johnson, Jeffrey Saut, Barry Ritholtz, Ralph Acampora and Tim Hayes. Institutional members include Fidelity, INVESCO and Bank of America Merrill Lynch. Most of the analysts in this camp turned bullish between 2012 and 2014, after the March 2009 low had been firmly established as the bottom of the preceding secular bear market.

Members sitting on the agnostic fence are harder to find. It takes honesty, humility and, above all, guts to admit publicly that you don't have the answer. Doug Ramsey turned from bearish to neutral in 2014. Alex Planes hedged his mildly bearish stance by acknowledging that no one can be certain about the exact cycle phase except in hindsight.

Easterling and Alexander are the only two researchers in all three camps who applied rule-based models on more than a century of data to define secular market cycles. The transparency of their methodologies allows peer reviews. Their work and findings are summarized below.

Ed Easterling's secular market cycle model

Ed Easterling of Crestmont Research is a recognized authority on secular market cycles and has written extensively on the subject. According to Easterling, secular bull markets start at troughs of below-average price-to-earnings ratios (P/Es, or the Crestmont P/E) and secular bear markets start at peaks of above-average P/Es. Based on these "rules," Easterling tabulated a secular cycle calendar from 1901 to 2015 with four secular bull and five secular bear markets. The performance of secular bull versus bear markets is tabulated in Table 1.

table-1-secular-cycles-per-easterling

In Figure 2A, the S&P 500 is in green to depict Easterling's secular bull markets and in red for his bear markets. Figure 2B is an overlay of the Shiller CAPE and is the same as Figure 1 above but is extended to 2015. According to Easterling, the current secular bear market that began in 2000 shows no sign of ending soon. His basic premise is that secular bull markets in the past didn't begin until either the Shiller CAPE or the Crestmont P/E bottomed at below-average levels.

figure-2a-secular-market-cycles-since-1900

The mean of the Shiller CAPE from 1881 to mid-2016 is 16.7. The CAPE dipped down to 13.3 in March 2009, but that was not "below-average" enough for Easterling. He noted that in all four previous bear market bottoms in the 1920s, 1930s, 1940s and the 1980s, the CAPE dropped to at least 10 and, most of the time, close to 5.

Michael Alexander's secular market cycle model

Michael Alexander wrote a ground-breaking book in 2000 entitled: Stock Cycles: Why Stocks Won't Beat Money Markets Over the Next Twenty Years. Alexander developed a database of over 200 years, much longer than other researchers. As a result, he was able to show more supporting evidence that linked his secular cycles to economic fundamentals. Alexander argued that there were two alternate types of secular cycles – monetary cycles followed by real cycles. In monetary cycles, falling inflation produced secular bull markets and rising inflation, secular bear markets. In real cycles, strong or consistent earnings growth fueled secular bull markets, and weak or inconsistent earnings drove secular bear markets. The secular bear market that began in 2000 and continues to the present day is a weak and inconsistent earnings phase of a real cycle.

Alexander developed a new metric called the P/R ratio (price-to-resource ratio) to detect secular market turning points. His metric is grounded on sound fundamentals and the derivation of P/R was detailed in the Appendix in his book. His P/R ratio resembles Easterling's P/E and he uses a similar rule narrative – secular bull markets start after P/R ratios have bottomed, and secular bear markets start after P/R ratios have peaked.

Table 2 summarizes Alexander's original findings, which ended in 2000. I updated his table through 2015, which is consistent with his bearish market stance posted in a recent blog.

table-2-secular-cycles-per-alexander

The common thesis Easterling and Alexander share

Both Easterling and Alexander applied quantitative metrics to define secular cycles. From their statements, we can find a common thesis in their bearish arguments.

In April 2013, Easterling affirmed that "the current secular bear will continue at least for another five to ten years until the CAPE reaches 10 or lower."

In July 2015, Easterling reaffirmed that "Crestmont Research identifies – without hesitation or doubt – the current cycle as the continuation of a secular bear market...we have a strong conviction that the prospect of a secular bull is far away...this secular bear, however, started at dramatically higher levels due to the late 1990s bubble... the reality is that the level of stock market valuation (i.e., P/E) is not low enough to provide the lift to returns that drives secular bull markets....the current P/E is at or above the typical starting level for a secular bear market."

In August 2013, Alexander wrote, "I sold my last position last month when the S&P 500 was in the low 1600's. The P/R graph shows that the market has reached roughly the same position relative to past secular bear markets as it had in 2007...The bet I am making is that there will be another downturn as there was in the past and this downturn will send the S&P 500 down to 1250."

In June 2015, Alexander published a blog entitled "10,000 point decline in the Dow in the cards over the next three years." Based on the declines from P/R peaks to P/R troughs in previous secular bear markets, he projected a secular bear bottom for the Dow Jones Industrial Average to be around 8000 and the S&P 500 around 900 by 2018.

The self-assurance expressed in the statements by Easterling and Alexander is admirable. But their doomsday forecasts are misplaced. When we become too personally or professionally invested in a supposition, we fall into an overconfidence trap.

Philip Tetlock, in his book Superforecasting: The Art and Science of Prediction, identifies key traits that separate good forecasters from bad. Hedgehogs are lousy forecasters because they are overconfident in their immutable grand theories and cling stubbornly to their confirmation biases despite contradictory evidence. Foxes, on the other hand, are much better forecasters, primarily because they are skeptical of grand theories, diffident in their beliefs and ready to adjust their convictions in response to actual events. Foxes are true Bayesians.

The key to successful forecasts is to keep an open mind. In his book Sapiens: A Brief History of Humankind, Yuval Noah Harari argues that a new mindset in the 16th century based on the Latin word ignoramus – the willingness to admit ignorance -- was the catalyst that set in motion the Scientific Revolution that continues today. As Mark Twain said, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”

A common thesis behind the bearish stances of both Easterling and Alexander is that the current levels of their metrics – Easterling's P/E and Alexander's P/R – are still too high relative to the starts of all previous secular bull markets in the past century. This logic compels them to reject any possibility of a new secular bull market. I will challenge their logic a bit later. But I would like to clarify two common misconceptions first.

Misconceptions about the term "secular cycles"

Cycle advocates claim that the existence of secular cycles is self-evident as proven by the large performance gap between secular bull and bear markets shown in Tables 1 and 2. Large differences in the returns exist, but they don't necessarily prove the existence of secular cycles. Secular bull markets are defined as the periods from troughs (either in price, P/E or P/R ratio) to peaks, and secular bear markets, from peaks to troughs. By such definitions, returns in bull markets must be higher than those in bear markets. The self-evidence argument is a circular logic and can be illustrated with a simple analogy. The temperatures from June to August are relatively high not because of the summer season. Rather, the summer season is defined from June to August because the temperatures in those months are relatively high. Claiming that secular bull markets create wealth and secular bear markets destroy wealth is as trivial as saying that June, July and August are hot because of summer and the winter months are cold because of winter. Sequential high and low returns do not prove the existence of secular cycles because those patterns are used to define secular cycles in the first place.

The term "cycles" in engineering and sciences refers to events with regular periodicity or at a uniform frequency. The term "cycle" used by stock market researchers refers to contiguous pairs of up and down markets. Investment analysts claimed that the stock market exhibits cycles at an interval of 17 to 18 years. One analyst even calculated an average cycle as precisely 17.6 years.

The notion of "average" is only meaningful when the sample distribution has a central tendency, i.e., is not flat, multi-modal or skewed. When the spreads in "half-cycle" lengths are so wide (from 3 to 25 years, shown in columns 3 and 7 of Tables 1 and 2, respectively) and the sample sizes are so small (4 and 8 "full cycles" in Tables 1 and 2), the term "average" may not even be mathematically definable. The stock market does exhibit pseudo sine-wave oscillatory patterns because investor sentiment fluctuates between the emotional extremes of greed and fear. Such extremes are captured by my TR-Osc and several other models to be presented in future articles. But there is no evidence of any periodicity. The term "cycles" is highly misleading and grossly misused, and a claim such as a 17.6-year cycle length is absurd.

Why are secular cycles so elusive?

Let's return to the question – why is there no consensus among analysts on the current secular cycle phase? Is that because, when standing in the middle of a cycle, one cannot see the future direction of the market? It's understandable that if price turning points are used to define cycles, a cycle in progress cannot be identified until a higher high or a lower low has been clearly established.

But the lack of consensus is not limited to the cycle currently in progress. Experts cannot even agree on past secular cycles with the benefit of hindsight. For example, none of the secular chronologies published by Easterling, Alexander, Short, Guggenheim, Ramsey, Hussman, Maddison and Fidelity look exactly alike. For those analysts who used anecdotal evidence, descriptive arguments and only a few decades of supporting data to define their cycles, such divergent hindsight should not be a total surprise. But one would expect the two cycle calendars from Easterling and Alexander to be similar, because both researchers apply similar quantitative metrics and objective rules to over a century of market data to determine their secular cycles. How different are their secular calendars?

Compare Easterling's secular calendar in Table 1 to Alexander's in Table 2. From 1900 to 2015, Easterling counted five bear markets and four bull markets, while Alexander identified only four bear markets and three bull markets. That's a whopping 30% discrepancy. Since an average investor has only 30 to 40 years to build a retirement nest egg, missing or adding one full secular cycle with an "average cycle of 17.6 years" could mean a world of difference.

Flawed assumptions common to both valuations and secular cycle models

Secular cycle metrics used by Easterling and Alexander share many attributes with traditional valuation gauges such as the Shiller CAPE, the Tobin Q, the Buffett market-cap-to-GNP ratio, price-to-earnings, price-to-dividend and price-to-book ratios. I previously argued that their uniformly high readings over the past 20 years indicate two common flaws. Many experts have begun to question whether the two-decade-long elevated CAPE readings really reflect high market valuations, or whether they are signs of calibration malfunction. Many "fixes" have been proposed to adjust the high levels back down (see Philosophical Economics, Jeremy Siegel and James Montier). When a gauge needs fixing, it means that users have lost confidence in its accuracy. The same critiques I made to challenge the validity of many of the valuation models also apply to the bearish secular-market theses of Easterling and Alexander.

Easterling's P/E, Alexander's P/R and all of the valuation gauges cited above share two key operating assumptions. First, they rely on the absolute levels of their readings to appraise the future market outlook. Second, they assume that mean reversion will always bring outliers back to the normal range. The first assumption – that high absolute levels (relative to the historical means) translate to low future returns – holds only when the time series has a stable mean, i.e., a single mean that is constant in time. The means of all the valuation gauges cited above have shifted upward significantly in the last two decades. With multiple means, the out-of-bound data wouldn't know which mean to revert to. The elevation anomaly observed in the CAPE also appears in both the Crestmont P/E and the Alexander P/R, which has led both researchers to hold their secular bear-market stances for over a decade.

The second assumption is mean reversion, which is often misunderstood to imply that any data temporarily out of bounds will always self-correct and migrate toward the mean. Those who believe this have mistaken mean regression for mean reversion. Mean regression is a law of probability stating that random outliers in a normal distribution tend to be followed by observations closer to the mean, driven by purely random statistical processes. Mean reversion, on the other hand, is the result of causation, not randomness: it is a causal hypothesis (not a law) postulated to explain an observed tendency toward the mean. Jeremy Siegel, Philosophical Economics, James Montier and others have proposed various causes to explain the elevation in the Shiller CAPE. If causation is involved, the elevations in the CAPE and other metrics are not random, and mean regression therefore has no jurisdiction. Past mean-reversion episodes in the Crestmont P/E, the Alexander P/R and the Shiller CAPE are no guarantee of future reappearances. Since no mathematical law mandates mean reversion, these metrics could stay elevated or suppressed indefinitely. Their means could also step up or down to different plateaus if a new cause emerges and shifts the baselines of the previous means.
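The distinction can be demonstrated numerically. In the simulation below (illustrative only), consecutive draws are fully independent, yet the values that follow two-sigma outliers still cluster near the mean – mean regression at work with no causal "reversion" force whatsoever:

```python
import random

random.seed(7)
draws = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Pair consecutive, independent draws and condition on the first
# of each pair being a > 2-sigma outlier.
pairs = zip(draws[::2], draws[1::2])
follow_ups = [b for a, b in pairs if a > 2.0]

# The follow-ups average near 0 (the mean), far below the > 2.0
# level of the outliers that preceded them.
avg_follow_up = sum(follow_ups) / len(follow_ups)
```

Because the draws are independent by construction, nothing "pulled" the follow-ups back; extreme values were simply improbable to begin with. Mean reversion, by contrast, requires a causal mechanism.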

Concluding remarks

The Shiller CAPE, the Crestmont P/E and the Alexander P/R are all good metrics built on solid economic fundamentals. The problem arises when these metrics are wrongly applied to gauge market valuations or to define long-term market cycles. Over the years, these widely held but misconceived models have become sacred cows in the theology of investments. Any challenge to the cardinal truth is denounced by the high priests as heresy, and observations that cannot be explained by the traditional doctrines are conveniently cast as one-off anomalies. The fact is that secular cycles and the related valuation models are not infallible axioms derived from first principles but merely hypotheses yet to be validated. Perhaps it takes a heretic from outside the investment circles, with no career risk, to point out the obvious flaws in this "cardinal truth." I argue that the elevated readings in various secular cycle and valuation metrics since 1995 are not anomalous aberrations but empirical evidence against the orthodoxy.

In fact, the dispersion of opinions among secular cycle advocates could itself be viewed as a nullification of the secular cycle hypothesis. Analysts used price data from 1800 to 2000 for the "in-sample training" of their models. These models are "trained" to interpret the past, so it is no surprise that they can depict past cycles. Market behavior from 2009 to the present, however, can be viewed as the "out-of-sample" test of these models. The confusion among analysts over their post-2009 market stances amounts to an inconsistency between the out-of-sample outcome and their in-sample data mining. When the out-of-sample reality stirs up a controversy that lasts for six years, it raises a presumption of doubt as to whether the secular cycle notion is a good approximation of reality.

There are two mathematical explanations for why these models give contradictory out-of-sample market stances even though they were trained on the same in-sample data. First, the smaller the sample size, the more susceptible the model is to curve-fitting. Second, small in-sample sizes mathematically guarantee out-of-sample predictions with low confidence levels, wide margins of error or both. Any model constructed with fewer than a dozen input samples should be deemed unreliable. The 2008 sub-prime meltdown was a horrific example of insufficient and irrelevant in-sample data. All the credit-rating agencies used U.S. housing-market data from the 1970s to the 1990s to model the default risk of mortgage-backed securities. During this training period, mortgage default rates were very low and the U.S. real estate market was booming. Had the credit agencies incorporated U.S. housing data from 1890 to 1950 (covering both bull and bear housing markets) or housing data from Japan since the 1970s (a bear housing market) in their Gaussian copula credit-risk models, we might not have had the sub-prime crash.
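The margin-of-error point can be quantified with the standard-error formula: the uncertainty around an estimated mean shrinks only with the square root of the sample size. A sketch (the 6-year standard deviation below is a hypothetical figure for illustration, not taken from the tables):

```python
import math

def margin_of_error(stdev, n, z=1.96):
    """Half-width of an approximate 95% confidence interval
    for a sample mean, given the sample standard deviation."""
    return z * stdev / math.sqrt(n)

# With only 8 observed "cycles" and a hypothetical 6-year spread in
# their lengths, the estimated mean cycle length carries a margin of
# error of roughly +/- 4.2 years -- wider than some half-cycles.
```

Quadrupling the sample size only halves the margin, which is why a handful of secular cycles can never pin down an "average cycle length" with any precision.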

Daniel Kahneman, in Thinking, Fast and Slow, described two distinct human mental faculties: a spontaneous pattern-recognition ability followed by a reflective aptitude to rationalize. Our ancestors survived in the savanna mainly with the first faculty, swiftly extracting camouflaged signals from noise to outwit both stronger predators and faster prey. Having survived the wild, humans had more time to indulge in contemplation. It is our propensity for ex-post rationalization that gave birth to culture, religion, philosophy and science. Today, however, living in an internet maze packed with terabytes of data, our innate pattern-seeking intuition and our natural rationalization tendency are often fooled by randomness.

These two human traits manifest in the behaviors of secular cycle advocates. They first visualize a handful of apparent cyclical patterns, like those in the Shiller CAPE. They then draw causal connections linking these observations to fundamental causes, without bothering to check for statistical significance. According to Angus Maddison (see p. 16 in the reference), long-wave patterns are not caused by any periodic structural forces proposed by secular theorists but are, rather, the results of random systemic shocks and subsequent attempts to stabilize their aftermaths with monetary and fiscal policies. Secular patterns are the reflection of these random wave-like disturbances on an otherwise continuously rising economic growth curve.

If those perceived cyclical patterns are purely accidental, caused by unpredictable random shocks, it is entirely plausible that the imagined secular cycles are mirages misconceived by overzealous cycle advocates. Their faithful followers could be searching for something nonexistent. The Crestmont P/E, the Alexander P/R and the Shiller CAPE might stay elevated indefinitely and never revert to or undershoot their historical means. Long-lasting elevations could place the misguided forecasters in a special class of perpetual permabears.

Market watchers love the secular market controversy because a protracted debate keeps them relevant. Unfortunately, investors have only 30-plus years to accumulate wealth. Should we entrust our hard-earned money to a hypothesis that might take an average secular period of 17.6 years to pan out? If by then the hypothesis is proven wrong, investors will have wasted half of their investing life-cycle.

It's a common belief that the stock market exhibits both secular and cyclical waves. If the concept of secular markets is dubious and the nature of the in-progress secular phase is always unclear, we should shift our attention to the shorter version called cyclical markets. Does the notion of cyclical markets share the same flawed premises as their secular cousin? What drives cyclical markets? Can they be defined, identified and modeled objectively? Modeling cyclical markets and the efficacy of such models will be the topics of my next articles.

Theodore Wong graduated from MIT with a BSEE and MSEE degree and earned an MBA degree from Temple University. He served as general manager in several Fortune-500 companies that produced infrared sensors for satellite and military applications. After selling the hi-tech company that he started with a private equity firm, he launched TTSW Advisory, a consulting firm offering clients investment research services. For almost four decades, Ted has developed a true passion for studying the financial markets. He applies engineering design principles and statistical tools to achieve absolute investment returns by actively managing risk in both up and down markets. He can be reached at ted@ttswadvisory.com.

Super Macro – A Fundamental Timing Model

Originally Published April 10, 2012 in Advisor Perspectives

Buy-and-hold advocates cite two reasons why tactical investing should fail. It violates the efficient market hypothesis (EMH), they say, and it is nothing more than market-timing in disguise.

But they are wrong. Rather than endure losses in bear markets – as passive investors must – I have shown that a simple trend-following model dramatically improves results, most recently in an Advisor Perspectives article last month.  Now it’s time to extend my approach by showing how this methodology can be applied to fundamental indicators to further improve performance.

The EMH does not automatically endorse buy-and-hold, nor does it compel investors to endure losses in bear markets. Financial analysts forecast earnings and economists make recession calls routinely, yet academics ridicule market timers as fortunetellers, and market timers resort to labeling themselves as tactical investors to avoid the stigma. Why?

Perhaps what sparks resentment toward market timers is not their predictions, but how they make their predictions. Reading tea leaves is acceptable as long as the tea has a "fundamental analysis" label, but market timing is treated as voodoo because it offends the academic elite, whose devotion to the notion of random walk is almost religious.

I am not a market timer, because I can't foretell the future. But neither do I buy the random-walk theory, because my Holy Grail verifies the existence of trends. Timing is everything. When your religion commands you to hold stocks even when the market is behaving self-destructively, it's time to find a new faith.

Timing models that follow price trends are technical timing models; the Holy Grail is one example. Timing models that monitor the investment climate are fundamental timing models; my Super Macro model is a prime example of a fundamental timing model that works. Before presenting Super Macro, I will first disclose the details of my earnings-growth (EG) model. As one of the 18 components of Super Macro, the EG model illustrates my methodology in model design.

But first let’s look at the engineering science that makes these models possible.

Macroeconomics, an engineering perspective

table-1-super-macro-model

Engineers assess all systems by their inputs, outputs, feedbacks and controls. From an engineering perspective, the economy is like an engine. It has inputs (the labor market and housing) and outputs (earnings and production). The engine analogy and the corresponding economic terms are presented in Table 1. At equilibrium, the engine runs at a steady state, with balanced input and output. When aggregate demand exceeds aggregate supply, the engine speeds up to rebalance. This leads to economic expansions that drive cyclical bull markets. When output outpaces input, the engine slows down. This causes the economy to contract, leading to cyclical bear markets.

The economic engine has multiple feedback loops linking its output to its input. Feedback loops can amplify small input changes into massive output differentials. Financial leverage is a positive feedback to the economy, as a turbocharger is to a car engine. Strong economic growth entices leverage expansion (credit demand), which in turn accelerates economic growth. This self-feeding frenzy can shift the engine into overdrive.

Deleveraging, on the other hand, is a negative feedback loop. It creates fear and panic, which are manifested in a huge surge in risk premium (credit spreads). The lack of confidence among investors, consumers and businesses can choke an already sluggish economy into a complete stall.

In a free-market system, price is a natural negative feedback mechanism that brings input and output into equilibrium. When demand outpaces supply, price will rise (inflation) to curtail demand. When supply exceeds demand, price will fall (deflation).

The speed of an engine is controlled by the accelerator and the brakes. The central bank, attempting to fight inflation while maximizing employment, uses its monetary levers (interest rates) to control the supply of money and credit. Because of the complex feedback loops within the economic engine, the Fed often overshoots its targets. The unavoidable outcome has been business cycles, which are in turn the root causes of cyclical bull and bear markets.

A fundamental timing model

Models that monitor the economic engine are called fundamental timing models. One example is the EG model, which uses a four-year growth rate of S&P 500 earnings to generate buy and sell signals. (Four years was the average business cycle length in the last century.) The EG model meets my five criteria for a good working model.

  1. Simplicity: The EG model has only one input: the S&P 500 earnings.
  2. Commonsense rationale: The EG model is based on a sound fundamental principle that earnings and earnings growth drive stock prices.
  3. Rule-based clarity: Its rules boil down to following trends when they are strong but being contrarian when growth rates are extremely negative.
  4. Sufficient sample size: There have been 29 business cycles since 1875.
  5. Relevant data: Earnings are relevant, as profits are the mother's milk of stocks.

figure-1-the-eg-model-1875-to-2012

The strategy is simple: buy the S&P 500 when the earnings growth index is below -48% or when it is rising. The first buy logic is a contrarian play and the second is a trend follower. Sell signals must meet two conditions: the earnings growth index must be falling, and it must be under 40%. The 40% threshold prevents one from selling the market prematurely when earnings growth remains strong.
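The buy and sell rules above can be condensed into a small decision function. This is a minimal sketch using the -48% and 40% thresholds quoted in the text; the actual model implementation is not disclosed:

```python
def eg_signal(growth_now: float, growth_prev: float) -> str:
    """Sketch of the EG model's decision rules.

    growth_now / growth_prev: current and prior readings of the
    four-year earnings-growth index, in percent.
    """
    rising = growth_now > growth_prev
    if growth_now < -48 or rising:
        return "buy"    # contrarian entry, or trend-following entry
    if growth_now < 40:
        return "sell"   # growth is falling AND under the 40% threshold
    return "hold"       # falling, but growth still strong (>= 40%)

print(eg_signal(-50.0, -45.0))  # deep contraction     → "buy"
print(eg_signal(30.0, 35.0))    # falling, under 40%   → "sell"
print(eg_signal(45.0, 50.0))    # falling, above 40%   → "hold"
```

Note how the 40% condition keeps the model invested while earnings growth is decelerating from very high levels, exactly the premature-exit case the text warns about.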

Figure 1 shows the resulting bullish and bearish signals from 1875 to present.

Earnings growth is a key market driver, watched closely by both momentum players and value investors. The signals shown in Figure 1 demonstrate that the model avoided the majority of business-cycle-linked bear markets. The EG model, however, could not foresee events that were not earnings-driven, such as the 1973 oil embargo and the 1987 program-trading crash.

Like the Holy Grail, my EG model outperforms buy-and-hold in both compound annual growth rate (CAGR) and risk (standard deviation and maximum drawdown). Since 1875, the CAGR of EG was 9.7% with an annualized standard deviation of 12.5% and a maximum drawdown of -42.6%. By comparison, the buy-and-hold strategy with dividend reinvestment delivered a CAGR of 9.0% with a standard deviation of 15.4% and a devastating maximum drawdown of -81.5%.
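For readers who want to reproduce these risk/return statistics on their own data, the two return-and-drawdown metrics can be computed as follows. This is a generic sketch, not the author's code; it assumes a series of portfolio values sampled at a regular frequency:

```python
def cagr(values, periods_per_year=12):
    """Compound annual growth rate from a series of portfolio values."""
    years = (len(values) - 1) / periods_per_year
    return (values[-1] / values[0]) ** (1.0 / years) - 1.0

def max_drawdown(values):
    """Worst peak-to-trough decline, returned as a negative fraction."""
    peak, worst = values[0], 0.0
    for v in values:
        peak = max(peak, v)                  # running high-water mark
        worst = min(worst, v / peak - 1.0)   # decline from that peak
    return worst

# Three annual observations: $1.00 growing to $1.21 over two years → 10% CAGR.
print(round(cagr([1.00, 1.10, 1.21], periods_per_year=1), 4))  # → 0.1
# A rise to 1.2 followed by a fall to 0.6 is a -50% drawdown.
print(max_drawdown([1.0, 1.2, 0.6, 0.9]))  # → -0.5
```

Maximum drawdown is measured against the running peak rather than the starting value, which is why buy-and-hold's -81.5% figure can be far worse than any single calendar-period loss.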

Since 2000, the EG model has issued only two sell signals. The first spanned January 30, 2001 to August 30, 2002, during which time the dot-com crash obliterated one-third of the S&P 500's value. The second sell signal came on June 30, 2008, right before the subprime meltdown started, and it ended on March 31, 2009, three weeks after the market bottomed. Who says that market timing is futile? Both the Holy Grail and EG worked not by predicting the future, but by steering investors away when the market trend and/or the fundamentals were hostile to investing.

Earnings growth is a yardstick that measures the health of the 500 U.S. corporations in the index. Stock prices, however, discount information beyond such microeconomic data. To gauge the well-being of the economy more broadly, I need a macroeconomic climate monitor.

But the economy is extremely complex. Meteorologists monitor the weather by measuring the temperature, pressure, and humidity. How do we monitor the economy?

My Super Macro model

Before investing, we should first find out how the economic engine is running. To know the operating conditions of an engine, one reads the gauges installed to track its inputs, outputs, control valves and feedback loops.

Table 1 lists the 18 gauges I watch to calibrate the economic engine, which I then integrate into a monitoring system I call "Super Macro." The EG model is one of the sub-components of Super Macro. In this paper, I have fully disclosed the design of the EG model. The details of the remaining models are proprietary, but I can assure you that they satisfy the five design criteria for a robust model.

Super Macro performance: January 1920 to March 2012

Figure 2 shows all Super Macro signals since 1920. The blue line is the Super Macro Index (SMI), which is the sum of all signals from the 18 gauges listed in Table 1. There are two orange "Signal Lines." Super Macro turns bullish when the blue line crosses above either one of the two signal lines and remains bullish until the blue line crosses below that signal line. Super Macro turns bearish when the blue line crosses below either signal line and remains bearish until the blue line crosses above that signal line. The color-coded S&P 500 curve depicts the timing of the bullish and bearish signals.
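One literal reading of these crossing rules can be sketched as a small state machine. This is an illustration only: the two signal-line levels below are placeholders taken from the levels visible in the narrative (50% and -20%), and the real Super Macro logic is not disclosed:

```python
def smi_stances(smi, lines=(50.0, -20.0)):
    """Sketch of the dual-signal-line logic described above.

    smi: sequence of Super Macro Index readings (percent).
    A cross above either line turns the stance bullish until the SMI
    crosses back below that same line; a cross below either line turns
    it bearish until the SMI crosses back above that same line.
    Returns one stance per step, 'neutral' before any crossing occurs.
    """
    stance, active, out = "neutral", None, []
    for prev, curr in zip(smi, smi[1:]):
        if active is None:  # no crossing yet: watch both lines
            for line in lines:
                if prev <= line < curr:
                    stance, active = "bullish", line
                elif prev >= line > curr:
                    stance, active = "bearish", line
        elif stance == "bullish" and prev >= active > curr:
            stance = "bearish"   # crossed back below the active line
        elif stance == "bearish" and prev <= active < curr:
            stance = "bullish"   # crossed back above the active line
        out.append(stance)
    return out

print(smi_stances([40, 60, 55, 45]))  # → ['bullish', 'bullish', 'bearish']
```

Latching onto whichever line was crossed is what lets the model stay bullish through minor wobbles of the SMI between the two lines, rather than whipsawing on every small move.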

figure-2-super-macro-signals

The Super Macro Index has demonstrated its leading characteristics throughout history. While my EG model did not detect the oil-embargo recession of 1974 to 1975, the SMI began its decline in 1973 and crossed below the 50% signal line in November 1973, just before the market plunged by 40%. From 2005 to 2007, during a sustained market advance, the SMI was in a downward trend, warning against excessive credit and economic expansion. On September 30, 2008, at the abyss of the subprime meltdown, the SMI bottomed; it then surged above the -20% signal line on March 31, 2009, three weeks after the current bull market began.

Like the Holy Grail and EG models, Super Macro outperformed buy-and-hold in both CAGR and risk. From 1920 to March 2012, the CAGR of Super Macro was 10.1%, with an annualized standard deviation of 14.1% and a maximum drawdown of -33.2%. By comparison, the buy-and-hold strategy with dividend reinvestment delivered a CAGR of 9.9% with a standard deviation of 17.2% and a maximum drawdown of -81.5%.

Super Macro, Holy Grail and the buy-and-hold strategy

Let's compare Super Macro and Holy Grail to the S&P 500 total return from 1966 to March 2012, the period that is the most relevant to the current generations of investors. It covers two secular bear markets (from 1966 to 1981 and from 2000 to present) and one secular bull cycle (from 1982 to 1999). Secular markets, like cyclical markets, can be objectively defined. They will be the topics of a future article.

Figure 3 shows cumulative values for a $1,000 initial investment made in January 1966 in each of the three strategies. The Holy Grail outperformed the S&P 500 in the two secular bear cycles, but it underperformed during the 18-year secular bull market. As noted before, the buy-and-hold approach did not make sense in bear markets, but it worked in bull cycles. The cumulative value of Super Macro depicted by the blue curve always beat the other two throughout the entire 46-year period.

figure-3-super-macro-holy-grail-and-the-sp500-total-return

The CAGR of the Super Macro model from 1966 to March 2012 was a spectacular 11.4%, with an annualized standard deviation of 12.5% and a maximum drawdown of -33.2%. The Holy Grail model in the same period had a CAGR of 9.5%, with a lower standard deviation of 11.2% and a smaller maximum drawdown of -23.2%. By comparison, the S&P 500 total-return index delivered a CAGR of 9.3% but with a higher standard deviation of 15.4% and a massive maximum drawdown of -50.9%.

The current secular bear market cycle, which began in 2000, highlights the key differences between Super Macro, the Holy Grail, and the buy-and-hold approach. The S&P 500 total return delivered a meager 1.5% compound rate, with a standard deviation of 16.3% and a maximum drawdown of -50.9%. The trend-following Holy Grail returned a compound rate of 6.2%, with a low standard deviation of 9.5% and a small maximum drawdown of only -12.6%. Super Macro timed market entries and exits by macroeconomic climate gauges. It incurred intermediate levels of risk (a standard deviation of 12.4% and a maximum drawdown of -33.2%), but it delivered a remarkable CAGR of 8.5% from January 2000 to March 2012.

The main difference between a macro model and a technical model is that the timing of fundamentals is often early, while a trend follower always lags. In the next article, I will present an original concept that turns the out-of-sync nature of these two types of timing models to our advantage in investing.

Rule-based models achieve the two most essential objectives in money management: capital preservation in bad times and capital appreciation in good times. If you are skeptical about technical timing models like the Holy Grail, I hope my fundamentals-based Super Macro model will persuade you to take a second look at market timing as an alternative to the buy-and-hold doctrine. Properly designed timing models, both technical and fundamental, can achieve both core objectives, while the buy-and-hold approach ignores the first. Over the past decade, we saw how fatal neglecting capital preservation can be.

Theodore Wong graduated from MIT with a BSEE and MSEE degree. He served as general manager in several Fortune-500 companies that produced infrared sensors for satellite and military applications. After selling the hi-tech company that he started with a private equity firm, he launched TTSW Advisory, a consulting firm offering clients investment research services. For over three decades, Ted has developed a true passion in the financial markets. He applies engineering statistical tools to achieve absolute investment returns by actively managing risk in both up and down markets. He can be reached at ted@ttswadvisory.com.