# Random Walk Part 4 – Can We Beat a Radically Random Stock Market?

Originally Published October 9, 2017 in Advisor Perspectives

This is the final article of my four-part series into the fallacy of the random-walk paradigm. In Part 1 and Part 2, I showed that asset prices do not follow a tidy bell curve and instead are radically random. In Part 3, I demonstrated that many bad risk management practices are the direct results of equating volatility to risk. In this article, I offer a probability-based framework that captures the true nature of investment reward and risk.

The Efficient Market Hypothesis (EMH) argues that the market is hard to beat because very few people could make better forecasts than the collective market wisdom, which instantly discounts all available information. My new reward-risk framework reveals a little-known secret that market gains and losses have very different distribution profiles. We can beat the S&P 500 not by making better forecasts, but by exploiting the dual personality of Mr. Market.

The random walk theory has been the core of modern finance since Louis Bachelier wrote his 1900 PhD thesis. Economists define reward as the mean return (expected value) and risk as the standard deviation (volatility) of the returns. These mathematical terms may be convenient for academics in formulating their economic theories, but they make no sense to the average investor. Reward has a positive connotation, but the mean can be negative; risk has a negative connotation, but standard deviation weighs gains and losses equally. Investors view reward and risk as two sides of the same coin – reward comes from gains and risk comes from losses. My reward-risk framework quantifies this subtle diametrical symmetry.

The random-walk definitions of investment reward and risk

Modern finance adopted the mean-variance paradigm to frame reward and risk. Appendix A presents the mathematical definitions. Figure 1 illustrates reward and risk graphically with the annual returns of the S&P 500 from 1928 to 2016 (data sources: MetaStock and Yahoo Finance). The dark blue curve is the random walk probability density function (PDF). Reward (mean or expected value) is computed by integrating the return-weighted area under the PDF curve using equation A1 in Appendix A. Risk (the square root of variance), computed with equation A2, is one-half the width of the light blue central region bounded by ± one standard deviation. The random walk PDF roughly matches the data (the jagged gray area) in the central region except near the peak. Beyond ± one standard deviation, the data reside mostly above the PDF curve.
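Equations A1 and A2 can be computed directly from a return series. The sketch below is a minimal illustration in Python; it uses a synthetic stand-in for the S&P 500 annual returns, since the article's actual MetaStock and Yahoo Finance data are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 89 annual S&P 500 returns (1928-2016);
# the loc/scale values mirror the 7.7% mean and 16% volatility quoted later.
returns = rng.normal(loc=0.077, scale=0.16, size=89)

# Equation A1: reward = probability-weighted mean return
mean = returns.mean()

# Equation A2: risk = standard deviation (volatility)
volatility = returns.std(ddof=0)

# Fraction of observations inside the +/- one-standard-deviation band
inside = np.mean(np.abs(returns - mean) <= volatility)
print(f"mean={mean:.3f}, volatility={volatility:.3f}, inside 1-sigma={inside:.2f}")
```

On real market data, the same two lines of arithmetic produce the mean and volatility plotted in Figure 1.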

Figure 2 compares the random walk PDFs (blue curves) to the actual S&P 500 returns (gray areas) over one-, five- and 10-year horizons. The peaks of the PDFs denote the means. The red arrows signify ± one standard deviation. The random walk's notions of mean and volatility bear no resemblance to actual returns and risks in the real world. The longer the return horizon, the larger the gap. This is why so many conventional risk management practices derived from the mean-variance paradigm broke down during financial crises. The academics' bell curve paradigm offers investors no protection against financial market risks.

My gain-loss framework for investment reward and risk

I offer a new probability-based framework for defining reward and risk. The formulas are presented in Appendix B. Figure 3 illustrates the concept. I define investment reward as the expected gain – the sum of all probability-weighted gains in a return histogram. It is computed by numerically integrating the total green area in Figure 3 using equation B1 in Appendix B. I define risk as the expected loss – the sum of all probability-weighted losses. It is computed by summing the total pink area in Figure 3 using equation B2.

The random-walk paradigm treats both positive and negative dispersions as risks. The expected gain-loss framework only counts losses (red bars) as risks and views the widely dispersed green bars (gains) as gainful opportunities. The bell curve does not cover all the data, especially those at the extremes. The new model accounts for all outlier gains and all tail risks, weighted by their observed probabilities.
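The expected gain (equation B1) and expected loss (equation B2) can be estimated numerically from a return histogram. A minimal sketch, again using synthetic returns as a stand-in for the actual series (the 12.5% and -4.3% figures quoted later come from the real 1928-2016 S&P 500 data, not from this simulation):

```python
import numpy as np

def expected_gain_loss(returns, bins=30):
    """Expected gain (B1) and expected loss (B2): probability-weighted
    sums over the positive and negative parts of the return histogram."""
    counts, edges = np.histogram(returns, bins=bins)
    probs = counts / counts.sum()             # normalize probabilities to 100%
    mids = 0.5 * (edges[:-1] + edges[1:])     # bin midpoints r*
    gain = np.sum(mids[mids > 0] * probs[mids > 0])
    loss = np.sum(mids[mids < 0] * probs[mids < 0])
    return gain, loss

# Hypothetical return series for illustration only
rng = np.random.default_rng(1)
returns = rng.normal(0.077, 0.16, 89)
gain, loss = expected_gain_loss(returns)
print(f"expected gain={gain:.3f}, expected loss={loss:.3f}, ratio={gain / -loss:.2f}")
```

Note that the two quantities always sum (up to binning error) to the ordinary mean return, which makes the decomposition easy to sanity-check.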

The old paradigm versus the new framework

Besides being unrealistic and impractical, the random-walk paradigm has one more subtle fault that is underreported. Mean and variance have different units of measure – mean is in percent but variance is in percent-squared. For unit compatibility, William Sharpe was compelled to use standard deviation, the square root of variance, in his Sharpe Ratio. Even so, mean and standard deviation still lack contextual uniformity. Mean signifies the most probable outcome and standard deviation measures the spread of those outcomes. Comparing reward (mean) to risk (volatility) is like comparing apples and oranges. For instance, Figure 1 shows that the mean-to-volatility ratio of the S&P 500 is 0.48 (dividing a mean of 7.7% by a volatility of 16%). Does this imply that the reward of investing in the S&P 500 is less than half the risk?

By contrast, "expected gain" and "expected loss" are two sides of the same coin – returns with opposite signs. Unlike the Sharpe Ratio that lacks clarity, the expected-gain-to-expected-loss ratio has absolute significance. For instance, Figure 3 shows that from 1928 to 2016, the S&P 500 index has an expected gain of 12.5% versus an expected loss of -4.3%. The expected gain of the S&P 500 is 2.9 times larger than the size of the expected loss.

Challenging the EMH's explanation on why the market is hard to beat

Why is the S&P 500 total-return such a formidable challenge for many active managers and market timers? The Efficient Market Hypothesis (EMH) offers a two-part explanation. First, market prices instantly (efficiently) reflect the collective appraisals of all market participants. Second, one can only beat the market by outsmarting the collective wisdom. On the surface, both points appear logical, but neither survives scrutiny.

The first point may explain why it is difficult for arbitrageurs to make a living, because any price gap is instantly exploited. This argument, however, does not apply to the financial markets, where prices are not single-valued functions of information. The same news can have multiple meanings and price implications depending on the receivers. Different interpretations of the same news draw buyers and sellers to the table. Price is an equilibrium point at which the sellers believe their price is fair but high, while the buyers think it is reasonable but low. The market is not a super forecaster, but an efficient auction-clearing house that enables buyers and sellers with different appraisals to transact.

The second EMH argument is self-inconsistent. It asserts that few can beat the market because outsmarting the collective forecast is hard to do. No random-walk follower, including the EMH faithful, should endorse the practice of forecasting, because forecasting randomness is a contradiction in terms. Randomness, by definition, is unpredictable.

The real reason why the market is hard to beat

My gain-loss framework offers a painfully obvious explanation of why the market is hard to beat. Figure 4 parses the same data in Figure 2 in terms of gains and losses. It shows that the market offers investors abundant gainful opportunities (green bars), but that they are highly erratic. The probabilities inside the blue rectangle in the middle chart are nearly the same but the gains span from 0% to 80%. The bottom chart shows comparable probabilities for gains ranging from 0% to 200%. To time the market with virtually flat gain distributions is futile. That is why the buy-and-hold approach is unbeatable in the green zone.

The characteristics of market losses (pink bars) are very different. First, the pink zones are much narrower than the green areas. Second, while the green area grows with time (from 12.5% in one year, to 46.6% in five years and 101.7% in 10 years), the pink areas are insensitive to the holding period. In fact, as the holding period expands from five to ten years, the pink area shrinks from -5.2% to -2.9%.

It is a fool's errand to time the market in the green areas, where the probabilities are almost flat and the distributions grow with the holding period. It is prudent to stay in the market and gather those wildly erratic gains. In contrast, the pink areas are confined and insensitive to time. Hence, mitigating losses in the pink areas is much more manageable. My gain-loss framework not only explains why it is hard to beat the market, but also reveals a clue on how to do it logically.

Unlock a little-known market beating secret

How can we differentiate whether the current market is in the green or pink zone? I previously published five models that were designed to do just that. The five models are Holy Grail, Super Macro, TR-Osc, Primary-ID and Secondary-ID. They detect the green/pink market phases from five orthogonal perspectives – trend, the economy, valuations, major market cycles and minor price movements, respectively.

Figure 5 shows the annual return histograms of the five models against three investment benchmarks – the S&P 500 total-return, the 10-year US Treasury bond, and the 60/40 mix (60% equity and 40% bond rebalanced monthly) (data: Shiller). They were computed from the eight equity curves – the compound growths from January 1900 to September 2017.

Figure 5 shows that my five models share two common features: their green bars are comparable to those of the S&P 500 but with narrower pink areas. In other words, their expected gains are as good as the S&P 500 but their expected losses are much lower. As a result, they offer much higher expected gain-to-loss ratios than that of the S&P 500. A model with a higher expected gain-to-loss ratio than the S&P 500 can surely beat the market return. I will quantify this point a bit later but first, I must challenge yet another modern finance doctrine.

Challenging the Capital Asset Pricing Model

The Capital Asset Pricing Model (CAPM) asserts that first, no return of any asset mix between the S&P 500 and the Treasury bond or Treasury bill can exceed the Capital Market Line (CML); and second, one can only increase return by taking on more risk via leverage. Figure 6 is a plot of expected gains versus expected losses from the data in Figure 5. The dashed red line is the CML connecting the S&P 500 total-return to the 10-year US Treasury bond total-return (bond yields plus bond price changes caused by interest rate changes). Also shown are my five models (light blue dots) and the three benchmarks (red squares). I add a sixth model, Cycle-ID (black dot), to show the effects of leverage. All six models are counterexamples to the CAPM. The five unlevered models reside far above the CML. The levered Cycle-ID beats the S&P 500 expected gain by 68% with only 85% of the risk. Hence, both CAPM claims are untrue.

CAGR and maximum drawdown comparisons

It is simple math that investors can beat the S&P 500 total-return if they can achieve close to the S&P 500 expected gains but cut their expected losses sizably relative to the S&P 500. Table 1 lists the compound annual growth rates (CAGRs) of all nine strategies in six sets of bull and bear full market cycles from January 1900 to September 2017. All six models have CAGRs higher than the S&P 500 total-return consistently in different bull and bear full cycles over a century.

My six models not only outperform the total compound returns of the S&P 500 in all cycle sets, they also offer lower risks than the S&P 500 measured by maximum drawdown. Table 2 compares maximum drawdowns of the nine strategies in different market cycles.
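The two yardsticks used in Tables 1 and 2 are standard and easy to compute from an equity curve. A sketch with a hypothetical equity curve (the models' actual curves are not reproduced here):

```python
import numpy as np

def cagr(equity, years):
    """Compound annual growth rate of an equity curve."""
    return (equity[-1] / equity[0]) ** (1.0 / years) - 1.0

def max_drawdown(equity):
    """Largest peak-to-trough decline, expressed as a negative fraction."""
    equity = np.asarray(equity, dtype=float)
    peaks = np.maximum.accumulate(equity)   # running high-water mark
    return np.min(equity / peaks - 1.0)

# Toy equity curve: doubles over 10 years with a 30% crash along the way
equity = [100, 120, 150, 105, 130, 160, 170, 150, 180, 190, 200]
print(f"CAGR={cagr(equity, 10):.2%}, max drawdown={max_drawdown(equity):.1%}")
```

For this toy curve the CAGR is about 7.2% and the maximum drawdown is -30% (the fall from 150 to 105).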

A dynamic active-passive investment approach

Which investment approach is better, passive or active? This ongoing debate misses the point. There is a time to be passive and a time to be active. Passive investors underperform active managers in bear markets, and active investors are no match for buy-and-holders in bull markets.

Figure 4 reveals that Mr. Market has a dual personality. When he is content, he spreads his random gains over a wide green area. When he is mad, he directs his wrath at a narrow pink zone. Therefore, it is feasible to logically beat Mr. Market at his own game – be a passive investor in the green areas to gather the widely dispersed gains, but be an active risk manager in the pink areas to trim market losses.

Here is how investors can do that in practice. Do regular checkups on Mr. Market's health. If we detect a mood shift from good to bad, reduce market exposure (actively preserve capital in the pink zones). Otherwise, we stay in the market (passively accumulate wealth in the green areas).

Market health checkups are not market forecasts. Doctors do not forecast our medical conditions at annual exams. They conduct routine diagnoses and look for symptoms. If some tests come back positive, then the doctors actively treat those illnesses. Otherwise, patients passively count their blessings until the next checkup.

Similarly, in regular market health checkups, we do not forecast market outlook but conduct diagnoses and look for warning signs. For instance, my five models were designed to monitor subtle shifts in trend (Holy Grail), the economy (Super Macro), valuation (TR-Osc), major market cycle (Primary-ID) and minor price movement (Secondary-ID). When a medical test comes back positive, we do not panic but seek second or third opinions. Likewise, investors should not assess market health based on a single indicator, but use the weight-of-evidence from multiple orthogonal models.
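The weight-of-evidence idea can be sketched as a simple majority vote. The exposure rule below is hypothetical, not the author's published allocation logic; it only illustrates combining several orthogonal green/pink signals into one decision:

```python
def market_exposure(signals):
    """Weight-of-evidence: fraction of models flagging a healthy (green) market.
    Stay fully invested on a bullish majority; otherwise scale back exposure.
    This threshold rule is an illustrative assumption, not a published model."""
    bullish = sum(1 for s in signals if s == "green")
    weight = bullish / len(signals)
    return 1.0 if weight >= 0.5 else weight

# Five hypothetical readings (trend, economy, valuation, major cycle, minor cycle)
print(market_exposure(["green", "green", "green", "pink", "green"]))  # stay invested
print(market_exposure(["pink", "pink", "green", "pink", "pink"]))     # defensive
```

The point of the vote is robustness: no single false positive forces a sell, just as no single medical test triggers surgery.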

Using this dynamic active-passive approach and developing an integrated market monitoring system, investors can achieve the dual objective of capital preservation in bad times and wealth accumulation in good times.

Concluding Remarks

The key findings from all four random-walk series are summarized below:

1. Modern finance assumes that all asset prices follow a random walk. The academics define reward as the mean return at the peak of a bell curve. Data taken from a variety of asset classes (Part 1 and Part 2) with return horizons from one day to 10 years are far too erratic to fit the random walk statistics.

2. The histograms of a variety of asset classes not only reject the bell curve, they do not fit any well-known analytical probability distribution. The types of randomness observed are akin to Frank Knight's "radical uncertainty", Donald Rumsfeld's "unknown unknowns" or Nassim Nicholas Taleb's "Black Swans".

3. In a multimodal histogram with no central tendency, mean and variance are ill defined. The mean-variance paradigm is unfit to depict real-world prices.

4. Modern finance misreads risk as volatility (Part 3). Volatility reflects diversity in market views when the act of buying or selling does not affect the price. Volatility facilitates trades, lubricates liquidity and alleviates financial market risks. In contrast, a market with a single dominant view creates a buyer-seller imbalance. Risks come from uni-directional price movements that freeze liquidity and exacerbate bubbles or panics.

5. Investment risk comes in many forms – market risk, geopolitical events, inflation, currency, interest rate, recession, etc. Regardless of the source, all risks lead to the same outcome – an unacceptable loss of income or capital, or both.

6. High uncertainties and radically random distributional gains are not risks, but represent abundant opportunities and widely scattered investment rewards.

7. I define investment reward as the cumulative probability-weighted gain; and investment risk as the cumulative probability-weighted loss. My new framework accurately captures all observed data and is applicable to probability distributions of any shape and form. More importantly, it has an intuitive appeal to investors.

8. The random walk is a theory. A theory is supposed to describe and explain empirical observations and to provide analytical formulas that predict the future. However, theories that hypothesize causation but disregard any aberration that does not fit their paradigms are theoretical landmines for all uninformed followers.

9. My gain-loss framework is not a theory, but a phenomenological model. It truthfully measures observations with statistical tools but offers no causation explanations or analytical formulas. As stated in Part 2, investors' adaptive behavioral dynamics render all analytical models in mathematical finance imprecise at best. An empirical model that objectively captures data with no theoretical bias is more practical for investors.

10. My new framework reveals that Mr. Market has a dual personality. He keeps his losses in confined, time-insensitive zones but lets his gains run wild and loose. We can exploit this asymmetry to beat Mr. Market at his own game.

11. The active-passive investment approach capitalizes on this gain-loss asymmetry and tilts the betting odds in our favor. We actively mitigate losses in the pink zones via regular market health checkups. Otherwise, we stay as passive investors and pick up the radically random profits Mr. Market leaves behind.

12. How can we detect Mr. Market's mood changes? My six rules-based market monitors demonstrate that early warning detection is possible.

13. Warren Buffett consistently beats the market. He could be a practitioner of the dynamic active-passive approach because his favorite holding period is "forever" (a passive investor) subject to his first and second rules of "don't lose money" (an active manager).

Theodore Wong graduated from MIT with a BSEE and MSEE degree and earned an MBA degree from Temple University. He served as general manager in several Fortune-500 companies that produced infrared sensors for satellite and military applications. After selling the hi-tech company that he started with a private equity firm, he launched TTSW Advisory, a consulting firm offering clients investment research services. For almost four decades, Ted has developed a true passion for studying the financial markets. He applies engineering design principles and statistical tools to achieve absolute investment returns by actively managing risk in both up and down markets. He can be reached at ted@ttswadvisory.com.

Appendix A: The Mean-Variance Paradigm

Modern finance defines reward as the mean return (also known as the expected value). Mean return is the cumulative probability-weighted return, defined as follows:

$$\text{Mean} = \int_{-\infty}^{+\infty} r \,\text{Prob}(r)\, dr \tag{A1}$$

where r is return (a continuous random variable) and Prob(r) is the Gaussian probability density function (PDF) of r. The integration limits are −∞ to +∞. The cumulative sum of Prob(r) is normalized to 100%.

Modern finance defines risk as volatility (also known as standard deviation). It is the cumulative probability-weighted root-mean-square (RMS) deviation of each r from the Mean. The mathematical formula for risk is:

$$\text{Volatility} = \sqrt{\int_{-\infty}^{+\infty} (r - \text{Mean})^2 \,\text{Prob}(r)\, dr} \tag{A2}$$

where Mean is given by Eqn (A1). The integration limits are −∞ to +∞.

The reward-to-risk ratio is therefore the mean divided by the standard deviation. Replacing r in Eqn (A1) and Eqn (A2) by the quantity r minus the risk-free interest rate (this quantity is also known as the equity risk premium), the reward-to-risk ratio becomes the Sharpe Ratio.
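For comparison, the Sharpe Ratio just described can be sketched in a few lines. The return series is a synthetic stand-in, and the 3% risk-free rate is an assumed value for illustration:

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0):
    """Sharpe Ratio: mean excess return divided by its standard deviation."""
    excess = np.asarray(returns) - risk_free
    return excess.mean() / excess.std(ddof=0)

# Hypothetical annual returns with ~7.7% mean and ~16% volatility
rng = np.random.default_rng(2)
returns = rng.normal(0.077, 0.16, 89)
print(f"Sharpe Ratio: {sharpe_ratio(returns, risk_free=0.03):.2f}")
```

Note that the numerator and denominator carry different meanings (location versus spread), which is exactly the contextual mismatch criticized in the main text.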

Appendix B: The Gain-Loss Framework

Investors view reward and risk from a gain-loss perspective. The best way to capture this intuitive view is with a pair of complementary formulas called "Expected Gain" and "Expected Loss". Investment reward is the expected gain, which is the sum of all probability-weighted positive returns. The formula is:

$$\text{Expected Gain} = \sum_{r^* > 0} r^* \,\text{Prob}^*(r^*) \tag{B1}$$

where r* is return (a discrete random variable) and Prob*(r*) is the observed probability of return r*. The cumulative sum of Prob*(r*) is normalized to 100%. The summation runs over returns from zero to +∞, so only positive returns (gains) are summed. In Eqn (A1), r and Prob(r) are Gaussian function variables. In Eqn (B1), r* and Prob*(r*) are measured data.

Correspondingly, investment risk is defined as the expected loss, which is the sum of all probability-weighted negative returns. The formula for expected loss is:

$$\text{Expected Loss} = \sum_{r^* < 0} r^* \,\text{Prob}^*(r^*) \tag{B2}$$

The summation runs over returns from −∞ to zero, so only negative returns (losses) are summed.

The reward-to-risk ratio is the expected gain divided by the expected loss, both of which are in percent.

# Random Walk Part 3 – What’s Wrong with Depicting Risk as Volatility?

Originally Published October 2, 2017 in Advisor Perspectives

This is the third of my four-part empirical research into the fallacy of the random walk view of investment reward and risk. Part 1 and Part 2 discussed the random walk's failure in portraying asset returns. A random walk depicts risk as volatility.

This article explains why this view is problematic. Risk and volatility are conceptually different. The bell curve understates actual risks by huge amounts. For asset returns that are not bell-shaped, volatility has no meaning. Finally, volatility is akin to noise, which alleviates rather than elevates risk.

In Part 4, I will define investment reward and risk mathematically. I will demonstrate how this probability-based framework enables investors to beat the S&P 500 total-return with less risk.

Risk and volatility are conceptually different

Volatility is a measure of uncertainty in a bell-shaped distribution. The academics define risk as uncertainty primarily for mathematical convenience. Glyn Holton argued that if risk were akin to uncertainty, then a man jumping out of an airplane without a parachute would face no risk because his death was 100% certain. Figure 1 illustrates Holton's argument in an investment context. The green dune at the bottom right is an investment with an expected value of 100% and a volatility of 10%. The pink spike on the left is another investment with an expected value of -99% and a volatility of 1%.

From a risk perspective, modern finance favors the pink investment, which offers one-tenth the volatility of the green one. Investors, however, would intuitively avoid the pink investment because they see 100% odds of a total loss with no chance of any gain. The academics view the green investment as riskier because it is 10-times more volatile than the pink one. Investors would jump on the green one because they see near-100% odds of doubling their money with zero chance of any loss.

Figure 1 illustrates the conceptual flaw in the random walk notion of risk. Outcome uncertainty is not necessarily risk. Risk is an unacceptable loss.

Random walk grossly underestimates risk

Benoit Mandelbrot points out in The Misbehavior of Markets that the Gaussian statistics (random walks) behind modern finance grossly underestimate the probabilities of many stock market crashes. To find out how far off the random walk predictions are, I computed the probability density function (PDF) of the daily returns of the Dow Jones Industrial Average (DJIA) using a measured mean of 0.03% and a standard deviation of 1.24% (from Figure 7 in Part 1). The dark blue curve in Figure 2 is the random walk PDF showing the model's predictions of the DJIA's one-day price changes over 116 years (data sources: MetaStock and Yahoo Finance).

The inset table in Figure 2 lists 10 of the worst one-day drops in 116 years. On October 19, 1987, for instance, the DJIA dropped -22.61%, a decline that the random walk PDF predicted to have a probability of only 4.2E-075 (the fifth column in the table). To grasp the magnitude of this miss, I use the age of our universe (roughly 14 billion years or 1.4E+010) as the unit of measure. A 4.2E-075 probability means that the Black Monday crash should have occurred only once in a billion billion billion billion billion billion billion times the age of our universe – one followed by 64 zeros. The probability is practically zero.

In reality, the actual frequency of occurrence of each of the seven worst events, including Black Monday, is 3.2E-005 or 0.0032% (one event out of 31,793 trading days from 1/2/1900 to 12/30/2016). A probability of 0.0032% may be small, but the impact of a one-day crash of -23% was enormous. To make things worse, those bad days tended to cluster – five times from 1928 to 1933, twice in 1987 and twice in 2008.
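The vanishing probability the random walk assigns to Black Monday can be reproduced with the complementary error function. The sketch below computes the cumulative tail probability P(r ≤ -22.61%) under the quoted Gaussian parameters; it lands in the same vanishing order of magnitude as the article's 4.2E-075 figure (the exact value depends on whether one takes a point density or a tail probability):

```python
import math

def gaussian_tail(x, mean, sd):
    """P(X <= x) for a normal distribution, via the complementary error function."""
    z = (x - mean) / sd
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

# Daily DJIA parameters from Part 1: mean 0.03%, standard deviation 1.24%
p = gaussian_tail(-22.61, 0.03, 1.24)
print(f"Random-walk probability of a -22.61% day or worse: {p:.1e}")
```

Compare that with the observed frequency of 3.2E-005 – a gap of roughly seventy orders of magnitude.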

Volatility is a meaningless metric except for bell-shaped distributions

Even if we disregard the conceptual flaw and ignore the huge errors of viewing risk as volatility, we still face an operational issue. Volatility only works as a statistical yardstick for an ideal bell curve. In the real world, most asset histograms are not bell-shaped. Figure 3 (taken from Part 1 and Part 2) shows five-year return histograms of four widely diverse asset classes (light blue bars). None of them resembles the corresponding random walk PDFs (dark blue curves). The notions of mean and variance are not workable in these wild histograms.

Modern finance doctrines such as Markowitz's portfolio selection, Sharpe's beta and Black-Scholes' risk neutrality view the world through a random walk lens describing everything in terms of means and variances. Figure 3 shows that in the real world, there is no central mean or recognizable variance. It comes as no surprise that many investment strategies based on these doctrines failed to protect investors against risk during financial market meltdowns.

Modern finance confuses risk with noise

The left chart in Figure 4 is a typical noise output of an electronic amplifier measured with an oscilloscope. The right chart is the corresponding histogram. Noise in an amplifier shares the same mathematical root as volatility in asset prices.

I constructed a simulated price series using the random walk pricing model outlined in the Appendix. Figure 5 is a simulated DJIA index computed with Equation (A1), along with its daily returns computed with Equation (A2). The band labeled "Random-Walk Daily % Change" in Figure 5 looks very similar to the amplifier noise band in Figure 4 because both are Gaussian. What the academics define as investment risk is what electrical engineers call white noise.

Then what is risk?

Figure 6 shows the actual DJIA and its daily percent changes from 1929 to 1934. The white noise band labeled "Random-Walk Daily % Change" in Figure 5 is uniform and well behaved within ± 1.5%. The band labeled "Actual DJIA Daily % Change" in Figure 6 is erratic and thorny with sharp spikes extended beyond ± 10%. The uniform band in Figure 5 is noise. The negative spikes in Figure 6 are risks. Noise is akin to volatility – uncertainty in the outcomes. Risk is associated with the harmful impact from a negative outcome.

Finally, I use two NASA incidents to illustrate the key difference between noise and risk. The Hubble Space Telescope had a defective mirror that caused spherical aberration, which distorted its signals. As a result, Hubble's signals were noisy. The Space Shuttle Challenger broke apart shortly after launch due to faulty O-rings. As a result, seven astronauts lost their lives. The Hubble fiasco was a noise issue leading to fuzzy images. The Challenger disaster was a risk matter resulting in deaths. Noise and risk are materially different.

Concluding remarks

The academics envision investors strolling down Wall Street with random movements, but at a smooth and orderly pace. Such tidy bell-shaped randomness is volatility. In reality, investors do not walk in small steps; they jog, run, jump and even take giant leaps, and deep dives, creating turbulence and chaos in the financial markets. Such violent footprints are risks.

Volatility measures fluctuations and risk signifies dangers. They are not synonymous as modern finance claims. Fischer Black, in his 1986 article entitled "Noise", argued that volatility was a risk-reducing agent. When there is diversity (noise) in market views, prices will fluctuate, giving rise to volatility. Volatility is the mechanism through which buyers and sellers carry out normal price discoveries. Noise facilitates transactions and volatility alleviates risk in the financial markets. In contrast, a market that has only one prevailing belief is noise-free. Prices no longer fluctuate but charge in the direction of the dominant view. When the bears coalesce and become the majority, liquidity will dry up, panic selling will set in and risk will surge.

By defining risk as volatility, the random walk theorists create a paradox. When price fluctuations are less than ± one standard deviation from the mean, the bell curve has some resemblance to the real world. In the central region of the bell curve, however, volatility is not risk but a risk stabilizer. At the tail ends of the bell curve, where real risks reside, volatility is a meaningless metric because Gaussian statistics no longer work.

In the next and final article, I will define both investment reward and risk using a framework different from the random walk paradigm. Like the random walk, my new framework has firm probability underpinnings. Unlike the random walk, my new formulas are applicable to all probability distributions regardless of their shapes and forms. More importantly, the new model yields new insights on how to increase the odds of beating the S&P 500 total return with less risk – a direct challenge to the Efficient Market Hypothesis.


Appendix: The Random-walk Pricing Model

The random walk pricing model relates the price at time T + ΔT, Price(T+ΔT), to the price at time T, Price(T), with the following formula:

Price(T+ΔT) = Price(T) × [1 + Mean ± Volatility]. (A1)

Dividing both sides of Equation (A1) by Price(T) and rearranging terms, one obtains:

Price(T+ΔT) / Price(T) − 1 = Mean ± Volatility. (A2)

Equation (A1) is a simulated random walk price series. Equation (A2) is the return (rate of change) on that price series within a time increment of ΔT.

In the above two equations,

Mean = Expected Value (Cumulative Probability-Weighted Return); and (A3)

Volatility = N × Standard Deviation, (A4)

where the sign and the amplitude of N can be simulated using a Gaussian noise generator with a bell-shaped probability distribution.

To create a daily (ΔT = 1 day) DJIA time series, I use a mean of 0.03% and a standard deviation of 1.24%, the same parameters obtained from Figure 7 in Part 1, which were used to compute the DJIA daily return PDF in Figure 2.
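The pricing model above is straightforward to simulate. A minimal sketch using the same daily parameters (the 10,000 starting level is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
mean, sd, days = 0.0003, 0.0124, 1000   # daily mean 0.03%, volatility 1.24%

# Equation A4: N drawn from a Gaussian noise generator
noise = rng.normal(0.0, 1.0, days)
daily_returns = mean + sd * noise                  # Equation (A2)
prices = 10000 * np.cumprod(1.0 + daily_returns)   # Equation (A1), compounded

# The simulated return band stays within a few standard deviations,
# unlike the >10% spikes of the actual DJIA in 1929-1934
print(f"worst simulated day: {daily_returns.min():.2%}")
```

Over 1,000 simulated days, the worst daily move is only a few percent – the well-behaved white-noise band of Figure 5, never the thorny spikes of Figure 6.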

# Random Walk Part 2 – Does Any Asset Price Fit the Bell Curve?

Originally Published September 25, 2017 in Advisor Perspectives

This is the second of my four-part empirical research into the fallacy of the random walk paradigm in investments. Part 1 focused on the failure of the random walk to depict the Dow Jones Industrial Average. In this article, I expand the study to six other diverse asset classes: large-caps (the S&P 500), small-caps (the Russell 2000), emerging markets, gold, the dollar and the 10-year Treasury bond. Asset prices do not walk in tiny steps along an orderly bell curve, but often take giant leaps, leaving chaotic turbulence behind.

What are the common features among seven widely diverse asset classes? Why are asset prices so hard to pin down by random walk or other analytical and descriptive models? What are the investment implications if asset returns do not fit the bell curves?

Part 3 will deal with the random walk's flaws in defining investment risk. Part 4 offers a new reward-risk framework as an alternative to the random walk paradigm. The new framework yields new insights on how to logically beat the S&P 500 total return.

Empirical evidence against the random walk model

In Part 1, I presented price return histograms of the Dow Jones Industrial Average from 1900 to 2016. In this paper, I extend the empirical research to six other assets. Detailed results are in the Appendix. Figure 1 summarizes the five-year return histograms (light blue bars) of seven asset classes – the DJIA from Part 1 and six other assets from the Appendix. No return histogram has any resemblance to its corresponding random walk probability density function (PDF, the dark blue curve). All random walk PDFs are unimodal (with a single central mean) with matching wings on both sides. All empirical distributions have multiple peaks with no central symmetry.

Common themes among different asset classes

1. None of the return histograms of the seven asset classes follows the random walk bell curve for periods longer than one day. The longer the return periods, the less bell-shaped they become. Real world prices do not follow a random walk.

2. Even for the one-day returns, all histograms show asymmetric fat tails. Random walk theorists have no explanation for them and treat them as anomalous statistical outliers.

3. Return distributions for time horizons beyond one year have neither a single mean nor a definable variance. The random walk's mean-variance paradigm does not reflect reality.

4. Random walk's bell curve underestimates the probabilities of both large losses and gains beyond ± one standard deviation. This has dire consequences in risk management.

5. It is standard practice to scale returns in different time horizons by the number of trading days, and volatilities by the square root of trading days. Data show that neither returns nor volatilities follow such scaling rules.

Why are asset prices so elusive for the theoreticians to capture?

1. Random walk is not the only model that fails to explain price behaviors. Power laws, game theory, agent-based models, behavioral finance and the adaptive market hypothesis are all incomplete theories at best. The rational beliefs equilibria model appears to have the potential for solving the asset-pricing puzzle, but it is still too early to tell.

2. Why are prices so hard to pin down? The culprits are the agents involved. Unlike mindless pollens and particles that blindly obey physical laws, humans write their own rules, adapt to their own mistakes and adjust their actions to new encounters with a mix of unscripted rational and emotional responses.

3. Investors appraise prices from the perspective of an imagined future shaped by their past memories and current value judgments. These mental chain reactions transform their decision-making processes into highly complex systems. Our imagined future today can in turn create a new future that could then alter the current reality. The feedback loops continue in real time and exacerbate the already complex systems.

4. To model asset prices properly, economists must track all these nonlinear, multifaceted and interactive dynamics. By comparison, modeling the physical world is a trivial task.

What are the implications for investors?

1. There are many types of randomness described by different analytical probability density functions (PDFs). The bell curve is one of the most well behaved kinds. Figure 1 exhibits a wild form of randomness that does not fit any known PDF. Frank Knight called them "radical uncertainties". Donald Rumsfeld named them "unknown unknowns". Nassim Nicholas Taleb referred to them as "Black Swans".

2. Because asset prices do not follow the bell curve, it comes as no surprise why many random-walk based doctrines stopped working when prices plunged beyond one standard deviation. Markowitz's mean-covariance matrix, Sharpe's beta, Fama's factors and Black-Scholes' volatility neutrality all broke down during market crashes.

3. Because asset prices are radically random, investors should be skeptical of all market forecasts. Figure 2 shows the 10-year return histogram of the S&P 500 from 1928 to 2016. If you think that forecasting returns on a widely dispersed bell curve is challenging, then mining for predictive patterns in the erratic randomness of Figure 2 is a fool's errand because the probabilities of a +200% gain and a -20% loss are nearly the same.

4. Additionally, statistical flaws are common in many long-term forecasts. For example, analysts use overlapping data to compute the CAPE-based regression lines (here) and researchers gauge secular market cycles with sample sizes of less than a dozen (here). These forecasters not only try to predict something that is statistically unpredictable, they do so with incorrect math.

Concluding remarks

We live our lives every day without forecasting when the next big earthquakes will hit our hometowns. We mitigate quake risks by upgrading building codes and buying insurance.

Likewise, because asset prices exhibit radical randomness, predicting future returns is futile. Investors should focus on managing risk. To manage investment risk properly, however, we must first identify what risk truly is. Unfortunately, the academics view risk as volatility through a distorted random walk lens. In Part 3, I will point out the conceptual flaws and operational pitfalls of viewing volatility as risk. Mistaking risk as volatility has dire financial consequences for investors.

In Part 4, I will present a new framework for defining reward and risk as an alternative to the random walk paradigm. The new reward-risk framework offers investors a probabilistic path for beating the S&P500 total return – a blasphemy in the view of the efficient market orthodoxy. We win not by developing models with superior forecasting ability, but by beating Mr. Market at his own game. Stay tuned for details.


Appendix: Returns of six Asset Classes – Actual Histograms vs. Random Walk PDFs

Section (A) shows linear and logarithmic histograms of short-horizon returns of three equity assets (the S&P 500, the Russell 2000 and emerging markets). Section (B) shows linear and logarithmic histograms of short-horizon returns of three alternative assets (gold, the dollar and the 10-year Treasury bond). Section (C) shows linear and logarithmic histograms of long-horizon returns of the S&P 500, the Russell 2000 and emerging markets. Section (D) shows linear and logarithmic histograms of long-horizon returns of gold, the dollar and the 10-year Treasury bond. Data sources are FRED, Yahoo Finance and MetaStock.

(A) Short-horizon returns of large-caps, small-caps and the emerging markets

Figure A1 shows linear histograms of the returns in short-term horizons – daily, weekly, monthly and quarterly (top to bottom) – across different equity styles – the S&P 500 (big-caps), the Russell 2000 (small-caps) and emerging markets (left to right). Big-caps and small-caps (left and center) bear some resemblance to the random walk probability density functions (PDFs). However, for the emerging markets (right), fat tails are visible even in monthly and quarterly returns. Gaps between the data and the PDFs near the peaks are noticeable in all three assets.

Figure A2 displays the same data in logarithmic histograms. All log histograms exhibit fat tails on both sides of the peaks. The random walk PDFs fail to account for the high probabilities at both return extremes across all three assets.

(B) Short-horizon returns of gold, the dollar and the U.S. Treasury bond

Figure A3 shows three different alternatives – gold, the dollar and the 10-year Treasury bond (left to right) in linear histograms of short-horizon returns – daily, weekly, monthly and quarterly (top to bottom). Deviations from the random walk PDFs grow with increasing time horizons. Gold and the dollar have wider spreads than the random walk PDFs while the 10-year Treasury has tall spikes near the peaks.

Figure A4 displays the same data in a log scale. All histograms exhibit higher probabilities at both extremes than those predicted by the PDFs. The gold and bond distributions are asymmetric and skewed to the right relative to the PDFs.

(C) Long-horizon returns of large-caps, small-caps and the emerging markets

Figure A5 shows linear histograms of long-horizon returns – one year, five years and ten years (top to bottom) – across three equity indices – the S&P 500, the Russell 2000 and emerging markets (left to right). All charts show multiple peaks and wildly different spreads.

Figure A6 shows the same data in a log scale. Beyond one year, the data distributions show no central tendency and they are skewed to the right. Actual returns offer much better upside opportunities than what random walk predicts.

(D) Long-horizon returns of gold, the dollar and the U.S. Treasury bond

Figure A7 shows linear histograms of long-horizon returns – one year, five years and ten years (top to bottom) – across three alternatives – gold, the dollar and the 10-year Treasury bond (left to right). None of these plots resembles the random-walk bell curve.

Figure A8 shows log histograms of the same data in Figure A7. Most of the fat tails are skewed to the right versus the perfectly symmetric random walk PDFs.

# Random Walk Part 1 – A Random Walk down a Dead-end Street

Originally Published September 18, 2017 in Advisor Perspectives

This is the first of a four-part empirical research study into the fallacy of the “random walk” view on investment reward and risk. The first two parts focus on the inefficacy of such a view on asset price behaviors. Part 3 deals with random walk's deficiency in characterizing risk. The final paper presents a new framework for defining and managing investment reward and risk.

Modern finance blossomed after the 1960s and brought us Nobel Prize-winning ideas such as Modern Portfolio Theory, the Capital Asset Pricing Model, Arbitrage Pricing Theory, the Black-Scholes option formula, the Efficient Market Hypothesis and others. They all have one common theme – asset prices move in a random walk, a term popularized by Burton Malkiel's classic book entitled "A Random Walk Down Wall Street."

What is a random walk? How does one prove that prices follow a random walk? Are short-term random walks different from long-term ones? Do prices have memories? Do returns scale with time and volatilities scale with the square root of time? Can prices still be random if they do not follow a random walk? Why are prices so difficult to model? Are some passive index fund advocates like Jack Bogle, Burton Malkiel and Eugene Fama insincere when they favor certain regions, assets or factors over others if they truly believe in random walk and efficient markets?

I address some of these questions using the daily closes of the Dow Jones Industrial Average (DJIA) from 1900 to 2016. In Part 2, I extend the quest to six asset classes – large-caps (the S&P 500), small-caps (the Russell 2000), emerging markets, gold, the dollar and the 10-year Treasury bond. I will challenge many modern finance doctrines that are based on the random walk paradigm.

What is a random walk?

In 1900, Louis Bachelier planted the mathematical seeds of random walk. Some 50 years later, Harry Markowitz and others cultivated his seeds into a blooming field called modern finance. Bachelier theorized that prices fluctuated in a Brownian motion – a term used by Albert Einstein in the title of his 1905 paper. Einstein reasoned that molecules diffuse in a manner resembling the way pollens jiggle in water – an observation first made by botanist Robert Brown in 1827.

The terms random walk, Brownian motion (arithmetic and geometric) and Gaussian distribution (normal and log-normal) have subtle mathematical differences. In modern finance, however, these terms are used interchangeably by Markowitz, Osborne and Fama. In this and the three follow-up articles, the term random walk refers to Gaussian statistics (bell-shaped distributions).

Modern finance is built on the notion that rational investors speculate in an efficient market the way pollens and molecules wander mindlessly in a fluid. Hence, the distributions of all asset price returns should follow bell-shaped probability density functions (PDFs). To find out how well the random walk model reflects reality, I compare the theoretical PDFs to actual return histograms over various time horizons from one day to 10 years using the daily closing prices of the DJIA from 1900 to 2016. In all time horizons, two types of histogram are used – linear and logarithmic. A tutorial on the basics of the probability density function and the construction of both linear and logarithmic histograms is presented in the Appendix.

Figure 1 illustrates graphically several key random walk assertions. Asset returns follow a bell curve. The peak of the bell curve (the mean) grows with time and the width (volatility) spreads out with the square root of time. Volatility and risk are synonymous. To earn higher returns, one has to take more risk. Do these claims have empirical support? Let's fact check each in more detail.

Do short-horizon returns follow a random walk?

Figure 2 shows linear histograms of short-horizon returns from daily, weekly, monthly to quarterly prices (top to bottom). The light blue bars are data and the dark blue curves are the random-walk PDFs. Even though both the data distributions and the random walk PDFs appear to have similar shapes, they do not match exactly.

Figure 3 shows the same data with the y-axes in a log scale. The ranges on the x-axes are expanded to accommodate the extended log-scale ranges. In all return horizons, the PDFs only match the data near the central regions of the histograms. All PDFs in all four time horizons miss the fat tails visible on both positive and negative return extremes.

Do long-horizon returns follow a random walk?

Figure 4 shows histograms of returns in 1-year, 5-year and 10-year horizons (top to bottom). Beyond one year, empirical distributions are no longer bell-shaped. They are multi-modal (more than one peak) and asymmetric (uneven sides). The random walk model only works for bell-shaped distributions with a single central mean and a definable variance. For horizons longer than one year, the model clearly fails to reflect realities.

Figure 5 shows the same data as those in Figure 4 but displays them in a log scale. Fat tails are prominent in all three charts. Positive fat tails dominate negative ones. The random-walk PDFs not only underestimate risk (left fat tails), but also understate potential rewards (right fat tails). These shortfalls have huge implications for investors. I will cover these topics in detail in the second and third papers.

What is random walk's IID assumption?

Random walk theorists assume that asset prices follow an independent and identically distributed (IID) stochastic process. The term "independent" means that prices today are not coupled to prices in the past. The term "identically distributed" means that histograms taken from different time periods should have similar looks so that their means and variances can be summed or subtracted.
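
What "identically distributed" implies can be sketched with synthetic data: statistics measured on different segments of a truly IID Gaussian series should agree closely. This is an illustrative check on simulated returns, not a reproduction of the article's DJIA analysis; the parameters and segment sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# A synthetic IID Gaussian return series standing in for actual price data.
returns = rng.normal(loc=0.0003, scale=0.0124, size=4 * 7500)
segments = returns.reshape(4, -1)     # four equal segments, like the four eras

# Under IID, per-segment means and standard deviations cluster tightly.
for i, seg in enumerate(segments):
    print(f"segment {i}: mean={seg.mean():.5f}, std={seg.std():.5f}")
```

For genuinely IID data the per-segment statistics are nearly identical; the article's point is that the actual DJIA segments in Figure 6 fail exactly this kind of comparison.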

To fact check the IID assumption, I divided 116 years of the daily DJIA data into four 30-year segments: 1900-1929, 1930-1958, 1959-1987 and 1988-2016. Figure 6 shows that the random walk PDFs (dark blue curves) of all five periods are identical according to the IID assumption. The top four charts in Figure 6 show five-year return histograms in each of these four time segments. The bottom chart shows the five-year return histogram of the entire 116 years.

The actual return distributions (light blue bars) are totally different in all five periods. Each time period's histogram has multiple peaks, but none of the peaks lines up with those in the other periods. The dispersions within all five time segments are drastically different. Their variances are not even definable, let alone additive. Empirical data clearly refute the random walk assumption that asset prices are independent and identically distributed.

Do asset prices have memories?

Through an IID lens, prices should have no short-term or long-term memories. The second chart in Figure 6 has an unusually long left tail running from -55% to -80%. No other period exhibits a comparable pattern. One plausible explanation for such a gigantic tail is that the dark memories of the 1929 Great Depression haunted investors for the next 30 years. Prices have memories because investors do.

The academics assume that investors are mechanical robots like pollens and molecules, with no memory of their past and no aspiration for their future. Pollens floating in water may have no memory and no direction, but investors speculating on Wall Street remember the past and contemplate the future. Investors not only act according to their own experiences and dreams, they can also be affected by the actions of other investors. The random walk assumption that humans act like mindless particles is naïve, leading to many unsound investment concepts and practices. One such erroneous practice is the time-scaling methodology discussed next.

The temporal scaling rules for return and volatility

The notion of random walk came from Brown's observation of the spatial dispersion of pollens and Einstein's depiction of the spatial diffusion of molecules. In a spatial random walk, the mean walking distance is proportional to the number of steps and the deviation from the mean is proportional to the square root of the number of steps. In a price random walk postulated by Bachelier, space is replaced with time. By analogy, the mean of asset returns should scale with time and the volatility should scale with the square root of time.

For example, to annualize a daily return, analysts are told to multiply the daily return by 252 (or, more precisely, to compound it daily). To annualize daily volatility, analysts multiply the daily volatility by the square root of 252. The number 252 is the average number of trading days in a year. Such scaling rules are valid if and only if prices follow a random walk. If prices do not adhere to a bell curve, these normalization standards adopted by academics and analysts will certainly lead to erroneous results.
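
The conventional scaling rules can be written out directly. The daily mean and volatility below are the DJIA figures quoted in Part 1; the calculation is a sketch of the convention the article describes, not an endorsement of it.

```python
import math

# Random-walk scaling from daily to annual figures (252 trading days):
daily_return = 0.0003          # 0.03% mean daily return (article's DJIA figure)
daily_vol = 0.0124             # 1.24% daily standard deviation
trading_days = 252

annual_return = (1 + daily_return) ** trading_days - 1   # daily compounding
annual_vol = daily_vol * math.sqrt(trading_days)          # square-root-of-time rule

print(f"annualized return: {annual_return:.1%}")   # about 7.9%
print(f"annualized volatility: {annual_vol:.1%}")  # about 19.7%
```

These formulas are only as good as the IID bell-curve assumption behind them, which is precisely what Figures 7 and 8 call into question.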

Figure 7 compares the random walk prescribed returns and volatilities (teal-blue bars) to the actual returns and volatilities (sand-brown bars) for various time intervals from one day to 2,520 days (10 years). Returns and volatilities measured from daily to quarterly intervals more or less agree with the model. As the interval reaches one year, however, divergences emerge. Beyond one year, the gaps between data and model widen with increasing time horizons.

What causes the scaling rules to fail? Figure 8 shows that for short horizons from one day to one year (the top four log histograms), returns roughly adhere to the PDFs if fat tails are ignored. For time horizons longer than one year (bottom two charts), the linear histograms are no longer bell-shaped. That's when the random-walk time scaling rules break down.

Concluding remarks

Empirical evidence from Figures 7 and 8 clearly shows that the random walk cartoon portrayed in Figure 1 is nothing but an illusion. For time horizons shorter than a year, fat tails outside the bell curves are prominent. For periods longer than one year, mean and volatility do not obey the random walk time-scaling rules. When distributions are no longer Gaussian, the notions of mean and volatility are statistically meaningless.

The 1929 crash was conveniently cast away by the academics as a one-off event because it was not on the bell curve. Then in 1987, an even bigger shock came on the October 19 Black Monday. Again, it was treated by the random walkers as an anomaly. In 1997, the market was hit by the Asian currency collapse; in 1998, the Russian sovereign debt default; and in 2000, the U.S. dot-com crash. Eight short years later, in 2008, the global financial system had a total meltdown. The professors, however, continued to call these huge recurring shocks anomalous outliers. Data are anomalous only because they cannot be explained by the models. When a model fails to explain the data, physical scientists have no choice but to reject the model. Economists, on the other hand, stubbornly cling to their pet theories and selectively dismiss all contradicting evidence as anomalies.

There are many types of randomness besides the random walk. Benoît Mandelbrot in 1963 used power-law distributions (also known as Pareto-Lévy distributions) to account for fat tails observed in cotton prices that the random-walk model missed. The academy was slow to accept Mandelbrot's model because of its unwieldy math, such as infinite variance and divergent higher central moments. Paul Cootner, editor of "The Random Character of Stock Market Prices," had the candor to admit that "If Mandelbrot is right, almost all of our statistical tools are obsolete. Before consigning centuries of work to the ash pile, I would like to have some assurance that all our work is truly useless." If Cootner were alive today, he would have the assurance he seeks from piles of data invalidating the random walk statistics.

Random walk is the mathematical bedrock of modern finance. If the math is built on shaky ground, then all the economic theories and investment models constructed on the modern finance foundation could be theoretical landmines.

Is the Dow Jones Industrial Average the only index that does not follow random walk? In part 2, I will fact check a broad basket of asset prices including those of large-caps, small-caps, emerging markets, gold, the U.S. dollar and the U.S. Treasury bond. The empirical findings yield new insights on the true nature of asset price behaviors.


Appendix: Probability Density Function; Linear and Logarithmic Histograms

An excellent review on the probability theories applicable to the financial markets can be found in this reference.

The random walk probability density function (PDF) is at the heart of all Gaussian distributions. An asset price return histogram is a plot of the probability distribution of all observed returns. The x-axis of a return histogram is constructed by grouping all returns into small but equal bins. The y-axis displays the probability in each bin. Figure 1A is a histogram of all annual returns of the DJIA from 1900 to 2016. To find the probability of an annual return of +10%, for instance, go to the +10% tick mark on the x-axis and trace a vertical line up until it intercepts the blue PDF curve. Then, moving horizontally across to the y-axis, one finds a probability value of 2.25%.
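
The histogram construction described above can be sketched as follows. The synthetic return series, bin width and bin range here are illustrative stand-ins for the DJIA annual returns, not the article's actual data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic annual returns standing in for the DJIA series (mean 7.3%, std 17.1%).
returns = rng.normal(loc=0.073, scale=0.171, size=117)

# Group returns into equal-width bins; the y-axis is the probability per bin.
bin_edges = np.arange(-0.80, 2.00, 0.05)           # 5%-wide bins (assumption)
counts, _ = np.histogram(returns, bins=bin_edges)
probabilities = counts / counts.sum()               # relative frequencies

# Read off the probability for the bin containing a +10% return:
idx = np.searchsorted(bin_edges, 0.10, side="right") - 1
print(f"P(return in {bin_edges[idx]:.0%} to {bin_edges[idx + 1]:.0%} bin) "
      f"= {probabilities[idx]:.2%}")
```

Plotting `probabilities` against the bin centers on a linear y-axis gives a linear histogram; switching the y-axis to a log scale gives the log histogram that exposes the fat tails.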

The empirical return distribution in Figure 1A appears to match the model PDF fairly well. But appearances can be deceiving. A random walk PDF is bell-shaped only if the y-axis is displayed in a linear scale. A linear histogram suppresses all probability contents beyond one standard deviation from the mean. Figure 1B is a log histogram, which removes such scale distortion and shows all probability contents in ratio proportions. It displays the same contents as Figure 1A but in a log scale. The differences are striking. Beyond two standard deviations on both sides, fat tails eclipse the random walk PDF. The gaps continue to widen as the PDF curve declines exponentially with increasing gains and losses.

In both Figures 1A and 1B, the light blue bars are the actual DJIA annual returns. The dark blue curves are the computed random walk PDFs based on a measured mean (the expected value of returns) of 7.3% and a measured volatility (annualized standard deviation of returns) of 17.1%. The green bars in the central regions are the probabilities within ±2 standard deviations, which covers 95.5% of the total area under the PDF curves. The ±2 standard deviations from the mean are, respectively, +41.5% (7.3% + 2 x 17.1%) and -26.9% (7.3% - 2 x 17.1%).

Linear and log histograms are complementary. Linear histograms better reveal gaps in the central region. For instance, Figure 1A shows that the odds of returns from 2% to 7% are higher than those predicted by the random walk PDF (red box). Such discrepancies are hidden in Figure 1B. Log histograms, like Figure 1B, expose fat tails at both extremes that are invisible in a linear plot. With both types of displays, one can detect discrepancies over the entire return spectrum.

# Modeling Cyclical Markets – Part 3

Originally Published November 28, 2016 in Advisor Perspectives

In Part 1, I introduced my Primary-ID model that identifies major price movements in the stock market. In Part 2, I presented Secondary-ID, a complementary model that tracks minor price movements. In this article, I combine these two rules- and evidence-based models into a composite called Cycle-ID and discuss the virtue of a single model.

I examine the efficacy of Cycle-ID from three separate but related perspectives. The first area is the utility of the composite. What are the benefits of moving from a binary scale (bullish or bearish) of Primary-ID and Secondary-ID to a ternary scale (bullish, neutral or bearish) of Cycle-ID? The second topic is synergy. Can the composite perform better than the sum of the parts? Finally, how can we use Cycle-ID to custom-design strategies similar to those of risk-parity for the purpose of matching market return and risk? Do these more complex investment strategies add value?

Cycle-ID – a composite model for cyclical markets

All investors need to achieve a dual objective. The first is capital preservation by minimizing losses in market downturns. The second is wealth accumulation by maximizing market exposure during market upturns. Not losing money should always be the main investment focus but making money is why we invest in the first place. Investors often achieve one objective at the expense of the other. For instance, many so-called secular bears avoided the 2000 dot-com crash and/or the 2008 sub-prime meltdown, but missed the 2003-2007 and the 2009-2016 bull markets. My cyclical-market models are aimed at preserving capital during stormy seasons as well as accumulating wealth in equity-friendly climates.

The signal scores of both Primary-ID and Secondary-ID are binary: +1 is bullish and -1 is bearish. Cycle-ID is the sum of the two models and therefore its scores are +2, 0 and -2. What do the three Cycle-ID scores mean? A Cycle-ID score of +2 indicates that both primary and secondary price movements are positive. In other words, the stock market is in a rally phase (a positive Secondary-ID) within a cyclical bull market (a positive Primary-ID). A Cycle-ID score of -2 indicates that both the primary and secondary price movements are negative. Put simply, the stock market is in a retracement phase (a negative Secondary-ID) within a cyclical bear market (a negative Primary-ID). When Cycle-ID is at zero, the stock market is either in a correction phase (a negative Secondary-ID) within a cyclical bull market (a positive Primary-ID), or in a rally phase (a positive Secondary-ID) within a cyclical bear market (a negative Primary-ID). Since the two cycle models are in conflict, one would naturally assume that the market is neutral. However, there is a counterintuitive interpretation of the zero Cycle-ID score that I will present later.

Figure 1A shows the monthly S&P 500 (in black) in logarithmic scale along with the Cycle-ID score (in blue) from 1900 to September 2016. Figure 1B is Robert Shiller's cyclically adjusted price-to-earnings ratio (CAPE), from which the vector-based indicators of Primary-ID and Secondary-ID are derived. Figure 1C is similar to Figure 5A in Part 1 and Figure 1D is the same chart as Figure 4D in Part 2 but updated to the end of September. All green segments in Figures 1C and 1D represent +1 scores and all red segments, -1. The blue Cycle-ID score in Figure 1A is the sum of the Primary-ID and Secondary-ID scores at either +2, 0 or -2.

Figures 1C and 1D show that green segments overwhelm red segments in both duration and quantity. History shows that corrections in cyclical bull markets are more prevalent than rallies in cyclical bear markets. Therefore a zero Cycle-ID score is not really neutral, but has a bullish bias. This subtle difference in the Cycle-ID score interpretation can make a huge impact on the investment outcomes over an extended time horizon.

For ease of visual inspection, the contents of Figures 1A to 1D are zoomed in for the period from 2000 to September 2016, as shown in Figures 2A to 2D, respectively. The Cycle-ID score hit -2 several times during the protracted 2000-2003 dot-com meltdown. Cycle-ID also reached -2 in July 2008, months before the collapse of Lehman Brothers and the global financial system. During the bulk of the two cyclical bull markets, from 2003 to 2007 and from 2009 to 2016, Cycle-ID was at +2 most of the time. During the last 16 years, while market experts engaged in worthless debates on whether we were in a secular bull or bear market, Cycle-ID identified all major and minor price movements objectively and without ambiguity. The rules-based clarity and the evidence-based credibility of Cycle-ID enabled investors and advisors to meet their dual objective – capital preservation in bad times and wealth accumulation in good times.

Let's examine the hypothetical performance statistics to see if Cycle-ID effectively met the dual objective in 116 years.

Cycle-ID performance stats

The ternary scale of Cycle-ID allows for many different combinations of investment strategies, including the use of leverage and short positions. I intentionally selected a set of fairly aggressive strategies for the purpose of stress-testing Cycle-ID. This aggressive strategy set is meant to demonstrate Cycle-ID's potential efficacy; it is not an investment recommendation.

The aggressive strategies are translated into execution rules as follows. When the Cycle-ID score is at +2 (the market is in a rally mode within a bull market), exit the previous position and buy an S&P 500 ETF with 2x leverage (e.g. SSO) at the close in the following month. When the score is at -2 (the market is in a retracement phase within a bear market), exit the previous position and buy an inverse unleveraged S&P 500 ETF (e.g. SH) at the close in the following month. When Cycle-ID is at zero, close the previous position (either leveraged long or unleveraged short) and buy an unleveraged long S&P 500 ETF (e.g. SPY) at the close in the following month. This mildly bullish interpretation of the zero score is based on the evidence that over 100 years cyclical bull markets significantly outnumber their cyclical bear counterparts, as shown in Figure 1C. Bull-market corrections also outnumber bear-market rallies, as shown in Figures 1C and 1D. As a result, I treat a zero Cycle-ID rating as a bullish call rather than a neutral market stance in the stress test.
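
The three execution rules can be condensed into a short sketch. The score values and ETF tickers (SSO, SH, SPY) come from the text; the function name and structure are illustrative, not the author's implementation.

```python
def position_for_score(cycle_id_score: int) -> str:
    """Map a Cycle-ID score to the holding for the following month."""
    if cycle_id_score == 2:      # rally within a cyclical bull market
        return "SSO"             # 2x leveraged long S&P 500
    if cycle_id_score == -2:     # retracement within a cyclical bear market
        return "SH"              # unleveraged inverse S&P 500
    return "SPY"                 # zero score: treated as mildly bullish

for score in (2, 0, -2):
    print(score, "->", position_for_score(score))
```

The zero-score branch encodes the counterintuitive bullish bias discussed above: conflicting primary and secondary signals default to an unleveraged long position rather than cash.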

Figure 3A is the same as Figure 1A with the aggressive set of strategies specified in blue on the upper left. Figure 3B shows that the cumulative return of Cycle-ID is 20.8% compounded over 116 years, well above the compound annual growth rate (CAGR) of Secondary-ID at 12.8% and Primary-ID at 10.4%. The equity curves of Primary-ID and Secondary-ID are those presented in Parts 1 and 2, respectively; they are updated to the end of September and shown here as references. The S&P 500 compounded total return of 9.4% is the performance benchmark for comparison.

Figures 4A and 4B display contents similar to those in Figures 3A and 3B, except the base year for the equity curves is changed from 1900 to 2000. Despite the two drastically different timeframes – 116 years in Figure 3 and 16 years in Figure 4 – the relative CAGR gaps between Cycle-ID and the S&P 500 total return are nearly identical, roughly 1100 bps in both periods. Risk characteristics are also similar in the two periods: Cycle-ID has a lower maximum drawdown but a 1.5x higher volatility (annualized standard deviation) than the S&P 500. The consistency in both the CAGR edge and the risk gap suggests that Cycle-ID has had a relatively stable value-adding ability for over a century.

Simulating leveraged and inverse indices

I now digress briefly to describe the methodology for computing the leveraged and inverse proxy indices used in the stress test presented above. The traditional ways to set up such positions are by borrowing capital from a margin account and by selling borrowed equities. Leveraged and inverse ETF products did not exist in 1900 but are readily available today. Marco Avellaneda and Stanley Zhang developed the appropriate formulae for simulating leveraged and inverse time series from an underlying index. Their algorithms can be used to compute the leveraged and inverse S&P 500 proxies as follows: a 2x leveraged S&P 500 index is computed by multiplying each month's ending value of the index by the quantity one plus twice the month-to-month percentage change of the S&P 500; an inverse S&P 500 index is computed by multiplying each month's ending value of the index by the quantity one minus the month-to-month percentage change of the S&P 500. In my backtests, both time series are assumed to be rebalanced monthly.
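The two formulae can be sketched in a few lines. This is a minimal reconstruction of the monthly-rebalanced proxy construction described above, using a toy series of monthly closes rather than real data; beta = 2 gives the 2x leveraged proxy and beta = -1 gives the inverse proxy.

```python
# Compound a proxy index at beta times the underlying's monthly
# percentage change, rebalanced monthly (after Avellaneda and Zhang).

def simulate_proxy(monthly_closes, beta, start=100.0):
    proxy = [start]
    for prev, curr in zip(monthly_closes, monthly_closes[1:]):
        pct_change = curr / prev - 1.0          # month-to-month % change
        proxy.append(proxy[-1] * (1.0 + beta * pct_change))
    return proxy

sp500 = [100.0, 110.0, 99.0]            # toy monthly closes, not real data
two_x = simulate_proxy(sp500, 2.0)      # 2x leveraged proxy
inverse = simulate_proxy(sp500, -1.0)   # inverse proxy
```

A +10% underlying month lifts the 2x proxy by +20% and drops the inverse proxy by -10%, matching the verbal description above.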

To check the accuracy of the Avellaneda and Zhang algorithms, I simulated two ETF proxies and compared them to the closing prices of two widely held ETFs: a 2x leveraged S&P 500 ETF (ticker: SSO) and an inverse S&P 500 ETF (ticker: SH). All time series are rebalanced daily. The results are shown in Figures 5A and 5B. The tracking errors averaged over 10 years are 0.2% and 0.1%, respectively. These errors may be small, but they could compound over a longer time period. On the other hand, the two divergences have run in opposite directions, so the errors tend to cancel each other. In any case, the algorithms appear adequate for testing Cycle-ID for illustration purposes.

Returning to Cycle-ID, my intent is not to boast about the spectacular 20.8% CAGR shown in Figure 3B or to recommend a particular aggressive investment strategy. In fact, the high CAGR must be viewed with caution because fund costs/fees and tracking error could lower the hypothetical return. In addition, Figure 3B shows that the higher hypothetical CAGR comes with higher risks: Cycle-ID has a maximum drawdown almost as deep as that of the S&P 500 total return index, and its annualized standard deviation (SD) is 1.6x that of the market benchmark. Nevertheless, this simple exercise does underscore the alpha-generating potential of Cycle-ID.

Besides the aggressive set of strategies (2x long and 1x short), I also tested various combinations of leveraged and short positions. The best way to visualize absolute and risk-adjusted returns among different strategies is to plot CAGR against two risk measures – maximum drawdown (Figure 6A) and volatility (Figure 6B). In Figures 6A and 6B, the green curves represent long-only strategies (no shorts) with leverage levels ranging from unleveraged (1.0 L) to 2x leveraged (2.0 L). The blue curves show the effects of adding a short strategy (1x short) into the mix while varying the degree of long leverage. The preferred corner is at the top left, showing the highest return with the lowest risk. The red dots represent the S&P 500, which sits far from the preferred corner.

When a short strategy is added (the blue lines), the rules are as follows. When the Cycle-ID score is zero, exit the previous position and buy the S&P 500 (e.g. SPY) at next month's close. When Cycle-ID is at +2, exit the previous position and buy the S&P 500 at various leverage levels (from 1.0 L to 2.0 L) at next month's close. When Cycle-ID is at -2, exit the previous position and buy an inverse S&P 500 ETF (e.g. SH) at next month's close. When no short is used (the green lines), the rules are as follows. When the Cycle-ID score is zero, buy the unleveraged S&P 500 (e.g. SPY) at next month's close. When Cycle-ID is at +2, exit the previous position and buy the S&P 500 at various leverage levels (from 1.0 L to 2.0 L) at next month's close. When the Cycle-ID score is -2, exit the previous position and buy the 10-year U.S. Treasury bond (e.g. TLT) at next month's close. The return while holding long positions is the total return with dividends reinvested. The return while holding the bond is the sum of the bond coupon and the bond price changes caused by interest rate movements.
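The two rule families can be parameterized by the long leverage level plus a short on/off switch. This is my own illustrative framing of the strategy grid behind Figures 6A and 6B; the position labels are hypothetical placeholders, not tickers.

```python
# One point on the strategy grid: a long leverage level (1.0 L to 2.0 L)
# plus a flag selecting the short (blue-line) or Treasury-bond
# (green-line) handling of a -2 score. Labels are placeholders.

def grid_position(score, leverage=1.0, use_short=False):
    """Next month's holding for one strategy on the grid."""
    if score == 2:
        return ("SPX_LONG", leverage)          # leveraged long S&P 500
    if score == -2:
        return ("SPX_INVERSE", 1.0) if use_short else ("TBOND", 1.0)
    return ("SPX_LONG", 1.0)                   # zero score: unleveraged long
```

Sweeping `leverage` from 1.0 to 2.0 with `use_short` off traces a green curve; turning `use_short` on traces the corresponding blue curve.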

Several observations from Figures 6A and 6B are noteworthy. These characteristics are probably germane to leveraged long and short strategies in general and not specific to Cycle-ID.

• First, when no leverage and no shorts are used (the bottom-left green dots in Figures 6A and 6B), the performance of Cycle-ID falls between that of Primary-ID and Secondary-ID. Hence, averaging the performances of the two models produces no synergy.
• Second, the blue curves are to the right of the green curves indicating that adding short strategies increases risk more than return. It's inherently more challenging to profit from short positions because down markets are brief and volatile.
• Third, the curves in Figure 6A are convex (higher marginal return with each added unit of drawdown) but the curves in Figure 6B are concave (lower marginal return with each added unit of volatility). The curvature disparity reflects the basic difference between these two types of risk. Volatility measures the uncertainty in the outcome of a bet. Maximum drawdown depicts the bloodiest outcome of a wrong bet.
• With Cycle-ID, one can tailor strategies either to match market risk or to amplify return. For instance, if one can endure an S&P 500-like drawdown of -83%, one can supercharge CAGR from 9.7% to over 21% by using the 2L/1S strategy shown in Figure 6A. If one can tolerate the market's 17% volatility, one can boost CAGR to over 14% by using the 2L/0S strategy in Figure 6B. Conversely, if one just wants to earn a market return of 9.7%, extrapolating the green lines in Figures 6A and 6B to intercept a horizontal line at 9.7% shows that one can reduce drawdown from -83% to below -30% or volatility from 17% to less than 12%.
• A widely known approach for engineering a portfolio with either market-matching return or market-matching volatility is risk parity, which budgets allocation weights by the inverse variances of the assets in a portfolio. Cycle-ID achieves the same mission using a single index – the S&P 500. No risk-budgeting algorithm is needed. Looking for a robust risk management tool in a chaotic, nonlinear and dynamic investment world, I would pick simplicity over complexity every time.
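For contrast, the inverse-variance budgeting that risk parity performs can be sketched in a few lines; the three variances below are made-up toy values.

```python
# Inverse-variance weighting: each asset's weight is proportional to
# 1/variance, so low-volatility assets receive larger allocations.

def inverse_variance_weights(variances):
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    return [w / total for w in inv]

# Toy annualized variances for, say, stocks, bonds and bills.
weights = inverse_variance_weights([0.04, 0.02, 0.01])
```

The lowest-variance asset gets the largest weight; Cycle-ID sidesteps this budgeting entirely by scaling exposure to a single index.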

Concluding remarks

My cyclical-market models are relevant to both Modern Portfolio Theory (MPT) and Efficient Market Hypothesis (EMH) – the two pillars in the temple of modern finance.

Harry Markowitz introduced MPT in 1952 – the use of mutual cancellations within the variance-covariance matrices of uncorrelated asset classes to minimize portfolio risk. Jack Treynor was the first to note, in 1962, that the risk premium (the spread between risky and risk-free returns) of a stock is proportional to the covariance between that stock and the overall market. William Sharpe in 1964 simplified Markowitz's complex matrices into a single-index model – the Capital Asset Pricing Model (CAPM), which uses beta to represent stock or portfolio risk (price fluctuations relative to those of the market). The type of risk both MPT and CAPM focus on is volatility – uncertainty in the potential (expected) returns of one's bets. Neither MPT nor CAPM tackles the types of risk investors dread the most – temporary equity drawdowns and permanent capital losses from making the wrong bets. Volatility and beta are the types of risk financial theorists deliberate about in scholastic faculty lounges. Drawdowns and permanent losses are the types of risk that drain investors' wealth and prey on their emotions.

Furthermore, variance-covariance matrices and betas calculated from historical data could lose their anti-correlation magic or regression-line linearity when the next crisis hits. In 2008, for instance, the effectiveness of risk reduction by either MPT's diversification or CAPM's beta diminished when capital protection was needed the most. Most importantly, even when MPT and CAPM work, they can only diversify away specific risk. Neither model offers a solution for managing systematic or systemic risk. Company-specific risk is minuscule when compared to the risk from the overall stock market or from the collapse of the global financial system. Hardly any diversified portfolio could shelter one's wealth during the cyclical bear markets of the 1929 and 2008 financial meltdowns. MPT and CAPM are ill-equipped to mitigate these titanic financial shockwaves with tsunami-scale impacts that affect all asset classes around the globe.

Cycle-ID is a positive alternative to the traditional risk-hedging concepts. First, Cycle-ID eliminates company-specific risk by investing only in a broad market index – the S&P 500. There's no need to optimize an "efficient portfolio" of asset classes and hope that their anti-correlation attributes remain unchanged going forward. More importantly, Cycle-ID reduces both systematic risk (e.g., interest rate, inflation, currency or recession) and systemic risk (global financial systems, interlinked liquidity freezes, or geopolitical instability) by exposing one's capital only during fertile seasons and at appropriate market exposure levels. Market exposure is objectively matched to the perennially changing market environment. This is accomplished by closely tracking the first and second derivatives of the Shiller CAPE – the aggregate market appraisal by all market participants.

Both the traditional risk-hedging approaches (i.e., MPT and CAPM) and Cycle-ID employ the long-standing wisdom of diversification to manage risk. The difference is that the traditional approaches diversify across assets with uncorrelated covariances to hedge against company-specific risk. My model diversifies market exposure in harmony with the investment climate to achieve a dual investment goal: (1) to minimize systematic and systemic risks in bad times; and (2) to increase market exposure in good times. The Chinese word for risk has an insightful duality – one character for danger and the other for opportunity. While risk can harm us when we are exposed to it, risk is also a driver of higher returns if we exploit it to our advantage.

Let's turn to the Efficient Market Hypothesis, which was postulated by Eugene Fama in 1965. According to Richard Thaler, EMH has two separate but related theses. The first is that the stock market is always right; the second, that the market is difficult to beat in the long run. Behavioral economists argue that the market is not always right because its pricing mechanism does not always function perfectly. Humans are not always rational beings, and financial bubbles are proof of market mispricing. Both traditional and behavioral economists, however, agree on the second thesis: the market is hard to beat in the long run. The market is hard to beat because it's quite efficient (discounting all known information that could affect market prices) most of the time. But the market is not totally efficient all the time. Market prices often diverge from the market's intrinsic values – an observation first articulated in 1938 by John Burr Williams, before the inception of behavioral economics. Hence, it's difficult, but not impossible, to beat the S&P 500 in the long run.

Primary-ID, Secondary-ID, and Cycle-ID along with a handful of legendary investors and some previously published models of mine (Holy Grail, Super Macro, and TR-Osc) demonstrate that it's possible to outperform the S&P 500 total return. They do so not by predicting the future. Price prediction is futile because both the amplitude of impact and the frequency of occurrence of the various price drivers are totally random. So what are the secret ingredients for a market-beating model?

To outperform the S&P 500, following my five criteria alone is not enough. The five criteria (simplicity, sound rationale, rule-based clarity, sufficient sample size, and relevant data) only increase the odds of model robustness; they do not offer an outperformance edge over the S&P 500. To beat the market, a model must also exhibit originality with a counterintuitive flavor. If a model is too intuitively obvious, many will have already discovered it and its edge will disappear. More importantly, to beat the market, a model must be relatively unknown so that it's not widely followed. If you can find the model in a Bloomberg terminal, it is already part of the market. The market cannot outperform itself.

My cyclical-market models are simple (a single metric – the CAPE), focused (one index – the S&P 500), logical (vectors over scalars) and, above all, transparent (all rules are disclosed). Should you worry that, now that my models are published, their future efficacy will diminish? I would argue that such concern is unwarranted. First, only a fraction of the total investor universe will read my articles. Even among those who do, only a fraction will believe in the models. Even among those who are swayed by the rationale of the models, only a tiny fraction will internalize their conviction and have the discipline to follow through over time. These probabilities are multiplicative and protect the models from being homogenized by the masses.

Theodore Wong graduated from MIT with BSEE and MSEE degrees and earned an MBA from Temple University. He served as general manager in several Fortune-500 companies that produced infrared sensors for satellite and military applications. After selling the hi-tech company that he started with a private equity firm, he launched TTSW Advisory, a consulting firm offering clients investment research services. For almost four decades, Ted has developed a true passion for studying the financial markets. He applies engineering design principles and statistical tools to achieve absolute investment returns by actively managing risk in both up and down markets. He can be reached at ted@ttswadvisory.com.

# Modeling Cyclical Markets – Part 2

Originally Published November 7, 2016 in Advisor Perspectives

In Part 1 of this series, I presented Primary-ID, a rules- and evidence-based model that identifies major price movements, which are traditionally called cyclical bull and bear markets. This article debuts Secondary-ID, a complementary model that objectively defines minor price movements, which are traditionally called rallies and corrections within bull and bear markets.

The traditional definitions of market cycles

Market analysts define market cycles by the magnitude of price movements. Sequential up and down price movements in excess of 20% are called primary cycles. Price advances of more than 20% are called bull markets; declines worse than -20% are called bear markets. Sequential up and down price movements within 20% are called secondary cycles. Retracements milder than -20% are called corrections; advances shy of +20% are called rallies. Talking heads at CNBC frequently use this financial vernacular.

But has anyone bothered to ask how factual these fancy terms and lofty labels really are?

Experts also measure market cycles by their durations. It has been reported that since 1929 there have been 25 bull markets (gains over 20%) with a median duration of 13.1 months, and 25 bear markets (losses worse than -20%) with a median duration of 8.3 months.

But is "median duration" the proper statistical yardstick to measure stock market cycle lengths?

Fact-checking the 20% thresholds

Before presenting Secondary-ID, let's pause to fact-check these two market-cycle yardsticks. The ±20% primary-cycle rules of thumb have little practical use in guiding investment decisions. If we wait for a +20% confirmation before entering the market, we will have missed the bulk of the upside. Conversely, if we wait for the official kick-off of a new cyclical bear market, our portfolios will have already shrunk by -20%. The ±20% thresholds may be of interest to stock market historians, but they offer no real benefit to investors.

Besides being impractical, the ±20% demarcations are also baseless, as the historical evidence demonstrates. Figures 1A and 1B show the daily closing prices of the S&P 500 from 1928 to 2015. The green bars in Figure 1A are price advances from an interim low to an interim high of more than +5%. The red bars in Figure 1B are price retracements from an interim high to an interim low worse than -5%. Price movements smaller than ±5% are ignored as noise. There were a total of 198 advances and 166 retracements in 88 years. From the figures, it's not obvious why ±20% was picked as the threshold for bull and bear markets. The distributions of green and red bars show no unique feature near the ±20% markers.
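This swing measurement can be approximated with a simple zigzag routine: track the running extreme, confirm a reversal once price moves ±5% against it, and record the completed swing. The pivot-handling details below are my own simplifying assumptions, and the price series is a toy example, not the S&P 500 data.

```python
# Split a price series into alternating advances and retracements,
# ignoring reversals smaller than the threshold (±5% by default).

def swings(prices, threshold=0.05):
    moves = []
    pivot = prices[0]       # last confirmed turning point
    extreme = prices[0]     # running high (up leg) or low (down leg)
    direction = 0           # +1 up leg, -1 down leg, 0 not yet set
    for p in prices[1:]:
        if direction == 0:
            if p / pivot - 1.0 >= threshold:
                direction, extreme = 1, p
            elif p / pivot - 1.0 <= -threshold:
                direction, extreme = -1, p
        elif direction == 1:
            if p > extreme:
                extreme = p
            elif p / extreme - 1.0 <= -threshold:    # reversal confirmed
                moves.append(extreme / pivot - 1.0)  # record the advance
                pivot, extreme, direction = extreme, p, -1
        else:
            if p < extreme:
                extreme = p
            elif p / extreme - 1.0 >= threshold:
                moves.append(extreme / pivot - 1.0)  # record the decline
                pivot, extreme, direction = extreme, p, 1
    return moves

# Toy series: a +20% advance followed by a -10% retracement.
moves = swings([100.0, 120.0, 108.0, 130.0])
```

Histogramming the positive and negative entries of `moves` would reproduce distributions like the green and red bars of Figures 1A and 1B.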

To determine how indistinct the ±20% markers are in the distributions, I plot the same daily data as histograms in Figures 2A and 2B. The probabilities of occurrence are displayed on the vertical axes for each price change in percent on the horizontal axes. For example, Figure 2A shows that a +20% rally has a 3% chance of occurring, and Figure 2B shows that a -20% retreat has nearly a 3.5% chance. There is no discontinuity at +20% in Figure 2A separating bull markets from rallies, nor at -20% in Figure 2B differentiating bear markets from corrections.

There are, however, two distinct distribution patterns in both up and down markets. Figure 2A shows an exponential drop in the probability distribution as rally sizes increase from +10% to +40%. Above +45%, the histogram is flat. Figure 2B shows a similar exponential decline in the probability distribution as retracements deepen from -5% to -40%. Beyond -45%, the histogram is again flat. The reasons behind the exponential declines and the two-tier histogram pattern are beyond the scope of this paper. It's clear from Figures 2A and 2B, however, that there is no distinct inflection point at ±20%. In fact, it would be more statistically defensible to use ±45% as the thresholds for bull and bear markets, but such large thresholds for primary cycles would be worthless to investors.

Figures 2A and 2B also expose one other fallacy. It's often believed that price support and resistance levels are set by Fibonacci ratios. One doesn't have to read scientific dissertations with advanced mathematical proofs to dispel the Fibonacci myth; a quick glance at Figure 2A or 2B would turn any Fibonacci faithful into a skeptic. If price tops and bottoms were set by Fibonacci ratios, we would find such footprints at ±23.6%, ±38.2%, ±50.0%, ±61.8%, or ±100%. No Fibonacci pivot points can be found in 88 years of daily S&P 500 data.

Fact-checking the market duration yardstick

I now turn to the second cyclical-market yardstick – cycle duration. It's been reported that since 1929, the median duration of bull markets is 13.1 months and the median duration of bear markets is 8.1 months. The same report also notes that the spread in bull market durations spans from 0.8 to 149.8 months, and the dispersion among bear market durations extends from 1.9 to 21 months. When the data are so widely scattered, the notion of a median is meaningless. Let me explain why with the following charts.

Figures 3A and 3B show duration histograms for all rallies and retreats, respectively. The vertical axes are the probabilities of occurrence for each duration shown on the horizontal axes. The notions of median and average are only useful when the distributions have a central tendency. When the frequency distributions are skewed to the extent seen in Figure 3A, or both skewed and dispersed as in Figure 3B, the median durations cited in those reports are meaningless.

Figures 3A and 3B also expose another myth. We often hear market gurus warning us that a bull (or bear) market is about to end because it's getting old. Chair Yellen was right when she said that economic expansions don't die of old age. Cyclical markets don't follow an actuarial table; they can live on indefinitely until they are hit by external shocks. Positive shocks (pleasant surprises) end bear markets, and negative shocks (abrupt panics) kill bull markets. These black swans follow Taleb distributions in which average and median are not mathematically well defined. In my concluding remarks I further expand on the origin of cyclical markets.

Many Wall Street beliefs and practices are just glorified folklore decorated with Greek symbols and pseudo-scientific notations to puff up their legitimacy. Many widely followed technical and market-timing indicators are nothing but glamorized traditions and legends. Their theoretical underpinnings must be carefully examined and their claims empirically verified. It's unwise to put one's hard-earned money at risk by blindly following any strategy without fact-checking it first, no matter how well accepted and widely followed it may be.

Envisioning cyclical markets through a calculus lens

Now that I have shown how flawed these two yardsticks are for gauging market cycles, let me return to the subject at hand – modeling cyclical markets. The methodology is as follows. First, start with a metric that is fundamentally sound. (The Super Bowl indicator, for example, is one with no fundamental basis.) Next, transform the metric into a quasi range-bound indicator. Then devise a set of rational rules using the indicator to formulate a hypothesis. High correlation without causation is not enough: causation must be grounded in logical principles such as economics, behavioral finance, fractal geometry, chaos theory, game theory or number theory. Finally, the hypothesis must be empirically validated with adequate samples before it qualifies as a model.

Let me illustrate my modeling approach with Primary-ID. The Shiller CAPE (cyclically adjusted price-earnings ratio) is a fundamentally sound metric. But when the CAPE is used in its original scalar form, it is prone to calibration error because it's not range-bound. To transform the scalar CAPE into a range-bound indicator, I compute the year-over-year rate-of-change of the CAPE (the YoY-ROC % CAPE). A set of logically sound buy and sell rules then turns the indicator into actionable signals. After the hypothesis was validated empirically over a period spanning multiple bull and bear cycles, Primary-ID finally qualified as a model.
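The scalar-to-vector transformation just described amounts to a 12-month percentage rate of change. A minimal sketch, using made-up CAPE values rather than the Shiller data:

```python
# Year-over-year rate of change in percent: compare each monthly value
# with the value 12 months earlier.

def yoy_roc(series, lag=12):
    return [curr / prev * 100.0 - 100.0
            for prev, curr in zip(series, series[lag:])]

cape = [20.0 + 0.1 * m for m in range(24)]   # toy monthly CAPE values
indicator = yoy_roc(cape)                    # the YoY-ROC % CAPE
```

Unlike the raw CAPE, the resulting series oscillates around zero, which is what makes it quasi range-bound.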

This modeling approach can be elucidated with a calculus analogy. The scalar Shiller CAPE is analogous to "distance," and the vector indicator YoY-ROC % CAPE is analogous to "velocity." When "velocity" is measured in the infinitesimal limit, it is equivalent to the first derivative in calculus. In other words, Primary-ID is similar to taking the first derivative of the CAPE. There are, however, some differences between the YoY-ROC % CAPE indicator and calculus. First, a derivative is an instantaneous rate-of-change of a continuous function, whereas the YoY-ROC % CAPE is computed over a finite time interval of one year. Also, the YoY-ROC % CAPE is not a continuous function but is based on a discrete monthly time series – the CAPE. Finally, a common inflection point for a derivative is the zero crossing, but the signal crossing of Primary-ID is at -13%.

Secondary-ID – a model for minor market movements

I now present a model called Secondary-ID. If Primary-ID is akin to "velocity" or the first derivative of the CAPE and is designed to detect major price movements in the stock market, then Secondary-ID is analogous to "acceleration/deceleration" or the second derivative of the CAPE and is designed to sense minor price movements. Secondary-ID is a second-order vector because it derives its signals from the month-over-month rate-of-change (MoM-ROC %) of the year-over-year rate-of-change (YoY-ROC %) in the Shiller CAPE metric.

Figures 4A to 4D show the S&P 500, the Shiller CAPE, Primary-ID signals and Secondary-ID signals, respectively. The indicator of Primary-ID (Figure 4C) is identical to that of Secondary-ID (Figure 4D), namely, the YoY-ROC % CAPE. But their signals differ. The signals in Figures 4C and 4D are color-coded – bullish signals are green and bearish signals are red. The details of the buy and sell rules for Primary-ID were described in Part 1. The bullish and bearish rules for Secondary-ID are presented below.

Bullish signals are triggered when the YoY-ROC % CAPE indicator is rising or when it is above 0%. For bearish signals, the indicator must be both falling and below 0%. "Rising" is defined as a positive month-over-month rate-of-change (MoM-ROC %) in the YoY-ROC % CAPE indicator, and "falling" as a negative MoM-ROC %. Because it is a second-order vector, Secondary-ID issues more signals than Primary-ID. It's noteworthy that the buy and sell signals of Secondary-ID often lead those of Primary-ID. The ability to detect acceleration and deceleration makes Secondary-ID more sensitive to changes than Primary-ID, which detects only velocity.
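Stated as code, the rules above might look like the following sketch. "Rising" is taken here as this month's indicator value exceeding last month's (a positive MoM-ROC %); the function and variable names are my own.

```python
# Secondary-ID signal rules: bullish when the YoY-ROC % CAPE is rising
# month over month or is above zero; bearish only when it is both
# falling and below zero.

def secondary_id_signals(yoy_roc_cape):
    signals = []
    for prev, curr in zip(yoy_roc_cape, yoy_roc_cape[1:]):
        rising = curr > prev              # positive MoM-ROC %
        signals.append("bull" if rising or curr > 0.0 else "bear")
    return signals

# A falling indicator stays bullish until it also drops below zero.
signals = secondary_id_signals([2.0, 1.0, -1.0, -0.5])
```

Note the asymmetry: a single condition suffices for a bullish call, while a bearish call requires both conditions at once.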

For ease of visual examination, Figure 5A shows the S&P 500 color-coded with Secondary-ID signals. Figure 5B is the same as Figure 4D, describing how those signals are triggered by the buy and sell rules. Since 1880, Secondary-ID has called 26 of the 28 recessions (a 93% success rate). The two misses were in 1926 and 1945, both mild recessions. Secondary-ID turned bearish in 1917, 1941, 1962, 1966 and 1977 when no recessions followed. However, those bearish calls were followed by major and/or minor price retracements. If Mr. Market makes a wrong recession call and the S&P 500 plummets, it's pointless to argue with him while watching our portfolios tank. Secondary-ID is designed to detect accelerations and decelerations in market appraisal by the masses. Changes in appraisal often precede changes in market prices, regardless of whether those appraisals lead to actual economic expansions or recessions.

Secondary-ID not only meets my five criteria for robust model design (simplicity, sound rationale, rule-based clarity, sufficient sample size, and relevant data), it has one more exceptional merit – no overfitting. In the development of Secondary-ID, there was no in-sample training and no optimization of any adjustable parameter. Secondary-ID has only two possible parameters to adjust. The first is the time interval for the second-order rising and falling vector. Instead of looking for an optimum interval, I chose the smallest time increment in a monthly data series – one month. One month in a monthly time series is the closest parallel to the infinitesimal limit on a continuous function. The second possible adjustable parameter is the signal crossing. I selected the zero crossing as the signal trigger because zero is the natural center of an oscillator. The values selected for these two parameters are the most logical choices, so no optimization is warranted. Because no parameters are tuned, there's no need for in-sample training. Hence Secondary-ID is not prone to overfitting.

Performance comparison: Secondary-ID, Primary-ID and the S&P 500

The buy and sell rules of Secondary-ID presented above are translated into executable trading instructions as follows. When the YoY-ROC CAPE is rising (i.e. a positive MoM-ROC %), buy the S&P 500 (e.g. SPY) at the close in the following month. When the YoY-ROC CAPE is below 0% and falling (i.e. a negative MoM-ROC %), sell the S&P 500 at the close in the following month and use the proceeds to buy U.S. Treasury bonds (e.g. TLT). The return while holding the S&P 500 is the total return with dividends reinvested. The return while holding the bonds is the sum of the bond coupon and the bond price changes caused by interest rate movements.
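These instructions reduce to a monthly asset switch. The sketch below compounds toy monthly returns, assuming the signal list has already been shifted by one month to reflect next-month execution; the return streams are invented for illustration, not the article's data.

```python
# Hold the S&P 500 total-return stream on "bull" months and the Treasury
# bond total-return stream on "bear" months, compounding monthly.

def backtest(signals, stock_returns, bond_returns, start=1.0):
    equity = start
    for sig, rs, rb in zip(signals, stock_returns, bond_returns):
        equity *= 1.0 + (rs if sig == "bull" else rb)
    return equity

# Two toy months: stocks +10% both months; bonds flat, then +2%.
growth = backtest(["bull", "bear"], [0.10, 0.10], [0.00, 0.02])
```

The bear-month switch forgoes the second +10% stock month and earns the +2% bond month instead, which is how the model trades return for drawdown protection.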

Figure 6A shows the S&P 500 total return index and the total return of the U.S. Treasury bond. Over 116 years, the return on stocks is nearly twice that of bonds. But in the last three decades, bond prices have risen dramatically thanks to a steady decline in inflation since 1980 and protracted easy monetary policies since 1995. Figure 6B shows the cumulative total returns of Primary-ID, Secondary-ID and the S&P 500 on a \$1 investment made in January 1900. The S&P 500 has a total return of 9.7% with a maximum drawdown of -83%. By comparison, Primary-ID has a hypothetical compound annual growth rate (CAGR) of 10.4% with a maximum drawdown of -50% and trades once every five years on average. The performance stats on Primary-ID differ slightly from those shown in Figure 5B of Part 1 because Figure 6B is updated from July to August 2016.

Secondary-ID delivers a hypothetical CAGR of 12.8% with a -36% maximum drawdown and trades once every two years on average. Note that Primary-ID and Secondary-ID are working in parallel to avoid most if not all bear markets. Secondary-ID offers an extra performance edge by minimizing the exposure to bull market corrections and by participating in selected bear market rallies.

I now apply the same buy and sell rules to the most recent 16 years to see how the model would have performed in a shorter and more recent sub-period. This is not an out-of-sample test, since there was no in-sample training; rather, it's a performance consistency check over a much shorter and more recent period. Figure 7A shows the total return of the S&P 500 and the U.S. Treasury bond price index from 2000 to August 2016. The return on bonds in this period is higher than that of the S&P 500: record easy monetary policies since 2003 and large-scale asset purchases by global central banks since 2010 pumped up bond prices, while two severe back-to-back recessions dragged down the stock market. Figure 7B shows the cumulative total returns of Primary-ID, Secondary-ID and the S&P 500 on a \$1 investment made in January 2000.

Since 2000, the total return index of the S&P 500 has returned 4.3% compounded with a maximum drawdown of -51%. By comparison, Primary-ID has a CAGR of 7.7% with a maximum drawdown of -23% and trades once every five years on average. Again, the performance stats on Primary-ID in Figure 7B differ slightly from those in Figure 5B of Part 1 because Figure 7B is updated to August 2016. Secondary-ID delivers a hypothetical CAGR of 10.5% with a maximum drawdown of only -16% and trades once every 1.4 years on average. The performance edge of Secondary-ID over both Primary-ID and the S&P 500 total return index, in both return and risk, is remarkable. The consistency of the performance gaps over both the entire 116-year period and the most recent 16-year sub-period lends credence to Secondary-ID.

Theoretical support for both cyclical market models

The traditional concepts of "primary cycles" and "secondary cycles" rely on amplitude and periodicity yardsticks to track past market cycles and to predict future ones. Primary-ID and Secondary-ID do not deal with primary or secondary market cycles. Their focus is on cyclical markets – major and minor price movements. All market movements are driven by changes in investors' collective market appraisals. The Shiller CAPE is selected as the core metric because it is a fundamentally sound valuation gauge – it appraises the inflation-adjusted S&P 500 price relative to its rolling 10-year average earnings. Although the scalar CAPE is prone to overshoot and valuation misinterpretation, the first- and second-order vectors of the CAPE are not. Primary-ID and Secondary-ID sense both major changes and minor shifts in investors' collective market appraisal that often precede market price action.

Like Primary-ID, Secondary-ID finds support in several principles of behavioral economics. First, prospect theory shows that a -10% loss hurts investors twice as much as the pleasure a +10% gain brings. Such reward-risk disparities are recognized by the asymmetrical buy and sell rules in both models. Second, both models use vector-based indicators. This is supported by the finding of Daniel Kahneman and Amos Tversky that investors are far more sensitive to relative changes (vectors) in their wealth than to its absolute level (a scalar). Finally, the second-order vector in Secondary-ID is equivalent to the second derivative of the concave and convex value function described by the two distinguished behavioral economists in 1979.

Concluding remarks – cyclical markets vs. market cycles

I developed rules- and evidence-based models to assess cyclical markets and not market cycles. The traditional notion of market cycles is defined with a prescribed set of pseudo-scientific attributes such as amplitude and periodicity that are neither substantiated by historical evidence nor grounded in statistics. Cyclical markets, on the other hand, are the outcomes of random external shocks imposing big tidal waves and small ripples on a steadily rising economic growth trend. Cyclical markets cannot be explained or predicted using the traditional cycle concepts because past cyclical patterns are the outcomes of non-Gaussian randomness. Let me illustrate with a simple but instructive narrative.

Cyclical markets can be visualized with a simple exercise. Draw an ascending line on graph paper with the y-axis in logarithmic scale and the x-axis in linear time scale. The slope of the line is the long-term compound growth rate of the U.S. economy. Next, disrupt this steadily rising trendline with sharp ruptures of various amplitudes at erratic time intervals. These abrupt ruptures represent man-made crises (e.g., recessions) or natural calamities (e.g., earthquakes). Amplify these shocks with overshoots in both up and down directions to emulate the cascade-feedback loops driven by the herding instinct of human psychology. You now have a proxy for the S&P 500 historical price chart.
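The exercise above can be sketched programmatically. The toy simulation below layers rare, erratically timed shocks and herding-driven noise on a steady log-scale growth trend; every parameter (trend, shock frequency, shock sizes, noise level) is an illustrative assumption, not a calibration:

```python
import math
import random

def synthetic_price_path(years, trend=0.07, seed=42):
    """Steady log-scale growth disrupted by abrupt random shocks --
    a toy version of the narrative above (parameters are illustrative)."""
    random.seed(seed)
    log_price, path = 0.0, []
    for _ in range(years * 12):              # monthly steps
        log_price += trend / 12              # steady economic growth trend
        if random.random() < 0.02:           # rare, erratically timed shock
            log_price += random.uniform(-0.4, 0.2)   # crises skew downward
        log_price += random.gauss(0, 0.04)   # herding-driven ripples
        path.append(math.exp(log_price))
    return path

prices = synthetic_price_path(30)
```

Plotting `prices` on a log y-axis yields a rising trend punctuated by irregular ruptures, qualitatively resembling a long-run S&P 500 chart.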

This descriptive model of cyclical markets explains why conventional market-cycle yardsticks – the ±20% thresholds and median durations – will never work. Unpredictable shocks do not adhere to any prescribed amplitude or duration. Non-Gaussian randomness cannot be captured by the formulae defining the average and the median. The conceptual framework of market cycles is flawed, and that's why it fails to explain cyclical markets.

Looking from the perspective of Primary-ID and Secondary-ID, cyclical bull markets can last as long as the CAPE velocity is positive and/or accelerating, and cyclical bear markets can last as long as the CAPE velocity is negative and/or decelerating. Stock market movements are not constrained by ±20% thresholds or cycle life-expectancy statistics. Primary-ID detects the velocity of the market valuation assessment made collectively by all market participants, which drives bull and bear markets. Secondary-ID senses subtle accelerations and decelerations in that same collective assessment; these second-order waves manifest themselves as rallies and corrections. Whether the market dips by less than -20% (what experts label a correction) or plunges by more than -20% (what they call a cyclical bear market), Primary-ID and Secondary-ID capture the price movements just the same.

Does synergy exist between Primary-ID and Secondary-ID? Would the sum of the two offer performance greater than that of the parts? A composite index of the two models enables the use of leverage and short strategies, paving the way for more advanced portfolio engineering and risk management tactics. Do these more complex strategies add value? For answers, please stay tuned for Part 3.

Theodore Wong graduated from MIT with a BSEE and MSEE degree and earned an MBA degree from Temple University. He served as general manager in several Fortune-500 companies that produced infrared sensors for satellite and military applications. After selling the hi-tech company that he started with a private equity firm, he launched TTSW Advisory, a consulting firm offering clients investment research services. For almost four decades, Ted has developed a true passion for studying the financial markets. He applies engineering design principles and statistical tools to achieve absolute investment returns by actively managing risk in both up and down markets. He can be reached at ted@ttswadvisory.com.

# Modeling Cyclical Markets – Part 1

Originally Published October 24, 2016 in Advisor Perspectives

The conventional wisdom is that there are two types of stock market cycles – secular and cyclical. I argued previously that secular cycles not only lack a credible statistical basis, but their 12-to-14-year durations are also impractical for most investors. We live in an internet age with a time scale measured in nanoseconds. Wealth managers often turn over their portfolios after only a few years. Simply put, secular cycles can last longer than financial advisors can retain their clients.

The second type of cycle is called a “cyclical market” and is believed to comprise both primary and secondary waves. Economic cycles are thought to drive primary waves. According to the National Bureau of Economic Research (NBER), the average economic cycle length is 4.7 years, which would be more suitable for the typical holding periods of most investors.

To succeed in accumulating wealth in bull markets and preserving capital in bear markets, we must first define and detect primary and secondary market waves. In this article, I present a modeling approach to spot primary cycles. Modeling secondary market cycles will be the topic of Part 2.

Common flaws in modeling financial markets

Before presenting my model on primary markets, I must digress to discuss two common mistakes in modeling financial markets. For example, when modeling secular market cycles and market valuations, analysts use indicators such as the Crestmont P/E, the Alexander P/R and the Shiller CAPE (cyclically adjusted price-earnings ratio). By themselves, these indicators are fundamentally sound. It's the modeling approach using these indicators that is flawed.

Models on valuations and secular cycles cited above share two assumptions. First, they assume that the amplitude (scalar) of the indicators can be relied on to indicate market valuations and secular outlook. Extremely high readings are interpreted as overvaluations or cycle crests, and extremely low readings, undervaluation or cycle troughs. Second, it's assumed that mean reversion will always drive the extreme readings in the models back into line.

Figure 1A shows the S&P 500 from 1881 to mid-2016 in logarithmic scale. Figure 1B is the Shiller CAPE overlay. The solid purple horizontal line is the mean from 1881 to 1994 and has a value of 14.8. The upper and lower dashed purple lines represent one standard deviation above and below the mean of 14.8, respectively. The solid brown line to the right is the mean from 1995 to mid-2016 and has a value of 26.9. The upper and lower dashed brown lines are one standard deviation above and below the post-1995 mean of 26.9, respectively. One standard deviation above the pre-1995 mean is 19.4 and one standard deviation below the post-1995 mean is 20.4. The data regimes in the two adjoining timeframes do not overlap. The statistically distinct nature of the two regimes invalidates the claim by many secular cycle advocates and CAPE-based valuations practitioners that the elevated CAPE readings after 1995 are just transitory statistical outliers and will fall back down in due course.
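The non-overlap of the two regimes can be checked directly from the figures quoted above; the standard deviations are backed out from the quoted band edges (19.4 - 14.8 and 26.9 - 20.4):

```python
# One-standard-deviation bands for the two CAPE regimes quoted in the text
pre_mean, pre_sigma = 14.8, 4.6      # 1881-1994 (sigma = 19.4 - 14.8)
post_mean, post_sigma = 26.9, 6.5    # 1995-2016 (sigma = 26.9 - 20.4)

pre_band = (pre_mean - pre_sigma, pre_mean + pre_sigma)       # ~(10.2, 19.4)
post_band = (post_mean - post_sigma, post_mean + post_sigma)  # ~(20.4, 33.4)

# The upper edge of the pre-1995 band sits below the lower edge of the
# post-1995 band, so the two one-sigma regimes do not overlap.
print(pre_band[1] < post_band[0])
```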

Let's examine the investment impacts of these two assumptions. The first assumption is that extreme amplitudes can be used to track cycle turning points. Figure 1B shows that both high and low extremes are arbitrary and relative. As such, they cannot be used as absolute valuation markers. For example, after 1995, the entire amplitude range shifted upward. Secular cyclists and value investors would have sold stocks in 1995 when the CAPE first pierced 22, exceeding the major secular bull market crests of 1901, 1937 and 1964. They would have missed the 180% gain in the S&P 500 from 1995 to its peak in 2000. More recently, the CAPE dipped to 13 at the bottom of the sub-prime crash. Secular cycle advocates and value investors would consider a CAPE of 13 not cheap enough relative to previous secular troughs in 1920, 1933, 1942, 1949, 1975 and 1982. They would have asked clients to switch from stocks to cash, only to miss out on the 200% gain in the S&P 500 since 2010. These are examples of huge upside misses caused by the first flawed assumption used in these scalar-based models.

The second assumption is that mean reversion always brings the out-of-bound extremes back into line. This assumption falters on three counts. First, mean reversion is not mean regression: the former is a hypothesis, while the latter is a statistical property of certain distributions, such as the Gaussian (bell curve). Second, mean regression is guaranteed only for distributions that resemble a bell curve; if the distribution follows power-law or Erlang statistics, even mean regression is not guaranteed. Finally, neither mean regression nor mean reversion is a certainty if the overshoots arise not by chance but from causation. An elevated CAPE will last as long as its causes (see Philosophical Economics, Jeremy Siegel and James Montier) remain in place. The second assumption creates a false sense of security that could be very harmful to your portfolio.

The confusion caused by both of these false assumptions is illustrated in Figure 1B. For the 26.9 mean, reversion has already taken place in 2002 and 2009. But for the 14.8 mean, reversion has a long way to go. All scalar models that rely on arbitrary amplitudes for calibration and assume a certainty of mean reversion are doomed to fail.

A vector-based modeling approach

The issues cited above are direct pitfalls of using scalar-based indicators. One can think of a scalar as an AM (amplitude modulation) radio in a car: the signals are easily distorted when the car goes under an overpass. A vector, on the other hand, is analogous to FM (frequency modulation) signals, which are encoded not in amplitude but in frequency. Solid objects can attenuate amplitude-coded signals but cannot corrupt frequency-coded ones. Likewise, vector-based indicators are immune to the amplitude distortions caused by external interferences – Fed policies, demographics, or accounting rule changes – that might cause the overshoot in the scalar CAPE. Models using vector-based indicators are inherently more reliable.

Instead of creating a new vector-based indicator from scratch, one can transform any indicator from a scalar to a vector with the help of a filter. Two common signal-processing filters used by electronic engineers to condition signals are low-pass filters and high-pass filters. Low-pass filters pass lower-frequency signals while blocking unwanted higher-frequency chatter; the moving average, which transforms a jittery data series into a smoother one, is an example. High-pass filters pass higher-frequency signals while detrending the irrelevant low-frequency drift commonly present in the physical world. The rate-of-change (ROC) operator is a simple high-pass filter. ROC is defined as the percentage change in a variable over a specific time interval; common intervals in financial markets are year-over-year (YoY) and month-over-month (MoM). By differentiating (taking the rate-of-change of) a time series, one transforms it from a scalar to a vector. A scalar shows only amplitude, whereas a vector contains both amplitude and direction. Let me illustrate how such a transformation works.
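As a minimal sketch, the YoY-ROC transformation of a monthly series amounts to one line of code; the `yoy_roc` name and the toy CAPE series are illustrative assumptions:

```python
def yoy_roc(series, lag=12):
    """Year-over-year rate of change: a simple high-pass filter that
    turns a scalar monthly series into a vector (amplitude + direction)."""
    return [series[i] / series[i - lag] - 1.0 for i in range(lag, len(series))]

# Illustrative: a hypothetical trending CAPE series, 36 monthly readings
cape = [20 + 0.1 * i for i in range(36)]
vector = yoy_roc(cape)   # detrended, range-bound YoY percentage changes
```

Each element of `vector` compares a reading to the one twelve months earlier, so a steady upward drift in the scalar series collapses toward a small constant rate, which is exactly the detrending behavior described above.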

Figure 2A is identical to Figure 1B, the scalar version of the Shiller CAPE. Figure 2B is a vector transformation, the YoY-ROC of the scalar Shiller CAPE time series. There are clear differences between the two. First, the post-1995 overshoot aberration in Figure 2A is no longer present in Figure 2B. Second, the time series in Figure 2B has a single mean, i.e., the mean from 1881 to 1994 and the mean from 1995 to present are virtually the same. Third, Figure 2B shows that the plus and minus one standard deviation bands from the two time periods completely overlap. This demonstrates statistically that the vector-based indicator is range-bound across its entire 135-year history. Finally, the cycles in Figure 2B are much shorter than those in Figure 2A. Shorter cycles are more conducive to mean reversion.

It's clear that the YoY-ROC filter mitigates many calibration issues associated with the scalar-based CAPE. The vector-based CAPE is range-bound, has a single and stable mean and has shorter cycle lengths. These are key precursors for mean reversion. In addition, there are theoretical reasons from behavioral economics that vectors are preferred to scalars in gauging investors' sentiment. I will discuss the theoretical support a bit later.

The vector-based CAPE periods versus economic cycles

Primary market cycles are believed to be driven by economic cycles. Therefore, to detect cyclical markets, the indicator should track economic cycles. Figure 3A shows the S&P 500 from 1950 to mid-2016. The YoY-ROC GDP (Gross Domestic Product) is shown in Figure 3B and the YoY-ROC CAPE in Figure 3C. (The Bureau of Economic Analysis (BEA) has published quarterly U.S. GDP data only since 1947.)

The waveform of the YoY-ROC GDP is noticeably similar to that of the YoY-ROC CAPE. In fact, the YoY-ROC CAPE has a tendency to roll over before the YoY-ROC GDP dips into recessions, often by as much as one to two quarters. The YoY-ROC GDP and the YoY-ROC CAPE are plotted as if the two curves were updated at the same time. In reality, the YoY-ROC CAPE is nearly real-time (the S&P 500 and earnings are at month-ends and the Consumer Price Index has a 15-day lag). GDP data, on the other hand, is not available until a quarter has passed and is revised three times. The YoY-ROC CAPE indicator is updated ahead of the final GDP data by as much as three months. Hence, the YoY-ROC CAPE is a true leading economic indicator.

Although the waveforms in Figures 3B and 3C look alike, they are not identical. How closely did the YoY-ROC CAPE track the YoY-ROC GDP over the past 66 years? The answer can be found with regression analysis. Figure 4 shows the regression between the GDP growth rate and the YoY-ROC CAPE, with an R-squared of 29.2%. A single indicator that can explain close to one-third of the movements in the annual growth rate of GDP is truly remarkable, considering the simplicity of the YoY-ROC CAPE and the complexity of GDP and its components.
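For readers who want to reproduce this kind of analysis, the R-squared of a simple linear regression can be computed from first principles; the toy data below is illustrative, not the actual GDP or CAPE series:

```python
def r_squared(x, y):
    """Coefficient of determination for a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance term
    sxx = sum((a - mx) ** 2 for a in x)                   # variance of x
    syy = sum((b - my) ** 2 for b in y)                   # variance of y
    return sxy * sxy / (sxx * syy)   # squared Pearson correlation

# Toy data standing in for two co-moving indicator series
x = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2]
y = [0.05, -0.1, 0.2, 0.05, 0.0, 0.1]
print(round(r_squared(x, y), 2))
```

For a one-variable regression, R-squared equals the squared Pearson correlation, which is what the closing line computes.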

Primary-ID – a model for primary market cycles

Finding an indicator that tracks economic cycles is only a first step. To turn that indicator into an investment model, we have to come up with a set of buy and sell rules based on that indicator. Primary-ID is a model I designed years ago to monitor major price movements in the stock market. In the next article, I will present Secondary-ID, a complementary model that tracks minor stock market movements. I now illustrate my modeling approach with Primary-ID.

A robust model must meet five criteria: simplicity, sound rationale, rule-based clarity, sufficient sample size, and relevant data. Primary-ID meets all five. First, Primary-ID is elegantly simple – only one adjustable parameter for "in-sample training." Second, the vector-based CAPE is fundamentally sound. Third, the buy and sell rules are clearly defined. Fourth, the sample size is sufficient: the Shiller CAPE spans over two dozen business cycles. Fifth, the data is relevant: the Shiller database provides over a century of monthly data.

Figure 5A shows both the S&P 500 and the YoY-ROC CAPE from 1900 to 1999. This is the training period to be discussed next. The curves are in green when the model is bullish and in red when bearish. Bullish signals are generated when the YoY-ROC CAPE crosses above the horizontal orange signal line at -13%. Bearish signals are issued when the YoY-ROC CAPE crosses below the same signal line. The signal line is the single adjustable parameter in the in-sample training.

Figure 5B compares the cumulative return of Primary-ID to the total return of the S&P 500, the benchmark for comparison. \$1 invested in Primary-ID in January 1900 would hypothetically have grown to \$30,596 by the end of 1999, a compound annual growth rate (CAGR) of 10.9%. The same \$1 invested in the S&P 500 over the same period would have grown to \$23,345, a CAGR of 10.3%. The 60 bps CAGR gap may seem small, but it compounds into a substantially larger cumulative wealth after 100 years. The other significant benefit of Primary-ID is that its maximum drawdown is less than two-thirds that of the S&P 500. It trades on average once every five years, very close to the average business cycle of 4.7 years published by the NBER.

The in-sample training process

Figures 5A and 5B cover 1900 to 1999, the back-test period used to find the optimum signal line for Primary-ID. The buy and sell rules are as follows: when the YoY-ROC CAPE crosses above the signal line, buy the S&P 500 (e.g., SPY) at the next month's close; when the YoY-ROC CAPE crosses below the signal line, sell the S&P 500 at the next month's close and park the proceeds in U.S. Treasury bonds. The return while holding the S&P 500 is the total return with dividends reinvested. The return while holding bonds is the sum of bond yields and the bond price changes caused by interest rate movements.
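The buy and sell rules above can be sketched as a small state machine. This is a simplified illustration: the actual model trades at the next month's close and earns total returns on stocks or bonds, whereas this sketch only labels the monthly position:

```python
SIGNAL_LINE = -0.13  # the single trained parameter, per the text

def primary_id_positions(yoy_roc_cape):
    """Monthly positions under the stated rules: hold stocks once the
    indicator crosses above the signal line, bonds once it crosses below."""
    positions, state = [], "stocks"
    for value in yoy_roc_cape:
        if value > SIGNAL_LINE:
            state = "stocks"       # bullish: indicator above the signal line
        elif value < SIGNAL_LINE:
            state = "bonds"        # bearish: indicator below the signal line
        positions.append(state)
    return positions

print(primary_id_positions([0.05, -0.20, -0.15, -0.05, 0.10]))
```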

Figure 6 shows the back-test results in two tradeoff spaces. The plot on the left is a map of CAGR versus maximum drawdown for various signal lines. The one on the right is CAGR as a function of the position of the signal line. For comparison, the S&P 500 has a total return of 10.6% and a maximum drawdown of -83% in the same period. Most of the blue dots in Figure 6 beat the S&P 500's total return, and all have much smaller maximum drawdowns.

Figures 6A and 6B show only the range of signal lines that offers relatively high CAGR. What is not shown is that all signal lines above -10% underperform the S&P 500. The two blue dots marked by arrows in both charts represent neither the highest returns nor the lowest drawdowns; they sit in the middle of the CAGR sweet spot. I judiciously select a signal line at -13% that does not have the maximum CAGR. An off-peak parameter gives the model a better chance of matching its back-tested performance going forward. Picking the optimal adjustable parameter would create an unrealistic bias in the out-of-sample test results. Furthermore, an over-optimized model, even if it passes the out-of-sample test, is prone to underperform in real time. A parameter tuned to the back-test peak will likely lead to inferior out-of-sample results as well as inferior actual forecasts.
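The off-peak selection logic can be illustrated with a toy parameter sweep. The CAGR and drawdown numbers below are made up for illustration; only the selection idea mirrors the text:

```python
# Hypothetical back-test summary: (signal line, CAGR, max drawdown) triples.
sweep = [
    (-0.10, 0.104, -0.30),
    (-0.11, 0.108, -0.28),
    (-0.13, 0.110, -0.27),
    (-0.15, 0.112, -0.29),   # peak CAGR in this sketch
    (-0.17, 0.106, -0.31),
    (-0.19, 0.103, -0.33),
]

# Rather than taking the peak, keep the high-CAGR "sweet spot" and pick
# a parameter from its middle to reduce the risk of over-fitting.
sweet_spot = [row for row in sweep if row[1] >= 0.108]
chosen = sweet_spot[len(sweet_spot) // 2]
print(chosen[0])
```

In this sketch the middle of the sweet spot lands on -0.13 rather than the peak at -0.15, echoing the judicious off-peak choice described above.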

Why do all signal lines above -10% give lower CAGRs than those between -10% and -19%? There is a theoretical reason for this asymmetry, to be discussed a bit later.

The out-of-sample validation

The out-of-sample test guards against the risk of over-fitting during in-sample optimization. It's like a dry run before applying the model live with real money. Passing the out-of-sample test does not guarantee a robust model, but failing it is certainly grounds for rejection.

Here’s how out-of-sample testing works. The signal line selected in the training exercise is applied to a new set of data from January 2000 to July 2016 with the same buy and sell rules. Figure 7A shows both the S&P 500 and the YoY-ROC CAPE.

Figure 7B compares the cumulative return of Primary-ID to the total return of the S&P 500. \$1 invested in Primary-ID in January 2000 would hypothetically have grown to \$3.50 by mid-2016, a CAGR of 7.8%. The same \$1 invested in the S&P 500 over the same period would have grown to \$2.02, a CAGR of 4.3%. An added perk of Primary-ID is its maximum drawdown of -23%, half the S&P 500's -51%. It trades on average once every five years, similar to the in-sample test, so profits are taxed at long-term capital gains rates.

Primary-ID sidestepped two infamous bear markets: the dot-com crash and the sub-prime meltdown. It also stayed fully invested in equities during the two mega bull markets of the last 16 years. The value of the YoY-ROC CAPE as a leading economic indicator and the efficacy of Primary-ID as a cyclical-market model are thus validated.

Theoretical support for Primary-ID

The theoretical support for Primary-ID can be found in prospect theory, proposed by Daniel Kahneman and Amos Tversky in 1979. Prospect theory offers three original axioms that lend support to Primary-ID. The first axiom shows a two-to-one asymmetry between the pain of losses and the joy of gains – losses hurt twice as much as gains bring joy. Recall from Figure 6 that the CAGR sweet spot comes from signal lines located between -10% and -19%, more than one standard deviation below the mean near 0%. Why is the sweet spot located that far off center? The reason could be the asymmetry in investors' attitudes toward reward versus risk. Prospect theory explains an old Wall Street adage – let profits run, but cut losses short. Primary-ID adds a new meaning to this old motto – buy swiftly, but sell late. In other words, buy quickly once the YoY-ROC CAPE crosses above -13%, but don't sell until it crosses below -13%.

The second prospect theory axiom deals with scalar and vector. The authors wrote, "Our perceptual apparatus is attuned to the evaluation of changes or differences rather than to the evaluation of absolute magnitudes." In other words, it's not the level of wealth that matters; it's the change in the level of wealth that affects investors' behavior. This explains why the vector-based CAPE works better than the original scalar-based CAPE. The former captures human behaviors better than the latter.

The third prospect theory axiom proposed by Kahneman and Tversky is that "the value function is generally concave for gains and commonly convex for losses." Richard Thaler explains this statement in layman's terms in his 2015 book, "Misbehaving." The value function represents investors' attitudes toward reward and risk; the terms concave and convex refer to the curve shown in Figure 3 of the 1979 paper. A concave (or convex) value function simply means that investors' sensitivity to joy (or pain) diminishes as the level of gain (or loss) increases. This diminishing sensitivity applies to the change in investors' attitude (a vector), not to the attitude itself (a scalar). Investors' diminishing sensitivity toward both gains and losses is the reason the YoY-ROC CAPE indicator is range-bound and why its mean reversion occurs more regularly. The original Shiller CAPE is a scalar time series and does not benefit from the third axiom. Therefore the apparent range-bound behavior and mean reversion of the scalar Shiller CAPE in the past are the exceptions, not the norm.
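The concave-for-gains, convex-for-losses value function is commonly written in the parametric form Tversky and Kahneman estimated in their later (1992) work; the exponents and loss-aversion coefficient below are their estimates, not figures from the 1979 paper:

```latex
v(x) =
\begin{cases}
x^{\alpha} & \text{for } x \ge 0 \quad \text{(concave: diminishing joy of gains)} \\
-\lambda\,(-x)^{\beta} & \text{for } x < 0 \quad \text{(convex: diminishing pain of losses)}
\end{cases}
```

with $\alpha \approx \beta \approx 0.88$ and loss-aversion coefficient $\lambda \approx 2.25$, consistent with the roughly two-to-one loss/gain asymmetry cited earlier.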

Concluding remarks

The stock market is influenced by many driving forces, including economic cycles, credit cycles, Fed policies, seasonal/calendar factors, the equity premium puzzle, shifts in risk aversion and bubble/crash sentiment. At any point in time, the stock market is simply the superposition of the displacements of all these individual waves. The economic cycle is likely the dominant wave driving cyclical markets, but it is not the only one. That's why the R-squared is only 29.2%, and why not all bear markets were accompanied by recessions (such as 1962, 1966 and 1987).

The credibility of the Primary-ID model in gauging primary cyclical markets is grounded in several factors. First, it is based on a fundamentally sound metric – the Shiller CAPE. Second, its indicator (the YoY-ROC CAPE) is a vector, which is more robust than a scalar. Third, the model tracks the cycle dynamics between the market and the economy relatively well. Fourth, the excellent agreement between Primary-ID's five-year average signal length (0.2 trades per year, shown in Figures 5B and 7B) and the average business cycle of 4.7 years reported by the NBER adds credence to the model. Finally, Primary-ID has firm theoretical underpinnings in behavioral economics.

It's a widely held view that the stock market exhibits both primary and secondary waves. If primary waves are predominantly driven by economic cycles, what drives secondary waves? Can we model secondary market cycles with a vector-based approach similar to that in Primary-ID? Can such a model complement or even augment Primary-ID? Stay tuned for Part 2 where I debut a model called Secondary-ID that will address all these questions.

Theodore Wong graduated from MIT with a BSEE and MSEE degree and earned an MBA degree from Temple University. He served as general manager in several Fortune-500 companies that produced infrared sensors for satellite and military applications. After selling the hi-tech company that he started with a private equity firm, he launched TTSW Advisory, a consulting firm offering clients investment research services. For almost four decades, Ted has developed a true passion for studying the financial markets. He applies engineering design principles and statistical tools to achieve absolute investment returns by actively managing risk in both up and down markets. He can be reached at ted@ttswadvisory.com.