Fama_Efficient Capital Markets: II

This paper is Fama's second review of market efficiency (hence the II). The first was published in 1970 (see Fama_Efficient Capital Markets: A Review of Theory and Empirical Work).

Fama opens the paper by admitting that "sequels are rarely as good as the originals."

The paper serves as a review of the "new era" of the efficient markets hypothesis (EMH). It explains what it means for markets to be efficient, reviews the empirical literature, discusses the various hypotheses on market efficiency, and covers the anomalies. The paper also refines the common definitions of efficient markets and examines the joint-hypothesis problem, the costs of information, and the various pricing models.

Any investigation of market efficiency faces at least two problems: (1) information and transaction costs, and (2) the joint-hypothesis problem.

Fama defines market efficiency as the state where "security prices reflect all available information." After giving the definition, Fama immediately raises the problem of information costs: "A precondition for this strong version of the hypothesis is that information and trading costs, the costs of getting prices to reflect information, are always zero (Grossman and Stiglitz (1980))." And: "As there are surely positive information and trading costs, the extreme version of market efficiency is surely false." However, the extreme view has the advantage of being "a clean benchmark." Fama then says he will work with the extreme version and let readers decide for themselves how information and transaction costs temper the conclusions.

In the 1970 paper, Fama used the terms weak-form, semi-strong-form, and strong-form efficiency. In this paper he instead organizes the work into: (1) tests for return predictability; (2) event studies; and (3) tests of private information.

1. Tests for Return Predictability

On return predictability, Fama points out the change in focus in this area. Formerly the work simply tested short-run return predictability from past returns; now it includes other variables such as "dividend yields (D/P), earnings/price (E/P), and term-structure variables," and it extends to longer horizons.

Lo and MacKinlay (1988) find positive autocorrelations in short-horizon returns (especially for portfolios of small stocks). The results survive even after Conrad and Kaul (1990) attempt to correct for the nonsynchronous-trading problem.
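Lo and MacKinlay's evidence is based on variance-ratio statistics: under a random walk, the variance of a q-period return is q times the variance of a one-period return, so a ratio above one signals positive autocorrelation. A minimal sketch of the statistic (the series and horizon below are illustrative assumptions, not data from the paper):

```python
import numpy as np

def variance_ratio(returns, q):
    """Lo-MacKinlay variance ratio: Var of q-period returns divided by
    q times the Var of one-period returns. Equals 1 under a random walk;
    values above 1 suggest positive autocorrelation."""
    r = np.asarray(returns)
    # Overlapping q-period returns: sums of q consecutive one-period returns.
    rq = np.convolve(r, np.ones(q), mode="valid")
    return rq.var(ddof=1) / (q * r.var(ddof=1))

# Illustrative use on simulated weekly returns (not actual stock data):
rng = np.random.default_rng(0)
weekly = rng.normal(0.002, 0.02, size=1_000)
print(variance_ratio(weekly, 4))  # close to 1.0 for an uncorrelated series
```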

French and Roll (1986) report that "stock prices are more variable when the market is open." Some interpret this as noise trading and an indication of market inefficiency; however, the short-run autocorrelations are small in magnitude.

For longer horizons, Shiller (1984) and Summers (1986) present the view that "stock prices take large slowly decaying swings away from fundamental values, but short-horizon returns have little autocorrelation." Tests of this model have been "largely fruitless." There is some evidence of negative autocorrelation at 3-5 year horizons, but as Fama and French (1988) show, it largely disappears when the 1926-1940 period is dropped. Note that because the number of non-overlapping long-horizon periods is small, these tests suffer from a lack of power.
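Fama and French's long-horizon tests amount to regressing a k-year return on the preceding k-year return, where a negative slope is the mean-reversion signature of the Shiller-Summers swings. A minimal sketch under that reading (series and horizon are illustrative, not the paper's data):

```python
import numpy as np

def long_horizon_slope(returns, k):
    """Slope from regressing each k-period return on the preceding
    k-period return (in the spirit of Fama-French 1988). A negative
    slope suggests slow mean reversion in prices."""
    r = np.asarray(returns)
    # Non-overlapping k-period returns (e.g., k-year sums of annual returns).
    n = len(r) // k
    rk = r[: n * k].reshape(n, k).sum(axis=1)
    x, y = rk[:-1], rk[1:]  # prior k-period return, following k-period return
    return np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)

# With ~65 years of data and k = 4, only ~15 pairs remain: low power.
rng = np.random.default_rng(1)
annual = rng.normal(0.06, 0.20, size=65)  # simulated annual returns
print(long_horizon_slope(annual, 4))
```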

"Fama and French (1988) emphasize that … irrational bubbles … are indistinguishable from rational time-varying expected returns."

Contrarians: DeBondt and Thaler (1985) and others report large reversals in past winners and losers (market overreaction). However, this may be driven by the small-firm effect (Zarowin 1989) or a distressed-firm effect (Chan and Chen 1991). Fama and French (1989) "argue that the variation in the expected returns…is consistent with modern intertemporal asset-pricing models."

Keim (1988) points out that seasonals are not necessarily "embarrassments for market efficiency," since there may be underlying reasons for the deviations and the size of the variations is small relative to transaction costs. Further, he warns that some of these anomalies are to be expected from "mining" of the CRSP data.

Cross-Sectional Return Predictability: Any test of an asset-pricing model runs into the joint-hypothesis problem, so we can never know whether the market is inefficient or the model is wrong. Obviously, the choice of model may influence the findings.

Most of the early tests used the Capital Asset Pricing Model (CAPM). These tests were largely successful, but there were shortcomings. For example, the Sharpe-Lintner-Black model (CAPM) failed the zero-beta test (the zero-beta portfolio earned a return above the risk-free rate) even though it passed most other tests. Roll's (1977) criticism also cast doubt on the early tests, since the "market portfolios" used in the testing are proxies rather than the true market portfolio. Fama falls back on the position that the CAPM is a good model because it has increased our understanding in spite of the many anomalies. Example: book-to-market (which has "displaced size as the premier anomaly").
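For reference, the zero-beta result contrasts the Sharpe-Lintner version of the CAPM with Black's version; the notation below is standard textbook shorthand, not quoted from the paper:

```latex
% Sharpe-Lintner CAPM: the intercept is the risk-free rate R_f.
E[R_i] = R_f + \beta_i \bigl( E[R_m] - R_f \bigr)

% Black's zero-beta CAPM: the intercept is E[R_z], the expected return
% on a portfolio whose return is uncorrelated with the market.
E[R_i] = E[R_z] + \beta_i \bigl( E[R_m] - E[R_z] \bigr)

% The early tests' finding: the estimated intercept exceeds R_f
% (E[R_z] > R_f), contradicting Sharpe-Lintner but not Black.
```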

Fama and French (1991) show that for US stocks the relation between beta and average returns "is feeble, even when beta is the only explanatory variable." This is less so when the data are expanded to include bonds (Stambaugh 1982).
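The "feeble" relation is a statement about cross-sectional regressions of returns on beta. A minimal Fama-MacBeth-style sketch of such a test (the panel layout and column names are hypothetical):

```python
import numpy as np
import pandas as pd

def beta_premium(panel: pd.DataFrame):
    """Fama-MacBeth-style test: run a cross-sectional regression of
    returns on pre-estimated betas each month, then test whether the
    time-series mean of the monthly slopes differs from zero.
    `panel` is assumed to have columns ['month', 'ret', 'beta']."""
    slopes = []
    for _, m in panel.groupby("month"):
        x = np.column_stack([np.ones(len(m)), m["beta"].to_numpy()])
        coef, *_ = np.linalg.lstsq(x, m["ret"].to_numpy(), rcond=None)
        slopes.append(coef[1])  # the monthly premium for beta
    slopes = np.asarray(slopes)
    t_stat = slopes.mean() / (slopes.std(ddof=1) / np.sqrt(len(slopes)))
    return slopes.mean(), t_stat  # "feeble": a mean premium near zero
```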

As the CAPM seemed to be failing, new models were suggested. These have not met with the widespread adoption that the CAPM enjoyed; however, some do show promise.

Multifactor Models:

          APT: Tests by Roll and Ross (1980), Chen (1983), and Lehmann and Modest (1988) find that even after controlling for "up to 15 factors" the size anomaly still exists. Fighting over the number of factors is a problem with testing these models. Additionally, "it leaves one hungry for economic insights about how factors relate to uncertainties about consumption and portfolio opportunities." Further, the flexibility inherent in these models is a double-edged sword, as it can lead to the equivalent of data dredging.

          Consumption-based pricing models (Rubinstein (1976), Lucas (1978), and Breeden (1979)) are the "most elegant of the available intertemporal asset pricing models." In Breeden's model, "a security's consumption B[eta] is the slope in the regression of its return on the growth in per capita consumption." Tests are generally done in both cross-sectional and time-series form using the "pathbreaking approach" of Hansen and Singleton (1982), with estimation by Hansen's (1982) generalized method of moments (GMM). The results are summarized in a chi-square statistic that usually rejects the model but provides no insight into why, thus "failing the test of usefulness."
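In symbols (standard consumption-CAPM notation assumed here, not quoted from the summary), Breeden's consumption beta and the Euler-equation moment condition that Hansen-Singleton-style GMM estimates are:

```latex
% Breeden's consumption beta: the slope of a security's return on the
% growth of per capita consumption \Delta c_t.
R_{i,t} = a_i + \beta_{c,i} \, \Delta c_t + e_{i,t}

% With CRRA utility (discount factor \delta, risk aversion \gamma),
% the Euler equation gives the conditional moment condition that GMM
% estimates; the chi-square statistic tests the overidentifying
% restrictions and, as noted above, usually rejects.
E\left[ \delta \left( \frac{C_{t+1}}{C_t} \right)^{-\gamma}
        \bigl( 1 + R_{i,t+1} \bigr) - 1 \,\middle|\, I_t \right] = 0
```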

Out of the consumption-based tests has come the "equity-premium puzzle" (Mehra and Prescott 1985), which holds that investors must be extremely risk averse to explain the spread of stock returns over Treasuries. Fama believes this degree of risk aversion is possible, since people appear deeply averse to a reduced standard of living; he points to the fear of recessions as evidence.
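A back-of-the-envelope statement of the puzzle, using the standard lognormal CRRA approximation (the approximation is textbook material, not from the summary): the premium is roughly risk aversion times the covariance of consumption growth with the market return, and measured consumption growth is so smooth that the implied risk aversion is enormous.

```latex
% Approximate equity premium under CRRA utility:
E[R_m] - R_f \approx \gamma \, \mathrm{Cov}(\Delta c, R_m)
              = \gamma \, \rho \, \sigma_c \, \sigma_m

% Because the volatility of consumption growth \sigma_c is tiny
% relative to the historical premium, solving for \gamma yields
% implausibly high risk aversion: the equity-premium puzzle.
```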

"The central cross-section prediction of Breeden's (1979)…model is that expected returns are a positive linear function of consumption betas. On this score the model does fairly well" (Breeden, Gibbons, and Litzenberger (1989)).

When Chen, Roll, and Ross (1986) include consumption betas alongside other factors in the same model, they find the consumption betas add no explanatory power and are thus dropped.

Conclusion on the predictability section: we really do not have a solid pricing model. Not surprisingly, multifactor models work better (not surprising because the researcher can keep looking until something fits). Moreover, it is possible that all of the models are capturing the same risk factors, but we do not recognize it yet.

2. Event Studies

Event studies got their start with the 1969 Fama, Fisher, Jensen, and Roll paper on stock splits (FFJR 1969). Interestingly, the author concedes that the motivation for the paper had been to justify continued funding of the CRSP data. Event studies have since been done on many topics, and they provide the best evidence that the market incorporates new information very quickly and usually correctly. There are some studies showing exceptions (for example, Ball on the time it takes the market to incorporate all the information in earnings surprises).

The basic idea of any event study is that the event in question is examined in "event time," which allows many similar events to be aligned and studied simultaneously. This allows the impact of the event to be isolated from market-wide movements that also affect stock prices.
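A minimal sketch of the mechanics in the spirit of FFJR: estimate a market model in a window before each event, compute abnormal returns in event time, and average the cumulative abnormal returns across events. All names, windows, and data shapes below are hypothetical:

```python
import numpy as np
import pandas as pd

def average_car(returns: pd.Series, market: pd.Series, event_dates,
                est_len=250, window=10):
    """Average cumulative abnormal return (CAR) around events.
    `returns` and `market` are date-indexed daily return series for
    the stock and the market; `event_dates` are announcement dates."""
    cars = []
    for date in event_dates:
        i = returns.index.get_loc(date)
        # Market-model estimation window ending before the event window.
        r_est = returns.iloc[i - est_len - window : i - window]
        m_est = market.iloc[i - est_len - window : i - window]
        beta = np.cov(m_est, r_est, ddof=1)[0, 1] / m_est.var(ddof=1)
        alpha = r_est.mean() - beta * m_est.mean()
        # Abnormal returns in event time: days -window .. +window.
        r_evt = returns.iloc[i - window : i + window + 1]
        m_evt = market.iloc[i - window : i + window + 1]
        ar = r_evt - (alpha + beta * m_evt)
        cars.append(ar.cumsum().to_numpy())
    # Averaging across events isolates the event's impact from
    # market-wide movements, which wash out in event time.
    return np.mean(cars, axis=0)
```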

3. Tests for Private Information

There are several ways of investigating this:

  Insider trading: insiders do beat the market (Jaffe (1974) and Seyhun (1986)).

  Security analysts: Value Line and other evidence suggest that analysts do provide some information. This is inconsistent with efficient markets IF you assume no information costs, but is perfectly consistent if information is costly to obtain (Grossman and Stiglitz (1980)).

  Professional portfolio management: results are largely consistent with the idea that, on average, managers do not beat the market. There are some conflicting stories, but most agree with this conclusion.

Overall, it appears the market is quite efficient, but not perfectly so. There appears to be some predictability and some mean reversion in long-run returns, though less so in short-run tests. Often the apparent market inefficiencies are just that, apparent, being due to measurement or modeling errors.

Full paper: Fama_EMC II (.pdf)
