
Fundamental Valuation

What fundamental measures of business success best indicate the value of individual stocks and the aggregate stock market? How can investors apply these measures to estimate valuations and identify misvaluations? These blog entries address valuation based on accounting fundamentals, including the conventional value premium.

Returns Around Earnings Announcements Worldwide

Do stocks around the world tend to perform better around the time of annual earnings announcements by respective firms than during the rest of the year? In the June 2011 draft of their paper entitled “The Earnings Announcement Premium Around the Globe”, Brad Barber, Emmanuel De George, Reuven Lehavy and Brett Trueman investigate whether the earnings announcement premium (elevated returns during earnings announcement months) is a global phenomenon or is isolated to U.S. stocks. They employ a hedge portfolio, reformed monthly, that is long (short) stocks of firms expected (not expected) to announce annual earnings during the next month. The long and short sides are equal-weighted, and the stocks within each side are value-weighted. Using roughly 200,000 annual earnings announcements for about 28,000 firms in 46 countries during 1990 through 2009 to estimate announcement months during 1991-2010, and associated monthly stock returns, they find that: Keep Reading
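The hedge portfolio construction described above can be sketched as follows. This is a minimal illustration of the weighting scheme (sides equal-weighted, stocks within each side value-weighted), not the paper's actual code; the field names are hypothetical.

```python
# Sketch of a one-month earnings announcement hedge portfolio return.
# Field names ("cap", "ret", "expects_announcement") are illustrative.

def value_weighted_return(stocks):
    """Value-weight monthly returns within one side of the portfolio."""
    total_cap = sum(s["cap"] for s in stocks)
    return sum(s["cap"] / total_cap * s["ret"] for s in stocks)

def announcement_hedge_return(universe):
    """Long firms expected to announce annual earnings next month, short the
    rest; the two sides are equal-weighted against each other, while stocks
    within each side are value-weighted."""
    long_side = [s for s in universe if s["expects_announcement"]]
    short_side = [s for s in universe if not s["expects_announcement"]]
    return value_weighted_return(long_side) - value_weighted_return(short_side)
```

A positive average hedge return across months would indicate an earnings announcement premium.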

Creative Destruction Risk Premium

Are some firms more at risk of creative destruction by new technologies? If so, does the market offer a premium to investors in such firms? In his March 2011 paper entitled “Creative Destruction and Asset Prices”, Joachim Grammig explores the concept of creative destruction as an explanation for the size effect and the value premium under the proposition that associated firms have a higher probability of being destroyed by technological change. He defines the pace of technological change as the annual percentage change in U.S. patents issued (patent activity growth). Using annual counts of newly issued patents from the U.S. Patent and Trademark Office and annual data on 25 portfolios of U.S. stocks formed by double-sorts on size and book-to-market ratio over the period 1927 through 2008, he finds that: Keep Reading

Value Premium as Risk Compensation

Are value stocks priced low because the companies are in financial distress? In their May 2011 paper entitled “Is the Value Premium Really a Compensation for Distress Risk?”, Wilma de Groot and Joop Huij investigate the relationships between the value premium and alternative measures of firm distress risk. Their core methodology employs monthly double-sorts on firm book-to-market ratio and each of four measures of firm financial risk: (1) financial leverage (debt-to-assets ratio); (2) a structural model of distance-to-default; (3) credit spread (between firm bonds and maturity-matched Treasuries); and, (4) credit rating. Using data to calculate these measures for the 1,500 largest U.S. firms, along with associated monthly stock prices, over the period September 1991 (limited by availability of credit spread data) through December 2009, they find that: Keep Reading
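The monthly double-sort at the core of this methodology can be sketched as independent quantile sorts on the two firm characteristics, with firms assigned to intersection buckets. This is a simplified illustration (terciles rather than the paper's exact breakpoints); function names are hypothetical.

```python
# Sketch of an independent double-sort on book-to-market and a distress
# measure. Tercile breakpoints are illustrative, not the paper's.

def tercile(values):
    """Map each item to tercile 0 (low), 1 (middle), or 2 (high) by rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    buckets = [0] * len(values)
    for rank, i in enumerate(order):
        buckets[i] = rank * 3 // len(values)
    return buckets

def double_sort(book_to_market, distress):
    """Independent terciles on each characteristic; returns per-firm
    (book-to-market bucket, distress bucket) intersection labels, from
    which portfolios can be formed and compared."""
    return list(zip(tercile(book_to_market), tercile(distress)))
```

Comparing returns across distress buckets while holding the book-to-market bucket fixed isolates whether distress risk drives the value premium.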

Enhancing/Streamlining Asset Rotation

Can investors systematically benefit from the perspective that trading is the exchange of one asset for another, not the buying and selling of a single asset? In his paper entitled “Optimal Rotational Strategies Using Combined Technical and Fundamental Analysis”, third-place winner for the 2011 Wagner Award presented by the National Association of Active Investment Managers, Tony Cooper presents methods and tools designed to exploit the precept that valuations are relative. An organizing concept for these methods and tools is the Binary Decision Chart (BDC), which in one form addresses simultaneous analysis of two competing investments for the purpose of switching or weighting and in an extended form addresses combining technical analysis (based on observed price action) and fundamental analysis (indicator-based prediction). BDCs are cumulative return charts, but the horizontal axis may be a technical or fundamental indicator rather than time. More specifically, using various asset price series and indicators, he illustrates the following methods/tools: Keep Reading
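One reading of the Binary Decision Chart idea (a cumulative return chart whose horizontal axis is an indicator rather than time) can be sketched as below. This is an interpretive illustration, not the paper's tool: it orders paired period returns of two competing assets by indicator value and accumulates the relative return, so rising segments mark indicator regions that favor the first asset.

```python
import math

def binary_decision_chart(indicator, ret_a, ret_b):
    """Sketch of a BDC-style chart: sort paired period returns of assets A
    and B by the indicator value observed in each period, then accumulate
    the log relative return of A over B. Rising segments of y suggest
    indicator regions where switching toward A paid off."""
    pairs = sorted(zip(indicator, ret_a, ret_b))
    x, y, cum = [], [], 0.0
    for ind, ra, rb in pairs:
        cum += math.log((1 + ra) / (1 + rb))
        x.append(ind)
        y.append(cum)
    return x, y
```

Plotting y against x (instead of against time) is what lets the chart suggest indicator thresholds for switching between the two investments.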

Fed Model Respecified?

The Fed Model relates the aggregate earnings yield (E/P) of the stock market to Treasury bond or bill yields under the assumption that investors view equities and government bonds as competing ways to achieve yield. Might supply (company management), rather than demand (investors), more precisely drive the relationship between E/P and interest rates? In the April 2011 (incomplete) draft of his paper entitled “Understanding the Fed Model, Capital Structure, and then Some”, J.H. Timmer argues that the stock market earnings yield tends to equilibrium not with the government bond yield but with the average after-tax corporate bond yield as companies adjust capital structure (mix of equity and bonds) to maximize earnings per share. SEC Rule 10b-18 (explicitly allowing share repurchases) enabled fine adjustment toward equilibrium as of 1982. Using annual estimates of one-year forward earnings yields and corporate bond yields for a subset of S&P 500 companies and assuming a constant corporate tax rate of 30% over the period 1968 through 2006, he finds that: Keep Reading
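The proposed equilibrium condition can be sketched numerically: if the earnings yield exceeds the after-tax corporate bond yield, issuing debt to repurchase shares raises earnings per share, pushing E/P back toward the after-tax bond yield. This is an illustrative rendering of the argument, not the paper's exact test; the constant 30% tax rate follows the paper's assumption.

```python
def after_tax_yield(corporate_yield, tax_rate=0.30):
    """After-tax cost of corporate debt, using the paper's assumed constant
    30% corporate tax rate."""
    return corporate_yield * (1 - tax_rate)

def repurchase_signal(earnings_yield, corporate_yield, tax_rate=0.30):
    """Illustrative equilibrium condition: when E/P exceeds the after-tax
    bond yield, swapping equity for debt (e.g., via share repurchases)
    raises earnings per share, so management has an incentive to adjust
    capital structure until the two yields converge."""
    return earnings_yield > after_tax_yield(corporate_yield, tax_rate)
```

Under this view, the earnings yield equilibrates with the after-tax corporate bond yield rather than the government bond yield of the conventional Fed Model.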

The Earnings Yield Anomaly Revisited

Does the earnings yield (inverse of price-to-earnings ratio, or E/P) usefully predict returns for individual stocks? In their April 2011 paper entitled “Reexamination of the Earnings-Price Anomaly by the Buy-Sell Strategy”, Hsin-Yi Yu and Li-Wen Chen test a long-only strategy that forms monthly value-weighted portfolios based on time-series sorting rather than cross-sectional sorting. Time-series sorting ranks stocks according to current E/P of each relative to its range over the prior decade. The strategy tested buys stocks near the top of their respective ten-year ranges and subsequently sells them when they move to the bottom. Intuitively, stocks near the top (bottom) of their respective historic E/P ranges are likely to be undervalued (overvalued). For reference, they also test a strategy that forms portfolios based on cross-sectional sorting by current E/P and holds them for a fixed interval, while noting that such sorts make little sense because average E/P varies considerably by industry. Using earnings and price data for all common stocks listed on NYSE, AMEX and NASDAQ from January 1962 to December 2010, they find that: Keep Reading
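The time-series sorting rule can be sketched as locating a stock's current E/P within its own historical range. This is a simplified illustration, assuming illustrative buy/sell thresholds near the top and bottom of the range; the actual breakpoints are the paper's, not shown here.

```python
# Sketch of time-series E/P sorting: rank a stock's current E/P against
# its own prior ten-year range. Thresholds below are illustrative.

def ep_range_position(current_ep, history):
    """Position of current E/P within its historical range:
    1.0 at the top of the range (cheap relative to own history),
    0.0 at the bottom (expensive relative to own history)."""
    lo, hi = min(history), max(history)
    if hi == lo:
        return 0.5
    return (current_ep - lo) / (hi - lo)

def signal(current_ep, history, buy_above=0.9, sell_below=0.1):
    """Buy near the top of the stock's own E/P range, sell near the bottom."""
    pos = ep_range_position(current_ep, history)
    if pos >= buy_above:
        return "buy"
    if pos <= sell_below:
        return "sell"
    return "hold"
```

Because each stock is ranked against its own history, this sidesteps the cross-industry E/P differences that make cross-sectional sorts problematic.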

Evolution of the Accruals Anomaly (to Extinction?)

Is the accruals anomaly still on solid ground? In their paper entitled “The Accrual Anomaly”, Patricia Dechow, Natalya Khimich, and Richard Sloan review the origin of and subsequent research on the accruals anomaly. They characterize accruals as “the piece of earnings that is ‘made up’ by accountants” as opposed to the balance coming from cash flow. Using the original analysis and updated firm accounting and stock return data for the period 1970 through 2007, they find that: Keep Reading

The Efficient Innovation Premium

Do the stocks of firms that get the most bang per research buck tend to outperform? In the March 2011 update of their paper entitled “Innovative Efficiency and Stock Returns”, David Hirshleifer, Po-Hsuan Hsu and Dongmei Li investigate the relationship between innovative efficiency (IE) and future stock returns. They consider three alternative definitions of IE: (1) patents granted per dollar of R&D capital investment two years previous; (2) patents granted per dollar of R&D expenditures two years previous; and, (3) adjusted patent citations (a measure of patent quality) per dollar of cumulative R&D expense over the five years ending two years previous. The two-year lag between patent activity and investment in R&D reflects the average patent application-grant delay. Return predictability tests involve annually reformed value-weighted stock portfolios composed of six intersections derived from independent sorts at the end of each February into: small or big market capitalization; and, low, middle or high IE. Using accounting and patent data for a broad sample of U.S. firms over the period 1981 through 2006, and associated stock returns and risk adjustment factors through February 2008, they find that: Keep Reading
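Two of the three IE definitions can be sketched directly from the description above. This is an illustrative rendering assuming simple year-keyed inputs; the dictionary layout and function names are hypothetical, and the lag and window follow the paper's stated choices.

```python
# Sketch of innovative efficiency (IE) measures with the two-year
# application-grant lag described above. Input layout is illustrative.

def ie_patents_per_rnd(patents_by_year, rnd_by_year, year, lag=2):
    """IE definition (2): patents granted in `year` per dollar of R&D
    expenditures two years earlier."""
    return patents_by_year[year] / rnd_by_year[year - lag]

def ie_citations_per_cum_rnd(citations_by_year, rnd_by_year, year, lag=2, window=5):
    """IE definition (3): adjusted patent citations in `year` per dollar of
    cumulative R&D expense over the five years ending two years earlier."""
    cum_rnd = sum(rnd_by_year[y] for y in range(year - lag - window + 1, year - lag + 1))
    return citations_by_year[year] / cum_rnd
```

Firms would then be sorted each February into low, middle, or high IE for portfolio formation.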

Interaction of Investor Sentiment and Stock Return Anomalies

Does aggregate investor sentiment affect the strength of well-known U.S. stock return anomalies? In their January 2011 paper entitled “The Short of It: Investor Sentiment and Anomalies”, Robert Stambaugh, Jianfeng Yu and Yu Yuan explore the interaction of aggregate investor sentiment with 11 cross-sectional stock return anomalies. Their approach reflects expectations that: (1) overpricing of stocks is more common than underpricing due to short-sale constraints; and, (2) a high sentiment level amplifies overpricing. Specifically, they consider the effect of investor sentiment on hedge portfolios that are long (short) the highest (lowest)-performing value-weighted deciles of stocks sorted on: financial distress (two measures), net stock issuance, composite equity issuance, total accruals, net operating assets, momentum, gross profit-to-assets, asset growth, return-on-assets and investment-to-assets. They use a long-run sentiment index derived from principal component analysis of six sentiment measures: trading volume as measured by NYSE turnover; the dividend premium; the closed-end fund discount; the number of, and first-day returns on, Initial Public Offerings; and, the equity share in new issues. They measure anomaly alphas relative to the three-factor model (adjusting for market, size, book-to-market). Using monthly sentiment and stock return anomaly data as available over the period July 1965 through January 2008, they find that: Keep Reading
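The basic conditioning step can be sketched as splitting an anomaly hedge portfolio's monthly returns by the sentiment level in the preceding month. This is a minimal illustration of the split, not the paper's factor-model regression; the threshold choice is illustrative.

```python
# Sketch of conditioning an anomaly hedge portfolio on lagged sentiment.
# The zero threshold on the sentiment index is illustrative.

def mean(xs):
    return sum(xs) / len(xs)

def hedge_by_sentiment(hedge_returns, lagged_sentiment, threshold=0.0):
    """Average hedge (long-short decile) return in months following high
    versus low sentiment readings. If overpricing dominates and high
    sentiment amplifies it, the short side should drive a larger hedge
    return after high-sentiment months."""
    high = [r for r, s in zip(hedge_returns, lagged_sentiment) if s > threshold]
    low = [r for r, s in zip(hedge_returns, lagged_sentiment) if s <= threshold]
    return mean(high), mean(low)
```

In the paper, the analogous comparison is done on three-factor alphas rather than raw average returns.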

Technical Boost to Fundamental Stock Market Forecasting?

Do technical indicators add value to fundamental indicators in assessing broad stock market valuation? In their March 2011 paper entitled “Forecasting the Equity Risk Premium: The Role of Technical Indicators”, Christopher Neely, David Rapach, Jun Tu and Guofu Zhou examine the powers of technical and fundamental indicators to predict stock market returns. They consider 12 variations of three stock market index technical indicators: (1) relative values of two moving averages (1 month versus 3, 6, 9 and 12 months); (2) return momentum (past 3, 6, 9 and 12 months); and, (3) relative values of two on-balance volume moving averages (1 month versus 3, 6, 9 and 12 months). They consider 14 fundamental indicators ranging from stock market valuation ratios to Treasury yields, yield spreads and the default spread. They compare mean squared equity risk premium forecast errors for technical and fundamental indicators to that for the historical average premium. They also compare the average utility gain for a mean-variance investor who allocates monthly between stocks and Treasury bills based on either technical or fundamental market forecasts to that for an investor who uses the historical average premium. Finally, they generate equity risk premium forecasts based on a rolling principal component analysis that encapsulates the predictive powers of the 26 technical and fundamental indicators into three or four variables. Using monthly price and volume data for the dividend-adjusted S&P 500 Index and monthly readings of the 14 U.S. fundamental indicators as available over the period 1927 through 2008 (1926-1959 for in-sample optimization and 1960–2008 for out-of-sample testing), along with NBER business expansion and contraction dates, they find that: Keep Reading
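Two of the three technical indicator families described above (moving-average comparisons and return momentum) can be sketched as simple binary signals. This is an illustrative reading of the indicator construction; parameter defaults mirror one of the paper's 12 variations (1-month versus 12-month), and the on-balance volume variant is omitted for brevity.

```python
# Sketch of two of the paper's technical indicator families as binary
# (in stocks / out of stocks) signals. Parameter choices mirror one of
# the 12 variations described above.

def moving_average(prices, n):
    """Simple moving average of the most recent n prices."""
    return sum(prices[-n:]) / n

def ma_signal(prices, short=1, long=12):
    """1 (bullish) when the short moving average is at or above the long
    moving average, else 0."""
    return 1 if moving_average(prices, short) >= moving_average(prices, long) else 0

def momentum_signal(prices, m=12):
    """1 (bullish) when the current price is at or above its level m
    months ago, else 0."""
    return 1 if prices[-1] >= prices[-1 - m] else 0
```

The paper's principal component step then compresses all 26 such technical and fundamental indicators into a few composite forecasting variables.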
