Stock Analysis in the Twenty-First Century and Beyond
For years, financial analysts have struggled with the fact that practically all the financial measures used to analyze corporate performance lack predictive power when it comes to forecasting the market performance of the company's stock. Numerous academic studies have documented this lack of predictability. Correlation coefficients close to zero have been reported for the relationship between stock market performance and such critical financial measures as earnings growth, sales growth, price/earnings ratio, return on equity, intrinsic value (models based on discounted cash flow or dividends), and many more. It is this disconnect between traditional financial measures and the performance of stocks in the marketplace that led to the now-famous efficient market hypothesis, the cornerstone of modern portfolio theory.

To accept the idea that the future performance of stocks is unpredictable is to say that nothing a company does will affect the future performance of its stock in the market, and that is absurd. It would be more accurate to say that everything a company does will affect the future performance of its stock in the market. The problem with this statement is that it makes the forecasting of future stock performance so complex that it lies beyond the realm of unaided human analysis.

Confident in the belief that something other than chance and irrational investors determines future stock prices, several research groups around the world have begun exploring the use of intelligent computer programs, that is, programs that self-organize based on environmental feedback. Early results are very promising and have provided a glimpse of the economic forces Adam Smith described as the invisible hand that guides economic activity. Stock Analysis in the Twenty-First Century and Beyond describes the stock analysis problem and explores one of the more successful efforts to harness this new intelligent computer technology.

Many people mistakenly classify artificially intelligent (AI) computer systems as a form of quantitative analysis. There are two distinct differences between advanced AI systems and traditional quantitative analysis: (1) who makes up the selection rules and weightings, and (2) what information is used to discriminate between good- and poor-performing securities. In most quantitative systems, even in advanced expert-system form, humans write the investment rules and mathematically derive the weightings associated with those rules. Computer systems that depend on outside human intelligence to program their actions are not inherently intelligent. In advanced AI systems, the computer derives its own rules and weightings: it learns from examples of good- and poor-performing stocks and works out its own ways of discriminating between them. The procedures the computer derives are often so complex that they defy human understanding.

In addition to making up their own rules, advanced AI systems look at corporate financial data differently. Just as information in the human brain is stored not in the brain cells themselves but in the connections and relationships between cells, corporate performance information is stored in the relationships between financial numbers. Assessing the performance of companies is not so much a matter of the numbers themselves as of the connections between the numbers.
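The book does not spell out a particular learning algorithm, but the idea of a system that derives its own discrimination rules from labeled examples can be sketched with standard tools. In the minimal Python sketch below, a random forest stands in for the self-organizing learner described above; the financial features, the synthetic data, and the labeling rule are all illustrative assumptions, not the author's actual method.

```python
# Minimal sketch: a learner that derives its own rules for separating
# good- from poor-performing stocks. All data here is synthetic, and the
# random forest is a stand-in for whatever self-organizing system the
# book alludes to; nothing below reflects the author's actual method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical fundamental features for 1,000 companies.
n = 1000
X = np.column_stack([
    rng.normal(0.05, 0.10, n),   # earnings growth
    rng.normal(0.08, 0.15, n),   # sales growth
    rng.normal(15.0, 6.0, n),    # price/earnings ratio
    rng.normal(0.12, 0.08, n),   # return on equity
])

# Label each company "good" (1) or "poor" (0) by subsequent stock
# performance. Here the label depends on a nonlinear interaction of the
# features plus noise, mimicking relationships too complex to hand-code.
signal = X[:, 0] * X[:, 3] - 0.002 * X[:, 2] + rng.normal(0, 0.01, n)
y = (signal > np.median(signal)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "intelligent" step: the machine, not the analyst, induces the
# discrimination rules and their weightings from the examples.
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("learned feature weightings:", model.feature_importances_)
```

Notably, the rules such a learner derives are spread across hundreds of trees and are effectively opaque to a human reader, which mirrors the point that machine-derived procedures often defy human understanding.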
Financial analysts recognized this early on and have used first-order relational information in the form of financial ratios for many years: price/book, debt/equity, current assets to current liabilities, price/earnings, and so on. Now, with advanced AI systems, we are finally able to examine and evaluate higher-order interrelationships in financial data that have been far too complex to analyze with less sophisticated systems. These, then, are the fundamental differences between what has been used in the past and what will be used in the future.
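To make the contrast concrete, the hypothetical sketch below computes the first-order ratios the passage lists from raw statement items and then forms one second-order relationship by combining two of them, the kind of relation-between-relations an advanced AI system would search for automatically; all figures and the particular combination are invented for illustration.

```python
# First-order relational information: classic financial ratios computed
# from raw statement items. All figures are invented for illustration.
raw = {
    "price_per_share": 42.0,
    "book_value_per_share": 18.0,
    "earnings_per_share": 3.5,
    "total_debt": 500.0,
    "shareholder_equity": 800.0,
    "current_assets": 350.0,
    "current_liabilities": 200.0,
}

ratios = {
    "price/book": raw["price_per_share"] / raw["book_value_per_share"],
    "price/earnings": raw["price_per_share"] / raw["earnings_per_share"],
    "debt/equity": raw["total_debt"] / raw["shareholder_equity"],
    "current ratio": raw["current_assets"] / raw["current_liabilities"],
}

# A hypothetical higher-order relationship: a relation between relations,
# here valuation per unit of balance-sheet liquidity. Advanced AI systems
# search enormous spaces of such combinations automatically.
second_order = ratios["price/earnings"] / ratios["current ratio"]

for name, value in ratios.items():
    print(f"{name}: {value:.2f}")
print(f"P/E per unit of current ratio (second-order): {second_order:.2f}")
```

Cdr. Thomas E. Berghage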