a) “Paint a picture to suit yourself …”
The quality of a forecast is influenced not only by poor data quality, but also by the deliberate selection and manipulation of the data used to produce it. Although economists and analysts work with very similar models, it is not uncommon for them to generate very different results. This is because some intentionally select data to produce results that suit the interests of their clients or their own ideological point of view.
In recent years, the American statistician Nate Silver has acquired renown for his successful election forecasts. In his book “The Signal and the Noise”, he brilliantly describes the general problems of the methods used to draw up predictions. Building on the research of the political scientist Philip E. Tetlock, he differentiates between two main types of prognosticator – the ‘fox’ and the ‘hedgehog’. These personalities were originally described by the Greek poet Archilochus: “The fox knows many things, but the hedgehog knows one big thing.”
‘Foxes’ tend to think more cross-functionally, rely more on empirical findings, express their forecasts fairly cautiously and are more prepared to revise them. ‘Hedgehogs’, on the other hand, have an overarching vision of the world and perceive everyday problems in a way that fits their ideology. They express their forecasts more confidently, are rarely prepared to revise them, and generally put off-the-mark predictions down to bad luck or adverse circumstances, rarely down to their own mistakes. The predictions made by ‘hedgehogs’ are normally much less accurate than those made by ‘foxes’, but are presented in a more convincing manner.
It is practically impossible for an impartial observer to tell whether a forecaster has deliberately selected a data set to produce the desired results or has conducted the analysis in an unprejudiced, open-minded manner. Unfortunately, biased predictions often attract extensive public attention. This is because the parties that stand to benefit from them – investment bankers, lobbyists, etc. – use them extensively in their public relations. In addition, ‘hedgehog’ predictions are frequently formulated more clearly and presented more confidently, and therefore lend themselves to publication in the mass media.
Manipulated forecasts often sound highly scientific and employ particularly elaborate methods. The general public – and even many experts – are fooled by this combination of poor-quality data and assumptions with complex scientific methods. It is an extremely dangerous mix, as Benjamin Graham observed as early as 1958: “The combination of precise formulas with highly imprecise assumptions can be used to establish, or rather to justify, practically any value one wishes, however high.”
Today the risk models of the US rating agencies are often cited as an example of this danger: in the run-up to the 2007 financial crisis, they systematically assessed US mortgage bonds as far too safe. Although the data underlying their calculations was up to date and of the best quality available, the predictions the models generated were completely wrong because of the “design” of the assumptions. (Nate Silver describes this in detail in “The Signal and the Noise”, cited above.)
As manipulated prognoses almost always fail, yet at the same time attract considerable attention, they have severely damaged the reputation of forecasts of corporate and economic developments.
b) Prediction cultures
Empirical research has shown that economic forecasts vary greatly depending on their target group. As they differ considerably – in the way they are produced and the way they are presented – it is justifiable to speak of different ‘prediction cultures’. Whether a forecast takes the form of a time-related point forecast or a “rough” prediction of trends depends on the culture in which it is made.
There are cultures in which forecasts are drawn up exclusively for presentation purposes. The customers use these forecasts to support their own decisions. For instance, economists and stock analysts generate predictions for investors, who base their investment decisions on them. This type of forecast is produced in so-called ‘show-prediction cultures’. Its main purpose is to make its producers sound like experts. In the eyes of the general public, a person who can produce forecasts that are as precise as possible qualifies as an expert. Point forecasts meet this requirement, even though such precision is not actually attainable. To meet the public’s expectations, economic analysts do their best to make their predictions sound fixed, formal, figure-based and artificially precise. As the convention is to present a scientifically based, clear analysis of the situation, ambiguity and uncertainty are played down and largely swept under the carpet.
The so-called crash prophecies play a prominent role among show predictions. These are extremely negative predictions that attract considerable public attention. Investors are much more sensitive to losses than to profits of the same order of magnitude – a phenomenon, explored by behavioural economists, known as loss aversion. Investors are afraid of making losses, and when they do make losses, they are reluctant to admit to them. Forecasters can therefore expect to attract extensive attention with statements warning the public about heavy losses. And if they happen to be right, they are guaranteed years of celebrity status.
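To make the asymmetry concrete, here is a minimal sketch using the value function from Kahneman and Tversky’s prospect theory, which behavioural economists use to model loss aversion. The function name `prospect_value` and the parameter values (α ≈ 0.88, λ ≈ 2.25, the coefficients Kahneman and Tversky estimated) are illustrative choices, not part of the text above.

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function: losses are weighted
    more steeply than gains of the same magnitude."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

gain = prospect_value(1000)    # subjective value of a 1,000 gain
loss = prospect_value(-1000)   # subjective value of a 1,000 loss
print(abs(loss) / gain)        # ratio of about 2.25: the loss hurts
                               # more than twice as much as the gain pleases
```

With these parameters, a loss feels roughly 2.25 times as painful as an equal-sized gain feels pleasant, which is why warnings of losses command so much more attention than promises of gains.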
The professional life of Elaine Garzarelli, the only investment strategist to predict the October 1987 crash, is a good example of how a reputation as a crash prophet can lastingly shape an entire career. Her predictions as a whole were not particularly successful, and she made some spectacularly bad forecasts in the years after 1987 (e.g. at the market trough in 2003, she predicted that the bad times would continue). An investment fund based on her forecasts performed only moderately and was discontinued several years later. Nevertheless, after her successful prediction in 1987, the readers of Institutional Investor ranked her top quantitative analyst for 11 years, and she remained one of the best-paid strategists on Wall Street for a long time.
A special category of crash prophet is the investment strategist who has been warning for decades that the economy and the stock market are about to crash. Marc Faber and Albert Edwards are today’s best-known examples. Although, unlike Elaine Garzarelli, they have almost always been completely wrong, they attract a great deal of media attention. This may be because their prophecies resemble traditional biblical end-of-the-world scenarios and play on people’s primal fears. Their predictions are perfect ‘hedgehog’ forecasts: the data is interpreted to fit a given catastrophe scenario, regardless of how flawed the reasoning behind it is.
In addition, there are cultures in which players produce forecasts for their own use, as a basis for their own actions. These are mostly rough statements about tendencies on the financial markets: they indicate a general direction, but neither the scale nor the timing. They are intended as tools for internal company purposes to aid investment decisions.
For instance, portfolio managers make their own predictions, which they do not have to communicate but can convert directly into decisions. Economic events are considered unique, one-off occurrences. As such events cannot be accurately predicted, forecasters do not attempt to provide a precise figure for a future event or to pin it down to an exact date. These forecasts are therefore predominantly trend forecasts, which predict the direction of an economic development, but not its scale or exact timing.
The status of a portfolio manager does not ultimately depend on how successful his forecasts are, but on the outcome of his investment decisions. The result is measured in terms of the performance of the fund he manages. Forecasts are only one of several instruments that can be used to obtain a positive result. For this reason, it seems justified to label this an “outcome-prediction culture”.
c) Forecasts and feedback
Feedback is a mechanism in signal- or information-processing systems that causes interactions between the original signal – in this case the forecast – and the system as a whole – in this case the economy.
Feedback occurs in many forms in technical, biological, geological, economic and social systems. Depending on its nature, it can reinforce a process or mitigate and limit it. In the first case it is called positive feedback; in the second, degenerative or negative feedback.
For forecasts there are two basic types of feedback loop: the self-fulfilling prophecy, a form of positive feedback, and the self-defeating prophecy, a form of negative feedback. Both terms go back to the sociologist Robert K. Merton, who analysed social mechanisms to explain the impact of certain attitudes and behaviours. In 1948, he was the first to recognise the fallacy of treating a forecast as validated by events that the forecast itself helped to bring about.
A self-fulfilling prophecy describes the phenomenon whereby an expected behaviour of another person or economic entity is promoted, or even brought about, by a forecast. Examples were observed during the recent Euro crisis: in reaction to pessimistic economic forecasts, financing conditions for the crisis countries deteriorated on the international capital markets, which in turn hurt their growth. As a result, these countries missed their deficit targets, leading to even worse predictions, a further rise in interest rates, and so on. The ECB had to intervene to break the vicious circle.
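The vicious circle just described can be sketched as a toy positive-feedback loop: a pessimistic growth forecast raises a country’s borrowing costs, higher borrowing costs depress actual growth, and the next forecast is revised downward again. The function `simulate` and all coefficients below are illustrative assumptions for the sake of the sketch, not an economic model.

```python
def simulate(forecast, rounds=5, sensitivity=0.5, drag=0.4):
    """Toy self-fulfilling-prophecy loop: a pessimistic forecast
    pushes up the borrowing rate, the higher rate depresses actual
    growth, and the next forecast simply tracks the worse outcome."""
    history = []
    rate = 3.0  # initial borrowing rate in percent (illustrative)
    for _ in range(rounds):
        rate += sensitivity * max(0.0, -forecast)  # worse outlook -> higher rate
        growth = forecast - drag * (rate - 3.0)    # higher rate -> lower growth
        forecast = growth                          # next forecast tracks outcome
        history.append((round(rate, 2), round(growth, 2)))
    return history

for rate, growth in simulate(forecast=-1.0):
    print(f"rate={rate:.2f}%  growth={growth:.2f}%")
```

Starting from a mildly pessimistic forecast, the rate rises and growth worsens in every round – the signature of positive feedback. Breaking the loop requires an outside intervention (in the text above, the ECB) that decouples the borrowing rate from the forecast.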
Companies implement investor-relations strategies in order to influence the way analysts draw up their profit estimates and adapt them to recent developments. The aim of this ‘expectations management’ is to achieve the highest possible valuation of equities and bonds, which keeps financing costs low. This raises profits, which in turn justifies a higher valuation.
In contrast to the self-fulfilling prophecy, the self-defeating prophecy is a prediction that prevents what it predicts from happening. This form of prediction is particularly important in risk management. A typical example is the prediction of a disaster – say, an accident or a production stoppage – that leads to measures preventing the disaster from occurring.
Unfortunately, in economics it has become very difficult to differentiate between warnings that should be taken seriously and the doomsday scenarios of the crash prophets. The interdependence of media interests and forecast formulation has an unfortunate consequence: the more extreme a forecast, the more appealing it is as material for a sensationalist story. Moderate warning forecasts, by contrast, rarely receive publicity.