a) “Garbage in, garbage out”
If predictions are drawn up on the basis of data analysis, the quality of the underlying data is crucial. Even the most sophisticated forecasting model is worthless if the information fed into it is poor. Statisticians describe this phenomenon as “garbage in, garbage out”.
The quality of data used for economic forecasts is a major problem, in particular regarding predictions about macro-economic performance – i.e. about gross domestic product, unemployment, exports and inflation. For some of these measures, reliable data only becomes available months after it has been collected. Initial estimates often have to be revised dramatically several times, particularly when the economic trend reverses. At the beginning of 2008, the US economy was already in a considerable downturn, as revised data later showed. However, most economists were not aware of this at the time; basing their forecasts on incorrect information, they produced predictions that were far too optimistic.
Ensuring that they are working with correct data is a significant challenge – not only for macro-economists but also for corporate analysts. For analysts, however, the problem is less about obtaining up-to-date data and more about maintaining objectivity: to draw up their profit estimates, they have to rely on figures provided by the management of the company they are analysing, and these figures are invariably shaped to suit the interests of the company. This becomes a problem in particular when analysts develop unrealistic growth expectations, which in turn lead to over-optimistic forecasts.
In addition, not only the quality but also the quantity of data can be a problem. With too little data, no meaningful statement is possible. Too much data gives rise to another problem – so-called noise. The data set then contains irrelevant information, which has to be filtered out to produce a meaningful analysis. And this is a tricky task.
In this context, the problem of spurious relationships often arises: a statistical analysis identifies a relationship between two factors that is pure coincidence rather than a genuine logical connection. A famous example is the so-called “Super Bowl indicator”, which says that if a National Football Conference (NFC) team wins the Super Bowl, US shares will rise in value that year, whereas if an American Football Conference (AFC) team wins, stock prices will fall. Between 1967 and 1997, this indicator had a 90% success rate in forecasting stock market direction. Since then, however, there has no longer been a clear connection. The lack of any logical link between stock prices and football results is obvious. Particularly in the age of “big data”, statisticians sometimes do not reflect sufficiently on whether the data they are analysing is really logically linked. Forecasting errors are therefore bound to arise.
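How easily such coincidental hit rates arise can be shown with a short simulation. The sketch below is purely illustrative and uses no real market or sports data: it generates 1,000 random coin-flip “indicators” and checks how well the best of them happens to “predict” a random market direction over roughly 30 years.

```python
# Minimal sketch: spurious predictors emerging from pure chance.
# All data here is simulated; no real market or sports data is used.
import numpy as np

rng = np.random.default_rng(42)

n_years = 31          # roughly the 1967-1997 window of the Super Bowl indicator
n_indicators = 1000   # candidate "predictors", all pure coin flips

# Simulated yearly stock market direction: +1 = up, -1 = down
market = rng.choice([1, -1], size=n_years)

# Random binary indicators with no connection to the market whatsoever
indicators = rng.choice([1, -1], size=(n_indicators, n_years))

# Hit rate of each indicator: how often it matches the market direction
hit_rates = (indicators == market).mean(axis=1)

print(f"Best random indicator: {hit_rates.max():.0%} hit rate over {n_years} years")
print(f"Indicators with at least 70% hit rate: {(hit_rates >= 0.7).sum()}")
```

With a thousand candidate indicators, some will almost always look impressively accurate in hindsight – which is exactly the trap the Super Bowl indicator illustrates.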
b) The ‘margin of error’ business
In other areas, statistically determined forecasts are usually delivered together with a margin of error, i.e. the forecasters specify the probability with which the prediction will occur or the range within which the outcome is expected to fall. Economic forecasts, however, generally do not include a margin of error, even though it would be a helpful piece of additional information for evaluating the reliability of a forecast.
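Attaching a margin of error need not be complicated. As a minimal sketch, assuming a forecaster keeps a record of past forecast errors, an empirical interval can be derived directly from that history (all figures below are hypothetical):

```python
# Minimal sketch: turning a point forecast into a forecast with a margin of
# error, using the distribution of past forecast errors. All figures are
# hypothetical, not real forecast data.
import numpy as np

# Past errors of a GDP growth forecast (actual minus predicted), in % points
past_errors = np.array([-1.8, 0.4, -0.3, 1.1, -2.5, 0.6, -0.2, 0.9, -1.1, 0.3])

point_forecast = 1.5  # hypothetical GDP growth forecast for next year, in %

# Empirical 80% interval: 10th to 90th percentile of past errors
lo, hi = np.percentile(past_errors, [10, 90])
print(f"Forecast: {point_forecast:.1f}% "
      f"(80% interval: {point_forecast + lo:.1f}% to {point_forecast + hi:.1f}%)")
```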
One reason for this could be that statistical problems related to the quality, age and quantity of the data used make it impossible to specify a meaningful margin of error. This, in turn, would suggest that such forecasts are drawn up on a dubious basis.
Another reason could be that the margin of error is so large that forecasters deliberately conceal it to avoid undermining their credibility. This argument is particularly relevant for the “show predictions”, which are explained in more detail in part 3. Forecasters want to present themselves as experts, so it clearly makes no sense for them to draw attention to possible errors.
c) Short, medium or long term?
The majority of economic forecasts are made for time frames covering the next 1-2 years, in other words for the medium term. Unfortunately, making predictions for this particular time frame is more difficult than for any other, because the economy is a complex, evolving system that follows trends. Short-term trends can often be predicted relatively easily, because they are a continuation of current developments or follow seasonal cycles. The problem with short-term predictions is therefore not the reliability of the identified trend, but rather that the underlying data is not always of good quality and up to date.
Medium-term trends, by contrast, are practically impossible to determine. In a complex system such as the economy, surprising developments are bound to arise within a certain period of time, due to innovations, cyclical factors, political events, etc. This phenomenon, termed “emergence”, describes how new characteristics or structures arise spontaneously within a system as a result of the interactions of its elements.
Long-term economic developments, on the other hand, are easier to describe because they are determined by so-called mega-trends, which are driven by structural factors such as demography, education or productivity growth. Medium-term fluctuations cancel each other out and therefore have only a minimal impact on long-term predictions.
These days, meteorologists only provide forecasts for very short periods of time, for which their models generate relatively good predictions. Economists and financial analysts, on the other hand, focus heavily on delivering medium-term predictions, although these are particularly challenging. This is partly due to public demand, as the general public is mainly interested in medium-term forecasts. In my opinion, however, it is also because a large number of macro-economists do not have a sufficient grasp of the complexity of economic relationships and therefore systematically overestimate the power of their models. It is no surprise, then, that serious forecasting errors are made at economic turning points in particular.
The focus on generating profit estimates for the coming financial years can also be a problem for stock analysts. While this is possible to a certain extent for companies with steady revenues, e.g. food producers, it is completely impossible for cyclical ones. Nevertheless, a point profit forecast for the next years has to be delivered to meet the expectations of the general public. And, what’s more, the analyst’s performance is then evaluated on the basis of this impossible prediction.
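A range forecast would be more honest than a single number for a cyclical business. As a minimal sketch with purely hypothetical inputs, a simple scenario simulation already shows how wide the plausible outcomes are:

```python
# Minimal sketch: a scenario-based range forecast for a cyclical company's
# profit, as an alternative to a single point estimate. All inputs are
# hypothetical.
import numpy as np

rng = np.random.default_rng(7)

base_profit = 100.0   # hypothetical current annual profit, in millions
n_scenarios = 10_000

# Assume cyclical swings can move next year's profit by -30% to +30%
growth = rng.uniform(-0.30, 0.30, size=n_scenarios)
profits = base_profit * (1 + growth)

p10, p50, p90 = np.percentile(profits, [10, 50, 90])
print(f"Median scenario: {p50:.0f}m; 10th-90th percentile: {p10:.0f}m to {p90:.0f}m")
```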