The Delphi technique: Past, present, and future …
These difficulties have not deterred many traditional analysts and long-range forecasters from using such methods and thereby generating dubious advice for their sponsors. Within futures research, however, these techniques--when used well--are applied in a very distinctive way. The objective is not to foretell the future, which is obviously impossible, but to provide purely extrapolative base-line projections to use as a point of reference when obtaining projections of the same trends by more appropriate methods. What would the world look like if past and current forces for change were allowed to play themselves out? What if nothing novel ever happened again? The only value of these mathematical forecasting techniques in futures research is to provide answers to these remarkably speculative questions. But once they are answered, a reference will have been established for getting on with more serious forecasting.
When cause is not an essential factor, trends are often forecast using time as the independent variable. Much of the "trend extrapolation" in futures research takes this form. Common methods of time-series forecasting in use today are smoothing, decomposition, and autoregression/moving average methods. Smoothing methods are used to eliminate randomness from a data series in order to identify an underlying pattern, if one exists, but they make no attempt to identify individual components of that pattern. Decomposition methods can be used to identify those components--typically the trend, the cycle, and the seasonal factors--which are then predicted individually. The recombination of these predicted patterns is the final forecast of the series. Like smoothing methods, decomposition methods lack a fully developed theoretical basis, but they remain in use because of their simplicity and short-term accuracy. Autoregression is essentially the same as classical multivariate regression, the only difference being that the independent (predictor) variables are the time-lagged values of the dependent (predicted) variable. Because time-lagged values tend to be highly correlated, coupling autoregression with the moving average method produces a very general class of time-series models called autoregression/moving average (ARMA) models.
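To make two of the methods named above concrete, the following is a minimal sketch of single exponential smoothing and an AR(1) fit by ordinary least squares, the simplest member of the autoregressive family. The function names are illustrative, not from the source; real applications would use a statistical package rather than hand-rolled formulas.

```python
def exponential_smoothing(series, alpha):
    """Single exponential smoothing: each smoothed value is a weighted
    average of the current observation and the previous smoothed value.
    This removes randomness without modeling trend or seasonality."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed


def ar1_fit(series):
    """Fit an AR(1) model x_t = a + b * x_{t-1} by ordinary least squares.
    The single predictor is the time-lagged series itself, which is what
    distinguishes autoregression from ordinary multivariate regression."""
    x = series[:-1]          # lagged values (predictor)
    y = series[1:]           # current values (predicted)
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b
```

For example, fitting the series 1, 2, 3, 4, 5 recovers a = 1 and b = 1 (each value is the previous value plus one). Full ARMA models add a moving-average term over past forecast errors on top of this autoregressive structure.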
For example, in a study by Boucher and Neufeld (1981), a set of 111 trends was forecast 20 years hence both mathematically (using an ARMA technique) and judgmentally (using the Delphi technique). Analysis of the results showed that the average difference between the two sets of forecasts was over 15 percent. By the first forecasted year (which was less than a year from the date of the completion of the Delphi), the divergence was already more than 10 percent; by the 20th year, it had reached 20 percent. This result is interesting because even experienced managers usually accept mathematical forecasts uncritically. They like their apparent scientific objectivity, they have been trained in school to accept their plausibility, and acceptance has been reinforced by an endless stream of such projections from government, academia, and other organizations. Seeing judgmental and mathematical results side-by-side can thus be most instructive. Moreover, as some futures researchers believe, if the difference between such a pair of projections is 10 percent or more, it is probably worth examining in depth.
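The source does not say exactly how Boucher and Neufeld computed their divergence figures; one plausible reading, sketched below, is the mean absolute difference between paired forecasts expressed as a percentage of the mathematical baseline. Both the function name and the choice of denominator are assumptions for illustration.

```python
def average_divergence(judgmental, mathematical):
    """Mean absolute difference between paired forecasts, expressed as a
    percentage of the mathematical (baseline) forecast. Assumes the two
    lists are aligned trend-by-trend and the baseline is nonzero."""
    diffs = [abs(j - m) / abs(m) * 100.0
             for j, m in zip(judgmental, mathematical)]
    return sum(diffs) / len(diffs)
```

Under this reading, a panel forecasting 110 and 120 against baselines of 100 and 100 would show a 15 percent average divergence, the level at which the study's overall comparison landed and above the 10 percent threshold some futures researchers treat as worth examining in depth.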
As futures research has developed since the mid-1960s, much work has gone into the invention and application of techniques intended to overcome these and other limitations of widely practiced methods of forecasting. In general, the newer methods are alike in that they tend to deal as explicitly and systematically as possible with the various elements of alternative futures, the aim being to provide the wherewithal for users to retrace the steps taken. The following paragraphs highlight some of these methods.
Another reason that success with Delphi is hard to achieve is that, despite 20 years of serious applications, very little is known about how and why the consensus-building process in Delphi works or what it actually produces. No wide-ranging research on the fundamentals of the method has been done for more than a decade. According to Olaf Helmer, one of the inventors of Delphi, "Delphi still lacks a completely sound theoretical basis.... Delphi experience derives almost wholly either from studies carried out without proper experimental controls or from controlled experiments in which students are used as surrogate experts" (Linstone and Turoff 1975, p. v). The same is true today. The practical implication is that most of what is "known" about Delphi consists of rules of thumb based on the experience of individual practitioners.