
Systems Approach


Source: J. Scott Armstrong, Long-Range Forecasting: From Crystal Ball to Computer, 2nd Edition, New York: Wiley-Interscience, Chapter 2, pp. 13-22.


DESCRIPTION OF THE SYSTEMS APPROACH

The systems approach uses two basic ideas. First, one should examine objectives before considering ways of solving a problem; and, second, one should begin by describing the system in general terms before proceeding to the specific. These ideas are illustrated in Exhibit 1. The arrows indicate time priorities, and the boxes indicate the separate steps of the approach. Thus the first step is the identification of the objectives of the system, and the last step is the development of an operational program.

[Exhibit 1: The four steps of the systems approach]

A separate time period should be allocated to each step in the systems approach. This is important because most of us feel more comfortable when working on the operational program. Time spent on the other stages often appears to be wasted; it creates tension, especially for experts. However, dramatic gains are often made by working on the three other stages, and that is where creative people spend more time. Therefore it is generally wise to allocate a significant portion of time to each of the four stages. Each step should be done independently of the one following. Most importantly, the identification of objectives should proceed without consideration being given to the steps that follow. Each step should be carried out in an explicit manner with a written summary.

Identifying Objectives

"Cheshire-Puss, she[Alice] began, Would you tell me, please, which way I ought to go from here?"
"That depends a good deal on where you want to get to," said the Cat.
Lewis Carroll

One interesting thing about this section is that it is written not for us, but for other people. We all have a good ability to recognize the objectives of a system. I must confess, however, that I, like those others, often ignore the advice in this section, sometimes to my lasting regret.

The first step in the systems approach is to identify the ultimate objectives of the system. This analysis should start at the highest conceptual level. Common sense, certainly, but those other people don't follow it. Therefore we pay experts like Ted Levitt to spend their lives telling other people about this idea. Incidentally, Ted Levitt wrote what is probably the most famous article in the field of marketing. This article stresses the importance of beginning with ultimate objectives rather than with operational goals. (Actually, I think he stole the idea from the Bible or from Shakespeare … or was it Adam Smith?) In any event, you can read his article (Levitt, 1960), or you can read a book about the article (Levitt, 1962). His advice is sound, simple, and often ignored by other people.

No consideration should be given to alternative strategies during the initial phase. Set up a separate time period for this analysis. Write a separate report on objectives. Review the objectives to ensure that they do not imply strategies. This advice is also simple. Why is it often ignored? Perhaps because we are taught to accept tradition. Tradition tells us how, not why.

An example from marketing may help to illustrate the analysis of objectives. One of my projects involved making forecasts for the cereal market. It was possible to recast this problem in more general terms by looking at needs such as nourishment. Man's need for nourishment has been studied extensively, and it changes slowly; however, his preference for different types of foods, such as cereals, changes rapidly. Cereals are a means to an end; substitutes for cereals include eggs, instant breakfasts, pills, or fasting. In our problem, it helped first to go back and consider the ultimate needs served by the product.

One technique for identifying objectives is the stakeholder analysis. The first step in a stakeholder analysis is to identify all the groups (or individuals) who may possibly be affected by a change in the system. Brainstorming can help in compiling a list of these groups. Another approach is to publicize the fact that changes are being considered in the organization and see who responds. Still another alternative is to consult experts.

When the list of interest groups has been identified, the objectives of each group should be listed. This may also be done by brainstorming. Even better, one can survey people from each group in order to determine their objectives.
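
To make the bookkeeping concrete, here is a minimal sketch in Python of how the results of a stakeholder analysis might be recorded; the stakeholder groups and objectives shown are invented placeholders, not drawn from any particular study.

```python
# A minimal sketch of recording the output of a stakeholder analysis.
# The groups and objectives below are invented placeholders.

stakeholder_objectives = {
    "customers":    ["nourishment", "convenience", "low price"],
    "employees":    ["job security", "fair pay"],
    "shareholders": ["return on investment"],
    "community":    ["employment", "low environmental impact"],
}

# Listing each group's objectives side by side makes overlaps and conflicts
# visible before any strategy is considered.
for group, objectives in stakeholder_objectives.items():
    print(f"{group}: {', '.join(objectives)}")
```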

Although Long-Range Forecasting relates to means (forecasting methods) rather than to objectives (decision making), this book places much emphasis on objective-setting. One small example: I make a distinction between forecasting models and measurement models. The objectives of this book relate to forecasting models. Measurement models are only a means to the development of forecasting models.

The preceding paragraph is probably not much clearer than a passage from Alice in Wonderland. We call forecasting models “forecasting models” because that is what they are. Most people who discuss forecasting models don't call them forecasting models. They find other names such as regression, computer, judgmental, or econometric models. In short, they name the model not for its objective but for its method. The fact that the model is named on the basis of means rather than ends tends to divert the researcher's attention from what he is trying to do, to how he does it – an unfortunate, but certainly not uncommon, consequence.

My associate, Fritz Dressler, has generalized about the researcher's devotion to method rather than objectives and has developed the rainmaker theories. Here are some examples:

"The rainmaker gets so involved with the dance that he sometimes forgets that he has to make rain"
Rainmaker Theory Number 1.
"Yes, I know it didn't rain" but didn't you like the dance?" I other words, the successful rainmaker is the one who can convince his client that he really didn't want rain – he wanted to watch the dance.
Rainmaker Theory Number 2.
"Who cares why it rains?" The science of rainmaking evolves into the science of rainmaking dances.
Rainmaker Theory Number 3.

Here is a rain dance example. Once upon a time, I was on an admissions committee for a graduate business school. The committee used a weighting scheme to predict which students would successfully complete the graduate program. Someone (probably I) suggested that we take a broader view of our objectives. For example, what should people be like after they finish the program? In what ways do we or can we help people? What benefits do they receive? The proposal was roundly shouted down. We spent about an hour discussing whether we could devote 15 minutes to assessing objectives, then voted not to spend the 15 minutes. The consensus was: "I've been in discussions like this before and nothing fruitful ever comes of them." The chairman summed things up this way: "First we'll figure out how to admit students. If we have any time, we'll come back and examine why we're admitting them." And he was serious! This is only one of many similar experiences I have encountered in both business and academic life. For further reading, try Vonnegut's Player Piano, Kafka's Amerika, or Heller's Catch-22. For further evidence, look at your own organization; note that you generally have no problems if you fail to reach the ultimate goal of the organization. But what happens if you fail to use the prescribed method?

The identification of objectives is probably the most important step in the systems approach. This is especially true because it is the most widely ignored step.

Indicators of Success

Before charging off after the objectives, you should decide what point you are trying to reach, and how you would know if you ever got there. In other words, it is important to establish explicit indicators of success.

The list of objectives from the preceding section serves as the starting point in the development of indicators of success. In this step, procedures are developed to measure how changes in the system will affect each of the objectives. This phase is difficult and requires a lot of psychic energy.

It is desirable to find measures that can be quantified and, better still, measures that can be accurately quantified. In the example that described procedures for admission to graduate school, the success of students was quantified by grades in courses. However, faculty grades of students lack reliability. Different raters usually provide widely different grades for the same piece of work if the ratings are done independently.
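
One rough way to examine such reliability is to have raters grade the same pieces of work independently and then check how well their scores agree. The sketch below, with invented grades, computes a simple Pearson correlation between two raters; a low correlation signals unreliable ratings.

```python
# A rough reliability check: correlate the grades that two raters assign
# independently to the same eight papers. The grades are invented.
from statistics import correlation  # requires Python 3.10+

rater_a = [85, 72, 90, 60, 78, 95, 66, 81]
rater_b = [70, 88, 75, 65, 90, 72, 80, 60]

r = correlation(rater_a, rater_b)  # Pearson's r
print(f"inter-rater correlation: {r:.2f}")
```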

The measures should relate to the objectives of the forecasting exercise. This is a question of validity. Do the indicators measure what they are intended to? As to the prediction of student success, grades by teachers have been shown to lack validity. A brief summary of the evidence on this issue is provided by Armstrong (1980). (What do grades by faculty measure, you ask? Probably obedience, ability to follow directions, flattery, and cheating).

Alternative Strategies

It is important to prepare more than one strategy for meeting the objectives. Sometimes it helps to obtain suggestions from different people, especially from those with different types of expertise. This process can be improved further by using brainstorming.

Ideally, the various strategies should differ substantially. Unfortunately, we are usually trained in narrow specialties and consequently become experts in only a small number of strategies. One of the objectives of Long-Range Forecasting is to help you go beyond your current area of expertise.

Developing Programs

At this stage, a preliminary screening can be conducted to reduce the alternative strategies to a manageable number. This can be done by rating the alternative strategies against one another, using the indicators of success as the criteria. You could set minimum acceptable targets for each indicator and then see which strategies can exceed each of these minimums. (This approach is known as "satisficing.") Another approach is to compare programs using a subjective unit weighting scale of better (+1), even (0), or worse (-1), and summing the total.
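
To make the two screening procedures concrete, here is a small Python sketch that first applies satisficing cutoffs and then sums better/even/worse comparisons against one strategy taken as a baseline; the strategies, indicators, scores, and minimums are all invented for illustration.

```python
# A sketch of the two screening procedures described above, using invented
# strategies, indicators, and scores (higher scores are better).

scores = {
    "strategy A": {"accuracy": 7, "economy": 4, "acceptability": 6},
    "strategy B": {"accuracy": 5, "economy": 8, "acceptability": 5},
    "strategy C": {"accuracy": 3, "economy": 9, "acceptability": 7},
}

# Satisficing: keep only strategies that meet a minimum on every indicator.
minimums = {"accuracy": 4, "economy": 4, "acceptability": 5}
satisficers = [
    name for name, s in scores.items()
    if all(s[ind] >= minimums[ind] for ind in minimums)
]
print("pass the minimums:", satisficers)  # strategy C fails on accuracy

# Unit weighting: compare each strategy with a baseline, scoring each
# indicator as better (+1), even (0), or worse (-1), and summing.
baseline = scores["strategy A"]
for name, s in scores.items():
    total = sum(
        (s[ind] > baseline[ind]) - (s[ind] < baseline[ind]) for ind in baseline
    )
    print(f"{name} vs. strategy A: {total:+d}")
```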

The more promising strategies can then be translated into an operational program. Scenarios may be useful at this point to help ensure that each aspect of the program has been adequately covered. Scenarios are stories describing the environment, how a strategy can be implemented, and what results are likely.

Further screening may then follow, using the same procedures as above. Satisficing can be applied with new minimum levels, and the unit weighting scheme can also be used. At this point, however, choices among different strategies become more difficult to make. One solution is to try to translate each indicator into a common unit of measure, such as dollars. This requires weighting the various objectives of stakeholders to provide what is essentially a cost-benefit analysis.
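
As a rough illustration of translating indicators into a common unit, the sketch below converts the estimated effects of one strategy into dollar terms using weights that stand in for the stakeholders' valuations; every figure is invented.

```python
# A sketch of a crude cost-benefit translation: express each indicator's
# estimated effect in dollars using stakeholder-derived weights.
# All figures below are invented for illustration.

effects = {                      # estimated effects of one strategy
    "accuracy gain (%)": 5,
    "added cost ($000)": -120,
    "staff time saved (hours)": 300,
}

dollars_per_unit = {             # dollar value placed on one unit of each indicator
    "accuracy gain (%)": 40_000,
    "added cost ($000)": 1_000,
    "staff time saved (hours)": 50,
}

net_benefit = sum(effects[k] * dollars_per_unit[k] for k in effects)
print(f"net benefit: ${net_benefit:,.0f}")   # positive means benefits exceed costs
```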

An alternative solution is to carry along more than one strategy, perhaps on an experimental basis. Alternative strategies can also serve as contingency plans. If the preferred strategy does not perform as hoped, or if conditions change, the contingency plan can be used. Comparisons among alternative programs can also be aided by presenting the scenarios to the stakeholders for their reactions.

Ideally, this step would complete the systems analysis. In practice, however, it is frequently necessary to return to earlier steps once this point is reached. In other words, the systems approach is used in an iterative manner.

VIOLATING THE SYSTEMS APPROACH

What happens when the systems approach is violated in forecasting? That's simple: you are more likely to get poor forecasts, and there is a greater probability that someone will criticize your work in a book on forecasting methods.

Hacke (1967) demonstrated what can happen when the systems approach is ignored. He provided a nine-year forecast of transistor production. Although the forecast was published in 1967, it had actually been based on data through 1957, and the forecast was made for 1966. The forecast was made by extrapolating historical data on transistors. Forecasters using the systems approach would find this odd because consumers have no basic need for transistors. They have a need for things like visual and audio entertainment, and thus for TVs and radios, but not for transistors. Transistors represent only a means to an end. Furthermore, substitutes for transistors, such as vacuum tubes, existed. The growth rate of transistors was about 230% per year. As a result, the direct extrapolation forecasted a sizable production for 1966. After playing with different extrapolation techniques, Hacke predicted (that is, with 95% confidence limits) that the U.S. production rate in 1966 would be between 6 billion and 690 billion units. Now wouldn't you say that this is a rather large confidence interval?

Contrast the direct model in Hacke's example with one that initially forecasts at a slightly higher conceptual level by considering the market for transistors and its close substitute, vacuum tubes. Certainly this model is closer to representing the visual and audio entertainment market. An analysis of the data for this market, using the 1941-1957 period, indicated that the growth rate was 11% per year. On the basis of this growth rate, a market forecast could be prepared.

This forecast for transistors can be made by forecasting the transistor market share and multiplying it by the market forecast. The market share was only 6% in 1957, but it had been growing rapidly. On the assumption that growth would continue, the market share could be between 6% and 100% in 1966. Let's choose the midpoint, 53%, as the 1966 forecast. Using this simple model, transistor production would grow from 30 million units in 1957 to 700 million units in 1966. This forecast is substantially less than the minimum forecast of 6 billion units based on the direct extrapolation.
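
The arithmetic behind this decomposed forecast can be laid out explicitly. The sketch below uses the figures quoted above (30 million units and a 6% share in 1957, 11% market growth per year, and an assumed 53% share in 1966); the implied 1957 market size of roughly 500 million units is derived from those figures rather than stated directly.

```python
# Reconstructing the decomposed (market x share) forecast from the figures
# quoted above. The 1957 market size is implied by 30 million transistors at
# a 6% share; it is derived here, not stated in the text.

transistors_1957 = 30e6       # units produced in 1957
share_1957 = 0.06             # transistor share of the market in 1957
market_growth = 0.11          # estimated market growth per year
share_1966 = 0.53             # assumed midpoint share for 1966
years = 9                     # 1957 to 1966

market_1957 = transistors_1957 / share_1957            # ~500 million units
market_1966 = market_1957 * (1 + market_growth) ** years
transistors_1966 = market_1966 * share_1966

print(f"implied 1957 market:  {market_1957 / 1e6:.0f} million units")
print(f"forecast 1966 market: {market_1966 / 1e6:.0f} million units")
print(f"forecast 1966 output: {transistors_1966 / 1e6:.0f} million units")
# Roughly 700 million units, far below the 6 billion unit minimum from the
# direct extrapolation.
```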

The actual production of transistors in 1966 was about 850 million units. The market grew at about the rate that had been forecast, and the market share for transistors increased from about 6% to about 60%. This actual value was 14% of Hacke's minimum prediction and less than 1% of the most likely prediction.

The prediction from the simple model that was formulated in more general terms proved to be much more accurate. Furthermore, given the market forecast, the assumption of growth, and the limits on market share, it would have been impossible to miss the prediction as badly as did the direct extrapolation in the illustration by Hacke.

OTHER GOOD ADVICE

I have never been able to teach anyone how to use the systems approach. A few people have learned how to use it; sometimes they said I helped. But the biggest problem is simply a matter of trying.

Everyone likes to read about general laws in a book like this. So I will tell you about a law that has inspired me to get over my natural tendency to say "no." It's the well-known Gerstenfeld's law of trying. It was discovered one night by my friend, Art Gerstenfeld, upon returning home from work. Gerstenfeld's son met him at the door, and the following exchange took place between the two:

"Daddy, fix my bike for me."
"I don't know anything about bikes."
"Daddy, please fix my bike."
"I don't know how to fix your bike!"
"Daddy, please fix my bike!"
"I don't know how to fix your bike!"
PAUSE
"But, Daddy, you can try, can't you?"
ANOTHER PAUSE
"Yes, I suppose that I can try."
"And you know, said my friend Gerstenfeld later, "I did fix that bike."
SUMMARY

In this chapter the systems approach was proposed as a way to structure forecasting problems. This calls for an evaluation of objectives before means, and for a consideration of the general before the specific. It is carried out in four steps as shown in Exhibit 2. This exhibit also contains a checklist of procedures that might be used in each step.

Exhibit 2: Checklist for the Systems Approach


In addition, four general procedures carry across all four steps:

  1. Write it! (An average person with time, a pencil, paper, and a systematic approach will beat a smart person who "thinks on his feet.")
  2. Plan separate time periods for each step.
  3. Omit reference to steps that come later.
  4. Remember that objectives come first and the program comes last.

Finally, the issue is not whether we understand this approach (most of us do), but whether we use it (most of us don't). Gerstenfeld's law of trying was suggested to inspire use of the systems approach.

REFERENCES
Armstrong, J. Scott, "Teacher vs. Learner Responsibility in Management Education," Department of Marketing, Working Paper, 1980.
Hacke, James E., Jr., The Feasibility of Anticipating Economic and Social Consequences of a Major Technological Innovation. Menlo Park, CA: Stanford Research Institute, October 1967.
Levitt, Theodore, "Marketing Myopia," Harvard Business Review, 38 (July-August 1960), 45-56.
Levitt, Theodore, Innovation in Marketing. New York: McGraw-Hill, 1962.

© Copyright J. Scott Armstrong and Kesten C. Green. All rights are reserved.
