Companies give a number of reasons for not measuring the effectiveness of advertising and promotions strategies:
1. Cost. Perhaps the most commonly cited reason for not testing (particularly among smaller firms) is the expense. Good research can be expensive, in terms of both time and money. Many managers decide that time is critical and they must implement the program while the opportunity is available. Many believe the monies spent on research could be better spent on improved production of the ad, additional media buys, and the like.
While the first argument may have some merit, the second does not. Imagine what would happen if a poor campaign were developed or the incentive program did not motivate the target audience. Not only would you be spending money without the desired effects, but the effort could do more harm than good. Spending more money to buy media does not remedy a poor message or substitute for an improper promotional mix. For example, one of the nation's leading brewers watched its test-market sales for a new brand of beer fall short of expectations. The problem, it thought, was an insufficient media buy. The solution, it decided, was to buy all the TV time available that matched its target audience. After two months sales had not improved, and the product was abandoned in the test market. Analysis showed the problem was not in the media but rather in the message, which communicated no reason to buy. Research would have identified the problem, and millions of dollars and a brand might have been saved. The moral: Spending research monies to gain increased exposure to the wrong message is not a sound management decision.
2. Research problems. A second reason cited for not measuring effectiveness is that it is difficult to isolate the effects of promotional elements. Each variable in the marketing mix affects the success of a product or service. Because it is often difficult to measure the contribution of each marketing element directly, some managers become frustrated and decide not to test at all. They say, "If I can't determine the specific effects, why spend the money?"
This argument also suffers from weak logic. While we agree that it is not always possible to determine the dollar amount of sales contributed by promotions, research can provide useful results. As demonstrated by the introduction and examples in IMC Perspective 19-1, communications effectiveness can be measured and may carry over to sales or other behaviors.
3. Disagreement on what to test. The objectives sought in the promotional program may differ by industry, by stage of the product life cycle, or even for different people within the firm. The sales manager may want to see the impact of promotions on sales, top management may wish to know the impact on corporate image, and those involved in the creative process may wish to assess recall and/or recognition of the ad. Lack of agreement on what to test often results in no testing.
Again, there is little rationale for this position. With the proper design, many or even all of the above might be measured. Since every promotional element is designed to accomplish its own objectives, research can be used to measure its effectiveness in doing so.
4. The objections of creative. It has been argued by many (and denied by others) that the creative department does not want its work to be tested and many agencies are reluctant to submit their work for testing. This is sometimes true. Ad agencies' creative departments argue that tests are not true measures of the creativity and effectiveness of ads; applying measures stifles their creativity; and the more creative the ad, the more likely it is to be successful. They want permission to be creative without the limiting guidelines marketing may impose. The Chiat/Day ad shown in Exhibit 19-2 reflects how many people in the advertising business feel about this subject.
At the same time, the marketing manager is ultimately responsible for the success of the product or brand. Given the substantial sums being allocated to advertising and promotion, it is the manager's right, and responsibility, to know how well a specific program—or a specific ad—will perform in the market. Interestingly, a study examining the 200 most awarded commercials over a 2-year span found that 86 percent were deemed effective in achieving their goals, versus only 33 percent of other ads—suggesting that creativity and effectiveness can go hand in hand.3
Exhibit 19-2 Chiat/Day expresses its opinion of recall tests
To advertisers interested in day-after recall, we submit a case history
5. Time. A final reason given for not testing is a lack of time. Managers believe they already have too much to do and just can't get around to testing, and they don't want to wait to get the message out because they might miss the window of opportunity.
Proper planning can solve the first problem. Many managers are indeed overworked and time-poor, but research is too important to skip.
The second argument can also be overcome with proper planning. While timeliness is critical, getting the wrong message out is of little or no value and may even be harmful. There will be occasions where market opportunities require choosing between testing and immediate implementation. But even then some testing may help avoid mistakes or improve effectiveness. For example, after the terrorist attacks on September 11, Motorola developed an ad designed to portray the quality of its mobile phones by showing an FDNY fireman using one. While the ad may have had good intentions, many people felt it was an attempt to capitalize on a tragedy. As a result, much negative publicity was generated. The problem could have been avoided had Motorola pretested consumers' responses to the ad. In most instances, proper planning and scheduling will allow time for research.
Measuring Advertising Effectiveness

We now examine how to measure the effects of communications. This section considers what elements to evaluate, as well as where and how such evaluations should occur.
In Chapter 5, we discussed the components of the communications model (source, message, media, receiver) and the importance of each in the promotional program. Marketers need to determine how each is affecting the communications process. Other decisions made in the promotional planning process must also be evaluated.
Source Factors An important question is whether the spokesperson being used is effective and how the target market will respond to him or her. For example, Tiger Woods has proved to be a successful salesperson for Nike and Buick. Or a product spokesperson may be an excellent source initially but, for a variety of reasons, may lose impact over time. For example, Britney Spears had been an effective spokesperson for Pepsi, particularly with the teen market. The question was, Will she be able to retain this relationship as she gets older? Apparently Pepsi thought not, as her contract was not renewed. In other instances, changes in the source's attractiveness or likeability or other external factors may lead to changes in source effectiveness. Pepsi pulled a TV spot featuring rapper Ludacris after Fox TV's Bill O'Reilly attacked the violent lyrics in Ludacris's songs.4
Message Variables Both the message and the means by which it is communicated are bases for evaluation. For example, in the beer example discussed earlier, the message never provided a reason for consumers to try the new product. In other instances, the message may not be strong enough to pull readers into the ad by attracting their attention or clear enough to help them evaluate the product. Sometimes the message is memorable but doesn't achieve the other goals set by management. One study showed that 7 of the 25 products that scored highest on interest and memorability in Video Storyboard Tests' ad test had flat or declining sales.5 A number of factors regarding the message and its delivery may have an impact on its effectiveness, including the headline, illustrations, text, and layout.
Many ads are never seen by the public because of the message they convey. For example, an ad in which Susan Anton ate a slice of Pizza Hut pizza was considered too erotic for the company's small-town image. Likewise, an ad created for General Electric in which Uncle Sam got slapped in the face (to demonstrate our growing trade imbalance) was killed by the company's chair.6
Media Strategies Media decisions need to be evaluated. Research may be designed to determine which media class (for example, broadcast versus print), subclass (newspaper versus magazines), or specific vehicles (which newspapers or magazines) generate the most effective results. The location within a particular medium
(front page versus back page) and size of ad or length of commercial also merit examination. For example, research has demonstrated that readers pay more attention to larger ads.7 As shown earlier, a variety of methods have been employed to measure the effectiveness of advertising on the Internet. Similarly, direct-response advertisers on TV have found that some programs are more effective than others. One successful direct marketer found that old TV shows yield more responses than first runs:
The fifth rerun of "Leave It to Beaver" will generate much more response than will the first run of a prime-time television program. Who cares if you miss something you have seen four times before? But you do care when it's the first time you've seen it.8
Another factor is the vehicle option source effect, "the differential impact that the advertising exposure will have on the same audience member if the exposure occurs in one media option rather than another."9 People perceive ads differently depending on their context.10
A final factor in media decisions involves scheduling. The evaluation of flighting versus pulsing or continuous schedules is important, particularly given the increasing costs of media time. As discussed in Chapter 10 and IMC Perspective 19-1, there is evidence that a continuous schedule may be more effective than flighting. Likewise, there may be opportunities associated with increasing advertising weights in periods of downward sales cycles or recessions. The manager experimenting with these alternative schedules and/or budget outlays should attempt to measure their differential impact.11
Budgeting Decisions A number of studies have examined the effects of budget size on advertising effectiveness and the effects of various ad expenditures on sales. Many companies have also attempted to determine whether increasing their ad budget directly increases sales. This relationship is often hard to determine, perhaps because using sales as an indicator of effectiveness ignores the impact of other marketing mix elements. More definitive conclusions may be possible if other dependent variables, such as the communications objectives stated earlier, are used.