Focus groups are a qualitative research method, and a poor one for testing ads.

I was hesitant to write about the use of focus groups for advertising evaluation. I can hear some of you thinking, “This is news? Really? This is Research 101; everyone knows that focus groups shouldn’t be used for advertising evaluation!” And many concur. Conduct a Google search on advertising and focus groups, and you’ll find quite a few research companies that would agree with this assertion. However, we still find people who insist on conducting focus groups for advertising evaluation; many of our clients would find this approach to be sacrilegious [some of our clients would beat us about the head and shoulders for even suggesting focus groups for ad evaluation].

So, I hesitated to tackle this topic; that is, until yet another client with advertising evaluation needs called recently to say they needed focus groups to test a series of ad campaigns. Actually, this issue comes up quite a bit—even among seasoned marketers and researchers. For this client, we advised against the use of focus groups to evaluate ads due to the ever-present danger of “groupthink.” We explained that focus groups are ideal for generating ideas [i.e., for developing fodder to help with the creation of ads or for ideation] but not so much for evaluating ads or concepts. Whenever the research calls for an evaluative mode, we typically recommend in-depth, in-person interviews [IDIs] so that the phenomenon of groupthink is eliminated.

So, what is groupthink? A pioneer in the research of groupthink, Irving Janis defined the term from a psychological perspective as “a mode of thinking that people engage in when they are deeply involved in a cohesive group, when the members' strivings for unanimity override their motivation to realistically appraise alternative courses of action.” While focus group participants may not be a deeply “cohesive group” in the sense that Janis conveys, the groupthink phenomenon can and does occur when participants with similar behavior and demographics are brought together in a focus group. Short of conducting IDIs, there is no way to eliminate groupthink entirely.

Nevertheless, we see some clients requesting focus groups for advertising evaluation for several reasons:

  1. Clients are often concerned about the cost of IDIs.
  2. Some clients and prospective clients confuse the larger number of respondents in focus groups with statistical reliability.
  3. Clients find the IDI process monotonous and time-consuming to observe.

 Cost as an Issue. Regarding the most recent request from a client to conduct focus groups, we strongly recommended IDIs for the client’s advertising evaluation needs, but they were concerned about the cost. It is true that IDIs are often more expensive than focus groups on a per-individual basis. We suggested to this client that if budget and travel restrictions existed, we could evaluate the ads via an online bulletin board [OLBB], where participants’ responses can be isolated so that they don’t see each other’s answers, thereby eliminating the online version of groupthink [of course, we can flip a switch and allow OLBB participants to see each other’s comments, if that is appropriate]. And with OLBBs, real-time transcription is included [another advantage of OLBBs is that you don’t have to wait for the transcripts], the platform costs far less than renting a facility for several days, and there are no travel costs to incur.

 Reliability as an Issue. Another reason that clients prefer focus groups for advertising evaluation is that they want to get as many opinions as they can in the shortest time possible. So, if a client did four focus groups of eight respondents each, that would be 32 opinions, and these groups could be conducted over the course of two evenings—maybe one day if the groups are short enough. BUT, this is not a sample of n=32; it’s a convenience sample of four focus groups. There are really no “individuals” in a group—because participants hear and react to one another, their responses are not independent. Talking about sample size in focus groups is invalid, and it’s amazing how many experienced researchers fail to understand this.

 As far as the numbers are concerned, we often tell clients that with as few as 16 or 24 in-depth interviews, we can evaluate stimulus and begin to see if real patterns exist. However, the four-group scenario is no more “reliable” just because of the larger numbers. The difference between 16 or 24 IDIs versus 32 opinions from focus groups is meaningless in qualitative research. You might have a higher comfort level with more participants via focus groups, but that doesn’t make them any more “reliable.” If you want a large number of opinions and the ability to have group discussions, why not conduct a clinic? We did that for one client who was evaluating a disruptive technology in the printing area. They wanted the ability to obtain a sample size of n=400 AND discuss the concept with respondents. As such, we conducted forty-five 90-minute, “classroom style” clinics. Each clinic consisted of approximately 10 respondents, and the sessions were geographically dispersed. In the end, we were able to provide this client with the robust sample size they desired as well as the ability to observe respondents discussing the concept.

 The Dullness of IDIs as an Issue. And finally, some clients are just more comfortable observing groups [it may be all they know] and don’t want to sit in a facility all day long observing IDIs. After all, they have a job to do during the day and would prefer to observe focus groups in the evening to avoid missing any work [although we do conduct groups in the morning, at lunchtime, and in the afternoon]. Some clients just find the IDI process wearisome. Busy executives especially disdain IDIs. It’s hard to combat this attitude, but we often tell clients they can have it fast and easy or have it right, but often not both. Moreover, when senior executives will be observing during the day, we help ease the viewing pain by recruiting a “floater” for certain interview time slots, so that a “no-show” doesn’t leave a wasted time slot while busy senior executives are observing, which is never good.

Summary. Now, I can hear some of you saying, “But we conduct focus groups all the time to evaluate ads [or other stimulus]. We mitigate groupthink by utilizing self-administered questionnaires [SAQs], so that the respondents can record their answers privately and individually.” [And yes, if you’re wondering, we have conducted focus groups to evaluate ads using the aforementioned SAQs—but this was not our recommendation.]

So, while it is possible to conduct focus groups to evaluate ads and other stimulus, should you? Sure, you can use SAQs to mitigate groupthink, but then you’ll have quiet focus groups, as your client observes expensive and often hard-to-recruit respondents while they sit silently and fill out their SAQs. Furthermore, we have observed that some respondents may hesitate to reiterate what they write on their self-administered questionnaires. So, clients in the back room may hear one thing when the SAQs actually show something completely different. Hence, confusion can set in. Is this a good use of your research budget? We think not.

So the next time someone recommends focus groups for advertising evaluation, think again. In most cases, we recommend IDIs—whether in-person or online—over focus groups, as keeping groupthink out of the research makes the findings far more defensible. The same can be said when the primary goal is to understand the purchase decision process, something that can be as idiosyncratic as evaluating advertising.
