INTRODUCTION
Advertising research is critical to understanding how consumers will respond to various message and media strategies. Research, in fact, generally is considered the first step of any advertising campaign-planning process. The results have considerable implications for campaign strategy, and, as such, the information obtained plays an important role in managerial decision making. The quality, reliability, and validity of that research clearly are critical.
Advertising research that is published in high-quality academic journals, such as the Journal of Advertising Research (JAR) and Journal of Advertising (JA), goes through a rigorous peer-review process to ensure that published articles meet these high-quality standards. Once published, these results subsequently are filtered into the professional community and used to inform advertising practice. In particular, the JAR has a high managerial readership, and, hence, findings published in JAR have the potential for significant impact in the advertising industry.
Academic research typically is published only in a field's top journals when something novel or at least “new” is discovered. That is, once something is reported in the literature, a general premise has been that there is no need to test it again. During the peer-review process, in fact, it is not uncommon for a reviewer to question how the current research results differ from those of a previous paper in the discipline. If the study and the results do not differ from already-published research, the reviewer often questions the paper's contribution and, hence, might recommend rejection of the submitted manuscript on that basis.
Replication research essentially helps to keep existing knowledge in check (Nosek, Spies, and Motyl, 2012), yet it is considered rare in marketing. A key argument against replications is that an academic scientist's professional success depends on publishing, and publishing norms emphasize novel, positive results (Nosek et al., 2012). Disciplinary incentives, therefore, encourage design, analysis, and reporting decisions that elicit positive results and ignore negative results. In short, controversy about replications remains, and this problem persists in the advertising literature.
THE RARITY OF REPLICATION RESEARCH
There is a rich body of scholarship going back several decades on the value of and need for replication research in the fields of advertising and marketing. One analysis sampled 1,120 papers from three major marketing journals and found no replications (Hubbard and Armstrong, 1994). The authors found that just 1.8 percent of the papers were extensions and that, on average, these extensions appeared seven years after the original study. Published extensions, moreover, typically reported results that conflicted with the original studies: Of the 20 published extensions, 12 conflicted with the original results, and only three provided full confirmation. The authors also noted that the publication rate for such papers had been decreasing since the 1970s (Hubbard and Armstrong, 1994).
A content analysis of 18 leading business journals, covering the years from 1970 to 1991, similarly showed that published replication and extension research was uncommon in the business disciplines (Hubbard and Vetter, 1996). The authors noted that such research typically constituted less than 10 percent of published empirical work in the accounting, economics, and finance areas, and 5 percent or less in the management and marketing fields. In keeping with previous research (Hubbard and Armstrong, 1994), the authors noted that such work usually conflicted with existing findings.
These results raise the prospect that empirical results in these areas may be of limited value for guiding the development of business theory and practice (Hubbard and Vetter, 1996). Some researchers have argued that “replication is a means of increasing the confidence in the truth value of a claim. Its dismissal as a waste of space incentivizes novelty over truth. As a consequence, when a false result gets into the published literature, it is difficult to expel. There is little reinforcement for conducting replications to affirm or reject the validity of prior evidence and few consequences for getting it wrong. The principal incentive is publication” (Nosek et al., 2012, p. 617).
This argument is consistent with the general belief that published replications do not attract as many citations after publication as do the original studies, even when the studies produce different results and fail to support the original research. Researchers more recently have noted the rarity of replication studies, which make up less than 2 percent of publications in the social sciences.1
Why the Lack of Replication Is Bad for Advertising
Scholars for decades have warned about the consequences of the dearth of replication research. More than 35 years ago, a content analysis of all 1977, 1978, and 1979 issues of the leading advertising, marketing, and communication publications assessed the frequency of replication in advertising research (Reid, Soley, and Winner, 1981). Results suggested that replications seldom were published in advertising research; consequently, the possibility existed that empirical results were absorbed uncritically into the advertising literature as verified knowledge.
The authors of that research provided recommendations to ensure that replication would become a recognized and practiced component of the advertising-research process. They argued, “The determination of whether replication is practiced or neglected in advertising research will reveal the extent to which a foundation for advertising theory exists…. If replication is revealed to not be an integral component of the advertising research process, what is known about the process and effects of advertising is based on unverified evidence” (Reid et al., 1981, p. 5).
A more recent analysis of replication studies from 1980 through 2012 in the Journal of Advertising, Journal of Advertising Research, International Journal of Advertising, and Journal of Current Issues & Research in Advertising followed the replication logic of the positivistic perspective of quantitatively oriented social-science research (Park, Venger, Park, and Reid, 2015). The articles were coded as either research or nonresearch articles, and replications within the research articles were coded by replication approach, replication type, and findings of the replication studies relative to original results.
Of the 5,269 articles, 2,856 were coded as research articles. The researchers partitioned the data by replication approach to address the criticism that intrastudy replication is not true replication (Park et al., 2015); they then compared the results by decade and journal. When both intrastudy and interstudy approaches were considered replication, 184 (6.4 percent) of the research articles were identified as replication studies, and replications were found to increase gradually over time, particularly after 2000. The number of replications dropped to 84 (2.9 percent) of the research articles, however, when intrastudy replications were excluded. It is interesting that, whether from intrastudy or interstudy replication, almost all results either supported or partially supported the original research.
If results of nonreplicated studies are accepted into the advertising community, they often become what might be considered “truths,” and these “truths” are hard to dispel. Prior reports demonstrated how these incentives inflate the rate of false effects in published science: When incentives favor novelty over replication, false results persist in the literature unchallenged, reducing efficiency in knowledge accumulation (Nosek et al., 2012).
Some researchers believe that improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery (Munafò et al., 2017). The argument reinforces beliefs from almost a quarter century ago: “The publication of replications and extensions helps to protect the literature from the uncritical acceptance and dissemination of erroneous and questionable results. It is unrealistic to expect the peer-review system to shoulder this burden alone. The members of a discipline have a collective responsibility to ask whether a given result is plausible, reproducible and/or generalizable” (Hubbard and Armstrong, 1994, pp. 233–234).
This should be the overarching goal of the advertising academic community. Replication, in fact, is one way that science actually can self-correct: “Replication studies—experiments that replicate previous work by reanalyzing existing data or doing new experiments that recreate it—are a cornerstone of reproducibility. If research findings can be replicated, they are more trustworthy and reliable.”2
Concern about what constitutes “truth,” stemming from concern about the lack of replications, is growing (Evanschitzky, Baumgarth, Hubbard, and Armstrong, 2007). The editorial policies of some leading marketing journals now encourage more replication-type articles. The Journal of Consumer Research's website, for example, recently updated its authors' guidelines, stating that one appropriate manuscript type is “Reassessments of previously reported research findings or insights, with possible refinements.”3
Some authors even have warned that practitioners should be skeptical about using the results published in marketing and advertising journals, because hardly any have been replicated successfully. That was the key takeaway from an extension of the 1994 analysis (Hubbard and Armstrong) that examined whether such editorial efforts had influenced the number of replication studies published in leading marketing journals (Evanschitzky et al., 2007). The results indicated that the replication rate had decreased to 1.2 percent. The authors further suggested that practitioners should ignore findings until additional support via replications is reported and that researchers should put little stock in the outcomes of one-shot studies. This suggests that practitioners also should be cautious about integrating advertising-research results into their planning efforts unless the results have been replicated.
What Is a Replication?
Although some have argued that replication has great value but little “real-life” application in the true sense, the activity itself, regardless of the degree of precision of the replication, can have great merit in extending understanding of a method or a concept (Park et al., 2015). What constitutes a replication, moreover, has not yet been fully established. Despite the lack of consensus, recent efforts have shaped our understanding of the various forms replications can take and how they can make a meaningful impact in our field. In fact, this built-in flexibility paves the way for substantive, publishable work that replicates and extends knowledge and continues the pursuit of the elusive truth.
There is the argument, for example, that in contrast to the truncated view that replications have little to offer beyond what already is known, a broader understanding of replications is warranted (Hüffmeier, Mazei, and Schultze, 2016). This view asserts that replications are better conceptualized as a process of conducting consecutive studies that increasingly consider alternative explanations, critical contingencies, and real-world relevance. This belief is in line with the present author's own work (the author has argued that such an approach can help make theory more relevant to practitioners; Royne, 2016) and that of others who have noted that replication cannot be taken only in its literal sense, as “an exact copy” (Reid et al., 1981). By definition, an original study and a replication study cannot be identical, because of variables such as the passage of time, study participants, and human factors (Earp and Trafimow, 2015).
Still other researchers have adopted Harvard University Psychology Professor Daniel Gilbert's perspective that “replications should be treated as data points, not verdicts, and…that no matter how precisely another researcher attempts to execute a study, he or she will always do something slightly differently” (Novotney, 2014, p. 32). Research from 2016, moreover, suggests that “the current devaluation of replications indicates a narrow understanding that replications are about single studies of a specific type (e.g., exact versus conceptual replications) and that the presumed value of replications is to merely attempt to show again what is already known (thus lacking perceived novelty)” (Hüffmeier et al., 2016, p. 82).
This thought process is consistent with recently published replications. Among them was an empirical replication (Davis and Burton, 2016) of findings that more graphic pictorial cigarette warnings positively influenced smoking-cessation intentions and that evoked fear was a primary mechanism underlying this relationship (Kees, Burton, Andrews, and Kozup, 2006). The replication study, however, differed in three substantive ways:
Cigarette advertisements were used (as opposed to packaging and warning statements).
Food and Drug Administration–mandated pictures were used (as opposed to self-selected pictures).
The samples differed.
The results indicated partial corroboration of the prior research (Kees et al., 2006), as follows:
Strong support that more graphic pictorials positively influenced warning-effectiveness perceptions and smoking-cessation intentions;
Confirmation of evoked fear as the primary mediating mechanism underlying effects;
Failure to support a difference between moderately and highly graphic pictorials.
Another study (Bellman, Wooley, and Varan, 2016) used Affectiva's facial-tracking technology to replicate a classic program–advertisement matching study (Kamins, Marks, and Skinner, 1991). This conceptual replication tested the original study's program–advertisement matching effect on informational advertisements (which the authors believed to be more common than the sad advertisements the original study tested), using cognitive recall as the outcome measure. This replication study also differed from the original in three substantive ways:
It utilized a mixed experimental design.
It employed different genres of television shows (including international content).
It utilized a new biometric-process measure (computer-detected smiling).
The results corroborated and extended the original study's findings: The authors found strong support that program–advertisement matching mattered only for nonpositive advertisements and that program–advertisement matching recommendations could be applied to informational advertisements in informational programs. This is a more common combination than sad advertisements in sad programs.
Finally, an empirical replication conducted in a similar manner (Kwon, Ratneshwar, and Kim, 2016) revisited a study that demonstrated the effects of brand sponsorship on image congruence between sponsoring brands and sponsored sporting events (Gwinner and Eaton, 1999). The authors sought to correct potential methodological flaws of the original study, such as the lack of a fully balanced design, and they also enhanced the statistical analyses. Results showed partial corroboration of the original findings, including strong support for the finding that brand sponsorship, in general, increased image congruence between sponsoring brands and sponsored sporting events. The findings showed only mild support for the match-up hypothesis, however, and no support for a moderating influence of image-based similarity on the extent of image congruence. The researchers offered plausible explanations for the lack of full corroboration.
CONCLUSION
What is most valuable in replication studies is that, by offering new insight into the theories tested, they elucidate advertising knowledge. Recent studies have shown that replications can help researchers better understand the concept or theory of interest by examining it in a different way or in a different context. This recent work, furthermore, has demonstrated that replications can bring a researcher closer to what is believed to be the “truth,” even if the actual truth cannot be determined definitively and continues to change as advertising changes.
Some authors have argued that “the perceived absence of a replication tradition in the social sciences is the result of incorrect perceptions regarding both the acceptability of replication studies and the form that such studies should follow” (Easley, Madden, and Dunn, 2000, p. 90). For researchers' findings to be integrated more fully into the industry, academic advertising research must be transparent and must come as close as possible to representing the ever-elusive “truth.”
Replications also provide the opportunity to enhance methodologies and improve on possible limitations identified in earlier studies. There is a potential richness in opportunities to integrate conceptual and empirical replications into existing and future research. This could help to validate existing studies and extend research to benefit advertising practitioners. A 2001 paper, for example, revealed the importance of and need for replication specifically in the advertising and public policy arena (Abernethy and Wicks, 2001).
To assist in this process, advertising-research journals should consider responding to recent calls for greater transparency of data and statistical analysis in scientific research, so that other researchers can replicate published work more easily. This might include the open sharing of research instruments and datasets. Scholars advanced this general notion years ago (Reid, Rotfeld, and Wimmer, 1982).
With new media and the evolving Internet pervading our society, replication research in advertising has become even more critical, because it can help advertising practitioners contend with an increasingly competitive environment. Advertising journals must be willing to publish this research, and replications must be published even when they reproduce the findings they set out to test. Those findings might not be novel, because they are not different. They are, however, substantiated and thus likely will provide the most value to the advertising industry. As previous authors put it, “The real fact that emerges from the advertising literature is that a replication tradition is needed in the advertising discipline” (Reid et al., 1981, p. 9).
ABOUT THE AUTHOR
Marla B. Royne is the Great Oaks Foundation professor of marketing and chair at the University of Memphis Fogelman College of Business and Economics Department of Marketing & Supply Chain Management. Her research focuses primarily on the development of effective message and media strategies, with an emphasis on social issues, including public health and the environment. Royne is the past editor of the Journal of Advertising (JA) and is on the editorial board of the Journal of Advertising Research (JAR). Her research has appeared in a number of journals, including JA, JAR, Journal of Retailing, and Journal of Public Policy and Marketing.
ACKNOWLEDGMENT:
The author thanks Jonathan Ross Gilbert for helpful comments on an earlier version of this article.
Footnotes
1 D. de Weerd-Wilson and W. Gunn. (2017, January 31). “How Elsevier Is Breaking Down Barriers to Reproducibility.” Retrieved September 1, 2017, from the Elsevier website: https://www.elsevier.com/connect/how-elsevier-is-breaking-down-barriers-to-reproducibility.
2 D. de Weerd-Wilson and W. Gunn. (2017, January 31). “How Elsevier Is Breaking Down Barriers to Reproducibility.”
3 Journal of Consumer Research. (2017). “Manuscript Submission Guidelines.” Retrieved from http://www.ejcr.org/guidelines.htm.
Editor's Note
“Speaker's Box” invites academics and practitioners to identify significant areas of research affecting advertising and marketing. The intent of these contributions is to bridge the gap between the length of time it takes to produce rigorous work and the acceleration of change within practice. With this contribution, Marla B. Royne throws the spotlight on the difficulty of publishing replication studies. She points out that, although there is no unified consensus on what qualifies as a replication study, people know it when they see it, and it remains a difficult “sell” in terms of publishing for most academic journals. Without replication, the credibility of any research findings is questionable, and it may be wise for academics and practitioners to follow with caution the resulting advice from any study lacking replication. Royne suggests that replications offer a number of ways to enhance and develop our understanding and appreciation of topics and get closer to the ever-elusive “truth.” As part of a solution, she calls for advertising journals to consider a variety of ways by which replication may be fostered and encouraged.
Douglas C. West
Professor of Marketing, King's College London
Contributing Editor, Journal of Advertising Research
Copyright © 2018 ARF. All rights reserved.