ABSTRACT
Neuromeasures show promise for measuring responses to advertising that respondents cannot accurately verbalize, but their application to advertising is in its infancy. This article identifies issues with implementing such measures for better advertising decision making and discusses future research priorities. It cautions marketers not to believe all that is claimed and recommends further systematic testing of the measures. It provides buyers of neuroscientific research with questions that the authors believe should be asked of vendors. The authors encourage vendors to develop robust answers underpinned by empirical validations, which will advance advertising understanding and practice.
MANAGEMENT SLANT
Traditional measures of advertising effectiveness are not sufficient to fully understand response to advertising, so new approaches to advertising testing should continue to be investigated.
Neuro-approaches are promising but not yet perfect—the data needs to be manipulated to “see” the patterns; the outputs require interpretation; and different software can give different answers.
Marketers should be somewhat skeptical of the new tools, ask lots of questions of vendors, and work with them on validations.
Marketers should prioritize their investment in research that has been shown to be predictive of in-market behaviors of interest.
Vendors should have good a priori knowledge of what to expect in their measures in response to advertising stimuli under specified conditions.
Marketers have many decisions to make when producing advertising, from strategic direction to creative tactics, including whom to cast, where to film, and what music to play. Traditionally, marketers have viewed consumers as rational agents, so the norm has been to ask relevant people their thoughts and feelings to aid decision making. Market research respondents, with their desire to help, typically can and do give what appear to be rational responses when asked questions such as "What did you like about the ad?" Nevertheless, these self-reported responses may not tap into the real answers. Approximately 95 percent of human thinking is unconscious (Zaltman, 2003), so respondents often do not know what is influencing them (Nisbett and Wilson, 1977; Wilson and Brekke, 1994). Moreover, emotions often occur in the absence of the ability to verbalize them (LeDoux, 1998).
Much has been discovered in recent years about how the brain builds and stores memories (Anderson and Bower, 1980; Cabeza and Moscovitch, 2013). Because most advertising needs to work through buyers' memories (Kennedy, Sharp, and Hartnett, 2013), this knowledge has major implications for advertising development and measurement. Emotion is central to human thinking and decision making (Bechara, Damasio, and Damasio, 2000; Damasio, 2000), though debates still exist (LeDoux, 2012). Emotions shape what people pay attention to (Yiend, 2010), and hence which advertising has any chance of influencing behavior. If advertisers focus only on rational responses, they potentially miss knowledge of how to make great advertising. Improved understanding of how people's brains and bodies respond to advertising—emotionally, rationally, unconsciously, and consciously—across varied conditions is needed for improved theories of advertising, to guide more effective decision making, and to refine the tools for measuring success.
This article aims to guide those with an interest in advertising into the complex space of neuro-tools and provides a framework to assist them. The literature review is not exhaustive but aims to highlight key issues that are critical to advertising. The authors hope it encourages advertisers to continue reading work from other disciplines, such as psychology and neuroethics (Lang and Bradley, 2010; LeDoux, 2012; Levy, 2008; Orquin and Loose, 2013).
The authors acknowledge the incredible advances in this space from some skilled researchers. But they also point to variations in existing neuro–psychophysiological tools; indeed, marketers need to do their homework. This may include engaging experts (Varan et al., 2015). The current authors raise key concerns and then suggest a range of questions advertisers should ask prospective vendors. This article concludes with research priorities, which will be important to improve advertising neurotools as well as advertising theory and practice in the future.
DO NEUROMEASURES OFFER A BETTER WAY TO TEST ADVERTISING?
Neuroscience has revealed much about the human brain and how it processes stimuli and makes decisions. Advertisers, however, should not believe everything they hear or assume that neuro-tools are advertising ready. The authors advise caution in considering some of the claims being made.
Some discussions of the brain are reminiscent of phrenology: attributing memory to a brain section or attention to a bump on the skull. Neuroscience is more complex than this, as acknowledged by those close to the field (e.g. Roskies, 2008). Multiple sites of the brain are implicated in the same psychological construct. The amygdala, for instance, is one of many sites implicated in emotion, along with the prefrontal cortex, orbitofrontal cortex, anterior cingulate cortex, and hypothalamus, among others (Dalgleish, 2004).
A “how-not-to” example of this was when a marketing consultant wrote in The New York Times that people literally love their iPhones, based on activity in the insular cortex linked to the sight of an iPhone (Lindstrom, 2011). As a group of neuroscientists pointed out in a heated rebuttal (Poldrack, 2011a, 2011b), activity in any brain region does not easily equate to a given mental state. Insular cortical activity does not always mean love—it can also be associated with negative emotions. Neuroscience has a good but incomplete understanding of brain regions and functions, and work continues on the interactions between them.
The idea of a "buy button" can be traced to the early days of neuromarketing. In a call for the United States government to ban ad testing, the belief that neuromarketers seek to "find a buy button inside the skull" was raised with concern in a letter to the Emory University president from a consumer watchdog: "it sounds like something that could have happened in the former Soviet Union, for the purposes of behaviour control" (Ruskin, 2003, p. 1). Some neuromarketing suppliers have themselves reinforced the erroneous idea of a "buy button." Consider a statement by Patrick Renvoise and Christophe Morin, the founders of SalesBrain, a neuromarketing agency launched in 2002. In their book, Neuromarketing: Understanding the Buy Buttons in Your Customer's Brain (Thomas Nelson Inc, 2007), Renvoise and Morin wrote, "Neuromarketing will quickly increase your selling effectiveness, enabling you to push your customers' 'Buy Buttons.'" The statement implied that activation of certain parts of the brain leads consumers to buy a certain brand. Such a claim, the authors believe, is too simplistic.
The current authors also question the suggestion that neuroscience delivers the truth akin to photography of people's minds or thoughts. One branding expert described EEG data as unequivocal: "brain-waves are … straight shooters" (Lindstrom, 2012, p. 23). They "don't waver, hold back, equivocate, cave in to peer pressure, conceal their vanity, or say what they think the person across the table wants to hear." Or, as a planner told The New York Times: "Instead of hypotheses about what people think and feel, you actually see what they think and feel … I'm not such a huge fan of ad testing… but measuring biological responses is absolutely useful" (Elliott, 2008).
An article in Forbes magazine proclaimed, "brain waves don't lie" (Wells, 2003). Such statements, the current authors believe, suggest a directly measurable truth yet ignore the processing required to "see" the data, the interpretation the outputs demand, and the leap involved in mapping a given response onto a behavior or psychological state.
The layperson or uninformed marketer likely would consider the output of functional magnetic resonance imaging (fMRI) as actual activity when, in fact, a great deal of data manipulation is required, and even then only indirect signals may be understood, not the thinking (Page and Raymond, 2006; Roskies, 2008). “Brain images are influential because they provide a physical basis for abstract cognitive processes, appealing to people's affinity for reductionistic explanations of cognitive phenomena” (McCabe and Castel, 2008, p. 343). The reporting leads marketers to simplify what is possible and to assume that what is reported is the truth even though experts appreciate the potential for misinterpretation and urge caution (Roskies, 2008).
A number of factors can alter neuro-outputs. Common software programs, with their own analysis protocols, can identify different brain structures involved in a task despite using the same data (Fusar-Poli et al., 2010). In the academic arena, concern has been raised regarding the application of proper statistical procedures. Respected journals covering electroencephalography (EEG)—an electrophysiological method for recording the brain's electrical activity—and magnetoencephalography (MEG)—a technique for mapping brain activity by recording the magnetic fields produced by naturally occurring electrical currents in the brain—have noted failures to apply the correct tests to avoid identifying false positives (Vecchiato et al., 2010). This tendency for error was elegantly demonstrated by subjecting an Atlantic salmon to pictures of varying emotional valence while the fish was being monitored by fMRI (Bennett, Baird, Miller, and Wolford, 2009). By not applying statistical correction, the researchers were able to establish a brain response from the salmon in response to emotive stimuli—particularly remarkable given that the fish was dead. Whether commercial providers apply such corrections is largely unknown, especially to an advertiser unaccustomed to these techniques.
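The salmon result comes down to multiple comparisons: test tens of thousands of voxels at a conventional threshold and some will appear "active" by chance alone. The minimal Python simulation below (entirely hypothetical data, standard numpy/scipy only) illustrates why a correction such as Benjamini-Hochberg false discovery rate (FDR) control matters:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_voxels, n_subjects = 50_000, 15

# Pure noise: by construction, no voxel truly responds to the stimulus.
data = rng.standard_normal((n_voxels, n_subjects))
t_vals, p_vals = stats.ttest_1samp(data, popmean=0.0, axis=1)

alpha = 0.001  # a common uncorrected voxel-wise threshold
print("Uncorrected 'active' voxels:", int((p_vals < alpha).sum()))  # ~50 false positives

# Benjamini-Hochberg FDR control at q = 0.05: reject the k smallest
# p-values, where k is the largest rank with p_(k) <= (k / m) * q.
q, m = 0.05, n_voxels
ranked = np.sort(p_vals)
passing = np.nonzero(ranked <= q * np.arange(1, m + 1) / m)[0]
n_discoveries = int(passing[-1]) + 1 if passing.size else 0
print("FDR-corrected 'active' voxels:", n_discoveries)  # almost always 0
```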
Even where the same analysis protocols and stimuli are used, the test–retest reliability of fMRI is between one-third and one-half (Bennett and Miller, 2010). This means that when the same advertisement is shown to a person at Time A and Time B, only a proportion of the same neural activity is observed. This lack of reliability may be a concern; alternatively, brain activity may genuinely differ between tests due to habituation, novelty, and mood. Systematic documentation of variation under different conditions is needed so advertisers know what to expect and how this plays out in-market, where most advertising needs to work more than once. Drawing on what is known about the stability of traditional measures (Rungie et al., 2005) may facilitate learning in this regard.
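To make the one-third-to-one-half figure concrete, test–retest overlap is often quantified with a Dice coefficient on thresholded activation maps. The sketch below uses hypothetical maps (not any vendor's pipeline), constructing a Time B map that retains roughly half of Time A's active voxels:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap of two binary activation maps: 2|A & B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
time_a = rng.random(10_000) < 0.05              # ~5% of voxels "active" at Time A
keep = time_a & (rng.random(10_000) < 0.5)      # about half survive at retest
new = ~time_a & (rng.random(10_000) < 0.026)    # plus unrelated new activity
time_b = keep | new
print(f"test-retest overlap: {dice(time_a, time_b):.2f}")  # ~0.5
```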
In the public domain, the belief that a given neurological response maps directly to a given behavioral act or psychological state has resulted in assertions that product placement does not work and "cigarette warnings … actually encouraged smokers to light up" (Lindstrom, 2012, p. 30). Such claims typically rely on subjective interpretation of single studies that rest on a simplistic mapping of psychological processes onto single brain regions. This is perhaps the most widespread error concerning the interpretation of neurostudies.
A FRAMEWORK FOR ASSESSING NEURO VENDORS AND THEIR TOOLS
The above discussion has highlighted concerns, yet there is no doubt that neuroscience tools are underpinned by solid empirical research and offer a different perspective on advertising response. For marketing practitioners wanting to use such tools to test their advertising, the authors here offer more detail and provide a Framework (See Figure 1), which guides the selection of a vendor and tools to increase the chance of conducting valid and reliable research.
The foundation of the Framework is a vendor who understands both the technologies and marketing, while four pillars highlight the qualities of, or tools used by, a good vendor, namely: theoretically robust tools for marketing; quality data collection on appropriate sample; transparent appropriate analysis; and evidence-based advice. The desired result is a vendor capable of predicting in-market success.
A good neuroadvertising study requires the right research team, yet there is considerable variation in the quality of vendors. The current authors support calls for standards and vendor accreditation to help marketers select credible providers (Noble, 2014), but in the meantime suggest buyers be skeptical and ask a lot of questions.
Understanding the Technologies and Marketing
The authors recommend asking prospective vendors:
What are they measuring?
What independent validations support their technology and analysis approach?
What do they predict?
If they are making advertising recommendations, what is their knowledge of marketing?
If they are recruiting and conducting experiments, what is their knowledge of market research?
Vendors should demonstrate complete transparency. They may not have all the answers, but if they are open and willing to learn, it is a good start. A critical foundation of a successful study will be a good vendor who understands the relevant technology as well as how marketing works.
To take the conversations to the next level, it is useful to discuss each of the four pillars identified in the Framework.
Are the Tools and Measures Theoretically Robust?
Advertisers must understand what a tool is measuring and why this is important in their context. Do all advertisers need to know, for example, about attention to their advertisement and/or emotion and/or memory responses? Although that discussion is outside the scope of this article, as the new knowledge from these new tools builds, many advertisers may find they need to rethink traditional advertising theory.
Once the advertiser identifies the constructs to be measured, any provider should be able to point to a large body of scientific evidence that supports that vendor's tool. Some tools, such as fMRI and EEG, have a long-established literature; others, such as voice analyzers, face damning independent evidence (Eriksson and Lacerda, 2007). The authors, therefore, advise caution until supporting evidence is provided.
In one realistic scenario, an advertiser could commission one of a number of very different tools—neuro and others—to make the same decision, such as which advertisement to air, having had each tool justified theoretically along similar lines. Other authors have researched how neuromarketing firms attempt to differentiate themselves (McDowell and Dick, 2013).
Advertisers must understand what each tool is measuring and why that metric is important, or they will end up with conflicting answers. Two different moment-by-moment traces for the same advertisement, for example, could tell very different stories. Specifically, an Ipsos emotion measure, which is controlled by the respondent with a mouse, may not match a Neuro-Insight engagement measure based on Steady-State Topography (SST; See Figure 2). SST is a refinement of EEG that incorporates the presentation of an oscillating visual stimulus designed to provide the brain with a baseline electrical response, allowing for greater separation of signal and noise (Silberstein and Nield, 2008).
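For readers unfamiliar with the technique, the core idea behind SST-style analysis can be sketched in a few lines: present a flicker at a known frequency and recover the amplitude and phase of the brain's response at exactly that frequency, which separates the signal from broadband noise. The sketch below is a generic lock-in demodulation on simulated data; it is illustrative only and is not Neuro-Insight's proprietary method:

```python
import numpy as np

fs, flicker_hz = 256, 13.0            # sampling rate and probe flicker (Hz)
t = np.arange(0, 2.0, 1.0 / fs)       # one 2-second EEG epoch (26 full cycles)

# Simulated EEG: a small steady-state response at the flicker frequency
# (amplitude 0.8, arbitrary phase) buried in unit-variance noise.
rng = np.random.default_rng(1)
eeg = 0.8 * np.sin(2 * np.pi * flicker_hz * t + 0.6) + rng.standard_normal(t.size)

# Lock-in demodulation: project onto sine/cosine references at the known
# flicker frequency to recover the response amplitude and phase.
i_comp = 2.0 * np.mean(eeg * np.sin(2 * np.pi * flicker_hz * t))
q_comp = 2.0 * np.mean(eeg * np.cos(2 * np.pi * flicker_hz * t))
print(f"amplitude ~ {np.hypot(i_comp, q_comp):.2f}")    # ~0.8, despite the noise
print(f"phase ~ {np.arctan2(q_comp, i_comp):.2f} rad")  # ~0.6
```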
It is rare to see the same stimuli, such as a television ad, measured with competing approaches. Although case study comparisons will never be able to tell which approach is better, they help demonstrate why validated tools for specific questions are needed. Without systematic comparisons, the current authors caution, decisions will be justified by different approaches without users even realizing that a different tool could lead to a different decision.
In one particular instance, the Neuro-Insight measure appeared more variable (See Figure 2). This is reminiscent of most brain readouts, where engagement is not sustained at high levels and a sine-like pattern emerges. From seconds 5–9, engagement initially increased, then declined. The Ipsos dial, in comparison, remained unchanged. The difference may reflect the music, as the visual remained unchanged. From the ninth second the patterns appeared in sync: the Ipsos dial moved up quickly with the introduction of color and a voiceover with a more positive, energetic tone, and the Neuro-Insight measure also climbed. Both measures dropped at the pack shot; Neuro-Insight engagement reached its lowest level while Ipsos experienced its first real decline.
This post hoc analysis identified some common outcomes from both measures, though many were different, and most marketers would not know with any certainty which to use for what decisions. The Neuro-Insight measure appeared more sensitive to tone, as seen with variable readings during the first 10 seconds, whereas the Ipsos trace demonstrated a lack of early interest. It may be that respondents did not move the dial when there were quick emotion changes, or that the changes simply did not register long enough for the respondent to change the dial. This is akin to one conception of emotions as having quick onset and being only fleeting; they "happen before one is aware they have started" (Ekman, 1992, p. 185).
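One plausible explanation for the early mismatch is simple response lag: a hand-moved dial trails fast neural changes. Under that assumption, a first diagnostic is to correlate the two second-by-second traces at different lags, as in this sketch (the traces here are hypothetical, and both measures are assumed to be resampled to one value per second):

```python
import numpy as np

def best_lag(neuro: np.ndarray, dial: np.ndarray, max_lag_s: int = 5) -> int:
    """Lag (in seconds) at which the dial trace best tracks the neuro trace,
    assuming the dial can only trail, never lead, the neural response."""
    n = len(neuro)
    scores = [np.corrcoef(neuro[:n - k], dial[k:])[0, 1]
              for k in range(max_lag_s + 1)]
    return int(np.argmax(scores))

# Hypothetical 30-second traces: the dial repeats the neural pattern ~2s late.
rng = np.random.default_rng(7)
neural = np.sin(np.linspace(0, 4 * np.pi, 30)) + 0.2 * rng.standard_normal(30)
dial = np.roll(neural, 2) + 0.2 * rng.standard_normal(30)
print("estimated dial lag:", best_lag(neural, dial), "seconds")  # ~2
```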
Clearly, not all measures and tools are theoretically equal, which has major practical implications.
Quality Data Collection on an Appropriate Sample
Advertisers need to ensure that they are not only measuring the right things but are also measuring them appropriately, with a good sample. Although this sounds obvious—“just apply standard quality market research practices”—it is complicated by other issues in this space. Data collection for some neuro-techniques can require environments free of electrical interference, environmental noise, changing temperature or even other respondents. Neuroscience research traditionally has dealt with such requirements by undertaking the studies in controlled labs, but such laboratory work may be too detached from advertisers' goals (McQuarrie, 1998). Most media exposure and buying occur in settings very different from laboratories due to clutter and distractions. Some neurophysiological tools offer more naturalistic settings, whereas others, and particularly fMRI, are far from naturalistic. The current authors encourage more direct comparisons of controlled laboratory research versus in-market approaches to understand what questions each can better address.
Other techniques have unique challenges. As a metric, skin conductance varies because some people will not respond with phasic responses; medication or fatigue can suppress response; women's menstrual cycles alter skin conductance; and tropical residents will not respond in room temperatures that are normal for residents in temperate regions. Issues like being left- or right-handed or wearing glasses might also need consideration in the sampling and/or analysis of some tools (Potter and Bolls, 2012).
Clearly, experts in the specific tools will be aware of these quality requirements and control for them. Nevertheless, the authors advise marketers to ensure that vendors have appropriate guidelines relevant to their tools in place, such as clear protocols on data collection and quality control procedures.
Whatever the tool, marketers should ensure they get an evidence-based answer to the question, "How many respondents is enough?" Historically, neuro-samples have been small, based on the assumption that people respond in the same way, but research outside of marketing raises serious concerns about this supposition. Even if the research conducted is otherwise perfect, the average statistical power of studies in the neurosciences is too low, resulting in overestimates of effect size and low reproducibility of results. Put simply, many of the conclusions drawn from such studies are probably false (Button et al., 2013).
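A conventional power calculation shows why small neuro-samples are risky. The sketch below uses the standard statsmodels power module and assumes the simplest possible design, a two-group comparison of one summary score per respondent; real designs (repeated measures, many correlated channels) need more careful treatment, not less:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Respondents per cell needed to detect a medium effect (Cohen's d = 0.5)
# between two advertisements at 80% power, two-sided alpha = .05.
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"needed: ~{n:.0f} per group")   # ~64

# Power of a typical small neurostudy cell (n = 18 per group) for d = 0.5.
p = analysis.power(effect_size=0.5, nobs1=18, alpha=0.05)
print(f"power at n = 18: {p:.2f}")     # ~0.3: most true effects are missed
```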
Neuro-providers tend to have science-related backgrounds and may not be familiar with sampling and quality issues from traditional market research, such as rotating test content or not telling respondents that they are taking part in an advertising study. For most advertisers, the default sample should be category users (Kennedy et al., 2013). If samples are biased toward brand loyals or heavy buyers, they do not provide the opportunity to understand how light or nonbrand buyers respond, yet such users are critical to growth. Such users may respond differently in terms of attention, emotion, and memory to advertising stimuli because they likely will have the least-established memory structures. Brand usage likely would influence response (Castleberry and Ehrenberg, 1990; Romaniuk and Wight, 2009); therefore, researchers must know what it means in terms of who should be sampled.
Complicating things further, many brands also need to research across markets. There is some initial evidence (Northover, 2012) that consumers from different cultures respond differently to advertising when response is measured biometrically. This finding may have sampling and reporting implications for advertisers operating across markets or in culturally diverse environments.
Transparent Appropriate Analysis
Neurovendors also should be able to provide written protocols documenting how their data will be analyzed.
To critique analysis in the neurospace, one needs to understand the algorithms in use. Black-box algorithms commonly are used to amalgamate measures and differ between vendors. In a road-safety context, Thornton and Rossiter (2004) concluded that a skin conductance scoring algorithm was not suitable for road safety advertising, yet most marketers might not even think to ask whether it was. Algorithms are only as good as the dependent variable against which they were developed, yet there is no consistent definition of "effective advertising," so two algorithms ostensibly measuring the same construct may well differ if they use different dependent variables. Algorithms typically constitute intellectual property, yet transparency should be encouraged to aid validation. Perhaps neurovendors should compete on the quality of their data, not the uniqueness of their measures (Varan et al., 2015).
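The dependent-variable point is easy to demonstrate. In this hypothetical sketch, two vendor composites weight the same three raw measures differently because one was calibrated against recall and the other against choice shift, and they recommend opposite advertisements; all measures, numbers, and weights are invented for illustration:

```python
import numpy as np

# Hypothetical z-scored measures per advertisement: [attention, arousal, memory].
ads = {"Ad A": np.array([1.2, -0.3, 0.8]),
       "Ad B": np.array([0.4, 1.1, 0.2])}

# Two invented "engagement" composites for the same nominal construct:
w_recall = np.array([0.2, 0.2, 0.6])   # calibrated against recall: weights memory
w_choice = np.array([0.3, 0.6, 0.1])   # calibrated against choice: weights arousal

for name, x in ads.items():
    print(f"{name}: recall-tuned {x @ w_recall:+.2f}, choice-tuned {x @ w_choice:+.2f}")
# Ad A wins under the recall-tuned score (0.66 vs 0.42); Ad B wins under the
# choice-tuned score (0.80 vs 0.26). Same data, opposite recommendations.
```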
Ignoring the issue of different algorithms for a moment, it is useful to consider benchmarks or guidelines that might be used by analysts to make recommendations. If one advertisement is more emotional than another, it might be expected that this correlates with subsequent effectiveness, but is this consistently true? How much more emotional does it need to be? Are there times when less emotional advertising is better? Are other responses also needed, such as attention to branding? To the current authors' knowledge, neuroscientific research still has not answered these and other questions. When critiquing the evidence, marketers should recognize that it cannot come from case studies alone.
As previously mentioned, viewers and listeners of advertising have histories with many of the brands and categories advertised, which will impact their memories and emotional responses. While systematic differences are known in traditional metrics such as brand image data (Romaniuk, Bogomolova, and Dall'Olmo Riley, 2012), how such deviations play out in neurostudies has not yet been documented. It will be critical to know how to analyze such data, such as looking at responses of users versus nonusers separately.
Advances in technology have prompted the creation of automated processes and visual dashboards providing at-a-glance views of key metrics, but automation also creates problems around data quality and insight (Noble, 2014). Even if confidentiality agreements are needed, vendors should show how they analyze data and explain why.
Evidence-Based Advice
If providers are going to give evidence-based advice, the current authors assert, they should have a priori expectations of the patterns that will be seen in their data with validated recommendations under known conditions. What response do great advertisements get? Do scene changes with certain characteristics improve attention scores?
Providing answers to these questions requires experimentation, documentation, and hypothesis testing with advertising stimuli, as some have started (Martinez-Fiestas, Viedma del Jesus, Sanchez-Fernandez, and Montoro-Rios, 2015; Rossiter and Silberstein, 2001; Silberstein and Nield, 2008; Vecchiato, Astolfi et al., 2010). For this knowledge to become generalizable and predictive, different researchers must replicate the results and look for patterns across many sets of data (Ehrenberg, 1995; Ehrenberg and Bound, 2000). Those making the recommendations need to be aware of this knowledge and how far it extends.
An example of the sort of advice that may be given in a marketing context is useful to demonstrate why this matters. As one neuroscientist, Gemma Calvert, suggested in a Guardian article, "if research showed a chocolate bar's crunchiness made it appealing … [she would] advise manufacturers to make it more crunchy" (Neate, 2012). Yet, in the current authors' view, there should be evidence that making it crunchier is effective. It is possible that the marketer had already stumbled onto the ideal level of crunchiness, which is why it registered in the brain's reward system. Alternatively, current users simply may be responding to the fact that they like this chocolate bar (whether or not they have consciously considered its level of crunchiness). Although marketers desire interpretation of their results with recommendations, these need to be underpinned by evidence.
One exciting aspect of some new approaches is the rapid temporal feedback on stimuli. A key benefit is the ability to create optimized cut-down versions, such as turning a 30-second advertisement into a quality 15-second version. Great detail is afforded on how people respond to each second of the advertisement, but one should not assume that advertising can be deconstructed and reassembled so simply. Questions arise such as, "What if this shows that people react very well to non-branded scenes?" "What if the scenes to which they reacted well were incoherent when spliced, or respondents only reacted well to Scene B because it followed Scene A?" Neuro-outputs require subjective interpretation, and it is unlikely advertising recommendations will ever come simply from strict adherence to the neuroreadout. The current authors urge marketers to be clear on what is an evidence-based recommendation and what is input into the process. Creative professionals still have a critical role in producing great advertising.
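To see how quickly a purely score-driven cut-down goes wrong, consider this deliberately naive sketch, in which all scene names, durations, and scores are invented: it keeps the highest-scoring scenes within a 15-second budget and, in doing so, drops the branding and strips a payoff scene from its setup.

```python
# Deliberately naive cut-down: greedily keep the highest-scoring scenes
# until the 15-second budget is full. All values are hypothetical.
scenes = [  # (scene_id, duration in seconds, mean second-by-second response)
    ("open", 6, 0.41), ("problem", 7, 0.72), ("reveal", 5, 0.88),
    ("demo", 8, 0.35), ("pack_shot", 4, 0.22),
]

budget, used, cut = 15, 0, []
for scene, duration, score in sorted(scenes, key=lambda s: -s[2]):
    if used + duration <= budget:
        cut.append(scene)
        used += duration

print(cut)  # ['reveal', 'problem']: high scoring, but the pack shot (branding)
            # is dropped and the story now starts mid-problem, with no setup.
```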
Predicting In-Market Success
Growing knowledge from neuroscientific studies will help improve predicting in-market advertising success, but this is not yet a reality. The desired outcome of much advertising is nudging sales, yet rarely have neuromeasures been assessed against this. In fact, often neuromeasures are correlated with self-reported metrics—the very metrics they often aim to replace. There are, however, exceptions demonstrating it is possible to validate against relevant in-market behaviors (Northover, 2012; Silberstein and Nield, 2008; Weinstein, Weinstein, and Drozdenko, 1984).
Sufficient validations do not yet exist, but advertisers should continue to ask for them and work with suppliers to produce them. Vendors should be able to offer independent, third-party–verified evidence for their tools, validated against in-market behaviors, such as choice and sales, not compared with self-reported metrics.
Validation attempts should be commended when they occur. Among vendors, Neuro-Insight and Innerscope have been the most active in the peer-reviewed literature. Their dependent measures have included recognition (Rossiter and Silberstein, 2001), brand-choice shift (Silberstein and Nield, 2008), recall (Siefert et al., 2008), and online behaviors (Siefert et al., 2009). Although some research firms have performed their own internal validations of in-house tools, these are weaker than peer-reviewed validations. The Advertising Research Foundation's (ARF) NeuroStandards Collaboration Projects 1 and 2 (Stipp, 2014; Stipp and Woodard, 2011) and recent JAR articles (Martinez-Fiestas et al., 2015; Smit, Boerman, and van Meurs, 2015; Varan et al., 2015) are important contributions, but—given the range of measures and varied advertising conditions—they should be just the start. Even simple implicit measurement already has a plethora of approaches (Vandeberg, Smit, and Murre, 2015), and evolution continues. Ideally, data would be harvested from many providers, something the ARF has demonstrated is possible but difficult, and this should be encouraged.
RESEARCH AGENDA
Throughout the current article, many research questions have been raised. Most important to note, more validations are needed to show that new neuroscientific tools work for advertising decision making and are, ideally, consistently better than current approaches. The scientific method breaks down when industry is asked simply to trust neuroscience research firms when they say they are right—consider the Royal Society's motto, "on the word of no one." Considering the young age of the industry, it is no wonder that validations are limited, but they are an ongoing priority, particularly using real advertising stimuli, which typically are not simple. Advertising stimuli include varied visual and verbal elements, from music and celebrities to stories, nostalgic cues, and branding, and they change quickly in video stimuli. Given the broad range of stimuli, one could expect diverse responses. Relying on knowledge developed in response to simple stimuli may be a useful start. Tools need to be validated with actual brands and for advertising, however, including documenting whether response to storyboards, videomatics, or finished copy is comparable—or varies by tool.
Other subquestions include the following: How realistic does the environment need to be? What response do great (and unsuccessful) advertisements get for the different constructs, for example, attention, memory, arousal, and valence? There are already some important contributions, such as the demonstration of the validity of EMG as a measure of the valence of emotional response to radio advertisements (Bolls, Lang, and Potter, 2001); skin conductance experiments that have explored the validity of musical beats as auditory structural features; and the potential for increases in tempo to lead to greater sympathetic arousal (Dillman Carpentier and Potter, 2007). Nevertheless, the field needs more discoveries.
With the growth in combinations of measures collected, including combining traditional and neuromeasures, there is a growing agenda for validating these combinations. Using both types of measures to understand the same stimuli may be one way to reduce noise, but when they give different answers, this may add cost and confusion. These problems need evidence-based answers.
A separate agenda concerns understanding how to use such data to make better advertising-related decisions (Wierenga, 2011). What are neuro-studies most useful for? Is it identifying sales-effective copy, copy that gets attention, or copy that builds branded memories? Or is it isolating the best cut-down version; pre- and/or post-testing; or determining transferability of copy across markets? Priority attention should be directed at documenting whether different people or groups of people make the same decisions with the same neuro-output. Are certain skill sets or combinations of skills needed by advertisers for better decisions? Is there a need for cross-disciplinary groups, such as marketers and advertising staff who understand the brands and marketing, coupled with brain scientists and technicians who are specialists in the data? How big should such groups be? Should their decisions be structured? Linking to the forecasting literature (Armstrong, Green, and Graefe, 2015; Green and Armstrong, 2015) may be fruitful, as it provides guidelines for making more accurate predictions.
The impact of granular testing on creativity also should be considered. Engineered advertisements never will be popular with creatives, who fear that research or rules applied to the process will stifle their creativity. Those who use these approaches likely will learn where they can be creative and perhaps where there are some "rules," such as "center-screen branding gets noticed more." Advertising is unlikely ever to be completely engineered, but there is potential to surface some very useful knowledge.
All of this will require systematic documentation. Where do we see consistent patterns across multiple conditions, such as established and new brands or categories, varying types of advertising, varied media, and with different kinds of respondents—users and nonusers, different demographics, different cultures? New questions also will continue to arise as the technologies evolve.
Building Future Advertising Theory
Those who create advertising do not just want to be told to cut Scene X or increase branding from second Y. They want to learn what consistently produces great advertising. These tools show promise for producing new perspectives and advancing theory in the following two areas:
How advertising impacts the brain, measured both behaviorally and directly, on a range of constructs, such as attention, emotion, arousal, valence, and memory, and how each links with in-market buying.
Granular response to all sorts of executional tactics, including music, colors, characters, and more.
Neuroscientists are still early in their journey to understand the brain, so it is no wonder that marketers are in the early stage of theoretical development incorporating this new knowledge. Those who have made relevant marketing contributions should share their knowledge. Marketers must build skills in this space and invest to learn what is known about the brain and what this means for advertising theory.
There already have been some important theoretical contributions for advertising related to long-term memory (Kemp et al., 2002; Rossiter and Silberstein, 2001; Silberstein, Harris, Nield, and Pipingas, 2000; Silberstein and Nield, 2008), as well as discussions on the theory of psychophysiology (e.g., Lang, Potter, and Bolls, 2009; Potter and Bolls, 2012). For robust theory to be developed around how advertising works, systematic documentation is needed of what these tools measure and how valid they are.
Advancing theory will be challenging until neuro-skills are developed more widely among advertisers and more broadly across the different advertising conditions. Additionally, there must be more consistency and transparency across providers, including clear operational definitions of constructs, such as a message's emotional tone versus the audience's emotional response (Bolls et al., 2001). Traditionally, researchers working in this space have tried to control variation through experimental design and laboratory setup (Lang, Potter, and Bolls, 2009; Potter and Bolls, 2012), but to advance advertising, marketers and researchers need to understand how the brain processes real advertising in-market. Future experiments should be as realistic as possible, or be shown to be predictive of in-market outcomes, to ensure that the lessons apply to real advertising and modern, dynamic, cluttered media environments.
Wide dissemination, more discussion, and further theoretical contributions should be valued. It is an exciting time to be in marketing as the industry incorporates this new knowledge and builds competency in this space.
ABOUT THE AUTHORS
Rachel Kennedy is associate professor and a founding researcher at the Ehrenberg-Bass Institute for Marketing Science, University of South Australia. She is committed to producing and disseminating scientific knowledge about marketing to help grow brands. Kennedy has been awarded a number of prestigious awards, been published in key marketing journals, and is on the editorial boards for Journal of Advertising Research, International Journal of Market Research, and International Journal of Advertising.
Haydn Northover completed his PhD with the Ehrenberg-Bass Institute in 2013. His thesis investigated the use of neuroscience in advertising measurement, an area he remains committed to investigating.
© Copyright 2016 The ARF. All rights reserved.