Abstract
The value of mass media advertising can be demonstrated by quantifying what happens when it is removed. The current study does this, extending the work of Hartnett, Gelzinis, Beal, et al. (2021) by documenting changes in market share for 365 U.S. brands from 22 consumer goods categories that stopped advertising for at least one year. Market shares of brands without advertising declined, on average, at a steady rate year over year. On average, market share declines were more common and substantial among small brands and those losing share before advertising ceased. That prior findings generalize to a new market and many new categories increases confidence in the results.
MANAGEMENT SLANT
The study expands and adds robustness to prior evidence that when brands stop advertising, declines become more common and more substantial, on average, the longer advertising is absent.
Using market share (where prior research used sales), losses were quantified: relative to the last advertised year, market share declined, on average, by 10 percent after one year, 20 percent after two years, and 28 percent after three years without advertising.
Such quantification facilitates financial forecasting and portfolio decision making concerning advertising cessations.
Brand size and market share trajectory before stopping advertising affect the rate of market share decline, so they should be factored into advertising cessation decisions.
The magnitude of market share decline varied considerably across categories. Consumer goods with longer interpurchase intervals appear to suffer greater average decreases after three years without advertising.
INTRODUCTION
Most marketers agree that ongoing advertising investment helps to maintain and improve a brand’s market position. This perspective is supported by research on advertising spending across various markets and categories (e.g., Danenberg, Kennedy, Beal, and Sharp, 2016; Hansen and Christensen, 2005; Jones, 1990). Commentators advise that advertising cessation should be approached with caution. Mel Edwards, the global chief executive of Wunderman Thompson, warned that “going dark makes brands vulnerable, and people may trade your brand for something else” (Edwards, 2020, p. 1). Despite an appreciation of the risks, it is still common for brands to have extended periods when they do not advertise (Chemmanur and Yan, 2019). Brands stop advertising for various reasons, including pressure to inflate earnings to avoid buyouts; competition for budget across brand portfolios; or other business requirements, ranging from capital investments, to reallocating advertising budgets, to promotions to secure shelf space from powerful retailers (Schroer, 1990).
Advertisers must provide solid evidence of the value and impact of their advertising to secure or maintain their budgets, and those that can justify advertising’s contribution to the company’s bottom line are more likely to succeed. One way is to quantify the potential consequences of stopping advertising. As with all marketing decision making, the goal is to be evidence based. Unfortunately, systematic empirical documentation of the results of turning off advertising is scarce and largely predates the twenty-first century.
One recent study looked at prolonged advertising absences for 41 brands in the Australian alcohol market, using volume sales and advertising spend data collected over two decades, from 1996 to 2015 (Hartnett et al., 2021). The study found that advertising absences lasting a year or longer were often associated with lower sales volumes relative to the last advertised year. It further revealed two important conditions: brand size and sales trajectory before stopping advertising. Although the study is currently one of the Journal of Advertising Research’s most-read articles and generated significant excitement online among advertisers, it only tested a single product category in a single market, so the generalizability of response patterns to lengthy advertising cessations is unknown.
Findings from single studies must therefore be tested for our field to advance; replication is a scientific practice essential to building trustworthy knowledge (Royne, 2018). Replications, however, are often undervalued (Nature, 2020). The current study showcases the importance of replication (and extension) in adding evidence to this critical advertising spending decision. It closely replicates the approach established by Hartnett et al. (2021) but examines market share rather than volume sales as the measured outcome, which makes the knowledge far more useful across different categories and markets. The data span 22 consumer packaged goods categories, varying in category size and consumer interpurchase cycle, for products sold in the United States over six consecutive years, from 2010 to 2015. Advertising cessations span several broad-reach media, including television, radio, out-of-home, and print advertising. The current study provides more robust quantification, with the identification of conditions that matter, of what marketers can expect when a previously advertised brand goes silent.
BACKGROUND
Much of the evidence for what happens when mass media advertising stops comes from split-cable television experiments run between 1982 and 2008 (e.g., Hu, Lodish, and Krieger, 2007; Hu, Lodish, Krieger, and Hayati, 2009; Lodish, Abraham, Kalmenson, Livelsberger, et al., 1995; Riskey, 1997). Zero-weight tests compare matched cells, such as markets or regions, where a brand continues to advertise at a normal weight in one market, typically on television, and goes dark in the other for 12 months (Hu et al., 2007; Hu et al., 2009; Lodish et al., 1995; Riskey, 1997). About half of the zero-weight tests recorded significant sales differences between markets, with sales typically lower in the dark market (Lodish et al., 1995; Riskey, 1997).
Controlled in-market experiments provide precise, tangible results but require high levels of patience, commitment, and cooperation from brand owners. These requirements can present barriers to collating extensive samples of experimental observations when brands stop advertising. An arguably more accessible alternative is to use historical data to observe what happens to sales when advertising stops occur naturally. Using historical data also enables researchers to examine advertising cessation beyond the 12-month timeframe typical of in-market experiments, where carryover effects from advertising before the cessation are likely present (Leone, 1995). Extending the cessation timeframe, therefore, enables researchers to better understand the enduring impact of advertising on brand buying and performance.
Hartnett et al. (2021) analyzed historical data and presented new findings on the longer-term consequences of not advertising. About half of the brands experienced sales declines greater than 10 percent after one year without advertising, which is consistent with findings from the in-market experiments discussed earlier. The proportion of brands in decline increased each year without advertising, and by the fourth year, all brands that remained unadvertised were declining. Sales were down across brands, on average, by 16 percent after one unadvertised year, 25 percent after two, and 36 percent after three, indicating that brand decline occurs at a reasonably steady rate rather than at an accelerating or exponential rate.
The authors reported that brand size affected sales response to long advertising cessations. Larger brands experienced relative stability for one or two years without advertising, on average, from which point declines began in earnest. Conversely, declines were more immediate and greater for small brands, on average. Specific to the first unadvertised year, these results were consistent with observations from in-market experiments for established versus new brands (Hu et al., 2009; Lodish et al., 1995; Riskey, 1997). Prior sales trajectory before stopping advertising also affected sales changes. Previously stable and growing brands experienced minimal losses for the first two years without advertising, from which point decline set in. Previously declining brands continued to decline rapidly in the first two years without advertising, which then leveled out, perhaps because rapidly declining brands were withdrawn from the market.
These discoveries are interesting but are limited to a single study analyzing a single product category and market. Hartnett et al. (2021) were also blind to other variables, such as distribution changes or price promotions, which could have contributed to in-market sales changes, given that the data were from a natural experiment, not a controlled one. Consequently, it is fair to speculate that the findings may be idiosyncratic to the one dataset and may not extend to, or be as pronounced for, brands in different categories or markets that present different conditions.
The current study aims to replicate the approach of Hartnett et al. (2021) with a new, much larger dataset of in-market observations to determine whether the nature and magnitude of the relationship between advertising cessation and brand performance hold. The following section outlines the rationale for such replication research, which speaks to the importance of the current study.
WHY REPLICATIONS ARE NEEDED IN ADVERTISING RESEARCH
Replications seek to repeat a prior study to determine whether the initial empirical results are observed again. This process is regarded as a crucial aspect of the scientific method. There are two different types of replication—namely, interstudy and intrastudy (Easley, Madden, and Dunn, 2000). Interstudy replications are conducted at a time separate from the original study and attempt to duplicate the previously published findings. This approach also encompasses replication with extension, where new conditions, including categories, markets, and timeframes, are incorporated (Evanschitzky, Baumgarth, Hubbard, and Armstrong, 2007; Hubbard and Armstrong, 1994). Intrastudy replications, by contrast, are designed to examine multiple conditions or experiments in one investigation to establish the reproducibility of findings and to identify boundary conditions (Ehrenberg, 1990). Replications can also be considered close or differentiated, depending on the similarity of conditions within or across studies (Lindsay and Ehrenberg, 1993).
Why replicate? Replicated findings verify that discoveries are reliable and trustworthy (Hubbard and Armstrong, 1994). Going further, by identifying patterns or regularities between variables, such as advertising cessation and brand performance, across many sets of data, incorporating different conditions, marketers can make predictions about what will likely happen in response to future activities (Ehrenberg and Bound, 1993; Kennedy and Hartnett, 2018; Uncles and Wright, 2004). When the findings are sufficiently robust, patterns can be quantified and expressed as empirical generalizations.
Despite the benefits replication studies can offer, long-standing biases and barriers have impeded the widespread publication of replication results in the social sciences (Easley et al., 2000; Easley, Madden, and Gray, 2013; Lindsay and Ehrenberg, 1993; Madden et al., 1995). Replication studies are rare in marketing research broadly and in advertising research specifically. Across leading marketing journals, less than two percent of empirical articles published from 1974 to 2011 were replications (Evanschitzky et al., 2007; Hubbard and Armstrong, 1994; Kwon, Shan, Lee, and Reid., 2017). Meanwhile, across major advertising journals, only three percent of empirical articles published from 1980 to 2012 were replications (Park, Venger, Park, and Reid, 2015). The lack of replication research is considered a major problem for our discipline (Evanschitzky et al., 2007; Kwon et al., 2017; Royne, 2018), because it signals uncritically accepting the legitimacy of all research results. Academics and practitioners alike have been cautioned against following advice from a single study because it is “virtually meaningless and useless in itself” (Lindsay and Ehrenberg, 1993, p. 217).
Replication studies are not always confirmatory, however. Only 40 percent of replication studies published in leading marketing journals from 1974 to 1989 (Hubbard and Armstrong, 1994) and 75 percent published from 1990 to 2004 (Evanschitzky et al., 2007) confirmed or partially confirmed earlier results. Confirmatory replications are more common in advertising journals, where 93 percent of replications confirmed prior results, wholly or partially (Park et al., 2015). Higher figures could represent a growing publication bias against failed replications.
Notably, successful and failed replications play a role in developing sound marketing and advertising knowledge. Consistent, generalized findings are a positive outcome, because those adopting the implications of the research do not need to worry about deviations (i.e., it dispels the idea that “my brand/category/market is different”; Ehrenberg, 1990). Meanwhile, failure to replicate results could indicate a boundary condition or exceptional case to a generalized pattern (Uncles, 2011). The point is that research benefits from extensive replication and extension.
The current research is an interstudy replication of Hartnett et al. (2021) and a purposefully designed intrastudy replication covering numerous consumer goods categories. It extends Hartnett et al. (2021) in several ways, but not too drastically. The original data were for alcoholic beverages, including on- and off-premises sales in Australia; that is, bulk keg sales to bars and pubs and units sold by specialty alcohol retailers. The current data span 22 consumer goods categories sold widely in supermarkets in the United States. The original analysis examined volume sales as the dependent variable. The current analysis has a different operationalization of outcomes, examining value market shares. If the original, largely exploratory, results are reproduced under these close conditions, the generalizability of the relationship between advertising cessation and behavioral brand performance measures can start to be established.
The original research reported considerable variation in sales changes after advertising cessation across cases, particularly for small brand cases. Beyond looking to confirm or refute the main effects found previously, the authors’ concerns are to more precisely quantify the magnitude and variation of effects through an increased number of systematic observations and to determine how consistently these effects occur across previously identified conditions (i.e., brand size and prior trajectory), as well as across varied product categories, such as cereal versus household cleaners versus cough remedies. These outcomes should help establish a sound theory regarding how advertising spend works to support sales and cement the findings so advertisers can use them confidently in practice. The research questions are as follows:
RQ1: What happens to market share after a brand stops mass media advertising for a year or more?
RQ2: How do brand size and prior trajectory affect market share changes after a brand stops mass media advertising for a year or more?
This paper also introduces a new research question relevant to the study design:
RQ3: Does the relationship between market share change and advertising cessation hold across product categories?
METHOD
The Data
The current study merges brand media spending and consumer purchase records in the United States from 2010 to 2015 provided by the Kilts Center for Marketing at the University of Chicago Booth School of Business. A strength of the current study is that the dataset includes all or most competitive brands in the selected product categories, a distinction from Hartnett et al. (2021), which skewed to brands owned by the company that provided the data.
Media information (from Ad Intel) covers television, radio, magazine, newspaper, outdoor, online website display, and cinema advertising (2013 to 2015 only for cinema). Internet information was recorded from reportable advertising-supported websites captured by Nielsen’s probing technology. Paid search and social media advertising were not tracked. The data were reported at the spot level, with the advertised date, primary brand, media type, and estimated cost reported, which can be aggregated into desired time intervals (i.e., annually for the current study).
Nielsen’s panel consists of a sample of more than 60,000 households in each year (2010 to 2015). Panelists use in-home scanners to record their household purchases from retail stores, with sales recorded at the Universal Product Code level.
Both the media and panel data consist of over 100 product categories. Twenty-two diverse product categories are used for the current research, with brands matched across these independent datasets. Initially, the data were coded at the parent brand level (e.g., all Coca-Cola variants were coded as one Coca-Cola brand); however, that method of summing to a single parent brand masked advertising stop cases for variants and was considered unsuited for the current research. Hence, brands are coded at the variant level across datasets wherever possible (e.g., Diet Coca-Cola). Not all brands have media and sales data recorded at the variant level as described earlier (e.g., in the cookie category, Oreo has 22 variants recorded in the sales data, but it is coded as a single [parent] brand in the media data). Brands are coded to the closest disaggregated level in cases like these to be matched across datasets. Three independent coders checked brand variant lists for consistency.
Brands with an average yearly advertising spend and sales under $1,000 were excluded from the analysis. These brands are tiny (i.e., less than 0.01 percent share of voice or market share) and often do not have media and sales data for all years across datasets. This approach prevents results from being biased by brands with very few purchases (Dawes, 2009; Trinh, Romaniuk, and Tanusondjaja, 2015) and still allows for a robust number of observations of advertising cessations for each category.
Identifying Advertising Stops and Market Share Changes
The current study identified an advertising stop when a brand’s advertising spend across media was reduced by 99 percent from one calendar year to the next. The computation departs slightly from that used in the original study, which identified an advertising stop when a brand’s annual advertising spend across media was less than one percent of its average yearly spend. The initial criterion was sensible when the average value was obtained over many data points (i.e., 20 years). For the dataset in the current study, which spans a shorter timeframe, the 99 percent spend reduction approach was simpler to calculate and understand.
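To make the stop-detection rule concrete, the following minimal sketch (in Python with pandas) flags a cessation when a brand’s annual spend falls by at least 99 percent from the previous calendar year. The data frame, column names, and spend figures are illustrative assumptions, not the study’s actual pipeline.

```python
import pandas as pd

# Illustrative annual advertising spend per brand (column names and values are assumptions).
spend = pd.DataFrame({
    "brand": ["A", "A", "A", "B", "B", "B"],
    "year": [2010, 2011, 2012, 2010, 2011, 2012],
    "ad_spend": [500_000, 2_000, 0, 300_000, 310_000, 290_000],
})

def flag_advertising_stops(df: pd.DataFrame, cut: float = 0.99) -> pd.DataFrame:
    """Flag years in which a brand's spend fell by at least `cut` (here 99%) versus the prior year."""
    df = df.sort_values(["brand", "year"]).copy()
    prior = df.groupby("brand")["ad_spend"].shift(1)
    df["stop"] = (prior > 0) & (df["ad_spend"] <= (1 - cut) * prior)
    return df

print(flag_advertising_stops(spend))
# Brand A's 2011 spend is a >= 99% reduction from 2010, so 2011 (and the still-dark 2012) is flagged.
```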
Annual market share figures for each calendar year were calculated from sales revenue, which is an important departure from the original study, which examined sales volume. Revenue market share is a more robust dependent variable, as it is more strongly connected to profit (Bhattacharya, Morgan, and Rego, 2021). For the research, it also works better for quantification and comparison across many categories, which report markedly different sales volumes. For practitioners, this facilitates the incorporation of the cessations knowledge into business cases to justify and defend advertising budgets.
Once a stop was identified, the brand’s market share in the year immediately prior was used as the baseline, and its market share was indexed against this value for the unadvertised year(s) that followed. As such, changes in market share are reported relative to the brand’s last advertised year. Index scores of 80 in Year 1 and 70 in Year 2, for example, represent a 20 percent and a 30 percent decrease, respectively, in the brand’s market share without advertising from Year 0, or the base year (index of 100). This approach is consistent with that used in the original research.
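The indexing itself is simple arithmetic; a brief sketch under the same illustrative assumptions reproduces the example above:

```python
import pandas as pd

# Illustrative market shares (percent), keyed by years since the last advertised year (Year 0).
shares = pd.Series({0: 5.0, 1: 4.0, 2: 3.5}, name="market_share")

# Index each unadvertised year against the base year (index of 100).
index = shares / shares.loc[0] * 100
print(index)  # Year 1 -> 80 (a 20% decline); Year 2 -> 70 (a 30% decline)
```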
With six consecutive years of data, the most extended documented stop is four years, because each cessation period must be preceded by two advertised years to establish baseline market share and the prior trajectory condition.
Brand Conditions
Brand size was classified into two groups, large or small, on the basis of the brand’s market share in the base year. Brands were coded as large when their market share was at or above the category median in their last advertised year and as small when it was below the median. The original study assessed brands as small, medium, or large according to fixed cutoffs based on volume sales, which was suitable because those data comprised a limited number of brands in a single industry. The data in the current research span many categories, which vary tremendously in scale and brand market share, demanding a different approach. The decision to use a category median split is in line with recent consumer behavior studies (e.g., Bruce, Becker, and Reinartz, 2020; Trinh and Dawes, 2020).
To establish a prior trajectory, the brand’s market share in the year before the last advertised year was indexed against the base year (i.e., the last advertised year). Cases with indexed share changes of ±10 percent or more before stopping were considered growing or declining, respectively, and cases with indexed share changes of less than ±10 percent were classified as stable. This approach is consistent with the original research.
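Both brand conditions can be coded directly from a case table. The sketch below assumes the prior trajectory is operationalized as the percentage share change from the year before the last advertised year (Year −1) to the base year (Year 0); column names and values are illustrative, not drawn from the dataset.

```python
import numpy as np
import pandas as pd

# Illustrative case table: one row per cessation case (column names and values are assumptions).
cases = pd.DataFrame({
    "category": ["cereal", "cereal", "coffee", "coffee"],
    "share_base": [6.0, 0.8, 2.0, 9.5],    # market share in the last advertised year (Year 0)
    "share_prior": [5.5, 1.0, 2.1, 9.4],   # market share in the year before that (Year -1)
})

# Brand size: at or above the category median base-year share = "large", otherwise "small".
median_share = cases.groupby("category")["share_base"].transform("median")
cases["size"] = np.where(cases["share_base"] >= median_share, "large", "small")

# Prior trajectory: percentage share change into the base year, classified at +/-10 percent.
change = (cases["share_base"] - cases["share_prior"]) / cases["share_prior"] * 100
cases["trajectory"] = np.select(
    [change >= 10, change <= -10], ["growing", "declining"], default="stable"
)
print(cases)
```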
Category Conditions
Behavioral factors, such as category purchase frequency, have been shown to influence market dynamics (e.g., Graham and Kennedy, 2022; Trinh and Anesbury, 2015). The impact of significant interventions, such as investments in brand advertising, or lack thereof, could be moderated by these behavioral factors, changing how brands respond. The Food Marketing Institute categorizes products into four groups: Staples are products that most households need and buy frequently (high penetration, high frequency); niches are bought by fewer households, but those that do buy them do so frequently (low penetration, high frequency); variety enhancers are bought by many households but only occasionally (high penetration, low frequency); fill-ins are only purchased occasionally by a small group of people (low penetration, low frequency).
Annual penetration and average purchase frequency were calculated for each category in the current study. (See Figure 1 for the resulting classifications.)
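The paper does not state the cutoffs used to separate high from low penetration and frequency, so the sketch below assumes median splits across categories purely to illustrate the four-way classification; the category metrics are made up.

```python
import pandas as pd

# Illustrative category metrics (values are assumptions, not the study's data).
cats = pd.DataFrame({
    "category": ["cereal", "cat food", "skincare", "shaving"],
    "penetration": [0.92, 0.35, 0.88, 0.30],       # share of households buying per year
    "purchase_frequency": [8.5, 7.0, 2.1, 1.8],    # average purchases per buyer per year
}).set_index("category")

# Assumed cutoffs: at or above the cross-category median counts as "high".
high_pen = cats["penetration"] >= cats["penetration"].median()
high_freq = cats["purchase_frequency"] >= cats["purchase_frequency"].median()

labels = {
    (True, True): "staple",             # high penetration, high frequency
    (False, True): "niche",             # low penetration, high frequency
    (True, False): "variety enhancer",  # high penetration, low frequency
    (False, False): "fill-in",          # low penetration, low frequency
}
cats["fmi_group"] = [labels[(bool(p), bool(f))] for p, f in zip(high_pen, high_freq)]
print(cats)
```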
Sample of Cessation Cases
There were 377 cases from 365 brands that stopped advertising for at least one year (12 brands stopped advertising, restarted, and then stopped again). Of these, 197 cases ceased advertising for one year only, while 180 cases continued unadvertised into a second year, 91 into a third year, and 34 into a fourth year. Brands with cessations shorter than four years either resumed advertising or reached the final year of the dataset.
Different conditions are well represented across cases; although small brands outnumber large brands (72 percent versus 28 percent), there are nearly equal numbers from each of the trajectory groups (29 percent growing, 34 percent stable, and 37 percent declining), which means that all kinds of brands stop advertising (not just small or struggling brands). Categories are differentially represented, with more cases from skincare, haircare, and cereal categories. (See the Appendix for an outline of the full sample details.)
RESULTS
Market Share Trends after Advertising Stops
Results are reported in aggregate and by condition for each of the 22 categories (See Table 1). In the initial discussion, results are compared with the findings of Hartnett et al. (2021) (shown as sales volume change) to assess how well the initial work generalizes to this new, broader evidence.
Mean market share indices show that brands generally lost market share after stopping advertising (See Table 1). On average, brands’ market share changed by −10 percent from the base year after one year without advertising (cf. −16 percent for sales volume in the original study), −20 percent after two years (cf. −25 percent for sales volume), −28 percent after three years (cf. −36 percent for sales volume), and −30 percent after four years (cf. −54 percent for sales volume). The average rate of share decline from year to year is consistent for the first three years. It decelerates in the fourth year: −10 percent in the first year (cf. −16 percent for sales volume), a further −10 percent (cf. −9 percent) from the first to second year, −8 percent from the second to third year (cf. −9 percent), and −2 percent (cf. −18 percent) from the third to fourth year.
Although the dependent variables of sales volume and market share are not directly comparable, the magnitude and steady rate of decline across studies are rather consistent. The main difference is in the fourth year, when the average sales volume loss reported by Hartnett et al. (2021) is more extreme than what is found for market share here.
Market share indices for cases varied considerably for each of the four years, as indicated by the standard deviation values (See Table 1) and observed when cases are presented graphically, with indices widely dispersed around the mean (See Figure 2, Graph A). This dispersion makes it clear that although, on average, brands lost market share without advertising, not all brands declined after a cessation. Applying the criterion that brand market share indices less than 90 are substantively declining, of all cases, 49 percent (cf. 53 percent in the original study) declined in Year 1 without advertising; 61 percent (cf. 62 percent), in Year 2; 71 percent (cf. 71 percent), in Year 3; and 71 percent (cf. 100 percent), after four years.
The figures related to the commonality of decline closely resemble those seen for cessations of alcohol brand advertising in Australia (Hartnett et al., 2021), again, except for the fourth year; the deceleration of decline appears to have occurred slightly earlier in this dataset. Furthermore, the finding that about half of the brands declined in the first year without advertising is also consistent with results across the older zero-weight experiments (Hu et al., 2007; Hu et al., 2009; Lodish et al., 1995; Riskey, 1997).
Market Share Trends by Brand Conditions
Brand Size. Mean market share indices for small brands were consistently lower than those for large brands over the three years (Figure 2, Graph B). Independent-samples t tests showed significant differences between small and large brands in Years 2 and 3 (p < .05), with small to medium effects (ds = .38 and .50, respectively; Cohen, 1988). The average decline was similar for small and large brands in Year 1. This differs from the findings of Hartnett and colleagues (2021), in which small brands’ declines were found to be more immediate (occurring in Year 1) and more substantial than those of larger brands.
Prior Market Share Trajectory. Brands already in decline experienced larger declines without advertising than previously stable or growing brands across the years (See Figure 2, Graph C), which aligns with expectations. A one-way analysis of variance showed that means between trajectory groups were significantly different in Years 1 and 2 (p < .05), with small to medium effects (η2 = .04 in Year 1, and η2 = .06 in Years 2 and 3; Cohen, 1988). Stable and growing brands experienced initial stability without advertising in the first two years. These trajectory patterns were all also observed by Hartnett et al. (2021).
Brand Size × Prior Trajectory. Brands already in a downward trajectory, regardless of size, lost the most market share, on average, each year without advertising, with small declining brands proving the “biggest losers” (See Figure 2, Graph D). Mean indices were significantly different for Years 1 and 2 between the six classifications (p < .05), with small to medium effects (η2s = .04 and .08, respectively). Large stable and large growing brands stand out as most resistant to market share losses, even after three years of darkness; average indices are persistently close to the 100 base year index. Small stable and small growing brands stayed largely stable, on average, in the first year without advertising but declined in Years 2 and 3.
Multiple Regression
Multiple regression was conducted to compare the findings to those of the original study. Raw values were used for brand size (market shares in the last advertised year, ranging from .01 percent to 16.6 percent), prior trajectory (percentage changes in market share before advertising stopped, ranging from −90 percent to 1,330 percent), and market share changes after advertising ceased. The 1.5 × IQR (interquartile range) rule (Tukey, 1977) was used to detect outliers, and six observations were removed. In the original study, the brand size condition was calculated from log-transformed average yearly sales. The authors attempted a log transformation to overcome heteroscedasticity (i.e., data skewed toward small brands), but it did not improve the model, so raw values were used for the brand size variable. Variance inflation factors ranged from .99 to 1.01, so there was no concern for multicollinearity among the explanatory variables.
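For readers who wish to reproduce the structure of this analysis, the following sketch (with simulated data, not the study’s dataset) applies Tukey’s 1.5 × IQR rule, fits the two-predictor OLS model, and computes variance inflation factors; all variable names and coefficients are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)

# Simulated case-level data (column names and relationships are assumptions).
df = pd.DataFrame({
    "size_base": rng.uniform(0.01, 16.6, 300),   # market share in the last advertised year (%)
    "prior_change": rng.normal(0, 30, 300),      # % share change before the stop
})
df["share_change"] = 0.1 * df["size_base"] + 0.2 * df["prior_change"] + rng.normal(0, 15, 300)

# Tukey's 1.5 x IQR rule: keep observations inside the fences on every variable.
def within_iqr_fence(s: pd.Series, k: float = 1.5) -> pd.Series:
    q1, q3 = s.quantile([0.25, 0.75])
    iqr = q3 - q1
    return s.between(q1 - k * iqr, q3 + k * iqr)

keep = (within_iqr_fence(df["size_base"])
        & within_iqr_fence(df["prior_change"])
        & within_iqr_fence(df["share_change"]))
df = df[keep]

# OLS: market share change regressed on brand size and prior trajectory.
X = sm.add_constant(df[["size_base", "prior_change"]])
model = sm.OLS(df["share_change"], X).fit()
print(model.summary())

# Variance inflation factors for the two predictors (values near 1 indicate no multicollinearity).
vifs = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
print(dict(zip(["size_base", "prior_change"], np.round(vifs, 2))))
```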
The model reported significant results, F(2, 673) = 19.99, p < .001. Brand size and prior trajectory were significant predictors but only explained a small proportion of variance in market share changes (R2 = .06), which is far less than that in the original study (R2 = .36). Standardized beta weights suggest that prior trajectory (β = .22) more strongly predicts market share changes than brand size (β = .09). The relative importance of the two conditions is consistent with Hartnett et al.’s work (2021), which reported β = .50 for prior trajectory and β = .35 for brand size, now seen across many more categories.
Market Share Trends by Product Categories
There is a consistent pattern of market share decline over time after advertising stops, on average, for 17 of 22 categories (See Table 1). In five categories—household cleaners, cat food, baby food, nappies (diapers), and shaving equipment—brands retained or even grew their market share without advertising.
Beyond the overall pattern, individual categories responded to advertising cessation with varying magnitudes. Looking specifically at changes in the first unadvertised year, crackers and cough remedy brands experienced the most significant drop, losing 23 percent and 20 percent market share, respectively, on average. Coffee had the most rapid decline rate over time, with brands losing 80 percent of market share by the third year, on average. Brands that stopped advertising in the coffee category were primarily small brands (20 of 23).
Grouping categories by category buying behavior (See Table 1) does not clarify these observed differences. Categories such as ice cream and soup (staples) and hair care and skin care (variety enhancers and fill-ins, respectively) are similarly stable in the first year without advertising, on average (i.e., indices from 94 to 96 in Year 1). One result stood out: brands from low-purchase-frequency categories (variety enhancers and fill-ins) that remained dark for more than two years were likely to suffer more, on average, than brands from high-purchase-frequency categories (staples and niches); their indices were 66 and 66 versus 81 and 81, respectively, in Year 3.
DISCUSSION
The current research replicates and extends the work of Hartnett et al. (2021), which provided an analytical approach to understanding the relationship between brand advertising and sales; specifically, what happens to brand sales when mass-reach advertising is absent for an extended period.
The original analysis of 57 cessation cases in a single category showed that most alcohol brands lost sales in the first year without advertising, and decline became more common as brands went longer without advertising. The current analysis of a much larger sample of cessation cases from 365 brands across diverse packaged goods categories shows patterns broadly consistent with those in Hartnett et al.’s (2021) study. On average, market shares declined without advertising, and as in the original study, average declines year over year were relatively moderate. Similarly, about half of the cases showed a substantive reduction in market share (index below 90) in the first year without advertising, with more brands experiencing substantive declines the longer they were unadvertised.
Hartnett et al. (2021) identified brand size and prior sales trajectory as important conditions, with the latter being a better predictor of sales trends without advertising. Again, brand size and prior trajectory were found to affect market share changes in the current research. The explanatory power of these conditions was lower for this dataset, but the relative size of effects was consistent. One divergent finding in the current study was that small brands, as a group, did not suffer such steep, immediate market share declines; their average loss was closer to that of larger brands in the first year without advertising. The initial stability for all consumer goods brands, except for already declining brands, likely speaks to the fact that advertising effects carry forward for a time and that activities other than advertising contribute to brand performance, at least initially.
The current study documented cessation cases from 22 categories, which provides the opportunity to look for significant sameness in patterns between categories and/or to identify potential boundary conditions. This is a vital step in building robust knowledge that is useful to researchers and advertisers. The categories differ in popularity (penetration) and repeat purchase (frequency). The average market share change without advertising was consistently negative, and this was observed for most categories. The exceptional categories spanned different product classification types, which suggests that this is not a unifying boundary condition; perhaps something other than category- or brand-specific factors is playing a role here. Longer cessations were linked, however, with a more considerable decline in the third year for products with low purchase frequency.
This replication is, by and large, considered a successful replication of the patterns reported by Hartnett et al. (2021). The value in confirming the prior findings is in showing that they are not isolated cases and, therefore, an essential contribution to advertising knowledge. The contributions and implications revisited in the following text merit attention from advertising researchers and practitioners. Looking ahead, the authors encourage further replication, particularly more radical extensions to services and durable products or developing markets. Such replications would tell us something new and make for a more powerful generalization (Lindsay and Ehrenberg, 1993).
Theoretical Contributions
Generalized patterns, as documented in this paper, provide an essential foundation for further development of the theory that explains and predicts how advertising works. The authors’ findings resonate with theory that acknowledges advertising’s long-term effects, where it has a key role as a brand reminder and in reassuring shoppers (e.g., Broadbent, 2000; Ehrenberg, Barnard, Kennedy, and Bloom, 2002; Jones, 2007). Although brands that went dark generally lost share, there was variation across identified conditions in brands’ histories, such as prior trajectory and brand size.
The authors observe that dark brands generally lose market share, and other studies have identified that share loss is tied to brands losing customers (penetration) rather than customer loyalty (Ailawadi, Lehmann, and Neslin, 2001; Nenycz-Thiel, Dawes, and Romaniuk, 2018; Romaniuk, Dawes, and Nenycz-Thiel, 2014). A shrinking customer base (rather than loss of loyalty) is dangerous in the long term, because a brand’s customer base is mostly made up of infrequent buyers: the distribution of occasions on which households buy a category/brand follows a negative binomial distribution, with light buyers as the largest group and progressively smaller numbers of medium and heavy buyers (Chatfield, Ehrenberg, and Goodhardt, 1966; Dawes and Trinh, 2017; Ehrenberg, 2000; Sharp and Romaniuk, 2021). Dark brands likely lose market share by failing to acquire or nudge very infrequent buyers (the bulk of the customer base). Ehrenberg, Barnard, Kennedy, and Bloom (2002) commented, “We think that advertising is needed to try and maintain both salience (penetration) and customer retention, and also to give the brand a chance of catching its fair share of ‘the leaks’” (p. 14). Future studies detailing the changes in dark brands’ customer base are encouraged to shed further light on this.
The key theory for why market share declined in response to advertising cessation is that dark brands become harder to think of when a purchase occasion arises, reflecting a loss of mental availability (Romaniuk, 2013, 2021). Mass-reach advertising is one of the few scalable tools that marketers have to keep the brand accessible in memory, even for those who are familiar with the brand (Stocchi, Wright, and Driesener, 2016) and especially among light or nonbrand buyers (Vaughan, Beal, and Romaniuk, 2016). Longer intervals between purchase occasions provide more scope for memory erosion, which makes continuity of advertising presence particularly vital over the long term (Graham and Kennedy, 2022). Future advertising cessation studies could benefit from incorporating brand memory measures to more directly test this mechanism.
Implications for Advertisers
The current research across 365 brands in 22 categories is a solid foundation for quantifying the likely outcomes of stopping advertising, especially for consumer goods brands. Practitioners can use the current research as a baseline to determine the possible effects of stopping advertising when advertising cuts are required and as input into business cases to keep their advertising budgets. Companies with portfolios of brands can plan the likely impact of ceasing advertising given the conditions identified, which can help with tough decisions regarding how long a cessation may be able to last if a brand (or brands) will not be supported.
Several decades ago, Erwin Ephron, a prolific media researcher and consultant (Metzger, 2013), introduced the shelf-space model of advertising: “Advertising needs continuity, because not being there with a message is like being out-of-stock” (Ephron, 1995, p. 18). Ephron advocated for weekly reach planning, which would mean being on air for more weeks at a lower weight. The current research looks at longer time horizons (years) but adds to the evidence that continuity over the long term benefits brand performance (Danenberg et al., 2016; Gijsenberg and Nijs, 2019). Thus, the broad recommendation is that brands that do not want to decline in share (i.e., they want to grow or at least maintain share) should ideally schedule advertising with continuity: spend something on advertising every year.
This cross-category research offers further insights. Large, stable, and/or growing brands can weather long-term advertising cessations better than their small and declining counterparts. This finding aligns with research showing that large brands can afford to underspend relative to their market share and still maintain share (Danenberg et al., 2016; Jones, 1990). A cessation is an extreme form of underspending, with nonspending brands typically omitted from share-of-voice research. The implication is that large brands can stop advertising with a lower risk of substantive losses, presenting an opportunity to improve profit reports. Large brands, however, must be careful not to overmilk the privileged position built by past investments. Otherwise, they risk sacrificing future sales, many of which will come from light brand buyers (Dawes, Graham, Trinh, and Sharp, 2022).
Limitations and Future Research
The current research examined the effect of one marketing activity (mass media spend) on brand performance without considering other marketing activities. This narrow view is in keeping with replicating the original study. It must be acknowledged here, however, as it was previously, that changes in other brand-level activities (e.g., price promotions or new product launches), as well as category-level conditions (e.g., advertising intensity or dominance of private label brands), could also moderate the relationship between advertising cessation and brand market share performance. In future studies, researchers may want to account for these factors.
The current study analyzed only brands that remained in the marketplace despite advertising cessation. Hence, the results reflect the performance of surviving dark brands. Tiny brands are more likely to drop out of the market when investments cease, so the performance of unadvertised brands may be inflated.
Future research should investigate shorter cessations, such as quarterly stops. These temporary stops are likely much more common in practice than the longer stops examined here. Many advertisers burst campaigns for weeks or months and then go silent for weeks or months, rather than maintaining a continuous presence over extended periods.
Another avenue worth exploring is when brands resume advertising after a prolonged hiatus. One early study found that, after 18 months of complete advertising cessation, the sales decline was recovered within six months of reinstating the advertising (Ackoff and Emshoff, 1975). Specific to recessionary conditions, it has been suggested that it may take up to five years to recover from one year without advertising (Field, 2008). Research into postcessation advertising investments could signal solutions for the potential consequences to brand performance identified in the current study.
ABOUT THE AUTHORS
Peilin Phua is a lecturer at UniSA Business and a senior marketing scientist at the Ehrenberg-Bass Institute for Marketing Science, University of South Australia. Her research, which focuses on consumer behavior and advertising, has been published in the Journal of Retailing and Consumer Services and Journal of Advertising Research.
Nicole Hartnett is a senior marketing scientist at the Ehrenberg-Bass Institute for Marketing Science, University of South Australia. She has a keen interest in advertising creativity and effectiveness, considering measurement approaches, and managerial decision making. Hartnett’s work can be found in the Journal of Advertising Research, Journal of Advertising, and European Journal of Marketing.
Virginia Beal is a senior marketing scientist at the Ehrenberg-Bass Institute. Her research, which focuses on advertising effectiveness, media usage, and scheduling, has appeared in the Journal of Advertising Research, Journal of Business Research, and International Journal of Advertising.
Giang Trinh is an associate professor at UniSA Business and a senior marketing scientist at the Ehrenberg-Bass Institute. His research expertise lies in quantitative method development and applications in the areas of consumer purchasing behavior, brand competition, market structure, price promotion, advertising and sales relationship, and new product launches. Trinh’s work can be found in Marketing Letters, European Journal of Marketing, Journal of Business Research, Journal of Retailing and Consumer Services, International Business Review, Journal of Marketing Management, Journal of Business and Industrial Marketing, International Journal of Market Research, Journal of Product and Brand Management, Journal of Consumer Behaviour, and Australasian Marketing Journal.
Rachel Kennedy is a research professor, director, and co-founder of the Ehrenberg-Bass Institute. Her research is focused on advertising and media knowledge to help grow brands. Kennedy is on a number of journal editorial boards, and her work can be found in the Journal of Advertising Research, Journal of Advertising, Journal of Business Research, and Journal of Retailing and Consumer Services, among others.
ACKNOWLEDGMENT
The researcher(s)’ own analyses were calculated (or derived) based in part on data from The Nielsen Company (US), LLC and marketing databases provided through the Nielsen Datasets at the Kilts Center for Marketing Data Center at The University of Chicago Booth School of Business.
The conclusions drawn from the Nielsen data are those of the researcher(s) and do not reflect the views of Nielsen. Nielsen is not responsible for, had no role in, and was not involved in analyzing and preparing the results reported herein.
Appendix Cases of Advertising Stops By Conditions and Categories
- Received November 15, 2022.
- Received (in revised form) February 9, 2023.
- Accepted March 6, 2023.
- Copyright © 2023 ARF. All rights reserved.