Abstract
In recent years, artificial intelligence (AI) technology has been used to create advertising messages. This study examined the factors that influence consumers’ overall appreciation of AI-created advertisements. The findings indicate that, in addition to its direct effect on consumer reactions to AI-created advertisements, consumers’ perceived objectivity of the general advertisement creation process positively influenced machine heuristic, a rule of thumb that machines are more secure and trustworthy than humans. This effect, in turn, boosted consumer appreciation of AI-created advertisements. Consumers’ perceived objectivity of advertisement creation also negatively influenced perceived eeriness of AI advertising, which jeopardized consumer appreciation of AI-created advertisements. Consumers’ feelings of uneasiness with robots positively influenced both machine heuristic and perceived eeriness of AI advertising.
MANAGEMENT SLANT
Emphasizing the objective process of advertisement creation, such as AI creating advertisements based on a systematic analysis of existing commercials, may attract more favorable consumer responses to AI-created advertisements.
Because consumers’ uncomfortable feelings about robots or related technologies may both benefit and jeopardize the effectiveness of AI-created advertisements, advertisers need to understand their target audience’s opinions of robots or technologies with intelligence to develop appropriate strategies of AI advertising.
Advertisers are advised not to invest in personifying the AI programs used for advertisement creation, because consumer perceptions of how humanlike AI is do not influence their appreciation of AI-created advertisements.
INTRODUCTION
Artificial intelligence (AI), which refers to the computational techniques that “identify patterns from data and infer underlying rules … for adaptively achieving specific goals” (Sundar, 2020, pp. 2–3), is now being utilized in advertising by professionals (Li, 2019). The adoption of AI is reflected in every step of the advertising process, including mining consumer insights, optimizing advertising placement, evaluating campaign effectiveness, and even creating advertising messages (Chen, Xie, Dong, and Wang, 2019; Qin and Jiang, 2019). Currently, more than 50 percent of advertisers have taken advantage of AI technology to communicate with their consumers (Business Insider Intelligence, 2018), and AI is believed to be involved in at least 80 percent of the digital advertising market (Ad Exchanger, 2019). Because AI is restructuring the advertising industry (Kietzmann, Paschen, and Treen, 2018), there is a growing effort to study this emerging technology in advertising research.
Because AI in advertising is still in its infancy, the research has been primarily conceptual and mainly focused on “the introduction and explanation of related concepts” (Qin and Jiang, 2019, p. 339). Some scholars have discussed the building blocks of AI and explained how AI affects advertising by navigating readers through the consumer journey (Kietzmann et al., 2018). Others have explained how AI-powered advertisements work and highlighted the advantages of personalization, contextualization, and real-time delivery that AI-created creatives offer (Chen et al., 2019). Together with a few empirical studies on AI-powered advertisements (Deng, Tan, Wang, and Pan, 2019; Malthouse, Hessary, Vakeel, Burke, and Fudurić, 2019), the current scholarship on AI advertising has examined AI technology predominantly from the perspective of advertising professionals.
To date, little attention in this research stream has been paid to the consumer perspective. Consumers have been exposed to AI-created advertisements, such as the ones from Lexus (Griner, 2018) and Burger King (Sung, 2018). AI-created advertisements in this study refer to advertisements that are either partially or completely created by AI programs. In the Lexus case, AI was used to analyze a large number of previous car commercials and generate the script of the commercial. The Burger King advertisement was similar and was created by AI after it analyzed thousands of fast-food commercials. In both examples, consumers are explicitly informed of the role of AI in advertisement creation at the beginning of the commercials. Because average consumers may not understand how exactly AI works, they may not react to AI-created advertisements in the way expected by professionals. Do consumers appreciate AI-created advertisements? What factors influence their appreciation? As consumers are the final judges of advertising effectiveness, addressing these questions is critical to advertising practitioners who may consider investing in AI advertising.
Accordingly, the purpose of this study was to explore the factors that influence consumers’ overall appreciation of AI-created advertisements. The authors built a conceptual model rooted in the emerging scholarship of human–AI interaction. The human–AI interaction literature indicates that people’s perceptions of machine agency and human agency determine how they react to AI and AI-created content (Sundar, 2020). The perception that machine agency improves human agency elicits a positive heuristic of AI, whereas the belief that machine agency threatens human agency triggers discomfort or eeriness. Because advertisement creation is traditionally considered a human job and AI advertising highlights automation, the tension between human agency and machine agency should strongly manifest in consumer reactions to AI-created advertisements. The current study thus incorporates both positive and negative perceptions of AI in advertising, indicated by machine heuristic and perceived eeriness, respectively, and examines how several antecedents (i.e., task objectivity, AI human likeness, and consumers’ previous experience with AI-like entities) function through these perceptions to eventually influence consumer appreciation of AI-created advertisements. The key contribution of this study is a conceptual model that explains and predicts consumer responses to advertising messages created by AI programs. It supplements the existing literature on AI advertising by focusing on the consumer perspective. The findings of this study will help digital advertisers make better decisions when it comes to leveraging AI technology in creating advertising messages.
LITERATURE REVIEW AND HYPOTHESES
Human–AI Interaction
Human–AI interaction is the process where people “orient to the media source as if it is an intelligent entity that is capable of modifying content in unprecedented ways” (Sundar, 2020, p. 6). According to the modality agency interactivity navigability (MAIN) model, some cues presented on the media interface could trigger certain heuristics that may influence consumer evaluations of the media content (Sundar, 2008). This is because most people are cognitive misers: They tend to think and solve problems in simpler, less effortful ways rather than in more sophisticated, more effortful ways (Fiske and Taylor, 1991). Once there are opportunities to take mental shortcuts during information processing, people tend to do so to avoid effortful analysis of information. The identification of AI as the information source may activate the stereotypes of machines in the minds of consumers, which will shape their responses to AI and AI-created content (Sundar, 2020; Sundar and Kim, 2019).
The common stereotypes of machines contain both positive and negative aspects. The positive aspect speaks to the ability of machines to take planned actions accurately (Gray, Gray, and Wegner, 2007). Machines operate on the basis of predetermined rules. When handling information, machines are often believed to be more objective and less biased than humans (Sundar, 2020; Sundar and Kim, 2019). Such positive stereotypes of machines are conceptualized as machine heuristic, which refers to “the mental shortcut wherein we attribute machine characteristics or machine-like operation when making judgments about the outcome of an interaction” (Sundar and Kim, 2019, p. 2). The negative aspect of machine stereotypes pertains to the inability of machines to experience emotions and sensations, which clearly differentiates machines from human beings (Gray et al., 2007; Haslam, Bain, Douge, Lee, and Bastian, 2008). This negative stereotype tends to trigger feelings of discomfort or eeriness when the boundaries between humans and machines become obscure, such as when a machine behaves like a person or does a human job.
These different aspects of machine stereotypes influence how people assess AI performance in various domains. On the one hand, as machines are believed to work efficiently and objectively, perceptions of AI and AI-created content often are associated with the positive machine heuristic (M. K. Lee, 2018; Sundar and Kim, 2019). On the other hand, the designation of AI as an information source also may trigger perceived eeriness, as machines are considered “unfit for ‘human tasks’ that involve subjective judgments and emotional capabilities” (Sundar, 2020, p. 7). When facing AI-created advertisements, consumers will notice the role of AI as the message creator in some way (e.g., recognizing certain cues on the media interface or being directly informed by the advertisers). When consumers believe that AI creates the advertising messages, they may perceive the advertisement creation process to be more accurate and objective (i.e., machine heuristic) but still experience a certain degree of eeriness, as advertisement creation is traditionally considered a human job. Both machine heuristic and perceived eeriness are expected to influence consumer appreciation of AI-created advertisements, but in different ways. Thus:
H1: Machine heuristic positively influences consumer appreciation of AI-created advertisements.
H2: Perceived eeriness of AI advertising negatively influences consumer appreciation of AI-created advertisements.
Perceived Objectivity of Advertisement Creation
The human–AI interaction literature also has suggested several factors that may strengthen or weaken machine heuristic and perceived eeriness when people interact with AI or AI-created content. One such factor is task objectivity. An objective task “involves facts that are quantifiable and measurable” and is not “open to interpretation and based on personal opinion or intuition” (Castelo, Bos, and Lehmann, 2019, p. 812). As discussed previously, machines or robots are believed to be good at accurately performing planned actions but not at handling emotional situations or scenarios that involve subjective judgments (Gray et al., 2007). An objective task, such as assembling cars or performing data analysis, thus, often is perceived to be more suitable for machines than a subjective task, such as creating artistic work or recognizing people’s emotions.
When it comes to AI or algorithms, the key building blocks of AI technology, existing research has identified similar patterns in people’s perceptions. Some scholars have found that people were more likely to trust and rely on algorithms to perform objective tasks, such as providing financial services (Castelo et al., 2019). People also preferred algorithms for tasks that involve numbers and could be objectively evaluated (Logg, Minson, and Moore, 2019). In digital journalism, AI has been found to be more suitable for creating news articles for more objective topics, such as finance, sports, and weather (Liu and Wei, 2018; Waddell, 2018). When an algorithm was used to predict how funny a joke was, however, people were less likely to make decisions based on the recommendations of the algorithm, because this was considered as a more subjective task (Yeomans, Shah, Mullainathan, and Kleinberg, 2019).
Accordingly, task objectivity is expected to play a role in influencing how consumers react to AI-created advertisements. There has long been debate on whether advertisement creation is subjective or objective. Those who advocate the subjective nature of advertising focus on the artistic elements of advertisements and the fact that consumers are not always rational (A. Lee, 2018). One of the most famous quotes for this viewpoint is from William Bernbach, who argued: “Advertising is fundamentally persuasion and persuasion happens to be not a science, but an art.” Given the rapid advancement of digital technologies, however, data and analytics have played an irreplaceable role in this field, greatly injecting scientific elements into advertising (Ignatius, 2013). Consumers have heard the buzzwords “big data” and “digital analytics” for a while, which may shape their impressions of the process of advertisement creation to some extent. If consumers believe advertisement creation is more objective, they would be expected to appreciate AI-created advertisements, because the machine heuristic informs them that AI could perform objective tasks more accurately. The belief that advertisement creation is objective also could inhibit the negative stereotypes of machines, which essentially result from the inability of AI to conduct subjective missions. Task objectivity, therefore, is expected to positively influence consumer appreciation of AI-created advertisements through its positive impact on machine heuristic as well as its negative impact on perceived eeriness of AI advertising. Thus:
H3: Perceived objectivity of advertisement creation (a) positively influences machine heuristic but (b) negatively influences perceived eeriness of AI advertising.
H4: Perceived objectivity of advertisement creation positively influences consumer appreciation of AI-created advertisements.
AI Human Likeness
A second antecedent covered in this conceptual model is AI human likeness. The research paradigm of “Computers Are Social Actors” (Nass and Moon, 2000; Sundar and Nass, 2000) has shown that using humanlike characters in designing machines and computer programs can effectively trigger users to attribute humanlike characteristics to computers (Nowak and Rauh, 2005). Focusing on human responses in the human-computer interaction process, the paradigm suggests that individuals treat robotic agents as social actors and apply social rules when interacting with them as if they were interacting with real human beings (Nass and Moon, 2000; Nass, Moon, and Green, 1997). Computers with gendered voice output, for instance, were found to elicit gender stereotypes, such that participants rated a female-voiced computer to be more informative about love and relationships than a male-voiced computer (Nass et al., 1997). People also demonstrated more involvement in their conversations with the computers that provided intimate self-disclosure during human-computer interaction (Moon, 2000).
Perceived human likeness also has emerged to be an important concept in the human–AI interaction literature. It may influence both aspects of the aforementioned machine stereotypes: machine heuristic and perceived eeriness. When people find AI programs to exhibit more humanlike features, they tend to evaluate AI as less machinelike. There appears to be a negative relationship between human likeness and machine likeness. The higher the humanness that people perceive in AI in general, the less likely they would be to generate a machine heuristic that attributes machinelike characteristics to AI programs.
On the other hand, increasing human likeness also may trigger perceived eeriness, which can be explained by both social identity theory and the theory of the “uncanny valley of mind.” Social identity theory posits that people establish their self-concept from meaningful social group relationships. This social identification prompts individuals to react negatively when the existence of an out-group member threatens the uniqueness of their in-group (Tajfel, 1982; Ferrari, Paladino, and Jetten, 2016). The increased human likeness of AI could be perceived as a challenge from the out-group (i.e., artifacts) to the distinctiveness of humans as an in-group. Especially in the current research context, advertisement creation may be considered by most people as a human job that could not easily be replaced by a machine, which may aggravate the perception of eeriness. Similarly, according to the “uncanny valley of mind” theory, when the human likeness of machines reaches a close-to-realistic stage that overlaps with what is perceived as human distinctiveness, such as complex cognitive abilities and emotional responses, individuals experience a sensation of eeriness and disturbing feelings (Stein and Ohler, 2017). Such uncanniness of technology stems from the violation of the overarching mental categories in human minds that assume the categorical perceptions of “human” and “nonhuman” (MacDorman and Ishiguro, 2006). The increase in the human likeness of machines thus may cause unpleasant cognitive dissonance (MacDorman and Ishiguro, 2006).
The aforementioned rationale leads to the anticipation that consumer perceptions of AI human likeness would exert similar influences in the context of AI advertising. In particular, a high level of human likeness of AI programs may not only reduce consumer perceptions of machine heuristic but also challenge their belief of humans being unique, thus triggering a high level of eeriness of AI advertising. Given the expected relationships of machine heuristic and perceived eeriness with AI-created advertisement appreciation, it is also anticipated that perceived AI human likeness will negatively influence consumer appreciation of AI-created advertisements. Thus:
H5: Perceived AI human likeness (a) negatively influences machine heuristic but (b) positively influences perceived eeriness of AI advertising.
H6: Perceived AI human likeness negatively influences consumer appreciation of AI-created advertisements.
Uneasiness with Robots
The last antecedent expected to influence how consumers react to AI-created advertisements is rooted in their previous experiences with robots. Because a majority of laypeople do not directly engage in the process of AI creation of media messages, they tend to make sense of this process based on similar entities with which they are more familiar, including machines, robots, and computers. Robots, in particular, are perceived as close to AI, because entertainment media, including movies and video games, have long portrayed robots as possessing high levels of intelligence (Besley and Shanahan, 2005; Sundar, Waddell, and Jung, 2016). According to cultivation theory, impressions of robots that are based on mass-media portrayals could form an illusion of reality used for making real-life judgments (Morgan and Shanahan, 2010). Research has confirmed that people’s previous experiences with robots could significantly influence the perceived usefulness of both companion and assistant robots (Sundar et al., 2016). Building on this, the human–AI interaction framework posits that a person’s previous experience of AI or AI-like entities may influence the psychological effects of AI-related heuristics (Sundar, 2020).
In the context of AI advertising, similar effects of consumers’ previous experiences with robots are expected to occur. This study focused on the concept of uneasiness with robots, which indicates the negative perceptions people form about robots or robotlike entities on the basis of their previous experiences. As discussed earlier, mass media content is a key source that contributes to people’s previous experiences of robots. Because fictional media content often depicts robots to be highly intelligent, people are likely to question what will happen if that becomes true in real life, thus leading to feelings of uneasiness with robots (Sundar et al., 2016). This uneasiness does not exclusively result from past media consumption. A qualitative study recorded many answers from participants about how their direct experiences with robots or AI-based technologies contribute to their feelings of uneasiness (Shank, Graves, Gott, Gamez, and Rodriguez, 2019). Regardless of its specific sources, uneasiness with robots reflects individuals’ accumulated experiences with robots or technologies that possess certain levels of intelligence. On the basis of the human–AI interaction literature, consumer uneasiness with robots is expected to affect how they react to AI-created advertisements. Because machine heuristic is a positive mental shortcut about AI and perceived eeriness of AI advertising is negative in nature, uneasiness with robots is predicted to inhibit consumer appreciation of AI-created advertisements through its negative influence on machine heuristic as well as its positive influence on perceived eeriness. (See Figure 1 for the model showing all the hypothesized relationships.) Thus:
H7: Uneasiness with robots (a) negatively influences machine heuristic but (b) positively influences perceived eeriness of AI advertising.
H8: Uneasiness with robots negatively influences consumer appreciation of AI-created advertisements.
METHOD
Survey Procedure
To test the proposed conceptual model, the authors conducted an online survey using a nationally representative sample of U.S. consumers on Qualtrics. An introduction was presented to explain the purpose of the study: understanding the opinions of consumers regarding the integration of AI in creating advertising messages. The authors believe that this presurvey introduction is important because it is likely that not all respondents are aware of AI-created advertisements. It is a fair assumption, however, that a majority of the respondents have heard about AI and/or had direct or indirect experiences with AI-powered applications. A 2017 global study found that 84 percent of respondents used AI-powered services (Pega, 2017). Although not everyone is aware of the involvement of AI in advertisement creation, it is believed that informing people of the existence of AI-created advertisements will make them apply their past knowledge of AI to form perceptions of those advertisements. Respondents were given the definition of AI in everyday language. The key variables in the conceptual model were then measured, including perceived objectivity of advertisement creation, perceived AI human likeness, machine heuristic, perceived eeriness of AI advertising, uneasiness with robots, and appreciation of AI-created advertisements. Demographic information was collected at the end of the survey.
Survey Respondents
A sample of 528 U.S. residents (N = 528) was recruited by Qualtrics. The sample was constructed to represent the U.S. population in terms of gender, age, race, and household income on the basis of U.S. Census data. From the Qualtrics panel, 1,525 individuals were randomly selected and invited to participate in the survey, and 640 respondents completed it, a 41.97 percent response rate. Respondents who spent less than half of the median completion time were automatically removed by Qualtrics for quality management, leaving 528 valid responses in the final sample. The mean age of respondents was 45.77 (SD = 17.16). (See detailed demographic information in Table 1.)
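The sampling figures above imply the reported rates directly; a quick check of the arithmetic (all numbers taken from the text):

```python
# Sample figures reported in the text
invited = 1525    # panel members invited
completed = 640   # respondents who completed the survey
valid = 528       # responses retained after the speeding check

response_rate = 100 * completed / invited
retention_rate = 100 * valid / completed

print(round(response_rate, 2))   # 41.97, matching the reported response rate
print(round(retention_rate, 2))  # 82.5 percent of completes were retained
```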
Measures
All the measures were self-reported on 7-point scales and modified to reflect the research topic of this study (See Table 2). Perceived objectivity of advertisement creation was measured using three items adapted from Mazurek (2019). Perceived AI human likeness was measured using six items adapted from Bellur and Sundar (2017). Machine heuristic was measured using four items adapted from Waddell (2018). Perceived eeriness of AI advertising was measured using three items adapted from Ho and MacDorman (2017). Uneasiness with robots was measured using four items adapted from Sundar et al. (2016). Appreciation of AI-created advertisements was measured using five items adapted from Van Rompay and Veltkamp (2014).
Data Analysis
To analyze the conceptual model and test the proposed hypotheses, the authors first examined the reliability, convergent validity, and discriminant validity of all the measures through a confirmatory factor analysis in AMOS. Then, the authors checked for common method bias, because all the variables were measured among the same group of participants. After that, the authors tested the hypothesized relationships among the latent variables through a structural equation modeling analysis in AMOS. Last, an additional analysis was conducted in SPSS.
RESULTS
Measurement Model
A first-order confirmatory factor analysis was conducted to test the fit of the measurement model for the latent variables. The initial model fit was not desirable, so the standardized regression weight was examined for each item. One item of advertisement creation objectivity (i.e., POAC_1), one item of AI human likeness (i.e., PAL_2), two items of uneasiness with robots (i.e., UER_3 and UER_4), and one item of AI advertisement appreciation (i.e., AAA_1) were deleted because of low regression weights. The revised confirmatory factor analysis model had desirable fit on the basis of the recommendations from Hair (2010) and from Hooper, Coughlan, and Mullen (2008). The fit indices for the revised measurement model indicated satisfactory fit for the data: χ2/df = 2.205, goodness-of-fit index (GFI) = 0.940, normed-fit index (NFI) = 0.949, comparative fit index (CFI) = 0.971, and root-mean-square error of approximation (RMSEA) = 0.048.
Standardized loading, Cronbach’s alpha, composite reliability, and average variance extracted estimates were used to assess reliability and convergent validity of the measures (See Table 3). Standardized loadings ranged from 0.682 to 0.911 and were all significant (Nunnally, 1978). Cronbach’s alpha ranged from 0.817 to 0.922, all exceeding the minimum threshold of 0.70 (Chin, 1998). Composite reliabilities ranged from 0.818 to 0.924, all exceeding the minimum threshold of 0.70 (Hair, 2010). The estimates of average variance extracted ranged from 0.588 to 0.736, all exceeding the minimum threshold of 0.50 (Fornell and Larcker, 1981). All constructs in the measurement model, therefore, demonstrated adequate reliability and convergent validity.
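The composite reliability and average-variance-extracted figures reported above follow directly from the standardized loadings. A minimal sketch of both formulas, using hypothetical loadings for a four-item construct (the paper’s actual item loadings appear in Table 3):

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each item's error variance is 1 - loading^2."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Hypothetical standardized loadings for a four-item construct
lam = [0.82, 0.85, 0.79, 0.88]
cr = composite_reliability(lam)
ave = average_variance_extracted(lam)
print(round(cr, 3), round(ave, 3))  # both above the 0.70 and 0.50 cutoffs
```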
To test discriminant validity, the square root of average variance extracted was calculated for each construct and was compared to its correlation coefficients with other constructs (See Table 4). The comparison results showed that, in Table 4, all the numbers on the diagonal (i.e., the square root of average variance extracted) were larger than the corresponding off-diagonal numbers (i.e., correlation coefficients), indicating adequate discriminant validity (Fornell and Larcker, 1981).
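The Fornell and Larcker comparison described above is mechanical: the square root of each construct’s AVE must exceed its correlations with every other construct. A sketch with hypothetical values (the study’s actual matrix is in Table 4):

```python
import numpy as np

def fornell_larcker_holds(ave, corr):
    """True if sqrt(AVE) of every construct exceeds the absolute value of
    its largest correlation with any other construct."""
    sqrt_ave = np.sqrt(np.asarray(ave, dtype=float))
    corr = np.asarray(corr, dtype=float)
    off_diag = np.abs(corr - np.diag(np.diag(corr)))  # zero out the diagonal
    return bool(np.all(sqrt_ave > off_diag.max(axis=1)))

# Hypothetical AVEs and construct correlations for three constructs
ave = [0.62, 0.70, 0.66]
corr = [[1.00, 0.45, 0.38],
        [0.45, 1.00, 0.52],
        [0.38, 0.52, 1.00]]
print(fornell_larcker_holds(ave, corr))  # True: discriminant validity holds
```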
Harman’s single-factor test (see Podsakoff, MacKenzie, Lee, and Podsakoff, 2003) was conducted to check common method bias. Specifically, the authors ran an exploratory factor analysis, and the unrotated factor solution revealed that no single factor could explain a majority of the variance (the first factor only explained 33.538 percent of the total variance). This result indicated that common method bias was not an issue in the current data.
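Harman’s single-factor test asks whether one unrotated factor dominates the variance. A rough numpy approximation, substituting the first principal component of the item correlation matrix for the first unrotated factor and using randomly generated (purely hypothetical) responses:

```python
import numpy as np

def first_factor_variance_share(X):
    """Share of total variance captured by the largest eigenvalue of the
    item correlation matrix (a principal-component stand-in for the
    first unrotated factor in Harman's test)."""
    R = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    eigvals = np.linalg.eigvalsh(R)  # eigenvalues in ascending order
    return eigvals[-1] / eigvals.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # 200 hypothetical respondents, 10 items
share = first_factor_variance_share(X)
print(share < 0.50)  # True: no single factor explains a majority of variance
```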
Structural Equation Model
On the basis of the validated measurement model, the proposed conceptual model was tested using structural equation modeling. The following indices were used to estimate the model fit: χ2/df < 5.00 (Hooper et al., 2008), GFI > 0.90 (Hooper et al., 2008), NFI > 0.90 (Bentler, 1992), CFI > 0.90 (Bentler, 1992), and RMSEA < 0.08 (Hu and Bentler, 1999).
The model fit indices for the conceptual model were as follows: χ2/df = 3.311, GFI = 0.908, NFI = 0.921, CFI = 0.943, and RMSEA = 0.066. All model-fit indices met the suggested acceptance levels, demonstrating that the model fit the current data well. The path coefficients were then examined to test the hypotheses.
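The reported indices can be checked mechanically against the cutoffs cited above (note that χ2/df and RMSEA must fall below their cutoffs, while GFI, NFI, and CFI must rise above theirs):

```python
# Structural-model fit indices reported in the text
fit = {"chi2/df": 3.311, "GFI": 0.908, "NFI": 0.921, "CFI": 0.943, "RMSEA": 0.066}

# Acceptance rules cited in the text
cutoffs = {
    "chi2/df": lambda v: v < 5.00,  # Hooper et al. (2008)
    "GFI":     lambda v: v > 0.90,  # Hooper et al. (2008)
    "NFI":     lambda v: v > 0.90,  # Bentler (1992)
    "CFI":     lambda v: v > 0.90,  # Bentler (1992)
    "RMSEA":   lambda v: v < 0.08,  # Hu and Bentler (1999)
}

acceptable = {name: cutoffs[name](value) for name, value in fit.items()}
print(all(acceptable.values()))  # True: every index meets its criterion
```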
Path Analysis
The path analysis results indicated that machine heuristic positively influenced consumer appreciation of AI-created advertisements; β = 0.494, SE = 0.052, p < .001 (See Figure 2). Thus, H1 was supported. Perceived eeriness of AI advertising negatively influenced consumer appreciation of AI-created advertisements; β = –0.108, SE = 0.039, p = .012. Thus, H2 was supported. Perceived objectivity of advertisement creation positively influenced machine heuristic; β = 0.560, SE = 0.045, p < .001. Thus, H3a was supported. Perceived objectivity of advertisement creation negatively influenced perceived eeriness of AI advertising; β = –0.305, SE = 0.044, p < .001. Thus, H3b was supported. Perceived objectivity of advertisement creation positively influenced consumer appreciation of AI-created advertisements; β = 0.264, SE = 0.045, p < .001. Thus, H4 was supported. Perceived AI human likeness did not influence machine heuristic; β = 0.070, SE = 0.028, p = .087. Thus, H5a was not supported. Perceived AI human likeness did not influence perceived eeriness of AI advertising; β = –0.016, SE = 0.031, p = .724. Thus, H5b was not supported. Perceived AI human likeness did not influence consumer appreciation of AI-created advertisements; β = –0.042, SE = 0.023, p = .266. Thus, H6 was not supported. Uneasiness with robots, contrary to the hypothesized direction, positively influenced machine heuristic; β = 0.189, SE = 0.035, p < .001. Thus, H7a was not supported. Uneasiness with robots positively influenced perceived eeriness of AI advertising; β = 0.198, SE = 0.039, p < .001. Thus, H7b was supported. Uneasiness with robots did not influence consumer appreciation of AI-created advertisements; β = 0.045, SE = 0.030, p = .286. Thus, H8 was not supported.
Additionally, the results of the path analysis indicated that the three antecedents (i.e., perceived objectivity of advertisement creation, perceived AI human likeness, and uneasiness with robots) explained 35.4 percent of the variance of machine heuristic and 13.2 percent of the variance of perceived eeriness of AI advertising. These five variables explained 51.1 percent of the variance of consumers’ appreciation of AI-created advertisements.
Additional Analysis on Demographics
Finally, a multiple linear regression analysis was conducted to test the association between demographics and respondent appreciation of AI-created advertisements to obtain some additional insights from the data. The results indicated that age was negatively associated with AI advertisement appreciation; β = –0.01, t = –2.95, p = .003, meaning that younger respondents tended to appreciate AI-created advertisements more than older respondents. Household income was positively associated with AI advertisement appreciation; β = 0.13, t = 4.59, p < .001, indicating that respondents with a higher household income tended to appreciate AI-created advertisements more than respondents with a lower household income.
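A regression of this kind can be illustrated with ordinary least squares on simulated data. Everything below is hypothetical: the simulated slopes are merely seeded to mirror the direction of the reported effects, not the study’s actual estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(18, 80, n)                    # hypothetical ages
income = rng.integers(1, 10, n).astype(float)   # hypothetical income brackets
# Simulated appreciation: younger and higher-income respondents score higher
appreciation = 4.0 - 0.01 * age + 0.13 * income + rng.normal(0.0, 1.0, n)

# Ordinary least squares with an intercept column in the design matrix
X = np.column_stack([np.ones(n), age, income])
coef, *_ = np.linalg.lstsq(X, appreciation, rcond=None)
print(coef[1] < 0, coef[2] > 0)  # negative age slope, positive income slope
```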
DISCUSSION
Although some scholars have noticed the unprecedented impact of AI on the advertising industry, they have mainly examined this technology from the perspective of the advertiser. To the best knowledge of the authors, this study is the first one that investigates what influences consumers’ general appreciation of AI-created advertisements. A model was built based on responses from a nationally representative sample of U.S. consumers. The findings indicated that consumers’ overall perceptions of advertisement creation as an objective process and feelings of unease with robots are important antecedents that determine their appreciation of AI-created advertisements through the psychological mechanisms of machine heuristic and perceived eeriness. These findings are believed to provide both theoretical and practical insights that help deepen understanding of AI advertising.
Theoretical Implications
One of the primary theoretical contributions of this study is to investigate AI advertising from the consumer perspective. This fills the gap in the current advertising literature, which has predominantly examined AI in advertising from the perspective of advertisers or advertising professionals. Although some scholars have proposed several directions for future research on AI advertising (Li, 2019), the consumer perspective has not been sufficiently emphasized. Consumers are the final judges of advertising effectiveness; therefore, their overall appreciation of AI-created advertisements should be a key consideration in this body of scholarship.
This study also contributes to the human–AI interaction literature by integrating different theoretical frameworks (e.g., the MAIN model, the “Computers Are Social Actors” paradigm, and social identity theory) and by demonstrating the psychological impact of AI source orientation in the advertising domain, which has not attracted sufficient attention from human–AI interaction scholars. The findings that both machine heuristic and perceived eeriness influence consumer appreciation of AI-created advertisements underscore the important roles of both positive and negative stereotypes of machines in determining consumer reactions to AI and AI-created content, thus supporting the human–AI interaction model (see Sundar, 2020).
Another contribution of this study is to substantiate the prominence of task objectivity in understanding consumer reactions to AI-created advertisements. To the authors’ best knowledge, the extent to which consumers perceive the process of advertisement creation to be objective has not been widely examined in the extant advertising literature. This is probably because advertisement creation traditionally is associated with the idea of creativity (Dahlén, Rosengren, and Törn, 2008). Although numbers and data are involved in components of the advertising process such as consumer research and media buying, these components typically are inaccessible to consumers. The adoption of AI technology in advertising, however, makes task objectivity a factor that cannot be ignored when examining consumer responses to advertisements. The current study helps clarify why task objectivity is important to AI advertising. Specifically, consumer beliefs that advertisement creation is objective benefit their appreciation of AI-created advertisements by facilitating the positive effects of machine heuristic and inhibiting the negative effects of perceived eeriness. In addition to these indirect effects, the confirmed research model also indicated that task objectivity had a significant direct effect on AI advertisement appreciation, revealing the possibility of other unexplored mediators (see Zhao, Lynch, and Chen, 2010, for the theoretical implications of significant direct effects). Theory building for AI advertising should take task objectivity and its underlying mechanisms into account.
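The mediation logic in the paragraph above (indirect effects through a mediator such as machine heuristic, plus a residual direct effect) can be illustrated with a percentile-bootstrap test of an indirect effect in the style recommended by Zhao, Lynch, and Chen (2010). This is a hypothetical sketch on synthetic data, not the authors’ analysis; the variable names and effect sizes are assumptions:

```python
# Percentile-bootstrap test of an indirect effect a*b, with
# X = perceived task objectivity, M = machine heuristic,
# Y = appreciation of AI-created ads. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n = 800
x = rng.normal(0, 1, n)                      # perceived task objectivity
m = 0.4 * x + rng.normal(0, 1, n)            # mediator: machine heuristic
y = 0.3 * m + 0.2 * x + rng.normal(0, 1, n)  # outcome, with direct path c'

def effects(x, m, y):
    """Return (indirect effect a*b, direct effect c')."""
    a = np.polyfit(x, m, 1)[0]               # a path: X -> M
    X = np.column_stack([np.ones(len(x)), m, x])
    b, c_prime = np.linalg.lstsq(X, y, rcond=None)[0][1:]  # b path and c'
    return a * b, c_prime

# Bootstrap the indirect effect over 1,000 resamples
boot = np.array([effects(x[i], m[i], y[i])
                 for i in (rng.integers(0, n, n) for _ in range(1000))])
lo, hi = np.percentile(boot[:, 0], [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

A confidence interval for a × b that excludes zero, together with a significant direct path c′, corresponds to the “complementary mediation” pattern Zhao and colleagues describe, which parallels the significant direct effect of task objectivity reported here.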
An additional antecedent confirmed to influence consumers’ overall appreciation of AI-created advertisements is the feeling of uneasiness with robots. This antecedent represents individuals’ previous (direct or indirect) experiences with entities similar to AI. Contrary to the authors’ prediction, however, consumers’ uneasiness with robots was found not only to increase perceived eeriness but also to facilitate machine heuristic. These findings are probably due to the fact that uneasiness indicates one’s unfamiliarity with technologies that showcase some level of intelligence, such as AI and robots. In other words, people feel uneasy with these technologies because they do not possess the computational and engineering knowledge that would help them make sense of the technologies. The more uneasy one feels with robots, therefore, the more likely one is to rely on stereotypes of robots or machines when interacting with AI. Because stereotypes of machines contain both positive and negative aspects, as indicated by the human–AI interaction literature, it is understandable that uneasiness with robots positively contributes to both machine heuristic and perceived eeriness. The current research model further indicated that the direct effect of uneasiness with robots on AI advertisement appreciation was not significant, suggesting that machine heuristic and perceived eeriness explain most of the variance in this relationship and that other mediators are unlikely. As a double-edged sword, uneasiness with robots should be given extra attention in future research on AI advertising.
Last, it is also worth noting that perceived human likeness of AI seems to exert less impact on consumer reactions to AI-created advertisements than predicted. Although previous human–AI interaction research reported significant influences of human likeness on consumer responses to AI, most studies examined this construct from the perspective of anthropomorphism when people have direct interactions with AI programs, such as chatbots. Anthropomorphism is the process whereby people attribute human characteristics—including physical features, such as a humanlike face or body, and humanlike capability, such as rational thinking and conscious feeling—to nonhumans (Waytz, Cacioppo, and Epley, 2010; Waytz, Heafner, and Epley, 2014). Anthropomorphic design cues, such as giving machines human names, for instance, have been documented to influence user perceptions of chatbots as well as the companies behind the chatbots (Araujo, 2018). The insignificant effects of human likeness in this study could be attributed to the specific research context: AI advertising. Unlike conversing with a chatbot, consumers seldom interact with the AI creators of advertisements. Because of the lack of consumer interaction with AI, it is understandable that human likeness plays a less significant role in the context of AI advertising compared with its role in other human–AI interaction scenarios. The findings of this study on human likeness suggest that human–AI interaction researchers should carefully identify the variables that fit their unique research contexts.
Practical Implications
In addition to the aforementioned theoretical contributions, the findings of this study are meaningful to advertising professionals who plan to invest in AI technology for creating advertisements. First, this study identified machine heuristic and perceived eeriness as two underlying factors that contribute to consumers’ overall appreciation of AI-created advertisements. These findings are useful to practitioners, who may be able to strengthen machine heuristic or weaken perceived eeriness when delivering AI-created advertisements to consumers. One way of doing so is taking advantage of the media context. A television show illustrating how AI and related technologies help the FBI solve murders may help trigger the audience’s machine heuristic, whereas a documentary about the arts and humanity may lead people to discount the creativity of AI. The selection of media context, therefore, should be a concern of advertising professionals when presenting advertisements created by AI.
Second, this study confirmed that once consumers believe advertisement creation to be a more objective task, they tend to appreciate AI-created advertisements to a greater extent. Because consumer perceptions of task objectivity are malleable and could be shaped by external factors (Castelo et al., 2019), this finding provides professionals with a tactic for increasing consumer appreciation and acceptance of AI-created advertisements: emphasizing the objective process of advertisement creation. Of course, consumer perceptions of advertisement creation are formed longitudinally and are influenced by many factors. Whereas a single advertiser may not be able to change consumer perceptions of advertisement creation in general, it is possible to shape how consumers perceive advertisement creation in a specific campaign. In the Lexus video commercial that was scripted entirely using AI, for example, the words “written by artificial intelligence” appeared at the beginning of the commercial.
A more effective way could be to disclose more about how AI objectively creates the advertisement, such as stating “This commercial was created by AI after analyzing big data in the automobile industry” or “This commercial was created by AI based on its systematic analysis of 10 years of automobile advertisements.” Such disclosures present a justification for using AI in advertisement creation (handling big data is beyond human capacity, for example) while enhancing consumers’ perceived objectivity of the advertisement creation process. In the digital context, in which AI currently is used to analyze consumer browsing behaviors and provide customized advertising messages, disclosing how AI works may not only address consumer privacy concerns but also increase the perceived objectivity of the advertising process. This could be achieved with a simple disclosure such as “This advertisement was created by AI based on digital trace data on this platform.”
Third, as mentioned earlier, consumers’ existing uneasiness with robots is a double-edged sword in shaping their reactions to AI-created advertisements. It could both benefit their appreciation of AI-created advertisements through the activation of the positive machine heuristic and jeopardize such appreciation by making them perceive AI advertising as eerie. For advertising professionals who want to adopt AI for advertisement creation, it is therefore important to understand how uneasy their target consumers may feel about robots or technologies with certain levels of intelligence. Because a large part of this uneasiness comes from the consumption of mass media content (Sundar et al., 2016), consumer segmentation based on previous media consumption may provide some solutions. Age also could be a segmentation criterion. Because younger people, compared with older people, may feel less uneasy with robots or related technologies, advertisers who target different age groups should pay attention to this factor when running AI-created advertisements.
Last, the finding that AI human likeness is not a factor that determines the reactions of consumers to AI-created advertisements also provides some useful insights to professionals. There seems to be a trend toward making AI programs humanlike in many domains, such as AI robots with human appearances (Cutting Edge, 2019), chatbots with human voices (Edwards, Edwards, Stoll, Lin, and Massey, 2019), and AI programs that behave like human beings (Fachechi, Agliari, and Barra, 2019). Although increased human likeness may encourage people to accept AI when they are actually interacting with AI applications, this study suggests that it is not a major concern when people just deal with the media content (i.e., advertisements) created by AI. On the basis of this finding, advertisers do not have to invest too much in personifying their AI programs, which could help manage the advertising budget.
Limitations and Future Research
Although this study is believed to provide meaningful theoretical and practical contributions, there is certainly room for future investigation. First, this study focused on overall perceptions of AI-created advertisements. Future research may explore this topic from a different angle by focusing on consumer responses to specific AI-created advertisements with the help of experimental studies. This line of research may take different media platforms and message formats into consideration. Other advertisement-specific variables, such as brand attitude and purchase likelihood, also could be tested. Future research also may compare human-created advertisements with AI-created advertisements and explore what factors drive consumers’ different responses. Second, the authors explicitly informed respondents that AI would be the focus of this study. This demand characteristic of the survey introduction may have biased respondents’ answers to some extent. Future research could use a more neutral introduction. Third, this study was conducted with a nationally representative sample, which is not a limitation in itself but opens the door for future research focused on specific groups. The additional analysis of demographics revealed that age and income are potential factors influencing consumer reactions to AI-created advertisements. Future research could, on the basis of these findings, explore why younger individuals and people with higher incomes find AI-created advertisements more acceptable. Potential factors may include media consumption and cultivation as well as access to AI-powered services or applications. Respondents’ education level was not measured in this study; future research may obtain additional insight into how education could affect consumer responses to AI in advertising. Fourth, the confirmed relationships in this conceptual model also suggest several possible directions for future research.
Experiments, for example, could be conducted in which participant-perceived objectivity of the advertisement creation process is manipulated, to help establish the causal relationship between task objectivity and responses to AI-created advertisements. Fifth, factors not contained in the current model also may be considered in future research. A key feature of AI-created advertisements, for example, is automation, which means that consumers lose some control over receiving advertising messages. Future research therefore may study the role of consumers’ locus of control in shaping their responses to AI-created and AI-distributed advertisements.
ABOUT THE AUTHORS
Linwan Wu is an assistant professor in the School of Journalism and Mass Communications at the University of South Carolina. His research focuses on advertising psychology and communication technology. Wu’s work has been published in the Journal of Advertising, Journal of Advertising Research, and International Journal of Advertising, among other journals.
Taylor Jing Wen is an assistant professor in the School of Journalism and Mass Communications at the University of South Carolina. She conducts research in consumer psychology and media effects in the context of marketing, health, and risk communications. Her work can be found in the International Journal of Advertising, Journal of Current Issues & Research in Advertising, and Journal of Interactive Advertising, among others.
- Received May 13, 2020.
- Received (in revised form) October 18, 2020.
- Accepted December 7, 2020.
- Copyright © 2021 ARF. All rights reserved.