Artificial Intelligence

What Charlize Theron and Mr Bean taught us about AI in marketing

Marketers need to take a close look at artificial intelligence (AI). ChatGPT had a rough ride in mid-February 2023 when the language model produced false information on several topics. A pre-recorded demonstration of the technology’s integration with Microsoft Bing included several mistaken statements about products and about the financial performance of The Gap and Lululemon. Shares of Microsoft fell 5%, from $272 on 14 February to $258 on 17 February. That’s a $100 billion fall in market capitalisation.
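As a rough sanity check on those figures, here is a minimal calculation. The share prices come from above; the share count of roughly 7.4 billion is my assumption about Microsoft’s shares outstanding in early 2023, so the result is illustrative rather than exact.

    # Back-of-the-envelope check of the figures quoted above (Python).
    # Prices come from the article; the share count is an assumption.
    price_14_feb = 272.0          # USD, 14 February 2023
    price_17_feb = 258.0          # USD, 17 February 2023
    shares_outstanding = 7.4e9    # assumption: ~7.4 billion shares

    fall_pct = (price_14_feb - price_17_feb) / price_14_feb * 100
    cap_loss = (price_14_feb - price_17_feb) * shares_outstanding

    print(f"Fall in share price: {fall_pct:.1f}%")                 # ~5.1%
    print(f"Implied loss of market cap: ${cap_loss / 1e9:.0f}bn")  # ~$104bn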

Because my career started in AI, as a machine learning and data mining researcher at City University, I wasn’t surprised to see inflated expectations of AI dashed by the limitations of the technology. Humans and machines can both get things wrong. Language models are better at producing outputs that look right than at producing outputs that are true. While Microsoft didn’t help itself by failing to fact-check a pre-recorded demo, the current state of the technology means that AI language models can easily produce false statements that look credible.

AI opportunities

That said, AI has many possibilities.

  • Video: The deep fake video of actor and model Charlize Theron with Mr Bean’s face shows the profound power of AI to create realistic content.
  • Images: Tools like DALL-E, and GANs (generative adversarial networks) more widely, can produce both realistic and fantastical images that can fool humans; a minimal sketch of the adversarial idea follows this list.
  • AI Influencers: The influencer world is also changing. Influencers are being created that are AIs: Lil Miquela came out a few years ago as non-human. ‘Her’ content had been highly convincing. She has launched a music career, and real humans and big brands like Givenchy have collaborated with her. As an influencer, she’s worth over £100m. And, unlike a human, you can try to control an AI. They have no embarrassing tweets or old tapes waiting to be forgotten: or do they? Lil Miquela was hacked by a Trump fan, with remarkable consequences.
  • Influential AI: However, influential AI will have a bigger impact than AI influencers. In the last US elections, the NGO Represent.us designed a campaign to encourage voting by keying into the threat of outside interference. Its deep fake videos of dictators fooled some viewers and communicated its message to them effectively. Deep fake videos have certainly influenced how voters reflect on elections. Their impact continues to grow, and so do the ethical concerns.
  • Metaverse: We’re similarly in the early days of the Metaverse. Established consumer brands have learnt from gaming leaders and are creating powerful emotional experiences, like Ariana Grande’s 2021 Fortnite concert, and valuable solutions like NFTs. In consumer markets, Nike has been an innovator in metaverse strategies, building out virtual spaces, acquiring a digital sneaker business and experimenting to reach Generation Alpha, the next generation of consumers. Coca-Cola’s virtual ‘Starlight’ flavour and virtual products from Burberry and Louis Vuitton also grab headlines.
  • B2B AI: There are also advanced B2B options. I and others have been writing about the impact of programmatic advertising for a decade. Siemens’ Digital Twin systems allow businesses to replicate complex physical systems, visualise them, anticipate user challenges and workflow issues, and test their value. AI is already well deployed in B2B marketing and sales settings, and personalisation is a huge opportunity.
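The sketch below illustrates the adversarial idea behind the GANs mentioned above: a generator learns to produce samples that a discriminator cannot tell apart from real data. It is a toy example on one-dimensional data using PyTorch, a sketch of the general technique rather than how DALL-E or any production image generator is actually built.

    # Minimal GAN sketch (Python/PyTorch): a 1-D toy, for illustration only.
    import torch
    import torch.nn as nn

    # Generator: turns random noise into fake samples.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    # Discriminator: scores how "real" a sample looks (0 = fake, 1 = real).
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
        fake = G(torch.randn(64, 8))             # fake data from the generator

        # Train the discriminator to tell real from fake.
        opt_d.zero_grad()
        d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
                 loss_fn(D(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        opt_d.step()

        # Train the generator to fool the discriminator.
        opt_g.zero_grad()
        g_loss = loss_fn(D(fake), torch.ones(64, 1))
        g_loss.backward()
        opt_g.step()

    # After training, generated samples should cluster near the real mean (3.0).
    print(G(torch.randn(1000, 8)).mean().item())

The same adversarial push and pull, scaled up enormously, is what lets image generators produce content that fools human eyes.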

AI risks

Despite all these opportunities, the brand risks of AI are also massive.

  • Deception: Brands are particularly at risk from impersonators intentionally producing fakes. Deep fakes are powerful and will be an unavoidable temptation for many marketers. Because some AI outputs (such as NFTs) can be traced, brands can also pursue synthetic imitators: in February 2023, Hermès won a New York court case against an artist who sold NFTs modelled on its Birkin handbag. Unintentional deception is almost impossible to avoid when generative models are used, because AIs mistakenly create false outputs (as ChatGPT did).
  • Emotion: A brand’s AI use can shock the audience, and AIs can act in disturbing and unexpected ways. Buyers often prefer humans.
  • Centralisation: Web 2.0 empowered users to communicate peer-to-peer with other users; what Web 3.0 aimed to add was a shift in ownership. Web 3.0 promised decentralisation, so that users and technologies would regulate it. However, Meta and other major vendors clearly aim to own the infrastructure.
  • Legal liability: Many professionals are rushing into AI-generated content without understanding the risks. One is that generative AIs necessarily reuse ideas and images created by humans, and intellectual property rights are already very complex. While few countries, if any, have specific laws on the use of AI in marketing, advertisers do face many legal requirements to be truthful. Furthermore, AI can make decisions faster, but those decisions can be wrong. More expansively, there are major human rights implications of AI use.

Industry analysts are overwhelmingly bullish about the possibilities for AI, and rightly so. However, while AI accelerates business and reduces many risks, it also produces new ones.

Written by Duncan Chapple

LinkedIn

Get in touch to work with a world-class team of B2B tech marketers

Improve your industry reputation and influence, grow your customer base and drive investment through transformative integrated marketing.