As marketers start using ChatGPT, Google’s Bard, Microsoft’s Bing Chat, Meta AI, or their own large language models (LLMs), they need to worry about “hallucinations” and how to prevent them.
IBM provides the following definition for hallucinations: “AI hallucination is a phenomenon in which a large language model, often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating results that are meaningless or completely inaccurate.
“Typically, if a user makes a request to a generative AI tool, they want an output that addresses the message appropriately (i.e., a correct answer to a question). However, sometimes AI algorithms produce outputs that are not based on training data, are decoded incorrectly by the transformer, or do not follow any identifiable pattern. In other words, it ‘hallucinates’ the response.”
Suresh Venkatasubramanian, a Brown University professor who helped co-author the White House’s Blueprint for an AI Bill of Rights, told CNN that the problem is that LLMs are simply trained to “produce a plausible-sounding response” to user prompts.
“So in that sense, any answer that sounds plausible, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces. There’s no knowledge of truth there.”
He said a better behavioral analogy than hallucinating or lying, both of which carry connotations of wrongdoing or bad intent, would be to compare these computer outputs to the way his youngest son told stories at age four.
“You just have to say, ‘So what happened?’ and he would continue to produce more stories,” Venkatasubramanian added. “And he would go on and on.”
Frequency of hallucinations
If hallucinations were “black swan” events that rarely occurred, they would be something marketers should be aware of but not necessarily watch closely.
However, research by Vectara found that chatbots invent information in at least 3% of interactions, and in as many as 27%, despite measures taken to prevent such fabrications.
“We gave the system 10 to 20 facts and asked for a summary of those facts,” Amr Awadallah, Vectara’s chief executive and a former Google executive, said in a post on the Investis Digital blog. “It’s a fundamental problem that the system can still introduce errors.”
According to the researchers, hallucination rates may be even higher when chatbots perform tasks beyond simple summarization.
What should marketers do?
Despite the potential challenges posed by hallucinations, generative AI offers many advantages. To reduce the possibility of hallucinations, we recommend:
Use generative AI only as a starting point for writing: Generative AI is a tool, not a replacement for what you do as a marketer. Use it as a starting point, then develop prompts that address the questions you need answered to complete your work. Make sure your content always aligns with your brand voice.
Check the content the LLM generates: Peer review and teamwork are essential.
Check the sources: LLMs are designed to work with large volumes of information, but some sources may not be credible.
Use LLMs tactically: Run your drafts through generative AI to check for missing information (a minimal sketch of this kind of review pass follows this list). If generative AI suggests something, verify it first, not necessarily because of the odds of a hallucination, but because good marketers scrutinize their work, as mentioned above.
Monitor the evolution: Keep up with the latest developments in AI to continually improve the quality of your results and to stay on top of new capabilities and emerging issues, with hallucinations or anything else.
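To make the tactical-use point concrete, here is a minimal sketch of that kind of review pass. It assumes the OpenAI Python client and an illustrative model name; any chat-capable LLM and any prompt wording would do, and everything the model flags still requires human verification.

```python
# Minimal sketch: ask an LLM to flag gaps and claims worth verifying in a draft.
# Assumes the OpenAI Python client (pip install openai) and that OPENAI_API_KEY
# is set in the environment; the model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

draft = (
    "Our spring campaign lifted repeat purchases by 18% across all regions, "
    "outperforming every competitor in the loyalty category."
)

prompt = (
    "Review the marketing draft below. List (1) information that appears to be "
    "missing and (2) factual claims a human should verify before publishing. "
    "Do not rewrite the draft.\n\nDraft:\n" + draft
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whatever model your team has approved
    messages=[{"role": "user", "content": prompt}],
)

# The output is a checklist for a human reviewer, not copy to publish as-is.
print(response.choices[0].message.content)
```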
Benefits of hallucinations?
However, as dangerous as they can be, hallucinations may have some value, according to FiscalNote’s Tim Hwang.
“LLMs are bad at everything we expect computers to be good at,” Hwang said in a post on the Brandtimes blog. “And LLMs are good at everything we expect computers to be bad at.”
He further explained that using AI as a search tool isn’t really a great idea, but “storytelling, creativity, aesthetics, these are things that the technology is fundamentally very, very good at.”
Because brand identity is basically what people think of a brand, hallucinations should be considered a feature, not a bug, according to Hwang, who added that it’s possible to ask the AI to hallucinate its own interface.
A marketer can therefore provide the LLM with an arbitrary set of objects and ask it to do things that couldn’t normally be measured, or would be expensive to measure by other means, effectively causing the LLM to hallucinate.
One example from the blog post is asking the AI to assign objects a score based on the degree to which they align with the brand, then giving the AI a score and asking which consumers are more likely to become lifetime customers of the brand based on that score.
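Here is a minimal sketch of that kind of “useful hallucination”: handing an LLM an arbitrary list of items and asking it to invent brand-alignment scores. The OpenAI client, model name, items and 0–100 scale are assumptions for illustration; the scores are fabricated by design and only useful as creative input, not as measurement.

```python
# Minimal sketch: ask an LLM to "hallucinate" brand-alignment scores for
# arbitrary objects, in the spirit of Hwang's example. Assumes the OpenAI
# Python client; the model name, items, and 0-100 scale are illustrative.
import json

from openai import OpenAI

client = OpenAI()

brand = "a playful, eco-conscious sneaker brand"
items = ["vinyl records", "cold brew coffee", "spreadsheet software", "rock climbing"]

prompt = (
    f"For {brand}, score each item from 0 to 100 for how strongly it aligns "
    "with the brand and give a one-line reason. Reply as a JSON object mapping "
    'each item to {"score": <int>, "reason": <string>}.\n'
    + "\n".join(f"- {item}" for item in items)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # request parseable JSON
)

# The scores are invented by design; treat them as creative input, not measurement.
scores = json.loads(response.choices[0].message.content)
for item, details in scores.items():
    print(f'{item}: {details["score"]} ({details["reason"]})')
```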
“Hallucinations are really, in some ways, the core element of what we want from these technologies,” Hwang said. “I think instead of rejecting them, instead of fearing them, I think it’s manipulating these hallucinations that will create the most benefit for people in the advertising and marketing space.”
Emulation of consumer perspectives
A recent application of hallucinations is exemplified by the “Insights Machine,” a platform that allows brands to create AI personas based on detailed target-audience demographics. These AI personas interact as genuine individuals, offering diverse responses and viewpoints.
While the AI personas can sometimes give unexpected or even startling answers, they primarily serve as catalysts for creativity and inspiration among marketers. The onus is on humans to interpret and use these responses, underscoring the fundamental role hallucinations play in these transformative technologies.
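For a rough sense of how such an audience persona might be wired up, here is a minimal sketch: a system prompt built from demographic details, with the persona asked to react to a campaign concept. The Insights Machine’s actual implementation is not public; the client, model and persona fields below are assumptions for illustration only.

```python
# Minimal sketch: simulate a target-audience persona reacting to a campaign idea.
# An illustration of the general technique only, not the Insights Machine's
# actual implementation. Assumes the OpenAI Python client; all fields are made up.
from openai import OpenAI

client = OpenAI()

persona = {
    "age": 34,
    "location": "Manchester, UK",
    "occupation": "primary school teacher",
    "values": "budget-conscious, skeptical of ads, loyal to familiar brands",
}

system_prompt = (
    "You are roleplaying a consumer persona. Stay in character and answer as "
    f"this person would, including doubts and objections. Profile: {persona}"
)

concept = "A subscription box of premium running gear, billed quarterly."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"How do you react to this offer? {concept}"},
    ],
)

# Treat the reply as creative stimulus, not as market research data.
print(response.choices[0].message.content)
```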
Even with AI at the center of marketing, it remains subject to machine error, and only humans can catch that fallibility, a perpetual irony in the age of AI marketing.
Pini Yakuel, co-founder and CEO of Optimove, wrote this article.