Like many of us in our respective fields, I’ve been spending time researching and conceptualizing how AI will affect crisis communication and crisis leadership.
Generative AI has already fundamentally altered how the world operates, and the technology is still in its infancy. As AI develops, some predict it will soon become a billion times smarter than the smartest human being ever to live. As Mo Gawdat puts it in his book Scary Smart, the gap will be like Albert Einstein's intelligence compared to a fly's. And we humans will be the fly.
We’ve only begun to test the possibilities AI offers and we’ve yet to grapple with the challenges it presents, both now and in the future.
However, make no mistake: AI absolutely will affect crisis communication — and crisis in general, for that matter.
Recently, I read a fascinating article by my friend and the queen of PR herself, Gini Dietrich. In this article, Gini discusses how AI stands to affect the PR industry by powering data-driven strategies, enhancing creativity, and providing new opportunities for storytelling.
And then she said something that got me thinking even further…
In explaining what AI can and will be able to offer in the area of crisis prediction and prevention, she suggested that we will be able to use AI as a tool to get ahead of one of the most challenging aspects of crisis communication: emotional escalation. She paints this picture:
“Take the This Is Us and Crock-Pot example. […] When the house burned down because the Crockpot malfunctioned, the Newell Brands comms team was unprepared for the onslaught of people overreacting to a TV character dying and throwing out their Crock-Pots.
Imagine if they’d had AI working on it back then. Before the episode had ended, an algorithm would have alerted them to what people were about to do, it would have drafted an optimal response, and even sketched out a media strategy. All while they were still crying over Jack’s death.”
This was the best example she could have used to make my brain go 🤯 and here’s why…
Could AI predict the emotional escalation of a crisis… before it even begins to happen?
This is the question that threw me into a tailspin as I read Gini's article. One reason the Crisis Ready® Program is so crucial is that a crisis can spring up at any time, even if your organization believes it's doing everything right.
Could AI remove the risk associated with that unpredictability?
Could AI anticipate and predict what is often seen as the irrational and unpredictable escalation of human emotion in relation to an event or an incident?
Rather than reacting to an unexpected crisis after the fact, could AI allow your communications team to craft its response well before the chaos erupts — even when that “chaos” wasn’t meant to be, nor was it anticipated to be, chaotic in the first place?
Had Gini not used the This Is Us episode as her example, I don't think I would have pondered this as intensely as I have. I'm certainly grateful that she did!
What makes this example so inspired is that, when it happened, it truly was as unpredictable and unforeseeable as something can get.
Most of the time, by practicing empathetic leadership, we can anticipate the emotions that people might feel in the event of an issue or a crisis. The This Is Us Crock-Pot scenario was a rare case where failing to foresee the audience's impulsive reaction was completely understandable.
Here’s what happened: (also, spoiler alert!)
- In the episode, Jack Pearson, a beloved character, shockingly dies of smoke inhalation when the family's generic slow cooker (read: NOT a branded Crock-Pot appliance) short-circuits in the night and sets the home ablaze.
- Fans of the show are deeply saddened by this emotional storyline.
- In the depths of their emotion, their brains are triggered with fear at the relatability of that storyline. Essentially, on a subconscious level, their brains say, “Oh my god! I have a Crock-Pot and I don’t want my family to die in the same horrific way that Jack did, so I’m going to throw out mine even though I’ve been using it for decades without such an incident!” (Remember that one of the Crisis Ready® Rules is “You can’t beat emotion with logic.”)
- These fans, all feeling similarly, rally online and, banding together, begin throwing out their Crock-Pots to proactively save their families’ lives.
- Crock-Pot, the brand, wakes up the next morning to thousands of generational customers threatening to never purchase from or use the brand again.
- Crisis for Crock-Pot.
It’s such a wild story, right?!
Even with all the risk assessments the This Is Us writers may have done to foresee and anticipate such a drastic frenzy of a reaction from viewers, foreseeing this one was next to impossible. They didn't even include product placement in the episode!
So, by understanding human emotion and the impact of association and relatability, could AI have predicted this Crock-Pot crisis ahead of time?
This is where my mind went after reading Gini’s post, even though I’m pretty certain she was suggesting that AI would have caught the escalating situation long before Crock-Pot did the following morning, but still after the frenzy had begun.
The latter, I believe, is highly accurate. The former is the notion that fascinated me.
As a thought exercise, I'd like to explore how AI might have predicted this highly unpredictable incident ahead of time. Along the way, I will pose questions that I believe we must ask if we are ever to use AI's predictive capabilities to their fullest effect.
This exercise is meant to invite discussion. I’m not an expert on AI and I would love to hear from you where you agree, where you disagree, where you see things differently, and whatever I may be missing in my current knowledge of artificial (emotional) intelligence. I invite you to be a part of this reflection in the comments section below.
Question 1: How do you teach AI to make connections like the human brain does?
Nothing in the offending episode suggested that the malfunctioning appliance was a Crock-Pot. The device itself, which was shown on the episode several times, was clearly a generic, no-name appliance.
There was no way for Crock-Pot to pinpoint this single episode — out of the deafening noise of prolific pop culture content — and identify it as a risk. This Is Us had no obligation to alert Crock-Pot of the plot point because it wasn't using the brand.
Nonetheless, Americans link all slow cookers with the Crock-Pot brand, arguably an even stronger association than the one between tissues and Kleenex. Ultimately, it didn't matter that the trademarked name was never spoken, nor the logo shown. The brand was still evoked.
How would artificial intelligence have made that jump?
Question 2: How does AI gauge the strength of different emotional reactions and attachments?
Throwing out Crock-Pots was an irrational response. As explained above, the audience was driven by real fear for the lives and safety of their own families as well as real grief — albeit for a fictional character.
Furthermore, slow cookers are designed to be left unattended. On paper, the fiery catastrophe that This Is Us fans watched on screen was neither relatable nor realistic because, in the history of Crock-Pot, no family has ever experienced it.
And yet the fear of losing their own family was relatable — and much stronger than the not-so-minor factual detail that Crock-Pots are designed to be safe and have never in their history caused a fire before.
At the same time, the tragedy of losing Jack — a character who had only been in people’s homes for a couple of years — overrode their loyalty to a kitchen appliance that many have used for decades and through generations.
As we teach in our Crisis Ready Course on Honing the Art of Crisis Communication and Leadership, fear is a very powerful force.
In fact, the episode effectively used that common generational narrative, showing the Pearsons' retiring neighbors giving them the appliance as a warm gesture to the young family years ago. The episode twisted the familial ties the audience associated with their grandmother's Crock-Pot into a tragedy. It was brilliant writing and storytelling designed to pull on people's heartstrings, though not in the way it ultimately did.
How does AI determine which heartstrings win the day across millions of households?
Question 3: Could AI analyze emotions on a massive scale?
What particularly interests me, as we test the capabilities of AI, is this application to predict and analyze emotions on a larger scale.
Let’s face it, even the most emotionally intelligent of human beings can still find themselves blindsided by an unexpected emotional reaction.
And yet — theoretically — you could feed AI the models for fear. You could show it the role fear has played in past crises, through the use of disinformation, propaganda, politics, media sensationalism — the list goes on and on. You could feed it until it could map out where a seemingly inconsequential event could spark into a fire of outrage, devastation, or any other fear-based emotion.
You could teach it to weigh emotional factors and gauge competing narratives to predict the results in a way that humans tend to only realize in hindsight.
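To make the idea of "weighing emotional factors" concrete, here is a deliberately simple sketch. It is not how a real AI model would work — modern systems would use trained language models, not word lists — and every lexicon, weight, and threshold below is invented for illustration. It only shows the basic shape of the logic: score fear-laden and loyalty-laden mentions of a brand across a stream of posts, weight fear more heavily (per the Crisis Ready® rule that you can't beat emotion with logic), and raise an alert when the fear signal wins.

```python
# Toy sketch (not a production model): a lexicon-based monitor that scores
# fear and loyalty cues in posts mentioning a brand, and alerts when the
# weighted fear signal outweighs the loyalty signal. All word lists,
# weights, and thresholds are hypothetical, chosen only for illustration.

import re
from collections import Counter

FEAR_WORDS = {"fire", "burn", "died", "scared", "afraid", "unsafe", "danger"}
LOYALTY_WORDS = {"love", "decades", "grandma", "family", "trust", "reliable"}

def emotion_scores(post: str) -> tuple[int, int]:
    """Count fear and loyalty cue words in a single post."""
    words = Counter(re.findall(r"[a-z]+", post.lower()))
    fear = sum(words[w] for w in FEAR_WORDS)
    loyalty = sum(words[w] for w in LOYALTY_WORDS)
    return fear, loyalty

def should_alert(posts: list[str], fear_weight: float = 2.0,
                 threshold: float = 5.0) -> bool:
    """Alert when weighted fear minus loyalty exceeds a threshold.

    fear_weight > 1 encodes the idea that fear tends to beat logic
    and even long-standing brand loyalty.
    """
    total_fear = total_loyalty = 0
    for post in posts:
        fear, loyalty = emotion_scores(post)
        total_fear += fear
        total_loyalty += loyalty
    return fear_weight * total_fear - total_loyalty > threshold

posts = [
    "throwing out my crock-pot, that fire scene made me so scared",
    "jack died because of a slow cooker, ours is unsafe, it's gone",
    "i love my crock-pot, used it for decades without a problem",
]
print(should_alert(posts))  # → True: weighted fear beats loyalty
```

Even this crude version surfaces the hard questions above: who chooses the weights, and how would a model learn that "jack died" and a no-name slow cooker both point at one specific brand?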
Right?
I certainly believe so. After all, AI is said to be on the pathway to becoming a billion times smarter than us human flies, and that intelligence includes emotional intelligence.
Key Takeaways
There’s definitely an application for using AI to analyze emotions as they are happening (along with many other necessary and helpful predictive elements to support our profession).
In the immediate hours after that fateful “Super Bowl Sunday” episode, I can see how an emotionally intelligent AI model would have helped Crock-Pot quickly analyze the emotions being expressed across thousands of Twitter accounts, predict the risks and consequences to the brand, and support the crafting of an effective response.
Would that same AI model, running in the background on NBC's servers, have been able to sound the alarm and give the company a proactive heads-up before the episode aired? And, moreover, if This Is Us or Crock-Pot had received this proactive heads-up, would the humans on the receiving end of this predictive analysis have even categorized it as a real risk, considering the practical facts and the history of the product and its brand loyalty?
If it couldn’t make the prediction before the episode aired, could we train AI with enough emotional intelligence to calculate the emotional response of a national audience while they were watching an episode like that?
I believe so, but there are so many factors to consider between artificial and human intelligence and know-how. Hence the brain explosion emoji!
What do you think?
What role do you think generative AI will play in analyzing and predicting the potential emotions of hundreds, thousands, or even millions of people? How well do you think we could rely on those predictions? And how can we learn to work together, in partnership with AI, to do so?
How do you think AI will help you enhance your organization’s ability to be Crisis Ready®?
Founder and CEO of the Crisis Ready Institute, Melissa Agnes is the author of Crisis Ready: Building an Invincible Brand in an Uncertain World, and a leading authority on crisis preparedness, reputation management, and brand protection. Agnes is a coveted keynote speaker, commentator, and advisor to some of today’s leading organizations faced with the greatest risks. Learn more about Melissa and her work here.