How are communicators really using AI? Missouri School of Journalism’s Jon Stemmle has answers


By Austin Fitzgerald

COLUMBIA, Mo. (Jan. 24, 2024) — Professor Jon Stemmle at the Missouri School of Journalism has co-authored a research paper that examines how public relations professionals are using artificial intelligence in their work.

Based on a survey of 75 communication professionals, the paper is designed to aid communicators and organizations looking to integrate AI into their workflows. In it, Stemmle and co-authors Mark Dollins and Elizabeth Ballew identify benefits of AI — such as efficiently handling mundane tasks like editing and organizing presentations — as well as privacy concerns and the potential for biases that suggest a need for caution.

“This research goes beyond the hype and looks at what is actually happening in agencies and PR offices around the country,” said David Kurpius, dean of the School of Journalism. “Whether Jon is bringing that knowledge into the classroom or sharing it with the industry, the result is a more informed community of strategic communicators who know what their peers are doing and how to learn from them.”

Beyond describing use cases, the paper takes a broader pulse of the industry on a technology that experts and industry professionals agree will revolutionize journalism and strategic communication.


“This is a paradigm shift for strategic communicators, specifically those in PR. The data we gathered matched up with much of what I had anecdotally heard: that people were kind of toying with AI but not really using it,” Stemmle said. “But on the other hand, they all thought they would be using it within the next year.”

Still, those who were “toying” with AI were not necessarily doing so aimlessly. In addition to exploring the technology’s ability to speed up mundane tasks, respondents described using publicly available AI tools to cut the time spent on meeting notes and press release drafts, freeing them to focus on the more strategic and creative aspects of their work. Respondents’ concerns centered on privacy, ethics and the biases of current AI tools.

“With ChatGPT, if you put it in there, they own it,” Stemmle said. “So, if you’re about to release a new product, you’re not going to run it through ChatGPT and ask, for example, ‘How can I write about the new iPhone 16?’ That’s why the larger organizations are creating proprietary AI tools so that they can use the technology, but their information is protected.”

The emphasis on proprietary, in-house alternatives to public AI tools is just one example of what Stemmle considers a wise approach to the technology: using it as a starting point, not as the finished product. The paper argues that the human touch is necessary to preserve empathy, check for accuracy and minimize bias, among a host of other ethical considerations.

In other words, the paper suggests AI is useful as an assistant, not as a replacement. It’s a philosophy Stemmle also brings into the classroom, where his students use generative AI to help them gather initial ideas for press releases and other content, learning where the technology falls short in the process.

“If students are learning the basics of things like how to write properly and how to put graphics together, they’re going to know enough to know what’s wrong with AI and be able to fix it,” Stemmle said. “I think that’s where it comes in. If I get a press release in nine seconds, maybe I’ve got a framework. Maybe I’ve got an idea for something cool. I still need to modify it and fact check it, but editing is much quicker than writing.”

The study was conducted in collaboration with Dollins’ North Star Communications Consulting.

Updated: January 25, 2024
