I considered using ChatGPT to write this blog about ChatGPT and PR, but that felt too on-the-nose, meta and a little lazy. So I wrote it myself.
Or did I?
No, I did actually write this myself. This is me. But it’s valid to wonder because for certain types of content, ChatGPT is pretty good.
Since the New Year, the buzz about ChatGPT has been relentless — both good and bad. In less than two months, the chatbot has surpassed 100 million active users. Microsoft announced a multiyear, multibillion-dollar investment in OpenAI to inject ChatGPT into its productivity tools, and Google unveiled its own rival chatbot — an introduction that didn’t go particularly well, wiping roughly $100 billion off Alphabet’s valuation after the bot (confidently) shared a factual error in its very first demo.
We’re in a new phase of AI that will have far-reaching effects across every organization. So what does ChatGPT mean for PR and communications? Today, it’s somewhere between an “entry point to creativity” and a conduit to a “cavalcade of bullshit,” according to practitioners’ takes in a recent PR Week piece. With one foot in the future, Inkhouse is also very much in the experimentation phase with AI tools.
Here’s what I found after a few weeks of playing around with the bot.
Recently our CEO Beth Monaghan threw down in a battle of first drafts with ChatGPT: both were tasked with writing a literary essay, and Beth’s was much, much better than the robot’s.
But with recalibrated expectations, it can be a great tool for getting started on less critical content, like third-tier press releases or more prosaic blog posts, with “enormous time savings to be found in generating first drafts of formulaic content.” It’s formulaic, and for some content, that’s fine.
For more complex or creative communications, ChatGPT can generate something that is better than staring at a blank page. If I’m struggling to write a more complex piece of content, it’ll give me something to react to — even reacting to something terrible or off-message can be enough to dislodge writer’s block.
When the writing just isn’t coming, I’m a firm believer in taking “creativity breaks” — stepping away from the difficult deliverable and doing something mindless, like surfing the web for 5 minutes. Lately I’ve been using ChatGPT breaks to create throwaway content for everything from Boston Celtics historical fiction (in my alternate timeline, Len Bias lives and the team wins three more titles) to a song about my dog and his brother written in the style of Bob Dylan. It’s mostly a lark (although the song did a decent job of approximating pre-1965 Dylan) that can help clear the mental cobwebs before getting back to the task at hand.
With the right query, it can provide decent industry and market background for new business pitches. But don’t take it as gospel, because sometimes it gets straightforward things very wrong. For example, it insists Inkhouse’s CEO is someone other than Beth Monaghan (baffling), and it has suggested I am a native of Puerto Rico (I suspect some confusion over PR’s dual meaning, but sadly untrue).
Where does it fall short? Well, a lot of places, but here are a few:
The Google Bard misstep was a high-profile failure with real-world impact. Even in my own noodlings with the tool, I’ve seen AI “hallucinations” — confident responses built on incorrect or fabricated information — produce content that was just plain wrong. These egregious or obvious examples make me question what else might be incorrect.
ChatGPT is really good at presenting what’s already out there, but as AI expert Tomas Chamorro-Premuzic puts it, “being a true human expert will take knowing more about the subject matter than the ‘wisdom of the crowds’ on which the AI draws.”
Thought leadership requires authenticity, domain expertise and an interesting point of view — taking the existing conversation beyond what’s already been said. None of these are ChatGPT’s strengths.
We help our clients tell stories that appeal to their audiences’ emotional needs to inspire action. ChatGPT doesn’t really “do” emotions — it generates text based on input, patterns and associations. By definition, it will soullessly regurgitate what’s already been written, without any true feeling or humanity.
Quite the opposite: you can make the argument that ChatGPT has come at tremendous human cost. Thanks to Time magazine’s reporting, we know that OpenAI paid Kenyan workers less than $2 an hour to make ChatGPT “less toxic,” requiring them to scan tens of thousands of snippets of horrific content from the darkest corners of the web to help train the model, relying on “invisible workers (who) remain on the margins even as their work contributes to billion-dollar industries.”
The news cycle for this story was shockingly brief, drowned out by more breathless AI enthusiasm, and that’s just wrong.
“The data that you enter into an AI app is potentially not at all entirely private to you and you alone. It could be that your data is going to be utilized by the AI maker to presumably seek to improve their AI services or might be used by them and/or even their allied partners for a variety of purposes. You have now been forewarned.” — AI expert Lance Eliot in a recent Forbes piece.
This limits ChatGPT’s utility for anything involving not-yet-public or sensitive information.
AI is absolutely going to be a big part of PR’s future. With the right expectations, I see ChatGPT as another valuable tool — like everything from Google Search to Grammarly — to help us work better in the present, too.
Want to hear more about AI’s role in our lives and its impacts on our jobs? Subscribe to our newsletter to stay updated.
Ed Harrison oversees Inkhouse’s growth in the eastern U.S., including Boston and New York, building company culture and ensuring client-service excellence while helping drive the agency’s expansion into new markets and geographies.