October 30, 2024

How can you use AI for strategic communications?

Alexandra M. Merceron

How is artificial intelligence reshaping the nonprofit landscape? Alexandra M. Merceron, executive vice president at RUBENSTEIN, joins Farra Trompeter, co-director, to explore the impact of artificial intelligence on nonprofits. They discuss emerging themes, risks, ethical concerns, how to navigate AI’s challenges, available safeguards, and three key implications AI has for nonprofit communications.

Transcript

Farra Trompeter: Welcome to the Smart Communications podcast. This is Farra Trompeter, co-director and worker-owner at Big Duck. In today’s episode, we’re gonna ask the question, how can you use AI for strategic communications? Now, we’ve had a few other episodes about AI. We will link to them in the show notes and in the transcript at bigduck.com/insights. But we haven’t really homed in on AI as it relates to how we’re approaching communications. So I’m really excited for this conversation, and we will soon be talking with Alexandra M. Merceron. Alex uses she/her pronouns and is a communications professional and educator with more than 20 years of experience helping organizations manage their reputations and engage audiences for the greater good. She’s currently an executive vice president at RUBENSTEIN, a full-service agency specializing in reputation management. Alex is also an award-winning lecturer in strategic communication at Columbia University, which actually connects to how we met. We’re recording this conversation in October 2024, and I first met Alex a few months earlier, while I was teaching at the Marxe School of Public and International Affairs, which is part of CUNY Baruch College. Alex came in and offered an amazing guest lecture to our students there, who were taking classes in branding and communications, and had a lot of great things to say, including about AI. My interest was piqued, and we have been talking ever since. So, Alex, welcome to the show.

Alexandra Merceron: Thank you so much for having me on today, Farra. It’s great to be here with you and your listeners.

Farra Trompeter: So Alex, as we were just sharing in the intro, you have been working in mission-driven communications for a long time. And I know that, since 2020, you have spent many a conversation helping communications leaders around the country understand artificial intelligence and plan for its impact on their organizations. You know, some people may have just started thinking about AI in the past year or two; you have been thinking about it for several years. And I’m curious, what are some of the biggest themes that have emerged when you have been talking with folks generally, but particularly in 2024, and with folks who are in the nonprofit or public sector?

Alexandra Merceron: Yeah, you know, it’s been a very interesting five years. I’ll take a quick step back, but as you noted, we work, teach, and do research on communication, and new technologies are nothing new. Whether it’s been mobile apps or social media platforms, we’re used to having to adjust our communication to new technologies. But AI has really proven to be different. It’s not one new thing, it’s a new approach to everything. So I really started talking to my colleagues, clients, and students about the evolution of AI, particularly for strategic communication, around 2020, when it became clear that the technology was developing really quickly. And quite honestly, media outlets were among the first organizations to really start experimenting. It became more evident that this was a significant moment in the development of AI when ChatGPT was rolled out at the end of 2022, suddenly making this technology available to everyone. And so really, since then, organizations have been focused on adoption and experimentation. Which tools to use, how to use them, who should use them within the organization, and how do we build some structure around that, whether it’s a policy to manage the risks and challenges, or simply training people to protect ourselves from the things that we know can arise: AI hallucination, the bias that sometimes shows up in the results, copyright issues, ethical challenges, and so on.

Alexandra Merceron: But in 2024, those conversations have really matured. Generative AI tools have multiplied; I’ve lost count of how many are available. The risks are well documented. So really, where organizations are now is sifting through all of the hype around AI to make smart decisions about what to hold onto long-term. We’re never going to exist in a world where these tools are not here, where they’re not a part of our work. And so really, planning for long-term, permanent integration, in a way that’s flexible as the tools evolve, is the number one priority for all organizations.

Farra Trompeter: Yeah, and I appreciate one of the things you said at the beginning of that thread, which is: instead of just looking at AI as a tool, really thinking about what our relationship to this tool is. How are we using it? How are we experimenting with it? How are we understanding its challenges and creating safeguards against them? You know, I got my first email address officially in college, so I think about my relationship to technology as an individual, let alone as a communications professional. It’s fascinating to realize you’re often having the same conversation around whatever the new tool is, and to reorient yourself around that. So, specifically with nonprofit organizations, I’m curious, what do you think are some of the most significant implications that AI has for how nonprofits communicate?

Alexandra Merceron: Yeah, I think there are three. Nonprofit organizations are inherently mission-driven: you are always trying to do more with less and be as responsible a steward of resources as you can. And so the first big implication is organizations deciding which approach they’re going to take with AI, right? When we’re talking about AI, we’re really talking about commercially available tools built on large language models that are publicly available, ChatGPT, Gemini, Claude, Copilot, and the dozens of other tools we already use that have been enhanced with AI. That’s one approach, and a key question for organizations to answer: what’s the right mix of tools? The other side of that coin is, do we make a major investment in what’s called a closed model? Some of the bigger fears around using AI are around privacy. And if that’s a huge concern for an organization, there is the option of working with software developers to create your own large language model, or working with one of these commercial solutions to create your own personalized, organizational large language model. And it’s not a small decision, right? It can potentially be a huge investment, but for some organizations, privacy, security, that level of control over the technology is key.

Alexandra Merceron: Second, which speaks to what you were talking about in terms of our personal relationship with AI, I think what we’ve seen in 2024 is that the robots are not going to take our jobs. Really, the best way to look at AI and its promise is to see it as a resource multiplier. And for nonprofit organizations, your greatest resource is your staff, right? Your people and their time, their knowledge. So I encourage leaders to ask themselves, what are the most labor-intensive, time-consuming tasks we do most often? And how can generative AI make those things less challenging and more efficient? Really, AI can help nonprofit organizations maximize the resources they already have that are in short supply, or just get more out of those resources, i.e., the people.

Alexandra Merceron: And then third, if you think about another positive outcome of AI, it’s that audiences are changing. The way people learn about the world and the causes that many organizations are advocating for will be changed by searching on AI, right? I can’t tell you how many times I just use AI as a Google on steroids. To learn, educate myself, get more background information on topics. And quite honestly, many people are doing that. According to recent research from Adobe, over half of Americans have used generative AI in the past year. Of those users, 81% are using it in their personal lives, about 30% are using it for work, and 17% are using it in school. That’s only going to increase, and it actually creates an opportunity, but it’s a little bit more work for nonprofits. Now you have one more channel where you have to curate your reputation and make sure that the right information about your organization is on your website, in the press, in press releases. All of the content that you create now is ending up in this additional place, right? These generative AI tools will soon become an important part of people’s learning about both your organization and the causes you support.

Farra Trompeter: Yeah. In fact, to that last point, the other day I had a call with an organization reaching out to see about working with us. One of the first questions we almost always ask is: “Hey, how’d you hear about us?” You know, unless it came from somebody who introduced us. And I had the first call where someone said, “I asked ChatGPT about nonprofit communications firms, and that’s how I found Big Duck.” I was like, “All right, well, there you have it.” So that backs up your point, and it’s another thing we need to think about with our own marketing. So I’m curious, where do you see AI offering the biggest boost for nonprofits and how they approach communications? And what are the most significant liabilities or risks? What about ethical concerns? Because again, I think there are a lot of exciting opportunities. We should certainly talk about them. We wanna be abundant, we wanna be excited, but we also wanna be mindful. You spoke about privacy; there are ethical concerns. There are, I know, a lot of questions that have come up. We’ve talked a little bit on this podcast about equity issues and even representation. We have a great blog that my co-director, Claire Taylor Hanson, wrote around representation and images, and how you have to be careful if you’re using AI in those ways. So I’d love to hear from you: what do you think are the biggest opportunities and perhaps the biggest risks?

Alexandra Merceron: Yeah, I’ll start with the opportunities, right? There are five basic uses for AI, and we’ve already touched on some: brainstorming, research, writing, analyzing data. Those are things that I tell everyone you can get started with immediately. Just in the amount of time saved, maximizing human resources, it adds such a benefit to pretty much any sort of knowledge work, right? Early on, most of the fear around AI was that it was going to replace a lot of knowledge jobs, and particularly that lower-level jobs were at risk. But really, the people who should be using AI are more senior. People who have judgment, people who have expertise. People who have enough of an understanding of the subject matter to really just use AI for efficiency. You can get to a good first draft of most common forms of writing with a lot of the tools.

Alexandra Merceron: Gemini, ChatGPT, and Claude, which is one I rely on, quite honestly, more and more each day. Tools like DALL-E can be used for creating images, right? Many of us remember the days of having to invest in purchasing stock images if you were working on design, but there are so many commercially available tools that make that easy, even for video creation and audio creation. So there’s a host of tools that can be used for creating content and research, and also for personal productivity: tools like Microsoft Copilot and Gemini, even Zoom’s AI Companion features, things that take notes for you and help you transcribe meetings. Again, that theme of maximizing the resources you do have, your people, who are your best resource, to get more out of them, make their jobs more enjoyable, and hopefully save time and resources that way.

Alexandra Merceron: But you are absolutely right, there are many ethical challenges. First, some of the problems are inherent to the tools. AI hallucination still happens. The tools have gotten much better, but I cannot underscore enough the importance of fact-checking and editing; trying to triangulate findings in anything you get out of an AI tool is still a top concern. Second, knowing how to use these tools safely and responsibly. Offline, we were talking about some of the mistakes that happen with these AI tools, and if you really look at it, it’s as much human error as it is the tool’s fault. You know, recently there have been news stories about people putting personal information into public tools. The key is to make sure that you are investing in training along the way, so that staff and, quite honestly, the organization’s reputation are protected from some of these problems. And then the third area, which is probably the most difficult and also the largest, is the ethical complications, right? Depending on which tool you are working with, there are all sorts of critiques about the development of the technology, the environmental impact, and how these large language models have been trained in terms of content ownership. And then there’s, as you mentioned, the bias in results, right? Essentially, these large language models are pulling from existing content, and we have varying degrees of transparency around what that content is, how that content was derived, and how the model algorithms have been written to draw information out. So it’s still incumbent upon us to try to extend our personal ethics, our organizational ethics, to these tools.

Farra Trompeter: You know, you mentioned, just in this response and earlier, “AI hallucinations,” and I actually have to ask what you mean by that. Because I have an assumption. I remember reading an article a few months ago, I’ll try to find it to put in our show notes. I can’t remember if it was in The New York Times or The Washington Post, but it had a back-and-forth where a reporter was testing one of the AI tools when it first came out, and all of a sudden the AI was like, “Leave your spouse and be with me forever,” or something very wild like that. What do you mean by AI hallucination?

Alexandra Merceron: So AI hallucination is used as a broad term to categorize all of the ways in which these AI tools just make stuff up. Think about how large language models work, right? It’s a huge vat of information, and there’s essentially code written on top of it to help make predictions and assumptions. When we enter a prompt, it wants to give us an answer so badly. It exists to give us an answer, and it will try to find the answer. And if that answer is not there, it will give you its assumption. Many times that assumption is wrong, and we end up with facts that aren’t true. We end up with inaccuracies in the data. Sometimes we end up with the most bizarre kinds of fill-in-the-blank results, purely based on these linear assumptions that the large language model is making.

Alexandra Merceron: I always tell people, that’s kind of the edge that human beings have on AI. We are much better at non-linear assumptions and guessing than ChatGPT or Google Gemini, right? They are good at thinking linearly, and if something’s not there, they will put something in the hole that they think should be there. Whereas our intuition sometimes protects us from that a little bit better than the models. But it’s still a very real concern. The tools have gotten better, but it remains a concern, and so I have not dialed back my caution around looking for inaccuracies. I triple-check any statement of fact, particularly dates. I’ll give you one very specific example. Bill Walton, the famous basketball player, passed away in May. The very next day after he passed, I asked ChatGPT to give me some background on Bill Walton, and it gave me a date of passing that was three weeks in the future.

Alexandra Merceron: I did the query in May, and it said that he passed away on June 13th, 2024. Number one, he passed away in May, the day before I asked. And I have no idea where it got that date; June 13th at that point was still in the future. So it just completely made something up. And if you prompt it further, many chatbots, ChatGPT and Claude in particular, will recognize their mistake, and you will get a result that says something like, “I apologize, I didn’t have sufficient data. I gave you my best guess. I probably shouldn’t have done that.” It’s actually quite interesting.
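That kind of date error is easy to catch mechanically. As a purely illustrative sketch, not a tool discussed in the episode, here is one way in Python to flag every date and number in model output for the kind of human fact-checking Alex recommends; the sample text and the three-source rule are assumptions drawn from elsewhere in this conversation.

```python
# Minimal sketch: pull dates and numbers out of AI-generated text so a
# human can verify them before publication. Purely illustrative; the
# sample text paraphrases the Bill Walton example from the episode.
import re

model_output = (
    "Bill Walton, the Hall of Fame center, passed away on June 13, 2024, "
    "after a long battle with cancer."
)

# Spelled-out dates like "June 13, 2024" are the riskiest claims here.
DATE = (
    r"\b(?:January|February|March|April|May|June|July|August|September|"
    r"October|November|December)\s+\d{1,2},\s+\d{4}\b"
)

dates = re.findall(DATE, model_output)
# Strip the dates out first so their digits aren't double-counted as numbers.
numbers = re.findall(r"\b\d+\b", re.sub(DATE, "", model_output))

print("Claims to verify against at least three sources:")
for claim in dates + numbers:
    print(f"  - {claim}")
```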

Farra Trompeter: Yeah. So in addition to saying, “Is this in fact true?” what questions should nonprofit staff be asking when it comes to selecting AI tools? Whether they’re thinking about testing them, adapting and adopting them in their organizations, what should they be thinking about?

Alexandra Merceron: Yeah, so I think it goes back to identifying the tasks that you do all of the time. Areas where you identify that there’s room for efficiency, right? The first step in choosing tools is to decide what the use cases are. That’s number one, thinking about what you need. It’s that conversation about ethics. It’s that conversation about training and thinking about who’s going to be using them. And it may be that your organization decides that the only use cases for AI will be generating first drafts and internal communication. That is fair. It’s not unprecedented: some organizations are only using it for what we call non-billable content. We use it for brainstorming. We use it for research. We’re using it to draft emails to our colleagues, but nothing for clients, right? That’s just one example. But decide what those use cases are. The examples I gave, research, brainstorming, analyzing data, first drafts of writing: those are a good place to start. I think you have to be quite careful about going beyond that to much broader uses that would probably have more risk involved.

Alexandra Merceron: Second is to think about the risks for the uses that you have in mind. I can’t stress this enough: have a policy in place that gives people the rules of the road. If you want people to check everything against three sources and have a second colleague read it over, build that into your policy, right? The policy should include things not to do as well as guidance on how to use these tools well. Everyone needs the roadmap. It’s good for the users, it’s good for the organization.

Alexandra Merceron: And then third, really deciding which tools make sense. Many of these tools overlap. You could probably make the case for several of them, but not every organization can afford all of them. You probably need one or two, or a combination, depending on what your use cases are. But again, have those documented. I encourage organizations to invest in the paid subscription versions of these tools. They offer more security and a little bit more control over the platform, again giving the organization the ability to manage and to stay safe in using those tools. And then fourth, really deciding who should be using them. I think initially we thought these tools were for everyone. The answer is, it depends on the organization. Knowledge work comes in many forms. You’re going to find that the people in your organization who are really enthusiastic and excited about these tools will be the best ambassadors and the best people to invest in. Identifying them and investing in their training takes time and effort. And so I think those are some of the key steps that you sort of have to commit to if you decide that you’re going to begin to experiment with AI.

Farra Trompeter: That’s really helpful. Thank you. So before we wrap up, we always really love, on the podcast, to provide some practical tips or examples for our listeners. And I’m curious, do you have anything incredibly concrete and specific, suggestions, examples, et cetera, that comes to mind and might spark some new ideas for our listeners? I mean, you’ve given us a lot to think about, but I’d love to bring in some practical insights.

Alexandra Merceron: Yeah. Two quick things that I like to do. I like to use ChatGPT as a coach. Before I do anything, or if I have a plan to start working on any project, I will typically ask, “What are the steps you would take in doing this?” Just to verify my own approach and see if I’ve missed anything. Another thing I do with Claude sometimes, and again, this is all for internal use. In my work, I’m not using it on any client work. But I do a lot of events, I do a lot of speaking, and I will often take something that I’ve written and transform it with Claude. If I’m giving a presentation, I’m comfortable putting my slides into Claude, and I will ask it to give me a five-sentence summary of the presentation, or an index of every presentation I’ve ever given. You can ask it to take a paragraph that you’ve written and turn it into an email invitation. So a lot of content transformation that would probably take me 20 minutes to do, if I do it with Claude or ChatGPT, takes me five. So again, try to find these little efficiencies that you can add to your day. Number one, it’s fun. It’s a great way to learn about the tools and get started, and it just saves you time. And really, adding efficiency to our day is a huge win in itself.
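If you want to script that kind of content transformation rather than pasting into a chat window, here is a minimal sketch using the Anthropic Python SDK. The model name, prompt wording, and placeholder text are illustrative assumptions, not something described in the episode, and the same pattern works with other providers’ APIs.

```python
# Minimal sketch: turn notes you've already written into a five-sentence
# summary via the Anthropic Python SDK (pip install anthropic).
# Model name and prompt wording are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

source_text = "Paste the paragraph or presentation notes you already wrote here."

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=500,
    messages=[
        {
            "role": "user",
            "content": (
                "Summarize the following presentation notes in exactly five "
                "sentences, keeping my original tone:\n\n" + source_text
            ),
        }
    ],
)

# The response content is a list of blocks; the first block holds the text.
print(message.content[0].text)
```

Swapping the prompt, say, for “turn this paragraph into a short email invitation,” gives the other transformation mentioned above without changing any of the surrounding code.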

Farra Trompeter: Yeah. So who doesn’t wanna have more time in their day and more fun? There you go. Well, Alex, this has been such a great conversation, as it always is whenever we get a chance to talk. If you’re out there and you’d like to connect with Alex, you can find her on LinkedIn. We will link to her profile in the transcript of this episode at bigduck.com/insights. Alex, before we officially wrap things up, anything else you’d like to share?

Alexandra Merceron: My advice is just to find something small that AI can help you do personally in your day-to-day work, and build from there. That firsthand experience with generative AI is really the best way to understand the risks, benefits, and opportunities. And thank you for having me.

Farra Trompeter: Well, thanks for being here today. All right, everyone go out there and have some fun in whatever way you choose to do it.