PR Group Sets Guidelines for AI Use

The Public Relations Society of America’s document addresses ethical landmines around generative AI, from hiring to content creation.

A leading public relations association has released ethical guidelines around the use of generative AI in the profession.

In late November, the Public Relations Society of America published “Promise & Pitfalls: The Ethical Use of AI for Public Relations Practitioners,” a document laying out best practices for professionals using the technology. It cites examples of improper use, such as lack of supervision of automatically generated materials, generation of “astroturf” campaigns, unfair vetting of job candidates, misuse of private data, spread of misinformation and disinformation, and more.

A press release regarding the document explains that the guidance is framed around longstanding provisions within the association’s code of ethics, including “disclosure of information,” “competition,” and “enhancing the profession.”

The document, developed by an AI-specific workgroup within PRSA’s volunteer Board of Ethics and Professional Standards, has been in the works for the past year, said PRSA Chair Michelle Egan, APR, Fellow PRSA. As the use of tools like ChatGPT has exploded, PRSA has hosted more conference sessions and held more informal conversations around the technology, particularly about how to use it properly.

“There are tools out there that can help us do our jobs more effectively and more efficiently, and allow us to work at a higher, more strategic level,” Egan said. “But it comes with ethical risks. It comes with risks to our companies and our clients. So we’ve been trying to sort through what it means and how we navigate it.”

Though “Promise & Pitfalls” is not a binding document for PRSA members, it draws bright lines between ethical and unethical use of generative AI. For instance, regarding hiring, it cites as improper a “hiring manager solely [relying] on AI to provide returns relevant to an advertised job without reviewing whether the pool of applicants reflects the known diverse traits in a competitive labor market.”

Here, and in many other cases, the document stresses the importance of human intervention in whatever generative AI produces. “Recognize the limits of the technology and acknowledge the sophistication and expertise of personal knowledge,” the document says. “AI is not a substitute for human judgment, and it cannot replicate human experience.”

As the technology evolves, Egan said the document will likely evolve as well. “We know that this is a living document,” she said. “This is not a new code of ethics. This is just the application of the code of ethics. So as time goes on, and new tools come out and new developments take place, this will be updated so that it stays relevant.”

Egan said that many committees and groups within PRSA have been discussing generative AI, but recommended that associations looking to establish ground rules around its use start with their own code of ethics. 

“If your organization has a set of values, a code of ethics or guidelines, then looking at AI or any other contemporary issue through those parameters is really the way to go,” she said. “It’s too overwhelming, with too much potential for missing things or misreading the situation if you start from scratch. Starting with your organization’s values seems like the ideal.”


By Mark Athitakis

Mark Athitakis, a contributing editor for Associations Now, has written on nonprofits, the arts, and leadership for a variety of publications. He is a coauthor of The Dumbest Moments in Business History and hopes you never qualify for the sequel.