The AI Revolution
Empowering Executives for the Future
Risk Management

Essential Guardrails for Associations and AI

In this article:
Navigating the legal and ethical issues that AI presents means recognizing trouble spots and building guidelines around them.

Two of an association’s most essential assets are its information and its people, and AI is radically altering the role of both. What data does an association need to protect, and what roles do human beings play in monitoring and disclosing AI use?

As a starting point, says Jeffrey S. Tenenbaum, managing partner of Tenenbaum Law Group, a firm that works with associations, organizations should set a clear policy that keeps private member data off public-facing tools like ChatGPT. “Most machine learning platforms learn from everything that’s out there on the web, but they also learn from inputs that people put into the AI platform,” he says. “Once you put it in there, it might show up in someone else’s outputs.” Paid versions of these tools allow users to input data without that risk; even then, though, Tenenbaum recommends a policy that sets guidelines around the contexts where the tool is used.

One particular trouble spot is hiring. Because generative AI is trained on information that’s available online, it’s also prone to the biases inherent in that information. When screening resumes, an AI tool might favor candidates from one ethnic or racial group over another, or downgrade outliers; and when processing recorded interviews, AI tools may deliver poorer transcriptions of those with pronounced accents or speech disabilities. “There’s a legal risk of employment discrimination, because AI platforms are trained based on everything that’s out there on the web, and we know that the internet has built-in biases,” Tenenbaum says.

“AI platforms are trained based on everything that’s out there on the web, and we know that the internet has built-in biases.” –Jeffrey S. Tenenbaum

Policies and Processes

Developing policies around AI use at an association is an ongoing process. Cindy Ziegler, director of governance and special initiatives at the Association for Professionals in Infection Control and Epidemiology (and incoming chair of ASAE’s Ethics Committee), has been developing guidelines around AI as part of her work on APIC’s ethics task force. “It’s interesting because when it comes to infection prevention, artificial intelligence might be able to detect a future outbreak and let people know about it,” she says. “But people could be stealing our content or taking content off of our website and repurposing it.”

To address that dynamic, APIC is looking to apply its recently developed toolkit around ethical decision-making to AI challenges. The core principles of that toolkit carry over to AI: identifying the “ethical tensions,” considering who’s affected, exploring options, and then taking action. That means it’s important for human beings to be part of any process using AI, Ziegler says. “AI doesn’t have the human factor of empathy and understanding, which is often what you need when you’re looking at bias,” she says.

That mindfulness should extend to any task at risk of being handled mindlessly. For instance, Tenenbaum says, many associations have gotten into the habit of using AI transcription tools to record board meetings so they can quickly produce transcripts and summaries. But while that increases efficiency, it also increases legal risk, because recordings are discoverable in any legal action taken against an association (or individual board members). “That can be very scary from a legal perspective,” he says. “Make sure it’s deleted as soon as the minutes are approved at the next meeting.”

Associations should also be on the lookout for how their intellectual property is being used online. Because an association’s IP can easily be fed into large language models, it’s worth making sure that copyright notices throughout the association’s website are up to date. That includes registering the website itself with the U.S. Copyright Office, which bolsters an association’s legal standing. “It’s really important to register so that if you do have to go after a platform for copyright infringement, you’re going to have a much stronger case,” Tenenbaum says.

Ultimately, Tenenbaum says, human involvement is key to navigating the issues AI presents. “The bottom line is that there’s no substitute for human accountability,” he says. “There has to be a human involved in whatever comes out of the AI platform.”

Mark Athitakis

Mark Athitakis, a contributing editor for Associations Now, has written on nonprofits, the arts, and leadership for a variety of publications. He is a coauthor of The Dumbest Moments in Business History and hopes you never qualify for the sequel.
