Ethical Considerations and Challenges

Leverage AI Without Compromising Trust

In this article:
AI opens new concerns about data security and member privacy. Old-school guidelines are still useful. 

When it comes to an association’s data, AI can open up a brand-new world of anxieties. A chatbot can inadvertently share information about other members’ help requests. Proprietary content can be fed into large language models. Misinformation and incomplete content can give the general public the sense that your association isn’t the authority it claims to be. 

But legacy approaches to data security still apply, says Bucky Dodd, Ph.D., CEO and principal consultant for ClearKinetic, a consultancy focused on AI-generated educational tools with numerous association clients. “AI is built upon data, and if you have strong data-management and data-governance practices, you’re already a step ahead,” he says. “I often tell people that your first step is not really an AI strategy — it’s a data strategy that you then build off of and extend into the AI space.”

Conor Sibley, CTO of the association tech firm Higher Logic, says that the first step toward a quality data-governance process is a thorough audit of not just what data pools you have, but also their accessibility. 

“You can have great data, but it can be inaccessible — there might be incredible data in your community platform, inside your LMS, inside your CMS, your CRM, your journal platform, but if none of it’s accessible and none of it is consistent, then the AI is not going to be able to make sense of it.” 
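One way to make that audit concrete is a simple inventory that records, for each system, whether its data is both accessible and consistent. The sketch below is purely illustrative, not a tool Sibley describes; the system names and the three checks are hypothetical placeholders for whatever an association actually finds in its audit.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    """One system in the association's data inventory (hypothetical fields)."""
    name: str
    has_api: bool            # can the data be pulled programmatically?
    schema_documented: bool  # is there a consistent, documented structure?
    member_ids_match: bool   # do records use the same member identifier?

# Hypothetical inventory; in practice this list comes from the audit itself.
inventory = [
    DataSource("community platform", has_api=True,  schema_documented=True,  member_ids_match=True),
    DataSource("LMS",                has_api=True,  schema_documented=False, member_ids_match=True),
    DataSource("CMS",                has_api=False, schema_documented=False, member_ids_match=False),
    DataSource("CRM",                has_api=True,  schema_documented=True,  member_ids_match=True),
    DataSource("journal platform",   has_api=False, schema_documented=True,  member_ids_match=False),
]

def ai_ready(source: DataSource) -> bool:
    """A source is usable by an AI tool only if it is accessible AND consistent."""
    return source.has_api and source.schema_documented and source.member_ids_match

for source in inventory:
    print(f"{source.name}: {'ready' if ai_ready(source) else 'needs work'}")
```

The point of the exercise is less the script than the output: a plain list of which systems an AI tool could actually draw on today and which need accessibility or consistency work first.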

From there, Dodd says, it’s important to make sure that there’s a human element built into the development of any AI tool. Subject matter experts, staff, and technology committee members can all play a role in vetting the quality of an AI solution and establishing guidelines around its proper and ethical use.

“I’m a big believer in co-design practices,” he says. “I would bring subject matter experts in early and often throughout the process and ask them about the approach to what you’re doing. What areas of concern do you see? From an ethical point of view, that human in the middle of the process is a bedrock principle.”  

Amanda DeLuke, senior privacy analyst at Higher Logic, recommends piloting any AI tool an association is considering on a smaller, public dataset to evaluate its quality and security before deploying it more widely.

“Think of it from a risk-based approach,” she says. “Do you have a very good business use case to be using the most sensitive data? Do you have the proper guardrails in place? And have you educated folks on what those guardrails are? The most sensitive data can accidentally be put into a public model, and that’s where you may have a data breach. I think it’s very important to be very strategic about the different types of data — what is off limits and what’s more general-purpose.” 
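That risk-based approach can be expressed as a simple guardrail: classify fields by sensitivity and refuse to send the most sensitive ones to a public model. The sketch below is a generic illustration under assumed field names and tiers; it is not a description of Higher Logic’s product or DeLuke’s exact process.

```python
# Hypothetical sensitivity tiers; an association would define its own.
SENSITIVITY = {
    "member_name": "restricted",
    "email": "restricted",
    "payment_details": "off_limits",
    "help_request_text": "restricted",
    "event_title": "general",
    "session_abstract": "general",
}

ALLOWED_IN_PUBLIC_MODEL = {"general"}  # only general-purpose data may leave the organization

def prepare_for_public_model(record: dict) -> dict:
    """Drop fields not cleared for a public LLM; refuse off-limits data outright."""
    cleaned = {}
    for field, value in record.items():
        tier = SENSITIVITY.get(field, "off_limits")  # unknown fields are treated as off-limits
        if tier == "off_limits":
            raise ValueError(f"'{field}' must never be sent to a public model")
        if tier in ALLOWED_IN_PUBLIC_MODEL:
            cleaned[field] = value
    return cleaned

# Example: only the general-purpose fields survive the guardrail.
record = {"member_name": "Jane Doe", "event_title": "Annual Meeting", "session_abstract": "..."}
print(prepare_for_public_model(record))  # {'event_title': 'Annual Meeting', 'session_abstract': '...'}
```

Treating unknown fields as off-limits is the conservative default DeLuke’s framing implies: data has to be deliberately cleared before it can flow to a public model, rather than deliberately blocked after the fact.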

“Bring subject matter experts in early and often throughout the process and ask them about the approach to what you’re doing.” —Bucky Dodd, CEO, ClearKinetic

Educating Staff and Stakeholders

As more organizations have begun adopting AI in their everyday work — and as more have experienced breaches or heard horror stories about them — more guidelines around best practices have emerged. For instance, Higher Logic has a usage policy that discloses how it approaches AI, how it protects user data, what LLMs it uses, and what information is shared with them. DeLuke also recommends leaders look at the European Union’s AI Act, adopted in 2024, as a helpful guide to what clear disclosures and guardrails around AI usage look like.

“You need to have a really strong human firewall, so educating is very, very important when it comes to security and protecting data,” she says. “You want to create your own internal framework that you’re going to use to educate folks, so that everyone is on the same page. I feel like communication can be very tricky sometimes, so you want to make sure you have that proper framework, whether it’s an AI policy, or you may have a ‘responsible use of AI’ statement that is used internally and externally. That’s really going to help your internal folks and also create that transparency for your users or members.” 

Similarly, Dodd says organizations should develop clear statements about what data they’re using and how it will be used, especially for members who are concerned about misuse of data and skeptical about AI. “From a messaging standpoint, getting really clear about what it is we’re talking about with AI to the various constituencies that we’re working with, I think, is number one,” he says. “Then the next piece is the data piece. What data are you working on? Who are the controlled audiences around that data? Who can access it? What do you then do with the inputs and the outputs of the data? Those all play a very significant role in what is appropriate use of AI.”
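Dodd’s questions about what data is in play, which audiences are controlled, who can access it, and what happens to inputs and outputs can be captured in a small per-dataset statement. The structure below is a hypothetical sketch of such a record, not a template from Dodd or ClearKinetic, and every value in it is an invented example.

```python
# A hypothetical per-dataset governance statement, mirroring the questions above.
governance_statement = {
    "dataset": "member help-desk transcripts",
    "data_in_use": "anonymized question text only; no names or contact details",
    "controlled_audiences": ["staff support team", "AI vendor under NDA"],
    "access": "role-based; reviewed quarterly by the technology committee",
    "inputs": "questions are stripped of identifiers before being sent to the model",
    "outputs": "draft answers are reviewed by a subject matter expert before publication",
}

for key, value in governance_statement.items():
    print(f"{key}: {value}")
```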

Higher Logic’s Sibley notes that a strong data-management policy should forestall any serious AI-related breaches, but because the technology is new, staff will need more specific and regular reminders.

“What we’ve found for staff is it’s worth educating them about high-level things, like what’s super-dangerous,” he says. “But some of the things that are super-dangerous are not new. It’s just that because of AI it’s magnified. If you go in and say that this is all new because of AI, it’s actually not. The data has always been sensitive. It’s just now that people are talking about it.” 

Mark Athitakis

Mark Athitakis, a contributing editor for Associations Now, has written on nonprofits, the arts, and leadership for a variety of publications. He is a coauthor of The Dumbest Moments in Business History and hopes you never qualify for the sequel.
