When it comes to an association’s data, AI can open up a brand-new world of anxieties. A chatbot can inadvertently share information about other members’ help requests. Proprietary content can be fed into large language models. Misinformation and incomplete content can give the general public the sense that your association isn’t the authority it claims to be.
But legacy approaches to data security still apply, says Bucky Dodd, Ph.D., CEO and principal consultant for ClearKinetic, a consultancy with numerous association clients that focuses on AI-generated educational tools. “AI is built upon data, and if you have strong data-management and data-governance practices, you’re already a step ahead,” he says. “I often tell people that your first step is not really an AI strategy; it’s a data strategy that you then build off of and extend into the AI space.”
Conor Sibley, CTO of the association tech firm Higher Logic, says that the first step toward a quality data-governance process is a thorough audit not just of what data pools you have, but of how accessible they are.
“You can have great data, but it can be inaccessible,” he says. “There might be incredible data in your community platform, inside your LMS, inside your CMS, your CRM, your journal platform, but if none of it is accessible and none of it is consistent, then the AI is not going to be able to make sense of it.”
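To make that kind of audit concrete, the sketch below (in Python, with hypothetical platform names, fields, and checks) inventories each data source and flags the ones an AI tool could not reliably draw from because they lack programmatic access or a consistent schema.

```python
# A minimal data-inventory sketch. The source systems and the two checks
# here are hypothetical stand-ins; substitute your association's actual
# platforms and whatever accessibility criteria matter to you.
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str                # e.g., "community platform", "LMS", "CRM"
    has_api: bool            # can other systems programmatically read it?
    schema_documented: bool  # is the structure consistent and written down?

def audit(sources: list[DataSource]) -> None:
    """Flag sources an AI tool could not reliably draw from."""
    for s in sources:
        issues = []
        if not s.has_api:
            issues.append("no programmatic access")
        if not s.schema_documented:
            issues.append("inconsistent or undocumented schema")
        status = "ready" if not issues else "blocked: " + ", ".join(issues)
        print(f"{s.name:20} {status}")

if __name__ == "__main__":
    audit([
        DataSource("community platform", has_api=True,  schema_documented=True),
        DataSource("LMS",                has_api=True,  schema_documented=False),
        DataSource("journal platform",   has_api=False, schema_documented=True),
    ])
```

The point of the exercise is not the script itself but the output: a plain list of which systems are ready to feed an AI tool and which need access or consistency work first.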
From there, Dodd says, it’s important to make sure that there’s a human element built into the development of any AI tool. Subject matter experts, staff, and technology committee members can all play a role in vetting the quality of an AI solution and establishing guidelines around its proper and ethical use.
“I’m a big believer in co-design practices,” he says. “I would bring subject matter experts in early and often throughout the process and ask them about the approach to what you’re doing. What areas of concern do you see? From an ethical point of view, that human in the middle of the process is a bedrock principle.”
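One way to express that human-in-the-middle principle in software is an approval gate: AI-drafted content waits in a queue until a subject matter expert signs off, and nothing goes live automatically. The sketch below is a minimal illustration of that pattern; the queue structure, field names, and reviewer workflow are hypothetical.

```python
# A minimal human-in-the-loop sketch: AI output enters a review queue
# rather than publishing directly. The queue and its fields are
# hypothetical placeholders for a real editorial workflow.
review_queue: list[dict] = []

def submit_ai_draft(content: str, source_prompt: str) -> None:
    """Queue an AI-generated draft; nothing is published automatically."""
    review_queue.append({
        "content": content,
        "prompt": source_prompt,
        "status": "pending_sme_review",
    })

def sme_review(index: int, approved: bool, reviewer: str) -> dict:
    """A subject matter expert is the gate between the model and members."""
    item = review_queue[index]
    item["status"] = "approved" if approved else "rejected"
    item["reviewer"] = reviewer
    return item

# Usage: a draft goes in, and only a named reviewer can move it forward.
submit_ai_draft("Draft FAQ answer...", "Summarize our renewal policy")
sme_review(0, approved=True, reviewer="education committee chair")
```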
Amanda DeLuke, senior privacy analyst at Higher Logic, recommends piloting any AI tool an association is considering on a smaller, public set of data to evaluate its quality and security before deploying it more widely.
“Think of it from a risk-based approach,” she says. “Do you have a very good business use case to be using the most sensitive data? Do you have the proper guardrails in place? And have you educated folks on what those guardrails are? The most sensitive data can accidentally be put into a public model, and that’s where you may have a data breach. I think it’s very important to be very strategic about the different types of data — what is off limits and what’s more general-purpose.”
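A guardrail like the one DeLuke describes can be as simple as a tier check before any data leaves for a model. The sketch below is a minimal illustration; the sensitivity tiers, destination names, and limits are all hypothetical, and a real deployment would classify records at the source rather than ad hoc.

```python
# A minimal risk-based guardrail sketch. The tiers and destinations are
# hypothetical; the idea is that the most sensitive data simply cannot
# reach a public model, even by accident.
SENSITIVITY = {"public": 0, "internal": 1, "restricted": 2}

# Maximum sensitivity tier each destination is allowed to receive.
DESTINATION_LIMITS = {
    "public_llm": SENSITIVITY["public"],           # consumer chatbot
    "vendor_llm_with_dpa": SENSITIVITY["internal"],
    "self_hosted_model": SENSITIVITY["restricted"],
}

def check_transfer(record_tier: str, destination: str) -> bool:
    """Block any transfer whose tier exceeds the destination's limit."""
    if SENSITIVITY[record_tier] > DESTINATION_LIMITS[destination]:
        raise PermissionError(
            f"{record_tier!r} data may not be sent to {destination!r}")
    return True

# Usage: member help-request text would be 'restricted', so a public
# model is off limits, while general-purpose content passes.
check_transfer("public", "public_llm")           # OK
# check_transfer("restricted", "public_llm")     # raises PermissionError
```

Pairing a tier map like this with staff education on what each tier means covers both halves of DeLuke’s advice: the guardrails exist, and people know what they are.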