Two of an association’s most essential assets are its information and its people, and AI is radically altering the role of both. What data does an association need to protect, and what roles do human beings play in monitoring and disclosing AI use?
As a starting point, associations should set a clear policy that keeps private member data off public-facing tools like ChatGPT, says Jeffrey S. Tenenbaum, managing partner of Tenenbaum Law Group, which works with associations. “Most machine learning platforms learn from everything that’s out there on the web, but they also learn from inputs that people put into the AI platform,” he says. “Once you put it in there, it might show up in someone else’s outputs.” Paid versions of these tools can allow users to input data without that risk; even so, Tenenbaum recommends implementing a policy that sets guidelines for the contexts in which the tools are used.
One particular trouble spot is hiring. Because generative AI is trained on information that’s available online, it’s also prone to the biases inherent in that information. When screening resumes, an AI tool might favor candidates from one ethnic or racial group over another, or downgrade outliers; and when processing recorded interviews, AI tools may deliver poorer transcriptions of candidates with pronounced accents or speech disabilities. “There’s a legal risk of employment discrimination, because AI platforms are trained based on everything that’s out there on the web, and we know that the internet has built-in biases,” Tenenbaum says.