
Where Should CEOs Limit AI?
As associations do more with the technology, they'll have to determine where human experts fit in. The more training around that question, the better.
Teaching builds trust.
That’s one of the main lessons I took from reporting on AI trends for the latest batch of Associations Now Deep Dive articles. Distrust of AI is understandable—there are plenty of outsize claims that the technology will decimate industries, shut off our brains, and doom us all. But the associations that use it recognize its limitations while working to ensure that their people are comfortable with it.
For instance, I spoke with Bruce Moe, executive director of the Missouri State Teachers Association, who last year offered bonuses to staff who completed training around AI. A majority of the staff completed that training, and one of Moe’s motivations for the program was to improve not just the staff’s knowledge but the members’ as well. As he told me, he wanted to “get my staff involved in using the technology so that we could begin to leverage it within the association to benefit our members.”
Learning ways to serve members is always a good thing, and I’m hopeful that one of the lessons that gets passed on is that human beings remain essential to any process in which AI is deployed.
Tagoras’ Jeff Cobb pointed out to me that AI is already doing a lot to help subject matter experts get a running start when it comes to content development: “What AI is doing is giving staff the tools to generate an outline to give to them based on the content that we already have as an organization, or that we’re able to get through public sources, to generate an outline for what we’re looking for. So, the subject matter expert is not working from a blank slate.”
More importantly, the subject matter expert is working: A professional needs to be there to vet, structure, question, rewrite, reorganize, and generally think. As MCI USA’s Ashley Slauter put it to me: “You are never going to AI your way out of having a subject matter expert. To stand up a program, you need someone who has the eyes to help the robots.”
But again, that requires training and processes. How much of your content development will you allow AI to handle? Who will review the outputs, and what will they be looking for? When should AI enter the picture for developing content, answering member questions, and analyzing data, and when should it step aside?
Every association will have to determine the limits for itself. But I think a useful rule of thumb, based on what I learned from the experts I spoke with, is this: Insert a human being in the AI process at any point where it’s important to ask, “What’s missing?” AI tools know a lot, but they know only what’s available online to draw from. People know the blind spots and possess the critical thinking skills to address them. Just as important, an association that fails to integrate people risks perpetuating the errors that come with those blind spots.
That’s why training builds trust around AI: It spotlights the spaces where human involvement preserves and increases value. AI makes jobs easier, but if you want to ensure people engage with it, remind them that it’s also an opportunity to better support what humans still do best: think.