Ethical Considerations and Challenges

The Future of Work: Balancing AI Automation With Human Connection

In this article:
How association leaders can drive ethical AI transformation.

Picture this: A member visits your association’s website late at night, stressed about an upcoming certification deadline. She starts a conversation with your AI chatbot, which quickly identifies her completed continuing education hours and directs her to the right pages. When she mentions feeling overwhelmed, the chatbot seamlessly connects her with a staff member the next morning. 

This is the future of AI at associations. Technology handles the routine questions and gathers data so humans can excel at what they do best — connecting, advising, and genuinely caring about members when it matters most. 

So, the question today isn’t whether AI will transform your association — it’s whether you’ll navigate that transformation successfully. As AI capabilities explode, association leaders face a fundamental tension: how to embrace innovation while preserving the human connections that define member value. Get this balance wrong, and you risk losing what makes your association irreplaceable. 

Reimagining the Org Chart in the AI Era

Smart associations are discovering that AI integration isn’t about replacing people — it’s about repositioning them to have a greater impact. 

“The key is to use AI to enhance, not replace, human connections,” says Richard E. Shermanski, a professional ethics and compliance attorney with extensive association governance experience and a member of the ASAE Ethics Committee. “Organizations should structure AI to handle routine information gathering and data processing while freeing staff to focus on relationship building, strategic problem-solving, and providing the personalized guidance that members truly value.” 

This creates exciting possibilities for staff evolution. Monica Pemberton, vice president and chief information officer at the American Council on Education, sees roles, responsibilities, and even job titles transforming rather than disappearing. “There becomes the potential for change in titles. Maybe instead of a member coordinator, the role becomes more of a member engagement strategist.” 

The key functions of that role can shift from hours spent updating contact information and processing renewals to analyzing member engagement patterns, designing personalized outreach campaigns, or having meaningful conversations with at-risk members. 

But here’s a critical point: Associations that automate without considering member impact lose sight of the goal. The aim is creating more time for meaningful human interactions, not fewer opportunities for them.

Which Tasks Should Stay Human?

Efficiency calculations alone shouldn’t guide these decisions. Associations need an ethical framework that protects member interests and staff well-being. Shermanski recommends a comprehensive approach. “I believe associations should apply a framework that prioritizes mission alignment, fairness and transparency, human dignity, and member trust.” 

Mission alignment serves as the primary filter. Before automating any function, leaders must ask themselves whether AI will better serve the association’s purpose, member needs, and responsibility to staff. Fairness and transparency require honest communication with staff about coming changes and equitable opportunities for skill development. Human dignity means preserving meaningful work that leverages uniquely human capabilities like judgment, creativity, and relationship building. 

Member trust considerations prove equally critical, particularly for sensitive areas like ethics or compliance. As Shermanski notes, certain interactions require human expertise and empathy that AI cannot replicate. The framework ensures automation strengthens rather than weakens the member relationship. 

Tori Miller Liu, MBA, CIP, FASAE, CAE, president and CEO of the Association for Intelligent Information Management (AIIM), advocates for a bottom-up approach to AI automation. “Unless you’re in dire straits, make it a dialogue. Let staff look at their positions and find ways that AI can make their roles better and more fulfilling.” When staff help identify what to automate, they’re more likely to embrace the changes and spot opportunities leaders might miss. 

From Job Displacement to Job Enhancement

Forward-thinking associations recognize their responsibility and invest in helping staff navigate AI transformation. Pemberton takes a strong stance. “If you’re expecting staff to leverage [AI] at work, it’s the association’s responsibility to give them the time and resources to learn how to do that.” 

But effective upskilling goes beyond basic AI literacy — it encompasses strategic thinking skills that become more valuable as routine tasks get automated. Shermanski emphasizes developing “uniquely human skills like ethics, critical thinking, relationship management, and strategic problem-solving.”  

Practical training approaches range from hands-on experimentation to formal certification programs. Pemberton advocates for microlearning approaches that fit busy schedules. “Our attention spans are lower, so microlearning is a good option. Take 15 minutes on one tool and then switch the focus to a new tool or topic,” she says. This format makes skills development more accessible while allowing gradual competency building. 

The investment in staff development creates both immediate returns and long-term organizational resilience. As Shermanski observes, organizations that prioritize people development build stronger foundations for navigating technological change. 

Managing the Human Side of AI Transformation

Successfully introducing AI in a significant way requires addressing the emotional and psychological aspects of change. “Transparency and inclusion are essential,” says Shermanski. 

Liu frames her approach as “ruthless transformation, radical transparency,” advocating for honest dialogue about AI plans and their implications. Rather than imposing change, effective leaders involve staff in implementation processes and demonstrate how AI augments rather than threatens their work. 

Practical exercises can help staff envision their future with AI. Liu suggests running job descriptions through AI tools to identify which tasks could be automated, then asking staff, “How can we be better and benefit from AI? What do you need to explore that?” 

Pemberton recommends sharing specific use cases. “Give real-world examples on how AI can enhance the job. Tedious, day-to-day things that take up time now that can be automated or AI can be leveraged to make that task easier — what does that free you up to do strategically?” 

Leaders must show through actions, not just words, that AI implementation aims to make work more meaningful rather than eliminate positions.  

Charting the Path Forward

The associations that thrive in an AI-enhanced future will be those that approach transformation with intentionality, ethics, and genuine concern for both staff and member welfare. The technology promises tremendous potential — improved efficiency, personalized service, and meaningful work for staff — but only when implemented thoughtfully. 

Liu points to AIIM research showing that 77.4 percent of executives across industries are already experimenting with AI. The risk now isn’t moving too fast — it’s falling behind competitors who are enhancing member value while you’re still debating whether to start. 

“What would be irresponsible is to not experiment out of fear or caution,” Liu warns. “There is a different type of risk altogether in avoiding it.” 

Sarah Sain, CAE, a contributing writer for Associations Now, is the senior manager of marketing and communications at the Association of Old Crows.
