As Artificial Intelligence Use Increases, Ethics Policies Needed

New research finds that while AI use is expected to skyrocket in the coming years, most employers aren’t concerned about unethical uses of the technology. They also haven’t created ethics policies to address AI use, though they should.

While many organizations are integrating artificial intelligence into their daily work, they don’t appear to be concerned about possible unethical uses of the technology.

But they should be, according to new research from customer-experience company Genesys.

More than 5,300 participants, including employers and employees, from six countries were asked about their attitudes toward a variety of AI topics, including ethics. Respondents generally believed AI use would become more prevalent in the near future, with 64 percent of employers surveyed expecting “their companies to be using AI or advanced automation by 2022 to support efficiency in operations, staffing, budgeting or performance.” Only 25 percent are using it for those purposes now.

Even with AI use expected to grow so rapidly, most employers are not troubled that the technology could be used unethically by their companies as a whole (54 percent) or by individual employees (52 percent). Meanwhile, only 17 percent of employees expressed concern over AI potentially being used unethically.

Digging a little deeper, the survey found that while 28 percent of employers are apprehensive that their organization could be hit with “future liability for an unforeseen use of AI,” only 23 percent have a written policy on the ethical use of AI and bots.

In addition, the ethics of using AI to replace human jobs drew keen interest: Both employers and employees said they want safeguards to protect human jobs. The research found 48 percent of U.S. employers and 62 percent of employees agreed that “unions or other regulators should require companies to maintain a minimum ratio of human employees to robots.”

For organizations wondering where to start when it comes to addressing ethics with AI, Genesys created some guidelines that target five focus areas:

  • Disclosure. Customers should know when they are conversing with AI bots.
  • Bias prevention. AI systems should not introduce bias against race, gender, nationality, sexual orientation, ability, or other areas.
  • Accountability. Organizations are ultimately responsible for the AI they create and systems created by their AI.
  • Data protection. AI must not be used to diminish the rights or privacy of individuals or communities.
  • Social benefit. Thoughtful use of AI should provide social benefit.

“Our research reveals both employers and employees welcome the increasingly important role AI-enabled technologies will play in the workplace and hold a surprisingly consistent view toward the ethical implications of this intelligent technology,” said Genesys Chief Marketing Officer Merijn te Booij in a press release. “We advise companies to develop and document their policies on AI sooner rather than later—making employees a part of the process to quell any apprehension and promote an environment of trust and transparency.”


By Rasheeda Childress

Rasheeda Childress is a former editor at Associations Now.
