The Information Technology Industry Council, which represents some of the sector’s biggest players, has released a set of policy principles on emerging artificial intelligence technology. The recommendations, which urge a light regulatory touch, have received mixed reviews.
As the potential of artificial intelligence to touch nearly every aspect of our computing experience becomes clearer, pressure for a set of standards to govern the emerging technology is growing.
Organizations that have begun to tackle the issue include the Partnership on AI, which has attracted major tech companies, including Apple, Facebook, and Google, as well as a number of nonprofits.
The Information Technology Industry Council (ITI), which includes many of the companies involved in the Partnership but has interests beyond AI, is now adding its voice to the conversation. This week, ITI released a set of principles designed to help shape policy related to artificial intelligence.
ITI’s AI Policy Principles document [PDF] addresses the responsible use of artificial intelligence and the government’s role in the field. ITI sees public-private partnerships (PPPs) as a promising tool for research and development.
“Our ability to adapt to rapid technological change … is critical. That is why we must continue to be prepared to address the implications of AI on the existing and future workforce,” the document states. “By leveraging PPPs—especially between industry partners, academic institutions, and governments—we can expedite AI R&D, democratize access, prioritize diversity and inclusion, and prepare our workforce for the jobs of the future.”
In a news release, ITI President and CEO Dean Garfield noted that the goal of the principles, a yearlong collaboration among its member companies, is to help advance the technology while “guarding against unwanted impacts.”
“These principles will evolve as AI evolves and will provide a compass for our approach,” Garfield said. “These principles are not just a vision of our path forward, they are a call to action for all of society to fully realize the smart, responsible growth of artificial intelligence.”
How much regulation?
The principles urge a soft touch in government regulation of AI. “We encourage governments to evaluate existing policy tools and use caution before adopting new laws, regulations, or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI,” the document states.
But self-regulation by the industry is “far from ideal,” according to Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University.
“It’s letting the fox guard the henhouse,” Lin told Gizmodo. “There’s no teeth to enforce self-regulations if a company breaks rank; there may be even less transparency than with government regulators; and many other problems.”
In comments on the new policy document, Garfield said there is room for flexibility and collaboration between government and the technology industry.
“We look forward to working with lawmakers, academia, industry partners, and the general public on the exciting road ahead,” he added in a statement.