How to Create a Fact-Checking Process for AI-Generated Content
As associations get comfortable using generative AI, it’s important to know whether the information these tools output is correct. An expert shares how associations can verify accuracy and train staff and members to make smart decisions when using this technology.
Generative AI can be useful for organizations when it comes to writing, task automation, customer service, and many other areas. However, these tools can also generate both disinformation and misinformation.
According to Lisa Rau, cofounder and chief growth officer at Fíonta, it’s difficult to identify these falsehoods.
“AI has an authoritative ‘voice,’ and [false information] may be surrounded by other obviously true information,” she said.
On top of that, generative AI is prone to “hallucinations,” in which it fabricates information and presents it as fact. But that’s just the tip of the iceberg.
“Generative AI is vulnerable to purposeful manipulations where people try to inject bad information into the models, then the system gets trained on the false models,” Rau said.
For those reasons, it’s important that organizations provide effective training to help staff and members identify false information. Knowing how to detect these issues can not only protect an organization’s reputation but also reduce the risk of potential legal exposure from publishing false or misleading information.
Accuracy and Fact Checking
While there are automated fact checkers, including Google’s Fact Check Tool, they are not always accurate. Bringing humans into the equation is a better solution; just make sure to get several heads together.
To avoid publishing factually incorrect content, have staff reach out to sources and check quotes and attributions generated by ChatGPT. Rau suggests consulting multiple sources to confirm accuracy, such as reputable news outlets, academic articles, government publications, or industry reports. Search engines like Google also tend to surface reputable sources first.
“It’s important to triangulate: verify facts, statistics, and numbers from at least two or three different sources. You can also use fact-checking websites like Snopes, FactCheck.org, or PolitiFact,” Rau said. “Essentially, you’re becoming an editor or fact-checker through this process.”
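For teams that want to build an automated first pass into this triangulation workflow, Google exposes a public claims:search endpoint in its Fact Check Tools API that returns published fact-check reviews for a claim. The sketch below is a minimal illustration, not a substitute for human review; the API key is a placeholder, and the response fields used (`claims`, `claimReview`, `publisher`, `textualRating`) reflect the API’s documented JSON shape.

```python
"""Minimal sketch: look up published fact-check reviews for a claim
using Google's Fact Check Tools API (claims:search endpoint).
A real API key is required; "YOUR_API_KEY" below is a placeholder."""
import json
import urllib.parse
import urllib.request

API_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"


def build_query_url(claim: str, api_key: str, page_size: int = 5) -> str:
    """Construct the claims:search request URL for a claim string."""
    params = urllib.parse.urlencode(
        {"query": claim, "pageSize": page_size, "key": api_key}
    )
    return f"{API_URL}?{params}"


def check_claim(claim: str, api_key: str) -> list[dict]:
    """Return publisher names and textual ratings for reviews of the claim."""
    with urllib.request.urlopen(build_query_url(claim, api_key)) as resp:
        data = json.load(resp)
    results = []
    for c in data.get("claims", []):
        for review in c.get("claimReview", []):
            results.append(
                {
                    "publisher": review.get("publisher", {}).get("name"),
                    "rating": review.get("textualRating"),
                }
            )
    return results


# Example usage (requires a real API key):
# for r in check_claim("vitamin C cures the common cold", "YOUR_API_KEY"):
#     print(r["publisher"], "-", r["rating"])
```

Even when the API returns a match, Rau’s advice still applies: treat any single result as one of the two or three sources you triangulate against, not as the final word.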
Training and Testing
To guide staff and volunteers through this process, make sure not only to have generative AI policies and procedures in place but also to offer frequent training on the technology.
Providing an overview course that explains how generative AI works and where its knowledge comes from can help people better understand how and why some content is false, biased, or misleading. According to Rau, staff and members should leave training with a better appreciation of why it’s important to check the AI’s output.
“Understanding the ways in which generative AI fails can help staff to pay attention to different types of issues,” Rau said. “For example, reading for implicit bias, considering if other points of view are represented, and assessing the rhetorical effect of the output.”
Once formal training is over, continue to get people comfortable with fact-checking by running random checks and tests. Tests should cover how GPTs work and their limitations, as well as bias, hallucinations, and areas of liability. It may also be useful to have staff or members sign up for online courses and earn certifications in professional copyediting, proofreading, and fact-checking.
“I recommend upskilling in copyediting and fact-checking because these are areas they need to be trained on,” Rau said. “Staff and members need clear policies on your expectations regarding fact-checking, and they need to be tested so it’s important they know the material and understand it.”