Artificial intelligence and algorithms can help tackle climate change, strengthen cybersecurity, and improve customer service, among many other applications.
On the other hand, limitations in fairness, safety and security, together with a perceived lack of transparency and accountability, mean these technologies can damage the rights and interests of citizens, consumers and others. As such, they pose significant potential risks to the organisations that design, develop and deploy them.
As the adoption of AI and AI-related technologies becomes more mainstream, public awareness grows and diversifies, opinion consolidates and legislation hardens, these risks are likely to become increasingly reputational in nature.
In the absence of clear, objective, structured information and data in the public domain on the risks and impacts of AI, I have developed an online repository of incidents and controversies driven by, or relating to, artificial intelligence, algorithms and automation.
An open tool used by journalists, researchers, academics, NGOs, businesses and others, the repository contains details of hundreds of AI-driven incidents and controversies and is updated regularly.
Users may use, copy, remix and share the repository's contents under the terms of its licence.
Fair, accurate and supportable contributions in English are welcome.
Contact Charlie Pownall if you have any questions, comments, or suggestions about the AI controversy repository, or to discuss your AI reputational and/or communications requirements.