Artificial intelligence and algorithms can help tackle climate change, strengthen cybersecurity, and improve customer service, among many other applications.
On the other hand, limitations in fairness, safety and security, together with a perceived lack of transparency and accountability, mean these technologies can damage the rights and interests of citizens, consumers and others, and may therefore pose significant risks to the organisations that design, develop and deploy them.
As the adoption of AI and AI-related technologies becomes more mainstream, awareness diversifies and grows, public opinion consolidates and legislation hardens, the risks are likely to become more reputational in nature.
AI, algorithmic and automation incident and controversy repository
Surprisingly little clear, objective, structured information and data exists in the public domain on the limitations, consequences and risks of artificial intelligence, algorithms and automation.
Accordingly, I have developed the AI, Algorithmic and Automation Incident & Controversy (‘AIAAIC’) Repository, an independent, open library of 650+ incidents and controversies driven by and relating to AI and AI-related technologies across the world since 2012.
The repository is regularly updated and aims to be accurate and fair. It does not claim to be comprehensive and does not cover super-intelligence and related meta controversies.
Who uses the repository
The AIAAIC is used by researchers, academics, NGOs, journalists, and policy makers for reference and research.
It is also used by business managers, risk managers, lawyers, reputation managers and others looking to deepen their understanding of AI risks and to apply the lessons to their own organisations.
What to use the repository for
The AIAAIC repository can be used for multiple purposes, such as:
- to inform and support analysis and commentary
- to conduct qualitative or quantitative research
- to develop case studies
- to develop training and education programmes
- to develop methodologies, frameworks and other tools
- to predict future trends.
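As an illustration of the quantitative research use case, the sketch below counts incidents per year from a CSV export of the repository. The column names ("Headline", "Year") and the sample rows are illustrative assumptions, not the repository's actual schema or data.

```python
# Hypothetical sketch: tallying incidents by year from a CSV export of an
# incident repository. The schema and sample rows below are assumptions
# made for illustration only.
import csv
import io
from collections import Counter

# Inline stand-in for a downloaded CSV export (illustrative entries).
sample_csv = """Headline,Year
Facial recognition trial halted,2019
Chatbot withdrawn after offensive output,2016
Automated grading controversy,2020
Recruitment algorithm scrapped,2018
Credit limit bias complaints,2019
"""

reader = csv.DictReader(io.StringIO(sample_csv))
incidents_per_year = Counter(row["Year"] for row in reader)

for year, count in sorted(incidents_per_year.items()):
    print(year, count)
```

With a real export, replacing the inline string with an open file handle would give a quick view of how incident volume has changed over time.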
Real-world examples of its use include:
- Partnership on AI: AI incident database
- Responsible AI Institute: Map of helpful and harmful AI
- ETAMI: development of a risk-based classification system for AI applications
- We and AI: Research study on the personal risks of AI
- Asia-based healthcare company: incident and crisis plan development
- UK-based law firm: development of an AI legal case library.
Terms and contributions
The AIAAIC repository is a free, open resource which anyone can use, copy, redistribute and adapt under the terms of its CC BY 4.0 licence.