Technology circles are awash with talk about AI risks, ethics, responsibility, and trust. Principles and frameworks abound. But these are proving awkward in practice, partly because the purpose, nature and inner workings of AI and algorithmic systems can easily be shielded from view.

As a result, users, auditors, regulators, legislators, and others often have little or no idea how these systems work or what their impact is until they backfire or are publicly exposed by researchers, employee leaks or backlashes, white-hat hackers, malicious data breaches, FOI requests, public inquiries, or litigation.

An independent, non-partisan, non-profit initiative, AIAAIC examines and makes the case for meaningful AI, algorithmic and automation transparency and openness.

Specifically, AIAAIC believes that everyone should know when they are using or being assessed, nudged, instructed, or coerced by an AI or algorithmic system, understand how the system works, appreciate its impact, and be in a position to make informed decisions based on clear, accurate, concise and timely information.

AIAAIC Repository

One way AIAAIC does this is by collecting examples of incidents and controversies driven by and relating to AI, algorithms and automation.

A free, open library detailing 750+ negative events since 2012, the AIAAIC Repository is used by researchers, academics, NGOs, policymakers, and industry experts.

It is used to conduct qualitative and quantitative research; inform analysis and commentary; develop case studies; devise training and education programmes; and develop risk-based products and services, including incident response and crisis plans.

CIPR members are welcome to use, copy, redistribute and adapt the repository, subject to the terms of its Creative Commons attribution license.

AIAAIC also welcomes volunteers passionate about advancing the cause of AI and algorithmic transparency and openness.

Opportunities include contributing to the AIAAIC Repository, researching technology transparency trends and best practices, and making the case to opinion-formers and decision-makers.

First published in INFLUENCE


Further information

A new FICO/Corinium study of more than 100 companies with revenues of USD 100 million or more, surveyed on how they are operationalising AI, finds that nearly 70% are unable to explain how their AI models work. More concerning still, 65% say they make little or no effort to make their systems transparent and accountable.

Furthermore, 78% said they were “poorly equipped to ensure the ethical implications of using new AI systems” and “have problems getting executive support for prioritizing AI ethics and responsible AI practices.”

Algorithmic opacity is normal

The reluctance to communicate transparently and openly with external audiences stems from a variety of concerns – some legitimate, others little more than convenient pretexts.

The most common concerns involve the loss of intellectual property and potential or actual competitive advantage; greater vulnerability to cyberattacks and to gaming by users, trolls and activists; and the protection of user privacy.

There are also concerns that providing public information about how their systems work, and setting out their limitations and risks, further exposes companies to operational, legal and reputational risks.

This information may include the sources and use of data, the real purpose of their technologies and their primary and secondary intended impacts (such as productivity efficiencies and job losses), how bias and other risks have been mitigated, the scope for dual or misuse, and the degree of human oversight.

Transparency risks are growing

With bias difficult, if not impossible, to eliminate, misinformation, harassment and other dual uses rampant, and the secondary impacts of RPA and other automation programmes frequently downplayed or hidden, it is hardly surprising that most companies are reluctant to manage ethical risks in a meaningful manner, or to say much about their systems.

In staying silent, companies risk appearing unconcerned about the impact of their activities and more preoccupied with the risks to themselves than with those to the users and targets of their products and services.

Transparency laggards exist in every sphere and organisations developing and deploying AI are little different.

But with users able to complain publicly and switch services easily, and with mandatory AI transparency legislation being proposed in the US Congress and the EU, organisations are going to have to manage and publicly disclose AI risks, and communicate a good deal more openly and genuinely.



The UK government’s use of an algorithm to grade student exam results sent students onto the streets and generated swathes of negative media coverage. Many grades were seen as unfair, even arbitrary. Others argued that the algorithm and the grades it produced merely reflected a broken educational system.

The government would do well to understand the root causes of the problem and make substantive changes in order to stop it happening again. It also needs to regain the confidence and trust of students, parents, teachers, and the general public.

Whilst the government appears reluctant to tackle some of the deeper challenges facing education, it has wisely scrapped the use of algorithms for next year’s exams.

And now the UK’s Office for Statistics Regulation has issued its analysis of what went wrong, highlighting the need for government and public bodies to build public confidence when using statistical models.

Unsurprisingly, transparency and openness feature prominently in the OSR’s recommendations. Specifically, exam regulator Ofqual and the government are praised for regular, high-quality communication with schools and parents, but criticised for poor transparency about the model’s limitations, risks and appeals process.

Ofqual is no outlier. Much talked about as an ethical principle and imperative, AI and algorithmic transparency remains elusive and, if research by Capgemini is accurate, has been getting worse.

The UK exam grade meltdown shows that good communication (aka openness) must go hand in hand with meaningful transparency if confidence and trust in algorithmic systems are to be attained. One is of little use without the other. And the two must be consistent.


Matt Hancock and Priti Patel hit the headlines this week for mishandling journalists’ questions about COVID-19.

Hancock repeatedly dodged Piers Morgan’s questions on why he had voted against extending Marcus Rashford’s free school meals initiative in Parliament by circling back to the government ‘sorting it out’ and ‘putting it in place’.

And Priti Patel used a blizzard of words to avoid answering questions from two journalists about why the current lockdown restrictions are less severe than those during the first lockdown last year, given that infection and fatality rates are higher now.

Looping back to key messages and obfuscating are well-known techniques for avoiding media interview questions. Yet Hancock and Patel only succeeded in making themselves appear slippery and evasive.

Both also came across as poorly prepared, which should not have been the case given the volume of media interviews their jobs involve, the challenging and sometimes controversial nature of their policy briefs, a highly charged atmosphere, and the army of PR support they can draw on.

Bound by collective cabinet responsibility, presumably Hancock and Patel did not want to be seen to be breaking ranks in public, or to appear weak.

How could Hancock and Patel have responded?

Hancock’s challenge was the more straightforward. After all, the government had already publicly u-turned, having had its hand forced by Rashford.

When asked whether he regretted voting against extending free school meals, Hancock could simply have said: ‘Yes, knowing what I know now about the difficulties many families are in, not least in the context of covid, I would not have voted against it. My mistake, our mistake’.

This would have acknowledged the error and conveyed some semblance of empathy towards poor families.

Given questions from many quarters about the effectiveness of the current lockdown, and widespread speculation that the government would be forced to tighten restrictions, Patel faced the trickier proposition.

Asked whether the current lockdown rules were sufficient, she could have said ‘We are always keeping a close eye on how effective the rules are proving. Whilst it seems that most people are behaving very sensibly and sticking closely to the rules, we may indeed have to tighten them if infection rates continue to rise.’

Handling awkward questions from the media when your organisation has u-turned, or may be about to, is never easy. But it need not be a car crash, provided you are prepared to acknowledge the change of direction and to accept responsibility, individually or collectively.