
Technology circles are awash with talk about AI risks, ethics, responsibility, and trust. Principles and frameworks abound. But these are proving awkward in practice, partly because the purpose, nature and inner workings of AI and algorithmic systems can easily be shielded from view.

As a result, users, auditors, regulators, legislators, and others often have little or no idea how these systems work or what their impact is until they backfire or are publicly exposed by researchers, employee leaks or backlashes, white-hat hackers, malicious data breaches, FOI requests, public inquiries, or litigation.

An independent, nonpartisan, non-profit initiative, AIAAIC examines and makes the case for meaningful AI, algorithmic and automation transparency and openness.

Specifically, AIAAIC believes that everyone should know when they are using or being assessed, nudged, instructed, or coerced by an AI or algorithmic system, understand how the system works, appreciate its impact, and be in a position to make informed decisions based on clear, accurate, concise and timely information.

AIAAIC repository

One way AIAAIC does this is by collecting examples of incidents and controversies driven by and relating to AI, algorithms and automation.

A free, open library detailing 700+ negative events since 2012, the AIAAIC repository is used by researchers, academics, NGOs, policymakers, regulators and lawyers, as well as several household-name multinationals.

It is used to conduct qualitative and quantitative research; inform analysis and commentary; develop case studies; devise training and education programmes; and develop risk-based products and services, including incident response and crisis plans.

CIPR members are welcome to use, copy, redistribute and adapt the repository, subject to the terms of its Creative Commons attribution license.

AIAAIC also welcomes volunteers passionate about advancing the cause of AI and algorithmic transparency and openness. Opportunities include contributing to the AIAAIC repository, researching technology transparency trends and best practices, and making the case to opinion-formers and decision-makers.

Visit http://aiaaic.org for further information.

First published by INFLUENCE

A new FICO/Corinium study finds that nearly 70% of the 100+ companies surveyed (each with USD 100m+ in revenue) on how they are operationalising AI are unable to explain how their AI models work. More concerning, it finds 65% say they make little or no effort to make their systems transparent and accountable.

Furthermore, 78% said they were “poorly equipped to ensure the ethical implications of using new AI systems” and “have problems getting executive support for prioritizing AI ethics and responsible AI practices.”

Algorithmic opacity is normal

The reluctance to communicate transparently and openly with external audiences stems from a variety of concerns – some legitimate, others little more than convenient pretexts.

The most common concerns involve the loss of intellectual property and potential or actual competitive advantage; greater vulnerability to cyberattacks and to gaming by users, trolls and activists; and the protection of user privacy.

There are also concerns that providing public information about how their systems work and setting out their limitations and risks exposes companies more to operational, legal and reputational risks.

This information may include the sources and use of data, the real purpose of their technologies and their primary and secondary intended impacts (such as productivity efficiencies and job losses), how bias and other risks have been mitigated, the scope for dual or misuse, and the degree of human oversight.

Transparency risks are growing

With bias difficult if not impossible to eliminate, misinformation, harassment and other dual uses rampant, and the secondary impacts of RPA and other robotic programmes frequently played down or hidden, it is hardly surprising that most companies are reluctant to manage ethical risks in a meaningful manner, or to say much about their systems.

In staying silent, companies risk appearing unconcerned about the impact of their activities and more preoccupied with the risks to themselves than with those to the users or targets of their products and services.

Transparency laggards exist in every sphere and organisations developing and deploying AI are little different.

But with users able to complain publicly and switch services easily, and with mandatory AI transparency legislation being proposed in the US Congress and the EU, organisations are going to have to manage and publicly disclose AI risks, and communicate a good deal more openly and genuinely.


  • First published in AIAAIC Alert, the fortnightly newsletter examining AI, algorithmic and automation trust and transparency
  • Download The State of Responsible AI: 2021

The UK government’s use of algorithms to grade student exam results resulted in students taking to the streets and generated swathes of negative media coverage. Many saw the grades as unfair, even arbitrary; others argued the algorithms and grades simply reflected a broken educational system.

The government would do well to understand the root causes of the problem and make substantive changes in order to stop it happening again. It also needs to regain the confidence and trust of students, parents, teachers, and the general public.

Whilst the government appears reluctant to tackle some of the deeper challenges facing education, it has wisely scrapped the use of algorithms for next year’s exams.

And now the UK’s Office for Statistics Regulation has issued its analysis of what went wrong, highlighting the need for government and public bodies to build public confidence when using statistical models.

Unsurprisingly, transparency and openness feature prominently in the OSR’s recommendations. Specifically, exam regulator Ofqual and the government are praised for regular, high quality communication with schools and parents but criticised for poor transparency on the model’s limitations, risks and appeal process.

Ofqual is no outlier. Much talked about as an ethical principle and imperative, AI and algorithmic transparency remains elusive and, if research by Capgemini is accurate, has been getting worse.

The UK exam grade meltdown shows that good communication (aka openness) must go hand in hand with meaningful transparency if confidence and trust in algorithmic systems are to be attained. The one is redundant without the other. And they must be consistent.


  • First published in AIAAIC Alert, the fortnightly newsletter examining AI, algorithmic and automation trust and transparency
  • Watch/listen to the Ada Lovelace Institute’s webinar on the OSR review

Matt Hancock and Priti Patel hit the headlines this week for mishandling journalists’ questions about COVID-19.

Hancock repeatedly dodged Piers Morgan’s questions on why he had voted against extending Marcus Rashford’s free school meals initiative in Parliament by circling back to the government ‘sorting it out’ and ‘putting it in place’.

And Priti Patel used a blizzard of words to avoid answering questions from two journalists about why current lockdown restrictions are less severe than those during the first lockdown last year, given virus infection and fatality rates are higher now.

Looping back to key messages and obfuscation are well-known media interview avoidance techniques. Yet Hancock and Patel only succeeded in making themselves appear slippery and evasive.

Both also came across as poorly prepared. Which should not have been the case given the volume of media interviews both jobs involve, the challenging and sometimes controversial nature of their policy briefs, a highly charged atmosphere, and the army of PR support they can draw on.

Bound by collective cabinet responsibility, presumably Hancock and Patel did not want to be seen to be breaking ranks in public, or to appear weak.

How could Hancock and Patel have responded?

Hancock’s challenge was the more straightforward. After all, the government had already publicly U-turned, having had its hand forced by Rashford.

When asked whether he regretted voting against extending free school meals, Hancock could simply have said: ‘Yes, knowing what I know now about the difficulties many families are in, not least in the context of covid, I would not have voted against it. My mistake, our mistake’.

Which would acknowledge the error and provide some semblance of empathy towards poor families.

Given questions from many quarters about the effectiveness of the current lockdown, and widespread speculation that the government would be forced to tighten restrictions, Patel faced the trickier proposition.

Asked whether the current lockdown rules were sufficient, she could have said ‘We are always keeping a close eye on how effective the rules are proving. Whilst it seems that most people are behaving very sensibly and sticking closely to the rules, we may indeed have to tighten them if infection rates continue to rise.’

Handling awkward questions from the media when your organisation has u-turned or may be about to u-turn can be a tricky proposition. But it need not be a car crash provided you are prepared to acknowledge the change of direction and individually or collectively accept responsibility.

It’s been quite the week for apologies. Singer Rita Ora hosted a flashy 30th birthday lockdown party which was promptly shut down by the police. And news of politician Joszef Szajer’s sizzling Brussels (s)exploits burst into the mainstream consciousness. Their apologies had very different results.

Ora quickly took to Instagram to express what comes across as a wholehearted and genuine mea culpa.

Szajer’s apology appeared two days after he had somewhat mysteriously resigned as an MEP, and comes across as stilted and reluctant:

“I regret that I broke the lockdown rules, that was irresponsible of me, and I will accept the sanctions that result”.

His reticence almost certainly stems from the salacious nature of his activities, and from hypocrisy of a kind that makes John Major’s back-to-basics frolics appear like a walk in the park.

It hardly needs saying that tone counts for much when you are saying sorry: being seen to apologise sincerely, acknowledging where you’ve gone wrong, and taking responsibility for your actions all matter.

Apologies and the law

Tonal differences aside, Ora and Szajer’s statements have one thing in common: both state they accept the consequences of their actions.

This was almost certainly prompted by both parties being caught red-handed by the police.

Yet many apologies are never made out of fear of legal liability, and those that are made often avoid any admission of guilt. And as such they can easily end up as tokenistic.

As it happens, John Howell MP also introduced a private member’s bill to the House of Commons this week that ‘allows an apology to be given that is genuinely and sincerely meant without creating a legal liability that would run into millions of pounds.’

The policy driver, Howell states, is that ‘apologies can often unlock disputes and lead to settlements without recourse to formal legal action’.

This is a commendable initiative. An apology is already a statutory, professional and legal requirement in cases of NHS clinical negligence. And as Howell points out, apology laws already exist in multiple US states, Australia, Canada and elsewhere.

Howell’s recommended solution is less litigation and more arbitration and mediation. Again, the prospect of less media intrusion, lower legal fees and less pressure on our overloaded courts of justice seems eminently sensible.

Appreciate who you’re apologising to

All this is well and good in a commercial context in which big money is at stake. But it doesn’t much help ordinary individuals who are left to the mercy of the crowd and, in Ora and Szajer’s case, the mercy of the police.

To date, Rita Ora has escaped a fine, though the restaurant faces a police investigation. Szajer, on the other hand, has resigned as an MEP and been forced to leave his political party.

Neither apology appears likely to sway the police one way or the other, but each may help sway the general public and others, who are arguably their principal audiences.

While Rita Ora may have made a stupid mistake, her apology has won her at least one new fan. Meantime, Joszef Szajer is licking his wounds.

John Howell’s bill will have its second reading in March 2021. A more constructive and less legalistic environment in which an apology can be made freely and meaningfully is surely in most people’s interests.

UPDATE: It has emerged that Rita Ora has broken lockdown rules a second time, triggering a second apology.
