
A new FICO/Corinium study of more than 100 companies with annual revenues above USD 100 million finds that nearly 70% of those surveyed on how they are operationalising AI are unable to explain how their AI models work. More concerning still, it finds 65% say they make little or no effort to make their systems transparent and accountable.

Furthermore, 78% said they were “poorly equipped to ensure the ethical implications of using new AI systems” and “have problems getting executive support for prioritizing AI ethics and responsible AI practices.”

Algorithmic opacity is normal

The reluctance to communicate transparently and openly with external audiences stems from a variety of concerns – some legitimate, others little more than convenient pretexts.

The most common concerns involve the loss of intellectual property and potential or actual competitive advantage; greater vulnerability to cyberattacks and to gaming by users, trolls and activists; and the protection of user privacy.

There are also concerns that providing public information about how their systems work, and setting out their limitations and risks, further exposes companies to operational, legal and reputational risk.

This information may include the sources and use of data, the real purpose of their technologies and their primary and secondary intended impacts (such as productivity efficiencies and job losses), how bias and other risks have been mitigated, the scope for dual or misuse, and the degree of human oversight.

Transparency risks are growing

With bias difficult, if not impossible, to eliminate, misinformation, harassment and other dual uses rampant, and the secondary impacts of RPA and other automation programmes frequently downplayed or hidden, it is hardly surprising that most companies are reluctant to manage ethical risks in a meaningful manner, or to say much about their systems.

In doing so, companies risk appearing unconcerned about the impact of their activities and more preoccupied with the risks to themselves than with those to the users or targets of their products and services.

Transparency laggards exist in every sphere and organisations developing and deploying AI are little different.

But with users able to complain publicly and switch services easily, and with mandatory AI transparency legislation being proposed in the US Congress and the EU, organisations are going to have to manage and publicly disclose AI risks, and communicate a good deal more openly and genuinely.


  • First published in AIAAIC Alert, the fortnightly newsletter examining AI, algorithmic and automation trust and transparency
  • Download The State of Responsible AI: 2021

The UK government’s use of an algorithm to grade student exam results sent students onto the streets and generated swathes of negative media coverage. Many grades were seen as unfair, even arbitrary. Others argued that the algorithm and the grades it produced were a reflection of a broken educational system.

The government would do well to understand the root causes of the problem and make substantive changes in order to stop it happening again. It also needs to regain the confidence and trust of students, parents, teachers, and the general public.

Whilst the government appears reluctant to tackle some of the deeper challenges facing education, it has wisely scrapped the use of algorithms for next year’s exams.

And now the UK’s Office for Statistics Regulation has issued its analysis of what went wrong, highlighting the need for government and public bodies to build public confidence when using statistical models.

Unsurprisingly, transparency and openness feature prominently in the OSR’s recommendations. Specifically, exam regulator Ofqual and the government are praised for regular, high-quality communication with schools and parents, but criticised for poor transparency on the model’s limitations, risks and appeal process.

Ofqual is no outlier. Much talked about as an ethical principle and imperative, AI and algorithmic transparency remains elusive and, if research by Capgemini is accurate, has been getting worse.

The UK exam grade meltdown shows that good communication (aka openness) must go hand in hand with meaningful transparency if confidence and trust in algorithmic systems are to be attained. The one is redundant without the other. And they must be consistent.


  • First published in AIAAIC Alert, the fortnightly newsletter examining AI, algorithmic and automation trust and transparency
  • Watch/listen to the Ada Lovelace Institute’s webinar on the OSR review

Concern is widespread that artificially generated ‘deepfake’ videos pose a major potential problem for those targeted, be they companies, CEOs, celebrities, academics and commentators, or politicians.

A new study of 14,678 deepfake videos by cybersecurity company Deeptrace suggests otherwise. Deepfakes may generate millions of views, yet the great majority (96%) are pornographic and have little wider societal impact.

Of those that are not pornographic, such as clips made with Chinese face-swapping app Zao or a recent spoof of former Italian PM Matteo Renzi, most are designed to entertain. Only a tiny minority have been expressly designed to sow misinformation or disinformation, or to damage reputation.

The reputational threat of deepfakes

This may change all too soon. Deepfakes are increasingly realistic, freely available, and easy to make. Artificial voice company Lyrebird promises it can create a digital voice that sounds like you in a few minutes (even if my own voice apparently proved less than straightforward).

It is surely only a matter of time before we see more regular instances of deepfakes damaging – directly or indirectly – companies, governments and individuals through false or misleading news stories, hoaxes and reputational attacks.

One recent example: controversial Canadian psychology professor Jordan Peterson found himself at the mercy of a website where anyone could generate clips of themselves talking in his voice, forcing him to threaten legal action. The simulator has since been taken offline.

In another case, a political private secretary in the Malaysian government was arrested over a video allegedly showing him having illegal gay sex with the country’s minister of economic affairs. The country’s leader responded by saying the video had been ‘cooked up’, but it remains unproven whether it was manipulated.

Reputational risks of deepfakes for companies include:

  • A fake CEO town hall video regarding the new company strategy is ‘leaked’ to the outside world, allegedly by a short seller
  • The voice of a politician is used to manipulate a senior director into discussing allegations of corporate fraud
  • A fake recording of two executive board directors discussing the sexual habits of a colleague is used to blackmail the company
  • An outsider gains entrance to a secured office by impersonating the voice of a company employee.

Spread via the internet and social media, and exploiting distrust in institutions and deep geopolitical tensions, malevolent deepfakes pose risks that are only now starting to emerge.

While the likelihood of a deepfake attack remains low in the short term, and its impact remains hard to quantify, every organisation would be wise to start considering what such an attack could mean for its name and image.

Deepfakes are only one form of AI, though they arguably pose the most direct reputational risk.

I am collecting examples of AI risks in the public domain via my AI and Algorithmic Incident and Controversy Repository.

Accurate and fair contributions are welcome. 
