
Reputation risk management

A new FICO/Corinium study surveying 100+ companies with USD 100m+ in revenues on how they are operationalising AI finds that nearly 70% are unable to explain how their AI models work. More concerning still, it finds that 65% say they make little or no effort to make their systems transparent and accountable.

Furthermore, 78% said they were “poorly equipped to ensure the ethical implications of using new AI systems” and “have problems getting executive support for prioritizing AI ethics and responsible AI practices.”

Algorithmic opacity is normal

The reluctance to communicate transparently and openly with external audiences stems from a variety of concerns – some legitimate, others little more than convenient pretexts.

The most common concerns involve the loss of intellectual property and potential or actual competitive advantage; greater vulnerability to cyberattacks and to gaming by users, trolls and activists; and the protection of user privacy.

There are also concerns that providing public information about how their systems work, and setting out their limitations and risks, further exposes companies to operational, legal and reputational risks.

This information may include the sources and use of data, the real purpose of their technologies and their primary and secondary intended impacts (such as productivity efficiencies and job losses), how bias and other risks have been mitigated, the scope for dual use or misuse, and the degree of human oversight.

Transparency risks are growing

With bias difficult, if not impossible, to eliminate, misinformation, harassment and other dual uses rampant, and the secondary impacts of RPA and other robotic programmes frequently sidestepped or hidden, it is hardly surprising that most companies are reluctant to manage ethical risks in a meaningful manner, or to say much about their systems.

By doing so, companies risk appearing unconcerned about the impact of their activities and more preoccupied with the risks to themselves than with those to the users or targets of their products and services.

Transparency laggards exist in every sphere and organisations developing and deploying AI are little different.

But with users able to complain publicly and switch services easily, and with mandatory AI transparency legislation being proposed in the US Congress and the EU, organisations are going to have to manage and publicly disclose AI risks, and communicate a good deal more openly and genuinely.


  • First published in AIAAIC Alert, the fortnightly newsletter examining AI, algorithmic and automation trust and transparency
  • Download The State of Responsible AI: 2021

The UK government’s use of algorithms to grade student exam results saw students take to the streets and generated swathes of negative media coverage. Many grades were seen as unfair, even arbitrary. Others argued that the algorithms, and the grades they produced, reflected a broken educational system.

The government would do well to understand the root causes of the problem and make substantive changes in order to stop it happening again. It also needs to regain the confidence and trust of students, parents, teachers, and the general public.

Whilst the government appears reluctant to tackle some of the deeper challenges facing education, it has wisely scrapped the use of algorithms for next year’s exams.

And now the UK’s Office for Statistics Regulation has issued its analysis of what went wrong, highlighting the need for government and public bodies to build public confidence when using statistical models.

Unsurprisingly, transparency and openness feature prominently in the OSR’s recommendations. Specifically, exam regulator Ofqual and the government are praised for regular, high-quality communication with schools and parents but criticised for poor transparency on the model’s limitations, risks and appeal process.

Ofqual is no outlier. Much talked about as an ethical principle and prerogative, AI and algorithmic transparency remains elusive and, if research by Capgemini is accurate, has been getting worse.

The UK exam grade meltdown shows that good communication (aka openness) must go hand in hand with meaningful transparency if confidence and trust in algorithmic systems are to be attained. The one is redundant without the other. And they must be consistent.


  • First published in AIAAIC Alert, the fortnightly newsletter examining AI, algorithmic and automation trust and transparency
  • Watch/listen to the Ada Lovelace Institute’s webinar on the OSR review

By any measure, payment processor Wirecard’s demise has been dramatic. Within a month of KPMG refusing to verify the company’s accounts, CEO Markus Braun had resigned and the company had filed for insolvency with debts of over GBP 3 billion.

Given that the writing had been on the wall for over five years, plenty of tricky questions are now being asked of Wirecard’s management and its business partners. Its long-term auditor EY and Germany’s banking regulator are in the firing line. There is widespread talk of another Enron.

Professional services in the spotlight

Other advisors are also attracting criticism, including Wirecard’s crisis law firm and PR agency. Ordinarily, legal and communications firms manage to keep their names out of the media spotlight during major controversies involving their clients.

In part, this can be ascribed to an unwritten convention between the mainstream media and the hands that feed it: every organisation has the right to be heard, however unsavoury its reputation, and its advisors are only doing what is expected of them.

With one man’s meat being another man’s poison, it remains to be seen whether Wirecard’s dramatic collapse hinders or benefits its crisis advisors.

However, with corporate governance, responsibility, and transparency in the spotlight as never before, it is incumbent on consulting companies of all stripes to ask difficult questions of clients before they are engaged, not afterwards.

As Arthur Andersen discovered with Enron, not asking difficult questions upfront can prove a perilous defence.

Last week I had the good fortune to be invited to speak on the topic of reputational risk management to MBA students and assorted internal auditors, risk managers, HR and communications executives at the Othman Yeop Abdullah Graduate School of Business at the Universiti Utara Malaysia in Kuala Lumpur.

Reputation risk may not be as high up the agenda of boards of directors and management teams in Malaysia as in some other countries, but it has gained importance in recent years due largely to two major crises:

  • the 1MDB scandal that led directly to the overturning of the Malaysian government, the arrest and forthcoming trial of former prime minister Najib Razak, fraud investigations in 10+ countries, and criminal charges laid against Goldman Sachs and two of its former employees
  • and the various woes befalling Malaysia Airlines (here’s my take on the mystery of MH370 from an online/social media perspective; if you haven’t already, I strongly recommend you read this in The Atlantic for what may well be the last word on the tragedy).

Whilst both remain unresolved, the crises have helped erode confidence and trust in institutions in Malaysia and raised (and continue to raise) legitimate questions about how Malaysia Inc – which is still largely dominated by a few family-controlled businesses – operates.

Accordingly, companies (especially government-owned or linked ones) and parts of government and civil society are actively considering the extent to which they are exposed to reputational risks, and thinking harder about how these should be minimised and managed.

The whys and hows of effective reputation risk management

Predicting and managing reputational risks poses a wealth of tricky questions and challenges – amongst them:

  • How should reputation risk be defined?
  • What are the primary drivers of corporate reputation?
  • What forms do these risks take?
  • Who is responsible for an organisation’s overall reputation?
  • Who should own corporate reputation on a day-to-day basis?
  • What role(s) should communications and marketing play in reputation risk management?
  • How best to measure, track and report reputational threats?
  • Why can leaders be reluctant to get to the root of reputational issues?

I tackled these and other challenges in my presentation, setting out solutions based on my professional experience, research and observation.

Here are my slides:

Fortunately, trust in Malaysia appears to have been restored to some degree over the last eighteen months.

However it is clear that organisations based in Malaysia – and elsewhere – continue to grapple with the strategic, governance and operational challenges reputation risk management inevitably raises.

The past few days have seen the Metropolitan Police in London, the FBI and US Immigration and Customs Enforcement (ICE) hauled over the coals for appearing to use inaccurate and non-consensual facial recognition technologies.

In the face of hostile media reports, public concerns about AI in general and complaints about their programmes specifically, as well as ongoing litigation, all three organisations have doubled down on the appropriateness and legality of their actions.

Their reaction is hardly surprising. The artificial intelligence (AI) that underpins these technologies is largely unregulated. And the general public is only starting to become aware of its benefits and risks, is largely sceptical of its promises, and is concerned about some of its potential impacts.

The looming tower of AI

The benefits of AI are many. It can help tackle climate change, strengthen cybersecurity, improve customer service and reduce the volume of abusive comments on Facebook, Instagram and other social media platforms, amongst all manner of other applications.

However, as Stanley Kubrick highlighted in his 1968 film 2001: A Space Odyssey in the form of HAL 9000, AI poses substantial risks.

These risks include:

  • unfair or discriminatory algorithms
  • unreliable or malfunctioning outcomes
  • misuse of personal or confidential data
  • greater exposure to cyberattacks
  • loss of jobs
  • legal risks and liabilities
  • direct and indirect reputational risks, including malicious deepfakes.

It is likely that these risks will become greater and more reputational in nature as the adoption of AI technologies becomes more mainstream, awareness diversifies and grows, and public opinion consolidates.


In addition, the risk management industry is looking at AI from a risk perspective, and the PR/communications industry from a communications perspective.

AI reputation management research study

However, little has been published on the reputational threats posed by AI, or on how these should be managed should an incident or crisis occur – an important topic given the volume of AI controversies and the general focus on corporate behaviour and governance.

Accordingly, I am pulling together examples of controversies driven by or relating to artificial intelligence for an initial report, research study and white paper on the topic.

To kick-start the process, I am crowdsourcing information on the nature and impact of recent incidents through an AI and algorithmic incident and controversy repository.

The repository is open, and your contribution is welcome. Given the sensitivity of these types of events, please note all contributions should be fair, accurate and supportable.

Let me know if you have any questions.

Thank you.
