Archive

Reputation risk management

Technology circles are awash with talk about AI risks, ethics, responsibility, and trust. Principles and frameworks abound. But these are proving awkward in practice, partly because the purpose, nature and inner workings of AI and algorithmic systems can easily be shielded from view.

As a result, users, auditors, regulators, legislators, and others often have little or no idea how these systems work or what their impact is until they backfire or are publicly exposed by researchers, employee leaks or backlashes, white-hat hackers, malicious data breaches, FOI requests, public inquiries, or litigation.

An independent, nonpartisan, non-profit initiative, AIAAIC examines and makes the case for meaningful AI, algorithmic and automation transparency and openness.

Specifically, AIAAIC believes that everyone should know when they are using or being assessed, nudged, instructed, or coerced by an AI or algorithmic system, understand how the system works, appreciate its impact, and be in a position to make informed decisions based on clear, accurate, concise and timely information.

AIAAIC Repository

One way AIAAIC does this is by collecting examples of incidents and controversies driven by and relating to AI, algorithms and automation.

A free, open library detailing 750+ negative events since 2012, the AIAAIC Repository is used by researchers, academics, NGOs, policymakers, and industry experts.

It is used to conduct qualitative and quantitative research; inform analysis and commentary; develop case studies; devise training and education programmes; and develop risk-based products and services, including incident response and crisis plans.

CIPR members are welcome to use, copy, redistribute and adapt the repository, subject to the terms of its Creative Commons attribution license.

AIAAIC also welcomes volunteers passionate about advancing the cause of AI and algorithmic transparency and openness.

Opportunities include contributing to the AIAAIC Repository, researching technology transparency trends and best practices, and making the case to opinion-formers and decision-makers.

First published by INFLUENCE


Further information

A new FICO/Corinium study of how companies are operationalising AI finds that nearly 70% of the 100+ respondents, each with revenues of over USD 100m, are unable to explain how their AI models work. More concerning still, 65% say they make little or no effort to make their systems transparent and accountable.

Furthermore, 78% said they were “poorly equipped to ensure the ethical implications of using new AI systems” and “have problems getting executive support for prioritizing AI ethics and responsible AI practices.”

Algorithmic opacity is normal

The reluctance to communicate transparently and openly with external audiences stems from a variety of concerns – some legitimate, others little more than convenient pretexts.

The most common concerns involve the loss of intellectual property and potential or actual competitive advantage; greater vulnerability to cyberattacks and to gaming by users, trolls and activists; and the protection of user privacy.

There are also concerns that providing public information about how their systems work, and setting out their limitations and risks, further exposes companies to operational, legal and reputational risks.

This information may include the sources and use of data, the real purpose of their technologies and their primary and secondary intended impacts (such as productivity efficiencies and job losses), how bias and other risks have been mitigated, the scope for dual or misuse, and the degree of human oversight.

Transparency risks are growing

With bias difficult if not impossible to eliminate, misinformation, harassment and other dual uses rampant, and the secondary impacts of RPA and other robotic programmes frequently glossed over or hidden, it is hardly surprising that most companies are reluctant to manage ethical risks in a meaningful manner, or to say much about their systems.

In staying silent, however, companies risk appearing unconcerned about the impact of their activities and more preoccupied with the risks to themselves than with the risks to the users or targets of their products and services.

Transparency laggards exist in every sphere and organisations developing and deploying AI are little different.

But with users able to complain publicly and switch services easily, and with mandatory AI transparency legislation being proposed in the US Congress and the EU, organisations are going to have to manage and publicly disclose AI risks, and communicate a good deal more openly and genuinely.



The UK government’s use of algorithms to grade student exam results resulted in students taking to the streets and generated swathes of negative media coverage. Many grades were seen as unfair, even arbitrary. Others argue the algorithms and grades were a reflection of a broken educational system.

The government would do well to understand the root causes of the problem and make substantive changes in order to stop it happening again. It also needs to regain the confidence and trust of students, parents, teachers, and the general public.

Whilst the government appears reluctant to tackle some of the deeper challenges facing education, it has wisely scrapped the use of algorithms for next year’s exams.

And now the UK’s Office for Statistics Regulation has issued its analysis of what went wrong, highlighting the need for government and public bodies to build public confidence when using statistical models.

Unsurprisingly, transparency and openness feature prominently in the OSR’s recommendations. Specifically, exam regulator Ofqual and the government are praised for regular, high-quality communication with schools and parents, but criticised for poor transparency on the model’s limitations, risks and appeals process.

Ofqual is no outlier. Much talked about as an ethical principle and imperative, AI and algorithmic transparency remains elusive and, if research by Capgemini is accurate, has been getting worse.

The UK exam grade meltdown shows that good communication (aka openness) must go hand in hand with meaningful transparency if confidence and trust in algorithmic systems are to be attained. The one is redundant without the other. And they must be consistent.


By any measure, payments processor Wirecard’s demise has been dramatic. Within a month of KPMG refusing to verify the company’s accounts, CEO Markus Braun had resigned and the company had filed for insolvency with debts of over GBP 3 billion.

Given that the writing had been on the wall for over five years, plenty of tricky questions are now being asked of Wirecard’s management and its business partners. Its long-term auditor EY and Germany’s banking regulator are in the firing line. There is widespread talk of another Enron.

Professional services in the spotlight

Other advisors are also attracting criticism, including Wirecard’s crisis law firm and PR agency. This is unusual: legal and communications firms generally manage to keep their names out of the media spotlight during major controversies involving their clients.

In part, this can be ascribed to an unwritten convention between the mainstream media and the hands that feed it: every organisation has the right to be heard, however unsavoury its reputation, and its advisors are only doing what is expected of them.

With one man’s meat being another man’s poison, it remains to be seen whether Wirecard’s dramatic collapse hinders or benefits its crisis advisors.

However, with corporate governance, responsibility, and transparency in the spotlight as never before, it is incumbent on consulting companies of all stripes to ask difficult questions of clients before they are engaged, not afterwards.

As Arthur Andersen discovered with Enron, not asking difficult questions upfront can prove a perilous defence.

Last week I had the good fortune to be invited to speak on the topic of reputational risk management to MBA students and assorted internal auditors, risk managers, and HR and communications executives at the Othman Yeop Abdullah Graduate School of Business, Universiti Utara Malaysia, in Kuala Lumpur.

Reputation risk may not be as high up the agenda of boards of directors and management teams in Malaysia as in some other countries, but it has gained importance in recent years due largely to two major crises:

  • the 1MDB scandal that led directly to the overturning of the Malaysian government, the arrest and forthcoming trial of former prime minister Najib Razak, fraud investigations in 10+ countries, and criminal charges laid against Goldman Sachs and two of its former employees
  • and the various woes befalling Malaysia Airlines (here’s my take on the mystery of MH370 from an online/social media perspective; if you haven’t already, I strongly recommend you read this in The Atlantic for what may well be the last word on the tragedy).

Whilst both remain unresolved, these crises have eroded confidence and trust in Malaysian institutions and raised (and continue to raise) legitimate questions about how Malaysia Inc – still largely dominated by a few family-controlled businesses – operates.

Accordingly, companies (especially government-owned or linked ones) and parts of government and civil society are actively considering the extent to which they are exposed to reputational risks, and thinking harder about how these should be minimised and managed.

The whys and hows of effective reputation risk management

Predicting and managing reputational risks poses a wealth of tricky questions and challenges – amongst them:

  • How should reputation risk be defined?
  • What are the primary drivers of corporate reputation?
  • What forms do these risks take?
  • Who is responsible for an organisation’s overall reputation?
  • Who should own corporate reputation on a day-to-day basis?
  • What role(s) should communications and marketing play in reputation risk management?
  • How best to measure, track and report reputational threats?
  • Why can leaders be reluctant to get to the root of reputational issues?

I tackled these and other challenges in my presentation, setting out solutions based on my professional experience, research and observation.

Here are my slides:

Fortunately, trust in Malaysia appears to have been restored to some degree over the last eighteen months.

However it is clear that organisations based in Malaysia – and elsewhere – continue to grapple with the strategic, governance and operational challenges reputation risk management inevitably raises.
