Concern is widespread that artificially generated ‘deepfake’ videos pose a major potential problem for those targeted, be they companies, CEOs, celebrities, academics and commentators, or politicians.

A new study of 14,678 deepfake videos by cybersecurity company Deeptrace suggests otherwise. Deepfakes may generate millions of views, yet the great majority (96%) are pornographic and have little wider societal impact.

Of those that are not pornographic, such as Chinese deepfake face-swapping app Zao or a recent spoof of former Italian PM Matteo Renzi, most are designed to entertain. Only a tiny minority have been expressly designed to sow misinformation or disinformation, or to damage reputation.

The reputational threat of deepfakes

This may change all too soon. Deepfakes are increasingly realistic, freely available, and easy to make. Artificial voice company Lyrebird promises it can create a digital voice that sounds like you in a few minutes (even if my voice apparently proved less than straightforward).

It is surely only a matter of time before we see more regular instances of deepfakes damaging – directly or indirectly – companies, governments and individuals through false or misleading news stories, hoaxes and reputational attacks.

A recent example: controversial Canadian psychology professor Jordan Peterson found himself at the mercy of a website where anyone could generate clips of themselves talking in his voice, forcing him to threaten legal action. The simulator has since been taken offline.

Jordan Peterson audio deepfake

In another case, a political private secretary in the Malaysian government was arrested over a video allegedly showing him having illegal gay sex with the country’s minister of economic affairs. The country’s leader responded by saying the video was ‘cooked up’, but it remains unproven whether the video was manipulated.

Reputational risks of deepfakes for companies include:

  • A fake CEO town hall video regarding the new company strategy is ‘leaked’ to the outside world, allegedly by a short seller
  • The voice of a politician is used to manipulate a senior director into discussing allegations of corporate fraud
  • A fake recording of two executive board directors discussing the sexual habits of a colleague is used to blackmail the company
  • An outsider gains entrance to a secured office by impersonating the voice of a company employee.

Spread via the internet and social media, and exploiting distrust in institutions and deep geopolitical tensions, the risks of malevolent deepfakes are only now starting to emerge.

While the likelihood of a deepfake attack remains low in the short-term, and impact remains hard to quantify, every organisation would be wise to start considering what it may mean for its name and image.

Deepfakes are only one application of AI, though arguably the one that poses the most direct reputational risk.

I am crowdsourcing examples of AI risks of all kinds that have spilled into the public domain via my AI/machine learning controversy, incident and crisis database. Constructive contributions are welcome. 

A 1MDB hoarding in Kuala Lumpur

Last week I had the good fortune to be invited to speak on the topic of reputational risk management to MBA students and assorted internal auditors, risk managers, HR and communications executives at the Othman Yeop Abdullah Graduate School of Business at the Universiti Utara Malaysia in Kuala Lumpur.

Reputation risk may not be as high up the agenda of boards of directors and management teams in Malaysia as in some other countries, but it has gained importance in recent years due largely to two major crises:

  • the 1MDB scandal that led directly to the overturning of the Malaysian government, the arrest and forthcoming trial of former prime minister Najib Razak, fraud investigations in 10+ countries, and criminal charges laid against Goldman Sachs and two of its former employees
  • and the various woes befalling Malaysia Airlines (here’s my take on the mystery of MH370 from an online/social media perspective; if you haven’t already, I strongly recommend you read this in The Atlantic for what may well be the last word on the tragedy).

While both crises remain unresolved, they have helped erode confidence and trust in institutions in Malaysia and raised (and continue to raise) legitimate questions about how Malaysia Inc – which is still largely dominated by a few family-controlled businesses – operates.

Accordingly, companies (especially government-owned or linked ones) and parts of government and civil society are actively considering the extent to which they are exposed to reputational risks, and thinking harder about how these should be minimised and managed.

The whys and hows of effective reputation risk management

Predicting and managing reputational risks poses a wealth of tricky questions and challenges – amongst them:

  • How should reputation risk be defined?
  • What are the primary drivers of corporate reputation?
  • What forms do these risks take?
  • Who is responsible for an organisation’s overall reputation?
  • Who should own corporate reputation on a day-to-day basis?
  • What role(s) should communications and marketing play in reputation risk management?
  • How best to measure, track and report reputational threats?
  • Why can leaders be reluctant to get to the root of reputational issues?

I tackled these and other challenges in my presentation, setting out solutions based on my professional experience, research and observation.

Here are my slides:

Fortunately, trust in Malaysia appears to have been restored to some degree over the last eighteen months.

However, it is clear that organisations based in Malaysia – and elsewhere – continue to grapple with the strategic, governance and operational challenges reputation risk management inevitably raises.

I will explore some of the questions raised in my talk in more depth over the coming weeks and months on this blog.

Meantime, I hope you find the slides useful.

The past few days have seen the Metropolitan Police in London, the FBI and US Immigration and Customs Enforcement hauled over the coals for appearing to use inaccurate and non-consensual facial recognition technologies.

In the face of hostile media reports, public concerns about AI in general and complaints about their programmes specifically, as well as ongoing litigation, all three organisations have doubled down on the appropriateness and legality of their actions.

Their reaction is hardly surprising. The artificial intelligence (AI) that underpins these technologies is largely unregulated. And the general public is only starting to become aware of its benefits and risks, is largely skeptical of its promises, and is concerned about some of its potential impacts.

The looming tower of AI

The benefits of AI are many. It can help tackle climate change, strengthen cybersecurity, improve customer service and stop people making abusive comments on Instagram, amongst all manner of other applications.

Yet as Stanley Kubrick highlighted in his 1968 film 2001: A Space Odyssey in the form of HAL 9000, AI also poses substantial risks. These include:

  • unfair or discriminatory algorithms
  • unreliable or malfunctioning outcomes
  • misuse of personal or confidential data
  • greater exposure to cyberattacks
  • loss of jobs
  • legal risks and liabilities
  • direct and indirect reputational risks, including malicious deepfakes.

It is likely that these risks will become greater and more reputational in nature as the adoption of AI technologies becomes more mainstream, awareness diversifies and grows, and public opinion consolidates.

Source: PEGA, 2019

Appreciating the scope of public skepticism and distrust, and under pressure from governments, politicians and regulators, the AI industry is now making considerable headway in the area of AI ethics.

In addition, the risk management industry is looking at AI from a risk perspective, and the PR/communications industry from a communications perspective.

AI reputation management research study

However, little exists on the reputational threats posed by AI, or how these should be managed should an incident or crisis occur – an important topic given the volume of AI controversies and the focus on corporate behaviour and governance.

Accordingly, I am pulling together examples of recent AI controversies, incidents and crises for a study/white paper on the topic.

To kick-start the process, I have started collecting basic information on recent AI controversies:

https://docs.google.com/spreadsheets/d/1Bn55B4xz21-_Rgdr8BBb2lt0n_4rzLGxFADMlVW0PYI/edit?usp=sharing

Your contribution is welcome. Given the sensitivity of these types of events, please note all contributions should be supportable – otherwise they may be deleted or discarded.

Named, constructive contributions will be credited in the final report.

Let me know if you have any questions.

Thank you.

Following my post on VW’s new electric driving marketing campaign, here are highlights of the VW diesel emissions test crisis from its inception to the present day.

This timeline seeks to put the scandal into a broader context by highlighting important legal, regulatory, industry and other inputs, outputs and outcomes. It will be updated on an ongoing basis.

2019

2018

2017

2016

2015

2014

2008

2004

  • US Environmental Protection Agency (EPA) significantly tightens diesel emissions standards.

Most companies expressly avoid mentioning past scandals in their advertising. Not so VW, which makes its 2015 diesel emissions crisis the starting point for its latest ad ‘Hello Light’.

The ad is clearly intended to signal VW’s shift to electric driving, while drawing on the company’s glory days of the 1960s and 1970s. It is eye-catching, and feels honest and refreshingly unnostalgic.

It is also brave. For one, there are clear risks in framing the firm’s shift to electric through the prism of its diesel emissions fiasco. Purists might also complain there is no apology – just as there was no apology in VW’s November 2015 goodwill marketing campaign.

Hello Light is no one-off, but part of VW’s larger ‘Drive Something Bigger Than Yourself’ brand campaign, which aims to press home its commitment to electric while drawing on its rich history.

Yet VW’s diesel emissions woes are far from over. With legal cases in 50 countries, 2019 may prove to be the company’s ‘most difficult year ever’, according to Hiltrud Werner, VW Group board member and head of compliance.

Each court case will bring a rash of unwelcome publicity as old documents are raked over and new evidence comes to light. Much will hinge on the company’s rogue employee defence, which is looking increasingly brittle.

While risky, VW’s electric driving campaign is also strategically critical. Diesel sales have been dropping sharply.

Major cities are banning diesel cars in their centres. And several top auto manufacturers have promised to end production of the internal combustion engine. VW says its last generation of combustion engines will be launched in 2026.

In addition, the electric market is a challenging proposition thanks to new entrants such as Tesla and the relatively high cost of electric technologies, even if these costs are now starting to fall as volume increases.

Set against this background, VW’s electric driving campaign is worth the strategic and reputational risks.

Arguably, it should have been run sooner.

© Charlie Pownall/CPC & Associates 2012-2019

One year on and GDPR is, variously, the gold standard for data privacy legislation, a monstrous example of bureaucratic red tape, or a busted flush leading to greater big tech dominance, few meaningful fines, some basic checkbox ticking and a blizzard of irritating pop-up statements.

94,000+ complaints and 64,000+ data breach notifications later, including some major breaches, regulators are starting to bare their teeth. Accordingly, companies are actively lawyering up.

With the GDPR honeymoon period set to end, earning the trust of regulators and customers is critical for all organisations.

How to do so is a topic I explore in an article for CPO magazine.

I hope you find it interesting and useful.
