Artificial intelligence

Concern is widespread that artificially generated ‘deepfake’ videos pose a major potential problem for those targeted, be they companies, CEOs, celebrities, academics and commentators, or politicians.

A new study of 14,678 deepfake videos by cybersecurity company Deeptrace suggests otherwise. Deepfakes may generate millions of views, yet the great majority (96%) are pornographic and have little wider societal impact.

Of those that are not pornographic, such as Chinese deepfake face-swapping app Zao or a recent spoof of former Italian PM Matteo Renzi, most are designed to entertain. Only a tiny minority have been expressly designed to sow misinformation or disinformation, or to damage reputation.

The reputational threat of deepfakes

This may change all too soon. Deepfakes are increasingly realistic, freely available, and easy to make. Artificial voice company Lyrebird promises it can create a digital voice that sounds like you in a few minutes (even if my voice apparently proved less than straightforward).

It is surely only a matter of time before we see deepfakes more regularly damaging companies, governments and individuals, directly or indirectly, through false or misleading news stories, hoaxes and reputational attacks.

One recent example: controversial Canadian psychology professor Jordan Peterson found himself at the mercy of a website that let anyone generate audio clips of his voice saying whatever they typed, forcing him to threaten legal action. The simulator has since been taken offline.

Jordan Peterson audio deepfake

In another case, a political private secretary in the Malaysian government was arrested over a video allegedly showing him having illegal gay sex with the country’s minister of economic affairs. The country’s leader dismissed the video as ‘cooked up’, but it remains unproven whether it was manipulated.

Reputational risks of deepfakes for companies include:

  • A fake CEO town hall video regarding the new company strategy is ‘leaked’ to the outside world, allegedly by a short seller
  • The voice of a politician is used to manipulate a senior director into discussing allegations of corporate fraud
  • A fake recording of two executive board directors discussing the sexual habits of a colleague is used to blackmail the company
  • An outsider gains entrance to a secured office by impersonating the voice of a company employee.

Spread via the internet and social media, and exploiting distrust in institutions and deep geopolitical tensions, the risks of malevolent deepfakes are only now starting to emerge.

While the likelihood of a deepfake attack remains low in the short term, and its impact remains hard to quantify, every organisation would be wise to start considering what deepfakes may mean for its name and image.

Deepfakes are only one application of AI, though arguably the one that poses the most direct reputational risk.

I am crowdsourcing examples of AI risks of all kinds that have spilled into the public domain, via my AI/machine learning controversy, incident and crisis database. Constructive contributions are welcome.

The past few days have seen the Metropolitan Police in London, the FBI and US Immigration and Customs Enforcement hauled over the coals for appearing to use inaccurate and non-consensual facial recognition technologies.

In the face of hostile media reports, public concerns about AI in general and complaints about their programmes in particular, as well as ongoing litigation, all three organisations have doubled down on the appropriateness and legality of their actions.

Their reaction is hardly surprising. The artificial intelligence (AI) that underpins these technologies is largely unregulated. And the general public is only starting to become aware of its benefits and risks, is largely sceptical of its promises, and is concerned about some of its potential impacts.

The looming tower of AI

The benefits of AI are many. It can help tackle climate change, strengthen cybersecurity, improve customer service and stop people making abusive comments on Instagram, amongst all manner of other applications.

Yet, as Stanley Kubrick highlighted with HAL 9000 in his 1968 film 2001: A Space Odyssey, AI also poses substantial risks. These include:

  • unfair or discriminatory algorithms
  • unreliable or malfunctioning outcomes
  • misuse of personal or confidential data
  • greater exposure to cyberattacks
  • loss of jobs
  • legal risks and liabilities
  • direct and indirect reputational risks, including malicious deepfakes.

It is likely that these risks will become greater and more reputational in nature as the adoption of AI technologies becomes more mainstream, awareness diversifies and grows, and public opinion consolidates.

Source: PEGA, 2019

Appreciating the scope of public scepticism and distrust, and under pressure from governments, politicians and regulators, the AI industry is now making considerable headway in the area of AI ethics.

In addition, the risk management industry is looking at AI from a risk perspective, and the PR/communications industry from a communications perspective.

AI reputation management research study

However, little research exists on the reputational threats posed by AI, or on how these should be managed should an incident or crisis occur; this is an important gap given the volume of AI controversies and the growing focus on corporate behaviour and governance.

Accordingly, I am pulling together examples of recent AI controversies, incidents and crises for a study/white paper on the topic.

To kick-start the process, I have started collecting basic information on recent AI controversies:

https://docs.google.com/spreadsheets/d/1Bn55B4xz21-_Rgdr8BBb2lt0n_4rzLGxFADMlVW0PYI/edit?usp=sharing

Your contribution is welcome. Given the sensitivity of these types of events, please note that all contributions should be supportable; otherwise they may be discarded.

Named, constructive contributions will be credited in the final report.

Let me know if you have any questions.

Thank you.
