Archive

Cyber & data privacy communications

A new FICO/Corinium study of more than 100 companies with revenues of USD 100m or more finds that nearly 70% of those surveyed on how they are operationalising AI are unable to explain how their AI models work. More concerning still, 65% say they make little or no effort to make their systems transparent and accountable.

Furthermore, 78% said they were “poorly equipped to ensure the ethical implications of using new AI systems” and “have problems getting executive support for prioritizing AI ethics and responsible AI practices.”

Algorithmic opacity is normal

The reluctance to communicate transparently and openly with external audiences stems from a variety of concerns – some legitimate, others little more than convenient pretexts.

The most common concerns involve the loss of intellectual property and potential or actual competitive advantage; greater vulnerability to cyberattacks and to gaming by users, trolls and activists; and the protection of user privacy.

There are also concerns that providing public information about how their systems work, and setting out their limitations and risks, further exposes companies to operational, legal and reputational risks.

This information may include the sources and use of data, the real purpose of their technologies and their primary and secondary intended impacts (such as productivity efficiencies and job losses), how bias and other risks have been mitigated, the scope for dual use or misuse, and the degree of human oversight.

Transparency risks are growing

With bias difficult if not impossible to eliminate, misinformation, harassment and other dual uses rampant, and the secondary impacts of robotic process automation (RPA) and other automation programmes frequently sidestepped or hidden, it is hardly surprising that most companies are reluctant to manage ethical risks in a meaningful manner, or to say much about their systems.

In staying silent, companies risk appearing unconcerned about the impact of their activities and more preoccupied with the risks to themselves than with those to the users or targets of their products and services.

Transparency laggards exist in every sphere and organisations developing and deploying AI are little different.

But with users able to complain publicly and switch services easily, and with mandatory AI transparency legislation being proposed in the US Congress and the EU, organisations are going to have to manage and publicly disclose AI risks, and communicate a good deal more openly and genuinely.


One year on and GDPR is, variously, the gold standard for data privacy legislation, a monstrous example of bureaucratic red tape, or a busted flush leading to greater big tech dominance, few meaningful fines, some basic checkbox ticking and a blizzard of irritating pop-up statements.

94,000+ complaints and 64,000+ data breach notifications later, including some major breaches, regulators are starting to bare their teeth. Accordingly, companies are actively lawyering up.

With the GDPR honeymoon period set to end, earning the trust of regulators and customers is critical for all organisations.

How to do so is a topic I explore in an article for CPO Magazine.

I hope you find it interesting and useful.

Complex, technical and emotive, data breaches are tough communications and reputational challenges at the best of times.

The EU’s GDPR ups the ante. Not only does it raise the prospect of bigger fines but it increases the likelihood of greater legal liability and reputational damage.

Widely regarded as the gold standard for data privacy across the world, the GDPR is being used as a model for legislation in many countries and regions, including members of the Asia-Pacific Economic Cooperation (APEC) forum.

What does the GDPR mean for business leaders, communicators, risk managers, lawyers and others preparing for tougher data privacy laws across Asia and responding to data breaches in the EU?

Here are some important principles to bear in mind:

Take swift, decisive action to address the problem 

Companies have no option other than to move fast under GDPR. The 72 hours allowed to establish what has happened, assess the likely damage, notify the regulator(s) and communicate with those impacted can seem like precious little time, especially when the facts remain unclear.

Notification and communication can appear especially daunting while the hole remains open. Yet the quicker and more decisively a company moves to fix it, the more likely it will be able to limit the actual and potential damage and rebuild confidence.

Err on the side of caution, but do not panic

It is easy to feel like you are being press-ganged into publicly disclosing a data breach. In fact, not all breaches need to be reported to the regulator, and some don’t need to be reported within 72 hours.

Some breaches do not pose a high risk to those impacted, while others may be considered temporary. In some cases, the data involved is unintelligible and/or already in the public domain; in others, the effort involved in notifying the regulator may be considered disproportionate to the actual or likely damage.

In such instances, a company may choose to inform the customer of an incident without notifying the regulator or making a public statement—provided it is confident it is on a safe footing legally.

However, it is generally best to err on the side of caution and report a breach to the regulator. If it is unclear whether a breach is reportable, information regulators will usually advise whether it needs to be reported. They may also provide guidance on whether it should be communicated to those impacted.

That said, there may be some instances in which you feel it is more important to communicate immediately with those impacted, before notifying the regulator. For example, where the data involved is extremely sensitive, or where a supplier processing data for a business customer is breached.

There are also good reasons to be wary of going straight to the data subject. Customer and stakeholder expectations vary widely on data privacy and, in the wake of an incident, their behaviours can conflict. And news of a breach typically becomes public as soon as it has been communicated with those impacted.

Whichever route you choose, it is usually best to err on the side of caution. There’s no need to panic.

Be open and honest

The GDPR and emerging data privacy policy frameworks are fundamentally about transparency and trust, with organisations expected to be open and honest about data privacy in general and data breaches specifically.

EU information regulators have said they will take seriously anything that puts these twin principles in jeopardy, and that they are willing to expand investigations beyond assessing IT/cybersecurity governance and controls to testing compliance in areas such as technical competence, education and training.

The same goes for customers in Asia, who increasingly expect organisations to be honest about their shortcomings and to move quickly when something goes wrong.

Consider carefully how those impacted might be affected

Understandably, company leaders and executives fret primarily about the sensitivity and volume of data involved in a breach and what it means for the well-being of their employer. But it is just as important to pay close attention to those impacted and to the context in which the incident has occurred.

In August 2018, British Airways suffered a major breach involving the personal and financial details of over 500,000 customers. Despite no evidence of fraudulent financial activity at the time, British Airways quickly appreciated that the potential for lasting reputational damage was significant, given the large number of payment card and CVV numbers involved.

Hence the airline’s decision, when it acknowledged the breach, to offer compensation to customers for any financial hardship suffered—a promise that may result in significant payouts and higher insurance premiums going forward. The decision almost certainly also took into account the overwhelmingly negative reaction to the airline’s 2017 IT systems outage.

Consider carefully the needs and expectations of those impacted, the degree of external and internal scrutiny the incident attracts, your firm’s historic reputation, perceived culpability and other factors when you respond to a breach.

Don’t walk away

From a communications perspective, it is tempting to treat a breach as a one-off negative event to be resolved with a little timely public grovelling.

This is a mistake.

Nowadays, people naturally take to social media to vent their experiences and concerns, which can easily spiral into secondary news stories. Leaks are common, and breaches easily bleed into other business issues, aggravating the situation and prolonging the news cycle.

Worse, GDPR means regulatory investigations, fines and litigation are more likely, resulting in additional negative publicity. In the process, you may also come under greater pressure to publish internal and expert investigative reports.

It is important to understand that a breach is often just the start of the reputational battle, and that you must stay – and be seen to stay – the distance in all facets of your response if you are to have any real chance of success. 

Align your response

The messiness and complexity of data breaches and the need for different business units to be involved in the response can result in sloppy, inadequate, or inconsistent communications.

Given the expanded legal obligations under GDPR, the likelihood that equivalent regimes will emerge across Asia and heightened public awareness of data privacy rights, it is particularly important that companies’ legal and communications responses are properly aligned.

Legal and communications teams can sometimes be at loggerheads, so this is not as straightforward as it sounds. But it need not be difficult, provided both teams remember that, unlike in a court of law, in the court of public opinion a business is presumed guilty until it proves its innocence.

This doesn’t just mean being as open and honest as possible and ensuring that rhetoric always matches reality. It means a company must look at the wider picture, avoid inappropriate legal threats, actions and lawyerly-sounding statements, and apologise sincerely when it is at fault.

By following these principles, you will be less likely to botch your business and communications response to a data privacy incident.

More important, you will be in a much better position to persuade your customers and others that you are acting in their best interests.

This article was first published on BRINK Asia.

Since the start of the year a rumour has been swirling that Facebook has been using a then-and-now facial recognition photo-sharing challenge to collect data about users and improve its AI algorithms. The social network denies it started or is involved with the challenge. 

That people suspect Facebook of being involved, and that the rumour went viral, is indicative of the suspicion in which the company has been held since its flaccid approach to privacy became widespread public knowledge.

Multiple data privacy violations

These suspicions are not new. There was the row over Facebook’s Beacon user-tracking service in 2007, concerns about facial recognition, a bungled psychological experiment on the moods of its users, and run-ins with the US FTC, the ACLU and privacy commissioners in multiple jurisdictions over many years.

According to Google search data, there has been considerable public interest in privacy (mostly as a proxy for internet and/or data privacy) for many years.

Google: Data Privacy News Trends


Facebook had plenty of time to tackle the problem and prepare a meaningful response. The Guardian’s initial story in December 2015 about the covert harvesting of user data by Cambridge Analytica did not ignite until whistle-blower Christopher Wylie lifted the lid on the operation twenty-six months later.

Yet Facebook did little to address the core of the privacy issue, Mark Zuckerberg disappeared from view as soon as the story broke, and the company’s value dropped USD 119 billion in a single day. Zuckerberg hardly helped matters by refusing to appear before the UK DCMS Committee’s inquiry into disinformation and ‘fake news’.

How did Facebook fail to anticipate a major privacy crisis when the writing had been on the wall for so long? Were its leaders truly ignorant and out of touch, or did they simply fail to act substantively on the many warning signs? Why did they behave the way they did? Was Facebook’s experience isolated, or consistent with other reputational meltdowns?

Reputation risk management

These are the kinds of questions posed by lawyer Anthony Fitzsimmons and insurance expert Derek Atkins in their book Rethinking Reputational Risk, in which they get to practical grips with the notoriously knotty, slippery topic of reputation risk management.

Rethinking Reputational Risk

Drawing on analysis of recent high-profile crises such as BP’s Deepwater Horizon spill, Barclays’ LIBOR rigging, Tesco’s false accounting and the VW diesel emissions scandal, the authors argue that the problem lies in the complexity of many modern businesses; the emergence of multiple online ‘unseen systems’; fast-changing stakeholder behaviours; inadequate listening, issues management and crisis preparedness; and an unwillingness to get to the root of problems and failures, chiefly due to over-confidence, complacency and hubris.

All this sounds familiar. But the book comes into its own when it addresses the failure of ‘classical’ risk management and the three/four lines of defence model, which the authors regard as overly rigid and ill-suited to handling the many and varied behavioural risks, from weak culture and values and inappropriate incentive schemes to the blurring of personal and professional lives and the character and personality traits of senior leaders.

The authors rightly argue that reputation risk is first and foremost a leadership responsibility, and that too often it is at Board level that things fall down. Board failures were involved in 50% of the 42 crises studied.

Why?

Because Boards are essentially self-selecting, and overly reliant on people with financial and operational experience, as opposed to the forensic, analytical, behavioural and digital skills that are required in today’s globalised, networked and inherently volatile economies. There is much in this.

Concerns about Facebook’s approach to privacy first started emerging several years before its murky dealings with Cambridge Analytica came to light, and Mark Zuckerberg and Sheryl Sandberg have since admitted that they should have taken user privacy far more seriously.

The important question of why they didn’t heed the warning signals earlier appears to have a single plausible answer: user privacy was regarded as a price worth paying for growth, and they would make the most of it while the sun shone and regulators, politicians, customers and the general public had more important fish to fry.

Mark Zuckerberg may insist he is personally responsible for Facebook’s privacy lapses, but Facebook’s board is also responsible and must prove itself equal to the task of fixing the holes properly, and holding its CEO to account. Its members would do well to read Fitzsimmons and Atkins’ excellent book.

Meantime, Facebook must shoulder part of the blame for the many rumours about it – be they accurate, misinformed, or plain false.

