A new FICO/Corinium study of 100+ companies with USD 100m+ in annual revenue, surveyed on how they are operationalising AI, finds that nearly 70% are unable to explain how their AI models work. More concerning still, 65% say they make little or no effort to make their systems transparent and accountable.
Furthermore, 78% of respondents said they were “poorly equipped to ensure the ethical implications of using new AI systems” and that they “have problems getting executive support for prioritizing AI ethics and responsible AI practices.”
Algorithmic opacity is normal
The reluctance to communicate transparently and openly with external audiences stems from a variety of concerns – some legitimate, others little more than convenient pretexts.
The most common concerns involve the loss of intellectual property and potential or actual competitive advantage; greater vulnerability to cyberattacks and to gaming by users, trolls and activists; and the protection of user privacy.
There are also concerns that providing public information about how their systems work, and setting out their limitations and risks, further exposes companies to operational, legal and reputational risks.
This information may include the sources and use of data, the real purpose of their technologies and their primary and secondary intended impacts (such as productivity efficiencies and job losses), how bias and other risks have been mitigated, the scope for dual use or misuse, and the degree of human oversight.
Transparency risks are growing
With bias difficult if not impossible to eliminate, misinformation, harassment and other dual uses rampant, and the secondary impacts of robotic process automation (RPA) and other robotic programmes frequently sidestepped or hidden, it is hardly surprising that most companies are reluctant to manage ethical risks in a meaningful manner, or to say much about their systems.
In doing so, companies risk appearing unconcerned about the impact of their activities and more preoccupied with the risks to themselves than with those to the users or targets of their products and services.
Transparency laggards exist in every sphere and organisations developing and deploying AI are little different.
But with users able to complain publicly and switch services easily, and with mandatory AI transparency legislation being proposed in the US Congress and the EU, organisations are going to have to manage and publicly disclose AI risks, and to communicate a good deal more openly and genuinely.
- First published in AIAAIC Alert
- Download FICO: The State of Responsible AI: 2021