Joshua New
Contributor
Joshua New is a senior policy analyst at the Center for Data Innovation, a think
tank studying the intersection of data, technology and public policy.
While artificial intelligence was once heralded as the key to unlocking a new era of economic prosperity, policymakers today face a wave of calls to ensure AI is fair, ethical and safe.
New York City Mayor de Blasio recently announced the formation of the nation's first task force to monitor and assess the use of algorithms.
Days later, the European Union enacted sweeping new data protection rules that require companies to be able to explain to consumers any automated decisions that affect them.
And high-profile critics, like Elon Musk, have called on policymakers to do more to regulate AI.
Unfortunately, the two most popular ideas — requiring companies to disclose the source code of their algorithms and to explain how they make decisions — would cause more harm than good, because they would regulate the business models and inner workings of companies using AI rather than hold those companies accountable for outcomes.
The first idea — "algorithmic transparency" — would require companies to disclose the source code and data used in their AI systems.
Beyond its simplicity, this idea has little real merit as a wide-scale solution.
Many AI systems are too complex to fully understand by looking at source code alone.
Some AI systems rely on millions of data points and thousands of lines of code, and their decision models can change over time as they encounter new data.
It is unrealistic to expect even the most motivated, resource-flush regulators or concerned citizens to spot all potential malfeasance when a system's own developers may be unable to do so either.
Additionally, not all companies have an open-source business model.
Requiring them to disclose their source code reduces their incentive to invest in developing new algorithms, because it invites competitors to copy their work.
Bad actors in China, which is fiercely competing with the United States for AI dominance but routinely flouts intellectual property rights,
would likely use transparency requirements to steal source code.
The other idea — "algorithmic explainability" — would require companies to explain to consumers how their algorithms make decisions.
The problem with this proposal is that there is often an inescapable trade-off between explainability and accuracy in AI systems.
An algorithm's accuracy typically scales with its complexity, so the more complex an algorithm is, the more difficult it is to explain.
While this could change in the future as research into explainable AI matures — DARPA devoted $75 million in 2017 to this problem — for now, requirements for explainability would come at the cost of accuracy.
This is enormously dangerous.
With autonomous vehicles, for example, is it more important to be able to explain an accident or to avoid one? The cases where explanations are more important than accuracy are rare.
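To make the trade-off concrete, here is a minimal Python sketch comparing a shallow decision tree, whose entire decision logic can be printed and read, with a boosted ensemble of many small trees. The dataset, the model choices and the expectation that the ensemble scores higher are illustrative assumptions, not a benchmark.

```python
# A rough sketch of the explainability/accuracy trade-off, using
# scikit-learn's built-in breast-cancer dataset purely for illustration;
# results will vary with the data and models.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow decision tree: its full decision logic fits on a page.
simple = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
print("Shallow tree accuracy:", simple.score(X_test, y_test))
print(export_text(simple))  # every rule the model uses, in plain text

# A boosted ensemble of many trees: often more accurate on the same data,
# but with no comparably concise account of any single decision.
ensemble = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Boosted ensemble accuracy:", ensemble.score(X_test, y_test))
```

The shallow tree's printed rules are an explanation a consumer could follow; nothing comparably concise exists for the ensemble's predictions.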
Rather than demanding companies reveal their source code or limiting the types of algorithms they can use, policymakers should instead insist on algorithmic accountability — the principle that an algorithmic system should employ a variety of controls to ensure the operator (the party responsible for deploying the algorithm) can verify it acts as intended, and identify and rectify harmful outcomes should they occur.
A policy framework built around algorithmic accountability would have several important benefits.
First, it would make operators, not developers, responsible for any harms their algorithms might cause.
Not only do operators have the most influence over how algorithms impact society, but they already have to comply with a variety of laws
designed to make sure their decisions don't cause harm.
For example, employers must comply with anti-discrimination laws in hiring, regardless of whether they use algorithms to make those
decisions.
Second, holding operators accountable for outcomes rather than the inner workings of algorithms would free them to focus on the best methods to ensure their algorithms do not cause harm, such as confidence measures, impact assessments or procedural regularity, where appropriate.
For example, a university deploying an AI system designed to predict which students are likely to drop out could first conduct an impact assessment to ensure the system is effective and equitable.
Unlike transparency or explainability requirements, this would enable the university to effectively identify any potential flaws without
prohibiting the use of complex, proprietary algorithms.
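To make this concrete, here is a minimal sketch of one check such an impact assessment might include: whether the model misses actual dropouts at very different rates for different student groups. The column names, the made-up records and the 10 percent tolerance are illustrative assumptions, not a prescribed standard.

```python
# A hypothetical piece of an impact assessment: compare each student group's
# false-negative rate against the overall rate on held-out historical data.
import numpy as np
import pandas as pd

def false_negative_rate(y_true, y_pred):
    """Share of actual dropouts the model failed to flag."""
    actual = y_true == 1
    return float(np.mean(y_pred[actual] == 0)) if actual.any() else 0.0

def assess_disparity(df, group_col, label_col="dropped_out",
                     pred_col="predicted", tolerance=0.10):
    """Flag groups whose false-negative rate deviates from the overall rate."""
    overall = false_negative_rate(df[label_col].to_numpy(), df[pred_col].to_numpy())
    findings = {}
    for group, rows in df.groupby(group_col):
        rate = false_negative_rate(rows[label_col].to_numpy(), rows[pred_col].to_numpy())
        findings[group] = {"fnr": rate, "flagged": abs(rate - overall) > tolerance}
    return overall, findings

# Made-up records standing in for held-out data the university would score
# with its model before deployment.
records = pd.DataFrame({
    "group":       ["A", "A", "A", "B", "B", "B"],
    "dropped_out": [1,   0,   1,   1,   1,   0],
    "predicted":   [1,   0,   0,   0,   0,   0],
})
overall, findings = assess_disparity(records, "group")
print(f"Overall false-negative rate: {overall:.2f}")
for group, result in findings.items():
    print(group, result)
```

If a group is flagged, the operator can investigate and correct the system before it affects real students, without anyone needing to read its source code.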
This is not to say that transparency and explanations do not have their place.
Transparency requirements, for example, make sense for risk-assessment algorithms in the criminal justice system.
After all, there is a long-standing public interest in requiring the judicial system be exposed to the highest degree of scrutiny possible,
even if this transparency may not shed much light on how advanced machine-learning systems work.
Similarly, laws like the Equal Credit
Opportunity Act require companies to provide consumers an adequate explanation for denying them credit.
Consumers will still have a right to these explanations regardless of whether a company uses AI to make its decisions.
The debate about how
to make AI safe has ignored the need for a nuanced, targeted approach to regulation, treating algorithmic transparency and explainability
like silver bullets without considering their many downsides.
There is nothing wrong with wanting to mitigate the potential harms AI poses, but the oversimplified, overbroad solutions put forth so far
would be largely ineffective and likely do more harm than good.
Algorithmic accountability offers a better path toward ensuring organizations use AI responsibly so that it can truly be a boon to society.