In the current fast-paced innovation environment, companies are pushing the boundaries of existing legal frameworks. This blog tracks what is happening. It started with the idea of being an analysis of relevant topics. However, that task is too big and events move too fast, so it has morphed into an attempt to track the issues and map the emerging needs of policy. Thus, it is a kind of log book of policy issues that pass my desk.
Thursday, October 25, 2018
Administrative Law and AI Accountability
Different countries take different approaches to administrative law. Some have little specific legislation and rely on the courts to determine the correctness of decision making; Canada seems to work much like this. Australia, even though it is a common law country, has specific laws regarding decision making, a dedicated review body in the Administrative Appeals Tribunal, and in many states permanent independent commissions that watch over public bodies and examine evidence of corruption.
What then of Big Data and AI decision making? We know humans are biased, and we know we will build those biases into the technology we create - that is the whole point of regulation.
A few jurisdictions have now begun to venture into regulating AI decisions.
However, we know that this is going to be very difficult. Watching the AlphaGo documentary was fascinating. In real time the AlphaGo team could ask questions of the AI, such as the probability of a human making the same move. But getting the AI to actually say why it made that move is more challenging. Humans have intuition - perhaps, after playing thousands of games, AlphaGo had intuition too.
From The Economist, 2018-02-17
As The Economist discussed back in February there are approaches to the problem such as Explainable AI. But that only gets you so far.
The real problem is that good AI may not be self explanatory.
Machine learning works by giving computers the ability to train themselves, which adapts their programming to the task at hand. People struggle to understand exactly how those self-written programs do what they do (see article). When algorithms are handling trivial tasks, such as playing chess or recommending a film to watch, this “black box” problem can be safely ignored. When they are deciding who gets a loan, whether to grant parole or how to steer a car through a crowded city, it is potentially harmful. And when things go wrong—as, even with the best system, they inevitably will—then customers, regulators and the courts will want to know why.
For some people this is a reason to hold back AI. France’s digital-economy minister, Mounir Mahjoubi, has said that the government should not use any algorithm whose decisions cannot be explained. But that is an overreaction. Despite their futuristic sheen, the difficulties posed by clever computers are not unprecedented. Society already has plenty of experience dealing with problematic black boxes; the most common are called human beings. Adding new ones will pose a challenge, but not an insuperable one. In response to the flaws in humans, society has evolved a series of workable coping mechanisms, called laws, rules and regulations. With a little tinkering, many of these can be applied to machines as well.
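The black-box point can be made concrete with a toy sketch. Everything here is invented for illustration - the weights, threshold and factors are not from any real system: an opaque score answers the loan question but gives no reason, while an "explainable" variant of the same arithmetic can at least report which factor drove the result.

```python
# Toy illustration (invented weights and data): the same loan decision,
# once as an opaque score and once with an explanation attached.

def opaque_decision(income, debt, years_employed):
    # A learned model in miniature: the weights came from "training",
    # and the single number they produce carries no reasons.
    score = 0.4 * income - 0.7 * debt + 0.2 * years_employed
    return score > 10

def explained_decision(income, debt, years_employed):
    # An explainable variant: each factor's contribution is kept,
    # so the answer can cite the factor that mattered most.
    contributions = {
        "income": 0.4 * income,
        "debt": -0.7 * debt,
        "years_employed": 0.2 * years_employed,
    }
    score = sum(contributions.values())
    top_factor = max(contributions, key=lambda k: abs(contributions[k]))
    return score > 10, top_factor

approved = opaque_decision(50, 20, 5)
approved_too, reason = explained_decision(50, 20, 5)
print(approved, approved_too, reason)
```

Real systems are vastly harder to unpick than this linear toy - with a deep network there is no tidy table of contributions to inspect, which is exactly the problem Explainable AI research is trying to address.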
The Economist may be overly optimistic - first, in assuming that governments or courts will even get around to looking at AI, and second, because the speed of change is overwhelming.
However, The Economist does have a point: humans sometimes, maybe even often, cannot explain their decisions. That is what makes New York City's legislation so interesting.
Quoted from MIT Technology Review's blog The Download:
New York City has a new law on the books demanding “algorithmic accountability,” and AI researchers want to help make it work.
Background: At the end of 2017, the city’s council passed the country’s first bill to ban algorithmic discrimination in city government. It calls for a task force to study how city agencies use algorithms and create a report on how to make algorithms more easily understandable to the public.
Rubber, meet road: But how to actually implement the bill was left up for grabs. Enter AI Now, a research institution at NYU focused on the social impact of AI. The group recommends focusing on things like making sure agencies understand the technology better, and providing a chance for outside groups to look at algorithms.
https://www.technologyreview.com/the-download/610346/the-big-apple-gets-tough-on-biased-ai/
Link to the text of the New York City legislation.
A good article about the legislation's development is available at The New Yorker.