Digital services have frequently been in collision — if not out-and-out conflict — with the rule of law. But what happens when technologies such as deep learning software and self-executing code are in the driving seat of legal decisions? How can we be sure next-gen ‘legal tech’ systems are not unfairly biased against certain groups or individuals? And what skills will lawyers need to develop to be able to properly assess the quality of the justice flowing from data-driven decisions?
While entrepreneurs have been eyeing traditional legal processes for some years now, with a cost-cutting gleam in their eye and the word ‘streamline’ on their lips, this early phase of legal innovation pales in significance beside the transformative potential of AI technologies that are already pushing their algorithmic fingers into legal processes — and perhaps shifting the line of the law itself in the process.
But how can legal protections be safeguarded if decisions are automated by algorithmic models trained on discrete data-sets — or flow from policies administered by self-executing code embedded on a blockchain?
These are the sorts of questions that lawyer and philosopher Mireille Hildebrandt, a professor at the research group for Law, Science, Technology and Society at Vrije Universiteit Brussels in Belgium, will be engaging with during a five-year project to investigate the implications of what she terms ‘computational law’.
Last month the European Research Council awarded Hildebrandt a grant of €2.5 million to conduct foundational research with a dual technology focus: artificial legal intelligence and legal applications of blockchain.
Discussing her research plan with TechCrunch, she describes the project as both very abstract and very practical, with a staff that will include both lawyers and computer scientists.
She says her intention is to come up with a new legal hermeneutics — so, basically, a framework for lawyers to approach computational law
architectures intelligently; to understand limitations and implications, and be able to ask the right questions to assess technologies that
are increasingly being put to work assessing us.
“The idea is that the lawyers get together with the computer scientists to understand what they’re up against,” she explains.
“I want to have that conversation… I want lawyers who are preferably analytically very sharp and philosophically interested to get together with the computer scientists and to really understand each other’s language.
“We’re not going to develop a common language. That’s not going to work, I’m convinced. But they must be able to understand what the meaning of a term is in the other discipline, and to learn to play around, and to say okay, to see the complexity in both fields, to shy away from trying to make it all very simple.
“And after seeing the complexity to then be able to explain it in a way that the people that really matter — that is us citizens — can make decisions both at a political level and in everyday life.”
Hildebrandt says she included both AI and blockchain technologies in the project remit as the two offer “two very different types of computational law”.
There is also of course the chance that the two will be applied in combination — creating “an entirely new set of risks and opportunities” in a legal tech setting.
Blockchain “freezes the future”, argues Hildebrandt, admitting that of the two it’s the technology she is more skeptical of in this context.
“Once you’ve put it on a blockchain it’s very difficult to change your mind, and if these rules become self-reinforcing it would be a very costly affair both in terms of money but also in terms of effort, time, confusion and uncertainty if you would like to change that.
“You can do a fork but not, I think, when governments are involved. They can’t just fork.”
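Her ‘frozen future’ point follows from how a blockchain is built: each block commits to the hash of its predecessor, so correcting one record invalidates every block that follows unless the whole chain is rewritten. Here is a minimal, self-contained sketch of that mechanism (the block fields and records are invented for illustration, not drawn from any real ledger):

```python
# Minimal hash-chain sketch; illustrative field names, not a production system.
import hashlib
import json

def block_hash(record: str, prev_hash: str) -> str:
    """Hash a block's contents together with its predecessor's hash."""
    payload = json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, record: str) -> None:
    """Append a block that commits to the current tip of the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({
        "record": record,
        "prev_hash": prev_hash,
        "hash": block_hash(record, prev_hash),
    })

def verify(chain: list) -> bool:
    """Re-derive every hash; one edited record breaks all later links."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False
        if block["hash"] != block_hash(block["record"], block["prev_hash"]):
            return False
        prev_hash = block["hash"]
    return True

chain: list = []
append_block(chain, "policy v1: contractor classification rules")
append_block(chain, "tax ruling applying policy v1")
print(verify(chain))   # True

# 'Changing your mind' about an earlier record invalidates the chain:
chain[0]["record"] = "policy v2: corrected rules"
print(verify(chain))   # False -- every subsequent block would have to be rewritten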
That said, she posits that blockchain could at some point in the future be deemed an attractive alternative mechanism for states and companies to settle on a less complex system to determine obligations under global tax law, for example. (Assuming any such accord could indeed be reached.)
Given how complex legal compliance can already be for Internet platforms operating across borders and intersecting with different jurisdictions and political expectations, there may come a point when a new system for applying rules is deemed necessary — and putting policies on a blockchain could be one way to respond to all the chaotic overlap.
Though
Hildebrandt is cautious about the idea of blockchain-based systems for legal compliance.
It’s the other area of focus for the project — AI legal intelligence — where she clearly sees major potential, though of course also risks.
“AI legal intelligence means you use machine learning to do argumentation mining — so you do natural language processing on a lot of legal texts and you try to detect lines of argumentation,” she explains, citing the example of needing to judge whether a specific person is a contractor or an employee.
“That has huge consequences in the US and in Canada, both for the employer… and for the employee, and if they get it wrong the tax office may just walk in and give them an enormous fine plus claw back a lot of money which they may not have.”
As a consequence of confused case law in the area, academics at the University of Toronto developed an AI to try to help — by mining lots of related legal texts to generate a set of features within a specific situation that could be used to check whether a person is an employee or not.
“They’re basically looking for a mathematical function that connected input data — so lots of legal texts — with output data, in this case whether you are either an employee or a contractor. And if that mathematical function gets it right in your data set all the time, or nearly all the time, you call it high accuracy, and then we test on new data, or data that has been kept apart, and you see whether it continues to be very accurate.”
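In code terms, the workflow she is describing is the standard supervised-learning loop: vectorize the texts, fit a classifier, and score it only on the cases that were kept apart. Here is a minimal sketch of that loop (the example texts, labels and pipeline choices are invented placeholders, not the Toronto system):

```python
# Sketch of the supervised-learning loop with invented toy data,
# using standard scikit-learn components.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "worker sets own hours and invoices per project",
    "worker supplies own equipment and bears financial risk",
    "employer provides tools and supervises the daily work",
    "worker receives a salary, paid leave and a fixed schedule",
    # ...in reality: thousands of labelled case descriptions
]
labels = ["contractor", "contractor", "employee", "employee"]

# Keep data apart for testing: accuracy only counts on unseen cases.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels
)

# 'The mathematical function': text features in, employee/contractor out.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

print(accuracy_score(y_test, model.predict(X_test)))
```

The `train_test_split` step is exactly the “data that has been kept apart” she mentions: accuracy quoted on the training set alone says little about how the function behaves on a new case.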
Given AI’s reliance on data-sets to derive algorithmic models that are used to make automated judgement calls, lawyers are going to need to understand how to approach and interrogate these technology structures to determine whether an AI is legally sound or not. High accuracy that’s not generated off of a biased data-set cannot just be a ‘nice to have’ if your AI is involved in making legal judgment calls on people.
“The technologies that are going to be used, or the legal tech that is now being invested in, will require lawyers to interpret the end results — so instead of saying ‘oh wow, this has 98% accuracy and it outperforms the best lawyers!’ they should say ‘ah, ok, can you please show me the set of performance metrics that you tested on? Ah, thank you, so why did you put these four into the drawer, because they have low accuracy…? Can you show me your data-set? What happened in the hypothesis space? Why did you filter those arguments out?’
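Those questions have direct counterparts in standard evaluation code. A short sketch (with invented toy labels and predictions; the metric functions are standard scikit-learn) of why a single headline accuracy number should not settle the conversation:

```python
# Toy illustration: invented labels/predictions, standard scikit-learn metrics.
from sklearn.metrics import (
    accuracy_score, confusion_matrix, f1_score, precision_score, recall_score
)

# 1 = employee, 0 = contractor; imagine 90% of the test cases are employees.
y_true = [1] * 90 + [0] * 10
# A lazy model that almost always answers "employee":
y_pred = [1] * 90 + [1] * 8 + [0] * 2

print("accuracy :", accuracy_score(y_true, y_pred))                # 0.92, looks great
print("precision:", precision_score(y_true, y_pred, pos_label=0))  # 1.0 on 'contractor'
print("recall   :", recall_score(y_true, y_pred, pos_label=0))     # 0.2, misses most contractors
print("f1       :", f1_score(y_true, y_pred, pos_label=0))         # ~0.33
print(confusion_matrix(y_true, y_pred))                            # where the errors sit
```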
“This is a conversation that really requires lawyers to become interested, and it’s a very serious business because legal decisions have a lot of impact on people’s lives. But the idea is that lawyers should start having fun in interpreting the outcomes of artificial intelligence in law. And they should be able to have a serious conversation about the limitations of self-executing code — so the other part of the project [legal applications of blockchain tech].
“If somebody says ‘immutability’ they should be able to say that means that if, after you have put everything in the blockchain, you suddenly discover a mistake, that mistake is automated and it will cost you an incredible amount of money and effort to get it repaired… Or ‘trustless’ — so you’re saying we should not trust the institutions but we should trust software that we don’t understand, we should trust all sorts of middlemen, i.e. the miners in permissionless systems, or the other types of middlemen who are in other types of distributed ledgers…”
“I want lawyers to have ammunition there, to have solid arguments… to actually understand what bias means in machine learning,” she continues, pointing by way of an example to research being done by the AI Now Institute in New York to investigate disparate impacts and treatments related to AI systems.
“That’s one specific problem but I think there are many more problems,” she adds of algorithmic discrimination.
“So the purpose of this project is to really get together, to get to understand this.
“I think it’s extremely important for lawyers, not to become computer scientists or statisticians, but to really get their finger behind what’s happening and then to be able to share that, to really contribute to legal method — which is text oriented. I’m all for text but we have to, sort of, make up our minds when we can afford to use non-text regulation. I would actually say that that’s not law.
“So how should the balance be between something that we can really understand, that is text, and these other methods that lawyers are not trained to understand… and that citizens also do not understand.”
Hildebrandt does see opportunities for AI legal intelligence argument mining to be “used for the good” — saying, for example, AI could be applied to assess the calibre of the decisions made by a particular court.
Though she also cautions that huge thought would need to go into the design of any such systems.
“The stupid thing would be to just give the algorithm a lot of data and then train it and then say ‘hey yes, that’s not fair, wow’. But you could also really think deeply about what sort of vectors you have to look at, how you have to label them. And then you may find out that — for instance — the court sentences much more strictly because the police is not bringing the simple cases to court; but it’s a very good police force and they talk with people, so if people have not done something really terrible they try to solve that problem in another way, not by using the law. And then this particular court gets only very heavy cases and therefore gives far more heavy sentences than other courts that get all the lighter cases from their police or public prosecutor.
“To see that, you should not only look at legal texts, of course. You have to look also at data from the police. And if you don’t do that then you can have very high accuracy and a totally nonsensical outcome that doesn’t tell you anything you didn’t already know. And if you do it another way you can sort of confront people with their own prejudices and make it interesting — challenge certain things. But in a way that doesn’t take too much for granted. And my idea would be that the only way this is going to work is to get a lot of different people together at the design stage of the system — so when you are deciding which data you’re going to train on, when you are developing what machine learners call your ‘hypothesis space’, so the type of modeling you’re going to try and do. And then of course you should test five, six, seven performance metrics.
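Her court example can be made concrete with a few lines of simulation. In the sketch below (all numbers and the sentencing rule are invented for illustration) two courts apply exactly the same rule, but one court’s police only forward serious cases; a naive comparison of average sentences then ‘discovers’ a harsh court, and the artefact disappears once you compare cases of similar severity:

```python
# Toy simulation of the court example; all numbers invented for illustration.
import random

random.seed(0)

def sentence_months(severity: float) -> float:
    # The *same* sentencing rule for every court: months scale with severity.
    return 12 * severity

severities = [random.random() for _ in range(10_000)]  # offence severity in [0, 1]

# Court A's police settle minor matters informally: only severe cases reach it.
court_a = [s for s in severities if s > 0.7]
# Court B's police forward every case.
court_b = severities

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

print(f"court A mean: {mean(sentence_months(s) for s in court_a):.1f} months")  # ~10.2
print(f"court B mean: {mean(sentence_months(s) for s in court_b):.1f} months")  # ~6.0

# Compare like with like: restrict court B to equally severe cases.
b_severe = [s for s in court_b if s > 0.7]
print(f"court B, severe only: {mean(sentence_months(s) for s in b_severe):.1f} months")  # ~10.2
```

Only by bringing in the data about what the police forwarded — here, the severity filter — can the analysis separate a strict court from a filtered caseload.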
“And this is also something that people should talk about — not just the data scientists but, for instance, lawyers, but also the citizens who are going to be affected by what we do in law. And I’m absolutely convinced that if you do that in a smart way you get much more robust applications. But then the incentive structure to do it that way is maybe not obvious. Because I think legal tech is going to be used to reduce costs.”
She says one of the key concepts of the research project is legal protection by design — opening up other interesting (and not a little alarming) questions, such as what happens to the presumption of innocence in a world of AI-fueled ‘pre-crime’ detectors?
“How can you design these systems in such a way that they offer legal protection from the first minute they come to the market — and not as an add-on or a plug-in? And that’s not just about data protection but also about non-discrimination, of course, and certain consumer rights,” she says.
“I always think that the presumption of innocence has to be connected with legal protection by design. So this is more on the side of the police and the intelligence services — how can you help the intelligence services and the police to buy or develop ICT that has certain constraints which make it compliant with the presumption of innocence? Which is not easy at all, because we probably have to reconfigure what the presumption of innocence is.”
And while the research is part abstract and solidly foundational, Hildebrandt points out that the technologies being examined — AI and blockchain — are already being applied in legal contexts, albeit in “a state of experimentation”.
And, well, this is one tech-fueled future that really must not be unevenly distributed.
“Both the EU and national governments have taken a liking to experimentation… and where experimentation stops and systems are really already implemented and impacting decisions about your and my life is not always so easy to see,” she adds.
Her other hope is that the interpretation methodology developed through the project will help lawyers and law firms to navigate the legal tech that’s coming at them as a sales pitch.
“There’s going to be, obviously, a lot of crap on the market,” she says.
“That’s inevitable; this is going to be a competitive market for legal tech and there’s going to be good stuff, bad stuff, and it will not be easy to decide what’s good stuff and bad stuff — so I do believe that by taking this foundational perspective it will be easier to know where you have to look if you want to make that judgement… It’s about a mindset, and about an informed mindset on how these things matter.
“I’m all in favor of agile and lean computing. Don’t do things that make no sense… So I hope this will contribute to a competitive advantage for those who can skip methodologies that are basically nonsensical.”