Examining the boundaries of law and technology

14 December 2020
The cross-disciplinary approach to humanising automated decision-making
Sydney Law School academic Professor Kimberlee Weatherall explores how automated decision-making can improve human decision-making while staying within the boundaries of legal standards.

Professor Kimberlee Weatherall was recently appointed as Chief Investigator at the new ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S). The appointment forms part of a broader cross-disciplinary research collaboration with ADM+S, which brings together a range of collaborators with the goal of developing responsible, ethical, and inclusive automated decision-making.

We spoke to Professor Weatherall about the research collaboration, and why a cross-disciplinary approach is needed to ensure that automated decision-making improves human decision-making while meeting or exceeding legal standards.

What can you tell us about the research collaboration with ADM+S?

ADM+S brings together researchers from media and communications, law, and data science, as well as partners from government and industry, with the goal of supporting the development of responsible, ethical, and inclusive automated decision-making. 

It’s an exciting opportunity to make a difference in the digital transformations affecting all sectors of society, in Australia and beyond. Data, data analytics and machine learning are radically changing how society operates.
Professor Kimberlee Weatherall, Sydney Law School

It’s become possible to collect masses of data about people and to use that data to make predictions and decisions. It could be possible, for example, to collect data about how a person uses a touchscreen to predict whether they have certain health conditions, or to use social network data to predict likely success in paying back a loan or holding down a job.

Those predictions can turn into real-world decisions that affect people: decisions whether to offer insurance, a loan, a job or a training opportunity. And increasingly those decisions can be automated, with limited input from people. This raises very important questions about whether we can trust these systems, and how we exercise oversight.

There’s massive opportunity, but also risk. We know that the systems can be flawed in ways that discriminate against already disadvantaged groups — rating people of colour as higher reoffending risks in criminal justice systems, for example, or offering different job advertisements to men and women. Or these systems can be used to personalise news feeds, advertising, services – even the price you pay, raising critical issues of fairness.

So if we’re going to use data, we have to do it right: we have to find ways to make sure it’s fair, legal and consistent with our values and with our concern for people and their rights. That is true whether we’re talking about health, transport, education or social services — or indeed in any area where automated decision-making has an impact on people.

Tell us about the researchers you work with, who come from a wide range of areas in AI and data governance

The questions that I’m trying to answer in my research, and that we’re trying to answer at ADM+S, relate to how we ensure that automated decision-making improves human decision-making and respects human rights and other longstanding legal and democratic principles.
Professor Kimberlee Weatherall, Sydney Law School

These questions can’t be answered by one academic discipline, or by academics working in isolation from the people at the frontlines of these changes. That’s why I spend most of my time working with and talking to professionals and researchers in data science, public policy, philosophy, and other disciplines.

Currently, the work I am doing with ADM+S, other law researchers including Emeritus Professor Terry Carney, and data scientists at the Gradient Institute involves working out how to translate legal concepts, like fairness or anti-discrimination obligations, into practice when building data analytics and automated decision-making systems. We’re currently looking for a postdoctoral researcher or research fellow to help us with this work. And I’m collaborating with social scientists on questions about civic values and ADM.

Where do you think law, technology and digital rights are going?

New technologies constantly push at the boundaries and assumptions of the law. In the area of automated decision-making, technological developments have the potential to upend many of our mechanisms for exercising oversight over important decisions. I think we’re at the beginning of a process of really grappling with how we regulate data-driven predictive analytics and ADM: adjusting privacy, and anti-discrimination law, and consumer law to even up what is currently a very asymmetrical relationship. Firms and governments have all the power, and all the data, and a lot of the technology. The challenges are constantly new and constantly changing, and that keeps things interesting.

We’re expecting the Australian Human Rights Commission to publish the final report of its inquiry into Human Rights and Technology by the end of 2020, which I think will make a range of findings about the state of digital rights in Australia and recommendations about how things should change.

The Australian Government has also commenced a review of Australia’s privacy law, which needs updating in light of all the big changes around data analytics.

Professor Weatherall specialises in issues at the intersection of law and technology, as well as intellectual property law.

She will supervise a new postdoctoral researcher or research fellow. The position is ideally suited to an early career researcher with a law degree and a strong interest in working with researchers from technical disciplines to address key policy questions.

Sign up for ADM+S updates to find out about Centre publications and events highlighting trends in this space.
