Further information will be provided by the end of June 2020
We are continuing the conversation with our invited speakers, partners and venue and will provide more information about future plans for “The Ethics of Data Science Conference” by the end of June 2020 when we hope there will be a little more certainty about the global situation with regard to COVID-19.
Refunds can be requested by contacting email@example.com. If you have requested a refund for The Ethics of Data Science Conference, please note that it may take up to 30 days to process, though we will endeavour to complete it as quickly as possible.
Hosted by the University of Sydney and held at Doltone House in Pyrmont NSW, the international conference on the Ethics of Data Science brings together world-renowned experts from multiple disciplines to discuss the use and misuse of our data.
CTDS has partnered with the Gradient Institute, the Humanising Machine Intelligence group from the Australian National University, Google and QuantumBlack to bring this conversation to Sydney.
Our goal is to design ethical, evidence-based decision-making frameworks. This can only be achieved by understanding the morality, law and politics of data and artificial intelligence, drawing on world-class research in data science, law, philosophy and beyond. This conference is a unique opportunity to engage with the cutting edge of research in these fields, and to make progress on understanding the viability and legitimacy of algorithmic decision-making.
The conference aims to bring established global leaders in these fields together with the emerging talent that will define these debates for years to come. Key themes explored by our keynotes will include fairness, privacy, algorithmic regulation, and how we receive and process information in the age of AI.
Alongside a high-level program of cutting-edge research, the conference will provide a forum for two-way knowledge transfer between researchers and practitioners, 'masterclasses' in which leading Australian scholars in ethical questions related to AI will engage with leaders in government and industry to both pinpoint the central problems faced in the deployment of algorithmic systems, and identify the paths to solving those problems.
The conference dinner is an additional $50.
Paper and travel grant submissions have now closed.
The conference will award a Best Paper prize, based on the potential impact of the research. The winning paper will be presented in an extended session oriented to attendees from government and industry.
Dr Roman Marchant, Senior Research Fellow and Lecturer, Centre for Translational Data Science
“Important decisions are being made by government and big corporations based on data. This conference will help guide the analysis and decisions to be ethical and beneficial for society as a whole.”
Dr Tiberio Caetano, Chief Scientist, The Gradient Institute
“We know computers run the world, but we don't know how. The complex dynamics between algorithms, data, AI systems and people are effectively giving rise to a new social order yet to be understood. At the core of EDSC is the purpose of understanding, from a truly multidisciplinary perspective, what has to be done to direct these dynamics towards a world of increasing human wellbeing, fairness and autonomy.”
Professor Kimberlee Weatherall, Professor of Law, Sydney Law School
“This conference will help us address decisions around the use of new data science technologies, and where and how it is appropriate to use them.”
Professor Seth Lazar, Head of the School of Philosophy, Australian National University
“The world is clamouring for research to help better understand the morality, law, and politics of data and AI, but while there have been many strategic announcements and statements of ethical frameworks and principles, real interdisciplinary research in this area is in its infancy. EDSC2020, uniquely, is grounded in a genuine collaboration between computer scientists, lawyers and philosophers. It is interdisciplinary at its heart.”
As governments and companies rush to develop principles for the ethical use of data and AI, with equal alacrity academics and activists lament the focus on 'soft' ethics, at the expense of 'hard' law. But law isn't always hard—in fact, it is often indeterminate, and often with good reason. And ethical principles can be very demanding. But what's the right way to think about the connection between morality and law? And how do either relate to the realities of politics, and power? Philosophy can help make sense of these questions. In this co-learning class, I'll be soliciting people's views on how the morality, law and politics of data and AI interact in their daily practice, and offering insights from philosophy on how to make sense of the subtle connections between these different sources of normativity—each of them irreducible to the others.
Speaker: Associate Professor Seth Lazar, Head, School of Philosophy. Project Lead, Humanising Machine Intelligence Grand Challenge, Australian National University.
This master class will discuss the challenges inherent in designing algorithms to be fair and non-discriminatory from a computer-science perspective. We will describe current approaches to formalising fairness in the machine learning literature, explain what these metrics do and do not capture, and how they can be implemented. Finally, we will place algorithmic fairness in the wider context of Australian anti-discrimination law and algorithmic transparency. The tutorial will focus on concepts and practical algorithms, and is intended for a multidisciplinary audience.
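To make the idea of a fairness metric concrete, here is a minimal, hypothetical Python sketch (not material from the masterclass itself) of one of the simplest formalisations in the machine learning literature: the demographic parity gap, the difference in positive-prediction rates between two protected groups. The function names are illustrative assumptions, not an established API.

```python
def positive_rate(predictions, group, value):
    """Share of positive (1) predictions within one protected group."""
    members = [p for p, g in zip(predictions, group) if g == value]
    return sum(members) / len(members)

def demographic_parity_gap(predictions, group):
    """Absolute difference in positive-prediction rates between two groups.

    A gap of 0 means the classifier's positive rate is identical across
    groups (perfect demographic parity by this metric).
    """
    groups = sorted(set(group))
    return abs(positive_rate(predictions, group, groups[0])
               - positive_rate(predictions, group, groups[1]))

# Toy example: 8 predictions, 4 people in group "a", 4 in group "b".
preds  = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # |0.75 - 0.25| = 0.5
```

As the masterclass abstract notes, such metrics capture only part of what fairness means; a zero gap says nothing about, for example, accuracy disparities between groups.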
The impact of data and digital technologies on a range of sectors (education, health, transit, cities) has increasingly blurred the lines between public institutional operations and private markets. As this expands, several major shifts demand attention – the privatisation of governance, the platformisation of everyday life, and the evisceration of protective prohibitions through regulatory voids and arbitrage. As governments continue to be slow to act, for fear of impeding economic growth, the blurred lines are reshaping every sector of society in ways that are increasingly challenging for governmental institutions to rein back in. In this masterclass, we'll detail two case studies – the health sector, through an exploration of DeepMind, and city operations, through the case study of Sidewalk Labs. We'll also work with participants to discuss the demand for alternative public infrastructures that could be designed and used to enable digital advancements in a range of different sectors and how to advance this cause from a policy perspective.
Speaker: Associate Professor Julia Powles, Associate Professor of Technology Law & Policy, University of Western Australia.
Researchers interested in employing large data sets are typically given broad access and trusted to operate honourably, an approach that has proven manifestly inadequate (cf. Cambridge Analytica). In this class, we will introduce the Private data analysis language, which allows researchers to analyse data without having access to it. Private is a probabilistic declarative language, based on Python and BUGS, which blocks from release any results that are sensitive to the data provided by any given individual. We will also argue that users, participants and citizens should retain ownership of, and derive ongoing benefit from, their data, and introduce the Unforgettable data marketplace as an example of how this can be achieved.
Speaker: Professor Simon Dennis, Director, Complex Human Data Hub, Melbourne School of Psychological Sciences, University of Melbourne.
In his seminal book The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy And How To Restore The Sanity, Alan Cooper argues that a major reason why software is often poorly designed (from a user perspective) is that programmers are in charge. As a result, programmers design software that works for themselves, rather than for their target audience; a phenomenon he refers to as the ‘inmates running the asylum’. In this talk, I argue that explainable AI risks a similar fate if AI researchers and practitioners do not take a multi-disciplinary approach to explainable AI. I further assert that to do this, we must understand, adopt, implement, and improve models from the vast and valuable bodies of research in philosophy, psychology, and cognitive science; and focus evaluation on people instead of just technology. I paint a picture of what I think the future of explainable AI will look like if we went down this path.
Speaker: Associate Professor Tim Miller, School of Computing and Information Systems, and Co-Director for the Centre of AI and Digital Ethics, The University of Melbourne
Since it was introduced in 2006, differential privacy (DP) has become accepted as a gold standard for ensuring that individual-level information is not leaked through statistical analyses or machine learning on sensitive datasets. In recent years, it has seen large-scale deployments by Google, Apple, and the US Census Bureau, all organisations with the resources and expertise to implement their own custom DP systems. In this talk, I will describe our efforts to foster wider adoption of DP, including a system ("PSI") we are developing to enable researchers in the social sciences and other fields to share and explore privacy-sensitive datasets through data repositories, a new community effort ("OpenDP") to build a trusted open-source suite of DP tools, and work analysing the relationship between DP and legal requirements for privacy. This is joint work with many collaborators in the Privacy Tools Project.
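As background for readers unfamiliar with differential privacy, here is a minimal illustrative Python sketch of the classic Laplace mechanism that underlies many DP systems. This is not code from PSI or OpenDP; `dp_count` and `laplace_noise` are hypothetical names used only for illustration.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via inverse-transform sampling."""
    u = rng.random() - 0.5          # uniform on (-0.5, 0.5)
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon, rng=random):
    """Epsilon-differentially-private count of records matching a predicate.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so adding Laplace(1/epsilon) noise
    to the true count satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

With a small epsilon the noise is large and any individual's contribution is well hidden; with a large epsilon the answer is nearly exact but the privacy guarantee is correspondingly weak.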
Speaker: Professor Salil Vadhan, Vicky Joseph Professor of Computer Science and Applied Mathematics, Harvard University.
Data-driven insights and decisions increasingly depend on analysing large quantities of de-identified data. Concerns about using such data are often expressed in terms of privacy. The most significant concerns often relate to sensitivity in the data itself, unintended consequences of using insights, loss of control, or harms that could be created through the use of insights. This presentation will provide insight into ongoing work to identify these concerns and to frame considerations and controls around the use of data-driven insights and decisions.
Speaker: Dr Ian Oppermann, Chief Data Scientist, Customer, Delivery and Transformation, Department of Customer Service
Details coming soon.
Speaker: Nicolas Hohn, Associate Partner at McKinsey, and the Chief Data Scientist for QuantumBlack Australia
This presentation will introduce the New Zealand Government’s algorithm transparency and accountability journey and the role of the Government Chief Data Steward.
Speaker: Jeanne McKnight, Senior Advisor System Policy, Statistics NZ
This presentation will explore the opportunities and considerations when using AI in a government agency such as Services Australia, particularly from the perspective of the Chief Data Officer. Dr Milosavljevic will discuss the importance of proper governance arrangements around the management and use of data (including AI), and of balancing the constant tension between those who are too fearful of AI and those who are not fearful enough about poor execution. She will focus on the six main steps needed to make AI work well in practice, and on the importance of first truly understanding the business problem being solved, to ensure that leveraging AI brings about real and sustainable business value.
Speaker: Dr Maria Milosavljevic, Chief Data Officer, Services Australia
The turn to data-driven approaches within public administration to inform (and even to automate) public sector decision-making can be understood as an emerging movement that I call the ‘New Public Analytics’ (‘NPA’). Central to the New Public Analytics is the use of ‘data analytics,’ a form of computational analysis that has its theoretical foundations in data science and statistics, involving the application of software algorithms (including but not limited to machine learning algorithms) to large data sets in order to identify patterns and correlations in the data capable of generating ‘actionable’ insight. The lecture will explore, amongst other things, the various ways in which the apparent migration of predictive analytics from online commercial services to public sector decision-making and service delivery entails several problematic and dangerous distortions and blind spots. In so doing, I will highlight the need for lawyers, in particular, to critically scrutinise these developments, to ensure that there are reliable and transparent mechanisms providing meaningful and effective responsibility and accountability for the design, deployment and consequences of NPA techniques, and to safeguard the rights and freedoms of individuals and communities against potentially egregious errors and abuses of public power.
Speaker: Professor Karen Yeung, Birmingham Law School and School of Computer Science, University of Birmingham
In ancient Greece, the basanos or touchstone had multiple meanings: a literal stone that tests the authenticity of gold by revealing its characteristic mark upon striking it, or metaphorically, a moral test of the authenticity of a life or a ruler. It also referred to a method of extracting truthful testimony by means of torture; specifically, of non-Greek slaves. The basanos thus embodies the interweaving of truth-telling with virtue, violence, and power in Western moral, political, and technical thought. In this talk I explore how contemporary uses of AI and data science have retraced and reconstituted the basanos in myriad ways, while also revealing a critical opportunity for the invention of new, more just and sustainable means of truth-telling.
Speaker: Professor Shannon Vallor, Baillie Gifford Chair in the Ethics and Data of Artificial Intelligence, The University of Edinburgh.