Kate Crawford: Atlas of AI

Hear from Kate Crawford, Honorary Professor at the University of Sydney and one of the world's foremost scholars on the social and political implications of artificial intelligence.

In a talk timed for the Australian launch of her new book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Kate Crawford makes a compelling case that artificial intelligence is not an objective or neutral technology of innovation.

Hailed in Nature as an essential read, Crawford's book "exposes the dark side of AI success" by taking us on a journey that uncovers how planetary computation is fueling a shift toward undemocratic governance and increased inequity.

Who benefits most from these systems, and what is at stake? Crawford says that 'these systems are empowering already powerful institutions – corporations, militaries and police.'

Hear brilliant thinker, award-winning author and acclaimed academic Kate Crawford, University of Sydney alumna and Honorary Professor, explain what it takes to make AI work, and how it centralises power.

This event was held on 6 July 2021.


FENELLA KERNEBONE

Welcome to Sydney Ideas. This is the University of Sydney talks program. My name is Fenella Kernebone and it is a pleasure to be with you tonight. Thank you to everybody who has joined us in our audience. Welcome to you.

Tonight we will be hearing from Professor Kate Crawford, University of Sydney alumna and Honorary Professor right here at the University and of course, one of the world's foremost scholars on the social and political implications of artificial intelligence.

Now before I continue proceedings, I would firstly like to acknowledge and pay my respects to the traditional custodians of the lands on which we meet, where we live and work and share ideas, wherever you happen to be. I also acknowledge the Gadigal People of the Eora Nation, because it is on their lands that the University of Sydney is built.

And as we share our own knowledge, our teaching, learning and research practices within this university, may we also pay respect to the knowledge embedded forever within the Aboriginal custodianship of country.

So, tonight, ladies and gentlemen, it's such a pleasure to be in conversation with Kate Crawford. It is timed for the Australian launch of her new book, 'Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence', published by Yale University Press.

Over her 20-year career, Kate's work has focused on understanding large-scale data systems, machine learning and AI in the wider context of history, politics, labour and the environment.

She is a Research Professor of Communication and STS at USC Annenberg, a Senior Principal Researcher at MSR-NYC, and the inaugural Visiting Chair of AI and Justice at the École Normale Supérieure in Paris.

Her book has been described as ‘timely and urgent’ by Nature, ‘a fascinating read with a fascinating history of data’ by the New Yorker, and is one of the Financial Times' top reads of 2021.

Artificial intelligence, or AI, seems to be one of the great innovations of all time. It is making things simpler, easier, and it is all in the cloud. Simple. Hidden from view.

Ladies and gentlemen, 'Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence' challenges us to see AI in a wider context; as an extractive industry built with human labour exploiting vast natural resources.

What does it take to make AI work and how does it centralise power?

As Kate says, it is all about politics, all the way down. Kate Crawford, hello.

KATE CRAWFORD

What a fantastic introduction. It is great to be here.

FENELLA KERNEBONE

Tell us a bit about why you wrote this book. Why do we need an Atlas of AI? How do we map artificial intelligence?

KATE CRAWFORD

Well certainly I used the metaphor of the atlas because atlases are very unusual sorts of books. You can look at the scale of a continent or zoom in to a mountain range. You can also look at the imprint of colonial empires over time.

I think that is a very useful way of thinking about artificial intelligence. We need to think about the way in which we have these great houses of AI, this handful of companies that really dominates planetary computation.

We need to look at the different scales in terms of how it is affecting our everyday life.

In that sense an atlas really encourages us to look at this more global picture of what these systems do and what they might be costing us.

In terms of literally how I did this, the way I did it as a researcher was to really go to the places where AI is being made in a fuller sense rather than a more traditional approach of reading academic papers and reflecting at arm’s length.

I went to the mines. I went to where the minerals are being extracted to build the large-scale systems.

I went inside Amazon fulfilment warehouses to see the experience of work. And of course, I went to the laboratories where large-scale training data sets are being made.

In that sense I wanted to put myself in the story; to understand personally what it really takes to make these systems work.

Frankly I wanted to open my eyes. Having done this for decades, it really was still an extraordinary eye-opening experience to be in these locations and to understand things differently.

FENELLA KERNEBONE

Tell me a bit more about how you balanced the need to take us on a journey through the chapters of this book – the storytelling we need to understand what is going on when it comes to artificial intelligence.

This is mixed with the archival research that runs throughout this book. How did you go about that?

KATE CRAWFORD

I think you always need both to understand how we got to this place. Archives are an important part of my practice. But again, I started the book by going to the location.

I begin by jumping in a van and driving out to the only working lithium mine in the United States, which is in Nevada in a little town in the Clayton Valley that has about 125 people living there.

This town was almost abandoned in 1917 after the gold and silver rushes that enriched San Francisco. It was not until after the war that they realised that this little town of Silver Peak is right on top of this gigantic underground lake of lithium.

Lithium is essential for one reason, which is rechargeable batteries. Of course, there are a few grams of lithium in things like iPhones, but many kilograms – over 60kg – are required to create a reasonable battery for a Tesla car.

We are now in this race where we have a real concern about running out of our current known supplies of lithium. A substance that was assumed to be everywhere, was hardly considered to be important and was very rarely recycled has now become a crucial part of planetary computation. It reminds us of the enormous amount of minerals and energy and water that is needed to make these systems work, at a time when we are reaching the end point.

There was a new study that came out from a university in Germany suggesting that if we follow best practices in using lithium and we recycle better, we might be reaching the limits of current supplies around 2100. If we do not do that, we could be reaching that limit as soon as 2040.

We are in a very different moment in terms of thinking about the true cost of the systems. Certainly, when I was writing this book it was before the semiconductor crisis.

Before 2021 we had a lot of evidence for why the true impact of these systems is much bigger than what is usually talked about. We generally think of big technology as being a green industry, but nothing could be further from the truth.

FENELLA KERNEBONE

And this is exactly what the book outlines so clearly in each chapter as well. Maybe let's go back to the lithium mine: what does that place look like? What does it smell like? What does it feel like?

As you say, if we do not find ways to recycle this lithium effectively, we could be running out of it by 2040, which does spell the end of certain pieces of equipment that we all rely on and love so much. Tell me a bit more about the environmental footprint that AI has when it comes to these types of mines.

KATE CRAWFORD

These are extraordinary landscapes. There are multiple places around the world where you get to see the sort of material imprint of really all of these systems required to build AI.

It was interesting going to the lithium mine. It is this gigantic desert plain, a huge salt desert, but there are these big black pipes snaking around the earth, extracting the lithium brine.

It is quite an extraordinary landscape, yet at the same time when you talk to the miners there, people are not sure how much longer that mine can keep going or how much lithium is possibly left.

If we look on the other side of the planet, if we go to Inner Mongolia, there is this giant black toxic lake – an entirely artificial lake made from the huge amount of waste produced by refining rare earth minerals. Again, rare earths are a core component of so many consumer AI devices and so much computational infrastructure.

The lasting legacy of these forms of production is often hidden from view. So, I think in many ways we do not see the systems that are behind AI.

Be that from the perspective of natural resources or labour or data. Part of what I am trying to do is bring those things back into focus. We need a better sense of what these systems are really doing worldwide.

FENELLA KERNEBONE

I would love you to talk briefly about an earlier work of yours that you are very well known for – you can look this up online if you need more information about it. It was all about mapping the supply chain of consumer AI devices.

It was a project that you did on the ‘Anatomy of an AI System’ with Vladan Joler, and it won the international prize for design of the year in 2019 and is now in the permanent collection of MoMA.

It is not in my home, which is deeply unfortunate, but it is quite big. Tell me a bit about this particular work, and how the new book connects to that earlier mapping of the supply chain of consumer AI devices.

KATE CRAWFORD

In many ways I think of the ‘Anatomy of an AI System’ as the genesis of this book, because it was the project that completely transformed my thinking.

It started back in 2016. Vladan and I were at a conference on voice-enabled AI systems, and we were thinking: do people understand how an Alexa or a Siri really works?

How would you draw it and map it for people? We started drawing the forms of data extraction that allow you to speak to it and say, "Alexa, what is the weather today?"

We knew how the data part worked, but then we thought – what happens if you open it up? And you trace back every component?

We went back to discover what mines it was being produced in, how it was being smelted and shipped around the world. All the logistics.

We also looked at the end of life, at the waste streams where these devices get thrown away in less than five years. So it was this extraordinary mapping process where we saw the deep time of extracting all these minerals and energy to serve a split second of technological time for devices that are so easily discarded.

So doing that opened my eyes and made me realise that after doing this gigantic map of a single AI device, I wanted to expand that analysis to essentially study the entire AI industry.

FENELLA KERNEBONE

Tell me about how this transformed your thinking about artificial intelligence, and then how it has in some ways transformed how we will think about it.

We are thinking about a single device, the Alexa. By using art and putting a humanistic focus on it, how does this enable our understanding of AI and its impacts?

KATE CRAWFORD

It is interesting. I've been an academic for 20 years now, coming up on my anniversary, and it has been an interesting journey for me.

I spent a lot of my early years writing papers and books, and that is incredibly important work, but it speaks primarily to an audience that is already interested, already thinking about these questions.

It wasn't until I had the privilege of collaborating with artists that I realised there are ways we can show how these systems work to a bigger audience. The reason I think that is important is because AI systems are already having so much impact on our everyday lives, often in ways that are hidden but sometimes in ways that we see, right across the social structure.

How can you start to have a more informed public debate about whether we want facial recognition in our cities or AI emotion recognition used in schools if people cannot see how the systems work and what is at stake?

I think one thing that has motivated my work recently is how do we share this research with the widest possible group so we can start to have those debates?

Without those debates, we will not have the regulation we need, we won't be able to make the choices we need to make, and we will not be looking at the way these kinds of systems are producing really quite troubling changes at the level of our climate, but also in terms of labour rights and data protections.

It's got to a point where we have to think about these issues together, and it is important to bring in a wider group than an academic paper might do.

FENELLA KERNEBONE

We have talked about extraction when it comes to the lithium mine. What is the term you used for the actual structure and plan of your book? I find that interesting. Can you talk about that?

KATE CRAWFORD

In some ways the book is structured like geological strata. It begins with the earth, moves through labour and data and classification and affect, then to how states use these systems, and ends up in outer space with the current private space race run by AI billionaires.

I tend to think of this as the full stack of AI. We tend to hear about full-stack computing and think about the data channels.

But the full stack of AI really does involve so many human bodies labouring, so many resources and just vast amounts of data. In that sense the book is really trying to allow people to skip between these strata and to see how they actually connect.

FENELLA KERNEBONE

Let's go back a little bit, because I want to talk about labour and ethics. We've been unpacking AI itself – we know what is involved, we've talked about the lithium mines – but when we think of AI, for many of us it is inexplicable: in the cloud, a thing, something without any impact.

If I could get you to define how you see artificial intelligence, how does that differ from what everybody else might think of? Give us your definition.

KATE CRAWFORD

The traditional definition, which is certainly most commonly used in technical communities, looks at a series of approaches, of which the most popular right now are called machine learning.

That includes various types of learning, from deep learning to reinforcement learning. But this is a series of technical approaches and algorithms used effectively to do large-scale statistical analysis and pattern recognition – that is one way of thinking about it.

There is another way – what are the social practices defining these systems, where does the data come from, and what histories does the data bring along with it?

More broadly, what are the infrastructures that these algorithms run on? Who owns those infrastructures?

In many ways we could look at this from a technical perspective, but also from a social and infrastructural perspective, and then also geopolitically, in terms of how these systems are producing significant change at the level of global power.

What I tend to do in defining these systems is to go beyond this very narrow technical approach and look at everything, starting from the rooms where the systems get designed.

Who decides who will be served by these systems and who might be harmed? And ultimately, how do these feed into existing structures of power, be that big business, policing or the military?

FENELLA KERNEBONE

You call it an ecosystem as a way of thinking about it. Is that a fair one?

KATE CRAWFORD

I think that is a fair one and that ecosystem is changing rapidly and the types of techniques used in the AI industry are changing all the time.

But these broader questions around the social and political structures I think are much more consistent and dominant and so that is part of what I try and do in this book – look closely at how AI systems are designed to serve very specific sorts of interested parties.

FENELLA KERNEBONE

Let's talk about labour. There are a number of elements that go into the construction of artificial intelligence when we talk about this through the book, but you write about many forms of labour involved in the making of AI and you have touched on this before.

It includes miners, content moderators, Amazon warehouse workers, even engineers in Silicon Valley. So all these labour forces are playing into the construction of artificial intelligence but how is the experience of work in your view changing now in relation to increased surveillance and algorithmic management systems?

KATE CRAWFORD

A great question, because so often when we think about what the future of work looks like, we are told these stories about robots and humans collaborating in shared environments, and Amazon is so often used as the symbol of what that might look like.

But it was by going inside a fulfilment centre – which is, again, such an ironic term, fulfilment centre. You have to think about whose desires are being fulfilled by these systems, because when you see what the working experience is like, it is extremely harsh.

I had read many stories about that, but there was something about really seeing it for myself, seeing people under huge amounts of physical strain – again, you would see lots of bandages and support garments.

You'd also see the stress of having these screens tracking what is called the rate – the rate at which a worker can pick items off the shelves and get them packed in time. If they don't make that rate, they run the risk of being penalised and possibly fired.

It is an extremely difficult environment. Even Jeff Bezos in his recent letter to shareholders admitted they have to do much better with how they treat workers.

But rather than supporting unions, they have decided to further surveil workers by tracking them at the level of muscles and ligaments, measuring how long they are spending on task, so that literally the internal workings of workers' bodies are being tracked through the factory.

That sounds very much to me like the visions of labour that we had from Taylor and Ford. You can go back even to Charles Babbage, who believed we needed to create perfect factories that would work like gleaming computational systems, so I'm interested in looking at the labour history there.

But the other thing that is important is that these sorts of systems are not just in factory contexts; they are in so many workplaces. This is particularly true in the context of the pandemic, where many of us right now are on Zoom calls like this one and many workers are being tracked on how efficiently they are answering their emails and taking their meetings.

They’re being compared to their colleagues in terms of who is making the sales or being a more efficient worker. And some systems are even tracking the micro-expressions in people's faces to assess if they are happy, sad or engaged or the ideal employee.

So this type of what is called bossware, or algorithmic management, is being spread throughout industries around the world. I find it concerning because it increases the power asymmetry between employers and employees.

And we see those power asymmetries being pushed out in many domains, where again AI systems so frequently give more power to the already powerful.

FENELLA KERNEBONE

For sure. This goes back to things you have been talking about – facial recognition – and it goes back to data collection practices.

You talk about this in the book extensively – data collection, the facial expression research of Paul Ekman – so tell me about the things feeding into our understanding of how AI is being created and used today, and how this still persists and perpetuates.

KATE CRAWFORD

Yes. It is an interesting story. The idea of emotion recognition AI is that you could look at a video of somebody's face and tell what their true inner state was, based on nothing more than the movement of muscles in their face.

And this idea can be traced back to the psychologist Paul Ekman, who in the 1960s and 70s was developing the idea that there are six essentially universal emotions that we all feel regardless of culture or context, and that they can be deduced from the face.

AI researchers wanted to see if computers could do this – trace muscle movements and make predictions about internal states. What is troubling is that even back when these theories were being developed, anthropologists said the idea was problematic: it simply wasn't scientific, and it removed all of the interesting issues around context and culture, the fact that we react differently depending on who we are talking to.

Just because you are smiling does not mean you are happy, and any of us who have ever worked in a café know that is absolutely the case.

Yet this deeply problematic idea has been built into hiring, in schools, in criminal justice systems.

So I traced the history of that idea and how problematic it is from a scientific perspective, and yet we have treated it as objective. That is one example of how pseudoscientific ideas get built into technical infrastructures.

FENELLA KERNEBONE

I wonder if I could go off track. You talk about the Mechanical Turk. It was interesting to see how that plays into the mystery or the magic of AI, so could you tell me that story?

KATE CRAWFORD

The Mechanical Turk was first designed in the 1700s, when it was the toast of Europe. It was this almost robotic-looking man with a turban who could defeat any human chess player.

But it was a trick, because there was a person hidden within the box who was actually directing the motions of the automaton.

Then of course we go forward 150 years and Jeff Bezos decides to name his system for remote work – a dispersed work platform spanning the world, 'artificial artificial intelligence' as he calls it – Amazon Mechanical Turk.

So these ideas from history, tricks to hide labour, are now being revived in AI systems, often in ways that are not even ironic.

The fact Bezos is saying that with a straight face always strikes me as somewhat odd. But yes, one of the things I think is important is looking at these longer histories behind ideas of automation that I think are still in some ways haunting the systems we have today.

FENELLA KERNEBONE

If I may, Kate, can we get into some issues now of bias, particularly – and you have touched on this already – racial and gender classification? Something that haunts the book is eugenics and how this plays out in contemporary AI, with criminal detection and emotion recognition. Maybe talk about the histories of this pseudoscience and scientific racism.

KATE CRAWFORD

It is an important part of how we have got to this point in history, where we have AI systems which claim to tell your internal state, to tell if you are a criminal, or to predict your sexuality.

These are systems that have been published about just in recent years. Certainly, I think we can trace those ideas back to phrenology and craniology and scientific racism.

It troubles me that one of the very common approaches in the AI sector – one that is not even remarked upon enough – is to predict people's race and gender.

And when I spend time studying systems that do this, seeing the classificatory logic is absolutely shocking. Again, it is often premised on ideas of binary gender.

Quite frankly that is so problematic that I do not understand why that is still being coded for. And then there are ideas of there being four races. And that takes us back to the systems that were being used in apartheid South Africa.

There are labels used to classify people's character and personality and worth. For me, certainly, the breakthrough project in understanding this was a collaboration I did with Trevor Paglen called Excavating AI.

We spent over two years looking at the training data that is used to allow AI systems to interpret the world. In order to have an AI system interpret anything, it needs a lot of training data, so that if it is predicting whether an image is of a cat or a dog, you feed it thousands or sometimes millions of images of cats and dogs so it can begin to detect what an image might be.

When you start applying that to humans you start to see some really disturbing classifications. We looked at possibly one of the most well-known training datasets called ImageNet, which is in many ways responsible for so much of the success and breakthroughs we have had in computer vision and specifically object recognition.

When it comes to defining people as objects, you start to see some really troubling things, like people being defined as a bad person or a kleptomaniac or an alcoholic. And as we kept looking, there were categories with terms that I will not be able to say here because they are not repeatable – truly racist and misogynist slurs.

This is a training dataset that is used widely and was openly available on the internet for over a decade. It was left in that state as though it were an okay way to be classifying humans.

It is not that unusual. There are many examples of this in training data. The way that it has been put together is an afterthought in terms of how we train technical systems.

Certainly, one of the things that we need to do much more is bring a critical lens to how AI systems are trained to see the world and to make interpretations. We need to be far more careful about the way that they are able to play a role in so many sensitive social institutions.

FENELLA KERNEBONE

When these practices are perpetuated in AI, as you were just talking about before – these ideas of ImageNet et cetera – the idea is that you can clean it up and fix it and therefore it can go away. That idea in itself is a huge problem that perpetuates today, isn't it?

KATE CRAWFORD

It is certainly one of the most common approaches. These issues of bias in AI systems have been pointed out by many scholars for many years; we can think of many examples.

There is a long list of people who have done extraordinary work pointing out that we have systems that are profoundly biased in terms of race and gender and many other factors as well.

The response from technology companies has most commonly been to remediate bias: if we have a facial recognition system that doesn't recognise people with darker skin tones, let's just increase the number of pictures that we have of people with darker skin tones.

FENELLA KERNEBONE

More and more superdata.

KATE CRAWFORD

Precisely. We had a case of course after a study was published showing that Google systems, for example, were not able to recognise people with darker skin tones.

Google reportedly then went out and tried to pay homeless people in Los Angeles for face scans without telling them what the data would be used for.

Again, I do not think this is a way of improving these systems or making them more just and fair, if we look at the wider context of how they are being made and who is being impacted.

Certainly, I think this idea of bias remediation, while it is an important step, is not sufficient. We need to think about who benefits and who is harmed.

And until we have more rigorous systems for accountability and have built in stronger guardrails around how these systems work once they are deployed, we are going to keep seeing these problems occur.

FENELLA KERNEBONE

This is why the book and what you are talking about are so important, because it unpacks and unravels this so that it becomes clearer. We are talking to Kate Crawford, and what a delight it is, about her book 'Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence'.

It is an extraordinary book. One of the wonderful surprises about the book, Kate, is how immersed in historical methodologies it is.

You look at the history and prehistory of these hyper-contemporary systems – you have talked about this a bit already. Let us go back a bit. Can you say where and when AI begins? Is it possible? Take us back a bit.

KATE CRAWFORD

The most commonly told story about AI is that it begins with a conference in 1956. That was certainly a turning point, when we saw a series of scientists get together and say that they wanted to create computational systems that could be intelligent and could make intelligent decisions.

In some ways I think there was a problem even then, early on, in the idea of computers being akin to a human mind.

It is almost like a Cartesian dualism, the old mind-body split: the idea that we can create a mind in a jar, without thinking about the ways in which we are embodied and relational, and that we think in relation to the people and contexts around us, in a wider environment and ecology.

I think there, already, we had a trap in the very term artificial intelligence. It is certainly one of the points I believe deeply: AI is neither artificial nor intelligent. It is profoundly material.

And in no way are we creating systems that are autonomous or able to make any kind of determination without extensive amounts of data and predetermined rules and rewards. Even in that moment, I think, we can start to say that the term artificial intelligence had some problems at its birth.

You can go much further back. That is one of the things I like to do as a researcher: to trace the deeper historical threads that allow systems like this to be constructed.

Sometimes that means going back to the people who devised the ideas of statistics, like Francis Galton, who was the father of eugenics.

He was trying to think about how you create composite images of the criminal or the prostitute – these ideas, again, of reading from the face what somebody's character would be. It is an idea that repeats in our technical systems today.

There are also historical trajectories around how we classify populations, and this is something that states have done for centuries.

Again, it is a mechanism of power that is now being devolved to AI systems. I think we have to take this 500-year view to really understand the types of AI systems we have today and where they might be taking us.

In fact, I think they are centralising power and threatening some of the traditional tenets of democracies.

FENELLA KERNEBONE

So, politics all the way down, as you like to say. For sure.

In the book you end with a discussion, and I think you call it the coda chapter, is that correct? I might scan it briefly with my eyeballs.

You cut to a video of Jeff Bezos, talking basically about the modern-day space race between Jeff Bezos and Elon Musk.

I guess the question is what we can learn about the future and what we discover about the technological billionaires wanting to leave the planet.

Cut to Bezos in the control room, adjusting his headset. His voice-over continues, saying 'this is the most important work I am doing'. It is a simple argument. This is the best plan.

We face a choice as we move forward: we need to decide whether we want civilisation or stasis. Do we have to cap the population, put a limit on it, or do we fix the problem by moving out into space?

So how can we imagine a future, Kate, when all of our billionaires just want to leave and extract out there? They cannot do it here anymore, so they go out there. Talk to me.

KATE CRAWFORD

I'm not sure if you saw, Fenella, but one of the most popular petitions this week is a petition to not let Jeff Bezos land back on earth when he goes to space this month.

You can see why people are concerned. I am fascinated by the fact that he is passionately committed to space travel and has created this company, Blue Origin, to not just create rockets but with a much more ambitious mission of actually moving people into space.

Living on large-scale satellites, and keeping the earth as 'a very nice place to visit', in his words – presumably for the people who can afford it and are not working in the space mines and living in the space colonies.

This is interesting for two reasons. One is that I think it tells us something very troubling about this addiction to growth that is certainly there in that quote from the book that you just read.

Rather than thinking about what zero growth would look like, or changing the way we live, we just move the extractive frontier a little bit further out.

Space is treated as the new mining frontier. Certainly, that is the case for many of the big technology billionaires. This is a new privatised moment.

It does not come from a vision of what is best for everybody, but a vision of what is best for them. You can really see it in the documents they produce.

To me this kind of corporate imaginary of capture and extraction is really what is behind so much of the big tech sector right now.

In the book I described AI as the extractive industry of the 21st century. I think you can see that logic taken to its extreme in the private space race where it really is about trying to capture as many minerals as possible, to get to places first and then to claim it for themselves.

It is interesting, actually. There was, certainly, the Outer Space Treaty in the 1960s, which was written to really concretise the idea that space was a public commons. It was for everybody.

And then in 2015 we saw both Jeff Bezos and Elon Musk lobby the Obama administration for the right to extract and profit from minerals in outer space.

The race has been on, and we are losing yet another commons. We are seeing another type of enclosure, just as we have with so many kinds of datasets on the internet and in archives, which have simply become raw material for AI. We are now seeing this play out in outer space as well.

FENELLA KERNEBONE

Maybe we should all sign the petition. I do not know if that is a good idea or not. Potentially.

Kate, it has been delightful to speak to you for your book launch in Sydney. If you were to write your next book on how to govern AI, what would you say? Is there anything that needs to be done beyond ethics, principles and self-regulation? Or can we do better? What do you think?

KATE CRAWFORD

We can definitely do better.

We have certainly seen over the last couple of years a profusion of ethics statements and principles and even a Hippocratic oath for AI, which I saw was published this week.

In all of these high-flying documents, what we lack is any type of mechanism tying them to accountability.

You can have the big technology companies releasing their principles, but how do we make sure that they are actually being followed and that they actually have an impact on the world at large? We have had a lot of evidence to show that this is not working.

Indeed, I think we have to look to strong regulation. I have been heartened to see the EU release the first-ever omnibus regulation for AI. We have a long way to go in the United States, and we are slowly seeing some interesting moves in Australia, in fact.

The Australian Human Rights Commission released a report specifically looking at how we can address the human rights challenges presented by artificial intelligence.

I'm optimistic we are moving into an era of regulation and the trick will be to ensure that regulation works and is not riddled with loopholes.

I'm also optimistic to see the number of organisations that are coming together around this issue. Issues of climate justice, labour rights and data protection have often been seen as very separate, pursued by different organisations.

We are starting to see real collaboration around this question of how AI can be curtailed in such a way that it can benefit all and not the few.

So those sorts of movements I think are very positive, and there is also this idea of the politics of refusal – people saying no to some kinds of technology.

We have seen, for example, bans on facial recognition in places like Portland, Oakland, Somerville; growing calls for real restrictions across the EU as well.

That to me is again a moment where we are starting to see scepticism, and some realism about the fact that technological determinism is not the way we should be living.

We shouldn't allow these systems to always be the central actors in deciding what our lives and what broader political structures should look like.

FENELLA KERNEBONE

Certainly. Kate Crawford is speaking with us, talking about her book, 'Atlas of AI'. And thank you for the questions you have sent through so far. I will get to a couple before we wrap up at seven o'clock, Eastern Standard Time.

The first one is from Chris Geriarty and it is: What is the impact of AI on energy resources? Does AI use more power than is justifiable? How did we get to this place?

KATE CRAWFORD

Well, that's a really good question, and there have been excellent papers published recently. I'm thinking of the work of Emma Strubell, who wrote a breakthrough paper in 2019 looking at how much energy it takes to train a single natural language processing model. That study showed it was over 660,000 pounds of carbon dioxide, or the equivalent of around 125 round trips between Beijing and New York. So that's one model. And it doesn't even come close to the scale of the models created at, say, Facebook or Google.

So indeed, we have a moment now where we're starting to see a huge amount of energy being used to train AI systems. It’s also happening at the same moment in history where there is a trend towards ever larger models.

We've got large language models like GPT-3, which is a system that attempts to create humanlike text but uses a vast amount of data and a vast amount of energy. So there is a push ultimately towards AI supercomputing, right at the same time in history when the planet is already teetering on a very serious climate collapse, depending on where you are and how you are experiencing it.

So in this sense your question points to one of my deepest concerns, which is how can we start to move away from these extremely energy intensive and compute heavy approaches? Because otherwise these systems simply cost far more than they actually deliver.

FENELLA KERNEBONE

A question's come through from George Margelis, and it's about healthcare. "AI in medicine is getting a lot of press. In your opinion, is it close to becoming a viable tool for clinicians, and will it ever replace clinicians?"

KATE CRAWFORD

Well, certainly there's a lot we would like to see AI do in the medical space that could be profoundly helpful – certainly in terms of designing drugs and looking at vaccines, a very timely moment, we can think about the roles of machine learning there. But it becomes much trickier when we look at the way AI is being used in direct doctor-patient relationships.

A colleague of mine, Rich Caruana, wrote this fantastic paper called 'Friends Don't Let Friends Use Black-box Models', where he basically looked at an AI system that was being used to predict whether somebody would get pneumonia, and this was to be used in hospitals.

And his deep learning model was remarkably effective, apart from one issue: it would always recommend that people with asthma be sent home from the hospital. That is the last thing you would want to recommend. But we can think about why. The data it was trained on showed that, in many cases, people with asthma had really good outcomes. And why was that? Because they were put immediately into the highest level of care, so they had really good responses. So in the data it might look like, 'Oh, these are patients who always do well, so send them out of the hospital.'

It was such an important breakthrough paper because it reminded us that AI systems are only as good as the data they are trained on, and of course medical data has a profoundly skewed history, in that most commonly medical tests have been done on white populations and, again, on predominantly male populations. We have had many problems with the skews in the data that AI systems are trained on.

So yet again, I think these are great questions, but it reminds us we need to have a higher threshold of care and caution in terms of how we use these systems, rather than just designing them and deploying them on millions of people at once, which is the current state of AI.

FENELLA KERNEBONE

A question for you, Kate, from Deborah Prospero – and thank you to everybody who is sending in questions, we will get to a few more. "Kate, you write about Bezos and von Braun in your book. Is there an inherent connection between the alt-right, white supremacy, and contemporary space colonialism?"

KATE CRAWFORD

I love this question. It's an extremely timely question, of course, because in the US there have been recent reports that the CEO of a surveillance firm called Banjo was found to have connections to the Dixie Knights of the Ku Klux Klan – so an actual white supremacist. And there is also Clearview AI, a company founded by an Australian, which has the largest training dataset of all our faces – around three billion images – and is currently being used in many policing applications. That founder, Ton-That, was affiliated with right-wing extremists, including writers from Breitbart and people behind the Pizzagate conspiracy.

You know, it really is kind of chilling to start to realise the way in which the far right have actually got very clear strongholds within start-ups and within tech generally.

In fact, the academic Sarah Myers West wrote a piece called 'AI and the Far Right: a History We Can't Ignore', really pointing to the fact that there is a deep connection between some of these organisations and tools being used in immigration – tools that are being used to try to track and remove immigrants from the US – and also in predictive policing.

So it's an incredibly important question, and one I think we need a lot of investigative journalism and research into. But you do see some of that ideology in this contemporary private space race – a very Ayn Randian vision, which is not a coincidence. We know that Elon Musk has cited Ayn Rand as a big influence, as has Peter Thiel. This idea of singular success as a libertarian vision is, I think, very much built into Silicon Valley culture.

FENELLA KERNEBONE

A question – there's actually a couple of questions – about AI and children. So, you know, we are all at home at the moment, especially if you're in New South Wales – respect and love to everybody. Babita Tewari has asked this question: "Is it worth introducing AI concepts to children from the age of six onwards?"

KATE CRAWFORD

Well, I love that this is an open question, because we can ask: what concepts should be introduced to them? Certainly from the age of six, I think children are already exposed to so many AI systems that are part of their toys. We have Barbies that are actually recording children's voices and speaking back to them. We have so many iPad games trying to harvest and extract data from children. Now, in theory we should have stronger legal protections than we do, but certainly we should be giving children the ability to ask questions back of these systems, to be a little cautious around how much data they share, and in many ways to remember that they have a role in the world too, and that ultimately these systems don't get to decide how they live.

It is actually really important to give kids the ability to understand that they can create the world in different ways, and that it does not have to revolve around technology – so perhaps that is not the answer you are looking for. I'm not going to sit here and advocate that all kids need to learn how to code. I don't think that is the solution.

Instead, I think we need a generation that will innovate in different ways: in terms of thinking about the social implications of what we build, in terms of the legislation and policy that we have, and in terms of how we address these major parallel crises we currently face. So I think it is a much bigger challenge that we should be preparing our six-year-olds for.

FENELLA KERNEBONE

Absolutely. A question that's come in from Martin, which might be a good one to wrap up on, Kate. "The book is brilliant and disturbing. How can we do better in the light of the huge and growing power and momentum of a few small players?"

KATE CRAWFORD

That is the biggest question of all, so thank you for giving us that question, Martin. I love that we have so many questions in the chat that we will not get to today, but I want to thank you for these really interesting and provocative notes, and also for the multiple references to robodebt and the fact that we are starting to see how these systems go wrong. That brings us to your question of how we might actually begin to contend with the systems we currently have.

My biggest concern is that over the last 15 years we have had the growth of a sector that has been barely regulated, barely taxed, and really allowed to have its run of the world. The idea of move fast and break things, which was indeed the motto of early Facebook, has I think become the ideology of big tech, and when we look at the richest companies on earth, we are really looking at technology companies that were started in the last 20 years. So it is an extraordinary shift.

In many ways we can see how large tech companies are taking on some roles we used to associate with nation states. They have extraordinary power, in many cases exceeding the power of nation states, so we have a question now, which is: what are we going to do? With the recent experience of what happened with Facebook in its negotiations with the Australian Government – where we saw news feeds switched off across the country, and we saw Australia isolated in terms of its ability to share national news with an international audience – I think we saw a moment of the velvet glove coming off the iron fist. We could see that technology companies are quite happy to play very tough politics with the infrastructures that they own.

So we have to think about what kinds of solutions we come up with, which in some cases need to be global in structure. It will be very difficult to solve how we govern large-scale AI and big tech on a purely state-by-state basis.

One of the things we face – and it has unfortunately come at a time in history when international governing bodies are at a moment of, I would say, profound weakness, having been undercut by many actors, including the Trump administration – is that this is precisely the moment when we need international governance. I am heartened to see that there are slowly growing initiatives, but we will need to do more if we are to contend with how much power is currently held in these sectors.

FENELLA KERNEBONE

Absolutely. To find out more, you can read the book and Kate's articles, and we will put a reading list up on the Sydney Ideas website.

Kate, it’s been such a delight to talk with you. Thank you, I appreciate it, and this is Sydney Ideas at the University of Sydney, my name’s Fenella Kernebone.

The speakers

Kate is a leading scholar of the social and political implications of artificial intelligence. Over her 20-year career, her work has focused on understanding large-scale data systems, machine learning and AI in the wider contexts of history, politics, labor, and the environment. She is a Research Professor of Communication and STS at USC Annenberg, a Senior Principal Researcher at MSR-NYC, and the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris. In 2021, she will be the Miegunyah Distinguished Visiting Fellow at the University of Melbourne, and has been appointed an Honorary Professor at the University of Sydney. She previously co-founded the AI Now Institute at New York University. Kate has advised policy makers in the United Nations, the Federal Trade Commission, the European Parliament, and the White House.

Her academic research has been published in journals such as Nature, New Media & Society, Science, Technology & Human Values, and Information, Communication & Society. Beyond academic journals, Kate has also written for The New York Times, The Atlantic, and Harper's Magazine, among others.

Kate's work also includes collaborative art projects and critical visual design. Her project Anatomy of an AI System with Vladan Joler — which maps the full lifecycle of the Amazon Echo — won the Beazley Design of the Year Award in 2019, and is in the permanent collection of the Museum of Modern Art in New York and the V&A Museum in London. She also collaborated with the artist Trevor Paglen to produce Training Humans at Fondazione Prada's Osservatorio in Milan — the first major exhibition of the images used to train AI systems. Their investigative essay, Excavating.ai, won the Ayrton Prize from the British Society for the History of Science.

Fenella has recently joined the University of Sydney as Head of Programming.

Prior to this appointment she was Head of Curation for TEDxSydney, where she led the programming for one of the largest TEDx events in the world. She is an in-demand MC, speaker, moderator and a noted television and radio presenter and producer. Programs have included Art Nation & Sunday Arts on ABC TV; The Movie Show on SBS TV; By Design on Radio National; and her long-running cult electronic music show, The Sound Lab on Triple J. In 2020 she worked collaboratively across many virtual events and with many organisations, including hosting the 40-minute online program to launch the Chau Chak Wing Museum at the University of Sydney. She is on the Board of the National Trust of Australia (NSW).


Event image: Book cover of Atlas of AI (Yale University Press, 2021). Image supplied. 
