Will robots inherit the earth?

25 June 2018
The dimensions of AI, with New Scientist’s Alice Klein
Ahead of her appearance at the next Outside the Square event, the New Scientist reporter and University of Sydney Science graduate sits down to discuss the implications of Artificial Intelligence.
Alice Klein, Outside the Square panellist

Not just robots

Of all the areas of enquiry her role at New Scientist leads her to, artificial intelligence (AI) is of particular interest to Alice. From the outset, she’s quick to explain that AI goes beyond traditional ideas about robots.

“I don’t think anyone’s agreed on a single definition, but I think of AI as any kind of computer system that does things we consider to be ‘smart’. Whether that’s predicting what you want to watch on Netflix, recognising faces in a crowd, or operating self-driving cars.”

As a result, many current developments in AI might go unrecognised by most of us. Automation is a major benefit, Alice explains, giving the “pretty mundane” example of universities now using smart software to organise the timetables of thousands of students instead of someone doing it manually. Other advances utilise machine learning, a technique in which computer systems find patterns in data.

“There are examples in medicine where they showed a machine learning system many examples of skin moles, labelled according to which ones later turned cancerous. The computer started to see patterns and tiny details in the moles that were related to later melanoma.

“There are AI systems that are actually better than dermatologists at working out the likelihood of a mole turning into skin cancer.”
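
To make the workflow Alice describes concrete, here is a minimal sketch of that kind of supervised learning in Python using scikit-learn. The mole features, measurements and labels below are invented purely for illustration and are far simpler than the image data a real dermatology system would learn from.

```python
# A minimal sketch of the supervised-learning workflow described above.
# All features, values and labels are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: each mole is described by a few numeric
# features (diameter in mm, asymmetry score, border irregularity),
# labelled 1 if it later turned cancerous and 0 if it stayed benign.
X_train = [
    [2.1, 0.1, 0.2],
    [6.8, 0.9, 0.8],
    [3.0, 0.2, 0.1],
    [7.5, 0.8, 0.9],
]
y_train = [0, 1, 0, 1]

# The model 'sees patterns' by fitting itself to the labelled examples.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Estimate the likelihood that a new, unseen mole turns cancerous.
new_mole = [[5.9, 0.7, 0.6]]
print(model.predict_proba(new_mole)[0][1])  # probability of the 'cancerous' class
```

A real system would learn from photographs with a far larger model rather than three hand-picked numbers, but the shape is the same: train on labelled examples, then predict the likelihood for new cases.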

She goes on to describe other cases of AI being used for the benefit of society. A robot called MARIO has been built to increase social interaction for people with dementia by giving them access to the things they enjoy, such as reading materials, music, event reminders, family photos and more.

Artificial stupidity

Of course, Alice admits, there are downsides to AI.

“Sometimes,” she says, “AI just doesn’t work that well. And that can be dangerous.”

For example, in the 1990s, researchers from the University of Pittsburgh wrote a computer program designed to predict the outcomes of patients suffering from pneumonia. It analysed data from approximately 750,000 patients across 78 hospitals in 23 states and found, oddly, that patients with asthma had better outcomes.

And this, according to Alice, is the problem with machine learning. “It doesn’t tell you ‘why’. So the system might see a relationship, but it won’t tell you why that relationship exists.”

Critics of machine learning describe this phenomenon as a ‘black box’: data goes into the box and a prediction comes out, but no one knows how or why it happens. This makes it hard for researchers to evaluate and improve the system when new data becomes available.
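
As a rough illustration of that ‘black box’, the sketch below (invented data, using scikit-learn’s small neural network) trains a model whose predictions are backed by thousands of raw numeric weights that offer no human-readable ‘why’.

```python
# A toy illustration of the 'black box' problem: the model happily
# outputs predictions, but its internals are just raw numeric weights
# with no explanation attached. Data and network are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))   # 200 patients, 10 measurements each
y = (X[:, 3] > 0).astype(int)    # outcome secretly driven by feature 3 only

model = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict(X[:1]))                # data in, prediction out...
print(sum(w.size for w in model.coefs_))   # ...backed by ~3,000 opaque weights
```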

I ask Alice if they ever figured out the case from Pittsburgh with the pneumonia patients and she laughs.

“They actually found out that the algorithm was predicting that people with pneumonia and asthma did better because, if you arrive at the hospital with pneumonia and you have a history of asthma, the hospital will give you better care as you’re seen as a higher-risk patient.

“People have this idea that AI is an objective system that can make all of these great predictions, but they forget that the data it’s using can be flawed.”
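
Her asthma example is easy to reproduce in a toy simulation. In the sketch below (all figures invented), having asthma triggers more intensive care, and the care, not the asthma, improves outcomes, so the raw data makes asthma look protective.

```python
# A toy simulation of the confounding described above. Asthma patients
# get extra care; the care (not the asthma) improves survival, so the
# raw numbers make asthma look protective. All figures are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
asthma = rng.random(n) < 0.10                     # 10% of patients have asthma
intensive_care = asthma | (rng.random(n) < 0.20)  # asthma triggers extra care

# Intensive care raises survival; asthma itself slightly lowers it.
p_survive = 0.70 + 0.25 * intensive_care - 0.05 * asthma
survived = rng.random(n) < p_survive

# A naive pattern-finder sees asthma patients doing better, with no 'why'.
print(survived[asthma].mean())    # ~0.90
print(survived[~asthma].mean())   # ~0.75
```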

Can we keep up?

According to Swedish philosopher Nick Bostrom, AI will reach the technological singularity (the point at which AI becomes as smart as us) within the next 50 to 100 years. After that, it will only get smarter.

“It’s not that AI is going to become human,” says Alice. “It’s that they’re going to become smarter than humans. That, I think, is more of a worry. The fact is that we’re not making human-like systems, we’re making intelligent systems. And they have no inherent care for humans.”

“We could tell an AI to stop people from hurting each other and it might decide the best way to achieve that is to put all humans in cages.

“Some people think if we have enough regulation, we can make sure AI is contained in a box. But even if you put an AI in a big steel box, if you make it smart enough, it’s going to work out how to get out of that box.”

The fact is that as long as AI is based on data that has been influenced by humans, there will always be an inherent bias. The question now is whether we can keep up with the technology we have created.

“Throughout human history, we’ve had these big technological shake-ups. We used to exist in mainly agricultural societies, but then came the industrial revolution followed by manufacturing. Now we’ve moved towards a service-based industry. Over time, we’ve adjusted to each of those shifts. However, I think the AI revolution is happening so fast that it’s unclear whether we can adjust quickly enough.”


Alice will debate the merits and ethics of AI at the Outside the Square event, Ethical AI: Are robots our friends?, 2 August 2018 at the Old Rum Store, Chippendale. Book tickets here.

This article was authored by Theodora Chan (BA, MECO 2010; BA, HONS 2012), Co-Founder and Content Director at Pen and Pixel.

