
Can machines imitate human intelligence?

14 February 2020
By Susan Wyndham, SSSHARC Journalist in Residence

As rapid technological advances bring artificial intelligence into our daily life, the public imagination is still dominated by a Romantic fear that machines will take over the world.

People commonly blame the “monster” in Mary Shelley’s Frankenstein and the talking computer HAL in Arthur C. Clarke’s 2001: A Space Odyssey for their destructive rampages. But responsibility and control are always in the hands of human beings, argues Mark Coeckelbergh, Professor of Philosophy of Media and Technology at the University of Vienna.

He offers reassurance that we can set limits on the power given to artificial intelligence, and a warning that governments and industry, designers and users must all apply strict ethical standards to their work.

Belgian-born Professor Coeckelbergh spent a week at the University of Sydney in November and December as a Sydney Social Sciences and Humanities Advanced Research Centre (SSSHARC) Visiting Fellow. He spoke at research events in the Department of Media and Communications and the Sydney Institute for Robotics and Intelligent Systems; taught a masterclass on his recent book, Moved by Machines; and gave a Sydney Ideas public talk titled “Wild AI and Tame Humans”, attended by more than 250 people, whose questions ranged from education and the loss of jobs to military advances and sex robots.

Wild AI and Tame Humans Sydney Ideas lecture. Photo credit: Nicola Bailey.

His focus is on the ethics of artificial intelligence, and on formulating policies to regulate its design and application. He has just published a book, AI Ethics, with The MIT Press.

Coeckelbergh is a member of the High-Level Expert Group on Artificial Intelligence advising the European Commission, and acts as both a critic and an adviser to industry on building practical regulations into the first stages of developing new technologies.

“As an academic you often find yourself pushed into the role of defending the common good against particular private interests, because who else is going to do it, except maybe one or two NGOs?” he said in an interview for SSSHARC.

“On the other hand, we’re creating guidelines as a way of saying ‘Here is a tool that you as a company can use, that you as an engineer, a computer scientist can use to think about the ethical issues of your technology’. Without this kind of constructive approach we’re not really moving much, because people are going to develop this stuff anyway and sometimes it’s better for us to also be there and try to give ethical influence.”

Artificial intelligence is already in your pocket

Philosophy of technology emerged as an academic field after World War II demonstrated how destructive technology could be. As a “second-generation” philosopher of technology, Coeckelbergh has done specific research in robotics and recently published Introduction to Philosophy of Technology (OUP), the first textbook to update the subject with a decade of new developments.

John McCarthy, an American computer scientist, coined the term “artificial intelligence” in 1955 in his proposal for a landmark workshop held at Dartmouth College the following year. Artificial intelligence now has ever-expanding applications, from the everyday use of mobile phones and social media to robots in healthcare and industry, self-driving cars and autonomous weapons. Most worrying to Coeckelbergh is the misuse of facial recognition, especially by governments for political purposes.

“There’s no such thing as general AI, an AI which can completely mimic the cognitive capacities of human beings,” he said. “The project of AI as it was formulated in the Dartmouth workshop was to imitate human intelligence. Well, this project has not succeeded yet. What has succeeded, though, is that artificial intelligence helps us to do specific tasks and artificial intelligence in that sense is already there, is already behind the apps in your phone and is already in your pocket.”

He lists the general ethical principles that can be adapted to specific uses: privacy and data protection, safety, moral and legal responsibility, transparency and explainability to makers and users, and avoidance of built-in bias.

Coeckelbergh is a member of a robotics council established by the Austrian Ministry for Transport, Innovation and Technology. In his Sydney Ideas talk he used the example of self-driving cars to examine the complex problem of responsibility. In 2018 a pedestrian died after being hit by a self-driving car in the US State of Arizona. The car cannot be held responsible, he said, because it does not operate with free will or awareness. So humans are responsible, but who exactly?

Mark Coeckelbergh speaking at the Sydney Ideas lecture. Photo credit: Nicola Bailey.

“There is not really one human, there are all kinds of potential parties,” he said. “It can be the car maker who failed to make software that recognised the pedestrian; it can be the pedestrian if they didn’t cross at the crosswalk; it could be the State of Arizona that did not put enough regulation in place to force the company to take certain actions; and it could be the operator who did not react in time.”

Coeckelbergh believes the same level of safety procedures and accident investigation should apply to cars as to aircraft. “Some people think the choice is between using AI for full automation or not using AI at all, but we can do semi-automation and all sorts of steps in between. As a society we can decide fully self-driving cars are not acceptable yet for all kinds of reasons, including that they’re not technically good enough.”

AI ethics is technical and political

He questions whether the development of artificial intelligence is a priority compared with urgent issues such as climate change. Technology production may add to environmental and social problems, but the ability of AI to see patterns in large amounts of data may also help with solutions.

Coeckelbergh draws on the influence of philosophers such as Aristotle, Heidegger, Wittgenstein and John Dewey, and on the tradition of American pragmatism.

“It’s not enough to say we have these values of freedom, justice and so on. Start with the social problems we have, start with people, don’t be stuck in theory. Questions of AI ethics are always part of bigger societal questions and not only a technical question but also political.”

Behind his interdisciplinary approach to real-world problems is a playful mind also interested in the arts. Among his many books are Using Words and Things (2015), in which he likens the use of technology to the use of language as a tool for achieving our aims; New Romantic Cyborgs (2017), which explores his theory that 19th-century images of monsters and automata shape our fascination with, and fear of, technology; and Moved by Machines (2019), which uses the metaphor of performance to show how technology choreographs our bodily movements, with the designer as a stage magician who creates illusions to beguile and sometimes deceive us.

It’s always important to remember, he said, that “technology is very human, because we make it”.

Mark Coeckelbergh was a SSSHARC Visiting Fellow in early December 2019. His visit was funded by SSSHARC, the Sydney Institute for Robotics and Intelligent Systems, and the Socio-Tech Futures Lab.

This article is part of the 2020 SSSHARC series on how the humanities and social sciences can help us see the world in new ways.

Susan Wyndham, inaugural SSSHARC Journalist in Residence. Photo credit: Nicola Bailey.
