Engineer extraordinaire, Professor Salah Sukkarieh, tackles what it means to be human in the wake of artificial intelligence.
I’ve been at the University since my undergraduate years. I joined Sydney to study Mechatronics Engineering, which at the time was a new offering. The program matched my interest in bringing together software, mechanical and electronics engineering in a systematic way.
I’ve always been fascinated by the beauty and order of the natural world – from astronomy through to biology. In particular, I’m interested in how technology can be used to help us understand the world.
About 10 years ago, I became very aware of the state of the environment and the various issues our scientists, field experts, and farmers were facing, and that’s when it all began. I started to look at how robotic aircraft could be used to undertake environmental monitoring and capture data that wasn’t possible before. This included machine learning algorithms to detect invasive species within that data, and how robotic ground platforms could be used to assist farmers with their day-to-day tasks such as chemical-free weeding and pest inspection.
Robotics, and automation in general, is a disruptive technology – it creates new industries by disrupting the old. There are extensive reports predicting that the labour force across a broad spectrum of industries will not exist in its current form within 10 years: either morphed into new activities that make heavy use of the technology, or completely replaced. At the same time, we are seeing how new technologies are helping to grow industries such as education, agriculture and manufacturing. What is catching us by surprise is how rapidly the technology has advanced over the last 10 years, which has not given us time to reflect on its impact – hence the constant stream of articles, each reinforcing or contradicting the one before it.
What it means to be human (from which I am still unlearning and relearning). Each advancement in robotics comes closer to matching some human capability – whether it’s greater dexterity in operations, faster decision making under uncertainty, or a deeper ability to perceive and understand the surrounding environment. We set a limit on robotic capability only to see it broken again. The things robots (maybe) can’t do are deciding between right and wrong, understanding when compassion is required, behaving ethically, and so on. However, what if we find that all of those human traits are embedded or taught instructions? Can they then be codified? Can a robot ultimately behave more ethically than a human because of its superior sensory and computational capability?
The world is becoming more complex and requires us to see the challenges we face from multiple viewpoints, from different perspectives, and from various disciplines at the same time. Simply knowing how to communicate with people in these different groups requires constant reflection on your own mental framework, and on how to set it aside so you can listen to and understand other mental frameworks. Because our world is rapidly changing and, at the same time, less robust (due to our many attempts to optimise processes and systems), we need to constantly unlearn, loosening our own mental frames in order to learn new things. Einstein’s quote comes to mind: “We cannot solve our problems with the same thinking we used when we created them.”
Thank you. I was both pleased and pleasantly surprised. It was recognition of the great work of the whole team.
I like relaxing, walking, exploring, reading, spending time with the family, video games, Marvel videos and YouTube surfing.