The advancement of mobile information technologies, coupled with connected sensors, actuators, and the Internet of Things (IoT), holds promise for the development of smart infrastructure and services in future cities. These advancements are leading to the emergence of autonomous vehicles (AVs) or self-driving cars, sparking speculation about how these technologies could benefit future transportation by creating safer and less congested roads. Within this future transport landscape, autonomous lockers could navigate footpaths alongside or among pedestrians, fulfilling their role as efficient parcel delivery systems. Meanwhile, autonomous shopping trolleys could independently move around shopping centres, returning to their designated locations after shoppers have finished with them. This harmonious interaction between humans and mobile robots offers an enticing vision for future cities.
However, alongside the excitement and promise, the vision of a smart city with AVs in the future (perhaps not so distant) raises questions regarding how transportation systems consisting of a mixture of humans, human-driven vehicles, and mobile robots will be managed, particularly in spaces where their paths cross.
The interactions between AVs and vulnerable road users, such as pedestrians and cyclists, in "shared spaces" are of particular concern. Shared spaces enjoyed a period of popularity in many cities as a way of both making better use of scarce street space and taming drivers by forcing them to slow to little more than walking speed. While significant progress has been made in autonomous vehicle technology, substantial development is still required before these vehicles can merge seamlessly into regular traffic.
As self-driving vehicles will coexist with vulnerable users on many roads, and not just in shared spaces, there is a need for a social consensus as to which vehicles should have priority when paths cross and under what circumstances. This becomes more challenging, especially in the central areas of cities with prevalent shared spaces, because of the density of vehicles, pedestrians, and cyclists. AVs are carefully designed to operate flawlessly, both by avoiding mistakes themselves and compensating for the occasional errors and misjudgments made by fallible humans. But what happens when humans realise they can exploit this programmed reliability by deliberately stepping in front of self-driving cars, causing traffic to grind to a halt? In a densely populated urban landscape, this realisation could lead to safety-aware AVs getting trapped in gridlock while humans enjoy unfettered movement.
Alternatively, AVs may learn by experience that they can intimidate and dominate vulnerable road users and take priority where their paths cross. There has been much concern in the media of late about the threat of AI, and AVs learning to behave aggressively in the presence of vulnerable road users could be one manifestation of this. Any form of aggression raises profound ethical and moral issues, which can only be resolved by a clear understanding of what constitutes "acceptable behaviour." Eventually, "acceptable behaviour" will need to be codified into new rules of the road. Perhaps one way to ensure good behaviour from autonomous vehicles is to subject them to a driving test, just as we do with human drivers, as is being investigated in the current European Horizon Research Project "i4driving".
Addressing these multifaceted behavioural factors presents a significant challenge to the successful integration of autonomous vehicles into our transportation systems. Understanding the complex dynamics between humans and self-driving technology will be instrumental to establishing a harmonious coexistence on roads. Neglecting these crucial aspects puts autonomous vehicles at risk of eventual exclusion from areas where they would come into contact with vulnerable road users, preventing society from benefiting from their full potential.
In current research funded by the Australian Research Council at Sydney University, the issue of how to tame the behaviour of AVs through the concept of "micro pricing" is being investigated. The idea is that AVs should be programmed to take into account the costs imposed on others when deciding whether or not to yield if paths cross, without of course compromising safety. For this to work in practice, however, there will be a need for a social consensus as to what should take priority, and this social consensus will need to be incorporated into the rules of the road. Ultimately, priority is a political decision, but one basis for assigning priority, setting the emergency services aside, might be the head count. For example, an AV carrying (non-urgent) freight might be expected to yield to a pedestrian whereas a pedestrian might be expected to yield to a driverless bus carrying many passengers. In the realm of autonomous vehicles, the challenge of taming traffic invites us to consider a fundamental question: How do we design AI systems that not only navigate roads but also integrate seamlessly into the interconnected transportation ecosystem and societal norms?
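To make the head-count idea above concrete, here is a minimal sketch in Python of how such a yield decision might be framed. This is purely illustrative: the function names, the `RoadUser` structure, and the rule that the party carrying fewer people gives way are hypothetical assumptions for the sake of the example, not the actual design of the research project, and any real system would weigh many more factors than a simple head count.

```python
# Illustrative sketch only: a toy yield-decision rule inspired by the
# "micro pricing" idea discussed above. All names and the head-count
# heuristic are hypothetical assumptions, not the project's design.

from dataclasses import dataclass

@dataclass
class RoadUser:
    kind: str        # e.g. "pedestrian", "freight_av", "driverless_bus"
    occupants: int   # people on board (0 for unattended freight)
    is_emergency: bool = False

def should_yield(av: RoadUser, other: RoadUser) -> bool:
    """Decide whether the AV yields when paths cross.

    Emergency services are set aside, as in the text: they always take
    priority. Otherwise, in this toy model, the party imposing the
    smaller delay cost on others (approximated by head count) gives way.
    """
    if other.is_emergency:
        return True
    if av.is_emergency:
        return False
    # Head-count heuristic: the vehicle carrying fewer people yields.
    return av.occupants <= other.occupants

# Worked example matching the text: a freight AV yields to a pedestrian,
# but a pedestrian would be expected to yield to a full driverless bus.
freight = RoadUser("freight_av", occupants=0)
pedestrian = RoadUser("pedestrian", occupants=1)
bus = RoadUser("driverless_bus", occupants=30)

print(should_yield(freight, pedestrian))  # True
print(should_yield(bus, pedestrian))      # False
```

Even this toy version makes the open problem visible: the comparison on the final line encodes a political choice about whose time counts for more, which is exactly the social consensus the text argues must be settled before such rules can be written into the rules of the road.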