Sensing and perception

Enabling next-generation autonomy through better sensing and perception
We're changing how machines sense and understand the world to enable the next generation of robotics and intelligent systems.

Autonomous systems are increasingly changing how we work and live. As they enter the challenging, dynamic environments of our roadways, hospitals, construction sites, and farms, these systems must contend with harsh conditions, platform limitations, and complex human-driven decision making.

Our aim is to develop sensing and perception systems that address the demanding requirements of this next generation of robots and intelligent systems.

Our research spans the design of new sensing hardware, the perception algorithms needed to interpret its output, and the practical tools required to deploy both. Sensing and perception also inform higher-level tasks like mapping and scene understanding, planning, and control.

We’re exploring sensing and perception at the system level, exposing new science at the intersection of these domains, and delivering more capable autonomous systems.

Our research

Robotic imaging

Our expert: Dr Donald Dansereau

Our collaborators: Queensland University of Technology, Stanford University, University of California San Diego

Robotic imaging explores how new imaging technologies can help robots see and do. We’re developing new devices and algorithms that allow robots to operate under a broader range of conditions and with greater levels of autonomy. We’re establishing the fundamental science of sensing from photons to actions, and delivering the sensing and perception tools needed to enable the next generation of robotic systems.

Radar perception

Our expert: Dr Graham Booker

Our collaborators: Defence Science and Technology Group

This program is developing both the hardware and algorithmic tools needed to make use of radar, an increasingly important sensing modality. By allowing robust perception where other sensors fail, including in heavy fog, rain, and cloud, radar enables robotic autonomy in otherwise inaccessible domains.

Autonomous vehicles in urban environments

Our experts: Professor Eduardo Nebot, Dr James Ward, Dr Stewart Worrall

Our collaborators: ibeo Automotive, Nvidia

Our project aims to develop sensing and navigation techniques for cluttered urban environments. It emphasises the challenges of driving around people and crowds, including the human factors of intentionality, predictability, and transparency. The work includes deploying a fleet of ground vehicles for data collection and testing, among them a custom-built lightweight autonomous car.

Underwater imaging

Our experts: Professor Stefan Williams, Dr Mitch Bryson, Dr Donald Dansereau, Dr Oscar Pizarro

Our collaborators: Stanford University, University of California San Diego

We're building a photometric model of the underwater image formation process and using it to drive the design of novel underwater imaging systems. These systems exploit rich information recorded across multiple apertures to achieve otherwise impossible tasks. By combining light field capture with appropriate filtering, we've already demonstrated that a plenoptic camera can provide higher-quality imagery than a traditional camera in low light and murky water. We're now extending these models to more broadly consider the design of custom imaging systems for challenging underwater conditions.
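
To make the idea concrete, here is a minimal sketch of one common simplified photometric model of underwater image formation, in which each colour channel of the scene radiance is attenuated exponentially with range and progressively replaced by veiling light (backscatter). The model, function name, and coefficients below are illustrative assumptions, not the exact formulation used in this project.

```python
import numpy as np

def underwater_image(scene, depth, beta, veiling):
    """Simplified underwater image formation (illustrative sketch):

        I = J * exp(-beta * d) + B * (1 - exp(-beta * d))

    scene   -- HxWx3 scene radiance J, values in [0, 1]
    depth   -- HxW range d to each pixel, in metres
    beta    -- per-channel attenuation coefficients, in 1/m
    veiling -- per-channel veiling light (backscatter) colour B
    """
    transmission = np.exp(-depth[..., None] * beta)  # HxWx3
    return scene * transmission + veiling * (1.0 - transmission)

# A mid-grey target 3 m away in water that absorbs red fastest:
scene = np.full((2, 2, 3), 0.5)
depth = np.full((2, 2), 3.0)
beta = np.array([0.6, 0.25, 0.15])      # hypothetical R/G/B attenuation
veiling = np.array([0.05, 0.25, 0.35])  # hypothetical bluish backscatter
print(underwater_image(scene, depth, beta, veiling)[0, 0])
# Red is heavily attenuated while blue backscatter dominates, which is
# why distant underwater scenes look blue-green and low in contrast.
```

Capturing many viewpoints of the same scene, as a plenoptic camera does, gives the filtering step additional structure with which to separate the attenuated signal from the backscatter term.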

Airborne sensing

Our expert: Dr Mitch Bryson

Our collaborator: Forest and Wood Products Australia

We’re developing new methods for drones and UAVs to benefit from the rich, complementary information available through hyperspectral imaging, thermal imaging, and 3D laser scanning. Typical tasks include mapping, 3D reconstruction, object detection, and classification. Our research is enabling the next generation of aerial robots to deliver critical information in applications such as forestry and environmental science.