This work will develop the machine learning techniques and principles needed for robots to automatically interpret and benefit from a broad range of emerging imaging technologies.
Recent imaging advances have yielded super-human perceptual capabilities such as imaging around corners, recording a person's pulse from subtle changes in skin colour during a heartbeat, and directly imaging a pulse of light as it propagates through a scene. These technologies do not see the world the way a conventional camera does, and making use of them in robotics raises important new challenges.
In this project you will extend current ideas in machine learning to bridge the gap between robotics and an expanding array of exciting new imaging technologies. Potential approaches include unsupervised and semi-supervised learning, active autonomous data collection, online learning, and new neural processing elements and architectures. Imaging technologies might include solid-state LiDAR, single-photon sensors, transient/femtosecond imaging, light field imaging, imaging around corners, and event-based dynamic vision sensors. Applications arise wherever robots encounter perceptual challenges, including all-weather autonomous driving, drone flight, underwater survey, human-robot interaction, and locomotion on challenging terrain.
Working within the Australian Centre for Field Robotics (ACFR), you will have access to state-of-the-art robots, facilities, dedicated technical staff, and mentorship through this world-class research centre. The ACFR undertakes major field robotics programs in autonomous driving, flight, agriculture, and underwater survey, providing rich opportunities for deploying and validating novel perception systems.
The ID for this research opportunity is 2631.