The demand for high-quality three-dimensional (3D) virtual models of complex physical objects and environments is growing in a wide range of robotics and autonomous systems applications.
Estimation techniques such as simultaneous localisation and mapping (SLAM) and structure from motion (SfM) are core technologies for obtaining 3D models of the world.
Efficient methods for reconstructing the 3D world from visual and 3D sensing data are finding application in household robotics, drone technology, self-driving cars, and virtual and augmented reality.
However, real applications are complex, and practical solutions must be robust, scalable, and able to account for dynamic, changing environments.
Our aim is to develop mapping and scene-understanding techniques that address these problems and produce the efficient representations that state-of-the-art robotics and computer vision applications require in dynamic, real-world environments.
Our experts: Professor Eduardo Nebot, Dr Viorela Ila, Dr Stewart Worrall
Our collaborator: Dr Ravi Garg (University of Adelaide)
Our project aims to develop a novel motion segmentation framework that maintains continuous motion tracking across multiple image frames. The proposed method is designed to overcome degenerate motions (such as an object translating along the camera's viewing direction) and to handle the partial occlusions present in real-life scenarios.
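Why is translation along the viewing direction degenerate? When an object moves parallel to the camera's own translation, each of its points stays inside its epipolar plane, so the standard two-view epipolar check cannot distinguish it from a static point. The NumPy sketch below is our own illustration of this effect (the camera motion, point position and object motions are made-up values, not project data):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix, so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def epipolar_error(E, p_cam1, p_cam2):
    """Algebraic epipolar residual x2^T E x1 in normalised image coordinates."""
    x1 = p_cam1 / p_cam1[2]   # homogeneous ray observed in frame 1
    x2 = p_cam2 / p_cam2[2]   # homogeneous ray observed in frame 2
    return float(x2 @ E @ x1)

# Camera drives forward along its optical axis between frames: R = I, t = e_z.
t = np.array([0.0, 0.0, 1.0])
E = skew(t)                                  # essential matrix E = [t]_x R

p = np.array([0.5, 0.2, 5.0])                # object point, camera-1 frame
fwd = np.array([0.0, 0.0, 0.8])              # object motion along the view direction
lat = np.array([0.3, 0.0, 0.0])              # lateral object motion

print(epipolar_error(E, p, p - t))           #  0.0   static point
print(epipolar_error(E, p, (p + fwd) - t))   #  0.0   degenerate: moves, yet passes the check
print(epipolar_error(E, p, (p + lat) - t))   # -0.003 detected as moving
```

The first two residuals are exactly zero: a point that follows the camera is indistinguishable from a static one under two-view geometry, which is why the framework tracks motion continuously across multiple frames rather than relying on pairwise checks.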
Our goal is to integrate the proposed motion segmentation techniques into a dynamic SLAM approach and build a dynamic representation of the environment that captures both its static structure and its moving objects.
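As a rough sketch of what such an integration could look like (our own illustration; the project does not prescribe a particular back end or formulation), the snippet below builds a small dynamic pose graph with the GTSAM library. Camera poses are chained by odometry factors, a tracked dynamic object gets its own pose chain linked by motion factors, and the motion-segmented observations tie object poses to camera poses. All keys, noise levels and motions are hypothetical placeholders:

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()
noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.05))  # placeholder 6-DoF noise

def X(k): return gtsam.symbol('x', k)   # camera pose at time k
def O(k): return gtsam.symbol('o', k)   # dynamic-object pose at time k

cam_step = gtsam.Pose3(gtsam.Rot3(), np.array([1.0, 0.0, 0.0]))  # odometry per frame
obj_step = gtsam.Pose3(gtsam.Rot3(), np.array([0.5, 0.0, 0.0]))  # object motion per frame

graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), noise))    # anchor the first camera

for k in range(3):
    initial.insert(X(k), gtsam.Pose3(gtsam.Rot3(), np.array([1.0 * k, 0.0, 0.0])))
    initial.insert(O(k), gtsam.Pose3(gtsam.Rot3(), np.array([5.0 + 0.5 * k, 0.0, 0.0])))
    # Relative camera->object measurement, as a motion-segmentation front end would supply.
    rel = initial.atPose3(X(k)).between(initial.atPose3(O(k)))
    graph.add(gtsam.BetweenFactorPose3(X(k), O(k), rel, noise))
    if k > 0:
        graph.add(gtsam.BetweenFactorPose3(X(k - 1), X(k), cam_step, noise))  # camera odometry
        graph.add(gtsam.BetweenFactorPose3(O(k - 1), O(k), obj_step, noise))  # object motion model

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(O(2)).translation())   # refined pose of the moving object at frame 2
```

Jointly estimating the object's pose chain alongside the camera trajectory lets the optimiser smooth noisy per-frame object observations with a motion model, one way a dynamic map can treat moving objects as first-class state rather than as outliers.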