Dr Donald Dansereau’s work in developing the world’s first single-lens wide field-of-view (FOV) light field capture camera may have huge implications for drone landing, delivery robots and self-driving cars.
During his time as a Doctor of Philosophy student in the School of Aerospace, Mechanical and Mechatronic Engineering, Don Dansereau could be found at the bottom of Lake Geneva in a Russian submersible, or flying an autonomous robot over the Great Barrier Reef, developing better ways to image the underwater world.
His thesis on light field imaging, developed through the Australian Centre for Field Robotics (ACFR), aimed to explore the limits of four-dimensional (4D) imagery and how it could transform robotics.
“Having a strong hands-on component in the marine robotics group in the Faculty of Engineering and IT at the University of Sydney gave me a good understanding of the sorts of problems faced by robots and their handlers in challenging environments,” said Dr Dansereau.
“During my degree I constructed a software toolbox to solve the practical problems of working with light fields. Now in widespread use, it formed the backbone for much of the software developed during my recent work.”
Now at Stanford University, Don is part of a team including Gordon Wetzstein, as well as Joseph Ford and Glenn Schuster at UCSD, that has developed the world’s first camera capable of wide field-of-view (FOV) light field capture with a single lens.
This technology removes the need for multiple cameras to capture 4D light fields. Because Don’s design uses a single lens, a very compact camera with a single centre of projection can be built, keeping processing simple.
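To illustrate what a 4D light field makes possible, the sketch below shows classic shift-and-sum refocusing in NumPy. This is a standard light field technique, not the team's camera pipeline; the array layout `(U, V, S, T)` (one S × T sub-aperture image per lens position) and the `slope` parameter are illustrative assumptions.

```python
import numpy as np

def refocus(lf, slope):
    """Synthetically refocus a 4D light field by shift-and-sum.

    lf    : array of shape (U, V, S, T); one S x T sub-aperture view
            per (u, v) position across the aperture (layout assumed here)
    slope : pixel shift per unit of aperture offset; selects the focal plane
    """
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # aperture centre, then average: scene points at the chosen
            # depth line up and appear sharp, others blur out.
            du = int(round(slope * (u - (U - 1) / 2)))
            dv = int(round(slope * (v - (V - 1) / 2)))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Sweeping `slope` refocuses the image after capture, one of the abilities that makes light field cameras attractive for robots working at close range.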
“A key technology in making this happen is the monocentric lens, which uses concentric glass spheres to capture images over a huge field of view,” he said.
“In fact, these lenses could have a 360-degree field of view if we didn't need to install sensors to capture the light.”
A wide field of view is crucial in robotics, and light field capture simplifies tasks where three-dimensional (3D) scene motion would normally complicate perception.
Combining the two, such a camera is ideal for close-up interaction with complex scenes where time and power are limited. Accordingly, drone landing, delivery robots, self-driving cars and related forms of autonomy become much easier with a wide-FOV light field camera.
“In 10–15 years, it seems possible that more cameras will be manufactured for robots than for humans, and it makes a lot of sense to ask, ‘what's the best camera for a robot?’” Don says.
“I believe light field cameras and other computational imaging technologies will allow us to tailor cameras to specific robotics applications, allowing greater levels of autonomy and reliability even in challenging conditions.”
And in the context of the debate over humans competing with robots for future jobs, Don offers a considered opinion.
“In 1870, 70-hour work weeks were the norm in the USA and Europe, and this has since fallen to between 35 and 40 hours. Over this period, many more new jobs have been created than old jobs lost, and I think it's possible this trend will continue,” he suggests.
“It’s my hope that robotics and automation will continue to increase the efficiency with which we accomplish our goals, lowering the cost of living while decreasing human workload.”