
Mechatronic engineering

Gain research project experience as part of your undergraduate studies
Explore a range of mechatronic engineering research internships to complete as part of your degree during the semester break.

Last updated 27 February 2024.

List of available projects

Supervisor: Dr. Donald Dansereau

Eligibility: A basic understanding of optics and electronics. Experience with imaging, image processing and/or computer vision, together with strong programming skills, would be an asset.

Project Description: What does your robotic vacuum cleaner see, and who else has access to those images? In homes, hospitals, and secure industrial sites, the uptake of autonomous robots is limited by privacy concerns.

Working with researchers at the Australian Centre for Robotics, this project will develop novel sensing technologies to enable robots to visually understand their environments without capturing privacy-revealing images.

Building on existing work in the group, you will construct and control the first opto-electronic hardware prototype of an inherently privacy-preserving robotic vision system. 

Depending on interest and ability, there is also scope to advance the algorithms behind the hardware, making sense of the novel computational imaging device to allow robots to intelligently understand their environments.

Requirement to be on campus: Yes *dependent on government’s health advice.

Supervisors: Dr. Donald Dansereau

Eligibility: A basic understanding of optics and electronics. Experience with imaging, image processing and/or computer vision, together with strong programming skills, would be an asset.

Project Description: The escalating problem of defunct satellites and other space debris poses a growing threat to crucial spaceborne technologies, including communication infrastructure and astronomical instruments. To help address this problem, we are developing technologies that will allow us to dock with and repair satellites in orbit.

Working with researchers at the Australian Centre for Robotics, this project will develop a physical surrogate environment for developing satellite docking and repair technologies and explore novel cameras and imaging techniques to better perceive satellites in orbit.

Depending on ability and interest, there are opportunities to work on physical model construction, illumination and camera characterisation and engineering, and camera development and characterisation including development of novel perception algorithms and cameras.

Requirement to be on campus: Yes *dependent on government’s health advice.

Supervisors: Dr. Stewart Worrall; Dr. Julie Stephany Berrio Perez; Dr. Mao Shan

Eligibility: Programming skills; basics in computer vision and machine learning.

Project Description: The main objective of this project is to enhance data augmentation for semantic segmentation in images, specifically for autonomous driving applications. The project involves applying cutting-edge machine learning techniques to modify the appearance of locally annotated images into various domains. One of the core tasks is image translation, which entails training a machine learning model to transform an image from one domain to another. For instance, this could involve converting a daytime image to a night-time or rainy condition image.

By performing such data augmentation, the project aims to create a more robust and diverse dataset, which can improve the performance and generalization capabilities of semantic segmentation models used in autonomous driving scenarios. This augmentation process will expose the models to different environmental conditions, preparing them to handle various real-world situations effectively.
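As a toy illustration of the kind of domain shift involved, the sketch below fakes a day-to-night translation with a hand-written colour transform; in the actual project this role would be played by a learned image-to-image translation model (e.g. a CycleGAN-style network). All function names and parameters here are illustrative assumptions, not part of any project codebase.

```python
import numpy as np

def simulate_night(image: np.ndarray, darkness: float = 0.35) -> np.ndarray:
    """Crudely re-style a daytime RGB image (H, W, 3, uint8) toward night:
    darken globally and shift the colour balance toward blue. A learned
    translation model would replace this hand-written stand-in."""
    img = image.astype(np.float32) / 255.0
    # Darken all channels, keeping blue slightly stronger for a cool cast.
    scale = np.array([darkness, darkness, darkness * 1.4])
    night = np.clip(img * scale, 0.0, 1.0)
    return (night * 255).astype(np.uint8)

def augment_pair(image: np.ndarray, mask: np.ndarray):
    """Augmentation for semantic segmentation: the appearance changes,
    but the pixel-level labels (mask) stay exactly the same."""
    return simulate_night(image), mask.copy()
```

The key property for segmentation augmentation is visible in `augment_pair`: only the image appearance changes, so the existing annotations remain valid for the translated image.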

Requirement to be on campus: Yes *dependent on government’s health advice.

Supervisors: Dr. Stewart Worrall; Dr. Julie Stephany Berrio Perez; Dr. Mao Shan

Eligibility: Programming skills in Python or C++; basic knowledge of robotics.

Project Description: The project aims to advance digital twins for Autonomous Vehicles (AVs) by harnessing the group's combined capabilities, facilities, and equipment. Students will gather data, process it, and create an accurate model of the environment. Data collection spans a range of perception sensors, including lidar, inertial units, cameras, and GPS localisation.

Experienced researchers specialising in robotics and sensor fusion will mentor the students. Their guidance will assist in developing scalable digital twin models that represent the physical features of the local environment. Applying state-of-the-art techniques will be central to overcoming perception, localisation, and 3D reconstruction challenges. 

In addition to the technical aspects, the project will incorporate virtual reality (VR) to effectively communicate the research's potential and showcase the project's results. Ultimately, the models, algorithms, and tools developed during the internship will be publicly available.
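As a small sketch of one building block mentioned above, fusing pose estimates with lidar for 3D reconstruction, the snippet below transforms a lidar scan from the sensor frame into a global map frame using a vehicle pose. The function name and interface are illustrative assumptions, not part of any existing codebase.

```python
import numpy as np

def lidar_to_global(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map lidar points (N, 3) from the sensor frame into a global map
    frame given the vehicle pose: rotation R (3, 3) and translation t (3,),
    e.g. obtained from GPS/IMU localisation. Each point p maps to R @ p + t."""
    return points @ R.T + t
```

Accumulating clouds transformed this way over many poses is a common first step toward the kind of 3D reconstruction a digital twin is built from.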


Requirement to be on campus: Yes *dependent on government’s health advice.

Supervisors: Prof. Stefan Williams, Dr. Gideon Billings

Eligibility: Computer vision/machine learning knowledge; Python, ROS and C++ recommended.

Project Description: This project aims to develop a ROS package implementing a real-time monocular SLAM system for underwater vehicles equipped with a forward-looking camera for obstacle avoidance.

The system will combine:

  • Simultaneous Localization and Mapping (SLAM): Building a real-time feature map of the environment while fusing motion sensing (IMU, DVL, etc) to constrain the map scale.
  • Machine Learning Enabled Dense Depth Estimation: Using a transformer network to fuse monocular images with feature priors from the SLAM system to predict depth maps from the forward-looking camera, enabling obstacle detection and avoidance.
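One concrete way motion sensing can constrain the map scale, sketched minimally here (this is not the project's actual pipeline): monocular SLAM recovers camera translations only up to an unknown scale factor, while DVL velocities integrated over the same intervals give metric distances, so a least-squares fit between the two recovers the scale. Names and interfaces are assumptions for illustration.

```python
import numpy as np

def estimate_map_scale(mono_translations, dvl_velocities, dt):
    """Least-squares metric scale for a monocular SLAM trajectory.

    mono_translations: (N, 3) per-frame camera translations, up-to-scale.
    dvl_velocities:    (N, 3) DVL velocity readings over the same frames.
    dt:                time between frames (s).

    With m_i the up-to-scale distances and d_i the metric distances from
    the DVL, the scale minimising sum (d_i - s*m_i)^2 is s = (d.m)/(m.m).
    """
    m = np.linalg.norm(np.asarray(mono_translations), axis=1)
    d = np.linalg.norm(np.asarray(dvl_velocities), axis=1) * dt
    return float(np.dot(d, m) / np.dot(m, m))
```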

The goal is to build a system that will be deployed on underwater survey vehicles regularly operated by the ACFR marine group.

Project experience opportunities include:

  • Hands-on work with the underwater vehicles at the ACFR, with the possibility of joining field trials.
  • Experience implementing software for field robotic systems.
  • Experience deploying machine learning methods on the edge.

Requirement to be on campus: Yes *dependent on government’s health advice.

Supervisors: Prof. Stefan Williams, Dr. Jackson Shields

Eligibility: Control systems knowledge (e.g. AMME3500/AMME5520). ROS, Gazebo, C++, Python recommended.

Project Description: Autonomous Underwater Vehicles (AUVs) are used within the Australian Centre for Field Robotics (ACFR) for visual seafloor surveys. This involves the AUV traversing close to the seafloor over complex terrain while holding a consistent altitude of 2 metres. To improve our operations in these challenging environments, we need a better simulation of our vehicle and the environments in which it operates.

During this winter school project, your tasks will include:

  • Operating an AUV in both test-tank and ocean environments to collect data on the AUV's motion.
  • Learning dynamic models of the AUV from the collected real-world data.
  • Creating simulated versions of the AUV in Gazebo, complete with all its underwater sensors.
  • Tuning the controllers and parameters of the simulated model to match the real AUV.
  • Evaluating how well the simulated system replicates real AUV surveys.

This project offers hands-on experience that will help develop useful skills in simulation, programming and control.
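The model-learning step above can be sketched in miniature: fit a discrete first-order surge model v[k+1] = a·v[k] + b·u[k] to logged velocity and thruster data by least squares. A real AUV model is coupled and nonlinear; this toy version only illustrates the identification idea, and all names are assumptions.

```python
import numpy as np

def fit_first_order_model(v: np.ndarray, u: np.ndarray):
    """Fit v[k+1] = a*v[k] + b*u[k] to logged surge velocity v and
    thruster command u (both length N) via least squares.
    Returns the identified coefficients (a, b)."""
    X = np.column_stack([v[:-1], u[:-1]])  # regressors at step k
    y = v[1:]                              # target at step k+1
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b
```

Coefficients identified this way could seed the simulated model's drag and thrust parameters, with the tuning step closing any remaining gap against the real vehicle.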

Requirement to be on campus: Yes *dependent on government’s health advice.

Supervisors: Dr. Viorela Ila; Dr. Yiduo Wang

Eligibility: Required skills: Python programming. Desired skills: TensorFlow, PyTorch or a similar deep learning framework; knowledge of optical flow.

Project Description: Modelling and understanding the environment is crucial for any mobile robot to operate autonomously. In a dynamic scene, the motions of moving objects are a key piece of information for many robotic tasks such as navigation and path planning. Many state-of-the-art localisation and mapping solutions therefore track multiple dynamic objects simultaneously and estimate their motions; efficient detection and segmentation of dynamic objects in the robot's vision significantly improves the effectiveness of motion tracking. In autonomous driving, crossing pedestrians or cyclists, lane-splitting motorcyclists and merging traffic are all situations where modelling rapid changes in the environment is imperative.

This thesis will focus on implementing a convolutional neural network architecture that learns to detect the motions of dynamic objects in robot vision given a pair of consecutive images. Deep convolutional neural network (DCNN) based approaches have significantly improved the state of the art in both semantic segmentation and motion estimation.
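For intuition, the sketch below shows the usual way a pair of consecutive frames is packed into a single network input, alongside a naive frame-difference baseline that a learned motion-segmentation network should comfortably beat. This is an illustrative sketch under assumed names, not the architecture the project would use.

```python
import numpy as np

def make_motion_input(frame_t0: np.ndarray, frame_t1: np.ndarray) -> np.ndarray:
    """Stack two consecutive RGB frames (H, W, 3, uint8) into the
    6-channel, channels-first input (C, H, W) a motion-segmentation
    CNN would typically consume, scaled to [0, 1]."""
    pair = np.concatenate([frame_t0, frame_t1], axis=-1)  # (H, W, 6)
    return np.transpose(pair, (2, 0, 1)).astype(np.float32) / 255.0

def frame_difference_baseline(frame_t0, frame_t1, thresh=15):
    """A non-learned baseline: mark pixels whose grey-level intensity
    changed by more than `thresh` between frames as 'moving'."""
    g0 = frame_t0.mean(axis=-1)
    g1 = frame_t1.mean(axis=-1)
    return (np.abs(g1 - g0) > thresh).astype(np.uint8)
```

The frame-difference baseline fails under camera ego-motion and lighting change, which is precisely the gap a learned network is meant to close.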

Requirement to be on campus: Yes *dependent on government’s health advice.

Supervisors: Dr. Viorela Ila; Dr. Yiduo Wang

Eligibility: Required skills: C++ programming. Desired skills: ROS and ROS 2; knowledge of Open Sound Control.

Project Description: Visually impaired people understand the environment largely through audio, which makes the sonification of the visual world a meaningful and intriguing topic. Sonifying static objects has been well explored in recent years thanks to object detection techniques, but conveying motion information through audio remains a challenge.

Humans are brilliant at recognising patterns given the right information, which greatly simplifies the interpretation of audio signals. With object motions already estimated by our state-of-the-art multi-object tracking solution, the key questions are: (i) what kinematic information is necessary, and (ii) how best to translate it into sound.

This thesis will focus on designing an algorithm that sonifies 3D motion information by simulating the Doppler effect, for example changing the frequency or pitch of a sound based on the relative movement between the source and the observer. The goal is to realise a system that can be deployed in human-in-the-loop, real-world scenarios.
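The Doppler mapping described above boils down to two small formulas, sketched here with illustrative (assumed) names: the radial component of a tracked object's velocity relative to the listener, and the classic moving-source Doppler shift f' = f·c/(c − v_r), with c the speed of sound.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def radial_velocity(rel_position, rel_velocity):
    """Speed at which the object approaches the observer (positive when
    closing, negative when receding), from its relative position and
    velocity vectors: v_r = -(p . v) / |p|."""
    dist = math.sqrt(sum(p * p for p in rel_position))
    return -sum(p * v for p, v in zip(rel_position, rel_velocity)) / dist

def doppler_pitch(base_freq_hz, approach_speed_ms):
    """Frequency a stationary listener hears from a source emitting
    base_freq_hz while approaching at approach_speed_ms:
    f' = f * c / (c - v_r). Approaching sources sound higher-pitched."""
    return base_freq_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - approach_speed_ms)
```

Driving a synthesiser with f' (for instance via Open Sound Control, which the eligibility notes mention) would turn each tracked object's motion into an audible pitch glide.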

Requirement to be on campus: Yes *dependent on government’s health advice.