Research Supervisor Connect

Semantic-driven Multi-modal Biomedical Data Visualisation

Summary

BMIT excels at addressing bio-inspired and other real-world challenges through core computing and information technology research in image processing and informatics, computer vision, big data fusion and analysis, visualisation and visual analytics, multimedia technology, and intelligent algorithms. Our research has numerous applications, including in the biomedical and health domains, where we have reshaped the biomedical research and digital healthcare practice landscape in several ways.

Students will join a strong research team and work closely with a multi-disciplinary supervisory team. They will be able to contribute to other related projects and build networks with other students, postdocs, clinicians and scientists. Our projects involve close collaboration with hospitals (e.g., Royal Prince Alfred, Westmead, Nepean) and industry partners (e.g., Microsoft), both in Australia and abroad, to translate algorithms into clinical trials and commercial applications; there will be opportunities for internships with our partner organisations.

Supervisors

Associate Professor Jinman Kim, Professor David Feng.

Research location

Computer Science

Program type

PhD

Synopsis

The next generation of medical imaging scanners is introducing new diagnostic capabilities that improve patient care. These medical images are multi-dimensional (3D), multi-modality (e.g., fused PET and MRI) and time varying (i.e., 3D volumes acquired over multiple time points, as in functional MRI). Our research innovates in coupling volume rendering technologies with machine learning and image processing to render realistic and detailed 3D volumes of the body.

Additional information

Topic 1. Semantic-driven biomedical image visualisation

Modern medical images are complex data entities: they have multiple dimensions and are often derived from multiple modalities acquired sequentially or simultaneously (e.g., combined PET-CT and combined PET-MR). These images usually contain multiple potential regions of interest, which are examined with different visualisation parameters for different diseases and types of applications. The current approach generally involves many trial-and-error manual parameter adjustments, which must be repeated for different patient and disease characteristics.
In this project, we will design a deep learning approach to estimate and initialise the visualisation parameters for 3D renderings of multi-modality medical imaging data. Specifically, the technique will relate the semantics of the patient's case (as described by disease stage, clinical reports, etc.) to the visualisation parameters that emphasise the relevant anatomical and functional regions of interest. This will lead to new approaches to interpreting complex medical data for diagnosis, staging, and assessing response to therapy.
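As a purely illustrative sketch, and not the project's actual method, the following Python (PyTorch) fragment shows one way such a semantic-to-parameter mapping could be framed: a clinical-report embedding is regressed onto the control points of a direct-volume-rendering transfer function. The embedding size, network shape and control-point parameterisation are all assumptions made for this example.

    import torch
    import torch.nn as nn

    class SemanticTransferFunctionNet(nn.Module):
        """Hypothetical sketch: map an embedding of the clinical report to
        direct-volume-rendering transfer function control points."""
        def __init__(self, text_dim=768, n_points=8):
            super().__init__()
            # Each control point: (intensity position, opacity, R, G, B).
            self.mlp = nn.Sequential(
                nn.Linear(text_dim, 256),
                nn.ReLU(),
                nn.Linear(256, n_points * 5),
                nn.Sigmoid(),  # keep every parameter in [0, 1]
            )
            self.n_points = n_points

        def forward(self, report_embedding):
            out = self.mlp(report_embedding)
            return out.view(-1, self.n_points, 5)

    # Usage: an embedding from a pretrained language model would stand in
    # for the random tensor; the predicted transfer function initialises
    # the renderer and is then refined interactively by the clinician.
    embedding = torch.randn(1, 768)
    transfer_function = SemanticTransferFunctionNet()(embedding)
    print(transfer_function.shape)  # torch.Size([1, 8, 5])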
Topic 2. Mixed Reality for biomedical image recognition and visualisation

The objective of this project is to innovate in mixed reality experiences for visualising a patient's medical record history (spatial + temporal). Microsoft's HoloLens device will be used to display a complex visualisation of all the available data, including 2D/3D renderings of images (volumetric, multi-modal images), annotations (e.g., regions of interest (ROIs) delineated on the images), and graphical charts of numerical and text information (e.g., patient history and pathology). The centrepiece of the patient record visualisation is the medical imaging data, which are multi-dimensional (3D spatial + multi-modal + temporal sequences) and are viewed alongside various supporting data that need to be interactively manipulated. All visualisation computing will be performed on a remote GPU, with each HoloLens acting as a thin client. This ensures the necessary computing resources are available, while introducing concurrency challenges that must be addressed.
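To make the thin-client idea concrete, here is a minimal Python sketch assuming a simple length-prefixed TCP exchange in which the headset streams its head pose and receives rendered frames back; the pose format, port number and placeholder renderer are all hypothetical, and a real deployment would stream encoded video rather than raw bytes.

    import socket
    import struct

    # Assumed wire format: 6-DoF head pose as position + quaternion.
    POSE_FMT = "<7f"  # x, y, z, qx, qy, qz, qw
    POSE_SIZE = struct.calcsize(POSE_FMT)

    def render_frame(pose):
        """Stand-in for the volume renderer running on the remote GPU."""
        return b"\x00" * 64  # placeholder pixel payload

    def serve(host="0.0.0.0", port=9099):
        with socket.create_server((host, port)) as server:
            conn, _addr = server.accept()
            with conn:
                while True:
                    raw = conn.recv(POSE_SIZE, socket.MSG_WAITALL)
                    if len(raw) < POSE_SIZE:
                        break  # client disconnected
                    pose = struct.unpack(POSE_FMT, raw)
                    frame = render_frame(pose)
                    # Length-prefix each frame so the client can
                    # re-frame the byte stream.
                    conn.sendall(struct.pack("<I", len(frame)) + frame)

    if __name__ == "__main__":
        serve()

Because each headset holds no rendering state in this design, serving several HoloLens clients concurrently becomes a server-side scheduling problem, which is the concurrency challenge noted above.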

Opportunity ID

The opportunity ID for this research opportunity is 2424.
