Disease Map - Big Data-driven modelling and derivation of diseases and treatment response
Summary
BMIT excels at addressing bio-inspired and other real-world challenges through core computing and information technology research in image processing and informatics, computer vision, big data fusion and analysis, visualisation and visual analytics, multimedia technology and intelligent algorithms. Our research has numerous applications, including in the biomedical and health domains, where we have reshaped the biomedical research and digital healthcare practice landscape in several ways. Students will join a strong research team and work closely with a multi-disciplinary supervisory team. They will be able to contribute to other related projects and build networks with other students, postdocs, clinicians and scientists. Our projects involve close collaboration with hospitals (e.g., Royal Prince Alfred, Westmead, Nepean) and industry partners (e.g., Microsoft) in Australia and abroad to translate algorithms into clinical trials and commercial applications; there will be opportunities for internships with our partner organisations.
Supervisor(s)
Professor David Feng, Associate Professor Jinman Kim
Research Location
Program Type
PhD
Synopsis
One in four people will be affected by cancer in their lifetime. Our research aims to produce computationally derived cancer disease maps that extract and quantify important disease characteristics from a very large biomedical image data repository. The outcome will vastly improve personalised diagnosis and treatment of these cancers by providing new insights into how some cancers spread and how they are unique to individuals.
Additional Information
Topic 1. Modelling tumour growth and spread in PET-CT imaging data
PET-CT is regarded as the imaging modality of choice for the evaluation, staging and assessment of treatment response in most cancers. It is also common for PET-CT scans to be acquired at intervals during treatment to monitor the patient's response to therapy, e.g., whether the cancer is shrinking, growing or spreading to other sites. In diseases such as lymphoma, there can be dozens or hundreds of sites of disease, some of which may change independently of other sites during treatment (e.g., some sites may grow while others shrink). Current techniques quantify these changes either by reporting on the disease burden as a whole or by manually analysing each site, which becomes infeasible as the number of disease sites increases.
In this project, we will derive a new deep learning technique for modelling changes across multiple disease sites by integrating convolutional neural networks (for analysing image data) with recurrent neural networks (for analysing temporal information). Ultimately, this will provide additional information to physicians when assessing patient response to therapy.
Topic 2. Functional structure detection in PET-CT imaging data
Sites of disease (abnormalities) in PET-CT usually comprise regions of high tracer uptake ("hot spots") together with other visual characteristics such as shape, volume and localisation. Existing methods for detecting abnormalities rely on modelling the characteristics of these abnormalities; however, this is challenging due to inconsistent image-omic (visual) features, varying anatomical localisation, and the similarity to some normal structures that also exhibit high uptake.
In this project, we aim to develop a new approach that detects abnormalities in a reverse manner, by filtering out (removing) the normal, known structures that occur in the human body. We will pioneer state-of-the-art deep learning algorithms to iteratively filter out known structures, leaving abnormal structures as the output. This could significantly improve the segmentation and classification performance of existing methods and potentially increase physicians' confidence in diagnosis in a clinical environment.
Topic 3. Robust segmentation and classification of multi-modal medical imaging data
Deep learning methods based on convolutional neural networks (CNNs) have recently achieved great success in image classification, object detection and segmentation. This success is primarily attributed to the capability of CNNs to learn image feature representations that carry a high level of semantic meaning, and many investigators have therefore adapted deep learning methods to medical image segmentation and classification problems. However, annotated medical image training data are comparatively scarce due to the cost and complexity of manually annotating medical images. Without sufficient training data to cover all variations (e.g., lesions from different patients can differ greatly in size, shape and texture), deep learning methods cannot provide accurate results.
In this project, we will derive a new approach to train an accurate deep learning model for medical images with limited data. More specifically, we will develop a deep learning-based data augmentation approach to derive additional information and features that can boost the training process. Ultimately, this project could change the existing way of training medical imaging deep models and minimise the cost of building training datasets.
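To illustrate the kind of data augmentation Topic 3 refers to, the sketch below applies random geometric and intensity transforms to a small 2D image array to produce extra training variants. This is a minimal, generic example using NumPy; the `augment` function and its parameter ranges are illustrative assumptions, not the project's actual (deep learning-based) augmentation method.

```python
import numpy as np

def augment(image, rng):
    """Return a randomly transformed copy of a 2D image array.

    Illustrative only: random flips, 90-degree rotations and mild
    intensity scaling. Real medical-image augmentation would also use
    elastic deformations and modality-specific intensity models.
    """
    if rng.random() < 0.5:
        image = np.fliplr(image)                    # horizontal flip
    if rng.random() < 0.5:
        image = np.flipud(image)                    # vertical flip
    image = np.rot90(image, k=rng.integers(0, 4))   # random 90-degree rotation
    scale = rng.uniform(0.9, 1.1)                   # mild intensity jitter
    return image * scale

rng = np.random.default_rng(0)
scan = np.arange(16, dtype=float).reshape(4, 4)     # stand-in for a PET slice
batch = [augment(scan, rng) for _ in range(8)]      # 8 augmented variants
```

Each variant preserves the image shape while perturbing orientation and intensity, which is the basic idea behind enlarging a small annotated training set without new manual annotation.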
Want to find out more?
Contact us to find out what’s involved in applying for a PhD (domestic and international students).
Contact Research Expert to find out more about participating in this opportunity.
Browse for other opportunities within Computer Science.
Keywords
medical image analysis, deep learning, machine learning, image processing, image modelling, computer-aided diagnosis
Opportunity ID
The opportunity ID for this research opportunity is: 2425
Other opportunities with Professor David Feng
- Advanced computer modelling of biological systems using insight knowledge
- Discovery of new image-derived features for computer-aided diagnosis
- Automated 3-Dimensional Biomedical Registration for Whole-body Images from Combined PET/CT Scanners
- Automatic Registration for 3D Whole-body Images from Combined PET/CT Scanners
- Deformable Registration for Temporal Lung Volumes
- Kinetic Characterization and Mapping for Whole-body Molecular Image Retrieval
- Multi-dimensional Biomedical Data Visualization
- Image Representation using Multi-dimensional Biomedical Functional and Anatomical Features
- Automatic Image Content Annotation
- Web Image Annotation
- Data Management for Automated Identification and Classification of Plant Images
- Novel Image Retouching Techniques
- Automatic Video Content Annotation
- Intelligent Access to Digital TV Content
- Semantic Multimedia Information Retrieval
- Multimedia Streaming with Peer-to-Peer Techniques
- Intelligent Multimodality Molecular Image Segmentation
- Multimodality Medical Image Segmentation
- Medical Image Mining for Computer-Aided Diagnosis
- Functional Brain Image Understanding for Differential Diagnosis of Dementia
- Machine learning in Multiscale Image-Omics
- Content-based Retrieval and Management of Multi-dimensional Biomedical Imaging Data
- Object-based Volumetric Texture Feature Extraction for Biomedical Image Retrieval
- Semantic-driven Multi-modal Biomedical Data Visualisation
- Computerised image analysis of musculoskeletal diagnostics and surgical planning
Other opportunities with Associate Professor Jinman Kim
- Semantic-driven Multi-modal Biomedical Data Visualisation
- Computerised image analysis of musculoskeletal diagnostics and surgical planning
- Multi-dimensional Biomedical Data Visualization
- Image Representation using Multi-dimensional Biomedical Functional and Anatomical Features
- Machine learning in Multiscale Image-Omics