Research Supervisor Connect

Machine learning in Multiscale Image-Omics

Summary

BMIT excels at addressing bio-inspired and other real-world challenges with core computing and information technology research in image processing and informatics, computer vision, big data fusion and analysis, visualization and visual analytics, multimedia technology, and intelligent algorithms. Our research has numerous applications, including in the biomedical and health domains, where we have reshaped the biomedical research and digital healthcare practice landscape in several ways.

Students will join a strong research team and work closely with a multi-disciplinary supervisory team. They will be able to contribute to other related projects and build networks with other students, postdocs, clinicians and scientists. Our projects involve close collaboration with hospitals (e.g., Royal Prince Alfred, Westmead, Nepean) and industry partners (e.g., Microsoft) in both Australia and abroad to translate algorithms into clinical trials and commercial applications; there will be opportunities for internships with our partner organisations.

Supervisors

Professor David Feng, Associate Professor Jinman Kim.

Research location

Computer Science

Program type

Masters/PhD

Synopsis

Recent findings have demonstrated the feasibility of using high-dimensional features derived from large datasets of radiology images (x-ray and computed tomography [CT]) for lesion characterisation, response prediction, and prognostication in lung cancer patients. Current approaches to integrating imaging and omics data rely on deriving homogeneous disease phenotypes from a patient's imaging data, typically acquired from a single modality at a single time-point. We suggest that harnessing semantic knowledge and models derived from patient populations will allow much better disease characterisation from images, change current practices, and lead to breakthroughs in disease classification and precision medicine.

Additional information

Topic 1. Image-omics features for the analysis of lung cancer in PET-CT images

Conventional approaches generally regard only the radiology data, without integrating the complementary information provided by other imaging modalities, such as nuclear medicine functional images (positron emission tomography [PET]). Further, conventional approaches rely on an ad-hoc definition of "traditional" image features (e.g., texture, shape) that have been used across a wide range of generic object recognition tasks and may not be meaningful for PET: texture analysis suffers from the inherent noise, and shape analysis from the low spatial resolution.

In this project, we will derive a new approach to extracting image-omic features that are specific to 18F-FDG PET/CT images, using state-of-the-art deep learning coupled with conventional (texture, morphological, statistical and regional) features; a minimal sketch of this style of feature extraction appears at the end of this section. We will use these new data-specific image-omic features to analyse the relationship between visual features and clinical parameters (age, sex, histological type, tumour grade, stage and prognosis), to identify features that may predict the metabolic type of a lung tumour and help select specific chemotherapy when planning an individualised therapeutic strategy. We will target our study at data from lung cancer, where the global 5-year survival rate is lower than 20% and where our image-omics approach may reveal insights that could one day be used to improve this survival rate.

Topic 2. Extrapolating image-genomics features across heterogeneous sites of disease

Image-genomics is a state-of-the-art imaging informatics research area in which disease quantification is derived by coupling medical imaging data with genetic data extracted from tissue biopsies, as a means of unravelling the heterogeneity of disease and how it affects an individual. Image-genomics is based on the hypothesis that visual characteristics in an image encode the tumour's underlying genotype and the influence of its biological environment.

However, current approaches mainly focus on a 1-to-1 correlation between image features and genetic patterns at a single site of disease, which depends on obtaining repeated tissue samples from biopsies. In clinical practice, multiple tissue biopsies are rarely undertaken. This is a major hurdle because the image-genomic features from a single biopsied location may not be representative of the genetic characteristics at other sites of disease (i.e., in other organs); a major challenge is therefore the selection of image features that are most representative of the entire disease burden.

In this project, we will derive a new unsupervised deep learning technique for extracting image features that encode the image characteristics common across different sites of disease (the second sketch below illustrates the general idea). This contrasts with existing ad-hoc feature extraction and selection, which have been subjectively designed by humans for different purposes at single sites and risk overlooking features that may be relevant to other sites of disease. Such a technique may allow some genetic information from one site to be extrapolated to other sites and, more generally, will allow the inference of the genetic information of tumours from the morphology presented in medical imagery.
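To make the Topic 1 pipeline concrete, here is a minimal sketch (not the project's actual method) of combining hand-crafted first-order statistical features from a segmented PET tumour region with clinical covariates and fitting a simple classifier. The array names (pet_roi, clinical, labels) and the synthetic data are hypothetical placeholders.

```python
# Sketch: conventional statistical features + clinical parameters -> classifier.
# All inputs are synthetic stand-ins for masked PET ROIs and patient covariates.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def first_order_features(pet_roi: np.ndarray) -> np.ndarray:
    """Summary statistics of uptake values inside a segmented tumour region."""
    voxels = pet_roi[pet_roi > 0]              # keep voxels inside the mask
    counts, _ = np.histogram(voxels, bins=32)
    p = counts / counts.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))          # histogram (Shannon) entropy
    return np.array([
        voxels.mean(), voxels.std(), voxels.max(),
        skew(voxels), kurtosis(voxels), entropy,
        voxels.size,                           # crude volume surrogate
    ])

# Hypothetical cohort: 40 ROIs, 4 clinical covariates, binary outcomes.
rng = np.random.default_rng(0)
rois = [rng.gamma(2.0, 1.5, size=(16, 16, 16)) for _ in range(40)]
clinical = rng.normal(size=(40, 4))
labels = rng.integers(0, 2, size=40)

# Concatenate image features with clinical parameters, then fit a classifier
# to relate the combined feature vector to the clinical outcome.
X = np.hstack([np.vstack([first_order_features(r) for r in rois]), clinical])
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, labels)
```

In the project itself, such conventional features would be coupled with deep-learning-derived features rather than used alone; the sketch only shows the feature-plus-covariate structure of the analysis.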
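For Topic 2, a minimal sketch of unsupervised feature learning, assuming PyTorch and synthetic data: an autoencoder is trained on image patches pooled from multiple disease sites, so its bottleneck code is forced to capture structure shared across sites. Patch size, network dimensions and the training data are illustrative assumptions, not the project's actual design.

```python
# Sketch: a shared autoencoder over patches pooled from several sites;
# the latent bottleneck serves as a site-agnostic feature vector.
import torch
from torch import nn

class PatchAutoencoder(nn.Module):
    def __init__(self, patch_dim: int = 256, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(patch_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),        # shared latent code
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, patch_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Hypothetical stand-in for flattened 16x16 patches pooled from all sites.
patches = torch.rand(512, 256)
model = PatchAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):                        # short demo training loop
    recon, _ = model(patches)
    loss = loss_fn(recon, patches)             # reconstruction objective
    opt.zero_grad()
    loss.backward()
    opt.step()

# The bottleneck code is the learned feature vector that could then be
# correlated with genetic data at biopsied and unbiopsied sites.
with torch.no_grad():
    _, features = model(patches)
```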


Opportunity ID

The opportunity ID for this research opportunity is 2427
