Research

My academic journey began with personalized modeling of vascular disease based on medical imaging. As deep learning advanced and the potential of artificial general intelligence became apparent, my research focus evolved toward AI for Multi-Modal Medicine (AIM³).

Here, “multi-modal” encompasses more than just integrating various medical imaging modalities (e.g., CT, MR, PET). It also includes incorporating diverse medical data modalities (e.g., tabular data, waveforms, images, and text) and assimilating relevant domain-specific scientific knowledge (e.g., mechanics, physics, and life sciences). Leveraging recent breakthroughs in multi-modal AI and our growing understanding of biological systems, we aim to develop multi-modal foundation models for the diagnosis and treatment of specific diseases.

Topics

My recent research focuses on developing AI models that effectively integrate and interpret information from multiple sources and across diverse scales. These models are instantiated as multi-modal foundation models built on large-scale healthcare data provided by our clinical collaborators. Key projects include: