Himadri Nandi

School of Engineering and Technology
Engineering; Information and Computing Sciences; Technology
Dr Abdul Md Mazid
Masters by Research
0009-0008-7478-4596
himadri.nandi@cqumail.com

Research Details

Thesis Name

AI-driven multimodal fusion of drone and satellite imagery for rapid post-disaster damage assessment

Thesis Abstract

Over the past decade, the frequency, severity, and economic impact of natural disasters worldwide have increased. Between 2019 and 2022, Australia endured sequential disasters: devastating bushfires that burned over 24 million hectares, followed by catastrophic flooding in Eastern Australia. These disasters caused significant humanitarian and wildlife crises and economic losses, and overwhelmed the capacity of first responders and emergency services. With the unsettling trajectory of worsening extreme weather, there is an urgent need for rapid damage assessment that can assist emergency services, infrastructure operators, and governments in planning quick responses, allocating resources, and facilitating recovery.

Current post-disaster assessment and mapping rely heavily on ground surveys, which are labour-intensive and limited in spatial coverage, and on satellite imagery. Satellite platforms, such as the Copernicus Sentinel missions, provide wide coverage, and all-weather Synthetic Aperture Radar (SAR) satellites allow data collection in challenging weather conditions; however, their spatial resolution is often insufficient to identify building-level and infrastructure impacts. Conversely, modern drone-mounted multispectral cameras and LiDAR offer high-resolution, flexible data collection, but are constrained by regulatory restrictions, such as beyond-visual-line-of-sight (BVLOS) approval requirements, as well as limited operational coverage and flight time.
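
For context, a common unimodal baseline with SAR satellites is change detection between pre- and post-event backscatter via a log-ratio image. The short sketch below (Python with NumPy) illustrates the idea on synthetic arrays; the array sizes, noise model, and 0.8 threshold are illustrative assumptions, and a real pipeline would first calibrate, co-register, and speckle-filter the scenes.

    # Classical SAR log-ratio change detection on synthetic data.
    # All values are illustrative; real scenes need calibration,
    # co-registration, and speckle filtering before thresholding.
    import numpy as np

    rng = np.random.default_rng(0)
    pre = rng.gamma(shape=4.0, scale=0.05, size=(512, 512))  # pre-event backscatter
    post = pre.copy()
    post[200:300, 200:300] *= 0.3                            # simulated damage patch

    eps = 1e-6                                               # avoid log(0)
    log_ratio = np.log((post + eps) / (pre + eps))           # change index
    changed = np.abs(log_ratio) > 0.8                        # assumed threshold
    print(f"flagged {changed.mean():.1%} of pixels as changed")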

Despite advances in AI-assisted image analysis and change detection, current damage assessment models remain largely unimodal: there is no robust operational framework that utilises AI to integrate these heterogeneous data sources into a unified damage assessment output. This research aims to address that gap by developing an AI framework that leverages both the timely, wide-area coverage of satellite imagery and the high resolution of drone imagery to produce a unified, scalable, and precise post-disaster damage map. Advanced deep learning architectures, such as hierarchical Vision Transformers, attention mechanisms, and convolutional neural networks (CNNs), will be used to combine and interpret heterogeneous data, including very-high-resolution (VHR) satellite optical/SAR imagery and RGB-LiDAR data from ultra-high-definition drone cameras, to improve damage detection and classification in dynamic and complex environments.
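
To make the intended multimodal fusion concrete, the sketch below (Python/PyTorch) shows one plausible arrangement under stated assumptions: a small CNN encoder per modality, a cross-attention block in which drone-derived tokens attend to co-registered satellite tokens, and a per-pixel damage classification head. The names FusionDamageNet and CrossModalFusion, the layer sizes, band counts, and four damage classes are all illustrative assumptions, not the thesis design.

    # Minimal two-branch fusion sketch (illustrative sizes and names).
    import torch
    import torch.nn as nn

    class CrossModalFusion(nn.Module):
        """Drone tokens (queries) attend to satellite tokens (keys/values)."""
        def __init__(self, dim: int = 256, heads: int = 8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, drone_tokens, sat_tokens):
            fused, _ = self.attn(drone_tokens, sat_tokens, sat_tokens)
            return self.norm(drone_tokens + fused)  # residual connection

    class FusionDamageNet(nn.Module):
        """Toy model: one CNN encoder per modality + attention fusion."""
        def __init__(self, sat_channels=4, drone_channels=3, dim=256, n_classes=4):
            super().__init__()
            def encoder(in_ch):  # two stride-2 convs -> 1/4-resolution features
                return nn.Sequential(
                    nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
                )
            self.sat_enc = encoder(sat_channels)      # e.g. optical + SAR bands
            self.drone_enc = encoder(drone_channels)  # e.g. RGB orthomosaic
            self.fusion = CrossModalFusion(dim)
            self.head = nn.Conv2d(dim, n_classes, 1)  # per-pixel damage logits

        def forward(self, sat, drone):
            fs = self.sat_enc(sat)                    # (B, dim, H/4, W/4)
            fd = self.drone_enc(drone)
            b, c, h, w = fd.shape
            tokens_d = fd.flatten(2).transpose(1, 2)  # (B, HW, dim)
            tokens_s = fs.flatten(2).transpose(1, 2)
            fused = self.fusion(tokens_d, tokens_s)
            fused = fused.transpose(1, 2).reshape(b, c, h, w)
            return self.head(fused)

    # Dummy co-registered tiles: 4-band satellite, 3-band drone RGB
    model = FusionDamageNet()
    sat = torch.randn(1, 4, 128, 128)
    drone = torch.randn(1, 3, 128, 128)
    logits = model(sat, drone)                        # (1, 4, 32, 32)

In practice, the toy encoders would be swapped for pretrained backbones such as a hierarchical Vision Transformer, and tiles would be geo-registered so that tokens from both modalities describe the same ground footprint.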

The following research questions will be investigated:

  1. How can a drone-based disaster monitoring system be designed for real-time kinematic (RTK) operation?
  2. How can AI models be designed to combine high-resolution drone imagery with wide-area satellite data for fast and accurate damage assessment following natural disasters?
  3. How can the model be designed to operate efficiently in time-sensitive emergency response situations?
  4. How reliable and consistent is the model's performance across different disaster types, such as bushfires, floods, and other major events?

By addressing these questions, this research aims to advance AI-driven multimodal fusion for disaster monitoring and management, with a focus on developing practical, deployable solutions. The outcomes are expected to contribute to the academic understanding of multimodal geospatial fusion models and to enhance real-world disaster resilience in Australia and internationally.