Reliable Federated Learning against Data Poisoning Attacks

School of Engineering and Technology

Centre for Intelligent Systems (CIS)

Hong Shen

Synopsis

The project focuses on enhancing the reliability and robustness of federated learning (FL) systems in the presence of data poisoning attacks. Federated learning is a decentralized machine learning approach in which multiple participants collaboratively train a shared model without sharing their raw data. However, malicious or compromised participants can manipulate their local training data to poison the global model, leading to degraded performance or biased outcomes. The project aims to develop techniques and frameworks that investigate data poisoning methods (i.e. how poisons are generated and embedded in local data) and that detect, mitigate, and prevent data poisoning attacks, ensuring the reliability and integrity of FL systems. A minimal illustration of this threat model appears in the sketch below.
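The following is a minimal sketch, not the project's proposed method: it simulates federated averaging on a synthetic linear-regression task where a fraction of clients poison their labels, and contrasts plain mean aggregation with coordinate-wise median aggregation, one well-known robust alternative. All names, parameters, and the synthetic data setup are illustrative assumptions.

```python
# Illustrative sketch only: FedAvg on synthetic data with label-poisoning clients,
# comparing mean aggregation with a coordinate-wise median defense.
import numpy as np

rng = np.random.default_rng(0)
DIM, CLIENTS, POISONED, ROUNDS, LR = 5, 10, 3, 50, 0.1
true_w = rng.normal(size=DIM)                       # ground-truth model weights

def make_client(poisoned):
    X = rng.normal(size=(100, DIM))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    if poisoned:                                    # data poisoning: flip the regression targets
        y = -y
    return X, y

clients = [make_client(i < POISONED) for i in range(CLIENTS)]

def local_update(w, X, y, lr=LR):
    grad = 2 * X.T @ (X @ w - y) / len(y)           # gradient of the local mean squared error
    return w - lr * grad                            # one local training step

def train(aggregate):
    w = np.zeros(DIM)
    for _ in range(ROUNDS):
        local = np.stack([local_update(w, X, y) for X, y in clients])
        w = aggregate(local)                        # server combines the client models
    return np.linalg.norm(w - true_w)               # distance to the clean optimum

print("FedAvg (mean) error:          ", round(train(lambda W: W.mean(axis=0)), 3))
print("Coordinate-wise median error: ", round(train(lambda W: np.median(W, axis=0)), 3))
```

With 3 of 10 clients poisoned, the mean-aggregated model is pulled noticeably away from the clean optimum, while the median-aggregated model stays close to it, which is the kind of degradation-versus-mitigation trade-off the project studies.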

The project has the potential to make FL systems more secure, reliable, and trustworthy. This could enable broader adoption of FL in sensitive applications, such as personalized healthcare, fraud detection, and smart cities, where data integrity and model reliability are paramount. The project contributes to the growing field of secure and robust machine learning, offering solutions that protect the integrity of the global model and ensure fair outcomes for all participants.

Research area: Information and Computing Sciences

Start date: Immediately

Degree type: Either Masters or Doctorate

Location: Brisbane

Project Contacts