003: Sponge Attack against Multi-Exit Networks

A research project on data poisoning attacks

About The Project

As data integrity becomes paramount, machine learning models are increasingly exposed to adversarial attacks and data poisoning. Poisoned samples injected into a training dataset can severely impair model performance, leading to erroneous predictions and security breaches. Conventional data filtering methods often fail to identify and correct these attacks, making a more proactive, automated defense a necessity.

In this project, my team and I used the CIFAR-10 dataset, a popular benchmark of 60,000 32×32 images across 10 classes spanning vehicles and animals. Our model is a multi-exit CNN: by attaching exit points to intermediate layers, the network can trade computational cost against accuracy and adapt how much computation each input receives. We implemented batch normalization, early-exit branches, and per-exit loss weighting to stabilize training and make detection of poisoned samples more resilient. The approach maintains high classification accuracy while improving reliability, making it an efficient option for secure machine learning deployment.
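To make the architecture concrete, here is a minimal sketch of a multi-exit CNN with a weighted multi-exit loss. It is illustrative only: it assumes a PyTorch implementation, two early-exit branches plus a final exit, and hand-picked loss weights, none of which are specified in this README.

```python
import torch
import torch.nn as nn

class MultiExitCNN(nn.Module):
    """Minimal multi-exit CNN sketch: two early exits plus a final exit.

    Layer sizes and the number of exits are assumptions for illustration,
    not the project's actual configuration.
    """

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Stage 1: conv block with batch normalization for stable training.
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.exit1 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(32, num_classes))
        # Stage 2: deeper features, second early exit.
        self.stage2 = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.exit2 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(64, num_classes))
        # Stage 3: final feature block and final exit.
        self.stage3 = nn.Sequential(
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.exit3 = nn.Linear(128, num_classes)

    def forward(self, x):
        h1 = self.stage1(x)
        h2 = self.stage2(h1)
        h3 = self.stage3(h2)
        # Return logits from every exit; at inference time an input can
        # leave at the first exit whose confidence clears a threshold.
        return self.exit1(h1), self.exit2(h2), self.exit3(h3)

def multi_exit_loss(outputs, targets, weights=(0.3, 0.3, 1.0)):
    """Weighted sum of per-exit cross-entropy losses (weights are assumed)."""
    criterion = nn.CrossEntropyLoss()
    return sum(w * criterion(logits, targets)
               for w, logits in zip(weights, outputs))
```

Weighting the final exit most heavily keeps the deepest classifier dominant while still providing gradient signal to train the early branches.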

Our work proposes a system that removes poisoned samples from a dataset and replaces them with AI-generated synthetic data. The framework combines anomaly detection with feature-based generation: deep learning models flag anomalous data points, and rather than relying on computationally intensive image generation techniques, we investigate lean alternatives such as feature interpolation and generative data augmentation to preserve data integrity without extensive computational overhead.
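The detect-and-replace loop could look roughly like the sketch below. It is a simplified stand-in under stated assumptions: features are embeddings taken from a trained model, the detector is a per-class distance-to-centroid z-score rather than whatever deep anomaly detector the project actually uses, and "generation" is plain linear interpolation between two clean same-class samples.

```python
import numpy as np

def detect_anomalies(features: np.ndarray, labels: np.ndarray,
                     threshold: float = 3.0) -> np.ndarray:
    """Flag samples whose feature vector lies far from its class centroid.

    Uses a simple per-class z-score on distance to the centroid; the real
    project may use a different, learned anomaly detector.
    """
    flagged = np.zeros(len(features), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        z = (dists - dists.mean()) / (dists.std() + 1e-8)
        flagged[idx[z > threshold]] = True
    return flagged

def interpolate_replacements(features, labels, flagged, alpha=0.5, rng=None):
    """Replace each flagged sample by interpolating two clean same-class
    samples (feature interpolation in lieu of full image synthesis)."""
    rng = rng or np.random.default_rng(0)
    repaired = features.copy()
    for i in np.where(flagged)[0]:
        clean = np.where((labels == labels[i]) & ~flagged)[0]
        if len(clean) < 2:
            continue  # not enough clean samples in this class to interpolate
        a, b = rng.choice(clean, size=2, replace=False)
        repaired[i] = alpha * features[a] + (1 - alpha) * features[b]
    return repaired
```

Interpolating in feature space sidesteps full image synthesis, which is exactly the kind of lightweight alternative described above.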

© 2025 Sumedh Murakonda. All rights reserved.