One Pixel Attack Analysis Using Activation Maps


Shubham Sinha, S. S. Saranya

Abstract

According to researchers, adding a small perturbation (a small change) to the input of a Deep Neural Network (DNN) can easily alter its output. In this paper, we perform this attack by perturbing a single pixel of the input image of an image-classification model. We also examine activation maps, which help us visualize both the working of the attack and the classification of the image. To perform the attack we use an optimization algorithm called Differential Evolution (DE), which generates the adversarial perturbation for one pixel. This is a grey-box attack, meaning that we have little information about the target model, and it can fool many types of DNN because of the properties of DE. The results show that images in the Kaggle CIFAR-10 dataset and the ImageNet dataset can be attacked by modifying just one pixel of the image; the same vulnerability exists in the original CIFAR-10 dataset. Thus, this attack shows that currently deployed Deep Neural Networks are susceptible to such low-dimension attacks.
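The abstract describes encoding a one-pixel perturbation as a candidate vector (pixel coordinates plus RGB values) and letting Differential Evolution minimize the model's confidence in the true class. The following is a minimal sketch of that idea using SciPy's `differential_evolution`; the `toy_confidence` function is a hypothetical stand-in for a real DNN's softmax output, and the image size, bounds, and DE settings are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from scipy.optimize import differential_evolution


def toy_confidence(image):
    # Hypothetical stand-in for a DNN's true-class softmax score:
    # here, confidence is simply the normalized mean brightness.
    return float(np.clip(image.mean() / 255.0, 0.0, 1.0))


def perturb(image, candidate):
    # A candidate encodes one pixel: (x, y, r, g, b).
    x, y, r, g, b = candidate
    out = image.copy()
    out[int(x), int(y)] = [r, g, b]
    return out


def one_pixel_attack(image):
    h, w, _ = image.shape
    # Search space: pixel position and its new RGB value.
    bounds = [(0, h - 1), (0, w - 1), (0, 255), (0, 255), (0, 255)]
    # DE minimizes the true-class confidence, i.e. it searches for the
    # single-pixel change that pushes the model toward misclassification.
    result = differential_evolution(
        lambda c: toy_confidence(perturb(image, c)),
        bounds, maxiter=20, popsize=10, seed=0, tol=1e-6,
    )
    return result.x, result.fun


if __name__ == "__main__":
    img = np.full((8, 8, 3), 200, dtype=np.uint8)  # bright dummy image
    best_pixel, attacked_conf = one_pixel_attack(img)
    print("best candidate:", best_pixel, "confidence:", attacked_conf)
```

In the paper's grey-box setting, only the model's output scores are queried, never its gradients, which is why a gradient-free optimizer like DE fits the attack.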

Article Details

How to Cite
Shubham Sinha, S. S. Saranya. (2021). One Pixel Attack Analysis Using Activation Maps. Annals of the Romanian Society for Cell Biology, 8397–8404. Retrieved from https://www.annalsofrscb.ro/index.php/journal/article/view/2382