Abstract (English)
A variety of studies have recently been conducted since deep neural network models were found to be vulnerable
to adversarial attacks, which induce misclassification by adding imperceptible perturbations to the input. Although
such adversarial attacks can cause serious problems in deep neural network-based autonomous driving and security
applications, existing adversarial attack simulators serve only research verification purposes, and no simulator is
available from a service perspective. Therefore, in this work we set out to develop a simulator that lets users
compare images before and after an attack, by implementing and visualizing various adversarial attack techniques
as modules on top of the machine learning framework TensorFlow and the open-source library cleverhans. As a
first step, we implemented an adversarial attack simulator that trains a deep neural network model on MNIST,
the handwritten-digit dataset, and attacks the trained model with a modularized Fast Gradient Sign Method (FGSM)
module. We then added further modularized attacks, such as the Jacobian-based Saliency Map Attack (JSMA),
so that users can choose which attack technique to apply. Furthermore, when a file from the MNIST dataset is
given as input, the noise generated during the attack is stored separately, so that the user can easily inspect the
original image, the attacked image, and the perturbation.
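As an illustration of the FGSM step described above, the following is a minimal sketch written directly with TensorFlow 2's GradientTape rather than the simulator's own module; the function name `fgsm_perturb`, the default epsilon, and the assumption of a Keras classifier with softmax outputs and sparse integer labels are illustrative choices, not the authors' implementation. Recent versions of cleverhans offer a comparable `fast_gradient_method` helper, though its exact signature depends on the library version.

```python
import tensorflow as tf

def fgsm_perturb(model, x, y, eps=0.1):
    """Return (adversarial images, perturbation) for a batch of MNIST inputs x with labels y.

    Sketch under assumptions: `model` is a tf.keras classifier with softmax outputs,
    `x` is scaled to [0, 1], and `y` holds sparse integer class labels.
    """
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        # Loss of the model's prediction with respect to the true labels.
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x))
    grad = tape.gradient(loss, x)
    # FGSM: a single step of size eps in the direction of the sign of the loss gradient.
    perturbation = eps * tf.sign(grad)
    # Keep the attacked image in the valid pixel range; the perturbation is kept
    # separately so the original, attacked, and noise images can all be displayed.
    x_adv = tf.clip_by_value(x + perturbation, 0.0, 1.0)
    return x_adv, perturbation
```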