Adversarial machine learning has recently become both a major concern and an active research topic, especially given the ubiquitous use of deep neural networks in the current landscape. Adversarial attacks and defenses are often likened to a cat-and-mouse game in which defenders and attackers evolve over time. On the
one hand, the goal is to develop strong and robust deep networks that are
resistant to malicious actors. On the other hand, in order to achieve that, we
need to devise even stronger adversarial attacks to challenge these defense
models. Most existing attacks employ a single $\ell_p$ distance (commonly, $p \in \{1, 2, \infty\}$) to define the notion of closeness and perform steepest gradient ascent w.r.t. this $p$-norm, updating every pixel of the adversarial example in the same way. Each of these $\ell_p$ attacks has its own pros and cons, and no single attack can break through defense models that are robust against multiple $\ell_p$ norms simultaneously.
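As a point of reference for the single-norm attacks discussed above, the following minimal sketch (our own PyTorch illustration, not the paper's code) shows the standard steepest-ascent directions under the three common norms: the sign of the gradient for $\ell_\infty$, the normalized gradient for $\ell_2$, and a single-coordinate step for $\ell_1$. The function name `steepest_ascent_direction` is assumed here for illustration.

```python
import torch

def steepest_ascent_direction(grad: torch.Tensor, p: str) -> torch.Tensor:
    """Steepest-ascent step direction for an l_p-constrained attack.

    grad: loss gradient w.r.t. a batch of images, shape (B, C, H, W).
    p:    "1", "2", or "inf".
    """
    if p == "inf":
        # l_inf: move every pixel by the sign of its gradient (FGSM/PGD style).
        return grad.sign()

    flat = grad.flatten(start_dim=1)  # (B, C*H*W), one row per image
    if p == "2":
        # l_2: move along the gradient normalized to unit l_2 norm.
        norm = flat.norm(p=2, dim=1).clamp_min(1e-12)
        return grad / norm.view(-1, 1, 1, 1)
    if p == "1":
        # l_1: concentrate the whole step on the largest-magnitude coordinate.
        direction = torch.zeros_like(flat)
        idx = flat.abs().argmax(dim=1, keepdim=True)
        direction.scatter_(1, idx, flat.gather(1, idx).sign())
        return direction.view_as(grad)

    raise ValueError(f"unsupported norm: {p}")
```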
Motivated by these observations, we propose a natural approach: combining various $\ell_p$ gradient projections at the pixel level to obtain a joint adversarial perturbation. Specifically, we learn how to perturb each pixel so as to maximize attack performance while maintaining the overall visual imperceptibility of the adversarial examples. Finally, through extensive experiments on standardized benchmarks, we show that our method outperforms most current strong attacks across state-of-the-art defense mechanisms, while keeping the resulting adversarial examples visually clean.
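The pixel-level combination itself is only described at a high level here, so the following is a hypothetical sketch of the idea rather than the authors' method: per-pixel weights (simply softmax-normalized below; in practice they would be learned to maximize attack performance) blend the three single-norm directions into one joint update. It reuses the `steepest_ascent_direction` helper sketched above; the function name, the `weights` shape, and the step size are all our assumptions.

```python
import torch

def combined_perturbation(grad: torch.Tensor,
                          weights: torch.Tensor,
                          step_size: float = 2.0 / 255) -> torch.Tensor:
    """Blend l_1, l_2, and l_inf ascent directions with per-pixel weights.

    grad:    loss gradient w.r.t. a batch of images, shape (B, C, H, W).
    weights: raw scores of shape (B, 3, C, H, W) (or broadcastable to it),
             one score per norm and per pixel.
    """
    dirs = torch.stack(
        [steepest_ascent_direction(grad, p) for p in ("1", "2", "inf")],
        dim=1,
    )                                          # (B, 3, C, H, W)
    w = torch.softmax(weights, dim=1)          # convex combination over the three norms
    return step_size * (w * dirs).sum(dim=1)   # per-pixel weighted update, (B, C, H, W)
```

In a full attack loop, an update of this kind would be added to the current adversarial example and then projected back into the allowed perturbation set before the next gradient step.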
