TY - JOUR
T1 - Non-Lipschitz Attack: A More Sparse Adversarial Attack via Non-Lipschitz $ℓ_p$ Regularization
AU - Lin, Xuan
AU - Xie, Haidong
AU - Wu, Chunlin
AU - Xiang, Xueshuang
JO - CSIAM Transactions on Applied Mathematics
VL - 4
IS - 4
SP - 797
EP - 819
PY - 2023
DA - 2023/10
DO - 10.4208/csiam-am.SO-2022-0005
UR - https://global-sci.org/intro/article_detail/csiam-am/22079.html
KW - Sparse adversarial attack, $ℓ_p \ (0< p <1)$ regularization, lower bound theory, support shrinkage, ADMM
AB -
Deep neural networks are highly vulnerable to adversarial attacks. Among these, sparse attacks mislead image classifiers with pixel-level perturbations that alter only a few pixels, and hold great potential for physical-world applications. Existing sparse attacks are mostly based on $ℓ_0$ optimization, and few theoretical results are available for them. In this paper, we propose a novel sparse attack approach named the non-Lipschitz attack (NLA). For the proposed $ℓ_p \ (0< p <1)$ regularization attack model, we derive a lower bound theory that yields a support inclusion analysis. Building on this analysis, we naturally extend previous works to an iterative algorithm with support-shrinking and thresholding strategies, together with an efficient ADMM inner solver. Experiments show that NLA outperforms comparative attacks on several datasets and networks in both targeted and untargeted scenarios. NLA achieves a 100% attack success rate in almost all cases while perturbing roughly 14% fewer pixels on average than the recent $ℓ_0$ attack FMN-$ℓ_0$.