Volume 4, Issue 4
Non-Lipschitz Attack: A More Sparse Adversarial Attack via Non-Lipschitz $ℓ_p$ Regularization

Xuan Lin, Haidong Xie, Chunlin Wu & Xueshuang Xiang

CSIAM Trans. Appl. Math., 4 (2023), pp. 797-819.

Published online: 2023-10

  • Abstract

Deep neural networks are considerably vulnerable to adversarial attacks. Among these, sparse attacks mislead image classifiers with sparse, pixel-level perturbations that alter only a few pixels, and hold great promise for physical-world applications. Existing sparse attacks are mostly based on $ℓ_0$ optimization, and few theoretical results are available for them. In this paper, we propose a novel sparse attack approach named the non-Lipschitz attack (NLA). For the proposed $ℓ_p \ (0< p <1)$ regularization attack model, we derive a lower bound theory that leads to a support inclusion analysis. Based on these results, we naturally extend previous works to present an iterative algorithm with support shrinking and thresholding strategies, together with an efficient ADMM inner solver. Experiments show that our NLA method outperforms comparative attacks on several datasets with different networks, in both targeted and untargeted scenarios. NLA achieves a 100% attack success rate in almost all cases and perturbs roughly 14% fewer pixels on average than the recent $ℓ_0$ attack FMN-$ℓ_0$.
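To make the model class concrete, a standard $ℓ_p$-regularized sparse attack objective can be sketched as follows. This is an illustrative formulation assembled from the abstract, not necessarily the exact model of the paper; here $x$ denotes the clean image, $y$ its label, $f$ the classifier, $\delta$ the perturbation, $\mathcal{L}$ an adversarial loss encouraging misclassification, and $\lambda>0$ a regularization weight:

$$\min_{\delta}\ \mathcal{L}\bigl(f(x+\delta),\,y\bigr) \;+\; \lambda\,\|\delta\|_p^p, \qquad \|\delta\|_p^p=\sum_{i=1}^{n}|\delta_i|^{p},\quad 0<p<1,\quad x+\delta\in[0,1]^{n}.$$

Because $|t|^{p}$ with $0<p<1$ has unbounded derivative at $t=0$, the regularizer is non-Lipschitz at the origin; this is the property that promotes sparser solutions than the $ℓ_1$ norm and underlies the lower bound and support inclusion results mentioned in the abstract.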

  • AMS Subject Headings

68T07

  • Copyright

COPYRIGHT: © Global Science Press

  • BibTeX

@Article{CSIAM-AM-4-797,
  author  = {Lin, Xuan and Xie, Haidong and Wu, Chunlin and Xiang, Xueshuang},
  title   = {Non-Lipschitz Attack: A More Sparse Adversarial Attack via Non-Lipschitz $ℓ_p$ Regularization},
  journal = {CSIAM Transactions on Applied Mathematics},
  year    = {2023},
  volume  = {4},
  number  = {4},
  pages   = {797--819},
  issn    = {2708-0579},
  doi     = {10.4208/csiam-am.SO-2022-0005},
  url     = {http://global-sci.org/intro/article_detail/csiam-am/22079.html}
}

  • RIS

TY  - JOUR
T1  - Non-Lipschitz Attack: A More Sparse Adversarial Attack via Non-Lipschitz $ℓ_p$ Regularization
AU  - Lin, Xuan
AU  - Xie, Haidong
AU  - Wu, Chunlin
AU  - Xiang, Xueshuang
JO  - CSIAM Transactions on Applied Mathematics
VL  - 4
IS  - 4
SP  - 797
EP  - 819
PY  - 2023
DA  - 2023/10
SN  - 2708-0579
DO  - 10.4208/csiam-am.SO-2022-0005
UR  - https://global-sci.org/intro/article_detail/csiam-am/22079.html
KW  - Sparse adversarial attack, $ℓ_p \ (0< p <1)$ regularization, lower bound theory, support shrinkage, ADMM
ER  -

  • TXT

Lin, Xuan, Xie, Haidong, Wu, Chunlin and Xiang, Xueshuang. (2023). Non-Lipschitz Attack: A More Sparse Adversarial Attack via Non-Lipschitz $ℓ_p$ Regularization. CSIAM Transactions on Applied Mathematics. 4 (4). 797-819. doi:10.4208/csiam-am.SO-2022-0005