Adversarial examples in random neural networks with general activations



Abstract

A substantial body of empirical work documents the lack of robustness of deep learning models to adversarial examples. Recent theoretical work proved that adversarial examples are ubiquitous in two-layer networks with sub-exponential width and ReLU or smooth activations, and in multi-layer ReLU networks with sub-exponential width. We present a result of the same type, with no restriction on width and for general locally Lipschitz continuous activations.

More precisely, given a neural network $f(\,\cdot\,;\theta)$ with random weights $\theta$ and feature vector $x$, we show that an adversarial example $x'$ can be found with high probability along the direction of the gradient $\nabla_x f(x;\theta)$. Our proof is based on a Gaussian conditioning technique. Instead of proving that $f$ is approximately linear in a neighborhood of $x$, we characterize the joint distribution of $f(x;\theta)$ and $f(x';\theta)$ for $x' = x - s\,\nabla_x f(x;\theta)$, where $s > 0$ is some positive step size.
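To make the gradient-direction construction concrete, here is a minimal NumPy sketch for a random two-layer ReLU network (ReLU is one admissible locally Lipschitz activation). The architecture, the weight scalings, and the step-size choice $s = 2f(x)/\|\nabla_x f(x;\theta)\|^2$ are illustrative assumptions, not the paper's exact construction; the step is chosen so that a locally linear approximation of $f$ would flip the sign of the output.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 1000, 2000  # input dimension and hidden width (illustrative sizes)

# Random two-layer network f(x; theta) = a . relu(W x) / sqrt(m),
# with ReLU as one admissible locally Lipschitz activation.
W = rng.normal(size=(m, d)) / np.sqrt(d)   # first-layer weights
a = rng.choice([-1.0, 1.0], size=m)        # second-layer signs

def f(x):
    return a @ np.maximum(W @ x, 0.0) / np.sqrt(m)

def grad_f(x):
    # For ReLU, d/dx relu(W x) = diag(1{W x > 0}) W, hence
    # grad_x f = W^T (a * 1{W x > 0}) / sqrt(m).
    act = (W @ x > 0.0).astype(float)
    return W.T @ (a * act) / np.sqrt(m)

x = rng.normal(size=d)                     # feature vector, ||x|| ~ sqrt(d)
g = grad_f(x)

# Hypothetical step size: if f were exactly linear near x, stepping by
# s along -g would map f(x) to -f(x), i.e., flip the sign of the output.
s = 2.0 * f(x) / np.linalg.norm(g) ** 2
x_adv = x - s * g                          # perturb along the gradient direction

print(f"f(x)  = {f(x):+.4f}")
print(f"f(x') = {f(x_adv):+.4f}")
print(f"relative perturbation ||x'-x||/||x|| = "
      f"{np.linalg.norm(x_adv - x) / np.linalg.norm(x):.3f}")
```

In this sketch the perturbation has norm of order one while $\|x\|$ is of order $\sqrt{d}$, so the relative change printed at the end is small even though the network output moves substantially, which is the sense in which the perturbed input is adversarial.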

Cite this article

Andrea Montanari and Yuchen Wu, Adversarial examples in random neural networks with general activations. Math. Stat. Learn. 6 (2023), no. 1/2, pp. 143–200.

DOI 10.4171/MSL/41