From Schneier on Security – Model Extraction Attack on Neural Networks

Adi Shamir et al. have a new model extraction attack on neural networks:
Polynomial Time Cryptanalytic Extraction of Neural Network Models
Abstract: Billions of dollars and countless GPU hours are currently spent on training Deep Neural Networks (DNNs) for a variety of tasks. Thus, it is essential to determine the difficulty of extracting all the parameters of such neural networks when given access to their black-box implementations. Many versions of this problem have been studied over the last 30 years, and the best current attack on ReLU-based deep neural networks was presented at Crypto'20 by Carlini, Jagielski, and Mironov. It resembles a differential chosen plaintext attack on a cryptosystem, which has a secret key embedded in its black-box implementation and requires a polynomial number of queries but an exponential amount of time (as a function of the number of neurons)…
