PyTorch implementation of projected gradient descent (PGD) adversarial noise attack
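For orientation, here is a minimal sketch of an untargeted L-infinity PGD attack in PyTorch. The function name `pgd_attack` and the default `eps`, `alpha`, and `steps` values are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Untargeted L-infinity PGD: repeat a signed-gradient ascent step on
    the loss, projecting back into the eps-ball after each step.
    (Illustrative sketch; names and defaults are assumptions.)"""
    x = x.detach()
    # Random start inside the eps-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        # Ascent step on the sign of the gradient, then projection onto
        # the L-infinity eps-ball around the clean input x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```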
Project developed during the 'Optimization for Data Science' course at the University of Padua. The project provides a Python implementation of Frank-Wolfe methods for recommender systems.
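As a rough illustration of the method, here is a minimal sketch of the classic Frank-Wolfe iteration over an L1-ball constraint. The `frank_wolfe_l1` helper and the choice of constraint set are assumptions for illustration; the repository's recommender-system formulation may use a different feasible set (e.g., a trace-norm ball).

```python
import numpy as np

def frank_wolfe_l1(grad_f, x0, tau=1.0, steps=100):
    """Frank-Wolfe over the L1-ball {x : ||x||_1 <= tau}.
    grad_f: callable returning the gradient of the smooth objective at x.
    (Illustrative sketch; the constraint set is an assumption.)"""
    x = np.array(x0, dtype=float)
    for k in range(steps):
        g = grad_f(x)
        # Linear minimization oracle: over the L1-ball, <g, s> is minimized
        # at the vertex +-tau * e_i for the largest-magnitude gradient
        # coordinate, with opposite sign.
        i = np.argmax(np.abs(g))
        s = np.zeros_like(x)
        s[i] = -tau * np.sign(g[i])
        gamma = 2.0 / (k + 2)             # classic diminishing step size
        x = (1 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Example: constrained least squares, min 0.5*||A @ x - b||^2 s.t. ||x||_1 <= tau
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x_hat = frank_wolfe_l1(lambda x: A.T @ (A @ x - b), np.zeros(5), tau=1.0)
```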
Hands-on AI security workshop by GDSC Asia Pacific University – explore the fundamentals of attacking machine learning systems through white-box and black-box techniques. Learn to evade image classifiers and manipulate LLM behavior using real-world tools and methods.