💉🔐 Novel algorithm for defending against Data Poisoning Attacks in a Federated Learning scenario
Updated Apr 22, 2024 · Jupyter Notebook
Experiments on data poisoning attacks against regression learning
This repository contains the code, dataset, and experimental results for the paper "Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks", accepted for publication at the 32nd IEEE/ACM International Conference on Program Comprehension (ICPC 2024).
Library for simulating data poisoning attacks and defence strategies against online machine learning systems.
Flareon: Stealthy Backdoor Injection via Poisoned Augmentation
A backdoor attack in a Federated learning setting using the FATE framework
This is the official code for the ESORICS 2024 paper "ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification"
DSC 253 Advanced Data-Driven Text Mining Project
🤖 Generate tailored AI training datasets quickly and easily, transforming your domain knowledge into essential training data for model fine-tuning.
[ACCV 2022] The official repository of "COLLIDER: A Robust Training Framework for Backdoor Data".
Investigating lightweight approaches for trojan detection in code models using only model parameters.
Python API and toolkit for detecting data poisoning in ML models using Adversarial Robustness Toolbox (ART) defenses
🛠️ Generate AI training datasets easily, transforming complex information from documents into structured data for model fine-tuning.
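To ground the terminology shared by these repositories, here is a minimal, self-contained sketch (not taken from any of the listed projects) of a targeted data poisoning attack: the attacker injects mislabeled copies of a chosen target input into the training set so that a toy nearest-centroid classifier assigns the attacker's label to that input. The classifier, the data, and the injection budget are all illustrative assumptions.

```python
# Toy nearest-centroid classifier on 1-D features (illustrative only).
def centroids(data):
    # Mean feature value per class label.
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    # Assign the label whose centroid is closest to x.
    return min(model, key=lambda label: abs(x - model[label]))

# Clean training data: class 0 clusters near 1, class 1 near 11.
clean = [(0, 0), (1, 0), (2, 0), (10, 1), (11, 1), (12, 1)]

# Targeted poisoning: inject many copies of the target input (10)
# with the attacker's desired label (0), dragging the class-0
# centroid toward the target.
poison = [(10, 0)] * 30

model_clean = centroids(clean)
model_poisoned = centroids(clean + poison)

print(predict(model_clean, 10))     # 1 (correct label)
print(predict(model_poisoned, 10))  # 0 (attacker's target label)
```

The same framing covers the defences above: a sanitization step filters suspicious training points before the centroids are computed, while certified or purification-based approaches bound or undo the centroid shift the poison causes.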