# Adversarial Defense Methods
This repository contains implementations of various adversarial defense methods, including **AIR**, **Vanilla Adversarial Training (AT)**, **LFL**, **EWC**, **Feat. Extraction** and **Joint Training**.
The code also allows training on multiple tasks (attacks) sequentially, with automatic defense method selection from the configuration file.
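Selection is driven by the `defense_method` string in the config. Below is a minimal, self-contained sketch of what such a dispatch can look like; the class and registry names are illustrative, not the repository's actual API:
```python
from dataclasses import dataclass

# Stand-ins for the real defense classes, for illustration only.
class AIRDefense: ...
class VanillaATDefense: ...

@dataclass
class Config:
    defense_method: str = "AIR"

# Registry mapping the config string to a defense class.
DEFENSES = {"AIR": AIRDefense, "VanillaAT": VanillaATDefense}

def build_defense(cfg: Config):
    if cfg.defense_method not in DEFENSES:
        raise ValueError(f"Unknown defense method: {cfg.defense_method}")
    return DEFENSES[cfg.defense_method]()

print(build_defense(Config()))  # -> an AIRDefense instance
```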
## Setup & Installation
### Prerequisites
To run this code, you’ll need to install the following dependencies:
1. **Python 3.x**
2. **PyTorch** (for training models)
3. **NumPy** (for numerical computations)
4. **torchvision** (for image transformations and datasets)
5. **Matplotlib** (for plotting and visualization)
6. **scikit-learn** (for t-SNE visualization)
### Step-by-step setup
1. **Clone the repository:**
First, clone the repository to your local machine:
```bash
git clone https://gitlab.cs.fau.de/ex55aveq/defense-without-forgetting-reproduction.git
cd defense-without-forgetting-reproduction
```
2. **Create and activate a virtual environment:**
It's recommended to use a virtual environment to isolate the dependencies.
```bash
python -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate
```
3. **Install dependencies:**
Install the necessary dependencies using `pip`:
```bash
pip install -r requirements.txt
```
4. **Configure the environment:**
Adjust the `Config` dataclass in `utils/config.py` to set your training parameters, dataset, and attack/defense methods.
---
## Usage Instructions
### Running the code
After setting up the environment, you can start training your model using the following command:
```bash
python main.py
```
This starts training on the dataset and attack sequence specified in the `Config` class in `utils/config.py`. Modify that file to change training parameters, attack methods, defense methods, or the dataset.
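To sanity-check your settings before a long run, you can instantiate the config directly (the `Config` dataclass lives in `utils/config.py`, as noted above):
```python
from utils.config import Config

cfg = Config()
print(cfg.dataset, cfg.defense_method, cfg.attack_sequence)
```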
### Configuration Options
The configuration options can be found in `utils/config.py`. Some of the key parameters you can adjust include:
- **Dataset**: Choose between `MNIST`, `CIFAR10`, and `CIFAR100`.
- **Attack Methods**: Specify the sequence of attacks you want to train on (e.g., `FGSM`, `PGD`, or `None`).
- **Defense Methods**: Choose a defense method such as `AIR`, `LFL`, `JointTraining`, `VanillaAT`, `FeatExtraction`, or `EWC`.
- **Training Parameters**: You can adjust `epochs`, `batch_size`, `learning_rate`, and other parameters as needed.
### Configuration Example
Below is an example of the configuration you might define in `config.py`:
```python
from dataclasses import dataclass

@dataclass
class Config:
    # General
    seed: int = 42
    device: str = "cuda"

    # Training
    epochs: int = 30
    batch_size: int = 128
    learning_rate: float = 0.1
    momentum: float = 0.9
    weight_decay: float = 5e-4

    # Attack params
    epsilon: float = 8 / 255
    alpha: float = 2 / 255
    num_steps: int = 10
    random_init: bool = True

    # Defense selection
    defense_method: str = "AIR"

    # AIR parameters
    lambda_SD: float = 1.0
    lambda_IR: float = 1.0
    lambda_AR: float = 1.0
    lambda_Reg: float = 1.0
    alpha_range: tuple = (0.3, 0.7)
    use_rdrop: bool = True

    # Isotropic replay augmentations
    iso_noise_std: float = 0.01
    iso_clamp_min: float = 0.0
    iso_clamp_max: float = 1.0
    iso_p_flip: float = 0.5
    iso_flip_dim: int = 3
    iso_p_rotation: float = 0.5
    iso_max_rotation: int = 10
    iso_p_crop: float = 0.5
    iso_p_erase: float = 0.5

    # Dataset
    dataset: str = "MNIST"
    data_root: str = "./data"
    num_workers: int = 2

    # LFL
    lambda_lfl: float = 1.0
    feature_lambda: float = 1.0
    freeze_classifier: bool = True

    # JointTraining
    joint_lambda: float = 0.5

    # VanillaAT
    adv_lambda: float = 1.0

    # FeatExtraction
    feat_lambda: float = 1.0
    noise_std: float = 0.01

    # EWC
    lambda_ewc: float = 100.0

    # Multi-task or multi-attack scenario
    attack_sequence: tuple = ("FGSM", "PGD")
```
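The attack parameters above (`epsilon`, `alpha`, `num_steps`, `random_init`) follow the usual L-infinity PGD convention. As a point of reference, here is a minimal, generic PGD implementation using those parameters; the repository's own attack code may differ in detail:
```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, num_steps=10, random_init=True):
    """Generic L-infinity PGD sketch; not the repository's exact implementation."""
    x_adv = x.clone().detach()
    if random_init:
        # Start from a random point inside the epsilon ball.
        x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed gradient step, then projection back into the epsilon ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0.0, 1.0)
    return x_adv.detach()
```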
You can modify `attack_sequence` and `defense_method` as needed.
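Because `Config` is a dataclass, individual fields can also be overridden at construction time without editing the file:
```python
from utils.config import Config

# Train on PGD only, defending with EWC instead of AIR.
cfg = Config(defense_method="EWC", attack_sequence=("PGD",))
```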
---
## Results
After training, results are saved and displayed, including:
- **Clean Accuracy**: accuracy on the test set without any attack.
- **Robust Accuracy**: accuracy under adversarial attacks (e.g., PGD, FGSM); see the evaluation sketch below.
- **Losses**: both clean and adversarial losses.
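For concreteness, clean and robust accuracy can be measured with a loop like the following; this is a generic sketch using a one-step FGSM perturbation, not the repository's exact evaluation code:
```python
import torch
import torch.nn.functional as F

def evaluate(model, loader, device, epsilon=8/255):
    """Return (clean_accuracy, fgsm_robust_accuracy) over a test loader."""
    model.eval()
    clean_correct = robust_correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        with torch.no_grad():
            clean_correct += (model(x).argmax(1) == y).sum().item()
        # One-step FGSM perturbation for robust accuracy.
        x_adv = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x + epsilon * grad.sign()).clamp(0.0, 1.0)
        with torch.no_grad():
            robust_correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.size(0)
    return clean_correct / total, robust_correct / total
```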
---
## Dependencies
- **PyTorch**: For deep learning and model training.
- **torchvision**: For image processing and dataset utilities.
- **NumPy**: For numerical computations.
- **matplotlib**: For plotting training and evaluation results.
- **scikit-learn**: For t-SNE visualization (see the sketch below).
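For the t-SNE plots, scikit-learn's `TSNE` pairs with matplotlib roughly as follows (a generic sketch with placeholder data; the repository's plotting code may differ):
```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholder features and labels for illustration; in practice these would be
# penultimate-layer activations and class labels from the trained model.
features = np.random.randn(500, 64)
labels = np.random.randint(0, 10, size=500)

embedded = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)
plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, cmap="tab10", s=5)
plt.title("t-SNE of feature embeddings")
plt.savefig("tsne.png")
```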
---
## Author
- **Mina Moshfegh**