# Adversarial Defense Methods
This repository contains implementations of various adversarial defense methods, including **AIR**, **Vanilla Adversarial Training (AT)**, **LFL**, **EWC**, **Feat. Extraction** and **Joint Training**.
The code also allows training on multiple tasks (attacks) sequentially, with automatic defense method selection from the configuration file.
## Setup & Installation
### Prerequisites
To run this code, you’ll need to install the following dependencies:
1. **Python 3.x**
2. **PyTorch** (for training models)
3. **NumPy** (for numerical computations)
4. **torchvision** (for image transformations and datasets)
5. **Matplotlib** (for plotting and visualization)
6. **scikit-learn** (for t-SNE visualization)
### Step-by-step setup
1. **Clone the repository:**
First, clone the repository to your local machine:
```bash
git clone https://gitlab.cs.fau.de/ex55aveq/Defense-without-Forgetting-Continual-Adversarial-Defense-with-Anisotropic-and-Isotropic-Pseudo-Replay-Reproduction.git
cd Defense-without-Forgetting-Continual-Adversarial-Defense-with-Anisotropic-and-Isotropic-Pseudo-Replay-Reproduction/src
```
2. **Create and activate a virtual environment:**
It's recommended to use a virtual environment to isolate the dependencies.
```bash
python -m venv venv
source venv/bin/activate   # on Windows: venv\Scripts\activate
```
3. **Install dependencies:**
Install the necessary dependencies using `pip`:
```bash
pip install -r requirements.txt
```
4. **Configure the environment:**
Adjust the `Config` class in `utils/config.py` to specify your training parameters, dataset, and attack/defense methods.
---
## Usage Instructions
### Running the code
After setting up the environment, you can start training your model using the following command:
```bash
python main.py
```
This starts training on the dataset and attacks specified in the `Config` class, located in `utils/config.py`. Modify that file to change training parameters, attack methods, defense methods, and the dataset.
### Configuration Options
The configuration options can be found in `utils/config.py`. Some of the key parameters you can adjust include:
- **Dataset**: Choose from `MNIST`, `CIFAR10`, and `CIFAR100`.
- **Attack Methods**: Specify the sequence of attacks to train on (e.g., `FGSM`, `PGD`, or `None`).
- **Defense Methods**: Choose a defense method such as `AIR`, `LFL`, `JointTraining`, `VanillaAT`, `FeatExtraction`, or `EWC`.
- **Training Parameters**: Adjust `epochs`, `batch_size`, `learning_rate`, and other parameters as needed.
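As a concrete illustration of what the FGSM attack option computes, here is a minimal pure-Python sketch of the FGSM perturbation rule. The helper name `fgsm_perturb` is ours for illustration, not part of the repository:

```python
def fgsm_perturb(x, grad, epsilon):
    """FGSM step: x_adv = clip(x + epsilon * sign(grad), 0, 1)."""
    sign = lambda g: (g > 0) - (g < 0)  # -1, 0, or +1
    return [min(1.0, max(0.0, xi + epsilon * sign(gi))) for xi, gi in zip(x, grad)]

# Perturb three pixel values with the epsilon = 8/255 used in the example config.
x_adv = fgsm_perturb([0.2, 0.5, 0.99], [0.3, -1.2, 0.7], 8 / 255)
print(x_adv)
```

PGD applies the same signed-gradient step repeatedly (here `num_steps` times with step size `alpha`), projecting back into the epsilon-ball after each step.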
### Configuration Example
Below is an example of the configuration you might define in `config.py`:
```python
from dataclasses import dataclass


@dataclass
class Config:
    # General
    seed: int = 42
    device: str = "cuda"

    # Training
    epochs: int = 30
    batch_size: int = 128
    learning_rate: float = 0.1
    momentum: float = 0.9
    weight_decay: float = 5e-4

    # Attack params
    epsilon: float = 8 / 255
    alpha: float = 2 / 255
    num_steps: int = 10
    random_init: bool = True

    # Defense selection
    defense_method: str = "AIR"

    # AIR parameters
    lambda_SD: float = 1.0
    lambda_IR: float = 1.0
    lambda_AR: float = 1.0
    lambda_Reg: float = 1.0
    alpha_range: tuple = (0.3, 0.7)
    use_rdrop: bool = True

    # Isotropic replay augmentations
    iso_noise_std: float = 0.01
    iso_clamp_min: float = 0.0
    iso_clamp_max: float = 1.0
    iso_p_flip: float = 0.5
    iso_flip_dim: int = 3
    iso_p_rotation: float = 0.5
    iso_max_rotation: int = 10
    iso_p_crop: float = 0.5
    iso_p_erase: float = 0.5

    # Dataset
    dataset: str = "MNIST"
    data_root: str = "./data"
    num_workers: int = 2

    # LFL
    lambda_lfl: float = 1.0
    feature_lambda: float = 1.0
    freeze_classifier: bool = True

    # JointTraining
    joint_lambda: float = 0.5

    # VanillaAT
    adv_lambda: float = 1.0

    # FeatExtraction
    feat_lambda: float = 1.0
    noise_std: float = 0.01

    # EWC
    lambda_ewc: float = 100.0

    # Multi-task or multi-attack scenario
    attack_sequence: tuple = ("FGSM", "PGD")
```
You can modify `attack_sequence` and `defense_method` as per your needs.
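Because `Config` is a dataclass, individual fields can also be overridden at construction time rather than by editing the file. A sketch using a minimal excerpt of the fields shown above:

```python
from dataclasses import dataclass


@dataclass
class Config:
    # Minimal excerpt of the fields from the example above.
    defense_method: str = "AIR"
    attack_sequence: tuple = ("FGSM", "PGD")
    epochs: int = 30


# Override the defense and the attack sequence; untouched fields keep their defaults.
cfg = Config(defense_method="EWC", attack_sequence=("PGD",))
print(cfg.defense_method, cfg.attack_sequence, cfg.epochs)
```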
---
## Results
After training, the results are saved and displayed, including:
- **Clean Accuracy**: Accuracy on the test dataset without any attack.
- **Robust Accuracy**: Accuracy under adversarial attacks (e.g., PGD, FGSM).
- **Losses**: Both clean and adversarial losses.
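Clean and robust accuracy are both plain top-1 accuracy; they differ only in whether predictions come from clean or attacked inputs. A minimal sketch (the helper name `top1_accuracy` is ours):

```python
def top1_accuracy(preds, labels):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

labels = [0, 1, 2, 1]
clean_preds = [0, 1, 2, 0]   # predictions on unperturbed test inputs
robust_preds = [0, 2, 2, 0]  # predictions on adversarially perturbed inputs
print(top1_accuracy(clean_preds, labels))   # clean accuracy
print(top1_accuracy(robust_preds, labels))  # robust accuracy
```

Robust accuracy is typically lower than clean accuracy; the gap between the two measures how much the attack degrades the model.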
---
## Dependencies
- **PyTorch**: For deep learning and model training.
- **torchvision**: For image processing and dataset utilities.
- **NumPy**: For numerical computations.
- **matplotlib**: For plotting training and evaluation results.
- **scikit-learn**: For t-SNE visualization.
---
## Author
- **Mina Moshfegh**