Commit 8738297

Minor fixes to artifacts and fix conda env in experiments

1 parent e3f36a2 commit 8738297

File tree

7 files changed (+81, -31 lines)

.dockerignore

Lines changed: 2 additions & 1 deletion

@@ -7,4 +7,5 @@
 **/.vscode
 **.egg-info
 **/massif.out*
-*swp
+*swp
+**/.github

ARTIFACT-EVALUATION.md

Lines changed: 43 additions & 29 deletions
@@ -72,26 +72,32 @@ sudo apt-get -y install cudnn-cuda-12
 The project can also be built with Docker.
 For this, please first install Docker by following the official website: [https://docs.docker.com/engine/install/ubuntu/](https://docs.docker.com/engine/install/ubuntu/).
 
+[A Beginner’s Guide to NVIDIA Container Toolkit on Docker](https://medium.com/@u.mele.coding/a-beginners-guide-to-nvidia-container-toolkit-on-docker-92b645f92006) is a good reference for getting started with CUDA Docker containers. We describe the important steps below.
+
 In addition to the CUDA toolkit installed above, install the [Nvidia Container toolkit](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html) to pass through the GPU drivers to the container engine (Docker daemon). Please refer to the [official website](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installing-the-nvidia-container-toolkit) to download this toolkit and configure Docker to use it.
 Remember to restart the Docker daemon after installing the toolkit.
 
+Adding the Nvidia GPG key and repository:
 ```shell
-curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
-sudo apt-get update
-sudo apt-get install -y nvidia-container-toolkit
-sudo systemctl restart docker
+curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && \
+curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list |
+sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' |
+sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
 ```
 
-Test the correctness of the docker + cuda installation with the following docker container:
+Installing the toolkit and restarting Docker:
 ```shell
-docker run --rm --gpus all nvidia/cuda:12.3.2-devel-ubuntu22.04 nvcc --version
+sudo apt-get update && \
+sudo apt-get install -y nvidia-container-toolkit && \
+sudo nvidia-ctk runtime configure --runtime=docker && \
+sudo systemctl restart docker
 ```
-This should give you the CUDA version 12.3.
 
-Check that the GPU is detected within the container:
+Test the correctness of the docker + cuda installation with the following docker container and check that the GPU is detected within the container:
 ```shell
-docker run --rm --gpus all nvidia/cuda:12.3.2-devel-ubuntu22.04 nvidia-smi
+docker run --rm --gpus all nvidia/cuda:12.3.2-devel-ubuntu22.04 bash -c "nvcc --version && nvidia-smi"
 ```
+This should give you CUDA version 12.3, and your GPU should show up in the `nvidia-smi` output.
 
 
 ### Estimated Time and Storage Consumption
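The sed rewrite used above to inject the `signed-by` attribute into the Nvidia apt source list can be sanity-checked offline. A minimal sketch; the `sample` line is an illustrative stand-in, not the real contents of `nvidia-container-toolkit.list`:

```shell
# Illustrative check of the sed expression that rewrites the Nvidia apt
# source list; the sample line below is a stand-in for the downloaded file.
sample='deb https://nvidia.github.io/libnvidia-container/stable/deb/amd64 /'
rewritten=$(echo "$sample" | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g')
echo "$rewritten"
```

Using `#` as the sed delimiter avoids having to escape the slashes in the URL.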
@@ -113,8 +119,8 @@ When cloning directly from the Github repository, git-lfs is required to downloa
 Use `git lfs pull` to ensure large files are downloaded after cloning.
 
 ```shell
-git clone https://github.com/sacs-epfl/shatter.git
-git switch -c shatter-pets-2025
+git clone https://github.com/sacs-epfl/shatter.git && cd shatter && \
+git switch -c shatter-pets-2025 && \
 git lfs pull
 ```

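If `git lfs pull` is skipped, large files remain small text pointers rather than real data. A hedged way to spot that, assuming the Git LFS pointer format; `model.bin` below is a placeholder path, not a file from the repository:

```shell
# Git LFS pointer files begin with this spec line; a fully downloaded
# binary will not. "model.bin" is a placeholder for any LFS-tracked file.
is_lfs_pointer() {
    head -c 60 "$1" 2>/dev/null | grep -q '^version https://git-lfs'
}
if is_lfs_pointer model.bin; then
    echo "model.bin is still an LFS pointer: run 'git lfs pull'"
fi
```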
@@ -131,19 +137,19 @@ In `docker-build.sh`, update the ```TORCH_CUDA_ARCH_LIST``` with your microarchi
 ./docker-build.sh
 ```
 
-After the docker build completes, remember to check your installation of Nvidia container toolkit as described in Software Requirements above. The `nvidia-smi` and `nvcc --version` commands should succeed from within the container (See Requirements for Building with Docker section above).
+After the docker build completes, remember to check your installation of the Nvidia container toolkit as described in [Software Requirements](#software-requirements) above. The `nvidia-smi` and `nvcc --version` commands should succeed from within the container (see [Requirements for Building with Docker](#requirements-for-building-with-docker) above).
 
 To run the image, use the following command:
 ```shell
 ./docker-run.sh
 ```
-To run the prebuilt image, replace the target in `docker-run.sh` from ```shatter-artifacts``` to ```rishis8/shatter-artifact-pets2025```.
+To run the prebuilt image, replace the image name in `docker-run.sh` from ```shatter-artifacts``` to ```rishis8/shatter-artifact-pets2025:latest```.
 
 #### Setup without Docker
 It is important to install ```libgl1-mesa-glx```.
 
 ```shell
-sudo apt-get update && sudo apt-get install libgl1-mesa-glx
+sudo apt-get update && sudo apt-get -y install libgl1-mesa-glx
 ```
 
 If not using docker, set ```$SHATTER_HOME``` to the root of the `shatter` repository.
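Rather than editing `docker-run.sh` by hand, the image name can be swapped with sed. A sketch of the substitution, applied to the run line as a plain string for illustration (the exact line in your `docker-run.sh` may differ):

```shell
# Sketch: swap the locally built image name for the prebuilt one.
line='docker run --gpus all -it shatter-artifacts'
updated=$(echo "$line" | sed 's#shatter-artifacts#rishis8/shatter-artifact-pets2025:latest#g')
echo "$updated"
```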
@@ -156,16 +162,7 @@ Then set up the environment with the available script:
 ```
 
 ### Testing the Environment (Only for Functional and Reproduced badges)
-When using Docker, check the Host and Container are working correctly with GPUs:
-```shell
-docker run --rm --gpus all nvidia/cuda:12.3.2-devel-ubuntu22.04 nvcc --version
-```
-This should give you the CUDA version 12.3.
-
-Check that the GPU is detected within the container:
-```shell
-docker run --rm --gpus all nvidia/cuda:12.3.2-devel-ubuntu22.04 nvidia-smi
-```
+When using Docker, check that the Host and Container are working correctly with GPUs as described in [Requirements for Building with Docker](#requirements-for-building-with-docker) of this file.
 
 Finally, use the `testing-script.sh` to see if everything is correct:
 ```shell
@@ -209,18 +206,35 @@ Sections 6.2 and 6.3 demonstrate this.
 ### Experiments
 
 #### Experiment 1: Gradient-inversion attack
-- Run `$SHATTER_HOME/artifact_scripts/gradientInversion/rog/run.sh`. This should take ~15 minutes and about 30 MBs of space because of reconstructed images.
-- Reconstructed images per client, aggregated data CSVs and bar plots are generated in `$SHATTER_HOME/artifact_scripts/gradientInversion/rog/experiments/lenet`.
+For Experiment 1, run the following command:
+```shell
+$SHATTER_HOME/artifact_scripts/gradientInversion/rog/run.sh
+```
+This should take ~15 minutes and about 30 MB of space because of the reconstructed images.
+Reconstructed images per client, aggregated data CSVs and bar plots are generated in `$SHATTER_HOME/artifact_scripts/gradientInversion/rog/experiments/lenet`.
+
+Some additional details:
 - VNodes{k} is Shatter with k virtual nodes.
 - The reconstructed images and lpips scores can be compared to Figures 2 and 8. Furthermore, lpips_bar_plot.png is analogous to Figure 7(d). You can ignore other metrics like `snr` and `ssim`. LPIPS will not exactly match the numbers in the paper since only 1 client was attacked, as opposed to 100 in the experiments in the paper.
 - We recommend clearing up `artifact_scripts/gradientInversion/rog/experiments/lenet` before running other experiments to save disk space.
+- If you get a `ModuleNotFoundError`, verify the conda environment `venv` is active and that you followed the steps in the [Setting up the Environment section](#set-up-the-environment-only-for-functional-and-reproduced-badges).
 
 #### Experiment 2: Convergence, MIA and LA
-- These experiments are smaller scale versions of the other experiments in the paper since the full-scale experiments take very long and need to be run across 25 machines.
-- Easiest way is to execute `$SHATTER_HOME/artifact_scripts/small_scale/run_all`. This runs the experiments for all the datasets in one go. To do this step by step, one can also individually run the scripts for each dataset in `$SHATTER_HOME/artifact_scripts/small_scale`. Experiments with CIFAR-10 and Movielens datasets should take ~1.5 hour and ~200MBs in disk space each. Twitter dataset experiments take a bit longer and can take ~2.5 hours and ~200 MBs. In total `run_all` should run in ~5.5 hours and ~600MBs of disk space.
-- Inside `$SHATTER_HOME/artifact_scripts/small_scale/CIFAR10`, the aggregated CSVs for each baseline can be found: `*test_acc.csv` (Figure 3, 5, 7 all except Movielens), `*test_loss.csv` (Figure 3, 5, 7 Movielens), `*clients_linkability.csv` (Figure 6), `*clients_MIA.csv` (Figure 6), `*iterations_linkability.csv` (Partially Figure 7c), and `*iterations_MIA.csv` (Figure 5). PDFs for the plots with all baselines together (not exactly the ones in the paper, but same figures as the CSVs) are also created in the same folders. Since these are smaller scale experiments, the values will not match the ones in the paper.
+These experiments are smaller-scale versions of the other experiments in the paper, since the full-scale experiments take very long and need to be run across 25 machines. To run Experiment 2, execute the following command:
+```shell
+$SHATTER_HOME/artifact_scripts/small_scale/run_all.sh
+```
+This runs the experiments for all the datasets in one go.
+
+To do this step by step, one can also individually run the scripts for each dataset in `$SHATTER_HOME/artifact_scripts/small_scale`.
+
+Experiments with the CIFAR-10 and Movielens datasets should take ~1.5 hours and ~200 MB of disk space each. Twitter dataset experiments take a bit longer: ~2.5 hours and ~200 MB. In total, `run_all.sh` should run in ~5.5 hours and use ~600 MB of disk space.
+Inside `$SHATTER_HOME/artifact_scripts/small_scale/CIFAR10`, the aggregated CSVs for each baseline can be found: `*test_acc.csv` (Figures 3, 5, 7, all except Movielens), `*test_loss.csv` (Figures 3, 5, 7, Movielens), `*clients_linkability.csv` (Figure 6), `*clients_MIA.csv` (Figure 6), `*iterations_linkability.csv` (partially Figure 7c), and `*iterations_MIA.csv` (Figure 5). PDFs for the plots with all baselines together (not exactly the ones in the paper, but the same figures as the CSVs) are also created in the same folders. Since these are smaller-scale experiments, the values will not match the ones in the paper.
+
+Things to watch out for:
 - If CUDA OOM is encountered, try lowering the `test_batch_size` and `batch_size` in `config*.ini` within each dataset and baseline folder. One such `config` file is `$SHATTER_HOME/artifact_scripts/small_scale/CIFAR10/EL/config_EL.ini`.
 - If the experiments look like they are in a deadlock, check the corresponding log files in the running dataset/baseline. If nothing has been logged for some time and it does not say that the experiment has been completed, check the CPU utilization and DRAM usage. It is likely a DRAM out-of-memory problem, as the experiments can take up a lot of DRAM. If a larger machine is unavailable, try disabling (commenting out) the `Muffliato` experiments in the run scripts.
+- If you get a `ModuleNotFoundError`, verify the conda environment `venv` is active and that you followed the steps in the [Setting up the Environment section](#set-up-the-environment-only-for-functional-and-reproduced-badges).
 
 #### Copying results back from Docker
 We provided `docker-copy-exp-1.sh` and `docker-copy-exp-2.sh` to copy the results from the docker containers to the subfolders.
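The quoted run-time and disk budgets add up as stated; a back-of-envelope sketch using the per-dataset figures from the text above:

```shell
# Per-dataset estimates quoted above: CIFAR-10 ~1.5 h, Movielens ~1.5 h,
# Twitter ~2.5 h, at roughly 200 MB of results each.
total_hours=$(awk 'BEGIN { print 1.5 + 1.5 + 2.5 }')
total_mb=$((200 * 3))
echo "run_all.sh total: ~${total_hours} hours, ~${total_mb} MB"
```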

artifact_scripts/gradientInversion/rog/run.sh

Lines changed: 11 additions & 0 deletions

@@ -1,5 +1,16 @@
 #!/bin/bash
 
+set -euxo pipefail
+
+# Check if the 'conda' command is available
+if ! command -v conda &> /dev/null; then
+    echo "Activating Conda"
+    source ${CONDA_PREFIX}/bin/activate
+fi
+
+conda activate venv
+
+
 num_clients=1
 
 # Compute the results
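The guard added to these scripts uses `command -v` to detect whether `conda` is already on `PATH`. The same pattern works for any tool; a generic sketch (the fallback message is illustrative, and note that under `set -u` an unset `CONDA_PREFIX` would abort the real script):

```shell
# Generic availability check, same idea as the conda guard in the run scripts:
# succeed only if the named command resolves on PATH.
have() {
    command -v "$1" > /dev/null 2>&1
}
if ! have conda; then
    echo "conda not on PATH; the scripts source \${CONDA_PREFIX}/bin/activate first"
fi
```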

artifact_scripts/small_scale/run_CIFAR10.sh

Lines changed: 8 additions & 0 deletions

@@ -2,6 +2,14 @@
 
 set -euxo pipefail
 
+# Check if the 'conda' command is available
+if ! command -v conda &> /dev/null; then
+    echo "Activating Conda"
+    source ${CONDA_PREFIX}/bin/activate
+fi
+
+conda activate venv
+
 echo "Computing EL on CIFAR10"
 cd $SHATTER_HOME/artifact_scripts/small_scale/CIFAR10
 $SHATTER_HOME/eval/run_helper.sh 8 51 $(pwd)/config_EL.ini $SHATTER_HOME/eval/testingSimulation.py 10 10 $SHATTER_HOME/eval/data/CIFAR10 $SHATTER_HOME/eval/data/CIFAR10

artifact_scripts/small_scale/run_Movielens.sh

Lines changed: 8 additions & 0 deletions

@@ -2,6 +2,14 @@
 
 set -euxo pipefail
 
+# Check if the 'conda' command is available
+if ! command -v conda &> /dev/null; then
+    echo "Activating Conda"
+    source ${CONDA_PREFIX}/bin/activate
+fi
+
+conda activate venv
+
 echo Computing EL on Movielens
 cd $SHATTER_HOME/artifact_scripts/small_scale/Movielens
 $SHATTER_HOME/eval/run_helper.sh 8 501 $(pwd)/config_EL.ini $SHATTER_HOME/eval/testingSimulation.py 100 100 $SHATTER_HOME/eval/data/movielens $SHATTER_HOME/eval/data/movielens

artifact_scripts/small_scale/run_Twitter.sh

Lines changed: 8 additions & 0 deletions

@@ -2,6 +2,14 @@
 
 set -euxo pipefail
 
+# Check if the 'conda' command is available
+if ! command -v conda &> /dev/null; then
+    echo "Activating Conda"
+    source ${CONDA_PREFIX}/bin/activate
+fi
+
+conda activate venv
+
 echo Computing EL on Twitter
 cd $SHATTER_HOME/artifact_scripts/small_scale/Twitter
 $SHATTER_HOME/eval/run_helper.sh 4 51 $(pwd)/config_EL.ini $SHATTER_HOME/eval/testingSimulation.py 10 10 $SHATTER_HOME/eval/data/sent140/train $SHATTER_HOME/eval/data/sent140/test

docker-run.sh

Lines changed: 1 addition & 1 deletion

@@ -1,3 +1,3 @@
 #!/bin/bash
 
-docker run --gpus all -it shatter-artifacts
+docker run --gpus all -it --name shatter-artifacts shatter-artifacts
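Note that `docker run` treats everything after the image name as the command to run inside the container, so `--name` must precede the image for Docker itself to see it. A small sketch of that ordering, parsed as a plain string (no Docker needed):

```shell
# In "docker run [OPTIONS] IMAGE [COMMAND]", options must come before IMAGE;
# here --name precedes the image, so Docker names the container.
cmd='docker run --gpus all -it --name shatter-artifacts shatter-artifacts'
container_name=$(echo "$cmd" | awk '{ for (i = 1; i < NF; i++) if ($i == "--name") print $(i + 1) }')
echo "container will be named: $container_name"
```

Naming the container also makes the copy scripts' `docker cp` step easier to target.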
