This is the artifact evaluation repository of the USENIX Security '25 paper "FABLE: Batched Evaluation on Confidential Lookup Tables in 2PC".
The main implementation is at https://github.com/cmu-cryptosystems/FABLE, and the paper can be found at https://eprint.iacr.org/2025/1081.
This version is for functionality and reproducibility evaluation, where we provide
- The code for the FABLE protocol plus the two applications.
- The instructions to build and execute the protocol.
- The instructions to reproduce all experiments in the paper.
- The scripts to plot all figures and report the speedup of the FABLE protocol over the baselines.
This artifact is organized as two folders:
- The `FABLE` folder, which contains the code for FABLE (`FABLE/src`) and the unit tests (`FABLE/test`).
- The `FABLE-AE` folder, which contains the instructions and scripts to run the experiments in its root directory. `FABLE-AE/Crypten` contains the scripts to benchmark the Crypten baseline, and `FABLE-AE/ORAMs` contains the benchmark results for the ORAM baseline.
Please use Docker to install the dependencies. To do this, first build a Docker image with the Dockerfile in the `FABLE` folder, and then build a Docker image for AE using the Dockerfile in the `FABLE-AE` folder. Assuming you are in the root directory of the artifact (i.e., the parent directory of `FABLE-AE`), you can run:
```bash
sudo docker build ./FABLE -t fable:1.0
sudo docker build ./FABLE-AE -t fable-ae:1.0
```

The image `fable:1.0` contains all dependencies and a built FABLE in `/workspace/FABLE`. The image `fable-ae:1.0` contains some extra dependencies for reproduction.
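If you prefer a single step, the two builds can also be wrapped in a small helper script. This is only a convenience sketch reusing the commands above; the final `docker images` check is our own suggestion for verifying the result, not part of the artifact.

```bash
#!/usr/bin/env bash
# Convenience sketch: build both images from the artifact root, then verify they exist.
set -euo pipefail

sudo docker build ./FABLE -t fable:1.0
sudo docker build ./FABLE-AE -t fable-ae:1.0

# List the freshly built images (both repository names start with "fable").
sudo docker images | grep -E '^fable'
```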
To reproduce the main experiments, first create a container on two machines:

```bash
sudo docker run -it --net=host --cap-add=NET_ADMIN -v $PWD/FABLE-AE:/workspace/AE -w /workspace/AE fable-ae:1.0
```

Then run

```bash
bash main_experiments.sh 1 0.0.0.0
```

and

```bash
bash main_experiments.sh 2 $HOST
```

on the two machines, where `$HOST` is the IP address of the machine running the first command.
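For concreteness, here is what the two invocations might look like with a placeholder server IP of 10.0.0.1 (substitute the actual address of your first machine):

```bash
# On machine 1 (acts as the server for the benchmark):
bash main_experiments.sh 1 0.0.0.0

# On machine 2, pointing at machine 1 (10.0.0.1 is only a placeholder):
HOST=10.0.0.1
bash main_experiments.sh 2 $HOST
```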
The logs will be written to logs/main in FABLE-AE.
This step takes around 30 human-minutes, 10 compute-hours, and 1 GB of disk space.
To reproduce the splut baseline, run

```bash
bash baseline_splut.sh 1 0.0.0.0
```

and

```bash
bash baseline_splut.sh 2 $HOST
```

on the two machines, where `$HOST` is the IP address of the machine running the first command.
The logs will be written to logs/baseline in FABLE-AE.
This step only takes around half an hour to finish.
FLORAM and 2P-DUORAM are benchmarked using their official implementations. You can skip this step if you would like to use the results we provided in `ORAMs/floram.json` and `ORAMs/duoram.json`.
To reproduce them, exit the Docker container for FABLE-AE and build the ORAM Docker images with

```bash
sudo bash FABLE-AE/ORAMs/build-dockers.sh
```

Then run the experiments with

```bash
sudo bash FABLE-AE/ORAMs/repro-dockers.sh
```

This step takes around 30 human-minutes and 1.5 compute-hours.
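If you want a quick look at the provided (or reproduced) ORAM numbers, you can pretty-print the JSON files. This is an optional inspection step and assumes `python3` is available on the host:

```bash
# Pretty-print the ORAM baseline results shipped with the artifact.
python3 -m json.tool FABLE-AE/ORAMs/floram.json
python3 -m json.tool FABLE-AE/ORAMs/duoram.json
```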
Before creating the plots, we need to transfer the logs from the client to the server. To do this, launch

```bash
bash sync_log_serv.sh
```

on the client (the second machine), and run

```bash
bash sync_log_recv.sh $CLIENT
```

on the server (the first machine), where `$CLIENT` is the IP address of the client.
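If the provided sync scripts do not fit your environment, the transfer can also be done by hand. The sketch below is only an assumed alternative: it presumes SSH access from the server to the client and that the client's logs live under `logs/` inside its `FABLE-AE` checkout (the path and user name are placeholders).

```bash
# Hypothetical manual alternative to sync_log_serv.sh / sync_log_recv.sh.
# Run on the server (first machine); $CLIENT is the client's IP address.
scp -r "user@$CLIENT:/path/to/FABLE-AE/logs" ./
```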
To plot the figures, please make sure that you are inside the `fable-ae` container, as it contains the necessary plotting scripts. Then run

- `python3 microbench.py`, which will produce `FABLE-AE/plots/runtime_vs_threads.pdf` (Figure 4).
- `python3 draw.py`, which will produce
  - `FABLE-AE/plots/time_lutsize_network_clipped.pdf` (Figure 5)
  - `FABLE-AE/plots/24_time_bs.pdf` (Figure 6)
  - `FABLE-AE/plots/comm.pdf` (Figure 7)
  - `FABLE-AE/plots/time_lutsize_network.pdf` (Figure 8)

You can add `--doram-baseline` to `python3 draw.py` if you prefer to use the reproduced FLORAM and DUORAM numbers instead of the ones we provided; a combined sketch of a plotting session follows below.
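Put together, a typical plotting session inside the container might look like the following sketch (the `--doram-baseline` flag is optional, as noted above):

```bash
# Run inside the fable-ae container, from /workspace/AE.
python3 microbench.py             # Figure 4
python3 draw.py                   # Figures 5-8, using the provided DORAM numbers
# Alternatively, plot with your own reproduced FLORAM/DUORAM results:
python3 draw.py --doram-baseline
```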
You may also want to reproduce our tables (in markdown format) with `python3 breakdown_table.py` and `python3 breakdown_table.py --aes`. The first one gives the result with LowMC as the OPRF, and the second one gives the result with AES as the OPRF. Table 4 in the paper is produced by merging the two tables.
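A minimal sketch for generating both variants and keeping the output around for merging into Table 4 (the output file names are arbitrary choices, not produced by the artifact itself):

```bash
# LowMC as the OPRF (one half of Table 4):
python3 breakdown_table.py       | tee breakdown_lowmc.md
# AES as the OPRF (the other half of Table 4):
python3 breakdown_table.py --aes | tee breakdown_aes.md
```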
To reproduce the applications, run

```bash
bash applications.sh 1 0.0.0.0
```

and

```bash
bash applications.sh 2 $HOST
```

on the two machines, where `$HOST` is the IP address of the machine running the first command.
This step takes around 30 human-minutes and 1.5 compute-hour.
Afterwards, you can run `python3 read_app_speedup.py` to see the speedup for both applications.
Note: Similar to the DORAM baselines, we use a separate environment to benchmark the performance of the Crypten baseline. If you prefer to benchmark this baseline on your own, feel free to follow the instructions below:
To build the Docker image, run

```bash
sudo docker build FABLE-AE/Crypten -t crypten
```

Then launch the container with

```bash
sudo docker run -it --net=host --cap-add=NET_ADMIN -v $PWD/FABLE-AE:/workspace/AE -w /workspace/AE crypten
```

To reproduce the performance, run

```bash
bash Crypten/reproduce.sh 0 $HOST $CLIENT
```

and

```bash
bash Crypten/reproduce.sh 1 $HOST $CLIENT
```

on the host and the client respectively, where `$HOST` and `$CLIENT` are the IP addresses of the host (first machine) and the client (second machine).
This step takes around 30 human-minutes and 1.5 compute-hour.
Afterwards, you should be able to pass `--crypten-baseline` to `python3 read_app_speedup.py` to see the speedup with your benchmarked results.
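For example, the two ways of invoking the speedup script are:

```bash
# Use the Crypten numbers provided with the artifact (default):
python3 read_app_speedup.py
# Use the Crypten numbers you reproduced yourself:
python3 read_app_speedup.py --crypten-baseline
```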