
pyTFHE GPU Acceleration Library

The CUDA kernels and the NTT implementation are based on the cuFHE library (https://github.com/vernamlab/cuFHE).

Requirements

  • CUDA Toolkit (Tested on 13.0)
  • pybind11
  • python3-dev

Installation

First, clone this repository and cd into it:

git clone https://github.com/jiaaom/pyTFHE-CUDA.git
cd pyTFHE-CUDA

Then install with pip:

pip3 install .

Enjoy.

Usage

import pyTFHEGPU
# Initialize cuda device. Must be done before creating any TFHE objects.
pyTFHEGPU.init_cuda()

# These are the gate_types
NAND = 0
OR = 1
AND = 2
NOR = 3
XOR = 4
XNOR = 5
ANDNY = 6  # (NOT input1) AND input2
ANDYN = 7  # input1 AND (NOT input2)


t = pyTFHEGPU.BTGM()
# Create a new empty batch
b1 = pyTFHEGPU.Batch()

# ========= add each ciphertext =================
# - the 2nd argument is the ctxt_unique_id
#     (can be non-contiguous but must be unique)
# - the 3rd argument is the plaintext message (0 or 1)
t.add_ctxt(b1, 0, 0)
t.add_ctxt(b1, 1, 1)
t.add_ctxt(b1, 2, 0)

# ========= add each gate =================
# - the 2nd argument is the gate_unique_id
#     (can be non-contiguous but must be unique)
# - the 3rd argument is the gate_type
# - the 4th argument is the input1_unique_id
# - the 5th argument is the input2_unique_id
# - the 6th argument is the output_unique_id
#
# For example, this means Ctxt[2] = Ctxt[0] NAND Ctxt[1]
t.add_gate(b1, 0, NAND, 0, 1, 2)
# Feel free to add more gates

# Don't forget to build the dependency graph before evaluating
# - the 2nd argument toggles verbose output
t.build_dependency_graph(b1, False)

# Evaluate the graph
t.eval(b1, False)

# Now let's read the result
# - the 2nd argument is the ctxt_unique_id
print(t.get_ctxt_value(b1, 2))

Check the test directory for more examples.
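For reference, the eight gate types compute the following Boolean functions on the underlying plaintext bits. The `GATE_FUNCS` table below is a plain-Python illustration of those semantics, not part of pyTFHEGPU, which evaluates the same functions homomorphically on TFHE ciphertexts:

```python
# Plaintext reference semantics for the eight gate types.
# Illustration only; the library evaluates these on ciphertexts.
GATE_FUNCS = {
    0: lambda a, b: 1 - (a & b),   # NAND
    1: lambda a, b: a | b,         # OR
    2: lambda a, b: a & b,         # AND
    3: lambda a, b: 1 - (a | b),   # NOR
    4: lambda a, b: a ^ b,         # XOR
    5: lambda a, b: 1 - (a ^ b),   # XNOR
    6: lambda a, b: (1 - a) & b,   # ANDNY: (NOT input1) AND input2
    7: lambda a, b: a & (1 - b),   # ANDYN: input1 AND (NOT input2)
}

# The example circuit above, Ctxt[2] = Ctxt[0] NAND Ctxt[1],
# with plaintexts 0 and 1:
result = GATE_FUNCS[0](0, 1)
print(result)  # 1
```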

Benchmark

An RTX A5000 achieves 4122.82 gates/sec when there are no dependencies between the gates.
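Throughput drops when gates depend on each other, because a gate cannot be evaluated until the gates producing its inputs have finished, while independent gates can be bootstrapped in parallel. The sketch below is a hypothetical illustration of splitting a gate list into parallel waves by dependency level (`schedule_levels` is not the library's actual implementation):

```python
from collections import defaultdict

def schedule_levels(gates):
    """Group gates into waves that could run in parallel.

    gates: list of (gate_id, input1_unique_id, input2_unique_id,
    output_unique_id) tuples, mirroring the add_gate arguments.
    A gate depends on whichever gate produces one of its inputs.
    """
    producer = {out: gid for gid, _, _, out in gates}
    indeg = {}
    dependents = defaultdict(list)
    for gid, in1, in2, _ in gates:
        preds = {producer[x] for x in (in1, in2) if x in producer}
        indeg[gid] = len(preds)
        for p in preds:
            dependents[p].append(gid)
    waves = []
    ready = [g for g, d in indeg.items() if d == 0]
    while ready:
        waves.append(ready)
        nxt = []
        for g in ready:
            for d in dependents[g]:
                indeg[d] -= 1
                if indeg[d] == 0:
                    nxt.append(d)
        ready = nxt
    return waves

# Gate 1 consumes ctxt 2, which gate 0 produces; gate 2 is independent.
waves = schedule_levels([(0, 0, 1, 2), (1, 2, 3, 4), (2, 0, 3, 5)])
print(waves)  # [[0, 2], [1]]
```

Gates within one wave have no mutual dependencies, which is the best case the benchmark number above reflects.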
