Create a way to load encrypted models efficiently? #14289
elephantpanda started this conversation in Ideas / Feature Requests
-
I'd recommend using the API that loads a model from memory, CreateSessionFromArray (https://onnxruntime.ai/docs/api/c/struct_ort_api.html#a8c9d4f617fd806cd10da1fecd265dd8c). That way you can decrypt the model however you choose and just pass a pointer to the decrypted bytes; it doesn't need to be a file.
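For example, in Python (a minimal sketch: Fernet stands in for whatever cipher you actually use, and the key-loading line is illustrative; onnxruntime.InferenceSession does accept serialized model bytes in place of a file path):

```python
# Sketch: decrypt the model in memory and pass the bytes straight to ONNX Runtime.
# Fernet is only an example cipher; how you store and fetch the key is up to you.
import os
from cryptography.fernet import Fernet
import onnxruntime as ort

key = os.environ["MODEL_KEY"].encode()  # illustrative: load your real secret key
with open("model.onnx.encrypted", "rb") as f:
    model_bytes = Fernet(key).decrypt(f.read())

# The decrypted model never touches disk; InferenceSession takes bytes directly.
session = ort.InferenceSession(model_bytes)
```

From C or C++ the linked C API does the same thing: you own the decrypted buffer and hand CreateSessionFromArray its pointer and length.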
-
If I have a customised ONNX model, possibly trained on my own copyrighted images, I would like a way to encrypt it and then run the encrypted model.
Yes, I could use a third-party encryption tool, but for a very big model of, say, 1 GB, decrypting it would use up another 1 GB of RAM, and computers are already short of RAM when running these models.
What I would like is a function like EncryptedInferenceSession("model.onnx.encrypted", secret_key), which would load the encrypted model onto the GPU in a memory-saving way using a secret key.
Alternatively, a way to load the model bit by bit in manageable chunks of around 50 MB each, decrypting each one on the fly (sketched below).
Either of these would be a valid solution.
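A rough sketch of the chunked idea, assuming an AES-CTR stream cipher from the cryptography package (the function name, file format, key handling, and chunk size here are all illustrative, not an ONNX Runtime API): decrypting in place over one bytearray avoids holding ciphertext and plaintext side by side, though the full plaintext still ends up in RAM before the session is created.

```python
# Hypothetical chunked decryption (not an ONNX Runtime API): an AES-CTR stream
# cipher decrypts in place over a single buffer, so peak usage stays near 1x the
# model size instead of ciphertext + plaintext at once.
import onnxruntime as ort
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

CHUNK = 50 * 1024 * 1024  # 50 MB chunks, as suggested above

def load_encrypted_session(path: str, key: bytes, nonce: bytes) -> ort.InferenceSession:
    with open(path, "rb") as f:
        buf = bytearray(f.read())  # one buffer holding the ciphertext
    decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    view = memoryview(buf)
    for off in range(0, len(buf), CHUNK):
        # overwrite each chunk with its plaintext; CTR preserves length
        view[off:off + CHUNK] = decryptor.update(bytes(view[off:off + CHUNK]))
    # The Python binding wants bytes, which costs one extra copy here; the
    # C API's CreateSessionFromArray can take the buffer pointer directly.
    return ort.InferenceSession(bytes(buf))
```

This saves the second full-size copy, not the first: the decrypted weights still live in process memory while the session is built, which is the limitation CreateSessionFromArray shares.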