Conversation

@paulbauriegel

Adds the official Apple CoreML model for inference. If you have an Apple Silicon MacBook, you can run the biggest model in about a second, so the results are both better and faster than via ONNX.
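For reviewers, a minimal sketch of how the Core ML model could be loaded and queried with coremltools; the file name, input/output keys, and input shape are assumptions for illustration, not the exact code in this PR:

```python
import coremltools as ct
import numpy as np

# Load the converted Core ML model package (path is a placeholder).
# compute_units=ALL lets Core ML dispatch to the Neural Engine / GPU on Apple Silicon.
model = ct.models.MLModel("Model.mlpackage", compute_units=ct.ComputeUnit.ALL)

# Run a single prediction; input name and shape are hypothetical and must
# match the model's actual input description (see model.get_spec()).
dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)
outputs = model.predict({"input": dummy_input})

print(outputs.keys())  # output names depend on the converted model
```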

This is a draft; let me know what you think and whether there is anything you would like improved.
