v3.3.0 #390
3.3.0 (2024-12-02)
Bug Fixes
- `llama.cpp` changes (#386) (97abbca)
- `compiler is out of heap space` CUDA build error (#386) (97abbca)

Features
- use the `llama.cpp` backend registry for GPUs instead of custom implementations (#386) (97abbca)
- `getLlama`: `build: "try"` option (#386) (97abbca)
- `init` command: `--model` flag (#386) (97abbca)
- JSON schema grammar: `prefixItems`, `minItems`, `maxItems` support (#388) (4d387de)
- JSON schema grammar: `additionalProperties`, `minProperties`, `maxProperties` support (#388) (4d387de)
- JSON schema grammar: `minLength`, `maxLength`, `format` support (#388) (4d387de)
- JSON schema grammar: `description` support (#388) (4d387de)

Shipped with
- `llama.cpp` release `b4234`

This discussion was created from the release v3.3.0.
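To illustrate the newly supported JSON schema keywords, a schema like the following sketch could now be enforced by a JSON schema grammar. This is a hypothetical example assembled for this note, not taken from the release; the exact set of accepted `format` values is an assumption.

```json
{
  "type": "object",
  "description": "A song recommendation (hypothetical example schema)",
  "properties": {
    "title": { "type": "string", "minLength": 1, "maxLength": 100 },
    "releaseDate": { "type": "string", "format": "date" },
    "tags": {
      "type": "array",
      "prefixItems": [{ "type": "string" }],
      "minItems": 1,
      "maxItems": 5
    }
  },
  "additionalProperties": false,
  "minProperties": 1,
  "maxProperties": 3
}
```

Every keyword used above (`description`, `minLength`, `maxLength`, `format`, `prefixItems`, `minItems`, `maxItems`, `additionalProperties`, `minProperties`, `maxProperties`) corresponds to one of the features listed for this release.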