Added optional LoRA adapter support for vLLM inference. #88
Workflow: docs.yml (on: pull_request)

| Job | Duration |
|---|---|
| build | 2m 57s |
| preview | 0s |
| deploy | 0s |
Artifacts

| Name | Size | Digest |
|---|---|---|
| docs-html | 250 KB | sha256:2f4945f342fe23dfc3ea81bff0eec0010be21ac5713737b8b6597c96e517121d |