This repository was archived by the owner on Jul 4, 2025. It is now read-only.

feat: use llama.cpp server #2498

Triggered via pull request: March 19, 2025 22:48
Status: Cancelled
Total duration: 40m 31s
Artifacts: 1

cortex-cpp-quality-gate.yml

on: pull_request
build-docker-and-test: 6m 50s
build-docker-and-test-target-pr: 0s
Matrix: build-and-test-target-pr
Matrix: build-and-test
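
The two matrix jobs fan the build out across OS/architecture/runner combinations, as the job name in the error annotation below suggests. A minimal sketch of what that strategy likely looks like is given here; the concrete matrix entries and the way the CMake flag is wired up are assumptions for illustration, not copied from cortex-cpp-quality-gate.yml.

```yaml
# Hypothetical sketch of the matrix fan-out suggested by the job names;
# the real definition lives in cortex-cpp-quality-gate.yml and may differ.
jobs:
  build-and-test:
    strategy:
      fail-fast: false
      matrix:
        include:
          - os: linux
            arch: arm64
            runner: ubuntu-2004-arm64
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      # The -DCORTEX_CPP_VERSION value in the failing job's name looks like
      # a commit SHA; passing github.sha here is an assumption.
      - name: Configure
        run: cmake -B build -DCORTEX_CPP_VERSION=${{ github.sha }}
```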

Annotations

9 errors
build-and-test (linux, arm64, ubuntu-2004-arm64, -DCORTEX_CPP_VERSION=a2886d2e24d03e9aceade50bedd...
The version '3.10' with architecture 'arm64' was not found for Ubuntu 20.04. The list of all available versions can be found here: https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json
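
This error comes from the Python setup step: actions/setup-python resolves interpreters from GitHub's versions manifest, which has no prebuilt Python 3.10 for Ubuntu 20.04 on arm64, so the step fails on the self-hosted ubuntu-2004-arm64 runner. A minimal sketch of the kind of step that triggers this, plus one possible fallback, is shown below; the step names and the fallback are assumptions, not taken from the actual workflow.

```yaml
# Hypothetical excerpt: a setup-python step like this fails on the
# ubuntu-2004-arm64 runner because the versions manifest has no arm64
# build of Python 3.10 for Ubuntu 20.04.
- name: Install Python 3.10
  uses: actions/setup-python@v5
  with:
    python-version: "3.10"

# One possible workaround (an assumption, not from the actual workflow):
# on the arm64 runner, install Python 3.10 from the deadsnakes PPA
# instead of relying on setup-python's prebuilt toolchains.
- name: Install Python 3.10 (arm64 fallback)
  if: runner.arch == 'ARM64'
  run: |
    sudo add-apt-repository -y ppa:deadsnakes/ppa
    sudo apt-get update
    sudo apt-get install -y python3.10 python3.10-venv
```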

Artifacts

Produced during runtime
Name: cortex-linux-amd64 (Expired)
Size: 46.8 MB
Digest: sha256:144af4abf53d011143bf3cc53ade063f497f90293b6e7c61e57b282485c2059e