name: Bug (model use)
description: Something goes wrong when using a model (in general, not specific to a single llama.cpp module).
title: "Eval bug: "
labels: ["bug-unconfirmed", "model evaluation"]
body:
  - type: markdown
    attributes:
      value: >
        Thanks for taking the time to fill out this bug report!
        This issue template is intended for bug reports where the model evaluation results
        (i.e. the generated text) are incorrect or llama.cpp crashes during model evaluation.
        If you encountered the issue while using an external UI (e.g. ollama),
        please reproduce your issue using one of the examples/binaries in this repository.
        The `llama-cli` binary can be used for simple and reproducible model inference.
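  # Illustrative only (a comment, not part of the rendered form): a minimal,
  # reproducible `llama-cli` invocation might look like the line below; the
  # model path and prompt are placeholders, not files in this repository.
  #   ./llama-cli -m ./models/model.gguf -p "Hello" -n 64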
  - type: textarea
    id: version
    attributes:
      label: Name and Version
      description: Which version of our software are you running? (use `--version` to get a version string)
      placeholder: |
        $ ./llama-cli --version
        version: 2999 (42b4109e)
        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: Which operating systems do you know to be affected?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: true
  - type: dropdown
    id: backends
    attributes:
      label: GGML backends
      description: Which GGML backends do you know to be affected?
      options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
      multiple: true
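  # Illustrative only: the active GGML backend is normally chosen at compile
  # time. Assuming the current CMake flags from the llama.cpp build docs, a
  # CUDA build might be configured like this:
  #   cmake -B build -DGGML_CUDA=ON
  #   cmake --build build --config Release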
  - type: textarea
    id: hardware
    attributes:
      label: Hardware
      description: Which CPUs/GPUs are you using?
      placeholder: >
        e.g. Ryzen 5950X + 2x RTX 4090
    validations:
      required: true
  - type: textarea
    id: model
    attributes:
      label: Model
      description: >
        Which model at which quantization were you using when encountering the bug?
        If you downloaded a GGUF file from Hugging Face, please provide a link.
      placeholder: >
        e.g. Meta LLaMA 3.1 Instruct 8B q4_K_M
    validations:
      required: false
  - type: textarea
    id: steps_to_reproduce
    attributes:
      label: Steps to Reproduce
      description: >
        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
        that information would be very much appreciated.
      placeholder: >
        e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
        When I use -ngl 0 it works correctly.
        Here are the exact commands that I used: ...
    validations:
      required: true
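  # Illustrative only: a pair of commands a reporter might compare when
  # narrowing an issue down to GPU offloading (model path is a placeholder):
  #   ./llama-cli -m ./models/model.gguf -p "Hello" -n 64 -ngl 99
  #   ./llama-cli -m ./models/model.gguf -p "Hello" -n 64 -ngl 0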
  - type: textarea
    id: first_bad_commit
    attributes:
      label: First Bad Commit
      description: >
        If the bug was not present in an earlier version: when did it start appearing?
        If possible, please do a git bisect and identify the exact commit that introduced the bug.
    validations:
      required: false
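  # Illustrative only: a typical `git bisect` session for pinning down the
  # first bad commit (the known-good commit hash is a placeholder):
  #   git bisect start
  #   git bisect bad                      # current HEAD exhibits the bug
  #   git bisect good <known-good-commit>
  #   # rebuild and test at each step, then mark `git bisect good` or `bad`
  #   git bisect reset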
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: >
        Please copy and paste any relevant log output, including the command that you entered and any generated text.
        This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: true