1 parent b3e8a5e commit 052c968
repoqa.html
@@ -220,8 +220,8 @@ <h2 id="task-snf" class="text-nowrap mt-5">
 <h3 class="text-nowrap mt-5">🏆 Benchmark @ 16K Code Context</h3>
 <p>
   🛠️ <b>Config:</b> The code in the prompt is fixed to 16K tokens (by
-  DeepSeekCoder tokenizer). Yet, the required context is a bit larger
-  than 16K so we extend 8K and 16K models using either
+  CodeLlama tokenizer). Yet, the required context is a bit larger than
+  16K so we extend 8K and 16K models using either
   <a
     href="https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/"
     >Dynamic RoPE Scaling</a
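For context, Dynamic RoPE Scaling is the NTK-aware trick the page links to: the rotary-embedding base is rescaled on the fly as the input grows past the model's native window, so an 8K or 16K model can attend over a longer prompt. A minimal sketch of how one might enable it with Hugging Face transformers follows; the model id and the scaling factor of 2.0 are illustrative assumptions, not the benchmark's actual harness configuration.

```python
# Minimal sketch: loading a model with dynamic RoPE scaling enabled
# (assumed setup; RepoQA's own evaluation harness may differ).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"  # hypothetical model choice
tokenizer = AutoTokenizer.from_pretrained(model_id)

# "dynamic" scaling grows the RoPE base as the sequence length exceeds
# the native context window; factor=2.0 roughly doubles usable context.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    rope_scaling={"type": "dynamic", "factor": 2.0},
)
```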