
Conversation

@wwoodsTM (Contributor) commented Nov 6, 2024

Fixes #10176. The clone path now makes the correct call to llama_sampler_init_dry_impl and passes a dummy vocab parameter as well, similar to the DRY testing function.

I tested this with llama-speculative, and the clone process now runs successfully.
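
For readers unfamiliar with the DRY sampler internals, here is a minimal sketch of the pattern the PR describes: clone the sampler through the _impl initializer with a dummy vocab, then copy the already-processed state instead of re-deriving it. The types and names below (vocab, dry_sampler_state, init_dry_impl, clone_dry, the field names) are hypothetical stand-ins, not the actual llama.cpp code or the exact signature of llama_sampler_init_dry_impl.

```cpp
// Illustrative sketch only; real types live inside llama.cpp's sampling code.
#include <memory>
#include <string>
#include <vector>

struct vocab {};  // dummy vocab: only needed when tokenizing raw sequence breakers

struct dry_sampler_state {
    float multiplier     = 0.0f;
    float base           = 1.75f;
    int   allowed_length = 2;
    int   penalty_last_n = 0;
    // Already-processed breaker tokens; rebuilding these would require a real vocab.
    std::vector<std::vector<int>> processed_breakers;
    std::vector<int>              last_tokens;
};

// Stand-in for the _impl initializer: takes a vocab plus raw breaker strings.
std::unique_ptr<dry_sampler_state> init_dry_impl(
        const vocab & /*v*/, float multiplier, float base,
        int allowed_length, int penalty_last_n,
        const std::vector<std::string> & raw_breakers) {
    auto st = std::make_unique<dry_sampler_state>();
    st->multiplier     = multiplier;
    st->base           = base;
    st->allowed_length = allowed_length;
    st->penalty_last_n = penalty_last_n;
    // Raw breakers would be tokenized against the vocab here; the clone path
    // below passes an empty list, so the dummy vocab is never actually used.
    (void) raw_breakers;
    return st;
}

// The pattern described in the PR: clone without needing the real vocab.
std::unique_ptr<dry_sampler_state> clone_dry(const dry_sampler_state & src) {
    vocab dummy_vocab;  // safe because no raw breakers are passed
    auto dst = init_dry_impl(dummy_vocab, src.multiplier, src.base,
                             src.allowed_length, src.penalty_last_n,
                             /*raw_breakers=*/{});
    // Copy the state that would otherwise have required the vocab to rebuild.
    dst->processed_breakers = src.processed_breakers;
    dst->last_tokens        = src.last_tokens;
    return dst;
}
```

The key point the comment makes is that the dummy vocab is harmless: the vocab is only needed to process raw sequence breakers, and the clone copies the already-processed breakers instead, so that step is skipped entirely.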

@slaren merged commit 5107e8c into ggml-org:master on Nov 7, 2024
53 checks passed
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 15, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 18, 2024


Development

Successfully merging this pull request may close this issue:

Bug: Speculative Decoding "Segmentation fault (core dumped)"
