Conversation

@ngxson ngxson (Collaborator) commented Oct 20, 2024

Related to discussion #9949 (comment)

Fixes a segfault in llama_batch_allocr when processing an empty batch.
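For context, a minimal sketch of the suspected failure mode, with simplified types in place of the real llama_batch fields; this is illustrative, not the exact upstream code:

```cpp
#include <cstdint>
#include <vector>

// llama_batch_allocr fills in batch fields the caller omitted. If it
// synthesizes a logits array sized to n_tokens and then marks the last
// entry, an empty batch indexes out of bounds.
struct batch_allocr_sketch {
    std::vector<int8_t> logits;

    explicit batch_allocr_sketch(int32_t n_tokens, const int8_t * batch_logits) {
        if (batch_logits == nullptr) {
            logits.resize(n_tokens);
            // with n_tokens == 0 this reads logits[SIZE_MAX]: undefined
            // behavior, observed as a segfault
            logits[logits.size() - 1] = 1;
        }
    }
};
```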


@ngxson ngxson requested review from ggerganov and slaren October 20, 2024 22:13
@ngxson ngxson changed the title from "llama : fix empty batch cause llama_batch_allocr to crash" to "llama : fix empty batch causing llama_batch_allocr to crash" Oct 20, 2024
src/llama.cpp Outdated
Comment on lines 21142 to 21145
if (batch.n_tokens == 0) {
// llama_(de|en)code_internal will return an error in this case
return;
}
Member

My experience is that constructors should be as simple as possible and should avoid branches where possible, since branches can leave the object in a partially initialized state with no way to handle it. I suggest replacing this block with an assert() and moving llama_batch_allocr from llama_encode/llama_decode into llama_..._internal, after the empty batch check.
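A minimal sketch of the suggested shape; the function names and signatures here are simplified assumptions (the real code lives in src/llama.cpp, and llama_batch, llama_context, GGML_ASSERT, and LLAMA_LOG_ERROR are assumed in scope from the project's headers):

```cpp
// The empty-batch check happens in the internal entry point before the
// allocator is constructed; the constructor itself only asserts.
struct llama_batch_allocr {
    explicit llama_batch_allocr(const llama_batch & batch) {
        GGML_ASSERT(batch.n_tokens >= 1); // callers reject empty batches first
        // ... fill in pos/seq_id/logits defaults ...
    }
};

static int llama_decode_internal(llama_context & lctx, llama_batch inp_batch) {
    if (inp_batch.n_tokens == 0) {
        LLAMA_LOG_ERROR("%s: n_tokens == 0\n", __func__);
        return -1;
    }
    // the allocator is constructed only after the empty-batch check,
    // so its constructor never sees n_tokens == 0
    llama_batch_allocr batch_allocr(inp_batch);
    // ... decode using the normalized batch ...
    return 0;
}
```

This keeps the constructor branch-free, so a llama_batch_allocr object is either fully initialized or never constructed at all.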

@ngxson ngxson requested a review from ggerganov October 22, 2024 11:15
@ngxson ngxson merged commit c8c07d6 into ggml-org:master Oct 22, 2024
53 checks passed
dsx1986 pushed a commit to dsx1986/llama.cpp that referenced this pull request Oct 29, 2024
…#9966)

* llama : fix empty batch cause llama_batch_allocr to crash

* move batch_allocr inside decode/encode_internal

* fix build

* add GGML_ASSERT

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <[email protected]>

---------

Co-authored-by: Georgi Gerganov <[email protected]>
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 15, 2024
…#9966)

* llama : fix empty batch cause llama_batch_allocr to crash

* move batch_allocr inside decode/encode_internal

* fix build

* add GGML_ASSERT

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <[email protected]>

---------

Co-authored-by: Georgi Gerganov <[email protected]>