Conversation

@danbev danbev (Member) commented Jul 27, 2025

This commit clarifies the comment in `llama-context.cpp` regarding the prefill prompt (pp) and token generation (tg) graphs.

The motivation for this is that I've struggled to remember these abbreviations and have had to look them up more than once, so I thought it would be helpful to add a comment that makes clear what they stand for.
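To make the two abbreviations concrete, here is a minimal, self-contained C++ sketch of the idea behind the two graphs. The `graph_reserve` helper and the `n_batch` value are illustrative assumptions for this example, not the actual `llama-context.cpp` code:

```cpp
#include <cstdio>

// Illustrative stand-in for the worst-case graph reservation in
// llama-context.cpp; the helper name and values are hypothetical.
static void graph_reserve(const char * which, int n_tokens) {
    std::printf("reserving worst-case %s graph for %d token(s)\n",
                which, n_tokens);
}

int main() {
    const int n_batch = 512; // assumed maximum batch size for this example

    // pp: prompt processing (also called "prefill") -- the prompt is
    // decoded in large batches, so the graph must cover a full batch.
    graph_reserve("pp", n_batch);

    // tg: token generation -- after the prompt, tokens are produced
    // one at a time, so the graph only needs a single token per step.
    graph_reserve("tg", 1);

    return 0;
}
```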

@ggerganov ggerganov (Member) left a comment


In this project "pp" more often means "prompt processing". Though I think it's interchangeable with "prefill".

@danbev danbev merged commit ca0ef2d into ggml-org:master Jul 27, 2025
1 check passed
