
Conversation

@ggerganov
Member

Minor QoL improvement: sometimes it can be confusing why it takes so long to start processing. We now print a message explaining that the model data is being loaded.

...
0.00.189.951 I print_info: EOG token        = 128001 '<|end_of_text|>'
0.00.189.951 I print_info: EOG token        = 128009 '<|eot_id|>'
0.00.189.952 I print_info: max token length = 256
0.00.189.952 I load_tensors: loading model tensors, this can take a while... (mmap = true)
...
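For reference, a minimal sketch of the kind of notice added here, printed just before the potentially slow tensor load. The actual change goes through llama.cpp's own logging facility inside load_tensors; the function name, the use_mmap parameter, and the plain stderr output below are illustrative assumptions to keep the example self-contained, not the exact diff.

```cpp
// Sketch only (not the exact llama.cpp code): warn the user before the
// potentially slow tensor load so startup does not appear to stall silently.
#include <cstdio>

static void print_load_notice(bool use_mmap) {
    // llama.cpp routes this through its logging macros; fprintf is used here
    // only so the sketch compiles on its own.
    fprintf(stderr,
            "load_tensors: loading model tensors, this can take a while... (mmap = %s)\n",
            use_mmap ? "true" : "false");
}

int main() {
    print_load_notice(/*use_mmap=*/true);
    return 0;
}
```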

@ggerganov ggerganov merged commit 9dd7a03 into master Feb 6, 2025
46 checks passed
ggerganov added a commit that referenced this pull request Feb 8, 2025
tinglou pushed a commit to tinglou/llama.cpp that referenced this pull request Feb 13, 2025
orca-zhang pushed a commit to orca-zhang/llama.cpp that referenced this pull request Feb 26, 2025
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Feb 26, 2025
mglambda pushed a commit to mglambda/llama.cpp that referenced this pull request Mar 8, 2025
