@@ -10,11 +10,10 @@ Feature: llama.cpp server
# KV Cache corresponds to the total amount of tokens
# that can be stored across all independent sequences: #4130
# see --ctx-size and #5568
- And 32 KV cache size
- And 512 as batch size
- And 1 slots
- And embeddings extraction
- And 32 server max tokens to predict
+ And 256 KV cache size
+ And 32 as batch size
+ And 2 slots
+ And 64 server max tokens to predict
And prometheus compatible metrics exposed
Then the server is starting
Then the server is healthy
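For context, these background steps presumably map one-to-one onto llama.cpp server flags (the comment above already points at `--ctx-size`). A minimal sketch of the equivalent launch, assuming that step-to-flag mapping; the model path is a placeholder:

```python
# Hypothetical launch equivalent to the Gherkin background above.
# The step-to-flag mapping is an assumption; the model path is a placeholder.
import subprocess

server = subprocess.Popen([
    "./server",
    "--model", "model.gguf",   # placeholder model path
    "--ctx-size", "256",       # "And 256 KV cache size" (total, across slots)
    "--batch-size", "32",      # "And 32 as batch size"
    "--parallel", "2",         # "And 2 slots"
    "--n-predict", "64",       # "And 64 server max tokens to predict"
    "--metrics",               # "And prometheus compatible metrics exposed"
])
```

Since the KV cache is shared across all independent sequences, 2 slots over a 256-token cache leave each sequence roughly 128 tokens of context, which is what the truncation scenarios below rely on.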
@@ -23,18 +22,35 @@ Feature: llama.cpp server
Then the server is ready
And all slots are idle

+
Scenario Outline: Completion
Given a prompt <prompt>
And <n_predict> max tokens to predict
And a completion request with no api error
Then <n_predicted> tokens are predicted matching <re_content>
+ And the completion is <truncated> truncated
+ And <n_prompt> prompt tokens are processed
And prometheus metrics are exposed
And metric llamacpp:tokens_predicted is <n_predicted>

Examples: Prompts
- | prompt | n_predict | re_content | n_predicted |
- | I believe the meaning of life is | 8 | (read\|going)+ | 8 |
- | Write a joke about AI | 64 | (park\|friends\|scared\|always)+ | 32 |
+ | prompt | n_predict | re_content | n_prompt | n_predicted | truncated |
+ | I believe the meaning of life is | 8 | (read\|going)+ | 18 | 8 | not |
+ | Write a joke about AI from a very long prompt which will not be truncated | 256 | (princesses\|everyone\|kids)+ | 46 | 64 | not |
+
+ Scenario: Completion prompt truncated
+ Given a prompt:
+ """
+ Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
+ Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
+ Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
+ Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
+ """
+ And a completion request with no api error
+ Then 64 tokens are predicted matching fun|Annaks|popcorns
+ And the completion is truncated
+ And 109 prompt tokens are processed
+

Scenario Outline: OAI Compatibility
Given a model <model>
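The `<truncated>` and `<n_prompt>` placeholders added in both scenario outlines back assertions along the lines of the sketch below. This assumes the server's `/completion` JSON response exposes the `truncated` and `tokens_evaluated` fields; the helper name is illustrative, not the harness's actual step definition. A `-1` in the `n_prompt` column (as in the codellama row below) presumably means the check is skipped:

```python
# Illustrative check behind "the completion is <truncated> truncated" and
# "<n_prompt> prompt tokens are processed". Field names assume the server's
# /completion JSON response; the helper itself is hypothetical.
import requests

def check_completion(base_url, prompt, n_predict, expect_truncated, expect_n_prompt):
    resp = requests.post(f"{base_url}/completion",
                         json={"prompt": prompt, "n_predict": n_predict}).json()
    assert resp["truncated"] == expect_truncated
    if expect_n_prompt >= 0:  # -1: don't check the prompt token count
        assert resp["tokens_evaluated"] == expect_n_prompt
```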
@@ -44,11 +60,14 @@ Feature: llama.cpp server
And streaming is <enable_streaming>
Given an OAI compatible chat completions request with no api error
Then <n_predicted> tokens are predicted matching <re_content>
+ And <n_prompt> prompt tokens are processed
+ And the completion is <truncated> truncated

Examples: Prompts
- | model | system_prompt | user_prompt | max_tokens | re_content | n_predicted | enable_streaming |
- | llama-2 | Book | What is the best book | 8 | (Mom\|what)+ | 8 | disabled |
- | codellama70b | You are a coding assistant. | Write the fibonacci function in c++. | 64 | (thanks\|happy\|bird)+ | 32 | enabled |
+ | model | system_prompt | user_prompt | max_tokens | re_content | n_prompt | n_predicted | enable_streaming | truncated |
+ | llama-2 | Book | What is the best book | 8 | (Here\|what)+ | 77 | 8 | disabled | not |
+ | codellama70b | You are a coding assistant. | Write the fibonacci function in c++. | 128 | (thanks\|happy\|bird)+ | -1 | 64 | enabled | |
+

Scenario: Tokenize / Detokenize
When tokenizing:
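The tokenize/detokenize scenario presumably round-trips text through the server's `/tokenize` and `/detokenize` endpoints. A sketch under that assumption; the sample text and base URL are placeholders, and whitespace is compared loosely since detokenization may reintroduce a leading space:

```python
# Sketch of a tokenize/detokenize round trip against the test server.
# The base URL and sample text are placeholders.
import requests

base_url = "http://localhost:8080"
text = "What is the capital of France ?"  # placeholder sample

tokens = requests.post(f"{base_url}/tokenize",
                       json={"content": text}).json()["tokens"]
content = requests.post(f"{base_url}/detokenize",
                        json={"tokens": tokens}).json()["content"]
assert content.strip() == text.strip()
```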