Commit 3bea2e6
fix tests
1 parent 468551e

4 files changed, +15 -14 lines
examples/server/server.cpp

Lines changed: 1 addition & 1 deletion

@@ -3249,7 +3249,7 @@ int main(int argc, char ** argv) {
 
     if (!params.api_keys.empty()) {
         // for now, if API key is set, web UI is unusable
-        svr->Get("/", [&](const httplib::Request & req, httplib::Response & res) {
+        svr->Get("/", [&](const httplib::Request &, httplib::Response & res) {
             return res.set_content("Web UI is disabled because API key is set.", "text/html; charset=utf-8");
         });
     } else {
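
The only change here is dropping the unused parameter name `req` from the lambda. An unnamed parameter keeps the handler's signature intact while telling the compiler the argument is intentionally unused, so -Wunused-parameter (fatal under -Werror) stays quiet. A minimal standalone sketch of the idiom (hypothetical function, not code from this commit):

#include <string>

// Unnamed parameter: the signature still takes a string, but the compiler
// knows the value is deliberately ignored and emits no unused-parameter warning.
static std::string respond(const std::string & /* request, unused */) {
    return "Web UI is disabled because API key is set.";
}

int main() {
    return respond("ignored").empty() ? 1 : 0;
}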

examples/server/tests/features/security.feature

Lines changed: 11 additions & 11 deletions

@@ -5,7 +5,7 @@ Feature: Security
   Background: Server startup with an api key defined
     Given a server listening on localhost:8080
     And a model file tinyllamas/stories260K.gguf from HF repo ggml-org/models
-    And a server api key llama.cpp
+    And a server api key THIS_IS_THE_KEY
     Then the server is starting
     Then the server is healthy
 
@@ -16,11 +16,11 @@ Feature: Security
     And a completion request with <api_error> api error
 
     Examples: Prompts
-      | api_key   | api_error |
-      | llama.cpp | no        |
-      | llama.cpp | no        |
-      | hackeme   | raised    |
-      |           | raised    |
+      | api_key         | api_error |
+      | THIS_IS_THE_KEY | no        |
+      | THIS_IS_THE_KEY | no        |
+      | hackeme         | raised    |
+      |                 | raised    |
 
   Scenario Outline: OAI Compatibility
     Given a system prompt test
@@ -32,10 +32,10 @@ Feature: Security
     Given an OAI compatible chat completions request with <api_error> api error
 
     Examples: Prompts
-      | api_key   | api_error |
-      | llama.cpp | no        |
-      | llama.cpp | no        |
-      | hackme    | raised    |
+      | api_key         | api_error |
+      | THIS_IS_THE_KEY | no        |
+      | THIS_IS_THE_KEY | no        |
+      | hackme          | raised    |
 
   Scenario Outline: OAI Compatibility (invalid response formats)
     Given a system prompt test
@@ -55,7 +55,7 @@ Feature: Security
 
 
   Scenario Outline: CORS Options
-    Given a user api key llama.cpp
+    Given a user api key THIS_IS_THE_KEY
     When an OPTIONS request is sent from <origin>
     Then CORS header <cors_header> is set to <cors_header_value>
 
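
These scenarios pin down the contract the server enforces: a request succeeds only when its bearer token matches the configured key (now THIS_IS_THE_KEY), while a wrong key (hackeme/hackme) or a missing one raises an API error. A hedged C++ sketch of that check, with illustrative names rather than the server's actual middleware:

#include <set>
#include <string>

// Accept a request only when its Authorization header carries one of the
// configured API keys; anything else maps to the "raised" rows above.
static bool is_authorized(const std::set<std::string> & api_keys,
                          const std::string & auth_header) {
    const std::string prefix = "Bearer ";
    if (auth_header.rfind(prefix, 0) != 0) {
        return false; // no bearer token at all (the empty api_key row)
    }
    return api_keys.count(auth_header.substr(prefix.size())) > 0;
}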

examples/server/tests/features/steps/steps.py

Lines changed: 2 additions & 1 deletion

@@ -1299,7 +1299,8 @@ async def wait_for_slots_status(context,
 
     async with aiohttp.ClientSession(timeout=DEFAULT_TIMEOUT_SECONDS) as session:
         while True:
-            async with await session.get(f'{base_url}/slots', params=params) as slots_response:
+            headers = {'Authorization': f'Bearer {context.server_api_key}'}
+            async with await session.get(f'{base_url}/slots', params=params, headers=headers) as slots_response:
                 status_code = slots_response.status
                 slots = await slots_response.json()
                 if context.debug:
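
The test helper now authenticates its /slots poll, since that endpoint rejects unauthenticated requests once an API key is configured. The same request, expressed client-side with the cpp-httplib library the server itself uses (a sketch under that assumption, not code from this commit):

#include <iostream>
#include "httplib.h" // cpp-httplib, the HTTP library used by examples/server

int main() {
    httplib::Client cli("localhost", 8080);
    // Without the bearer token a key-guarded endpoint answers 401;
    // THIS_IS_THE_KEY mirrors the key set up in the feature file above.
    httplib::Headers headers = {{"Authorization", "Bearer THIS_IS_THE_KEY"}};
    if (auto res = cli.Get("/slots", headers)) {
        std::cout << "status: " << res->status << "\n";
    }
    return 0;
}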

examples/server/utils.hpp

Lines changed: 1 addition & 1 deletion

@@ -90,7 +90,7 @@ inline std::string format_chat(const struct llama_model * model, const std::stri
     return formatted_chat;
 }
 
-std::string llama_get_chat_template(const struct llama_model * model) {
+static std::string llama_get_chat_template(const struct llama_model * model) {
     std::string template_key = "tokenizer.chat_template";
     // call with NULL buffer to get the total size of the string
     int32_t res = llama_model_meta_val_str(model, template_key.c_str(), NULL, 0);
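
Marking llama_get_chat_template as static gives it internal linkage. A non-static, non-inline function defined in a header produces one external definition in every translation unit that includes it, and the link step then fails with a multiple-definition error, which is presumably what this "fix tests" commit needed to resolve. A self-contained sketch of the failure mode, with hypothetical names:

#include <string>

// Imagine this definition living in a header included by two .cpp files.
// With `static`, each translation unit keeps its own private copy and the
// link succeeds; without it (and without `inline`), both units emit an
// external definition of the same symbol and the linker reports
// "multiple definition of get_template()".
static std::string get_template() {
    return "tokenizer.chat_template";
}

int main() {
    return get_template().empty();
}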
