
Commit 64a2393

Author: vmpuri (committed)
List-type message content parsing in LLaMA 2 Chat Formatter
1 parent d494aa0 commit 64a2393

2 files changed: +6 −2 lines changed

torchchat/generate.py

Lines changed: 5 additions & 1 deletion
@@ -103,7 +103,11 @@ def encode_dialog_prompt(self, dialog) -> List[int]:
         tokens = self.tokenizer.encode(f"{B_INST} ")
         first_message = True  # Bool to handle placing the B_INST token. Behavior is weird - the system prompt should have the B_INST, but not the first user message. All following user messages *should* have it. Also, if there is no system prompt, then the user message should have it.
         for message in dialog:
-            content = message["content"].strip()
+            if isinstance(message["content"], list):
+                content = message["content"][0]["text"]
+            else:
+                content = message["content"]
+            content = content.strip()
             if message["role"] == "system":
                 encoded = self.tokenizer.encode(f"{B_SYS}\n{content}\n{E_SYS}")
                 first_message = False
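
The branch above lets the LLaMA 2 chat formatter accept both plain-string content and OpenAI-style list-of-parts content. Below is a minimal, standalone sketch of that logic; the helper name and sample messages are illustrative, not part of the torchchat code.

```python
# Standalone sketch (illustrative helper, not torchchat code).
# A message's "content" may be a plain string or an OpenAI-style list of parts,
# e.g. [{"type": "text", "text": "..."}]; only the first part's text is read.
def extract_text(message: dict) -> str:
    if isinstance(message["content"], list):
        content = message["content"][0]["text"]
    else:
        content = message["content"]
    return content.strip()


# Both shapes yield the same stripped text.
assert extract_text({"role": "user", "content": " Hello! "}) == "Hello!"
assert extract_text(
    {"role": "user", "content": [{"type": "text", "text": " Hello! "}]}
) == "Hello!"
```

Note that only the first element of a list-type content is read; any additional parts would be ignored by this change.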

torchchat/usages/openai_api.py

Lines changed: 1 addition & 1 deletion
@@ -376,7 +376,7 @@ def chunked_completion(self, completion_request: CompletionRequest):
             encoded_prompt=encoded,
             temperature=float(completion_request.temperature),
             chat_mode=False,
-            sequential_prefill=False,
+            sequential_prefill=True,
         )

         def callback(x, *, done_generating=False):
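
For context, a chat request with list-type message content that would exercise the updated formatter might look like the payload below. This is only an illustration of the OpenAI-style message shape; the model name and streaming flag are placeholders, not taken from this commit.

```python
import json

# Hypothetical chat-completions payload; the model name is a placeholder.
payload = {
    "model": "llama2",
    "stream": True,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        # List-type content, as many OpenAI-compatible clients send it;
        # the generate.py change above extracts the first text part.
        {"role": "user", "content": [{"type": "text", "text": "Hello!"}]},
    ],
}
print(json.dumps(payload, indent=2))
```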
