feat: Adds think budget-forcing #107
Open
yelkurdi wants to merge 35 commits into `generative-computing:main` from `yelkurdi:think_bf`
+504 −0

Changes shown below are from 17 of the 35 commits.

Commits:
- `bd503cf` Initial commit - think budget-forcing - tests run - WIP
- `9385af8` adds zero-think case
- `fda0768` resolved type checking errors
- `ff03b6f` fixes typo and some scripts
- `d73c1ac` Merge branch 'main' into think_bf (yelkurdi)
- `1df37d5` Merge branch 'main' into think_bf (yelkurdi)
- `e013f92` backend interface using _raw_generate
- `556634b` Bump version number from 0.0.2 to 0.0.3 (#117) (nrfulton)
- `6b6599d` ci: Rename .mergify.yml to mergify.yml (#119) (avinash2692)
- `396bf7a` docs: fix typo on README (#116) (mdevino)
- `cad893f` refactor: Full refactor of the Decompose CLI Tool & introduction of p… (tuliocoppola)
- `75e3d0e` moved the budget forcing function into mellea/stdlib/sampling_algos/b…
- `fd7a3b3` adds budget forcing fn
- `8f1a820` Merge branch 'main' into think_bf (yelkurdi)
- `599eac1` feat: adds think budget forcing - relocated test dir
- `8098128` Update budget_forcing.py (yelkurdi)
- `56a828a` Merge branch 'main' into think_bf (nrfulton)
- `3535b65` Merge branch 'main' into think_bf (yelkurdi)
- `ad076c5` merging main in-progress (yelkurdi)
- `66ae952` Merge branch 'main' into think_bf (yelkurdi)
- `05c8185` main branch updates (yelkurdi)
- `80e8485` updates to think_budget_forcing function to match sampling strategy i… (yelkurdi)
- `7f2c8f1` adds sampling strategy for budget forcing (yelkurdi)
- `2493ca1` minor fixes (yelkurdi)
- `dbadd21` feat: ollama generate_from_raw uses existing event loop (jakelorocco)
- `4396f81` Merge branch 'main' into think_bf (yelkurdi)
- `f4dc004` fix: add blocking prevention mech (jakelorocco)
- `c143ce4` Merge branch 'main' into jal/ollama-generate-from-raw (jakelorocco)
- `99b3156` Merge branch 'jal/ollama-generate-from-raw' into think_bf (yelkurdi)
- `8d91627` fixes of async inconsistencies and incorporating Jacob's branch (yelkurdi)
- `d0c9e41` Merge branch 'main' into think_bf (yelkurdi)
- `8796661` updates interface significantly after prompting `_generate_from_raw` … (yelkurdi)
- `5664a8d` minor fix to test case (yelkurdi)
- `d83fb84` minor updates (yelkurdi)
- `1a999b9` Merge branch 'main' into think_bf (yelkurdi)
New file: `budget_forcing.py` (148 lines):

```python
import re

from mellea.stdlib.base import (
    CBlock,
    Component,
    GenerateLog,
    ModelOutputThunk,
)
from mellea.stdlib.session import MelleaSession


def think_budget_forcing(
    session: MelleaSession,
    action: CBlock | Component,
    *,
    think_max_tokens: int = 4096,
    answer_max_tokens: int | None = None,
    start_think_token: str = "<think>",
    end_think_token: str = "</think>",
    begin_response_token: str = "",
    end_response_token: str = "",
    think_wait_suffix: str = "",
    answer_suffix: str = "The final answer is:",
    answer_regex: str = r"\\boxed{.*?}",
    model_options: dict | None = None,
    generate_logs: list[GenerateLog] | None = None,
):
    """Generate with budget forcing using the completions API.

    Relies on raw autocompletion and assumes the model's output is structured as
    '<think> ... </think> summary answer'. The budget-forcing method is proposed in
    https://arxiv.org/abs/2501.19393; this implementation follows the key outline of
    the paper while ensuring stable, fail-safe operation. Generation is multi-step:
    the model is called repeatedly until the requirements are met, i.e. the response
    is assembled conditionally.

    Args:
        think_max_tokens: Token budget allocated to the think block.
        answer_max_tokens: Token budget allocated to the summary/answer block;
            None generates until EoS.
        start_think_token: String marking the start of the think block, default "<think>".
        end_think_token: String marking the end of the think block, default "</think>".
        begin_response_token: Used by certain models; string marking the start of the
            response block, e.g. "<response>". Default "".
        end_response_token: Used by certain models; string marking the end of the
            response block, e.g. "</response>". Default "".
        think_wait_suffix: String appended to force continued thinking, e.g. "\nWait".
            If empty, additional thinking is not forced (upper-bound budget case).
        answer_suffix: String appended to force a final answer.
        answer_regex: Regex whose match indicates that an answer has been generated.
        model_options: Model options merged with the backend defaults.
        generate_logs: Optional list collecting GenerateLog entries.

    Assumptions:
        - The chat template has been applied to the prompt, with think mode enabled.
        - The model has think mode activated.
        - Enabling prefix caching improves performance.

    Limitations:
        - Does not support batching.
    """
    backend = session.backend
    model_options = backend._simplify_and_merge(model_options, is_chat_context=False)

    responses = []
    prompt = backend.formatter.print(action)
    if start_think_token:
        prompt += start_think_token
        responses.append(start_think_token)

    # Generate the thinking portion.
    model_options["n"] = 1
    rem_toks = think_max_tokens
    gen_tok_count = 0
    curr_prompt = prompt
    min_step_len = 10  # minimum character length for a step to be considered valid

    # Think block: indefinite multi-step generation to satisfy the user's budget.
    while True:
        if rem_toks <= 0:  # zero-think case
            break
        if rem_toks <= min_step_len:  # minimum step length reached
            break

        model_options["max_tokens"] = rem_toks
        # TODO: workaround to obtain generated token counts.
        # The token count should be relayed by OpenAI's CompletionUsage.
        model_options["logprobs"] = 1  # to get the number of generated tokens
        # Generate from the assembled prompt (original prompt plus accepted steps).
        result = backend._generate_from_raw(
            [curr_prompt], model_options=model_options, generate_logs=generate_logs
        )
        gen_tok_count += len(
            result[0]._meta["oai_completion_response"]["logprobs"]["token_logprobs"]
        )
        rem_toks = think_max_tokens - gen_tok_count
        response = result[0].value

        if think_wait_suffix == "":
            # Non-strict budget form: accept whatever was generated.
            responses.append(response)
            break

        if rem_toks <= 0:
            responses.append(response)
            break
        else:
            if end_think_token:
                step = response.split(end_think_token)[0]
                # The model failed to produce thoughts; exit.
                if len(step.strip()) <= min_step_len:
                    responses.append(response)
                    break
                # Request more steps.
                step = f"{step} {think_wait_suffix}"
                responses.append(step)
                curr_prompt += step

    response = "".join(responses)
    if answer_regex is None or answer_suffix is None:
        return response, gen_tok_count

    # Now get a final answer if we need to.
    # TODO: Technically we should check for an answer outside the think block, but
    # we use the relaxed requirement of finding any answer in the model's response.
    # Consider a strict structural approach in the future, e.g.:
    #   answer_blk = response.split(end_think_token)[-1]

    # Check whether an answer is already present in the response.
    matches = re.findall(answer_regex, response, re.DOTALL)
    if len(matches) > 0:
        return response, gen_tok_count

    # No answer in the response; force one.
    if end_think_token and end_think_token not in response:
        response += f" {end_think_token}"

    if begin_response_token and begin_response_token not in response:
        response += f" {begin_response_token}"

    if answer_suffix:
        response += f" {answer_suffix}"

    # Update the original prompt with the assembled response.
    prompt += response
    if answer_max_tokens is not None:
        model_options["max_tokens"] = answer_max_tokens
    else:
        model_options.pop("max_tokens", None)  # generate unconditionally

    model_options["logprobs"] = 1  # to get the number of generated tokens
    result = backend._generate_from_raw(
        [prompt], model_options=model_options, generate_logs=generate_logs
    )
    response += result[0].value
    gen_tok_count += len(
        result[0]._meta["oai_completion_response"]["logprobs"]["token_logprobs"]
    )
    return response, gen_tok_count
```
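The control flow above can be exercised without a live backend. The sketch below is a minimal, hypothetical stand-in: a stub "backend" returns canned completions with known token counts, driving the same accounting (spend the think budget in steps, append a wait suffix to keep thinking, then force an answer when the answer regex never matched). The names `StubBackend` and `run_budget_loop` are illustrative only and not part of the mellea API.

```python
import re


class StubBackend:
    """Hypothetical stand-in for a completions backend.

    Returns canned (text, token_count) completions in order, capping the
    reported token count at the requested max_tokens.
    """

    def __init__(self, completions):
        self._completions = list(completions)

    def generate(self, prompt, max_tokens):
        text, n_tokens = self._completions.pop(0)
        return text, min(n_tokens, max_tokens)


def run_budget_loop(backend, prompt, *, think_max_tokens=50,
                    end_think="</think>", wait_suffix="Wait",
                    answer_suffix="The final answer is:",
                    answer_regex=r"\\boxed{.*?}"):
    """Toy version of the think-budget loop: spend the budget in steps,
    append a wait suffix to request more thinking, and force an answer
    if the answer regex never matched."""
    responses, gen_toks = ["<think>"], 0
    curr_prompt = prompt + "<think>"
    while think_max_tokens - gen_toks > 10:  # same minimum-step guard
        text, n = backend.generate(curr_prompt,
                                   max_tokens=think_max_tokens - gen_toks)
        gen_toks += n
        if think_max_tokens - gen_toks <= 0:  # budget exhausted
            responses.append(text)
            break
        step = text.split(end_think)[0]       # drop a premature </think>
        step = f"{step} {wait_suffix}"        # request more thinking
        responses.append(step)
        curr_prompt += step
    response = "".join(responses)
    if not re.findall(answer_regex, response, re.DOTALL):
        # Fail-safe branch: close the think block and force an answer.
        response += f" {end_think} {answer_suffix}"
        text, n = backend.generate(prompt + response, max_tokens=100)
        response += " " + text
        gen_toks += n
    return response, gen_toks


backend = StubBackend([
    ("short thought </think>", 20),  # stops early: wait suffix gets appended
    ("more thought", 40),            # capped to the 30 remaining budget tokens
    ("\\boxed{42}", 5),              # completion for the forced-answer call
])
out, toks = run_budget_loop(backend, "Q: what is 6*7? ")
print(toks)                  # 55 tokens consumed (20 + 30 + 5)
print("\\boxed{42}" in out)  # True
```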
New file (2 lines), ignoring the vLLM server logs:

```
vllm.err
vllm.log
```
New file (23 lines):

````markdown
# Test for the OpenAI API served by vLLM

## Requirements

anaconda / miniconda / miniforge.

Make sure to run the test with multiple cores available (e.g. in a cloud
instance or cluster job). Although one core may seem sufficient, vLLM can
get stuck in a deadlock when only one is available.

## Installation

Run the `install.sh` script; this needs to be done only once. The script
creates a new conda environment named "mellea_tbf" solely for testing or
contributing to the think budget-forcing feature.

```shell
./install.sh
```

## Testing

```shell
./run_test.sh
```
````
New file: `environment.yml` (7 lines):

```yaml
name: mellea_tbf
channels:
  - conda-forge
dependencies:
  # note: at the time of writing, xformers (a vllm dependency) has a broken
  # wheel for 3.13; see https://github.com/facebookresearch/xformers/issues/740#issuecomment-2753869337
  - python=3.12
  - uv
```
New file: `test/stdlib_basics/test_think_budget_forcing/exec_sampling_test.sh` (10 lines):

```shell
#!/bin/bash

source set_variables.sh

eval "$(conda shell.bash hook)"
conda activate $ENV_NAME

export LOCAL_TEST_MODEL

python test_think_budget_forcing.py
```
New file: `install.sh` (22 lines):

```shell
#!/bin/bash -xe

source set_variables.sh

# Recreate the test environment from scratch.
conda env remove -y -n $ENV_NAME || true
conda env create -f $(readlink -f $(dirname $0))/environment.yml

in-conda () {
    conda run -n $ENV_NAME $@
}

# Install mellea itself (editable) plus the test dependencies.
cd ../../../
in-conda uv pip install -e .
cd -
in-conda uv pip install pre-commit
in-conda uv pip install pytest
in-conda uv pip install vllm==0.10.0
in-conda uv pip install outlines
# in-conda uv pip install unsloth
in-conda uv pip install ipdb
```
New file: `run_test.sh` (24 lines):

```shell
#!/bin/bash

source set_variables.sh

eval "$(conda shell.bash hook)"
conda activate $ENV_NAME

rm -f $VLLM_LOG $VLLM_ERR

# Launch the vLLM server in the background and clean it up on exit.
bash ./serve.sh &
VLLM_PID=$!

trap "kill -SIGINT $VLLM_PID ; wait" EXIT

# Block until the server reports readiness in its error log.
while sleep 1 ; do
    if grep -q "Application startup complete." $VLLM_ERR
    then
        break
    fi
done

bash exec_sampling_test.sh
```
New file: `serve.sh` (16 lines):

```shell
#!/bin/bash

source set_variables.sh
eval "$(conda shell.bash hook)"
conda activate $ENV_NAME
export VLLM_ALLOW_RUNTIME_LORA_UPDATING=True

echo "launching a vllm server. Logs are found in $(readlink -ef $(dirname $0))/vllm.log"
# At the time of writing, Granite 4.0 vLLM serving did not support prefix caching:
# --enable-prefix-caching \
vllm serve $LOCAL_TEST_MODEL \
    --dtype bfloat16 \
    > $VLLM_LOG \
    2> $VLLM_ERR
```
New file: `test/stdlib_basics/test_think_budget_forcing/set_variables.sh` (8 lines):

```shell
#!/bin/bash

PYTHONBREAKPOINT="ipdb.set_trace"
LOCAL_TEST_MODEL="ibm-granite/granite-4.0-tiny-preview"
ENV_NAME=mellea_tbf
DIR=$(readlink -ef $(dirname $0))
VLLM_LOG=$DIR/vllm.log
VLLM_ERR=$DIR/vllm.err
```