Commit d886f7c

Merge pull request #224 from restackio/update-llm-model
Update llm model to latest available
2 parents: 8251376 + 0243550

File tree: 28 files changed (+30, -30 lines)

agent_apis/src/functions/llm.py

Lines changed: 1 addition & 1 deletion

@@ -41,7 +41,7 @@ async def llm(function_input: FunctionInputParams) -> str:
     messages.append({"role": "user", "content": function_input.user_content})

     response = client.chat.completions.create(
-        model=function_input.model or "gpt-4o-mini", messages=messages
+        model=function_input.model or "gpt-4.1-mini", messages=messages
     )
     log.info("llm function completed", response=response)
     return response.choices[0].message.content

agent_apis/src/workflows/multistep.py

Lines changed: 1 addition & 1 deletion

@@ -35,7 +35,7 @@ async def run(self, workflow_input: WorkflowInputParams) -> dict:
         function_input=FunctionInputParams(
             system_content=f"You are a personal assitant and have access to weather data {weather_data}. Always greet person with relevant info from weather data",
             user_content=user_content,
-            model="gpt-4o-mini",
+            model="gpt-4.1-mini",
         ),
         start_to_close_timeout=timedelta(seconds=120),
     )

agent_chat/src/functions/llm_chat.py

Lines changed: 1 addition & 1 deletion

@@ -44,7 +44,7 @@ async def llm_chat(agent_input: LlmChatInput) -> dict[str, str]:
         )

         assistant_raw_response = client.chat.completions.create(
-            model=agent_input.model or "gpt-4o-mini",
+            model=agent_input.model or "gpt-4.1-mini",
             messages=agent_input.messages,
         )
     except Exception as e:

agent_rag/README.md

Lines changed: 1 addition & 1 deletion

@@ -51,7 +51,7 @@ python -c "from src.services import watch_services; watch_services()"

 Duplicate the `env.example` file and rename it to `.env`.

-Obtain a Restack API Key to interact with the 'gpt-4o-mini' model at no cost from [console.restack.io](https://console.restack.io)
+Obtain a Restack API Key to interact with the 'gpt-4.1-mini' model at no cost from [console.restack.io](https://console.restack.io)

 ## Run agents

agent_rag/src/functions/llm_chat.py

Lines changed: 1 addition & 1 deletion

@@ -45,7 +45,7 @@ async def llm_chat(function_input: LlmChatInput) -> ChatCompletion:
         )

         response = client.chat.completions.create(
-            model=function_input.model or "gpt-4o-mini",
+            model=function_input.model or "gpt-4.1-mini",
             messages=function_input.messages,
         )
     except Exception as e:

agent_stream/src/functions/llm_chat.py

Lines changed: 1 addition & 1 deletion

@@ -40,7 +40,7 @@ async def llm_chat(function_input: LlmChatInput) -> str:
     messages_dicts = [message.model_dump() for message in function_input.messages]
     # Get the streamed response from OpenAI API
     response: Stream[ChatCompletionChunk] = client.chat.completions.create(
-        model=function_input.model or "gpt-4o-mini",
+        model=function_input.model or "gpt-4.1-mini",
         messages=messages_dicts,
         stream=True,
     )
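In the streaming variants the default-model change is identical; the only difference is `stream=True`, which makes `create` return an iterable of chunks rather than one completion. A hedged sketch of how such a stream is typically drained into a string (the `SimpleNamespace` chunks below are stand-ins for `ChatCompletionChunk` objects, not the repo's code):

```python
from types import SimpleNamespace

def collect_stream(chunks) -> str:
    # Standard OpenAI streaming loop: each chunk's first choice carries a
    # `delta` whose `content` is a text fragment or None; join the fragments.
    parts = []
    for chunk in chunks:
        piece = chunk.choices[0].delta.content
        if piece:
            parts.append(piece)
    return "".join(parts)

# Fake chunks standing in for ChatCompletionChunk objects.
fake_chunks = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=c))])
    for c in ("Hel", "lo", None, " world")
]
print(collect_stream(fake_chunks))  # Hello world
```

The `None` fragment mimics the final chunk of a real stream, which carries a finish reason but no content.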

agent_telephony/twilio_livekit/readme.md

Lines changed: 2 additions & 2 deletions

@@ -31,7 +31,7 @@ docker run -d --pull always --name restack -p 5233:5233 -p 6233:6233 -p 7233:723

 In all subfolders, duplicate the `env.example` file and rename it to `.env`.

-Obtain a Restack API Key to interact with the 'gpt-4o-mini' model at no cost from [Restack Cloud](https://console.restack.io/starter)
+Obtain a Restack API Key to interact with the 'gpt-4.1-mini' model at no cost from [Restack Cloud](https://console.restack.io/starter)


 ## Start Restack Agent with Twilio

@@ -102,7 +102,7 @@ python src/worker.py dev

 Duplicate the `env.example` file and rename it to `.env`.

-Obtain a Restack API Key to interact with the 'gpt-4o-mini' model at no cost from [Restack Cloud](https://console.restack.io/starter)
+Obtain a Restack API Key to interact with the 'gpt-4.1-mini' model at no cost from [Restack Cloud](https://console.restack.io/starter)

 ## Create a new Agent

agent_telephony/vapi/agent_vapi/readme.md

Lines changed: 2 additions & 2 deletions

@@ -28,7 +28,7 @@ docker run -d --pull always --name restack -p 5233:5233 -p 6233:6233 -p 7233:723

 In all subfolders, duplicate the `env.example` file and rename it to `.env`.

-Obtain a Restack API Key to interact with the 'gpt-4o-mini' model at no cost from [Restack Cloud](https://console.restack.io/starter)
+Obtain a Restack API Key to interact with the 'gpt-4.1-mini' model at no cost from [Restack Cloud](https://console.restack.io/starter)


 ## Start Restack Agent with Twilio

@@ -99,7 +99,7 @@ python src/pipeline.py dev

 Duplicate the `env.example` file and rename it to `.env`.

-Obtain a Restack API Key to interact with the 'gpt-4o-mini' model at no cost from [Restack Cloud](https://console.restack.io/starter)
+Obtain a Restack API Key to interact with the 'gpt-4.1-mini' model at no cost from [Restack Cloud](https://console.restack.io/starter)

 ## Create a new Agent

agent_telephony/vapi/agent_vapi/src/functions/llm_chat.py

Lines changed: 1 addition & 1 deletion

@@ -40,7 +40,7 @@ async def llm_chat(function_input: LlmChatInput) -> str:
     messages_dicts = [message.model_dump() for message in function_input.messages]
     # Get the streamed response from OpenAI API
     response: Stream[ChatCompletionChunk] = client.chat.completions.create(
-        model=function_input.model or "gpt-4o-mini",
+        model=function_input.model or "gpt-4.1-mini",
         messages=messages_dicts,
         stream=True,
     )

agent_todo/src/functions/llm_chat.py

Lines changed: 1 addition & 1 deletion

@@ -55,7 +55,7 @@ async def llm_chat(function_input: LlmChatInput) -> ChatCompletion:
         )

         response = client.chat.completions.create(
-            model=function_input.model or "gpt-4o-mini",
+            model=function_input.model or "gpt-4.1-mini",
             messages=function_input.messages,
             tools=function_input.tools,
         )
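The `agent_todo` variant additionally forwards a `tools` list; the commit touches only the default model, not the tool wiring. A small sketch of assembling the keyword arguments for `client.chat.completions.create`, with a tool spec shaped the way the OpenAI chat completions API expects (`build_completion_kwargs` and the `create_todo` tool are illustrative, not from the repo):

```python
def build_completion_kwargs(model, messages, tools=None):
    # Mirror the diff's fallback, and pass `tools` through only when the
    # caller actually supplies some.
    kwargs = {"model": model or "gpt-4.1-mini", "messages": messages}
    if tools:
        kwargs["tools"] = tools
    return kwargs

# Hypothetical tool spec in the OpenAI function-calling schema.
todo_tool = {
    "type": "function",
    "function": {
        "name": "create_todo",
        "description": "Create a todo item",
        "parameters": {
            "type": "object",
            "properties": {"title": {"type": "string"}},
            "required": ["title"],
        },
    },
}

kwargs = build_completion_kwargs(None, [{"role": "user", "content": "add milk"}], [todo_tool])
print(kwargs["model"])  # gpt-4.1-mini
```

Skipping the `tools` key entirely when none are supplied avoids sending `tools=None`, which some client versions reject.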
