Merged
Changes from 5 commits
@@ -15,10 +15,6 @@
  load_dotenv()
  chat_model = ChatBedrockAnthropic(
      model="anthropic.claude-3-sonnet-20240229-v1:0",
-     # aws_secret_key=os.getenv("AWS_SECRET_KEY"),
-     # aws_access_key=os.getenv("AWS_ACCESS_KEY"),
-     # aws_region=os.getenv("AWS_REGION"),
-     # aws_account_id=os.getenv("AWS_ACCOUNT_ID"),
  )

# Set some Shiny page options
@@ -36,5 +32,5 @@
  # Define a callback to run when the user submits a message
  @chat.on_user_submit
  async def handle_user_input(user_input: str):
-     response = chat_model.stream(user_input)
+     response = await chat_model.stream_async(user_input)
      await chat.append_message_stream(response)
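The `stream()` → `stream_async()` change repeated across these templates is the usual sync-to-async generator migration in Python. A minimal, dependency-free sketch of the call pattern (the function names below are illustrative stand-ins for the chatlas/Shiny APIs, not the real implementations):

```python
import asyncio
from typing import AsyncIterator


async def stream_async(prompt: str) -> AsyncIterator[str]:
    """Stand-in for chat_model.stream_async(): a coroutine that returns
    an async iterator of response chunks, so the call site awaits it."""

    async def chunks() -> AsyncIterator[str]:
        for word in prompt.split():
            await asyncio.sleep(0)  # cede control, as a real client does on I/O
            yield word

    return chunks()


async def append_message_stream(response: AsyncIterator[str]) -> str:
    """Stand-in for chat.append_message_stream(): consume the stream."""
    return " ".join([chunk async for chunk in response])


async def handle_user_input(user_input: str) -> str:
    # Mirrors the updated template code: await the coroutine to get the
    # stream, then hand the stream off to be consumed chunk by chunk.
    response = await stream_async(user_input)
    return await append_message_stream(response)


print(asyncio.run(handle_user_input("hello async world")))
```

Note that `await chat_model.stream_async(...)` does not consume the stream itself; it only produces the async iterator, which `append_message_stream` then drains with `async for`.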
@@ -1,5 +1,4 @@
  shiny
  python-dotenv
- tokenizers
  chatlas
  anthropic[bedrock]
Collaborator

Just a note that reading through this PR reminded me of an issue I opened to add provider-specific extras to chatlas. Here's the related PR: posit-dev/chatlas#66

In the future (I guess not yet...) it'd be nice to be able to include those extras in these templates -- e.g. chatlas[bedrock-anthropic] -- which would mean that updates to the provider dependencies don't require changes here to stay up-to-date.
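If those extras ship, each template's requirements.txt could delegate the provider pin to chatlas. A hypothetical sketch (the `bedrock-anthropic` extra name comes from the linked issue, not a released chatlas version):

```
shiny
python-dotenv
chatlas[bedrock-anthropic]
```

pip would then resolve the provider dependency (e.g. `anthropic[bedrock]`) transitively, so provider bumps happen in chatlas rather than in each template.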

Collaborator Author
@cpsievert Feb 28, 2025

Yea, agreed, thanks for the PR

FWIW, tokenizers was once needed for ui.Chat.messages(token_limits), but we no longer advertise it

@@ -0,0 +1,6 @@
# Once you've provided your AWS credentials, rename this file to .env
# The load_dotenv() call in app.py will then load these environment variables
Comment on lines +1 to +2
Collaborator

I think it'd be good to include this message in the shiny create output. See the template.json in the shiny/templates/apps section for examples. Here's one:

{
  "type": "app",
  "id": "dashboard",
  "title": "Basic dashboard",
  "description": "A basic, single page dashboard with value boxes, two plots in cards and a sidebar.",
  "next_steps": [
    "Run the app with `shiny run app.py`."
  ],
  "follow_up": [
    {
      "type": "info",
      "text": "Just getting started with Shiny?"
    },
    {
      "type": "action",
      "text": "Learn more at https://shiny.posit.co/py/docs/user-interfaces.html"
    }
  ]
}

Collaborator Author
@cpsievert Feb 27, 2025

Good call, forgot that was a thing!

AWS_SECRET_KEY=<Your AWS info here>
AWS_ACCESS_KEY=<Your AWS info here>
AWS_REGION=<Your AWS info here>
AWS_ACCOUNT_ID=<Your AWS info here>
2 changes: 1 addition & 1 deletion shiny/templates/chat/llm-enterprise/azure-openai/app.py
@@ -37,5 +37,5 @@
  # Define a callback to run when the user submits a message
  @chat.on_user_submit
  async def handle_user_input(user_input: str):
-     response = chat_model.stream(user_input)
+     response = await chat_model.stream_async(user_input)
      await chat.append_message_stream(response)
@@ -1,5 +1,4 @@
  shiny
  python-dotenv
- tokenizers
  chatlas
  openai
3 changes: 3 additions & 0 deletions shiny/templates/chat/llm-enterprise/azure-openai/template.env
@@ -0,0 +1,3 @@
# Once you've provided your API key, rename this file to .env
# The load_dotenv() call in app.py will then load this environment variable
AZURE_OPENAI_API_KEY=<Your Azure OpenAI API key>
4 changes: 2 additions & 2 deletions shiny/templates/chat/llms/anthropic/app.py
@@ -14,7 +14,7 @@
  load_dotenv()
  chat_model = ChatAnthropic(
      api_key=os.environ.get("ANTHROPIC_API_KEY"),
-     model="claude-3-5-sonnet-latest",
+     model="claude-3-7-sonnet-latest",
      system_prompt="You are a helpful assistant.",
  )

@@ -37,5 +37,5 @@
  # Generate a response when the user submits a message
  @chat.on_user_submit
  async def handle_user_input(user_input: str):
-     response = chat_model.stream(user_input)
+     response = await chat_model.stream_async(user_input)
      await chat.append_message_stream(response)
3 changes: 3 additions & 0 deletions shiny/templates/chat/llms/anthropic/template.env
@@ -0,0 +1,3 @@
# Once you've provided your API key, rename this file to .env
# The load_dotenv() call in app.py will then load this environment variable
ANTHROPIC_API_KEY=<Your Anthropic API key>
4 changes: 2 additions & 2 deletions shiny/templates/chat/llms/google/app.py
@@ -15,7 +15,7 @@
  chat_model = ChatGoogle(
      api_key=os.environ.get("GOOGLE_API_KEY"),
      system_prompt="You are a helpful assistant.",
-     model="gemini-1.5-flash",
+     model="gemini-2.0-flash",
  )

# Set some Shiny page options
@@ -33,5 +33,5 @@
  # Generate a response when the user submits a message
  @chat.on_user_submit
  async def handle_user_input(user_input: str):
-     response = chat_model.stream(user_input)
+     response = await chat_model.stream_async(user_input)
      await chat.append_message_stream(response)
5 changes: 2 additions & 3 deletions shiny/templates/chat/llms/google/requirements.txt
@@ -1,5 +1,4 @@
  shiny
  python-dotenv
- tokenizers
- chatlas
- google-generativeai
+ chatlas>=0.4.0
+ google-genai
3 changes: 3 additions & 0 deletions shiny/templates/chat/llms/google/template.env
@@ -0,0 +1,3 @@
# Once you've provided your API key, rename this file to .env
# The load_dotenv() call in app.py will then load this environment variable
GOOGLE_API_KEY=<Your Google API key>
2 changes: 1 addition & 1 deletion shiny/templates/chat/llms/langchain/app.py
@@ -38,5 +38,5 @@
  # Define a callback to run when the user submits a message
  @chat.on_user_submit
  async def handle_user_input(user_input: str):
-     response = chat_model.stream(user_input)
+     response = await chat_model.stream_async(user_input)
      await chat.append_message_stream(response)
1 change: 0 additions & 1 deletion shiny/templates/chat/llms/langchain/requirements.txt
@@ -1,4 +1,3 @@
  shiny
  python-dotenv
- tokenizers
  langchain-openai
3 changes: 3 additions & 0 deletions shiny/templates/chat/llms/langchain/template.env
@@ -0,0 +1,3 @@
# Once you've provided your API key, rename this file to .env
# The load_dotenv() call in app.py will then load this environment variable
OPENAI_API_KEY=<Your OpenAI API key>
2 changes: 1 addition & 1 deletion shiny/templates/chat/llms/ollama/app.py
@@ -29,5 +29,5 @@
  # Generate a response when the user submits a message
  @chat.on_user_submit
  async def handle_user_input(user_input: str):
-     response = chat_model.stream(user_input)
+     response = await chat_model.stream_async(user_input)
      await chat.append_message_stream(response)
1 change: 0 additions & 1 deletion shiny/templates/chat/llms/ollama/requirements.txt
@@ -1,4 +1,3 @@
  shiny
- tokenizers
  chatlas
  ollama
2 changes: 1 addition & 1 deletion shiny/templates/chat/llms/openai/app.py
@@ -37,5 +37,5 @@
  # Generate a response when the user submits a message
  @chat.on_user_submit
  async def handle_user_input(user_input: str):
-     response = chat_model.stream(user_input)
+     response = await chat_model.stream_async(user_input)
      await chat.append_message_stream(response)
1 change: 0 additions & 1 deletion shiny/templates/chat/llms/openai/requirements.txt
@@ -1,5 +1,4 @@
  shiny
  python-dotenv
- tokenizers
  chatlas
  openai
3 changes: 3 additions & 0 deletions shiny/templates/chat/llms/openai/template.env
@@ -0,0 +1,3 @@
# Once you've provided your API key, rename this file to .env
# The load_dotenv() call in app.py will then load this environment variable
OPENAI_API_KEY=<Your OpenAI API key>
6 changes: 3 additions & 3 deletions shiny/templates/chat/llms/playground/app.py
@@ -15,13 +15,13 @@
  load_dotenv()

  models = {
-     "openai": ["gpt-4o-mini", "gpt-4o"],
      "claude": [
+         "claude-3-7-sonnet-latest",
          "claude-3-opus-latest",
          "claude-3-5-sonnet-latest",
          "claude-3-haiku-20240307",
      ],
-     "google": ["gemini-1.5-pro-latest"],
+     "openai": ["gpt-4o-mini", "gpt-4o"],
+     "google": ["gemini-2.0-flash"],
  }

model_choices: dict[str, dict[str, str]] = {}
4 changes: 2 additions & 2 deletions shiny/templates/chat/llms/playground/requirements.txt
@@ -1,6 +1,6 @@
- chatlas
+ chatlas>=0.4
  openai
  anthropic
- google-generativeai
+ google-genai
  python-dotenv
  shiny
5 changes: 5 additions & 0 deletions shiny/templates/chat/llms/playground/template.env
@@ -0,0 +1,5 @@
# Once you've provided your API keys, rename this file to .env
# The load_dotenv() call in app.py will then load these environment variables
ANTHROPIC_API_KEY=<Your Anthropic API key>
OPENAI_API_KEY=<Your OpenAI API key>
GOOGLE_API_KEY=<Your Google API key>
12 changes: 5 additions & 7 deletions shiny/templates/chat/starters/hello/app-core.py
Original file line number Diff line number Diff line change
Expand Up @@ -7,13 +7,11 @@
)

# Create a welcome message
welcome = ui.markdown(
"""
Hi! This is a simple Shiny `Chat` UI. Enter a message below and I will
simply repeat it back to you. For more examples, see this
[folder of examples](https://github.com/posit-dev/py-shiny/tree/main/shiny/templates/chat).
"""
)
welcome = """
Hi! This is a simple Shiny `Chat` UI. Enter a message below and I will
simply repeat it back to you. For more examples, see this
[folder of examples](https://github.com/posit-dev/py-shiny/tree/main/shiny/templates/chat).
"""


def server(input, output, session):
@@ -8,13 +8,11 @@
  )

  # Create a welcome message
- welcome = ui.markdown(
-     """
-     Hi! This is a simple Shiny `Chat` UI. Enter a message below and I will
-     simply repeat it back to you. For more examples, see this
-     [folder of examples](https://github.com/posit-dev/py-shiny/tree/main/shiny/templates/chat).
-     """
- )
+ welcome = """
+ Hi! This is a simple Shiny `Chat` UI. Enter a message below and I will
+ simply repeat it back to you. For more examples, see this
+ [folder of examples](https://github.com/posit-dev/py-shiny/tree/main/shiny/templates/chat).
+ """

# Create a chat instance
chat = ui.Chat(