Open
Labels
bedrock, p3 (This is a minor priority issue), service-api (This issue is caused by the service API, not the SDK implementation.)
Description
Describe the bug
I want to create a Bedrock agent with the foundation model anthropic.claude-3-7-sonnet and with model tuning parameters such as temperature, topP, stopSequences, etc.
I tried the following code:
import boto3
import uuid

aws_access_key = "KI6"
aws_secret_key = "Qc3"
aws_region = "us-east-1"

def create_bedrock_agent_with_temperature(agent_name, agent_instruction, agent_role_arn, foundation_model, temperature=0.5):
    client = boto3.client('bedrock-agent',
                          region_name=aws_region,
                          aws_access_key_id=aws_access_key,
                          aws_secret_access_key=aws_secret_key)
    try:
        default_prompt_template = "{\"anthropic_version\":\"bedrock-2023-05-31\",\"system\":\"You are a helpful and professional assistant. Provide thorough, accurate answers in a formal tone.\",\"messages\":[{\"role\":\"user\",\"content\":\"$question$\"}]}"
        response = client.create_agent(
            agentName=agent_name,
            instruction=agent_instruction,
            foundationModel=foundation_model,
            agentResourceRoleArn=agent_role_arn,
            idleSessionTTLInSeconds=1800,
            promptOverrideConfiguration={
                'promptConfigurations': [
                    {
                        'promptType': 'ORCHESTRATION',  # Other types: PRE_PROCESSING, POST_PROCESSING, etc.
                        'promptCreationMode': 'OVERRIDDEN',  # OVERRIDDEN or DEFAULT
                        'promptState': 'ENABLED',
                        'basePromptTemplate': default_prompt_template,
                        'inferenceConfiguration': {
                            'temperature': temperature,  # This is where you set the temperature
                            'topP': 0.9,
                            'stopSequences': ['</response>'],
                            'maximumLength': 2048
                        }
                    }
                ]
            }
        )
        print(f"Agent '{agent_name}' created successfully!")
        return response['agent']['agentArn']
    except Exception as e:
        print(f"Error creating agent: {e}")
        return None

role_arn = "arn:aws:iam::68296:role/DEFAULT_AgentExecutionRole"
model_id = "arn:aws:bedrock:us-east-1:68096:inference-profile/us.anthropic.claude-3-7-sonnet-20250219-v1:0"

# Create an agent with a higher temperature for more creative responses
creative_agent_arn = create_bedrock_agent_with_temperature(
    agent_name=f"CreativeAgent-{uuid.uuid4().hex[:8]}",
    agent_instruction="You are a creative writing assistant. Generate imaginative and diverse story ideas.",
    agent_role_arn=role_arn,
    foundation_model=model_id,
    temperature=0.8
)
print(f"Creative Agent ARN: {creative_agent_arn}")
The agent creation works, but the agent invocation does not, failing with:
The overridden prompt that you provided is incorrectly formatted. Check the format for errors, such as invalid JSON, and retry your request.
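For reference, this is roughly how I invoke the agent; a minimal sketch, assuming the prepared DRAFT agent is called through the bedrock-agent-runtime client with the built-in test alias TSTALIASID (the agent id and the input text here are placeholders, not values from my real setup):

import uuid
import boto3

runtime = boto3.client('bedrock-agent-runtime', region_name='us-east-1')

# Invoke the DRAFT version of the agent; 'TSTALIASID' is the built-in
# test alias. The agent must have been prepared (prepare_agent) first.
response = runtime.invoke_agent(
    agentId='AGENT_ID_PLACEHOLDER',  # returned by create_agent
    agentAliasId='TSTALIASID',
    sessionId=str(uuid.uuid4()),
    inputText='Give me a story idea about a lighthouse keeper.'
)

# invoke_agent streams its result; iterating the completion stream is
# where the "incorrectly formatted" error above surfaces.
completion = ''
for event in response['completion']:
    if 'chunk' in event:
        completion += event['chunk']['bytes'].decode('utf-8')
print(completion)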
So, as per my understanding:
- If we need to tune the model inside an agent, we need to configure the inferenceConfiguration.
- To provide inferenceConfiguration, the promptCreationMode value should be OVERRIDDEN.
- If we provide the promptCreationMode value as OVERRIDDEN, we need to pass the basePromptTemplate.

So what should the basePromptTemplate value be? How should it be structured? Does this template differ based on the foundation model used? Please provide any existing documentation for this.
Regression Issue
- Select this option if this issue appears to be a regression.
Expected Behavior
Agent creation and invocation should both succeed with the model tuning parameters applied.
Current Behavior
Agent invocation fails with the prompt formatting error quoted above.
Reproduction Steps
Run the code above.
Possible Solution
No response
Additional Information/Context
No response
SDK version used
boto3==1.40.45, botocore==1.40.45
Environment details (OS name and version, etc.)
Ubuntu