1 change: 1 addition & 0 deletions .viperlightignore
@@ -0,0 +1 @@
# Empty .viperlightignore created - none existed
1 change: 1 addition & 0 deletions .viperlightrc
@@ -0,0 +1 @@
{"failOn":"low","all":true}
25 changes: 25 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,31 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [4.1.0] - 2026-01-29

### Added

- MCP Server target configuration support in MCP Server use case.

### Changed

- Upgraded langchain-aws and dependent packages, migrating from deprecated modules to langchain_core and langchain_classic.
- Refactored info panel content for all use case types in view details page.

### Fixed

- Bug where removing already-deleted MCP servers from an Agent Builder use case would cause errors.
- Issue with workflow review page not displaying system prompt formatting correctly.
- Bug where failed memory creation would also cause deletion to fail.

### Security

- Upgraded python-multipart to `0.0.22` to mitigate [CVE-2024-53981](https://avd.aquasec.com/nvd/cve-2024-53981/)
- Upgraded lodash to `4.17.23` to mitigate [CVE-2025-13465](https://avd.aquasec.com/nvd/cve-2025-13465/)
- Upgraded diff to `4.0.4` and `5.2.2` to mitigate [CVE-2026-24001](https://avd.aquasec.com/nvd/cve-2026-24001/)
- Upgraded @smithy/config-resolver to `4.4.6` to mitigate [GHSA-6475-r3vj-m8vf](https://github.com/advisories/GHSA-6475-r3vj-m8vf)
- Upgraded wheel to `0.46.3` to mitigate [GHSA-8rrh-rw8j-w5fx](https://github.com/advisories/GHSA-8rrh-rw8j-w5fx)

## [4.0.4] - 2026-01-13

### Fixed
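The Changed entry above notes a migration from deprecated `langchain` modules to `langchain_core` and `langchain_classic`. A minimal sketch of what such an import migration typically looks like follows; the specific modules this PR touches are not shown in this excerpt, so the names below are illustrative only.

# Before: deprecated re-exports from the top-level `langchain` package
# from langchain.prompts import ChatPromptTemplate
# from langchain.schema import Document

# After: core abstractions are imported from langchain_core
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.documents import Document

# Legacy chains/agents that previously shipped in `langchain` are now published
# in the separate `langchain-classic` distribution (imported as `langchain_classic`).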
10 changes: 9 additions & 1 deletion NOTICE.txt
@@ -1303,9 +1303,17 @@ klaw-sync under the MIT license.
kleur under the MIT license.
langchain under the MIT license.
langchain-aws under the MIT license.
langchain-classic under the MIT license.
langchain-core under the MIT license.
langchain-text-splitters under the MIT license.
langsmith under the MIT license.
langgraph under the MIT license.
langgraph-checkpoint under the MIT license.
langgraph-prebuilt under the MIT license.
langgraph-sdk under the MIT license.
ormsgpack under the Apache-2.0 license.
uuid-utils under the MIT license.
xxhash under the BSD-2-Clause license.
launch-editor under the MIT license.
lazy-ass under the MIT license.
leven under the MIT license.
@@ -1548,7 +1556,7 @@ pump under the MIT license.
punycode under the MIT license.
pure-rand under the MIT license.
pyasn1 under the BSD-2-Clause license.
pycparser under the 0BSD license.
pycparser under the BSD-3-Clause license.
pydantic under the MIT license.
pydantic-core under the MIT license.
pyjwt under the MIT license.
4 changes: 2 additions & 2 deletions deployment/cdk-solution-helper/package-lock.json

Some generated files are not rendered by default.

4 changes: 2 additions & 2 deletions deployment/cdk-solution-helper/package.json
@@ -1,10 +1,10 @@
{
"name": "@amzn/cdk-solution-helper",
"version": "0.1.0",
"version": "4.1.0",
"description": "This script performs token replacement as part of the build pipeline",
"license": "Apache-2.0",
"author": {
"name": "Amazon Web Services",
"url": "https://aws.amazon.com/solutions"
}
}
}
4 changes: 2 additions & 2 deletions deployment/ecr/gaab-strands-agent/pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "gaab-strands-agent"
version = "4.0.4"
version = "4.1.0"
description = "GAAB Strands Agent Runtime for Amazon Bedrock AgentCore"
readme = "README.md"
requires-python = ">=3.13"
@@ -14,7 +14,7 @@ classifiers = [
dependencies = [
"setuptools>=70.0.0",
"pip>=25.0",
"wheel>=0.42.0",
"wheel>=0.46.2",

# AWS SDK
"boto3>=1.35.0",
21 changes: 12 additions & 9 deletions deployment/ecr/gaab-strands-agent/uv.lock

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion deployment/ecr/gaab-strands-common/pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "gaab-strands-common"
version = "0.1.0"
version = "4.1.0"
description = "Shared library for GAAB Strands agents"
readme = "README.md"
requires-python = ">=3.13"
@@ -59,7 +59,7 @@ def __init__(self, config: Dict[str, Any], region: str):
logger.debug(f"Initialized S3FileReaderTool for bucket: {self.bucket_name}")

@tool
def s3_file_reader(self, tool_input: ToolUse) -> ToolResult:
def s3_file_reader(self, s3_key: str) -> ToolResult:
"""
Read files from S3 and return content in model-readable format.

@@ -69,28 +69,24 @@ def s3_file_reader(self, tool_input: ToolUse) -> ToolResult:
automatically detects file type and formats content appropriately for processing.

Args:
tool_input: ToolUse object containing:
- s3_key (str, required): S3 object key/path (e.g., 'uploads/document.pdf')

s3_key (str, required): S3 object key/path (e.g., 'uploads/document.pdf')

Returns:
ToolResult with status "success" or "error":
- Success: Returns image or document block with format and binary content
- Error: Returns descriptive error message for invalid input, unsupported format,
file not found, or S3 access issues
"""
# Initialize variables with defaults
tool_use_id = "unknown"
s3_key = "unknown"

try:
tool_use_id = tool_input["toolUseId"]
tool_use_input = tool_input["input"]

if "s3_key" not in tool_use_input:
if not isinstance(s3_key, str):
tool_use_id = "s3_file_reader_invalid"
return self._create_error_result(tool_use_id, "S3 key must be a string")

if not s3_key or not s3_key.strip():
tool_use_id = "s3_file_reader_empty"
return self._create_error_result(tool_use_id, "S3 key is required")

s3_key = tool_use_input["s3_key"]
tool_use_id = f"s3_file_reader_{hash(s3_key) % 10000}"

# Validate and normalize the S3 key
validation_result = self._validate_and_normalize_s3_key(s3_key)
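A minimal usage sketch of the call-site change implied by the diff above; `tool` stands in for an already-constructed reader instance (its construction is not part of this hunk), and the key below is the example path from the docstring.

# Old call shape (removed): the caller built the Strands ToolUse envelope itself.
# result = tool.s3_file_reader({"toolUseId": "abc-123", "input": {"s3_key": "uploads/document.pdf"}})

# New call shape (added): the S3 key is passed directly and a toolUseId is derived internally.
result = tool.s3_file_reader("uploads/document.pdf")

if result["status"] == "success":
    # Success returns a single image or document block with format and binary content.
    block = result["content"][0]
else:
    # Errors carry a descriptive message (invalid input, unsupported format, missing file, access issues).
    print(result["content"][0]["text"])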
@@ -100,11 +100,10 @@ def test_image_processing_png(tool):
# Replace tool's S3 client with mocked one
tool.s3_client = s3

tool_use = {"toolUseId": "test-png", "input": {"s3_key": filename}}
result = tool.s3_file_reader(tool_use)
result = tool.s3_file_reader(filename)

# Verify ToolResult structure
assert result["toolUseId"] == "test-png"
assert result["toolUseId"].startswith("s3_file_reader_")
assert result["status"] == "success"
assert len(result["content"]) == 1

@@ -126,8 +125,7 @@ def test_image_processing_jpg_normalization(tool):

tool.s3_client = s3

tool_use = {"toolUseId": "test-jpg", "input": {"s3_key": filename}}
result = tool.s3_file_reader(tool_use)
result = tool.s3_file_reader(filename)

# Verify JPG is normalized to JPEG
assert result["status"] == "success"
@@ -146,8 +144,7 @@ def test_no_normalization_for_valid_formats(tool):
test_content = b"fake document content"
s3.put_object(Bucket="test-bucket", Key=filename, Body=test_content)

tool_use = {"toolUseId": f"test-csv", "input": {"s3_key": filename}}
result = tool.s3_file_reader(tool_use)
result = tool.s3_file_reader(filename)

assert result["status"] == "success"
assert result["content"][0]["document"]["format"] == "csv"
@@ -165,11 +162,10 @@ def test_document_processing_pdf(tool):

tool.s3_client = s3

tool_use = {"toolUseId": "test-pdf", "input": {"s3_key": filename}}
result = tool.s3_file_reader(tool_use)
result = tool.s3_file_reader(filename)

# Verify ToolResult structure
assert result["toolUseId"] == "test-pdf"
assert result["toolUseId"].startswith("s3_file_reader_")
assert result["status"] == "success"
assert len(result["content"]) == 1

@@ -182,27 +178,16 @@ def test_document_processing_pdf(tool):

def test_missing_s3_key(tool):
"""Test error when s3_key is missing from input"""
tool_use = {"toolUseId": "test-missing", "input": {}}
result = tool.s3_file_reader(tool_use)
result = tool.s3_file_reader("")

assert result["toolUseId"] == "test-missing"
assert result["toolUseId"] == "s3_file_reader_empty"
assert result["status"] == "error"
assert "S3 key is required" in result["content"][0]["text"]


def test_empty_s3_key(tool):
"""Test error when s3_key is empty"""
tool_use = {"toolUseId": "test-empty", "input": {"s3_key": ""}}
result = tool.s3_file_reader(tool_use)

assert result["status"] == "error"
assert "S3 key cannot be empty" in result["content"][0]["text"]


def test_s3_uri_rejected(tool):
"""Test that S3 URIs are rejected"""
tool_use = {"toolUseId": "test-uri", "input": {"s3_key": "s3://bucket/key.txt"}}
result = tool.s3_file_reader(tool_use)
result = tool.s3_file_reader("s3://bucket/key.txt")

assert result["status"] == "error"
assert "Invalid input" in result["content"][0]["text"]
@@ -220,8 +205,7 @@ def test_unsupported_file_format(tool):

tool.s3_client = s3

tool_use = {"toolUseId": "test-unsupported", "input": {"s3_key": "program.exe"}}
result = tool.s3_file_reader(tool_use)
result = tool.s3_file_reader("program.exe")

assert result["status"] == "error"
assert "Unsupported file type" in result["content"][0]["text"]
@@ -236,8 +220,7 @@ def test_file_not_found(tool):

tool.s3_client = s3

tool_use = {"toolUseId": "test-not-found", "input": {"s3_key": "nonexistent.txt"}}
result = tool.s3_file_reader(tool_use)
result = tool.s3_file_reader("nonexistent.txt")

assert result["status"] == "error"
assert "not found" in result["content"][0]["text"]
@@ -253,29 +236,19 @@ def mock_get_object(**kwargs):
)

with patch.object(tool.s3_client, "get_object", side_effect=mock_get_object):
tool_use = {"toolUseId": "test-access", "input": {"s3_key": "restricted.txt"}}
result = tool.s3_file_reader(tool_use)
result = tool.s3_file_reader("restricted.txt")

assert result["status"] == "error"
assert result["content"][0]["text"] == "File 'restricted.txt' not found. The file may have been deleted or moved."


def test_malformed_tool_use_missing_tool_use_id(tool):
"""Test handling of malformed ToolUse objects missing toolUseId"""
result = tool.s3_file_reader({"input": {"s3_key": "test.txt"}})

assert result["status"] == "error"
assert result["toolUseId"] == "unknown"
assert "Unexpected error" in result["content"][0]["text"]


def test_malformed_tool_use_missing_input(tool):
"""Test handling of malformed ToolUse objects missing input"""
result = tool.s3_file_reader({"toolUseId": "test-id"})
"""Test handling of None s3_key parameter"""
result = tool.s3_file_reader(None)

assert result["status"] == "error"
assert result["toolUseId"] == "test-id"
assert "Unexpected error" in result["content"][0]["text"]
assert result["toolUseId"] == "s3_file_reader_invalid"
assert "S3 key must be a string" in result["content"][0]["text"]


@mock_aws
@@ -297,8 +270,7 @@ def test_bedrock_compliance_document_names(tool):
test_content = b"bedrock compliance test content"
s3.put_object(Bucket="test-bucket", Key=filename, Body=test_content)

tool_use = {"toolUseId": f"test-{filename}", "input": {"s3_key": filename}}
result = tool.s3_file_reader(tool_use)
result = tool.s3_file_reader(filename)

assert result["status"] == "success"
document_name = result["content"][0]["document"]["name"]
@@ -322,8 +294,7 @@ def test_full_document_s3_key_structure(tool):
test_content = b"complex path document content"
s3.put_object(Bucket="test-bucket", Key=complex_key, Body=test_content)

tool_use = {"toolUseId": "test-complex", "input": {"s3_key": complex_key}}
result = tool.s3_file_reader(tool_use)
result = tool.s3_file_reader(complex_key)

assert result["status"] == "success"
assert result["content"] == [
@@ -352,8 +323,7 @@ def test_full_image_s3_key_structure(tool):
test_content = b"complex path image content"
s3.put_object(Bucket="test-bucket", Key=complex_key, Body=test_content)

tool_use = {"toolUseId": "test-complex", "input": {"s3_key": complex_key}}
result = tool.s3_file_reader(tool_use)
result = tool.s3_file_reader(complex_key)

assert result["status"] == "success"
assert result["content"] == [
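The `tool` fixture these tests rely on is not included in this diff. A hypothetical sketch, assuming the constructor shown in the source diff (`config: Dict[str, Any]`, `region: str`); the import path and config key are assumptions, and the real fixture may differ.

import pytest

# Hypothetical import path -- the module containing the tool class is not named in this excerpt;
# the class name comes from the "Initialized S3FileReaderTool for bucket: ..." log line above.
from gaab_strands_common.tools.s3_file_reader import S3FileReaderTool


@pytest.fixture
def tool():
    # The "bucket_name" config key is an assumption; individual tests create their own
    # moto-mocked S3 client and assign it to tool.s3_client, so no AWS setup is needed here.
    return S3FileReaderTool(config={"bucket_name": "test-bucket"}, region="us-east-1")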
@@ -279,7 +279,7 @@ def test_validate_single_file_deleted_status():
"fileKey": "test-use-case-id/test-user-id/test-conversation-id/test-message-id",
"fileName": "deleted-file.txt",
"status": FileStatus.DELETED,
"ttl": int(time.time()) + 3600,
"ttl": int(time.time()) + 3610,
}
)
