70 changes: 70 additions & 0 deletions README.md
@@ -75,6 +75,51 @@ employee = employee_tool.call(id="employee-id")
employee = employee_tool.execute({"id": "employee-id"})
```

## Implicit Feedback (Beta)

The Python SDK can emit implicit behavioural feedback to LangSmith so you can triage low-quality tool results without manually tagging runs.

### Automatic configuration

Set `LANGSMITH_API_KEY` in your environment and the SDK will initialise the implicit feedback manager on first tool execution. You can optionally fine-tune behaviour with:

- `STACKONE_IMPLICIT_FEEDBACK_ENABLED` (`true`/`false`, defaults to `true` when an API key is present)
- `STACKONE_IMPLICIT_FEEDBACK_PROJECT` to pin a LangSmith project name
- `STACKONE_IMPLICIT_FEEDBACK_TAGS` with a comma-separated list of tags applied to every run
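
For example, the variables can be exported in-process before any tools run (a minimal sketch; the tag values are illustrative and `lsv2_...` stands in for a real LangSmith key):

```python
import os

from stackone_ai import StackOneToolSet

# Assumption: the SDK reads these variables on the first tool execution, so setting
# them any time before a tool runs is sufficient.
os.environ["LANGSMITH_API_KEY"] = "lsv2_..."  # placeholder for your LangSmith key
os.environ["STACKONE_IMPLICIT_FEEDBACK_ENABLED"] = "true"
os.environ["STACKONE_IMPLICIT_FEEDBACK_PROJECT"] = "stackone-agents"
os.environ["STACKONE_IMPLICIT_FEEDBACK_TAGS"] = "python-sdk,beta"

toolset = StackOneToolSet()
```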

### Manual configuration

If you want custom session or user resolvers, call `configure_implicit_feedback` during start-up:

```python
from stackone_ai import configure_implicit_feedback

configure_implicit_feedback(
    api_key="/path/to/langsmith.key",
    project_name="stackone-agents",
    default_tags=["python-sdk"],
)
```

**Comment on lines +97 to +98** (Copilot AI, Oct 16, 2025):

Example passes a filesystem path as `api_key`; `configure_implicit_feedback` expects the actual LangSmith API key string, not a file path. Replace `'/path/to/langsmith.key'` with an API key value or adjust the example to show reading the key file first.

Suggested change:

    -configure_implicit_feedback(
    -    api_key="/path/to/langsmith.key",
    +# If your API key is stored in a file, read it first:
    +with open("/path/to/langsmith.key", "r") as f:
    +    langsmith_api_key = f.read().strip()
    +configure_implicit_feedback(
    +    api_key=langsmith_api_key,

Providing your own `session_resolver`/`user_resolver` callbacks lets you derive identifiers from the request context before events are sent to LangSmith.
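
A minimal sketch of such resolvers (the zero-argument callable signature is an assumption, not confirmed SDK API; adapt it to the actual resolver contract):

```python
import os
from contextvars import ContextVar
from typing import Optional

from stackone_ai import configure_implicit_feedback

# Request-scoped identifiers, populated by your web framework's middleware.
current_session_id: ContextVar[Optional[str]] = ContextVar("current_session_id", default=None)
current_user_id: ContextVar[Optional[str]] = ContextVar("current_user_id", default=None)

configure_implicit_feedback(
    api_key=os.environ["LANGSMITH_API_KEY"],
    project_name="stackone-agents",
    # Assumed shape: zero-argument callables returning an identifier (or None).
    session_resolver=lambda: current_session_id.get(),
    user_resolver=lambda: current_user_id.get(),
)
```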

### Attaching session context to tool calls

Both `tool.execute` and `tool.call` accept an `options` keyword that is excluded from the API request but forwarded to the feedback manager:

```python
tool.execute(
{"id": "employee-id"},
options={
"feedback_session_id": "chat-42",
"feedback_user_id": "user-123",
"feedback_metadata": {"conversation_id": "abc"},
},
)
```

When two calls for the same session happen within a few seconds, the SDK emits a `refinement_needed` event, and you can inspect suitability scores directly in LangSmith.
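
For illustration, two back-to-back executions that share a `feedback_session_id` would be treated as a refinement (a sketch; the second call's `fields` argument is only an example parameter, and the exact time window is internal to the SDK):

```python
# First attempt for this session.
tool.execute(
    {"id": "employee-id"},
    options={"feedback_session_id": "chat-42"},
)

# A second call moments later for the same session causes the SDK to emit a
# `refinement_needed` event against the earlier run in LangSmith.
tool.execute(
    {"id": "employee-id", "fields": "first_name,last_name"},
    options={"feedback_session_id": "chat-42"},
)
```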

## Integration Examples

<details>
@@ -200,6 +245,31 @@ result = crew.kickoff()

</details>

## Feedback Collection

The SDK includes a feedback collection tool (`meta_collect_tool_feedback`) that lets users submit feedback about their experience with StackOne tools. The tool is included in the toolset automatically and is designed to be invoked by AI agents only after the user has given permission.

```python
from stackone_ai import StackOneToolSet

toolset = StackOneToolSet()

# Get the feedback tool (included with "meta_*" pattern or all tools)
tools = toolset.get_tools("meta_*")
feedback_tool = tools.get_tool("meta_collect_tool_feedback")

# Submit feedback (typically invoked by AI after user consent)
result = feedback_tool.call(
feedback="The HRIS tools are working great! Very fast response times.",
account_id="acc_123456",
tool_names=["hris_list_employees", "hris_get_employee"]
)
```

**Important**: The AI agent should always ask for user permission before submitting feedback:
- "Are you ok with sending feedback to StackOne? The LLM will take care of sending it."
- Only call the tool after the user explicitly agrees.

## Meta Tools (Beta)

Meta tools enable dynamic tool discovery and execution without hardcoding tool names:
6 changes: 5 additions & 1 deletion stackone_ai/__init__.py
@@ -3,5 +3,9 @@
from .models import StackOneTool, Tools
from .toolset import StackOneToolSet

__all__ = ["StackOneToolSet", "StackOneTool", "Tools"]
__all__ = [
"StackOneToolSet",
"StackOneTool",
"Tools",
]
__version__ = "0.3.2"
5 changes: 5 additions & 0 deletions stackone_ai/feedback/__init__.py
@@ -0,0 +1,5 @@
"""Feedback collection tools for StackOne."""

from .tool import create_feedback_tool

__all__ = ["create_feedback_tool"]
238 changes: 238 additions & 0 deletions stackone_ai/feedback/tool.py
@@ -0,0 +1,238 @@
"""Feedback collection tool for StackOne."""

# TODO: Remove when Python 3.9 support is dropped
from __future__ import annotations

import json

from pydantic import BaseModel, Field, field_validator

from ..models import (
    ExecuteConfig,
    JsonDict,
    ParameterLocation,
    StackOneError,
    StackOneTool,
    ToolParameters,
)


class FeedbackInput(BaseModel):
"""Input schema for feedback tool."""

feedback: str = Field(..., min_length=1, description="User feedback text")
account_id: str | list[str] = Field(..., description="Account identifier(s) - single ID or list of IDs")
tool_names: list[str] = Field(..., min_length=1, description="List of tool names")

@field_validator("feedback")
@classmethod
def validate_feedback(cls, v: str) -> str:
"""Validate that feedback is non-empty after trimming."""
trimmed = v.strip()
if not trimmed:
raise ValueError("Feedback must be a non-empty string")
return trimmed

@field_validator("account_id")
@classmethod
def validate_account_id(cls, v: str | list[str]) -> list[str]:
"""Validate and normalize account ID(s) to a list."""
if isinstance(v, str):
trimmed = v.strip()
if not trimmed:
raise ValueError("Account ID must be a non-empty string")
return [trimmed]

if isinstance(v, list):
if not v:
raise ValueError("At least one account ID is required")
cleaned = [str(item).strip() for item in v if str(item).strip()]
if not cleaned:
raise ValueError("At least one valid account ID is required")
return cleaned

raise ValueError("Account ID must be a string or list of strings")

@field_validator("tool_names")
@classmethod
def validate_tool_names(cls, v: list[str]) -> list[str]:
"""Validate and clean tool names."""
cleaned = [name.strip() for name in v if name.strip()]
if not cleaned:
raise ValueError("At least one tool name is required")
return cleaned


class FeedbackTool(StackOneTool):
"""Extended tool for collecting feedback with enhanced validation."""

def execute(
self, arguments: str | JsonDict | None = None, *, options: JsonDict | None = None
) -> JsonDict:
"""
Execute the feedback tool with enhanced validation.

If multiple account IDs are provided, sends the same feedback to each account individually.

Args:
arguments: Tool arguments as string or dict
options: Execution options

Returns:
Combined response from all API calls

Raises:
StackOneError: If validation or API call fails
"""
try:
# Parse input
if isinstance(arguments, str):
raw_params = json.loads(arguments)
else:
raw_params = arguments or {}

# Validate with Pydantic
parsed_params = FeedbackInput(**raw_params)

# Get list of account IDs (already normalized by validator)
account_ids = parsed_params.account_id
feedback = parsed_params.feedback
tool_names = parsed_params.tool_names

# If only one account ID, use the parent execute method
if len(account_ids) == 1:
validated_arguments = {
"feedback": feedback,
"account_id": account_ids[0],
"tool_names": tool_names,
}
return super().execute(validated_arguments, options=options)

# Multiple account IDs - send to each individually
results = []
errors = []

for account_id in account_ids:
try:
validated_arguments = {
"feedback": feedback,
"account_id": account_id,
"tool_names": tool_names,
}
result = super().execute(validated_arguments, options=options)
results.append({
"account_id": account_id,
"status": "success",
"result": result
})
except Exception as exc:
error_msg = str(exc)
errors.append({
"account_id": account_id,
"status": "error",
"error": error_msg
})
results.append({
"account_id": account_id,
"status": "error",
"error": error_msg
})

# Return combined results
return {
"message": f"Feedback sent to {len(account_ids)} account(s)",
"total_accounts": len(account_ids),
"successful": len([r for r in results if r["status"] == "success"]),
"failed": len(errors),
"results": results
}

except json.JSONDecodeError as exc:
raise StackOneError(f"Invalid JSON in arguments: {exc}") from exc
except ValueError as exc:
raise StackOneError(f"Validation error: {exc}") from exc
except Exception as error:
if isinstance(error, StackOneError):
raise
raise StackOneError(f"Error executing feedback tool: {error}") from error


def create_feedback_tool(
    api_key: str,
    account_id: str | None = None,
    base_url: str = "https://api.stackone.com",
) -> FeedbackTool:
    """
    Create a feedback collection tool.

    Args:
        api_key: API key for authentication
        account_id: Optional account ID
        base_url: Base URL for the API

    Returns:
        FeedbackTool configured for feedback collection
    """
    name = "meta_collect_tool_feedback"
    description = (
        "Collects user feedback on StackOne tool performance. "
        "First ask the user, \"Are you ok with sending feedback to StackOne?\" "
        "and mention that the LLM will take care of sending it. "
        "Call this tool only when the user explicitly answers yes."
    )

    parameters = ToolParameters(
        type="object",
        properties={
            "account_id": {
                "oneOf": [
                    {
                        "type": "string",
                        "description": 'Single account identifier (e.g., "acc_123456")',
                    },
                    {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "List of account identifiers for multiple accounts",
                    },
                ],
                "description": "Account identifier(s) - single ID or list of IDs",
            },
            "feedback": {
                "type": "string",
                "description": "Verbatim feedback from the user about their experience with StackOne tools.",
            },
            "tool_names": {
                "type": "array",
                "items": {
                    "type": "string",
                },
                "description": "Array of tool names being reviewed",
            },
        },
    )

    execute_config = ExecuteConfig(
        name=name,
        method="POST",
        url=f"{base_url}/ai/tool-feedback",
        body_type="json",
        parameter_locations={
            "feedback": ParameterLocation.BODY,
            "account_id": ParameterLocation.BODY,
            "tool_names": ParameterLocation.BODY,
        },
    )

    # Create instance by calling parent class __init__ directly since FeedbackTool is a subclass
    tool = FeedbackTool.__new__(FeedbackTool)
    StackOneTool.__init__(
        tool,
        description=description,
        parameters=parameters,
        _execute_config=execute_config,
        _api_key=api_key,
        _account_id=account_id,
    )

    return tool
8 changes: 6 additions & 2 deletions stackone_ai/meta_tools.py
@@ -193,7 +193,9 @@ def __init__(self) -> None:
                _account_id=None,
            )

        def execute(self, arguments: str | JsonDict | None = None) -> JsonDict:
        def execute(
            self, arguments: str | JsonDict | None = None, *, options: JsonDict | None = None
        ) -> JsonDict:
            return execute_filter(arguments)

**Comment on line +197** (cubic-dev-ai bot, Oct 16, 2025):

Adding options to MetaExecuteTool.execute without forwarding it causes any supplied feedback metadata to be discarded instead of reaching the underlying tool or implicit feedback logging. Please propagate options through execute_tool and into tool.execute.

**Comment** (Copilot AI, Oct 16, 2025):

These overrides bypass StackOneTool.execute, so implicit feedback instrumentation and option propagation (feedback_session_id, etc.) are lost for meta tool calls. Either remove the override and let base execute run (after adapting underlying functions to accept the parsed kwargs), or forward options and manually invoke the implicit feedback manager similar to StackOneTool.execute before returning.

Suggested change:

    -            return execute_filter(arguments)
    +            # Forward options and manually invoke implicit feedback instrumentation if available
    +            result = execute_filter(arguments)
    +            # If feedback instrumentation is available, invoke it here
    +            # For example:
    +            # if hasattr(self, "_implicit_feedback_manager") and options is not None:
    +            #     self._implicit_feedback_manager.handle_feedback(self, arguments, result, options)
    +            # Propagate options if needed (e.g., feedback_session_id)
    +            # If options are needed in the result, add them here
    +            return result

    return MetaSearchTool()
@@ -272,7 +274,9 @@ def __init__(self) -> None:
                _account_id=None,
            )

        def execute(self, arguments: str | JsonDict | None = None) -> JsonDict:
        def execute(
            self, arguments: str | JsonDict | None = None, *, options: JsonDict | None = None
        ) -> JsonDict:
            return execute_tool(arguments)
**Comment on lines +277 to 280** (Copilot AI, Oct 16, 2025):

These overrides bypass StackOneTool.execute, so implicit feedback instrumentation and option propagation (feedback_session_id, etc.) are lost for meta tool calls. Either remove the override and let base execute run (after adapting underlying functions to accept the parsed kwargs), or forward options and manually invoke the implicit feedback manager similar to StackOneTool.execute before returning.

    return MetaExecuteTool()