diff --git a/examples/Running a Test Suite on an External Function.ipynb b/examples/Running a Test Suite on an External Function.ipynb deleted file mode 100644 index 454cbb70de..0000000000 --- a/examples/Running a Test Suite on an External Function.ipynb +++ /dev/null @@ -1,305 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "fc3073e1-a1e8-452f-bcc6-20466ba8e747", - "metadata": {}, - "source": [ - "# Running a Test Suite on an External Function\n", - "\n", - "## Context\n", - "Vellum Test Suites provide a framework for performing quantitative evaluation on AI applications at scale. You can use them to measure the quality of Prompts, Workflows, and even custom functions defined outside of Vellum in your codebase!\n", - "\n", - "This example details how to use Vellum Test Suites to run evals on an external function.\n" - ] - }, - { - "cell_type": "markdown", - "id": "beb623fd-55fb-4f9f-8cd9-c095d2bef043", - "metadata": {}, - "source": [ - "## Prerequisites\n", - "1. A Vellum account\n", - "2. A Vellum API key, which can be created at [https://app.vellum.ai/api-keys](https://app.vellum.ai/api-keys)\n", - "3. Install the `vellum-ai` pip package. 
We'll also use the built-in `getpass` module in this notebook to securely enter your Vellum API key. Note that `getpass` is part of Python's standard library, so it does not need to be installed via pip.\n", - "\n", - "\n" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "cc8ea10c-d5f0-4557-972b-7f9aa293ad1a", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Looking in indexes: https://pypi.org/simple, https://_json_key_base64:****@us-central1-python.pkg.dev/vocify-prod/vocify/simple/\n", - "Requirement already satisfied: vellum-ai in /Users/noaflaherty/Repos/vellum-ai/vellum-client-python/venv/lib/python3.11/site-packages (0.5.0)\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.2\u001b[0m\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n" - ] - } - ], - "source": [ - "!pip install vellum-ai\n" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "f61c3873-bf84-4a61-b1a0-36089704c360", - "metadata": {}, - "outputs": [ - { - "name": "stdin", - "output_type": "stream", - "text": [ - " ········\n" - ] - } - ], - "source": [ - "from getpass import getpass\n", - "\n", - "VELLUM_API_KEY = getpass()" - ] - }, - { - "cell_type": "markdown", - "id": "4bffb358-dd73-4ca0-bf5d-ccbee73e1b7e", - "metadata": {}, - "source": [ - "## Test Suite Set Up\n", - "To run evals on your external function, you must first configure a Test Suite through the Vellum web application at [https://app.vellum.ai/test-suites](https://app.vellum.ai/test-suites).\n", - "\n", - "Note that the Test Suite's \"Execution 
Interface\" must match that of the function that you'd like to evaluate. For example, if your function looks like:\n", - "\n", - "```python\n", - "def my_function(arg_1: str, arg_2: str) -> str:\n", - " pass\n", - "```\n", - "\n", - "Then you will want your Test Suite's Execution interface to look like this:\n", - "![Test Suite Execution Interface](images/test-suite-execution-interface.png)" - ] - }, - { - "cell_type": "markdown", - "id": "f97d9f53-f57d-4db9-9a11-06fb6f689a99", - "metadata": {}, - "source": [ - "## Getting Started\n", - "\n", - "Now that everything is set up, it's time to write some code! First, we need to define the function whose output we want to evaluate. Here's how we can actually invoke the Test Suite against our function.\n", - "\n", - "Here we're using a Vellum Workflow as an example, but this code could do anything, including calling a Prompt Chain made via another\n", - "third-party library." - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "46070a6d-c661-49ea-b15d-0123fc8c8da5", - "metadata": {}, - "outputs": [], - "source": [ - "from vellum.types.named_test_case_variable_value_request import NamedTestCaseVariableValueRequest, NamedTestCaseStringVariableValueRequest\n", - "from vellum.types.test_case_variable_value import TestCaseVariableValue\n", - "\n", - "def external_execution(inputs: list[TestCaseVariableValue]) -> list[NamedTestCaseVariableValueRequest]:\n", - " output_value = \"\".join([variable.value for variable in inputs])\n", - " output = NamedTestCaseStringVariableValueRequest(\n", - " type=\"STRING\",\n", - " value=output_value,\n", - " name=\"output\"\n", - " )\n", - " return [output]" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "95384013-0636-42db-8d33-4f641d4d7e75", - "metadata": {}, - "outputs": [ - { - "name": "stdin", - "output_type": "stream", - "text": [ - " external-eval-example\n" - ] - } - ], - "source": [ - "# Ether the Test Suite's ID or name\n", - 
"TEST_SUITE_IDENTIFIER = input()" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "57b72cff-0c3e-4fe5-ab7a-7f03b4bf04ad", - "metadata": {}, - "outputs": [], - "source": [ - "from vellum.client import Vellum\n", - "from vellum.evaluations import VellumTestSuite\n", - "\n", - "\n", - "# Create a new VellumTestSuite object\n", - "client = Vellum(api_key=VELLUM_API_KEY)\n", - "test_suite = VellumTestSuite(TEST_SUITE_IDENTIFIER, client=client)" - ] - }, - { - "cell_type": "markdown", - "id": "831c1fd7-2f48-49ae-a941-3f0e3a721987", - "metadata": {}, - "source": [ - "## Running Evals\n", - "\n", - "Here is where we actually trigger the Test Suite and pass in our executable function." - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "id": "421c76de-20f9-4b11-bd4e-f6787148d695", - "metadata": {}, - "outputs": [], - "source": [ - "# Run the external execution\n", - "results = test_suite.run_external(executable=external_execution)" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "2bd38cf7-78bf-4cf6-996d-31f22530efa2", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[TestSuiteRunMetricNumberOutput(value=1.0, type='NUMBER', name='score'),\n", - " TestSuiteRunMetricNumberOutput(value=1.0, type='NUMBER', name='score'),\n", - " TestSuiteRunMetricNumberOutput(value=0.0, type='NUMBER', name='score')]" - ] - }, - "execution_count": 7, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# Filter down to a specific metric and a specific output that it produces.\n", - "results.get_metric_outputs(\"Exact Match\", \"score\")" - ] - }, - { - "cell_type": "markdown", - "id": "54aaa843-af86-48bd-9329-ced9280c740b", - "metadata": {}, - "source": [ - "## Operating on the Results\n", - "\n", - "Above we use the `get_metric_outputs` function to retrieve all `score` outputs for the `Exact Match` metric.\n", - "\n", - "Note that under the hood, this function calls `wait_until_complete` to wait until the 
Test Suite Run has finished running.\n", - "You can also call this function explicitly ahead of time if you like.\n", - "\n", - "`get_metric_outputs` is the primary way to interact with the outputs of a specified metric. With it, you can\n", - "perform a variety of assertions to enforce whatever quality thresholds you like.\n", - "\n", - "If you want to operate directly on the raw executions for ultimate flexibility, use `results.all_executions`." - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "c98609c5-754e-4e45-9155-f9a5e52f20e4", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Do all Test Cases pass? No\n", - "66.66666666666666% of Test Cases pass. Acceptable? Yes\n", - "Is the average score acceptable? Yes\n", - "Is the minimum score acceptable? No\n", - "Is the maximum regressing? No\n" - ] - }, - { - "data": { - "text/plain": [ - "[VellumTestSuiteRunExecution(id='ffc8102f-9e8b-4a03-9cdc-9e6fc9ab7928', test_case_id='99971a73-429d-4a28-9003-afbe5cadb868', outputs=[TestSuiteRunExecutionStringOutput(name='output', type='STRING', value='Hello, world!', output_variable_id='c3f48fd5-6df7-4116-bd69-fb624d8d7d88')], metric_results=[TestSuiteRunExecutionMetricResult(metric_id='c4ac96a5-2101-4e1e-8dfb-3fccdc1ebde0', outputs=[TestSuiteRunMetricNumberOutput(value=1.0, type='NUMBER', name='score'), TestSuiteRunMetricNumberOutput(value=1.0, type='NUMBER', name='normalized_score')], metric_label='Exact Match', metric_definition=TestSuiteRunExecutionMetricDefinition(id='9a8a4c32-0258-41be-beac-063628fe50e6', label='Exact Match', name='exact-match'))]),\n", - " VellumTestSuiteRunExecution(id='fbc9263a-3ae6-4225-ad6e-cc9215c0f758', test_case_id='d4e6885c-4d10-4099-bc2f-9e8dff37c4c2', outputs=[TestSuiteRunExecutionStringOutput(name='output', type='STRING', value='Goodbye cruel, world...', output_variable_id='c3f48fd5-6df7-4116-bd69-fb624d8d7d88')], 
metric_results=[TestSuiteRunExecutionMetricResult(metric_id='c4ac96a5-2101-4e1e-8dfb-3fccdc1ebde0', outputs=[TestSuiteRunMetricNumberOutput(value=1.0, type='NUMBER', name='score'), TestSuiteRunMetricNumberOutput(value=1.0, type='NUMBER', name='normalized_score')], metric_label='Exact Match', metric_definition=TestSuiteRunExecutionMetricDefinition(id='9a8a4c32-0258-41be-beac-063628fe50e6', label='Exact Match', name='exact-match'))]),\n", - " VellumTestSuiteRunExecution(id='fcf54a22-8d52-4d77-9c8a-75cb85cb30a0', test_case_id='3fdb81b7-2147-42c8-92b2-b8f322ad9853', outputs=[TestSuiteRunExecutionStringOutput(name='output', type='STRING', value='Failingtest', output_variable_id='c3f48fd5-6df7-4116-bd69-fb624d8d7d88')], metric_results=[TestSuiteRunExecutionMetricResult(metric_id='c4ac96a5-2101-4e1e-8dfb-3fccdc1ebde0', outputs=[TestSuiteRunMetricNumberOutput(value=0.0, type='NUMBER', name='score'), TestSuiteRunMetricNumberOutput(value=0.0, type='NUMBER', name='normalized_score')], metric_label='Exact Match', metric_definition=TestSuiteRunExecutionMetricDefinition(id='9a8a4c32-0258-41be-beac-063628fe50e6', label='Exact Match', name='exact-match'))])]" - ] - }, - "execution_count": 8, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "def print_result(msg: str, result: bool) -> None:\n", - " print(msg, \"Yes\" if result else \"No\")\n", - "\n", - "# Example of asserting that every Test Cases passes\n", - "all_test_cases_pass = all([result.value == 1.0 for result in results.get_metric_outputs(\"exact-match\", \"score\")])\n", - "print_result(\"Do all Test Cases pass?\", all_test_cases_pass)\n", - "\n", - "# Example asserting that at least 50% of results have a score above a specified threshold\n", - "num_test_cases_passing = results.get_count_metric_outputs(\"exact-match\", \"score\", predicate=lambda x: x.value >= 0.5)\n", - "num_test_cases_total = results.get_count_metric_outputs(\"exact-match\", \"score\")\n", - "percent_test_cases_passing = 
num_test_cases_passing / num_test_cases_total\n", - "print_result(f\"{percent_test_cases_passing * 100}% of Test Cases pass. Acceptable?\", percent_test_cases_passing >= 0.5)\n", - "\n", - "# Example of asserting that the average score is greater than a specified threshold\n", - "avg_score_acceptable = results.get_mean_metric_output(\"exact-match\", \"score\") > 0.5\n", - "print_result(\"Is the average score acceptable?\", avg_score_acceptable)\n", - "\n", - "# Example of asserting that the min score is greater than a specified threshold\n", - "min_score_acceptable = results.get_min_metric_output(\"exact-match\", \"score\") > 0.5\n", - "print_result(\"Is the minimum score acceptable?\", min_score_acceptable)\n", - "\n", - "# Example of asserting that the max score is greater than a specified threshold\n", - "max_score_acceptable = results.get_max_metric_output(\"exact-match\", \"score\") > 0.75\n", - "print_result(\"Is the maximum score acceptable?\", max_score_acceptable)\n", - "\n", - "# Print out all results\n", - "results.all_executions" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.9" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/examples/Synthetic Conversation Generation.ipynb b/examples/Synthetic Conversation Generation.ipynb deleted file mode 100644 index c3b39865c5..0000000000 --- a/examples/Synthetic Conversation Generation.ipynb +++ /dev/null @@ -1,373 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "908e1b66-9e2b-43c9-8bbb-251cae8dec24", - "metadata": {}, - "source": [ - "# Synthetic Conversation Generation\n", - "\n", - "## Context\n", - "\n", - "When building an AI-powered chatbot, it's often 
helpful to have a large number of realistic conversations between a human and the AI\n", - "for testing purposes. Given the conversation up until a point, you can verify whether your AI responds as expected to the latest\n", - "user message.\n", - "\n", - "For example, if you have a conversation with three conversation turns (i.e. the user and AI have responded to each other 3 times), then you\n", - "can simulate how changes to the AI system affect the AI's response on turn 4.\n", - "\n", - "## The Problem\n", - "\n", - "However, there are two problems with this.\n", - "1. Collecting a large number of realistic conversations can be tricky, making it difficult to simulate your Chatbot's behavior over a wide variety of real-world scenarios; and\n", - "2. While you can simulate the _most recent_ AI message, you cannot easily test changes that would have affected the AI's _messages up until that point_.\n", - "\n", - "To solve for both of these problems, this guide shows you how you can use two Vellum Workflows to \"talk to\" one another and generate a wide set of synthetic conversations.\n", - "\n", - "One Workflow represents the AI Chatbot itself, and the other represents a user chatting into the system.\n", - "\n", - "## Prerequisites\n", - "\n", - "This guide assumes the following prerequisites:\n", - "1. You have a Vellum account and have created a Vellum API key\n", - "2. You know how to run and interact with a Jupyter notebook\n", - "3. You've created a **Vellum Workflow representing your AI chatbot** that accepts a `CHAT_HISTORY` input variable called `chat_history`. It should have a Final Output Node of type `STRING` called `final-output`.\n", - "4. You've created another **Vellum Workflow representing your user** that accepts a `CHAT_HISTORY` input variable called `chat_history`. It should have a Final Output Node of type `STRING` called `final-output`.\n", - "5. 
You've already done manual testing on your User Workflow to ensure that it behaves similarly to your customers.\n", - "6. You've updated the csv file located at `datasets/seed_user_messages.csv`. There's one row per conversation you want to generate, and one initial or \"seed\" user message per conversation.\n", - "\n", - "## Solution\n", - "In the rest of this guide, we'll implement everything needed to generate synthetic conversations that could then be uploaded into a Vellum Test Suite for quantitative evaluation of conversation quality." - ] - }, - { - "cell_type": "markdown", - "id": "f4c8d1d8-c732-40cd-8e6c-84d491d87839", - "metadata": {}, - "source": [ - "## Getting Started\n", - "\n", - "Install dependencies and securely enter your Vellum API key." - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "63466cce-3b46-403b-8a1e-ff4572fd062a", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Looking in indexes: https://pypi.org/simple, https://_json_key_base64:****@us-central1-python.pkg.dev/vocify-prod/vocify/simple/\n", - "Requirement already satisfied: vellum-ai in /Users/noaflaherty/Repos/vellum-ai/vellum-client-python/venv/lib/python3.11/site-packages (0.5.0)\n", - "\u001B[1m[\u001B[0m\u001B[34;49mnotice\u001B[0m\u001B[1;39;49m]\u001B[0m\u001B[39;49m A new release of pip is available: \u001B[0m\u001B[31;49m23.2.1\u001B[0m\u001B[39;49m -> \u001B[0m\u001B[32;49m24.0\u001B[0m\n", - "\u001B[1m[\u001B[0m\u001B[34;49mnotice\u001B[0m\u001B[1;39;49m]\u001B[0m\u001B[39;49m To update, run: \u001B[0m\u001B[32;49mpip install --upgrade pip\u001B[0m\n" - ] - } - ], - "source": [ - "!pip install vellum-ai pandas" - ] - }, - { - "cell_type": "code", - 
"execution_count": 2, - "id": "a04fff2a-fa94-4c2c-9a93-1db4f0a39275", - "metadata": {}, - "outputs": [ - { - "name": "stdin", - "output_type": "stream", - "text": [ - " ········\n" - ] - } - ], - "source": [ - "from getpass import getpass\n", - "\n", - "VELLUM_API_KEY = getpass()" - ] - }, - { - "cell_type": "markdown", - "id": "f16975a2-22fd-4d34-a2f4-724ea691e40c", - "metadata": {}, - "source": [ - "Read in the csv consisting of seed user messages.\n", - "\n", - "Review the printed results to ensure your seed messages look good; if they don't, edit the csv manually before proceeding." - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "6de7a41b-c965-4ceb-8e8b-704427a71e36", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "0 What is your refund policy?\n", - "1 What time are you open until?\n", - "2 How much does a one year warranty cost?\n", - "Name: 0, dtype: object" - ] - }, - "execution_count": 3, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "import pandas as pd\n", - "\n", - "df = pd.read_csv(\"datasets/seed_user_messages.csv\", header=None)\n", - "\n", - "initial_user_messages = df[0]\n", - "initial_user_messages" - ] - }, - { - "cell_type": "markdown", - "id": "4006f9da-6137-4b31-b9fd-b4e60c7fd7b0", - "metadata": {}, - "source": [ - "Here we define some helper functions that we'll execute later. Notice that we use our `async` Python client so that we can run API calls in parallel later.\n", - "\n", - "Here you'll have to provide the `name` of your two Vellum Workflow Deployments. You'll also provide how many \"conversation turns\" you want to run per seed user message." 
- ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "5c9e03a4-51ae-42ad-bbcd-356eea2c4929", - "metadata": {}, - "outputs": [ - { - "name": "stdin", - "output_type": "stream", - "text": [ - " chat-history-manipulation\n", - " chat-history-manipulation\n", - " 3\n" - ] - } - ], - "source": [ - "AI_CHATBOT_WORKFLOW_DEPLOYMENT_NAME = input()\n", - "SIMULATED_USER_WORKFLOW_DEPLOYMENT_NAME = input()\n", - "DEFAULT_NUM_CONVERSATION_TURNS = int(input())" - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "a9a5eb2f-f699-4927-ba3d-fa11f1e2667f", - "metadata": {}, - "outputs": [], - "source": [ - "from typing import List\n", - "\n", - "from vellum.client import AsyncVellum\n", - "import vellum.types as types\n", - "\n", - "client = AsyncVellum(api_key=VELLUM_API_KEY)\n", - "\n", - "\n", - "async def invoke_chat_workflow(\n", - " workflow_name: str,\n", - " chat_history: List[types.ChatMessageRequest],\n", - " output_name: str = \"final-output\"\n", - ") -> str:\n", - " \"\"\"\n", - " A helper for invoking a chat-based Workflow Deployment.\n", - " \n", - " Feel free to pass in other input variable values if your Workflow expects them.\n", - " \"\"\"\n", - " \n", - " result = await client.execute_workflow(\n", - " workflow_deployment_name=workflow_name,\n", - " inputs=[\n", - " types.WorkflowRequestInputRequest_ChatHistory(\n", - " type=\"CHAT_HISTORY\",\n", - " name=\"chat_history\",\n", - " value=chat_history,\n", - " ),\n", - " ],\n", - " )\n", - " \n", - "\n", - " if result.data.state == \"REJECTED\":\n", - " raise Exception(result.data.error.message)\n", - "\n", - " response = next(\n", - " filter(\n", - " lambda output: output.name == output_name and output.type == \"STRING\",\n", - " result.data.outputs\n", - " )\n", - " ).value\n", - "\n", - " return response\n", - "\n", - "\n", - "async def get_assistant_response(chat_history: List[types.ChatMessageRequest]) -> str:\n", - " return await 
invoke_chat_workflow(AI_CHATBOT_WORKFLOW_DEPLOYMENT_NAME, chat_history)\n", - "\n", - "\n", - "async def get_synthetic_user_response(chat_history: List[types.ChatMessageRequest]) -> str:\n", - " return await invoke_chat_workflow(SIMULATED_USER_WORKFLOW_DEPLOYMENT_NAME, chat_history)\n", - "\n", - "\n", - "async def generate_synthetic_conversation(seed_user_message: str, num_conversation_turns: int = DEFAULT_NUM_CONVERSATION_TURNS) -> List[types.ChatMessageRequest]:\n", - " assistant_chat_history: List[types.ChatMessageRequest] = [\n", - " types.ChatMessageRequest(role=\"USER\", text=seed_user_message)\n", - " ]\n", - " synthetic_user_chat_history: List[types.ChatMessageRequest] = [\n", - " types.ChatMessageRequest(role=\"ASSISTANT\", text=seed_user_message)\n", - " ]\n", - " \n", - " for _ in range(num_conversation_turns):\n", - " assistant_response = await get_assistant_response(assistant_chat_history)\n", - " \n", - " assistant_chat_history.append(\n", - " types.ChatMessageRequest(role=\"ASSISTANT\", text=assistant_response)\n", - " )\n", - " synthetic_user_chat_history.append(\n", - " types.ChatMessageRequest(role=\"USER\", text=assistant_response)\n", - " )\n", - " \n", - " # Generate the next synthetic user message so each conversation ends on a user turn\n", - " synthetic_user_response = await get_synthetic_user_response(synthetic_user_chat_history)\n", - " \n", - " assistant_chat_history.append(\n", - " types.ChatMessageRequest(role=\"USER\", text=synthetic_user_response)\n", - " )\n", - " synthetic_user_chat_history.append(\n", - " types.ChatMessageRequest(role=\"ASSISTANT\", text=synthetic_user_response)\n", - " )\n", - "\n", - " return assistant_chat_history" - ] - }, - { - "cell_type": "markdown", - "id": "0195cce8-a522-41ee-a411-b616b8ef2aed", - "metadata": {}, - "source": [ - "Here's where we actually invoke our Workflow Deployments. We do so in parallel for faster execution." 
- ] - }, - { - "cell_type": "code", - "execution_count": 9, - "id": "b42176ed-686b-49d5-9267-3d91312fc3ba", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[[ChatMessageRequest(text='What is your refund policy?', role='USER', content=None, source=None),\n", - " ChatMessageRequest(text='What is your refund policy?', role='ASSISTANT', content=None, source=None),\n", - " ChatMessageRequest(text='What is your refund policy?', role='USER', content=None, source=None),\n", - " ChatMessageRequest(text='What is your refund policy?', role='ASSISTANT', content=None, source=None),\n", - " ChatMessageRequest(text='What is your refund policy?', role='USER', content=None, source=None),\n", - " ChatMessageRequest(text='What is your refund policy?', role='ASSISTANT', content=None, source=None),\n", - " ChatMessageRequest(text='What is your refund policy?', role='USER', content=None, source=None)],\n", - " [ChatMessageRequest(text='What time are you open until?', role='USER', content=None, source=None),\n", - " ChatMessageRequest(text='What time are you open until?', role='ASSISTANT', content=None, source=None),\n", - " ChatMessageRequest(text='What time are you open until?', role='USER', content=None, source=None),\n", - " ChatMessageRequest(text='What time are you open until?', role='ASSISTANT', content=None, source=None),\n", - " ChatMessageRequest(text='What time are you open until?', role='USER', content=None, source=None),\n", - " ChatMessageRequest(text='What time are you open until?', role='ASSISTANT', content=None, source=None),\n", - " ChatMessageRequest(text='What time are you open until?', role='USER', content=None, source=None)],\n", - " [ChatMessageRequest(text='How much does a one year warranty cost?', role='USER', content=None, source=None),\n", - " ChatMessageRequest(text='How much does a one year warranty cost?', role='ASSISTANT', content=None, source=None),\n", - " ChatMessageRequest(text='How much does a one year warranty cost?', 
role='USER', content=None, source=None),\n", - " ChatMessageRequest(text='How much does a one year warranty cost?', role='ASSISTANT', content=None, source=None),\n", - " ChatMessageRequest(text='How much does a one year warranty cost?', role='USER', content=None, source=None),\n", - " ChatMessageRequest(text='How much does a one year warranty cost?', role='ASSISTANT', content=None, source=None),\n", - " ChatMessageRequest(text='How much does a one year warranty cost?', role='USER', content=None, source=None)]]" - ] - }, - "execution_count": 9, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "import asyncio\n", - "\n", - "synthetic_conversations = await asyncio.gather(\n", - " *[generate_synthetic_conversation(seed_message) for seed_message in initial_user_messages]\n", - ")\n", - "synthetic_conversations\n" - ] - }, - { - "cell_type": "markdown", - "id": "8a5a626e-0b31-4110-bcbe-c1ff1535986b", - "metadata": {}, - "source": [ - "Finally, we serialize the results and save them to a csv in this same directory.\n", - "\n", - "You can now upload this csv to a Vellum Test Suite if you want to perform quantitative assertions on the next turn of the conversation!" 
- ] - }, - { - "cell_type": "code", - "execution_count": 10, - "id": "409eeb63-ebfa-4b97-be18-0c7f2e7e260b", - "metadata": {}, - "outputs": [], - "source": [ - "import json\n", - "import csv\n", - "\n", - "serialized_synthetic_conversations = [\n", - " json.dumps([\n", - " {\n", - " \"text\": message.text,\n", - " \"role\": message.role,\n", - " }\n", - " for message in conversation\n", - " ])\n", - " for conversation in synthetic_conversations\n", - "]\n", - "\n", - "csv_contents = [\n", - " \"chat_history\",\n", - " *serialized_synthetic_conversations,\n", - "]\n", - "\n", - "pd.DataFrame(csv_contents).to_csv(\"synthetic_conversations.csv\", header=False, index=False, quoting=csv.QUOTE_NONE, sep='\\t')" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.4" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/examples/client/multimodal_inputs/main.ipynb b/examples/client/multimodal_inputs/main.ipynb deleted file mode 100644 index 3561a7f5cf..0000000000 --- a/examples/client/multimodal_inputs/main.ipynb +++ /dev/null @@ -1,184 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Vellum Multimodal Inputs Example\n", - "\n", - "This notebook demonstrates how to use the Vellum Python Client SDK to send multimodal inputs (PDF and image) to a Prompt Deployment. For more information on how to create and test Prompts in the Vellum Prompt Sandbox UI, see [Vellum Prompts - Multimodality](https://docs.vellum.ai/product/prompts/multimodality)." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "vscode": { - "languageId": "shellscript" - } - }, - "outputs": [], - "source": [ - "!pip install -r requirements.txt" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [], - "source": [ - "import os\n", - "from dotenv import load_dotenv\n", - "from vellum import (\n", - " ArrayChatMessageContent,\n", - " ChatHistoryInput,\n", - " ChatMessage,\n", - " DocumentChatMessageContent,\n", - " ImageChatMessageContent,\n", - " StringChatMessageContent,\n", - " Vellum,\n", - " VellumDocument,\n", - " VellumImage,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Initialize Vellum Client\n", - "Make sure you have set the `VELLUM_API_KEY` environment variable in a `.env` file in this directory." - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": {}, - "outputs": [], - "source": [ - "load_dotenv()\n", - "client = Vellum(api_key=os.environ[\"VELLUM_API_KEY\"])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## PDF Example\n", - "Let's send a PDF to the prompt and get a response." - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "'The PDF contains only the text \"Dummy PDF file\" displayed at the top of an otherwise blank page. 
It appears to be a simple placeholder or test document without any additional content, images, or formatting.'" - ] - }, - "execution_count": 9, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "pdf_link = \"https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf\"\n", - "response = client.execute_prompt(\n", - " prompt_deployment_name=\"pdfs-example-prompt\",\n", - " inputs=[\n", - " ChatHistoryInput(\n", - " name=\"chat_history\",\n", - " value=[\n", - " ChatMessage(\n", - " role=\"USER\",\n", - " content=ArrayChatMessageContent(\n", - " value=[\n", - " StringChatMessageContent(value=\"What's in the PDF?\"),\n", - " DocumentChatMessageContent(value=VellumDocument(src=pdf_link)),\n", - " ]\n", - " ),\n", - " )\n", - " ],\n", - " ),\n", - " ],\n", - ")\n", - "\n", - "response.outputs[0].value" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Image Example\n", - "Let's send an image to the prompt and get a response." - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "'The image shows a wispy, feathery cloud formation against a clear blue sky. The cloud appears to be elongated and somewhat vertical, with delicate white and light gray wisps that create an ethereal, almost flame-like shape as it stretches upward. 
This type of cloud formation might be a cirrus cloud, which typically forms at high altitudes and is made of ice crystals, giving it that distinctive wispy, feathery appearance.'" - ] - }, - "execution_count": 10, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "image_link = \"https://fastly.picsum.photos/id/53/200/300.jpg?hmac=KbEX4oNyVO15M-9S4xMsefrElB1uiO3BqnvVqPnhPgE\"\n", - "response = client.execute_prompt(\n", - " prompt_deployment_name=\"pdfs-example-prompt\",\n", - " inputs=[\n", - " ChatHistoryInput(\n", - " name=\"chat_history\",\n", - " value=[\n", - " ChatMessage(\n", - " role=\"USER\",\n", - " content=ArrayChatMessageContent(\n", - " value=[\n", - " StringChatMessageContent(value=\"What's in the image?\"),\n", - " ImageChatMessageContent(value=VellumImage(src=image_link)),\n", - " ]\n", - " ),\n", - " )\n", - " ],\n", - " ),\n", - " ],\n", - ")\n", - "\n", - "response.outputs[0].value" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.16" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/examples/client/multimodal_inputs/requirements.txt b/examples/client/multimodal_inputs/requirements.txt deleted file mode 100644 index a65a96ca2c..0000000000 --- a/examples/client/multimodal_inputs/requirements.txt +++ /dev/null @@ -1,2 +0,0 @@ -python-dotenv -vellum-ai diff --git a/examples/datasets/seed_user_messages.csv b/examples/datasets/seed_user_messages.csv deleted file mode 100644 index 5573456b5e..0000000000 --- a/examples/datasets/seed_user_messages.csv +++ /dev/null @@ -1,3 +0,0 @@ -What is your refund policy? -What time are you open until? -How much does a one year warranty cost? 
\ No newline at end of file diff --git a/examples/evals/__init__.py b/examples/evals/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/evals/github_actions/.github/workflows/eval.yml b/examples/evals/github_actions/.github/workflows/eval.yml deleted file mode 100644 index c7cd420115..0000000000 --- a/examples/evals/github_actions/.github/workflows/eval.yml +++ /dev/null @@ -1,37 +0,0 @@ -name: Run Evaluation - -on: - push: - branches: [main] - pull_request: - branches: [main] - -jobs: - evaluate: - runs-on: ubuntu-latest - steps: - - name: Checkout repo - uses: actions/checkout@v3 - - - name: Set up Python - uses: actions/setup-python@v4 - with: - python-version: "3.10" - - - name: Install uv - uses: astral-sh/setup-uv@v1 - with: - version: "latest" - - - name: Install dependencies - run: uv sync - - - name: Push workflow to Vellum - run: uv run vellum workflows push oracle - env: - VELLUM_API_KEY: ${{ secrets.VELLUM_API_KEY }} - - - name: Run evaluation on Vellum and report results - run: uv run python eval.py - env: - VELLUM_API_KEY: ${{ secrets.VELLUM_API_KEY }} diff --git a/examples/evals/github_actions/README.md b/examples/evals/github_actions/README.md deleted file mode 100644 index c9fee1844d..0000000000 --- a/examples/evals/github_actions/README.md +++ /dev/null @@ -1,42 +0,0 @@ -# GitHub Actions Evaluation Example - -This example demonstrates how to create an automated evaluation pipeline using GitHub Actions that: - -1. Pushes a Vellum workflow to the platform -2. Upserts test cases from JSON files -3. Runs evaluations using the VellumTestSuite -4.
Reports back results - -## Structure - -- `oracle` - Main workflow definition, a simple question-answering bot -- `test_cases/` - JSON files containing test cases -- `eval.py` - Main evaluation script -- `.github/workflows/eval.yml` - GitHub Actions workflow -- `pyproject.toml` - Project configuration -- `vellum.lock.json` - Vellum workspace configuration - -## Usage - -We recommend copying this directory as a new project. - -### Local Development - -```bash -# Install dependencies -uv sync - -# Run evaluation locally -uv run python eval.py -``` - -### GitHub Actions - -The evaluation runs automatically on: - -- Push to main branch -- Pull requests - -## Configuration - -Set the `VELLUM_API_KEY` secret in your GitHub repository settings at `https://github.com/{owner}/{repo}/settings/secrets/actions`. diff --git a/examples/evals/github_actions/__init__.py b/examples/evals/github_actions/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/evals/github_actions/eval.py b/examples/evals/github_actions/eval.py deleted file mode 100644 index 9f24868dcf..0000000000 --- a/examples/evals/github_actions/eval.py +++ /dev/null @@ -1,86 +0,0 @@ -#!/usr/bin/env python3 -""" -Evaluation script that upserts test cases, runs evaluation, and reports results.
-""" -import json -import logging -import os -from pathlib import Path -from uuid import uuid4 -from typing import List - -from vellum.client.types.test_suite_run_workflow_sandbox_exec_config_data_request import ( - TestSuiteRunWorkflowSandboxExecConfigDataRequest, -) -from vellum.client.types.test_suite_run_workflow_sandbox_exec_config_request import ( - TestSuiteRunWorkflowSandboxExecConfigRequest, -) -from vellum.client.types.test_suite_test_case_create_bulk_operation_request import ( - TestSuiteTestCaseCreateBulkOperationRequest, -) -from vellum.evaluations.resources import VellumTestSuiteRunResults -from vellum.workflows.vellum_client import create_vellum_client -from vellum_cli.config import load_vellum_cli_config - -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - - -def load_test_cases() -> List[dict]: - """Load test cases from JSON files.""" - test_cases = [] - test_cases_dir = Path(__file__).parent / "test_cases" - - for json_file in test_cases_dir.glob("*.json"): - with open(json_file) as f: - test_case = json.load(f) - test_cases.append({"data": test_case, "id": str(uuid4())}) - - return test_cases - - -def run_evaluation(): - """Run the evaluation using VellumTestSuite.""" - logger.info("Loading test cases...") - test_cases = load_test_cases() - - client = create_vellum_client() - config = load_vellum_cli_config() - workflow_sandbox_id = config.workflows[0].workflow_sandbox_id - - test_suite_id = os.getenv("VELLUM_TEST_SUITE_ID") - if not test_suite_id: - raise ValueError("`VELLUM_TEST_SUITE_ID` is not set") - - # Ideally, we support the following API to dynamically get a test suite id for a given - # Workflow Sandbox ID: - # - # workflow_sandbox = client.workflow_sandboxes.retrieve(workflow_sandbox_id) - # test_suite_id = workflow_sandbox.evaluation_reports[0].id - - # vellum push, but for test cases - client.test_suites.test_suite_test_cases_bulk( - test_suite_id, 
request=[TestSuiteTestCaseCreateBulkOperationRequest.model_validate(tc) for tc in test_cases] - ) - - logger.info("Running evaluation...") - test_suite_run = client.test_suite_runs.create( - test_suite_id=test_suite_id, - exec_config=TestSuiteRunWorkflowSandboxExecConfigRequest( - data=TestSuiteRunWorkflowSandboxExecConfigDataRequest(workflow_sandbox_id=workflow_sandbox_id) - ), - ) - - results = VellumTestSuiteRunResults(test_suite_run, client=client) - results.wait_until_complete() - - logger.info("Evaluation Results:") - logger.info(f"State: {results.state}") - logger.info(f"Total executions: {len(results.all_executions)}") - logger.info("Evaluation completed successfully!") - - return results - - -if __name__ == "__main__": - results = run_evaluation() diff --git a/examples/evals/github_actions/oracle/__init__.py b/examples/evals/github_actions/oracle/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/evals/github_actions/oracle/display/__init__.py b/examples/evals/github_actions/oracle/display/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/evals/github_actions/oracle/display/nodes/__init__.py b/examples/evals/github_actions/oracle/display/nodes/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/evals/github_actions/oracle/inputs.py b/examples/evals/github_actions/oracle/inputs.py deleted file mode 100644 index d9679b44b3..0000000000 --- a/examples/evals/github_actions/oracle/inputs.py +++ /dev/null @@ -1,8 +0,0 @@ -from typing import Optional - -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - query: str - context: Optional[str] = None diff --git a/examples/evals/github_actions/oracle/nodes/__init__.py b/examples/evals/github_actions/oracle/nodes/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/evals/github_actions/oracle/nodes/final_output.py 
b/examples/evals/github_actions/oracle/nodes/final_output.py deleted file mode 100644 index 5b87c8d0a6..0000000000 --- a/examples/evals/github_actions/oracle/nodes/final_output.py +++ /dev/null @@ -1,8 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode - -from .prompt_node import PromptNode - - -class FinalOutput(FinalOutputNode): - class Outputs(FinalOutputNode.Outputs): - value = PromptNode.Outputs.text diff --git a/examples/evals/github_actions/oracle/nodes/prompt_node.py b/examples/evals/github_actions/oracle/nodes/prompt_node.py deleted file mode 100644 index b9107bc9dd..0000000000 --- a/examples/evals/github_actions/oracle/nodes/prompt_node.py +++ /dev/null @@ -1,25 +0,0 @@ -from vellum import ChatMessagePromptBlock, JinjaPromptBlock -from vellum.workflows.nodes.displayable import InlinePromptNode - -from ..inputs import Inputs - - -class PromptNode(InlinePromptNode): - ml_model = "gpt-4o-mini" - prompt_inputs = { - "query": Inputs.query, - "context": Inputs.context, - } - blocks = [ - ChatMessagePromptBlock( - chat_role="USER", - blocks=[ - JinjaPromptBlock( - template=( - "Answer the following query: {{ inputs.query }}\n\n" - "Context: {{ inputs.context or 'No context provided' }}" - ) - ) - ], - ) - ] diff --git a/examples/evals/github_actions/oracle/workflow.py b/examples/evals/github_actions/oracle/workflow.py deleted file mode 100644 index 689e21a0cc..0000000000 --- a/examples/evals/github_actions/oracle/workflow.py +++ /dev/null @@ -1,13 +0,0 @@ -from vellum.workflows import BaseWorkflow -from vellum.workflows.state import BaseState - -from .inputs import Inputs -from .nodes.final_output import FinalOutput -from .nodes.prompt_node import PromptNode - - -class Oracle(BaseWorkflow[Inputs, BaseState]): - graph = PromptNode >> FinalOutput - - class Outputs(BaseWorkflow.Outputs): - response = FinalOutput.Outputs.value diff --git a/examples/evals/github_actions/pyproject.toml b/examples/evals/github_actions/pyproject.toml deleted file 
mode 100644 index dce1e5aee9..0000000000 --- a/examples/evals/github_actions/pyproject.toml +++ /dev/null @@ -1,13 +0,0 @@ -[project] -name = "vellum-eval-github-actions" -version = "0.1.0" -description = "GitHub Actions evaluation example for Vellum workflows" -authors = ["Vellum "] -license = "MIT" -requires-python = ">=3.10,<4.0" -dependencies = [ - "vellum-ai>=1.1.2", -] - -[tool.uv] -package = false diff --git a/examples/evals/github_actions/test_cases/basic_queries.json b/examples/evals/github_actions/test_cases/basic_queries.json deleted file mode 100644 index e5b685007c..0000000000 --- a/examples/evals/github_actions/test_cases/basic_queries.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "label": "Simple Question", - "input_values": [ - {"name": "query", "value": "What is the capital of France?"}, - {"name": "context", "value": "France is a country in Europe."} - ] -} diff --git a/examples/evals/github_actions/test_cases/complex_queries.json b/examples/evals/github_actions/test_cases/complex_queries.json deleted file mode 100644 index ce6135be09..0000000000 --- a/examples/evals/github_actions/test_cases/complex_queries.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "label": "Complex Analysis", - "input_values": [ - {"name": "query", "value": "Compare and contrast renewable vs fossil fuel energy sources"}, - {"name": "context", "value": "Energy sources can be categorized as renewable or non-renewable based on their ability to replenish naturally."} - ] -} diff --git a/examples/evals/github_actions/test_cases/edge_cases.json b/examples/evals/github_actions/test_cases/edge_cases.json deleted file mode 100644 index d113298334..0000000000 --- a/examples/evals/github_actions/test_cases/edge_cases.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "label": "Empty Query", - "input_values": [ - {"name": "query", "value": ""}, - {"name": "context", "value": "Some context"} - ] -} diff --git a/examples/evals/github_actions/vellum.lock.json b/examples/evals/github_actions/vellum.lock.json 
deleted file mode 100644 index 366bedc275..0000000000 --- a/examples/evals/github_actions/vellum.lock.json +++ /dev/null @@ -1,15 +0,0 @@ -{ - "version": "1.0", - "workflows": [ - { - "module": "oracle", - "workflow_sandbox_id": null, - "ignore": null, - "deployments": [], - "container_image_name": null, - "container_image_tag": null, - "workspace": "default", - "target_directory": null - } - ] -} diff --git a/examples/images/test-suite-execution-interface.png b/examples/images/test-suite-execution-interface.png deleted file mode 100644 index c2cb0eb360..0000000000 Binary files a/examples/images/test-suite-execution-interface.png and /dev/null differ diff --git a/examples/workflows/chatbot/README.md b/examples/workflows/chatbot/README.md deleted file mode 100644 index 2d7586f646..0000000000 --- a/examples/workflows/chatbot/README.md +++ /dev/null @@ -1,24 +0,0 @@ -# Chatbot (previous execution ID + SetState) - -This example is a small variant of the chatbot workflow that explicitly uses `SetStateNode` to append chat history on each run. Passing a `previous_execution_id` loads the prior state so history keeps growing across executions. - -## Flow - -1) `AppendUserMessage` appends the incoming user message to any loaded `chat_history`. -2) `Agent` generates an assistant reply. -3) `AppendAssistantMessage` appends the assistant reply to `chat_history`. -4) `FinalOutput` returns the full `chat_history` so you can see persistence. - -## Running locally (sandbox) - -```bash -poetry run python -m examples.workflows.chatbot.sandbox -``` - -## Running interactively with state persistence - -```bash -poetry run python -m examples.workflows.chatbot.chat -``` - -After the first run, copy the printed `previous_execution_id` to resume the conversation and see the accumulated `chat_history` emitted from `FinalOutput`. 
diff --git a/examples/workflows/chatbot/__init__.py b/examples/workflows/chatbot/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/workflows/chatbot/chat.py b/examples/workflows/chatbot/chat.py deleted file mode 100644 index 029d80231f..0000000000 --- a/examples/workflows/chatbot/chat.py +++ /dev/null @@ -1,55 +0,0 @@ -from dotenv import load_dotenv - -from vellum.workflows.emitters.vellum_emitter import VellumEmitter -from vellum.workflows.resolvers.resolver import VellumResolver - -from .inputs import Inputs -from .workflow import Workflow - - -def main(): - load_dotenv() - previous_execution_id = None - iterations = 1 - - while True: - print("--- New Message ---") - user_message = input(f"Your message ({iterations}): ").strip() - - if user_message.lower() in ["quit", "exit"]: - print("Goodbye!") - break - - if not user_message: - print("Please type a message!") - continue - - workflow = Workflow( - emitters=[VellumEmitter()], - resolvers=[VellumResolver()], # needed for sdk first - ) - - if previous_execution_id: - print(f"Resuming from previous execution ID: {previous_execution_id}") - terminal_event = workflow.run( - inputs=Inputs(user_message=user_message), - previous_execution_id=previous_execution_id, - ) - else: - print("Starting new conversation") - terminal_event = workflow.run(inputs=Inputs(user_message=user_message)) - - current_execution_id = None - - if terminal_event.name == "workflow.execution.fulfilled": - print("workflow.execution.fulfilled", terminal_event.outputs) - current_execution_id = str(terminal_event.span_id) - - print(f"Current Execution ID: {current_execution_id}") - previous_execution_id = current_execution_id - - iterations += 1 - - -if __name__ == "__main__": - main() diff --git a/examples/workflows/chatbot/inputs.py b/examples/workflows/chatbot/inputs.py deleted file mode 100644 index 3f0bc84fde..0000000000 --- a/examples/workflows/chatbot/inputs.py +++ /dev/null @@ -1,5 +0,0 @@ -from 
vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - user_message: str diff --git a/examples/workflows/chatbot/nodes/__init__.py b/examples/workflows/chatbot/nodes/__init__.py deleted file mode 100644 index 88453fab1c..0000000000 --- a/examples/workflows/chatbot/nodes/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -from .agent import Agent -from .append_assistant_message import AppendAssistantMessage -from .append_user_message import AppendUserMessage -from .final_output import FinalOutput - -__all__ = [ - "Agent", - "AppendAssistantMessage", - "AppendUserMessage", - "FinalOutput", -] diff --git a/examples/workflows/chatbot/nodes/agent.py b/examples/workflows/chatbot/nodes/agent.py deleted file mode 100644 index 823d110956..0000000000 --- a/examples/workflows/chatbot/nodes/agent.py +++ /dev/null @@ -1,35 +0,0 @@ -from vellum import ChatMessagePromptBlock, PlainTextPromptBlock, RichTextPromptBlock, VariablePromptBlock -from vellum.workflows.nodes.displayable.tool_calling_node.node import ToolCallingNode - -from ..inputs import Inputs -from ..state import State - - -class Agent(ToolCallingNode[State]): - ml_model = "gpt-5-responses" - prompt_inputs = { - "user_message": Inputs.user_message, - "chat_history": State.chat_history, - } - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="You are a helpful assistant. Use the prior conversation to stay consistent." 
- ) - ] - ) - ], - ), - # Insert prior turns before the new user turn - VariablePromptBlock(input_variable="chat_history"), - # Latest user message - ChatMessagePromptBlock( - chat_role="USER", - blocks=[RichTextPromptBlock(blocks=[VariablePromptBlock(input_variable="user_message")])], - ), - ] - max_prompt_iterations = 25 diff --git a/examples/workflows/chatbot/nodes/append_assistant_message.py b/examples/workflows/chatbot/nodes/append_assistant_message.py deleted file mode 100644 index cbe6f19926..0000000000 --- a/examples/workflows/chatbot/nodes/append_assistant_message.py +++ /dev/null @@ -1,15 +0,0 @@ -from vellum.client.types.chat_message import ChatMessage -from vellum.workflows.nodes.displayable.set_state_node.node import SetStateNode - -from ..state import State -from .agent import Agent - - -class AppendAssistantMessage(SetStateNode[State]): - operations = { - "chat_history": State.chat_history + ChatMessage(role="ASSISTANT", text=Agent.Outputs.text), - } - - class Display(SetStateNode.Display): - icon = "vellum:icon:database" - color = "purple" diff --git a/examples/workflows/chatbot/nodes/append_user_message.py b/examples/workflows/chatbot/nodes/append_user_message.py deleted file mode 100644 index fc0f6351fb..0000000000 --- a/examples/workflows/chatbot/nodes/append_user_message.py +++ /dev/null @@ -1,17 +0,0 @@ -from vellum.client.types.chat_message import ChatMessage -from vellum.workflows.nodes.displayable.set_state_node.node import SetStateNode - -from ..inputs import Inputs -from ..state import State - - -class AppendUserMessage(SetStateNode[State]): - operations = { - # Append the latest user message to the loaded chat history (if any) - "chat_history": State.chat_history - + ChatMessage(role="USER", text=Inputs.user_message), - } - - class Display(SetStateNode.Display): - icon = "vellum:icon:database" - color = "purple" diff --git a/examples/workflows/chatbot/nodes/final_output.py b/examples/workflows/chatbot/nodes/final_output.py deleted file 
mode 100644 index ed8ccdabb3..0000000000 --- a/examples/workflows/chatbot/nodes/final_output.py +++ /dev/null @@ -1,9 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode - -from ..state import State - - -class FinalOutput(FinalOutputNode[State, list]): - class Outputs(FinalOutputNode.Outputs): - # Return the full chat history so callers can observe persistence across executions - value = State.chat_history diff --git a/examples/workflows/chatbot/sandbox.py b/examples/workflows/chatbot/sandbox.py deleted file mode 100644 index b400ee56e0..0000000000 --- a/examples/workflows/chatbot/sandbox.py +++ /dev/null @@ -1,14 +0,0 @@ -from vellum.workflows.inputs import DatasetRow -from vellum.workflows.sandbox import WorkflowSandboxRunner - -from .inputs import Inputs -from .workflow import Workflow - -dataset = [ - DatasetRow(label="Scenario 1", inputs=Inputs(user_message="Hello")), -] - -runner = WorkflowSandboxRunner(workflow=Workflow(), dataset=dataset) - -if __name__ == "__main__": - runner.run() diff --git a/examples/workflows/chatbot/state.py b/examples/workflows/chatbot/state.py deleted file mode 100644 index 762661ac68..0000000000 --- a/examples/workflows/chatbot/state.py +++ /dev/null @@ -1,11 +0,0 @@ -from typing import Optional, Union - -from pydantic import Field - -from vellum import ChatMessage -from vellum.workflows.state import BaseState - - -class State(BaseState): - message_count: Optional[Union[float, int]] = 0 - chat_history: Optional[list[ChatMessage]] = Field(default_factory=list) diff --git a/examples/workflows/chatbot/workflow.py b/examples/workflows/chatbot/workflow.py deleted file mode 100644 index 728755959c..0000000000 --- a/examples/workflows/chatbot/workflow.py +++ /dev/null @@ -1,20 +0,0 @@ -from vellum.workflows import BaseWorkflow - -from .inputs import Inputs -from .nodes.agent import Agent -from .nodes.append_assistant_message import AppendAssistantMessage -from .nodes.append_user_message import AppendUserMessage -from 
.nodes.final_output import FinalOutput -from .state import State - - -class Workflow(BaseWorkflow[Inputs, State]): - # Flow: - # 1) AppendUserMessage: add the user message onto any loaded chat history - # 2) Agent: generate a reply - # 3) AppendAssistantMessage: add the assistant reply to chat history - # 4) FinalOutput: emit the full chat history (showing persisted state across executions) - graph = AppendUserMessage >> Agent >> AppendAssistantMessage >> FinalOutput - - class Outputs(BaseWorkflow.Outputs): - response = FinalOutput.Outputs.value diff --git a/examples/workflows/custom_base_node/README.md b/examples/workflows/custom_base_node/README.md deleted file mode 100644 index 1ddae6ee92..0000000000 --- a/examples/workflows/custom_base_node/README.md +++ /dev/null @@ -1,30 +0,0 @@ -# Custom Base Node - -This Workflow is an example of multiple nodes extending a user-defined Custom Node called `MockNetworkingClient`. This node simulates a client that makes a network call, whether HTTP, GraphQL, or another protocol. - -The node also imports logic from a module outside of the node, motivating the need for a [Custom Docker Image](https://docs.vellum.ai/developers/workflows-sdk/custom-container-images). The definition for this Docker image is found at `./utils/Dockerfile`. To rebuild locally, run: - -```bash -docker buildx build -f utils/Dockerfile --platform=linux/amd64 -t sdk-examples-utils:1.0.0 .
-``` - -Then, push the image to Vellum: - -```bash -vellum images push sdk-examples-utils:1.0.0 -``` - -Next, associate the newly created image with your workflow by adding the following to your `pyproject.toml`: - -```toml -[[tool.vellum.workflows]] -module = "custom_base_node" -container_image_name = "sdk-examples-utils" -container_image_tag = "1.0.0" -``` - -You can then push the Workflow itself: - -```bash -vellum workflows push custom_base_node -``` diff --git a/examples/workflows/custom_base_node/__init__.py b/examples/workflows/custom_base_node/__init__.py deleted file mode 100644 index 9442617720..0000000000 --- a/examples/workflows/custom_base_node/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# flake8: noqa: F401, F403 - -from .display import * diff --git a/examples/workflows/custom_base_node/chat.py b/examples/workflows/custom_base_node/chat.py deleted file mode 100644 index fa9fcc7474..0000000000 --- a/examples/workflows/custom_base_node/chat.py +++ /dev/null @@ -1,32 +0,0 @@ -from dotenv import load_dotenv - -from vellum.workflows.workflows.event_filters import root_workflow_event_filter - -from .inputs import Inputs -from .workflow import CustomBaseNodeWorkflow - - -def main(): - load_dotenv() - workflow = CustomBaseNodeWorkflow() - - while True: - query = input("Hi! I'm a multi-tool chatbot. What can I do for you today?
") - inputs = Inputs(query=query) - - stream = workflow.stream(inputs=inputs, event_filter=root_workflow_event_filter) - error = None - for event in stream: - if event.name == "node.execution.fulfilled": - print("Finished Node", event.node_definition) # noqa: T201 - elif event.name == "workflow.execution.fulfilled": - print(event.outputs["answer"]) # noqa: T201 - elif event.name == "workflow.execution.rejected": - error = event.error.message - - if error: - raise Exception(error) - - -if __name__ == "__main__": - main() diff --git a/examples/workflows/custom_base_node/display/__init__.py b/examples/workflows/custom_base_node/display/__init__.py deleted file mode 100644 index d38fb6d6a9..0000000000 --- a/examples/workflows/custom_base_node/display/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# flake8: noqa: F401, F403 - -from .nodes import * -from .workflow import * diff --git a/examples/workflows/custom_base_node/display/nodes/__init__.py b/examples/workflows/custom_base_node/display/nodes/__init__.py deleted file mode 100644 index 64fce7052f..0000000000 --- a/examples/workflows/custom_base_node/display/nodes/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -from .conditional_router import ConditionalRouterDisplay -from .echo_request import EchoRequestDisplay -from .error_node import ErrorNodeDisplay -from .exit_node import ExitNodeDisplay -from .fibonacci import FibonacciDisplay -from .get_temperature import GetTemperatureDisplay -from .my_prompt import MyPromptDisplay - -__all__ = [ - "ConditionalRouterDisplay", - "ErrorNodeDisplay", - "ExitNodeDisplay", - "MyPromptDisplay", - "GetTemperatureDisplay", - "FibonacciDisplay", - "EchoRequestDisplay", -] diff --git a/examples/workflows/custom_base_node/display/nodes/conditional_router.py b/examples/workflows/custom_base_node/display/nodes/conditional_router.py deleted file mode 100644 index c882e142c2..0000000000 --- a/examples/workflows/custom_base_node/display/nodes/conditional_router.py +++ /dev/null @@ -1,18 +0,0 @@ -from 
uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseNodeDisplay -from vellum_ee.workflows.display.nodes.types import PortDisplayOverrides - -from ...nodes.conditional_router import ConditionalRouter - - -class ConditionalRouterDisplay(BaseNodeDisplay[ConditionalRouter]): - port_displays = { - ConditionalRouter.Ports.exit: PortDisplayOverrides(id=UUID("4614e935-1a99-48c5-a3dd-46d4d97a883a")), - ConditionalRouter.Ports.get_temperature: PortDisplayOverrides(id=UUID("1ba9260f-39b3-48f0-a1c9-a042346fb961")), - ConditionalRouter.Ports.echo_request: PortDisplayOverrides(id=UUID("2834c6c6-0bbc-4d76-becd-de6abdbe0410")), - ConditionalRouter.Ports.fibonacci: PortDisplayOverrides(id=UUID("ce38e395-c5a4-4550-b950-8994dff781b2")), - ConditionalRouter.Ports.unknown: PortDisplayOverrides(id=UUID("c094a2db-3d76-46ee-8d44-eb255402e32d")), - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=0, y=0), width=None, height=None) diff --git a/examples/workflows/custom_base_node/display/nodes/echo_request.py b/examples/workflows/custom_base_node/display/nodes/echo_request.py deleted file mode 100644 index 67a001d3d2..0000000000 --- a/examples/workflows/custom_base_node/display/nodes/echo_request.py +++ /dev/null @@ -1,12 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseNodeDisplay -from vellum_ee.workflows.display.nodes.types import PortDisplayOverrides - -from ...nodes.echo_request import EchoRequest - - -class EchoRequestDisplay(BaseNodeDisplay[EchoRequest]): - port_displays = {EchoRequest.Ports.default: PortDisplayOverrides(id=UUID("615b3eb7-f1e3-4f23-9743-9a90044d9500"))} - display_data = NodeDisplayData(position=NodeDisplayPosition(x=0, y=0), width=None, height=None) diff --git a/examples/workflows/custom_base_node/display/nodes/error_node.py 
b/examples/workflows/custom_base_node/display/nodes/error_node.py deleted file mode 100644 index 688a17b161..0000000000 --- a/examples/workflows/custom_base_node/display/nodes/error_node.py +++ /dev/null @@ -1,16 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseErrorNodeDisplay - -from ...nodes.error_node import ErrorNode - - -class ErrorNodeDisplay(BaseErrorNodeDisplay[ErrorNode]): - name = "error-node" - node_id = UUID("92e401f1-110f-4edc-8bb0-cad879e1ea08") - label = "Error Node" - # error_output_id = UUID("None") - target_handle_id = UUID("c1b30829-76af-4d99-bf83-b030c551a7cf") - node_input_ids_by_name = {"error_source_input_id": UUID("02d8ab87-a237-4892-81bc-9e532c73064e")} - display_data = NodeDisplayData(position=NodeDisplayPosition(x=0, y=0), width=None, height=None) diff --git a/examples/workflows/custom_base_node/display/nodes/exit_node.py b/examples/workflows/custom_base_node/display/nodes/exit_node.py deleted file mode 100644 index dc2b290acf..0000000000 --- a/examples/workflows/custom_base_node/display/nodes/exit_node.py +++ /dev/null @@ -1,19 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from ...nodes.exit_node import ExitNode - - -class ExitNodeDisplay(BaseFinalOutputNodeDisplay[ExitNode]): - label = "Exit Node" - node_id = UUID("c69f1216-3000-4e69-afa4-4269c734c980") - target_handle_id = UUID("674e0ef5-8563-466f-9209-93bb4689082f") - output_name = "answer" - node_input_ids_by_name = {"node_input": UUID("72f69630-7210-47ae-be3b-0b6a04a4488f")} - output_display = { - ExitNode.Outputs.value: NodeOutputDisplay(id=UUID("708005f3-81e0-4886-95e9-ccb0d6c029d3"), name="value") - } - display_data = 
NodeDisplayData(position=NodeDisplayPosition(x=0, y=0), width=None, height=None) diff --git a/examples/workflows/custom_base_node/display/nodes/fibonacci.py b/examples/workflows/custom_base_node/display/nodes/fibonacci.py deleted file mode 100644 index 29fb8213cb..0000000000 --- a/examples/workflows/custom_base_node/display/nodes/fibonacci.py +++ /dev/null @@ -1,12 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseNodeDisplay -from vellum_ee.workflows.display.nodes.types import PortDisplayOverrides - -from ...nodes.fibonacci import Fibonacci - - -class FibonacciDisplay(BaseNodeDisplay[Fibonacci]): - port_displays = {Fibonacci.Ports.default: PortDisplayOverrides(id=UUID("bfb5c1da-5cf0-40bf-adce-c071f0d09d12"))} - display_data = NodeDisplayData(position=NodeDisplayPosition(x=0, y=0), width=None, height=None) diff --git a/examples/workflows/custom_base_node/display/nodes/get_temperature.py b/examples/workflows/custom_base_node/display/nodes/get_temperature.py deleted file mode 100644 index 355167cba4..0000000000 --- a/examples/workflows/custom_base_node/display/nodes/get_temperature.py +++ /dev/null @@ -1,14 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseNodeDisplay -from vellum_ee.workflows.display.nodes.types import PortDisplayOverrides - -from ...nodes.get_temperature import GetTemperature - - -class GetTemperatureDisplay(BaseNodeDisplay[GetTemperature]): - port_displays = { - GetTemperature.Ports.default: PortDisplayOverrides(id=UUID("3f774189-4e8e-45b6-a6eb-f62a7a96593c")) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=0, y=0), width=None, height=None) diff --git a/examples/workflows/custom_base_node/display/nodes/my_prompt.py b/examples/workflows/custom_base_node/display/nodes/my_prompt.py deleted file 
mode 100644 index 3ded105286..0000000000 --- a/examples/workflows/custom_base_node/display/nodes/my_prompt.py +++ /dev/null @@ -1,23 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.my_prompt import MyPrompt - - -class MyPromptDisplay(BaseInlinePromptNodeDisplay[MyPrompt]): - label = "My Prompt" - node_id = UUID("805fa2c2-56fb-400d-9ca2-486c753bc81d") - output_id = UUID("2bae09e7-f310-446d-9471-563446262d4b") - array_output_id = UUID("7a1fd6fd-070e-417a-919b-52520f0429e2") - target_handle_id = UUID("7e899794-657a-412f-b72d-8e7e4b151a01") - node_input_ids_by_name = {"prompt_inputs.query": UUID("ebae7e5c-c916-4c07-be2c-40929eed766b")} - output_display = { - MyPrompt.Outputs.text: NodeOutputDisplay(id=UUID("2bae09e7-f310-446d-9471-563446262d4b"), name="text"), - MyPrompt.Outputs.results: NodeOutputDisplay(id=UUID("7a1fd6fd-070e-417a-919b-52520f0429e2"), name="results"), - MyPrompt.Outputs.json: NodeOutputDisplay(id=UUID("9498ab4d-f0a2-4666-b3eb-59284fe11583"), name="json"), - } - port_displays = {MyPrompt.Ports.default: PortDisplayOverrides(id=UUID("20d70a2d-01c5-4c98-9fa2-2c1b200d492f"))} - display_data = NodeDisplayData(position=NodeDisplayPosition(x=0, y=0), width=None, height=None) diff --git a/examples/workflows/custom_base_node/display/workflow.py b/examples/workflows/custom_base_node/display/workflow.py deleted file mode 100644 index 1e97379167..0000000000 --- a/examples/workflows/custom_base_node/display/workflow.py +++ /dev/null @@ -1,61 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.base import ( - EdgeDisplay, - EntrypointDisplay, - WorkflowDisplayData, - WorkflowDisplayDataViewport, - WorkflowInputsDisplay, - WorkflowMetaDisplay, - WorkflowOutputDisplay, -) -from 
vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.workflows import BaseWorkflowDisplay - -from ..inputs import Inputs -from ..nodes.conditional_router import ConditionalRouter -from ..nodes.echo_request import EchoRequest -from ..nodes.error_node import ErrorNode -from ..nodes.exit_node import ExitNode -from ..nodes.fibonacci import Fibonacci -from ..nodes.get_temperature import GetTemperature -from ..nodes.my_prompt import MyPrompt -from ..workflow import CustomBaseNodeWorkflow - - -class CustomBaseNodeWorkflowDisplay(BaseWorkflowDisplay[CustomBaseNodeWorkflow]): - workflow_display = WorkflowMetaDisplay( - entrypoint_node_id=UUID("5aa611bd-83d6-4a34-b2f4-3b3511c56998"), - entrypoint_node_source_handle_id=UUID("213ccd7a-bd48-45a3-8387-af9afa946e0e"), - entrypoint_node_display=NodeDisplayData(position=NodeDisplayPosition(x=0, y=0), width=None, height=None), - display_data=WorkflowDisplayData(viewport=WorkflowDisplayDataViewport(x=0, y=0, zoom=1)), - ) - inputs_display = { - Inputs.query: WorkflowInputsDisplay(id=UUID("a6cc5f97-122f-4d54-b70a-43874e2c6573"), name="query") - } - entrypoint_displays = { - MyPrompt: EntrypointDisplay( - id=UUID("5aa611bd-83d6-4a34-b2f4-3b3511c56998"), - edge_display=EdgeDisplay(id=UUID("8f376c53-b70a-4c09-a187-27fe25b306bc")), - ) - } - edge_displays = { - (MyPrompt.Ports.default, ConditionalRouter): EdgeDisplay(id=UUID("af748cf1-8aab-4cfc-b73d-141a4a506d40")), - (ConditionalRouter.Ports.echo_request, EchoRequest): EdgeDisplay( - id=UUID("47e23829-7b60-4303-b242-64c1ff299e39") - ), - (EchoRequest.Ports.default, MyPrompt): EdgeDisplay(id=UUID("5a78572f-595e-4fe2-acc5-e45540d2b98b")), - (ConditionalRouter.Ports.exit, ExitNode): EdgeDisplay(id=UUID("1a518848-89f6-4036-9f0b-6e1c09b4a8a7")), - (ConditionalRouter.Ports.get_temperature, GetTemperature): EdgeDisplay( - id=UUID("f8297a07-5d2e-45a0-8610-780b6307c9d5") - ), - (GetTemperature.Ports.default, MyPrompt): 
EdgeDisplay(id=UUID("93ddfe51-7ce3-46f4-8c60-0ea2fe2e228e")), - (ConditionalRouter.Ports.fibonacci, Fibonacci): EdgeDisplay(id=UUID("038eb1b3-663a-49b5-9c0c-e4f6afef8d46")), - (Fibonacci.Ports.default, MyPrompt): EdgeDisplay(id=UUID("1ea51ea2-a74f-4638-88df-496183fc1014")), - (ConditionalRouter.Ports.unknown, ErrorNode): EdgeDisplay(id=UUID("414653b1-f3a6-40da-9973-5a5bda741c85")), - } - output_displays = { - CustomBaseNodeWorkflow.Outputs.answer: WorkflowOutputDisplay( - id=UUID("708005f3-81e0-4886-95e9-ccb0d6c029d3"), name="answer" - ) - } diff --git a/examples/workflows/custom_base_node/inputs.py b/examples/workflows/custom_base_node/inputs.py deleted file mode 100644 index 93b648bd0e..0000000000 --- a/examples/workflows/custom_base_node/inputs.py +++ /dev/null @@ -1,5 +0,0 @@ -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - query: str diff --git a/examples/workflows/custom_base_node/nodes/__init__.py b/examples/workflows/custom_base_node/nodes/__init__.py deleted file mode 100644 index 006e9e6d34..0000000000 --- a/examples/workflows/custom_base_node/nodes/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -from .conditional_router import ConditionalRouter -from .echo_request import EchoRequest -from .error_node import ErrorNode -from .exit_node import ExitNode -from .fibonacci import Fibonacci -from .get_temperature import GetTemperature -from .my_prompt import MyPrompt - -__all__ = [ - "ConditionalRouter", - "ErrorNode", - "ExitNode", - "MyPrompt", - "GetTemperature", - "Fibonacci", - "EchoRequest", -] diff --git a/examples/workflows/custom_base_node/nodes/conditional_router.py b/examples/workflows/custom_base_node/nodes/conditional_router.py deleted file mode 100644 index 5a74c83334..0000000000 --- a/examples/workflows/custom_base_node/nodes/conditional_router.py +++ /dev/null @@ -1,13 +0,0 @@ -from vellum.workflows.nodes import BaseNode -from vellum.workflows.ports import Port - -from .my_prompt import MyPrompt - - -class 
ConditionalRouter(BaseNode): - class Ports(BaseNode.Ports): - exit = Port.on_if(MyPrompt.Outputs.results[0]["type"].equals("STRING")) - get_temperature = Port.on_elif(MyPrompt.Outputs.results[0]["value"]["name"].equals("get_temperature")) - echo_request = Port.on_elif(MyPrompt.Outputs.results[0]["value"]["name"].equals("echo_request")) - fibonacci = Port.on_elif(MyPrompt.Outputs.results[0]["value"]["name"].equals("fibonacci")) - unknown = Port.on_else() diff --git a/examples/workflows/custom_base_node/nodes/echo_request.py b/examples/workflows/custom_base_node/nodes/echo_request.py deleted file mode 100644 index 9a1c17737f..0000000000 --- a/examples/workflows/custom_base_node/nodes/echo_request.py +++ /dev/null @@ -1,13 +0,0 @@ -from typing import Any - -from .mock_networking_client import MockNetworkingClient -from .my_prompt import MyPrompt - - -class EchoRequest(MockNetworkingClient): - action = MyPrompt.Outputs.results[0]["value"] - - class Outputs(MockNetworkingClient.Outputs): - response: Any - - "A node that calls our base Mock Networking Client" diff --git a/examples/workflows/custom_base_node/nodes/error_node.py b/examples/workflows/custom_base_node/nodes/error_node.py deleted file mode 100644 index 67bd74bd14..0000000000 --- a/examples/workflows/custom_base_node/nodes/error_node.py +++ /dev/null @@ -1,5 +0,0 @@ -from vellum.workflows.nodes.displayable import ErrorNode as BaseErrorNode - - -class ErrorNode(BaseErrorNode): - error = "Unexpected function call name from the model." 
diff --git a/examples/workflows/custom_base_node/nodes/exit_node.py b/examples/workflows/custom_base_node/nodes/exit_node.py deleted file mode 100644 index f4b37af6df..0000000000 --- a/examples/workflows/custom_base_node/nodes/exit_node.py +++ /dev/null @@ -1,9 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .my_prompt import MyPrompt - - -class ExitNode(FinalOutputNode[BaseState, str]): - class Outputs(FinalOutputNode.Outputs): - value = MyPrompt.Outputs.text diff --git a/examples/workflows/custom_base_node/nodes/fibonacci.py b/examples/workflows/custom_base_node/nodes/fibonacci.py deleted file mode 100644 index 6ba60ae617..0000000000 --- a/examples/workflows/custom_base_node/nodes/fibonacci.py +++ /dev/null @@ -1,13 +0,0 @@ -from typing import Any - -from .mock_networking_client import MockNetworkingClient -from .my_prompt import MyPrompt - - -class Fibonacci(MockNetworkingClient): - action = MyPrompt.Outputs.results[0]["value"] - - class Outputs(MockNetworkingClient.Outputs): - response: Any - - "A node that calls our base Mock Networking Client" diff --git a/examples/workflows/custom_base_node/nodes/get_temperature.py b/examples/workflows/custom_base_node/nodes/get_temperature.py deleted file mode 100644 index b96c3a548f..0000000000 --- a/examples/workflows/custom_base_node/nodes/get_temperature.py +++ /dev/null @@ -1,13 +0,0 @@ -from typing import Any - -from .mock_networking_client import MockNetworkingClient -from .my_prompt import MyPrompt - - -class GetTemperature(MockNetworkingClient): - action = MyPrompt.Outputs.results[0]["value"] - - class Outputs(MockNetworkingClient.Outputs): - response: Any - - "A node that calls our base Mock Networking Client" diff --git a/examples/workflows/custom_base_node/nodes/mock_networking_client.py b/examples/workflows/custom_base_node/nodes/mock_networking_client.py deleted file mode 100644 index 5f6871d50f..0000000000 --- 
a/examples/workflows/custom_base_node/nodes/mock_networking_client.py +++ /dev/null @@ -1,59 +0,0 @@ -import json - -from utils.networking import MyCustomNetworkingClient - -from vellum import ( - ChatMessage, - FunctionCall, - FunctionCallChatMessageContent, - FunctionCallChatMessageContentValue, - StringChatMessageContent, -) -from vellum.workflows.nodes.bases import BaseNode - -from ..state import State - - -class MockNetworkingClient(BaseNode[State]): - """ - A base node that mimics a networking call. - - Adapt this implementation to handle your own use cases surrounding: - - HTTP - - gRPC - - GraphQL - - Web Scraping - - Database - - and more! - """ - - action: FunctionCall - - class Outputs(BaseNode.Outputs): - response: dict - - def run(self) -> BaseNode.Outputs: - if not self.state.chat_history: - self.state.chat_history = [] - - self.state.chat_history.append( - ChatMessage( - role="ASSISTANT", - content=FunctionCallChatMessageContent( - value=FunctionCallChatMessageContentValue.model_validate(self.action.model_dump()) - ), - ) - ) - - client = MyCustomNetworkingClient() - response = client.invoke_request(name=self.action.name, request=self.action.arguments) - - self.state.chat_history.append( - ChatMessage( - role="FUNCTION", - content=StringChatMessageContent(value=json.dumps(response)), - source=self.action.id, - ) - ) - - return self.Outputs(response=response) diff --git a/examples/workflows/custom_base_node/nodes/my_prompt.py b/examples/workflows/custom_base_node/nodes/my_prompt.py deleted file mode 100644 index 74c83ae8ab..0000000000 --- a/examples/workflows/custom_base_node/nodes/my_prompt.py +++ /dev/null @@ -1,54 +0,0 @@ -from vellum import ( - ChatMessagePromptBlock, - FunctionDefinition, - PlainTextPromptBlock, - PromptParameters, - RichTextPromptBlock, - VariablePromptBlock, -) -from vellum.workflows.nodes.displayable import InlinePromptNode - -from ..inputs import Inputs -from ..state import State - - -class 
MyPrompt(InlinePromptNode): - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="""You are a helpful assistant that can answer questions and help with tasks.""" - ) - ] - ) - ], - ), - ChatMessagePromptBlock(chat_role="USER", blocks=[VariablePromptBlock(input_variable="query")]), - VariablePromptBlock(input_variable="chat_history"), - ] - prompt_inputs = { - "query": Inputs.query, - "chat_history": State.chat_history, - } - functions = [ - FunctionDefinition(name="get_temperature", description="Use this tool for any questions about the weather."), - FunctionDefinition(name="echo_request", description="Use this tool for any questions about job searching."), - FunctionDefinition( - name="fibonacci", description="Use this tool for any questions about topics outside of work." - ), - ] - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=4096, - top_p=1, - top_k=0, - frequency_penalty=0, - presence_penalty=0, - logit_bias=None, - custom_parameters=None, - ) diff --git a/examples/workflows/custom_base_node/state.py b/examples/workflows/custom_base_node/state.py deleted file mode 100644 index 33c1e03983..0000000000 --- a/examples/workflows/custom_base_node/state.py +++ /dev/null @@ -1,8 +0,0 @@ -from typing import List - -from vellum import ChatMessage -from vellum.workflows.state.base import BaseState - - -class State(BaseState): - chat_history: List[ChatMessage] = [] diff --git a/examples/workflows/custom_base_node/workflow.py b/examples/workflows/custom_base_node/workflow.py deleted file mode 100644 index 5e3a92c722..0000000000 --- a/examples/workflows/custom_base_node/workflow.py +++ /dev/null @@ -1,24 +0,0 @@ -from vellum.workflows import BaseWorkflow - -from .inputs import Inputs -from .nodes.conditional_router import ConditionalRouter -from .nodes.echo_request import EchoRequest -from .nodes.error_node import ErrorNode -from 
.nodes.exit_node import ExitNode -from .nodes.fibonacci import Fibonacci -from .nodes.get_temperature import GetTemperature -from .nodes.my_prompt import MyPrompt -from .state import State - - -class CustomBaseNodeWorkflow(BaseWorkflow[Inputs, State]): - graph = MyPrompt >> { - ConditionalRouter.Ports.exit >> ExitNode, - ConditionalRouter.Ports.unknown >> ErrorNode, - ConditionalRouter.Ports.fibonacci >> Fibonacci >> MyPrompt, - ConditionalRouter.Ports.echo_request >> EchoRequest >> MyPrompt, - ConditionalRouter.Ports.get_temperature >> GetTemperature >> MyPrompt, - } - - class Outputs(BaseWorkflow.Outputs): - answer = ExitNode.Outputs.value diff --git a/examples/workflows/custom_prompt_node/README.md b/examples/workflows/custom_prompt_node/README.md deleted file mode 100644 index 2fc4a921b5..0000000000 --- a/examples/workflows/custom_prompt_node/README.md +++ /dev/null @@ -1,38 +0,0 @@ -# Custom Prompt Node - -_NOTE: This Workflow is still under development and is not yet ready for reuse_ - -This Workflow is an example of implementing your _own_ Prompt node that avoids making the round trip to Vellum by invoking the LLM directly. The node, `LocalBedrockNode`, assumes that you have your AWS credentials stored locally in a `.env` file in order to run locally: - -```bash -VELLUM_API_KEY=************************* -AWS_ACCESS_KEY_ID=********************** -AWS_SECRET_ACCESS_KEY=****************** -``` - -It also depends on a dependency called `boto3`, which acts as a client to the AWS API. To use it in Vellum, you will need to build the Docker image locally: - -```bash -docker buildx build -f utils/Dockerfile --platform=linux/amd64 -t sdk-examples-utils:1.0.0 . 
-``` - -Then, you could push the image to Vellum, - -```bash -vellum images push sdk-examples-utils:1.0.0 -``` - -Next, associate the newly created image with your workflow by adding the following to your `pyproject.toml`: - -```toml -[[tool.vellum.workflows]] -module = "custom_prompt_node" -container_image_name = "sdk-examples-utils" -container_image_tag = "1.0.0" -``` - -You can then push the Workflow itself, - -```bash -vellum workflows push custom_prompt_node -``` diff --git a/examples/workflows/custom_prompt_node/inputs.py b/examples/workflows/custom_prompt_node/inputs.py deleted file mode 100644 index 3f1a44d3c2..0000000000 --- a/examples/workflows/custom_prompt_node/inputs.py +++ /dev/null @@ -1,5 +0,0 @@ -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - message: str diff --git a/examples/workflows/custom_prompt_node/nodes/__init__.py b/examples/workflows/custom_prompt_node/nodes/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/workflows/custom_prompt_node/nodes/be_happy.py b/examples/workflows/custom_prompt_node/nodes/be_happy.py deleted file mode 100644 index f988656297..0000000000 --- a/examples/workflows/custom_prompt_node/nodes/be_happy.py +++ /dev/null @@ -1,41 +0,0 @@ -from vellum import ChatMessagePromptBlock, PlainTextPromptBlock, RichTextPromptBlock, VariablePromptBlock - -from ..inputs import Inputs -from .local_bedrock_node import LocalBedrockNode - - -class BeHappyPrompt(LocalBedrockNode): - ml_model = "aws-bedrock//anthropic/claude-3-5-sonnet-20240620-v1:0/us-west-2" - - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="""\ -You will be given a message from the user that is already categorized as happy. - -Partake in the user's happiness with some happiness of your own. 
-""" - ) - ] - ) - ], - ), - ChatMessagePromptBlock( - chat_role="USER", - blocks=[ - RichTextPromptBlock( - blocks=[ - VariablePromptBlock(input_variable="message"), - ] - ) - ], - ), - ] - - prompt_inputs = { - "message": Inputs.message, - } diff --git a/examples/workflows/custom_prompt_node/nodes/bot_response.py b/examples/workflows/custom_prompt_node/nodes/bot_response.py deleted file mode 100644 index 408cf7698e..0000000000 --- a/examples/workflows/custom_prompt_node/nodes/bot_response.py +++ /dev/null @@ -1,10 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode - -from .be_happy import BeHappyPrompt -from .cheer_up import CheerUpPrompt -from .settle_down import SettleDownPrompt - - -class BotResponse(FinalOutputNode): - class Outputs(FinalOutputNode.Outputs): - value = BeHappyPrompt.Outputs.text.coalesce(CheerUpPrompt.Outputs.text).coalesce(SettleDownPrompt.Outputs.text) diff --git a/examples/workflows/custom_prompt_node/nodes/cheer_up.py b/examples/workflows/custom_prompt_node/nodes/cheer_up.py deleted file mode 100644 index 2f9c4eb8ef..0000000000 --- a/examples/workflows/custom_prompt_node/nodes/cheer_up.py +++ /dev/null @@ -1,41 +0,0 @@ -from vellum import ChatMessagePromptBlock, PlainTextPromptBlock, RichTextPromptBlock, VariablePromptBlock - -from ..inputs import Inputs -from .local_bedrock_node import LocalBedrockNode - - -class CheerUpPrompt(LocalBedrockNode): - ml_model = "aws-bedrock//anthropic/claude-3-5-sonnet-20240620-v1:0/us-west-2" - - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="""\ -You will be given a message from the user that is already categorized as sad. - -Offer some words of encouragement and support. 
-""" - ) - ] - ) - ], - ), - ChatMessagePromptBlock( - chat_role="USER", - blocks=[ - RichTextPromptBlock( - blocks=[ - VariablePromptBlock(input_variable="message"), - ] - ) - ], - ), - ] - - prompt_inputs = { - "message": Inputs.message, - } diff --git a/examples/workflows/custom_prompt_node/nodes/detect_tone_prompt.py b/examples/workflows/custom_prompt_node/nodes/detect_tone_prompt.py deleted file mode 100644 index d5d408682a..0000000000 --- a/examples/workflows/custom_prompt_node/nodes/detect_tone_prompt.py +++ /dev/null @@ -1,54 +0,0 @@ -from vellum import ChatMessagePromptBlock, PlainTextPromptBlock, RichTextPromptBlock, VariablePromptBlock -from vellum.workflows.ports import Port -from vellum.workflows.references import LazyReference - -from ..inputs import Inputs -from .local_bedrock_node import LocalBedrockNode - - -class DetectTonePrompt(LocalBedrockNode): - ml_model = "aws-bedrock//anthropic/claude-3-5-sonnet-20240620-v1:0/us-west-2" - - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="""\ -You will be given a message and you need to detect the tone of the message. - -The tone can be one of the following: -- happy -- sad -- angry - - -Only respond with one of those three tones and nothing more. 
-""" - ) - ] - ) - ], - ), - ChatMessagePromptBlock( - chat_role="USER", - blocks=[ - RichTextPromptBlock( - blocks=[ - VariablePromptBlock(input_variable="message"), - ] - ) - ], - ), - ] - - prompt_inputs = { - "message": Inputs.message, - } - - class Ports(LocalBedrockNode.Ports): - happy = Port.on_if(LazyReference(lambda: DetectTonePrompt.Outputs.text.equals("happy"))) - sad = Port.on_elif(LazyReference(lambda: DetectTonePrompt.Outputs.text.equals("sad"))) - angry = Port.on_elif(LazyReference(lambda: DetectTonePrompt.Outputs.text.equals("angry"))) diff --git a/examples/workflows/custom_prompt_node/nodes/local_bedrock_node.py b/examples/workflows/custom_prompt_node/nodes/local_bedrock_node.py deleted file mode 100644 index d307185386..0000000000 --- a/examples/workflows/custom_prompt_node/nodes/local_bedrock_node.py +++ /dev/null @@ -1,106 +0,0 @@ -import json -from uuid import uuid4 -from typing import Any, Iterator - -import boto3 - -from vellum import ( - AdHocExecutePromptEvent, - FulfilledAdHocExecutePromptEvent, - InitiatedAdHocExecutePromptEvent, - PromptOutput, - StringVellumValue, -) -from vellum.prompts.blocks.compilation import compile_prompt_blocks -from vellum.workflows.exceptions import NodeException -from vellum.workflows.nodes.displayable import InlinePromptNode - - -class LocalBedrockNode(InlinePromptNode): - """ - Used to execute a Prompt against the AWS Bedrock API directly instead of growing through Vellum. - """ - - # Override - def _get_prompt_event_stream(self) -> Iterator[AdHocExecutePromptEvent]: - """ - This is the main method that needs to be overridden to execute the prompt against the Bedrock API directly. 
- """ - - client = self._get_client() - - execution_id = str(uuid4()) - - yield InitiatedAdHocExecutePromptEvent( - execution_id=execution_id, - ) - - response = client.invoke_model( - modelId=".".join(self.ml_model.replace("aws-bedrock//", "").split("/")[0:-1]), - body=json.dumps(self._get_body()), - ) - response_body = json.loads(response["body"].read()) - - content = response_body.get("content") - outputs: list[PromptOutput] = [] - for part in content: - if part.get("type") == "text": - outputs.append(StringVellumValue(value=part["text"])) - - yield FulfilledAdHocExecutePromptEvent( - outputs=outputs, - execution_id=execution_id, - ) - - def _get_client(self) -> Any: - return boto3.client("bedrock-runtime", region_name="us-west-2") - - def _get_body(self) -> dict: - input_variables, input_values = self._compile_prompt_inputs() - compiled_blocks = compile_prompt_blocks( - blocks=self.blocks, inputs=input_values, input_variables=input_variables - ) - - system_blocks: list[dict] = [] - messages: list[dict] = [] - - for block in compiled_blocks: - if block.block_type != "CHAT_MESSAGE": - continue - - contents: list[dict[str, Any]] = [] - for child_block in block.blocks: - if child_block.content.type == "STRING": - text = child_block.content.value - if text.strip(): - contents.append({"type": "text", "text": text}) - elif child_block.content.type == "JSON": - contents.append( - { - "type": "text", - "text": json.dumps(child_block.content.value), - } - ) - else: - raise NodeException(f"Unsupported child block type: {child_block.content.type}") - - if block.role == "SYSTEM": - system_blocks = contents - - elif block.role == "USER": - messages.append({"role": "user", "content": contents}) - - elif block.role == "ASSISTANT": - messages.append({"role": "assistant", "content": contents}) - - body = { - "anthropic_version": "bedrock-2023-05-31", - "messages": messages, - "max_tokens": self.parameters.max_tokens, - "temperature": self.parameters.temperature, - } - - if 
system_blocks: - body["system"] = system_blocks - - return body diff --git a/examples/workflows/custom_prompt_node/nodes/settle_down.py b/examples/workflows/custom_prompt_node/nodes/settle_down.py deleted file mode 100644 index 8d3d3ea7b4..0000000000 --- a/examples/workflows/custom_prompt_node/nodes/settle_down.py +++ /dev/null @@ -1,41 +0,0 @@ -from vellum import ChatMessagePromptBlock, PlainTextPromptBlock, RichTextPromptBlock, VariablePromptBlock - -from ..inputs import Inputs -from .local_bedrock_node import LocalBedrockNode - - -class SettleDownPrompt(LocalBedrockNode): - ml_model = "aws-bedrock//anthropic/claude-3-5-sonnet-20240620-v1:0/us-west-2" - - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="""\ -You will be given a message from the user that is already categorized as angry. - -Give a short, concise response that helps the user calm down. -""" - ) - ] - ) - ], - ), - ChatMessagePromptBlock( - chat_role="USER", - blocks=[ - RichTextPromptBlock( - blocks=[ - VariablePromptBlock(input_variable="message"), - ] - ) - ], - ), - ] - - prompt_inputs = { - "message": Inputs.message, - } diff --git a/examples/workflows/custom_prompt_node/sandbox.py b/examples/workflows/custom_prompt_node/sandbox.py deleted file mode 100644 index 8bbb1e93a9..0000000000 --- a/examples/workflows/custom_prompt_node/sandbox.py +++ /dev/null @@ -1,12 +0,0 @@ -from vellum.workflows.sandbox import WorkflowSandboxRunner - -from .inputs import Inputs -from .workflow import CustomPromptNodeWorkflow - -if __name__ != "__main__": - raise Exception("This file is not meant to be imported") - - -workflow = CustomPromptNodeWorkflow() -runner = WorkflowSandboxRunner(workflow, inputs=[Inputs(message="It's such a beautiful day today!")]) -runner.run() diff --git a/examples/workflows/custom_prompt_node/workflow.py b/examples/workflows/custom_prompt_node/workflow.py deleted file mode 100644 index 
56d3b130c8..0000000000 --- a/examples/workflows/custom_prompt_node/workflow.py +++ /dev/null @@ -1,20 +0,0 @@ -from vellum.workflows import BaseWorkflow -from vellum.workflows.state import BaseState - -from .inputs import Inputs -from .nodes.be_happy import BeHappyPrompt -from .nodes.bot_response import BotResponse -from .nodes.cheer_up import CheerUpPrompt -from .nodes.detect_tone_prompt import DetectTonePrompt -from .nodes.settle_down import SettleDownPrompt - - -class CustomPromptNodeWorkflow(BaseWorkflow[Inputs, BaseState]): - graph = { - DetectTonePrompt.Ports.happy >> BeHappyPrompt, - DetectTonePrompt.Ports.sad >> CheerUpPrompt, - DetectTonePrompt.Ports.angry >> SettleDownPrompt, - } >> BotResponse - - class Outputs(BaseWorkflow.Outputs): - response = BotResponse.Outputs.value diff --git a/examples/workflows/customer_support_q_a/__init__.py b/examples/workflows/customer_support_q_a/__init__.py deleted file mode 100644 index 9442617720..0000000000 --- a/examples/workflows/customer_support_q_a/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# flake8: noqa: F401, F403 - -from .display import * diff --git a/examples/workflows/customer_support_q_a/display/__init__.py b/examples/workflows/customer_support_q_a/display/__init__.py deleted file mode 100644 index d38fb6d6a9..0000000000 --- a/examples/workflows/customer_support_q_a/display/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# flake8: noqa: F401, F403 - -from .nodes import * -from .workflow import * diff --git a/examples/workflows/customer_support_q_a/display/nodes/__init__.py b/examples/workflows/customer_support_q_a/display/nodes/__init__.py deleted file mode 100644 index 82590ee197..0000000000 --- a/examples/workflows/customer_support_q_a/display/nodes/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from .answer_from_help_docs import AnswerFromHelpDocsDisplay -from .answer_from_q_a_database import AnswerFromQADatabaseDisplay -from .final_output import FinalOutputDisplay -from .get_search_results_with_metadata import 
GetSearchResultsWithMetadataDisplay -from .help_docs_lookup import HelpDocsLookupDisplay -from .merge_node import MergeNodeDisplay -from .q_a_bank_lookup import QABankLookupDisplay -from .take_best_response import TakeBestResponseDisplay - -__all__ = [ - "AnswerFromHelpDocsDisplay", - "AnswerFromQADatabaseDisplay", - "FinalOutputDisplay", - "GetSearchResultsWithMetadataDisplay", - "HelpDocsLookupDisplay", - "MergeNodeDisplay", - "QABankLookupDisplay", - "TakeBestResponseDisplay", -] diff --git a/examples/workflows/customer_support_q_a/display/nodes/answer_from_help_docs.py b/examples/workflows/customer_support_q_a/display/nodes/answer_from_help_docs.py deleted file mode 100644 index 0e6e32714f..0000000000 --- a/examples/workflows/customer_support_q_a/display/nodes/answer_from_help_docs.py +++ /dev/null @@ -1,37 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.answer_from_help_docs import AnswerFromHelpDocs - - -class AnswerFromHelpDocsDisplay(BaseInlinePromptNodeDisplay[AnswerFromHelpDocs]): - label = "Answer from HelpDocs" - node_id = UUID("b6192292-e521-4e52-8fbb-3a276a967c9b") - output_id = UUID("d142f5b3-2d37-4985-acee-36145aeb7f37") - array_output_id = UUID("7474b519-c47d-4fc6-9f00-0bd3c9a550d5") - target_handle_id = UUID("c98a61e2-782d-4f37-a0b7-02b4fdfc8367") - node_input_ids_by_name = { - "prompt_inputs.context_str": UUID("dc9d9831-85b0-4046-ac16-1f74ecd5ed54"), - "prompt_inputs.customer_question": UUID("f05cec45-79ae-466c-a961-75c63c73f7a0"), - } - attribute_ids_by_name = {"ml_model": UUID("b8d31eda-388e-4e53-a5b9-c93796eb2738")} - output_display = { - AnswerFromHelpDocs.Outputs.text: NodeOutputDisplay( - id=UUID("d142f5b3-2d37-4985-acee-36145aeb7f37"), name="text" - ), - AnswerFromHelpDocs.Outputs.results: 
NodeOutputDisplay( - id=UUID("7474b519-c47d-4fc6-9f00-0bd3c9a550d5"), name="results" - ), - AnswerFromHelpDocs.Outputs.json: NodeOutputDisplay( - id=UUID("34356498-0c5c-4632-b493-91c5318065e0"), name="json" - ), - } - port_displays = { - AnswerFromHelpDocs.Ports.default: PortDisplayOverrides(id=UUID("5dddc604-7f31-47cc-a410-95e2cd35d78b")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=3944.7967556188532, y=1657.0257545187833), width=480, height=229 - ) diff --git a/examples/workflows/customer_support_q_a/display/nodes/answer_from_q_a_database.py b/examples/workflows/customer_support_q_a/display/nodes/answer_from_q_a_database.py deleted file mode 100644 index 0e0cac8f2d..0000000000 --- a/examples/workflows/customer_support_q_a/display/nodes/answer_from_q_a_database.py +++ /dev/null @@ -1,37 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.answer_from_q_a_database import AnswerFromQADatabase - - -class AnswerFromQADatabaseDisplay(BaseInlinePromptNodeDisplay[AnswerFromQADatabase]): - label = "Answer from Q&A Database" - node_id = UUID("19e33306-0838-4ce7-8c5d-e9d3c3f7b651") - output_id = UUID("507144b2-1fa7-4dc9-807b-1ac5d3e8222f") - array_output_id = UUID("12d21eb9-8e69-43fc-9a85-7862772cfe96") - target_handle_id = UUID("32491208-05df-4c63-9d78-9e5590fc7962") - node_input_ids_by_name = { - "prompt_inputs.context_str": UUID("df01e4d5-724e-46bd-95a2-8814f8d73d24"), - "prompt_inputs.customer_question": UUID("55ce3cc9-388d-4aff-a68e-db69ab350e3a"), - } - attribute_ids_by_name = {"ml_model": UUID("b65c824f-dd68-4ccd-a636-dddc142001ee")} - output_display = { - AnswerFromQADatabase.Outputs.text: NodeOutputDisplay( - id=UUID("507144b2-1fa7-4dc9-807b-1ac5d3e8222f"), name="text" - ), - 
AnswerFromQADatabase.Outputs.results: NodeOutputDisplay( - id=UUID("12d21eb9-8e69-43fc-9a85-7862772cfe96"), name="results" - ), - AnswerFromQADatabase.Outputs.json: NodeOutputDisplay( - id=UUID("6f9e3b26-5d94-493c-b682-cf63d00c10c6"), name="json" - ), - } - port_displays = { - AnswerFromQADatabase.Ports.default: PortDisplayOverrides(id=UUID("7d4f2c1c-7ddb-4a8d-9956-699d6823b4a7")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=3276.4025292273127, y=655.8411695053666), width=480, height=229 - ) diff --git a/examples/workflows/customer_support_q_a/display/nodes/final_output.py b/examples/workflows/customer_support_q_a/display/nodes/final_output.py deleted file mode 100644 index 7af0d9862d..0000000000 --- a/examples/workflows/customer_support_q_a/display/nodes/final_output.py +++ /dev/null @@ -1,21 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from ...nodes.final_output import FinalOutput - - -class FinalOutputDisplay(BaseFinalOutputNodeDisplay[FinalOutput]): - label = "Final Output" - node_id = UUID("4cf45b90-2ea9-4654-95a0-c00623766914") - target_handle_id = UUID("973fc6ad-bf46-4f31-8544-04fcb4783cdd") - output_name = "final-output" - node_input_ids_by_name = {"node_input": UUID("215bb1c9-81d8-464a-a8bb-3f1812d8b4f3")} - output_display = { - FinalOutput.Outputs.value: NodeOutputDisplay(id=UUID("e15f7fa4-cb16-4a38-8a8b-75ee6e77a95e"), name="value") - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=5626.014123554148, y=1248.518870115668), width=460, height=239 - ) diff --git a/examples/workflows/customer_support_q_a/display/nodes/get_search_results_with_metadata.py b/examples/workflows/customer_support_q_a/display/nodes/get_search_results_with_metadata.py deleted file mode 100644 index a3041962a7..0000000000 
--- a/examples/workflows/customer_support_q_a/display/nodes/get_search_results_with_metadata.py +++ /dev/null @@ -1,28 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.get_search_results_with_metadata import GetSearchResultsWithMetadata - - -class GetSearchResultsWithMetadataDisplay(BaseTemplatingNodeDisplay[GetSearchResultsWithMetadata]): - label = "Get Search Results With Metadata" - node_id = UUID("55846c99-4741-42db-8c04-366efcec3d93") - target_handle_id = UUID("9023b50f-5341-477b-b15c-0fafa3aa3376") - node_input_ids_by_name = { - "inputs.docs_context": UUID("c9c60844-6d7d-45a0-b20f-ef174600d04f"), - "template": UUID("8ee889d8-2f4a-40b9-85ae-1d1df6fbb950"), - } - output_display = { - GetSearchResultsWithMetadata.Outputs.result: NodeOutputDisplay( - id=UUID("bdb5a5c0-2659-4e15-8f04-fb86189e8e13"), name="result" - ) - } - port_displays = { - GetSearchResultsWithMetadata.Ports.default: PortDisplayOverrides( - id=UUID("8e06408e-6e40-48a4-832f-1b403a4e64e1") - ) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=3405, y=1665), width=460, height=229) diff --git a/examples/workflows/customer_support_q_a/display/nodes/help_docs_lookup.py b/examples/workflows/customer_support_q_a/display/nodes/help_docs_lookup.py deleted file mode 100644 index 55ef72fa84..0000000000 --- a/examples/workflows/customer_support_q_a/display/nodes/help_docs_lookup.py +++ /dev/null @@ -1,34 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseSearchNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.help_docs_lookup import HelpDocsLookup - - 
-class HelpDocsLookupDisplay(BaseSearchNodeDisplay[HelpDocsLookup]): - label = "Help Docs Lookup" - node_id = UUID("7128dc10-b8ab-4fc2-b363-6ea9ab58e1b5") - target_handle_id = UUID("e9cafff7-a5ce-4afc-befe-9da250a8ac3e") - metadata_filter_input_id_by_operand_id = {} - node_input_ids_by_name = { - "query": UUID("16d6a7df-b431-413d-ae5e-483c0c772935"), - "document_index_id": UUID("f756c5ca-4571-4325-9732-b081b9a9b1ae"), - "weights": UUID("b049bec5-cadd-4842-a662-eaaa7cf6a80a"), - "limit": UUID("92fb873b-af59-4043-ab80-27fb1947e7d6"), - "separator": UUID("57bca8d9-4b25-4c0b-94df-8ae5e98b35d1"), - "result_merging_enabled": UUID("3068e8f7-1dc5-48ea-91af-03e6d16495a0"), - "external_id_filters": UUID("41bc7ad7-a2e9-4dc7-b00a-673cb5b80b09"), - "metadata_filters": UUID("56ea3c07-b224-463e-99a5-d62e510d87c9"), - } - output_display = { - HelpDocsLookup.Outputs.results: NodeOutputDisplay( - id=UUID("d3062655-bcf9-4074-b037-6bc544cfd0ab"), name="results" - ), - HelpDocsLookup.Outputs.text: NodeOutputDisplay(id=UUID("b1c8e591-1576-4146-ad8a-fc082eabfd7c"), name="text"), - } - port_displays = { - HelpDocsLookup.Ports.default: PortDisplayOverrides(id=UUID("13d97aad-78fd-4d79-ac9e-b0c944777716")) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=2610, y=1545), width=480, height=185) diff --git a/examples/workflows/customer_support_q_a/display/nodes/merge_node.py b/examples/workflows/customer_support_q_a/display/nodes/merge_node.py deleted file mode 100644 index cfddc7011c..0000000000 --- a/examples/workflows/customer_support_q_a/display/nodes/merge_node.py +++ /dev/null @@ -1,15 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseMergeNodeDisplay -from vellum_ee.workflows.display.nodes.types import PortDisplayOverrides - -from ...nodes.merge_node import MergeNode - - -class MergeNodeDisplay(BaseMergeNodeDisplay[MergeNode]): - label = "Merge Node" - 
node_id = UUID("f67ad8f4-af4e-49c5-bc68-fb19c4c40e0d") - target_handle_ids = [UUID("e734b04e-0965-4bde-b652-0ff94aae1230"), UUID("4a1f8760-f5f9-4b77-b062-3e1836db3e53")] - port_displays = {MergeNode.Ports.default: PortDisplayOverrides(id=UUID("7a906154-4a53-4406-b0c5-c7caabe1a8f1"))} - display_data = NodeDisplayData(position=NodeDisplayPosition(x=4511.840798045603, y=1305), width=449, height=180) diff --git a/examples/workflows/customer_support_q_a/display/nodes/q_a_bank_lookup.py b/examples/workflows/customer_support_q_a/display/nodes/q_a_bank_lookup.py deleted file mode 100644 index cbbfebe836..0000000000 --- a/examples/workflows/customer_support_q_a/display/nodes/q_a_bank_lookup.py +++ /dev/null @@ -1,32 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseSearchNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.q_a_bank_lookup import QABankLookup - - -class QABankLookupDisplay(BaseSearchNodeDisplay[QABankLookup]): - label = "Q&A Bank Lookup" - node_id = UUID("4193098f-9b91-4e69-8cdf-302c7cfe3648") - target_handle_id = UUID("5dc53a6f-8617-4898-b983-53a2964a049a") - metadata_filter_input_id_by_operand_id = {} - node_input_ids_by_name = { - "query": UUID("c660f5de-2bd7-451b-a5e8-d17d51670f76"), - "document_index_id": UUID("e43d1e7b-9d36-4a88-8fab-98ec8d725e42"), - "weights": UUID("dcc44135-4496-4f06-bf55-f9ead9506c9c"), - "limit": UUID("63f5af38-7360-4395-b765-b64c9d9ee4c8"), - "separator": UUID("a4ba6b24-f6c8-45ad-bf95-0b67e604f845"), - "result_merging_enabled": UUID("d65ce831-6a66-44eb-bc84-1c4c0bc3e640"), - "external_id_filters": UUID("d24be36b-f3b3-46aa-8a7c-b465d70661a7"), - "metadata_filters": UUID("f84a6937-d464-4028-8e43-ed5aaa11cf92"), - } - output_display = { - QABankLookup.Outputs.results: NodeOutputDisplay( - id=UUID("0194b94a-b0b9-419d-a67b-d4056dc7668a"), 
name="results" - ), - QABankLookup.Outputs.text: NodeOutputDisplay(id=UUID("35fd1b25-4dba-40a1-af90-809f1094e13b"), name="text"), - } - port_displays = {QABankLookup.Ports.default: PortDisplayOverrides(id=UUID("c683f976-b555-4f7f-90f9-2c5c81ede842"))} - display_data = NodeDisplayData(position=NodeDisplayPosition(x=2625, y=675), width=480, height=185) diff --git a/examples/workflows/customer_support_q_a/display/nodes/take_best_response.py b/examples/workflows/customer_support_q_a/display/nodes/take_best_response.py deleted file mode 100644 index 220c35965b..0000000000 --- a/examples/workflows/customer_support_q_a/display/nodes/take_best_response.py +++ /dev/null @@ -1,34 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.take_best_response import TakeBestResponse - - -class TakeBestResponseDisplay(BaseInlinePromptNodeDisplay[TakeBestResponse]): - label = "Take Best Response" - node_id = UUID("8aaede70-9024-46bc-9ffe-2247baa43eb1") - output_id = UUID("818b9849-3e2d-4149-b982-860f0d18b9cc") - array_output_id = UUID("97641cc0-9ea0-4fd9-87f2-df0808a8928f") - target_handle_id = UUID("335b9eb8-dc2d-419f-ab1d-5af0cb68d434") - node_input_ids_by_name = { - "prompt_inputs.question": UUID("9ae1e761-85f7-46f7-9478-e95eecfdeb87"), - "prompt_inputs.support_bot_response_1": UUID("26a360de-e06e-4bbd-bc05-c6dc3f96b538"), - "prompt_inputs.support_bot_response_2": UUID("62a38935-9b81-4379-b58a-768185ad6a5a"), - } - attribute_ids_by_name = {"ml_model": UUID("b8fd05a2-7eeb-4ebf-8687-3969b3eca532")} - output_display = { - TakeBestResponse.Outputs.text: NodeOutputDisplay(id=UUID("818b9849-3e2d-4149-b982-860f0d18b9cc"), name="text"), - TakeBestResponse.Outputs.results: NodeOutputDisplay( - id=UUID("97641cc0-9ea0-4fd9-87f2-df0808a8928f"), 
name="results" - ), - TakeBestResponse.Outputs.json: NodeOutputDisplay(id=UUID("7fa84843-cc89-444c-9e5b-64c4270b2448"), name="json"), - } - port_displays = { - TakeBestResponse.Ports.default: PortDisplayOverrides(id=UUID("ee9cdb4f-85d0-48dc-8691-032bb08a2bb0")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=5038.966133246463, y=1237.2432353873394), width=480, height=283 - ) diff --git a/examples/workflows/customer_support_q_a/display/workflow.py b/examples/workflows/customer_support_q_a/display/workflow.py deleted file mode 100644 index d0bfe1dd5a..0000000000 --- a/examples/workflows/customer_support_q_a/display/workflow.py +++ /dev/null @@ -1,68 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.base import ( - EdgeDisplay, - EntrypointDisplay, - WorkflowDisplayData, - WorkflowDisplayDataViewport, - WorkflowInputsDisplay, - WorkflowMetaDisplay, - WorkflowOutputDisplay, -) -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.workflows import BaseWorkflowDisplay - -from ..inputs import Inputs -from ..nodes.answer_from_help_docs import AnswerFromHelpDocs -from ..nodes.answer_from_q_a_database import AnswerFromQADatabase -from ..nodes.final_output import FinalOutput -from ..nodes.get_search_results_with_metadata import GetSearchResultsWithMetadata -from ..nodes.help_docs_lookup import HelpDocsLookup -from ..nodes.merge_node import MergeNode -from ..nodes.q_a_bank_lookup import QABankLookup -from ..nodes.take_best_response import TakeBestResponse -from ..workflow import Workflow - - -class WorkflowDisplay(BaseWorkflowDisplay[Workflow]): - workflow_display = WorkflowMetaDisplay( - entrypoint_node_id=UUID("2edf6a15-5a45-4662-8c26-76e6c65456dd"), - entrypoint_node_source_handle_id=UUID("0168427d-ed02-47ba-98c2-e51fb25d6273"), - entrypoint_node_display=NodeDisplayData(position=NodeDisplayPosition(x=2235, y=1290), width=124, height=48), - 
display_data=WorkflowDisplayData( - viewport=WorkflowDisplayDataViewport(x=-2349.500094897242, y=-487.238689370777, zoom=0.577587531707226) - ), - ) - inputs_display = { - Inputs.question: WorkflowInputsDisplay(id=UUID("83be60e4-5d0a-43e4-99fc-691e2fb623f6"), name="question") - } - entrypoint_displays = { - QABankLookup: EntrypointDisplay( - id=UUID("2edf6a15-5a45-4662-8c26-76e6c65456dd"), - edge_display=EdgeDisplay(id=UUID("139d2454-ed41-4e6f-9141-4e1e7441ef09")), - ), - HelpDocsLookup: EntrypointDisplay( - id=UUID("2edf6a15-5a45-4662-8c26-76e6c65456dd"), - edge_display=EdgeDisplay(id=UUID("bcb98193-5019-4e04-86b5-7b76ac7530b2")), - ), - } - edge_displays = { - (HelpDocsLookup.Ports.default, GetSearchResultsWithMetadata): EdgeDisplay( - id=UUID("40c24a06-52fb-4381-8c63-0ce40a741f30") - ), - (QABankLookup.Ports.default, AnswerFromQADatabase): EdgeDisplay( - id=UUID("c98536b9-fac3-4ea1-b495-fd5429888a2d") - ), - (AnswerFromQADatabase.Ports.default, MergeNode): EdgeDisplay(id=UUID("bd6e3401-f0d3-421e-a522-ff63c8029bd8")), - (GetSearchResultsWithMetadata.Ports.default, AnswerFromHelpDocs): EdgeDisplay( - id=UUID("8fd447f0-300b-4378-9c4b-3e1b38fd4476") - ), - (AnswerFromHelpDocs.Ports.default, MergeNode): EdgeDisplay(id=UUID("68fa5245-7c8d-4cb6-b8d8-1cff712cb994")), - (MergeNode.Ports.default, TakeBestResponse): EdgeDisplay(id=UUID("a9a1c676-480c-472b-9f7b-3366e1b7a1fd")), - (TakeBestResponse.Ports.default, FinalOutput): EdgeDisplay(id=UUID("a83ce9d1-8074-4963-8838-885d683afde3")), - } - output_displays = { - Workflow.Outputs.final_output: WorkflowOutputDisplay( - id=UUID("e15f7fa4-cb16-4a38-8a8b-75ee6e77a95e"), name="final-output" - ) - } diff --git a/examples/workflows/customer_support_q_a/inputs.py b/examples/workflows/customer_support_q_a/inputs.py deleted file mode 100644 index 861f6eb52d..0000000000 --- a/examples/workflows/customer_support_q_a/inputs.py +++ /dev/null @@ -1,5 +0,0 @@ -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - 
question: str diff --git a/examples/workflows/customer_support_q_a/nodes/__init__.py b/examples/workflows/customer_support_q_a/nodes/__init__.py deleted file mode 100644 index feb7ae3fab..0000000000 --- a/examples/workflows/customer_support_q_a/nodes/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from .answer_from_help_docs import AnswerFromHelpDocs -from .answer_from_q_a_database import AnswerFromQADatabase -from .final_output import FinalOutput -from .get_search_results_with_metadata import GetSearchResultsWithMetadata -from .help_docs_lookup import HelpDocsLookup -from .merge_node import MergeNode -from .q_a_bank_lookup import QABankLookup -from .take_best_response import TakeBestResponse - -__all__ = [ - "AnswerFromHelpDocs", - "AnswerFromQADatabase", - "FinalOutput", - "GetSearchResultsWithMetadata", - "HelpDocsLookup", - "MergeNode", - "QABankLookup", - "TakeBestResponse", -] diff --git a/examples/workflows/customer_support_q_a/nodes/answer_from_help_docs.py b/examples/workflows/customer_support_q_a/nodes/answer_from_help_docs.py deleted file mode 100644 index a91970e5ab..0000000000 --- a/examples/workflows/customer_support_q_a/nodes/answer_from_help_docs.py +++ /dev/null @@ -1,72 +0,0 @@ -from vellum import ChatMessagePromptBlock, PlainTextPromptBlock, PromptParameters, RichTextPromptBlock -from vellum.workflows.nodes.displayable import InlinePromptNode - -from ..inputs import Inputs -from .get_search_results_with_metadata import GetSearchResultsWithMetadata - - -class AnswerFromHelpDocs(InlinePromptNode): - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="""\ -You are a Customer Support bot that helps answer questions from customers based on context provided to you from our Help Docs and API docs. 
If the context is insufficient, say \"Sorry, I don\'t know.\" - -The context will include some metadata with more information about where the corresponding help doc exists on our site. Please include a link to the Help Doc URL somewhere in your response, and leverage other aspects of the metadata if it improves the quality of your response. You can be extremely concise and quickly link them to the Help Doc URL. - -The context and customer question will be provided to you in the following format: - - -... - - - -... -\ -""" - ) - ] - ) - ], - ), - ChatMessagePromptBlock( - chat_role="USER", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="""\ - -{{ context_str }} - - - -{{ customer_question }} -\ -""" - ) - ] - ) - ], - ), - ] - prompt_inputs = { - "context_str": GetSearchResultsWithMetadata.Outputs.result, - "customer_question": Inputs.question, - } - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=1000, - top_p=1, - top_k=0, - frequency_penalty=0, - presence_penalty=0, - logit_bias={}, - custom_parameters=None, - ) diff --git a/examples/workflows/customer_support_q_a/nodes/answer_from_q_a_database.py b/examples/workflows/customer_support_q_a/nodes/answer_from_q_a_database.py deleted file mode 100644 index 27fc1b0bc8..0000000000 --- a/examples/workflows/customer_support_q_a/nodes/answer_from_q_a_database.py +++ /dev/null @@ -1,66 +0,0 @@ -from vellum import ChatMessagePromptBlock, JinjaPromptBlock, PromptParameters -from vellum.workflows.nodes.displayable import InlinePromptNode - -from ..inputs import Inputs -from .q_a_bank_lookup import QABankLookup - - -class AnswerFromQADatabase(InlinePromptNode): - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - JinjaPromptBlock( - template="""\ -You are a Customer Support bot that helps answer questions from customers based on context from previous Q&A that our staff have already answered. 
- -The context will be a few examples of questions and answers that we found from previous questions we\'ve answered for customers. - -Your task is to answer questions based on the context provided without using any other knowledge. If the customer\'s question can\'t be answered using the provided context, say \"Sorry, I don\'t know.\" You should start by acknowledging the user\'s question. - -The context and customer question will be provided to you in the following format. - - -... - - - -... -\ -""" - ) - ], - ), - ChatMessagePromptBlock( - chat_role="USER", - blocks=[ - JinjaPromptBlock( - template="""\ - -{{ context_str }} - - - -{{ customer_question }} -\ -""" - ) - ], - ), - ] - prompt_inputs = { - "context_str": QABankLookup.Outputs.text, - "customer_question": Inputs.question, - } - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=1000, - top_p=1, - top_k=0, - frequency_penalty=0, - presence_penalty=0, - logit_bias={}, - custom_parameters=None, - ) diff --git a/examples/workflows/customer_support_q_a/nodes/final_output.py b/examples/workflows/customer_support_q_a/nodes/final_output.py deleted file mode 100644 index d35546a7b9..0000000000 --- a/examples/workflows/customer_support_q_a/nodes/final_output.py +++ /dev/null @@ -1,9 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .take_best_response import TakeBestResponse - - -class FinalOutput(FinalOutputNode[BaseState, str]): - class Outputs(FinalOutputNode.Outputs): - value = TakeBestResponse.Outputs.text diff --git a/examples/workflows/customer_support_q_a/nodes/get_search_results_with_metadata.py b/examples/workflows/customer_support_q_a/nodes/get_search_results_with_metadata.py deleted file mode 100644 index cbf3119de7..0000000000 --- a/examples/workflows/customer_support_q_a/nodes/get_search_results_with_metadata.py +++ /dev/null @@ -1,11 +0,0 @@ -from vellum.workflows.nodes.displayable import 
TemplatingNode -from vellum.workflows.state import BaseState - -from .help_docs_lookup import HelpDocsLookup - - -class GetSearchResultsWithMetadata(TemplatingNode[BaseState, str]): - template = """{{ docs_context }}""" - inputs = { - "docs_context": HelpDocsLookup.Outputs.results, - } diff --git a/examples/workflows/customer_support_q_a/nodes/help_docs_lookup.py b/examples/workflows/customer_support_q_a/nodes/help_docs_lookup.py deleted file mode 100644 index a8acfa44da..0000000000 --- a/examples/workflows/customer_support_q_a/nodes/help_docs_lookup.py +++ /dev/null @@ -1,17 +0,0 @@ -from vellum import SearchResultMergingRequest, SearchWeightsRequest -from vellum.workflows.nodes.displayable import SearchNode -from vellum.workflows.nodes.displayable.bases.types import MetadataLogicalConditionGroup, SearchFilters - -from ..inputs import Inputs - - -class HelpDocsLookup(SearchNode): - query = Inputs.question - document_index = "vellum-fern-pages-demos-aibmsi" - limit = 1 - weights = SearchWeightsRequest(semantic_similarity=0.8, keywords=0.2) - result_merging = SearchResultMergingRequest(enabled=True) - filters = SearchFilters( - external_ids=None, metadata=MetadataLogicalConditionGroup(combinator="AND", negated=False, conditions=[]) - ) - chunk_separator = "#####" diff --git a/examples/workflows/customer_support_q_a/nodes/merge_node.py b/examples/workflows/customer_support_q_a/nodes/merge_node.py deleted file mode 100644 index 09f1bb822a..0000000000 --- a/examples/workflows/customer_support_q_a/nodes/merge_node.py +++ /dev/null @@ -1,7 +0,0 @@ -from vellum.workflows.nodes.displayable import MergeNode as BaseMergeNode -from vellum.workflows.types import MergeBehavior - - -class MergeNode(BaseMergeNode): - class Trigger(BaseMergeNode.Trigger): - merge_behavior = MergeBehavior.AWAIT_ALL diff --git a/examples/workflows/customer_support_q_a/nodes/q_a_bank_lookup.py b/examples/workflows/customer_support_q_a/nodes/q_a_bank_lookup.py deleted file mode 100644 index 
35eecb417e..0000000000 --- a/examples/workflows/customer_support_q_a/nodes/q_a_bank_lookup.py +++ /dev/null @@ -1,17 +0,0 @@ -from vellum import SearchResultMergingRequest, SearchWeightsRequest -from vellum.workflows.nodes.displayable import SearchNode -from vellum.workflows.nodes.displayable.bases.types import MetadataLogicalConditionGroup, SearchFilters - -from ..inputs import Inputs - - -class QABankLookup(SearchNode): - query = Inputs.question - document_index = "vellum-q-a-bank-demos-aezoyg" - limit = 1 - weights = SearchWeightsRequest(semantic_similarity=0.8, keywords=0.2) - result_merging = SearchResultMergingRequest(enabled=True) - filters = SearchFilters( - external_ids=None, metadata=MetadataLogicalConditionGroup(combinator="AND", negated=False, conditions=[]) - ) - chunk_separator = "#####" diff --git a/examples/workflows/customer_support_q_a/nodes/take_best_response.py b/examples/workflows/customer_support_q_a/nodes/take_best_response.py deleted file mode 100644 index 00257f6955..0000000000 --- a/examples/workflows/customer_support_q_a/nodes/take_best_response.py +++ /dev/null @@ -1,76 +0,0 @@ -from vellum import ChatMessagePromptBlock, JinjaPromptBlock, PromptParameters -from vellum.workflows.nodes.displayable import InlinePromptNode - -from ..inputs import Inputs -from .answer_from_help_docs import AnswerFromHelpDocs -from .answer_from_q_a_database import AnswerFromQADatabase - - -class TakeBestResponse(InlinePromptNode): - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - JinjaPromptBlock( - template="""\ -You are an expert Customer Support Agent. You are using information provided by two other customer support agents, each with a different source of information, to help answer customers\' questions. One agent will use a question and answer bank from previous customer inquiries. The other agent will use the Help Docs and API Docs. 
- -You will review the customer\'s initial question, and the responses proposed by the two customer support agents. Your task is to decide how to respond based on what each support agent suggested. Talk directly to the user and don\'t reveal that you have two agents supporting you behind the scenes. If you don\'t have a good response based on the information provided by the other support agents, say \"I\'m sorry, I don\'t know the answer to that. I\'ll loop in the Vellum team to help!\" - -If one doesn\'t know the answer, use the answer provided by the other agent! - -The relevant information will be provided to you in the following format: - - -... - - - -... - - - -... -\ -""" - ) - ], - ), - ChatMessagePromptBlock( - chat_role="USER", - blocks=[ - JinjaPromptBlock( - template="""\ - -{{ question }} - - - -{{ support_bot_response_1 }} - - - -{{ support_bot_response_2 }} -\ -""" - ) - ], - ), - ] - prompt_inputs = { - "question": Inputs.question, - "support_bot_response_1": AnswerFromQADatabase.Outputs.text, - "support_bot_response_2": AnswerFromHelpDocs.Outputs.text, - } - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=1000, - top_p=1, - top_k=0, - frequency_penalty=0, - presence_penalty=0, - logit_bias={}, - custom_parameters=None, - ) diff --git a/examples/workflows/customer_support_q_a/sandbox.py b/examples/workflows/customer_support_q_a/sandbox.py deleted file mode 100644 index 6a4f498a42..0000000000 --- a/examples/workflows/customer_support_q_a/sandbox.py +++ /dev/null @@ -1,20 +0,0 @@ -from vellum.workflows.sandbox import WorkflowSandboxRunner - -from .inputs import Inputs -from .workflow import Workflow - -if __name__ != "__main__": - raise Exception("This file is not meant to be imported") - - -runner = WorkflowSandboxRunner( - workflow=Workflow(), - inputs=[ - Inputs( - question="Hi! How do i pass inputs into an API node? I can do URL params (preferred) or JSON body, but I couldn't figure either out. 
" - ), - Inputs(question="Do you support Llama 9.12? Do you plan to soon? "), - ], -) - -runner.run() diff --git a/examples/workflows/customer_support_q_a/workflow.py b/examples/workflows/customer_support_q_a/workflow.py deleted file mode 100644 index 7aafbf3600..0000000000 --- a/examples/workflows/customer_support_q_a/workflow.py +++ /dev/null @@ -1,27 +0,0 @@ -from vellum.workflows import BaseWorkflow -from vellum.workflows.state import BaseState - -from .inputs import Inputs -from .nodes.answer_from_help_docs import AnswerFromHelpDocs -from .nodes.answer_from_q_a_database import AnswerFromQADatabase -from .nodes.final_output import FinalOutput -from .nodes.get_search_results_with_metadata import GetSearchResultsWithMetadata -from .nodes.help_docs_lookup import HelpDocsLookup -from .nodes.merge_node import MergeNode -from .nodes.q_a_bank_lookup import QABankLookup -from .nodes.take_best_response import TakeBestResponse - - -class Workflow(BaseWorkflow[Inputs, BaseState]): - graph = ( - { - QABankLookup >> AnswerFromQADatabase, - HelpDocsLookup >> GetSearchResultsWithMetadata >> AnswerFromHelpDocs, - } - >> MergeNode - >> TakeBestResponse - >> FinalOutput - ) - - class Outputs(BaseWorkflow.Outputs): - final_output = FinalOutput.Outputs.value diff --git a/examples/workflows/document_parsing/__init__.py b/examples/workflows/document_parsing/__init__.py deleted file mode 100644 index 9442617720..0000000000 --- a/examples/workflows/document_parsing/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# flake8: noqa: F401, F403 - -from .display import * diff --git a/examples/workflows/document_parsing/display/__init__.py b/examples/workflows/document_parsing/display/__init__.py deleted file mode 100644 index d38fb6d6a9..0000000000 --- a/examples/workflows/document_parsing/display/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# flake8: noqa: F401, F403 - -from .nodes import * -from .workflow import * diff --git a/examples/workflows/document_parsing/display/nodes/__init__.py 
b/examples/workflows/document_parsing/display/nodes/__init__.py deleted file mode 100644 index 1c67e2484e..0000000000 --- a/examples/workflows/document_parsing/display/nodes/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -from .add_image_to_chat_history import AddImageToChatHistoryDisplay -from .extract_by_chat_history import ExtractByChatHistoryDisplay -from .extract_by_document_url import ExtractByDocumentURLDisplay -from .final_output import FinalOutputDisplay -from .final_output_6 import FinalOutput6Display - -__all__ = [ - "AddImageToChatHistoryDisplay", - "ExtractByChatHistoryDisplay", - "ExtractByDocumentURLDisplay", - "FinalOutput6Display", - "FinalOutputDisplay", -] diff --git a/examples/workflows/document_parsing/display/nodes/add_image_to_chat_history.py b/examples/workflows/document_parsing/display/nodes/add_image_to_chat_history.py deleted file mode 100644 index fa57c081b2..0000000000 --- a/examples/workflows/document_parsing/display/nodes/add_image_to_chat_history.py +++ /dev/null @@ -1,32 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.add_image_to_chat_history import AddImageToChatHistory - - -class AddImageToChatHistoryDisplay(BaseTemplatingNodeDisplay[AddImageToChatHistory]): - label = "Add Image to Chat History" - node_id = UUID("cee9404a-f77d-4011-a6d0-b764dc6465a6") - target_handle_id = UUID("35ba823b-e0cd-4b5e-84a6-07c63ae3e34e") - node_input_ids_by_name = { - "inputs.chat_history": UUID("0f373af9-1521-4cd2-b29d-222a375b02da"), - "template": UUID("33443abf-6804-4ffc-99e4-14341d889303"), - "inputs.image_url": UUID("7ce37d39-0f62-4878-a7c4-435f94118b51"), - } - output_display = { - AddImageToChatHistory.Outputs.result: NodeOutputDisplay( - 
id=UUID("d1d37e2e-4bff-498b-a191-c8b72f224f75"), name="result" - ) - } - port_displays = { - AddImageToChatHistory.Ports.default: PortDisplayOverrides(id=UUID("24102683-8533-4a10-bebd-10e138293da4")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=282.3228076068352, y=552.4516595338625), - width=553, - height=746, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/document_parsing/display/nodes/extract_by_chat_history.py b/examples/workflows/document_parsing/display/nodes/extract_by_chat_history.py deleted file mode 100644 index c8b2544cd6..0000000000 --- a/examples/workflows/document_parsing/display/nodes/extract_by_chat_history.py +++ /dev/null @@ -1,37 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.extract_by_chat_history import ExtractByChatHistory - - -class ExtractByChatHistoryDisplay(BaseInlinePromptNodeDisplay[ExtractByChatHistory]): - label = "Extract by Chat History" - node_id = UUID("b112e753-d690-4fe5-9d89-8e8addf570a0") - output_id = UUID("a94cdd58-fef8-473a-bb00-ebf320a29fd4") - array_output_id = UUID("1ff8bd8a-b927-49c1-bbf4-fcbbd9702d16") - target_handle_id = UUID("f5f73970-2f63-4d5f-b379-b83d0f6afd72") - node_input_ids_by_name = {"prompt_inputs.chat_history": UUID("7c67425d-da0a-4a0b-ae99-0284fa9ee92f")} - attribute_ids_by_name = {"ml_model": UUID("4dbd5540-ddcd-4db5-a8f0-95639efee626")} - output_display = { - ExtractByChatHistory.Outputs.text: NodeOutputDisplay( - id=UUID("a94cdd58-fef8-473a-bb00-ebf320a29fd4"), name="text" - ), - ExtractByChatHistory.Outputs.results: NodeOutputDisplay( - id=UUID("1ff8bd8a-b927-49c1-bbf4-fcbbd9702d16"), name="results" - ), - ExtractByChatHistory.Outputs.json: NodeOutputDisplay( - 
id=UUID("6815ce40-5016-4b54-b6e9-f60cdbf9645c"), name="json" - ), - } - port_displays = { - ExtractByChatHistory.Ports.default: PortDisplayOverrides(id=UUID("269f5117-6747-4a5f-b989-27fc9ceb0271")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=287.169358764978, y=-248.98570613885954), - width=553, - height=677, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/document_parsing/display/nodes/extract_by_document_url.py b/examples/workflows/document_parsing/display/nodes/extract_by_document_url.py deleted file mode 100644 index ab626a84d1..0000000000 --- a/examples/workflows/document_parsing/display/nodes/extract_by_document_url.py +++ /dev/null @@ -1,34 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.extract_by_document_url import ExtractByDocumentURL - - -class ExtractByDocumentURLDisplay(BaseInlinePromptNodeDisplay[ExtractByDocumentURL]): - label = "Extract by Document URL" - node_id = UUID("a3ba2912-c0c1-432c-bd1b-a10f65b7649c") - output_id = UUID("63a1191f-ee4a-4194-8fc9-531d64a1f301") - array_output_id = UUID("1bae2ae0-7b6b-4a11-bb4c-85e222d3d5dd") - target_handle_id = UUID("e876d7e8-2023-43d8-a29d-d1c9ed8df541") - node_input_ids_by_name = {"prompt_inputs.chat_history": UUID("816d73c4-78b7-46e0-b4fd-2c8fbab4886c")} - attribute_ids_by_name = {"ml_model": UUID("873c8386-0680-4ec8-8a51-1bcb8b0b068c")} - output_display = { - ExtractByDocumentURL.Outputs.text: NodeOutputDisplay( - id=UUID("63a1191f-ee4a-4194-8fc9-531d64a1f301"), name="text" - ), - ExtractByDocumentURL.Outputs.results: NodeOutputDisplay( - id=UUID("1bae2ae0-7b6b-4a11-bb4c-85e222d3d5dd"), name="results" - ), - ExtractByDocumentURL.Outputs.json: NodeOutputDisplay( - 
-            id=UUID("a8535d1c-bfd7-4454-ba35-1585c883dcbe"), name="json"
-        ),
-    }
-    port_displays = {
-        ExtractByDocumentURL.Ports.default: PortDisplayOverrides(id=UUID("a15192cb-c2c3-44ff-8a62-e35ddb65e67d"))
-    }
-    display_data = NodeDisplayData(
-        position=NodeDisplayPosition(x=837.4721057480847, y=667.4771042124831), width=553, height=518
-    )
diff --git a/examples/workflows/document_parsing/display/nodes/final_output.py b/examples/workflows/document_parsing/display/nodes/final_output.py
deleted file mode 100644
index bab974c507..0000000000
--- a/examples/workflows/document_parsing/display/nodes/final_output.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from uuid import UUID
-
-from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition
-from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay
-from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay
-
-from ...nodes.final_output import FinalOutput
-
-
-class FinalOutputDisplay(BaseFinalOutputNodeDisplay[FinalOutput]):
-    label = "Final Output"
-    node_id = UUID("08cf5b7e-2668-4f27-95c5-f28236d5be47")
-    target_handle_id = UUID("a09307c3-78f6-4aca-bbc0-e8ec24fc2a64")
-    output_name = "final-output"
-    node_input_ids_by_name = {"node_input": UUID("c84aa0bd-3440-4507-bae4-586dae3b9f22")}
-    output_display = {
-        FinalOutput.Outputs.value: NodeOutputDisplay(id=UUID("9c3830c9-7faa-4872-83e5-7360f662f8e2"), name="value")
-    }
-    display_data = NodeDisplayData(
-        position=NodeDisplayPosition(x=1465.3578490588761, y=635.1421509411235), width=521, height=448
-    )
diff --git a/examples/workflows/document_parsing/display/nodes/final_output_6.py b/examples/workflows/document_parsing/display/nodes/final_output_6.py
deleted file mode 100644
index dfb2d3d580..0000000000
--- a/examples/workflows/document_parsing/display/nodes/final_output_6.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from uuid import UUID
-
-from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition
-from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay
-from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay
-
-from ...nodes.final_output_6 import FinalOutput6
-
-
-class FinalOutput6Display(BaseFinalOutputNodeDisplay[FinalOutput6]):
-    label = "Final Output 6"
-    node_id = UUID("941add93-b7aa-469a-9c25-740fe80009a5")
-    target_handle_id = UUID("4e8d14bf-1dfc-4afa-8863-5bf951559308")
-    output_name = "final-output-6"
-    node_input_ids_by_name = {"node_input": UUID("0a6972dc-ac3a-4d99-9577-be4c4c3dc2a7")}
-    output_display = {
-        FinalOutput6.Outputs.value: NodeOutputDisplay(id=UUID("4dfe9122-354b-4796-a2de-4eaa38a0c5df"), name="value")
-    }
-    display_data = NodeDisplayData(
-        position=NodeDisplayPosition(x=969.5727079556486, y=-167.19651184282836), width=521, height=507
-    )
diff --git a/examples/workflows/document_parsing/display/workflow.py b/examples/workflows/document_parsing/display/workflow.py
deleted file mode 100644
index 5873dd5648..0000000000
--- a/examples/workflows/document_parsing/display/workflow.py
+++ /dev/null
@@ -1,67 +0,0 @@
-from uuid import UUID
-
-from vellum_ee.workflows.display.base import (
-    EdgeDisplay,
-    EntrypointDisplay,
-    WorkflowDisplayData,
-    WorkflowDisplayDataViewport,
-    WorkflowInputsDisplay,
-    WorkflowMetaDisplay,
-    WorkflowOutputDisplay,
-)
-from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition
-from vellum_ee.workflows.display.workflows import BaseWorkflowDisplay
-
-from ..inputs import Inputs
-from ..nodes.add_image_to_chat_history import AddImageToChatHistory
-from ..nodes.extract_by_chat_history import ExtractByChatHistory
-from ..nodes.extract_by_document_url import ExtractByDocumentURL
-from ..nodes.final_output import FinalOutput
-from ..nodes.final_output_6 import FinalOutput6
-from ..workflow import Workflow
-
-
-class WorkflowDisplay(BaseWorkflowDisplay[Workflow]):
-    workflow_display = WorkflowMetaDisplay(
-        entrypoint_node_id=UUID("a7f0df51-cd51-45e5-a8ec-5c01e4a62859"),
-        entrypoint_node_source_handle_id=UUID("d0976f57-bece-4eb5-8629-e845f0b9c7f9"),
-        entrypoint_node_display=NodeDisplayData(
-            position=NodeDisplayPosition(x=-55.26644635815683, y=510.7385521062415), width=124, height=48
-        ),
-        display_data=WorkflowDisplayData(
-            viewport=WorkflowDisplayDataViewport(x=257.17303389015467, y=185.5531574161448, zoom=0.35565791828925275)
-        ),
-    )
-    inputs_display = {
-        Inputs.image_url: WorkflowInputsDisplay(id=UUID("5be00b19-5016-4e16-93b3-980a6f8838ad"), name="image_url"),
-        Inputs.workflow_input_chat_history: WorkflowInputsDisplay(
-            id=UUID("ae1eb3ce-4b1d-40ee-936f-308f1062a4fd"), name="workflow_input_chat_history", color="pink"
-        ),
-    }
-    entrypoint_displays = {
-        ExtractByChatHistory: EntrypointDisplay(
-            id=UUID("a7f0df51-cd51-45e5-a8ec-5c01e4a62859"),
-            edge_display=EdgeDisplay(id=UUID("d3eb566f-f1c0-4f79-bebe-b2bb89ed9910")),
-        ),
-        AddImageToChatHistory: EntrypointDisplay(
-            id=UUID("a7f0df51-cd51-45e5-a8ec-5c01e4a62859"),
-            edge_display=EdgeDisplay(id=UUID("e8571392-39cf-4ba1-8325-7ccf64996170")),
-        ),
-    }
-    edge_displays = {
-        (ExtractByDocumentURL.Ports.default, FinalOutput): EdgeDisplay(id=UUID("30eb4c90-c436-4181-99d5-d91f108b0477")),
-        (ExtractByChatHistory.Ports.default, FinalOutput6): EdgeDisplay(
-            id=UUID("b92b2e32-930a-4e51-afdd-9e1ce5a0a23a")
-        ),
-        (AddImageToChatHistory.Ports.default, ExtractByDocumentURL): EdgeDisplay(
-            id=UUID("f2f4eb15-20a3-4024-9fbb-84e35a531405")
-        ),
-    }
-    output_displays = {
-        Workflow.Outputs.final_output_6: WorkflowOutputDisplay(
-            id=UUID("4dfe9122-354b-4796-a2de-4eaa38a0c5df"), name="final-output-6"
-        ),
-        Workflow.Outputs.final_output: WorkflowOutputDisplay(
-            id=UUID("9c3830c9-7faa-4872-83e5-7360f662f8e2"), name="final-output"
-        ),
-    }
diff --git a/examples/workflows/document_parsing/inputs.py b/examples/workflows/document_parsing/inputs.py
deleted file mode 100644
index afad549aa9..0000000000
--- a/examples/workflows/document_parsing/inputs.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from typing import List, Optional
-
-from vellum import ChatMessage
-from vellum.workflows.inputs import BaseInputs
-
-
-class Inputs(BaseInputs):
-    image_url: str
-    workflow_input_chat_history: Optional[List[ChatMessage]]
diff --git a/examples/workflows/document_parsing/nodes/__init__.py b/examples/workflows/document_parsing/nodes/__init__.py
deleted file mode 100644
index afa8580c21..0000000000
--- a/examples/workflows/document_parsing/nodes/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from .add_image_to_chat_history import AddImageToChatHistory
-from .extract_by_chat_history import ExtractByChatHistory
-from .extract_by_document_url import ExtractByDocumentURL
-from .final_output import FinalOutput
-from .final_output_6 import FinalOutput6
-
-__all__ = [
-    "AddImageToChatHistory",
-    "ExtractByChatHistory",
-    "ExtractByDocumentURL",
-    "FinalOutput",
-    "FinalOutput6",
-]
diff --git a/examples/workflows/document_parsing/nodes/add_image_to_chat_history.py b/examples/workflows/document_parsing/nodes/add_image_to_chat_history.py
deleted file mode 100644
index 841caaf54e..0000000000
--- a/examples/workflows/document_parsing/nodes/add_image_to_chat_history.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from typing import List
-
-from vellum import ChatMessage
-from vellum.workflows.nodes.displayable import TemplatingNode
-from vellum.workflows.state import BaseState
-
-from ..inputs import Inputs
-
-
-class AddImageToChatHistory(TemplatingNode[BaseState, List[ChatMessage]]):
-    """You can use this approach if you want to insert documents into Chat History dynamically while your Workflow / Agent is running."""
-
-    template = """\
-{%- set new_msg = {
-    \"text\": image_url,
-    \"role\": \"USER\",
-    \"content\": {
-        \"type\": \"ARRAY\",
-        \"value\": [
-            {
-                \"type\": \"DOCUMENT\",
-                \"value\": {
-                    \"src\": image_url,
-                }
-            }
-        ]
-    },
-    \"source\": None
-} -%}
-{%- set msg_arr = [new_msg] -%}
-{{- ((chat_history or []) + msg_arr) | tojson -}}\
-"""
-    inputs = {
-        "chat_history": [],
-        "image_url": Inputs.image_url,
-    }
diff --git a/examples/workflows/document_parsing/nodes/extract_by_chat_history.py b/examples/workflows/document_parsing/nodes/extract_by_chat_history.py
deleted file mode 100644
index 53c9a8b723..0000000000
--- a/examples/workflows/document_parsing/nodes/extract_by_chat_history.py
+++ /dev/null
@@ -1,41 +0,0 @@
-from vellum import (
-    ChatMessagePromptBlock,
-    PlainTextPromptBlock,
-    PromptParameters,
-    RichTextPromptBlock,
-    VariablePromptBlock,
-)
-from vellum.workflows.nodes.displayable import InlinePromptNode
-
-from ..inputs import Inputs
-
-
-class ExtractByChatHistory(InlinePromptNode):
-    """You can use this approach if you want to drag-and-drop documents in the UI / use them in Scenarios, or include them from your application code. This approach will also make it easier to view documents directly in your Evaluations and Test Cases."""
-
-    ml_model = "claude-3-7-sonnet-latest"
-    blocks = [
-        ChatMessagePromptBlock(
-            chat_role="SYSTEM",
-            blocks=[
-                RichTextPromptBlock(
-                    blocks=[PlainTextPromptBlock(text="""What is the small top pressure rating of the 1.5\" valve?""")]
-                )
-            ],
-        ),
-        VariablePromptBlock(input_variable="chat_history"),
-    ]
-    prompt_inputs = {
-        "chat_history": Inputs.workflow_input_chat_history,
-    }
-    parameters = PromptParameters(
-        stop=[],
-        temperature=0,
-        max_tokens=1000,
-        top_p=1,
-        top_k=0,
-        frequency_penalty=None,
-        presence_penalty=None,
-        logit_bias={},
-        custom_parameters=None,
-    )
diff --git a/examples/workflows/document_parsing/nodes/extract_by_document_url.py b/examples/workflows/document_parsing/nodes/extract_by_document_url.py
deleted file mode 100644
index b17ff2efbb..0000000000
--- a/examples/workflows/document_parsing/nodes/extract_by_document_url.py
+++ /dev/null
@@ -1,39 +0,0 @@
-from vellum import (
-    ChatMessagePromptBlock,
-    PlainTextPromptBlock,
-    PromptParameters,
-    RichTextPromptBlock,
-    VariablePromptBlock,
-)
-from vellum.workflows.nodes.displayable import InlinePromptNode
-
-from .add_image_to_chat_history import AddImageToChatHistory
-
-
-class ExtractByDocumentURL(InlinePromptNode):
-    ml_model = "claude-3-7-sonnet-latest"
-    blocks = [
-        ChatMessagePromptBlock(
-            chat_role="SYSTEM",
-            blocks=[
-                RichTextPromptBlock(
-                    blocks=[PlainTextPromptBlock(text="""What is the small top pressure rating of the 1.5\" valve?""")]
-                )
-            ],
-        ),
-        VariablePromptBlock(input_variable="chat_history"),
-    ]
-    prompt_inputs = {
-        "chat_history": AddImageToChatHistory.Outputs.result,
-    }
-    parameters = PromptParameters(
-        stop=[],
-        temperature=0,
-        max_tokens=1000,
-        top_p=1,
-        top_k=None,
-        frequency_penalty=None,
-        presence_penalty=None,
-        logit_bias={},
-        custom_parameters=None,
-    )
diff --git a/examples/workflows/document_parsing/nodes/final_output.py b/examples/workflows/document_parsing/nodes/final_output.py
deleted file mode 100644
index f802401b11..0000000000
--- a/examples/workflows/document_parsing/nodes/final_output.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from vellum.workflows.nodes.displayable import FinalOutputNode
-from vellum.workflows.state import BaseState
-
-from .extract_by_document_url import ExtractByDocumentURL
-
-
-class FinalOutput(FinalOutputNode[BaseState, str]):
-    class Outputs(FinalOutputNode.Outputs):
-        value = ExtractByDocumentURL.Outputs.text
diff --git a/examples/workflows/document_parsing/nodes/final_output_6.py b/examples/workflows/document_parsing/nodes/final_output_6.py
deleted file mode 100644
index a2a40c10af..0000000000
--- a/examples/workflows/document_parsing/nodes/final_output_6.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from vellum.workflows.nodes.displayable import FinalOutputNode
-from vellum.workflows.state import BaseState
-
-from .extract_by_chat_history import ExtractByChatHistory
-
-
-class FinalOutput6(FinalOutputNode[BaseState, str]):
-    class Outputs(FinalOutputNode.Outputs):
-        value = ExtractByChatHistory.Outputs.text
diff --git a/examples/workflows/document_parsing/sandbox.py b/examples/workflows/document_parsing/sandbox.py
deleted file mode 100644
index d1a4305104..0000000000
--- a/examples/workflows/document_parsing/sandbox.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from vellum import (
-    ArrayChatMessageContent,
-    ChatMessage,
-    DocumentChatMessageContent,
-    StringChatMessageContent,
-    VellumDocument,
-)
-from vellum.workflows.sandbox import WorkflowSandboxRunner
-
-from .inputs import Inputs
-from .workflow import Workflow
-
-if __name__ != "__main__":
-    raise Exception("This file is not meant to be imported")
-
-
-runner = WorkflowSandboxRunner(
-    workflow=Workflow(),
-    inputs=[
-        Inputs(
-            image_url="https://storage.googleapis.com/vellum-public/onboarding-assets/5924_png.rf.b3609806540c821c7a51bba3bf0b1f09.pdf",
-            workflow_input_chat_history=[
-                ChatMessage(
-                    role="USER",
-                    text="vellum:uploaded-file:0e48b359-2677-4133-b4a5-7ab59f5f9369",
-                    content=ArrayChatMessageContent(
-                        value=[
-                            DocumentChatMessageContent(
-                                value=VellumDocument(
-                                    src="vellum:uploaded-file:0e48b359-2677-4133-b4a5-7ab59f5f9369",
-                                    metadata={
-                                        "id": "0e48b359-2677-4133-b4a5-7ab59f5f9369",
-                                        "type": "DOCUMENT",
-                                        "detail": "high",
-                                        "expiry": "2025-04-01T05:51:12.000Z",
-                                        "signedUrl": "https://storage.googleapis.com/vellum-django/uploaded-files/55b14aa3-55ea-4785-83fb-dede9e01babc/f6b086c6-c2c9-4cf5-a6f1-2ea8e46fead2/0e48b359-2677-4133-b4a5-7ab59f5f9369/5924_png.rf.b3609806540c821c7a51bba3bf0b1f09.pdf?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=585775334980-compute%40developer.gserviceaccount.com%2F20250324%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20250324T235113Z&X-Goog-Expires=604799&X-Goog-SignedHeaders=host&X-Goog-Signature=8bfb4a557f54f46c804ff6518d6868036e47662213dd9afb4a8f1e34f599a967183f76c422dbb26e719b6302f844e18a0528cfe27a4ccb0a76aa04ba4468ebbe5aafebda518d5f9dd7f597fb308c5012af313c115aa4c8b173e283473900f38500df8141c021ca1e7d25c0883ea4732867138e355875470cf6e82708693af42f364f4f4ad2c754f3f40b7a9cdfe2b7511c3eb7c123d73709cf6eb9e011b6e70e10d4850eae997d7a61792c4eb2272a503505520e17a5ca06843f1bd17c4c935fe625e520053289a4d05a1f11277c32550f783cc35f344dfe3c12276ef9d93b33ec818cd29cd6d75ce9dc8b1fe2412381b4a964d3c0d46e2661f260d383d6adac",
-                                    },
-                                ),
-                            ),
-                        ]
-                    ),
-                ),
-                ChatMessage(
-                    role="ASSISTANT",
-                    text='Based on the Model Selection Chart provided, I can see that for a 1.5" valve, the large top pressure rating is 500 psi (34.0 bar). This applies to both the 2WNC and 2WNO configurations shown in the chart for the 1.5" basic valve size.',
-                    content=StringChatMessageContent(
-                        value='Based on the Model Selection Chart provided, I can see that for a 1.5" valve, the large top pressure rating is 500 psi (34.0 bar). This applies to both the 2WNC and 2WNO configurations shown in the chart for the 1.5" basic valve size.'
-                    ),
-                ),
-            ],
-        ),
-    ],
-)
-
-runner.run()
diff --git a/examples/workflows/document_parsing/workflow.py b/examples/workflows/document_parsing/workflow.py
deleted file mode 100644
index 640d32246c..0000000000
--- a/examples/workflows/document_parsing/workflow.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from vellum.workflows import BaseWorkflow
-from vellum.workflows.state import BaseState
-
-from .inputs import Inputs
-from .nodes.add_image_to_chat_history import AddImageToChatHistory
-from .nodes.extract_by_chat_history import ExtractByChatHistory
-from .nodes.extract_by_document_url import ExtractByDocumentURL
-from .nodes.final_output import FinalOutput
-from .nodes.final_output_6 import FinalOutput6
-
-
-class Workflow(BaseWorkflow[Inputs, BaseState]):
-    graph = {
-        ExtractByChatHistory >> FinalOutput6,
-        AddImageToChatHistory >> ExtractByDocumentURL >> FinalOutput,
-    }
-
-    class Outputs(BaseWorkflow.Outputs):
-        final_output_6 = FinalOutput6.Outputs.value
-        final_output = FinalOutput.Outputs.value
diff --git a/examples/workflows/extract_from_image_of_receipt/__init__.py b/examples/workflows/extract_from_image_of_receipt/__init__.py
deleted file mode 100644
index 9442617720..0000000000
--- a/examples/workflows/extract_from_image_of_receipt/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-# flake8: noqa: F401, F403
-
-from .display import *
diff --git a/examples/workflows/extract_from_image_of_receipt/display/__init__.py b/examples/workflows/extract_from_image_of_receipt/display/__init__.py
deleted file mode 100644
index d38fb6d6a9..0000000000
--- a/examples/workflows/extract_from_image_of_receipt/display/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# flake8: noqa: F401, F403
-
-from .nodes import *
-from .workflow import *
diff --git a/examples/workflows/extract_from_image_of_receipt/display/nodes/__init__.py b/examples/workflows/extract_from_image_of_receipt/display/nodes/__init__.py
deleted file mode 100644
index 30648b75b1..0000000000
--- a/examples/workflows/extract_from_image_of_receipt/display/nodes/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from .data_extractor import DataExtractorDisplay
-from .final_output import FinalOutputDisplay
-
-__all__ = [
-    "DataExtractorDisplay",
-    "FinalOutputDisplay",
-]
diff --git a/examples/workflows/extract_from_image_of_receipt/display/nodes/data_extractor.py b/examples/workflows/extract_from_image_of_receipt/display/nodes/data_extractor.py
deleted file mode 100644
index bbf566f410..0000000000
--- a/examples/workflows/extract_from_image_of_receipt/display/nodes/data_extractor.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from uuid import UUID
-
-from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition
-from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay, BaseTryNodeDisplay
-from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides
-
-from ...nodes.data_extractor import DataExtractor
-
-
-@BaseTryNodeDisplay.wrap(node_id=UUID("39d4d8a9-61c3-49cf-9fb4-0f8c2814a981"))
-class DataExtractorDisplay(BaseInlinePromptNodeDisplay[DataExtractor]):
-    label = "Data Extractor"
-    node_id = UUID("34996c9d-9b6c-41bf-a269-e7984093826b")
-    output_id = UUID("da7cf2f7-a4d8-4c06-b1d3-1ca570d88242")
-    array_output_id = UUID("acf1ba94-a2bb-462c-a770-0971cace33d0")
-    target_handle_id = UUID("87b983b6-7a38-402d-8ce6-a08afc0260f0")
-    node_input_ids_by_name = {"prompt_inputs.chat_history": UUID("462722d6-8acd-4446-b744-28631516e3ee")}
-    attribute_ids_by_name = {"ml_model": UUID("27f30854-59fe-4c87-aaa0-7721caafbf99")}
-    output_display = {
-        DataExtractor.Outputs.text: NodeOutputDisplay(id=UUID("da7cf2f7-a4d8-4c06-b1d3-1ca570d88242"), name="text"),
-        DataExtractor.Outputs.results: NodeOutputDisplay(
-            id=UUID("acf1ba94-a2bb-462c-a770-0971cace33d0"), name="results"
-        ),
-        DataExtractor.Outputs.json: NodeOutputDisplay(id=UUID("797fd7b7-d61e-45d4-a80b-8d13f603c78d"), name="json"),
-    }
-    port_displays = {DataExtractor.Ports.default: PortDisplayOverrides(id=UUID("83d895c9-2203-4b70-95e6-ade11602a39a"))}
-    display_data = NodeDisplayData(
-        position=NodeDisplayPosition(x=857.2886057113042, y=-157.99381186416284), width=554, height=531
-    )
diff --git a/examples/workflows/extract_from_image_of_receipt/display/nodes/final_output.py b/examples/workflows/extract_from_image_of_receipt/display/nodes/final_output.py
deleted file mode 100644
index 5e07f6cf91..0000000000
--- a/examples/workflows/extract_from_image_of_receipt/display/nodes/final_output.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from uuid import UUID
-
-from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition
-from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay
-from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay
-
-from ...nodes.final_output import FinalOutput
-
-
-class FinalOutputDisplay(BaseFinalOutputNodeDisplay[FinalOutput]):
-    label = "Final Output"
-    node_id = UUID("25746d6b-3749-401e-9111-00a64737949c")
-    target_handle_id = UUID("6ff6d0f8-c629-42e7-bbf1-272028c2979e")
-    output_name = "final-output"
-    node_input_ids_by_name = {"node_input": UUID("d5983b77-b8f4-4d3c-9aa1-c83830e5a919")}
-    output_display = {
-        FinalOutput.Outputs.value: NodeOutputDisplay(id=UUID("dfa69d72-3f2a-4f56-b639-5f0331ed5dc5"), name="value")
-    }
-    display_data = NodeDisplayData(
-        position=NodeDisplayPosition(x=1619.1089509093895, y=-195.73728608411776), width=522, height=400
-    )
diff --git a/examples/workflows/extract_from_image_of_receipt/display/workflow.py b/examples/workflows/extract_from_image_of_receipt/display/workflow.py
deleted file mode 100644
index 8446607c6e..0000000000
--- a/examples/workflows/extract_from_image_of_receipt/display/workflow.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from uuid import UUID
-
-from vellum_ee.workflows.display.base import (
-    EdgeDisplay,
-    EntrypointDisplay,
-    WorkflowDisplayData,
-    WorkflowDisplayDataViewport,
-    WorkflowInputsDisplay,
-    WorkflowMetaDisplay,
-    WorkflowOutputDisplay,
-)
-from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition
-from vellum_ee.workflows.display.workflows import BaseWorkflowDisplay
-
-from ..inputs import Inputs
-from ..nodes.data_extractor import DataExtractor
-from ..nodes.final_output import FinalOutput
-from ..workflow import Workflow
-
-
-class WorkflowDisplay(BaseWorkflowDisplay[Workflow]):
-    workflow_display = WorkflowMetaDisplay(
-        entrypoint_node_id=UUID("6343c92f-e900-41aa-8efa-a0ddabf62d42"),
-        entrypoint_node_source_handle_id=UUID("c30f3ded-6947-4206-a443-21bbfef379c1"),
-        entrypoint_node_display=NodeDisplayData(
-            position=NodeDisplayPosition(x=604.6530102688362, y=-81.4120252975955), width=124, height=48
-        ),
-        display_data=WorkflowDisplayData(
-            viewport=WorkflowDisplayDataViewport(x=-211.11940130297864, y=393.24473293335495, zoom=0.39997596956099407)
-        ),
-    )
-    inputs_display = {
-        Inputs.chat_history: WorkflowInputsDisplay(id=UUID("cda43bd1-2f3f-449e-93bc-e3e4a7be87ba"), name="chat_history")
-    }
-    entrypoint_displays = {
-        DataExtractor: EntrypointDisplay(
-            id=UUID("6343c92f-e900-41aa-8efa-a0ddabf62d42"),
-            edge_display=EdgeDisplay(id=UUID("735a5360-63f7-4420-8ca0-9bb95d0f4b01")),
-        )
-    }
-    edge_displays = {
-        (DataExtractor.Ports.default, FinalOutput): EdgeDisplay(id=UUID("74c32f43-a8b2-4549-8189-c5b5eaad1862"))
-    }
-    output_displays = {
-        Workflow.Outputs.final_output: WorkflowOutputDisplay(
-            id=UUID("dfa69d72-3f2a-4f56-b639-5f0331ed5dc5"), name="final-output"
-        )
-    }
diff --git a/examples/workflows/extract_from_image_of_receipt/inputs.py b/examples/workflows/extract_from_image_of_receipt/inputs.py
deleted file mode 100644
index 48cbf257ff..0000000000
--- a/examples/workflows/extract_from_image_of_receipt/inputs.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from typing import List
-
-from vellum import ChatMessage
-from vellum.workflows.inputs import BaseInputs
-
-
-class Inputs(BaseInputs):
-    chat_history: List[ChatMessage]
diff --git a/examples/workflows/extract_from_image_of_receipt/nodes/__init__.py b/examples/workflows/extract_from_image_of_receipt/nodes/__init__.py
deleted file mode 100644
index 09e82eaf2b..0000000000
--- a/examples/workflows/extract_from_image_of_receipt/nodes/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from .data_extractor import DataExtractor
-from .final_output import FinalOutput
-
-__all__ = [
-    "DataExtractor",
-    "FinalOutput",
-]
diff --git a/examples/workflows/extract_from_image_of_receipt/nodes/data_extractor.py b/examples/workflows/extract_from_image_of_receipt/nodes/data_extractor.py
deleted file mode 100644
index d6388cf67f..0000000000
--- a/examples/workflows/extract_from_image_of_receipt/nodes/data_extractor.py
+++ /dev/null
@@ -1,112 +0,0 @@
-from vellum import (
-    ChatMessagePromptBlock,
-    PlainTextPromptBlock,
-    PromptParameters,
-    RichTextPromptBlock,
-    VariablePromptBlock,
-)
-from vellum.workflows.nodes.core.try_node.node import TryNode
-from vellum.workflows.nodes.displayable import InlinePromptNode
-
-from ..inputs import Inputs
-
-
-@TryNode.wrap()
-class DataExtractor(InlinePromptNode):
-    ml_model = "gpt-4o-mini"
-    blocks = [
-        ChatMessagePromptBlock(
-            chat_role="SYSTEM",
-            blocks=[
-                RichTextPromptBlock(
-                    blocks=[
-                        PlainTextPromptBlock(
-                            text="""\
-You are an expert in extracting information from images of receipts. Your task is to accurately parse the receipt image and provide structured data based on the included schema. Only use the information from the image when creating the output and try to be as accurate as possible when grabbing the information from the image. If you don\'t know leave that portion of the JSON output empty
-\
-"""
-                        )
-                    ]
-                )
-            ],
-        ),
-        VariablePromptBlock(input_variable="chat_history"),
-    ]
-    prompt_inputs = {
-        "chat_history": Inputs.chat_history,
-    }
-    parameters = PromptParameters(
-        stop=[],
-        temperature=0,
-        max_tokens=1000,
-        top_p=1,
-        top_k=0,
-        frequency_penalty=0,
-        presence_penalty=0,
-        logit_bias={},
-        custom_parameters={
-            "json_schema": {
-                "name": "schema",
-                "schema": {
-                    "type": "object",
-                    "required": [
-                        "provider_name",
-                        "provider_address",
-                        "provider_phone",
-                        "date",
-                        "items_in_receipt",
-                        "number_of_items",
-                        "payment_total",
-                    ],
-                    "properties": {
-                        "date": {
-                            "type": "string",
-                            "description": "Date of purchase in the format mm/dd/yyyy.",
-                        },
-                        "payment_total": {
-                            "type": "number",
-                            "description": "Total amount of the receipt.",
-                        },
-                        "provider_name": {
-                            "type": "string",
-                            "description": "Name of the company that created the receipt.",
-                        },
-                        "provider_phone": {
-                            "type": "string",
-                            "description": "Phone number of the company that created the receipt.",
-                        },
-                        "number_of_items": {
-                            "type": "number",
-                            "description": "Total number of items on the receipt.",
-                        },
-                        "items_in_receipt": {
-                            "type": "array",
-                            "items": {
-                                "type": "object",
-                                "required": [
-                                    "name",
-                                    "price",
-                                ],
-                                "properties": {
-                                    "name": {
-                                        "type": "string",
-                                        "description": "Name of the item.",
-                                    },
-                                    "price": {
-                                        "type": "number",
-                                        "description": "Price of the corresponding item.",
-                                    },
-                                },
-                            },
-                            "description": "List of items on the receipt.",
-                        },
-                        "provider_address": {
-                            "type": "string",
-                            "description": "Address of the company that created the receipt.",
-                        },
-                    },
-                    "additionalProperties": False,
-                },
-            },
-        },
-    )
diff --git a/examples/workflows/extract_from_image_of_receipt/nodes/final_output.py b/examples/workflows/extract_from_image_of_receipt/nodes/final_output.py
deleted file mode 100644
index a5ff092449..0000000000
--- a/examples/workflows/extract_from_image_of_receipt/nodes/final_output.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from typing import Any
-
-from vellum.workflows.nodes.displayable import FinalOutputNode
-from vellum.workflows.state import BaseState
-
-from .data_extractor import DataExtractor
-
-
-class FinalOutput(FinalOutputNode[BaseState, Any]):
-    class Outputs(FinalOutputNode.Outputs):
-        value = DataExtractor.Outputs.json
diff --git a/examples/workflows/extract_from_image_of_receipt/sandbox.py b/examples/workflows/extract_from_image_of_receipt/sandbox.py
deleted file mode 100644
index 9b47f5492b..0000000000
--- a/examples/workflows/extract_from_image_of_receipt/sandbox.py
+++ /dev/null
@@ -1,44 +0,0 @@
-from vellum import ArrayChatMessageContent, ChatMessage, ImageChatMessageContent, StringChatMessageContent, VellumImage
-from vellum.workflows.sandbox import WorkflowSandboxRunner
-
-from .inputs import Inputs
-from .workflow import Workflow
-
-if __name__ != "__main__":
-    raise Exception("This file is not meant to be imported")
-
-
-runner = WorkflowSandboxRunner(
-    workflow=Workflow(),
-    inputs=[
-        Inputs(
-            chat_history=[
-                ChatMessage(
-                    role="USER",
-                    text="https://storage.googleapis.com/vellum-public/help-docs/extract-from-image-of-receipt.jpeg",
-                    content=ArrayChatMessageContent(
-                        value=[
-                            ImageChatMessageContent(
-                                value=VellumImage(
-                                    src="https://storage.googleapis.com/vellum-public/help-docs/extract-from-image-of-receipt.jpeg",
-                                    metadata={
-                                        "detail": "low",
-                                    },
-                                ),
-                            ),
-                        ]
-                    ),
-                ),
-                ChatMessage(
-                    role="ASSISTANT",
-                    text='\n{\n "provider_name": "WAL-MART",\n "provider_address": "",\n "provider_phone": "(515) 986-1783",\n "date": "08/20/10",\n "items_in_receipt": [\n {\n "name": "BANANAS",\n "price": 0.20\n },\n {\n "name": "FRAP",\n "price": 5.48\n }\n ],\n "number_of_items": 2,\n "payment_total": 5.11\n}',
-                    content=StringChatMessageContent(
-                        value='\n{\n "provider_name": "WAL-MART",\n "provider_address": "",\n "provider_phone": "(515) 986-1783",\n "date": "08/20/10",\n "items_in_receipt": [\n {\n "name": "BANANAS",\n "price": 0.20\n },\n {\n "name": "FRAP",\n "price": 5.48\n }\n ],\n "number_of_items": 2,\n "payment_total": 5.11\n}'
-                    ),
-                ),
-            ]
-        ),
-    ],
-)
-
-runner.run()
diff --git a/examples/workflows/extract_from_image_of_receipt/workflow.py b/examples/workflows/extract_from_image_of_receipt/workflow.py
deleted file mode 100644
index 60628e0ae2..0000000000
--- a/examples/workflows/extract_from_image_of_receipt/workflow.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from vellum.workflows import BaseWorkflow
-from vellum.workflows.state import BaseState
-
-from .inputs import Inputs
-from .nodes.data_extractor import DataExtractor
-from .nodes.final_output import FinalOutput
-
-
-class Workflow(BaseWorkflow[Inputs, BaseState]):
-    graph = DataExtractor >> FinalOutput
-
-    class Outputs(BaseWorkflow.Outputs):
-        final_output = FinalOutput.Outputs.value
diff --git a/examples/workflows/file_uploads/README.md b/examples/workflows/file_uploads/README.md
deleted file mode 100644
index 63665ed565..0000000000
--- a/examples/workflows/file_uploads/README.md
+++ /dev/null
@@ -1,31 +0,0 @@
-# File Uploads Example
-
-This example demonstrates how to upload images to Vellum's internal storage and use them in vision prompts.
-
-## Overview
-
-This workflow shows how to:
-1. Upload images from URLs or base64 to Vellum's secure storage
-2. Convert them to `vellum:uploaded-file:*` URIs
-3. Use the uploaded images in vision prompts
-
-## Key Components
-
-### `nodes/upload_file_node.py`
-Uploads images using the `upload_vellum_file()` utility function. Accepts images from URLs, base64, or already-uploaded files.
-
-### `nodes/use_uploaded_file_node.py`
-Uses the uploaded images in a vision prompt. Note that `VariablePromptBlock` for images must be placed **inside** the `ChatMessagePromptBlock`'s `blocks` array.
-
-## Running the Example
-
-```bash
-poetry run python -m examples.workflows.file_uploads.sandbox
-```
-
-## Key Points
-
-- The `upload_vellum_file()` utility handles uploading from URLs, base64, or already-uploaded files
-- Images are converted to `vellum:uploaded-file:*` URIs for secure internal use
-- Multiple images require separate `VariablePromptBlock` instances (one per image)
-- `VariablePromptBlock` for images must be inside `ChatMessagePromptBlock` blocks, not at the top level
diff --git a/examples/workflows/file_uploads/chat.png b/examples/workflows/file_uploads/chat.png
deleted file mode 100644
index 34985b3447..0000000000
Binary files a/examples/workflows/file_uploads/chat.png and /dev/null differ
diff --git a/examples/workflows/file_uploads/inputs.py b/examples/workflows/file_uploads/inputs.py
deleted file mode 100644
index db8fccb39b..0000000000
--- a/examples/workflows/file_uploads/inputs.py
+++ /dev/null
@@ -1,10 +0,0 @@
-from typing import List, Optional
-
-from vellum import VellumImage
-from vellum.workflows.inputs import BaseInputs
-
-
-class Inputs(BaseInputs):
-    """Inputs for the file upload workflow example."""
-
-    images: Optional[List[VellumImage]] = None
diff --git a/examples/workflows/file_uploads/nodes/__init__.py b/examples/workflows/file_uploads/nodes/__init__.py
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/examples/workflows/file_uploads/nodes/final_output.py b/examples/workflows/file_uploads/nodes/final_output.py
deleted file mode 100644
index c1bafb71ca..0000000000
--- a/examples/workflows/file_uploads/nodes/final_output.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from vellum.workflows.nodes.displayable import FinalOutputNode
-from vellum.workflows.state import BaseState
-
-from .use_uploaded_file_node import UseUploadedFileNode
-
-
-class FinalOutput(FinalOutputNode[BaseState, str]):
-    """Final output that returns the analysis result."""
-
-    class Outputs(FinalOutputNode.Outputs):
-        analysis = UseUploadedFileNode.Outputs.text
diff --git a/examples/workflows/file_uploads/nodes/upload_file_node.py b/examples/workflows/file_uploads/nodes/upload_file_node.py
deleted file mode 100644
index 5b452c4944..0000000000
--- a/examples/workflows/file_uploads/nodes/upload_file_node.py
+++ /dev/null
@@ -1,62 +0,0 @@
-from typing import List
-
-from vellum import VellumImage
-from vellum.workflows.nodes.bases import BaseNode
-from vellum.workflows.state import BaseState
-
-from ..inputs import Inputs
-
-
-class UploadFileNode(BaseNode[BaseState]):
-    """
-    Uploads files to Vellum's internal storage.
-
-    This demonstrates how to convert public URLs or base64 data URLs into private
-    vellum:uploaded-file:* URIs that can be used securely within workflows.
-    """
-
-    images = Inputs.images
-
-    class Display(BaseNode.Display):
-        icon = "vellum:icon:cloud-upload"
-        color = "blue"
-
-    class Outputs(BaseNode.Outputs):
-        chat: VellumImage
-        receipt: VellumImage
-        four_pillars: VellumImage
-
-    def run(self) -> BaseNode.Outputs:
-        # Get images from inputs
-        images = self.images or []
-
-        if len(images) < 3:
-            raise ValueError("Expected at least 3 images")
-
-        # Upload each file to Vellum if it's not already uploaded
-        # This will:
-        # 1. Return as-is if already a vellum:uploaded-file:* URI
-        # 2. Upload from base64 data URL if that's the source
-        # 3. Download from URL and upload if it's a public URL
-        uploaded_chat = images[0].upload(
-            filename="chat.png",
-            vellum_client=self._context.vellum_client,
-        )
-        uploaded_receipt = images[1].upload(
-            filename="receipt.jpeg",
-            vellum_client=self._context.vellum_client,
-        )
-        uploaded_four_pillars = images[2].upload(
-            filename="four_pillars.png",
-            vellum_client=self._context.vellum_client,
-        )
-
-        print(f"Uploaded chat: {uploaded_chat.src}")
-        print(f"Uploaded receipt: {uploaded_receipt.src}")
-        print(f"Uploaded four pillars: {uploaded_four_pillars.src}")
-
-        return self.Outputs(
-            chat=uploaded_chat,
-            receipt=uploaded_receipt,
-            four_pillars=uploaded_four_pillars,
-        )
diff --git a/examples/workflows/file_uploads/nodes/use_uploaded_file_node.py b/examples/workflows/file_uploads/nodes/use_uploaded_file_node.py
deleted file mode 100644
index aecdca9447..0000000000
--- a/examples/workflows/file_uploads/nodes/use_uploaded_file_node.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from vellum import ChatMessagePromptBlock, PlainTextPromptBlock, RichTextPromptBlock, VariablePromptBlock
-from vellum.workflows.nodes.displayable import InlinePromptNode
-
-from .upload_file_node import UploadFileNode
-
-
-class UseUploadedFileNode(InlinePromptNode):
-    """
-    Uses uploaded image files in a prompt.
-
-    This node receives VellumImages with vellum:uploaded-file:* URIs
-    and uses them in a vision prompt. The files are automatically resolved
-    by Vellum's infrastructure.
-    """
-
-    ml_model = "gpt-5-responses"
-    blocks = [
-        ChatMessagePromptBlock(
-            chat_role="SYSTEM",
-            blocks=[
-                RichTextPromptBlock(
-                    blocks=[
-                        PlainTextPromptBlock(text="Analyze the images provided and describe what you see in detail.")
-                    ]
-                ),
-                VariablePromptBlock(input_variable="chat"),
-                VariablePromptBlock(input_variable="receipt"),
-                VariablePromptBlock(input_variable="four_pillars"),
-            ],
-        ),
-    ]
-    prompt_inputs = {
-        "chat": UploadFileNode.Outputs.chat,
-        "receipt": UploadFileNode.Outputs.receipt,
-        "four_pillars": UploadFileNode.Outputs.four_pillars,
-    }
diff --git a/examples/workflows/file_uploads/sandbox.py b/examples/workflows/file_uploads/sandbox.py
deleted file mode 100644
index 6bf458f607..0000000000
--- a/examples/workflows/file_uploads/sandbox.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import base64
-import os
-
-import dotenv
-
-from vellum import VellumImage
-from vellum.workflows.sandbox import WorkflowSandboxRunner
-
-from .inputs import Inputs
-from .workflow import Workflow
-
-if __name__ != "__main__":
-    raise Exception("This file is not meant to be imported")
-
-dotenv.load_dotenv()
-
-# # Basic upload example: Upload chat.png and get vellum URI
-# current_dir = os.path.dirname(os.path.abspath(__file__))
-# chat_image_path = os.path.join(current_dir, "chat.png")
-# with open(chat_image_path, "rb") as f:
-#     chat_image_bytes = f.read()
-# chat_image_base64 = base64.b64encode(chat_image_bytes).decode("utf-8")
-# chat_image_base64_src = f"data:image/png;base64,{chat_image_base64}"
-
-# uploaded_chat = VellumImage(src=chat_image_base64_src).upload(filename="chat.png")
-# # vellum:uploaded-file:9cdc7745-7260-4117-bd3d-b83fd6c2b6f2
-# print(f"Uploaded chat.png URI: {uploaded_chat.src}")
-
-# Create images from different sources
-# Receipt image: From a public URL
-receipt_image_url = "https://storage.googleapis.com/vellum-public/help-docs/extract-from-image-of-receipt.jpeg"
-
-# Four pillars image: From base64 (read from local file)
-current_dir = os.path.dirname(os.path.abspath(__file__))
-four_pillars_image_path = os.path.join(current_dir, "vellum_four_pillars.png")
-with open(four_pillars_image_path, "rb") as f:
-    four_pillars_image_bytes = f.read()
-four_pillars_image_base64 = base64.b64encode(four_pillars_image_bytes).decode("utf-8")
-four_pillars_image_base64_src = f"data:image/png;base64,{four_pillars_image_base64}"
-
-runner = WorkflowSandboxRunner(
-    workflow=Workflow(),
-    dataset=[
-        Inputs(
-            images=[
-                VellumImage(src="vellum:uploaded-file:9cdc7745-7260-4117-bd3d-b83fd6c2b6f2"),  # chat.png
-                VellumImage(src=receipt_image_url),
-                VellumImage(src=four_pillars_image_base64_src),
-            ],
-        ),
-    ],
-)
-
-runner.run()
diff --git a/examples/workflows/file_uploads/vellum_four_pillars.png b/examples/workflows/file_uploads/vellum_four_pillars.png
deleted file mode 100644
index e7944efe0d..0000000000
Binary files a/examples/workflows/file_uploads/vellum_four_pillars.png and /dev/null differ
diff --git a/examples/workflows/file_uploads/workflow.py b/examples/workflows/file_uploads/workflow.py
deleted file mode 100644
index 898b870e7c..0000000000
--- a/examples/workflows/file_uploads/workflow.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from vellum.workflows import BaseWorkflow
-from vellum.workflows.state import BaseState
-
-from .inputs import Inputs
-from .nodes.final_output import FinalOutput
-from .nodes.upload_file_node import UploadFileNode
-from .nodes.use_uploaded_file_node import UseUploadedFileNode
-
-
-class Workflow(BaseWorkflow[Inputs, BaseState]):
-    """
-    Workflow demonstrating file uploads.
-
-    This workflow shows:
-    1. How to upload files to Vellum's internal storage
-    2. How to use vellum:uploaded-file:* URIs
-    3. How to pass images between nodes
-    """
-
-    graph = {
-        UploadFileNode >> UseUploadedFileNode >> FinalOutput,
-    }
-
-    class Outputs(BaseWorkflow.Outputs):
-        analysis = FinalOutput.Outputs.analysis
diff --git a/examples/workflows/function_calling_demo/__init__.py b/examples/workflows/function_calling_demo/__init__.py
deleted file mode 100644
index 9442617720..0000000000
--- a/examples/workflows/function_calling_demo/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-# flake8: noqa: F401, F403
-
-from .display import *
diff --git a/examples/workflows/function_calling_demo/display/__init__.py b/examples/workflows/function_calling_demo/display/__init__.py
deleted file mode 100644
index d38fb6d6a9..0000000000
--- a/examples/workflows/function_calling_demo/display/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# flake8: noqa: F401, F403
-
-from .nodes import *
-from .workflow import *
diff --git a/examples/workflows/function_calling_demo/display/nodes/__init__.py b/examples/workflows/function_calling_demo/display/nodes/__init__.py
deleted file mode 100644
index 8ba66559ce..0000000000
--- a/examples/workflows/function_calling_demo/display/nodes/__init__.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from .accumulate_chat_history import AccumulateChatHistoryDisplay
-from .conditional_node import ConditionalNodeDisplay
-from .conditional_node_10 import ConditionalNode10Display
-from .error_node import ErrorNodeDisplay
-from .final_accumulation_of_chat_history import FinalAccumulationOfChatHistoryDisplay
-from .final_output import FinalOutputDisplay
-from .get_current_weather import GetCurrentWeatherDisplay
-from .output_type import OutputTypeDisplay
-from .parse_function_call import ParseFunctionCallDisplay
-from .prompt_node import PromptNodeDisplay
-
-__all__ = [
-    "AccumulateChatHistoryDisplay",
-    "ConditionalNode10Display",
-    "ConditionalNodeDisplay",
-    "ErrorNodeDisplay",
-    "FinalAccumulationOfChatHistoryDisplay",
-    "FinalOutputDisplay",
-    "GetCurrentWeatherDisplay",
-    "OutputTypeDisplay",
"ParseFunctionCallDisplay", - "PromptNodeDisplay", -] diff --git a/examples/workflows/function_calling_demo/display/nodes/accumulate_chat_history.py b/examples/workflows/function_calling_demo/display/nodes/accumulate_chat_history.py deleted file mode 100644 index 8eb9d3cae6..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/accumulate_chat_history.py +++ /dev/null @@ -1,35 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseCodeExecutionNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.accumulate_chat_history import AccumulateChatHistory - - -class AccumulateChatHistoryDisplay(BaseCodeExecutionNodeDisplay[AccumulateChatHistory]): - label = "Accumulate Chat History" - node_id = UUID("4787115e-23b9-4980-bf51-655da351d9e7") - target_handle_id = UUID("6456c31d-1631-46d3-aaed-bbcad9e9be62") - output_id = UUID("9e6d85b4-6941-4758-9304-99b94122868d") - log_output_id = UUID("60c8dade-d9a9-441a-8997-e77afc7ec38d") - node_input_ids_by_name = { - "code_inputs.tool_id": UUID("eab3c9f4-78ef-4b16-afa8-dcdeffcd5af2"), - "code_inputs.function_result": UUID("29e037ad-b6e4-40ec-a816-2fce11671f04"), - "code": UUID("c5dc64de-d70e-4b99-a15f-0cf2c853a856"), - "runtime": UUID("b7a6a274-756e-4983-93d4-93e4c8ee2a9b"), - "code_inputs.assistant_message": UUID("8f59e730-6da8-4418-b5d7-e001e3b1869b"), - "code_inputs.current_chat_history": UUID("92f43842-3d2e-47e4-8fc2-1fc62391a72c"), - } - output_display = { - AccumulateChatHistory.Outputs.result: NodeOutputDisplay( - id=UUID("9e6d85b4-6941-4758-9304-99b94122868d"), name="result" - ), - AccumulateChatHistory.Outputs.log: NodeOutputDisplay( - id=UUID("60c8dade-d9a9-441a-8997-e77afc7ec38d"), name="log" - ), - } - port_displays = { - AccumulateChatHistory.Ports.default: PortDisplayOverrides(id=UUID("66a7143a-15c6-4f85-9b0b-029653f488ff")) 
- } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=5700, y=-45), width=449, height=381) diff --git a/examples/workflows/function_calling_demo/display/nodes/conditional_node.py b/examples/workflows/function_calling_demo/display/nodes/conditional_node.py deleted file mode 100644 index e9c04ac59d..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/conditional_node.py +++ /dev/null @@ -1,46 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseConditionalNodeDisplay -from vellum_ee.workflows.display.nodes.types import PortDisplayOverrides -from vellum_ee.workflows.display.nodes.vellum.conditional_node import ConditionId, RuleIdMap - -from ...nodes.conditional_node import ConditionalNode - - -class ConditionalNodeDisplay(BaseConditionalNodeDisplay[ConditionalNode]): - label = "Conditional Node" - node_id = UUID("a3f2fd3d-0c58-48d8-bdfc-9f16591b1964") - target_handle_id = UUID("e07139df-febe-43fb-8e2c-11b85464dcde") - source_handle_ids = { - 0: UUID("b032aa36-545e-4499-b7ba-2750e19b61d1"), - 1: UUID("60b2218f-1481-41e3-bc39-943033285d76"), - } - rule_ids = [ - RuleIdMap( - id="4913d8e4-6cfe-4826-b722-3af118d788c9", - lhs=RuleIdMap( - id="f860841f-da06-45c2-ad50-c63f975469a8", - lhs=None, - rhs=None, - field_node_input_id="4dd04a7d-7a12-43b1-9a0b-b2dc182227c6", - value_node_input_id="95b497b8-7357-4f74-a86f-b65f4ae14dcf", - ), - rhs=None, - field_node_input_id=None, - value_node_input_id=None, - ) - ] - condition_ids = [ - ConditionId(id="708ee94b-62f1-46cf-ac1b-157548dd6e40", rule_group_id="4913d8e4-6cfe-4826-b722-3af118d788c9"), - ConditionId(id="dbd8983a-63a5-432b-a1c7-4ddf612010f5", rule_group_id=None), - ] - node_input_ids_by_name = { - "226cdc38-6029-4e73-8d88-a283cd6dc0de.field": UUID("4dd04a7d-7a12-43b1-9a0b-b2dc182227c6"), - "226cdc38-6029-4e73-8d88-a283cd6dc0de.value": 
UUID("95b497b8-7357-4f74-a86f-b65f4ae14dcf"), - } - port_displays = { - ConditionalNode.Ports.branch_1: PortDisplayOverrides(id=UUID("b032aa36-545e-4499-b7ba-2750e19b61d1")), - ConditionalNode.Ports.branch_2: PortDisplayOverrides(id=UUID("60b2218f-1481-41e3-bc39-943033285d76")), - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=3405, y=-90), width=457, height=177) diff --git a/examples/workflows/function_calling_demo/display/nodes/conditional_node_10.py b/examples/workflows/function_calling_demo/display/nodes/conditional_node_10.py deleted file mode 100644 index 5ff4a13390..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/conditional_node_10.py +++ /dev/null @@ -1,46 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseConditionalNodeDisplay -from vellum_ee.workflows.display.nodes.types import PortDisplayOverrides -from vellum_ee.workflows.display.nodes.vellum.conditional_node import ConditionId, RuleIdMap - -from ...nodes.conditional_node_10 import ConditionalNode10 - - -class ConditionalNode10Display(BaseConditionalNodeDisplay[ConditionalNode10]): - label = "Conditional Node 10" - node_id = UUID("3ab8c006-2d4e-4025-b39d-314ac549afd6") - target_handle_id = UUID("a5a91252-cd98-44d9-a19e-683bd8da7d01") - source_handle_ids = { - 0: UUID("cb376fec-4053-4ecc-8acd-118ac972b393"), - 1: UUID("ca10ed22-f23f-4a50-ac36-a64877d13b9c"), - } - rule_ids = [ - RuleIdMap( - id="c125682f-b736-40fa-a9e2-bfafcbc2312b", - lhs=RuleIdMap( - id="d0a1fbec-f2f9-4c3a-9756-9865e92d41fe", - lhs=None, - rhs=None, - field_node_input_id="fc26c9a7-f514-4bdb-93b4-374fd6272afc", - value_node_input_id="42d853d9-dcd0-4309-baed-4f74317e778c", - ), - rhs=None, - field_node_input_id=None, - value_node_input_id=None, - ) - ] - condition_ids = [ - ConditionId(id="777fa95e-3442-4f46-83f2-c1ff8b609ec9", 
rule_group_id="c125682f-b736-40fa-a9e2-bfafcbc2312b"), - ConditionId(id="89ea5a9b-2330-46dc-a075-04c8608bc0c1", rule_group_id=None), - ] - node_input_ids_by_name = { - "b946ebf5-6865-4f84-a08e-9bb1ed2867df.field": UUID("fc26c9a7-f514-4bdb-93b4-374fd6272afc"), - "b946ebf5-6865-4f84-a08e-9bb1ed2867df.value": UUID("42d853d9-dcd0-4309-baed-4f74317e778c"), - } - port_displays = { - ConditionalNode10.Ports.branch_1: PortDisplayOverrides(id=UUID("cb376fec-4053-4ecc-8acd-118ac972b393")), - ConditionalNode10.Ports.branch_2: PortDisplayOverrides(id=UUID("ca10ed22-f23f-4a50-ac36-a64877d13b9c")), - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=2156.6652061538634, y=240), width=460, height=177) diff --git a/examples/workflows/function_calling_demo/display/nodes/error_node.py b/examples/workflows/function_calling_demo/display/nodes/error_node.py deleted file mode 100644 index 3e83a87b90..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/error_node.py +++ /dev/null @@ -1,16 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseErrorNodeDisplay - -from ...nodes.error_node import ErrorNode - - -class ErrorNodeDisplay(BaseErrorNodeDisplay[ErrorNode]): - name = "error-node" - node_id = UUID("456afdb9-4d76-40b8-a032-0bddf0583632") - label = "Error Node" - error_output_id = UUID("8fd34ccf-aadb-4fab-b450-11b247737548") - target_handle_id = UUID("a815d06f-b3af-46de-a53b-b2e38f0e3ea3") - node_input_ids_by_name = {"error_source_input_id": UUID("d88e2764-a9b0-4b13-aa04-d7a5bdd54785")} - display_data = NodeDisplayData(position=NodeDisplayPosition(x=4260, y=780), width=364, height=124) diff --git a/examples/workflows/function_calling_demo/display/nodes/final_accumulation_of_chat_history.py b/examples/workflows/function_calling_demo/display/nodes/final_accumulation_of_chat_history.py deleted file mode 100644 index 5363cb0592..0000000000 
--- a/examples/workflows/function_calling_demo/display/nodes/final_accumulation_of_chat_history.py +++ /dev/null @@ -1,35 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseCodeExecutionNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.final_accumulation_of_chat_history import FinalAccumulationOfChatHistory - - -class FinalAccumulationOfChatHistoryDisplay(BaseCodeExecutionNodeDisplay[FinalAccumulationOfChatHistory]): - label = "Final Accumulation of Chat History" - node_id = UUID("d1855d88-8316-4377-8e95-9395c8f75855") - target_handle_id = UUID("92fc3b35-8e45-473f-ba98-c7664714c1f9") - output_id = UUID("1c8268a2-4e31-4fd5-b493-0a9f28b7ec19") - log_output_id = UUID("a97f5cbf-3a00-4af7-921d-c553c8b3243f") - node_input_ids_by_name = { - "code": UUID("ec9748c7-a43c-4a94-9772-ee747cf4e361"), - "runtime": UUID("df07ce66-fe48-4787-b210-087bb3b2a337"), - "code_inputs.current_chat_history": UUID("b9fe0d7b-be2c-43fc-8c65-57300e40fb20"), - "code_inputs.assistant_message": UUID("18dfa292-1cc8-4549-847e-152a5e7782df"), - } - output_display = { - FinalAccumulationOfChatHistory.Outputs.result: NodeOutputDisplay( - id=UUID("1c8268a2-4e31-4fd5-b493-0a9f28b7ec19"), name="result" - ), - FinalAccumulationOfChatHistory.Outputs.log: NodeOutputDisplay( - id=UUID("a97f5cbf-3a00-4af7-921d-c553c8b3243f"), name="log" - ), - } - port_displays = { - FinalAccumulationOfChatHistory.Ports.default: PortDisplayOverrides( - id=UUID("619a3089-4a05-4029-a248-cddb50facab7") - ) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=2820, y=825), width=466, height=315) diff --git a/examples/workflows/function_calling_demo/display/nodes/final_output.py b/examples/workflows/function_calling_demo/display/nodes/final_output.py deleted file mode 100644 index 644e7d52e0..0000000000 --- 
a/examples/workflows/function_calling_demo/display/nodes/final_output.py +++ /dev/null @@ -1,19 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from ...nodes.final_output import FinalOutput - - -class FinalOutputDisplay(BaseFinalOutputNodeDisplay[FinalOutput]): - label = "Final Output" - node_id = UUID("4a6aadca-4d2f-40b8-a1a1-23db0f6a5767") - target_handle_id = UUID("e5ef2f31-2199-4e1b-85d0-b5a79ab0595a") - output_name = "final-output" - node_input_ids_by_name = {"node_input": UUID("4ad8334f-b66b-49bd-828e-aef443bc3052")} - output_display = { - FinalOutput.Outputs.value: NodeOutputDisplay(id=UUID("e869f551-b02c-465f-90b3-ad2021b3c618"), name="value") - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=3405, y=900), width=465, height=230) diff --git a/examples/workflows/function_calling_demo/display/nodes/get_current_weather.py b/examples/workflows/function_calling_demo/display/nodes/get_current_weather.py deleted file mode 100644 index 6c0dd50cde..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/get_current_weather.py +++ /dev/null @@ -1,32 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseCodeExecutionNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.get_current_weather import GetCurrentWeather - - -class GetCurrentWeatherDisplay(BaseCodeExecutionNodeDisplay[GetCurrentWeather]): - label = "Get Current Weather" - node_id = UUID("d3734d8b-c3c8-44db-abf8-cbc6c07e20dc") - target_handle_id = UUID("4d7e27dc-608a-4486-ab45-fe7a476ee3c0") - output_id = UUID("312cdcea-bef2-498e-80d0-32391857ffcc") - log_output_id = 
UUID("fbea8140-56b3-4e34-8550-63c62887c2d5") - node_input_ids_by_name = { - "code_inputs.kwargs": UUID("5f08cdf0-e683-44a3-96e6-1d42d4d57f28"), - "code": UUID("a069f01e-fa1d-48b4-854f-4de6baa0761e"), - "runtime": UUID("19da0c1a-1c88-40be-8286-134c551dcb32"), - "code_inputs.gmaps_api_key": UUID("2b5a6327-d026-4b54-997b-00e97125f85e"), - "code_inputs.openweather_api_key": UUID("32e37882-aee4-47a1-b286-a533b9bf350a"), - } - output_display = { - GetCurrentWeather.Outputs.result: NodeOutputDisplay( - id=UUID("312cdcea-bef2-498e-80d0-32391857ffcc"), name="result" - ), - GetCurrentWeather.Outputs.log: NodeOutputDisplay(id=UUID("fbea8140-56b3-4e34-8550-63c62887c2d5"), name="log"), - } - port_displays = { - GetCurrentWeather.Ports.default: PortDisplayOverrides(id=UUID("f7c33ad9-2414-4c5e-89cb-4cc372ee2adf")) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=4740, y=-105), width=460, height=327) diff --git a/examples/workflows/function_calling_demo/display/nodes/output_type.py b/examples/workflows/function_calling_demo/display/nodes/output_type.py deleted file mode 100644 index ab83726b10..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/output_type.py +++ /dev/null @@ -1,22 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.output_type import OutputType - - -class OutputTypeDisplay(BaseTemplatingNodeDisplay[OutputType]): - label = "Output Type" - node_id = UUID("122e6aed-eee1-448c-8fb0-9746aa3c63f4") - target_handle_id = UUID("7bad755b-7f00-4c5f-a66d-4e099902cb4f") - node_input_ids_by_name = { - "inputs.output": UUID("76a43566-ea34-4547-bded-0f77652671ae"), - "template": UUID("15e4e329-b479-4b66-a105-2420c8796970"), - } - output_display = { - OutputType.Outputs.result: 
NodeOutputDisplay(id=UUID("7d17e99d-3929-4672-9fbd-ad7eb5cae0d2"), name="result") - } - port_displays = {OutputType.Ports.default: PortDisplayOverrides(id=UUID("457cd3cf-893d-421d-a4f0-47872d7df7ac"))} - display_data = NodeDisplayData(position=NodeDisplayPosition(x=1485, y=285), width=453, height=221) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/__init__.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/__init__.py deleted file mode 100644 index f139cfd4f9..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/__init__.py +++ /dev/null @@ -1,33 +0,0 @@ -# flake8: noqa: F401, F403 - -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlineSubworkflowNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ....nodes.parse_function_call import ParseFunctionCall -from .nodes import * -from .workflow import * - - -class ParseFunctionCallDisplay(BaseInlineSubworkflowNodeDisplay[ParseFunctionCall]): - label = "Parse Function Call" - node_id = UUID("345d09e7-0117-4aef-aba4-ac7f3ce1b4a7") - target_handle_id = UUID("f5f2f53a-4867-4bf0-b07b-8b39d39c6a03") - workflow_input_ids_by_name = {} - output_display = { - ParseFunctionCall.Outputs.function_args: NodeOutputDisplay( - id=UUID("d520f0a1-c28f-4007-acf9-2758871f2250"), name="function-args" - ), - ParseFunctionCall.Outputs.function_name: NodeOutputDisplay( - id=UUID("680f2d8d-b03a-43b9-9d77-626044e03227"), name="function-name" - ), - ParseFunctionCall.Outputs.tool_id: NodeOutputDisplay( - id=UUID("764c26a4-b0c4-4a52-9e19-96e651eccbd3"), name="tool-id" - ), - } - port_displays = { - ParseFunctionCall.Ports.default: PortDisplayOverrides(id=UUID("b0b8c13c-4c55-4c38-9e00-412000f517b3")) - } - display_data = 
NodeDisplayData(position=NodeDisplayPosition(x=2850, y=75), width=None, height=None) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/__init__.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/__init__.py deleted file mode 100644 index 517968f805..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -from .allowed_function_names import AllowedFunctionNamesDisplay -from .args import ArgsDisplay -from .conditional_node_1 import ConditionalNode2Display -from .error_message import ErrorMessageDisplay -from .error_node import ErrorNode1Display -from .is_valid_function_name import IsValidFunctionNameDisplay -from .merge_node import MergeNodeDisplay -from .name import NameDisplay -from .parse_function_args import ParseFunctionArgsDisplay -from .parse_function_call import ParseFunctionCall1Display -from .parse_function_name import ParseFunctionNameDisplay -from .parse_tool_id import ParseToolIDDisplay -from .tool_id import ToolIDDisplay - -__all__ = [ - "AllowedFunctionNamesDisplay", - "ArgsDisplay", - "ConditionalNode2Display", - "ErrorMessageDisplay", - "ErrorNode1Display", - "IsValidFunctionNameDisplay", - "MergeNodeDisplay", - "NameDisplay", - "ParseFunctionArgsDisplay", - "ParseFunctionCall1Display", - "ParseFunctionNameDisplay", - "ParseToolIDDisplay", - "ToolIDDisplay", -] diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/allowed_function_names.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/allowed_function_names.py deleted file mode 100644 index 791da7b333..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/allowed_function_names.py +++ /dev/null @@ -1,23 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition 
-from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from .....nodes.parse_function_call.nodes.allowed_function_names import AllowedFunctionNames - - -class AllowedFunctionNamesDisplay(BaseTemplatingNodeDisplay[AllowedFunctionNames]): - label = "Allowed Function Names" - node_id = UUID("a5d4f0d5-747d-4469-8501-d8ee3165050e") - target_handle_id = UUID("42ad9447-9fea-4e21-9ef8-dc5908e33ad9") - node_input_ids_by_name = {"template": UUID("669d0c45-0f73-493e-85d9-bcc815419229")} - output_display = { - AllowedFunctionNames.Outputs.result: NodeOutputDisplay( - id=UUID("c1f68169-345d-456d-afc6-0a5eb6db1f41"), name="result" - ) - } - port_displays = { - AllowedFunctionNames.Ports.default: PortDisplayOverrides(id=UUID("1b86b935-cb3e-41ed-9477-e820e56c042c")) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=2910, y=1965), width=None, height=None) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/args.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/args.py deleted file mode 100644 index 7b7e15682a..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/args.py +++ /dev/null @@ -1,19 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from .....nodes.parse_function_call.nodes.args import Args - - -class ArgsDisplay(BaseFinalOutputNodeDisplay[Args]): - label = "Args" - node_id = UUID("2bb745c2-6405-4c79-9659-c35bf6a0331a") - target_handle_id = UUID("b1eb4d88-5d4e-473a-bf74-e86e6f5b5c71") - output_name = "function-args" - node_input_ids_by_name = {"node_input": UUID("37038db4-ad28-4270-a256-32302a0962e1")} - 
output_display = { - Args.Outputs.value: NodeOutputDisplay(id=UUID("d520f0a1-c28f-4007-acf9-2758871f2250"), name="value") - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=5535, y=345), width=None, height=None) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/conditional_node_1.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/conditional_node_1.py deleted file mode 100644 index c11471a403..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/conditional_node_1.py +++ /dev/null @@ -1,46 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseConditionalNodeDisplay -from vellum_ee.workflows.display.nodes.types import PortDisplayOverrides -from vellum_ee.workflows.display.nodes.vellum.conditional_node import ConditionId, RuleIdMap - -from .....nodes.parse_function_call.nodes.conditional_node_1 import ConditionalNode2 - - -class ConditionalNode2Display(BaseConditionalNodeDisplay[ConditionalNode2]): - label = "Conditional Node" - node_id = UUID("f5ccd090-d6f8-44c8-8e1a-23abb3e0cce2") - target_handle_id = UUID("ec011887-0213-4667-90d3-fcf0b1e170ca") - source_handle_ids = { - 0: UUID("f40a1638-e846-4aa1-9dda-0d5fe439697a"), - 1: UUID("a58748f1-8ae4-470f-9e3e-80478f567f5d"), - } - rule_ids = [ - RuleIdMap( - id="d16245e8-5465-4f2c-a7e9-f7091e6ad2dd", - lhs=RuleIdMap( - id="1a4cc082-17bf-4cd3-bd9d-f84b209a5008", - lhs=None, - rhs=None, - field_node_input_id="839e9337-765e-4a3a-a8ca-74acb91014a2", - value_node_input_id="754fa31d-451d-4669-977b-8d5f23a4b542", - ), - rhs=None, - field_node_input_id=None, - value_node_input_id=None, - ) - ] - condition_ids = [ - ConditionId(id="4ca649ff-199b-4f31-8bf5-57063305d35c", rule_group_id="d16245e8-5465-4f2c-a7e9-f7091e6ad2dd"), - 
ConditionId(id="1c58d548-5ea6-4037-b4e6-589a98a911af", rule_group_id=None), - ] - node_input_ids_by_name = { - "5b5a07bf-0b78-48bb-8fdc-ae0c0c5266b7.field": UUID("839e9337-765e-4a3a-a8ca-74acb91014a2"), - "5b5a07bf-0b78-48bb-8fdc-ae0c0c5266b7.value": UUID("754fa31d-451d-4669-977b-8d5f23a4b542"), - } - port_displays = { - ConditionalNode2.Ports.branch_1: PortDisplayOverrides(id=UUID("f40a1638-e846-4aa1-9dda-0d5fe439697a")), - ConditionalNode2.Ports.branch_2: PortDisplayOverrides(id=UUID("a58748f1-8ae4-470f-9e3e-80478f567f5d")), - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=4515, y=885), width=None, height=None) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/error_message.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/error_message.py deleted file mode 100644 index 2e486272bd..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/error_message.py +++ /dev/null @@ -1,22 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from .....nodes.parse_function_call.nodes.error_message import ErrorMessage - - -class ErrorMessageDisplay(BaseTemplatingNodeDisplay[ErrorMessage]): - label = "Error Message" - node_id = UUID("d5c71b95-8f26-46b9-b0ba-25ba723b7886") - target_handle_id = UUID("db91f735-b082-4762-8125-63880c5e380d") - node_input_ids_by_name = { - "inputs.invalid_function_name": UUID("f34229bf-1912-451c-8d4e-cf8d061ddf32"), - "template": UUID("438b35a2-5163-4978-af95-fb85e2023035"), - } - output_display = { - ErrorMessage.Outputs.result: NodeOutputDisplay(id=UUID("b777ddb5-87c7-4939-8cc8-7ee31d322d5a"), name="result") - } - port_displays = {ErrorMessage.Ports.default: 
PortDisplayOverrides(id=UUID("e985f530-f306-4ab9-9b8b-ca54c64a4b81"))} - display_data = NodeDisplayData(position=NodeDisplayPosition(x=5355, y=1275), width=None, height=None) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/error_node.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/error_node.py deleted file mode 100644 index 6799e60779..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/error_node.py +++ /dev/null @@ -1,16 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseErrorNodeDisplay - -from .....nodes.parse_function_call.nodes.error_node import ErrorNode1 - - -class ErrorNode1Display(BaseErrorNodeDisplay[ErrorNode1]): - name = "error-node" - node_id = UUID("2f1c13c1-7ffe-48a2-b1f5-f99d711c3880") - label = "Error Node" - error_output_id = UUID("d1c9c405-b96c-4648-bf87-52d7e3898fac") - target_handle_id = UUID("4f5ecc8a-bf35-4ce3-b4ec-fc7f675da3d9") - node_input_ids_by_name = {"error_source_input_id": UUID("a3c25a49-cb6e-4d92-abe0-30fcc257d81a")} - display_data = NodeDisplayData(position=NodeDisplayPosition(x=6000, y=1380), width=None, height=None) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/is_valid_function_name.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/is_valid_function_name.py deleted file mode 100644 index b51c195144..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/is_valid_function_name.py +++ /dev/null @@ -1,31 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseCodeExecutionNodeDisplay -from vellum_ee.workflows.display.nodes.types import 
NodeOutputDisplay, PortDisplayOverrides - -from .....nodes.parse_function_call.nodes.is_valid_function_name import IsValidFunctionName - - -class IsValidFunctionNameDisplay(BaseCodeExecutionNodeDisplay[IsValidFunctionName]): - label = "Is Valid Function Name" - node_id = UUID("644b84f0-2e9f-4ca3-b91d-0f63d431ce2f") - target_handle_id = UUID("d41fa741-3e2f-4d6b-8608-d6ab6926e01f") - output_id = UUID("377c6fd6-73de-4cf6-8437-f12738f9b077") - log_output_id = UUID("dd1ea277-eaec-4c59-b5ed-23017940b5f5") - node_input_ids_by_name = { - "code_inputs.function_name": UUID("cc496e6f-4983-4db0-96fd-36e6976bcd8d"), - "code_inputs.allowed_function_names": UUID("131d9183-afcf-446b-b1f6-9979cad15858"), - "code": UUID("d498be92-0470-4c47-b9d4-5087e54a8116"), - "runtime": UUID("a8098cf8-d647-4495-9f3c-24b43fabf979"), - } - output_display = { - IsValidFunctionName.Outputs.result: NodeOutputDisplay( - id=UUID("377c6fd6-73de-4cf6-8437-f12738f9b077"), name="result" - ), - IsValidFunctionName.Outputs.log: NodeOutputDisplay(id=UUID("dd1ea277-eaec-4c59-b5ed-23017940b5f5"), name="log"), - } - port_displays = { - IsValidFunctionName.Ports.default: PortDisplayOverrides(id=UUID("0d1a598d-d40f-41fa-8025-c81e188647b4")) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=4050, y=960), width=None, height=None) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/merge_node.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/merge_node.py deleted file mode 100644 index abca45d6f4..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/merge_node.py +++ /dev/null @@ -1,20 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseMergeNodeDisplay -from vellum_ee.workflows.display.nodes.types import PortDisplayOverrides - -from 
.....nodes.parse_function_call.nodes.merge_node import MergeNode - - -class MergeNodeDisplay(BaseMergeNodeDisplay[MergeNode]): - label = "Merge Node" - node_id = UUID("bafc16db-a476-436d-9dde-fb74e8c294a0") - target_handle_ids = [ - UUID("d3539134-8ba5-461f-b96c-8dda2b2bcc18"), - UUID("9dd8f0b4-dc15-456b-af35-68ca7c0ab1d7"), - UUID("a783d560-204c-45de-b8eb-ede63936ddaf"), - UUID("0d6fd7aa-1ca0-4246-8920-e1f00272ca78"), - ] - port_displays = {MergeNode.Ports.default: PortDisplayOverrides(id=UUID("c3beb021-47d9-44b1-bd2b-81b11168e76d"))} - display_data = NodeDisplayData(position=NodeDisplayPosition(x=3480, y=1005), width=None, height=None) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/name.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/name.py deleted file mode 100644 index a0ffd3aa81..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/name.py +++ /dev/null @@ -1,19 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from .....nodes.parse_function_call.nodes.name import Name - - -class NameDisplay(BaseFinalOutputNodeDisplay[Name]): - label = "Name" - node_id = UUID("fa12bb7f-c8e4-4d5d-9916-c348109c0ffb") - target_handle_id = UUID("a435166f-a65a-40a4-95e7-ad7def5491d2") - output_name = "function-name" - node_input_ids_by_name = {"node_input": UUID("d2ecd03e-b443-482c-838a-2dc345dbf8ed")} - output_display = { - Name.Outputs.value: NodeOutputDisplay(id=UUID("680f2d8d-b03a-43b9-9d77-626044e03227"), name="value") - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=5520, y=-135), width=None, height=None) diff --git 
a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/parse_function_args.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/parse_function_args.py deleted file mode 100644 index 3ed5ac398c..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/parse_function_args.py +++ /dev/null @@ -1,26 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from .....nodes.parse_function_call.nodes.parse_function_args import ParseFunctionArgs - - -class ParseFunctionArgsDisplay(BaseTemplatingNodeDisplay[ParseFunctionArgs]): - label = "Parse Function Args" - node_id = UUID("8578e405-2e94-4603-88b2-f6420af5135b") - target_handle_id = UUID("8d4f5e8c-3124-4049-89d5-1261f6838400") - node_input_ids_by_name = { - "inputs.function_call": UUID("5b3a91bc-4375-45a1-9fa7-c61f52484678"), - "template": UUID("09cae3d9-6444-4352-9d04-532a0b703303"), - } - output_display = { - ParseFunctionArgs.Outputs.result: NodeOutputDisplay( - id=UUID("aa9c4408-4251-4a48-9b8f-882912966dec"), name="result" - ) - } - port_displays = { - ParseFunctionArgs.Ports.default: PortDisplayOverrides(id=UUID("553960a1-8c83-4da5-9484-cf96c625cd9d")) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=2925, y=1035), width=None, height=None) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/parse_function_call.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/parse_function_call.py deleted file mode 100644 index a41d2b49d1..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/parse_function_call.py +++ /dev/null @@ -1,26 +0,0 @@ -from uuid import UUID 
- -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from .....nodes.parse_function_call.nodes.parse_function_call import ParseFunctionCall1 - - -class ParseFunctionCall1Display(BaseTemplatingNodeDisplay[ParseFunctionCall1]): - label = "Parse Function Call" - node_id = UUID("40f2e0f6-9d30-4c94-a09d-d88be7c871d6") - target_handle_id = UUID("f3ae9773-9729-4ae9-a608-3cc2a5a0ade8") - node_input_ids_by_name = { - "inputs.output": UUID("69e35fe1-9920-450c-b524-757d5900f810"), - "template": UUID("ddf266be-2166-404c-986f-631cffa05fbc"), - } - output_display = { - ParseFunctionCall1.Outputs.result: NodeOutputDisplay( - id=UUID("a165e831-0c3d-414e-85a0-f17d53166759"), name="result" - ) - } - port_displays = { - ParseFunctionCall1.Ports.default: PortDisplayOverrides(id=UUID("4a5084ce-5c40-4af7-917f-e412e86f21e6")) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=1725, y=495), width=None, height=None) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/parse_function_name.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/parse_function_name.py deleted file mode 100644 index 17cf4948fd..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/parse_function_name.py +++ /dev/null @@ -1,26 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from .....nodes.parse_function_call.nodes.parse_function_name import ParseFunctionName - - -class ParseFunctionNameDisplay(BaseTemplatingNodeDisplay[ParseFunctionName]): - label 
= "Parse Function Name" - node_id = UUID("14d5c7b2-e434-46a6-8b97-47450ea77dcd") - target_handle_id = UUID("d2761e82-367e-4a10-b886-21413e52d9d5") - node_input_ids_by_name = { - "inputs.function_call": UUID("d15756f7-9021-469f-8e91-04f3008feaa3"), - "template": UUID("4239a502-567a-4224-8f37-354883502192"), - } - output_display = { - ParseFunctionName.Outputs.result: NodeOutputDisplay( - id=UUID("063d66ff-547c-49c4-b62a-c0ca172f63c7"), name="result" - ) - } - port_displays = { - ParseFunctionName.Ports.default: PortDisplayOverrides(id=UUID("5e7848c8-057e-404d-b00a-a4e198904bc2")) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=2955, y=630), width=None, height=None) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/parse_tool_id.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/parse_tool_id.py deleted file mode 100644 index 6924ca70d4..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/parse_tool_id.py +++ /dev/null @@ -1,22 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from .....nodes.parse_function_call.nodes.parse_tool_id import ParseToolID - - -class ParseToolIDDisplay(BaseTemplatingNodeDisplay[ParseToolID]): - label = "Parse Tool ID" - node_id = UUID("189de97f-9ae4-4af5-a816-0b54dbef616b") - target_handle_id = UUID("0c3ad9b3-23cc-412a-a73e-b16622ab5cab") - node_input_ids_by_name = { - "inputs.function_call": UUID("b67b9af7-df91-4002-b501-aa8c7389f565"), - "template": UUID("d8d8f969-ad9a-443c-9b15-0e53db265fa7"), - } - output_display = { - ParseToolID.Outputs.result: NodeOutputDisplay(id=UUID("6df79e21-6e5d-45db-84e7-b16ec91f2302"), name="result") - } - port_displays = 
{ParseToolID.Ports.default: PortDisplayOverrides(id=UUID("3919adb3-bb78-4668-9dc1-22fca2c1e945"))} - display_data = NodeDisplayData(position=NodeDisplayPosition(x=2925, y=1500), width=None, height=None) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/tool_id.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/tool_id.py deleted file mode 100644 index 67568134a9..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/nodes/tool_id.py +++ /dev/null @@ -1,19 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from .....nodes.parse_function_call.nodes.tool_id import ToolID - - -class ToolIDDisplay(BaseFinalOutputNodeDisplay[ToolID]): - label = "Tool ID" - node_id = UUID("b20229cd-8fd7-4cab-8e8f-d3fbaa8f6712") - target_handle_id = UUID("260b40f0-9fbe-4c84-84f3-aadc01f122f8") - output_name = "tool-id" - node_input_ids_by_name = {"node_input": UUID("aa57a708-d7e7-40f9-853d-fc1d66ac62b3")} - output_display = { - ToolID.Outputs.value: NodeOutputDisplay(id=UUID("764c26a4-b0c4-4a52-9e19-96e651eccbd3"), name="value") - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=5505, y=825), width=None, height=None) diff --git a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/workflow.py b/examples/workflows/function_calling_demo/display/nodes/parse_function_call/workflow.py deleted file mode 100644 index b384cf9150..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/parse_function_call/workflow.py +++ /dev/null @@ -1,81 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.base import ( - EdgeDisplay, - EntrypointDisplay, - WorkflowDisplayData, - WorkflowDisplayDataViewport, - 
WorkflowMetaDisplay, - WorkflowOutputDisplay, -) -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.workflows import BaseWorkflowDisplay - -from ....nodes.parse_function_call.nodes.allowed_function_names import AllowedFunctionNames -from ....nodes.parse_function_call.nodes.args import Args -from ....nodes.parse_function_call.nodes.conditional_node_1 import ConditionalNode2 -from ....nodes.parse_function_call.nodes.error_message import ErrorMessage -from ....nodes.parse_function_call.nodes.error_node import ErrorNode1 -from ....nodes.parse_function_call.nodes.is_valid_function_name import IsValidFunctionName -from ....nodes.parse_function_call.nodes.merge_node import MergeNode -from ....nodes.parse_function_call.nodes.name import Name -from ....nodes.parse_function_call.nodes.parse_function_args import ParseFunctionArgs -from ....nodes.parse_function_call.nodes.parse_function_call import ParseFunctionCall1 -from ....nodes.parse_function_call.nodes.parse_function_name import ParseFunctionName -from ....nodes.parse_function_call.nodes.parse_tool_id import ParseToolID -from ....nodes.parse_function_call.nodes.tool_id import ToolID -from ....nodes.parse_function_call.workflow import ParseFunctionCallWorkflow - - -class ParseFunctionCallWorkflowDisplay(BaseWorkflowDisplay[ParseFunctionCallWorkflow]): - workflow_display = WorkflowMetaDisplay( - entrypoint_node_id=UUID("b0a312c5-c07e-4ba8-b434-575646ca8320"), - entrypoint_node_source_handle_id=UUID("be47c6ac-495c-498d-b318-f479c02c687a"), - entrypoint_node_display=NodeDisplayData(position=NodeDisplayPosition(x=1170, y=585), width=None, height=None), - display_data=WorkflowDisplayData( - viewport=WorkflowDisplayDataViewport(x=-76.39160156249994, y=147.875, zoom=0.25) - ), - ) - inputs_display = {} - entrypoint_displays = { - ParseFunctionCall1: EntrypointDisplay( - id=UUID("b0a312c5-c07e-4ba8-b434-575646ca8320"), - 
edge_display=EdgeDisplay(id=UUID("d4104764-f0ce-4c79-86d0-9dbca9dc3e9e")), - ) - } - edge_displays = { - (ParseFunctionCall1.Ports.default, ParseFunctionName): EdgeDisplay( - id=UUID("3c8a7288-2816-413b-a835-e9576dceb3c6") - ), - (ParseFunctionCall1.Ports.default, ParseFunctionArgs): EdgeDisplay( - id=UUID("e1f8cc9d-8991-497a-ae85-c9b0addf79f7") - ), - (ParseFunctionName.Ports.default, MergeNode): EdgeDisplay(id=UUID("21b5c3b3-6b17-49d2-8042-d9ed0b5fc2b6")), - (ParseFunctionArgs.Ports.default, MergeNode): EdgeDisplay(id=UUID("73fd983e-ec53-4b8a-bb6e-bd4284450a12")), - (MergeNode.Ports.default, IsValidFunctionName): EdgeDisplay(id=UUID("269aed35-de3f-4d49-af39-534c7e69b404")), - (ParseFunctionCall1.Ports.default, AllowedFunctionNames): EdgeDisplay( - id=UUID("be74615e-f7e9-4467-a9de-b612975a6f9d") - ), - (IsValidFunctionName.Ports.default, ConditionalNode2): EdgeDisplay( - id=UUID("fc6f2bb0-3287-4a37-895f-840f7eacb31c") - ), - (ConditionalNode2.Ports.branch_1, Name): EdgeDisplay(id=UUID("55c22812-f470-4ea9-8509-89d398df8ce7")), - (ConditionalNode2.Ports.branch_2, ErrorMessage): EdgeDisplay(id=UUID("3c890c21-4e40-4b11-8e68-8663609bc622")), - (ErrorMessage.Ports.default, ErrorNode1): EdgeDisplay(id=UUID("5eecd64f-68e4-4695-8977-2cd78874a129")), - (ConditionalNode2.Ports.branch_1, Args): EdgeDisplay(id=UUID("57778d49-b44c-4c62-9198-c848b20c4d4f")), - (ParseFunctionCall1.Ports.default, ParseToolID): EdgeDisplay(id=UUID("be6cc434-ab29-4cfb-aef3-6815bee41acf")), - (ParseToolID.Ports.default, MergeNode): EdgeDisplay(id=UUID("0074731f-548f-4ba0-bdc8-a024c9aef5fb")), - (AllowedFunctionNames.Ports.default, MergeNode): EdgeDisplay(id=UUID("37a1b5b8-9c33-47fb-a3a3-4c5eae1247a8")), - (ConditionalNode2.Ports.branch_1, ToolID): EdgeDisplay(id=UUID("51ab1ee1-57d6-4d18-bdb0-c3dc85e40a71")), - } - output_displays = { - ParseFunctionCallWorkflow.Outputs.function_args: WorkflowOutputDisplay( - id=UUID("d520f0a1-c28f-4007-acf9-2758871f2250"), name="function-args" - ), - 
ParseFunctionCallWorkflow.Outputs.function_name: WorkflowOutputDisplay( - id=UUID("680f2d8d-b03a-43b9-9d77-626044e03227"), name="function-name" - ), - ParseFunctionCallWorkflow.Outputs.tool_id: WorkflowOutputDisplay( - id=UUID("764c26a4-b0c4-4a52-9e19-96e651eccbd3"), name="tool-id" - ), - } diff --git a/examples/workflows/function_calling_demo/display/nodes/prompt_node.py b/examples/workflows/function_calling_demo/display/nodes/prompt_node.py deleted file mode 100644 index 50714e240b..0000000000 --- a/examples/workflows/function_calling_demo/display/nodes/prompt_node.py +++ /dev/null @@ -1,24 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.prompt_node import PromptNode - - -class PromptNodeDisplay(BaseInlinePromptNodeDisplay[PromptNode]): - label = "Prompt Node" - node_id = UUID("2f1fa0d5-ef23-4738-9fd5-216407c18fb1") - output_id = UUID("76d8986c-3248-4a84-9780-97bd56b57cd7") - array_output_id = UUID("044296d4-bfa5-43c5-9055-0d1d440cc05e") - target_handle_id = UUID("b839c700-cbc7-442a-936a-a245a692df65") - node_input_ids_by_name = {"prompt_inputs.chat_history": UUID("1cf292a8-a99b-460b-8348-392e4b3e8dee")} - attribute_ids_by_name = {"ml_model": UUID("577cb543-4a7f-4f8a-8fce-56478c698511")} - output_display = { - PromptNode.Outputs.text: NodeOutputDisplay(id=UUID("76d8986c-3248-4a84-9780-97bd56b57cd7"), name="text"), - PromptNode.Outputs.results: NodeOutputDisplay(id=UUID("044296d4-bfa5-43c5-9055-0d1d440cc05e"), name="results"), - PromptNode.Outputs.json: NodeOutputDisplay(id=UUID("fdddc12a-b7ad-412e-b870-000567ec88a0"), name="json"), - } - port_displays = {PromptNode.Ports.default: PortDisplayOverrides(id=UUID("1aeebd8b-69c6-4051-af0e-01e628d81e3c"))} - display_data = 
NodeDisplayData(position=NodeDisplayPosition(x=690, y=271.3250297289478), width=480, height=208) diff --git a/examples/workflows/function_calling_demo/display/workflow.py b/examples/workflows/function_calling_demo/display/workflow.py deleted file mode 100644 index f4ed604f58..0000000000 --- a/examples/workflows/function_calling_demo/display/workflow.py +++ /dev/null @@ -1,75 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.base import ( - EdgeDisplay, - EntrypointDisplay, - WorkflowDisplayData, - WorkflowDisplayDataViewport, - WorkflowInputsDisplay, - WorkflowMetaDisplay, - WorkflowOutputDisplay, -) -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.workflows import BaseWorkflowDisplay - -from ..inputs import Inputs -from ..nodes.accumulate_chat_history import AccumulateChatHistory -from ..nodes.conditional_node import ConditionalNode -from ..nodes.conditional_node_10 import ConditionalNode10 -from ..nodes.error_node import ErrorNode -from ..nodes.final_accumulation_of_chat_history import FinalAccumulationOfChatHistory -from ..nodes.final_output import FinalOutput -from ..nodes.get_current_weather import GetCurrentWeather -from ..nodes.output_type import OutputType -from ..nodes.parse_function_call import ParseFunctionCall -from ..nodes.prompt_node import PromptNode -from ..workflow import FunctionCallingDemoWorkflow - - -class FunctionCallingDemoWorkflowDisplay(BaseWorkflowDisplay[FunctionCallingDemoWorkflow]): - workflow_display = WorkflowMetaDisplay( - entrypoint_node_id=UUID("8d9848b6-37f1-433d-9eb9-d85788007e8b"), - entrypoint_node_source_handle_id=UUID("39eb234b-7594-4855-9cb8-dfcaff48963c"), - entrypoint_node_display=NodeDisplayData(position=NodeDisplayPosition(x=165, y=315), width=124, height=48), - display_data=WorkflowDisplayData( - viewport=WorkflowDisplayDataViewport(x=-730.0959027261842, y=128.0011183508181, zoom=0.49585442352220244) - ), - ) - inputs_display = { - 
Inputs.chat_history: WorkflowInputsDisplay(id=UUID("b120c597-1e61-4779-b8d1-34676f9ecabc"), name="chat_history") - } - entrypoint_displays = { - PromptNode: EntrypointDisplay( - id=UUID("8d9848b6-37f1-433d-9eb9-d85788007e8b"), - edge_display=EdgeDisplay(id=UUID("894bb47f-c560-463a-8516-1f1dd9a45463")), - ) - } - edge_displays = { - (ParseFunctionCall.Ports.default, ConditionalNode): EdgeDisplay( - id=UUID("413c943d-3aa0-459e-971d-ff89a286108f") - ), - (ConditionalNode.Ports.branch_1, GetCurrentWeather): EdgeDisplay( - id=UUID("ce18995e-2f1e-481e-bcf0-bfa1e9d6db8e") - ), - (GetCurrentWeather.Ports.default, AccumulateChatHistory): EdgeDisplay( - id=UUID("1efde279-9cd7-4917-b508-bb57768bd5e6") - ), - (AccumulateChatHistory.Ports.default, PromptNode): EdgeDisplay(id=UUID("7fd1861c-0b94-4e14-a724-88a5a67b8b02")), - (PromptNode.Ports.default, OutputType): EdgeDisplay(id=UUID("76dfcf8c-ccd0-4142-83de-f63662af6474")), - (OutputType.Ports.default, ConditionalNode10): EdgeDisplay(id=UUID("20afd4d8-bf0c-402b-a5ec-d231ed819e92")), - (ConditionalNode10.Ports.branch_1, ParseFunctionCall): EdgeDisplay( - id=UUID("b7cade05-cec8-459f-8113-e334a8f761f5") - ), - (ConditionalNode.Ports.branch_2, ErrorNode): EdgeDisplay(id=UUID("b5adfbdd-38f2-4377-8eee-d539f7747c26")), - (ConditionalNode10.Ports.branch_2, FinalAccumulationOfChatHistory): EdgeDisplay( - id=UUID("588faf16-b4b9-45f4-bdd1-b18926f6c5e5") - ), - (FinalAccumulationOfChatHistory.Ports.default, FinalOutput): EdgeDisplay( - id=UUID("988f4094-ec68-4123-b0ca-2990e973dce9") - ), - } - output_displays = { - FunctionCallingDemoWorkflow.Outputs.final_output: WorkflowOutputDisplay( - id=UUID("e869f551-b02c-465f-90b3-ad2021b3c618"), name="final-output" - ) - } diff --git a/examples/workflows/function_calling_demo/inputs.py b/examples/workflows/function_calling_demo/inputs.py deleted file mode 100644 index bb02da8672..0000000000 --- a/examples/workflows/function_calling_demo/inputs.py +++ /dev/null @@ -1,8 +0,0 @@ -from typing import 
List, Optional - -from vellum import ChatMessage -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - chat_history: Optional[List[ChatMessage]] = None diff --git a/examples/workflows/function_calling_demo/nodes/__init__.py b/examples/workflows/function_calling_demo/nodes/__init__.py deleted file mode 100644 index 3b2e770923..0000000000 --- a/examples/workflows/function_calling_demo/nodes/__init__.py +++ /dev/null @@ -1,23 +0,0 @@ -from .accumulate_chat_history import AccumulateChatHistory -from .conditional_node import ConditionalNode -from .conditional_node_10 import ConditionalNode10 -from .error_node import ErrorNode -from .final_accumulation_of_chat_history import FinalAccumulationOfChatHistory -from .final_output import FinalOutput -from .get_current_weather import GetCurrentWeather -from .output_type import OutputType -from .parse_function_call import ParseFunctionCall -from .prompt_node import PromptNode - -__all__ = [ - "AccumulateChatHistory", - "ConditionalNode", - "ConditionalNode10", - "ErrorNode", - "FinalAccumulationOfChatHistory", - "FinalOutput", - "GetCurrentWeather", - "OutputType", - "ParseFunctionCall", - "PromptNode", -] diff --git a/examples/workflows/function_calling_demo/nodes/accumulate_chat_history/__init__.py b/examples/workflows/function_calling_demo/nodes/accumulate_chat_history/__init__.py deleted file mode 100644 index d7c1823eac..0000000000 --- a/examples/workflows/function_calling_demo/nodes/accumulate_chat_history/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -from typing import List - -from vellum import ChatMessage -from vellum.workflows.nodes.displayable import CodeExecutionNode -from vellum.workflows.references import LazyReference -from vellum.workflows.state import BaseState - -from ...inputs import Inputs -from ..get_current_weather import GetCurrentWeather -from ..parse_function_call import ParseFunctionCall - - -class AccumulateChatHistory(CodeExecutionNode[BaseState, List[ChatMessage]]): - 
filepath = "./script.py" - code_inputs = { - "tool_id": ParseFunctionCall.Outputs.tool_id, - "function_result": GetCurrentWeather.Outputs.result, - "assistant_message": LazyReference("PromptNode.Outputs.results"), - "current_chat_history": Inputs.chat_history, - } - runtime = "PYTHON_3_11_6" - packages = [] diff --git a/examples/workflows/function_calling_demo/nodes/accumulate_chat_history/script.py b/examples/workflows/function_calling_demo/nodes/accumulate_chat_history/script.py deleted file mode 100644 index 99efc5fa88..0000000000 --- a/examples/workflows/function_calling_demo/nodes/accumulate_chat_history/script.py +++ /dev/null @@ -1,25 +0,0 @@ -import json - - -def main( - tool_id, - function_result, - assistant_message, - current_chat_history, -) -> int: - return [ - *current_chat_history, - { - "role": "ASSISTANT", - "content": assistant_message[0], - }, - { - "role": "FUNCTION", - "content": { - "type": "STRING", - "value": json.dumps(function_result) - }, - "source": tool_id - } - ] - \ No newline at end of file diff --git a/examples/workflows/function_calling_demo/nodes/conditional_node.py b/examples/workflows/function_calling_demo/nodes/conditional_node.py deleted file mode 100644 index d520b90818..0000000000 --- a/examples/workflows/function_calling_demo/nodes/conditional_node.py +++ /dev/null @@ -1,10 +0,0 @@ -from vellum.workflows.nodes.displayable import ConditionalNode as BaseConditionalNode -from vellum.workflows.ports import Port - -from .parse_function_call import ParseFunctionCall - - -class ConditionalNode(BaseConditionalNode): - class Ports(BaseConditionalNode.Ports): - branch_1 = Port.on_if(ParseFunctionCall.Outputs.function_name.equals("get_current_weather")) - branch_2 = Port.on_else() diff --git a/examples/workflows/function_calling_demo/nodes/conditional_node_10.py b/examples/workflows/function_calling_demo/nodes/conditional_node_10.py deleted file mode 100644 index 5622aba848..0000000000 --- 
a/examples/workflows/function_calling_demo/nodes/conditional_node_10.py +++ /dev/null @@ -1,9 +0,0 @@ -from vellum.workflows.nodes.displayable import ConditionalNode -from vellum.workflows.ports import Port -from vellum.workflows.references import LazyReference - - -class ConditionalNode10(ConditionalNode): - class Ports(ConditionalNode.Ports): - branch_1 = Port.on_if(LazyReference("OutputType.Outputs.result").equals("FUNCTION_CALL")) - branch_2 = Port.on_else() diff --git a/examples/workflows/function_calling_demo/nodes/error_node.py b/examples/workflows/function_calling_demo/nodes/error_node.py deleted file mode 100644 index a5be6ad1e9..0000000000 --- a/examples/workflows/function_calling_demo/nodes/error_node.py +++ /dev/null @@ -1,6 +0,0 @@ -from vellum import VellumError -from vellum.workflows.nodes.displayable import ErrorNode as BaseErrorNode - - -class ErrorNode(BaseErrorNode): - error = VellumError(message="Unrecognized function call", code="USER_DEFINED_ERROR") diff --git a/examples/workflows/function_calling_demo/nodes/final_accumulation_of_chat_history/__init__.py b/examples/workflows/function_calling_demo/nodes/final_accumulation_of_chat_history/__init__.py deleted file mode 100644 index a521526d62..0000000000 --- a/examples/workflows/function_calling_demo/nodes/final_accumulation_of_chat_history/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from typing import List - -from vellum import ChatMessage -from vellum.workflows.nodes.displayable import CodeExecutionNode -from vellum.workflows.state import BaseState - -from ...inputs import Inputs -from ..accumulate_chat_history import AccumulateChatHistory -from ..prompt_node import PromptNode - - -class FinalAccumulationOfChatHistory(CodeExecutionNode[BaseState, List[ChatMessage]]): - filepath = "./script.py" - code_inputs = { - "current_chat_history": AccumulateChatHistory.Outputs.result.coalesce(Inputs.chat_history), - "assistant_message": PromptNode.Outputs.results, - } - runtime = "PYTHON_3_11_6" - 
packages = [] diff --git a/examples/workflows/function_calling_demo/nodes/final_accumulation_of_chat_history/script.py b/examples/workflows/function_calling_demo/nodes/final_accumulation_of_chat_history/script.py deleted file mode 100644 index f6c4afc630..0000000000 --- a/examples/workflows/function_calling_demo/nodes/final_accumulation_of_chat_history/script.py +++ /dev/null @@ -1,12 +0,0 @@ -def main( - current_chat_history, - assistant_message, -) -> int: - return [ - *current_chat_history, - { - "role": "ASSISTANT", - "content": assistant_message[0], - }, - ] - \ No newline at end of file diff --git a/examples/workflows/function_calling_demo/nodes/final_output.py b/examples/workflows/function_calling_demo/nodes/final_output.py deleted file mode 100644 index beda0d5a49..0000000000 --- a/examples/workflows/function_calling_demo/nodes/final_output.py +++ /dev/null @@ -1,12 +0,0 @@ -from typing import List - -from vellum import ChatMessage -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .final_accumulation_of_chat_history import FinalAccumulationOfChatHistory - - -class FinalOutput(FinalOutputNode[BaseState, List[ChatMessage]]): - class Outputs(FinalOutputNode.Outputs): - value = FinalAccumulationOfChatHistory.Outputs.result diff --git a/examples/workflows/function_calling_demo/nodes/get_current_weather/__init__.py b/examples/workflows/function_calling_demo/nodes/get_current_weather/__init__.py deleted file mode 100644 index 8801bb31ba..0000000000 --- a/examples/workflows/function_calling_demo/nodes/get_current_weather/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -from typing import Any - -from vellum.client.types import CodeExecutionPackage -from vellum.workflows.nodes.displayable import CodeExecutionNode -from vellum.workflows.references import VellumSecretReference -from vellum.workflows.state import BaseState - -from ..parse_function_call import ParseFunctionCall - - -class 
GetCurrentWeather(CodeExecutionNode[BaseState, Any]): - filepath = "./script.py" - code_inputs = { - "kwargs": ParseFunctionCall.Outputs.function_args, - "gmaps_api_key": VellumSecretReference("GOOGLE_GEOCODING_API_KEY"), - "openweather_api_key": VellumSecretReference("OPEN_WEATHER_API_KEY"), - } - runtime = "PYTHON_3_11_6" - packages = [ - CodeExecutionPackage(name="googlemaps", version="4.10.0"), - CodeExecutionPackage(name="requests", version="2.32.3"), - ] diff --git a/examples/workflows/function_calling_demo/nodes/get_current_weather/script.py b/examples/workflows/function_calling_demo/nodes/get_current_weather/script.py deleted file mode 100644 index c1ce9e2511..0000000000 --- a/examples/workflows/function_calling_demo/nodes/get_current_weather/script.py +++ /dev/null @@ -1,22 +0,0 @@ -import googlemaps -import requests - -def main(kwargs, gmaps_api_key, openweather_api_key): - location = kwargs["location"] - - gmaps = googlemaps.Client(key=gmaps_api_key) - - # Geocoding an address - geocode_result = gmaps.geocode(location) - print(geocode_result) - - coordinates = geocode_result[0]["geometry"]["location"] - lat = coordinates["lat"] - lon = coordinates["lng"] - - url = f"https://api.openweathermap.org/data/2.5/weather?lat={lat}&lon={lon}&appid=7b9663b943995d520fdfe643d6838425" - response = requests.get(url) - data = response.json() - - return data - \ No newline at end of file diff --git a/examples/workflows/function_calling_demo/nodes/output_type.py b/examples/workflows/function_calling_demo/nodes/output_type.py deleted file mode 100644 index 1e6590f5c2..0000000000 --- a/examples/workflows/function_calling_demo/nodes/output_type.py +++ /dev/null @@ -1,11 +0,0 @@ -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.state import BaseState - -from .prompt_node import PromptNode - - -class OutputType(TemplatingNode[BaseState, str]): - template = """{{ output[0].type }}""" - inputs = { - "output": PromptNode.Outputs.results, - } 
diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/__init__.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/__init__.py deleted file mode 100644 index f7c7ed51a3..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -from vellum.workflows.nodes.displayable import InlineSubworkflowNode - -from .workflow import ParseFunctionCallWorkflow - - -class ParseFunctionCall(InlineSubworkflowNode): - subworkflow = ParseFunctionCallWorkflow diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/__init__.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/__init__.py deleted file mode 100644 index fdd94df181..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -from .allowed_function_names import AllowedFunctionNames -from .args import Args -from .conditional_node_1 import ConditionalNode2 -from .error_message import ErrorMessage -from .error_node import ErrorNode1 -from .is_valid_function_name import IsValidFunctionName -from .merge_node import MergeNode -from .name import Name -from .parse_function_args import ParseFunctionArgs -from .parse_function_call import ParseFunctionCall1 -from .parse_function_name import ParseFunctionName -from .parse_tool_id import ParseToolID -from .tool_id import ToolID - -__all__ = [ - "AllowedFunctionNames", - "Args", - "ConditionalNode2", - "ErrorMessage", - "ErrorNode1", - "IsValidFunctionName", - "MergeNode", - "Name", - "ParseFunctionArgs", - "ParseFunctionCall1", - "ParseFunctionName", - "ParseToolID", - "ToolID", -] diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/allowed_function_names.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/allowed_function_names.py deleted file mode 100644 index 05d150eb91..0000000000 --- 
a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/allowed_function_names.py +++ /dev/null @@ -1,8 +0,0 @@ -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.state import BaseState -from vellum.workflows.types.core import Json - - -class AllowedFunctionNames(TemplatingNode[BaseState, Json]): - template = """[\"get_current_weather\"]""" - inputs = {} diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/args.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/args.py deleted file mode 100644 index 1abe35c10b..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/args.py +++ /dev/null @@ -1,11 +0,0 @@ -from typing import Any - -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .parse_function_args import ParseFunctionArgs - - -class Args(FinalOutputNode[BaseState, Any]): - class Outputs(FinalOutputNode.Outputs): - value = ParseFunctionArgs.Outputs.result diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/conditional_node_1.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/conditional_node_1.py deleted file mode 100644 index 6046e91fdf..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/conditional_node_1.py +++ /dev/null @@ -1,10 +0,0 @@ -from vellum.workflows.nodes.displayable import ConditionalNode -from vellum.workflows.ports import Port - -from .is_valid_function_name import IsValidFunctionName - - -class ConditionalNode2(ConditionalNode): - class Ports(ConditionalNode.Ports): - branch_1 = Port.on_if(IsValidFunctionName.Outputs.result.equals("True")) - branch_2 = Port.on_else() diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/error_message.py 
b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/error_message.py deleted file mode 100644 index 595aa9736e..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/error_message.py +++ /dev/null @@ -1,11 +0,0 @@ -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.state import BaseState - -from .parse_function_name import ParseFunctionName - - -class ErrorMessage(TemplatingNode[BaseState, str]): - template = """Invalid function name {{ invalid_function_name }}.""" - inputs = { - "invalid_function_name": ParseFunctionName.Outputs.result, - } diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/error_node.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/error_node.py deleted file mode 100644 index 1b9d3cdaab..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/error_node.py +++ /dev/null @@ -1,7 +0,0 @@ -from vellum.workflows.nodes.displayable import ErrorNode - -from .error_message import ErrorMessage - - -class ErrorNode1(ErrorNode): - error = ErrorMessage.Outputs.result diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/is_valid_function_name/__init__.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/is_valid_function_name/__init__.py deleted file mode 100644 index 32c7d0fdac..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/is_valid_function_name/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from vellum.workflows.nodes.displayable import CodeExecutionNode -from vellum.workflows.state import BaseState - -from ..allowed_function_names import AllowedFunctionNames -from ..parse_function_name import ParseFunctionName - - -class IsValidFunctionName(CodeExecutionNode[BaseState, str]): - filepath = "./script.py" - code_inputs = { - "function_name": 
ParseFunctionName.Outputs.result, - "allowed_function_names": AllowedFunctionNames.Outputs.result, - } - runtime = "PYTHON_3_11_6" - packages = [] diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/is_valid_function_name/script.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/is_valid_function_name/script.py deleted file mode 100644 index 06b65bfca1..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/is_valid_function_name/script.py +++ /dev/null @@ -1,6 +0,0 @@ -def main( - function_name, - allowed_function_names, -) -> str: - return "True" if function_name in allowed_function_names else "False" - \ No newline at end of file diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/merge_node.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/merge_node.py deleted file mode 100644 index 09f1bb822a..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/merge_node.py +++ /dev/null @@ -1,7 +0,0 @@ -from vellum.workflows.nodes.displayable import MergeNode as BaseMergeNode -from vellum.workflows.types import MergeBehavior - - -class MergeNode(BaseMergeNode): - class Trigger(BaseMergeNode.Trigger): - merge_behavior = MergeBehavior.AWAIT_ALL diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/name.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/name.py deleted file mode 100644 index eb8bff044c..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/name.py +++ /dev/null @@ -1,9 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .parse_function_name import ParseFunctionName - - -class Name(FinalOutputNode[BaseState, str]): - class Outputs(FinalOutputNode.Outputs): - value = ParseFunctionName.Outputs.result diff
--git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/parse_function_args.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/parse_function_args.py deleted file mode 100644 index d8afb91c83..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/parse_function_args.py +++ /dev/null @@ -1,12 +0,0 @@ -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.state import BaseState -from vellum.workflows.types.core import Json - -from .parse_function_call import ParseFunctionCall1 - - -class ParseFunctionArgs(TemplatingNode[BaseState, Json]): - template = """{{ function_call.arguments }}""" - inputs = { - "function_call": ParseFunctionCall1.Outputs.result, - } diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/parse_function_call.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/parse_function_call.py deleted file mode 100644 index bd070d28a3..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/parse_function_call.py +++ /dev/null @@ -1,11 +0,0 @@ -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.references import LazyReference -from vellum.workflows.state import BaseState -from vellum.workflows.types.core import Json - - -class ParseFunctionCall1(TemplatingNode[BaseState, Json]): - template = """{{ output[0] }}""" - inputs = { - "output": LazyReference("PromptNode.Outputs.results"), - } diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/parse_function_name.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/parse_function_name.py deleted file mode 100644 index 8b16621f23..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/parse_function_name.py +++ /dev/null @@ -1,11 +0,0 @@ -from vellum.workflows.nodes.displayable 
import TemplatingNode -from vellum.workflows.state import BaseState - -from .parse_function_call import ParseFunctionCall1 - - -class ParseFunctionName(TemplatingNode[BaseState, str]): - template = """{{ function_call.name }}""" - inputs = { - "function_call": ParseFunctionCall1.Outputs.result, - } diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/parse_tool_id.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/parse_tool_id.py deleted file mode 100644 index 2ef247d4bb..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/parse_tool_id.py +++ /dev/null @@ -1,11 +0,0 @@ -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.state import BaseState - -from .parse_function_call import ParseFunctionCall1 - - -class ParseToolID(TemplatingNode[BaseState, str]): - template = """{{ function_call.id }}""" - inputs = { - "function_call": ParseFunctionCall1.Outputs.result, - } diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/tool_id.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/tool_id.py deleted file mode 100644 index 8707f61098..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/nodes/tool_id.py +++ /dev/null @@ -1,9 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .parse_tool_id import ParseToolID - - -class ToolID(FinalOutputNode[BaseState, str]): - class Outputs(FinalOutputNode.Outputs): - value = ParseToolID.Outputs.result diff --git a/examples/workflows/function_calling_demo/nodes/parse_function_call/workflow.py b/examples/workflows/function_calling_demo/nodes/parse_function_call/workflow.py deleted file mode 100644 index 45f2585134..0000000000 --- a/examples/workflows/function_calling_demo/nodes/parse_function_call/workflow.py +++ /dev/null @@ -1,43 +0,0 @@ -from 
vellum.workflows import BaseWorkflow - -from .nodes.allowed_function_names import AllowedFunctionNames -from .nodes.args import Args -from .nodes.conditional_node_1 import ConditionalNode2 -from .nodes.error_message import ErrorMessage -from .nodes.error_node import ErrorNode1 -from .nodes.is_valid_function_name import IsValidFunctionName -from .nodes.merge_node import MergeNode -from .nodes.name import Name -from .nodes.parse_function_args import ParseFunctionArgs -from .nodes.parse_function_call import ParseFunctionCall1 -from .nodes.parse_function_name import ParseFunctionName -from .nodes.parse_tool_id import ParseToolID -from .nodes.tool_id import ToolID - - -class ParseFunctionCallWorkflow(BaseWorkflow): - graph = ( - ParseFunctionCall1 - >> { - ParseFunctionName, - ParseFunctionArgs, - AllowedFunctionNames, - ParseToolID, - } - >> MergeNode - >> IsValidFunctionName - >> { - ConditionalNode2.Ports.branch_1 - >> { - Name, - Args, - ToolID, - }, - ConditionalNode2.Ports.branch_2 >> ErrorMessage >> ErrorNode1, - } - ) - - class Outputs(BaseWorkflow.Outputs): - function_args = Args.Outputs.value - function_name = Name.Outputs.value - tool_id = ToolID.Outputs.value diff --git a/examples/workflows/function_calling_demo/nodes/prompt_node.py b/examples/workflows/function_calling_demo/nodes/prompt_node.py deleted file mode 100644 index 8ae7233296..0000000000 --- a/examples/workflows/function_calling_demo/nodes/prompt_node.py +++ /dev/null @@ -1,61 +0,0 @@ -from vellum import ChatMessagePromptBlock, FunctionDefinition, JinjaPromptBlock, PromptParameters, VariablePromptBlock -from vellum.workflows.nodes.displayable import InlinePromptNode - -from ..inputs import Inputs -from .accumulate_chat_history import AccumulateChatHistory - - -class PromptNode(InlinePromptNode): - ml_model = "gpt-3.5-turbo" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - JinjaPromptBlock( - template="""You are an expert meteorologist that helps correctly answer questions 
about the weather. Answer questions factually based on the information that you\'re provided and ask clarifying questions when needed""" - ) - ], - ), - VariablePromptBlock(input_variable="chat_history"), - ] - prompt_inputs = { - "chat_history": AccumulateChatHistory.Outputs.result.coalesce(Inputs.chat_history), - } - functions = [ - FunctionDefinition( - name="get_current_weather", - description="Get the current weather in a given location", - parameters={ - "type": "object", - "required": [ - "location", - ], - "properties": { - "unit": { - "enum": [ - "celsius", - "fahrenheit", - ], - "type": "string", - }, - "location": { - "type": "string", - "description": "The city and state, e.g. San Francisco, CA", - }, - }, - }, - function_forced=False, - function_strict=False, - ), - ] - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=1000, - top_p=1, - top_k=0, - frequency_penalty=0, - presence_penalty=0, - logit_bias={}, - custom_parameters=None, - ) diff --git a/examples/workflows/function_calling_demo/sandbox.py b/examples/workflows/function_calling_demo/sandbox.py deleted file mode 100644 index 39d40298dc..0000000000 --- a/examples/workflows/function_calling_demo/sandbox.py +++ /dev/null @@ -1,24 +0,0 @@ -from vellum.workflows.sandbox import WorkflowSandboxRunner - -from .inputs import ChatMessage, Inputs -from .workflow import FunctionCallingDemoWorkflow - -if __name__ != "__main__": - raise Exception("This file is not meant to be imported") - - -workflow = FunctionCallingDemoWorkflow() -runner = WorkflowSandboxRunner( - workflow, - inputs=[ - Inputs( - chat_history=[ - ChatMessage( - role="USER", - text="What is the weather in San Francisco?", - ) - ] - ) - ], -) -runner.run() diff --git a/examples/workflows/function_calling_demo/workflow.py b/examples/workflows/function_calling_demo/workflow.py deleted file mode 100644 index d7aeae533b..0000000000 --- a/examples/workflows/function_calling_demo/workflow.py +++ /dev/null @@ -1,33 +0,0 @@ -from
vellum.workflows import BaseWorkflow -from vellum.workflows.state import BaseState - -from .inputs import Inputs -from .nodes.accumulate_chat_history import AccumulateChatHistory -from .nodes.conditional_node import ConditionalNode -from .nodes.conditional_node_10 import ConditionalNode10 -from .nodes.error_node import ErrorNode -from .nodes.final_accumulation_of_chat_history import FinalAccumulationOfChatHistory -from .nodes.final_output import FinalOutput -from .nodes.get_current_weather import GetCurrentWeather -from .nodes.output_type import OutputType -from .nodes.parse_function_call import ParseFunctionCall -from .nodes.prompt_node import PromptNode - - -class FunctionCallingDemoWorkflow(BaseWorkflow[Inputs, BaseState]): - graph = ( - PromptNode - >> OutputType - >> { - ConditionalNode10.Ports.branch_1 - >> ParseFunctionCall - >> { - ConditionalNode.Ports.branch_1 >> GetCurrentWeather >> AccumulateChatHistory >> PromptNode, - ConditionalNode.Ports.branch_2 >> ErrorNode, - }, - ConditionalNode10.Ports.branch_2 >> FinalAccumulationOfChatHistory >> FinalOutput, - } - ) - - class Outputs(BaseWorkflow.Outputs): - final_output = FinalOutput.Outputs.value diff --git a/examples/workflows/image_processing/__init__.py b/examples/workflows/image_processing/__init__.py deleted file mode 100644 index 9442617720..0000000000 --- a/examples/workflows/image_processing/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# flake8: noqa: F401, F403 - -from .display import * diff --git a/examples/workflows/image_processing/display/__init__.py b/examples/workflows/image_processing/display/__init__.py deleted file mode 100644 index d38fb6d6a9..0000000000 --- a/examples/workflows/image_processing/display/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# flake8: noqa: F401, F403 - -from .nodes import * -from .workflow import * diff --git a/examples/workflows/image_processing/display/nodes/__init__.py b/examples/workflows/image_processing/display/nodes/__init__.py deleted file mode 100644 index 
e545261c39..0000000000 --- a/examples/workflows/image_processing/display/nodes/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -from .add_image_to_chat_history import AddImageToChatHistoryDisplay -from .final_output import FinalOutputDisplay -from .final_output_6 import FinalOutput6Display -from .summarize_image_by_chat_history import SummarizeImageByChatHistoryDisplay -from .summarize_image_by_url_chat_history import SummarizeImageByURLChatHistoryDisplay - -__all__ = [ - "AddImageToChatHistoryDisplay", - "FinalOutput6Display", - "FinalOutputDisplay", - "SummarizeImageByChatHistoryDisplay", - "SummarizeImageByURLChatHistoryDisplay", -] diff --git a/examples/workflows/image_processing/display/nodes/add_image_to_chat_history.py b/examples/workflows/image_processing/display/nodes/add_image_to_chat_history.py deleted file mode 100644 index 5e4ddc9486..0000000000 --- a/examples/workflows/image_processing/display/nodes/add_image_to_chat_history.py +++ /dev/null @@ -1,32 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.add_image_to_chat_history import AddImageToChatHistory - - -class AddImageToChatHistoryDisplay(BaseTemplatingNodeDisplay[AddImageToChatHistory]): - label = "Add Image to Chat History" - node_id = UUID("94cda50f-0846-4724-bb0d-009c743cc167") - target_handle_id = UUID("56355ee9-fd7e-481c-99a2-48254f862f7f") - node_input_ids_by_name = { - "inputs.chat_history": UUID("156eb216-6699-499e-8aac-9df98e54d14e"), - "template": UUID("d75eba1c-ce79-4921-a0d8-579bc60975b6"), - "inputs.image_url": UUID("fc0d002f-be41-4241-b67d-4f66a39d20a1"), - } - output_display = { - AddImageToChatHistory.Outputs.result: NodeOutputDisplay( - id=UUID("b7b64225-95f6-4a51-8a91-048eef1a5b13"), name="result" - ) - } - 
port_displays = { - AddImageToChatHistory.Ports.default: PortDisplayOverrides(id=UUID("508eec8f-bf3a-40a7-b47f-7e393b7f0672")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=282.3228076068352, y=552.4516595338625), - width=459, - height=387, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/image_processing/display/nodes/final_output.py b/examples/workflows/image_processing/display/nodes/final_output.py deleted file mode 100644 index a802f5d027..0000000000 --- a/examples/workflows/image_processing/display/nodes/final_output.py +++ /dev/null @@ -1,21 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from ...nodes.final_output import FinalOutput - - -class FinalOutputDisplay(BaseFinalOutputNodeDisplay[FinalOutput]): - label = "Final Output" - node_id = UUID("81a1c8a8-6b65-4ba8-b14a-0b2e70aed8f1") - target_handle_id = UUID("36d04318-1779-40fa-81fb-ae14ba87d1d6") - output_name = "final-output" - node_input_ids_by_name = {"node_input": UUID("fa56b746-783e-43a6-9c12-61d89cfb6f77")} - output_display = { - FinalOutput.Outputs.value: NodeOutputDisplay(id=UUID("3ae9132d-cbb9-4237-ac5b-9c80096eaac5"), name="value") - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=1465.3578490588761, y=635.1421509411235), width=450, height=239 - ) diff --git a/examples/workflows/image_processing/display/nodes/final_output_6.py b/examples/workflows/image_processing/display/nodes/final_output_6.py deleted file mode 100644 index f45e89665a..0000000000 --- a/examples/workflows/image_processing/display/nodes/final_output_6.py +++ /dev/null @@ -1,19 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import 
BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from ...nodes.final_output_6 import FinalOutput6 - - -class FinalOutput6Display(BaseFinalOutputNodeDisplay[FinalOutput6]): - label = "Final Output 6" - node_id = UUID("4e5f53c8-9ae6-47c4-9263-7ba72997e144") - target_handle_id = UUID("b59692d5-f479-45ce-b745-437ddfd11397") - output_name = "final-output-6" - node_input_ids_by_name = {"node_input": UUID("d453e26e-8794-4a80-afd8-4cdfa4d2f91f")} - output_display = { - FinalOutput6.Outputs.value: NodeOutputDisplay(id=UUID("92f6b627-7e91-49d1-9f26-4fb139a7f9a9"), name="value") - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=864, y=16.5), width=456, height=239) diff --git a/examples/workflows/image_processing/display/nodes/summarize_image_by_chat_history.py b/examples/workflows/image_processing/display/nodes/summarize_image_by_chat_history.py deleted file mode 100644 index 5f310db171..0000000000 --- a/examples/workflows/image_processing/display/nodes/summarize_image_by_chat_history.py +++ /dev/null @@ -1,37 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.summarize_image_by_chat_history import SummarizeImageByChatHistory - - -class SummarizeImageByChatHistoryDisplay(BaseInlinePromptNodeDisplay[SummarizeImageByChatHistory]): - label = "Summarize Image by Chat History" - node_id = UUID("dca4c915-ffe8-455b-8dec-7295e38f5387") - output_id = UUID("1d8e3718-2de2-414c-a63d-4d7a619bea2c") - array_output_id = UUID("fbe26292-6e59-430b-b21a-f27b21fb6b1b") - target_handle_id = UUID("90d5b58b-c00c-4461-9ab6-1a10c40f3e2b") - node_input_ids_by_name = {"prompt_inputs.chat_history": UUID("79d83f9e-fb6f-451f-95f1-ff21c2cb1801")} - attribute_ids_by_name 
= {"ml_model": UUID("017e7ca2-3d4c-445a-8b04-edd4292fb70a")} - output_display = { - SummarizeImageByChatHistory.Outputs.text: NodeOutputDisplay( - id=UUID("1d8e3718-2de2-414c-a63d-4d7a619bea2c"), name="text" - ), - SummarizeImageByChatHistory.Outputs.results: NodeOutputDisplay( - id=UUID("fbe26292-6e59-430b-b21a-f27b21fb6b1b"), name="results" - ), - SummarizeImageByChatHistory.Outputs.json: NodeOutputDisplay( - id=UUID("feec03e7-eed8-4399-8b02-72be44af5653"), name="json" - ), - } - port_displays = { - SummarizeImageByChatHistory.Ports.default: PortDisplayOverrides(id=UUID("1c874657-aa34-4aa5-91cb-a8cdfbf7347a")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=270.2777254920742, y=-23.060111113771768), - width=480, - height=279, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/image_processing/display/nodes/summarize_image_by_url_chat_history.py b/examples/workflows/image_processing/display/nodes/summarize_image_by_url_chat_history.py deleted file mode 100644 index a0c8dd7b4e..0000000000 --- a/examples/workflows/image_processing/display/nodes/summarize_image_by_url_chat_history.py +++ /dev/null @@ -1,36 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.summarize_image_by_url_chat_history import SummarizeImageByURLChatHistory - - -class SummarizeImageByURLChatHistoryDisplay(BaseInlinePromptNodeDisplay[SummarizeImageByURLChatHistory]): - label = "Summarize Image by URL -> Chat History" - node_id = UUID("92b90a99-61b3-4ec9-aded-114366fb5d6b") - output_id = UUID("26238951-f3b6-454c-81c6-1a6f7da20919") - array_output_id = UUID("9b21a9dd-2515-46d9-84a0-c0ef59890b18") - target_handle_id = UUID("0340e04b-13ff-4698-9fd4-8fdac3d4920d") - node_input_ids_by_name = 
{"prompt_inputs.chat_history": UUID("9cde4b6f-77c6-47e7-963d-ea0a903c81b8")} - attribute_ids_by_name = {"ml_model": UUID("7023b57c-7b7a-42ab-a7ce-e1c077475a03")} - output_display = { - SummarizeImageByURLChatHistory.Outputs.text: NodeOutputDisplay( - id=UUID("26238951-f3b6-454c-81c6-1a6f7da20919"), name="text" - ), - SummarizeImageByURLChatHistory.Outputs.results: NodeOutputDisplay( - id=UUID("9b21a9dd-2515-46d9-84a0-c0ef59890b18"), name="results" - ), - SummarizeImageByURLChatHistory.Outputs.json: NodeOutputDisplay( - id=UUID("f420b0a1-bcb4-4bbc-b7c4-5ffb9e108c4f"), name="json" - ), - } - port_displays = { - SummarizeImageByURLChatHistory.Ports.default: PortDisplayOverrides( - id=UUID("8779d3ec-9211-414d-9af3-631ad3167b1d") - ) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=837.4721057480847, y=667.4771042124831), width=480, height=175 - ) diff --git a/examples/workflows/image_processing/display/workflow.py b/examples/workflows/image_processing/display/workflow.py deleted file mode 100644 index 030fd0dcbf..0000000000 --- a/examples/workflows/image_processing/display/workflow.py +++ /dev/null @@ -1,69 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.base import ( - EdgeDisplay, - EntrypointDisplay, - WorkflowDisplayData, - WorkflowDisplayDataViewport, - WorkflowInputsDisplay, - WorkflowMetaDisplay, - WorkflowOutputDisplay, -) -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.workflows import BaseWorkflowDisplay - -from ..inputs import Inputs -from ..nodes.add_image_to_chat_history import AddImageToChatHistory -from ..nodes.final_output import FinalOutput -from ..nodes.final_output_6 import FinalOutput6 -from ..nodes.summarize_image_by_chat_history import SummarizeImageByChatHistory -from ..nodes.summarize_image_by_url_chat_history import SummarizeImageByURLChatHistory -from ..workflow import Workflow - - -class 
WorkflowDisplay(BaseWorkflowDisplay[Workflow]): - workflow_display = WorkflowMetaDisplay( - entrypoint_node_id=UUID("374eff56-51f2-4432-a642-ea35d9fbc455"), - entrypoint_node_source_handle_id=UUID("061a80b9-549b-441c-a623-21038cddeb6f"), - entrypoint_node_display=NodeDisplayData( - position=NodeDisplayPosition(x=-55.26644635815683, y=510.7385521062415), width=124, height=48 - ), - display_data=WorkflowDisplayData( - viewport=WorkflowDisplayDataViewport(x=-52.35286997670073, y=75.59489590281208, zoom=0.7571471902081293) - ), - ) - inputs_display = { - Inputs.image_url: WorkflowInputsDisplay(id=UUID("48d08975-862a-4858-9659-adf59f6648cc"), name="image_url"), - Inputs.workflow_input_chat_history: WorkflowInputsDisplay( - id=UUID("fb7211bb-0f6d-4176-a104-06c2261ebd5c"), name="workflow_input_chat_history", color="pink" - ), - } - entrypoint_displays = { - SummarizeImageByChatHistory: EntrypointDisplay( - id=UUID("374eff56-51f2-4432-a642-ea35d9fbc455"), - edge_display=EdgeDisplay(id=UUID("5f4e4e51-9316-4b5b-8704-bcba917e92af")), - ), - AddImageToChatHistory: EntrypointDisplay( - id=UUID("374eff56-51f2-4432-a642-ea35d9fbc455"), - edge_display=EdgeDisplay(id=UUID("6dac810b-ac0c-431a-ae93-19eaa75253c3")), - ), - } - edge_displays = { - (SummarizeImageByURLChatHistory.Ports.default, FinalOutput): EdgeDisplay( - id=UUID("04f83166-36b5-4f1f-84cd-c5e327d3954a") - ), - (SummarizeImageByChatHistory.Ports.default, FinalOutput6): EdgeDisplay( - id=UUID("f2e51a46-f977-4d99-a4b2-002593921cdf") - ), - (AddImageToChatHistory.Ports.default, SummarizeImageByURLChatHistory): EdgeDisplay( - id=UUID("f92e9611-8bbd-45df-80a5-3652edb6d3b6") - ), - } - output_displays = { - Workflow.Outputs.final_output_6: WorkflowOutputDisplay( - id=UUID("92f6b627-7e91-49d1-9f26-4fb139a7f9a9"), name="final-output-6" - ), - Workflow.Outputs.final_output: WorkflowOutputDisplay( - id=UUID("3ae9132d-cbb9-4237-ac5b-9c80096eaac5"), name="final-output" - ), - } diff --git 
a/examples/workflows/image_processing/inputs.py b/examples/workflows/image_processing/inputs.py deleted file mode 100644 index afad549aa9..0000000000 --- a/examples/workflows/image_processing/inputs.py +++ /dev/null @@ -1,9 +0,0 @@ -from typing import List, Optional - -from vellum import ChatMessage -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - image_url: str - workflow_input_chat_history: Optional[List[ChatMessage]] diff --git a/examples/workflows/image_processing/nodes/__init__.py b/examples/workflows/image_processing/nodes/__init__.py deleted file mode 100644 index 0763895ae7..0000000000 --- a/examples/workflows/image_processing/nodes/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -from .add_image_to_chat_history import AddImageToChatHistory -from .final_output import FinalOutput -from .final_output_6 import FinalOutput6 -from .summarize_image_by_chat_history import SummarizeImageByChatHistory -from .summarize_image_by_url_chat_history import SummarizeImageByURLChatHistory - -__all__ = [ - "AddImageToChatHistory", - "FinalOutput", - "FinalOutput6", - "SummarizeImageByChatHistory", - "SummarizeImageByURLChatHistory", -] diff --git a/examples/workflows/image_processing/nodes/add_image_to_chat_history.py b/examples/workflows/image_processing/nodes/add_image_to_chat_history.py deleted file mode 100644 index a386ad2f09..0000000000 --- a/examples/workflows/image_processing/nodes/add_image_to_chat_history.py +++ /dev/null @@ -1,39 +0,0 @@ -from typing import List - -from vellum import ChatMessage -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.state import BaseState - -from ..inputs import Inputs - - -class AddImageToChatHistory(TemplatingNode[BaseState, List[ChatMessage]]): - """You can use this approach if you want to insert images into Chat History dynamically while your Workflow / Agent is running. 
You may want this if you want to process images at runtime, for example, images included in a parsed document or webpage.""" - - template = """\ -{%- set new_msg = { - \"text\": image_url, - \"role\": \"USER\", - \"content\": { - \"type\": \"ARRAY\", - \"value\": [ - { - \"type\": \"IMAGE\", - \"value\": { - \"src\": image_url, - \"metadata\": { - \"detail\": \"low\" - } - } - } - ] - }, - \"source\": None - } -%} -{%- set msg_arr = [new_msg] -%} -{{- ((chat_history or []) + msg_arr) | tojson -}}\ -""" - inputs = { - "chat_history": [], - "image_url": Inputs.image_url, - } diff --git a/examples/workflows/image_processing/nodes/final_output.py b/examples/workflows/image_processing/nodes/final_output.py deleted file mode 100644 index bf322eadfc..0000000000 --- a/examples/workflows/image_processing/nodes/final_output.py +++ /dev/null @@ -1,9 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .summarize_image_by_url_chat_history import SummarizeImageByURLChatHistory - - -class FinalOutput(FinalOutputNode[BaseState, str]): - class Outputs(FinalOutputNode.Outputs): - value = SummarizeImageByURLChatHistory.Outputs.text diff --git a/examples/workflows/image_processing/nodes/final_output_6.py b/examples/workflows/image_processing/nodes/final_output_6.py deleted file mode 100644 index c783756d1b..0000000000 --- a/examples/workflows/image_processing/nodes/final_output_6.py +++ /dev/null @@ -1,9 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .summarize_image_by_chat_history import SummarizeImageByChatHistory - - -class FinalOutput6(FinalOutputNode[BaseState, str]): - class Outputs(FinalOutputNode.Outputs): - value = SummarizeImageByChatHistory.Outputs.text diff --git a/examples/workflows/image_processing/nodes/summarize_image_by_chat_history.py b/examples/workflows/image_processing/nodes/summarize_image_by_chat_history.py 
deleted file mode 100644 index 24b06ca1ed..0000000000 --- a/examples/workflows/image_processing/nodes/summarize_image_by_chat_history.py +++ /dev/null @@ -1,41 +0,0 @@ -from vellum import ( - ChatMessagePromptBlock, - PlainTextPromptBlock, - PromptParameters, - RichTextPromptBlock, - VariablePromptBlock, -) -from vellum.workflows.nodes.displayable import InlinePromptNode - -from ..inputs import Inputs - - -class SummarizeImageByChatHistory(InlinePromptNode): - """You can use this approach if you want to drag-and-drop images in the UI, use them in Scenarios, or include them from your application code. This approach will also make it easier to view images directly in your Evaluations and Test Cases.""" - - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[PlainTextPromptBlock(text="""Please describe what you see in the image. """)] - ) - ], - ), - VariablePromptBlock(input_variable="chat_history"), - ] - prompt_inputs = { - "chat_history": Inputs.workflow_input_chat_history, - } - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=1000, - top_p=1, - top_k=0, - frequency_penalty=0, - presence_penalty=0, - logit_bias={}, - custom_parameters=None, - ) diff --git a/examples/workflows/image_processing/nodes/summarize_image_by_url_chat_history.py b/examples/workflows/image_processing/nodes/summarize_image_by_url_chat_history.py deleted file mode 100644 index 800c05cb93..0000000000 --- a/examples/workflows/image_processing/nodes/summarize_image_by_url_chat_history.py +++ /dev/null @@ -1,39 +0,0 @@ -from vellum import ( - ChatMessagePromptBlock, - PlainTextPromptBlock, - PromptParameters, - RichTextPromptBlock, - VariablePromptBlock, -) -from vellum.workflows.nodes.displayable import InlinePromptNode - -from .add_image_to_chat_history import AddImageToChatHistory - - -class SummarizeImageByURLChatHistory(InlinePromptNode): - ml_model = "gpt-4o" - blocks = [ - 
ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[PlainTextPromptBlock(text="""Please describe what you see in the image. """)] - ) - ], - ), - VariablePromptBlock(input_variable="chat_history"), - ] - prompt_inputs = { - "chat_history": AddImageToChatHistory.Outputs.result, - } - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=1000, - top_p=1, - top_k=None, - frequency_penalty=0, - presence_penalty=0, - logit_bias={}, - custom_parameters=None, - ) diff --git a/examples/workflows/image_processing/sandbox.py b/examples/workflows/image_processing/sandbox.py deleted file mode 100644 index 3ec3eecf51..0000000000 --- a/examples/workflows/image_processing/sandbox.py +++ /dev/null @@ -1,59 +0,0 @@ -from vellum import ArrayChatMessageContent, ChatMessage, ImageChatMessageContent, VellumImage -from vellum.workflows.sandbox import WorkflowSandboxRunner - -from .inputs import Inputs -from .workflow import Workflow - -if __name__ != "__main__": - raise Exception("This file is not meant to be imported") - - -runner = WorkflowSandboxRunner( - workflow=Workflow(), - inputs=[ - Inputs( - image_url="https://picsum.photos/id/296/200/300", - workflow_input_chat_history=[ - ChatMessage( - role="USER", - text="https://picsum.photos/id/296/200/300", - content=ArrayChatMessageContent( - value=[ - ImageChatMessageContent( - value=VellumImage( - src="https://picsum.photos/id/296/200/300", - metadata={ - "detail": "high", - }, - ), - ), - ] - ), - ), - ], - ), - Inputs( - image_url="https://picsum.photos/id/419/200/300", - workflow_input_chat_history=[ - ChatMessage( - role="USER", - text="https://picsum.photos/id/419/200/300", - content=ArrayChatMessageContent( - value=[ - ImageChatMessageContent( - value=VellumImage( - src="https://picsum.photos/id/419/200/300", - metadata={ - "detail": "high", - }, - ), - ), - ] - ), - ), - ], - ), - ], -) - -runner.run() diff --git a/examples/workflows/image_processing/workflow.py 
b/examples/workflows/image_processing/workflow.py deleted file mode 100644 index 744f4e948f..0000000000 --- a/examples/workflows/image_processing/workflow.py +++ /dev/null @@ -1,20 +0,0 @@ -from vellum.workflows import BaseWorkflow -from vellum.workflows.state import BaseState - -from .inputs import Inputs -from .nodes.add_image_to_chat_history import AddImageToChatHistory -from .nodes.final_output import FinalOutput -from .nodes.final_output_6 import FinalOutput6 -from .nodes.summarize_image_by_chat_history import SummarizeImageByChatHistory -from .nodes.summarize_image_by_url_chat_history import SummarizeImageByURLChatHistory - - -class Workflow(BaseWorkflow[Inputs, BaseState]): - graph = { - SummarizeImageByChatHistory >> FinalOutput6, - AddImageToChatHistory >> SummarizeImageByURLChatHistory >> FinalOutput, - } - - class Outputs(BaseWorkflow.Outputs): - final_output_6 = FinalOutput6.Outputs.value - final_output = FinalOutput.Outputs.value diff --git a/examples/workflows/mcp_demo/README.md b/examples/workflows/mcp_demo/README.md deleted file mode 100644 index a4d27086f9..0000000000 --- a/examples/workflows/mcp_demo/README.md +++ /dev/null @@ -1,16 +0,0 @@ -# MCP Demo - -This Workflow is an example of how to use a Vellum Workflow as an [MCP](https://modelcontextprotocol.io/introduction) Client. It depends on the [GitHub MCP Server](https://github.com/github/github-mcp-server) for performing actions on the user's GitHub account on their behalf. - -To use locally, you should create a [GitHub personal access token](https://github.com/settings/personal-access-tokens) and save it in a local `.env` file: - -```bash -VELLUM_API_KEY=*********************** -GITHUB_PERSONAL_ACCESS_TOKEN=github_pat_********************** -``` - -We include a `chat.py` file in the module to help you run it locally.
Invoke it by running: - -```bash -python -m examples.workflows.mcp_demo.chat -``` diff --git a/examples/workflows/mcp_demo/chat.py b/examples/workflows/mcp_demo/chat.py deleted file mode 100644 index 18ade928ba..0000000000 --- a/examples/workflows/mcp_demo/chat.py +++ /dev/null @@ -1,44 +0,0 @@ -from dotenv import load_dotenv - -from .inputs import Inputs -from .workflow import MCPDemoWorkflow - - -def main(): - load_dotenv() - workflow = MCPDemoWorkflow() - - while True: - query = input("Hi! I'm an MCP Chatbot for Github. What can I do for you today? ") - inputs = Inputs(query=query) - - event_filter_set = {"node.execution.fulfilled", "workflow.execution.fulfilled", "workflow.execution.rejected"} - stream = workflow.stream(inputs=inputs, event_filter=lambda workflow, event: event.name in event_filter_set) - is_rejected = False - for event in stream: - if event.name == "node.execution.fulfilled": - can_think = any(o[0].name == "thinking" for o in event.outputs) - if can_think: - print( - "Finished Node", - event.node_definition, - "thinking", - event.outputs["thinking"], - ) # noqa: T201 - else: - print( - "Finished Node", - event.node_definition, - ) # noqa: T201 - elif event.name == "workflow.execution.fulfilled": - print(event.outputs["answer"]) # noqa: T201 - elif event.name == "workflow.execution.rejected": - print("Workflow rejected", event.error.code, event.error.message) # noqa: T201 - is_rejected = True - - if is_rejected: - exit(1) - - -if __name__ == "__main__": - main() diff --git a/examples/workflows/mcp_demo/inputs.py b/examples/workflows/mcp_demo/inputs.py deleted file mode 100644 index 93b648bd0e..0000000000 --- a/examples/workflows/mcp_demo/inputs.py +++ /dev/null @@ -1,5 +0,0 @@ -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - query: str diff --git a/examples/workflows/mcp_demo/nodes/__init__.py b/examples/workflows/mcp_demo/nodes/__init__.py deleted file mode 100644 index
e69de29bb2..0000000000 diff --git a/examples/workflows/mcp_demo/nodes/execute_action_node.py b/examples/workflows/mcp_demo/nodes/execute_action_node.py deleted file mode 100644 index a5c2a2815c..0000000000 --- a/examples/workflows/mcp_demo/nodes/execute_action_node.py +++ /dev/null @@ -1,81 +0,0 @@ -import asyncio - -from mcp import ClientSession -from mcp.client.streamable_http import streamablehttp_client - -from vellum import ChatMessage, FunctionCallChatMessageContent, StringChatMessageContent -from vellum.workflows.errors.types import WorkflowErrorCode -from vellum.workflows.exceptions import NodeException -from vellum.workflows.nodes import BaseNode -from vellum.workflows.references.environment_variable import EnvironmentVariableReference - -from ..state import State -from .my_prompt_node import MyPromptNode - - -class ExecuteActionNode(BaseNode[State]): - action = MyPromptNode.Outputs.results[0] - token = EnvironmentVariableReference(name="GITHUB_PERSONAL_ACCESS_TOKEN", default="") - - class Outputs(BaseNode.Outputs): - action_result: str - thinking: str - - def run(self) -> Outputs: - if self.action.type != "FUNCTION_CALL": - raise ValueError(f"Action is not a function call: {self.action}") - - if self.state.chat_history is None: - self.state.chat_history = [] - - self.state.chat_history.append( - ChatMessage( - role="ASSISTANT", - content=FunctionCallChatMessageContent.model_validate(self.action.model_dump()), - ) - ) - - async def run_stdio(): - try: - async with streamablehttp_client( - url="https://api.githubcopilot.com/mcp", - headers={ - "Authorization": f"Bearer {self.token}", - }, - ) as http_stream: - read_stream, write_stream, get_id = http_stream - try: - async with ClientSession(read_stream, write_stream) as session: - try: - await session.initialize() - response = await session.call_tool( - name=self.action.value.name, - arguments=self.action.value.arguments, - ) - except Exception as e: - return str(e) - except Exception as e: - return str(e) 
- - return response.content - except Exception as e: - return str(e) - - action_results = asyncio.run(run_stdio()) - if isinstance(action_results, str): - compiled_action_result = action_results - else: - compiled_action_result = "\n".join([res.text for res in action_results if res.type == "text"]) - - self.state.chat_history.append( - ChatMessage( - role="FUNCTION", - content=StringChatMessageContent(value=compiled_action_result), - source=self.action.value.id, - ) - ) - - return self.Outputs( - action_result=compiled_action_result, - thinking=f"Executed tool: {self.action.value.name}", - ) diff --git a/examples/workflows/mcp_demo/nodes/exit_node.py b/examples/workflows/mcp_demo/nodes/exit_node.py deleted file mode 100644 index 7709604564..0000000000 --- a/examples/workflows/mcp_demo/nodes/exit_node.py +++ /dev/null @@ -1,8 +0,0 @@ -from vellum.workflows.nodes.displayable.final_output_node.node import FinalOutputNode - -from .my_prompt_node import MyPromptNode - - -class ExitNode(FinalOutputNode): - class Outputs(FinalOutputNode.Outputs): - value = MyPromptNode.Outputs.text diff --git a/examples/workflows/mcp_demo/nodes/mcp_client_node.py b/examples/workflows/mcp_demo/nodes/mcp_client_node.py deleted file mode 100644 index 258e8d9722..0000000000 --- a/examples/workflows/mcp_demo/nodes/mcp_client_node.py +++ /dev/null @@ -1,57 +0,0 @@ -import asyncio -import os -import traceback -from typing import List - -from mcp import ClientSession -from mcp.client.streamable_http import streamablehttp_client - -from vellum import FunctionDefinition -from vellum.workflows.errors.types import WorkflowErrorCode -from vellum.workflows.exceptions import NodeException -from vellum.workflows.nodes import BaseNode -from vellum.workflows.references.environment_variable import EnvironmentVariableReference - - -class MCPClientNode(BaseNode): - - token = EnvironmentVariableReference(name="GITHUB_PERSONAL_ACCESS_TOKEN", default="") - - class Outputs(BaseNode.Outputs): - tools: 
List[FunctionDefinition] - thinking: str - - def run(self) -> Outputs: - async def run_stdio(): - async with streamablehttp_client( - url="https://api.githubcopilot.com/mcp/", - headers={ - "Authorization": f"Bearer {self.token}", - }, - ) as http_stream: - read_stream, write_stream, get_id = http_stream - async with ClientSession(read_stream, write_stream) as session: - await session.initialize() - response = await session.list_tools() - return response.tools - - try: - mcp_tools = asyncio.run(run_stdio()) - except Exception as e: - tb_str = "".join(traceback.format_exception(type(e), e, e.__traceback__)) - raise NodeException( - f"Error: Failed to retrieve tools from MCP Server: {tb_str}", - code=WorkflowErrorCode.INVALID_CODE, - ) - - return self.Outputs( - tools=[ - FunctionDefinition( - name=tool.name, - description=tool.description, - parameters=tool.inputSchema, - ) - for tool in mcp_tools - ], - thinking=f"Retrieved {len(mcp_tools)} tools from MCP Server...", - ) diff --git a/examples/workflows/mcp_demo/nodes/my_prompt_node.py b/examples/workflows/mcp_demo/nodes/my_prompt_node.py deleted file mode 100644 index c6f4a2dc61..0000000000 --- a/examples/workflows/mcp_demo/nodes/my_prompt_node.py +++ /dev/null @@ -1,50 +0,0 @@ -from vellum import ChatMessagePromptBlock, PlainTextPromptBlock, RichTextPromptBlock, VariablePromptBlock -from vellum.workflows.nodes import InlinePromptNode -from vellum.workflows.ports import Port -from vellum.workflows.references import LazyReference - -from ..inputs import Inputs -from ..state import State -from .mcp_client_node import MCPClientNode - - -class MyPromptNode(InlinePromptNode): - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="You are a helpful assistant that will manage the user's Github account on their behalf.", - ) - ] - ) - ], - ), - ChatMessagePromptBlock( - chat_role="USER", - blocks=[ - 
VariablePromptBlock( - input_variable="query", - ), - ], - ), - VariablePromptBlock( - input_variable="chat_history", - ), - ] - prompt_inputs = { - "query": Inputs.query, - "chat_history": State.chat_history.coalesce([]), - } - # Our mypy plugin is not handling list of pydantic models properly - functions = MCPClientNode.Outputs.tools # type: ignore[assignment] - - class Ports(InlinePromptNode.Ports): - action = Port.on_if(LazyReference(lambda: MyPromptNode.Outputs.results[0]["type"].equals("FUNCTION_CALL"))) - exit = Port.on_if(LazyReference(lambda: MyPromptNode.Outputs.results[0]["type"].equals("STRING"))) - - class Outputs(InlinePromptNode.Outputs): - thinking = "Asking the model..." diff --git a/examples/workflows/mcp_demo/state.py b/examples/workflows/mcp_demo/state.py deleted file mode 100644 index e7eaa52e5c..0000000000 --- a/examples/workflows/mcp_demo/state.py +++ /dev/null @@ -1,8 +0,0 @@ -from typing import List - -from vellum import ChatMessage -from vellum.workflows.state import BaseState - - -class State(BaseState): - chat_history: List[ChatMessage] = [] diff --git a/examples/workflows/mcp_demo/workflow.py b/examples/workflows/mcp_demo/workflow.py deleted file mode 100644 index faeb20ae74..0000000000 --- a/examples/workflows/mcp_demo/workflow.py +++ /dev/null @@ -1,25 +0,0 @@ -from vellum.workflows import BaseWorkflow - -from .inputs import Inputs -from .nodes.execute_action_node import ExecuteActionNode -from .nodes.exit_node import ExitNode -from .nodes.mcp_client_node import MCPClientNode -from .nodes.my_prompt_node import MyPromptNode -from .state import State - - -class MCPDemoWorkflow(BaseWorkflow[Inputs, State]): - """ - An example workflow that acts as an MCP Client. It's currently hardcoded to interact with the - Github MCP Server, found here: https://github.com/github/github-mcp-server. 
To run this demo, - - Create a Github Access Token and export it as the environment variable `GITHUB_PERSONAL_ACCESS_TOKEN` - - Run this workflow: `python -m examples.workflows.mcp_demo.chat` - """ - - graph = MCPClientNode >> { - MyPromptNode.Ports.action >> ExecuteActionNode >> MCPClientNode, - MyPromptNode.Ports.exit >> ExitNode, - } - - class Outputs: - answer = ExitNode.Outputs.value diff --git a/examples/workflows/mcp_tool_calling_node_demo/README.md b/examples/workflows/mcp_tool_calling_node_demo/README.md deleted file mode 100644 index c07ae27f6a..0000000000 --- a/examples/workflows/mcp_tool_calling_node_demo/README.md +++ /dev/null @@ -1,26 +0,0 @@ -# MCP Tool Calling Node Demo - -This workflow demonstrates how to use the built-in `ToolCallingNode` with MCP (Model Context Protocol) servers. It's a simplified version that uses Vellum's native MCP integration instead of a manual HTTP client implementation. - -The workflow uses the GitHub MCP Server to manage the user's GitHub account through natural language commands. - -To use locally, you should create a [GitHub personal access token](https://github.com/settings/personal-access-tokens) and save it in a local `.env` file: - -```bash -VELLUM_API_KEY=*********************** -GITHUB_PERSONAL_ACCESS_TOKEN=github_pat_********************** -``` - -We include a `chat.py` file with the module to help with running locally. Invoke it by running: - -```bash -python -m examples.workflows.mcp_tool_calling_node_demo.chat -``` - -## Key Differences from Manual Implementation - -This example shows how to use the built-in `ToolCallingNode` with `MCPServer` instead of manually implementing MCP client functionality: - -1. **Simplified Architecture**: Uses `ToolCallingNode` which handles tool calling internally -2. **Native MCP Integration**: Uses `MCPServer` type for seamless MCP server integration -3.
**Automatic Tool Discovery**: The `ToolCallingNode` automatically discovers and hydrates available tools from the MCP server diff --git a/examples/workflows/mcp_tool_calling_node_demo/chat.py b/examples/workflows/mcp_tool_calling_node_demo/chat.py deleted file mode 100644 index 22568b3ec4..0000000000 --- a/examples/workflows/mcp_tool_calling_node_demo/chat.py +++ /dev/null @@ -1,31 +0,0 @@ -from dotenv import load_dotenv - -from .inputs import Inputs -from .workflow import MCPDemoWorkflow - - -def main(): - load_dotenv() - workflow = MCPDemoWorkflow() - - while True: - query = input("Hi! I'm an MCP Chatbot for Github. What can I do for you today? ") - inputs = Inputs(query=query) - - event_filter_set = {"workflow.execution.fulfilled", "workflow.execution.rejected"} - stream = workflow.stream(inputs=inputs, event_filter=lambda workflow, event: event.name in event_filter_set) - is_rejected = False - for event in stream: - if event.name == "workflow.execution.fulfilled": - print("Answer:", event.outputs["text"]) # noqa: T201 - print("Chat History Length:", len(event.outputs["chat_history"])) # noqa: T201 - elif event.name == "workflow.execution.rejected": - print("Workflow rejected", event.error.code, event.error.message) # noqa: T201 - is_rejected = True - - if is_rejected: - exit(1) - - -if __name__ == "__main__": - main() diff --git a/examples/workflows/mcp_tool_calling_node_demo/inputs.py b/examples/workflows/mcp_tool_calling_node_demo/inputs.py deleted file mode 100644 index 93b648bd0e..0000000000 --- a/examples/workflows/mcp_tool_calling_node_demo/inputs.py +++ /dev/null @@ -1,5 +0,0 @@ -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - query: str diff --git a/examples/workflows/mcp_tool_calling_node_demo/nodes/__init__.py b/examples/workflows/mcp_tool_calling_node_demo/nodes/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/workflows/mcp_tool_calling_node_demo/nodes/my_prompt_node.py 
b/examples/workflows/mcp_tool_calling_node_demo/nodes/my_prompt_node.py deleted file mode 100644 index 32e3e66c70..0000000000 --- a/examples/workflows/mcp_tool_calling_node_demo/nodes/my_prompt_node.py +++ /dev/null @@ -1,51 +0,0 @@ -from vellum.client.types.chat_message_prompt_block import ChatMessagePromptBlock -from vellum.client.types.plain_text_prompt_block import PlainTextPromptBlock -from vellum.client.types.rich_text_prompt_block import RichTextPromptBlock -from vellum.client.types.variable_prompt_block import VariablePromptBlock -from vellum.workflows.constants import AuthorizationType -from vellum.workflows.nodes.displayable.tool_calling_node import ToolCallingNode -from vellum.workflows.references.environment_variable import EnvironmentVariableReference -from vellum.workflows.types.definition import MCPServer - -from ..inputs import Inputs - - -class MyPromptNode(ToolCallingNode): - """ - A tool calling node that uses the GitHub MCP server to manage the user's GitHub account. 
- """ - - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="You are a helpful assistant that will manage the user's Github account on their behalf.", - ), - ], - ), - ], - ), - ChatMessagePromptBlock( - chat_role="USER", - blocks=[ - VariablePromptBlock( - input_variable="query", - ), - ], - ), - ] - functions = [ - MCPServer( - name="github", - url="https://api.githubcopilot.com/mcp/", - authorization_type=AuthorizationType.BEARER_TOKEN, - bearer_token_value=EnvironmentVariableReference(name="GITHUB_PERSONAL_ACCESS_TOKEN"), - ), - ] - prompt_inputs = { - "query": Inputs.query, - } diff --git a/examples/workflows/mcp_tool_calling_node_demo/workflow.py b/examples/workflows/mcp_tool_calling_node_demo/workflow.py deleted file mode 100644 index f06f4e8957..0000000000 --- a/examples/workflows/mcp_tool_calling_node_demo/workflow.py +++ /dev/null @@ -1,22 +0,0 @@ -from vellum.workflows import BaseWorkflow -from vellum.workflows.state.base import BaseState - -from .inputs import Inputs -from .nodes.my_prompt_node import MyPromptNode - - -class MCPDemoWorkflow(BaseWorkflow[Inputs, BaseState]): - """ - An example workflow that uses the built-in ToolCallingNode with MCP Server. - It interacts with the Github MCP Server to manage the user's GitHub account. 
- - To run this demo: - - Create a Github Access Token and export it as the environment variable `GITHUB_PERSONAL_ACCESS_TOKEN` - - Run this workflow: `python -m examples.workflows.mcp_tool_calling_node_demo.chat` - """ - - graph = MyPromptNode - - class Outputs: - text = MyPromptNode.Outputs.text - chat_history = MyPromptNode.Outputs.chat_history diff --git a/examples/workflows/mcp_toolbox/MCP Toolbox.ipynb b/examples/workflows/mcp_toolbox/MCP Toolbox.ipynb deleted file mode 100644 index 4dd9ab53c4..0000000000 --- a/examples/workflows/mcp_toolbox/MCP Toolbox.ipynb +++ /dev/null @@ -1,192 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "ee311904", - "metadata": {}, - "source": [ - "# MCP Toolbox\n" - ] - }, - { - "cell_type": "markdown", - "id": "6bfbf17f", - "metadata": {}, - "source": [ - "## Prerequisites\n", - "\n", - "1. Follow the setup instructions https://github.com/vellum-ai/vellum-python-sdks/tree/c7e2ba1fd0bed4c78cfeafc22c98551b20f9044d/examples/workflows/mcp_toolbox\n" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "3486c918", - "metadata": {}, - "outputs": [], - "source": [ - "from vellum.workflows.inputs import BaseInputs\n", - "\n", - "\n", - "class Inputs(BaseInputs):\n", - " query: str" - ] - }, - { - "cell_type": "markdown", - "id": "cdab0344", - "metadata": {}, - "source": [ - "Define an Agent Node that has a mcp tool pointing to mcp server setup by `toolbox`\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "af23be67", - "metadata": {}, - "outputs": [], - "source": [ - "from vellum.client.types.chat_message_prompt_block import ChatMessagePromptBlock\n", - "from vellum.client.types.plain_text_prompt_block import PlainTextPromptBlock\n", - "from vellum.client.types.rich_text_prompt_block import RichTextPromptBlock\n", - "from vellum.client.types.variable_prompt_block import VariablePromptBlock\n", - "from vellum.workflows.nodes.displayable.tool_calling_node import ToolCallingNode\n", - 
"from vellum.workflows.types.definition import MCPServer\n", - "\n", - "class Agent(ToolCallingNode):\n", - " ml_model = \"gpt-4o-mini\"\n", - " blocks = [\n", - " ChatMessagePromptBlock(\n", - " chat_role=\"SYSTEM\",\n", - " blocks=[\n", - " RichTextPromptBlock(\n", - " blocks=[\n", - " PlainTextPromptBlock(\n", - " text=\"\"\"\n", - "You're a helpful hotel assistant. You handle hotel searching, booking and\n", - "cancellations. When the user searches for a hotel, mention its name, id,\n", - "location and price tier. Always mention hotel ids while performing any\n", - "searches. This is very important for any operations. For any bookings or\n", - "cancellations, please provide the appropriate confirmation. Be sure to\n", - "update checkin or checkout dates if mentioned by the user.\n", - "Don't ask for confirmations from the user.\n", - "\"\"\"\n", - " ),\n", - " ],\n", - " ),\n", - " ],\n", - " ),\n", - " ChatMessagePromptBlock(\n", - " chat_role=\"USER\",\n", - " blocks=[\n", - " VariablePromptBlock(\n", - " input_variable=\"query\",\n", - " ),\n", - " ],\n", - " ),\n", - " ]\n", - " functions = [\n", - " MCPServer(\n", - " name=\"toolbox\",\n", - " url=\"http://127.0.0.1:5000/mcp\",\n", - " ),\n", - " ]\n", - " prompt_inputs = {\n", - " \"query\": Inputs.query,\n", - " }\n" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "6c9d438b", - "metadata": {}, - "outputs": [], - "source": [ - "from vellum.workflows import BaseWorkflow\n", - "from vellum.workflows.state.base import BaseState\n", - "\n", - "\n", - "class MCPToolboxWorkflow(BaseWorkflow[Inputs, BaseState]):\n", - " graph = Agent\n", - "\n", - " class Outputs:\n", - " text = Agent.Outputs.text\n", - " chat_history = Agent.Outputs.chat_history" - ] - }, - { - "cell_type": "markdown", - "id": "16612b06", - "metadata": {}, - "source": [ - "Interactive chat with mcp toolbox\n", - "\n", - "`Find hotels in Basel with Basel in its name.`\n" - ] - }, - { - "cell_type": "code", -
"execution_count": 4, - "id": "8d7d0447", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Running workflow with inputs: Find hotels in Basel with Basel in its name.\n", - "Answer: Following are the hotels in Basel with Basel in their name:\n", - "- Hilton Basel, with an id of 1, is a Luxury hotel in Basel.\n", - "- Hyatt Regency Basel, with an id of 3, is a Upper Upscale hotel in Basel.\n", - "- Holiday Inn Basel, with an id of 8, is a Upper Midscale hotel in Basel.\n" - ] - } - ], - "source": [ - "workflow = MCPToolboxWorkflow()\n", - "\n", - "query = input(\"Hi! I'm your hotel booking assistant. What can I do for you today? \")\n", - "inputs = Inputs(query=query)\n", - "\n", - "print(\"Running workflow with inputs: \", query)\n", - "\n", - "terminal_event = workflow.run(inputs=inputs)\n", - "if terminal_event.name == \"workflow.execution.fulfilled\":\n", - " print(\"Answer:\", terminal_event.outputs[\"text\"])\n", - "elif terminal_event.name == \"workflow.execution.rejected\":\n", - " print(\"Workflow rejected\", terminal_event.error.code, terminal_event.error.message)\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "1114bdff", - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": ".venv", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.9.23" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/examples/workflows/mcp_toolbox/README.md b/examples/workflows/mcp_toolbox/README.md deleted file mode 100644 index 108de10228..0000000000 --- a/examples/workflows/mcp_toolbox/README.md +++ /dev/null @@ -1,53 +0,0 @@ -# MCP Toolbox Agent Node Demo - -This workflow demonstrates how to use
the built-in `AgentNode` (ToolCallingNode) with [MCP Toolbox for databases](https://googleapis.github.io/genai-toolbox/getting-started/introduction/). - -## Overview - -This demo showcases the integration between Vellum workflows and Google's MCP Toolbox, allowing you to create AI agents that can interact with databases through MCP (Model Context Protocol) tools. - -## Setup - -### Quickstart: Local Postgres via Docker - -Build and run a local Postgres seeded with demo `hotels` data: - -```bash -cd examples/workflows/mcp_toolbox/db -docker build -t mcp-demo-pg . -docker run --name mcp-demo-pg -p 5432:5432 -e POSTGRES_PASSWORD=postgres -d mcp-demo-pg - -cd ../../../../ # go back to vellum-python-sdks root dir -``` - -### Database and CLI Setup - -Follow Step 1 and Step 2 in the instructions in [Setup](https://googleapis.github.io/genai-toolbox/getting-started/local_quickstart/#:~:text=aiplatform.googleapis.com-,Step%201%3A%20Set%20up%20your%20database,-In%20this%20section) to set up your database. - -### Configure Tools - -1. Modify the database configuration in `tools.yaml` according to your setup -2. Run the toolbox server: - -```bash -toolbox --tools-file "tools.yaml" -``` - -## Usage - -### Running the Demo - -We include a `chat.py` file with the module for running locally. Invoke it by running: - -```bash -# pip install vellum-ai if it's not in your Python environment yet -VELLUM_API_KEY= python -m examples.workflows.mcp_toolbox.chat -``` - -### Example Queries - -Once the workflow is running, you can ask questions like: - -``` -Find hotels in Basel with Basel in its name.
-``` diff --git a/examples/workflows/mcp_toolbox/chat.py b/examples/workflows/mcp_toolbox/chat.py deleted file mode 100644 index 0d9c721e91..0000000000 --- a/examples/workflows/mcp_toolbox/chat.py +++ /dev/null @@ -1,23 +0,0 @@ -from dotenv import load_dotenv - -from .inputs import Inputs -from .workflow import MCPToolboxWorkflow - - -def main(): - load_dotenv() # load your vellum api key - workflow = MCPToolboxWorkflow() - - while True: - query = input("Hi! I'm your hotel booking assistant. What can I do for you today? ") - inputs = Inputs(query=query) - - terminal_event = workflow.run(inputs=inputs) - if terminal_event.name == "workflow.execution.fulfilled": - print("Answer:", terminal_event.outputs["text"]) - elif terminal_event.name == "workflow.execution.rejected": - print("Workflow rejected", terminal_event.error.code, terminal_event.error.message) - - -if __name__ == "__main__": - main() diff --git a/examples/workflows/mcp_toolbox/db/Dockerfile b/examples/workflows/mcp_toolbox/db/Dockerfile deleted file mode 100644 index e31cfeac21..0000000000 --- a/examples/workflows/mcp_toolbox/db/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM postgres:16-alpine - -# Default database credentials for local demo usage -ENV POSTGRES_USER=postgres \ - POSTGRES_PASSWORD=postgres \ - POSTGRES_DB=postgres - -# Seed schema and demo data -COPY init.sql /docker-entrypoint-initdb.d/init.sql - -EXPOSE 5432 diff --git a/examples/workflows/mcp_toolbox/db/init.sql b/examples/workflows/mcp_toolbox/db/init.sql deleted file mode 100644 index 93973d5af2..0000000000 --- a/examples/workflows/mcp_toolbox/db/init.sql +++ /dev/null @@ -1,22 +0,0 @@ -CREATE TABLE hotels( - id INTEGER NOT NULL PRIMARY KEY, - name VARCHAR NOT NULL, - location VARCHAR NOT NULL, - price_tier VARCHAR NOT NULL, - checkin_date DATE NOT NULL, - checkout_date DATE NOT NULL, - booked BIT NOT NULL -); - -INSERT INTO hotels(id, name, location, price_tier, checkin_date, checkout_date, booked) -VALUES - (1, 'Hilton Basel',
'Basel', 'Luxury', '2024-04-22', '2024-04-20', B'0'), - (2, 'Marriott Zurich', 'Zurich', 'Upscale', '2024-04-14', '2024-04-21', B'0'), - (3, 'Hyatt Regency Basel', 'Basel', 'Upper Upscale', '2024-04-02', '2024-04-20', B'0'), - (4, 'Radisson Blu Lucerne', 'Lucerne', 'Midscale', '2024-04-24', '2024-04-05', B'0'), - (5, 'Best Western Bern', 'Bern', 'Upper Midscale', '2024-04-23', '2024-04-01', B'0'), - (6, 'InterContinental Geneva', 'Geneva', 'Luxury', '2024-04-23', '2024-04-28', B'0'), - (7, 'Sheraton Zurich', 'Zurich', 'Upper Upscale', '2024-04-27', '2024-04-02', B'0'), - (8, 'Holiday Inn Basel', 'Basel', 'Upper Midscale', '2024-04-24', '2024-04-09', B'0'), - (9, 'Courtyard Zurich', 'Zurich', 'Upscale', '2024-04-03', '2024-04-13', B'0'), - (10, 'Comfort Inn Bern', 'Bern', 'Midscale', '2024-04-04', '2024-04-16', B'0'); diff --git a/examples/workflows/mcp_toolbox/inputs.py b/examples/workflows/mcp_toolbox/inputs.py deleted file mode 100644 index 93b648bd0e..0000000000 --- a/examples/workflows/mcp_toolbox/inputs.py +++ /dev/null @@ -1,5 +0,0 @@ -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - query: str diff --git a/examples/workflows/mcp_toolbox/nodes/__init__.py b/examples/workflows/mcp_toolbox/nodes/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/workflows/mcp_toolbox/nodes/agent_node.py b/examples/workflows/mcp_toolbox/nodes/agent_node.py deleted file mode 100644 index 08d8d888c4..0000000000 --- a/examples/workflows/mcp_toolbox/nodes/agent_node.py +++ /dev/null @@ -1,51 +0,0 @@ -from vellum.client.types.chat_message_prompt_block import ChatMessagePromptBlock -from vellum.client.types.plain_text_prompt_block import PlainTextPromptBlock -from vellum.client.types.rich_text_prompt_block import RichTextPromptBlock -from vellum.client.types.variable_prompt_block import VariablePromptBlock -from vellum.workflows.nodes.displayable.tool_calling_node import ToolCallingNode -from 
vellum.workflows.types.definition import MCPServer - -from ..inputs import Inputs - - -class Agent(ToolCallingNode): - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text=""" -You're a helpful hotel assistant. You handle hotel searching, booking and -cancellations. When the user searches for a hotel, mention its name, id, -location and price tier. Always mention hotel ids while performing any -searches. This is very important for any operations. For any bookings or -cancellations, please provide the appropriate confirmation. Be sure to -update checkin or checkout dates if mentioned by the user. -Don't ask for confirmations from the user. -""", - ), - ], - ), - ], - ), - ChatMessagePromptBlock( - chat_role="USER", - blocks=[ - VariablePromptBlock( - input_variable="query", - ), - ], - ), - ] - functions = [ - MCPServer( - name="toolbox", - url="http://127.0.0.1:5000/mcp", - ), - ] - prompt_inputs = { - "query": Inputs.query, - } diff --git a/examples/workflows/mcp_toolbox/tools.yaml b/examples/workflows/mcp_toolbox/tools.yaml deleted file mode 100644 index 1cb57e2bd5..0000000000 --- a/examples/workflows/mcp_toolbox/tools.yaml +++ /dev/null @@ -1,72 +0,0 @@ -sources: - my-pg-source: - kind: postgres - host: 127.0.0.1 - port: 5432 - database: postgres - user: postgres - password: postgres -tools: - search-hotels-by-name: - kind: postgres-sql - source: my-pg-source - description: Search for hotels based on name. - parameters: - - name: name - type: string - description: The name of the hotel. - statement: SELECT * FROM hotels WHERE name ILIKE '%' || $1 || '%'; - search-hotels-by-location: - kind: postgres-sql - source: my-pg-source - description: Search for hotels based on location. - parameters: - - name: location - type: string - description: The location of the hotel.
- statement: SELECT * FROM hotels WHERE location ILIKE '%' || $1 || '%'; - book-hotel: - kind: postgres-sql - source: my-pg-source - description: >- - Book a hotel by its ID. If the hotel is successfully booked, returns a NULL, raises an error if not. - parameters: - - name: hotel_id - type: string - description: The ID of the hotel to book. - statement: UPDATE hotels SET booked = B'1' WHERE id = $1; - update-hotel: - kind: postgres-sql - source: my-pg-source - description: >- - Update a hotel's check-in and check-out dates by its ID. Returns a message - indicating whether the hotel was successfully updated or not. - parameters: - - name: hotel_id - type: string - description: The ID of the hotel to update. - - name: checkin_date - type: string - description: The new check-in date of the hotel. - - name: checkout_date - type: string - description: The new check-out date of the hotel. - statement: >- - UPDATE hotels SET checkin_date = CAST($2 as date), checkout_date = CAST($3 - as date) WHERE id = $1; - cancel-hotel: - kind: postgres-sql - source: my-pg-source - description: Cancel a hotel by its ID. - parameters: - - name: hotel_id - type: string - description: The ID of the hotel to cancel. 
- statement: UPDATE hotels SET booked = B'0' WHERE id = $1; -toolsets: - my-toolset: - - search-hotels-by-name - - search-hotels-by-location - - book-hotel - - update-hotel - - cancel-hotel diff --git a/examples/workflows/mcp_toolbox/workflow.py b/examples/workflows/mcp_toolbox/workflow.py deleted file mode 100644 index 7925a41650..0000000000 --- a/examples/workflows/mcp_toolbox/workflow.py +++ /dev/null @@ -1,13 +0,0 @@ -from vellum.workflows import BaseWorkflow -from vellum.workflows.state.base import BaseState - -from .inputs import Inputs -from .nodes.agent_node import Agent - - -class MCPToolboxWorkflow(BaseWorkflow[Inputs, BaseState]): - graph = Agent - - class Outputs: - text = Agent.Outputs.text - chat_history = Agent.Outputs.chat_history diff --git a/examples/workflows/poetry.lock b/examples/workflows/poetry.lock deleted file mode 100644 index c8e2e28f67..0000000000 --- a/examples/workflows/poetry.lock +++ /dev/null @@ -1,1700 +0,0 @@ -# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand. 
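For context on how the `tools.yaml` tools above behave, each one binds its declared parameters positionally into a SQL statement (`$1`, `$2`, ... in Postgres). The sketch below illustrates that binding pattern only; it uses sqlite3 (with `?` placeholders and a made-up subset of the demo's `hotels` schema) as a stand-in, not the actual MCP Toolbox runtime.

```python
# Illustrative stand-in for the tools.yaml parameter binding.
# Assumptions: sqlite3 in place of Postgres, a reduced hotels schema,
# and the same demo rows as init.sql (ids 1, 2, 8 shown here).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE hotels (id INTEGER PRIMARY KEY, name TEXT, location TEXT, booked INTEGER)"
)
conn.executemany(
    "INSERT INTO hotels VALUES (?, ?, ?, ?)",
    [
        (1, "Hilton Basel", "Basel", 0),
        (2, "Marriott Zurich", "Zurich", 0),
        (8, "Holiday Inn Basel", "Basel", 0),
    ],
)


def search_hotels_by_name(name: str):
    # Mirrors: SELECT * FROM hotels WHERE name ILIKE '%' || $1 || '%';
    # (SQLite's LIKE is already case-insensitive for ASCII.)
    return conn.execute(
        "SELECT id, name FROM hotels WHERE name LIKE '%' || ? || '%'", (name,)
    ).fetchall()


def book_hotel(hotel_id: int):
    # Mirrors: UPDATE hotels SET booked = B'1' WHERE id = $1;
    conn.execute("UPDATE hotels SET booked = 1 WHERE id = ?", (hotel_id,))


print(search_hotels_by_name("basel"))  # → [(1, 'Hilton Basel'), (8, 'Holiday Inn Basel')]
```

The key point is that the model never interpolates strings into SQL itself: the toolbox server substitutes each declared parameter into a fixed, parameterized statement.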
- -[[package]] -name = "annotated-types" -version = "0.7.0" -description = "Reusable constraint types to use with typing.Annotated" -optional = false -python-versions = ">=3.8" -files = [ - {file = "annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53"}, - {file = "annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89"}, -] - -[[package]] -name = "anyio" -version = "4.11.0" -description = "High-level concurrency and networking framework on top of asyncio or Trio" -optional = false -python-versions = ">=3.9" -files = [ - {file = "anyio-4.11.0-py3-none-any.whl", hash = "sha256:0287e96f4d26d4149305414d4e3bc32f0dcd0862365a4bddea19d7a1ec38c4fc"}, - {file = "anyio-4.11.0.tar.gz", hash = "sha256:82a8d0b81e318cc5ce71a5f1f8b5c4e63619620b63141ef8c995fa0db95a57c4"}, -] - -[package.dependencies] -exceptiongroup = {version = ">=1.0.2", markers = "python_version < \"3.11\""} -idna = ">=2.8" -sniffio = ">=1.1" -typing_extensions = {version = ">=4.5", markers = "python_version < \"3.13\""} - -[package.extras] -trio = ["trio (>=0.31.0)"] - -[[package]] -name = "asttokens" -version = "3.0.0" -description = "Annotate AST trees with source code positions" -optional = false -python-versions = ">=3.8" -files = [ - {file = "asttokens-3.0.0-py3-none-any.whl", hash = "sha256:e3078351a059199dd5138cb1c706e6430c05eff2ff136af5eb4790f9d28932e2"}, - {file = "asttokens-3.0.0.tar.gz", hash = "sha256:0dcd8baa8d62b0c1d118b399b2ddba3c4aff271d0d7a9e0d4c1681c79035bbc7"}, -] - -[package.extras] -astroid = ["astroid (>=2,<4)"] -test = ["astroid (>=2,<4)", "pytest", "pytest-cov", "pytest-xdist"] - -[[package]] -name = "attrs" -version = "25.3.0" -description = "Classes Without Boilerplate" -optional = false -python-versions = ">=3.8" -files = [ - {file = "attrs-25.3.0-py3-none-any.whl", hash = "sha256:427318ce031701fea540783410126f03899a97ffc6f61596ad581ac2e40e3bc3"}, - {file = 
"attrs-25.3.0.tar.gz", hash = "sha256:75d7cefc7fb576747b2c81b4442d4d4a1ce0900973527c011d1030fd3bf4af1b"}, -] - -[package.extras] -benchmark = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-codspeed", "pytest-mypy-plugins", "pytest-xdist[psutil]"] -cov = ["cloudpickle", "coverage[toml] (>=5.3)", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] -dev = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pre-commit-uv", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] -docs = ["cogapp", "furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier"] -tests = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] -tests-mypy = ["mypy (>=1.11.1)", "pytest-mypy-plugins"] - -[[package]] -name = "boto3" -version = "1.40.43" -description = "The AWS SDK for Python" -optional = false -python-versions = ">=3.9" -files = [ - {file = "boto3-1.40.43-py3-none-any.whl", hash = "sha256:c5d64ba2fb2d90c33c3969f3751869c45746d5efb5136e4cc619e3630ece89a3"}, - {file = "boto3-1.40.43.tar.gz", hash = "sha256:9ad9190672ce8736898bec2d94875aea6ae1ead2ac6d158e01d820f3ff9c23e0"}, -] - -[package.dependencies] -botocore = ">=1.40.43,<1.41.0" -jmespath = ">=0.7.1,<2.0.0" -s3transfer = ">=0.14.0,<0.15.0" - -[package.extras] -crt = ["botocore[crt] (>=1.21.0,<2.0a0)"] - -[[package]] -name = "botocore" -version = "1.40.43" -description = "Low-level, data-driven core of boto 3." 
-optional = false -python-versions = ">=3.9" -files = [ - {file = "botocore-1.40.43-py3-none-any.whl", hash = "sha256:1639f38999fc0cf42c92c5c83c5fbe189a4857a86f55b842be868e3283c6d3bb"}, - {file = "botocore-1.40.43.tar.gz", hash = "sha256:d87412dc1ea785df156f412627d3417c9f9eb45601fd0846d8fe96fe3c78b630"}, -] - -[package.dependencies] -jmespath = ">=0.7.1,<2.0.0" -python-dateutil = ">=2.1,<3.0.0" -urllib3 = {version = ">=1.25.4,<2.2.0 || >2.2.0,<3", markers = "python_version >= \"3.10\""} - -[package.extras] -crt = ["awscrt (==0.27.6)"] - -[[package]] -name = "certifi" -version = "2025.8.3" -description = "Python package for providing Mozilla's CA Bundle." -optional = false -python-versions = ">=3.7" -files = [ - {file = "certifi-2025.8.3-py3-none-any.whl", hash = "sha256:f6c12493cfb1b06ba2ff328595af9350c65d6644968e5d3a2ffd78699af217a5"}, - {file = "certifi-2025.8.3.tar.gz", hash = "sha256:e564105f78ded564e3ae7c923924435e1daa7463faeab5bb932bc53ffae63407"}, -] - -[[package]] -name = "charset-normalizer" -version = "3.4.3" -description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." 
-optional = false -python-versions = ">=3.7" -files = [ - {file = "charset_normalizer-3.4.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:fb7f67a1bfa6e40b438170ebdc8158b78dc465a5a67b6dde178a46987b244a72"}, - {file = "charset_normalizer-3.4.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:cc9370a2da1ac13f0153780040f465839e6cccb4a1e44810124b4e22483c93fe"}, - {file = "charset_normalizer-3.4.3-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:07a0eae9e2787b586e129fdcbe1af6997f8d0e5abaa0bc98c0e20e124d67e601"}, - {file = "charset_normalizer-3.4.3-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:74d77e25adda8581ffc1c720f1c81ca082921329452eba58b16233ab1842141c"}, - {file = "charset_normalizer-3.4.3-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d0e909868420b7049dafd3a31d45125b31143eec59235311fc4c57ea26a4acd2"}, - {file = "charset_normalizer-3.4.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:c6f162aabe9a91a309510d74eeb6507fab5fff92337a15acbe77753d88d9dcf0"}, - {file = "charset_normalizer-3.4.3-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:4ca4c094de7771a98d7fbd67d9e5dbf1eb73efa4f744a730437d8a3a5cf994f0"}, - {file = "charset_normalizer-3.4.3-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:02425242e96bcf29a49711b0ca9f37e451da7c70562bc10e8ed992a5a7a25cc0"}, - {file = "charset_normalizer-3.4.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:78deba4d8f9590fe4dae384aeff04082510a709957e968753ff3c48399f6f92a"}, - {file = "charset_normalizer-3.4.3-cp310-cp310-win32.whl", hash = "sha256:d79c198e27580c8e958906f803e63cddb77653731be08851c7df0b1a14a8fc0f"}, - {file = "charset_normalizer-3.4.3-cp310-cp310-win_amd64.whl", hash = "sha256:c6e490913a46fa054e03699c70019ab869e990270597018cef1d8562132c2669"}, - {file = 
"charset_normalizer-3.4.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:b256ee2e749283ef3ddcff51a675ff43798d92d746d1a6e4631bf8c707d22d0b"}, - {file = "charset_normalizer-3.4.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:13faeacfe61784e2559e690fc53fa4c5ae97c6fcedb8eb6fb8d0a15b475d2c64"}, - {file = "charset_normalizer-3.4.3-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:00237675befef519d9af72169d8604a067d92755e84fe76492fef5441db05b91"}, - {file = "charset_normalizer-3.4.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:585f3b2a80fbd26b048a0be90c5aae8f06605d3c92615911c3a2b03a8a3b796f"}, - {file = "charset_normalizer-3.4.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0e78314bdc32fa80696f72fa16dc61168fda4d6a0c014e0380f9d02f0e5d8a07"}, - {file = "charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:96b2b3d1a83ad55310de8c7b4a2d04d9277d5591f40761274856635acc5fcb30"}, - {file = "charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:939578d9d8fd4299220161fdd76e86c6a251987476f5243e8864a7844476ba14"}, - {file = "charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:fd10de089bcdcd1be95a2f73dbe6254798ec1bda9f450d5828c96f93e2536b9c"}, - {file = "charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:1e8ac75d72fa3775e0b7cb7e4629cec13b7514d928d15ef8ea06bca03ef01cae"}, - {file = "charset_normalizer-3.4.3-cp311-cp311-win32.whl", hash = "sha256:6cf8fd4c04756b6b60146d98cd8a77d0cdae0e1ca20329da2ac85eed779b6849"}, - {file = "charset_normalizer-3.4.3-cp311-cp311-win_amd64.whl", hash = "sha256:31a9a6f775f9bcd865d88ee350f0ffb0e25936a7f930ca98995c05abf1faf21c"}, - {file = "charset_normalizer-3.4.3-cp312-cp312-macosx_10_13_universal2.whl", hash = 
"sha256:e28e334d3ff134e88989d90ba04b47d84382a828c061d0d1027b1b12a62b39b1"}, - {file = "charset_normalizer-3.4.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0cacf8f7297b0c4fcb74227692ca46b4a5852f8f4f24b3c766dd94a1075c4884"}, - {file = "charset_normalizer-3.4.3-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:c6fd51128a41297f5409deab284fecbe5305ebd7e5a1f959bee1c054622b7018"}, - {file = "charset_normalizer-3.4.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:3cfb2aad70f2c6debfbcb717f23b7eb55febc0bb23dcffc0f076009da10c6392"}, - {file = "charset_normalizer-3.4.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:1606f4a55c0fd363d754049cdf400175ee96c992b1f8018b993941f221221c5f"}, - {file = "charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:027b776c26d38b7f15b26a5da1044f376455fb3766df8fc38563b4efbc515154"}, - {file = "charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:42e5088973e56e31e4fa58eb6bd709e42fc03799c11c42929592889a2e54c491"}, - {file = "charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:cc34f233c9e71701040d772aa7490318673aa7164a0efe3172b2981218c26d93"}, - {file = "charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:320e8e66157cc4e247d9ddca8e21f427efc7a04bbd0ac8a9faf56583fa543f9f"}, - {file = "charset_normalizer-3.4.3-cp312-cp312-win32.whl", hash = "sha256:fb6fecfd65564f208cbf0fba07f107fb661bcd1a7c389edbced3f7a493f70e37"}, - {file = "charset_normalizer-3.4.3-cp312-cp312-win_amd64.whl", hash = "sha256:86df271bf921c2ee3818f0522e9a5b8092ca2ad8b065ece5d7d9d0e9f4849bcc"}, - {file = "charset_normalizer-3.4.3-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:14c2a87c65b351109f6abfc424cab3927b3bdece6f706e4d12faaf3d52ee5efe"}, - {file = 
"charset_normalizer-3.4.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:41d1fc408ff5fdfb910200ec0e74abc40387bccb3252f3f27c0676731df2b2c8"}, - {file = "charset_normalizer-3.4.3-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:1bb60174149316da1c35fa5233681f7c0f9f514509b8e399ab70fea5f17e45c9"}, - {file = "charset_normalizer-3.4.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:30d006f98569de3459c2fc1f2acde170b7b2bd265dc1943e87e1a4efe1b67c31"}, - {file = "charset_normalizer-3.4.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:416175faf02e4b0810f1f38bcb54682878a4af94059a1cd63b8747244420801f"}, - {file = "charset_normalizer-3.4.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:6aab0f181c486f973bc7262a97f5aca3ee7e1437011ef0c2ec04b5a11d16c927"}, - {file = "charset_normalizer-3.4.3-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:fdabf8315679312cfa71302f9bd509ded4f2f263fb5b765cf1433b39106c3cc9"}, - {file = "charset_normalizer-3.4.3-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:bd28b817ea8c70215401f657edef3a8aa83c29d447fb0b622c35403780ba11d5"}, - {file = "charset_normalizer-3.4.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:18343b2d246dc6761a249ba1fb13f9ee9a2bcd95decc767319506056ea4ad4dc"}, - {file = "charset_normalizer-3.4.3-cp313-cp313-win32.whl", hash = "sha256:6fb70de56f1859a3f71261cbe41005f56a7842cc348d3aeb26237560bfa5e0ce"}, - {file = "charset_normalizer-3.4.3-cp313-cp313-win_amd64.whl", hash = "sha256:cf1ebb7d78e1ad8ec2a8c4732c7be2e736f6e5123a4146c5b89c9d1f585f8cef"}, - {file = "charset_normalizer-3.4.3-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:3cd35b7e8aedeb9e34c41385fda4f73ba609e561faedfae0a9e75e44ac558a15"}, - {file = 
"charset_normalizer-3.4.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b89bc04de1d83006373429975f8ef9e7932534b8cc9ca582e4db7d20d91816db"}, - {file = "charset_normalizer-3.4.3-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:2001a39612b241dae17b4687898843f254f8748b796a2e16f1051a17078d991d"}, - {file = "charset_normalizer-3.4.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:8dcfc373f888e4fb39a7bc57e93e3b845e7f462dacc008d9749568b1c4ece096"}, - {file = "charset_normalizer-3.4.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:18b97b8404387b96cdbd30ad660f6407799126d26a39ca65729162fd810a99aa"}, - {file = "charset_normalizer-3.4.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:ccf600859c183d70eb47e05a44cd80a4ce77394d1ac0f79dbd2dd90a69a3a049"}, - {file = "charset_normalizer-3.4.3-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:53cd68b185d98dde4ad8990e56a58dea83a4162161b1ea9272e5c9182ce415e0"}, - {file = "charset_normalizer-3.4.3-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:30a96e1e1f865f78b030d65241c1ee850cdf422d869e9028e2fc1d5e4db73b92"}, - {file = "charset_normalizer-3.4.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d716a916938e03231e86e43782ca7878fb602a125a91e7acb8b5112e2e96ac16"}, - {file = "charset_normalizer-3.4.3-cp314-cp314-win32.whl", hash = "sha256:c6dbd0ccdda3a2ba7c2ecd9d77b37f3b5831687d8dc1b6ca5f56a4880cc7b7ce"}, - {file = "charset_normalizer-3.4.3-cp314-cp314-win_amd64.whl", hash = "sha256:73dc19b562516fc9bcf6e5d6e596df0b4eb98d87e4f79f3ae71840e6ed21361c"}, - {file = "charset_normalizer-3.4.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:0f2be7e0cf7754b9a30eb01f4295cc3d4358a479843b31f328afd210e2c7598c"}, - {file = 
"charset_normalizer-3.4.3-cp38-cp38-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c60e092517a73c632ec38e290eba714e9627abe9d301c8c8a12ec32c314a2a4b"}, - {file = "charset_normalizer-3.4.3-cp38-cp38-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:252098c8c7a873e17dd696ed98bbe91dbacd571da4b87df3736768efa7a792e4"}, - {file = "charset_normalizer-3.4.3-cp38-cp38-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:3653fad4fe3ed447a596ae8638b437f827234f01a8cd801842e43f3d0a6b281b"}, - {file = "charset_normalizer-3.4.3-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8999f965f922ae054125286faf9f11bc6932184b93011d138925a1773830bbe9"}, - {file = "charset_normalizer-3.4.3-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:d95bfb53c211b57198bb91c46dd5a2d8018b3af446583aab40074bf7988401cb"}, - {file = "charset_normalizer-3.4.3-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:5b413b0b1bfd94dbf4023ad6945889f374cd24e3f62de58d6bb102c4d9ae534a"}, - {file = "charset_normalizer-3.4.3-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:b5e3b2d152e74e100a9e9573837aba24aab611d39428ded46f4e4022ea7d1942"}, - {file = "charset_normalizer-3.4.3-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:a2d08ac246bb48479170408d6c19f6385fa743e7157d716e144cad849b2dd94b"}, - {file = "charset_normalizer-3.4.3-cp38-cp38-win32.whl", hash = "sha256:ec557499516fc90fd374bf2e32349a2887a876fbf162c160e3c01b6849eaf557"}, - {file = "charset_normalizer-3.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:5d8d01eac18c423815ed4f4a2ec3b439d654e55ee4ad610e153cf02faf67ea40"}, - {file = "charset_normalizer-3.4.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:70bfc5f2c318afece2f5838ea5e4c3febada0be750fcf4775641052bbba14d05"}, - {file = "charset_normalizer-3.4.3-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = 
"sha256:23b6b24d74478dc833444cbd927c338349d6ae852ba53a0d02a2de1fce45b96e"}, - {file = "charset_normalizer-3.4.3-cp39-cp39-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:34a7f768e3f985abdb42841e20e17b330ad3aaf4bb7e7aeeb73db2e70f077b99"}, - {file = "charset_normalizer-3.4.3-cp39-cp39-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:fb731e5deb0c7ef82d698b0f4c5bb724633ee2a489401594c5c88b02e6cb15f7"}, - {file = "charset_normalizer-3.4.3-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:257f26fed7d7ff59921b78244f3cd93ed2af1800ff048c33f624c87475819dd7"}, - {file = "charset_normalizer-3.4.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:1ef99f0456d3d46a50945c98de1774da86f8e992ab5c77865ea8b8195341fc19"}, - {file = "charset_normalizer-3.4.3-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:2c322db9c8c89009a990ef07c3bcc9f011a3269bc06782f916cd3d9eed7c9312"}, - {file = "charset_normalizer-3.4.3-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:511729f456829ef86ac41ca78c63a5cb55240ed23b4b737faca0eb1abb1c41bc"}, - {file = "charset_normalizer-3.4.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:88ab34806dea0671532d3f82d82b85e8fc23d7b2dd12fa837978dad9bb392a34"}, - {file = "charset_normalizer-3.4.3-cp39-cp39-win32.whl", hash = "sha256:16a8770207946ac75703458e2c743631c79c59c5890c80011d536248f8eaa432"}, - {file = "charset_normalizer-3.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:d22dbedd33326a4a5190dd4fe9e9e693ef12160c77382d9e87919bce54f3d4ca"}, - {file = "charset_normalizer-3.4.3-py3-none-any.whl", hash = "sha256:ce571ab16d890d23b5c278547ba694193a45011ff86a9162a71307ed9f86759a"}, - {file = "charset_normalizer-3.4.3.tar.gz", hash = "sha256:6fce4b8500244f6fcb71465d4a4930d132ba9ab8e71a7859e6a5d59851068d14"}, -] - -[[package]] -name = "click" -version = "8.3.0" -description = "Composable command line interface toolkit" -optional = false -python-versions 
= ">=3.10" -files = [ - {file = "click-8.3.0-py3-none-any.whl", hash = "sha256:9b9f285302c6e3064f4330c05f05b81945b2a39544279343e6e7c5f27a9baddc"}, - {file = "click-8.3.0.tar.gz", hash = "sha256:e7b8232224eba16f4ebe410c25ced9f7875cb5f3263ffc93cc3e8da705e229c4"}, -] - -[package.dependencies] -colorama = {version = "*", markers = "platform_system == \"Windows\""} - -[[package]] -name = "colorama" -version = "0.4.6" -description = "Cross-platform colored terminal text." -optional = false -python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7" -files = [ - {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"}, - {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"}, -] - -[[package]] -name = "decorator" -version = "5.2.1" -description = "Decorators for Humans" -optional = false -python-versions = ">=3.8" -files = [ - {file = "decorator-5.2.1-py3-none-any.whl", hash = "sha256:d316bb415a2d9e2d2b3abcc4084c6502fc09240e292cd76a76afc106a1c8e04a"}, - {file = "decorator-5.2.1.tar.gz", hash = "sha256:65f266143752f734b0a7cc83c46f4618af75b8c5911b00ccb61d0ac9b6da0360"}, -] - -[[package]] -name = "distro" -version = "1.9.0" -description = "Distro - an OS platform information API" -optional = false -python-versions = ">=3.6" -files = [ - {file = "distro-1.9.0-py3-none-any.whl", hash = "sha256:7bffd925d65168f85027d8da9af6bddab658135b840670a223589bc0c8ef02b2"}, - {file = "distro-1.9.0.tar.gz", hash = "sha256:2fa77c6fd8940f116ee1d6b94a2f90b13b5ea8d019b98bc8bafdcabcdd9bdbed"}, -] - -[[package]] -name = "docker" -version = "7.1.0" -description = "A Python library for the Docker Engine API." 
-optional = false -python-versions = ">=3.8" -files = [ - {file = "docker-7.1.0-py3-none-any.whl", hash = "sha256:c96b93b7f0a746f9e77d325bcfb87422a3d8bd4f03136ae8a85b37f1898d5fc0"}, - {file = "docker-7.1.0.tar.gz", hash = "sha256:ad8c70e6e3f8926cb8a92619b832b4ea5299e2831c14284663184e200546fa6c"}, -] - -[package.dependencies] -pywin32 = {version = ">=304", markers = "sys_platform == \"win32\""} -requests = ">=2.26.0" -urllib3 = ">=1.26.0" - -[package.extras] -dev = ["coverage (==7.2.7)", "pytest (==7.4.2)", "pytest-cov (==4.1.0)", "pytest-timeout (==2.1.0)", "ruff (==0.1.8)"] -docs = ["myst-parser (==0.18.0)", "sphinx (==5.1.1)"] -ssh = ["paramiko (>=2.4.3)"] -websockets = ["websocket-client (>=1.3.0)"] - -[[package]] -name = "exceptiongroup" -version = "1.3.0" -description = "Backport of PEP 654 (exception groups)" -optional = false -python-versions = ">=3.7" -files = [ - {file = "exceptiongroup-1.3.0-py3-none-any.whl", hash = "sha256:4d111e6e0c13d0644cad6ddaa7ed0261a0b36971f6d23e7ec9b4b9097da78a10"}, - {file = "exceptiongroup-1.3.0.tar.gz", hash = "sha256:b241f5885f560bc56a59ee63ca4c6a8bfa46ae4ad651af316d4e81817bb9fd88"}, -] - -[package.dependencies] -typing-extensions = {version = ">=4.6.0", markers = "python_version < \"3.13\""} - -[package.extras] -test = ["pytest (>=6)"] - -[[package]] -name = "executing" -version = "2.2.1" -description = "Get the currently executing AST node of a frame, and other information" -optional = false -python-versions = ">=3.8" -files = [ - {file = "executing-2.2.1-py2.py3-none-any.whl", hash = "sha256:760643d3452b4d777d295bb167ccc74c64a81df23fb5e08eff250c425a4b2017"}, - {file = "executing-2.2.1.tar.gz", hash = "sha256:3632cc370565f6648cc328b32435bd120a1e4ebb20c77e3fdde9a13cd1e533c4"}, -] - -[package.extras] -tests = ["asttokens (>=2.1.0)", "coverage", "coverage-enable-subprocess", "ipython", "littleutils", "pytest", "rich"] - -[[package]] -name = "h11" -version = "0.16.0" -description = "A pure-Python, bring-your-own-I/O 
implementation of HTTP/1.1" -optional = false -python-versions = ">=3.8" -files = [ - {file = "h11-0.16.0-py3-none-any.whl", hash = "sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86"}, - {file = "h11-0.16.0.tar.gz", hash = "sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1"}, -] - -[[package]] -name = "httpcore" -version = "1.0.9" -description = "A minimal low-level HTTP client." -optional = false -python-versions = ">=3.8" -files = [ - {file = "httpcore-1.0.9-py3-none-any.whl", hash = "sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55"}, - {file = "httpcore-1.0.9.tar.gz", hash = "sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8"}, -] - -[package.dependencies] -certifi = "*" -h11 = ">=0.16" - -[package.extras] -asyncio = ["anyio (>=4.0,<5.0)"] -http2 = ["h2 (>=3,<5)"] -socks = ["socksio (==1.*)"] -trio = ["trio (>=0.22.0,<1.0)"] - -[[package]] -name = "httpx" -version = "0.28.1" -description = "The next generation HTTP client." -optional = false -python-versions = ">=3.8" -files = [ - {file = "httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad"}, - {file = "httpx-0.28.1.tar.gz", hash = "sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc"}, -] - -[package.dependencies] -anyio = "*" -certifi = "*" -httpcore = "==1.*" -idna = "*" - -[package.extras] -brotli = ["brotli", "brotlicffi"] -cli = ["click (==8.*)", "pygments (==2.*)", "rich (>=10,<14)"] -http2 = ["h2 (>=3,<5)"] -socks = ["socksio (==1.*)"] -zstd = ["zstandard (>=0.18.0)"] - -[[package]] -name = "httpx-sse" -version = "0.4.1" -description = "Consume Server-Sent Event (SSE) messages with HTTPX." 
-optional = false -python-versions = ">=3.9" -files = [ - {file = "httpx_sse-0.4.1-py3-none-any.whl", hash = "sha256:cba42174344c3a5b06f255ce65b350880f962d99ead85e776f23c6618a377a37"}, - {file = "httpx_sse-0.4.1.tar.gz", hash = "sha256:8f44d34414bc7b21bf3602713005c5df4917884f76072479b21f68befa4ea26e"}, -] - -[[package]] -name = "idna" -version = "3.10" -description = "Internationalized Domain Names in Applications (IDNA)" -optional = false -python-versions = ">=3.6" -files = [ - {file = "idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3"}, - {file = "idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9"}, -] - -[package.extras] -all = ["flake8 (>=7.1.1)", "mypy (>=1.11.2)", "pytest (>=8.3.2)", "ruff (>=0.6.2)"] - -[[package]] -name = "ipdb" -version = "0.13.13" -description = "IPython-enabled pdb" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -files = [ - {file = "ipdb-0.13.13-py3-none-any.whl", hash = "sha256:45529994741c4ab6d2388bfa5d7b725c2cf7fe9deffabdb8a6113aa5ed449ed4"}, - {file = "ipdb-0.13.13.tar.gz", hash = "sha256:e3ac6018ef05126d442af680aad863006ec19d02290561ac88b8b1c0b0cfc726"}, -] - -[package.dependencies] -decorator = {version = "*", markers = "python_version > \"3.6\""} -ipython = {version = ">=7.31.1", markers = "python_version > \"3.6\""} -tomli = {version = "*", markers = "python_version > \"3.6\" and python_version < \"3.11\""} - -[[package]] -name = "ipython" -version = "8.37.0" -description = "IPython: Productive Interactive Computing" -optional = false -python-versions = ">=3.10" -files = [ - {file = "ipython-8.37.0-py3-none-any.whl", hash = "sha256:ed87326596b878932dbcb171e3e698845434d8c61b8d8cd474bf663041a9dcf2"}, - {file = "ipython-8.37.0.tar.gz", hash = "sha256:ca815841e1a41a1e6b73a0b08f3038af9b2252564d01fc405356d34033012216"}, -] - -[package.dependencies] -colorama = {version = "*", markers = 
"sys_platform == \"win32\""} -decorator = "*" -exceptiongroup = {version = "*", markers = "python_version < \"3.11\""} -jedi = ">=0.16" -matplotlib-inline = "*" -pexpect = {version = ">4.3", markers = "sys_platform != \"win32\" and sys_platform != \"emscripten\""} -prompt_toolkit = ">=3.0.41,<3.1.0" -pygments = ">=2.4.0" -stack_data = "*" -traitlets = ">=5.13.0" -typing_extensions = {version = ">=4.6", markers = "python_version < \"3.12\""} - -[package.extras] -all = ["ipython[black,doc,kernel,matplotlib,nbconvert,nbformat,notebook,parallel,qtconsole]", "ipython[test,test-extra]"] -black = ["black"] -doc = ["docrepr", "exceptiongroup", "intersphinx_registry", "ipykernel", "ipython[test]", "matplotlib", "setuptools (>=18.5)", "sphinx (>=1.3)", "sphinx-rtd-theme", "sphinxcontrib-jquery", "tomli", "typing_extensions"] -kernel = ["ipykernel"] -matplotlib = ["matplotlib"] -nbconvert = ["nbconvert"] -nbformat = ["nbformat"] -notebook = ["ipywidgets", "notebook"] -parallel = ["ipyparallel"] -qtconsole = ["qtconsole"] -test = ["packaging", "pickleshare", "pytest", "pytest-asyncio (<0.22)", "testpath"] -test-extra = ["curio", "ipython[test]", "jupyter_ai", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.23)", "pandas", "trio"] - -[[package]] -name = "jedi" -version = "0.19.2" -description = "An autocompletion tool for Python that can be used for text editors." 
-optional = false -python-versions = ">=3.6" -files = [ - {file = "jedi-0.19.2-py2.py3-none-any.whl", hash = "sha256:a8ef22bde8490f57fe5c7681a3c83cb58874daf72b4784de3cce5b6ef6edb5b9"}, - {file = "jedi-0.19.2.tar.gz", hash = "sha256:4770dc3de41bde3966b02eb84fbcf557fb33cce26ad23da12c742fb50ecb11f0"}, -] - -[package.dependencies] -parso = ">=0.8.4,<0.9.0" - -[package.extras] -docs = ["Jinja2 (==2.11.3)", "MarkupSafe (==1.1.1)", "Pygments (==2.8.1)", "alabaster (==0.7.12)", "babel (==2.9.1)", "chardet (==4.0.0)", "commonmark (==0.8.1)", "docutils (==0.17.1)", "future (==0.18.2)", "idna (==2.10)", "imagesize (==1.2.0)", "mock (==1.0.1)", "packaging (==20.9)", "pyparsing (==2.4.7)", "pytz (==2021.1)", "readthedocs-sphinx-ext (==2.1.4)", "recommonmark (==0.5.0)", "requests (==2.25.1)", "six (==1.15.0)", "snowballstemmer (==2.1.0)", "sphinx (==1.8.5)", "sphinx-rtd-theme (==0.4.3)", "sphinxcontrib-serializinghtml (==1.1.4)", "sphinxcontrib-websupport (==1.2.4)", "urllib3 (==1.26.4)"] -qa = ["flake8 (==5.0.4)", "mypy (==0.971)", "types-setuptools (==67.2.0.1)"] -testing = ["Django", "attrs", "colorama", "docopt", "pytest (<9.0.0)"] - -[[package]] -name = "jinja2" -version = "3.1.6" -description = "A very fast and expressive template engine." -optional = false -python-versions = ">=3.7" -files = [ - {file = "jinja2-3.1.6-py3-none-any.whl", hash = "sha256:85ece4451f492d0c13c5dd7c13a64681a86afae63a5f347908daf103ce6d2f67"}, - {file = "jinja2-3.1.6.tar.gz", hash = "sha256:0137fb05990d35f1275a587e9aee6d56da821fc83491a0fb838183be43f66d6d"}, -] - -[package.dependencies] -MarkupSafe = ">=2.0" - -[package.extras] -i18n = ["Babel (>=2.7)"] - -[[package]] -name = "jiter" -version = "0.11.0" -description = "Fast iterable JSON parser." 
-optional = false -python-versions = ">=3.9" -files = [ - {file = "jiter-0.11.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:3893ce831e1c0094a83eeaf56c635a167d6fa8cc14393cc14298fd6fdc2a2449"}, - {file = "jiter-0.11.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:25c625b9b61b5a8725267fdf867ef2e51b429687f6a4eef211f4612e95607179"}, - {file = "jiter-0.11.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dd4ca85fb6a62cf72e1c7f5e34ddef1b660ce4ed0886ec94a1ef9777d35eaa1f"}, - {file = "jiter-0.11.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:572208127034725e79c28437b82414028c3562335f2b4f451d98136d0fc5f9cd"}, - {file = "jiter-0.11.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:494ba627c7f550ad3dabb21862864b8f2216098dc18ff62f37b37796f2f7c325"}, - {file = "jiter-0.11.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b8da18a99f58bca3ecc2d2bba99cac000a924e115b6c4f0a2b98f752b6fbf39a"}, - {file = "jiter-0.11.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e4ffd3b0fff3fabbb02cc09910c08144db6bb5697a98d227a074401e01ee63dd"}, - {file = "jiter-0.11.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8fe6530aa738a4f7d4e4702aa8f9581425d04036a5f9e25af65ebe1f708f23be"}, - {file = "jiter-0.11.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e35d66681c133a03d7e974e7eedae89720fe8ca3bd09f01a4909b86a8adf31f5"}, - {file = "jiter-0.11.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c59459beca2fbc9718b6f1acb7bfb59ebc3eb4294fa4d40e9cb679dafdcc6c60"}, - {file = "jiter-0.11.0-cp310-cp310-win32.whl", hash = "sha256:b7b0178417b0dcfc5f259edbc6db2b1f5896093ed9035ee7bab0f2be8854726d"}, - {file = "jiter-0.11.0-cp310-cp310-win_amd64.whl", hash = "sha256:11df2bf99fb4754abddd7f5d940a48e51f9d11624d6313ca4314145fcad347f0"}, - {file = "jiter-0.11.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = 
"sha256:cb5d9db02979c3f49071fce51a48f4b4e4cf574175fb2b11c7a535fa4867b222"}, - {file = "jiter-0.11.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1dc6a123f3471c4730db7ca8ba75f1bb3dcb6faeb8d46dd781083e7dee88b32d"}, - {file = "jiter-0.11.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:09858f8d230f031c7b8e557429102bf050eea29c77ad9c34c8fe253c5329acb7"}, - {file = "jiter-0.11.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:dbe2196c4a0ce760925a74ab4456bf644748ab0979762139626ad138f6dac72d"}, - {file = "jiter-0.11.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5beb56d22b63647bafd0b74979216fdee80c580c0c63410be8c11053860ffd09"}, - {file = "jiter-0.11.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:97025d09ef549795d8dc720a824312cee3253c890ac73c621721ddfc75066789"}, - {file = "jiter-0.11.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d50880a6da65d8c23a2cf53c412847d9757e74cc9a3b95c5704a1d1a24667347"}, - {file = "jiter-0.11.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:452d80a1c86c095a242007bd9fc5d21b8a8442307193378f891cb8727e469648"}, - {file = "jiter-0.11.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e84e58198d4894668eec2da660ffff60e0f3e60afa790ecc50cb12b0e02ca1d4"}, - {file = "jiter-0.11.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:df64edcfc5dd5279a791eea52aa113d432c933119a025b0b5739f90d2e4e75f1"}, - {file = "jiter-0.11.0-cp311-cp311-win32.whl", hash = "sha256:144fc21337d21b1d048f7f44bf70881e1586401d405ed3a98c95a114a9994982"}, - {file = "jiter-0.11.0-cp311-cp311-win_amd64.whl", hash = "sha256:b0f32e644d241293b892b1a6dd8f0b9cc029bfd94c97376b2681c36548aabab7"}, - {file = "jiter-0.11.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:2fb7b377688cc3850bbe5c192a6bd493562a0bc50cbc8b047316428fbae00ada"}, - {file = "jiter-0.11.0-cp312-cp312-macosx_11_0_arm64.whl", hash = 
"sha256:a1b7cbe3f25bd0d8abb468ba4302a5d45617ee61b2a7a638f63fee1dc086be99"}, - {file = "jiter-0.11.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c0a7f0ec81d5b7588c5cade1eb1925b91436ae6726dc2df2348524aeabad5de6"}, - {file = "jiter-0.11.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:07630bb46ea2a6b9c6ed986c6e17e35b26148cce2c535454b26ee3f0e8dcaba1"}, - {file = "jiter-0.11.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7764f27d28cd4a9cbc61704dfcd80c903ce3aad106a37902d3270cd6673d17f4"}, - {file = "jiter-0.11.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1d4a6c4a737d486f77f842aeb22807edecb4a9417e6700c7b981e16d34ba7c72"}, - {file = "jiter-0.11.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cf408d2a0abd919b60de8c2e7bc5eeab72d4dafd18784152acc7c9adc3291591"}, - {file = "jiter-0.11.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:cdef53eda7d18e799625023e1e250dbc18fbc275153039b873ec74d7e8883e09"}, - {file = "jiter-0.11.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:53933a38ef7b551dd9c7f1064f9d7bb235bb3168d0fa5f14f0798d1b7ea0d9c5"}, - {file = "jiter-0.11.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:11840d2324c9ab5162fc1abba23bc922124fedcff0d7b7f85fffa291e2f69206"}, - {file = "jiter-0.11.0-cp312-cp312-win32.whl", hash = "sha256:4f01a744d24a5f2bb4a11657a1b27b61dc038ae2e674621a74020406e08f749b"}, - {file = "jiter-0.11.0-cp312-cp312-win_amd64.whl", hash = "sha256:29fff31190ab3a26de026da2f187814f4b9c6695361e20a9ac2123e4d4378a4c"}, - {file = "jiter-0.11.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:4441a91b80a80249f9a6452c14b2c24708f139f64de959943dfeaa6cb915e8eb"}, - {file = "jiter-0.11.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:ff85fc6d2a431251ad82dbd1ea953affb5a60376b62e7d6809c5cd058bb39471"}, - {file = 
"jiter-0.11.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c5e86126d64706fd28dfc46f910d496923c6f95b395138c02d0e252947f452bd"}, - {file = "jiter-0.11.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:4ad8bd82165961867a10f52010590ce0b7a8c53da5ddd8bbb62fef68c181b921"}, - {file = "jiter-0.11.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b42c2cd74273455ce439fd9528db0c6e84b5623cb74572305bdd9f2f2961d3df"}, - {file = "jiter-0.11.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f0062dab98172dd0599fcdbf90214d0dcde070b1ff38a00cc1b90e111f071982"}, - {file = "jiter-0.11.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb948402821bc76d1f6ef0f9e19b816f9b09f8577844ba7140f0b6afe994bc64"}, - {file = "jiter-0.11.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:25a5b1110cca7329fd0daf5060faa1234be5c11e988948e4f1a1923b6a457fe1"}, - {file = "jiter-0.11.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:bf11807e802a214daf6c485037778843fadd3e2ec29377ae17e0706ec1a25758"}, - {file = "jiter-0.11.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:dbb57da40631c267861dd0090461222060960012d70fd6e4c799b0f62d0ba166"}, - {file = "jiter-0.11.0-cp313-cp313-win32.whl", hash = "sha256:8e36924dad32c48d3c5e188d169e71dc6e84d6cb8dedefea089de5739d1d2f80"}, - {file = "jiter-0.11.0-cp313-cp313-win_amd64.whl", hash = "sha256:452d13e4fd59698408087235259cebe67d9d49173b4dacb3e8d35ce4acf385d6"}, - {file = "jiter-0.11.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:089f9df9f69532d1339e83142438668f52c97cd22ee2d1195551c2b1a9e6cf33"}, - {file = "jiter-0.11.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29ed1fe69a8c69bf0f2a962d8d706c7b89b50f1332cd6b9fbda014f60bd03a03"}, - {file = "jiter-0.11.0-cp313-cp313t-win_amd64.whl", hash = "sha256:a4d71d7ea6ea8786291423fe209acf6f8d398a0759d03e7f24094acb8ab686ba"}, 
- {file = "jiter-0.11.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:9a6dff27eca70930bdbe4cbb7c1a4ba8526e13b63dc808c0670083d2d51a4a72"}, - {file = "jiter-0.11.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:b1ae2a7593a62132c7d4c2abbee80bbbb94fdc6d157e2c6cc966250c564ef774"}, - {file = "jiter-0.11.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7b13a431dba4b059e9e43019d3022346d009baf5066c24dcdea321a303cde9f0"}, - {file = "jiter-0.11.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:af62e84ca3889604ebb645df3b0a3f3bcf6b92babbff642bd214616f57abb93a"}, - {file = "jiter-0.11.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c6f3b32bb723246e6b351aecace52aba78adb8eeb4b2391630322dc30ff6c773"}, - {file = "jiter-0.11.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:adcab442f4a099a358a7f562eaa54ed6456fb866e922c6545a717be51dbed7d7"}, - {file = "jiter-0.11.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9967c2ab338ee2b2c0102fd379ec2693c496abf71ffd47e4d791d1f593b68e2"}, - {file = "jiter-0.11.0-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e7d0bed3b187af8b47a981d9742ddfc1d9b252a7235471ad6078e7e4e5fe75c2"}, - {file = "jiter-0.11.0-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:f6fe0283e903ebc55f1a6cc569b8c1f3bf4abd026fed85e3ff8598a9e6f982f0"}, - {file = "jiter-0.11.0-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:4ee5821e3d66606b29ae5b497230b304f1376f38137d69e35f8d2bd5f310ff73"}, - {file = "jiter-0.11.0-cp314-cp314-win32.whl", hash = "sha256:c2d13ba7567ca8799f17c76ed56b1d49be30df996eb7fa33e46b62800562a5e2"}, - {file = "jiter-0.11.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:fb4790497369d134a07fc763cc88888c46f734abdd66f9fdf7865038bf3a8f40"}, - {file = "jiter-0.11.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:6e2bbf24f16ba5ad4441a9845e40e4ea0cb9eed00e76ba94050664ef53ef4406"}, - {file = "jiter-0.11.0-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:719891c2fb7628a41adff4f2f54c19380a27e6fdfdb743c24680ef1a54c67bd0"}, - {file = "jiter-0.11.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:df7f1927cbdf34cb91262a5418ca06920fd42f1cf733936d863aeb29b45a14ef"}, - {file = "jiter-0.11.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e71ae6d969d0c9bab336c5e9e2fabad31e74d823f19e3604eaf96d9a97f463df"}, - {file = "jiter-0.11.0-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5661469a7b2be25ade3a4bb6c21ffd1e142e13351a0759f264dfdd3ad99af1ab"}, - {file = "jiter-0.11.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:76c15ef0d3d02f8b389066fa4c410a0b89e9cc6468a1f0674c5925d2f3c3e890"}, - {file = "jiter-0.11.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:63782a1350917a27817030716566ed3d5b3c731500fd42d483cbd7094e2c5b25"}, - {file = "jiter-0.11.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5a7092b699646a1ddc03a7b112622d9c066172627c7382659befb0d2996f1659"}, - {file = "jiter-0.11.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f637b8e818f6d75540f350a6011ce21252573c0998ea1b4365ee54b7672c23c5"}, - {file = "jiter-0.11.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:a624d87719e1b5d09c15286eaee7e1532a40c692a096ea7ca791121365f548c1"}, - {file = "jiter-0.11.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a9d0146d8d9b3995821bb586fc8256636258947c2f39da5bab709f3a28fb1a0b"}, - {file = "jiter-0.11.0-cp39-cp39-win32.whl", hash = "sha256:d067655a7cf0831eb8ec3e39cbd752995e9b69a2206df3535b3a067fac23b032"}, - {file = "jiter-0.11.0-cp39-cp39-win_amd64.whl", hash = "sha256:f05d03775a11aaf132c447436983169958439f1219069abf24662a672851f94e"}, - {file = 
"jiter-0.11.0-graalpy311-graalpy242_311_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:902b43386c04739229076bd1c4c69de5d115553d982ab442a8ae82947c72ede7"}, - {file = "jiter-0.11.0.tar.gz", hash = "sha256:1d9637eaf8c1d6a63d6562f2a6e5ab3af946c66037eb1b894e8fad75422266e4"}, -] - -[[package]] -name = "jmespath" -version = "1.0.1" -description = "JSON Matching Expressions" -optional = false -python-versions = ">=3.7" -files = [ - {file = "jmespath-1.0.1-py3-none-any.whl", hash = "sha256:02e2e4cc71b5bcab88332eebf907519190dd9e6e82107fa7f83b1003a6252980"}, - {file = "jmespath-1.0.1.tar.gz", hash = "sha256:90261b206d6defd58fdd5e85f478bf633a2901798906be2ad389150c5c60edbe"}, -] - -[[package]] -name = "jsonschema" -version = "4.25.1" -description = "An implementation of JSON Schema validation for Python" -optional = false -python-versions = ">=3.9" -files = [ - {file = "jsonschema-4.25.1-py3-none-any.whl", hash = "sha256:3fba0169e345c7175110351d456342c364814cfcf3b964ba4587f22915230a63"}, - {file = "jsonschema-4.25.1.tar.gz", hash = "sha256:e4a9655ce0da0c0b67a085847e00a3a51449e1157f4f75e9fb5aa545e122eb85"}, -] - -[package.dependencies] -attrs = ">=22.2.0" -jsonschema-specifications = ">=2023.03.6" -referencing = ">=0.28.4" -rpds-py = ">=0.7.1" - -[package.extras] -format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"] -format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "rfc3987-syntax (>=1.1.0)", "uri-template", "webcolors (>=24.6.0)"] - -[[package]] -name = "jsonschema-specifications" -version = "2025.9.1" -description = "The JSON Schema meta-schemas and vocabularies, exposed as a Registry" -optional = false -python-versions = ">=3.9" -files = [ - {file = "jsonschema_specifications-2025.9.1-py3-none-any.whl", hash = "sha256:98802fee3a11ee76ecaca44429fda8a41bff98b00a0f2838151b113f210cc6fe"}, - {file = 
"jsonschema_specifications-2025.9.1.tar.gz", hash = "sha256:b540987f239e745613c7a9176f3edb72b832a4ac465cf02712288397832b5e8d"}, -] - -[package.dependencies] -referencing = ">=0.31.0" - -[[package]] -name = "markupsafe" -version = "3.0.3" -description = "Safely add untrusted strings to HTML/XML markup." -optional = false -python-versions = ">=3.9" -files = [ - {file = "markupsafe-3.0.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2f981d352f04553a7171b8e44369f2af4055f888dfb147d55e42d29e29e74559"}, - {file = "markupsafe-3.0.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e1c1493fb6e50ab01d20a22826e57520f1284df32f2d8601fdd90b6304601419"}, - {file = "markupsafe-3.0.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1ba88449deb3de88bd40044603fafffb7bc2b055d626a330323a9ed736661695"}, - {file = "markupsafe-3.0.3-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:f42d0984e947b8adf7dd6dde396e720934d12c506ce84eea8476409563607591"}, - {file = "markupsafe-3.0.3-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:c0c0b3ade1c0b13b936d7970b1d37a57acde9199dc2aecc4c336773e1d86049c"}, - {file = "markupsafe-3.0.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:0303439a41979d9e74d18ff5e2dd8c43ed6c6001fd40e5bf2e43f7bd9bbc523f"}, - {file = "markupsafe-3.0.3-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:d2ee202e79d8ed691ceebae8e0486bd9a2cd4794cec4824e1c99b6f5009502f6"}, - {file = "markupsafe-3.0.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:177b5253b2834fe3678cb4a5f0059808258584c559193998be2601324fdeafb1"}, - {file = "markupsafe-3.0.3-cp310-cp310-win32.whl", hash = "sha256:2a15a08b17dd94c53a1da0438822d70ebcd13f8c3a95abe3a9ef9f11a94830aa"}, - {file = "markupsafe-3.0.3-cp310-cp310-win_amd64.whl", hash = "sha256:c4ffb7ebf07cfe8931028e3e4c85f0357459a3f9f9490886198848f4fa002ec8"}, - {file = 
"markupsafe-3.0.3-cp310-cp310-win_arm64.whl", hash = "sha256:e2103a929dfa2fcaf9bb4e7c091983a49c9ac3b19c9061b6d5427dd7d14d81a1"}, - {file = "markupsafe-3.0.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:1cc7ea17a6824959616c525620e387f6dd30fec8cb44f649e31712db02123dad"}, - {file = "markupsafe-3.0.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4bd4cd07944443f5a265608cc6aab442e4f74dff8088b0dfc8238647b8f6ae9a"}, - {file = "markupsafe-3.0.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6b5420a1d9450023228968e7e6a9ce57f65d148ab56d2313fcd589eee96a7a50"}, - {file = "markupsafe-3.0.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0bf2a864d67e76e5c9a34dc26ec616a66b9888e25e7b9460e1c76d3293bd9dbf"}, - {file = "markupsafe-3.0.3-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:bc51efed119bc9cfdf792cdeaa4d67e8f6fcccab66ed4bfdd6bde3e59bfcbb2f"}, - {file = "markupsafe-3.0.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:068f375c472b3e7acbe2d5318dea141359e6900156b5b2ba06a30b169086b91a"}, - {file = "markupsafe-3.0.3-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:7be7b61bb172e1ed687f1754f8e7484f1c8019780f6f6b0786e76bb01c2ae115"}, - {file = "markupsafe-3.0.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f9e130248f4462aaa8e2552d547f36ddadbeaa573879158d721bbd33dfe4743a"}, - {file = "markupsafe-3.0.3-cp311-cp311-win32.whl", hash = "sha256:0db14f5dafddbb6d9208827849fad01f1a2609380add406671a26386cdf15a19"}, - {file = "markupsafe-3.0.3-cp311-cp311-win_amd64.whl", hash = "sha256:de8a88e63464af587c950061a5e6a67d3632e36df62b986892331d4620a35c01"}, - {file = "markupsafe-3.0.3-cp311-cp311-win_arm64.whl", hash = "sha256:3b562dd9e9ea93f13d53989d23a7e775fdfd1066c33494ff43f5418bc8c58a5c"}, - {file = "markupsafe-3.0.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = 
"sha256:d53197da72cc091b024dd97249dfc7794d6a56530370992a5e1a08983ad9230e"}, - {file = "markupsafe-3.0.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1872df69a4de6aead3491198eaf13810b565bdbeec3ae2dc8780f14458ec73ce"}, - {file = "markupsafe-3.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3a7e8ae81ae39e62a41ec302f972ba6ae23a5c5396c8e60113e9066ef893da0d"}, - {file = "markupsafe-3.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d6dd0be5b5b189d31db7cda48b91d7e0a9795f31430b7f271219ab30f1d3ac9d"}, - {file = "markupsafe-3.0.3-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:94c6f0bb423f739146aec64595853541634bde58b2135f27f61c1ffd1cd4d16a"}, - {file = "markupsafe-3.0.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:be8813b57049a7dc738189df53d69395eba14fb99345e0a5994914a3864c8a4b"}, - {file = "markupsafe-3.0.3-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:83891d0e9fb81a825d9a6d61e3f07550ca70a076484292a70fde82c4b807286f"}, - {file = "markupsafe-3.0.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:77f0643abe7495da77fb436f50f8dab76dbc6e5fd25d39589a0f1fe6548bfa2b"}, - {file = "markupsafe-3.0.3-cp312-cp312-win32.whl", hash = "sha256:d88b440e37a16e651bda4c7c2b930eb586fd15ca7406cb39e211fcff3bf3017d"}, - {file = "markupsafe-3.0.3-cp312-cp312-win_amd64.whl", hash = "sha256:26a5784ded40c9e318cfc2bdb30fe164bdb8665ded9cd64d500a34fb42067b1c"}, - {file = "markupsafe-3.0.3-cp312-cp312-win_arm64.whl", hash = "sha256:35add3b638a5d900e807944a078b51922212fb3dedb01633a8defc4b01a3c85f"}, - {file = "markupsafe-3.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:e1cf1972137e83c5d4c136c43ced9ac51d0e124706ee1c8aa8532c1287fa8795"}, - {file = "markupsafe-3.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:116bb52f642a37c115f517494ea5feb03889e04df47eeff5b130b1808ce7c219"}, - {file = 
"markupsafe-3.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:133a43e73a802c5562be9bbcd03d090aa5a1fe899db609c29e8c8d815c5f6de6"}, - {file = "markupsafe-3.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ccfcd093f13f0f0b7fdd0f198b90053bf7b2f02a3927a30e63f3ccc9df56b676"}, - {file = "markupsafe-3.0.3-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:509fa21c6deb7a7a273d629cf5ec029bc209d1a51178615ddf718f5918992ab9"}, - {file = "markupsafe-3.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a4afe79fb3de0b7097d81da19090f4df4f8d3a2b3adaa8764138aac2e44f3af1"}, - {file = "markupsafe-3.0.3-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:795e7751525cae078558e679d646ae45574b47ed6e7771863fcc079a6171a0fc"}, - {file = "markupsafe-3.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:8485f406a96febb5140bfeca44a73e3ce5116b2501ac54fe953e488fb1d03b12"}, - {file = "markupsafe-3.0.3-cp313-cp313-win32.whl", hash = "sha256:bdd37121970bfd8be76c5fb069c7751683bdf373db1ed6c010162b2a130248ed"}, - {file = "markupsafe-3.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:9a1abfdc021a164803f4d485104931fb8f8c1efd55bc6b748d2f5774e78b62c5"}, - {file = "markupsafe-3.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:7e68f88e5b8799aa49c85cd116c932a1ac15caaa3f5db09087854d218359e485"}, - {file = "markupsafe-3.0.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:218551f6df4868a8d527e3062d0fb968682fe92054e89978594c28e642c43a73"}, - {file = "markupsafe-3.0.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:3524b778fe5cfb3452a09d31e7b5adefeea8c5be1d43c4f810ba09f2ceb29d37"}, - {file = "markupsafe-3.0.3-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4e885a3d1efa2eadc93c894a21770e4bc67899e3543680313b09f139e149ab19"}, - {file = 
"markupsafe-3.0.3-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8709b08f4a89aa7586de0aadc8da56180242ee0ada3999749b183aa23df95025"}, - {file = "markupsafe-3.0.3-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:b8512a91625c9b3da6f127803b166b629725e68af71f8184ae7e7d54686a56d6"}, - {file = "markupsafe-3.0.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:9b79b7a16f7fedff2495d684f2b59b0457c3b493778c9eed31111be64d58279f"}, - {file = "markupsafe-3.0.3-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:12c63dfb4a98206f045aa9563db46507995f7ef6d83b2f68eda65c307c6829eb"}, - {file = "markupsafe-3.0.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:8f71bc33915be5186016f675cd83a1e08523649b0e33efdb898db577ef5bb009"}, - {file = "markupsafe-3.0.3-cp313-cp313t-win32.whl", hash = "sha256:69c0b73548bc525c8cb9a251cddf1931d1db4d2258e9599c28c07ef3580ef354"}, - {file = "markupsafe-3.0.3-cp313-cp313t-win_amd64.whl", hash = "sha256:1b4b79e8ebf6b55351f0d91fe80f893b4743f104bff22e90697db1590e47a218"}, - {file = "markupsafe-3.0.3-cp313-cp313t-win_arm64.whl", hash = "sha256:ad2cf8aa28b8c020ab2fc8287b0f823d0a7d8630784c31e9ee5edea20f406287"}, - {file = "markupsafe-3.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:eaa9599de571d72e2daf60164784109f19978b327a3910d3e9de8c97b5b70cfe"}, - {file = "markupsafe-3.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c47a551199eb8eb2121d4f0f15ae0f923d31350ab9280078d1e5f12b249e0026"}, - {file = "markupsafe-3.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f34c41761022dd093b4b6896d4810782ffbabe30f2d443ff5f083e0cbbb8c737"}, - {file = "markupsafe-3.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:457a69a9577064c05a97c41f4e65148652db078a3a509039e64d3467b9e7ef97"}, - {file = 
"markupsafe-3.0.3-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e8afc3f2ccfa24215f8cb28dcf43f0113ac3c37c2f0f0806d8c70e4228c5cf4d"}, - {file = "markupsafe-3.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:ec15a59cf5af7be74194f7ab02d0f59a62bdcf1a537677ce67a2537c9b87fcda"}, - {file = "markupsafe-3.0.3-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:0eb9ff8191e8498cca014656ae6b8d61f39da5f95b488805da4bb029cccbfbaf"}, - {file = "markupsafe-3.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:2713baf880df847f2bece4230d4d094280f4e67b1e813eec43b4c0e144a34ffe"}, - {file = "markupsafe-3.0.3-cp314-cp314-win32.whl", hash = "sha256:729586769a26dbceff69f7a7dbbf59ab6572b99d94576a5592625d5b411576b9"}, - {file = "markupsafe-3.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:bdc919ead48f234740ad807933cdf545180bfbe9342c2bb451556db2ed958581"}, - {file = "markupsafe-3.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:5a7d5dc5140555cf21a6fefbdbf8723f06fcd2f63ef108f2854de715e4422cb4"}, - {file = "markupsafe-3.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:1353ef0c1b138e1907ae78e2f6c63ff67501122006b0f9abad68fda5f4ffc6ab"}, - {file = "markupsafe-3.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:1085e7fbddd3be5f89cc898938f42c0b3c711fdcb37d75221de2666af647c175"}, - {file = "markupsafe-3.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1b52b4fb9df4eb9ae465f8d0c228a00624de2334f216f178a995ccdcf82c4634"}, - {file = "markupsafe-3.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fed51ac40f757d41b7c48425901843666a6677e3e8eb0abcff09e4ba6e664f50"}, - {file = "markupsafe-3.0.3-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:f190daf01f13c72eac4efd5c430a8de82489d9cff23c364c3ea822545032993e"}, - {file = "markupsafe-3.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = 
"sha256:e56b7d45a839a697b5eb268c82a71bd8c7f6c94d6fd50c3d577fa39a9f1409f5"}, - {file = "markupsafe-3.0.3-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:f3e98bb3798ead92273dc0e5fd0f31ade220f59a266ffd8a4f6065e0a3ce0523"}, - {file = "markupsafe-3.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:5678211cb9333a6468fb8d8be0305520aa073f50d17f089b5b4b477ea6e67fdc"}, - {file = "markupsafe-3.0.3-cp314-cp314t-win32.whl", hash = "sha256:915c04ba3851909ce68ccc2b8e2cd691618c4dc4c4232fb7982bca3f41fd8c3d"}, - {file = "markupsafe-3.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4faffd047e07c38848ce017e8725090413cd80cbc23d86e55c587bf979e579c9"}, - {file = "markupsafe-3.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:32001d6a8fc98c8cb5c947787c5d08b0a50663d139f1305bac5885d98d9b40fa"}, - {file = "markupsafe-3.0.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:15d939a21d546304880945ca1ecb8a039db6b4dc49b2c5a400387cdae6a62e26"}, - {file = "markupsafe-3.0.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f71a396b3bf33ecaa1626c255855702aca4d3d9fea5e051b41ac59a9c1c41edc"}, - {file = "markupsafe-3.0.3-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0f4b68347f8c5eab4a13419215bdfd7f8c9b19f2b25520968adfad23eb0ce60c"}, - {file = "markupsafe-3.0.3-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e8fc20152abba6b83724d7ff268c249fa196d8259ff481f3b1476383f8f24e42"}, - {file = "markupsafe-3.0.3-cp39-cp39-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:949b8d66bc381ee8b007cd945914c721d9aba8e27f71959d750a46f7c282b20b"}, - {file = "markupsafe-3.0.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:3537e01efc9d4dccdf77221fb1cb3b8e1a38d5428920e0657ce299b20324d758"}, - {file = "markupsafe-3.0.3-cp39-cp39-musllinux_1_2_riscv64.whl", hash = "sha256:591ae9f2a647529ca990bc681daebdd52c8791ff06c2bfa05b65163e28102ef2"}, - {file = 
"markupsafe-3.0.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:a320721ab5a1aba0a233739394eb907f8c8da5c98c9181d1161e77a0c8e36f2d"}, - {file = "markupsafe-3.0.3-cp39-cp39-win32.whl", hash = "sha256:df2449253ef108a379b8b5d6b43f4b1a8e81a061d6537becd5582fba5f9196d7"}, - {file = "markupsafe-3.0.3-cp39-cp39-win_amd64.whl", hash = "sha256:7c3fb7d25180895632e5d3148dbdc29ea38ccb7fd210aa27acbd1201a1902c6e"}, - {file = "markupsafe-3.0.3-cp39-cp39-win_arm64.whl", hash = "sha256:38664109c14ffc9e7437e86b4dceb442b0096dfe3541d7864d9cbe1da4cf36c8"}, - {file = "markupsafe-3.0.3.tar.gz", hash = "sha256:722695808f4b6457b320fdc131280796bdceb04ab50fe1795cd540799ebe1698"}, -] - -[[package]] -name = "matplotlib-inline" -version = "0.1.7" -description = "Inline Matplotlib backend for Jupyter" -optional = false -python-versions = ">=3.8" -files = [ - {file = "matplotlib_inline-0.1.7-py3-none-any.whl", hash = "sha256:df192d39a4ff8f21b1895d72e6a13f5fcc5099f00fa84384e0ea28c2cc0653ca"}, - {file = "matplotlib_inline-0.1.7.tar.gz", hash = "sha256:8423b23ec666be3d16e16b60bdd8ac4e86e840ebd1dd11a30b9f117f2fa0ab90"}, -] - -[package.dependencies] -traitlets = "*" - -[[package]] -name = "mcp" -version = "1.15.0" -description = "Model Context Protocol SDK" -optional = false -python-versions = ">=3.10" -files = [ - {file = "mcp-1.15.0-py3-none-any.whl", hash = "sha256:314614c8addc67b663d6c3e4054db0a5c3dedc416c24ef8ce954e203fdc2333d"}, - {file = "mcp-1.15.0.tar.gz", hash = "sha256:5bda1f4d383cf539d3c035b3505a3de94b20dbd7e4e8b4bd071e14634eeb2d72"}, -] - -[package.dependencies] -anyio = ">=4.5" -httpx = ">=0.27.1" -httpx-sse = ">=0.4" -jsonschema = ">=4.20.0" -pydantic = ">=2.11.0,<3.0.0" -pydantic-settings = ">=2.5.2" -python-multipart = ">=0.0.9" -pywin32 = {version = ">=310", markers = "sys_platform == \"win32\""} -sse-starlette = ">=1.6.1" -starlette = ">=0.27" -uvicorn = {version = ">=0.31.1", markers = "sys_platform != \"emscripten\""} - -[package.extras] -cli = ["python-dotenv (>=1.0.0)", 
"typer (>=0.16.0)"] -rich = ["rich (>=13.9.4)"] -ws = ["websockets (>=15.0.1)"] - -[[package]] -name = "openai" -version = "1.109.1" -description = "The official Python library for the openai API" -optional = false -python-versions = ">=3.8" -files = [ - {file = "openai-1.109.1-py3-none-any.whl", hash = "sha256:6bcaf57086cf59159b8e27447e4e7dd019db5d29a438072fbd49c290c7e65315"}, - {file = "openai-1.109.1.tar.gz", hash = "sha256:d173ed8dbca665892a6db099b4a2dfac624f94d20a93f46eb0b56aae940ed869"}, -] - -[package.dependencies] -anyio = ">=3.5.0,<5" -distro = ">=1.7.0,<2" -httpx = ">=0.23.0,<1" -jiter = ">=0.4.0,<1" -pydantic = ">=1.9.0,<3" -sniffio = "*" -tqdm = ">4" -typing-extensions = ">=4.11,<5" - -[package.extras] -aiohttp = ["aiohttp", "httpx-aiohttp (>=0.1.8)"] -datalib = ["numpy (>=1)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1.0.11)"] -realtime = ["websockets (>=13,<16)"] -voice-helpers = ["numpy (>=2.0.2)", "sounddevice (>=0.5.1)"] - -[[package]] -name = "orderly-set" -version = "5.5.0" -description = "Orderly set" -optional = false -python-versions = ">=3.8" -files = [ - {file = "orderly_set-5.5.0-py3-none-any.whl", hash = "sha256:46f0b801948e98f427b412fcabb831677194c05c3b699b80de260374baa0b1e7"}, - {file = "orderly_set-5.5.0.tar.gz", hash = "sha256:e87185c8e4d8afa64e7f8160ee2c542a475b738bc891dc3f58102e654125e6ce"}, -] - -[package.extras] -coverage = ["coverage (>=7.6.0,<7.7.0)"] -dev = ["bump2version (>=1.0.0,<1.1.0)", "ipdb (>=0.13.0,<0.14.0)"] -optimize = ["orjson"] -static = ["flake8 (>=7.1.0,<7.2.0)", "flake8-pyproject (>=1.2.3,<1.3.0)"] -test = ["pytest (>=8.3.0,<8.4.0)", "pytest-benchmark (>=5.1.0,<5.2.0)", "pytest-cov (>=6.0.0,<6.1.0)", "python-dotenv (>=1.0.0,<1.1.0)"] - -[[package]] -name = "parso" -version = "0.8.5" -description = "A Python Parser" -optional = false -python-versions = ">=3.6" -files = [ - {file = "parso-0.8.5-py2.py3-none-any.whl", hash = "sha256:646204b5ee239c396d040b90f9e272e9a8017c630092bf59980beb62fd033887"}, - {file = 
"parso-0.8.5.tar.gz", hash = "sha256:034d7354a9a018bdce352f48b2a8a450f05e9d6ee85db84764e9b6bd96dafe5a"}, -] - -[package.extras] -qa = ["flake8 (==5.0.4)", "mypy (==0.971)", "types-setuptools (==67.2.0.1)"] -testing = ["docopt", "pytest"] - -[[package]] -name = "pexpect" -version = "4.9.0" -description = "Pexpect allows easy control of interactive console applications." -optional = false -python-versions = "*" -files = [ - {file = "pexpect-4.9.0-py2.py3-none-any.whl", hash = "sha256:7236d1e080e4936be2dc3e326cec0af72acf9212a7e1d060210e70a47e253523"}, - {file = "pexpect-4.9.0.tar.gz", hash = "sha256:ee7d41123f3c9911050ea2c2dac107568dc43b2d3b0c7557a33212c398ead30f"}, -] - -[package.dependencies] -ptyprocess = ">=0.5" - -[[package]] -name = "prompt-toolkit" -version = "3.0.52" -description = "Library for building powerful interactive command lines in Python" -optional = false -python-versions = ">=3.8" -files = [ - {file = "prompt_toolkit-3.0.52-py3-none-any.whl", hash = "sha256:9aac639a3bbd33284347de5ad8d68ecc044b91a762dc39b7c21095fcd6a19955"}, - {file = "prompt_toolkit-3.0.52.tar.gz", hash = "sha256:28cde192929c8e7321de85de1ddbe736f1375148b02f2e17edd840042b1be855"}, -] - -[package.dependencies] -wcwidth = "*" - -[[package]] -name = "ptyprocess" -version = "0.7.0" -description = "Run a subprocess in a pseudo terminal" -optional = false -python-versions = "*" -files = [ - {file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"}, - {file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"}, -] - -[[package]] -name = "publication" -version = "0.0.3" -description = "Publication helps you maintain public-api-friendly modules by preventing unintentional access to private implementation details via introspection." 
-optional = false -python-versions = "*" -files = [ - {file = "publication-0.0.3-py2.py3-none-any.whl", hash = "sha256:0248885351febc11d8a1098d5c8e3ab2dabcf3e8c0c96db1e17ecd12b53afbe6"}, - {file = "publication-0.0.3.tar.gz", hash = "sha256:68416a0de76dddcdd2930d1c8ef853a743cc96c82416c4e4d3b5d901c6276dc4"}, -] - -[[package]] -name = "pure-eval" -version = "0.2.3" -description = "Safely evaluate AST nodes without side effects" -optional = false -python-versions = "*" -files = [ - {file = "pure_eval-0.2.3-py3-none-any.whl", hash = "sha256:1db8e35b67b3d218d818ae653e27f06c3aa420901fa7b081ca98cbedc874e0d0"}, - {file = "pure_eval-0.2.3.tar.gz", hash = "sha256:5f4e983f40564c576c7c8635ae88db5956bb2229d7e9237d03b3c0b0190eaf42"}, -] - -[package.extras] -tests = ["pytest"] - -[[package]] -name = "pydantic" -version = "2.11.9" -description = "Data validation using Python type hints" -optional = false -python-versions = ">=3.9" -files = [ - {file = "pydantic-2.11.9-py3-none-any.whl", hash = "sha256:c42dd626f5cfc1c6950ce6205ea58c93efa406da65f479dcb4029d5934857da2"}, - {file = "pydantic-2.11.9.tar.gz", hash = "sha256:6b8ffda597a14812a7975c90b82a8a2e777d9257aba3453f973acd3c032a18e2"}, -] - -[package.dependencies] -annotated-types = ">=0.6.0" -pydantic-core = "2.33.2" -typing-extensions = ">=4.12.2" -typing-inspection = ">=0.4.0" - -[package.extras] -email = ["email-validator (>=2.0.0)"] -timezone = ["tzdata"] - -[[package]] -name = "pydantic-core" -version = "2.33.2" -description = "Core functionality for Pydantic validation and serialization" -optional = false -python-versions = ">=3.9" -files = [ - {file = "pydantic_core-2.33.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2b3d326aaef0c0399d9afffeb6367d5e26ddc24d351dbc9c636840ac355dc5d8"}, - {file = "pydantic_core-2.33.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0e5b2671f05ba48b94cb90ce55d8bdcaaedb8ba00cc5359f6810fc918713983d"}, - {file = 
"pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0069c9acc3f3981b9ff4cdfaf088e98d83440a4c7ea1bc07460af3d4dc22e72d"}, - {file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d53b22f2032c42eaaf025f7c40c2e3b94568ae077a606f006d206a463bc69572"}, - {file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0405262705a123b7ce9f0b92f123334d67b70fd1f20a9372b907ce1080c7ba02"}, - {file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4b25d91e288e2c4e0662b8038a28c6a07eaac3e196cfc4ff69de4ea3db992a1b"}, - {file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bdfe4b3789761f3bcb4b1ddf33355a71079858958e3a552f16d5af19768fef2"}, - {file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:efec8db3266b76ef9607c2c4c419bdb06bf335ae433b80816089ea7585816f6a"}, - {file = "pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:031c57d67ca86902726e0fae2214ce6770bbe2f710dc33063187a68744a5ecac"}, - {file = "pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:f8de619080e944347f5f20de29a975c2d815d9ddd8be9b9b7268e2e3ef68605a"}, - {file = "pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:73662edf539e72a9440129f231ed3757faab89630d291b784ca99237fb94db2b"}, - {file = "pydantic_core-2.33.2-cp310-cp310-win32.whl", hash = "sha256:0a39979dcbb70998b0e505fb1556a1d550a0781463ce84ebf915ba293ccb7e22"}, - {file = "pydantic_core-2.33.2-cp310-cp310-win_amd64.whl", hash = "sha256:b0379a2b24882fef529ec3b4987cb5d003b9cda32256024e6fe1586ac45fc640"}, - {file = "pydantic_core-2.33.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4c5b0a576fb381edd6d27f0a85915c6daf2f8138dc5c267a57c08a62900758c7"}, - {file = 
"pydantic_core-2.33.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246"}, - {file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc46a01bf8d62f227d5ecee74178ffc448ff4e5197c756331f71efcc66dc980f"}, - {file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a144d4f717285c6d9234a66778059f33a89096dfb9b39117663fd8413d582dcc"}, - {file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:73cf6373c21bc80b2e0dc88444f41ae60b2f070ed02095754eb5a01df12256de"}, - {file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3dc625f4aa79713512d1976fe9f0bc99f706a9dee21dfd1810b4bbbf228d0e8a"}, - {file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:881b21b5549499972441da4758d662aeea93f1923f953e9cbaff14b8b9565aef"}, - {file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bdc25f3681f7b78572699569514036afe3c243bc3059d3942624e936ec93450e"}, - {file = "pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d"}, - {file = "pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:bc7aee6f634a6f4a95676fcb5d6559a2c2a390330098dba5e5a5f28a2e4ada30"}, - {file = "pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:235f45e5dbcccf6bd99f9f472858849f73d11120d76ea8707115415f8e5ebebf"}, - {file = "pydantic_core-2.33.2-cp311-cp311-win32.whl", hash = "sha256:6368900c2d3ef09b69cb0b913f9f8263b03786e5b2a387706c5afb66800efd51"}, - {file = "pydantic_core-2.33.2-cp311-cp311-win_amd64.whl", hash = "sha256:1e063337ef9e9820c77acc768546325ebe04ee38b08703244c1309cccc4f1bab"}, - {file = "pydantic_core-2.33.2-cp311-cp311-win_arm64.whl", 
hash = "sha256:6b99022f1d19bc32a4c2a0d544fc9a76e3be90f0b3f4af413f87d38749300e65"}, - {file = "pydantic_core-2.33.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a7ec89dc587667f22b6a0b6579c249fca9026ce7c333fc142ba42411fa243cdc"}, - {file = "pydantic_core-2.33.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3c6db6e52c6d70aa0d00d45cdb9b40f0433b96380071ea80b09277dba021ddf7"}, - {file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e61206137cbc65e6d5256e1166f88331d3b6238e082d9f74613b9b765fb9025"}, - {file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eb8c529b2819c37140eb51b914153063d27ed88e3bdc31b71198a198e921e011"}, - {file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c52b02ad8b4e2cf14ca7b3d918f3eb0ee91e63b3167c32591e57c4317e134f8f"}, - {file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:96081f1605125ba0855dfda83f6f3df5ec90c61195421ba72223de35ccfb2f88"}, - {file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f57a69461af2a5fa6e6bbd7a5f60d3b7e6cebb687f55106933188e79ad155c1"}, - {file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:572c7e6c8bb4774d2ac88929e3d1f12bc45714ae5ee6d9a788a9fb35e60bb04b"}, - {file = "pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:db4b41f9bd95fbe5acd76d89920336ba96f03e149097365afe1cb092fceb89a1"}, - {file = "pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6"}, - {file = "pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:5f483cfb75ff703095c59e365360cb73e00185e01aaea067cd19acffd2ab20ea"}, - {file = "pydantic_core-2.33.2-cp312-cp312-win32.whl", hash = 
"sha256:9cb1da0f5a471435a7bc7e439b8a728e8b61e59784b2af70d7c169f8dd8ae290"}, - {file = "pydantic_core-2.33.2-cp312-cp312-win_amd64.whl", hash = "sha256:f941635f2a3d96b2973e867144fde513665c87f13fe0e193c158ac51bfaaa7b2"}, - {file = "pydantic_core-2.33.2-cp312-cp312-win_arm64.whl", hash = "sha256:cca3868ddfaccfbc4bfb1d608e2ccaaebe0ae628e1416aeb9c4d88c001bb45ab"}, - {file = "pydantic_core-2.33.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:1082dd3e2d7109ad8b7da48e1d4710c8d06c253cbc4a27c1cff4fbcaa97a9e3f"}, - {file = "pydantic_core-2.33.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f517ca031dfc037a9c07e748cefd8d96235088b83b4f4ba8939105d20fa1dcd6"}, - {file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a9f2c9dd19656823cb8250b0724ee9c60a82f3cdf68a080979d13092a3b0fef"}, - {file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2b0a451c263b01acebe51895bfb0e1cc842a5c666efe06cdf13846c7418caa9a"}, - {file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ea40a64d23faa25e62a70ad163571c0b342b8bf66d5fa612ac0dec4f069d916"}, - {file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0fb2d542b4d66f9470e8065c5469ec676978d625a8b7a363f07d9a501a9cb36a"}, - {file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdac5d6ffa1b5a83bca06ffe7583f5576555e6c8b3a91fbd25ea7780f825f7d"}, - {file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:04a1a413977ab517154eebb2d326da71638271477d6ad87a769102f7c2488c56"}, - {file = "pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c8e7af2f4e0194c22b5b37205bfb293d166a7344a5b0d0eaccebc376546d77d5"}, - {file = "pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = 
"sha256:5c92edd15cd58b3c2d34873597a1e20f13094f59cf88068adb18947df5455b4e"}, - {file = "pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:65132b7b4a1c0beded5e057324b7e16e10910c106d43675d9bd87d4f38dde162"}, - {file = "pydantic_core-2.33.2-cp313-cp313-win32.whl", hash = "sha256:52fb90784e0a242bb96ec53f42196a17278855b0f31ac7c3cc6f5c1ec4811849"}, - {file = "pydantic_core-2.33.2-cp313-cp313-win_amd64.whl", hash = "sha256:c083a3bdd5a93dfe480f1125926afcdbf2917ae714bdb80b36d34318b2bec5d9"}, - {file = "pydantic_core-2.33.2-cp313-cp313-win_arm64.whl", hash = "sha256:e80b087132752f6b3d714f041ccf74403799d3b23a72722ea2e6ba2e892555b9"}, - {file = "pydantic_core-2.33.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:61c18fba8e5e9db3ab908620af374db0ac1baa69f0f32df4f61ae23f15e586ac"}, - {file = "pydantic_core-2.33.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95237e53bb015f67b63c91af7518a62a8660376a6a0db19b89acc77a4d6199f5"}, - {file = "pydantic_core-2.33.2-cp313-cp313t-win_amd64.whl", hash = "sha256:c2fc0a768ef76c15ab9238afa6da7f69895bb5d1ee83aeea2e3509af4472d0b9"}, - {file = "pydantic_core-2.33.2-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:a2b911a5b90e0374d03813674bf0a5fbbb7741570dcd4b4e85a2e48d17def29d"}, - {file = "pydantic_core-2.33.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:6fa6dfc3e4d1f734a34710f391ae822e0a8eb8559a85c6979e14e65ee6ba2954"}, - {file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c54c939ee22dc8e2d545da79fc5381f1c020d6d3141d3bd747eab59164dc89fb"}, - {file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:53a57d2ed685940a504248187d5685e49eb5eef0f696853647bf37c418c538f7"}, - {file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:09fb9dd6571aacd023fe6aaca316bd01cf60ab27240d7eb39ebd66a3a15293b4"}, - {file = 
"pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0e6116757f7959a712db11f3e9c0a99ade00a5bbedae83cb801985aa154f071b"}, - {file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d55ab81c57b8ff8548c3e4947f119551253f4e3787a7bbc0b6b3ca47498a9d3"}, - {file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c20c462aa4434b33a2661701b861604913f912254e441ab8d78d30485736115a"}, - {file = "pydantic_core-2.33.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:44857c3227d3fb5e753d5fe4a3420d6376fa594b07b621e220cd93703fe21782"}, - {file = "pydantic_core-2.33.2-cp39-cp39-musllinux_1_1_armv7l.whl", hash = "sha256:eb9b459ca4df0e5c87deb59d37377461a538852765293f9e6ee834f0435a93b9"}, - {file = "pydantic_core-2.33.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9fcd347d2cc5c23b06de6d3b7b8275be558a0c90549495c699e379a80bf8379e"}, - {file = "pydantic_core-2.33.2-cp39-cp39-win32.whl", hash = "sha256:83aa99b1285bc8f038941ddf598501a86f1536789740991d7d8756e34f1e74d9"}, - {file = "pydantic_core-2.33.2-cp39-cp39-win_amd64.whl", hash = "sha256:f481959862f57f29601ccced557cc2e817bce7533ab8e01a797a48b49c9692b3"}, - {file = "pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5c4aa4e82353f65e548c476b37e64189783aa5384903bfea4f41580f255fddfa"}, - {file = "pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d946c8bf0d5c24bf4fe333af284c59a19358aa3ec18cb3dc4370080da1e8ad29"}, - {file = "pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:87b31b6846e361ef83fedb187bb5b4372d0da3f7e28d85415efa92d6125d6e6d"}, - {file = "pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aa9d91b338f2df0508606f7009fde642391425189bba6d8c653afd80fd6bb64e"}, - {file = 
"pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2058a32994f1fde4ca0480ab9d1e75a0e8c87c22b53a3ae66554f9af78f2fe8c"}, - {file = "pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:0e03262ab796d986f978f79c943fc5f620381be7287148b8010b4097f79a39ec"}, - {file = "pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:1a8695a8d00c73e50bff9dfda4d540b7dee29ff9b8053e38380426a85ef10052"}, - {file = "pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:fa754d1850735a0b0e03bcffd9d4b4343eb417e47196e4485d9cca326073a42c"}, - {file = "pydantic_core-2.33.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:a11c8d26a50bfab49002947d3d237abe4d9e4b5bdc8846a63537b6488e197808"}, - {file = "pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:dd14041875d09cc0f9308e37a6f8b65f5585cf2598a53aa0123df8b129d481f8"}, - {file = "pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d87c561733f66531dced0da6e864f44ebf89a8fba55f31407b00c2f7f9449593"}, - {file = "pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2f82865531efd18d6e07a04a17331af02cb7a651583c418df8266f17a63c6612"}, - {file = "pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bfb5112df54209d820d7bf9317c7a6c9025ea52e49f46b6a2060104bba37de7"}, - {file = "pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:64632ff9d614e5eecfb495796ad51b0ed98c453e447a76bcbeeb69615079fc7e"}, - {file = "pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8"}, - {file = "pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:de4b83bb311557e439b9e186f733f6c645b9417c84e2eb8203f3f820a4b988bf"}, - {file = 
"pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:82f68293f055f51b51ea42fafc74b6aad03e70e191799430b90c13d643059ebb"}, - {file = "pydantic_core-2.33.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:329467cecfb529c925cf2bbd4d60d2c509bc2fb52a20c1045bf09bb70971a9c1"}, - {file = "pydantic_core-2.33.2-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:87acbfcf8e90ca885206e98359d7dca4bcbb35abdc0ff66672a293e1d7a19101"}, - {file = "pydantic_core-2.33.2-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:7f92c15cd1e97d4b12acd1cc9004fa092578acfa57b67ad5e43a197175d01a64"}, - {file = "pydantic_core-2.33.2-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3f26877a748dc4251cfcfda9dfb5f13fcb034f5308388066bcfe9031b63ae7d"}, - {file = "pydantic_core-2.33.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dac89aea9af8cd672fa7b510e7b8c33b0bba9a43186680550ccf23020f32d535"}, - {file = "pydantic_core-2.33.2-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:970919794d126ba8645f3837ab6046fb4e72bbc057b3709144066204c19a455d"}, - {file = "pydantic_core-2.33.2-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:3eb3fe62804e8f859c49ed20a8451342de53ed764150cb14ca71357c765dc2a6"}, - {file = "pydantic_core-2.33.2-pp39-pypy39_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:3abcd9392a36025e3bd55f9bd38d908bd17962cc49bc6da8e7e96285336e2bca"}, - {file = "pydantic_core-2.33.2-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:3a1c81334778f9e3af2f8aeb7a960736e5cab1dfebfb26aabca09afd2906c039"}, - {file = "pydantic_core-2.33.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:2807668ba86cb38c6817ad9bc66215ab8584d1d304030ce4f0887336f28a5e27"}, - {file = "pydantic_core-2.33.2.tar.gz", hash = "sha256:7cb8bc3605c29176e1b105350d2e6474142d7c1bd1d9327c4a9bdb46bf827acc"}, -] - -[package.dependencies] -typing-extensions = ">=4.6.0,<4.7.0 || >4.7.0" - -[[package]] -name 
= "pydantic-settings" -version = "2.11.0" -description = "Settings management using Pydantic" -optional = false -python-versions = ">=3.9" -files = [ - {file = "pydantic_settings-2.11.0-py3-none-any.whl", hash = "sha256:fe2cea3413b9530d10f3a5875adffb17ada5c1e1bab0b2885546d7310415207c"}, - {file = "pydantic_settings-2.11.0.tar.gz", hash = "sha256:d0e87a1c7d33593beb7194adb8470fc426e95ba02af83a0f23474a04c9a08180"}, -] - -[package.dependencies] -pydantic = ">=2.7.0" -python-dotenv = ">=0.21.0" -typing-inspection = ">=0.4.0" - -[package.extras] -aws-secrets-manager = ["boto3 (>=1.35.0)", "boto3-stubs[secretsmanager]"] -azure-key-vault = ["azure-identity (>=1.16.0)", "azure-keyvault-secrets (>=4.8.0)"] -gcp-secret-manager = ["google-cloud-secret-manager (>=2.23.1)"] -toml = ["tomli (>=2.0.1)"] -yaml = ["pyyaml (>=6.0.1)"] - -[[package]] -name = "pydash" -version = "7.0.7" -description = "The kitchen sink of Python utility libraries for doing \"stuff\" in a functional way. Based on the Lo-Dash Javascript library." -optional = false -python-versions = ">=3.8" -files = [ - {file = "pydash-7.0.7-py3-none-any.whl", hash = "sha256:c3c5b54eec0a562e0080d6f82a14ad4d5090229847b7e554235b5c1558c745e1"}, - {file = "pydash-7.0.7.tar.gz", hash = "sha256:cc935d5ac72dd41fb4515bdf982e7c864c8b5eeea16caffbab1936b849aaa49a"}, -] - -[package.dependencies] -typing-extensions = ">=3.10,<4.6.0 || >4.6.0" - -[package.extras] -dev = ["black", "build", "coverage", "docformatter", "flake8", "flake8-black", "flake8-bugbear", "flake8-isort", "furo", "invoke", "isort", "mypy", "pylint", "pytest", "pytest-cov", "pytest-mypy-testing", "sphinx", "sphinx-autodoc-typehints", "tox", "twine", "wheel"] - -[[package]] -name = "pygments" -version = "2.19.2" -description = "Pygments is a syntax highlighting package written in Python." 
-optional = false -python-versions = ">=3.8" -files = [ - {file = "pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b"}, - {file = "pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887"}, -] - -[package.extras] -windows-terminal = ["colorama (>=0.4.6)"] - -[[package]] -name = "python-dateutil" -version = "2.9.0.post0" -description = "Extensions to the standard Python datetime module" -optional = false -python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7" -files = [ - {file = "python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3"}, - {file = "python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427"}, -] - -[package.dependencies] -six = ">=1.5" - -[[package]] -name = "python-dotenv" -version = "1.1.1" -description = "Read key-value pairs from a .env file and set them as environment variables" -optional = false -python-versions = ">=3.9" -files = [ - {file = "python_dotenv-1.1.1-py3-none-any.whl", hash = "sha256:31f23644fe2602f88ff55e1f5c79ba497e01224ee7737937930c448e4d0e24dc"}, - {file = "python_dotenv-1.1.1.tar.gz", hash = "sha256:a8a6399716257f45be6a007360200409fce5cda2661e3dec71d23dc15f6189ab"}, -] - -[package.extras] -cli = ["click (>=5.0)"] - -[[package]] -name = "python-multipart" -version = "0.0.20" -description = "A streaming multipart parser for Python" -optional = false -python-versions = ">=3.8" -files = [ - {file = "python_multipart-0.0.20-py3-none-any.whl", hash = "sha256:8a62d3a8335e06589fe01f2a3e178cdcc632f3fbe0d492ad9ee0ec35aab1f104"}, - {file = "python_multipart-0.0.20.tar.gz", hash = "sha256:8dd0cab45b8e23064ae09147625994d090fa46f5b0d1e13af944c331a7fa9d13"}, -] - -[[package]] -name = "pytz" -version = "2025.2" -description = "World timezone definitions, modern and historical" -optional = false 
-python-versions = "*" -files = [ - {file = "pytz-2025.2-py2.py3-none-any.whl", hash = "sha256:5ddf76296dd8c44c26eb8f4b6f35488f3ccbf6fbbd7adee0b7262d43f0ec2f00"}, - {file = "pytz-2025.2.tar.gz", hash = "sha256:360b9e3dbb49a209c21ad61809c7fb453643e048b38924c765813546746e81c3"}, -] - -[[package]] -name = "pywin32" -version = "311" -description = "Python for Window Extensions" -optional = false -python-versions = "*" -files = [ - {file = "pywin32-311-cp310-cp310-win32.whl", hash = "sha256:d03ff496d2a0cd4a5893504789d4a15399133fe82517455e78bad62efbb7f0a3"}, - {file = "pywin32-311-cp310-cp310-win_amd64.whl", hash = "sha256:797c2772017851984b97180b0bebe4b620bb86328e8a884bb626156295a63b3b"}, - {file = "pywin32-311-cp310-cp310-win_arm64.whl", hash = "sha256:0502d1facf1fed4839a9a51ccbcc63d952cf318f78ffc00a7e78528ac27d7a2b"}, - {file = "pywin32-311-cp311-cp311-win32.whl", hash = "sha256:184eb5e436dea364dcd3d2316d577d625c0351bf237c4e9a5fabbcfa5a58b151"}, - {file = "pywin32-311-cp311-cp311-win_amd64.whl", hash = "sha256:3ce80b34b22b17ccbd937a6e78e7225d80c52f5ab9940fe0506a1a16f3dab503"}, - {file = "pywin32-311-cp311-cp311-win_arm64.whl", hash = "sha256:a733f1388e1a842abb67ffa8e7aad0e70ac519e09b0f6a784e65a136ec7cefd2"}, - {file = "pywin32-311-cp312-cp312-win32.whl", hash = "sha256:750ec6e621af2b948540032557b10a2d43b0cee2ae9758c54154d711cc852d31"}, - {file = "pywin32-311-cp312-cp312-win_amd64.whl", hash = "sha256:b8c095edad5c211ff31c05223658e71bf7116daa0ecf3ad85f3201ea3190d067"}, - {file = "pywin32-311-cp312-cp312-win_arm64.whl", hash = "sha256:e286f46a9a39c4a18b319c28f59b61de793654af2f395c102b4f819e584b5852"}, - {file = "pywin32-311-cp313-cp313-win32.whl", hash = "sha256:f95ba5a847cba10dd8c4d8fefa9f2a6cf283b8b88ed6178fa8a6c1ab16054d0d"}, - {file = "pywin32-311-cp313-cp313-win_amd64.whl", hash = "sha256:718a38f7e5b058e76aee1c56ddd06908116d35147e133427e59a3983f703a20d"}, - {file = "pywin32-311-cp313-cp313-win_arm64.whl", hash = 
"sha256:7b4075d959648406202d92a2310cb990fea19b535c7f4a78d3f5e10b926eeb8a"}, - {file = "pywin32-311-cp314-cp314-win32.whl", hash = "sha256:b7a2c10b93f8986666d0c803ee19b5990885872a7de910fc460f9b0c2fbf92ee"}, - {file = "pywin32-311-cp314-cp314-win_amd64.whl", hash = "sha256:3aca44c046bd2ed8c90de9cb8427f581c479e594e99b5c0bb19b29c10fd6cb87"}, - {file = "pywin32-311-cp314-cp314-win_arm64.whl", hash = "sha256:a508e2d9025764a8270f93111a970e1d0fbfc33f4153b388bb649b7eec4f9b42"}, - {file = "pywin32-311-cp38-cp38-win32.whl", hash = "sha256:6c6f2969607b5023b0d9ce2541f8d2cbb01c4f46bc87456017cf63b73f1e2d8c"}, - {file = "pywin32-311-cp38-cp38-win_amd64.whl", hash = "sha256:c8015b09fb9a5e188f83b7b04de91ddca4658cee2ae6f3bc483f0b21a77ef6cd"}, - {file = "pywin32-311-cp39-cp39-win32.whl", hash = "sha256:aba8f82d551a942cb20d4a83413ccbac30790b50efb89a75e4f586ac0bb8056b"}, - {file = "pywin32-311-cp39-cp39-win_amd64.whl", hash = "sha256:e0c4cfb0621281fe40387df582097fd796e80430597cb9944f0ae70447bacd91"}, - {file = "pywin32-311-cp39-cp39-win_arm64.whl", hash = "sha256:62ea666235135fee79bb154e695f3ff67370afefd71bd7fea7512fc70ef31e3d"}, -] - -[[package]] -name = "pyyaml" -version = "6.0.3" -description = "YAML parser and emitter for Python" -optional = false -python-versions = ">=3.8" -files = [ - {file = "PyYAML-6.0.3-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:c2514fceb77bc5e7a2f7adfaa1feb2fb311607c9cb518dbc378688ec73d8292f"}, - {file = "PyYAML-6.0.3-cp38-cp38-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9c57bb8c96f6d1808c030b1687b9b5fb476abaa47f0db9c0101f5e9f394e97f4"}, - {file = "PyYAML-6.0.3-cp38-cp38-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:efd7b85f94a6f21e4932043973a7ba2613b059c4a000551892ac9f1d11f5baf3"}, - {file = "PyYAML-6.0.3-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:22ba7cfcad58ef3ecddc7ed1db3409af68d023b7f940da23c6c2a1890976eda6"}, - 
{file = "PyYAML-6.0.3-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:6344df0d5755a2c9a276d4473ae6b90647e216ab4757f8426893b5dd2ac3f369"}, - {file = "PyYAML-6.0.3-cp38-cp38-win32.whl", hash = "sha256:3ff07ec89bae51176c0549bc4c63aa6202991da2d9a6129d7aef7f1407d3f295"}, - {file = "PyYAML-6.0.3-cp38-cp38-win_amd64.whl", hash = "sha256:5cf4e27da7e3fbed4d6c3d8e797387aaad68102272f8f9752883bc32d61cb87b"}, - {file = "pyyaml-6.0.3-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:214ed4befebe12df36bcc8bc2b64b396ca31be9304b8f59e25c11cf94a4c033b"}, - {file = "pyyaml-6.0.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:02ea2dfa234451bbb8772601d7b8e426c2bfa197136796224e50e35a78777956"}, - {file = "pyyaml-6.0.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b30236e45cf30d2b8e7b3e85881719e98507abed1011bf463a8fa23e9c3e98a8"}, - {file = "pyyaml-6.0.3-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:66291b10affd76d76f54fad28e22e51719ef9ba22b29e1d7d03d6777a9174198"}, - {file = "pyyaml-6.0.3-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9c7708761fccb9397fe64bbc0395abcae8c4bf7b0eac081e12b809bf47700d0b"}, - {file = "pyyaml-6.0.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:418cf3f2111bc80e0933b2cd8cd04f286338bb88bdc7bc8e6dd775ebde60b5e0"}, - {file = "pyyaml-6.0.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:5e0b74767e5f8c593e8c9b5912019159ed0533c70051e9cce3e8b6aa699fcd69"}, - {file = "pyyaml-6.0.3-cp310-cp310-win32.whl", hash = "sha256:28c8d926f98f432f88adc23edf2e6d4921ac26fb084b028c733d01868d19007e"}, - {file = "pyyaml-6.0.3-cp310-cp310-win_amd64.whl", hash = "sha256:bdb2c67c6c1390b63c6ff89f210c8fd09d9a1217a465701eac7316313c915e4c"}, - {file = "pyyaml-6.0.3-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:44edc647873928551a01e7a563d7452ccdebee747728c1080d881d68af7b997e"}, - {file = 
"pyyaml-6.0.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:652cb6edd41e718550aad172851962662ff2681490a8a711af6a4d288dd96824"}, - {file = "pyyaml-6.0.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:10892704fc220243f5305762e276552a0395f7beb4dbf9b14ec8fd43b57f126c"}, - {file = "pyyaml-6.0.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:850774a7879607d3a6f50d36d04f00ee69e7fc816450e5f7e58d7f17f1ae5c00"}, - {file = "pyyaml-6.0.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b8bb0864c5a28024fac8a632c443c87c5aa6f215c0b126c449ae1a150412f31d"}, - {file = "pyyaml-6.0.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1d37d57ad971609cf3c53ba6a7e365e40660e3be0e5175fa9f2365a379d6095a"}, - {file = "pyyaml-6.0.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:37503bfbfc9d2c40b344d06b2199cf0e96e97957ab1c1b546fd4f87e53e5d3e4"}, - {file = "pyyaml-6.0.3-cp311-cp311-win32.whl", hash = "sha256:8098f252adfa6c80ab48096053f512f2321f0b998f98150cea9bd23d83e1467b"}, - {file = "pyyaml-6.0.3-cp311-cp311-win_amd64.whl", hash = "sha256:9f3bfb4965eb874431221a3ff3fdcddc7e74e3b07799e0e84ca4a0f867d449bf"}, - {file = "pyyaml-6.0.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:7f047e29dcae44602496db43be01ad42fc6f1cc0d8cd6c83d342306c32270196"}, - {file = "pyyaml-6.0.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:fc09d0aa354569bc501d4e787133afc08552722d3ab34836a80547331bb5d4a0"}, - {file = "pyyaml-6.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9149cad251584d5fb4981be1ecde53a1ca46c891a79788c0df828d2f166bda28"}, - {file = "pyyaml-6.0.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5fdec68f91a0c6739b380c83b951e2c72ac0197ace422360e6d5a959d8d97b2c"}, - {file = 
"pyyaml-6.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ba1cc08a7ccde2d2ec775841541641e4548226580ab850948cbfda66a1befcdc"}, - {file = "pyyaml-6.0.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:8dc52c23056b9ddd46818a57b78404882310fb473d63f17b07d5c40421e47f8e"}, - {file = "pyyaml-6.0.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:41715c910c881bc081f1e8872880d3c650acf13dfa8214bad49ed4cede7c34ea"}, - {file = "pyyaml-6.0.3-cp312-cp312-win32.whl", hash = "sha256:96b533f0e99f6579b3d4d4995707cf36df9100d67e0c8303a0c55b27b5f99bc5"}, - {file = "pyyaml-6.0.3-cp312-cp312-win_amd64.whl", hash = "sha256:5fcd34e47f6e0b794d17de1b4ff496c00986e1c83f7ab2fb8fcfe9616ff7477b"}, - {file = "pyyaml-6.0.3-cp312-cp312-win_arm64.whl", hash = "sha256:64386e5e707d03a7e172c0701abfb7e10f0fb753ee1d773128192742712a98fd"}, - {file = "pyyaml-6.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8"}, - {file = "pyyaml-6.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1"}, - {file = "pyyaml-6.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c"}, - {file = "pyyaml-6.0.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5"}, - {file = "pyyaml-6.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6"}, - {file = "pyyaml-6.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6"}, - {file = "pyyaml-6.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = 
"sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be"}, - {file = "pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26"}, - {file = "pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c"}, - {file = "pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb"}, - {file = "pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac"}, - {file = "pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310"}, - {file = "pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7"}, - {file = "pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788"}, - {file = "pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5"}, - {file = "pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764"}, - {file = "pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35"}, - {file = "pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac"}, - {file = "pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3"}, - {file = "pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = 
"sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3"}, - {file = "pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba"}, - {file = "pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c"}, - {file = "pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702"}, - {file = "pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c"}, - {file = "pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065"}, - {file = "pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65"}, - {file = "pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9"}, - {file = "pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b"}, - {file = "pyyaml-6.0.3-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:b865addae83924361678b652338317d1bd7e79b1f4596f96b96c77a5a34b34da"}, - {file = "pyyaml-6.0.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c3355370a2c156cffb25e876646f149d5d68f5e0a3ce86a5084dd0b64a994917"}, - {file = "pyyaml-6.0.3-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3c5677e12444c15717b902a5798264fa7909e41153cdf9ef7ad571b704a63dd9"}, - {file = "pyyaml-6.0.3-cp39-cp39-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = 
"sha256:5ed875a24292240029e4483f9d4a4b8a1ae08843b9c54f43fcc11e404532a8a5"}, - {file = "pyyaml-6.0.3-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0150219816b6a1fa26fb4699fb7daa9caf09eb1999f3b70fb6e786805e80375a"}, - {file = "pyyaml-6.0.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:fa160448684b4e94d80416c0fa4aac48967a969efe22931448d853ada8baf926"}, - {file = "pyyaml-6.0.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:27c0abcb4a5dac13684a37f76e701e054692a9b2d3064b70f5e4eb54810553d7"}, - {file = "pyyaml-6.0.3-cp39-cp39-win32.whl", hash = "sha256:1ebe39cb5fc479422b83de611d14e2c0d3bb2a18bbcb01f229ab3cfbd8fee7a0"}, - {file = "pyyaml-6.0.3-cp39-cp39-win_amd64.whl", hash = "sha256:2e71d11abed7344e42a8849600193d15b6def118602c4c176f748e4583246007"}, - {file = "pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f"}, -] - -[[package]] -name = "referencing" -version = "0.36.2" -description = "JSON Referencing + Python" -optional = false -python-versions = ">=3.9" -files = [ - {file = "referencing-0.36.2-py3-none-any.whl", hash = "sha256:e8699adbbf8b5c7de96d8ffa0eb5c158b3beafce084968e2ea8bb08c6794dcd0"}, - {file = "referencing-0.36.2.tar.gz", hash = "sha256:df2e89862cd09deabbdba16944cc3f10feb6b3e6f18e902f7cc25609a34775aa"}, -] - -[package.dependencies] -attrs = ">=22.2.0" -rpds-py = ">=0.7.0" -typing-extensions = {version = ">=4.4.0", markers = "python_version < \"3.13\""} - -[[package]] -name = "requests" -version = "2.32.5" -description = "Python HTTP for Humans." 
-optional = false -python-versions = ">=3.9" -files = [ - {file = "requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6"}, - {file = "requests-2.32.5.tar.gz", hash = "sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf"}, -] - -[package.dependencies] -certifi = ">=2017.4.17" -charset_normalizer = ">=2,<4" -idna = ">=2.5,<4" -urllib3 = ">=1.21.1,<3" - -[package.extras] -socks = ["PySocks (>=1.5.6,!=1.5.7)"] -use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"] - -[[package]] -name = "rpds-py" -version = "0.27.1" -description = "Python bindings to Rust's persistent data structures (rpds)" -optional = false -python-versions = ">=3.9" -files = [ - {file = "rpds_py-0.27.1-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:68afeec26d42ab3b47e541b272166a0b4400313946871cba3ed3a4fc0cab1cef"}, - {file = "rpds_py-0.27.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:74e5b2f7bb6fa38b1b10546d27acbacf2a022a8b5543efb06cfebc72a59c85be"}, - {file = "rpds_py-0.27.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9024de74731df54546fab0bfbcdb49fae19159ecaecfc8f37c18d2c7e2c0bd61"}, - {file = "rpds_py-0.27.1-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:31d3ebadefcd73b73928ed0b2fd696f7fefda8629229f81929ac9c1854d0cffb"}, - {file = "rpds_py-0.27.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b2e7f8f169d775dd9092a1743768d771f1d1300453ddfe6325ae3ab5332b4657"}, - {file = "rpds_py-0.27.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3d905d16f77eb6ab2e324e09bfa277b4c8e5e6b8a78a3e7ff8f3cdf773b4c013"}, - {file = "rpds_py-0.27.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:50c946f048209e6362e22576baea09193809f87687a95a8db24e5fbdb307b93a"}, - {file = "rpds_py-0.27.1-cp310-cp310-manylinux_2_31_riscv64.whl", hash = 
"sha256:3deab27804d65cd8289eb814c2c0e807c4b9d9916c9225e363cb0cf875eb67c1"}, - {file = "rpds_py-0.27.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8b61097f7488de4be8244c89915da8ed212832ccf1e7c7753a25a394bf9b1f10"}, - {file = "rpds_py-0.27.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:8a3f29aba6e2d7d90528d3c792555a93497fe6538aa65eb675b44505be747808"}, - {file = "rpds_py-0.27.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:dd6cd0485b7d347304067153a6dc1d73f7d4fd995a396ef32a24d24b8ac63ac8"}, - {file = "rpds_py-0.27.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:6f4461bf931108c9fa226ffb0e257c1b18dc2d44cd72b125bec50ee0ab1248a9"}, - {file = "rpds_py-0.27.1-cp310-cp310-win32.whl", hash = "sha256:ee5422d7fb21f6a00c1901bf6559c49fee13a5159d0288320737bbf6585bd3e4"}, - {file = "rpds_py-0.27.1-cp310-cp310-win_amd64.whl", hash = "sha256:3e039aabf6d5f83c745d5f9a0a381d031e9ed871967c0a5c38d201aca41f3ba1"}, - {file = "rpds_py-0.27.1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:be898f271f851f68b318872ce6ebebbc62f303b654e43bf72683dbdc25b7c881"}, - {file = "rpds_py-0.27.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:62ac3d4e3e07b58ee0ddecd71d6ce3b1637de2d373501412df395a0ec5f9beb5"}, - {file = "rpds_py-0.27.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4708c5c0ceb2d034f9991623631d3d23cb16e65c83736ea020cdbe28d57c0a0e"}, - {file = "rpds_py-0.27.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:abfa1171a9952d2e0002aba2ad3780820b00cc3d9c98c6630f2e93271501f66c"}, - {file = "rpds_py-0.27.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4b507d19f817ebaca79574b16eb2ae412e5c0835542c93fe9983f1e432aca195"}, - {file = "rpds_py-0.27.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:168b025f8fd8d8d10957405f3fdcef3dc20f5982d398f90851f4abc58c566c52"}, - {file = 
"rpds_py-0.27.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cb56c6210ef77caa58e16e8c17d35c63fe3f5b60fd9ba9d424470c3400bcf9ed"}, - {file = "rpds_py-0.27.1-cp311-cp311-manylinux_2_31_riscv64.whl", hash = "sha256:d252f2d8ca0195faa707f8eb9368955760880b2b42a8ee16d382bf5dd807f89a"}, - {file = "rpds_py-0.27.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6e5e54da1e74b91dbc7996b56640f79b195d5925c2b78efaa8c5d53e1d88edde"}, - {file = "rpds_py-0.27.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:ffce0481cc6e95e5b3f0a47ee17ffbd234399e6d532f394c8dce320c3b089c21"}, - {file = "rpds_py-0.27.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:a205fdfe55c90c2cd8e540ca9ceba65cbe6629b443bc05db1f590a3db8189ff9"}, - {file = "rpds_py-0.27.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:689fb5200a749db0415b092972e8eba85847c23885c8543a8b0f5c009b1a5948"}, - {file = "rpds_py-0.27.1-cp311-cp311-win32.whl", hash = "sha256:3182af66048c00a075010bc7f4860f33913528a4b6fc09094a6e7598e462fe39"}, - {file = "rpds_py-0.27.1-cp311-cp311-win_amd64.whl", hash = "sha256:b4938466c6b257b2f5c4ff98acd8128ec36b5059e5c8f8372d79316b1c36bb15"}, - {file = "rpds_py-0.27.1-cp311-cp311-win_arm64.whl", hash = "sha256:2f57af9b4d0793e53266ee4325535a31ba48e2f875da81a9177c9926dfa60746"}, - {file = "rpds_py-0.27.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:ae2775c1973e3c30316892737b91f9283f9908e3cc7625b9331271eaaed7dc90"}, - {file = "rpds_py-0.27.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2643400120f55c8a96f7c9d858f7be0c88d383cd4653ae2cf0d0c88f668073e5"}, - {file = "rpds_py-0.27.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:16323f674c089b0360674a4abd28d5042947d54ba620f72514d69be4ff64845e"}, - {file = "rpds_py-0.27.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:9a1f4814b65eacac94a00fc9a526e3fdafd78e439469644032032d0d63de4881"}, - {file = 
"rpds_py-0.27.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7ba32c16b064267b22f1850a34051121d423b6f7338a12b9459550eb2096e7ec"}, - {file = "rpds_py-0.27.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e5c20f33fd10485b80f65e800bbe5f6785af510b9f4056c5a3c612ebc83ba6cb"}, - {file = "rpds_py-0.27.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:466bfe65bd932da36ff279ddd92de56b042f2266d752719beb97b08526268ec5"}, - {file = "rpds_py-0.27.1-cp312-cp312-manylinux_2_31_riscv64.whl", hash = "sha256:41e532bbdcb57c92ba3be62c42e9f096431b4cf478da9bc3bc6ce5c38ab7ba7a"}, - {file = "rpds_py-0.27.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f149826d742b406579466283769a8ea448eed82a789af0ed17b0cd5770433444"}, - {file = "rpds_py-0.27.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:80c60cfb5310677bd67cb1e85a1e8eb52e12529545441b43e6f14d90b878775a"}, - {file = "rpds_py-0.27.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:7ee6521b9baf06085f62ba9c7a3e5becffbc32480d2f1b351559c001c38ce4c1"}, - {file = "rpds_py-0.27.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:a512c8263249a9d68cac08b05dd59d2b3f2061d99b322813cbcc14c3c7421998"}, - {file = "rpds_py-0.27.1-cp312-cp312-win32.whl", hash = "sha256:819064fa048ba01b6dadc5116f3ac48610435ac9a0058bbde98e569f9e785c39"}, - {file = "rpds_py-0.27.1-cp312-cp312-win_amd64.whl", hash = "sha256:d9199717881f13c32c4046a15f024971a3b78ad4ea029e8da6b86e5aa9cf4594"}, - {file = "rpds_py-0.27.1-cp312-cp312-win_arm64.whl", hash = "sha256:33aa65b97826a0e885ef6e278fbd934e98cdcfed80b63946025f01e2f5b29502"}, - {file = "rpds_py-0.27.1-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:e4b9fcfbc021633863a37e92571d6f91851fa656f0180246e84cbd8b3f6b329b"}, - {file = "rpds_py-0.27.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:1441811a96eadca93c517d08df75de45e5ffe68aa3089924f963c782c4b898cf"}, - {file = 
"rpds_py-0.27.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:55266dafa22e672f5a4f65019015f90336ed31c6383bd53f5e7826d21a0e0b83"}, - {file = "rpds_py-0.27.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d78827d7ac08627ea2c8e02c9e5b41180ea5ea1f747e9db0915e3adf36b62dcf"}, - {file = "rpds_py-0.27.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ae92443798a40a92dc5f0b01d8a7c93adde0c4dc965310a29ae7c64d72b9fad2"}, - {file = "rpds_py-0.27.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c46c9dd2403b66a2a3b9720ec4b74d4ab49d4fabf9f03dfdce2d42af913fe8d0"}, - {file = "rpds_py-0.27.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2efe4eb1d01b7f5f1939f4ef30ecea6c6b3521eec451fb93191bf84b2a522418"}, - {file = "rpds_py-0.27.1-cp313-cp313-manylinux_2_31_riscv64.whl", hash = "sha256:15d3b4d83582d10c601f481eca29c3f138d44c92187d197aff663a269197c02d"}, - {file = "rpds_py-0.27.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:4ed2e16abbc982a169d30d1a420274a709949e2cbdef119fe2ec9d870b42f274"}, - {file = "rpds_py-0.27.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a75f305c9b013289121ec0f1181931975df78738cdf650093e6b86d74aa7d8dd"}, - {file = "rpds_py-0.27.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:67ce7620704745881a3d4b0ada80ab4d99df390838839921f99e63c474f82cf2"}, - {file = "rpds_py-0.27.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9d992ac10eb86d9b6f369647b6a3f412fc0075cfd5d799530e84d335e440a002"}, - {file = "rpds_py-0.27.1-cp313-cp313-win32.whl", hash = "sha256:4f75e4bd8ab8db624e02c8e2fc4063021b58becdbe6df793a8111d9343aec1e3"}, - {file = "rpds_py-0.27.1-cp313-cp313-win_amd64.whl", hash = "sha256:f9025faafc62ed0b75a53e541895ca272815bec18abe2249ff6501c8f2e12b83"}, - {file = "rpds_py-0.27.1-cp313-cp313-win_arm64.whl", hash = 
"sha256:ed10dc32829e7d222b7d3b93136d25a406ba9788f6a7ebf6809092da1f4d279d"}, - {file = "rpds_py-0.27.1-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:92022bbbad0d4426e616815b16bc4127f83c9a74940e1ccf3cfe0b387aba0228"}, - {file = "rpds_py-0.27.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:47162fdab9407ec3f160805ac3e154df042e577dd53341745fc7fb3f625e6d92"}, - {file = "rpds_py-0.27.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb89bec23fddc489e5d78b550a7b773557c9ab58b7946154a10a6f7a214a48b2"}, - {file = "rpds_py-0.27.1-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e48af21883ded2b3e9eb48cb7880ad8598b31ab752ff3be6457001d78f416723"}, - {file = "rpds_py-0.27.1-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6f5b7bd8e219ed50299e58551a410b64daafb5017d54bbe822e003856f06a802"}, - {file = "rpds_py-0.27.1-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:08f1e20bccf73b08d12d804d6e1c22ca5530e71659e6673bce31a6bb71c1e73f"}, - {file = "rpds_py-0.27.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0dc5dceeaefcc96dc192e3a80bbe1d6c410c469e97bdd47494a7d930987f18b2"}, - {file = "rpds_py-0.27.1-cp313-cp313t-manylinux_2_31_riscv64.whl", hash = "sha256:d76f9cc8665acdc0c9177043746775aa7babbf479b5520b78ae4002d889f5c21"}, - {file = "rpds_py-0.27.1-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:134fae0e36022edad8290a6661edf40c023562964efea0cc0ec7f5d392d2aaef"}, - {file = "rpds_py-0.27.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:eb11a4f1b2b63337cfd3b4d110af778a59aae51c81d195768e353d8b52f88081"}, - {file = "rpds_py-0.27.1-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:13e608ac9f50a0ed4faec0e90ece76ae33b34c0e8656e3dceb9a7db994c692cd"}, - {file = "rpds_py-0.27.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:dd2135527aa40f061350c3f8f89da2644de26cd73e4de458e79606384f4f68e7"}, - {file = 
"rpds_py-0.27.1-cp313-cp313t-win32.whl", hash = "sha256:3020724ade63fe320a972e2ffd93b5623227e684315adce194941167fee02688"}, - {file = "rpds_py-0.27.1-cp313-cp313t-win_amd64.whl", hash = "sha256:8ee50c3e41739886606388ba3ab3ee2aae9f35fb23f833091833255a31740797"}, - {file = "rpds_py-0.27.1-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:acb9aafccaae278f449d9c713b64a9e68662e7799dbd5859e2c6b3c67b56d334"}, - {file = "rpds_py-0.27.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:b7fb801aa7f845ddf601c49630deeeccde7ce10065561d92729bfe81bd21fb33"}, - {file = "rpds_py-0.27.1-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fe0dd05afb46597b9a2e11c351e5e4283c741237e7f617ffb3252780cca9336a"}, - {file = "rpds_py-0.27.1-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:b6dfb0e058adb12d8b1d1b25f686e94ffa65d9995a5157afe99743bf7369d62b"}, - {file = "rpds_py-0.27.1-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ed090ccd235f6fa8bb5861684567f0a83e04f52dfc2e5c05f2e4b1309fcf85e7"}, - {file = "rpds_py-0.27.1-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bf876e79763eecf3e7356f157540d6a093cef395b65514f17a356f62af6cc136"}, - {file = "rpds_py-0.27.1-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:12ed005216a51b1d6e2b02a7bd31885fe317e45897de81d86dcce7d74618ffff"}, - {file = "rpds_py-0.27.1-cp314-cp314-manylinux_2_31_riscv64.whl", hash = "sha256:ee4308f409a40e50593c7e3bb8cbe0b4d4c66d1674a316324f0c2f5383b486f9"}, - {file = "rpds_py-0.27.1-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0b08d152555acf1f455154d498ca855618c1378ec810646fcd7c76416ac6dc60"}, - {file = "rpds_py-0.27.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:dce51c828941973a5684d458214d3a36fcd28da3e1875d659388f4f9f12cc33e"}, - {file = "rpds_py-0.27.1-cp314-cp314-musllinux_1_2_i686.whl", hash = 
"sha256:c1476d6f29eb81aa4151c9a31219b03f1f798dc43d8af1250a870735516a1212"}, - {file = "rpds_py-0.27.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:3ce0cac322b0d69b63c9cdb895ee1b65805ec9ffad37639f291dd79467bee675"}, - {file = "rpds_py-0.27.1-cp314-cp314-win32.whl", hash = "sha256:dfbfac137d2a3d0725758cd141f878bf4329ba25e34979797c89474a89a8a3a3"}, - {file = "rpds_py-0.27.1-cp314-cp314-win_amd64.whl", hash = "sha256:a6e57b0abfe7cc513450fcf529eb486b6e4d3f8aee83e92eb5f1ef848218d456"}, - {file = "rpds_py-0.27.1-cp314-cp314-win_arm64.whl", hash = "sha256:faf8d146f3d476abfee026c4ae3bdd9ca14236ae4e4c310cbd1cf75ba33d24a3"}, - {file = "rpds_py-0.27.1-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:ba81d2b56b6d4911ce735aad0a1d4495e808b8ee4dc58715998741a26874e7c2"}, - {file = "rpds_py-0.27.1-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:84f7d509870098de0e864cad0102711c1e24e9b1a50ee713b65928adb22269e4"}, - {file = "rpds_py-0.27.1-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a9e960fc78fecd1100539f14132425e1d5fe44ecb9239f8f27f079962021523e"}, - {file = "rpds_py-0.27.1-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:62f85b665cedab1a503747617393573995dac4600ff51869d69ad2f39eb5e817"}, - {file = "rpds_py-0.27.1-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fed467af29776f6556250c9ed85ea5a4dd121ab56a5f8b206e3e7a4c551e48ec"}, - {file = "rpds_py-0.27.1-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f2729615f9d430af0ae6b36cf042cb55c0936408d543fb691e1a9e36648fd35a"}, - {file = "rpds_py-0.27.1-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b207d881a9aef7ba753d69c123a35d96ca7cb808056998f6b9e8747321f03b8"}, - {file = "rpds_py-0.27.1-cp314-cp314t-manylinux_2_31_riscv64.whl", hash = "sha256:639fd5efec029f99b79ae47e5d7e00ad8a773da899b6309f6786ecaf22948c48"}, - {file = 
"rpds_py-0.27.1-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:fecc80cb2a90e28af8a9b366edacf33d7a91cbfe4c2c4544ea1246e949cfebeb"}, - {file = "rpds_py-0.27.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:42a89282d711711d0a62d6f57d81aa43a1368686c45bc1c46b7f079d55692734"}, - {file = "rpds_py-0.27.1-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:cf9931f14223de59551ab9d38ed18d92f14f055a5f78c1d8ad6493f735021bbb"}, - {file = "rpds_py-0.27.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:f39f58a27cc6e59f432b568ed8429c7e1641324fbe38131de852cd77b2d534b0"}, - {file = "rpds_py-0.27.1-cp314-cp314t-win32.whl", hash = "sha256:d5fa0ee122dc09e23607a28e6d7b150da16c662e66409bbe85230e4c85bb528a"}, - {file = "rpds_py-0.27.1-cp314-cp314t-win_amd64.whl", hash = "sha256:6567d2bb951e21232c2f660c24cf3470bb96de56cdcb3f071a83feeaff8a2772"}, - {file = "rpds_py-0.27.1-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:c918c65ec2e42c2a78d19f18c553d77319119bf43aa9e2edf7fb78d624355527"}, - {file = "rpds_py-0.27.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1fea2b1a922c47c51fd07d656324531adc787e415c8b116530a1d29c0516c62d"}, - {file = "rpds_py-0.27.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bbf94c58e8e0cd6b6f38d8de67acae41b3a515c26169366ab58bdca4a6883bb8"}, - {file = "rpds_py-0.27.1-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c2a8fed130ce946d5c585eddc7c8eeef0051f58ac80a8ee43bd17835c144c2cc"}, - {file = "rpds_py-0.27.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:037a2361db72ee98d829bc2c5b7cc55598ae0a5e0ec1823a56ea99374cfd73c1"}, - {file = "rpds_py-0.27.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5281ed1cc1d49882f9997981c88df1a22e140ab41df19071222f7e5fc4e72125"}, - {file = "rpds_py-0.27.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:2fd50659a069c15eef8aa3d64bbef0d69fd27bb4a50c9ab4f17f83a16cbf8905"}, - {file = "rpds_py-0.27.1-cp39-cp39-manylinux_2_31_riscv64.whl", hash = "sha256:c4b676c4ae3921649a15d28ed10025548e9b561ded473aa413af749503c6737e"}, - {file = "rpds_py-0.27.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:079bc583a26db831a985c5257797b2b5d3affb0386e7ff886256762f82113b5e"}, - {file = "rpds_py-0.27.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:4e44099bd522cba71a2c6b97f68e19f40e7d85399de899d66cdb67b32d7cb786"}, - {file = "rpds_py-0.27.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:e202e6d4188e53c6661af813b46c37ca2c45e497fc558bacc1a7630ec2695aec"}, - {file = "rpds_py-0.27.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:f41f814b8eaa48768d1bb551591f6ba45f87ac76899453e8ccd41dba1289b04b"}, - {file = "rpds_py-0.27.1-cp39-cp39-win32.whl", hash = "sha256:9e71f5a087ead99563c11fdaceee83ee982fd39cf67601f4fd66cb386336ee52"}, - {file = "rpds_py-0.27.1-cp39-cp39-win_amd64.whl", hash = "sha256:71108900c9c3c8590697244b9519017a400d9ba26a36c48381b3f64743a44aab"}, - {file = "rpds_py-0.27.1-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:7ba22cb9693df986033b91ae1d7a979bc399237d45fccf875b76f62bb9e52ddf"}, - {file = "rpds_py-0.27.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:5b640501be9288c77738b5492b3fd3abc4ba95c50c2e41273c8a1459f08298d3"}, - {file = "rpds_py-0.27.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb08b65b93e0c6dd70aac7f7890a9c0938d5ec71d5cb32d45cf844fb8ae47636"}, - {file = "rpds_py-0.27.1-pp310-pypy310_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d7ff07d696a7a38152ebdb8212ca9e5baab56656749f3d6004b34ab726b550b8"}, - {file = "rpds_py-0.27.1-pp310-pypy310_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fb7c72262deae25366e3b6c0c0ba46007967aea15d1eea746e44ddba8ec58dcc"}, - {file = 
"rpds_py-0.27.1-pp310-pypy310_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7b002cab05d6339716b03a4a3a2ce26737f6231d7b523f339fa061d53368c9d8"}, - {file = "rpds_py-0.27.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23f6b69d1c26c4704fec01311963a41d7de3ee0570a84ebde4d544e5a1859ffc"}, - {file = "rpds_py-0.27.1-pp310-pypy310_pp73-manylinux_2_31_riscv64.whl", hash = "sha256:530064db9146b247351f2a0250b8f00b289accea4596a033e94be2389977de71"}, - {file = "rpds_py-0.27.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:7b90b0496570bd6b0321724a330d8b545827c4df2034b6ddfc5f5275f55da2ad"}, - {file = "rpds_py-0.27.1-pp310-pypy310_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:879b0e14a2da6a1102a3fc8af580fc1ead37e6d6692a781bd8c83da37429b5ab"}, - {file = "rpds_py-0.27.1-pp310-pypy310_pp73-musllinux_1_2_i686.whl", hash = "sha256:0d807710df3b5faa66c731afa162ea29717ab3be17bdc15f90f2d9f183da4059"}, - {file = "rpds_py-0.27.1-pp310-pypy310_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:3adc388fc3afb6540aec081fa59e6e0d3908722771aa1e37ffe22b220a436f0b"}, - {file = "rpds_py-0.27.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:c796c0c1cc68cb08b0284db4229f5af76168172670c74908fdbd4b7d7f515819"}, - {file = "rpds_py-0.27.1-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:cdfe4bb2f9fe7458b7453ad3c33e726d6d1c7c0a72960bcc23800d77384e42df"}, - {file = "rpds_py-0.27.1-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:8fabb8fd848a5f75a2324e4a84501ee3a5e3c78d8603f83475441866e60b94a3"}, - {file = "rpds_py-0.27.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eda8719d598f2f7f3e0f885cba8646644b55a187762bec091fa14a2b819746a9"}, - {file = "rpds_py-0.27.1-pp311-pypy311_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:3c64d07e95606ec402a0a1c511fe003873fa6af630bda59bac77fac8b4318ebc"}, - {file = 
"rpds_py-0.27.1-pp311-pypy311_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:93a2ed40de81bcff59aabebb626562d48332f3d028ca2036f1d23cbb52750be4"}, - {file = "rpds_py-0.27.1-pp311-pypy311_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:387ce8c44ae94e0ec50532d9cb0edce17311024c9794eb196b90e1058aadeb66"}, - {file = "rpds_py-0.27.1-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aaf94f812c95b5e60ebaf8bfb1898a7d7cb9c1af5744d4a67fa47796e0465d4e"}, - {file = "rpds_py-0.27.1-pp311-pypy311_pp73-manylinux_2_31_riscv64.whl", hash = "sha256:4848ca84d6ded9b58e474dfdbad4b8bfb450344c0551ddc8d958bf4b36aa837c"}, - {file = "rpds_py-0.27.1-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2bde09cbcf2248b73c7c323be49b280180ff39fadcfe04e7b6f54a678d02a7cf"}, - {file = "rpds_py-0.27.1-pp311-pypy311_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:94c44ee01fd21c9058f124d2d4f0c9dc7634bec93cd4b38eefc385dabe71acbf"}, - {file = "rpds_py-0.27.1-pp311-pypy311_pp73-musllinux_1_2_i686.whl", hash = "sha256:df8b74962e35c9249425d90144e721eed198e6555a0e22a563d29fe4486b51f6"}, - {file = "rpds_py-0.27.1-pp311-pypy311_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:dc23e6820e3b40847e2f4a7726462ba0cf53089512abe9ee16318c366494c17a"}, - {file = "rpds_py-0.27.1-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:aa8933159edc50be265ed22b401125c9eebff3171f570258854dbce3ecd55475"}, - {file = "rpds_py-0.27.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:a50431bf02583e21bf273c71b89d710e7a710ad5e39c725b14e685610555926f"}, - {file = "rpds_py-0.27.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78af06ddc7fe5cc0e967085a9115accee665fb912c22a3f54bad70cc65b05fe6"}, - {file = "rpds_py-0.27.1-pp39-pypy39_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:70d0738ef8fee13c003b100c2fbd667ec4f133468109b3472d249231108283a3"}, - {file = 
"rpds_py-0.27.1-pp39-pypy39_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e2f6fd8a1cea5bbe599b6e78a6e5ee08db434fc8ffea51ff201c8765679698b3"}, - {file = "rpds_py-0.27.1-pp39-pypy39_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8177002868d1426305bb5de1e138161c2ec9eb2d939be38291d7c431c4712df8"}, - {file = "rpds_py-0.27.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:008b839781d6c9bf3b6a8984d1d8e56f0ec46dc56df61fd669c49b58ae800400"}, - {file = "rpds_py-0.27.1-pp39-pypy39_pp73-manylinux_2_31_riscv64.whl", hash = "sha256:a55b9132bb1ade6c734ddd2759c8dc132aa63687d259e725221f106b83a0e485"}, - {file = "rpds_py-0.27.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a46fdec0083a26415f11d5f236b79fa1291c32aaa4a17684d82f7017a1f818b1"}, - {file = "rpds_py-0.27.1-pp39-pypy39_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:8a63b640a7845f2bdd232eb0d0a4a2dd939bcdd6c57e6bb134526487f3160ec5"}, - {file = "rpds_py-0.27.1-pp39-pypy39_pp73-musllinux_1_2_i686.whl", hash = "sha256:7e32721e5d4922deaaf963469d795d5bde6093207c52fec719bd22e5d1bedbc4"}, - {file = "rpds_py-0.27.1-pp39-pypy39_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:2c426b99a068601b5f4623573df7a7c3d72e87533a2dd2253353a03e7502566c"}, - {file = "rpds_py-0.27.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4fc9b7fe29478824361ead6e14e4f5aed570d477e06088826537e202d25fe859"}, - {file = "rpds_py-0.27.1.tar.gz", hash = "sha256:26a1c73171d10b7acccbded82bf6a586ab8203601e565badc74bbbf8bc5a10f8"}, -] - -[[package]] -name = "s3transfer" -version = "0.14.0" -description = "An Amazon S3 Transfer Manager" -optional = false -python-versions = ">=3.9" -files = [ - {file = "s3transfer-0.14.0-py3-none-any.whl", hash = "sha256:ea3b790c7077558ed1f02a3072fb3cb992bbbd253392f4b6e9e8976941c7d456"}, - {file = "s3transfer-0.14.0.tar.gz", hash = "sha256:eff12264e7c8b4985074ccce27a3b38a485bb7f7422cc8046fee9be4983e4125"}, -] - -[package.dependencies] 
-botocore = ">=1.37.4,<2.0a.0" - -[package.extras] -crt = ["botocore[crt] (>=1.37.4,<2.0a.0)"] - -[[package]] -name = "six" -version = "1.17.0" -description = "Python 2 and 3 compatibility utilities" -optional = false -python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7" -files = [ - {file = "six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274"}, - {file = "six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81"}, -] - -[[package]] -name = "sniffio" -version = "1.3.1" -description = "Sniff out which async library your code is running under" -optional = false -python-versions = ">=3.7" -files = [ - {file = "sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2"}, - {file = "sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc"}, -] - -[[package]] -name = "sse-starlette" -version = "3.0.2" -description = "SSE plugin for Starlette" -optional = false -python-versions = ">=3.9" -files = [ - {file = "sse_starlette-3.0.2-py3-none-any.whl", hash = "sha256:16b7cbfddbcd4eaca11f7b586f3b8a080f1afe952c15813455b162edea619e5a"}, - {file = "sse_starlette-3.0.2.tar.gz", hash = "sha256:ccd60b5765ebb3584d0de2d7a6e4f745672581de4f5005ab31c3a25d10b52b3a"}, -] - -[package.dependencies] -anyio = ">=4.7.0" - -[package.extras] -daphne = ["daphne (>=4.2.0)"] -examples = ["aiosqlite (>=0.21.0)", "fastapi (>=0.115.12)", "sqlalchemy[asyncio] (>=2.0.41)", "starlette (>=0.41.3)", "uvicorn (>=0.34.0)"] -granian = ["granian (>=2.3.1)"] -uvicorn = ["uvicorn (>=0.34.0)"] - -[[package]] -name = "stack-data" -version = "0.6.3" -description = "Extract data from python stack frames and tracebacks for informative displays" -optional = false -python-versions = "*" -files = [ - {file = "stack_data-0.6.3-py3-none-any.whl", hash = 
"sha256:d5558e0c25a4cb0853cddad3d77da9891a08cb85dd9f9f91b9f8cd66e511e695"}, - {file = "stack_data-0.6.3.tar.gz", hash = "sha256:836a778de4fec4dcd1dcd89ed8abff8a221f58308462e1c4aa2a3cf30148f0b9"}, -] - -[package.dependencies] -asttokens = ">=2.1.0" -executing = ">=1.2.0" -pure-eval = "*" - -[package.extras] -tests = ["cython", "littleutils", "pygments", "pytest", "typeguard"] - -[[package]] -name = "starlette" -version = "0.48.0" -description = "The little ASGI library that shines." -optional = false -python-versions = ">=3.9" -files = [ - {file = "starlette-0.48.0-py3-none-any.whl", hash = "sha256:0764ca97b097582558ecb498132ed0c7d942f233f365b86ba37770e026510659"}, - {file = "starlette-0.48.0.tar.gz", hash = "sha256:7e8cee469a8ab2352911528110ce9088fdc6a37d9876926e73da7ce4aa4c7a46"}, -] - -[package.dependencies] -anyio = ">=3.6.2,<5" -typing-extensions = {version = ">=4.10.0", markers = "python_version < \"3.13\""} - -[package.extras] -full = ["httpx (>=0.27.0,<0.29.0)", "itsdangerous", "jinja2", "python-multipart (>=0.0.18)", "pyyaml"] - -[[package]] -name = "tomli" -version = "2.2.1" -description = "A lil' TOML parser" -optional = false -python-versions = ">=3.8" -files = [ - {file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"}, - {file = "tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6"}, - {file = "tomli-2.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ece47d672db52ac607a3d9599a9d48dcb2f2f735c6c2d1f34130085bb12b112a"}, - {file = "tomli-2.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6972ca9c9cc9f0acaa56a8ca1ff51e7af152a9f87fb64623e31d5c83700080ee"}, - {file = "tomli-2.2.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:c954d2250168d28797dd4e3ac5cf812a406cd5a92674ee4c8f123c889786aa8e"}, - {file = "tomli-2.2.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8dd28b3e155b80f4d54beb40a441d366adcfe740969820caf156c019fb5c7ec4"}, - {file = "tomli-2.2.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:e59e304978767a54663af13c07b3d1af22ddee3bb2fb0618ca1593e4f593a106"}, - {file = "tomli-2.2.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:33580bccab0338d00994d7f16f4c4ec25b776af3ffaac1ed74e0b3fc95e885a8"}, - {file = "tomli-2.2.1-cp311-cp311-win32.whl", hash = "sha256:465af0e0875402f1d226519c9904f37254b3045fc5084697cefb9bdde1ff99ff"}, - {file = "tomli-2.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:2d0f2fdd22b02c6d81637a3c95f8cd77f995846af7414c5c4b8d0545afa1bc4b"}, - {file = "tomli-2.2.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:4a8f6e44de52d5e6c657c9fe83b562f5f4256d8ebbfe4ff922c495620a7f6cea"}, - {file = "tomli-2.2.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8d57ca8095a641b8237d5b079147646153d22552f1c637fd3ba7f4b0b29167a8"}, - {file = "tomli-2.2.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e340144ad7ae1533cb897d406382b4b6fede8890a03738ff1683af800d54192"}, - {file = "tomli-2.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db2b95f9de79181805df90bedc5a5ab4c165e6ec3fe99f970d0e302f384ad222"}, - {file = "tomli-2.2.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40741994320b232529c802f8bc86da4e1aa9f413db394617b9a256ae0f9a7f77"}, - {file = "tomli-2.2.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:400e720fe168c0f8521520190686ef8ef033fb19fc493da09779e592861b78c6"}, - {file = "tomli-2.2.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:02abe224de6ae62c19f090f68da4e27b10af2b93213d36cf44e6e1c5abd19fdd"}, - {file = "tomli-2.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = 
"sha256:b82ebccc8c8a36f2094e969560a1b836758481f3dc360ce9a3277c65f374285e"}, - {file = "tomli-2.2.1-cp312-cp312-win32.whl", hash = "sha256:889f80ef92701b9dbb224e49ec87c645ce5df3fa2cc548664eb8a25e03127a98"}, - {file = "tomli-2.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:7fc04e92e1d624a4a63c76474610238576942d6b8950a2d7f908a340494e67e4"}, - {file = "tomli-2.2.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f4039b9cbc3048b2416cc57ab3bda989a6fcf9b36cf8937f01a6e731b64f80d7"}, - {file = "tomli-2.2.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:286f0ca2ffeeb5b9bd4fcc8d6c330534323ec51b2f52da063b11c502da16f30c"}, - {file = "tomli-2.2.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a92ef1a44547e894e2a17d24e7557a5e85a9e1d0048b0b5e7541f76c5032cb13"}, - {file = "tomli-2.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9316dc65bed1684c9a98ee68759ceaed29d229e985297003e494aa825ebb0281"}, - {file = "tomli-2.2.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e85e99945e688e32d5a35c1ff38ed0b3f41f43fad8df0bdf79f72b2ba7bc5272"}, - {file = "tomli-2.2.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ac065718db92ca818f8d6141b5f66369833d4a80a9d74435a268c52bdfa73140"}, - {file = "tomli-2.2.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:d920f33822747519673ee656a4b6ac33e382eca9d331c87770faa3eef562aeb2"}, - {file = "tomli-2.2.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a198f10c4d1b1375d7687bc25294306e551bf1abfa4eace6650070a5c1ae2744"}, - {file = "tomli-2.2.1-cp313-cp313-win32.whl", hash = "sha256:d3f5614314d758649ab2ab3a62d4f2004c825922f9e370b29416484086b264ec"}, - {file = "tomli-2.2.1-cp313-cp313-win_amd64.whl", hash = "sha256:a38aa0308e754b0e3c67e344754dff64999ff9b513e691d0e786265c93583c69"}, - {file = "tomli-2.2.1-py3-none-any.whl", hash = "sha256:cb55c73c5f4408779d0cf3eef9f762b9c9f147a77de7b258bef0a5628adc85cc"}, - {file = 
"tomli-2.2.1.tar.gz", hash = "sha256:cd45e1dc79c835ce60f7404ec8119f2eb06d38b1deba146f07ced3bbc44505ff"}, -] - -[[package]] -name = "tqdm" -version = "4.67.1" -description = "Fast, Extensible Progress Meter" -optional = false -python-versions = ">=3.7" -files = [ - {file = "tqdm-4.67.1-py3-none-any.whl", hash = "sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2"}, - {file = "tqdm-4.67.1.tar.gz", hash = "sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2"}, -] - -[package.dependencies] -colorama = {version = "*", markers = "platform_system == \"Windows\""} - -[package.extras] -dev = ["nbval", "pytest (>=6)", "pytest-asyncio (>=0.24)", "pytest-cov", "pytest-timeout"] -discord = ["requests"] -notebook = ["ipywidgets (>=6)"] -slack = ["slack-sdk"] -telegram = ["requests"] - -[[package]] -name = "traitlets" -version = "5.14.3" -description = "Traitlets Python configuration system" -optional = false -python-versions = ">=3.8" -files = [ - {file = "traitlets-5.14.3-py3-none-any.whl", hash = "sha256:b74e89e397b1ed28cc831db7aea759ba6640cb3de13090ca145426688ff1ac4f"}, - {file = "traitlets-5.14.3.tar.gz", hash = "sha256:9ed0579d3502c94b4b3732ac120375cda96f923114522847de4b3bb98b96b6b7"}, -] - -[package.extras] -docs = ["myst-parser", "pydata-sphinx-theme", "sphinx"] -test = ["argcomplete (>=3.0.3)", "mypy (>=1.7.0)", "pre-commit", "pytest (>=7.0,<8.2)", "pytest-mock", "pytest-mypy-testing"] - -[[package]] -name = "typing-extensions" -version = "4.15.0" -description = "Backported and Experimental Type Hints for Python 3.9+" -optional = false -python-versions = ">=3.9" -files = [ - {file = "typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548"}, - {file = "typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466"}, -] - -[[package]] -name = "typing-inspection" -version = "0.4.2" -description = "Runtime typing 
introspection tools" -optional = false -python-versions = ">=3.9" -files = [ - {file = "typing_inspection-0.4.2-py3-none-any.whl", hash = "sha256:4ed1cacbdc298c220f1bd249ed5287caa16f34d44ef4e9c3d0cbad5b521545e7"}, - {file = "typing_inspection-0.4.2.tar.gz", hash = "sha256:ba561c48a67c5958007083d386c3295464928b01faa735ab8547c5692e87f464"}, -] - -[package.dependencies] -typing-extensions = ">=4.12.0" - -[[package]] -name = "urllib3" -version = "2.5.0" -description = "HTTP library with thread-safe connection pooling, file post, and more." -optional = false -python-versions = ">=3.9" -files = [ - {file = "urllib3-2.5.0-py3-none-any.whl", hash = "sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc"}, - {file = "urllib3-2.5.0.tar.gz", hash = "sha256:3fc47733c7e419d4bc3f6b3dc2b4f890bb743906a30d56ba4a5bfa4bbff92760"}, -] - -[package.extras] -brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"] -h2 = ["h2 (>=4,<5)"] -socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"] -zstd = ["zstandard (>=0.18.0)"] - -[[package]] -name = "uvicorn" -version = "0.37.0" -description = "The lightning-fast ASGI server." 
-optional = false -python-versions = ">=3.9" -files = [ - {file = "uvicorn-0.37.0-py3-none-any.whl", hash = "sha256:913b2b88672343739927ce381ff9e2ad62541f9f8289664fa1d1d3803fa2ce6c"}, - {file = "uvicorn-0.37.0.tar.gz", hash = "sha256:4115c8add6d3fd536c8ee77f0e14a7fd2ebba939fed9b02583a97f80648f9e13"}, -] - -[package.dependencies] -click = ">=7.0" -h11 = ">=0.8" -typing-extensions = {version = ">=4.0", markers = "python_version < \"3.11\""} - -[package.extras] -standard = ["colorama (>=0.4)", "httptools (>=0.6.3)", "python-dotenv (>=0.13)", "pyyaml (>=5.1)", "uvloop (>=0.15.1)", "watchfiles (>=0.13)", "websockets (>=10.4)"] - -[[package]] -name = "vellum-ai" -version = "1.7.3" -description = "" -optional = false -python-versions = "<4.0,>=3.9" -files = [ - {file = "vellum_ai-1.7.3-py3-none-any.whl", hash = "sha256:8ee01062acf12610fc62067169d3e3fe7a71cb17da62c5774e7adf0ce805b4db"}, - {file = "vellum_ai-1.7.3.tar.gz", hash = "sha256:516546ba90d0f4d8071c519666b4f6fa6f8def2dd7c0ed8795a7d8f4705211f9"}, -] - -[package.dependencies] -click = ">=8.1.7,<9.0.0" -docker = ">=7.1.0,<8.0.0" -httpx = ">=0.21.2" -Jinja2 = ">=3.1.0,<4.0.0" -openai = ">=1.0.0,<2.0.0" -orderly-set = ">=5.2.2,<6.0.0" -publication = "0.0.3" -pydantic = ">=1.9.2" -pydantic-core = ">=2.18.2" -pydash = ">=7.0.0,<8.0.0" -python-dateutil = ">=2.8.0,<3.0.0" -python-dotenv = ">=1.0.0,<2.0.0" -pytz = ">=2022.0,<2026.0" -pyyaml = ">=6.0.0,<7.0.0" -requests = ">=2.31.0,<3.0.0" -tomli = ">=2.0.0,<3.0.0" -typing_extensions = ">=4.0.0" - -[[package]] -name = "wcwidth" -version = "0.2.14" -description = "Measures the displayed width of unicode strings in a terminal" -optional = false -python-versions = ">=3.6" -files = [ - {file = "wcwidth-0.2.14-py2.py3-none-any.whl", hash = "sha256:a7bb560c8aee30f9957e5f9895805edd20602f2d7f720186dfd906e82b4982e1"}, - {file = "wcwidth-0.2.14.tar.gz", hash = "sha256:4d478375d31bc5395a3c55c40ccdf3354688364cd61c4f6adacaa9215d0b3605"}, -] - -[metadata] -lock-version = "2.0" 
-python-versions = ">=3.10,<4.0" -content-hash = "7598b3fa4fbabe8b0e0559691ce14bf976c57627ed732987fb5549e45193b139" diff --git a/examples/workflows/pyproject.toml b/examples/workflows/pyproject.toml deleted file mode 100644 index 3181c277f3..0000000000 --- a/examples/workflows/pyproject.toml +++ /dev/null @@ -1,41 +0,0 @@ -[tool.poetry] -name = "vellum-example-workflows" -version = "0.1.0" -description = "A list of ready to use examples to jump start your Vellum Workflow development experience" -authors = ["Vellum "] -license = "MIT" -readme = "README.md" - -[tool.poetry.dependencies] -python = ">=3.10,<4.0" -vellum-ai = "1.7.3" -boto3 = "^1.38.4" -mcp = "^1.9.4" - -[tool.poetry.group.dev.dependencies] -ipdb = "^0.13.13" - -[[tool.vellum.workflows]] -module = "custom_base_node" -container_image_name = "sdk-examples-utils" -container_image_tag = "1.0.3" - -[[tool.vellum.workflows]] -module = "custom_prompt_node" -container_image_name = "sdk-examples-utils" -container_image_tag = "1.0.3" - -[[tool.vellum.workflows]] -module = "mcp_demo" -container_image_name = "sdk-examples-utils" -container_image_tag = "1.0.9" - -[[tool.vellum.workspaces]] -name = "aws-staging" -api_url = "AWS_STAGING_VELLUM_API_URL" -api_key = "AWS_STAGING_VELLUM_API_KEY" - - -[build-system] -requires = ["poetry-core"] -build-backend = "poetry.core.masonry.api" diff --git a/examples/workflows/re_act_agent/__init__.py b/examples/workflows/re_act_agent/__init__.py deleted file mode 100644 index 9442617720..0000000000 --- a/examples/workflows/re_act_agent/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# flake8: noqa: F401, F403 - -from .display import * diff --git a/examples/workflows/re_act_agent/display/__init__.py b/examples/workflows/re_act_agent/display/__init__.py deleted file mode 100644 index d38fb6d6a9..0000000000 --- a/examples/workflows/re_act_agent/display/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# flake8: noqa: F401, F403 - -from .nodes import * -from .workflow import * diff --git 
a/examples/workflows/re_act_agent/display/nodes/__init__.py b/examples/workflows/re_act_agent/display/nodes/__init__.py deleted file mode 100644 index 37135792ba..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -from .accumulate_chat_history import AccumulateChatHistoryDisplay -from .agent_node import AgentNodeDisplay -from .agent_response import AgentResponseDisplay -from .final_accumulation_of_chat_history import FinalAccumulationOfChatHistoryDisplay -from .full_chat_history_output import FullChatHistoryOutputDisplay -from .function_calls_to_json_array import FunctionCallsToJSONArrayDisplay -from .has_function_calls import HasFunctionCallsDisplay -from .invoke_functions import InvokeFunctionsDisplay -from .should_handle_functions import ShouldHandleFunctionsDisplay - -__all__ = [ - "AccumulateChatHistoryDisplay", - "AgentNodeDisplay", - "AgentResponseDisplay", - "FinalAccumulationOfChatHistoryDisplay", - "FullChatHistoryOutputDisplay", - "FunctionCallsToJSONArrayDisplay", - "HasFunctionCallsDisplay", - "InvokeFunctionsDisplay", - "ShouldHandleFunctionsDisplay", -] diff --git a/examples/workflows/re_act_agent/display/nodes/accumulate_chat_history.py b/examples/workflows/re_act_agent/display/nodes/accumulate_chat_history.py deleted file mode 100644 index 3be2e64044..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/accumulate_chat_history.py +++ /dev/null @@ -1,36 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseCodeExecutionNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.accumulate_chat_history import AccumulateChatHistory - - -class AccumulateChatHistoryDisplay(BaseCodeExecutionNodeDisplay[AccumulateChatHistory]): - label = "Accumulate Chat History" - node_id = 
UUID("58afeee4-346a-46fa-8097-91ba86682de9") - target_handle_id = UUID("556825c6-f00c-4dd1-883f-2d7ddb9286c0") - output_id = UUID("e446171c-840f-4e13-b404-92d98f745cbe") - log_output_id = UUID("56184319-1a54-4373-9cbc-b7c298d1a33c") - node_input_ids_by_name = { - "code_inputs.invoked_functions": UUID("40efed5b-3322-45f4-8bb3-15387fd0b35c"), - "code_inputs.assistant_message": UUID("0437c3cb-dbf3-4ee2-812b-7171cf3599d0"), - "code_inputs.current_chat_history": UUID("49f7dd2f-3f26-41c3-b018-cb412db7d668"), - "code": UUID("a5b4dbd2-a3cf-46d3-a74d-ed2acb1293ba"), - "runtime": UUID("8a9822ac-f11a-4fec-bbe0-c1b48870c404"), - } - output_display = { - AccumulateChatHistory.Outputs.result: NodeOutputDisplay( - id=UUID("e446171c-840f-4e13-b404-92d98f745cbe"), name="result" - ), - AccumulateChatHistory.Outputs.log: NodeOutputDisplay( - id=UUID("56184319-1a54-4373-9cbc-b7c298d1a33c"), name="log" - ), - } - port_displays = { - AccumulateChatHistory.Ports.default: PortDisplayOverrides(id=UUID("3f9d53b9-5f34-4021-8d33-18c90af361a6")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=3988.7945432054958, y=130.58593157958728), width=456, height=378 - ) diff --git a/examples/workflows/re_act_agent/display/nodes/agent_node.py b/examples/workflows/re_act_agent/display/nodes/agent_node.py deleted file mode 100644 index 62014c642a..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/agent_node.py +++ /dev/null @@ -1,29 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.agent_node import AgentNode - - -class AgentNodeDisplay(BaseInlinePromptNodeDisplay[AgentNode]): - label = "Agent Node" - node_id = UUID("5d28e493-87ff-44f2-afc7-10ece69b798d") - output_id = 
UUID("9066d1a1-6662-4e09-996f-7b92aced9b47") - array_output_id = UUID("0b0d0975-b66f-4163-b8b6-57566b3bf2b6") - target_handle_id = UUID("bd7137dc-9c30-4881-8b99-b7697b9df11b") - node_input_ids_by_name = {"prompt_inputs.chat_history": UUID("7d0d3c27-4faa-42fe-a76f-5e4b934b49e9")} - attribute_ids_by_name = {"ml_model": UUID("7869ec4b-c8a9-4780-b862-9220d897ea2b")} - output_display = { - AgentNode.Outputs.text: NodeOutputDisplay(id=UUID("9066d1a1-6662-4e09-996f-7b92aced9b47"), name="text"), - AgentNode.Outputs.results: NodeOutputDisplay(id=UUID("0b0d0975-b66f-4163-b8b6-57566b3bf2b6"), name="results"), - AgentNode.Outputs.json: NodeOutputDisplay(id=UUID("6b031f82-7927-42bf-b555-c7191c7733b4"), name="json"), - } - port_displays = {AgentNode.Ports.default: PortDisplayOverrides(id=UUID("cc9eaae2-ebe1-41d6-9faa-ac4f5bd01a2e"))} - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=690, y=271.3250297289478), - width=480, - height=283, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/re_act_agent/display/nodes/agent_response.py b/examples/workflows/re_act_agent/display/nodes/agent_response.py deleted file mode 100644 index da64d77ff1..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/agent_response.py +++ /dev/null @@ -1,24 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from ...nodes.agent_response import AgentResponse - - -class AgentResponseDisplay(BaseFinalOutputNodeDisplay[AgentResponse]): - label = "Agent Response" - node_id = UUID("b3b8ff28-ce7d-4078-a4b2-5a6ad4426758") - target_handle_id = UUID("d28743ce-0ec5-4bab-bfbb-1f0d192bcb0b") - output_name = "response" - node_input_ids_by_name = {"node_input": UUID("4da6d59e-d509-491d-b814-a76aa5dbabc9")} - output_display = { - 
AgentResponse.Outputs.value: NodeOutputDisplay(id=UUID("23f727b7-d00e-48df-8387-f1ea21e1bcb6"), name="value") - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=3406.0443370687376, y=1734.0619423765406), - width=457, - height=306, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/re_act_agent/display/nodes/final_accumulation_of_chat_history.py b/examples/workflows/re_act_agent/display/nodes/final_accumulation_of_chat_history.py deleted file mode 100644 index 490d767e56..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/final_accumulation_of_chat_history.py +++ /dev/null @@ -1,35 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseCodeExecutionNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.final_accumulation_of_chat_history import FinalAccumulationOfChatHistory - - -class FinalAccumulationOfChatHistoryDisplay(BaseCodeExecutionNodeDisplay[FinalAccumulationOfChatHistory]): - label = "Final Accumulation of Chat History" - node_id = UUID("124a993f-3e4f-4828-98a1-2740e2c9e399") - target_handle_id = UUID("a0a79d87-431b-49b4-a03a-f0e20c12d888") - output_id = UUID("f6d85d94-a58e-403d-964f-b88a44ce72f4") - log_output_id = UUID("00f82678-ed0c-4482-8e78-f314109d571e") - node_input_ids_by_name = { - "code_inputs.current_chat_history": UUID("5341a54b-f04a-4dad-bfdc-11ee9fb416bd"), - "code_inputs.assistant_message": UUID("2eb6cb8b-5065-45d7-b4a6-4faaf148b270"), - "code": UUID("b8cf9cdb-f52e-48ed-92f7-99ff1eb77760"), - "runtime": UUID("f6b3adda-f87d-443b-9a97-2a624c478a40"), - } - output_display = { - FinalAccumulationOfChatHistory.Outputs.result: NodeOutputDisplay( - id=UUID("f6d85d94-a58e-403d-964f-b88a44ce72f4"), name="result" - ), - FinalAccumulationOfChatHistory.Outputs.log: NodeOutputDisplay( - 
id=UUID("00f82678-ed0c-4482-8e78-f314109d571e"), name="log" - ), - } - port_displays = { - FinalAccumulationOfChatHistory.Ports.default: PortDisplayOverrides( - id=UUID("5f1c1a04-ec5b-449f-a2bb-c3ff93a96063") - ) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=2820, y=825), width=453, height=324) diff --git a/examples/workflows/re_act_agent/display/nodes/full_chat_history_output.py b/examples/workflows/re_act_agent/display/nodes/full_chat_history_output.py deleted file mode 100644 index 15e501370c..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/full_chat_history_output.py +++ /dev/null @@ -1,21 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from ...nodes.full_chat_history_output import FullChatHistoryOutput - - -class FullChatHistoryOutputDisplay(BaseFinalOutputNodeDisplay[FullChatHistoryOutput]): - label = "Full Chat History Output" - node_id = UUID("d0668a5a-4b7e-4dfe-842c-d041e512a996") - target_handle_id = UUID("0d45a6a1-0079-4424-9411-74d19a05d772") - output_name = "full-chat-history" - node_input_ids_by_name = {"node_input": UUID("5b2ddce3-6405-4468-948c-0eb664eda821")} - output_display = { - FullChatHistoryOutput.Outputs.value: NodeOutputDisplay( - id=UUID("b6effd4f-662d-4cae-9847-9598f3898660"), name="value" - ) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=3405, y=900), width=456, height=239) diff --git a/examples/workflows/re_act_agent/display/nodes/function_calls_to_json_array.py b/examples/workflows/re_act_agent/display/nodes/function_calls_to_json_array.py deleted file mode 100644 index f27442cba1..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/function_calls_to_json_array.py +++ /dev/null @@ -1,28 +0,0 @@ -from uuid import UUID - -from 
vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.function_calls_to_json_array import FunctionCallsToJSONArray - - -class FunctionCallsToJSONArrayDisplay(BaseTemplatingNodeDisplay[FunctionCallsToJSONArray]): - label = "Function Calls to JSON Array" - node_id = UUID("9d3bd2de-db69-4125-9c97-f481585dd2bc") - target_handle_id = UUID("b6df9c97-a4ba-459f-86cb-6564efc1b49a") - node_input_ids_by_name = { - "template": UUID("5e333970-1e58-41de-a023-9f5507c47f56"), - "inputs.prompt_outputs": UUID("edebf277-4ebb-4e8e-b813-8302df576791"), - } - output_display = { - FunctionCallsToJSONArray.Outputs.result: NodeOutputDisplay( - id=UUID("17b64233-4e00-44e2-8e9a-5b44411e99a3"), name="result" - ) - } - port_displays = { - FunctionCallsToJSONArray.Ports.default: PortDisplayOverrides(id=UUID("c028325f-d175-4b06-8bed-61f293054585")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=2723.468320846336, y=-31.27078140449322), width=457, height=229 - ) diff --git a/examples/workflows/re_act_agent/display/nodes/has_function_calls.py b/examples/workflows/re_act_agent/display/nodes/has_function_calls.py deleted file mode 100644 index 684555214e..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/has_function_calls.py +++ /dev/null @@ -1,26 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.has_function_calls import HasFunctionCalls - - -class HasFunctionCallsDisplay(BaseTemplatingNodeDisplay[HasFunctionCalls]): - label = "Has Function Calls?" 
- node_id = UUID("6ff4b3bb-5b5a-4e82-997d-4a2014cb188c") - target_handle_id = UUID("6fc3b953-c49b-45a7-a105-3b6bdf564e05") - node_input_ids_by_name = { - "template": UUID("8fbf4bb9-2a09-4a4b-80d2-0bc07527b26c"), - "inputs.output": UUID("8d2192a3-6bc9-47a6-b758-c6c4d890fc5f"), - } - output_display = { - HasFunctionCalls.Outputs.result: NodeOutputDisplay( - id=UUID("dfc5d492-eaae-4570-b0bc-22324e5a5171"), name="result" - ) - } - port_displays = { - HasFunctionCalls.Ports.default: PortDisplayOverrides(id=UUID("d579d27d-835d-4bf9-a678-44f63de59a8c")) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=1485, y=-40.873739776839415), width=453, height=229) diff --git a/examples/workflows/re_act_agent/display/nodes/invoke_functions/__init__.py b/examples/workflows/re_act_agent/display/nodes/invoke_functions/__init__.py deleted file mode 100644 index 8ab8225391..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/invoke_functions/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# flake8: noqa: F401, F403 - -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseMapNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ....nodes.invoke_functions import InvokeFunctions -from .nodes import * -from .workflow import * - - -class InvokeFunctionsDisplay(BaseMapNodeDisplay[InvokeFunctions]): - label = "Invoke Functions" - node_id = UUID("dd0cb554-14e8-43fc-ac45-a00edc5ec1cd") - target_handle_id = UUID("628a2305-f870-4605-ac77-174c67837687") - node_input_ids_by_name = {"items": UUID("98ee5b96-8446-4cd6-90c7-2141432ff0b6")} - output_display = { - InvokeFunctions.Outputs.final_output: NodeOutputDisplay( - id=UUID("fe14414b-f4de-4f26-9339-63e3fab65ce8"), name="final-output" - ) - } - port_displays = { - InvokeFunctions.Ports.default: PortDisplayOverrides(id=UUID("837c5b83-5602-40b8-9075-20669d02de4c")) - 
} - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=3315.2439889794405, y=-104.74211423787813), width=None, height=None - ) diff --git a/examples/workflows/re_act_agent/display/nodes/invoke_functions/nodes/__init__.py b/examples/workflows/re_act_agent/display/nodes/invoke_functions/nodes/__init__.py deleted file mode 100644 index d2a52ea0d2..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/invoke_functions/nodes/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .final_output import FinalOutputDisplay -from .function_result_context import FunctionResultContextDisplay -from .invoke_function_s_w_code import InvokeFunctionSWCodeDisplay - -__all__ = [ - "FinalOutputDisplay", - "FunctionResultContextDisplay", - "InvokeFunctionSWCodeDisplay", -] diff --git a/examples/workflows/re_act_agent/display/nodes/invoke_functions/nodes/final_output.py b/examples/workflows/re_act_agent/display/nodes/invoke_functions/nodes/final_output.py deleted file mode 100644 index b4dfb1548d..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/invoke_functions/nodes/final_output.py +++ /dev/null @@ -1,19 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from .....nodes.invoke_functions.nodes.final_output import FinalOutput - - -class FinalOutputDisplay(BaseFinalOutputNodeDisplay[FinalOutput]): - label = "Final Output" - node_id = UUID("ef28df31-7f49-4307-8735-9f179bd92389") - target_handle_id = UUID("699a3ea9-a074-4d04-b5e1-94943adfcebf") - output_name = "final-output" - node_input_ids_by_name = {"node_input": UUID("4a1541b1-d58f-4ec1-81df-dc175e227847")} - output_display = { - FinalOutput.Outputs.value: NodeOutputDisplay(id=UUID("fe14414b-f4de-4f26-9339-63e3fab65ce8"), name="value") - } - display_data = 
NodeDisplayData(position=NodeDisplayPosition(x=1830, y=60.5), width=None, height=None) diff --git a/examples/workflows/re_act_agent/display/nodes/invoke_functions/nodes/function_result_context.py b/examples/workflows/re_act_agent/display/nodes/invoke_functions/nodes/function_result_context.py deleted file mode 100644 index 081c272d8b..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/invoke_functions/nodes/function_result_context.py +++ /dev/null @@ -1,27 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from .....nodes.invoke_functions.nodes.function_result_context import FunctionResultContext - - -class FunctionResultContextDisplay(BaseTemplatingNodeDisplay[FunctionResultContext]): - label = "Function Result + Context" - node_id = UUID("02805736-ea32-437b-92f0-bfc637208194") - target_handle_id = UUID("5140dd48-cb28-4fc3-85f3-b6d5c3d2ca1d") - node_input_ids_by_name = { - "template": UUID("67df1a2d-6388-46a4-9cd3-ffb337cd09c0"), - "inputs.item": UUID("3dad5510-c33d-40d5-8056-460d77bcb962"), - "inputs.fxn_result": UUID("9cfea597-d0ad-48e6-b538-1e0e4ec7df1f"), - } - output_display = { - FunctionResultContext.Outputs.result: NodeOutputDisplay( - id=UUID("3846c00b-97d5-4ca4-b98f-1001a769a94c"), name="result" - ) - } - port_displays = { - FunctionResultContext.Ports.default: PortDisplayOverrides(id=UUID("4d6d2327-4619-46cb-9be2-b1b92738af50")) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=1220, y=26.5), width=None, height=None) diff --git a/examples/workflows/re_act_agent/display/nodes/invoke_functions/nodes/invoke_function_s_w_code.py b/examples/workflows/re_act_agent/display/nodes/invoke_functions/nodes/invoke_function_s_w_code.py deleted file mode 100644 index 789076bd4f..0000000000 --- 
a/examples/workflows/re_act_agent/display/nodes/invoke_functions/nodes/invoke_function_s_w_code.py +++ /dev/null @@ -1,32 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseCodeExecutionNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from .....nodes.invoke_functions.nodes.invoke_function_s_w_code import InvokeFunctionSWCode - - -class InvokeFunctionSWCodeDisplay(BaseCodeExecutionNodeDisplay[InvokeFunctionSWCode]): - label = "Invoke Function(s) w/ Code" - node_id = UUID("afc576f7-4075-4093-8f03-3690269bd5dd") - target_handle_id = UUID("c2aa8438-f7ad-4ec8-bde7-3405fda310db") - output_id = UUID("3a9030eb-c061-40e4-96ce-db7b776b88a4") - log_output_id = UUID("c80a1ece-b349-41d0-81c2-8171de0859d2") - node_input_ids_by_name = { - "code_inputs.fn_call": UUID("72474d8f-5626-47ff-8832-948d5ab27417"), - "code": UUID("f1e76b3e-b925-4d7c-8faf-9161413b9411"), - "runtime": UUID("3a3a2f3d-02c5-4d41-8fc0-994cb87fe263"), - } - output_display = { - InvokeFunctionSWCode.Outputs.result: NodeOutputDisplay( - id=UUID("3a9030eb-c061-40e4-96ce-db7b776b88a4"), name="result" - ), - InvokeFunctionSWCode.Outputs.log: NodeOutputDisplay( - id=UUID("c80a1ece-b349-41d0-81c2-8171de0859d2"), name="log" - ), - } - port_displays = { - InvokeFunctionSWCode.Ports.default: PortDisplayOverrides(id=UUID("e6c1ff26-3f3e-4167-a5ee-9043385824f8")) - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=610, y=0), width=None, height=None) diff --git a/examples/workflows/re_act_agent/display/nodes/invoke_functions/workflow.py b/examples/workflows/re_act_agent/display/nodes/invoke_functions/workflow.py deleted file mode 100644 index 1d490b98bb..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/invoke_functions/workflow.py +++ /dev/null @@ -1,54 +0,0 @@ -from uuid import UUID - -from 
vellum_ee.workflows.display.base import ( - EdgeDisplay, - EntrypointDisplay, - WorkflowDisplayData, - WorkflowDisplayDataViewport, - WorkflowInputsDisplay, - WorkflowMetaDisplay, - WorkflowOutputDisplay, -) -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.workflows import BaseWorkflowDisplay - -from ....nodes.invoke_functions.inputs import Inputs -from ....nodes.invoke_functions.nodes.final_output import FinalOutput -from ....nodes.invoke_functions.nodes.function_result_context import FunctionResultContext -from ....nodes.invoke_functions.nodes.invoke_function_s_w_code import InvokeFunctionSWCode -from ....nodes.invoke_functions.workflow import InvokeFunctionsWorkflow - - -class InvokeFunctionsWorkflowDisplay(BaseWorkflowDisplay[InvokeFunctionsWorkflow]): - workflow_display = WorkflowMetaDisplay( - entrypoint_node_id=UUID("82475913-3e24-4995-aa56-275cd8264944"), - entrypoint_node_source_handle_id=UUID("5790a065-973f-4482-8cba-7f2ae7b91d29"), - entrypoint_node_display=NodeDisplayData(position=NodeDisplayPosition(x=178, y=223.5), width=None, height=None), - display_data=WorkflowDisplayData( - viewport=WorkflowDisplayDataViewport(x=-10.090909090909065, y=177.69981471260743, zoom=0.39383426634077107) - ), - ) - inputs_display = { - Inputs.index: WorkflowInputsDisplay(id=UUID("5e23c5c2-077b-4fe9-b223-9f3d3d374057"), name="index"), - Inputs.items: WorkflowInputsDisplay(id=UUID("98ee5b96-8446-4cd6-90c7-2141432ff0b6"), name="items"), - Inputs.item: WorkflowInputsDisplay(id=UUID("81372cd4-6664-4144-9b80-e5fd86ab2960"), name="item"), - } - entrypoint_displays = { - InvokeFunctionSWCode: EntrypointDisplay( - id=UUID("82475913-3e24-4995-aa56-275cd8264944"), - edge_display=EdgeDisplay(id=UUID("bd2edba7-4191-4380-b5f5-8fe0c37e6112")), - ) - } - edge_displays = { - (InvokeFunctionSWCode.Ports.default, FunctionResultContext): EdgeDisplay( - id=UUID("54d7ea45-8951-47fd-958e-f3af0d5c98f6") - ), - 
(FunctionResultContext.Ports.default, FinalOutput): EdgeDisplay( - id=UUID("d197a876-184d-4838-98f5-badfc537272d") - ), - } - output_displays = { - InvokeFunctionsWorkflow.Outputs.final_output: WorkflowOutputDisplay( - id=UUID("fe14414b-f4de-4f26-9339-63e3fab65ce8"), name="final-output" - ) - } diff --git a/examples/workflows/re_act_agent/display/nodes/should_handle_functions.py b/examples/workflows/re_act_agent/display/nodes/should_handle_functions.py deleted file mode 100644 index 9de8635fe6..0000000000 --- a/examples/workflows/re_act_agent/display/nodes/should_handle_functions.py +++ /dev/null @@ -1,48 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseConditionalNodeDisplay -from vellum_ee.workflows.display.nodes.types import PortDisplayOverrides -from vellum_ee.workflows.display.nodes.vellum.conditional_node import ConditionId, RuleIdMap - -from ...nodes.should_handle_functions import ShouldHandleFunctions - - -class ShouldHandleFunctionsDisplay(BaseConditionalNodeDisplay[ShouldHandleFunctions]): - label = "Should Handle Functions?" 
- node_id = UUID("fc97e935-dbeb-47c3-81ec-429396f8ffd2") - target_handle_id = UUID("4ce430b1-e9a7-4147-b2db-35a90f63c2ce") - source_handle_ids = { - 0: UUID("b8533e34-12c9-4b69-bd38-b99a42704724"), - 1: UUID("689c9fd4-4d6f-4140-b3cf-d1254561034a"), - } - rule_ids = [ - RuleIdMap( - id="b57b9d43-9804-47bc-aa4d-b8c8d59fa2b0", - lhs=RuleIdMap( - id="3e1cc848-6a85-4e13-aad4-431b0ce1094d", - lhs=None, - rhs=None, - field_node_input_id="479816da-4919-40ba-aeca-09fa751bb320", - value_node_input_id="19aa9598-8fdf-4f49-830a-9288b4871f16", - ), - rhs=None, - field_node_input_id=None, - value_node_input_id=None, - ) - ] - condition_ids = [ - ConditionId(id="53a6356f-ddc7-44a4-be78-7f1e28401da7", rule_group_id="b57b9d43-9804-47bc-aa4d-b8c8d59fa2b0"), - ConditionId(id="bdb467e5-3e36-4613-9b9f-884a2133b939", rule_group_id=None), - ] - node_input_ids_by_name = { - "de990b44-3c5f-44bc-8867-52ee16471a47.field": UUID("479816da-4919-40ba-aeca-09fa751bb320"), - "de990b44-3c5f-44bc-8867-52ee16471a47.value": UUID("19aa9598-8fdf-4f49-830a-9288b4871f16"), - } - port_displays = { - ShouldHandleFunctions.Ports.branch_1: PortDisplayOverrides(id=UUID("b8533e34-12c9-4b69-bd38-b99a42704724")), - ShouldHandleFunctions.Ports.branch_2: PortDisplayOverrides(id=UUID("689c9fd4-4d6f-4140-b3cf-d1254561034a")), - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=2132.186427579077, y=-17.02717503525355), width=448, height=185 - ) diff --git a/examples/workflows/re_act_agent/display/workflow.py b/examples/workflows/re_act_agent/display/workflow.py deleted file mode 100644 index f1a4ddf267..0000000000 --- a/examples/workflows/re_act_agent/display/workflow.py +++ /dev/null @@ -1,80 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.base import ( - EdgeDisplay, - EntrypointDisplay, - WorkflowDisplayData, - WorkflowDisplayDataViewport, - WorkflowInputsDisplay, - WorkflowMetaDisplay, - WorkflowOutputDisplay, -) -from vellum_ee.workflows.display.editor import NodeDisplayData, 
NodeDisplayPosition -from vellum_ee.workflows.display.workflows import BaseWorkflowDisplay - -from ..inputs import Inputs -from ..nodes.accumulate_chat_history import AccumulateChatHistory -from ..nodes.agent_node import AgentNode -from ..nodes.agent_response import AgentResponse -from ..nodes.final_accumulation_of_chat_history import FinalAccumulationOfChatHistory -from ..nodes.full_chat_history_output import FullChatHistoryOutput -from ..nodes.function_calls_to_json_array import FunctionCallsToJSONArray -from ..nodes.has_function_calls import HasFunctionCalls -from ..nodes.invoke_functions import InvokeFunctions -from ..nodes.should_handle_functions import ShouldHandleFunctions -from ..workflow import Workflow - - -class WorkflowDisplay(BaseWorkflowDisplay[Workflow]): - workflow_display = WorkflowMetaDisplay( - entrypoint_node_id=UUID("1f17313d-882b-447a-abdc-fb44968e3a6f"), - entrypoint_node_source_handle_id=UUID("42f599f4-63c2-4f3b-982e-85e52b87abb0"), - entrypoint_node_display=NodeDisplayData( - position=NodeDisplayPosition(x=203.46768560224905, y=352.2267925183055), width=124, height=48 - ), - display_data=WorkflowDisplayData( - viewport=WorkflowDisplayDataViewport(x=-68.83924799891861, y=141.87308637345325, zoom=0.256956455896284) - ), - ) - inputs_display = { - Inputs.chat_history: WorkflowInputsDisplay(id=UUID("5485250c-9067-4ae0-aa02-223202b026a8"), name="chat_history") - } - entrypoint_displays = { - AgentNode: EntrypointDisplay( - id=UUID("1f17313d-882b-447a-abdc-fb44968e3a6f"), - edge_display=EdgeDisplay(id=UUID("5a761d62-9de2-4781-a5d5-de9ef52d91ed")), - ) - } - edge_displays = { - (AgentNode.Ports.default, HasFunctionCalls): EdgeDisplay(id=UUID("0ee48898-c59e-4a76-a854-4075b89f8e01")), - (HasFunctionCalls.Ports.default, ShouldHandleFunctions): EdgeDisplay( - id=UUID("7f2c620f-a2d3-4de8-88c5-4ce6ad5206fd") - ), - (ShouldHandleFunctions.Ports.branch_2, FinalAccumulationOfChatHistory): EdgeDisplay( - id=UUID("41be00b4-1050-48f4-83a1-3fce04c28339") - ), 
- (FinalAccumulationOfChatHistory.Ports.default, FullChatHistoryOutput): EdgeDisplay( - id=UUID("5929d6b5-9501-48a4-a26e-4ae4ccb23beb") - ), - (ShouldHandleFunctions.Ports.branch_1, FunctionCallsToJSONArray): EdgeDisplay( - id=UUID("cf02f0b9-b3f2-4ae1-a2f3-fc2e4c47b832") - ), - (AccumulateChatHistory.Ports.default, AgentNode): EdgeDisplay(id=UUID("c0f0755d-8b60-4013-9e8d-521219fb772d")), - (FinalAccumulationOfChatHistory.Ports.default, AgentResponse): EdgeDisplay( - id=UUID("aab8fac8-f801-4e8e-9d8b-c764f7a35843") - ), - (FunctionCallsToJSONArray.Ports.default, InvokeFunctions): EdgeDisplay( - id=UUID("4738ecfb-964a-4d7f-8ba9-0b1e1cbced28") - ), - (InvokeFunctions.Ports.default, AccumulateChatHistory): EdgeDisplay( - id=UUID("62722b88-b459-44fd-b3a0-eaa13a9b4683") - ), - } - output_displays = { - Workflow.Outputs.response: WorkflowOutputDisplay( - id=UUID("23f727b7-d00e-48df-8387-f1ea21e1bcb6"), name="response" - ), - Workflow.Outputs.full_chat_history: WorkflowOutputDisplay( - id=UUID("b6effd4f-662d-4cae-9847-9598f3898660"), name="full-chat-history" - ), - } diff --git a/examples/workflows/re_act_agent/inputs.py b/examples/workflows/re_act_agent/inputs.py deleted file mode 100644 index 48cbf257ff..0000000000 --- a/examples/workflows/re_act_agent/inputs.py +++ /dev/null @@ -1,8 +0,0 @@ -from typing import List - -from vellum import ChatMessage -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - chat_history: List[ChatMessage] diff --git a/examples/workflows/re_act_agent/nodes/__init__.py b/examples/workflows/re_act_agent/nodes/__init__.py deleted file mode 100644 index f746c8f1ac..0000000000 --- a/examples/workflows/re_act_agent/nodes/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -from .accumulate_chat_history import AccumulateChatHistory -from .agent_node import AgentNode -from .agent_response import AgentResponse -from .final_accumulation_of_chat_history import FinalAccumulationOfChatHistory -from .full_chat_history_output import 
FullChatHistoryOutput -from .function_calls_to_json_array import FunctionCallsToJSONArray -from .has_function_calls import HasFunctionCalls -from .invoke_functions import InvokeFunctions -from .should_handle_functions import ShouldHandleFunctions - -__all__ = [ - "AccumulateChatHistory", - "AgentNode", - "AgentResponse", - "FinalAccumulationOfChatHistory", - "FullChatHistoryOutput", - "FunctionCallsToJSONArray", - "HasFunctionCalls", - "InvokeFunctions", - "ShouldHandleFunctions", -] diff --git a/examples/workflows/re_act_agent/nodes/accumulate_chat_history/__init__.py b/examples/workflows/re_act_agent/nodes/accumulate_chat_history/__init__.py deleted file mode 100644 index 59c5b71a84..0000000000 --- a/examples/workflows/re_act_agent/nodes/accumulate_chat_history/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -from typing import List - -from vellum import ChatMessage -from vellum.workflows.nodes.displayable import CodeExecutionNode -from vellum.workflows.references import LazyReference -from vellum.workflows.state import BaseState - -from ...inputs import Inputs -from ..invoke_functions import InvokeFunctions - - -class AccumulateChatHistory(CodeExecutionNode[BaseState, List[ChatMessage]]): - filepath = "./script.py" - code_inputs = { - "invoked_functions": InvokeFunctions.Outputs.final_output, - "assistant_message": LazyReference("AgentNode.Outputs.results"), - "current_chat_history": LazyReference( - lambda: AccumulateChatHistory.Outputs.result.coalesce(Inputs.chat_history) - ), - } - runtime = "PYTHON_3_11_6" - packages = [] diff --git a/examples/workflows/re_act_agent/nodes/accumulate_chat_history/script.py b/examples/workflows/re_act_agent/nodes/accumulate_chat_history/script.py deleted file mode 100644 index a58fcc11f0..0000000000 --- a/examples/workflows/re_act_agent/nodes/accumulate_chat_history/script.py +++ /dev/null @@ -1,30 +0,0 @@ -import json - -def main( - invoked_functions, - assistant_message, - current_chat_history, -) -> list: - result = [ - 
*current_chat_history, - { - "role": "ASSISTANT", - "content": { - "type": "ARRAY", - "value": assistant_message - } - } - ] - - for fn in invoked_functions: - fn_result = { - "role": "FUNCTION", - "content": { - "type": "STRING", - "value": json.dumps(fn["value"]["function_result"]) - }, - "source": fn["value"]["function_context"]["tool_id"] - } - result.append(fn_result) - - return result \ No newline at end of file diff --git a/examples/workflows/re_act_agent/nodes/agent_node.py b/examples/workflows/re_act_agent/nodes/agent_node.py deleted file mode 100644 index de00982838..0000000000 --- a/examples/workflows/re_act_agent/nodes/agent_node.py +++ /dev/null @@ -1,71 +0,0 @@ -from vellum import ChatMessagePromptBlock, FunctionDefinition, JinjaPromptBlock, PromptParameters, VariablePromptBlock -from vellum.workflows.nodes.displayable import InlinePromptNode - -from ..inputs import Inputs -from .accumulate_chat_history import AccumulateChatHistory - - -class AgentNode(InlinePromptNode): - """With the streaming API, we can send all intermediary messages / progress updates to the user while the Agent thinks and performs actions.""" - - ml_model = "gpt-4o" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - JinjaPromptBlock( - template="""\ -You are a helpful support bot that helps gather product information and answer questions for the user. Answer questions factually based on the information that you\'re provided and ask clarifying questions when needed. - -Please provide your final answer in the following format: - -Overall Recommendation: [Top recommendation] - -Products Considered: -1. 
...\ -""" - ) - ], - ), - VariablePromptBlock(input_variable="chat_history"), - ] - prompt_inputs = { - "chat_history": AccumulateChatHistory.Outputs.result.coalesce(Inputs.chat_history), - } - functions = [ - FunctionDefinition( - name="get_top_products", - description="Gets the top rated home air products in the store", - parameters={ - "type": "object", - "properties": {}, - }, - ), - FunctionDefinition( - name="get_product_details", - description="Gets price and customer review information for a specified product", - parameters={ - "type": "object", - "required": [ - "product_name", - ], - "properties": { - "product_name": { - "type": "string", - "description": "The name/identifier of the product", - }, - }, - }, - ), - ] - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=1000, - top_p=1, - top_k=None, - frequency_penalty=None, - presence_penalty=None, - logit_bias=None, - custom_parameters=None, - ) diff --git a/examples/workflows/re_act_agent/nodes/agent_response.py b/examples/workflows/re_act_agent/nodes/agent_response.py deleted file mode 100644 index d169f6f14a..0000000000 --- a/examples/workflows/re_act_agent/nodes/agent_response.py +++ /dev/null @@ -1,11 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .agent_node import AgentNode - - -class AgentResponse(FinalOutputNode[BaseState, str]): - """Here we send the final response back to the user after the Agent finishes calling tools.""" - - class Outputs(FinalOutputNode.Outputs): - value = AgentNode.Outputs.text diff --git a/examples/workflows/re_act_agent/nodes/final_accumulation_of_chat_history/__init__.py b/examples/workflows/re_act_agent/nodes/final_accumulation_of_chat_history/__init__.py deleted file mode 100644 index da5a654518..0000000000 --- a/examples/workflows/re_act_agent/nodes/final_accumulation_of_chat_history/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from typing import List - -from vellum 
import ChatMessage -from vellum.workflows.nodes.displayable import CodeExecutionNode -from vellum.workflows.state import BaseState - -from ...inputs import Inputs -from ..accumulate_chat_history import AccumulateChatHistory -from ..agent_node import AgentNode - - -class FinalAccumulationOfChatHistory(CodeExecutionNode[BaseState, List[ChatMessage]]): - filepath = "./script.py" - code_inputs = { - "current_chat_history": AccumulateChatHistory.Outputs.result.coalesce(Inputs.chat_history), - "assistant_message": AgentNode.Outputs.results, - } - runtime = "PYTHON_3_11_6" - packages = [] diff --git a/examples/workflows/re_act_agent/nodes/final_accumulation_of_chat_history/script.py b/examples/workflows/re_act_agent/nodes/final_accumulation_of_chat_history/script.py deleted file mode 100644 index f6c4afc630..0000000000 --- a/examples/workflows/re_act_agent/nodes/final_accumulation_of_chat_history/script.py +++ /dev/null @@ -1,12 +0,0 @@ -def main( - current_chat_history, - assistant_message, -) -> list: - return [ - *current_chat_history, - { - "role": "ASSISTANT", - "content": assistant_message[0], - }, - ] - \ No newline at end of file diff --git a/examples/workflows/re_act_agent/nodes/full_chat_history_output.py b/examples/workflows/re_act_agent/nodes/full_chat_history_output.py deleted file mode 100644 index 20a5bc83eb..0000000000 --- a/examples/workflows/re_act_agent/nodes/full_chat_history_output.py +++ /dev/null @@ -1,12 +0,0 @@ -from typing import List - -from vellum import ChatMessage -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .final_accumulation_of_chat_history import FinalAccumulationOfChatHistory - - -class FullChatHistoryOutput(FinalOutputNode[BaseState, List[ChatMessage]]): - class Outputs(FinalOutputNode.Outputs): - value = FinalAccumulationOfChatHistory.Outputs.result diff --git a/examples/workflows/re_act_agent/nodes/function_calls_to_json_array.py 
b/examples/workflows/re_act_agent/nodes/function_calls_to_json_array.py deleted file mode 100644 index e907389add..0000000000 --- a/examples/workflows/re_act_agent/nodes/function_calls_to_json_array.py +++ /dev/null @@ -1,11 +0,0 @@ -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.references import LazyReference -from vellum.workflows.state import BaseState -from vellum.workflows.types.core import Json - - -class FunctionCallsToJSONArray(TemplatingNode[BaseState, Json]): - template = """{{- prompt_outputs | selectattr(\'type\', \'equalto\', \'FUNCTION_CALL\') | list | replace(\"\\n\",\",\") -}}""" - inputs = { - "prompt_outputs": LazyReference("AgentNode.Outputs.results"), - } diff --git a/examples/workflows/re_act_agent/nodes/has_function_calls.py b/examples/workflows/re_act_agent/nodes/has_function_calls.py deleted file mode 100644 index 8893063941..0000000000 --- a/examples/workflows/re_act_agent/nodes/has_function_calls.py +++ /dev/null @@ -1,11 +0,0 @@ -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.state import BaseState - -from .agent_node import AgentNode - - -class HasFunctionCalls(TemplatingNode[BaseState, str]): - template = """{{- output | selectattr(\'type\', \'equalto\', \'FUNCTION_CALL\') | list | length > 0 -}}""" - inputs = { - "output": AgentNode.Outputs.results, - } diff --git a/examples/workflows/re_act_agent/nodes/invoke_functions/__init__.py b/examples/workflows/re_act_agent/nodes/invoke_functions/__init__.py deleted file mode 100644 index 2486a868a1..0000000000 --- a/examples/workflows/re_act_agent/nodes/invoke_functions/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -from vellum.workflows.nodes.displayable import MapNode - -from ..function_calls_to_json_array import FunctionCallsToJSONArray -from .workflow import InvokeFunctionsWorkflow - - -class InvokeFunctions(MapNode): - items = FunctionCallsToJSONArray.Outputs.result - subworkflow = InvokeFunctionsWorkflow - 
max_concurrency = 4 diff --git a/examples/workflows/re_act_agent/nodes/invoke_functions/inputs.py b/examples/workflows/re_act_agent/nodes/invoke_functions/inputs.py deleted file mode 100644 index de758b34f7..0000000000 --- a/examples/workflows/re_act_agent/nodes/invoke_functions/inputs.py +++ /dev/null @@ -1,9 +0,0 @@ -from typing import Any, Optional, Union - -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - index: Optional[Union[float, int]] = None - items: Optional[Any] = None - item: Optional[Any] = None diff --git a/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/__init__.py b/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/__init__.py deleted file mode 100644 index 4699674417..0000000000 --- a/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .final_output import FinalOutput -from .function_result_context import FunctionResultContext -from .invoke_function_s_w_code import InvokeFunctionSWCode - -__all__ = [ - "FinalOutput", - "FunctionResultContext", - "InvokeFunctionSWCode", -] diff --git a/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/final_output.py b/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/final_output.py deleted file mode 100644 index db716db3e2..0000000000 --- a/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/final_output.py +++ /dev/null @@ -1,11 +0,0 @@ -from typing import Any - -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .function_result_context import FunctionResultContext - - -class FinalOutput(FinalOutputNode[BaseState, Any]): - class Outputs(FinalOutputNode.Outputs): - value = FunctionResultContext.Outputs.result diff --git a/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/function_result_context.py b/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/function_result_context.py deleted 
file mode 100644 index d884bb15bd..0000000000 --- a/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/function_result_context.py +++ /dev/null @@ -1,23 +0,0 @@ -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.state import BaseState -from vellum.workflows.types.core import Json - -from ..inputs import Inputs -from .invoke_function_s_w_code import InvokeFunctionSWCode - - -class FunctionResultContext(TemplatingNode[BaseState, Json]): - template = """\ -{ - \"function_context\": { - \"name\": \"{{ item.value.name }}\", - \"tool_id\": \"{{ item.value.id }}\", - \"args\": {{ item.value.arguments | tojson }} - }, - \"function_result\": {{ fxn_result | tojson }} -}\ -""" - inputs = { - "item": Inputs.item, - "fxn_result": InvokeFunctionSWCode.Outputs.result, - } diff --git a/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/invoke_function_s_w_code/__init__.py b/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/invoke_function_s_w_code/__init__.py deleted file mode 100644 index ea9b8239af..0000000000 --- a/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/invoke_function_s_w_code/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from typing import Any - -from vellum.workflows.nodes.displayable import CodeExecutionNode -from vellum.workflows.state import BaseState - -from ...inputs import Inputs - - -class InvokeFunctionSWCode(CodeExecutionNode[BaseState, Any]): - filepath = "./script.py" - code_inputs = { - "fn_call": Inputs.item, - } - runtime = "PYTHON_3_11_6" - packages = [] diff --git a/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/invoke_function_s_w_code/script.py b/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/invoke_function_s_w_code/script.py deleted file mode 100644 index 92f035c6ea..0000000000 --- a/examples/workflows/re_act_agent/nodes/invoke_functions/nodes/invoke_function_s_w_code/script.py +++ /dev/null @@ -1,68 +0,0 @@ -def main(fn_call): - 
function_name = fn_call["value"]["name"] - args = fn_call["value"]["arguments"] - - if function_name == "get_top_products": - return get_top_products(args) - elif function_name == "get_product_details": - return get_product_details(args) - else: - raise Exception("Invalid function name") - -def get_top_products(args): - return [ - "Blueridge BMY922C", - "Blueridge BMM1822-6W-6W", - "Blueridge BMKH1824/O" - ] - -def get_product_details(args): - product_name = args["product_name"] - return { - "Blueridge BMY922C": { - "price": "$439.00", - "num_reviews": 625, - "avg_rating": 5, - "reviews": [ - { - "title": "Super quiet unit - inside and out!", - "desc": "I've agonized over putting a mini-split in for a long time, as I heat and cool my office above the garage separately, turning off the house most of the day. Let me tell you - this unit is DEAD QUIET inside and out - no compressor noise outside, just a gentle hum of the fan." - }, - { - "title": "Not too bad for the price", - "desc": "Over all the product is not too bad. I do not think the average homeowner can install these by themselves. I found the instructions very vague and incomplete. After reading them I simply threw them away and installed as I have other units of the same type. I think they will suffice for what I needed" - } - ] - }, - "Blueridge BMM1822-6W-6W": { - "price": "$1989.00", - "num_reviews": 303, - "avg_rating": 4.9, - "reviews": [ - { - "title": "Great Unit - Not without its quirks.", - "desc": "I waited to provide my comments as I wanted to use the product for several months prior to review. I'm happy to say that I still provided a 5-star review. I purchased a 4 unit system, 2 wall mounts and 2 ceiling mounts. I live in NH and have been using the system exclusively for heat since I got it" - }, - { - "title": "Very efficient and good company to work with. Only one issu", - "desc": "It cools our rooms effortlessly and keeps the temperature at a constant level. 
They are quiet with unfortunately the one unit over our bed. It creaks, and I was advised it was probably due to the contraction and expansion of the materials. There only solution was to loosen some screws, but it is ..." - } - ] - }, - "Blueridge BMKH1824/O": { - "price": "$949.00", - "num_reviews": 110, - "avg_rating": 4.5, # it's actually 4.9 but using 4.5 for variation - "reviews": [ - { - "title": "Running great", - "desc": "Finally finished the install, the ceiling fan makes more noise than the Blueridge cooling unit, really quiet and seems to cool out Master bedroom easily. My wife is very happy." - }, - { - "title": "Quiet, efficient, and literally took control of our cooling.", - "desc": "Install this 12,000btu unit with a 30.5 SEER rating with a bit of skepticism. All that was put to rest as the unit has basically taken control of cooling our 1200 sq ft house and our main unit only come on for about 5-6 hours a day even on the hottest and my muggiest days over 100 this summer. Keeps our house cool at 73-74 degrees all day long and is extremely quiet. Easy to install. I am a DIYer and had it done in 6.5 hours and had to go through a brick wall to exit the lines from the house. Easily done in a day with a helper. All in all, very pleased with the unit and how it has performed." 
- } - ] - } - }[product_name] - diff --git a/examples/workflows/re_act_agent/nodes/invoke_functions/workflow.py b/examples/workflows/re_act_agent/nodes/invoke_functions/workflow.py deleted file mode 100644 index 779a44ef28..0000000000 --- a/examples/workflows/re_act_agent/nodes/invoke_functions/workflow.py +++ /dev/null @@ -1,14 +0,0 @@ -from vellum.workflows import BaseWorkflow -from vellum.workflows.state import BaseState - -from .inputs import Inputs -from .nodes.final_output import FinalOutput -from .nodes.function_result_context import FunctionResultContext -from .nodes.invoke_function_s_w_code import InvokeFunctionSWCode - - -class InvokeFunctionsWorkflow(BaseWorkflow[Inputs, BaseState]): - graph = InvokeFunctionSWCode >> FunctionResultContext >> FinalOutput - - class Outputs(BaseWorkflow.Outputs): - final_output = FinalOutput.Outputs.value diff --git a/examples/workflows/re_act_agent/nodes/should_handle_functions.py b/examples/workflows/re_act_agent/nodes/should_handle_functions.py deleted file mode 100644 index c9aaaf8bed..0000000000 --- a/examples/workflows/re_act_agent/nodes/should_handle_functions.py +++ /dev/null @@ -1,9 +0,0 @@ -from vellum.workflows.nodes.displayable import ConditionalNode -from vellum.workflows.ports import Port -from vellum.workflows.references import LazyReference - - -class ShouldHandleFunctions(ConditionalNode): - class Ports(ConditionalNode.Ports): - branch_1 = Port.on_if(LazyReference("HasFunctionCalls.Outputs.result").equals("True")) - branch_2 = Port.on_else() diff --git a/examples/workflows/re_act_agent/sandbox.py b/examples/workflows/re_act_agent/sandbox.py deleted file mode 100644 index 3f81d077ff..0000000000 --- a/examples/workflows/re_act_agent/sandbox.py +++ /dev/null @@ -1,32 +0,0 @@ -from vellum import ArrayChatMessageContent, ChatMessage, StringChatMessageContent -from vellum.workflows.sandbox import WorkflowSandboxRunner - -from .inputs import Inputs -from .workflow import Workflow - -if __name__ != "__main__": - 
raise Exception("This file is not meant to be imported") - - -runner = WorkflowSandboxRunner( - workflow=Workflow(), - inputs=[ - Inputs( - chat_history=[ - ChatMessage( - role="USER", - text="Can you compare the price and ratings of the top home air conditioning / mini-split products?", - content=ArrayChatMessageContent( - value=[ - StringChatMessageContent( - value="Can you compare the price and ratings of the top home air conditioning / mini-split products?" - ), - ] - ), - ), - ] - ), - ], -) - -runner.run() diff --git a/examples/workflows/re_act_agent/workflow.py b/examples/workflows/re_act_agent/workflow.py deleted file mode 100644 index 686355d91f..0000000000 --- a/examples/workflows/re_act_agent/workflow.py +++ /dev/null @@ -1,37 +0,0 @@ -from vellum.workflows import BaseWorkflow -from vellum.workflows.state import BaseState - -from .inputs import Inputs -from .nodes.accumulate_chat_history import AccumulateChatHistory -from .nodes.agent_node import AgentNode -from .nodes.agent_response import AgentResponse -from .nodes.final_accumulation_of_chat_history import FinalAccumulationOfChatHistory -from .nodes.full_chat_history_output import FullChatHistoryOutput -from .nodes.function_calls_to_json_array import FunctionCallsToJSONArray -from .nodes.has_function_calls import HasFunctionCalls -from .nodes.invoke_functions import InvokeFunctions -from .nodes.should_handle_functions import ShouldHandleFunctions - - -class Workflow(BaseWorkflow[Inputs, BaseState]): - graph = ( - AgentNode - >> HasFunctionCalls - >> { - ShouldHandleFunctions.Ports.branch_1 - >> FunctionCallsToJSONArray - >> InvokeFunctions - >> AccumulateChatHistory - >> AgentNode, - ShouldHandleFunctions.Ports.branch_2 - >> FinalAccumulationOfChatHistory - >> { - FullChatHistoryOutput, - AgentResponse, - }, - } - ) - - class Outputs(BaseWorkflow.Outputs): - response = AgentResponse.Outputs.value - full_chat_history = FullChatHistoryOutput.Outputs.value diff --git 
a/examples/workflows/recent-workflow-doc-creator/README.md b/examples/workflows/recent-workflow-doc-creator/README.md deleted file mode 100644 index 473ce52d70..0000000000 --- a/examples/workflows/recent-workflow-doc-creator/README.md +++ /dev/null @@ -1,60 +0,0 @@ -# Recent Workflow Doc Creator - -A scheduled workflow that automatically creates GitHub pull requests to document recently deployed Vellum workflows. - -## Overview - -This workflow runs on a daily schedule (10:00 AM Denver time) and: - -1. **Fetches Recent Deployments**: Queries the Vellum API to find all workflow deployments created in the last 24 hours. - -2. **Processes Each Deployment**: Uses a MapNode to process each deployment concurrently (up to 4 at a time): - - Fetches the workflow code using the Vellum pull API - - Uses an AI agent with GitHub tools to create a PR - -3. **Outputs Results**: Returns PR URLs, processing counts, and raw deployment data. - -## Notable Features & Patterns - -### 1. Scheduled Trigger -```python -class Scheduled(ScheduleTrigger): - class Config(ScheduleTrigger.Config): - cron = "0 10 * * *" - timezone = "America/Denver" -``` -Demonstrates how to set up cron-based scheduling with timezone support. - -### 2. MapNode for Parallel Processing -```python -class ProcessDeployments(MapNode): - items = FetchRecentDeployments.Outputs.deployments - subworkflow = ProcessDeploymentsWorkflow - max_concurrency = 4 -``` -Shows how to use MapNode to process a list of items in parallel with concurrency limits. - -### 3. ToolCallingNode with External Integrations -```python -class GitHubAgent(ToolCallingNode): - ml_model = "claude-opus-4-5-20251101" - functions = [ - VellumIntegrationToolDefinition( - provider="COMPOSIO", - integration_name="GITHUB", - name="GITHUB_CREATE_A_PULL_REQUEST", - ... - ), - ] -``` -Demonstrates using `ToolCallingNode` with Composio integrations to give an AI agent access to GitHub API operations. - -### 4. 
Nested Subworkflows -The `ProcessDeploymentsWorkflow` is a complete subworkflow with its own inputs, nodes, and outputs, demonstrating how to compose complex workflows from smaller, reusable pieces. - -### 5. Custom BaseNode with API Calls -```python -class FetchRecentDeployments(BaseNode): - def run(self) -> Outputs: - client = self._context.vellum_client - # ... paginated AP diff --git a/examples/workflows/reflection_agent/__init__.py b/examples/workflows/reflection_agent/__init__.py deleted file mode 100644 index 9442617720..0000000000 --- a/examples/workflows/reflection_agent/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# flake8: noqa: F401, F403 - -from .display import * diff --git a/examples/workflows/reflection_agent/display/__init__.py b/examples/workflows/reflection_agent/display/__init__.py deleted file mode 100644 index d38fb6d6a9..0000000000 --- a/examples/workflows/reflection_agent/display/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# flake8: noqa: F401, F403 - -from .nodes import * -from .workflow import * diff --git a/examples/workflows/reflection_agent/display/nodes/__init__.py b/examples/workflows/reflection_agent/display/nodes/__init__.py deleted file mode 100644 index e1b8c32ff3..0000000000 --- a/examples/workflows/reflection_agent/display/nodes/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -from .add_agent_message_to_chat_history import AddAgentMessageToChatHistoryDisplay -from .add_evaluator_message_to_chat_history import AddEvaluatorMessageToChatHistoryDisplay -from .error_node import ErrorNodeDisplay -from .evaluator_agent import EvaluatorAgentDisplay -from .extract_status import ExtractStatusDisplay -from .final_output import FinalOutputDisplay -from .needs_revision import NeedsRevisionDisplay -from .note import NoteDisplay -from .problem_solver_agent import ProblemSolverAgentDisplay - -__all__ = [ - "AddAgentMessageToChatHistoryDisplay", - "AddEvaluatorMessageToChatHistoryDisplay", - "ErrorNodeDisplay", - "EvaluatorAgentDisplay", - 
"ExtractStatusDisplay", - "FinalOutputDisplay", - "NeedsRevisionDisplay", - "NoteDisplay", - "ProblemSolverAgentDisplay", -] diff --git a/examples/workflows/reflection_agent/display/nodes/add_agent_message_to_chat_history.py b/examples/workflows/reflection_agent/display/nodes/add_agent_message_to_chat_history.py deleted file mode 100644 index 0e226a4024..0000000000 --- a/examples/workflows/reflection_agent/display/nodes/add_agent_message_to_chat_history.py +++ /dev/null @@ -1,31 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.add_agent_message_to_chat_history import AddAgentMessageToChatHistory - - -class AddAgentMessageToChatHistoryDisplay(BaseTemplatingNodeDisplay[AddAgentMessageToChatHistory]): - label = "Add Agent Message to Chat History" - node_id = UUID("910d3a9e-2027-480b-89f2-5da266418462") - target_handle_id = UUID("36118211-0008-42cc-823e-b35747e39ac5") - node_input_ids_by_name = { - "inputs.chat_history": UUID("6b197002-97c0-46f7-b250-a313e32ef33c"), - "template": UUID("90c2edb9-9311-4a9d-9c41-32c0345c7028"), - "inputs.message": UUID("0f55f816-6a3d-4438-9d56-f5bff0518e27"), - } - output_display = { - AddAgentMessageToChatHistory.Outputs.result: NodeOutputDisplay( - id=UUID("7293b667-9be4-4812-9e55-4b8f980cdf12"), name="result" - ) - } - port_displays = { - AddAgentMessageToChatHistory.Ports.default: PortDisplayOverrides( - id=UUID("ef6b79df-89dd-4c5a-b3fe-1bba8dc98fc3") - ) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=2025.2963688579166, y=329.8405550495836), width=459, height=283 - ) diff --git a/examples/workflows/reflection_agent/display/nodes/add_evaluator_message_to_chat_history.py b/examples/workflows/reflection_agent/display/nodes/add_evaluator_message_to_chat_history.py 
deleted file mode 100644 index 0f2b909dde..0000000000 --- a/examples/workflows/reflection_agent/display/nodes/add_evaluator_message_to_chat_history.py +++ /dev/null @@ -1,31 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.add_evaluator_message_to_chat_history import AddEvaluatorMessageToChatHistory - - -class AddEvaluatorMessageToChatHistoryDisplay(BaseTemplatingNodeDisplay[AddEvaluatorMessageToChatHistory]): - label = "Add Evaluator Message to Chat History" - node_id = UUID("0ae79493-099c-49a6-9094-486bbccc2b97") - target_handle_id = UUID("bc1575d8-1556-4052-a864-1e468e6e35b3") - node_input_ids_by_name = { - "inputs.chat_history": UUID("89044f30-ae4f-43f0-ad8d-609778fba156"), - "template": UUID("bea2e9d8-7c94-4a43-abe2-caafaada1f88"), - "inputs.message": UUID("797fbbd1-3fcb-4bab-a31c-796f09e71983"), - } - output_display = { - AddEvaluatorMessageToChatHistory.Outputs.result: NodeOutputDisplay( - id=UUID("5d2229df-6979-42a3-ae57-1e6c64e05a9d"), name="result" - ) - } - port_displays = { - AddEvaluatorMessageToChatHistory.Ports.default: PortDisplayOverrides( - id=UUID("6e2b0ca4-b44e-4ad1-9b27-47254e6d92b7") - ) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=3678.0195379564507, y=734.4758649232241), width=465, height=283 - ) diff --git a/examples/workflows/reflection_agent/display/nodes/error_node.py b/examples/workflows/reflection_agent/display/nodes/error_node.py deleted file mode 100644 index 4aa70b7792..0000000000 --- a/examples/workflows/reflection_agent/display/nodes/error_node.py +++ /dev/null @@ -1,18 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseErrorNodeDisplay - 
-from ...nodes.error_node import ErrorNode - - -class ErrorNodeDisplay(BaseErrorNodeDisplay[ErrorNode]): - name = "error-node" - node_id = UUID("4bcac836-1339-4c1e-8a0f-d1f452258ee7") - label = "Error Node" - error_output_id = UUID("40702d6a-624f-40b8-9c79-0a98967484db") - target_handle_id = UUID("d1118188-af35-40a6-b1f6-f48f0a47002d") - node_input_ids_by_name = {"error_source_input_id": UUID("84a630a5-fd7a-4672-ae27-bd1e87e44c22")} - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=4379.60471538661, y=610.7283212464927), width=364, height=131 - ) diff --git a/examples/workflows/reflection_agent/display/nodes/evaluator_agent.py b/examples/workflows/reflection_agent/display/nodes/evaluator_agent.py deleted file mode 100644 index 43798ad382..0000000000 --- a/examples/workflows/reflection_agent/display/nodes/evaluator_agent.py +++ /dev/null @@ -1,36 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.evaluator_agent import EvaluatorAgent - - -class EvaluatorAgentDisplay(BaseInlinePromptNodeDisplay[EvaluatorAgent]): - label = "Evaluator Agent" - node_id = UUID("c213d5e3-24ef-4f0f-83de-83fb2ab8291e") - output_id = UUID("5a8639b1-1735-4287-9d33-fc9dc01dc652") - array_output_id = UUID("b1487507-9d8d-4a15-a572-d831e7bc3b65") - target_handle_id = UUID("a61907e2-6acf-4877-9f57-38b685cb37d6") - node_input_ids_by_name = { - "prompt_inputs.math_problem": UUID("618cc457-9779-4a91-baa4-1aea562c0a65"), - "prompt_inputs.proposed_solution": UUID("d0b8e3c7-9c74-4f11-b527-8b6f0f4ecc21"), - } - attribute_ids_by_name = {"ml_model": UUID("02524ab3-400f-48dd-b6b7-126a12fcc031")} - output_display = { - EvaluatorAgent.Outputs.text: NodeOutputDisplay(id=UUID("5a8639b1-1735-4287-9d33-fc9dc01dc652"), 
name="text"), - EvaluatorAgent.Outputs.results: NodeOutputDisplay( - id=UUID("b1487507-9d8d-4a15-a572-d831e7bc3b65"), name="results" - ), - EvaluatorAgent.Outputs.json: NodeOutputDisplay(id=UUID("0741b7b3-8dae-49bb-99ac-2508b8436e11"), name="json"), - } - port_displays = { - EvaluatorAgent.Ports.default: PortDisplayOverrides(id=UUID("1ffb5ac8-5a14-4ba6-9a72-d25da938c27c")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=2564.5326176251383, y=219.6964613495673), - width=480, - height=409, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/reflection_agent/display/nodes/extract_status.py b/examples/workflows/reflection_agent/display/nodes/extract_status.py deleted file mode 100644 index 11e18f7a7f..0000000000 --- a/examples/workflows/reflection_agent/display/nodes/extract_status.py +++ /dev/null @@ -1,24 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.extract_status import ExtractStatus - - -class ExtractStatusDisplay(BaseTemplatingNodeDisplay[ExtractStatus]): - label = "Extract Status" - node_id = UUID("44ae99f5-9cfa-4393-a962-14fc7ad8007d") - target_handle_id = UUID("b6b76984-3847-44c4-8265-67392ed6d7d0") - node_input_ids_by_name = { - "inputs.example_var_1": UUID("06a652ac-d29e-42cb-9e94-4ee91edd644a"), - "template": UUID("256e6809-8e58-4d0d-89c2-03edcf32865c"), - } - output_display = { - ExtractStatus.Outputs.result: NodeOutputDisplay(id=UUID("45c66eb9-c77d-43c5-86cf-f8e4f03962d8"), name="result") - } - port_displays = {ExtractStatus.Ports.default: PortDisplayOverrides(id=UUID("ba254914-762d-45e3-92d5-cf23c5111ca3"))} - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=3086.8233673760724, y=284.1854915525598), width=452, height=229 - ) diff --git 
a/examples/workflows/reflection_agent/display/nodes/final_output.py b/examples/workflows/reflection_agent/display/nodes/final_output.py deleted file mode 100644 index 69ee927870..0000000000 --- a/examples/workflows/reflection_agent/display/nodes/final_output.py +++ /dev/null @@ -1,21 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from ...nodes.final_output import FinalOutput - - -class FinalOutputDisplay(BaseFinalOutputNodeDisplay[FinalOutput]): - label = "Final Output" - node_id = UUID("5fb0cb00-8b8d-48c9-b428-077c08ada4b6") - target_handle_id = UUID("7ed44929-f24c-4c81-bc98-932383b04ab6") - output_name = "final-output" - node_input_ids_by_name = {"node_input": UUID("7190d0e3-47c1-4fee-97b8-a56b75585afa")} - output_display = { - FinalOutput.Outputs.value: NodeOutputDisplay(id=UUID("d864d797-940c-44c1-a59d-a014ce5b9551"), name="value") - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=4351.825508233975, y=269.66595271050653), width=462, height=239 - ) diff --git a/examples/workflows/reflection_agent/display/nodes/needs_revision.py b/examples/workflows/reflection_agent/display/nodes/needs_revision.py deleted file mode 100644 index fdc5548d38..0000000000 --- a/examples/workflows/reflection_agent/display/nodes/needs_revision.py +++ /dev/null @@ -1,69 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseConditionalNodeDisplay -from vellum_ee.workflows.display.nodes.types import PortDisplayOverrides -from vellum_ee.workflows.display.nodes.vellum.conditional_node import ConditionId, RuleIdMap - -from ...nodes.needs_revision import NeedsRevision - - -class 
NeedsRevisionDisplay(BaseConditionalNodeDisplay[NeedsRevision]): - label = "Needs Revision?" - node_id = UUID("bbe4360f-483a-4bca-a6e3-308ab9defca1") - target_handle_id = UUID("f1ca3986-79d5-4978-9301-d1e2134b76eb") - source_handle_ids = { - 0: UUID("785b5a45-4fad-4dcf-9d58-523a1a5a7f07"), - 1: UUID("d164277c-30d6-4a01-ad25-a5f52100f5e4"), - 2: UUID("1b8f421a-ce0a-4f6a-95b2-a5d3322fd270"), - } - rule_ids = [ - RuleIdMap( - id="ae3f9a87-56db-4654-9036-b04227bc36ec", - lhs=RuleIdMap( - id="3f392937-2a4f-4177-8725-87922bd63abb", - lhs=None, - rhs=None, - field_node_input_id="53181c75-b01c-4a21-b294-de468f41e323", - value_node_input_id="36c7abeb-cd2c-461a-8ced-29b332523d98", - ), - rhs=None, - field_node_input_id=None, - value_node_input_id=None, - ), - RuleIdMap( - id="a8131318-40cb-4564-a4e8-9673bbab92bc", - lhs=RuleIdMap( - id="1b8dc712-6575-4771-99cf-e713082b490d", - lhs=None, - rhs=None, - field_node_input_id="f64631c4-22ee-4a1e-81c4-0d6038bcdfed", - value_node_input_id="27a75c26-39af-45bc-8e85-e77bc487d896", - ), - rhs=None, - field_node_input_id=None, - value_node_input_id=None, - ), - ] - condition_ids = [ - ConditionId(id="eb1068af-9191-478f-b569-90143e413027", rule_group_id="ae3f9a87-56db-4654-9036-b04227bc36ec"), - ConditionId(id="c43bcb6d-ec50-44ed-8901-c70ab043ab07", rule_group_id="a8131318-40cb-4564-a4e8-9673bbab92bc"), - ConditionId(id="1852f92a-e668-4b87-850f-ef1e3cb7e5b3", rule_group_id=None), - ] - node_input_ids_by_name = { - "5b8a9b60-cfbe-41ab-8cb5-87d4c7570ed9.field": UUID("53181c75-b01c-4a21-b294-de468f41e323"), - "5b8a9b60-cfbe-41ab-8cb5-87d4c7570ed9.value": UUID("36c7abeb-cd2c-461a-8ced-29b332523d98"), - "2a4871e2-37bc-4114-8888-f2ebe0ee43e9.field": UUID("f64631c4-22ee-4a1e-81c4-0d6038bcdfed"), - "2a4871e2-37bc-4114-8888-f2ebe0ee43e9.value": UUID("27a75c26-39af-45bc-8e85-e77bc487d896"), - } - port_displays = { - NeedsRevision.Ports.branch_1: PortDisplayOverrides(id=UUID("785b5a45-4fad-4dcf-9d58-523a1a5a7f07")), - NeedsRevision.Ports.branch_2: 
PortDisplayOverrides(id=UUID("d164277c-30d6-4a01-ad25-a5f52100f5e4")), - NeedsRevision.Ports.branch_3: PortDisplayOverrides(id=UUID("1b8f421a-ce0a-4f6a-95b2-a5d3322fd270")), - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=3656.222715551748, y=279.83299396794433), - width=455, - height=325, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/reflection_agent/display/nodes/note.py b/examples/workflows/reflection_agent/display/nodes/note.py deleted file mode 100644 index b23bac8d8b..0000000000 --- a/examples/workflows/reflection_agent/display/nodes/note.py +++ /dev/null @@ -1,15 +0,0 @@ -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseNoteNodeDisplay - -from ...nodes.note import Note - - -class NoteDisplay(BaseNoteNodeDisplay[Note]): - text = "Note: many of these Prompt Nodes include the full context via Chat History (with Attempts and Evaluator Feedback). This is not mandatory— you may prefer to only send the current solution/evaluation pair to reduce costs and bias. However, including past solution attempts in the context may improve performance. It will depend on your use-case and cost/latency/quality constraints. 
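The note above describes a context trade-off: full chat history versus only the latest solution/evaluation pair. A minimal plain-Python sketch of the trimming option, using a stand-in `ChatMessage` dataclass rather than the Vellum type:

```python
from dataclasses import dataclass


@dataclass
class ChatMessage:
    role: str
    text: str


def trim_to_latest_pair(chat_history: list) -> list:
    """Keep only the most recent solution/evaluation pair,
    trading possible quality gains from full context for fewer tokens."""
    return chat_history[-2:] if len(chat_history) >= 2 else chat_history


history = [
    ChatMessage("ASSISTANT", "Attempt 1"),
    ChatMessage("USER", "Feedback 1"),
    ChatMessage("ASSISTANT", "Attempt 2"),
    ChatMessage("USER", "Feedback 2"),
]
# Only the last attempt and its feedback survive trimming.
trimmed = trim_to_latest_pair(history)
```

Whether trimming helps depends, as the note says, on your cost/latency/quality constraints.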
" - style = { - "fontSize": 16, - "backgroundColor": "#FFEF8A", - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=1717.8720479880744, y=-4.0697246951307875), width=560, height=242 - ) diff --git a/examples/workflows/reflection_agent/display/nodes/problem_solver_agent.py b/examples/workflows/reflection_agent/display/nodes/problem_solver_agent.py deleted file mode 100644 index add6aabb84..0000000000 --- a/examples/workflows/reflection_agent/display/nodes/problem_solver_agent.py +++ /dev/null @@ -1,40 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.problem_solver_agent import ProblemSolverAgent - - -class ProblemSolverAgentDisplay(BaseInlinePromptNodeDisplay[ProblemSolverAgent]): - label = "Problem Solver Agent" - node_id = UUID("89e3e279-0cec-4356-8308-5e7076dbd52e") - output_id = UUID("c5b21544-5cd2-4df7-9826-74cdf558354f") - array_output_id = UUID("bbd6be27-5b0f-4d54-9a31-0e12efeb92aa") - target_handle_id = UUID("b9fba906-f169-4ca3-8edc-28f363db3eb8") - node_input_ids_by_name = { - "prompt_inputs.math_problem": UUID("1b144a04-d5d1-4b03-8314-ee13b515324c"), - "prompt_inputs.chat_history": UUID("122598c5-bcf4-437b-988a-7c6d76151c4c"), - } - attribute_ids_by_name = {"ml_model": UUID("4681a133-2a9b-4ffc-8e69-2fa80f227766")} - output_display = { - ProblemSolverAgent.Outputs.text: NodeOutputDisplay( - id=UUID("c5b21544-5cd2-4df7-9826-74cdf558354f"), name="text" - ), - ProblemSolverAgent.Outputs.results: NodeOutputDisplay( - id=UUID("bbd6be27-5b0f-4d54-9a31-0e12efeb92aa"), name="results" - ), - ProblemSolverAgent.Outputs.json: NodeOutputDisplay( - id=UUID("abfe5576-ea3e-47ca-b554-1865b9f10823"), name="json" - ), - } - port_displays = { - ProblemSolverAgent.Ports.default: 
PortDisplayOverrides(id=UUID("5aa4790e-6eab-4adb-8f77-89df21e32a0e")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=1473.2938896047522, y=303.5007601991737), - width=480, - height=315, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/reflection_agent/display/workflow.py b/examples/workflows/reflection_agent/display/workflow.py deleted file mode 100644 index c2abd4c1a5..0000000000 --- a/examples/workflows/reflection_agent/display/workflow.py +++ /dev/null @@ -1,69 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.base import ( - EdgeDisplay, - EntrypointDisplay, - WorkflowDisplayData, - WorkflowDisplayDataViewport, - WorkflowInputsDisplay, - WorkflowMetaDisplay, - WorkflowOutputDisplay, -) -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.workflows import BaseWorkflowDisplay - -from ..inputs import Inputs -from ..nodes.add_agent_message_to_chat_history import AddAgentMessageToChatHistory -from ..nodes.add_evaluator_message_to_chat_history import AddEvaluatorMessageToChatHistory -from ..nodes.error_node import ErrorNode -from ..nodes.evaluator_agent import EvaluatorAgent -from ..nodes.extract_status import ExtractStatus -from ..nodes.final_output import FinalOutput -from ..nodes.needs_revision import NeedsRevision -from ..nodes.problem_solver_agent import ProblemSolverAgent -from ..workflow import Workflow - - -class WorkflowDisplay(BaseWorkflowDisplay[Workflow]): - workflow_display = WorkflowMetaDisplay( - entrypoint_node_id=UUID("f30a644b-8dc2-44a6-889c-7fc68ee56faa"), - entrypoint_node_source_handle_id=UUID("983050f4-430e-4456-87ad-65558edcbaa3"), - entrypoint_node_display=NodeDisplayData( - position=NodeDisplayPosition(x=1081.231395972988, y=460.252127819204), width=124, height=48 - ), - display_data=WorkflowDisplayData( - viewport=WorkflowDisplayDataViewport(x=-1129.721988599967, y=74.55327214916164, 
zoom=0.6862790178657792) - ), - ) - inputs_display = { - Inputs.math_problem: WorkflowInputsDisplay(id=UUID("e1819ef5-3ed2-4c9b-b8d5-bb6d3d572002"), name="math_problem") - } - entrypoint_displays = { - ProblemSolverAgent: EntrypointDisplay( - id=UUID("f30a644b-8dc2-44a6-889c-7fc68ee56faa"), - edge_display=EdgeDisplay(id=UUID("d8573de7-4888-4caa-9dd9-2ed6135f846f")), - ) - } - edge_displays = { - (EvaluatorAgent.Ports.default, ExtractStatus): EdgeDisplay(id=UUID("a960b800-472f-4298-9434-d62471a50c68")), - (ExtractStatus.Ports.default, NeedsRevision): EdgeDisplay(id=UUID("32de57b3-bee0-4de2-8bce-1aab36505522")), - (NeedsRevision.Ports.branch_1, FinalOutput): EdgeDisplay(id=UUID("454cc03e-7570-4700-971d-5b2133a303b9")), - (NeedsRevision.Ports.branch_3, ErrorNode): EdgeDisplay(id=UUID("07d12270-0571-4116-b0df-785ed8a9f7f8")), - (ProblemSolverAgent.Ports.default, AddAgentMessageToChatHistory): EdgeDisplay( - id=UUID("e78b21fc-8618-4f9a-b138-96998fac77a3") - ), - (AddAgentMessageToChatHistory.Ports.default, EvaluatorAgent): EdgeDisplay( - id=UUID("c75940f1-fcc0-4659-b5e9-d231dcb6cec2") - ), - (NeedsRevision.Ports.branch_2, AddEvaluatorMessageToChatHistory): EdgeDisplay( - id=UUID("8576ea61-980b-4391-8685-143fe191ccf7") - ), - (AddEvaluatorMessageToChatHistory.Ports.default, ProblemSolverAgent): EdgeDisplay( - id=UUID("00cc66f2-5823-44a1-b88c-82a771211ade") - ), - } - output_displays = { - Workflow.Outputs.final_output: WorkflowOutputDisplay( - id=UUID("d864d797-940c-44c1-a59d-a014ce5b9551"), name="final-output" - ) - } diff --git a/examples/workflows/reflection_agent/inputs.py b/examples/workflows/reflection_agent/inputs.py deleted file mode 100644 index bf90647c8a..0000000000 --- a/examples/workflows/reflection_agent/inputs.py +++ /dev/null @@ -1,5 +0,0 @@ -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - math_problem: str diff --git a/examples/workflows/reflection_agent/nodes/__init__.py 
b/examples/workflows/reflection_agent/nodes/__init__.py deleted file mode 100644 index 3b0e97a8a0..0000000000 --- a/examples/workflows/reflection_agent/nodes/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -from .add_agent_message_to_chat_history import AddAgentMessageToChatHistory -from .add_evaluator_message_to_chat_history import AddEvaluatorMessageToChatHistory -from .error_node import ErrorNode -from .evaluator_agent import EvaluatorAgent -from .extract_status import ExtractStatus -from .final_output import FinalOutput -from .needs_revision import NeedsRevision -from .note import Note -from .problem_solver_agent import ProblemSolverAgent - -__all__ = [ - "AddAgentMessageToChatHistory", - "AddEvaluatorMessageToChatHistory", - "ErrorNode", - "EvaluatorAgent", - "ExtractStatus", - "FinalOutput", - "NeedsRevision", - "Note", - "ProblemSolverAgent", -] diff --git a/examples/workflows/reflection_agent/nodes/add_agent_message_to_chat_history.py b/examples/workflows/reflection_agent/nodes/add_agent_message_to_chat_history.py deleted file mode 100644 index 9d5bf5ff02..0000000000 --- a/examples/workflows/reflection_agent/nodes/add_agent_message_to_chat_history.py +++ /dev/null @@ -1,22 +0,0 @@ -from typing import List - -from vellum import ChatMessage -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.state import BaseState - -from .problem_solver_agent import ProblemSolverAgent - - -class AddAgentMessageToChatHistory(TemplatingNode[BaseState, List[ChatMessage]]): - template = """\ -{%- set new_msg = { - \"role\": \"ASSISTANT\", - \"text\": message -} -%} -{%- set msg_arr = [new_msg] -%} -{{- (chat_history + msg_arr) | tojson -}}\ -""" - inputs = { - "chat_history": [], - "message": ProblemSolverAgent.Outputs.text, - } diff --git a/examples/workflows/reflection_agent/nodes/add_evaluator_message_to_chat_history.py b/examples/workflows/reflection_agent/nodes/add_evaluator_message_to_chat_history.py deleted file mode 100644 index 
a500151cc5..0000000000 --- a/examples/workflows/reflection_agent/nodes/add_evaluator_message_to_chat_history.py +++ /dev/null @@ -1,21 +0,0 @@ -from typing import List - -from vellum import ChatMessage -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.references import LazyReference -from vellum.workflows.state import BaseState - - -class AddEvaluatorMessageToChatHistory(TemplatingNode[BaseState, List[ChatMessage]]): - template = """\ -{%- set new_msg = { - \"role\": \"USER\", - \"text\": message -} -%} -{%- set msg_arr = [new_msg] -%} -{{- (chat_history + msg_arr) | tojson -}}\ -""" - inputs = { - "chat_history": LazyReference("AddAgentMessageToChatHistory.Outputs.result"), - "message": LazyReference("EvaluatorAgent.Outputs.text"), - } diff --git a/examples/workflows/reflection_agent/nodes/error_node.py b/examples/workflows/reflection_agent/nodes/error_node.py deleted file mode 100644 index c46666a0ca..0000000000 --- a/examples/workflows/reflection_agent/nodes/error_node.py +++ /dev/null @@ -1,5 +0,0 @@ -from vellum.workflows.nodes.displayable import ErrorNode as BaseErrorNode - - -class ErrorNode(BaseErrorNode): - error = "Failed to solve problem in given number of attempts" diff --git a/examples/workflows/reflection_agent/nodes/evaluator_agent.py b/examples/workflows/reflection_agent/nodes/evaluator_agent.py deleted file mode 100644 index 97c1113831..0000000000 --- a/examples/workflows/reflection_agent/nodes/evaluator_agent.py +++ /dev/null @@ -1,96 +0,0 @@ -from vellum import ( - ChatMessagePromptBlock, - PlainTextPromptBlock, - PromptParameters, - RichTextPromptBlock, - VariablePromptBlock, -) -from vellum.workflows.nodes.displayable import InlinePromptNode - -from ..inputs import Inputs -from .problem_solver_agent import ProblemSolverAgent - - -class EvaluatorAgent(InlinePromptNode): - """Here we use GPT's Structured Outputs to return "status" and "feedback" of whether or not the proposed solution is acceptable, along with 
feedback about what isn't correct in the proposed solution. - - Notably, we are not including the full conversation context here. This is a use-case dependent decision. By doing this, we are effectively grading the quality of the current solution in isolation, reducing some variability, reducing the tokens in our context window, and reducing cost.""" - - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="""\ -You are reviewing a Problem Solving Agent’s attempt to solve a math problem step-by-step. Clearly identify any logical or calculation errors. If errors are found, briefly suggest corrections and instruct the agent to try again, incorporating your feedback. - - -\ -""" - ), - VariablePromptBlock(input_variable="math_problem"), - PlainTextPromptBlock( - text="""\ - - - - -\ -""" - ), - VariablePromptBlock(input_variable="proposed_solution"), - PlainTextPromptBlock( - text="""\ - -\ -""" - ), - ] - ) - ], - ), - ] - prompt_inputs = { - "math_problem": Inputs.math_problem, - "proposed_solution": ProblemSolverAgent.Outputs.text, - } - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=1000, - top_p=1, - top_k=0, - frequency_penalty=0, - presence_penalty=0, - logit_bias={}, - custom_parameters={ - "json_schema": { - "name": "reasoning_schema", - "schema": { - "type": "object", - "required": [ - "status", - "feedback", - ], - "properties": { - "status": { - "enum": [ - "needs_revision", - "acceptable", - ], - "type": "string", - "description": "Denotes whether the proposed solution is acceptable or needs revision.", - }, - "feedback": { - "type": "string", - "description": "Feedback about the proposed solution for the Problem Solving Agent to use in a subsequent attempt. 
", - }, - }, - }, - "strict": True, - }, - }, - ) diff --git a/examples/workflows/reflection_agent/nodes/extract_status.py b/examples/workflows/reflection_agent/nodes/extract_status.py deleted file mode 100644 index e8fb6d2a22..0000000000 --- a/examples/workflows/reflection_agent/nodes/extract_status.py +++ /dev/null @@ -1,11 +0,0 @@ -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.state import BaseState - -from .evaluator_agent import EvaluatorAgent - - -class ExtractStatus(TemplatingNode[BaseState, str]): - template = """{{ json.loads(example_var_1)[\"status\"] }}""" - inputs = { - "example_var_1": EvaluatorAgent.Outputs.text, - } diff --git a/examples/workflows/reflection_agent/nodes/final_output.py b/examples/workflows/reflection_agent/nodes/final_output.py deleted file mode 100644 index fc0dc2a6ea..0000000000 --- a/examples/workflows/reflection_agent/nodes/final_output.py +++ /dev/null @@ -1,9 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .problem_solver_agent import ProblemSolverAgent - - -class FinalOutput(FinalOutputNode[BaseState, str]): - class Outputs(FinalOutputNode.Outputs): - value = ProblemSolverAgent.Outputs.text diff --git a/examples/workflows/reflection_agent/nodes/needs_revision.py b/examples/workflows/reflection_agent/nodes/needs_revision.py deleted file mode 100644 index 0b0f7105a0..0000000000 --- a/examples/workflows/reflection_agent/nodes/needs_revision.py +++ /dev/null @@ -1,14 +0,0 @@ -from vellum.workflows.nodes.displayable import ConditionalNode -from vellum.workflows.ports import Port -from vellum.workflows.references import LazyReference - -from .problem_solver_agent import ProblemSolverAgent - - -class NeedsRevision(ConditionalNode): - """Here we retry up to 3 times until the status is acceptable, or we give up. 
You can use the scrubber at the bottom of the page to see the results on each loop.""" - - class Ports(ConditionalNode.Ports): - branch_1 = Port.on_if(LazyReference("ExtractStatus.Outputs.result").equals("acceptable")) - branch_2 = Port.on_elif(ProblemSolverAgent.Execution.count.less_than(4)) - branch_3 = Port.on_else() diff --git a/examples/workflows/reflection_agent/nodes/note.py b/examples/workflows/reflection_agent/nodes/note.py deleted file mode 100644 index e59408d31f..0000000000 --- a/examples/workflows/reflection_agent/nodes/note.py +++ /dev/null @@ -1,5 +0,0 @@ -from vellum.workflows.nodes.displayable import NoteNode - - -class Note(NoteNode): - pass diff --git a/examples/workflows/reflection_agent/nodes/problem_solver_agent.py b/examples/workflows/reflection_agent/nodes/problem_solver_agent.py deleted file mode 100644 index f7d6504180..0000000000 --- a/examples/workflows/reflection_agent/nodes/problem_solver_agent.py +++ /dev/null @@ -1,58 +0,0 @@ -from vellum import ( - ChatMessagePromptBlock, - PlainTextPromptBlock, - PromptParameters, - RichTextPromptBlock, - VariablePromptBlock, -) -from vellum.workflows.nodes.displayable import InlinePromptNode - - -class ProblemSolverAgent(InlinePromptNode): - """Here we use any context we have from an Evaluator Agent to answer a math problem for a user. If we haven't run the answer through the Evaluator Agent yet, then we initialize an empty Chat History as a "fallback value\"""" - - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="""\ -You are a problem solving agent helping answer a math problem from a user. You will get feedback from an Evaluator Agent on the quality of your solution and you should use it to iterate upon your solution until you have a satisfactory answer.
- -Answer the following math question step by step: - - -\ -""" - ), - VariablePromptBlock(input_variable="math_problem"), - PlainTextPromptBlock( - text="""\ - -\ -""" - ), - ] - ) - ], - ), - VariablePromptBlock(input_variable="chat_history"), - ] - prompt_inputs = { - "math_problem": "If a train travels 120 miles in 2 hours, then accelerates and covers the next 180 miles in 2 hours, what is its average speed for the entire trip?", - "chat_history": [], - } - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=1000, - top_p=1, - top_k=0, - frequency_penalty=0, - presence_penalty=0, - logit_bias={}, - custom_parameters=None, - ) diff --git a/examples/workflows/reflection_agent/sandbox.py b/examples/workflows/reflection_agent/sandbox.py deleted file mode 100644 index 99e7eeba6a..0000000000 --- a/examples/workflows/reflection_agent/sandbox.py +++ /dev/null @@ -1,19 +0,0 @@ -from vellum.workflows.sandbox import WorkflowSandboxRunner - -from .inputs import Inputs -from .workflow import Workflow - -if __name__ != "__main__": - raise Exception("This file is not meant to be imported") - - -runner = WorkflowSandboxRunner( - workflow=Workflow(), - inputs=[ - Inputs( - math_problem="A cyclist travels uphill at an average speed of 8 mph and downhill along the same route at an average speed of 24 mph. 
If the total round-trip takes 4 hours, what is the total distance traveled?\n" - ), - ], -) - -runner.run() diff --git a/examples/workflows/reflection_agent/workflow.py b/examples/workflows/reflection_agent/workflow.py deleted file mode 100644 index 9f4aa3d539..0000000000 --- a/examples/workflows/reflection_agent/workflow.py +++ /dev/null @@ -1,31 +0,0 @@ -from vellum.workflows import BaseWorkflow -from vellum.workflows.state import BaseState - -from .inputs import Inputs -from .nodes.add_agent_message_to_chat_history import AddAgentMessageToChatHistory -from .nodes.add_evaluator_message_to_chat_history import AddEvaluatorMessageToChatHistory -from .nodes.error_node import ErrorNode -from .nodes.evaluator_agent import EvaluatorAgent -from .nodes.extract_status import ExtractStatus -from .nodes.final_output import FinalOutput -from .nodes.needs_revision import NeedsRevision -from .nodes.note import Note -from .nodes.problem_solver_agent import ProblemSolverAgent - - -class Workflow(BaseWorkflow[Inputs, BaseState]): - graph = ( - ProblemSolverAgent - >> AddAgentMessageToChatHistory - >> EvaluatorAgent - >> ExtractStatus - >> { - NeedsRevision.Ports.branch_1 >> FinalOutput, - NeedsRevision.Ports.branch_3 >> ErrorNode, - NeedsRevision.Ports.branch_2 >> AddEvaluatorMessageToChatHistory >> ProblemSolverAgent, - } - ) - unused_graphs = {Note} - - class Outputs(BaseWorkflow.Outputs): - final_output = FinalOutput.Outputs.value diff --git a/examples/workflows/router_classifier/__init__.py b/examples/workflows/router_classifier/__init__.py deleted file mode 100644 index 9442617720..0000000000 --- a/examples/workflows/router_classifier/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# flake8: noqa: F401, F403 - -from .display import * diff --git a/examples/workflows/router_classifier/display/__init__.py b/examples/workflows/router_classifier/display/__init__.py deleted file mode 100644 index d38fb6d6a9..0000000000 --- a/examples/workflows/router_classifier/display/__init__.py +++ 
/dev/null @@ -1,4 +0,0 @@ -# flake8: noqa: F401, F403 - -from .nodes import * -from .workflow import * diff --git a/examples/workflows/router_classifier/display/nodes/__init__.py b/examples/workflows/router_classifier/display/nodes/__init__.py deleted file mode 100644 index 209fb7a614..0000000000 --- a/examples/workflows/router_classifier/display/nodes/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from .advance_or_reject import AdvanceOrRejectDisplay -from .evaluate_resume import EvaluateResumeDisplay -from .extract_score import ExtractScoreDisplay -from .final_output_email_content import FinalOutputEmailContentDisplay -from .write_next_round_email import WriteNextRoundEmailDisplay -from .write_rejection_email import WriteRejectionEmailDisplay - -__all__ = [ - "AdvanceOrRejectDisplay", - "EvaluateResumeDisplay", - "ExtractScoreDisplay", - "FinalOutputEmailContentDisplay", - "WriteNextRoundEmailDisplay", - "WriteRejectionEmailDisplay", -] diff --git a/examples/workflows/router_classifier/display/nodes/advance_or_reject.py b/examples/workflows/router_classifier/display/nodes/advance_or_reject.py deleted file mode 100644 index 52c53b562a..0000000000 --- a/examples/workflows/router_classifier/display/nodes/advance_or_reject.py +++ /dev/null @@ -1,46 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseConditionalNodeDisplay -from vellum_ee.workflows.display.nodes.types import PortDisplayOverrides -from vellum_ee.workflows.display.nodes.vellum.conditional_node import ConditionId, RuleIdMap - -from ...nodes.advance_or_reject import AdvanceOrReject - - -class AdvanceOrRejectDisplay(BaseConditionalNodeDisplay[AdvanceOrReject]): - label = "Advance or Reject?" 
- node_id = UUID("f08c616c-8581-4b2a-9fe1-53e947d6fd98") - target_handle_id = UUID("2eac0701-3d89-436d-b348-b9db735cc9f6") - source_handle_ids = { - 0: UUID("22add5ef-53e9-474d-9b43-81d561578a38"), - 1: UUID("67b02f94-a45e-4328-b838-da2b221a7028"), - } - rule_ids = [ - RuleIdMap( - id="4034307f-1f72-4c89-a8a4-ed26678df519", - lhs=RuleIdMap( - id="0520e53d-77a2-4655-96c0-707deee949fb", - lhs=None, - rhs=None, - field_node_input_id="7e2b9caf-a593-436e-80ed-d65fb4a6b321", - value_node_input_id="52a2a3c8-e60d-49fa-8d76-973d3664422c", - ), - rhs=None, - field_node_input_id=None, - value_node_input_id=None, - ) - ] - condition_ids = [ - ConditionId(id="f862fe38-f3cf-4298-a354-cda42c3be2c0", rule_group_id="4034307f-1f72-4c89-a8a4-ed26678df519"), - ConditionId(id="62fbd66c-2624-4652-aaf5-1c1ca17e3b18", rule_group_id=None), - ] - node_input_ids_by_name = { - "9949dfe6-605a-4c1a-9642-37c7382d9658.field": UUID("7e2b9caf-a593-436e-80ed-d65fb4a6b321"), - "9949dfe6-605a-4c1a-9642-37c7382d9658.value": UUID("52a2a3c8-e60d-49fa-8d76-973d3664422c"), - } - port_displays = { - AdvanceOrReject.Ports.branch_1: PortDisplayOverrides(id=UUID("22add5ef-53e9-474d-9b43-81d561578a38")), - AdvanceOrReject.Ports.branch_2: PortDisplayOverrides(id=UUID("67b02f94-a45e-4328-b838-da2b221a7028")), - } - display_data = NodeDisplayData(position=NodeDisplayPosition(x=1463.5, y=1263), width=448, height=185) diff --git a/examples/workflows/router_classifier/display/nodes/evaluate_resume.py b/examples/workflows/router_classifier/display/nodes/evaluate_resume.py deleted file mode 100644 index 6f01bd8abe..0000000000 --- a/examples/workflows/router_classifier/display/nodes/evaluate_resume.py +++ /dev/null @@ -1,33 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, 
PortDisplayOverrides - -from ...nodes.evaluate_resume import EvaluateResume - - -class EvaluateResumeDisplay(BaseInlinePromptNodeDisplay[EvaluateResume]): - label = "Evaluate Resume" - node_id = UUID("b3baf439-997d-470c-a3fa-b7330fa01b59") - output_id = UUID("0c374fba-f4d8-400f-a1ed-e8716b790d46") - array_output_id = UUID("5b2b2db7-2456-49f5-99ad-f353303a8228") - target_handle_id = UUID("77231475-31fd-49f6-8994-dbb64e9f1d6f") - node_input_ids_by_name = { - "prompt_inputs.resume": UUID("3c5dff6d-b4a6-46cf-a3b6-5a7292124ae9"), - "prompt_inputs.job_description": UUID("55de0eea-10cd-40a4-853e-78b41e444699"), - } - attribute_ids_by_name = {"ml_model": UUID("70c11bc5-4f1b-452b-9089-fb384db2ffe0")} - output_display = { - EvaluateResume.Outputs.text: NodeOutputDisplay(id=UUID("0c374fba-f4d8-400f-a1ed-e8716b790d46"), name="text"), - EvaluateResume.Outputs.results: NodeOutputDisplay( - id=UUID("5b2b2db7-2456-49f5-99ad-f353303a8228"), name="results" - ), - EvaluateResume.Outputs.json: NodeOutputDisplay(id=UUID("322f30de-626a-46a3-8657-42f7b45d8869"), name="json"), - } - port_displays = { - EvaluateResume.Ports.default: PortDisplayOverrides(id=UUID("ed90f5f2-2acd-4661-9baa-608e18809952")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=254, y=1043.5), width=480, height=333, comment=NodeDisplayComment(expanded=True) - ) diff --git a/examples/workflows/router_classifier/display/nodes/extract_score.py b/examples/workflows/router_classifier/display/nodes/extract_score.py deleted file mode 100644 index 369a9bd516..0000000000 --- a/examples/workflows/router_classifier/display/nodes/extract_score.py +++ /dev/null @@ -1,22 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.extract_score import ExtractScore - - -class 
ExtractScoreDisplay(BaseTemplatingNodeDisplay[ExtractScore]): - label = "Extract Score" - node_id = UUID("8421f554-d065-426d-a5b8-a24e713b54da") - target_handle_id = UUID("15d6a18b-baab-4191-9524-524188c707ba") - node_input_ids_by_name = { - "inputs.resume_score_json": UUID("420a5f4e-dcb1-44eb-906d-34a11691dd1d"), - "template": UUID("2db22841-9572-4df7-8d34-b8cf5bcb7678"), - } - output_display = { - ExtractScore.Outputs.result: NodeOutputDisplay(id=UUID("406969db-7950-40a7-8077-53638995103b"), name="result") - } - port_displays = {ExtractScore.Ports.default: PortDisplayOverrides(id=UUID("b13925a1-dff7-4354-9fb7-0832979e2240"))} - display_data = NodeDisplayData(position=NodeDisplayPosition(x=864, y=1177), width=458, height=229) diff --git a/examples/workflows/router_classifier/display/nodes/final_output_email_content.py b/examples/workflows/router_classifier/display/nodes/final_output_email_content.py deleted file mode 100644 index 9a984de39b..0000000000 --- a/examples/workflows/router_classifier/display/nodes/final_output_email_content.py +++ /dev/null @@ -1,23 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from ...nodes.final_output_email_content import FinalOutputEmailContent - - -class FinalOutputEmailContentDisplay(BaseFinalOutputNodeDisplay[FinalOutputEmailContent]): - label = "Final Output - Email Content" - node_id = UUID("879336fa-8d2f-49c7-91e7-1d5ecead8224") - target_handle_id = UUID("47efd3cb-b7cc-44e7-a7a7-c1294e5b668a") - output_name = "email_copy" - node_input_ids_by_name = {"node_input": UUID("259f0fd9-e59f-4825-aa8b-4df60c6d64ec")} - output_display = { - FinalOutputEmailContent.Outputs.value: NodeOutputDisplay( - id=UUID("84803d2a-ca83-40a3-b138-d8ebf64f8af1"), name="value" - ) - } - display_data = NodeDisplayData( 
- position=NodeDisplayPosition(x=2663, y=954), width=452, height=347, comment=NodeDisplayComment(expanded=True) - ) diff --git a/examples/workflows/router_classifier/display/nodes/write_next_round_email.py b/examples/workflows/router_classifier/display/nodes/write_next_round_email.py deleted file mode 100644 index 08eac281ee..0000000000 --- a/examples/workflows/router_classifier/display/nodes/write_next_round_email.py +++ /dev/null @@ -1,34 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.write_next_round_email import WriteNextRoundEmail - - -class WriteNextRoundEmailDisplay(BaseInlinePromptNodeDisplay[WriteNextRoundEmail]): - label = "Write Next Round Email" - node_id = UUID("ac302d25-409e-4649-afdc-2ba2ac7f16ac") - output_id = UUID("050b67a9-7976-433c-961b-65c3591c1e17") - array_output_id = UUID("f1809bf4-8f1c-44c0-ad94-9412b4cdd428") - target_handle_id = UUID("dde07670-5e53-4cc1-ac3f-ec19d6a050e6") - node_input_ids_by_name = {"prompt_inputs.resume_evaluation": UUID("0681162b-315a-4ff4-a5ba-4f205c902538")} - attribute_ids_by_name = {"ml_model": UUID("3d5b268d-b485-4ee0-9591-4c8932eebb0c")} - output_display = { - WriteNextRoundEmail.Outputs.text: NodeOutputDisplay( - id=UUID("050b67a9-7976-433c-961b-65c3591c1e17"), name="text" - ), - WriteNextRoundEmail.Outputs.results: NodeOutputDisplay( - id=UUID("f1809bf4-8f1c-44c0-ad94-9412b4cdd428"), name="results" - ), - WriteNextRoundEmail.Outputs.json: NodeOutputDisplay( - id=UUID("f124ede6-09e1-4605-bbc5-4601604dba61"), name="json" - ), - } - port_displays = { - WriteNextRoundEmail.Ports.default: PortDisplayOverrides(id=UUID("8e11f04a-e42a-4638-9564-a998c0950958")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=2053, y=692), 
width=480, height=261, comment=NodeDisplayComment(expanded=True) - ) diff --git a/examples/workflows/router_classifier/display/nodes/write_rejection_email.py b/examples/workflows/router_classifier/display/nodes/write_rejection_email.py deleted file mode 100644 index c9f1006124..0000000000 --- a/examples/workflows/router_classifier/display/nodes/write_rejection_email.py +++ /dev/null @@ -1,34 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.write_rejection_email import WriteRejectionEmail - - -class WriteRejectionEmailDisplay(BaseInlinePromptNodeDisplay[WriteRejectionEmail]): - label = "Write Rejection Email" - node_id = UUID("f4b37571-f8af-48b4-ac02-9a6515dcf0f4") - output_id = UUID("05c32c54-865b-46f9-b68b-1c295ae1b619") - array_output_id = UUID("6270aae8-b312-4e9d-905e-fbe5f23d25e4") - target_handle_id = UUID("75a77dde-080f-406f-afb8-7c7d74df4757") - node_input_ids_by_name = {"prompt_inputs.resume_evaluation": UUID("438aafbc-cb61-4b32-b5d6-1cf4f62a5db4")} - attribute_ids_by_name = {"ml_model": UUID("64a17993-f907-4b67-aa6d-db87d8ed61a5")} - output_display = { - WriteRejectionEmail.Outputs.text: NodeOutputDisplay( - id=UUID("05c32c54-865b-46f9-b68b-1c295ae1b619"), name="text" - ), - WriteRejectionEmail.Outputs.results: NodeOutputDisplay( - id=UUID("6270aae8-b312-4e9d-905e-fbe5f23d25e4"), name="results" - ), - WriteRejectionEmail.Outputs.json: NodeOutputDisplay( - id=UUID("0bb9effe-1d07-436d-8ae6-7529b75ec76e"), name="json" - ), - } - port_displays = { - WriteRejectionEmail.Ports.default: PortDisplayOverrides(id=UUID("6a94190e-f7e0-43d8-8e56-f05c366e3681")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=2053, y=1518), width=480, height=261, 
comment=NodeDisplayComment(expanded=True) - ) diff --git a/examples/workflows/router_classifier/display/workflow.py b/examples/workflows/router_classifier/display/workflow.py deleted file mode 100644 index 2cd2f6a6d2..0000000000 --- a/examples/workflows/router_classifier/display/workflow.py +++ /dev/null @@ -1,66 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.base import ( - EdgeDisplay, - EntrypointDisplay, - WorkflowDisplayData, - WorkflowDisplayDataViewport, - WorkflowInputsDisplay, - WorkflowMetaDisplay, - WorkflowOutputDisplay, -) -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.workflows import BaseWorkflowDisplay - -from ..inputs import Inputs -from ..nodes.advance_or_reject import AdvanceOrReject -from ..nodes.evaluate_resume import EvaluateResume -from ..nodes.extract_score import ExtractScore -from ..nodes.final_output_email_content import FinalOutputEmailContent -from ..nodes.write_next_round_email import WriteNextRoundEmail -from ..nodes.write_rejection_email import WriteRejectionEmail -from ..workflow import Workflow - - -class WorkflowDisplay(BaseWorkflowDisplay[Workflow]): - workflow_display = WorkflowMetaDisplay( - entrypoint_node_id=UUID("d42056a1-cb01-4fd2-8eb7-560c9006511a"), - entrypoint_node_source_handle_id=UUID("24241884-119a-4812-9d50-74b866b47fed"), - entrypoint_node_display=NodeDisplayData(position=NodeDisplayPosition(x=0, y=1329), width=124, height=48), - display_data=WorkflowDisplayData( - viewport=WorkflowDisplayDataViewport(x=53.2257811873921, y=-545.3742406253895, zoom=0.5924984837195854) - ), - ) - inputs_display = { - Inputs.resume: WorkflowInputsDisplay(id=UUID("ce0a66eb-5882-4303-946b-75184270a926"), name="resume"), - Inputs.job_requirements: WorkflowInputsDisplay( - id=UUID("a6f1f373-6a17-43e2-baf4-aeb28d29318f"), name="job_requirements", color="cyan" - ), - } - entrypoint_displays = { - EvaluateResume: EntrypointDisplay( - 
id=UUID("d42056a1-cb01-4fd2-8eb7-560c9006511a"), - edge_display=EdgeDisplay(id=UUID("ee61d69d-35d0-43ea-987f-f08155b31c87")), - ) - } - edge_displays = { - (AdvanceOrReject.Ports.branch_1, WriteNextRoundEmail): EdgeDisplay( - id=UUID("31830773-663d-46da-accf-361fb6099657") - ), - (EvaluateResume.Ports.default, ExtractScore): EdgeDisplay(id=UUID("e1ed362b-1c3d-4227-888d-58253c59b706")), - (ExtractScore.Ports.default, AdvanceOrReject): EdgeDisplay(id=UUID("986d385d-21a3-439b-ae5b-529819c3fac7")), - (AdvanceOrReject.Ports.branch_2, WriteRejectionEmail): EdgeDisplay( - id=UUID("0b53bbe8-c551-4ce2-90be-9ea82f35a136") - ), - (WriteNextRoundEmail.Ports.default, FinalOutputEmailContent): EdgeDisplay( - id=UUID("04224ba0-dd18-4981-85d6-fdeb52d1c1cf") - ), - (WriteRejectionEmail.Ports.default, FinalOutputEmailContent): EdgeDisplay( - id=UUID("dc3a45e5-3835-4ff8-b7b1-d18de8946c74") - ), - } - output_displays = { - Workflow.Outputs.email_copy: WorkflowOutputDisplay( - id=UUID("84803d2a-ca83-40a3-b138-d8ebf64f8af1"), name="email_copy" - ) - } diff --git a/examples/workflows/router_classifier/inputs.py b/examples/workflows/router_classifier/inputs.py deleted file mode 100644 index f6c9c1ddb9..0000000000 --- a/examples/workflows/router_classifier/inputs.py +++ /dev/null @@ -1,6 +0,0 @@ -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - resume: str - job_requirements: str diff --git a/examples/workflows/router_classifier/nodes/__init__.py b/examples/workflows/router_classifier/nodes/__init__.py deleted file mode 100644 index 10659cf0a1..0000000000 --- a/examples/workflows/router_classifier/nodes/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from .advance_or_reject import AdvanceOrReject -from .evaluate_resume import EvaluateResume -from .extract_score import ExtractScore -from .final_output_email_content import FinalOutputEmailContent -from .write_next_round_email import WriteNextRoundEmail -from .write_rejection_email import WriteRejectionEmail - 
-__all__ = [ - "AdvanceOrReject", - "EvaluateResume", - "ExtractScore", - "FinalOutputEmailContent", - "WriteNextRoundEmail", - "WriteRejectionEmail", -] diff --git a/examples/workflows/router_classifier/nodes/advance_or_reject.py b/examples/workflows/router_classifier/nodes/advance_or_reject.py deleted file mode 100644 index 33c9af2934..0000000000 --- a/examples/workflows/router_classifier/nodes/advance_or_reject.py +++ /dev/null @@ -1,10 +0,0 @@ -from vellum.workflows.nodes.displayable import ConditionalNode -from vellum.workflows.ports import Port - -from .extract_score import ExtractScore - - -class AdvanceOrReject(ConditionalNode): - class Ports(ConditionalNode.Ports): - branch_1 = Port.on_if(ExtractScore.Outputs.result.equals("Advance")) - branch_2 = Port.on_else() diff --git a/examples/workflows/router_classifier/nodes/evaluate_resume.py b/examples/workflows/router_classifier/nodes/evaluate_resume.py deleted file mode 100644 index 9044c1de1d..0000000000 --- a/examples/workflows/router_classifier/nodes/evaluate_resume.py +++ /dev/null @@ -1,93 +0,0 @@ -from vellum import ChatMessagePromptBlock, JinjaPromptBlock, PromptParameters -from vellum.workflows.nodes.displayable import InlinePromptNode - -from ..inputs import Inputs - - -class EvaluateResume(InlinePromptNode): - """Here we use GPT Structured Outputs to create consistent JSON. From there, we can parse and extract a "score" - - With that score, we can then route to different Prompts or Agents""" - - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - JinjaPromptBlock( - template="""\ -Compare the following resume to the job description and evaluate it based on the provided schema. 
- -{#- The schema is provided in the Model tab -#} -{#- You can leave comments in Jinja Prompt Blocks like this -#}\ -""" - ) - ], - ), - ChatMessagePromptBlock( - chat_role="USER", - blocks=[ - JinjaPromptBlock( - template="""\ - -{{ resume }} - - - -{{ job_description }} -\ -""" - ) - ], - ), - ] - prompt_inputs = { - "resume": Inputs.resume, - "job_description": Inputs.job_requirements, - } - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=1000, - top_p=1, - top_k=0, - frequency_penalty=0, - presence_penalty=0, - logit_bias={}, - custom_parameters={ - "json_schema": { - "name": "match_scorer_schema", - "schema": { - "type": "object", - "title": "MatchScorerSchema", - "required": [ - "recommendation", - "score", - "remarks", - ], - "properties": { - "score": { - "type": "integer", - "title": "Match Score", - "description": "Match score out of 10", - }, - "remarks": { - "type": "string", - "title": "Remarks", - "description": "Remarks about the match", - }, - "recommendation": { - "enum": [ - "Advance", - "Defer", - "Reject", - ], - "type": "string", - "title": "Status", - "description": "Recommendation for the candidate. Denotes whether they should move forward, get deferred, or rejected from the process. 
", - }, - }, - }, - }, - }, - ) diff --git a/examples/workflows/router_classifier/nodes/extract_score.py b/examples/workflows/router_classifier/nodes/extract_score.py deleted file mode 100644 index 2c848c2770..0000000000 --- a/examples/workflows/router_classifier/nodes/extract_score.py +++ /dev/null @@ -1,11 +0,0 @@ -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.state import BaseState - -from .evaluate_resume import EvaluateResume - - -class ExtractScore(TemplatingNode[BaseState, str]): - template = """{{- resume_score_json[\"recommendation\"] -}}""" - inputs = { - "resume_score_json": EvaluateResume.Outputs.json, - } diff --git a/examples/workflows/router_classifier/nodes/final_output_email_content.py b/examples/workflows/router_classifier/nodes/final_output_email_content.py deleted file mode 100644 index 5777ffb869..0000000000 --- a/examples/workflows/router_classifier/nodes/final_output_email_content.py +++ /dev/null @@ -1,12 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .write_next_round_email import WriteNextRoundEmail -from .write_rejection_email import WriteRejectionEmail - - -class FinalOutputEmailContent(FinalOutputNode[BaseState, str]): - """Here we use fallback values such that we can use a single Final Output node regardless of which conditional path is taken.""" - - class Outputs(FinalOutputNode.Outputs): - value = WriteNextRoundEmail.Outputs.text.coalesce(WriteRejectionEmail.Outputs.text) diff --git a/examples/workflows/router_classifier/nodes/write_next_round_email.py b/examples/workflows/router_classifier/nodes/write_next_round_email.py deleted file mode 100644 index 0a07c262f9..0000000000 --- a/examples/workflows/router_classifier/nodes/write_next_round_email.py +++ /dev/null @@ -1,56 +0,0 @@ -from vellum import ( - ChatMessagePromptBlock, - PlainTextPromptBlock, - PromptParameters, - RichTextPromptBlock, - VariablePromptBlock, -) 
-from vellum.workflows.nodes.displayable import InlinePromptNode - -from .evaluate_resume import EvaluateResume - - -class WriteNextRoundEmail(InlinePromptNode): - """In your application, this could be an Agent, implemented & tested via Subworkflows, which calls APIs to actually send the email, move the candidate in an Applicant Tracking System (ATS), etc.""" - - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="""\ -Please write an email to the following candidate that congratulates them on moving to the next round interview for a role for which they applied. Use the following to give the candidate feedback that is brief, polite, and respectful. - - -\ -""" - ), - VariablePromptBlock(input_variable="resume_evaluation"), - PlainTextPromptBlock( - text="""\ - -\ -""" - ), - ] - ) - ], - ), - ] - prompt_inputs = { - "resume_evaluation": EvaluateResume.Outputs.text, - } - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=1000, - top_p=1, - top_k=0, - frequency_penalty=0, - presence_penalty=0, - logit_bias={}, - custom_parameters=None, - ) diff --git a/examples/workflows/router_classifier/nodes/write_rejection_email.py b/examples/workflows/router_classifier/nodes/write_rejection_email.py deleted file mode 100644 index 6f2c5c4955..0000000000 --- a/examples/workflows/router_classifier/nodes/write_rejection_email.py +++ /dev/null @@ -1,56 +0,0 @@ -from vellum import ( - ChatMessagePromptBlock, - PlainTextPromptBlock, - PromptParameters, - RichTextPromptBlock, - VariablePromptBlock, -) -from vellum.workflows.nodes.displayable import InlinePromptNode - -from .evaluate_resume import EvaluateResume - - -class WriteRejectionEmail(InlinePromptNode): - """In your application, this could be an Agent, implemented & tested via Subworkflows, which calls APIs to actually send the email, move the candidate in an Applicant Tracking System (ATS), etc.""" 
- - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - RichTextPromptBlock( - blocks=[ - PlainTextPromptBlock( - text="""\ -Please write a rejection email to the following candidate. Use the following to give the candidate feedback that is brief, polite, and respectful. - - -\ -""" - ), - VariablePromptBlock(input_variable="resume_evaluation"), - PlainTextPromptBlock( - text="""\ - -\ -""" - ), - ] - ) - ], - ), - ] - prompt_inputs = { - "resume_evaluation": EvaluateResume.Outputs.text, - } - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=1000, - top_p=1, - top_k=0, - frequency_penalty=0, - presence_penalty=0, - logit_bias={}, - custom_parameters=None, - ) diff --git a/examples/workflows/router_classifier/sandbox.py b/examples/workflows/router_classifier/sandbox.py deleted file mode 100644 index fb37da10c2..0000000000 --- a/examples/workflows/router_classifier/sandbox.py +++ /dev/null @@ -1,20 +0,0 @@ -from vellum.workflows.sandbox import WorkflowSandboxRunner - -from .inputs import Inputs -from .workflow import Workflow - -if __name__ != "__main__": - raise Exception("This file is not meant to be imported") - - -runner = WorkflowSandboxRunner( - workflow=Workflow(), - inputs=[ - Inputs( - resume="Jane Doe\nEmail: jane.doe@example.com\nPhone: (123) 456-7890\nLinkedIn: linkedin.com/in/janedoe\n\nObjective:\nExperienced Data Scientist with a strong background in Python, SQL, and Machine Learning. 
Seeking a challenging role to leverage my skills in data analysis and AI model development.\n\nWork Experience:\nData Scientist - ABC Analytics\nJan 2021 – Jan 2025\n- Developed and deployed machine learning models to predict customer churn, improving retention by 15%.\n- Analyzed large datasets using Python and SQL to identify trends and actionable insights.\n- Built data pipelines and dashboards for real-time performance monitoring.\n\nData Analyst - XYZ Corp\nJun 2020 – Dec 2020\n- Created automated reporting systems, reducing manual reporting time by 30%.\n- Used Tableau and Excel to create visualizations for executive presentations.\n- Conducted statistical analysis to support strategic decision-making.\n\nEducation:\nBachelor of Science in Computer Science\nUniversity of Example, 2017\n\nSkills:\n- Programming: Python, SQL, R\n- Machine Learning: Scikit-learn, TensorFlow\n- Data Visualization: Tableau, Power BI\n- Tools: Jupyter, Git, AWS\n\nCertifications:\n- AWS Certified Data Analytics – 2021\n- TensorFlow Developer Certificate – 2020", - job_requirements="Must Haves:\n- Python programming experience with at least 3 years of professional work history in this field\n- SQL database experience, indicating ability to work with and query databases effectively\n- Machine Learning experience, showing familiarity with implementing ML models and algorithms\n\nNice to Haves:\n- Experience with modern ML frameworks like TensorFlow, PyTorch, or scikit-learn\n- Familiarity with cloud platforms (AWS, GCP, or Azure) for ML deployment\n- Experience with version control systems like Git and collaborative development workflows\n- Knowledge of data visualization libraries (matplotlib, seaborn, plotly)\n- Background in DevOps practices and CI/CD pipelines", - ), - ], -) - -runner.run() diff --git a/examples/workflows/router_classifier/workflow.py b/examples/workflows/router_classifier/workflow.py deleted file mode 100644 index 61c4dab09a..0000000000 --- 
a/examples/workflows/router_classifier/workflow.py +++ /dev/null @@ -1,25 +0,0 @@ -from vellum.workflows import BaseWorkflow -from vellum.workflows.state import BaseState - -from .inputs import Inputs -from .nodes.advance_or_reject import AdvanceOrReject -from .nodes.evaluate_resume import EvaluateResume -from .nodes.extract_score import ExtractScore -from .nodes.final_output_email_content import FinalOutputEmailContent -from .nodes.write_next_round_email import WriteNextRoundEmail -from .nodes.write_rejection_email import WriteRejectionEmail - - -class Workflow(BaseWorkflow[Inputs, BaseState]): - graph = ( - EvaluateResume - >> ExtractScore - >> { - AdvanceOrReject.Ports.branch_1 >> WriteNextRoundEmail, - AdvanceOrReject.Ports.branch_2 >> WriteRejectionEmail, - } - >> FinalOutputEmailContent - ) - - class Outputs(BaseWorkflow.Outputs): - email_copy = FinalOutputEmailContent.Outputs.value diff --git a/examples/workflows/trust_center_q_a/README.md b/examples/workflows/trust_center_q_a/README.md deleted file mode 100644 index f0fb640bfb..0000000000 --- a/examples/workflows/trust_center_q_a/README.md +++ /dev/null @@ -1,39 +0,0 @@ -# Trust Center Q&A Workflow - -A Vellum workflow example that demonstrates how to build a question-answering system using chat history and search results. - -## Overview - -This workflow processes user questions by: -1. Extracting the most recent message from chat history -2. Performing search operations to find relevant information -3. Formatting search results and generating answers -4. 
Outputting structured responses including search results, questions, and answers - -## Features - -- **Zero-config monitoring**: Automatically displays monitoring URLs for workflow execution -- **Local SDK integration**: Uses Vellum's local SDK for development and testing -- **Structured outputs**: Provides search results, user questions, and generated answers -- **Real-time monitoring**: View workflow execution details in the Vellum dashboard - -## Requirements - -- Python 3.9+ -- Vellum API key (`VELLUM_API_KEY` environment variable) - -## Quick Start - -1. **Set your API key:** - ```bash - export VELLUM_API_KEY="your-api-key" - ``` - -2. **Run the demo:** - ```bash - python -m trust_center_q_a.external - ``` - -3. **View results:** - - The script will output workflow results to the console - - A monitoring URL will be displayed for viewing execution details in Vellum diff --git a/examples/workflows/trust_center_q_a/__init__.py b/examples/workflows/trust_center_q_a/__init__.py deleted file mode 100644 index 9442617720..0000000000 --- a/examples/workflows/trust_center_q_a/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# flake8: noqa: F401, F403 - -from .display import * diff --git a/examples/workflows/trust_center_q_a/display/__init__.py b/examples/workflows/trust_center_q_a/display/__init__.py deleted file mode 100644 index d38fb6d6a9..0000000000 --- a/examples/workflows/trust_center_q_a/display/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# flake8: noqa: F401, F403 - -from .nodes import * -from .workflow import * diff --git a/examples/workflows/trust_center_q_a/display/nodes/__init__.py b/examples/workflows/trust_center_q_a/display/nodes/__init__.py deleted file mode 100644 index 5365947639..0000000000 --- a/examples/workflows/trust_center_q_a/display/nodes/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from .answer_question import AnswerQuestionDisplay -from .copy_of_note import CopyOfNoteDisplay -from .formatted_search_results import FormattedSearchResultsDisplay -from 
.most_recent_message import MostRecentMessageDisplay -from .output_answer import OutputAnswerDisplay -from .output_search_results import OutputSearchResultsDisplay -from .output_user_question import OutputUserQuestionDisplay -from .search_results import SearchResultsDisplay - -__all__ = [ - "AnswerQuestionDisplay", - "CopyOfNoteDisplay", - "FormattedSearchResultsDisplay", - "MostRecentMessageDisplay", - "OutputAnswerDisplay", - "OutputSearchResultsDisplay", - "OutputUserQuestionDisplay", - "SearchResultsDisplay", -] diff --git a/examples/workflows/trust_center_q_a/display/nodes/answer_question.py b/examples/workflows/trust_center_q_a/display/nodes/answer_question.py deleted file mode 100644 index 83c1559043..0000000000 --- a/examples/workflows/trust_center_q_a/display/nodes/answer_question.py +++ /dev/null @@ -1,36 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseInlinePromptNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.answer_question import AnswerQuestion - - -class AnswerQuestionDisplay(BaseInlinePromptNodeDisplay[AnswerQuestion]): - label = "Answer Question" - node_id = UUID("645ebed4-8dcf-41ed-924a-5f7ce436fe0e") - output_id = UUID("b910a43e-864f-48b4-b57c-e9924d51807b") - array_output_id = UUID("010e1f65-2080-4391-9d1d-a0eaea832021") - target_handle_id = UUID("30ad35f8-ebba-4364-979d-96ba74211898") - node_input_ids_by_name = { - "prompt_inputs.chat_history": UUID("6c0b6479-ea6a-4302-958b-22ad1d630efb"), - "prompt_inputs.context": UUID("5e50c27e-4d76-496e-b8ea-98f185308376"), - } - attribute_ids_by_name = {"ml_model": UUID("9fc7da77-c986-44da-ab04-9c78ed73a3e5")} - output_display = { - AnswerQuestion.Outputs.text: NodeOutputDisplay(id=UUID("b910a43e-864f-48b4-b57c-e9924d51807b"), name="text"), - AnswerQuestion.Outputs.results: NodeOutputDisplay( - 
id=UUID("010e1f65-2080-4391-9d1d-a0eaea832021"), name="results" - ), - AnswerQuestion.Outputs.json: NodeOutputDisplay(id=UUID("904d5bcf-e96c-4ec0-af7f-6e6f11e3af3f"), name="json"), - } - port_displays = { - AnswerQuestion.Ports.default: PortDisplayOverrides(id=UUID("f5c5f06b-3dbc-4864-9f04-f97950ede5b1")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=3575.284864660129, y=357.82013225823766), - width=480, - height=296, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/trust_center_q_a/display/nodes/copy_of_note.py b/examples/workflows/trust_center_q_a/display/nodes/copy_of_note.py deleted file mode 100644 index 9c700abfb7..0000000000 --- a/examples/workflows/trust_center_q_a/display/nodes/copy_of_note.py +++ /dev/null @@ -1,14 +0,0 @@ -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseNoteNodeDisplay - -from ...nodes.copy_of_note import CopyOfNote - - -class CopyOfNoteDisplay(BaseNoteNodeDisplay[CopyOfNote]): - text = "Trust Center Q&A Bot" - style = { - "fontSize": 78, - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=1320.589921371261, y=-136.7475345704591), width=890, height=165 - ) diff --git a/examples/workflows/trust_center_q_a/display/nodes/formatted_search_results.py b/examples/workflows/trust_center_q_a/display/nodes/formatted_search_results.py deleted file mode 100644 index 4f286c4ff2..0000000000 --- a/examples/workflows/trust_center_q_a/display/nodes/formatted_search_results.py +++ /dev/null @@ -1,31 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.formatted_search_results import FormattedSearchResults - - -class 
FormattedSearchResultsDisplay(BaseTemplatingNodeDisplay[FormattedSearchResults]): - label = "Formatted Search Results" - node_id = UUID("3a762441-4bfa-46ca-a2b0-44c392b4a905") - target_handle_id = UUID("d0eab722-992c-4917-ab26-ef42599a1ac3") - node_input_ids_by_name = { - "inputs.results": UUID("9ab1342b-3f0c-40bc-916b-7307619f25a4"), - "template": UUID("26ead46f-1f9f-4ada-9a86-9d132a521331"), - } - output_display = { - FormattedSearchResults.Outputs.result: NodeOutputDisplay( - id=UUID("57af0bae-70a4-4762-b997-6ed6332d4ffe"), name="result" - ) - } - port_displays = { - FormattedSearchResults.Ports.default: PortDisplayOverrides(id=UUID("fea0cc9d-33e6-46e8-9c56-6a3ffb25572c")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=2986.241584788271, y=334.6811885912033), - width=460, - height=315, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/trust_center_q_a/display/nodes/most_recent_message.py b/examples/workflows/trust_center_q_a/display/nodes/most_recent_message.py deleted file mode 100644 index 0c74b68e22..0000000000 --- a/examples/workflows/trust_center_q_a/display/nodes/most_recent_message.py +++ /dev/null @@ -1,28 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseTemplatingNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.most_recent_message import MostRecentMessage - - -class MostRecentMessageDisplay(BaseTemplatingNodeDisplay[MostRecentMessage]): - label = "Most Recent Message" - node_id = UUID("4f33effa-a71a-4850-8340-d1d271ec84ae") - target_handle_id = UUID("23ec9395-1369-4af8-a4ae-097acc4dbd58") - node_input_ids_by_name = { - "inputs.chat_history": UUID("31f0848f-58bc-48ff-8032-ea038e27e7db"), - "template": UUID("8a01c440-8061-46ae-b884-841ac1ce62b0"), - } - output_display = { - 
MostRecentMessage.Outputs.result: NodeOutputDisplay( - id=UUID("b233ca47-3ab5-41b9-9355-605b830bfb22"), name="result" - ) - } - port_displays = { - MostRecentMessage.Ports.default: PortDisplayOverrides(id=UUID("823834a2-0968-4d33-8baf-e23445a15c7f")) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=1785, y=225), width=448, height=315, comment=NodeDisplayComment(expanded=True) - ) diff --git a/examples/workflows/trust_center_q_a/display/nodes/output_answer.py b/examples/workflows/trust_center_q_a/display/nodes/output_answer.py deleted file mode 100644 index 3777398672..0000000000 --- a/examples/workflows/trust_center_q_a/display/nodes/output_answer.py +++ /dev/null @@ -1,24 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from ...nodes.output_answer import OutputAnswer - - -class OutputAnswerDisplay(BaseFinalOutputNodeDisplay[OutputAnswer]): - label = "Output: Answer" - node_id = UUID("deb0823c-20eb-4cb6-8445-636d37a9c58e") - target_handle_id = UUID("df4f459e-c3e4-4ae4-ae33-69145f0d2b50") - output_name = "answer" - node_input_ids_by_name = {"node_input": UUID("eb02ccd1-a768-4fa8-adde-dff1f335a265")} - output_display = { - OutputAnswer.Outputs.value: NodeOutputDisplay(id=UUID("519d3b9b-4caa-4928-abd1-ce3130caabee"), name="value") - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=4140.211433566075, y=326.3319690467531), - width=464, - height=325, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/trust_center_q_a/display/nodes/output_search_results.py b/examples/workflows/trust_center_q_a/display/nodes/output_search_results.py deleted file mode 100644 index 7890465f23..0000000000 --- a/examples/workflows/trust_center_q_a/display/nodes/output_search_results.py +++ 
/dev/null @@ -1,23 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from ...nodes.output_search_results import OutputSearchResults - - -class OutputSearchResultsDisplay(BaseFinalOutputNodeDisplay[OutputSearchResults]): - label = "Output: Search Results" - node_id = UUID("11a597f3-6655-47d5-9e79-ebb3c11965d1") - target_handle_id = UUID("8ebe1a2e-3971-4b1a-9605-2b6a8c59d134") - output_name = "search_results" - node_input_ids_by_name = {"node_input": UUID("c4e72fc2-fa5b-47fd-849c-a5bddf738558")} - output_display = { - OutputSearchResults.Outputs.value: NodeOutputDisplay( - id=UUID("3f526b86-e419-4c89-b7fa-beacd0055556"), name="value" - ) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=2991.452657958532, y=-177.54047446589146), width=447, height=239 - ) diff --git a/examples/workflows/trust_center_q_a/display/nodes/output_user_question.py b/examples/workflows/trust_center_q_a/display/nodes/output_user_question.py deleted file mode 100644 index 1b14a6be9e..0000000000 --- a/examples/workflows/trust_center_q_a/display/nodes/output_user_question.py +++ /dev/null @@ -1,26 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseFinalOutputNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay - -from ...nodes.output_user_question import OutputUserQuestion - - -class OutputUserQuestionDisplay(BaseFinalOutputNodeDisplay[OutputUserQuestion]): - label = "Output: User Question" - node_id = UUID("12c866ee-27f8-4b1a-a664-034dfa69c789") - target_handle_id = UUID("998b1e72-aa65-4f7a-8bfd-78f944b60d0b") - output_name = "question" - node_input_ids_by_name = {"node_input": 
UUID("092da64e-5021-492d-8ccb-333c9602f423")} - output_display = { - OutputUserQuestion.Outputs.value: NodeOutputDisplay( - id=UUID("c2fb17c7-f6aa-44b0-a4f1-805f46e058c9"), name="value" - ) - } - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=2341.90612872579, y=-239.41100749518978), - width=453, - height=362, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/trust_center_q_a/display/nodes/search_results.py b/examples/workflows/trust_center_q_a/display/nodes/search_results.py deleted file mode 100644 index 3552164396..0000000000 --- a/examples/workflows/trust_center_q_a/display/nodes/search_results.py +++ /dev/null @@ -1,37 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.editor import NodeDisplayComment, NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.nodes import BaseSearchNodeDisplay -from vellum_ee.workflows.display.nodes.types import NodeOutputDisplay, PortDisplayOverrides - -from ...nodes.search_results import SearchResults - - -class SearchResultsDisplay(BaseSearchNodeDisplay[SearchResults]): - label = "Search Results" - node_id = UUID("18a3ba39-07d8-4cf1-8c42-4bc3322d6910") - target_handle_id = UUID("6375f7ea-c1b8-4bca-ba19-0c45bc75831b") - metadata_filter_input_id_by_operand_id = {} - node_input_ids_by_name = { - "query": UUID("3f52de4c-2123-45b6-80d2-0253c42eb87f"), - "document_index_id": UUID("90d2e8f8-1bcb-4004-b69a-429db4f3e832"), - "weights": UUID("e3875cf1-7635-4a7a-8dc7-85e4875ff088"), - "limit": UUID("fd428e38-4a5c-48e5-bcff-0003bdd5ddb9"), - "separator": UUID("e50fd18a-f335-4ea2-8ab2-e2180cfaba58"), - "result_merging_enabled": UUID("cd61d622-3053-40e0-8eac-2e00d1a19a81"), - "external_id_filters": UUID("47506750-227c-469a-9875-18414cdc7379"), - "metadata_filters": UUID("6086c581-152e-4983-82fa-476511964ef6"), - } - output_display = { - SearchResults.Outputs.results: NodeOutputDisplay( - id=UUID("3f3d52db-649c-484a-af5f-17986b861a79"), name="results" - ), - 
SearchResults.Outputs.text: NodeOutputDisplay(id=UUID("da7f1722-0986-4313-abb3-d8550f8031d0"), name="text"), - } - port_displays = {SearchResults.Ports.default: PortDisplayOverrides(id=UUID("7f634369-dd10-4894-baa9-a73d38732ea8"))} - display_data = NodeDisplayData( - position=NodeDisplayPosition(x=2398.232394395274, y=252.2547364643862), - width=465, - height=271, - comment=NodeDisplayComment(expanded=True), - ) diff --git a/examples/workflows/trust_center_q_a/display/workflow.py b/examples/workflows/trust_center_q_a/display/workflow.py deleted file mode 100644 index 62584cd25f..0000000000 --- a/examples/workflows/trust_center_q_a/display/workflow.py +++ /dev/null @@ -1,68 +0,0 @@ -from uuid import UUID - -from vellum_ee.workflows.display.base import ( - EdgeDisplay, - EntrypointDisplay, - WorkflowDisplayData, - WorkflowDisplayDataViewport, - WorkflowInputsDisplay, - WorkflowMetaDisplay, - WorkflowOutputDisplay, -) -from vellum_ee.workflows.display.editor import NodeDisplayData, NodeDisplayPosition -from vellum_ee.workflows.display.workflows import BaseWorkflowDisplay - -from ..inputs import Inputs -from ..nodes.answer_question import AnswerQuestion -from ..nodes.formatted_search_results import FormattedSearchResults -from ..nodes.most_recent_message import MostRecentMessage -from ..nodes.output_answer import OutputAnswer -from ..nodes.output_search_results import OutputSearchResults -from ..nodes.output_user_question import OutputUserQuestion -from ..nodes.search_results import SearchResults -from ..workflow import Workflow - - -class WorkflowDisplay(BaseWorkflowDisplay[Workflow]): - workflow_display = WorkflowMetaDisplay( - entrypoint_node_id=UUID("eeb618e0-2b37-4fa0-933a-cd2c9ae73c25"), - entrypoint_node_source_handle_id=UUID("93d0da15-cfd4-47ec-9f16-b9f8e2bcfb28"), - entrypoint_node_display=NodeDisplayData(position=NodeDisplayPosition(x=1545, y=330), width=124, height=48), - display_data=WorkflowDisplayData( - 
viewport=WorkflowDisplayDataViewport(x=-1046.5185694818517, y=52.289078412761114, zoom=0.4747854734853663) - ), - ) - inputs_display = { - Inputs.chat_history: WorkflowInputsDisplay(id=UUID("499159eb-4f31-4659-9c87-4ad6a727419a"), name="chat_history") - } - entrypoint_displays = { - MostRecentMessage: EntrypointDisplay( - id=UUID("eeb618e0-2b37-4fa0-933a-cd2c9ae73c25"), - edge_display=EdgeDisplay(id=UUID("541590a7-c64b-4ec0-bca3-8b2a80f7dfd2")), - ) - } - edge_displays = { - (MostRecentMessage.Ports.default, SearchResults): EdgeDisplay(id=UUID("3742b578-1514-48e4-a21c-f03f48ab2fce")), - (SearchResults.Ports.default, FormattedSearchResults): EdgeDisplay( - id=UUID("5d17d921-a0e7-4482-94a0-887817bf26da") - ), - (MostRecentMessage.Ports.default, OutputUserQuestion): EdgeDisplay( - id=UUID("0b71f497-5a13-4080-9961-f21d2929bebf") - ), - (SearchResults.Ports.default, OutputSearchResults): EdgeDisplay( - id=UUID("fdd7afb7-e683-46d4-a4ed-bc478cecebf8") - ), - (FormattedSearchResults.Ports.default, AnswerQuestion): EdgeDisplay( - id=UUID("039f9b34-80f9-4f8d-8d13-2fbb64efb5d3") - ), - (AnswerQuestion.Ports.default, OutputAnswer): EdgeDisplay(id=UUID("c54dae0d-a84a-4f82-94b5-072144cec345")), - } - output_displays = { - Workflow.Outputs.search_results: WorkflowOutputDisplay( - id=UUID("3f526b86-e419-4c89-b7fa-beacd0055556"), name="search_results" - ), - Workflow.Outputs.question: WorkflowOutputDisplay( - id=UUID("c2fb17c7-f6aa-44b0-a4f1-805f46e058c9"), name="question" - ), - Workflow.Outputs.answer: WorkflowOutputDisplay(id=UUID("519d3b9b-4caa-4928-abd1-ce3130caabee"), name="answer"), - } diff --git a/examples/workflows/trust_center_q_a/external.py b/examples/workflows/trust_center_q_a/external.py deleted file mode 100644 index 011ce55e78..0000000000 --- a/examples/workflows/trust_center_q_a/external.py +++ /dev/null @@ -1,54 +0,0 @@ -import sys -import time - -from vellum.client.types import ChatMessage -from vellum.workflows.emitters.vellum_emitter import VellumEmitter # 
local SDK emitter - -from .inputs import Inputs -from .workflow import Workflow - -print("Trust Center Q&A Local Demo (Zero-Config Monitoring)") -print("=" * 50) - -print("\nCreating Trust Center Q&A workflow...") -# Explicitly pass the local VellumEmitter instance so the workflow uses it -workflow = Workflow(emitters=[VellumEmitter()]) - -print("\nRunning Trust Center Q&A workflow...") -print(" (Monitoring URL will be printed automatically if enabled)") - -# Minimal required inputs for this workflow -sample_question = "What security certifications does Vellum have?" -inputs = Inputs(chat_history=[ChatMessage(role="USER", text=sample_question)]) - -# Run the workflow - monitoring URL will be automatically printed! -result = workflow.run(inputs=inputs) - -print(f"\nWorkflow completed! \n") - - -span_id = str(getattr(result, "span_id", "")) -if span_id: - # Make sure all events are published to clickhouse before printing the link. - print(" Preparing Monitoring URL. Waiting ~15 seconds so all events can be published to clickhouse...") - print(" If details fail to load, refresh the page a few times until it shows up.\n") - - frames = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"] - start_time = time.time() - i = 0 - while time.time() - start_time < 15: - dots = "." * (i % 10) - frame = frames[i % len(frames)] - sys.stdout.write(f"\r {frame} Publishing events{dots:<9}") - sys.stdout.flush() - time.sleep(0.1) - i += 1 - sys.stdout.write("\r ✓ Events published. Generating link... 
\n") - - url = workflow.context.get_monitoring_url(span_id) - if url: - print(f" Monitoring URL: {url} \n") -else: - print(" No span_id available to construct monitoring URL.\n") - -print("\nTrust Center Q&A demo complete!") diff --git a/examples/workflows/trust_center_q_a/inputs.py b/examples/workflows/trust_center_q_a/inputs.py deleted file mode 100644 index 48cbf257ff..0000000000 --- a/examples/workflows/trust_center_q_a/inputs.py +++ /dev/null @@ -1,8 +0,0 @@ -from typing import List - -from vellum import ChatMessage -from vellum.workflows.inputs import BaseInputs - - -class Inputs(BaseInputs): - chat_history: List[ChatMessage] diff --git a/examples/workflows/trust_center_q_a/nodes/__init__.py b/examples/workflows/trust_center_q_a/nodes/__init__.py deleted file mode 100644 index 00da286dd5..0000000000 --- a/examples/workflows/trust_center_q_a/nodes/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from .answer_question import AnswerQuestion -from .copy_of_note import CopyOfNote -from .formatted_search_results import FormattedSearchResults -from .most_recent_message import MostRecentMessage -from .output_answer import OutputAnswer -from .output_search_results import OutputSearchResults -from .output_user_question import OutputUserQuestion -from .search_results import SearchResults - -__all__ = [ - "AnswerQuestion", - "CopyOfNote", - "FormattedSearchResults", - "MostRecentMessage", - "OutputAnswer", - "OutputSearchResults", - "OutputUserQuestion", - "SearchResults", -] diff --git a/examples/workflows/trust_center_q_a/nodes/answer_question.py b/examples/workflows/trust_center_q_a/nodes/answer_question.py deleted file mode 100644 index bca3067f52..0000000000 --- a/examples/workflows/trust_center_q_a/nodes/answer_question.py +++ /dev/null @@ -1,46 +0,0 @@ -from vellum import ChatMessagePromptBlock, JinjaPromptBlock, PromptParameters, VariablePromptBlock -from vellum.workflows.nodes.displayable import InlinePromptNode - -from ..inputs import Inputs -from 
.formatted_search_results import FormattedSearchResults - - -class AnswerQuestion(InlinePromptNode): - """Here we use an LLM to answer the user's question. We give it the search results and previous messages in the conversation as context.""" - - ml_model = "gpt-4o-mini" - blocks = [ - ChatMessagePromptBlock( - chat_role="SYSTEM", - blocks=[ - JinjaPromptBlock( - template="""\ -Answer the user\'s question based on the context provided below. If you don\'t know the answer say \"Sorry, I don\'t know.\" - -**Context** -`` -{{ context }} -`` - -Limit your answer to 250 words and provide a citation at the end of your answer\ -""" - ) - ], - ), - VariablePromptBlock(input_variable="chat_history"), - ] - prompt_inputs = { - "chat_history": Inputs.chat_history, - "context": FormattedSearchResults.Outputs.result, - } - parameters = PromptParameters( - stop=[], - temperature=0, - max_tokens=1000, - top_p=1, - top_k=0, - frequency_penalty=0, - presence_penalty=0, - logit_bias={}, - custom_parameters=None, - ) diff --git a/examples/workflows/trust_center_q_a/nodes/copy_of_note.py b/examples/workflows/trust_center_q_a/nodes/copy_of_note.py deleted file mode 100644 index 7178c18932..0000000000 --- a/examples/workflows/trust_center_q_a/nodes/copy_of_note.py +++ /dev/null @@ -1,5 +0,0 @@ -from vellum.workflows.nodes.displayable import NoteNode - - -class CopyOfNote(NoteNode): - pass diff --git a/examples/workflows/trust_center_q_a/nodes/formatted_search_results.py b/examples/workflows/trust_center_q_a/nodes/formatted_search_results.py deleted file mode 100644 index e10ff4ecda..0000000000 --- a/examples/workflows/trust_center_q_a/nodes/formatted_search_results.py +++ /dev/null @@ -1,24 +0,0 @@ -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.state import BaseState - -from .search_results import SearchResults - - -class FormattedSearchResults(TemplatingNode[BaseState, str]): - """Here we format the resulting chunks that we retrieved from the 
Document Index and include the source document in each formatted chunk so our LLM can cite its source when it gives an answer.""" - - template = """\ -{% for result in results -%} -Policy {{ result.document.label }}: ------- -{{ result.text }} -{% if not loop.last %} - -##### - -{% endif %} -{% endfor %}\ -""" - inputs = { - "results": SearchResults.Outputs.results, - } diff --git a/examples/workflows/trust_center_q_a/nodes/most_recent_message.py b/examples/workflows/trust_center_q_a/nodes/most_recent_message.py deleted file mode 100644 index 951d80c051..0000000000 --- a/examples/workflows/trust_center_q_a/nodes/most_recent_message.py +++ /dev/null @@ -1,13 +0,0 @@ -from vellum.workflows.nodes.displayable import TemplatingNode -from vellum.workflows.state import BaseState - -from ..inputs import Inputs - - -class MostRecentMessage(TemplatingNode[BaseState, str]): - """Here we extract the user's individual message. In the next node, we'll use it to search for relevant chunks from previously uploaded security policy PDFs in a Vellum Document Index.""" - - template = """{{ chat_history[-1][\"text\"] }}""" - inputs = { - "chat_history": Inputs.chat_history, - } diff --git a/examples/workflows/trust_center_q_a/nodes/output_answer.py b/examples/workflows/trust_center_q_a/nodes/output_answer.py deleted file mode 100644 index 3e3785400a..0000000000 --- a/examples/workflows/trust_center_q_a/nodes/output_answer.py +++ /dev/null @@ -1,11 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .answer_question import AnswerQuestion - - -class OutputAnswer(FinalOutputNode[BaseState, str]): - """Finally, we output the model's response to the user's question. 
This is the value we would use in our application when we ultimately display a response to the user and add it to the Chat History.""" - - class Outputs(FinalOutputNode.Outputs): - value = AnswerQuestion.Outputs.text diff --git a/examples/workflows/trust_center_q_a/nodes/output_search_results.py b/examples/workflows/trust_center_q_a/nodes/output_search_results.py deleted file mode 100644 index fb411e5173..0000000000 --- a/examples/workflows/trust_center_q_a/nodes/output_search_results.py +++ /dev/null @@ -1,12 +0,0 @@ -from typing import List - -from vellum import SearchResult -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .search_results import SearchResults - - -class OutputSearchResults(FinalOutputNode[BaseState, List[SearchResult]]): - class Outputs(FinalOutputNode.Outputs): - value = SearchResults.Outputs.results diff --git a/examples/workflows/trust_center_q_a/nodes/output_user_question.py b/examples/workflows/trust_center_q_a/nodes/output_user_question.py deleted file mode 100644 index 2741d9d232..0000000000 --- a/examples/workflows/trust_center_q_a/nodes/output_user_question.py +++ /dev/null @@ -1,11 +0,0 @@ -from vellum.workflows.nodes.displayable import FinalOutputNode -from vellum.workflows.state import BaseState - -from .most_recent_message import MostRecentMessage - - -class OutputUserQuestion(FinalOutputNode[BaseState, str]): - """Exposing intermediate outputs can be useful for unit testing individual pieces of the Workflow in the Evaluations tab. 
For example, by exposing the user's question to our Test Suite, we could use the RAGAS Context Relevancy Metric to evaluate the quality of the search results separately from the quality of the model's final answer.""" - - class Outputs(FinalOutputNode.Outputs): - value = MostRecentMessage.Outputs.result diff --git a/examples/workflows/trust_center_q_a/nodes/search_results.py b/examples/workflows/trust_center_q_a/nodes/search_results.py deleted file mode 100644 index 999e897501..0000000000 --- a/examples/workflows/trust_center_q_a/nodes/search_results.py +++ /dev/null @@ -1,19 +0,0 @@ -from vellum import SearchResultMergingRequest, SearchWeightsRequest -from vellum.workflows.nodes.displayable import SearchNode -from vellum.workflows.nodes.displayable.bases.types import MetadataLogicalConditionGroup, SearchFilters - -from .most_recent_message import MostRecentMessage - - -class SearchResults(SearchNode): - """Here we perform a semantic search on a Vellum Document Index that contains PDFs about Vellum's security policies. 
As a result, we get chunks from PDFs scored by semantic similarity to the user's query.""" - - query = MostRecentMessage.Outputs.result - document_index = "vellum-security-policies" - limit = 8 - weights = SearchWeightsRequest(semantic_similarity=0.8, keywords=0.2) - result_merging = SearchResultMergingRequest(enabled=True) - filters = SearchFilters( - external_ids=None, metadata=MetadataLogicalConditionGroup(combinator="AND", negated=False, conditions=[]) - ) - chunk_separator = "\n\n#####\n\n" diff --git a/examples/workflows/trust_center_q_a/sandbox.py b/examples/workflows/trust_center_q_a/sandbox.py deleted file mode 100644 index 1eae7ab2a8..0000000000 --- a/examples/workflows/trust_center_q_a/sandbox.py +++ /dev/null @@ -1,68 +0,0 @@ -from vellum import ChatMessage, StringChatMessageContent -from vellum.workflows.sandbox import WorkflowSandboxRunner - -from .inputs import Inputs -from .workflow import Workflow - -if __name__ != "__main__": - raise Exception("This file is not meant to be imported") - - -runner = WorkflowSandboxRunner( - workflow=Workflow(), - inputs=[ - Inputs( - chat_history=[ - ChatMessage( - role="USER", - text="How often is employee training?", - content=StringChatMessageContent(value="How often is employee training?"), - ), - ] - ), - Inputs( - chat_history=[ - ChatMessage( - role="USER", - text="How often is employee training?", - content=StringChatMessageContent(value="How often is employee training?"), - ), - ChatMessage( - role="ASSISTANT", - text="Employee training, as outlined in the provided policies, occurs on an annual basis. All new hires are required to complete information security awareness training as part of their onboarding process and then annually thereafter. This ongoing training includes security and privacy requirements, the correct use of information assets and facilities, and, consistent with assigned roles and responsibilities, incident response and contingency training. 
Additionally, individuals responsible for supporting or writing code for internet-facing applications or internal applications that handle customer information must complete annual security training specific to secure coding practices, which includes OWASP secure development principles and awareness of the OWASP top 10 vulnerabilities for the most recent year available.\n\nCitation: Policy Information Security Policy - v1.pdf & Policy Software Development Life Cycle Policy - v1.pdf.", - content=StringChatMessageContent( - value="Employee training, as outlined in the provided policies, occurs on an annual basis. All new hires are required to complete information security awareness training as part of their onboarding process and then annually thereafter. This ongoing training includes security and privacy requirements, the correct use of information assets and facilities, and, consistent with assigned roles and responsibilities, incident response and contingency training. Additionally, individuals responsible for supporting or writing code for internet-facing applications or internal applications that handle customer information must complete annual security training specific to secure coding practices, which includes OWASP secure development principles and awareness of the OWASP top 10 vulnerabilities for the most recent year available.\n\nCitation: Policy Information Security Policy - v1.pdf & Policy Software Development Life Cycle Policy - v1.pdf." - ), - ), - ChatMessage( - role="USER", - text="How does the organization do device management?", - content=StringChatMessageContent(value="How does the organization do device management?"), - ), - ] - ), - Inputs( - chat_history=[ - ChatMessage( - role="USER", - text="How often is employee training?", - content=StringChatMessageContent(value="How often is employee training?"), - ), - ChatMessage( - role="ASSISTANT", - text="Employee training, as outlined in the provided policies, occurs on an annual basis. 
All new hires are required to complete information security awareness training as part of their onboarding process and then annually thereafter. This ongoing training includes security and privacy requirements, the correct use of information assets and facilities, and, consistent with assigned roles and responsibilities, incident response and contingency training. Additionally, individuals responsible for supporting or writing code for internet-facing applications or internal applications that handle customer information must complete annual security training specific to secure coding practices, which includes OWASP secure development principles and awareness of the OWASP top 10 vulnerabilities for the most recent year available.\n\nCitation: Policy Information Security Policy - v1.pdf & Policy Software Development Life Cycle Policy - v1.pdf.", - content=StringChatMessageContent( - value="Employee training, as outlined in the provided policies, occurs on an annual basis. All new hires are required to complete information security awareness training as part of their onboarding process and then annually thereafter. This ongoing training includes security and privacy requirements, the correct use of information assets and facilities, and, consistent with assigned roles and responsibilities, incident response and contingency training. Additionally, individuals responsible for supporting or writing code for internet-facing applications or internal applications that handle customer information must complete annual security training specific to secure coding practices, which includes OWASP secure development principles and awareness of the OWASP top 10 vulnerabilities for the most recent year available.\n\nCitation: Policy Information Security Policy - v1.pdf & Policy Software Development Life Cycle Policy - v1.pdf." 
- ), - ), - ChatMessage( - role="USER", - text="How does the organization do device management?", - content=StringChatMessageContent(value="How does the organization do device management?"), - ), - ] - ), - ], -) - -runner.run() diff --git a/examples/workflows/trust_center_q_a/workflow.py b/examples/workflows/trust_center_q_a/workflow.py deleted file mode 100644 index 1893325cb5..0000000000 --- a/examples/workflows/trust_center_q_a/workflow.py +++ /dev/null @@ -1,29 +0,0 @@ -from vellum.workflows import BaseWorkflow -from vellum.workflows.state import BaseState - -from .inputs import Inputs -from .nodes.answer_question import AnswerQuestion -from .nodes.copy_of_note import CopyOfNote -from .nodes.formatted_search_results import FormattedSearchResults -from .nodes.most_recent_message import MostRecentMessage -from .nodes.output_answer import OutputAnswer -from .nodes.output_search_results import OutputSearchResults -from .nodes.output_user_question import OutputUserQuestion -from .nodes.search_results import SearchResults - - -class Workflow(BaseWorkflow[Inputs, BaseState]): - graph = MostRecentMessage >> { - SearchResults - >> { - FormattedSearchResults >> AnswerQuestion >> OutputAnswer, - OutputSearchResults, - }, - OutputUserQuestion, - } - unused_graphs = {CopyOfNote} - - class Outputs(BaseWorkflow.Outputs): - search_results = OutputSearchResults.Outputs.value - question = OutputUserQuestion.Outputs.value - answer = OutputAnswer.Outputs.value diff --git a/examples/workflows/utils/Dockerfile b/examples/workflows/utils/Dockerfile deleted file mode 100644 index 25bd0e1c34..0000000000 --- a/examples/workflows/utils/Dockerfile +++ /dev/null @@ -1,10 +0,0 @@ -FROM vellumai/python-workflow-runtime:latest - -RUN pip install --upgrade pip \ - && pip install boto3 \ - && pip install mcp - -COPY ./utils /custom/utils -ENV PYTHONPATH="${PYTHONPATH}:/custom" - -CMD vellum_start_server diff --git a/examples/workflows/utils/__init__.py b/examples/workflows/utils/__init__.py 
deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/workflows/utils/networking.py b/examples/workflows/utils/networking.py deleted file mode 100644 index a9f5ce486d..0000000000 --- a/examples/workflows/utils/networking.py +++ /dev/null @@ -1,18 +0,0 @@ -import random -import time - - -class MyCustomNetworkingClient: - def invoke_request(self, name: str, request: dict) -> dict: - if name == "get_temperature": - return self._mock_network_response({"temperature": random.randint(60, 80)}) - elif name == "echo_request": - return self._mock_network_response({"foo": "bar", "request": request}) - elif name == "fibonacci": - return self._mock_network_response({"data": [1, 1, 2, 3, 5, 8]}) - else: - raise ValueError(f"Invalid tool name: {name}") - - def _mock_network_response(self, response: dict, latency: int = 1) -> dict: - time.sleep(latency) # Simulate network latency - return response diff --git a/examples/workflows/vellum.lock.json b/examples/workflows/vellum.lock.json deleted file mode 100644 index b2a38ad1c6..0000000000 --- a/examples/workflows/vellum.lock.json +++ /dev/null @@ -1,152 +0,0 @@ -{ - "version": "1.0", - "workflows": [ - { - "module": "custom_base_node", - "workflow_sandbox_id": "9e23a0dd-cd7a-4f0c-b0ba-d72753ab4c41", - "ignore": null, - "deployments": [], - "container_image_name": "sdk-examples-utils", - "container_image_tag": "1.0.3", - "workspace": "default", - "target_directory": null - }, - { - "module": "custom_prompt_node", - "workflow_sandbox_id": "a8e57295-afb9-4b8c-8182-936000c5b0e2", - "ignore": null, - "deployments": [], - "container_image_name": "sdk-examples-utils", - "container_image_tag": "1.0.3", - "workspace": "default", - "target_directory": null - }, - { - "module": "customer_support_q_a", - "workflow_sandbox_id": "3daec7c3-b4f7-417c-8004-e42a1103179e", - "ignore": "sandbox.py", - "deployments": [], - "container_image_name": null, - "container_image_tag": null, - "workspace": "default", - "target_directory": 
null - }, - { - "module": "document_parsing", - "workflow_sandbox_id": "1c3d3daf-a360-4ef9-93e2-7291185086ca", - "ignore": "sandbox.py", - "deployments": [], - "container_image_name": null, - "container_image_tag": null, - "workspace": "default", - "target_directory": null - }, - { - "module": "extract_from_image_of_receipt", - "workflow_sandbox_id": "e10e2eb8-b243-4e3f-a0ed-a92a0681e543", - "ignore": "sandbox.py", - "deployments": [], - "container_image_name": null, - "container_image_tag": null, - "workspace": "default", - "target_directory": null - }, - { - "module": "function_calling_demo", - "workflow_sandbox_id": "4da762fd-9dad-4ee4-a063-b32f5188721e", - "ignore": null, - "deployments": [], - "container_image_name": null, - "container_image_tag": null, - "workspace": "aws-staging", - "target_directory": null - }, - { - "module": "function_calling_demo", - "workflow_sandbox_id": "790e20d9-ca1f-482a-b746-63e94b879952", - "ignore": null, - "deployments": [], - "container_image_name": null, - "container_image_tag": null, - "workspace": "default", - "target_directory": null - }, - { - "module": "image_processing", - "workflow_sandbox_id": "965383d3-5da5-45b2-a1fe-9e1fd9ae8c0e", - "ignore": "sandbox.py", - "deployments": [], - "container_image_name": null, - "container_image_tag": null, - "workspace": "default", - "target_directory": null - }, - { - "module": "mcp_demo", - "workflow_sandbox_id": "f530e95a-c752-4c17-bbfc-309eed234502", - "ignore": null, - "deployments": [], - "container_image_name": "sdk-examples-utils", - "container_image_tag": "1.0.9", - "workspace": "default", - "target_directory": null - }, - { - "module": "re_act_agent", - "workflow_sandbox_id": "1ae53c7b-f95d-4d5c-a13e-697f5183a9fc", - "ignore": "sandbox.py", - "deployments": [], - "container_image_name": null, - "container_image_tag": null, - "workspace": "default", - "target_directory": null - }, - { - "module": "re_act_agent", - "workflow_sandbox_id": "27612dbd-f650-41c3-ad25-5c118fbc15b8", 
- "ignore": null, - "deployments": [], - "container_image_name": null, - "container_image_tag": null, - "workspace": "aws-staging", - "target_directory": null - }, - { - "module": "reflection_agent", - "workflow_sandbox_id": "b2143965-9f02-4225-a1db-543a83d00452", - "ignore": "sandbox.py", - "deployments": [], - "container_image_name": null, - "container_image_tag": null, - "workspace": "default", - "target_directory": null - }, - { - "module": "router_classifier", - "workflow_sandbox_id": "d173ba01-2066-46e9-b481-5057d7d65266", - "ignore": "sandbox.py", - "deployments": [], - "container_image_name": "python-workflow-runtime", - "container_image_tag": "latest", - "workspace": "default", - "target_directory": null - }, - { - "module": "trust_center_q_a", - "workflow_sandbox_id": "f0662f3a-af4f-44a8-85c4-eefe0a1c1c3a", - "ignore": "sandbox.py", - "deployments": [], - "container_image_name": null, - "container_image_tag": null, - "workspace": "default", - "target_directory": null - } - ], - "workspaces": [ - { - "name": "aws-staging", - "api_key": "AWS_STAGING_VELLUM_API_KEY", - "api_url": "AWS_STAGING_VELLUM_API_URL" - } - ] -}