# Documentation for Agentic LLM Code

## Overview

The Agentic LLM module is part of the `patchwork` package and handles Large Language Model (LLM) interactions using a predefined agentic strategy. It elicits responses from a language model, manages API keys, and defines the input and output data structures for these interactions.

### Structure

The module is organized as follows:

- **`__init__.py`**: An empty initialization file that makes the directory a Python package.
- **`typed.py`**: Defines the input and output data types for the agentic LLM process.
- **`AgenticLLM.py`**: Implements the main logic for the agentic LLM step, driving multi-turn interactions with a language model through an agentic strategy.

## Detailed Description

### File: `typed.py`

#### Inputs

- **`base_path`**: (Optional) The base path for accessing necessary tools.
- **`prompt_value`**: A dictionary of dynamic values substituted into the templated prompts.
- **`system_prompt`**: A string used as a system prompt template.
- **`user_prompt`**: A string used as a user prompt template.
- **`max_llm_calls`**: An integer indicating the maximum number of LLM API calls allowed; treated as a configuration parameter.
- **API keys**: `openai_api_key`, `anthropic_api_key`, `patched_api_key`, and `google_api_key`, allowing a separate key to be supplied for each supported LLM provider.
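
Below is a minimal sketch of how the input type in `typed.py` might be declared, assuming plain `TypedDict` fields; the class name `AgenticLLMInputs` and the treatment of every field as optional are assumptions made for illustration rather than details taken from the module itself.

```python
from typing import Any, Dict, TypedDict


class AgenticLLMInputs(TypedDict, total=False):
    base_path: str                # optional base path used to locate tools
    prompt_value: Dict[str, Any]  # values substituted into the prompt templates
    system_prompt: str            # system prompt template
    user_prompt: str              # user prompt template
    max_llm_calls: int            # configuration: cap on the number of LLM API calls
    openai_api_key: str           # provider-specific API keys
    anthropic_api_key: str
    patched_api_key: str
    google_api_key: str
```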

#### Outputs

- **`conversation_history`**: A list of dictionaries tracking the conversation steps with the model.
- **`tool_records`**: A list of dictionaries containing records of tool usage during interactions.
- **`request_tokens`**: An integer indicating the number of tokens sent in requests.
- **`response_tokens`**: An integer indicating the number of tokens received in responses.
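
The output type could be sketched along the same lines; as above, the class name `AgenticLLMOutputs` is an assumed placeholder rather than a confirmed identifier.

```python
from typing import Any, Dict, List, TypedDict


class AgenticLLMOutputs(TypedDict):
    conversation_history: List[Dict[str, Any]]  # ordered messages exchanged with the model
    tool_records: List[Dict[str, Any]]          # tool invocations recorded during the run
    request_tokens: int                         # tokens sent in requests
    response_tokens: int                        # tokens received in responses
```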

### File: `AgenticLLM.py`

#### Functionality

- **Initialization**: On construction, the class configures the LLM client and the toolset, then uses the provided input configuration to build the agentic strategy.
- **Execution**: The `run` method executes the agentic strategy up to the configured conversation limit and returns structured output data, including the conversation history, tool records, and token counts.
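
Putting these pieces together, the step could be structured roughly as follows. This is only a sketch of the flow described above: `_StubStrategy` and the way the strategy is derived from the inputs are hypothetical stand-ins, not the module's actual internals.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class _StubStrategy:
    """Hypothetical stand-in for the module's agentic strategy object."""

    history: List[Dict[str, Any]] = field(default_factory=list)
    tool_records: List[Dict[str, Any]] = field(default_factory=list)
    request_tokens: int = 0
    response_tokens: int = 0

    def execute(self, limit: int) -> None:
        # The real strategy would drive a multi-turn exchange with the LLM here,
        # stopping once the call limit is reached.
        pass


class AgenticLLM:
    """Sketch of the step: configure the client and tools, then run the strategy."""

    def __init__(self, inputs: Dict[str, Any]):
        self.conversation_limit = int(inputs.get("max_llm_calls", 1))
        # The real implementation would build the strategy from an LLM client
        # (selected via the API keys), the tools found under `base_path`, and
        # the prompt templates rendered with `prompt_value`.
        self.strategy = _StubStrategy()

    def run(self) -> Dict[str, Any]:
        self.strategy.execute(limit=self.conversation_limit)
        return {
            "conversation_history": self.strategy.history,
            "tool_records": self.strategy.tool_records,
            "request_tokens": self.strategy.request_tokens,
            "response_tokens": self.strategy.response_tokens,
        }
```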

#### Usage

The `AgenticLLM` class is intended for scenarios that require automated, multi-turn interaction with a language model. Its inputs allow queries to be adjusted dynamically through templated prompts, and the per-provider API key options make it straightforward to integrate with multiple LLM services.
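
For example, assuming the step is constructed from a plain dictionary of the inputs listed earlier, an invocation might look like the following; the prompt text, the `{{file_path}}` placeholder syntax, and the API key are illustrative only.

```python
inputs = {
    "system_prompt": "You are a careful code-review assistant.",
    "user_prompt": "Review the file at {{file_path}} and suggest improvements.",
    "prompt_value": {"file_path": "src/app.py"},
    "max_llm_calls": 5,
    "openai_api_key": "sk-...",
}

step = AgenticLLM(inputs)
outputs = step.run()
print(outputs["request_tokens"], outputs["response_tokens"])
```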

---

This documentation is intended to offer a comprehensive overview and guide for developers working with or extending the functionality of the AgenticLLM module within the `patchwork` framework.