---
title: "LLM Agentic Tool Mesh: Harnessing agent services and multi-agent AI for
 next-level Gen AI"
date: 2024-12-12T17:08:46.212Z
author: Antonio Fin
authorimage: /img/afin_photo.jpg
disable: false
---
<style>
li {
  font-size: 27px !important;
  line-height: 33px !important;
  max-width: none !important;
}
</style>

In our previous blog post, we explored the [Chat Service](https://developer.hpe.com/blog/ll-mesh-exploring-chat-service-and-factory-design-pattern/) of [LLM Agentic Tool Mesh](https://developer.hpe.com/blog/ll-mesh-democratizing-gen-ai-through-open-source-innovation-1/), an [open-source project](https://github.com/HewlettPackard/llmesh) aimed at democratizing Generative AI (Gen AI).

Today, we'll delve into another core feature: the **Agent Service**. We'll discuss what agents are, explain the related LLM Agentic Tool Mesh services, and showcase examples from the LLM Agentic Tool Mesh repository.

## Understanding LLM agents

In the context of Large Language Models (LLMs), an agent is an autonomous entity capable of:

* **Perceiving its environment**: Agents can gather and interpret information from their surroundings.
* **Making decisions**: Based on the perceived information, agents decide on the best course of action.
* **Acting on decisions**: Agents execute actions to achieve specific objectives.

Depending on the complexity of the task, these agents can operate independently or interact with one another to optimize their collective performance. Multi-agent AI takes this a step further: it coordinates multiple agents, each specialized in a specific domain or function, so that they collaborate to achieve a common goal. These agents handle:

* **Task division**: Dividing complex tasks into manageable parts.
* **Specialization**: Each agent specializes in a particular function, such as information retrieval or decision-making.
* **Collaboration**: Agents communicate and share information for effective and efficient task execution.

Managing such agents typically requires advanced coding and deep knowledge of agent-based systems. However, LLM Agentic Tool Mesh simplifies this process by providing high-level abstractions through intuitive prompts and configuration files. Users can focus on defining tasks and desired outcomes while LLM Agentic Tool Mesh handles the coordination, task distribution, and result aggregation behind the scenes.
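
To make the pattern concrete before looking at the LLM Agentic Tool Mesh services, here is a deliberately simple, framework-free Python sketch of the coordination work described above: a coordinator divides a request into sub-tasks, hands each to a specialized worker, and chains the results. Every name in it (the roles, the string-based "work") is purely illustrative and not part of LLM Agentic Tool Mesh; it just shows the kind of plumbing the framework abstracts away.

```python
# Toy illustration of task division, specialization, and collaboration.
# No LLMs involved; all names here are made up for the example.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class SubTask:
    role: str        # which specialist should handle it
    payload: str     # the piece of work to do


def research(topic: str) -> str:
    return f"[notes on {topic}]"


def write(notes: str) -> str:
    return f"[draft based on {notes}]"


# Specialization: each "agent" is just a function registered under a role.
AGENTS: Dict[str, Callable[[str], str]] = {
    "researcher": research,
    "writer": write,
}


def coordinator(request: str) -> str:
    # Task division: split the request into ordered sub-tasks.
    plan: List[SubTask] = [
        SubTask("researcher", request),
        SubTask("writer", ""),  # filled in with the researcher's output
    ]
    # Collaboration: each agent's output feeds the next agent in the plan.
    result = ""
    for task in plan:
        payload = task.payload or result
        result = AGENTS[task.role](payload)
    return result


print(coordinator("renewable energy"))
```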

## LLM Agentic Tool Mesh Agent Service

LLM Agentic Tool Mesh provides all the necessary tools to build a powerful agentic system by handling:

1. Tool repository
2. Reasoning engine
3. Multi-agent task force

Let's explore each of these components in detail.

### Tool repository

Agents in LLM Agentic Tool Mesh rely on tools to perform specialized tasks like information retrieval, document summarization, or data analysis. These tools extend the agents' capabilities, allowing them to efficiently complete complex operations. The **tool repository** service in LLM Agentic Tool Mesh simplifies and automates the storage, management, and retrieval of these tools.

Key Features:

* **Dynamic tool storage**: Add tools with associated metadata, including tool name, description, function, and usage parameters.
* **Tool retrieval**: Flexible search and retrieval functionality, enabling agents to access tools based on specific criteria.
* **Metadata management**: Store relevant metadata for each tool, aiding in decision-making for task assignments.

Example Usage:

```python
from athon.agents import ToolRepository

# Configuration for the Tool Repository
REPO_CONFIG = {
    'type': 'LangChainStructured'
}

# Initialize the Tool Repository
tool_repository = ToolRepository.create(REPO_CONFIG)
```

Adding a tool to the repository:

```python
from langchain.tools import tool

@tool
def text_summarizer(text: str) -> str:
    """A simple text summarizer function"""
    return text[:50]

metadata = {
    'category': 'NLP',
    'version': '1.0',
    'author': 'John Doe'
}

# Add the tool to the repository
add_result = tool_repository.add_tool(text_summarizer, metadata)

if add_result.status == "success":
    print("Tool added successfully.")
else:
    print(f"ERROR:\n{add_result.error_message}")
```

Retrieving tools based on metadata:

```python
# Retrieve tools with a metadata filter
metadata_filter = {'category': 'NLP'}
get_result = tool_repository.get_tools(metadata_filter)

if get_result.status == "success":
    print(f"RETRIEVED TOOLS:\n{get_result.tools}")
else:
    print(f"ERROR:\n{get_result.error_message}")
```
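
Once retrieved, the tools can also be exercised directly. The short sketch below assumes that `get_result.tools` returns the LangChain tool objects registered earlier (such as `text_summarizer`); it is an illustration of that assumption rather than a prescribed LLM Agentic Tool Mesh pattern.

```python
# Assumption: get_result.tools holds the LangChain tool objects added earlier.
if get_result.status == "success" and get_result.tools:
    summarizer = get_result.tools[0]
    # LangChain tools expose their name and description as metadata...
    print(f"Using tool: {summarizer.name} - {summarizer.description}")
    # ...and can be invoked with a dict of their declared arguments.
    summary = summarizer.invoke({"text": "LLM Agentic Tool Mesh lets agents share tools through a common repository."})
    print(f"SUMMARY:\n{summary}")
```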

### Reasoning engine

The **reasoning engine** orchestrates interactions between the LLM and various tools, enabling agents to seamlessly combine decision-making capabilities with tool-based actions. It extends the chat capabilities by managing the dynamic integration of tools with the LLM, allowing for real-time decision-making and task execution.

Key Features:

* **Tool orchestration**: Coordinates between the LLM and tools, deciding which tools to invoke based on context and user input.
* **Memory management**: Handles storage and retrieval of relevant memory for ongoing tasks or conversations.
* **Dynamic configuration**: Allows users to adjust the reasoning engine's behavior dynamically, tailoring interactions between LLMs and tools.

Architecture Overview:

At a high level, the reasoning engine sits between the LLM and the tools: it takes the user input, decides (based on context and the conversation memory) whether a tool should be invoked, executes the selected tool, and feeds the result back to the LLM to produce the final response.
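
As a rough sketch of how this looks in code, the reasoning engine can be created through the same factory pattern as the other services. The class name, `'type'` value, configuration keys, and `run()` method below are assumptions drawn from that pattern rather than from the project documentation, so treat them as placeholders and check the LLM Agentic Tool Mesh repository for the exact settings.

```python
# Minimal sketch only: the import, 'type' value, config keys, and run() method
# are assumed by analogy with ToolRepository and TaskForce shown in this post.
from athon.agents import ReasoningEngine  # assumed class name

ENGINE_CONFIG = {
    'type': 'LangChainAgentExecutor',      # hypothetical engine implementation name
    'llm': {
        'type': 'LangChainChatOpenAI',
        'api_key': 'your-api-key',
        'model_name': 'openai/gpt-4'
    },
    'tools': [text_summarizer],            # the LangChain tool defined earlier
    'memory': {'type': 'Buffer'}           # hypothetical memory settings
}

# Create the engine and let it decide whether to answer directly or call a tool
reasoning_engine = ReasoningEngine.create(ENGINE_CONFIG)
response = reasoning_engine.run("Summarize this paragraph about renewable energy adoption.")

if response.status == "success":
    print(f"COMPLETION:\n{response.completion}")
else:
    print(f"ERROR:\n{response.error_message}")
```
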
### Task force multi-agents

The **task force multi-agents** service enables the orchestration of complex tasks through a network of specialized agents. It allows users to define a structured workflow in which each agent is assigned a specific task, executed sequentially or in parallel.

Key Features:

* **LLM-driven planning**: Integrates with an LLM to plan task sequences, ensuring intelligent coordination.
* **Agent specialization**: Each agent specializes in a particular task, tailored through prompts defining its role, backstory, and goals.
* **Task-oriented workflow**: Supports both sequential and parallel task execution, configurable through prompts and configuration files.
* **Tool integration**: Agents utilize a suite of tools to complete their tasks, dynamically loaded and executed during task completion.

Example Usage:

```python
from athon.agents import TaskForce

# Configuration for the Task Force Multi-Agents
TASK_FORCE_CONFIG = {
    'type': 'CrewAIMultiAgent',
    'plan_type': 'Sequential',
    'tasks': [
        {
            'description': 'Perform research to gather information for a blog post on {request}.',
            'expected_output': 'A summary of key insights related to the topic.',
            'agent': {
                'role': 'Research Agent',
                'goal': 'Gather relevant information for the blog post',
                'backstory': 'Expert in researching and summarizing information',
                'tools': []
            }
        },
        # Additional tasks...
    ],
    'llm': {
        'type': 'LangChainChatOpenAI',
        'api_key': 'your-api-key',
        'model_name': 'openai/gpt-4',
        'base_url': 'your-base-url'
    },
    'verbose': True,
    'memory': False
}

# Initialize the Task Force
task_force = TaskForce.create(TASK_FORCE_CONFIG)
```

Running the task force with an input message:

```python
# Run the task force with an input message
input_message = "Write a blog post about the importance of renewable energy."
result = task_force.run(input_message)

# Handle the response
if result.status == "success":
    print(f"COMPLETION:\n{result.completion}")
else:
    print(f"ERROR:\n{result.error_message}")
```
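
Tying the pieces together, tools registered in the tool repository can in principle be handed to the agents defined above. The snippet below is only a sketch under that assumption: it presumes an agent's `'tools'` list accepts the tool objects returned by `ToolRepository.get_tools`, which may differ from the exact wiring used in the repository's own examples.

```python
# Assumption: an agent's 'tools' list accepts the tool objects retrieved from
# the ToolRepository (e.g. the text_summarizer registered earlier).
nlp_tools = tool_repository.get_tools({'category': 'NLP'}).tools or []

TASK_FORCE_CONFIG['tasks'][0]['agent']['tools'] = nlp_tools
task_force = TaskForce.create(TASK_FORCE_CONFIG)

result = task_force.run("Write a short post that reuses the summarizer tool.")
print(result.completion if result.status == "success" else result.error_message)
```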