
Commit e9c5f9c

Merge pull request #2833 from antfin/cms/antfin/hpe-dev-portal/blog/llm-agentic-tool-mesh-harnessing-agent-services-and-multi-agent-ai-for-next-level-gen-ai

Create Blog "llm-agentic-tool-mesh-harnessing-agent-services-and-multi-agent-ai-for-next-level-gen-ai"

2 parents ea76113 + a31d070

File tree: 4 files changed, +246 -0 lines changed

---
title: "LLM Agentic Tool Mesh: Harnessing agent services and multi-agent AI for
  next-level Gen AI"
date: 2024-12-12T17:08:46.212Z
author: Antonio Fin
authorimage: /img/afin_photo.jpg
disable: false
tags:
  - HPE
  - GenAI
  - LAT-Mesh
  - MultiAgents
---
<style>
li {
  font-size: 27px !important;
  line-height: 33px !important;
  max-width: none !important;
}
</style>

In my previous blog post, I explored the [Chat Service](https://developer.hpe.com/blog/ll-mesh-exploring-chat-service-and-factory-design-pattern/) of [LLM Agentic Tool Mesh](https://developer.hpe.com/blog/ll-mesh-democratizing-gen-ai-through-open-source-innovation-1/), an [open-source project](https://github.com/HewlettPackard/llmesh) aimed at democratizing Generative AI (Gen AI).

Today, I'll delve into another core feature: the **Agent Service**. I'll discuss what agents are, explain the related LLM Agentic Tool Mesh services, and showcase examples from its repository.

## Understanding LLM agents

In the context of Large Language Models (LLMs), an agent is an autonomous entity capable of:

* **Perceiving its environment**: Agents can gather and interpret information from their surroundings.
* **Making decisions**: Based on the perceived information, agents decide on the best course of action.
* **Acting on decisions**: Agents execute actions to achieve specific objectives.

Depending on the complexity of the task, these agents can operate independently or interact with one another to optimize their collective performance. Multi-agent AI coordinates multiple agents, each specialized in a specific domain or function, so that they collaborate toward a common goal. These agents handle:

* **Task division**: Dividing complex tasks into manageable parts.
* **Specialization**: Each agent specializes in a particular function, such as information retrieval or decision-making.
* **Collaboration**: Agents communicate and share information for effective and efficient task execution.

![](/img/multiagents.png)

Managing such agents typically requires advanced coding and deep knowledge of agent-based systems. However, LLM Agentic Tool Mesh simplifies this process by providing high-level abstractions through intuitive prompts and configuration files. Users can focus on defining tasks and desired outcomes while LLM Agentic Tool Mesh handles the coordination, task distribution, and result aggregation behind the scenes.

## LLM Agentic Tool Mesh Agent Service

LLM Agentic Tool Mesh provides all the necessary tools to build a powerful agentic system by handling:

1. The **tool repository**
2. The **reasoning engine**
3. A **multi-agent task force**

### Tool repository

Agents in LLM Agentic Tool Mesh rely on tools to perform specialized tasks like information retrieval, document summarization, or data analysis. These tools extend the agents' capabilities, allowing them to efficiently complete complex operations. The **tool repository** service in LLM Agentic Tool Mesh simplifies and automates the storage, management, and retrieval of these tools.

Key features:

* **Dynamic tool storage**: Add tools with associated metadata, including tool name, description, function, and usage parameters.
* **Tool retrieval**: Flexible search and retrieval functionality, enabling agents to access tools based on specific criteria.
* **Metadata management**: Store relevant metadata for each tool, aiding in decision-making for task assignments.

Example usage:

```python
from athon.agents import ToolRepository
from langchain.tools import tool

# Configuration for the Tool Repository
REPO_CONFIG = {
    'type': 'LangChainStructured'
}

# Initialize the Tool Repository
tool_repository = ToolRepository.create(REPO_CONFIG)

# Define a tool to add to the repository
@tool
def text_summarizer(text: str) -> str:
    """A simple text summarizer function"""
    return text[:50]

metadata = {
    'category': 'NLP',
    'version': '1.0',
    'author': 'John Doe'
}

# Add the tool to the repository
add_result = tool_repository.add_tool(text_summarizer, metadata)

if add_result.status == "success":
    print("Tool added successfully.")
else:
    print(f"ERROR:\n{add_result.error_message}")

# Retrieve tools with a metadata filter
metadata_filter = {'category': 'NLP'}
get_result = tool_repository.get_tools(metadata_filter)

if get_result.status == "success":
    print(f"RETRIEVED TOOLS:\n{get_result.tools}")
else:
    print(f"ERROR:\n{get_result.error_message}")
```

### Reasoning engine

The **reasoning engine** orchestrates interactions between the LLM and various tools, enabling agents to seamlessly combine decision-making capabilities with tool-based actions. It extends the chat capabilities by managing the dynamic integration of tools with the LLM, allowing for real-time decision-making and task execution.

Key features:

* **Tool orchestration**: Coordinates between the LLM and tools, deciding which tools to invoke based on context and user input.
* **Memory management**: Handles storage and retrieval of relevant memory for ongoing tasks or conversations.
* **Dynamic configuration**: Allows users to adjust the reasoning engine's behavior dynamically, tailoring interactions between LLMs and tools.

![](/img/reasoning.png)
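
The post does not include a code example for this service, so here is a minimal, hedged sketch of what usage might look like, assuming the reasoning engine follows the same factory pattern as the other athon services (a `create()` call on a configuration dictionary and a result object with `status`, `completion`, and `error_message`). The `ReasoningEngine` class name, the `'LangChainAgentExecutor'` type string, the configuration keys, and the tool names are illustrative assumptions; check the LLM Agentic Tool Mesh repository for the actual interface.

```python
# Illustrative sketch only: the class name, the 'type' value, the configuration
# keys, and the tool names are assumptions modeled on the other athon examples,
# not the verified LLM Agentic Tool Mesh API.
from athon.agents import ReasoningEngine

ENGINE_CONFIG = {
    'type': 'LangChainAgentExecutor',  # assumed engine implementation name
    'system_prompt': 'You are an assistant that picks the right tool for each request.',
    'llm': {
        'type': 'LangChainChatOpenAI',
        'api_key': 'your-api-key-here',
        'model_name': 'gpt-4o-mini'
    },
    'memory': {
        'type': 'LangChainBuffer',
        'memory_key': 'chat_history',
        'return_messages': True
    },
    # Hypothetical: tools resolved by name from the tool repository
    'tools': ['Temperature Finder', 'Temperature Analyzer'],
    'verbose': True
}

# Initialize the engine and let it orchestrate the LLM and the tools
reasoning_engine = ReasoningEngine.create(ENGINE_CONFIG)

# Run the engine with a user message; it decides which tools to invoke
result = reasoning_engine.run("What is the temperature trend in Milan this week?")

if result.status == "success":
    print(f"COMPLETION:\n{result.completion}")
else:
    print(f"ERROR:\n{result.error_message}")
```

The configuration mirrors the style of the other services: the engine type selects the underlying implementation, while the LLM, memory, and tool entries describe what the engine orchestrates at runtime.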

### Task force

The **multi-agent task force** service enables the orchestration of complex tasks through a network of specialized agents. It allows users to define a structured workflow where each agent is assigned a specific task, executed in sequence or in parallel.

Key features:

* **LLM-driven planning**: Integrates with an LLM to plan task sequences, ensuring intelligent coordination.
* **Agent specialization**: Each agent specializes in a particular task, tailored through prompts defining its role, backstory, and goals.
* **Task-oriented workflow**: Supports both sequential and parallel task execution, configurable through prompts and configuration files.
* **Tool integration**: Agents utilize a suite of tools to complete their tasks, dynamically loaded and executed during task completion.

Example usage:

```python
from athon.agents import TaskForce
from custom_tools import DataFetcher, SalesSummarizer, PresentationBuilder

# Example configuration for the Task Force Multi-Agents
TASK_FORCE_CONFIG = {
    'type': 'CrewAIMultiAgent',
    'plan_type': 'Sequential',
    'tasks': [
        {
            'description': 'Analyze the recent sales data.',
            'expected_output': 'A summary report of sales trends.',
            'agent': {
                'role': 'Data Analyst',
                'goal': 'Summarize sales data',
                'backstory': 'Experienced in sales data analysis',
                'tools': ['DataFetcher', 'SalesSummarizer']
            }
        },
        {
            'description': 'Prepare a presentation based on the report.',
            'expected_output': 'A presentation deck summarizing the sales report.',
            'agent': {
                'role': 'Presentation Specialist',
                'goal': 'Create a presentation',
                'backstory': 'Expert in creating engaging presentations',
                'tools': ['PresentationBuilder']
            }
        }
    ],
    'llm': {
        'type': 'LangChainChatOpenAI',
        'api_key': 'your-api-key-here',
        'model_name': 'gpt-4o-mini'
    },
    'verbose': True,
    'memory': False
}

# Initialize the Task Force with the provided configuration
task_force = TaskForce.create(TASK_FORCE_CONFIG)

# Run the task force with an input message
input_message = "Generate a sales analysis report and prepare a presentation."
result = task_force.run(input_message)

# Handle the response
if result.status == "success":
    print(f"COMPLETION:\n{result.completion}")
else:
    print(f"ERROR:\n{result.error_message}")
```

## LLM Agentic Tool Mesh in action: Examples from the repository

The LLM Agentic Tool Mesh GitHub repository includes several examples demonstrating the versatility and capabilities of the agent services.

### Chatbot application

This chatbot (`examples/app_chatbot`) is capable of reasoning and invoking the appropriate LLM tools to perform specific actions. You can configure the chatbot using files that define LLM Agentic Tool Mesh platform services, project settings, toolkits, and memory configurations. The web app orchestrates both local and remote LLM tools, allowing them to define their own HTML interfaces and supporting text, images, and code presentations.

Configuration example:

```yaml
projects:
  - name: "Personal Chat"
    memory:
      type: LangChainBuffer
      memory_key: chat_history
      return_messages: true
    tools:
      - "https://127.0.0.1:5002/"  # Basic Copywriter
  - name: "Project 5G Network"
    memory:
      type: LangChainRemote
      memory_key: chat_history
      return_messages: true
      base_url: "https://127.0.0.1:5010/"
      timeout: 100
      cert_verify: false
    tools:
      - "https://127.0.0.1:5005/"  # OpenAPI Manager
      - "https://127.0.0.1:5006/"  # IMS Expert
  - name: "Project Meteo"
    memory:
      type: LangChainBuffer
      memory_key: chat_history
      return_messages: true
    tools:
      - "https://127.0.0.1:5003/"  # Temperature Finder
      - "https://127.0.0.1:5004/"  # Temperature Analyzer
```

### OpenAPI manager

The OpenAPI manager (`examples/tool_agents`) is a multi-agent tool that reads OpenAPI documentation and provides users with relevant information based on their queries. It uses the Task Force service to answer questions related to 5G APIs.

Capabilities:

* **ListOpenApis**: Lists all OpenAPI specifications present in the system.
* **SelectOpenApi**: Selects a specific OpenAPI specification.
* **GetOpenApiVersion**: Returns the OpenAPI version of the selected specification.
* **GetInfo**: Returns the information dictionary of the selected specification.
* **GetMethodsByTag**: Lists all methods of the selected specification for a specific tag.
* **GetMethodById**: Returns detailed information about a method selected by ID.
* **GetRequestBody**: Returns the request body schema of the selected specification.
* **GetResponse**: Returns the response schema of the selected specification.

![](/img/mesh.png)
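
To make the wiring concrete, here is a hedged sketch of how a Task Force agent might reference these capabilities as tools, reusing the `TaskForce` configuration pattern shown earlier. The role, goal, backstory, and query are illustrative assumptions and not the exact configuration used in `examples/tool_agents`.

```python
# Illustrative sketch only: the agent definition and prompts are assumptions;
# see examples/tool_agents in the repository for the real configuration.
from athon.agents import TaskForce

OPENAPI_TASK_FORCE_CONFIG = {
    'type': 'CrewAIMultiAgent',
    'plan_type': 'Sequential',
    'tasks': [
        {
            'description': 'Answer the user question about the selected 5G OpenAPI specification.',
            'expected_output': 'A concise answer grounded in the OpenAPI documentation.',
            'agent': {
                'role': 'OpenAPI Expert',
                'goal': 'Navigate OpenAPI specifications and extract the requested details',
                'backstory': 'Specialist in 5G network APIs and OpenAPI documents',
                'tools': [
                    'ListOpenApis', 'SelectOpenApi', 'GetOpenApiVersion', 'GetInfo',
                    'GetMethodsByTag', 'GetMethodById', 'GetRequestBody', 'GetResponse'
                ]
            }
        }
    ],
    'llm': {
        'type': 'LangChainChatOpenAI',
        'api_key': 'your-api-key-here',
        'model_name': 'gpt-4o-mini'
    },
    'verbose': True,
    'memory': False
}

task_force = TaskForce.create(OPENAPI_TASK_FORCE_CONFIG)
result = task_force.run("Which methods does the selected specification expose for session management?")

if result.status == "success":
    print(f"COMPLETION:\n{result.completion}")
else:
    print(f"ERROR:\n{result.error_message}")
```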
241+
242+
## Conclusion
243+
244+
The **LLM Agentic Tool Mesh Agent Service** exemplifies how advanced design principles and innovative prompt engineering simplify and enhance the adoption of Gen AI. By abstracting complexities and providing versatile examples, LLM Agentic Tool Mesh enables developers and users alike to unlock the transformative potential of Gen AI in various domains.
245+
246+
Stay tuned for our next post, where we'll explore another key service of LLM Agentic Tool Mesh and continue our journey to democratize Gen AI!

static/img/mesh.png (256 KB)
static/img/multiagents.png (189 KB)
static/img/reasoning.png (168 KB)
