Commit c7918ff

Merge pull request #1619 from jc2409/main
AI Agent on CPU - Learning Path
2 parents b38e902 + 9c118fb commit c7918ff

File tree

9 files changed: +471 −0 lines changed

Lines changed: 51 additions & 0 deletions
@@ -0,0 +1,51 @@
---
title: How to run an AI Agent application on CPU with llama.cpp and llama-cpp-agent using KleidiAI

minutes_to_complete: 45

who_is_this_for: This Learning Path is for software developers, ML engineers, and anyone looking to run AI Agent applications locally.

learning_objectives:
    - Set up llama-cpp-python optimised for Arm servers.
    - Learn how to optimise LLM models to run locally.
    - Learn how to create custom tools for ML models.
    - Learn how to use AI Agents for applications.

prerequisites:
    - An AWS Graviton instance (m7g.xlarge).
    - A basic understanding of Python and prompt engineering.
    - An understanding of LLM fundamentals.

author: Andrew Choi

### Tags
skilllevels: Introductory
subjects: ML
armips:
    - Neoverse
tools_software_languages:
    - Python
    - AWS Graviton
operatingsystems:
    - Linux

further_reading:
    - resource:
        title: llama.cpp
        link: https://github.com/ggml-org/llama.cpp
        type: documentation
    - resource:
        title: llama-cpp-agent
        link: https://llama-cpp-agent.readthedocs.io/en/latest/
        type: documentation

### FIXED, DO NOT MODIFY
# ================================================================================
weight: 1                       # _index.md always has weight of 1 to order correctly
layout: "learningpathall"       # All files under learning paths have this same wrapper
learning_path_main_page: "yes"  # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
---
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
---
# ================================================================================
# FIXED, DO NOT MODIFY THIS FILE
# ================================================================================
weight: 21                  # Set to always be larger than the content in this path to be at the end of the navigation.
title: "Next Steps"         # Always the same, html page title.
layout: "learningpathall"   # All files under learning paths have this same wrapper for Hugo processing.
---
Lines changed: 90 additions & 0 deletions
@@ -0,0 +1,90 @@
---
title: AI Agent Overview and Test Results
weight: 5

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## How the LLM Decides Which Function to Call

Below is a brief explanation of how the LLM can be configured and used to execute agent tasks.

- This code creates an instance of the quantized `llama3.1` model for more efficient inference on Arm-based systems.
```python
llama_model = Llama(
    model_path="./models/llama3.1-8b-instruct.Q4_0_arm.gguf",
    n_batch=2048,
    n_ctx=10000,
    n_threads=64,
    n_threads_batch=64,
)
```

- Here, you define a provider that leverages the llama.cpp Python bindings.
```python
provider = LlamaCppPythonProvider(llama_model)
```

- The function’s docstring guides the LLM on when and how to invoke it.
```python
def function(a, b):
    """
    Description of when the function should be called

    Args:
        a: description of the argument a
        b: description of the argument b

    Returns:
        Description of the function's output
    """
    # ... body of your function goes here
```
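For a concrete instance of this pattern, here is a documented tool in the style of the `get_current_time` function used by the script in this Learning Path — the docstring tells the LLM when to call the function and what it returns:

```python
import datetime


def get_current_time():
    """
    Returns the current time in H:MM AM/PM format.
    """
    now = datetime.datetime.now()  # Get the current time
    return now.strftime("%I:%M %p")  # Format the time as H:MM AM/PM
```

Because the description and return format live in the docstring, the model can match a query such as “What is the current time?” to this tool without any extra configuration.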

- `from_functions` creates an instance of `LlmStructuredOutputSettings` from a list of callable Python functions. The LLM can then decide if and when to use these functions based on user queries.
```python
output_settings = LlmStructuredOutputSettings.from_functions([function1, function2])
```

- With this, the user’s prompt is collected and processed through `LlamaCppAgent`. The agent decides whether to call any of the defined functions based on the request.
```python
user = input("Please write your prompt here: ")

llama_cpp_agent = LlamaCppAgent(
    provider,
    debug_output=True,
    system_prompt="You're a helpful assistant to answer User query.",
    predefined_messages_formatter_type=MessagesFormatterType.LLAMA_3,
)

result = llama_cpp_agent.get_chat_response(
    user, structured_output_settings=output_settings, llm_sampling_settings=settings
)
```

## Example

- If the user asks, “What is the current time?”, the AI Agent will choose to call the `get_current_time()` function, returning a result in **H:MM AM/PM** format.

![Prompt asking for the current time](test_prompt.png)

- As part of the prompt, a list of executable functions is sent to the LLM, allowing the agent to select the appropriate function:

![Display of available functions in the terminal](test_functions.png)

- After the user prompt, the AI Agent decides to invoke the function and returns the result:

![get_current_time function execution](test_output.png)

## Next Steps

- You can ask different questions to trigger and execute other functions.
- Extend your AI agent by defining custom functions so it can handle specific tasks. You can also re-enable the `TavilySearchResults` function to unlock search capabilities within your environment.
Lines changed: 185 additions & 0 deletions
@@ -0,0 +1,185 @@
---
title: Python Script to Execute the AI Agent Application
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Python Script for AI Agent Application
Once you have set up the environment, create a Python script which will execute the AI Agent application:

### Option A
- Clone the repository:
```bash
cd ~
git clone https://github.com/jc2409/ai-agent.git
```

### Option B
- Create a Python file:
```bash
cd ~
touch agent.py
```

- Copy and paste the following code:
```python
from enum import Enum
from typing import Union

from llama_cpp import Llama
from llama_cpp_agent import LlamaCppAgent, MessagesFormatterType
from llama_cpp_agent.llm_output_settings import LlmStructuredOutputSettings
from llama_cpp_agent.providers import LlamaCppPythonProvider

# import os
# from dotenv import load_dotenv
# from langchain_community.tools import TavilySearchResults  # Uncomment this to enable the search function


# load_dotenv()

# os.environ.get("TAVILY_API_KEY")

llama_model = Llama(
    model_path="./models/llama3.1-8b-instruct.Q4_0_arm.gguf",  # make sure you use the correct path for the quantized model
    n_batch=2048,
    n_ctx=10000,
    n_threads=64,
    n_threads_batch=64,
)

provider = LlamaCppPythonProvider(llama_model)


def open_webpage():
    """
    Open the Learning Path website when the user asks the agent about Arm Learning Paths.
    """
    import webbrowser

    url = "https://learn.arm.com/"
    webbrowser.open(url, new=0, autoraise=True)


def get_current_time():
    """
    Returns the current time in H:MM AM/PM format.
    """
    import datetime  # Import datetime module to get the current time

    now = datetime.datetime.now()  # Get the current time
    return now.strftime("%I:%M %p")  # Format the time as H:MM AM/PM


class MathOperation(Enum):
    ADD = "add"
    SUBTRACT = "subtract"
    MULTIPLY = "multiply"
    DIVIDE = "divide"


def calculator(
    number_one: Union[int, float],
    number_two: Union[int, float],
    operation: MathOperation,
) -> Union[int, float]:
    """
    Perform a math operation on two numbers.

    Args:
        number_one: First number
        number_two: Second number
        operation: Math operation to perform

    Returns:
        Result of the mathematical operation

    Raises:
        ValueError: If the operation is not recognized
    """
    if operation == MathOperation.ADD:
        return number_one + number_two
    elif operation == MathOperation.SUBTRACT:
        return number_one - number_two
    elif operation == MathOperation.MULTIPLY:
        return number_one * number_two
    elif operation == MathOperation.DIVIDE:
        return number_one / number_two
    else:
        raise ValueError("Unknown operation.")


# Uncomment the following function to enable web search functionality (you will need to install langchain-community)
# def search_from_the_web(content: str):
#     """
#     Search useful information from the web to answer the user's question.
#
#     Args:
#         content: Query used to retrieve data from the web to answer the user's question
#     """
#     tool = TavilySearchResults(
#         max_results=1,
#         search_depth="basic",
#     )
#     result = tool.invoke({"query": content})
#     return result

settings = provider.get_provider_default_settings()

settings.temperature = 0.65
# settings.top_p = 0.85
# settings.top_k = 60
# settings.tfs_z = 0.95
settings.max_tokens = 4096

output_settings = LlmStructuredOutputSettings.from_functions(
    [get_current_time, open_webpage, calculator], allow_parallel_function_calling=True
)


def send_message_to_user_callback(message: str):
    print(message)


def run_web_search_agent():
    user = input("Please write your prompt here: ")
    if user == "exit":
        return

    llama_cpp_agent = LlamaCppAgent(
        provider,
        debug_output=True,
        system_prompt="You're a helpful assistant to answer User query.",
        predefined_messages_formatter_type=MessagesFormatterType.LLAMA_3,
    )

    result = llama_cpp_agent.get_chat_response(
        user, structured_output_settings=output_settings, llm_sampling_settings=settings
    )

    print("----------------------------------------------------------------")
    print("Response from AI Agent:")
    print(result)
    print("----------------------------------------------------------------")


if __name__ == '__main__':
    run_web_search_agent()
```

## Run the Python Script

You are now ready to test the AI Agent. Use the following command in a terminal to start the application:
```bash
python3 agent.py
```

{{% notice Note %}}
If the model takes too long to respond, terminate the application and try again.
{{% /notice %}}
Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
---
title: Introduction to AI Agents and Agent Use Cases
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Defining AI Agents

An AI Agent is best understood as an integrated system that goes beyond standard text generation by equipping Large Language Models (LLMs) with tools and domain knowledge. Here’s a closer look at the underlying elements:

- **System**: Each AI Agent functions as an interconnected ecosystem of components.
  - **Environment**: The domain in which the AI Agent operates. For instance, in a system that books travel itineraries, the relevant environment might include airline reservation systems and hotel booking tools.
  - **Sensors**: Methods the AI Agent uses to observe its surroundings. In the travel scenario, these could be APIs that inform the agent about seat availability on flights or room occupancy in hotels.
  - **Actuators**: Ways the AI Agent exerts influence within that environment. In the example of a travel agent, placing a booking or modifying an existing reservation serves as the agent’s “actuators.”

- **Large Language Models**: While the notion of agents is not new, LLMs bring powerful language comprehension and data-processing capabilities to agent setups.
  - **Performing Actions**: Rather than just producing text, LLMs within an agent context interpret user instructions and interact with tools to achieve specific objectives.
  - **Tools**: The agent’s available toolkit depends on the software environment and developer-defined boundaries. In the travel agent example, these tools might be limited to flight and hotel reservation APIs.
  - **Knowledge**: Beyond immediate data sources, the agent can fetch additional details—perhaps from databases or web services—to enhance decision-making.
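The environment/sensor/actuator split can be sketched as a minimal sense-decide-act loop. This is a hypothetical illustration only — `TravelEnvironment` and `travel_agent` are invented names, not part of llama.cpp or llama-cpp-agent:

```python
# Hypothetical sketch of an agent's sense-decide-act cycle in the travel scenario.
class TravelEnvironment:
    """The environment: seat inventory the agent can observe and change."""

    def __init__(self):
        self.seats = {"LHR->SFO": 2}

    def sense_seats(self, route):
        # Sensor: observe the environment without changing it.
        return self.seats.get(route, 0)

    def book_seat(self, route):
        # Actuator: act on the environment by reserving a seat.
        if self.seats.get(route, 0) > 0:
            self.seats[route] -= 1
            return "booked"
        return "sold out"


def travel_agent(env, route):
    """Decide using sensor input, then act through an actuator."""
    if env.sense_seats(route) > 0:
        return env.book_seat(route)
    return "no availability"
```

With two seats in inventory, the agent books twice; a third request finds the sensors reporting no availability, so no actuator fires.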
---

## Varieties of AI Agents

AI Agents come in multiple forms. The table below provides an overview of some agent types and examples illustrating their roles in a travel-booking system:

| **Agent Category** | **Key Characteristics** | **Example in Travel** |
|---|---|---|
| **Simple Reflex Agents** | Act directly based on set rules or conditions. | Filters incoming messages and forwards travel-related emails to a service center. |
| **Model-Based Agents** | Maintain an internal representation of the world and update it based on new inputs. | Monitors flight prices and flags dramatic fluctuations, guided by historical data. |
| **Goal-Based Agents** | Execute actions with the aim of meeting designated objectives. | Figures out the necessary route (flights, transfers) to get from your current location to your target destination. |
| **Utility-Based Agents** | Use scoring or numerical metrics to compare and select actions that fulfill a goal. | Balances cost versus convenience when determining which flights or hotels to book. |
| **Learning Agents** | Adapt over time by integrating lessons from previous feedback or experiences. | Adjusts future booking suggestions based on traveler satisfaction surveys. |
| **Hierarchical Agents** | Split tasks into sub-tasks and delegate smaller pieces of work to subordinate agents. | Cancels a trip by breaking down the process into individual steps, such as canceling a flight, a hotel, and a car rental. |
| **Multi-Agent Systems** | Involve multiple agents that may cooperate or compete to complete tasks. | Cooperative: different agents each manage flights, accommodations, and excursions. Competitive: several agents vie for limited rooms. |

---
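To make the utility-based row concrete, here is a small hypothetical scoring function — the weights and flight data are invented for illustration, not taken from any real booking API:

```python
# Hypothetical utility-based agent choice: weigh cost against convenience.
def utility(flight, price_weight=1.0, stops_weight=50.0):
    """Higher utility for cheaper, more direct flights."""
    return -(price_weight * flight["price"] + stops_weight * flight["stops"])


def choose_flight(flights):
    # Select the candidate with the highest utility score.
    return max(flights, key=utility)


flights = [
    {"id": "A", "price": 300, "stops": 2},  # cheap but two stops
    {"id": "B", "price": 350, "stops": 0},  # pricier but direct
]
best = choose_flight(flights)  # flight "B" wins: 350 < 300 + 2 * 50
```

The agent's preference is entirely captured by the weights: raising `stops_weight` favors direct flights, lowering it favors cheap ones.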

## Ideal Applications for AI Agents

While the travel scenario illustrates different categories of AI Agents, there are broader circumstances where agents truly excel:

- **Open-Ended Challenges**: Complex tasks with no predetermined procedure, requiring the agent to determine the necessary steps.
- **Procedural or Multi-Step Tasks**: Endeavors requiring numerous phases or tool integrations, allowing the agent to switch between resources.
- **Continual Improvement**: Contexts where feedback loops enable the agent to refine its behaviors for better outcomes in the future.
