jupyter_ai_personas/pocketagent_persona/README.md (new file, +124 lines)
@@ -0,0 +1,124 @@
# Using PocketFlow in Jupyter AI Personas

This `pocketagent_persona` shows how any application in the PocketFlow repository ([GitHub](https://github.com/The-Pocket/PocketFlow)) can be adapted to work within `jupyter-ai-personas`. PocketFlow provides many examples in its [cookbook](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook).

PocketFlow has four main classes:

1. `Node`: the atom of the system, which performs a single task in three steps by calling these methods in sequence: `prep`, `exec`, `post`.
2. `BatchNode`: batches multiple tasks; its `prep` method returns an iterable and `exec` runs once per item (see the sketch after this list).
3. `Flow`: the master recipe, a compute graph connecting nodes.
4. `AsyncParallelBatchFlow`: for efficient parallel processing, so the graph need not be traversed sequentially when that is not strictly necessary.
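
For instance, here is a minimal sketch of the `BatchNode` contract. The `UppercaseChunks` node and the `chunks` key are hypothetical; it assumes PocketFlow's convention that `exec` runs once per item yielded by `prep`, and that `post` then receives the list of results:

```python
from pocketflow import BatchNode

class UppercaseChunks(BatchNode):
    def prep(self, shared):
        # Returning an iterable is what makes this a batch:
        # exec below runs once per item
        return shared["chunks"]

    def exec(self, chunk):
        # Stand-in for a per-chunk LLM call
        return chunk.upper()

    def post(self, shared, prep_res, exec_res_list):
        # exec results arrive as a list, one entry per item from prep
        shared["processed"] = exec_res_list
        return "default"
```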

There are three core abstractions:

1. Node: a single focused worker, described above.
2. Shared Store: a _global_ dict containing as many state variables as needed. It is always called `shared`. (It is unclear whether race conditions make this shared state unreliable when `AsyncParallelBatchFlow` is applied.)
3. Flow: connects the nodes into a graph, which can be cyclic, not just a DAG.

In a `Node`, the three methods do the following:

- `prep` takes the input and preprocesses it into a message for the `exec` method.
- `exec` calls an LLM to respond to the message.
- `post` prints the response if needed and emits an action based on it, such as "continue" to continue the conversation.

That's it!
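
To make that lifecycle concrete, here is a minimal runnable sketch of a single-node flow. The `Greet` node and the `name` key are made up for illustration, and a plain string stands in for the LLM call:

```python
from pocketflow import Node, Flow

class Greet(Node):
    def prep(self, shared):
        # Read the state this node needs from the shared store
        return shared["name"]

    def exec(self, name):
        # Normally an LLM call; a plain string stands in here
        return f"Hello, {name}!"

    def post(self, shared, prep_res, exec_res):
        # Write the result back and emit an action string
        shared["greeting"] = exec_res
        return "done"

shared = {"name": "Jupyter"}
Flow(start=Greet()).run(shared)
print(shared["greeting"])  # Hello, Jupyter!
```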

PocketFlow applications are usually organized into four modules, which can be combined into a single persona in `jupyter-ai`. These four modules are:

1. `utils.py` -- various functions needed for the app, such as web search. Think of it as containing tools.
2. `nodes.py` -- one or more node classes, each inheriting from the base `Node` class. These use the functions in `utils.py`.
3. `flow.py` -- initializes the nodes from `nodes.py` and wires up the agent flow.
4. `main.py` -- sets up inputs and runs the flow from `flow.py`.

You will usually find these files in the PocketFlow cookbook examples.

The starting template for injecting these modules into a persona is the following initial version of `persona.py`, shown here with three placeholder nodes:

```python
from jupyter_ai.personas.base_persona import BasePersona, PersonaDefaults
from jupyterlab_chat.models import Message
from pocketflow import Node, Flow
from litellm import completion

#### ADD ALL FUNCTIONS FROM utils.py HERE ####

#### ADD ALL NODE CLASSES FROM nodes.py HERE ####

#### PERSONA ####
class PocketAgentPersona(BasePersona, Node1, Node2, Node3):
    def __init__(self, *args, **kwargs):
        BasePersona.__init__(self, *args, **kwargs)
        Node1.__init__(self)
        Node2.__init__(self)
        Node3.__init__(self)

    @property
    def defaults(self):
        return PersonaDefaults(
            name="PocketAgentPersona",
            avatar_path="/api/ai/static/jupyternaut.svg",
            description="The pocketflow agent.",
            system_prompt="...",
        )

    ## UTILS (within the Persona class)
    def call_llm(self, prompt):
        # If the prompt is a string, convert it to the proper message format
        if isinstance(prompt, str):
            prompt = [{"role": "user", "content": prompt}]
        response = completion(
            model=self.config_manager.chat_model,
            messages=prompt,
            stream=False,
        )
        return response.choices[0].message.content

    async def process_message(self, message: Message):
        ## FLOW: USE THE CODE FROM flow.py HERE
        n1 = Node1()
        n2 = Node2()
        n3 = Node3()
        n1.persona = self
        n2.persona = self
        n3.persona = self
        # Connect the nodes; the action string is in quotes
        n1 - "n2" >> n2
        n1 - "n3" >> n3
        n2 - "n1" >> n1
        ## FLOW: Build the flow starting from the first node
        agent_flow = Flow(start=n1)

        ## MAIN: USE THE CODE FROM main.py HERE
        question = message.body.split(" ", 1)[1]  # strip the @mention prefix
        shared = {"question": question}
        agent_flow.run(shared)  # the flow runs off the shared store
        self.send_message(shared.get("answer", "No answer found"))
```

Notice that the code from `utils.py` and `nodes.py` goes above the persona class, while the code from `flow.py` and `main.py` goes inside the persona class, in the `process_message` method. The following modifications to the original PocketFlow code are required to use the LLMs designated in AI Settings in `jupyter-ai`:

- The `call_llm` function in `utils.py` is deleted and replaced by the one used by `jupyter-ai` -- see the `call_llm` method included in the `PocketAgentPersona` class in the code above.
- The `PocketAgentPersona` class inherits from all the node classes and initializes them. Note the required statements such as `Node1.__init__(self)` and `n1.persona = self`.
- The `call_llm` calls in the `exec` method of each node are replaced by `self.persona.call_llm` (see the sketch below).
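
Inside a node's `exec` method, that substitution looks roughly like this (`prompt` stands in for whatever message the node built in `prep`):

```python
def exec(self, prompt):
    # Cookbook version, using the module-level helper from utils.py:
    #     response = call_llm(prompt)
    # Persona version, routing through the back-reference set by
    # `n1.persona = self`, so the model chosen in AI Settings is used:
    response = self.persona.call_llm(prompt)
    return response
```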

See the `persona.py` file in the `pocketagent_persona` folder. This persona was built from the [code](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-agent) in PocketFlow's cookbook for a simple agent that answers questions, using a search tool when the LLM does not already have the answer memorized.

You will also need to add the persona to `pyproject.toml` as follows:

```toml
pocketagent = [
    "pocketflow",
    "ddgs",
    "requests",
]

all = ["jupyter-ai-personas[<other-personas>, pocketagent]"]

[project.entry-points."jupyter_ai.personas"]
pocketagent_persona = "jupyter_ai_personas.pocketagent_persona.persona:PocketAgentPersona"
```
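
With this dependency group in place, the persona's extras can then be installed from a local checkout with something like `pip install -e ".[pocketagent]"` (the exact command depends on your environment).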

Running this persona gives the following output:
![pocket-chat-1](pocket1.png)
with the agentic iterations logged to the console:
![pocket-chat-2](pocket2.png)
jupyter_ai_personas/pocketagent_persona/persona.py (new file, +237 lines)
@@ -0,0 +1,237 @@
from jupyter_ai.personas.base_persona import BasePersona, PersonaDefaults
from jupyterlab_chat.models import Message
from pocketflow import Node, Flow
from litellm import completion
import yaml
from ddgs import DDGS
import requests

## UTILS (outside the Persona class)
def search_web_duckduckgo(query):
    results = DDGS().text(query, max_results=5)
    # Convert results to a string
    results_str = "\n\n".join([f"Title: {r['title']}\nURL: {r['href']}\nSnippet: {r['body']}" for r in results])
    return results_str

def search_web_brave(query):
    url = f"https://api.search.brave.com/res/v1/web/search?q={query}"
    api_key = "your brave search api key"

    headers = {
        "accept": "application/json",
        "Accept-Encoding": "gzip",
        "x-subscription-token": api_key
    }

    response = requests.get(url, headers=headers)

    if response.status_code == 200:
        data = response.json()
        results = data['web']['results']
        results_str = "\n\n".join([f"Title: {r['title']}\nURL: {r['url']}\nDescription: {r['description']}" for r in results])
    else:
        # Report the error instead of leaving results_str unbound
        results_str = f"Request failed with status code: {response.status_code}"
    return results_str

## NODES (outside the Persona class)
class DecideAction(Node):
    def prep(self, shared):
        """Prepare the context and question for the decision-making process."""
        # Get the current context (default to "No previous search" if none exists)
        context = shared.get("context", "No previous search")
        # Get the question from the shared store
        question = shared["question"]
        # Return both for the exec step
        return question, context

    def exec(self, inputs):
        """Call the LLM to decide whether to search or answer."""
        question, context = inputs

        print("🤔 Agent deciding what to do next...")

        # Create a prompt to help the LLM decide what to do next, with proper YAML formatting
        prompt = f"""
### CONTEXT
You are a research assistant that can search the web.
Question: {question}
Previous Research: {context}

### ACTION SPACE
[1] search
Description: Look up more information on the web
Parameters:
- query (str): What to search for

[2] answer
Description: Answer the question with current knowledge
Parameters:
- answer (str): Final answer to the question

## NEXT ACTION
Decide the next action based on the context and available actions.
Return your response in this format:

```yaml
thinking: |
    <your step-by-step reasoning process>
action: search OR answer
reason: <why you chose this action>
answer: <if action is answer>
search_query: <specific search query if action is search>
```
IMPORTANT: Make sure to:
1. Use proper indentation (4 spaces) for all multi-line fields
2. Use the | character for multi-line text fields
3. Keep single-line fields without the | character
"""

        # Call the LLM to make a decision
        response = self.persona.call_llm(prompt)

        # Parse the YAML block out of the response
        yaml_str = response.split("```yaml")[1].split("```")[0].strip()
        decision = yaml.safe_load(yaml_str)

        return decision

    def post(self, shared, prep_res, exec_res):
        """Save the decision and determine the next step in the flow."""
        # If the LLM decided to search, save the search query
        if exec_res["action"] == "search":
            shared["search_query"] = exec_res["search_query"]
            print(f"🔍 Agent decided to search for: {exec_res['search_query']}")
        else:
            # Save the answer as context if the LLM answers without searching
            shared["context"] = exec_res["answer"]
            print("💡 Agent decided to answer the question")

        # Return the action to determine the next node in the flow
        return exec_res["action"]

class SearchWeb(Node):
    def prep(self, shared):
        """Get the search query from the shared store."""
        return shared["search_query"]

    def exec(self, search_query):
        """Search the web for the given query."""
        # Call the search utility function
        print(f"🌐 Searching the web for: {search_query}")
        results = search_web_duckduckgo(search_query)
        return results

    def post(self, shared, prep_res, exec_res):
        """Save the search results and go back to the decision node."""
        # Add the search results to the context in the shared store
        previous = shared.get("context", "")
        shared["context"] = previous + "\n\nSEARCH: " + shared["search_query"] + "\nRESULTS: " + exec_res

        print("📚 Found information, analyzing results...")

        # Always go back to the decision node after searching
        return "decide"

class AnswerQuestion(Node):
    def prep(self, shared):
        """Get the question and context for answering."""
        return shared["question"], shared.get("context", "")

    def exec(self, inputs):
        """Call the LLM to generate a final answer."""
        question, context = inputs

        print("✍️ Crafting final answer...")

        # Create a prompt for the LLM to answer the question
        prompt = f"""
### CONTEXT
Based on the following information, answer the question.
Question: {question}
Research: {context}

## YOUR ANSWER:
Provide a comprehensive answer using the research results.
"""
        # Call the LLM to generate an answer
        answer = self.persona.call_llm(prompt)
        return answer

    def post(self, shared, prep_res, exec_res):
        """Save the final answer and complete the flow."""
        # Save the answer in the shared store
        shared["answer"] = exec_res

        print("✅ Answer generated successfully")

        # We're done -- no need to continue the flow
        return "done"


class PocketAgentPersona(BasePersona, DecideAction, SearchWeb, AnswerQuestion):
    """
    The PocketFlow persona, a demo showing usage of the PocketFlow API.
    """

    def __init__(self, *args, **kwargs):
        BasePersona.__init__(self, *args, **kwargs)
        DecideAction.__init__(self)
        SearchWeb.__init__(self)
        AnswerQuestion.__init__(self)

    @property
    def defaults(self):
        return PersonaDefaults(
            name="PocketAgentPersona",
            avatar_path="/api/ai/static/jupyternaut.svg",
            description="The pocketflow agent.",
            system_prompt="...",
        )

    ## UTILS (within the Persona class)
    def call_llm(self, prompt):
        """Calls the litellm completion API with the given prompt."""
        # If the prompt is a string, convert it to the proper message format
        if isinstance(prompt, str):
            prompt = [{"role": "user", "content": prompt}]
        response = completion(
            model=self.config_manager.chat_model,
            messages=prompt,
            stream=False,
        )
        return response.choices[0].message.content

    async def process_message(self, message: Message):
        if not self.config_manager.chat_model:
            self.send_message(
                "No chat model is configured.\n\n"
                "You must set one first in the Jupyter AI settings, found in 'Settings > AI Settings' from the menu bar."
            )
            return

        ## FLOW: Initialize the nodes and wire up the flow
        decide = DecideAction()
        search = SearchWeb()
        answer = AnswerQuestion()
        decide.persona = self
        search.persona = self
        answer.persona = self
        # Connect the nodes
        # If DecideAction returns "search", go to SearchWeb
        decide - "search" >> search
        # If DecideAction returns "answer", go to AnswerQuestion
        decide - "answer" >> answer
        # After SearchWeb completes and returns "decide", go back to DecideAction
        search - "decide" >> decide
        ## FLOW: Build the flow starting from the decision node
        agent_flow = Flow(start=decide)

        ## MAIN: Answer the question from the message
        question = message.body.split(" ", 1)[1]  # strip the @mention prefix
        shared = {"question": question}
        print(f"🤔 Processing question: {question}")
        agent_flow.run(shared)
        print("\n🎯 Final Answer:")
        print(shared.get("answer", "No answer found"))
        self.send_message(shared.get("answer", "No answer found"))
pyproject.toml (9 additions, 2 deletions)
@@ -62,15 +62,22 @@ data_analytics = [
"seaborn"
]

all = ["jupyter-ai-personas[finance,emoji,software_team,data_analytics,pr_review]"]
pocketagent = [
"pocketflow",
"ddgs",
"requests",
]

all = ["jupyter-ai-personas[finance,emoji,software_team,data_analytics,pr_review,pocketagent]"]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project.entry-points."jupyter_ai.personas"]
finance_persona = "jupyter_ai_personas.finance_persona.persona:FinancePersona"
emoji_persona = "jupyter_ai_personas.emoji_persona.persona:EmojiPersona"
finance_persona = "jupyter_ai_personas.finance_persona.persona:FinancePersona"
pocketagent_persona = "jupyter_ai_personas.pocketagent_persona.persona:PocketAgentPersona"
software_team_persona = "jupyter_ai_personas.software_team_persona.persona:SoftwareTeamPersona"
data_analytics_persona = "jupyter_ai_personas.data_analytics_persona.persona:DataAnalyticsTeam"
pr_review_persona = "jupyter_ai_personas.pr_review_persona.persona:PRReviewPersona"