Commit 051690f

summarize-topic: Add a tool to summarize topic.
1 parent 2675715 commit 051690f

File tree: 3 files changed, +131 -0 lines changed

zulip/integrations/litelllm/README.md

Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
# Summarize topic

Generate a short summary of the last 100 messages in the provided topic URL.

### API Keys

For testing, you need an access token from https://huggingface.co/settings/tokens (or set the correct environment variable with the access token if using a different model).

In `~/.zuliprc`, add a section named `LITELLM_API_KEYS` and set the API key for the model you are trying to use.
For example:

```
[LITELLM_API_KEYS]
HUGGINGFACE_API_KEY=YOUR_API_KEY
```
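
The script exports every key in this section as an environment variable, so if you use a model from a different provider, put that provider's key here instead. For instance, with an OpenAI-hosted model the section might look like the sketch below (the variable name is an assumption based on litellm's standard provider environment variables; use whatever variable your provider expects):

```
[LITELLM_API_KEYS]
OPENAI_API_KEY=YOUR_API_KEY
```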

### Setup

```bash
$ pip install -r zulip/integrations/litelllm/requirements.txt
```

Just run `zulip/integrations/litelllm/summarize-topic` to generate a sample summary.

```bash
$ zulip/integrations/litelllm/summarize-topic --help
usage: summarize-topic [-h] [--url URL] [--model MODEL]

options:
  -h, --help     show this help message and exit
  --url URL      The URL to fetch content from
  --model MODEL  The model name to use for summarization
```
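
To summarize a specific topic with a specific model, pass both explicitly; the values below simply mirror the script's built-in defaults:

```bash
$ zulip/integrations/litelllm/summarize-topic \
    --url "https://chat.zulip.org/#narrow/stream/101-design/topic/more.20user.20indicators" \
    --model "huggingface/meta-llama/Meta-Llama-3-8B-Instruct"
```

The script prints the raw server response followed by the generated summary.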

zulip/integrations/litelllm/requirements.txt

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
zulip
litellm

zulip/integrations/litelllm/summarize-topic

Lines changed: 96 additions & 0 deletions
@@ -0,0 +1,96 @@
#!/usr/bin/env python3

import argparse
import os
import sys
import urllib.parse
from configparser import ConfigParser

from litellm import completion

import zulip

config_file = zulip.get_default_config_filename()
if not config_file:
    print("Could not find the Zulip configuration file. Please read the provided README.")
    sys.exit(1)

client = zulip.Client(config_file=config_file)

config = ConfigParser()
# Make the config parser case sensitive; otherwise API keys would be
# lowercased, which is not supported by litellm.
# https://docs.python.org/3/library/configparser.html#configparser.ConfigParser.optionxform
config.optionxform = str  # type: ignore[assignment, method-assign]

with open(config_file) as f:
    config.read_file(f, config_file)

# Set all the keys in `LITELLM_API_KEYS` as environment variables.
for key in config["LITELLM_API_KEYS"]:
    print("Setting key:", key)
    os.environ[key] = config["LITELLM_API_KEYS"][key]

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--url",
        type=str,
        help="The URL to fetch content from",
        default="https://chat.zulip.org/#narrow/stream/101-design/topic/more.20user.20indicators",
    )
    parser.add_argument(
        "--model",
        type=str,
        help="The model name to use for summarization",
        default="huggingface/meta-llama/Meta-Llama-3-8B-Instruct",
    )
    args = parser.parse_args()

    url = args.url
    model = args.model

    # Parse the channel and topic out of the narrow hash. The stream term
    # ("101-design") carries a stream-ID prefix, and Zulip hashes encode "%"
    # as ".", so "more.20user.20indicators" decodes to "more user indicators".
    base_url, narrow_hash = url.split("#")
    narrow_hash_terms = narrow_hash.split("/")
    channel = narrow_hash_terms[2].split("-")[1]
    topic = narrow_hash_terms[4]
    channel = urllib.parse.unquote(channel.replace(".", "%"))
    topic = urllib.parse.unquote(topic.replace(".", "%"))

    narrow = [
        {"operator": "channel", "operand": channel},
        {"operator": "topic", "operand": topic},
    ]

    # Fetch the last 100 messages in the topic.
    request = {
        "anchor": "newest",
        "num_before": 100,
        "num_after": 0,
        "narrow": narrow,
        "apply_markdown": False,
    }
    result = client.get_messages(request)
    messages = result["messages"]

    formatted_messages = [
        {"content": f"{message['sender_full_name']}: {message['content']}", "role": "user"}
        for message in messages
    ]

    # Provide an instruction if using an `Instruct` model.
    # Hugging Face imposes a 100-token output limit.
    if "Instruct" in model:
        formatted_messages.append(
            {"content": "Summarize the above content within 90 words.", "role": "user"}
        )

    # Send the formatted messages to the LLM model for summarization.
    response = completion(
        model=model,
        messages=formatted_messages,
    )

    print("Server response:\n", response)
    print("\n\nGenerated summary for URL:", url)
    print("Summary:")
    print(response["choices"][0]["message"]["content"])

0 commit comments
