Merged
Changes from all commits
35 commits
b0baed1
reapply litellm updates to support only messages llm kwarg
dtam Sep 19, 2024
5e0eb15
tests run and make progress on rewrite, most of unit_tests_passing
dtam Sep 20, 2024
a11c4ee
migrate more tests partially
dtam Sep 24, 2024
487a06a
Merge remote-tracking branch 'origin/main' into feature/litellm_cleanup
dtam Sep 24, 2024
b97caa2
some progress
dtam Sep 24, 2024
e684739
more progress
dtam Sep 24, 2024
18591b7
fix some more tests
dtam Sep 25, 2024
a1f7893
fix some more tests
dtam Sep 25, 2024
816ef5f
Merge remote-tracking branch 'origin/main' into feature/litellm_cleanup
dtam Sep 25, 2024
67feff4
more progress
dtam Sep 25, 2024
da2dc63
more tests
dtam Sep 26, 2024
66a568b
more tests
dtam Sep 26, 2024
bc7ff07
tests passing
dtam Sep 27, 2024
e8eb7f5
typing and lint
dtam Sep 27, 2024
f4c4827
lint
dtam Sep 27, 2024
1228310
typing
dtam Sep 27, 2024
1d0490f
Merge remote-tracking branch 'origin/main' into feature/litellm_cleanup
dtam Sep 27, 2024
4e333b9
fix bad merge
dtam Sep 27, 2024
0611c7b
minor fixes
dtam Oct 10, 2024
c4b7b4b
Merge remote-tracking branch 'origin/main' into feature/litellm_cleanup
dtam Oct 10, 2024
8c5f9c5
notebooks
dtam Oct 10, 2024
6dfc337
last few notebooks
dtam Oct 10, 2024
955622c
last books
dtam Oct 11, 2024
27248aa
update docs for messages
dtam Oct 12, 2024
d2df838
last of docs
dtam Oct 14, 2024
95e2767
update more docs and start migration guide
dtam Oct 17, 2024
ef2863b
fix tests and format
dtam Oct 17, 2024
10a1dde
Merge remote-tracking branch 'origin/main' into feature/litellm_cleanup
dtam Oct 18, 2024
58d16ec
update some tests
dtam Oct 18, 2024
e167d83
Merge remote-tracking branch 'origin/main' into feature/litellm_cleanup
dtam Oct 21, 2024
e5ccff3
renable history by default
dtam Oct 21, 2024
e167a6d
expose messages to prompt helper and finish docs for it
dtam Oct 21, 2024
7898e76
indention
dtam Oct 21, 2024
d8c0404
Merge remote-tracking branch 'origin/0.6.0-dev' into feature/litellm_…
dtam Oct 21, 2024
60605fa
update api client to point to its alpha
dtam Oct 21, 2024
6,690 changes: 6,674 additions & 16 deletions docs/concepts/async_streaming.ipynb

Large diffs are not rendered by default.

1 change: 0 additions & 1 deletion docs/concepts/error_remediation.md
@@ -18,7 +18,6 @@ Note that this list is not exhaustive of the possible errors that could occur.
```log
The callable `fn` passed to `Guard(fn, ...)` failed with the following error:
{Root error message here!}.
-Make sure that `fn` can be called as a function that takes in a single prompt string and returns a string.
```


20 changes: 4 additions & 16 deletions docs/concepts/logs.md
@@ -33,17 +33,17 @@ docs/html/single-step-history.html

## Calls
### Initial Input
-Inital inputs like prompt and instructions from a call are available on each call.
+Initial inputs like messages from a call are available on each call.

```py
first_call = my_guard.history.first
print("prompt\n-----")
print(first_call.prompt)
print("message\n-----")
print(first_call.messages[0]["content"])
print("prompt params\n------------- ")
print(first_call.prompt_params)
```
```log
-prompt
+message
-----

You are a human in an enchanted forest. You come across opponents of different types. You should fight smaller opponents, run away from bigger ones, and freeze if the opponent is a bear.
@@ -67,18 +67,6 @@ prompt params
{'opp_type': 'grizzly'}
```

-Note: Input messages and msg_history currently can be accessed through iterations
-```py
-print(guard.history.last.iterations.last.inputs.msg_history)
-```
-```log
-[
-{"role":"system","content":"You are a helpful assistant."},
-{"role":"user","content":"Tell me a joke"}
-]
-```


### Final Output
Final output of call is accessible on a call.
```py
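Taken together, the logs.md changes replace the prompt-based accessors with the chat-style `messages` list on each call. A minimal sketch of the updated inspection pattern, assuming a guard that has already executed at least one LLM call; only the accessors shown in the diff (`history.first`, `messages`, `prompt_params`) are taken from the docs, the rest is illustrative:

```py
from guardrails import Guard

my_guard = Guard()
# ... my_guard(...) is assumed to have been invoked at least once before inspecting history ...

first_call = my_guard.history.first

# Initial inputs are now exposed as chat-style messages rather than a prompt string.
print(first_call.messages[0]["content"])

# Prompt params are accessed the same way as before.
print(first_call.prompt_params)
```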
44 changes: 36 additions & 8 deletions docs/concepts/streaming.ipynb
@@ -19,7 +19,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -51,12 +51,41 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #800080; text-decoration-color: #800080; font-weight: bold\">ValidationOutcome</span><span style=\"font-weight: bold\">(</span>\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">call_id</span>=<span style=\"color: #008000; text-decoration-color: #008000\">'14148119808'</span>,\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">raw_llm_output</span>=<span style=\"color: #008000; text-decoration-color: #008000\">'.'</span>,\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">validation_summaries</span>=<span style=\"font-weight: bold\">[]</span>,\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">validated_output</span>=<span style=\"color: #008000; text-decoration-color: #008000\">'.'</span>,\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">reask</span>=<span style=\"color: #800080; text-decoration-color: #800080; font-style: italic\">None</span>,\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">validation_passed</span>=<span style=\"color: #00ff00; text-decoration-color: #00ff00; font-style: italic\">True</span>,\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">error</span>=<span style=\"color: #800080; text-decoration-color: #800080; font-style: italic\">None</span>\n",
"<span style=\"font-weight: bold\">)</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1;35mValidationOutcome\u001b[0m\u001b[1m(\u001b[0m\n",
" \u001b[33mcall_id\u001b[0m=\u001b[32m'14148119808'\u001b[0m,\n",
" \u001b[33mraw_llm_output\u001b[0m=\u001b[32m'.'\u001b[0m,\n",
" \u001b[33mvalidation_summaries\u001b[0m=\u001b[1m[\u001b[0m\u001b[1m]\u001b[0m,\n",
" \u001b[33mvalidated_output\u001b[0m=\u001b[32m'.'\u001b[0m,\n",
" \u001b[33mreask\u001b[0m=\u001b[3;35mNone\u001b[0m,\n",
" \u001b[33mvalidation_passed\u001b[0m=\u001b[3;92mTrue\u001b[0m,\n",
" \u001b[33merror\u001b[0m=\u001b[3;35mNone\u001b[0m\n",
"\u001b[1m)\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"fragment_generator = guard(\n",
" litellm.completion,\n",
" model=\"gpt-4o\",\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
@@ -116,7 +145,6 @@
"guard = gd.Guard()\n",
"\n",
"fragment_generator = await guard(\n",
" litellm.completion,\n",
" model=\"gpt-3.5-turbo\",\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
@@ -137,7 +165,7 @@
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "litellm",
"language": "python",
"name": "python3"
},
@@ -151,7 +179,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.8"
"version": "3.12.3"
}
},
"nbformat": 4,
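The streaming notebook changes drop the positional `litellm.completion` callable from both the synchronous and async calls, leaving the model name and `messages` as the only LLM arguments. A minimal sketch of a streaming invocation after this change; the `stream=True` flag and fragment iteration follow the streaming pattern used in these notebooks, and the prompt text is an illustrative assumption:

```py
import guardrails as gd

guard = gd.Guard()

# After this change the guard is invoked with the model and chat messages only;
# no LLM callable is passed positionally.
fragment_generator = guard(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Describe the water cycle in a few sentences."},
    ],
    stream=True,
)

# Each fragment is a partial ValidationOutcome; print validated text as it arrives.
for fragment in fragment_generator:
    print(fragment.validated_output, end="")
```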