Commit ebee76a

feat: add block "id" tracking to the interpreter
- update Block schema to add an `id` field
- store the id in the trace
- update the react UI to use these values, rather than computing its own; this cleans up quite a bit
- some initial def-use tracking via a `defsite` context field
- replace the Dataflow topology visualization with a simpler Memory tracking table

Signed-off-by: Nick Mitchell <[email protected]>
1 parent 2fa7a5a commit ebee76a
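The id scheme visible in the updated traces below can be sketched as follows. This is an illustrative reconstruction inferred from the trace output, not the interpreter's actual code: a root block's id is its `kind` (e.g. `"text"`), the i-th nested block of a parent gets `"<parent>.<i>.<kind>"` (e.g. `"text.1.model"`), and plain-string children are referenced positionally (e.g. `"text.0"` in `defsite` fields).

```python
def assign_ids(block, parent_id=""):
    """Recursively attach an "id" field to every dict block in a trace.

    Sketch only: the exact naming rule is inferred from the ids seen in
    the demo traces ("text", "text.1.model"), not taken from the commit.
    """
    kind = block.get("kind", "unknown")
    block["id"] = f"{parent_id}.{kind}" if parent_id else kind
    for i, child in enumerate(block.get("text", [])):
        if isinstance(child, dict):
            assign_ids(child, f'{block["id"]}.{i}')
    return block

# Shape of the demo1 trace, trimmed to the relevant fields:
trace = {"kind": "text",
         "text": ["write a hello world example\n",
                  {"kind": "model", "text": []}]}
assign_ids(trace)  # root -> "text", nested model -> "text.1.model"
```

Keying the UI on these stable, hierarchical ids means the React side no longer has to recompute block positions itself.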

58 files changed: +1268 −436 lines

pdl-live-react/demos/demo1.pdl

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 description: Simple LLM interaction
 text:
 - "write a hello world example\n"
-- model: ollama/granite-code:8b
+- model: ollama/llama3.1:8b
   parameters:
     stop_sequences: '!'
     temperature: 0

pdl-live-react/demos/run.sh

Lines changed: 14 additions & 7 deletions

@@ -1,14 +1,21 @@
-#!/bin/sh
+#!/usr/bin/env bash
 
 SCRIPTDIR=$(cd $(dirname "$0") && pwd)
 UI="$SCRIPTDIR"/.. # top of react UI
 TOP="$UI"/.. # top of repo
 T="$UI"/src/demos # place to store traces
 
-pdl --trace "$T"/demo1.json "$UI"/demos/demo1.pdl
-pdl --trace "$T"/demo2.json "$TOP"/examples/tutorial/model_chaining.pdl
-pdl --trace "$T"/demo3.json "$TOP"/examples/fibonacci/fib.pdl
-pdl --trace "$T"/demo4.json "$TOP"/examples/chatbot/chatbot.pdl # WARNING: this one requires some human interaction. TODO script this.
-pdl --trace "$T"/demo5.json "$TOP"/examples/talk/6-code-json.pdl
+pdl --trace "$T"/demo1.json <(cat "$UI"/demos/demo1.pdl | sed -E 's#(model: )(.+)#\1ollama/llama3.1:8b#g')
+pdl --trace "$T"/demo2.json <(cat "$TOP"/examples/tutorial/model_chaining.pdl | sed -E 's#(model: )(.+)#\1ollama/llama3.1:8b#g')
+pdl --trace "$T"/demo3.json <(cat "$TOP"/examples/fibonacci/fib.pdl | sed -E 's#(model: )(.+)#\1ollama/llama3.1:8b#g')
+pdl --trace "$T"/demo4.json <(cat "$TOP"/examples/chatbot/chatbot.pdl | sed -E 's#(model: )(.+)#\1ollama/llama3.1:8b#g') <<EOF
+what is the fastest animal?
+no
+in europe?
+yes
+EOF
+cat "$TOP"/examples/talk/6-code-json.pdl | sed -E 's#(model: )(.+)#\1ollama/llama3.1:8b#g' > "$TOP"/examples/talk/6-code-json.pdl.tmp \
+  && pdl --trace "$T"/demo5.json "$TOP"/examples/talk/6-code-json.pdl.tmp \
+  && rm "$TOP"/examples/talk/6-code-json.pdl.tmp
 pdl --trace "$T"/demo6.json "$UI"/demos/error.pdl || true
-pdl --trace "$T"/demo7.json "$TOP"/examples/talk/4-function.pdl
+pdl --trace "$T"/demo7.json <(cat "$TOP"/examples/talk/4-function.pdl | sed -E 's#(model: )(.+)#\1ollama/llama3.1:8b#g')
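The `sed -E 's#(model: )(.+)#\1ollama/llama3.1:8b#g'` rewrite in the new run.sh forces every `model:` line in a demo program to `ollama/llama3.1:8b` before handing it to `pdl` via process substitution. As a quick sanity check on the regex, the same substitution can be expressed in Python (illustrative only; the script itself uses `sed`):

```python
import re

def force_model(pdl_source: str, model: str = "ollama/llama3.1:8b") -> str:
    # Mirrors the sed call: replace whatever follows "model: " on each
    # line ("." does not cross newlines, so the match stops at EOL).
    return re.sub(r"(model: )(.+)", lambda m: m.group(1) + model, pdl_source)

rewritten = force_model("- model: ollama/granite-code:8b\n  parameters:\n")
print(rewritten)  # -> "- model: ollama/llama3.1:8b\n  parameters:\n"
```

Note the caveat this shares with the `sed` version: the pattern matches any line containing `model: `, including ones inside strings or comments, which is acceptable for these small demo files.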

pdl-live-react/package-lock.json

Lines changed: 19 additions & 1 deletion (generated file; diff not rendered)

pdl-live-react/package.json

Lines changed: 1 addition & 0 deletions

@@ -22,6 +22,7 @@
   "dependencies": {
     "@patternfly/react-code-editor": "^6.1.0",
     "@patternfly/react-core": "^6.1.0",
+    "@patternfly/react-table": "^6.1.0",
     "@patternfly/react-topology": "^6.1.0",
     "@tauri-apps/api": "^2",
     "@tauri-apps/plugin-cli": "^2.2.0",

pdl-live-react/src/Context.ts

Lines changed: 0 additions & 29 deletions
This file was deleted.
Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-{"kind": "text", "start_nanos": 1738006438448127000, "end_nanos": 1738006439695520000, "timezone": "EST", "description": "Simple LLM interaction", "defs": {}, "text": ["write a hello world example\n", {"kind": "model", "context": [{"role": "user", "content": "write a hello world example\n"}], "start_nanos": 1738006438449110000, "end_nanos": 1738006439695499000, "timezone": "EST", "defs": {}, "platform": "litellm", "model": "ollama/granite-code:8b", "parameters": {"temperature": 0.0, "stop_sequences": "!"}, "result": "```python\nprint(\"hello world\")\n```\n"}], "result": "write a hello world example\n```python\nprint(\"hello world\")\n```\n"}
+{"kind": "text", "id": "text", "start_nanos": 1738862333350354000, "end_nanos": 1738862340857376000, "timezone": "EST", "description": "Simple LLM interaction", "defs": {}, "text": ["write a hello world example\n", {"kind": "model", "id": "text.1.model", "context": [{"role": "user", "content": "write a hello world example\n", "defsite": "text.0"}], "start_nanos": 1738862333351329000, "end_nanos": 1738862340857340000, "timezone": "EST", "defs": {}, "platform": "litellm", "model": "ollama/llama3.1:8b", "parameters": {"temperature": 0.0, "stop_sequences": "!"}, "result": "**Hello World Example**\n=======================\n\nBelow is a simple \"Hello, World!\" program in Python:\n\n```python\n# hello_world.py\n\ndef main():\n \"\"\"Prints 'Hello, World!' to the console.\"\"\"\n print(\"Hello, World!\")\n\nif __name__ == \"__main__\":\n main()\n```\n\n**How to Run:**\n\n1. Save this code in a file named `hello_world.py`.\n2. Open your terminal or command prompt.\n3. Navigate to the directory where you saved the file.\n4. Type `python hello_world.py` and press Enter.\n\nYou should see \"Hello, World!\" printed on your console."}], "result": "write a hello world example\n**Hello World Example**\n=======================\n\nBelow is a simple \"Hello, World!\" program in Python:\n\n```python\n# hello_world.py\n\ndef main():\n \"\"\"Prints 'Hello, World!' to the console.\"\"\"\n print(\"Hello, World!\")\n\nif __name__ == \"__main__\":\n main()\n```\n\n**How to Run:**\n\n1. Save this code in a file named `hello_world.py`.\n2. Open your terminal or command prompt.\n3. Navigate to the directory where you saved the file.\n4. Type `python hello_world.py` and press Enter.\n\nYou should see \"Hello, World!\" printed on your console."}
Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-{"kind": "text", "start_nanos": 1738006440779179000, "end_nanos": 1738006442322911000, "timezone": "EST", "description": "Model chaining", "defs": {}, "text": ["Hello\n", {"kind": "model", "context": [{"role": "user", "content": "Hello\n"}], "start_nanos": 1738006440780081000, "end_nanos": 1738006441153102000, "timezone": "EST", "defs": {}, "platform": "litellm", "model": "ollama/granite-code:8b", "parameters": {"stop_sequences": "!"}, "result": "Hi there! How can I assist you today?\n"}, "\nDid you just say Hello?\n", {"kind": "model", "context": [{"role": "user", "content": "Hello\n"}, {"content": "Hi there! How can I assist you today?\n", "role": "assistant"}, {"role": "user", "content": "\nDid you just say Hello?\n"}], "start_nanos": 1738006441154069000, "end_nanos": 1738006442322886000, "timezone": "EST", "defs": {}, "platform": "litellm", "model": "ollama/granite-code:8b", "parameters": {"stop_sequences": "!"}, "result": "Yes, that's correct. As an AI language model, my goal is to provide helpful and accurate responses to your questions and conversations. I'm here to assist you in any way I can. So, what can I help you with today?\n"}], "result": "Hello\nHi there! How can I assist you today?\n\nDid you just say Hello?\nYes, that's correct. As an AI language model, my goal is to provide helpful and accurate responses to your questions and conversations. I'm here to assist you in any way I can. So, what can I help you with today?\n"}
+{"kind": "text", "id": "text", "start_nanos": 1738862342134373000, "end_nanos": 1738862343556101000, "timezone": "EST", "description": "Model chaining", "defs": {}, "text": ["Hello\n", {"kind": "model", "id": "text.1.model", "context": [{"role": "user", "content": "Hello\n", "defsite": "text.0"}], "start_nanos": 1738862342135343000, "end_nanos": 1738862342794512000, "timezone": "EST", "defs": {}, "platform": "litellm", "model": "ollama/llama3.1:8b", "parameters": {"stop_sequences": "!"}, "result": "Hello! Is there something I can help you with or would you like to chat?"}, "\nDid you just say Hello?\n", {"kind": "model", "id": "text.3.model", "context": [{"role": "user", "content": "Hello\n", "defsite": "text.0"}, {"content": "Hello! Is there something I can help you with or would you like to chat?", "role": "assistant", "defsite": "text.1.model"}, {"role": "user", "content": "\nDid you just say Hello?\n", "defsite": "text.2"}], "start_nanos": 1738862342795067000, "end_nanos": 1738862343556032000, "timezone": "EST", "defs": {}, "platform": "litellm", "model": "ollama/llama3.1:8b", "parameters": {"stop_sequences": "!"}, "result": "Yes, I did say hello. It's a greeting, typically used when starting a conversation or interaction. What's on your mind?"}], "result": "Hello\nHello! Is there something I can help you with or would you like to chat?\nDid you just say Hello?\nYes, I did say hello. It's a greeting, typically used when starting a conversation or interaction. What's on your mind?"}
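The `defsite` fields in the model-chaining trace above are what make the commit's "initial def-use tracking" possible: each context message records the id of the block that produced it. A minimal sketch of extracting those def-use edges from a trace (this helper is hypothetical, not part of the commit; something like it could back the new Memory tracking table):

```python
def defuse_edges(block, edges=None):
    """Walk a PDL trace and collect (producer defsite, consumer block id)
    pairs from the "context" arrays of model blocks."""
    if edges is None:
        edges = []
    if isinstance(block, dict):
        for msg in block.get("context", []):
            if "defsite" in msg:
                edges.append((msg["defsite"], block.get("id")))
        for child in block.get("text", []):
            defuse_edges(child, edges)  # strings are skipped by the dict check
    return edges

# Shape of the trace above, trimmed to the relevant fields:
trace = {
    "kind": "text", "id": "text",
    "text": [
        "Hello\n",
        {"kind": "model", "id": "text.1.model",
         "context": [{"role": "user", "content": "Hello\n",
                      "defsite": "text.0"}]},
    ],
}
print(defuse_edges(trace))  # -> [('text.0', 'text.1.model')]
```

On the full demo2 trace this would also surface `("text.1.model", "text.3.model")`, i.e. the second model call consuming the first call's output, which is exactly the chaining relationship the old Dataflow topology view tried to visualize.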
