
Commit e32fc06

Add knowledge graph
* migrate content from README.md to KG
1 parent 0bb48d7 commit e32fc06

28 files changed

+3284 -116 lines changed

README.md

Lines changed: 1 addition & 116 deletions
@@ -62,122 +62,7 @@ docker run
See [docs](https://vonwig.github.io/prompts.docs/#/page/running%20the%20prompt%20engine) for more details on how to run the conversation loop,
and especially how to use it to run local prompts that are not yet in GitHub.

## Function volumes

Every function container will have a shared volume mounted into the container at `/thread`.
The volume is intended to be ephemeral and will be deleted at the end of the run. However, the volume
can be saved for inspection by passing the argument `--thread-id`.
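
For example (a sketch; the exact value accepted by `--thread-id` is not documented here, so an arbitrary identifier is assumed):

```sh
# The usual run command, with --thread-id appended so the /thread
# volume survives the run for inspection. "my-debug-thread" is an
# arbitrary example identifier.
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --mount type=volume,source=docker-prompts,target=/prompts \
  vonwig/prompts:latest \
    run --host-dir $PWD --user jimclark106 --platform darwin \
    --prompts "github:docker/labs-make-runbook?ref=main&path=prompts/lazy_docker" \
    --thread-id my-debug-thread
```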

## Output json-rpc notifications

Add the flag `--jsonrpc` to the list of arguments to switch the stdout stream to be a series of `jsonrpc` notifications.
This is useful if you are running the tool and streaming responses onto a canvas.

Try running with the `--jsonrpc` flag to see a full example; the stdout stream will look something like this.

```
{"jsonrpc":"2.0","method":"message","params":{"content":" consistently"}}Content-Length: 65

{"jsonrpc":"2.0","method":"message","params":{"content":" high"}}Content-Length: 61

{"jsonrpc":"2.0","method":"message","params":{"content":"."}}Content-Length: 52

{"jsonrpc":"2.0","method":"functions","params":null}Content-Length: 57

{"jsonrpc":"2.0","method":"functions-done","params":null}Content-Length: 1703
```
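
The stream uses LSP-style framing: each `Content-Length` header gives the byte length of the next JSON body, which is why each header in the excerpt above trails the previous notification. A minimal client-side reader under that assumption (Python; the helper name is illustrative):

```python
import json
import sys

def notifications(stream):
    """Yield parsed jsonrpc messages from a Content-Length framed byte stream."""
    while True:
        header = stream.readline()
        if not header:
            return  # end of stream
        if not header.strip().startswith(b"Content-Length:"):
            continue  # skip blank separator lines
        length = int(header.split(b":")[1])
        stream.readline()  # consume the blank line after the header
        yield json.loads(stream.read(length))

# Example: echo streamed assistant text to the terminal.
for note in notifications(sys.stdin.buffer):
    if note["method"] == "message":
        content = (note.get("params") or {}).get("content")
        if content is not None:
            print(content, end="", flush=True)
```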

### Notification Methods

#### message

This is a message from the assistant role, or from a tool role.
The params for the `message` method should be appended to the conversation. The `params` can be either
`content` or `debug`.

```json
{"params": {"content": "append this output to the current message"}}
{"params": {"debug": "this is a debug message and should only be shown in debug mode"}}
```

#### prompts

Generated user and system prompts are sent to the client so that they can be displayed. These
are sent after extractors are expanded so that users can see the actual prompts sent to the AI model.

```json
{"params": {"messages": [{"role": "system", "content": "system prompt message"}]}}
```

#### functions

Functions are json encoded strings. When streaming, the content of the json params will change as
the function streams. This can be rendered in place to show the function definition completing
as it streams.

```json
{"params": "{}"}
```
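
For instance, while a call to the `findutils-by-name` tool streams, a client might see successive snapshots like these (an illustrative guess at the payload; only the envelope shape above is documented):

```json
{"params": "{\"name\": \"findutils"}
{"params": "{\"name\": \"findutils-by-name\", \"arguments\": {\"glob\": \"*.go"}
{"params": "{\"name\": \"findutils-by-name\", \"arguments\": {\"glob\": \"*.go\"}}"}
```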

#### functions-done

This notification is sent when a function definition has stopped streaming, and is now being executed.
The next notification after this will be a tool message.

```json
{"params": ""}
```

#### error

The `error` notification is not a message from the model, prompts, or tools. Instead, it represents a kind
of _system_ error trying to run the conversation loop. It should always be shown to the user as it
probably represents something like a networking error or a configuration problem.

```json
{"params": {"content": "error message"}}
```

### Request Methods

#### prompt

Send a user prompt into the conversation loop. The `prompt` method takes the following `params`.

```json
{"params": {"content": "here is the user prompt"}}
```
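
Assuming requests use the same `Content-Length` framing as the notification stream above, a complete `prompt` request could look like this (the `id` field is the standard jsonrpc 2.0 request id, assumed here rather than documented):

```
Content-Length: 77

{"jsonrpc":"2.0","id":1,"method":"prompt","params":{"content":"user prompt"}}
```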

## Prompt file layout

Each prompt directory should contain a README.md describing the prompts and their purpose. Each prompt file
is a markdown document that supports moustache templates for substituting context extracted from the project.

```
prompt_dir/
├── 010_system_prompt.md
├── 020_user_prompt.md
└── README.md
```

* ordering of messages is determined by filename sorting
* the role is encoded in the name of the file

### Moustache Templates

The prompt templates can contain expressions like `{{dockerfiles}}` to add information
extracted from the current project. Examples of facts that can be added to the
prompts are:

* `{{platform}}` - the platform of the current development environment.
* `{{username}}` - the Docker Hub username (and default namespace for image pushes)
* `{{languages}}` - names of languages discovered in the project.
* `{{project.dockerfiles}}` - the relative paths to local Dockerfiles
* `{{project.composefiles}}` - the relative paths to local Docker Compose files.

The entire `project-facts` map is also available using dot-syntax
forms like `{{project-facts.project-root-uri}}`. All moustache template
expressions documented [here](https://github.com/yogthos/Selmer) are supported.
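
For instance, a user prompt fragment could combine several of these facts (a sketch; how list-valued facts such as `{{project.dockerfiles}}` render as text depends on the template engine):

```md
# Prompt user

This project targets {{platform}} and pushes images under the {{username}}
namespace. Start from the Dockerfiles at {{project.dockerfiles}}.
```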

[PROMPTS KNOWLEDGE GRAPH](https://vonwig.github.io/prompts.docs/#/page/index)

## Building

graphs/prompts/.DS_Store

6 KB (binary file not shown)
Lines changed: 88 additions & 0 deletions
@@ -0,0 +1,88 @@
- Run some prompts checked in to GitHub against a project in the current working directory.
  id:: 66d779c7-c1b7-40c6-a635-fa712da492de
  ```sh
  docker run \
    --rm -it \
    --pull=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --mount type=volume,source=docker-prompts,target=/prompts \
    --mount type=bind,source=$HOME/.openai-api-key,target=/root/.openai-api-key \
    vonwig/prompts:latest \
      run \
      --host-dir $PWD \
      --user jimclark106 \
      --platform darwin \
      --prompts "github:docker/labs-make-runbook?ref=main&path=prompts/lazy_docker"
  ```
- Most of this is boilerplate except:
  - the `--user` option in line 10 requires a valid Docker Hub user name
  - the `--prompts` option in line 12 requires a valid [github reference]([[GitHub Refs]]) to some markdown prompts
  - if the project is located somewhere other than $PWD then the `--host-dir` will need to be updated.
- Run a local prompt markdown file against a project in the current working directory. In this example, the prompts are not pulled from GitHub; instead, they are being developed in a directory called `$PROMPTS_DIR`, and the local prompt file is `$PROMPTS_DIR/myprompts.md`.
  id:: 66d77f1b-1684-480d-ad7b-5e9f53292fe4
  ```sh
  docker run \
    --rm -it \
    --pull=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --mount type=volume,source=docker-prompts,target=/prompts \
    --mount type=bind,source=$HOME/.openai-api-key,target=/root/.openai-api-key \
    --mount type=bind,source=$PROMPTS_DIR,target=/app/workdir \
    --workdir /app/workdir \
    vonwig/prompts:latest \
      run \
      --host-dir $PWD \
      --user jimclark106 \
      --platform darwin \
      --prompts-file myprompts.md
  ```
- Most of this is boilerplate except:
  - the `--user` option in line 12 requires a valid Docker Hub user name
  - the `--prompts-file` option in line 14 is a relative path to a prompts file (relative to $PROMPTS_DIR)
  - if the project being analyzed is located somewhere other than $PWD then the `--host-dir` will need to be updated.
- [[Running the Prompt Engine]]
- [[Authoring Prompts]]
- Here is a prompt file with lots of non-default metadata (it uses [extractors]([[Prompt Extractors]]), a [[tool]], and a local LLM in [[ollama]]). It has one system prompt and one user prompt. Note that the user prompt contains a moustache template to pull in data from an extractor.
  id:: 66d7f3ff-8769-40b3-b6b5-fc4fceea879e
  ```md
  ---
  extractors:
    - name: linguist
      image: vonwig/go-linguist:latest
      command:
        - -json
      output-handler: linguist
  tools:
    - name: findutils-by-name
      description: find files in a project by name
      parameters:
        type: object
        properties:
          glob:
            type: string
            description: the glob pattern for files that should be found
      container:
        image: vonwig/findutils:latest
        command:
          - find
          - .
          - -name
  model: llama3.1
  url: http://localhost/v1/chat/completions
  stream: false
  ---

  # Prompt system

  You are an expert on analyzing project content.

  # Prompt user

  {{#linguist}}
  This project contains {{language}} code.
  {{/linguist}}

  Can you find any language specific project files and list them?
  ```
Lines changed: 45 additions & 0 deletions
@@ -0,0 +1,45 @@
- An extractor is a function that runs before prompts are sent to an LLM. It can _extract_ some data from a project directory in order to inject context into a set of prompts.
  id:: 66d87dd3-efa2-4eb3-ba92-5cc4c2f9700b
- Create a docker container that expects a project bind mounted at `/project` and that writes `application/json` encoded data to `stdout`. The data written to `stdout` is what will be made available to any subsequent prompt templates. A minimal sketch of an extractor entrypoint follows the test instructions below.
  id:: 66d8a36a-432f-4d1a-a48c-edbe0224b182
  To test your extractor function, run
  ```sh
  docker run \
    --rm -it \
    --mount type=bind,source=$PWD,target=/project \
    --workdir /project \
    image-name:latest arg1 arg2 ....
  ```
  - this would make your current working directory available to the extractor at `/project`
  - you can also arrange to pass arguments to the extractor function when you define the extractor metadata
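  - for illustration, a minimal extractor entrypoint could look like the script below (a hypothetical sketch, not part of this repository; it assumes `find`, `sed`, and `jq` are installed in the image)
    ```sh
    #!/bin/sh
    # Hypothetical extractor entrypoint: write the project's top-level
    # entry names to stdout as a single JSON array.
    find /project -mindepth 1 -maxdepth 1 | sed 's|^/project/||' | jq -R . | jq -s .
    ```
  - built as `image-name:latest`, the test command above would then print something like `["src", "README.md", "Dockerfile"]`, which is the data made available to prompt templates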
- Once you have defined an extractor image (e.g. `image-name:latest`), create an entry in the prompt file to reference it.
  id:: 66d8a4f3-656d-42bf-b22a-60bba2d1887f
  ```
  ---
  extractors:
    - name: my-extractor
      image: image-name:latest
      command:
        - arg1
        - arg2
  ---

  # Prompt user

  I can now inject context into the prompt using moustache template syntax.
  {{#my-extractor}}
  {{.}}
  {{/my-extractor}}
  ```
  Read more about [moustache templates](https://mustache.github.io/mustache.5.html)
- #log working on [[Prompt Extractors]]
  #log working on [[Authoring Prompts]]
- A very simple prompt file that contains no metadata and just a single user prompt is
  id:: 66d8a396-9268-4917-882f-da4c52b7b5dd
  ```
  # Prompt user

  Tell me about Docker!
  ```
- #log working on [[GitHub Refs]]
-
Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
- [[conversation loop]]
  id:: 66d9d1e0-a13e-4d62-8db7-9eebb37714a8
-
-

graphs/prompts/logseq/.DS_Store

6 KB (binary file not shown)

6 KB (binary file not shown)
