Install the `YAML Language Support by Red Hat` extension in VSCode.

VSCode setup for syntax highlighting and validation (the `settings.json` entries below associate `*.pdl` files with YAML and with the published PDL schema):

```json
"yaml.schemas": {
    "https://ibm.github.io/prompt-declaration-language/dist/pdl-schema.json": "*.pdl"
},
"files.associations": {
    "*.pdl": "yaml"
}
```
@@ -128,13 +134,13 @@ In this example we use external content _data.yaml_ and watsonx as an LLM provider
```yaml
description: Template with variables
defs:
-  USER_INPUT:
-    read: ../examples/code/data.yaml
+  user_input:
+    read: ../code/data.yaml
    parser: yaml
text:
- model: watsonx/ibm/granite-34b-code-instruct
  input: |
-    Process this input: ${USER_INPUT}
+    Process this input: ${user_input}
    Format the output as JSON.
```
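Here the `read` block loads the file relative to the PDL program and, because of `parser: yaml`, binds the parsed result to the `user_input` variable. A hypothetical _data.yaml_ for illustration (the actual file in the repository may differ):

```yaml
# Hypothetical input file: any valid YAML works here, since the read
# block parses it and stores the resulting value in user_input.
source: sensor-42
readings:
  - 0.2
  - 0.7
```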
@@ -174,20 +180,14 @@ text:
until: ${ user_input == '/bye'}
```

-## Debugging Tools
-
-### Log Inspection
-```bash
-pdl --log <my-logfile> <my-example.pdl>
-```

-###Trace Generation and Live Document Visualization
+## Trace Generation and Live Document Visualization

```bash
pdl --trace <file.json> <my-example.pdl>
```

-Upload trace files to the [Live Document Viewer](https://ibm.github.io/prompt-declaration-language/viewer/) for visual debugging.
+Upload trace files to the [Live Document Viewer](https://ibm.github.io/prompt-declaration-language/viewer/) for visual debugging, trace exploration, and live programming.
-Most examples in this repository use IBM Granite models on [Replicate](https://replicate.com/).
-In order to run these examples, you need to create a free account
+You can run PDL with LLM models locally using [Ollama](https://ollama.com), or with other cloud services.
+See [here](https://ibm.github.io/prompt-declaration-language/tutorial/#using-ollama-models) for
+instructions on how to install an Ollama model locally.
+
+Most examples in this repository use IBM Granite models on [Ollama](https://ollama.com) and some are on [Replicate](https://replicate.com/). In order to run these examples, you need to create a free account
on Replicate, get an API key and store it in the environment variable:

-- `REPLICATE_API_KEY`
+- `REPLICATE_API_TOKEN`
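For example, in a POSIX shell (the token value is a placeholder):

```bash
# Placeholder token: substitute the API token from your Replicate account settings.
export REPLICATE_API_TOKEN=r8_xxxxxxxxxxxxxxxxxxxx
```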
In order to use foundation models hosted on [watsonx](https://www.ibm.com/watsonx) via LiteLLM, you need a watsonx account (a free plan is available) and set up the following environment variables:
-- `WATSONX_URL`, the API URL (set to `https://{region}.ml.cloud.ibm.com`) of your watsonx instance. The region can be found by clicking in the upper right corner of the watsonx dashboard (for example a valid region is `us-south` or `eu-gb`).
-- `WATSONX_APIKEY`, the API key (see information on [key creation](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui#create_user_key))
+- `WX_URL`, the API URL (set to `https://{region}.ml.cloud.ibm.com`) of your watsonx instance. The region can be found by clicking in the upper right corner of the watsonx dashboard (for example a valid region is `us-south` or `eu-gb`).
+- `WX_API_KEY`, the API key (see information on [key creation](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui#create_user_key))
- `WATSONX_PROJECT_ID`, the project hosting the resources (see information about [project creation](https://www.ibm.com/docs/en/watsonx/saas?topic=projects-creating-project) and [finding project ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-project-id.html?context=wx)).
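For example (a shell sketch; all values are placeholders):

```bash
# Placeholder values: substitute your own region, API key, and project ID.
export WX_URL="https://us-south.ml.cloud.ibm.com"
export WX_API_KEY="<your-api-key>"
export WATSONX_PROJECT_ID="<your-project-id>"
```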
For more information, see [documentation](https://docs.litellm.ai/docs/providers/watsonx).
@@ -93,13 +96,8 @@ and all code is executed locally. To use the `--sandbox` flag, you need to have
The interpreter prints out a log by default in the file `log.txt`. This log contains the details of inputs and outputs to every block in the program. It is useful to examine this file when the program is behaving differently than expected. The log displays the exact prompts submitted to models by LiteLLM (after applying chat templates), which can be
useful for debugging.

-To change the log filename, you can pass it to the interpreter as follows:
-
-```
-pdl --log <my-logfile> <my-example>
-```

-We can also pass initial data to the interpreter to populate variables used in a PDL program, as follows:
+We can pass initial data to the interpreter to populate variables used in a PDL program, as follows:
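A sketch of such an invocation, assuming the interpreter exposes a `--data` flag that accepts inline JSON or YAML (check `pdl --help` for the exact option name):

```bash
# Assumed flag: binds user_input before the program runs, so ${user_input}
# in the PDL file resolves to "hello".
pdl --data '{"user_input": "hello"}' <my-example>
```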