Commit 3d97345

readme changes (#605)
Signed-off-by: Mandana Vaziri <[email protected]>
1 parent 90bcd22 commit 3d97345

File tree: 2 files changed (+23, -25 lines)


README.md

Lines changed: 14 additions & 14 deletions
@@ -46,6 +46,8 @@ and run it:
 pdl <path/to/example.pdl>
 ```

+For more information on the `pdl` CLI see [here](https://ibm.github.io/prompt-declaration-language/).
+
 ## Key Features

 - **LLM Integration**: Compatible with any LLM, including IBM watsonx
@@ -65,8 +67,9 @@ pdl <path/to/example.pdl>
 ## Documentation

 - [Documentation](https://ibm.github.io/prompt-declaration-language/)
-- [API References](https://ibm.github.io/prompt-declaration-language/api_reference/)
 - [Tutorial](https://ibm.github.io/prompt-declaration-language/tutorial/)
+- [API References](https://ibm.github.io/prompt-declaration-language/api_reference/)
+

 ### Quick Reference

@@ -90,11 +93,13 @@ pip install 'prompt-declaration-language[examples]'
 ### Environment Setup

 You can run PDL with LLM models locally using [Ollama](https://ollama.com), or with a cloud service.
+See [here](https://ibm.github.io/prompt-declaration-language/tutorial/#using-ollama-models) for
+instructions on how to install an Ollama model locally.

 If you use watsonx:
 ```bash
-export WATSONX_URL="https://{region}.ml.cloud.ibm.com"
-export WATSONX_APIKEY="your-api-key"
+export WX_URL="https://{region}.ml.cloud.ibm.com"
+export WX_API_KEY="your-api-key"
 export WATSONX_PROJECT_ID="your-project-id"
 ```

@@ -105,6 +110,7 @@ export REPLICATE_API_TOKEN="your-token"

 ### IDE Configuration

+Install the `YAML Language Support by Red Hat` extension in VSCode.
 VSCode setup for syntax highlighting and validation:

 ```json
@@ -128,13 +134,13 @@ In this example we use external content _data.yaml_ and watsonx as an LLM provider
 ```yaml
 description: Template with variables
 defs:
-  USER_INPUT:
-    read: ../examples/code/data.yaml
+  user_input:
+    read: ../code/data.yaml
     parser: yaml
 text:
 - model: watsonx/ibm/granite-34b-code-instruct
   input: |
-    Process this input: ${USER_INPUT}
+    Process this input: ${user_input}
     Format the output as JSON.
 ```

@@ -174,20 +180,14 @@ text:
 until: ${ user_input == '/bye'}
 ```

-## Debugging Tools
-
-### Log Inspection
-```bash
-pdl --log <my-logfile> <my-example.pdl>
-```

-### Trace Generation and Live Document Visualization
+## Trace Generation and Live Document Visualization

 ```bash
 pdl --trace <file.json> <my-example.pdl>
 ```

-Upload trace files to the [Live Document Viewer](https://ibm.github.io/prompt-declaration-language/viewer/) for visual debugging.
+Upload trace files to the [Live Document Viewer](https://ibm.github.io/prompt-declaration-language/viewer/) for visual debugging, trace exploration, and live programming.


 ## Contributing
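
Read end to end, the README diff above amounts to the following setup flow. This is a minimal sketch assembled from the changed lines, assuming a watsonx account; the region, API key, project ID, trace file name, and program path are all placeholders:

```bash
# Install PDL along with the dependencies used by the bundled examples
pip install 'prompt-declaration-language[examples]'

# watsonx credentials, using the variable names this commit introduces
# (WX_URL and WX_API_KEY replace WATSONX_URL and WATSONX_APIKEY)
export WX_URL="https://us-south.ml.cloud.ibm.com"  # us-south is one valid region
export WX_API_KEY="your-api-key"                   # placeholder
export WATSONX_PROJECT_ID="your-project-id"        # placeholder

# Run a program, writing a trace the Live Document Viewer can load
pdl --trace trace.json path/to/example.pdl
```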

docs/README.md

Lines changed: 9 additions & 11 deletions
@@ -49,16 +49,19 @@ To install the dependencies for development of PDL and execute all the examples,
 pip install 'prompt-declaration-language[examples]'
 ```

-Most examples in this repository use IBM Granite models on [Replicate](https://replicate.com/).
-In order to run these examples, you need to create a free account
+You can run PDL with LLM models locally using [Ollama](https://ollama.com), or with a cloud service.
+See [here](https://ibm.github.io/prompt-declaration-language/tutorial/#using-ollama-models) for
+instructions on how to install an Ollama model locally.
+
+Most examples in this repository use IBM Granite models on [Ollama](https://ollama.com) and some are on [Replicate](https://replicate.com/). In order to run these examples, you need to create a free account
 on Replicate, get an API key and store it in the environment variable:

-- `REPLICATE_API_KEY`
+- `REPLICATE_API_TOKEN`

 In order to use foundation models hosted on [watsonx](https://www.ibm.com/watsonx) via LiteLLM, you need a watsonx account (a free plan is available) and to set up the following environment variables:

-- `WATSONX_URL`, the API url (set to `https://{region}.ml.cloud.ibm.com`) of your watsonx instance. The region can be found by clicking in the upper right corner of the watsonx dashboard (for example, a valid region is `us-south` or `eu-gb`).
-- `WATSONX_APIKEY`, the API key (see information on [key creation](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui#create_user_key))
+- `WX_URL`, the API url (set to `https://{region}.ml.cloud.ibm.com`) of your watsonx instance. The region can be found by clicking in the upper right corner of the watsonx dashboard (for example, a valid region is `us-south` or `eu-gb`).
+- `WX_API_KEY`, the API key (see information on [key creation](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui#create_user_key))
 - `WATSONX_PROJECT_ID`, the project hosting the resources (see information about [project creation](https://www.ibm.com/docs/en/watsonx/saas?topic=projects-creating-project) and [finding project ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-project-id.html?context=wx)).

 For more information, see [documentation](https://docs.litellm.ai/docs/providers/watsonx).
@@ -93,13 +96,8 @@ and all code is executed locally. To use the `--sandbox` flag, you need to have
 The interpreter prints out a log by default in the file `log.txt`. This log contains the details of inputs and outputs to every block in the program. It is useful to examine this file when the program is behaving differently than expected. The log displays the exact prompts submitted to models by LiteLLM (after applying chat templates), which can be
 useful for debugging.

-To change the log filename, you can pass it to the interpreter as follows:
-
-```
-pdl --log <my-logfile> <my-example>
-```

-We can also pass initial data to the interpreter to populate variables used in a PDL program, as follows:
+We can pass initial data to the interpreter to populate variables used in a PDL program, as follows:

 ```
 pdl --data <JSON-or-YAML-data> <my-example>
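
As a concrete illustration of the `--data` usage kept at the end of this hunk, a hypothetical invocation could look like the following; the program name and JSON payload are invented for this sketch, and `user_input` simply echoes the variable name used in the template example earlier in this commit:

```bash
# Populate the PDL variable `user_input` before the program runs;
# chatbot.pdl and the payload are made-up placeholders
pdl --data '{"user_input": "Summarize this file."}' chatbot.pdl
```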
