Commit 7c15ef4
Improved documentation
1 parent 082bf10 commit 7c15ef4

File tree: 7 files changed, +213 -378 lines changed

CHANGELOG.md
Lines changed: 12 additions & 0 deletions

@@ -1,5 +1,17 @@
 # Changelog
 
+## Unreleased
+- Improved tutorial documentation: condensed main tutorials README with learning path diagram,
+  linked to quickstart for installation; added navigation links, expected outputs, and
+  troubleshooting sections to all three tutorials; added chatterlang language tags and
+  snippet intent labels; fixed file_structure.md header and documented step3_extras naming.
+- Clarified tutorial docs: "Get Started in 5 Minutes" now lists all three steps (search UI
+  requires Step 3); Key Concepts and Beyond the Web Interface no longer imply Elasticsearch
+  or Pinecone are built-in—they are custom segments or plugins you can add.
+- Rewrote quickstart.md: added non-LLM examples (echo, cast, formatItem) so users can run
+  pipelines without Ollama/OpenAI; documented all pip install extras (ollama, openai,
+  anthropic, pypdf, all); reordered content with no-LLM examples first.
+
 ## 0.11.4
 - Documentation example fixes: added missing imports and stubs to extending-talkpipe.md,
   metadata-stream.md, pipe-api.md; introduced `# skip-extract` for blocks that cannot run

docs/quickstart.md
Lines changed: 77 additions & 77 deletions

@@ -1,154 +1,154 @@
 # Getting Started with TalkPipe
 
-Welcome to TalkPipe! This guide will help you get up and running quickly with TalkPipe's dual-language architecture for building AI-powered data processing pipelines.
+This guide gets you up and running with TalkPipe's dual-language architecture for building data processing pipelines. You can start with **no LLM**—examples below work with the base install—then add LLM support when you're ready.
 
 ## Installation
 
 ```bash
 pip install talkpipe
 ```
 
-For LLM support, install the provider(s) you need:
+The base install includes data processing, file I/O, search (Whoosh, LanceDB), and web serving. For LLM and PDF support, add optional extras:
+
 ```bash
-# Install specific providers
-pip install talkpipe[openai] # For OpenAI
-pip install talkpipe[ollama] # For Ollama
-pip install talkpipe[anthropic] # For Anthropic Claude
+# LLM providers (install one or more)
+pip install talkpipe[ollama] # Local models via Ollama
+pip install talkpipe[openai] # OpenAI (GPT-4, etc.)
+pip install talkpipe[anthropic] # Anthropic Claude
+
+# PDF extraction
+pip install talkpipe[pypdf]
+
+# Combine extras
+pip install talkpipe[ollama,pypdf]
+pip install talkpipe[openai,anthropic]
 
-# Or install all LLM providers
+# Everything: all LLM providers + PDF
 pip install talkpipe[all]
 ```
 
+| Extra | Adds |
+|-------|------|
+| `ollama` | `ollama` package for local models |
+| `openai` | `openai` package for OpenAI API |
+| `anthropic` | `anthropic` package for Claude |
+| `pypdf` | `pypdf` for PDF text extraction |
+| `all` | All of the above |
+
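Each extra in the table above simply pulls in one optional package. To check which optional providers the current environment can already import, you can probe for those packages; this is a generic Python sketch, not a TalkPipe command:

```python
import importlib.util

# Optional packages pulled in by each pip extra (from the table above)
extras = {
    "ollama": "ollama",
    "openai": "openai",
    "anthropic": "anthropic",
    "pypdf": "pypdf",
}

for extra, module in extras.items():
    found = importlib.util.find_spec(module) is not None
    print(f"talkpipe[{extra}]: {'installed' if found else 'not installed'}")
```

Note that `find_spec` only checks that the package is importable; it does not verify versions or API keys.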
 ## Basic Concepts
 
-TalkPipe provides two ways to build data processing pipelines:
+TalkPipe provides two ways to build pipelines:
 
 - **Pipe API (Internal DSL)**: Pure Python using the `|` operator
 - **ChatterLang (External DSL)**: Concise text-based syntax
 
-Both approaches use the same underlying components and can be mixed freely.
-
-## Your First Pipeline
+Both use the same components and can be mixed.
 
-### Using ChatterLang
+## Your First Pipeline (No LLM Required)
 
-Create a simple chat interface in python:
+### ChatterLang
 
 ```python
 from talkpipe.chatterlang import compiler
 
-# Define a pipeline that prompts an LLM and prints the response. Assumed Ollama is installed locally and llama3.2 is downloaded.
-script = '| llmPrompt[model="llama3.2", source="ollama"] | print'
-chat = compiler.compile(script).as_function(single_in=True, single_out=True)
+script = 'INPUT FROM echo[data="hello,world,test"] | print'
+pipeline = compiler.compile(script).as_function(single_out=False)
 
-# Use it
-response = chat("Hello! Tell me about the history of computers.")
+result = pipeline()
+# Prints: hello, world, test (each on its own line)
+# Returns: ['hello', 'world', 'test']
 ```
 
-### Using the Pipe API
-
-A similar interactive pipeline in pure Python:
+### Pipe API
 
 ```python
 from talkpipe.pipe import io
-from talkpipe.llm import chat
 
-# Create pipeline using the | operator
-pipeline = io.Prompt() | chat.LLMPrompt(model="llama3.2", source="ollama") | io.Print()
-pipeline_func = pipeline.as_function()
+pipeline = io.echo(data="hello,world,test") | io.Print()
+result = pipeline.as_function(single_out=False)()
 
-# Run it
-pipeline_func() # This will prompt for input interactively
+# Same output and return value
 ```
 
-## Web Interface
+### Data Transformation (No LLM)
 
-Create a web interface for your pipeline from the command line:
+```python
+from talkpipe.chatterlang import compiler
 
-```bash
-# Use single quotes to avoid escaping double quotes inside
-chatterlang_serve --port 2025 --display-property prompt --script '| llmPrompt[model="llama3.2", source="ollama", field="prompt"]'
+# Parse numbers, filter, and print
+script = 'INPUT FROM echo[data="1,2,hello,3,4"] | cast[cast_type="int"] | print'
+pipeline = compiler.compile(script).as_function(single_out=False)
+pipeline() # Skips "hello", prints 1, 2, 3, 4
 ```
 
-Open http://localhost:2025/stream in your browser to interact with your pipeline through a web form.
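The quickstart stresses that both DSLs build pipelines from the same composable segments. A minimal sketch of the `|`-chaining idea in plain Python, including a skip-on-failure cast like `cast[cast_type="int"]` (illustrative only; TalkPipe's actual classes differ):

```python
class Segment:
    """Minimal stand-in for a pipeline stage (not TalkPipe's real class)."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Chain stages: this stage's output stream feeds the next stage
        return Segment(lambda items: other.fn(self.fn(items)))

    def __call__(self, items):
        return list(self.fn(items))


def cast_int(items):
    # Drop values that fail to parse, like cast[cast_type="int"]
    for x in items:
        try:
            yield int(x)
        except ValueError:
            pass


def double(items):
    for x in items:
        yield x * 2


pipeline = Segment(cast_int) | Segment(double)
print(pipeline(["1", "2", "hello", "3"]))  # [2, 4, 6]
```

Because each stage is a generator transform, chaining stays lazy until the final call materializes the stream, which is the property that lets the same segments back both an internal and an external DSL.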
+## Your First Pipeline (With LLM)
 
-## Debug Pipelines with diagPrint
+Requires `talkpipe[ollama]` and Ollama running with a model (e.g. `ollama pull llama3.2`).
 
-Use `diagPrint` to inspect data as it flows through a pipeline without altering outputs.
-
-### Pipe API
+### ChatterLang
 
 ```python
-from talkpipe.pipe import basic
-
-debug = basic.DiagPrint(
-    label="chunk",
-    field_list="id,text",
-    expression="len(item['text'])",
-    output="stderr", # or a logger name
-)
-pipeline = debug | basic.firstN(n=3)
-list(pipeline([{"id": 1, "text": "hello world"}]))
-```
+from talkpipe.chatterlang import compiler
 
-### ChatterLang
+script = '| llmPrompt[model="llama3.2", source="ollama"] | print'
+chat = compiler.compile(script).as_function(single_in=True, single_out=True)
 
-```
-| diagPrint[label="chunk", field_list="id,text", expression="len(item['text'])", output="stderr"]
+response = chat("Hello! Tell me about the history of computers.")
 ```
 
-### Config-driven output
+### Pipe API
 
-Set a config key (e.g., in `~/.talkpipe.toml`):
+```python
+from talkpipe.pipe import io
+from talkpipe.llm import chat
 
-```toml
-diag_output = "stderr"
+pipeline = io.Prompt() | chat.LLMPrompt(model="llama3.2", source="ollama") | io.Print()
+pipeline_func = pipeline.as_function()
+pipeline_func() # Prompts for input interactively
 ```
 
-Then point `diagPrint` to it:
+## Web Interface
 
+Use `--display-property` so the stream UI shows the input field value instead of raw JSON. Use `formatItem` to format output for readable display.
+
+### Without LLM (Echo / Transform)
+
+```bash
+chatterlang_serve --port 2025 --display-property prompt --script '| formatItem[field_list="prompt:You entered"] | print'
 ```
-| diagPrint[output="config:diag_output"]
+
+Open http://localhost:2025/stream. Type in the form and submit; the pipeline echoes your input as formatted text.
+
+### With LLM
+
+```bash
+chatterlang_serve --port 2025 --display-property prompt --script '| llmPrompt[model="llama3.2", source="ollama", field="prompt"] | print'
 ```
 
-Tips:
-- Use `field_list` to print only the fields you need.
-- `None` or `"None"` for `output` suppresses all diagnostic output.
-- The elapsed time line shows spacing between successive items.
+Open http://localhost:2025/stream to chat with the LLM.
 
 ## Next Steps
 
 ### Learn the Tools
 
-- **[chatterlang_serve](api-reference/chatterlang-server.md)** - Create web APIs and forms
-- **[chatterlang_workbench](api-reference/chatterlang-workbench.md)** - Interactive development environment
+- **[chatterlang_serve](api-reference/chatterlang-server.md)** - Web APIs and forms
+- **[chatterlang_workbench](api-reference/chatterlang-workbench.md)** - Interactive development
 - **[chatterlang_script](api-reference/chatterlang-script.md)** - Run scripts from files
 
 ### Explore Tutorials
 
-Check out the [tutorials directory](tutorials/) for complete tutorials:
-- Document indexing and search
-- RAG (Retrieval-Augmented Generation) systems
-- Multi-format report generation
+[Tutorials](tutorials/) walk through document indexing, RAG, and report generation. Tutorial 1 can use the included `stories.json` if you skip the LLM data-generation step.
 
 ### Extend with Plugins
 
-TalkPipe supports plugins to add custom functionality:
-
 ```bash
-# List installed plugins
 talkpipe_plugins --list
-
-# Install third-party plugins
 pip install talkpipe-some-plugin
 ```
 
-Plugins automatically extend ChatterLang with new sources and segments. See [Extending TalkPipe](architecture/extending-talkpipe.md) to create your own plugins.
+See [Extending TalkPipe](architecture/extending-talkpipe.md) to create plugins.
 
 ### Dive Deeper
 
 - [Architecture](architecture/) - Technical deep-dives
-- [API Reference](api-reference/) - Complete command reference
-
-
-Ready to build something amazing? Start with the [tutorials](tutorials/)!
+- [API Reference](api-reference/) - Command reference
