> ⚠️ **v2 Alpha** — This is the v2 branch of Prompty, currently in alpha. The API, file format, and tooling are under active development and may change. Feedback is welcome via Issues.
Prompty is a markdown file format (`.prompty`) for LLM prompts. Write your prompt once, then run it from VS Code, Python, or TypeScript.
```markdown
---
name: greeting
model:
  id: gpt-4o-mini
  provider: openai
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
---
system:
You are a friendly assistant.

user:
Say hello to {{name}}.
```
### Python

```bash
pip install "prompty[jinja2,openai]"
```

```python
import prompty

result = prompty.execute("greeting.prompty", inputs={"name": "Jane"})
print(result)
```

### TypeScript

```bash
npm install @prompty/core @prompty/openai
```

```typescript
import { execute } from "@prompty/core";
import "@prompty/openai";

const result = await execute("greeting.prompty", { name: "Jane" });
console.log(result);
```

### VS Code

Open the `.prompty` file and press F5.
> **v2 extension coming soon** — the next release brings a new connections sidebar, live preview, chat mode, and a redesigned trace viewer. Stay tuned on the Visual Studio Code Marketplace.
- Right-click in the explorer → **New Prompty** to scaffold a new prompt file.
- See the rendered prompt with live markdown rendering and template interpolation as you type.
- Manage model connections from the sidebar: add OpenAI, Microsoft Foundry, or Anthropic endpoints, set a default, and browse available models.
- Thread-enabled prompts automatically open an interactive chat panel with tool-calling support.
- Every execution generates a `.tracy` trace file. Click it to inspect the full pipeline — render, parse, execute, process — with timing and payloads.
```bash
pip install "prompty[all]"             # everything
pip install "prompty[jinja2,openai]"   # just OpenAI
pip install "prompty[jinja2,foundry]"  # Microsoft Foundry
```

```python
import prompty

# Full pipeline: load → render → parse → execute → process
result = prompty.execute("my-prompt.prompty", inputs={...})

# Step-by-step
agent = prompty.load("my-prompt.prompty")
messages = prompty.prepare(agent, inputs={...})
result = prompty.run(agent, messages)

# Async
result = await prompty.execute_async("my-prompt.prompty", inputs={...})
```

See `runtime/python/prompty/README.md` for full API docs.
```bash
npm install @prompty/core @prompty/openai   # OpenAI
npm install @prompty/core @prompty/foundry  # Microsoft Foundry
```

```typescript
import { load, prepare, run, execute } from "@prompty/core";
import "@prompty/openai"; // registers the provider

// Full pipeline
const result = await execute("my-prompt.prompty", { name: "Jane" });

// Step-by-step
const agent = await load("my-prompt.prompty");
const messages = await prepare(agent, { name: "Jane" });
const stepResult = await run(agent, messages);
```

See `runtime/typescript/packages/core/README.md` for full API docs.
A `.prompty` file has two parts: YAML frontmatter (model config, inputs, tools) and a markdown body (the prompt with role markers and template syntax).
```markdown
---
name: my-prompt
model:
  id: gpt-4o
  provider: foundry
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
  options:
    temperature: 0.7
inputSchema:
  properties:
    question:
      kind: string
      default: What is the meaning of life?
tools:
  - name: get_weather
    kind: function
    description: Get the current weather
    parameters:
      properties:
        location:
          kind: string
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
---
system:
You are a helpful assistant.

user:
{{question}}
```
- Lines starting with `system:`, `user:`, or `assistant:` define message boundaries.
- Template syntax is Jinja2 (`{{variable}}`, `{% if %}`, `{% for %}`) or Mustache (`{{variable}}`, `{{#section}}`).
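Outside of Prompty, the Jinja2 interpolation used in prompt bodies can be sketched directly with the `jinja2` package. This is an illustration of the template semantics only, not Prompty's internal rendering pipeline:

```python
from jinja2 import Template

# Render a prompt body the way a jinja2 template format would:
# {{ question }} is substituted, {% if %} controls optional structure.
body = Template(
    "system:\n"
    "You are a helpful assistant.\n"
    "user:\n"
    "{{ question }}{% if detail %} Please answer in detail.{% endif %}"
)

rendered = body.render(question="What is the meaning of life?", detail=True)
print(rendered)
```

In a real run, Prompty performs this substitution for you using the inputs you pass to `execute`, then splits the rendered text into messages at the role markers.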
| Syntax | Purpose |
|---|---|
| `${env:VAR}` | Environment variable (required) |
| `${env:VAR:default}` | Environment variable with a fallback value |
| `${file:path.json}` | Load file content |
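The `${env:...}` substitution semantics can be illustrated with a minimal sketch. This is not Prompty's actual resolver — the `resolve` function and its regex are invented here purely to show the required/fallback behavior described in the table:

```python
import os
import re

def resolve(value: str) -> str:
    """Expand ${env:VAR} and ${env:VAR:default} patterns (illustrative only)."""
    def substitute(match: re.Match) -> str:
        var, _, default = match.group(1).partition(":")
        result = os.environ.get(var, default or None)
        if result is None:
            # ${env:VAR} with no fallback is required
            raise KeyError(f"required environment variable {var!r} is not set")
        return result
    return re.sub(r"\$\{env:([^}]+)\}", substitute, value)

os.environ["DEMO_KEY"] = "sk-123"
print(resolve("${env:DEMO_KEY}"))          # sk-123
print(resolve("${env:MISSING:fallback}"))  # fallback
```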
Prompty v1 files are automatically migrated with deprecation warnings. See the Python README for details.
See `SUPPORT.md` for help and `CODE_OF_CONDUCT.md` for community guidelines.

To release a new version, see `RELEASING.md`.