
Prompty


⚠️ v2 Alpha — This is the v2 branch of Prompty, currently in alpha. The API, file format, and tooling are under active development and may change. Feedback is welcome via GitHub Issues.

Prompty is a markdown file format (.prompty) for LLM prompts. Write your prompt once — run it from VS Code, Python, or TypeScript.

Prompty flow: .prompty file → Runtime → LLM Provider

Quick Start

1. Write a .prompty file

---
name: greeting
model:
  id: gpt-4o-mini
  provider: openai
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
---
system:
You are a friendly assistant.

user:
Say hello to {{name}}.

2. Run it

Python

pip install "prompty[jinja2,openai]"
import prompty

result = prompty.execute("greeting.prompty", inputs={"name": "Jane"})
print(result)

TypeScript

npm install @prompty/core @prompty/openai
import { execute } from "@prompty/core";
import "@prompty/openai";

const result = await execute("greeting.prompty", { name: "Jane" });
console.log(result);

VS Code — open the .prompty file and press F5.

VS Code Extension

v2 extension coming soon — the next release brings a new connections sidebar, live preview, chat mode, and redesigned trace viewer. Stay tuned on the Visual Studio Code Marketplace.

Create

Right-click in the explorer → New Prompty to scaffold a new prompt file.

Create a new prompty file

Preview

See the rendered prompt update as you type, with live markdown rendering and template interpolation.

Live preview

Connections

Manage model connections from the sidebar — add OpenAI, Microsoft Foundry, or Anthropic endpoints, set a default, and browse available models.

Connections sidebar

Chat Mode

Thread-enabled prompts automatically open an interactive chat panel with tool calling support.

Chat panel with tool calling

Tracing

Every execution generates a .tracy trace file. Click to inspect the full pipeline — render, parse, execute, process — with timing and payloads.

Trace viewer

Runtimes

Python

pip install "prompty[all]"            # everything
pip install "prompty[jinja2,openai]"  # OpenAI only
pip install "prompty[jinja2,foundry]" # Microsoft Foundry
import prompty

# Full pipeline: load → render → parse → execute → process
result = prompty.execute("my-prompt.prompty", inputs={...})

# Step-by-step
agent = prompty.load("my-prompt.prompty")
messages = prompty.prepare(agent, inputs={...})
result = prompty.run(agent, messages)

# Async
result = await prompty.execute_async("my-prompt.prompty", inputs={...})

See runtime/python/prompty/README.md for full API docs.

TypeScript

npm install @prompty/core @prompty/openai   # OpenAI
npm install @prompty/core @prompty/foundry  # Microsoft Foundry
import { load, prepare, run, execute } from "@prompty/core";
import "@prompty/openai"; // registers the provider

// Full pipeline
const result = await execute("my-prompt.prompty", { name: "Jane" });

// Step-by-step
const agent = await load("my-prompt.prompty");
const messages = await prepare(agent, { name: "Jane" });
const output = await run(agent, messages);

See runtime/typescript/packages/core/README.md for full API docs.

.prompty File Format

A .prompty file has two parts: YAML frontmatter (model config, inputs, tools) and a markdown body (the prompt with role markers and template syntax).

---
name: my-prompt
model:
  id: gpt-4o
  provider: foundry
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
  options:
    temperature: 0.7
inputSchema:
  properties:
    question:
      kind: string
      default: What is the meaning of life?
tools:
  - name: get_weather
    kind: function
    description: Get the current weather
    parameters:
      properties:
        location:
          kind: string
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
---
system:
You are a helpful assistant.

user:
{{question}}
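The two-part structure above can be separated with a few lines of Python. This is a minimal sketch of the document layout only (a hypothetical `split_prompty` helper, not Prompty's actual loader, which also parses the YAML and validates the schema):

```python
def split_prompty(text: str):
    """Separate the YAML frontmatter from the markdown body of a .prompty file."""
    # The document is "--- <frontmatter> --- <body>"; split on the first two fences.
    _, frontmatter, body = text.split("---", 2)
    return frontmatter.strip(), body.strip()

doc = """---
name: my-prompt
model:
  id: gpt-4o
---
system:
You are a helpful assistant.
"""
frontmatter, body = split_prompty(doc)
print(frontmatter.splitlines()[0])  # name: my-prompt
print(body.splitlines()[0])         # system:
```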

Role markers

Lines starting with system:, user:, or assistant: define message boundaries.
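The boundary rule can be sketched as a small parser. This is an illustration of the marker semantics with a hypothetical `split_messages` helper, not the Prompty parser itself:

```python
import re

def split_messages(body: str):
    """Split a prompt body into role/content messages at system:/user:/assistant: markers."""
    marker = re.compile(r"^(system|user|assistant):\s*$")
    messages, role, lines = [], None, []
    for line in body.splitlines():
        match = marker.match(line)
        if match:
            # A new marker closes the previous message, if any.
            if role is not None:
                messages.append({"role": role, "content": "\n".join(lines).strip()})
            role, lines = match.group(1), []
        elif role is not None:
            lines.append(line)
    if role is not None:
        messages.append({"role": role, "content": "\n".join(lines).strip()})
    return messages

body = "system:\nYou are a friendly assistant.\n\nuser:\nSay hello to Jane."
print(split_messages(body))
# [{'role': 'system', 'content': 'You are a friendly assistant.'},
#  {'role': 'user', 'content': 'Say hello to Jane.'}]
```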

Template syntax

Jinja2 ({{variable}}, {% if %}, {% for %}) or Mustache ({{variable}}, {{#section}}).
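The jinja2 format kind uses standard Jinja2 semantics, so the behavior of a body can be previewed with the jinja2 package directly. A small sketch, assuming jinja2 is installed:

```python
from jinja2 import Template

# A body fragment using interpolation and a for-loop, rendered with Jinja2 defaults.
body = (
    "Say hello to {{ name }}.\n"
    "{% for item in items %}- {{ item }}\n{% endfor %}"
)
print(Template(body).render(name="Jane", items=["a", "b"]))
# Say hello to Jane.
# - a
# - b
```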

Variable references

Syntax               Purpose
${env:VAR}           Environment variable (required)
${env:VAR:default}   Environment variable with a fallback value
${file:path.json}    Load file content
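The ${env:...} rules can be sketched with a regular expression. This is a hypothetical illustration of the lookup-with-fallback semantics, not Prompty's resolver:

```python
import os
import re

# Matches ${env:VAR} and ${env:VAR:default} (illustrative pattern).
_ENV_REF = re.compile(r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)(?::([^}]*))?\}")

def resolve_env_refs(value: str) -> str:
    """Replace ${env:...} references, honoring an optional fallback value."""
    def substitute(match):
        name, default = match.group(1), match.group(2)
        if name in os.environ:
            return os.environ[name]
        if default is not None:
            return default
        raise KeyError(f"required environment variable {name!r} is not set")
    return _ENV_REF.sub(substitute, value)

os.environ["DEMO_ENDPOINT"] = "https://example.invalid"
os.environ.pop("DEMO_REGION", None)
print(resolve_env_refs("endpoint: ${env:DEMO_ENDPOINT}"))
# endpoint: https://example.invalid
print(resolve_env_refs("region: ${env:DEMO_REGION:eastus}"))
# region: eastus
```

A reference with no fallback raises an error when the variable is unset, matching the "required" behavior in the table.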

Legacy format

Prompty v1 files are automatically migrated with deprecation warnings. See the Python README for details.

Contributing

See SUPPORT.md for help and CODE_OF_CONDUCT.md for community guidelines.

To release a new version, see RELEASING.md.

License

MIT
