
Commit 18db406

Add README for prompting examples
Introduces a comprehensive README in examples/prompting outlining available notebooks, guides, and conventions for LLM prompting techniques. Provides quick start instructions, learning paths, and contribution guidelines to help users navigate and extend the collection.
1 parent c55da58 commit 18db406

File tree

1 file changed: +102 −0 lines


examples/prompting/README.md

Lines changed: 102 additions & 0 deletions

Prompting Examples (LLMs)

This folder gathers all prompting-related examples in one place so you can learn, compare, and reuse patterns quickly. It ranges from fundamentals (input formatting, caching) to advanced techniques (meta-prompting, guardrails, evaluation with a judge model), plus model-specific guides and applied recipes.

Quick start

1. From the repo root, install dependencies and set credentials as described in the main Cookbook README.
2. Open any notebook in Jupyter or VS Code.
3. If a notebook references a model you don’t have access to, swap in a compatible one (e.g., replace gpt-5 with gpt-4o, or o4-mini with gpt-4o-mini).
4. For consistent comparisons, keep temperature low (e.g., 0–0.3) and set a random seed where applicable; see the sketch below.
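
For example, a minimal comparison harness might look like the sketch below; it assumes the OpenAI Python SDK with an API key in the environment, and the model name is only a placeholder for whichever model you have access to:

```python
# Sketch: a deterministic-leaning call for side-by-side prompt comparisons.
# Assumes `pip install openai` and OPENAI_API_KEY set; swap the model as needed.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder; use any chat model available to you
    temperature=0.2,       # low temperature keeps runs comparable
    seed=42,               # best-effort determinism where the model supports it
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Explain prompt caching in two sentences."},
    ],
)
print(response.choices[0].message.content)
```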

What’s inside

Foundations & Building Blocks

• How_to_format_inputs_to_ChatGPT_models.ipynb
Practical patterns for structuring inputs (system/user messages, delimiters, schemas) to reduce ambiguity and boost reliability; a minimal sketch follows this group.
• Prompt_Caching101.ipynb
Shows how and when to cache stable prompt segments to reduce cost and latency on repeated operations.
• prompt-engineering.txt
Concise notes and a checklist of core prompting heuristics.
• prompts/ (directory)
Reusable prompt templates and snippets you can copy into your own projects.
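
To illustrate the input-formatting ideas above, here is a minimal sketch (not taken from the notebook) of a system role, delimited user content, and an explicit output contract; the delimiters, document, and model name are illustrative:

```python
# Sketch: structure the request so the model knows its role, where the data
# begins and ends, and exactly what shape of answer to return.
from openai import OpenAI

client = OpenAI()

document = "Quarterly revenue rose 12% while support tickets fell 8%."

messages = [
    {"role": "system", "content": "Answer only from the provided text."},
    {
        "role": "user",
        "content": (
            "The text is delimited by <doc> tags.\n"
            f"<doc>{document}</doc>\n\n"
            'Return JSON with a list under "figures", each item having '
            '"metric" and "change".'
        ),
    },
]

response = client.chat.completions.create(
    model="gpt-4o-mini", temperature=0, messages=messages
)
print(response.choices[0].message.content)
```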

Prompt Optimization & Patterns

• Optimize_Prompts.ipynb
Step-by-step refinements from vague → specific → structured, with measurable checkpoints (accuracy, determinism, length); a worked sketch follows this group.
• Enhance_your_prompts_with_meta_prompting.ipynb
“Prompt the prompter”: use meta-instructions and self-review to systematically improve outputs.
• prompt-optimization-cookbook.ipynb
A recipe-style tour of optimization tactics (role specification, constraints, formatting contracts, failure-mode nudges).
• prompt-optimization-cookbook/ (directory)
Companion assets and extended recipes referenced by the notebook.
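
As a flavor of the vague → structured progression, here is an illustrative pair of prompts plus one automatable checkpoint; none of it is lifted from the notebooks:

```python
# Sketch: the same task as a vague prompt and as a structured prompt with a
# formatting contract, plus a simple measurable checkpoint.
vague_prompt = "Write something about error handling in Python."

structured_prompt = """You are writing for intermediate Python developers.
Task: explain error handling with try/except.
Constraints:
- Exactly 3 bullet points, each under 25 words.
- Include one code snippet of at most 5 lines.
- End with a one-sentence rule of thumb.
Output format: Markdown only, no preamble."""

def meets_contract(reply: str) -> bool:
    """Checkpoint: did the reply honor the three-bullet contract?"""
    bullets = [line for line in reply.splitlines() if line.lstrip().startswith("- ")]
    return len(bullets) == 3
```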

Guardrails, Evaluation & QA

• Developing_hallucination_guardrails.ipynb
Patterns to reduce unsupported claims: retrieval checks, cite-or-abstain, schema validation, and refusal scaffolds.
• Custom-LLM-as-a-Judge.ipynb
Use an independent “judge” model to evaluate task outputs (rubrics, pairwise comparisons, error taxonomies); a minimal sketch follows this group.
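
For orientation, a judge call can be as small as the sketch below; the rubric, score scale, and model name are assumptions, not the notebook’s exact setup:

```python
# Sketch: a second, independent model grades an answer against a rubric and
# returns structured scores for automated aggregation.
import json
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "Score the ANSWER to the QUESTION on a 1-5 scale for faithfulness "
    "(supported by CONTEXT) and completeness. Return JSON: "
    '{"faithfulness": int, "completeness": int, "rationale": str}.'
)

def judge(question: str, context: str, answer: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",  # ideally different from the model being evaluated
        temperature=0,
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"QUESTION: {question}\nCONTEXT: {context}\nANSWER: {answer}"},
        ],
    )
    return json.loads(response.choices[0].message.content)
```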

Model-Specific Prompting Guides

• gpt-5_prompting_guide.ipynb
Guidance for newest-generation models: leveraging extended context and tools while keeping prompts lean and structured. (Swap to your available model if needed.)
• gpt4-1_prompting_guide.ipynb
Tips tailored to the 4.1 series: instruction clarity, tool-use hooks, JSON contracts.
• o3o4-mini_prompting_guide.ipynb
Best practices for smaller, efficient models: tighter constraints, shorter exemplars, and robust formatting to offset capacity limits.
• Realtime_prompting_guide.ipynb
Streaming and low-latency interaction patterns (turn-taking prompts, incremental planning, partial results).
• Whisper_prompting_guide.ipynb
Using prompts to bias and condition speech-to-text (domain terms, speaker/style context, punctuation and number handling); a minimal sketch follows this group.
• Prompt_migration_guide.ipynb
How to port prompts across model/API versions while preserving behavior (capabilities, defaults, temperature/top-p changes).
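
As a taste of the Whisper technique, a transcription request can carry a short prompt that biases spelling and style; the file name and vocabulary below are placeholders:

```python
# Sketch: prompt-conditioned speech-to-text. The prompt nudges the transcriber
# toward domain terms and formatting preferences.
from openai import OpenAI

client = OpenAI()

with open("earnings_call.m4a", "rb") as audio_file:  # placeholder path
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
        prompt="Kubernetes, GPT-4o, Q3 guidance; use numerals for figures.",
    )

print(transcript.text)
```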

Applied Examples

• Unit_test_writing_using_a_multi-step_prompt.ipynb
Multi-step prompting to draft and refine unit tests from natural-language specs, with verification passes; a minimal sketch follows this group.
• Unit_test_writing_using_a_multi-step_prompt_with_older_completions_API.ipynb
Historical version of the above using the legacy Completions API, kept for comparison and reference.
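
The multi-step pattern itself boils down to chaining calls, roughly as in this sketch; the helper, spec, and model name are illustrative rather than the notebook’s code:

```python
# Sketch: plan test cases, write the tests from the plan, then request a
# review pass over the generated tests.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.2,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

spec = "slugify(title) lowercases, trims, and replaces spaces with hyphens."

plan = ask(f"List edge cases worth unit-testing for this spec:\n{spec}")
tests = ask(f"Write pytest tests for this spec, covering these cases.\nSpec: {spec}\nCases:\n{plan}")
review = ask(f"Point out gaps or incorrect assertions in these tests:\n{tests}")
```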

Suggested learning paths

New to prompting?
1. How_to_format_inputs… → 2. Optimize_Prompts → 3. Enhance_your_prompts_with_meta_prompting → 4. Prompt_Caching101

Shipping something production-ish
1. Developing_hallucination_guardrails → 2. Custom-LLM-as-a-Judge → 3. prompt-optimization-cookbook.ipynb → 4. your model-specific guide

Speech / real-time track
1. Whisper_prompting_guide → 2. Realtime_prompting_guide

Legacy → modern migration
1. Unit_test… (older completions) → 2. Unit_test… (multi-step) → 3. Prompt_migration_guide

Conventions for this folder

• File naming: use snake_case and end with _guide.ipynb or a clear verb phrase (e.g., optimize_prompts.ipynb).
• Comparisons: when demonstrating “bad vs. good,” keep the task and evaluation identical across cells; change only the prompt.
• Determinism: set a low temperature and, where relevant, a seed to keep diffs meaningful.
• Outputs: prefer structured outputs (JSON, Markdown tables) to ease automated checking.
• Safety: when showing failure modes (e.g., hallucinations), add a final cell with a fix pattern (cite-or-abstain, schema checks, etc.).

Contributing

• Add a short intro cell explaining the goal and what success looks like.
• Include at least one automatable check (regex/schema/assert) to make improvements measurable; a minimal sketch follows this list.
• If your example targets a specific model family, call it out up front and suggest fallback models.
• Keep any shared assets in this folder (or its subfolders) and use relative paths.
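
An automatable check does not need to be elaborate; something along these lines is enough, with the expected keys and pattern adapted to your example:

```python
# Sketch: validate a model reply with a schema-style check and a regex check.
import json
import re

reply = '{"metric": "revenue", "change": "+12%"}'  # stand-in for a model response

data = json.loads(reply)                                   # must be valid JSON
assert {"metric", "change"} <= data.keys(), "missing required keys"
assert re.fullmatch(r"[+-]\d+(\.\d+)?%", data["change"]), "unexpected change format"
print("checks passed")
```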

Notes

• Model names evolve. If a referenced model isn’t available in your account or region, substitute the closest family (e.g., gpt-4o / gpt-4o-mini).
• Some notebooks intentionally show anti-patterns for teaching purposes; read the headers to avoid copying them into production.

If you spot missing patterns (retrieval-augmented prompts, evaluation rubrics for specific domains, code-gen chains, etc.), feel free to open a PR adding a focused notebook here.
