|
[CI Pipeline](https://github.com/finitearth/promptolution/actions/workflows/ci.yml) · [Open in Colab](https://colab.research.google.com/github/finitearth/promptolution/blob/main/tutorials/getting_started.ipynb)
|
<p align="center">
  <img src="https://mcml.ai/images/MCML_Logo_cropped.jpg" height="45">
  <img src="https://github.com/user-attachments/assets/1ae42b4a-163e-43ed-b691-c253d4f4c814" height="45">
  <img src="https://github.com/user-attachments/assets/e70ec1d4-bbc4-4ff3-8803-8806bc879bb0" height="45">
  <img src="https://mcml.ai/images/footer/lmu_white.webp" height="45">
  <img src="https://mcml.ai/images/footer/tum_white.webp" height="45">
</p>

## 🚀 What is Promptolution?
|
**Promptolution** is a modular framework for prompt optimization, built for researchers who want full control over optimizers, datasets, evaluation, and logging.
Unlike end-to-end agent frameworks (e.g. DSPy, LangGraph), Promptolution focuses **exclusively** on the prompt optimization phase, with clean abstractions, transparent internals, and an extensible API.

It supports:

* single-task prompt optimization
* large-scale experiments
* local and API-based LLMs
* fast parallelization
* clean logs for reproducible research

Developed by [Timo Heiß](https://www.linkedin.com/in/timo-heiss/), [Moritz Schlager](https://www.linkedin.com/in/moritz-schlager/), and [Tom Zehle](https://www.linkedin.com/in/tom-zehle/) (LMU Munich, MCML, ELLIS, TUM, Uni Freiburg).
## 📦 Installation

```
pip install promptolution[api]
```
For local inference via vLLM or transformers:

```
pip install promptolution[vllm,transformers]
```
|
To install from source, you may need [pipx](https://pipx.pypa.io/stable/installation/) and [Poetry](https://python-poetry.org/docs/) first:

```
git clone https://github.com/finitearth/promptolution.git
cd promptolution
poetry install
```
|
## 🔧 Quickstart

To get started right away, take a look at the [getting started tutorial](https://github.com/finitearth/promptolution/blob/main/tutorials/getting_started.ipynb) and the [other demos and tutorials](https://github.com/finitearth/promptolution/blob/main/tutorials).
|
A comprehensive documentation with API reference is available at [https://finitearth.github.io/promptolution/](https://finitearth.github.io/promptolution/).
|
## 🧠 Featured Optimizers

| **Name**      | **Paper**                                              | **Init prompts** | **Exploration** | **Costs** | **Parallelizable** | **Few-shot** |
| ------------- | ------------------------------------------------------ | ---------------- | --------------- | --------- | ------------------ | ------------ |
| `CAPO`        | [Zehle et al., 2025](https://arxiv.org/abs/2504.16005) | required         | 👍              | 💲        | ✅                 | ✅           |
| `EvoPromptDE` | [Guo et al., 2023](https://arxiv.org/abs/2309.08532)   | required         | 👍              | 💲💲      | ✅                 | ❌           |
| `EvoPromptGA` | [Guo et al., 2023](https://arxiv.org/abs/2309.08532)   | required         | 👍              | 💲💲      | ✅                 | ❌           |
| `OPRO`        | [Yang et al., 2023](https://arxiv.org/abs/2309.03409)  | optional         | 👎              | 💲💲      | ❌                 | ❌           |
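
All optimizers are plug-and-play, so switching between them in an experiment should amount to a configuration change. Below is a minimal sketch of a selection helper; the import path and the assumption that every optimizer class accepts the same keyword arguments are illustrative, not the documented promptolution API (see the API reference for the real signatures).

```python
# Illustrative sketch: the import path and the uniform constructor are
# assumptions for demonstration, not the documented promptolution API.
from promptolution.optimizers import CAPO, EvoPromptDE, EvoPromptGA, OPRO

OPTIMIZERS = {
    "capo": CAPO,                  # cheapest of the four, uses few-shot examples
    "evoprompt_de": EvoPromptDE,
    "evoprompt_ga": EvoPromptGA,
    "opro": OPRO,                  # init prompts optional, not parallelizable
}

def build_optimizer(name: str, **kwargs):
    """Look up an optimizer class by name and instantiate it with shared kwargs."""
    return OPTIMIZERS[name](**kwargs)
```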
## 🏗 Core Components

* **Task** – wraps dataset fields, init prompts, and evaluation.
* **Predictor** – runs predictions using your LLM backend.
* **LLM** – unified interface for OpenAI, Hugging Face, vLLM, etc.
* **Optimizer** – plug-and-play implementations of CAPO, EvoPrompt (GA/DE), OPRO, and your own custom ones.
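
To make these roles concrete, here is a minimal sketch of how the four components might be wired together for a small classification task. The module paths, class names, and constructor arguments are assumptions made for illustration, not the exact promptolution API; the getting started tutorial and the API reference show the real imports and signatures.

```python
# Illustrative sketch only: module paths, class names, and signatures below are
# assumptions for demonstration purposes; see the API reference for the real ones.
import pandas as pd

from promptolution.llms import APILLM                # hypothetical import path
from promptolution.tasks import ClassificationTask   # hypothetical import path
from promptolution.predictors import Classifier      # hypothetical import path
from promptolution.optimizers import EvoPromptGA     # optimizer names as in the table above

# Task: dataset fields, initial prompts, and the evaluation method.
df = pd.DataFrame({
    "text": ["great movie", "terrible plot"],
    "label": ["positive", "negative"],
})
task = ClassificationTask(df, x_column="text", y_column="label")

# LLM: one interface whether the model runs locally or behind an API.
llm = APILLM(model_id="gpt-4o-mini")

# Predictor: turns prompts plus inputs into predictions the task can score.
predictor = Classifier(llm=llm, classes=["positive", "negative"])

# Optimizer: iteratively improves the prompts against the task.
optimizer = EvoPromptGA(
    task=task,
    predictor=predictor,
    meta_llm=llm,
    initial_prompts=["Classify the sentiment of the text."],
)
best_prompts = optimizer.optimize(n_steps=10)
print(best_prompts[0])
```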
## ⭐ Highlights

* Modular, object-oriented design → easy customization
* Experiment-ready architecture
* Parallel LLM requests
* LangChain integration for standardized LLM API calls
* JSONL logging, callbacks, and detailed event traces
* Works from laptop to cluster

## 📜 Changelog

Release notes for each version of the library can be found at [https://finitearth.github.io/promptolution/release-notes/](https://finitearth.github.io/promptolution/release-notes/).

## 🤝 Contributing

Open an issue → create a branch → PR → CI → review → merge.
All work should be linked to an open issue, and branches follow the naming scheme `{prefix}/{description}` with one of the prefixes `feature`, `fix`, `chore`, or `refactor`.
The `main` branch is protected: pull requests need at least one approval from a code owner and passing CI checks before they can be merged, and new releases are created by code owners only.

### Code Style

We use Black for formatting, Flake8 for linting, pydocstyle for docstring conventions (Google format), and isort for import sorting, all enforced via pre-commit hooks:

```
pre-commit install
pre-commit run --all-files
```
|
### Tests

We use pytest for testing and coverage for tracking code coverage. Tests run automatically on pull requests and pushes to `main`, but please make sure they also pass locally:

```
poetry run python -m coverage run -m pytest
poetry run python -m coverage report
```