[CI](https://github.com/finitearth/promptolution/actions/workflows/ci.yml)

[Open in Colab](https://colab.research.google.com/github/finitearth/promptolution/blob/main/tutorials/getting_started.ipynb)
<p align="center">
<img height="60" alt="lmu_logo" src="https://github.com/user-attachments/assets/5aecd0d6-fc2d-48b2-b395-d1877578a3c5" />
<img height="60" alt="mcml" src="https://github.com/user-attachments/assets/d9f3b18e-a5ec-4c3f-b449-e57cb977f483" />
<img height="60" alt="ellis_logo" src="https://github.com/user-attachments/assets/60654a27-0f8f-4624-a1d5-5122f2632bec" />
<img height="60" alt="uni_freiburg_color" src="https://github.com/user-attachments/assets/f5eabbd2-ae6a-497b-857b-71958ed77335" />
<img height="60" alt="tum_logo" src="https://github.com/user-attachments/assets/982ec2f0-ec14-4dc2-8d75-bfae09d4fa73" />
</p>

## 🚀 What is Promptolution?
**Promptolution** is a unified, modular framework for prompt optimization, built for researchers and advanced practitioners who want full control over their experimental setup. Unlike end-to-end application frameworks with heavy abstraction, Promptolution focuses exclusively on the optimization stage and provides a clean, transparent, and extensible API. It supports everything from optimizing prompts for a single task to large-scale, reproducible benchmark experiments.

<img width="808" height="356" alt="promptolution_framework" src="https://github.com/user-attachments/assets/e3d05493-30e3-4464-b0d6-1d3e3085f575" />

### Key Features
* Implementations of many current prompt optimizers, available out of the box.
* A unified LLM backend supporting API-based models, local LLMs, and vLLM clusters.
* Built-in response caching to save costs, and parallelized inference for speed.
* Detailed logging and token-usage tracking for granular post-hoc analysis.
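The response caching mentioned above can be illustrated with a minimal sketch. This is not Promptolution's implementation — the `cached_call` function, the cache layout, and the stand-in `fake_llm` are purely illustrative of the idea that repeated identical LLM calls are served locally instead of being re-billed:

```python
import hashlib

# Illustrative only: a tiny response cache keyed by a hash of the prompt.
_cache: dict[str, str] = {}

def cached_call(prompt: str, llm_call) -> str:
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = llm_call(prompt)  # only the first call costs anything
    return _cache[key]

# Demo with a stand-in "LLM" that counts how often it is actually invoked.
calls = {"n": 0}
def fake_llm(prompt: str) -> str:
    calls["n"] += 1
    return prompt.upper()

print(cached_call("hello", fake_llm))  # -> HELLO (LLM invoked)
print(cached_call("hello", fake_llm))  # -> HELLO (served from cache)
print(calls["n"])                      # -> 1
```

The same idea extends naturally to persisting the cache to disk so that re-running an experiment does not repeat paid API calls.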
Have a look at our [release notes](https://finitearth.github.io/promptolution/release-notes/) for the latest updates to Promptolution.

## 📦 Installation
Use pip to install the library with support for API-based models:

```
pip install promptolution[api]
```
For local inference via vLLM or transformers:
```
pip install promptolution[vllm,transformers]
```
To install from source, clone the repository and install the dependencies with [Poetry](https://python-poetry.org/docs/):
```
git clone https://github.com/finitearth/promptolution.git
cd promptolution
poetry install
```
## 🔧 Quickstart
Start with the [Getting Started tutorial](https://github.com/finitearth/promptolution/blob/main/tutorials/getting_started.ipynb), or browse the [other demos and tutorials](https://github.com/finitearth/promptolution/blob/main/tutorials).
Comprehensive documentation with a full API reference is available at [finitearth.github.io/promptolution](https://finitearth.github.io/promptolution/).
## 🧠 Featured Optimizers
| **Name**      | **Paper**                                              | **Init prompts** | **Exploration** | **Costs** | **Parallelizable** | **Few-shot** |
| :-----------: | :----------------------------------------------------: | :--------------: | :-------------: | :-------: | :----------------: | :----------: |
| `CAPO`        | [Zehle et al., 2025](https://arxiv.org/abs/2504.16005) | _required_       | 👍              | 💲        | ✅                 | ✅           |
| `EvoPromptDE` | [Guo et al., 2023](https://arxiv.org/abs/2309.08532)   | _required_       | 👍              | 💲💲      | ✅                 | ❌           |
| `EvoPromptGA` | [Guo et al., 2023](https://arxiv.org/abs/2309.08532)   | _required_       | 👍              | 💲💲      | ✅                 | ❌           |
| `OPRO`        | [Yang et al., 2023](https://arxiv.org/abs/2309.03409)  | _optional_       | 👎              | 💲💲      | ❌                 | ❌           |
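To give an intuition for what the evolutionary optimizers in this table do, here is a conceptual sketch of a score–select–mutate loop. This is a toy stand-in, not the library's code: real optimizers such as `EvoPromptGA` score prompts on a downstream task and use an LLM to perform the mutation and crossover steps, whereas `toy_score` and `toy_mutate` below are deliberately trivial:

```python
import random

def toy_score(prompt: str) -> int:
    # Toy fitness: pretend longer, more specific prompts score higher.
    return len(prompt)

def toy_mutate(prompt: str, rng: random.Random) -> str:
    # Toy mutation: append a random instruction fragment.
    return prompt + " " + rng.choice(["Be concise.", "Think step by step."])

def optimize(init_prompts: list[str], steps: int = 5, seed: int = 0) -> str:
    rng = random.Random(seed)
    population = list(init_prompts)
    for _ in range(steps):
        # Keep the better half of the population, then refill with mutants.
        population.sort(key=toy_score, reverse=True)
        survivors = population[: max(1, len(population) // 2)]
        population = survivors + [toy_mutate(p, rng) for p in survivors]
    return max(population, key=toy_score)

best = optimize(["Classify the sentiment.", "Label the text."])
print(best)
```

The "Init prompts" column above reflects that most of these optimizers, like the sketch, need a non-empty starting population.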
## 🏗 Components
* **`Task`** – Manages the dataset, evaluation metrics, and subsampling.
* **`Predictor`** – Defines how the answer is extracted from the model's response.
* **`LLM`** – A unified interface handling inference, token counting, and concurrency.
* **`Optimizer`** – Implements the prompt optimization algorithms, orchestrating the other components.
* **`ExperimentConfig`** – A configuration abstraction that streamlines and parametrizes large-scale experiments.
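To make the division of labor concrete, here is a minimal sketch of how components like these could interact. The class and method names below are hypothetical and for illustration only; the library's actual signatures may differ:

```python
# Hypothetical interfaces, for illustration only.
class LLM:
    def generate(self, prompt: str) -> str:
        raise NotImplementedError

class Predictor:
    def __init__(self, llm: LLM):
        self.llm = llm

    def predict(self, prompt: str, text: str) -> str:
        # Extract a clean answer from the raw model response.
        return self.llm.generate(f"{prompt}\n{text}").strip().lower()

class Task:
    def __init__(self, data: list[tuple[str, str]]):
        self.data = data  # (input text, target label) pairs

    def evaluate(self, prompt: str, predictor: Predictor) -> float:
        hits = sum(predictor.predict(prompt, x) == y for x, y in self.data)
        return hits / len(self.data)

# Wire it up with a dummy LLM that always answers "positive".
class DummyLLM(LLM):
    def generate(self, prompt: str) -> str:
        return " Positive "

task = Task([("great movie", "positive"), ("awful plot", "negative")])
score = task.evaluate("Classify the sentiment:", Predictor(DummyLLM()))
print(score)  # -> 0.5
```

An optimizer then only needs `task.evaluate(prompt, predictor)` as its fitness signal, which is what keeps the components swappable.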
## 🤝 Contributing

The first step is to open an issue describing the bug, feature, or enhancement; all work should be linked to an open issue. The workflow is: open an issue → create a branch → open a PR → pass CI → get a review → merge.
Branch naming: `feature/...`, `fix/...`, `chore/...`, `refactor/...`.
Please install the pre-commit hooks, which keep code quality high by running Black, Flake8, pydocstyle (Google docstring format), and isort on every commit:
```
pre-commit install
pre-commit run --all-files
```
We encourage every contributor to also write tests that check the implementation works as expected. We use pytest for testing and coverage for tracking code coverage; tests run automatically on pull requests and pushes to main, but please make sure they also pass locally:
```
poetry run python -m coverage run -m pytest
poetry run python -m coverage report
```
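For instance, a contributor-written test is just a plain pytest-style function containing assertions. The helper under test below is an invented example, not code from this repository:

```python
# Illustrative pytest-style test: a plain function whose name starts with
# "test_", containing bare assertions. pytest discovers and runs it.
def normalize_label(raw: str) -> str:
    # Hypothetical helper under test.
    return raw.strip().lower()

def test_normalize_label():
    assert normalize_label("  Positive\n") == "positive"
    assert normalize_label("NEGATIVE") == "negative"

test_normalize_label()  # pytest would call this automatically
```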
Developed by **Timo Heiß**, **Moritz Schlager**, and **Tom Zehle** (LMU Munich, MCML, ELLIS, TUM, University of Freiburg).