* Feature/workflows (#8)
* chore: add codeowners file
* chore: add python poetry action and docs workflow
* chore: update pre-commit file
* chore: update docs
* chore: update logo
* chore: add CI/CD pipeline for automated deployment
* chore: update poetry version
* chore: fix action versioning
* chore: add gitattributes to ignore line count in jupyter notebooks
* chore: add and update docstrings
* chore: fix end of files
* chore: update action versions
* Update README.md
---------
Co-authored-by: mo374z <[email protected]>
* Fix/workflows (#11)
* chore: fix workflow execution
* chore: fix version check in CI/CD pipeline
* Opro implementation (#7)
* update gitignore
* initial implementation of OPRO (see the sketch after this group of commits)
* formatting of prompt template
* added opro test run
* opro refinements
* fixed sampling error
* add docs to opro
* fix pre-commit issues
* fixed end of line
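For context on the commits above: OPRO (Optimization by PROmpting) repeatedly shows an LLM its best-scoring prompts so far and asks it to propose better ones. Below is a minimal, library-agnostic sketch of one such iteration; the `llm` and `evaluate` callables and all parameter names are illustrative assumptions, not promptolution's actual API.

```python
def opro_step(llm, evaluate, scored_prompts, n_candidates=4):
    """One OPRO iteration: show the LLM its best prompts so far
    and ask for improved candidates.

    llm:            callable str -> str (assumed interface)
    evaluate:       callable str -> float, higher is better (assumed)
    scored_prompts: list of (prompt, score) pairs, mutated in place
    """
    # Build the meta-prompt from the top-scoring prompts, worst to best,
    # so the strongest examples sit closest to the generation point.
    top = sorted(scored_prompts, key=lambda p: p[1])[-8:]
    history = "\n".join(f"text: {p}\nscore: {s:.3f}" for p, s in top)
    meta_prompt = (
        "Below are prompts with their scores on a task. "
        "Write a new prompt that achieves a higher score.\n\n"
        f"{history}\n\nNew prompt:"
    )
    # Sample several candidates, score them, and return the current best.
    candidates = [llm(meta_prompt) for _ in range(n_candidates)]
    scored_prompts += [(c, evaluate(c)) for c in candidates]
    return max(scored_prompts, key=lambda p: p[1])
```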
* Patch/pre commit config (#10)
* fixed pre-commit config and removed end-of-file line breaks in templates
* added /
* Feature/prompt generation (#12)
* added prompt_creation.py
* change version
* Create LICENSE (#14)
* Refactor/remove deepinfra (#16)
* Remove deepinfra file
* change langchain-community version
* Usability patches (#15)
* renamed get_tasks to get_task and changed functionality accordingly; moved templates and data_sets
* init
* move templates to templates.py
* Add nested asyncio to make it usable in notebooks (see the sketch after this PR's notes)
* Update README.md
* changed getting_started.ipynb and created helper functions
* added sampling of initial population
* fixed config
* fixed callbacks
* adjust runs
* fix run evaluation api token
* fix naming convention in opro, remove on-epoch-end handling from the logger callback, and allow numeric values in class names
* Update promptolution/llms/api_llm.py
Co-authored-by: Timo Heiß <[email protected]>
* fixed comments
* Update pyproject.toml
* resolve comments
---------
Co-authored-by: mo374z <[email protected]>
Co-authored-by: Timo Heiß <[email protected]>
Co-authored-by: Moritz Schlager <[email protected]>
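The "nested asyncio" change above most likely relies on the nest_asyncio package, which patches the already-running event loop in Jupyter so that `asyncio.run` can be called from a notebook cell. A minimal sketch, assuming the API LLM wraps an async generate call of roughly this shape:

```python
import asyncio

import nest_asyncio

# Jupyter already runs an event loop; patching it lets synchronous
# wrappers call asyncio.run() without "event loop is already running".
nest_asyncio.apply()

async def _agenerate(prompts):
    # Stand-in for an async API call; the real client is an assumption.
    await asyncio.sleep(0)
    return [f"response to: {p}" for p in prompts]

def generate(prompts):
    # Safe to call from a notebook cell thanks to nest_asyncio.
    return asyncio.run(_agenerate(prompts))

print(generate(["Classify: great movie!"]))
```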
* Feature/examplar selection (#17)
* implemented random selector (see the sketch after this PR's notes)
* added random search selector
* increased version count
* fix typos
* Update promptolution/predictors/base_predictor.py
Co-authored-by: Timo Heiß <[email protected]>
* Update promptolution/tasks/classification_tasks.py
Co-authored-by: Timo Heiß <[email protected]>
* resolve comments
* resolve comments
---------
Co-authored-by: Timo Heiß <[email protected]>
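A random exemplar selector of the kind this PR describes can be as small as sampling k labelled examples to prepend as few-shot demonstrations. The class below is an illustrative sketch under that assumption, not promptolution's actual selector interface:

```python
import random

class RandomExemplarSelector:
    """Pick k (input, label) pairs at random as few-shot exemplars."""

    def __init__(self, examples, k=3, seed=42):
        self.examples = list(examples)  # list of (text, label) pairs
        self.k = k
        self.rng = random.Random(seed)  # seeded for reproducible runs

    def select(self):
        return self.rng.sample(self.examples, self.k)

    def format_prompt(self, instruction, query):
        shots = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in self.select())
        return f"{instruction}\n\n{shots}\n\nInput: {query}\nLabel:"
```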
* Chore/docs release notes (#18)
* Update release-notes.md
* Fix release note links
* Revert "Chore/docs release notes (#18)"
This reverts commit e23dd74.
* revert last commit
* updated release notes and read me
* Feature/read from df (#21)
* Delete Experiment files
* Removed config necessities
* improved opro meta-prompts
* added read-from-DataFrame feature (see the sketch after this PR's notes)
* changed required python version to 3.9
* Update pyproject.toml
* Update release-notes.md
* merge
* resolve merge mistakes
* delete duplicated lines
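The read-from-DataFrame feature presumably lets a classification task be built straight from a pandas DataFrame instead of packaged dataset files. A hypothetical sketch; the column names, helper name, and return structure are assumptions, not promptolution's actual signature:

```python
import pandas as pd

def task_from_df(df: pd.DataFrame, x_column: str = "x", y_column: str = "y"):
    """Build (inputs, labels, classes) for a classification task."""
    xs = df[x_column].astype(str).tolist()
    ys = df[y_column].astype(str).tolist()
    classes = sorted(set(ys))  # infer the label set from the data
    return xs, ys, classes

df = pd.DataFrame({"x": ["great film", "dreadful plot"], "y": ["pos", "neg"]})
xs, ys, classes = task_from_df(df)
```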
* Update release-notes.md (#24)
* Fix/dependencies (#28)
* delete poetry.lock and upgrade transformers dependency
* Update release-notes.md
* Add vllm as a feature and an llm_test_run script
* small fixes in vllm class
* differentiate between vllm and api inference
* set up experiment over multiple tasks and prompts
* change csv saving
* add base llm super class
* add changes from PR review
* change some VLLM params
* fix tensor parallel size to 1
* experiment with batch size
* experiment with larger batch sizes
* add continuous batch llm
* remove arg
* remove continuous batch inference try
* add batching to vllm (see the sketch after this group of commits)
* add batching in script
* Add release notes and increase version number
* remove llm_test_run.py script
* change system prompt
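The batching commits above likely chunk prompts before handing them to vLLM; vLLM's offline `LLM.generate` already accepts a list of prompts, so explicit batching mainly bounds memory per call. A sketch using vLLM's public API, with the model name and batch size as placeholders:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # placeholder model
params = SamplingParams(max_tokens=256, temperature=0.0)

def generate_batched(prompts, batch_size=64):
    """Generate in fixed-size chunks to bound per-call memory."""
    outputs = []
    for i in range(0, len(prompts), batch_size):
        batch = prompts[i : i + batch_size]
        for out in llm.generate(batch, params):
            outputs.append(out.outputs[0].text)
    return outputs
```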
* Fix/vllm (#33)
* add token count, flexible batch size and kwargs to vllm class
* add testing script for implementation
* fix batch size calculation
* small changes
* add revision test
* add argument to parser
* max model len to int
* remove script
* Change version and Release notes
* changed callback behaviour and implemented token count callback (see the sketch at the end of these notes)
* added super inits
* allow for splits not based on whitespace (such as newlines, etc.)
* include task descriptions
* add tokenizer based token count to vllm class
* update test run script
* use classifiers accordingly
* small fix
* add storage path
* helpers should use the classifier
* use different model
* changes in opro test
* change get_predictor function
* fix callback calling
* change optimizer test run script
* small alignments
* some changes to match the current optimizer implementation
* changes in template and config
* allow for batching of prompt creation
* update release notes and version
* extend csvcallback functionality
* change callback csv export
* change step time calculation
* small changes
* remove llm_test_run script
* update release notes
* fix issues in stepwise token calculation
* small fix
---------
Co-authored-by: finitearth <[email protected]>
* implement changes from review
* add typing to token count callback
---------
Co-authored-by: Timo Heiß <[email protected]>
Co-authored-by: Tom Zehle <[email protected]>
Co-authored-by: Timo Heiß <[email protected]>
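Finally, the token count callback from the commits above can be sketched as an optimizer callback that tallies tokenizer-counted tokens per step. The hook name and the optimizer attributes read here are illustrative assumptions, not promptolution's actual interface:

```python
class TokenCountCallback:
    """Accumulate input/output token counts across optimization steps."""

    def __init__(self, tokenizer):
        self.tokenizer = tokenizer  # e.g. a Hugging Face tokenizer
        self.input_tokens = 0
        self.output_tokens = 0

    def _count(self, texts):
        return sum(len(self.tokenizer.encode(t)) for t in texts)

    def on_step_end(self, optimizer):
        # Tally whatever the optimizer sent and received this step;
        # last_inputs/last_outputs are assumed attribute names.
        self.input_tokens += self._count(optimizer.last_inputs)
        self.output_tokens += self._count(optimizer.last_outputs)
        print(f"step tokens: in={self.input_tokens}, out={self.output_tokens}")
```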