@@ -34,6 +34,7 @@ For details on the data format and the list of supported data, please check [DAT
 - [Formatting and Linting with ruff](#formatting-and-linting-with-ruff)
 - [How to Release to PyPI](#how-to-release-to-pypi)
 - [How to Update the Website](#how-to-update-the-website)
+- [Acknowledgements](#acknowledgements)
 
 ## Environment Setup
 
@@ -91,7 +92,7 @@ If you want to run an evaluation on a new inference method or a new model, creat
 For example, if you want to evaluate the `llava-hf/llava-1.5-7b-hf` model on the japanese-heron-bench task, run the following command:
 
 ```bash
-uv sync --group normal
+uv sync --group normal
 uv run --group normal python examples/sample.py \
   --model_id llava-hf/llava-1.5-7b-hf \
   --task_id japanese-heron-bench \
@@ -172,6 +173,7 @@ If you add a new group, don’t forget to configure [conflict](https://docs.astr
 
 ## License
 
+This repository is licensed under the Apache-2.0 License.
 For the licenses of each evaluation dataset, please see [DATASET.md](./DATASET.md).
 
 ## Contribution
@@ -205,11 +207,19 @@ uv run ruff check --fix src
 ```
 
 ### How to Release to PyPI
+
 ```
 git tag -a v0.x.x -m "version 0.x.x"
 git push origin --tags
 ```
+Alternatively, you can manually create a new release on GitHub.
+
 
 ### How to Update the Website
 Please refer to [github_pages/README.md](./github_pages/README.md).
 
+## Acknowledgements
+- [Heron](https://github.com/turingmotors/heron): Our evaluation of the Japanese Heron Bench task is based on the Heron code.
+- [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval): Our evaluation of the JMMMU and MMMU tasks is based on the lmms-eval code.
+
+We also thank the developers of the evaluation datasets for their hard work.