Thank you for your interest in contributing to PRGB! This document provides guidelines and information for contributors.
- Python 3.7 or higher
- Git
- Basic knowledge of RAG systems and evaluation metrics
- Fork the repository
- Clone your fork:

  ```bash
  git clone https://github.com/your-username/prgb.git
  cd prgb
  ```

- Install development dependencies:

  ```bash
  make install-dev
  ```

- Verify the setup:

  ```bash
  make test
  ```
```bash
git checkout -b feature/your-feature-name
```

- Follow the existing code style (see Code Style section)
- Add tests for new functionality
- Update documentation as needed
```bash
make test    # Run all tests
make lint    # Check code style
make format  # Format code
```

```bash
git add .
git commit -m "feat: add new evaluation metric"
git push origin feature/your-feature-name
```

Then create a Pull Request on GitHub.
- Follow PEP 8 style guidelines
- Use type hints where appropriate
- Keep functions and classes focused and well-documented
- Use meaningful variable and function names
- Use docstrings for all public functions and classes
- Follow Google-style docstring format
- Update README.md for user-facing changes
- Add inline comments for complex logic
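The Google-style docstring format mentioned above can be illustrated with a short sketch. The function below is a hypothetical retrieval metric, not part of the PRGB codebase; only the docstring layout (Args/Returns/Raises sections) is the point:

```python
def recall_at_k(relevant: set, retrieved: list, k: int = 5) -> float:
    """Compute recall@k for a single retrieval result.

    Args:
        relevant: Set of ground-truth relevant document IDs.
        retrieved: Ranked list of retrieved document IDs.
        k: Number of top-ranked results to consider.

    Returns:
        Fraction of relevant documents found in the top-k results,
        or 0.0 if there are no relevant documents.

    Raises:
        ValueError: If k is not positive.
    """
    if k <= 0:
        raise ValueError("k must be positive")
    if not relevant:
        return 0.0
    hits = len(relevant & set(retrieved[:k]))
    return hits / len(relevant)
```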
- Write unit tests for new functionality
- Aim for at least 80% code coverage
- Use descriptive test names
- Test both success and failure cases
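The testing guidelines above (descriptive names, success and failure cases) can be sketched with the standard library's `unittest`; the helper being tested is hypothetical, and the project's Makefile may use a different test runner:

```python
import unittest


def normalize_answer(text: str) -> str:
    """Hypothetical helper: lowercase and strip an answer string."""
    if not isinstance(text, str):
        raise TypeError("expected a string")
    return text.strip().lower()


class TestNormalizeAnswer(unittest.TestCase):
    def test_strips_whitespace_and_lowercases(self):
        # Success case: normalization is applied.
        self.assertEqual(normalize_answer("  Paris "), "paris")

    def test_rejects_non_string_input(self):
        # Failure case: invalid input raises a clear error.
        with self.assertRaises(TypeError):
            normalize_answer(None)
```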
```
PRGB/
├── README.md
├── pyproject.toml
├── requirements.txt
├── eval.py
├── test_imports.py
├── run_eval.sh
├── Makefile
├── CONTRIBUTING.md
├── CHANGELOG.md
├── .gitignore
├── .flake8
├── .pre-commit-config.yaml
├── LEGAL.md
│
├── core/
│   ├── __init__.py
│   ├── eval.py
│   ├── eval_apis.py
│   ├── eval_draft.py
│   ├── data.py
│   ├── models.py
│   └── eval_types.py
│
├── config/
│   ├── api_prompt_config_ch.json
│   ├── api_prompt_config_en.json
│   └── default_prompt_config.json
│
├── utils/
│   ├── __init__.py
│   ├── filter_mutual_right_samples.py
│   ├── filter_similar_query.py
│   └── transfer_csv_to_jsonl.py
│
├── examples/
│   └── basic_evaluation.py
│
└── tests/
    ├── test_data_process.py
    ├── test_checkanswer.py
    └── test.jsonl
```
- Add the metric implementation in `core/metrics.py`
- Add corresponding tests in `tests/test_metrics.py`
- Update the evaluation pipeline in `core/eval.py`
- Update documentation
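A new metric could look like the following minimal sketch. The function names and normalization rules are illustrative assumptions, not the actual contents of `core/metrics.py`:

```python
from typing import List


def exact_match(prediction: str, reference: str) -> float:
    """Score one prediction: 1.0 on a case/whitespace-insensitive match."""
    return float(prediction.strip().lower() == reference.strip().lower())


def exact_match_batch(predictions: List[str], references: List[str]) -> float:
    """Average exact-match score over a batch of predictions."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must have the same length")
    if not predictions:
        return 0.0
    scores = [exact_match(p, r) for p, r in zip(predictions, references)]
    return sum(scores) / len(scores)
```

A batch-level aggregate alongside the per-sample score keeps the metric easy to plug into an evaluation loop.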
- Add the model class in `core/models.py`
- Implement the required methods (`generate`, `batch_generate`)
- Add tests in `tests/test_models.py`
- Update configuration files if needed
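A minimal sketch of the expected model interface is shown below. The class name, constructor, and method signatures are assumptions for illustration; check the existing classes in `core/models.py` for the actual base class and signatures:

```python
from typing import List


class EchoModel:
    """Toy model illustrating the generate/batch_generate interface."""

    def __init__(self, name: str = "echo"):
        self.name = name

    def generate(self, prompt: str) -> str:
        """Return a single completion for one prompt."""
        return f"[{self.name}] {prompt}"

    def batch_generate(self, prompts: List[str]) -> List[str]:
        """Return completions for a batch of prompts, preserving order."""
        return [self.generate(p) for p in prompts]
```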
- Add the data loader in `core/data.py`
- Add format validation
- Add tests for the new format
- Update documentation
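Since the test fixtures use JSONL (see `tests/test.jsonl`), a loader with format validation might look like the sketch below. The required field names are an assumption; adapt them to the project's actual schema:

```python
import json
from pathlib import Path
from typing import Dict, Iterator

# Assumed schema for illustration; adjust to the real data format.
REQUIRED_FIELDS = {"query", "answer"}


def load_jsonl(path: str) -> Iterator[Dict]:
    """Yield validated records from a JSONL file, raising on malformed rows."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    for lineno, line in enumerate(lines, start=1):
        if not line.strip():
            continue  # skip blank lines
        try:
            record = json.loads(line)
        except json.JSONDecodeError as exc:
            raise ValueError(f"line {lineno}: invalid JSON") from exc
        if not isinstance(record, dict):
            raise ValueError(f"line {lineno}: expected a JSON object")
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            raise ValueError(f"line {lineno}: missing fields {sorted(missing)}")
        yield record
```

Failing fast with the offending line number makes format errors much easier to track down in large datasets.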
When reporting issues, please include:
- Environment: OS, Python version, package versions
- Steps to reproduce: Clear, step-by-step instructions
- Expected behavior: What you expected to happen
- Actual behavior: What actually happened
- Error messages: Full error traceback if applicable
- Additional context: Any other relevant information
- Code follows the project's style guidelines
- All tests pass
- Documentation is updated
- No new warnings are introduced
- Code is self-documenting and well-commented
Include:
- Summary of changes
- Motivation for the change
- Testing performed
- Any breaking changes
- Screenshots (if UI changes)
- Automated Checks: CI/CD pipeline runs tests and linting
- Review: At least one maintainer reviews the PR
- Feedback: Address any review comments
- Merge: PR is merged once approved
We follow Semantic Versioning:
- Major: Breaking changes
- Minor: New features, backward compatible
- Patch: Bug fixes, backward compatible
- Update the version in `pyproject.toml`
- Update `CHANGELOG.md`
- Create a release tag
- Build and publish to PyPI
- Issues: Use GitHub Issues for bug reports and feature requests
- Discussions: Use GitHub Discussions for questions and general discussion
- Documentation: Check the README and inline documentation
Please be respectful and inclusive in all interactions. We follow the Contributor Covenant Code of Conduct.
By contributing to PRGB, you agree that your contributions will be licensed under the same license as the project.
Thank you for contributing to PRGB! 🚀