Thank you for your interest in contributing to Contributor Report! This document provides guidelines and instructions for contributing.
By participating in this project, you agree to abide by our Code of Conduct. Please be respectful and constructive in all interactions.
If you find a bug, please open an issue with:
- A clear, descriptive title
- Steps to reproduce the issue
- Expected behavior vs actual behavior
- Your environment (Node.js version, OS, etc.)
- Any relevant logs or screenshots
Feature requests are welcome! Please open an issue with:
- A clear description of the feature
- The problem it solves
- Potential implementation approaches (if any)
To contribute code:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Run tests and linting (`pnpm test && pnpm lint`)
- Commit your changes with a descriptive message
- Push to your branch
- Open a Pull Request
Prerequisites:

- Node.js 20+
- pnpm 9+
```bash
# Clone the repository
git clone https://github.com/jdiegosierra/contributor-report.git
cd contributor-report

# Install dependencies
pnpm install
```

```bash
# Run tests
pnpm test

# Run tests with coverage
pnpm test -- --coverage

# Lint code
pnpm lint

# Format code
pnpm format:write

# Build the action
pnpm bundle
```

```
contributor-report/
├── src/
│   ├── api/        # GitHub API client and rate limiting
│   ├── config/     # Configuration and input parsing
│   ├── metrics/    # Individual metric calculators
│   ├── output/     # Comment and output formatting
│   ├── scoring/    # Scoring engine and normalization
│   ├── types/      # TypeScript type definitions
│   └── main.ts     # Action entry point
├── __tests__/      # Test files (mirrors src structure)
├── __fixtures__/   # Test fixtures and mocks
└── dist/           # Compiled action (auto-generated)
```
To add a new metric:

- Create a new file in `src/metrics/` (e.g., `my-metric.ts`)
- Implement the extract and calculate functions:

```typescript
export function extractMyMetricData(data: GraphQLContributorData): MyMetricData {
  // Extract relevant data
}

export function calculateMyMetric(data: MyMetricData, weight: number): MetricResult {
  // Calculate normalized score (0-100)
}
```

- Add types to `src/types/metrics.ts`
- Export from `src/metrics/index.ts`
- Add the weight to `src/types/config.ts` and `src/config/defaults.ts`
- Integrate it in `src/scoring/engine.ts`
- Write tests in `__tests__/metrics/`
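The steps above can be sketched end to end with a hypothetical "issue responsiveness" metric. The real `GraphQLContributorData`, `MyMetricData`, and `MetricResult` shapes live in `src/types/`; the simplified stand-ins below (including the `issueComments` field and the comment-count threshold) are illustrative assumptions, not the project's actual definitions.

```typescript
// Simplified stand-in types so this sketch is self-contained.
// The real definitions live in src/types/ and differ from these.
interface GraphQLContributorData {
  issueComments: { createdAt: string }[]
}

interface MyMetricData {
  commentCount: number
}

interface MetricResult {
  score: number // normalized 0-100
  weightedScore: number
}

export function extractMyMetricData(data: GraphQLContributorData): MyMetricData {
  // Extract only what the calculator needs from the raw GraphQL payload
  return { commentCount: data.issueComments.length }
}

export function calculateMyMetric(data: MyMetricData, weight: number): MetricResult {
  // No data yields a neutral score of 50 (no bonus, no penalty)
  if (data.commentCount === 0) {
    return { score: 50, weightedScore: 50 * weight }
  }
  // 2.5 points per comment above the neutral baseline, capped at 100.
  // The slope is purely illustrative; document whatever threshold you pick.
  const score = Math.min(100, 50 + data.commentCount * 2.5)
  return { score, weightedScore: score * weight }
}
```

Keeping extraction and calculation as separate pure functions makes each half easy to unit-test in isolation, which is why the steps ask for both.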
When implementing or modifying metrics:
- Neutral baseline: Score of 50 means neutral (no bonus, no penalty)
- Never penalize lack of data: New users shouldn't be punished
- Be fair: Metrics should be objective and verifiable
- Document thresholds: Explain why specific values were chosen
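A minimal sketch of how these principles translate into code, assuming a hypothetical helper (the actual normalization lives in `src/scoring/` and is more involved):

```typescript
// Illustrative only: maps a raw metric value to a 0-100 score around
// a neutral baseline of 50. `median` is a documented threshold chosen
// as the neutral point for this metric.
export function normalizeAroundNeutral(value: number | null, median: number): number {
  // Never penalize lack of data: missing values score a neutral 50
  if (value === null) return 50
  // The median maps to exactly 50; clamp the result into 0-100
  const score = 50 * (value / median)
  return Math.max(0, Math.min(100, score))
}
```

Note that `null` (no data) and `0` (measured absence) deliberately score differently: only a measured value can pull the score below the neutral baseline.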
- Write tests for all new functionality
- Maintain or improve code coverage (currently 56%+)
- Use descriptive test names
- Test edge cases (empty data, null values, boundary conditions)
Example test structure:
```typescript
describe('My Metric', () => {
  describe('extractMyMetricData', () => {
    it('extracts data correctly', () => {
      // ...
    })

    it('handles empty data', () => {
      // ...
    })
  })

  describe('calculateMyMetric', () => {
    it('gives high score for good values', () => {
      // ...
    })

    it('gives neutral score for no data', () => {
      // ...
    })
  })
})
```

Use clear, descriptive commit messages:

```
feat: add new metric for code review quality
fix: handle null repository in PR data
docs: update README with new configuration
test: add tests for edge cases in scoring
refactor: simplify decay calculation
```
- Keep PRs focused on a single change
- Update tests for any code changes
- Update documentation if needed
- Ensure all checks pass before requesting review
- Respond to review feedback promptly
Releases are managed by maintainers:
- Update the version in `package.json`
- Update `CHANGELOG.md`
- Create a GitHub release with release notes
- The action is automatically published to the Marketplace
If you have questions, feel free to:
- Open an issue with the "question" label
- Start a discussion in the GitHub Discussions tab
Thank you for contributing!