Thank you for your interest in contributing to SmartEVSE! This document provides guidelines for contributing to the project.
Before making any changes, you must read and understand these documents:
| Document | Why |
|---|---|
| Quality Engineering | Architecture, testing methodology, CI/CD pipeline, quality gates |
| Coding Standards | Naming conventions, buffer safety, FreeRTOS patterns |
| Features | What the firmware does, fork improvements, feature context |
| Upstream Differences | What changed from upstream and why |
| This document | Workflow, SbE format, submission process |
For AI agents, also read CLAUDE.md (Claude Code) or .github/copilot-instructions.md (GitHub Copilot).
The following rules are non-negotiable. Deviating from any of them requires explicit written approval from the project maintainer:
- Specification-first workflow — SbE specification before code, always
- No changes to `evse_state_machine.c` without tests — zero exceptions
- No `sprintf` — use `snprintf` with explicit buffer sizes
- No heap allocation in ISRs or critical sections
- No platform guards in core logic — use the bridge layer
- All tests must pass — run `make clean test` before every PR
- Memory budget must be respected — firmware builds must stay within limits
- SbE annotations on all test functions — `@feature`, `@req`, `@scenario`, `@given`/`@when`/`@then`
- Never modify upstream repos — all work happens in basmeerman-owned repos only
If you believe a deviation is necessary, open an issue describing what rule you need to deviate from, why, and what safeguards you will put in place.
- Fork the repository on GitHub
- Clone your fork locally
- Create a feature branch from `master`
- PlatformIO for firmware builds
- GCC (C11) for native tests
- Python 3.10+ for test tooling
```
# ESP32 v3
pio run -e release -d SmartEVSE-3/

# CH32
pio run -e ch32 -d SmartEVSE-3/
```

```
cd SmartEVSE-3/test/native
make clean test
```

All native tests must pass before submitting a PR.
- `feature/short-description` for new features
- `fix/short-description` for bug fixes
- `docs/short-description` for documentation changes
Use clear, concise commit messages:
```
Add solar mode phase switching logic

Implements automatic 3P-to-1P switching when solar surplus drops
below threshold for 10 seconds.
```
- Use imperative mood ("Add", not "Added")
- First line under 72 characters
- Add a blank line and details if needed
- C source files: C11 standard
- Use existing code patterns as reference
- Keep functions focused and testable
- Add SbE annotations (`@feature`, `@req`, `@scenario`, `@given`, `@when`, `@then`) to new test functions
- Add tests for new functionality
- Ensure all existing tests pass (`make clean test`)
- Safety-critical changes (state machine, contactors, current limiting) require thorough test coverage
- Add SbE annotations to all new test functions (see below)
- After adding tests, regenerate the specification to verify traceability:
```
cd SmartEVSE-3/test/native
python3 scripts/extract_traceability.py --markdown test-specification.md
```
Every test function must include structured comment annotations that link the test to a feature, requirement, and scenario. This enables automated traceability reporting — the CI pipeline generates a test specification and an HTML traceability matrix from these annotations on every build.
```c
/*
 * @feature State Machine
 * @req REQ-SM-001
 * @scenario Normal charging cycle
 * @given Vehicle connected in state B
 * @when Pilot duty cycle allows charging
 * @then State transitions to C and contactor closes
 */
void test_normal_charge_cycle(void) { ... }
```

The CI traceability job validates that all annotated tests have requirement IDs and uploads the reports as build artifacts. See the test specification for the full list of 1,082 scenarios across 70 features.
Whether you've found a bug or want to propose a functional improvement, the project follows a specification-first workflow: describe the expected behavior in SbE format, write the test, then make the code change. This ensures every change is traceable and verifiable.
Before writing any code, describe what you observed (bug) or what you want to achieve (improvement) using the Given/When/Then pattern. This forces clear thinking about preconditions, triggers, and expected outcomes.
Bug report example — the EVSE doesn't stop charging when mains current exceeds the limit:
```
Feature: Error Handling & Safety
Req: REQ-ERR-030
Scenario: Charging stops when mains sum exceeds maximum
  Given The EVSE is charging in Normal mode at 16A
  And MaxSumMains is configured to 25A
  When The mains meter reports L1=15A, L2=8A, L3=10A (sum=33A)
  Then The charging current is reduced or paused
  And An error condition is flagged
```
Feature request example — add a grace period before solar mode stops charging:
```
Feature: Solar Balancing
Req: REQ-SOL-025
Scenario: Solar stop timer provides grace period before stopping
  Given The EVSE is charging in Solar mode
  And The solar stop timer is set to 10 minutes
  When Grid import exceeds solar_max_import
  Then A countdown timer starts
  And Charging continues during the countdown
  And Charging stops only after the timer expires
```
Tips for writing good specifications:
- Be specific with values — use concrete numbers (16A, 25A, 10 minutes), not vague terms ("high current", "a while")
- One behavior per scenario — if you need "And" in your "When", consider splitting into two scenarios
- Cover the negative case too — if something should happen at a threshold, also specify what happens below that threshold
- Use existing feature names — check the test specification for the list of established feature names and requirement ID prefixes
Requirement IDs follow the pattern `REQ-{AREA}-{NUMBER}`. Check existing IDs in the test specification and pick the next available number for your area:
| Prefix | Area |
|---|---|
| `REQ-SM-` | State machine transitions |
| `REQ-ERR-` | Error handling & safety |
| `REQ-LB-` | Load balancing |
| `REQ-SOL-` | Solar mode / solar balancing |
| `REQ-OCPP-` | OCPP integration |
| `REQ-MQTT-` | MQTT command parsing |
| `REQ-API-` | HTTP REST API |
| `REQ-AUTH-` | Authorization & access control |
| `REQ-MOD-` | Modem / ISO15118 |
| `REQ-PH-` | Phase switching |
| `REQ-MTR-` | Metering |
| `REQ-PWR-` | Power availability |
| `REQ-E2E-` | End-to-end charging flows |
| `REQ-DUAL-` | Dual-EVSE scenarios |
| `REQ-MULTI-` | Multi-node load balancing |
Create or extend a test file in SmartEVSE-3/test/native/tests/. The test should
fail initially (or not compile if the feature doesn't exist yet) — this confirms
the test actually validates something.
```c
/*
 * @feature Error Handling & Safety
 * @req REQ-ERR-030
 * @scenario Charging stops when mains sum exceeds maximum
 * @given The EVSE is charging in Normal mode at 16A
 * @given MaxSumMains is configured to 25A
 * @when The mains meter reports L1=15A, L2=8A, L3=10A (sum=33A)
 * @then The charging current is reduced or paused
 */
void test_mains_overcurrent_triggers_reduction(void) {
    // Arrange
    evse_state_machine_ctx_t ctx;
    evse_state_machine_init(&ctx);
    ctx.mode = MODE_NORMAL;
    ctx.state = STATE_C;          // charging
    ctx.charge_current = 160;     // 16.0A
    ctx.max_sum_mains = 250;      // 25.0A

    // Act — simulate meter reading that exceeds limit
    ctx.mains_currents[0] = 150;  // L1 = 15.0A
    ctx.mains_currents[1] = 80;   // L2 = 8.0A
    ctx.mains_currents[2] = 100;  // L3 = 10.0A
    evse_state_machine_run(&ctx);

    // Assert
    assert(ctx.charge_current < 160 || ctx.state != STATE_C);
}
```

Run the test to confirm it fails or validates the fix:
```
cd SmartEVSE-3/test/native
make clean test
```

Now implement the fix or feature. The test you wrote in Step 3 tells you exactly when you're done — it passes.
```
cd SmartEVSE-3/test/native
python3 scripts/extract_traceability.py --markdown test-specification.md --html traceability-report.html
```

Check that your new scenario appears in the test specification under the correct feature, with the requirement ID you chose.
Your PR should include:
- The test file(s) with SbE annotations
- The code change
- The updated `test-specification.md` (CI will also regenerate this automatically on merge to master)
The CI pipeline will verify that all tests pass, generate fresh traceability reports, and commit the updated test specification back to the repository.
Finding/Idea → SbE description → Write failing test → Fix code → Test passes → PR
This is the opposite of the traditional "fix first, test maybe later" approach. By writing the specification and test first, you:
- Force yourself to clearly define the expected behavior
- Create a permanent, executable record of the requirement
- Enable automated traceability from requirement to test to code
- Make it easy for reviewers to understand what the change does and why
This project uses semantic versioning: `vMAJOR.MINOR.PATCH[-prerelease]`

- Patch (`v3.11.0` → `v3.11.1`): bug fixes, documentation, test-only changes
- Minor (`v3.11.1` → `v3.12.0`): new features, non-breaking enhancements
- Major (`v3.12.0` → `v4.0.0`): breaking changes (config format, API, protocol)
The VERSION in `platformio.ini` carries a `-dev` suffix (e.g. `v3.11.0-dev`) between releases. Local builds display this version on the LCD and web interface.
- Update VERSION in `SmartEVSE-3/platformio.ini` — remove the `-dev` suffix (e.g. `v3.11.0-dev` → `v3.11.0`)
- Commit: `Release v3.11.0`
- Tag: `git tag v3.11.0`
- Push: `git push origin master --tags`
- CI validates the tag matches `platformio.ini` and builds the release
- Bump VERSION for the next development cycle (e.g. `v3.11.1-dev`), commit, push
- Push your branch to your fork
- Open a Pull Request against `master`
- Describe what your changes do and why
- Reference any related issues
- Ensure CI checks pass
Use the bug report template and include:
- Your SmartEVSE hardware version (v3 or v4)
- Firmware version
- Configuration JSON (from the webserver "raw" button)
- Debug log capturing the issue
This project supports contributions made with AI coding agents. Configuration files are provided for both Claude Code and GitHub Copilot to ensure agents follow project standards automatically.
| Agent | Configuration file | How to use |
|---|---|---|
| Claude Code | `CLAUDE.md` | Auto-loaded when Claude Code opens the repo |
| GitHub Copilot | `.github/copilot-instructions.md` | Auto-loaded by Copilot Chat and Copilot Workspace |
Both files encode the same rules: coding conventions, architectural principles, test-first workflow, memory budgets, and safety constraints. The agent-specific files format these rules in the way each tool consumes them best.
When using multiple AI agents simultaneously (e.g., Claude Code's parallel Task agents, or Copilot Workspace with multiple agents), designate one agent as the Quality Guardian:
- The Quality Guardian does not write implementation code
- It reviews every change from other agents for compliance with:
  - Naming conventions (`snake_case` functions, `CamelCase` globals)
  - SbE annotations on all test functions
  - No `sprintf`, no heap allocation in critical sections
  - State machine changes paired with corresponding tests
  - `extern "C"` guards on headers shared between C and C++
- It runs the test suite and firmware builds after each agent completes
- It regenerates the test specification and verifies traceability
- It has veto authority — non-compliant code must be fixed before merging
When working with a single AI agent, it must self-enforce the Quality Guardian checklist before marking work complete:
- All tests pass (`make clean test`)
- Firmware compiles (`pio run -e release`, `pio run -e ch32`)
- New tests have SbE annotations with valid requirement IDs
- No coding standard violations in changed files
- Test specification regenerated if tests were added
AI agents may:
- Generate `sprintf` instead of `snprintf` — always reject this
- Skip writing tests for "simple" changes — always require tests
- Add unnecessary abstractions or over-engineer — keep changes minimal
- Use incorrect naming conventions — check against `CODING_STANDARDS.md`
- Modify files outside their assigned scope — enforce file boundaries
Please report security vulnerabilities privately. See SECURITY.md for details.
By contributing, you agree that your contributions will be licensed under the MIT License.