This file is a single, detailed reference for LLM agents and new contributors on how to contribute to the project.
- `src/`: core sources
  - `libasr/`: ASR + utilities, passes, verification
  - `lfortran/`: parser, semantics, drivers, backends
  - `runtime/`: Fortran runtime (built via CMake)
  - `server/`: language server
- `tests/`, `integration_tests/`: unit/E2E suites
- `doc/`: docs & manpages (site generated from here)
- `examples/`, `grammar/`, `cmake/`, `ci/`, `share/`: supporting assets
- Tools: CMake (>=3.10), Ninja, Git, Python (>=3.8), GCC/Clang.
- Generators: re2c, bison (needed for build0/codegen).
- Libraries: zlib; optional: LLVM dev, libunwind, RapidJSON, fmt, xeus/xeus-zmq, Pandoc.
- Typical dev config (Ninja + LLVM) is specified in `./build1.sh`:
  ```
  cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Debug -DWITH_LLVM=ON -DWITH_STACKTRACE=yes
  cmake --build build -j
  ```
- Release build:
  ```
  cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Release -DWITH_LLVM=ON
  ```
- Tests: `./run_tests.py` (compiler); `cd integration_tests && ./run_tests.py -j16`
- Local integration tip (LLVM): some evaluator/diagnostic paths require precise location info. When running integration tests locally with the LLVM backend, prefer injecting the flag via the environment rather than changing CMake:
  ```
  cd integration_tests && FFLAGS="--debug-with-line-column" ./run_tests.py -j8
  ```
  This mirrors CI closely while enabling line/column emission.
- We usually build with LLVM enabled (`-DWITH_LLVM=ON`).
- AST/ASR (no LLVM): `build/src/bin/lfortran --show-ast examples/expr2.f90`
- Run a program (LLVM): `build/src/bin/lfortran examples/expr2.f90 && ./a.out`
- AST (syntax) ↔ ASR (semantic, valid-only). See `doc/src/design.md`.
- Pipeline: parse → semantics → ASR passes → codegen (LLVM/C/C++/x86/WASM).
- Prefer `src/libasr/pass` and existing utils; avoid duplicate helpers/APIs.
- Upstream: `lfortran/lfortran` on GitHub (canonical repo and issues).
- Fork workflow: `origin` is your fork; submit draft PRs to upstream and mark them ready for review.
- Setup:
  ```
  git remote add upstream git@github.com:lfortran/lfortran.git
  git fetch upstream --tags
  git checkout -b feature/xyz && git push -u origin feature/xyz
  ```
- PRs: always target `upstream/main`. Keep feature branches rebased on `upstream/main`:
  ```
  git fetch upstream && git rebase upstream/main && git push --force-with-lease
  ```
- Ensure the `upstream` remote exists (see setup) when working from a fork.
- C/C++: C++17; match the existing formatting of the file you are editing; use 4 spaces for indentation.
- Names: lower_snake_case files; concise CMake target names.
- No commented-out code.
- Full coverage required: every behavior change must come with tests that fail before your change and pass after. Do not merge without a full local pass of unit and integration suites.
- Purpose: build and run end-to-end programs across backends/configurations via CMake/CTest.
- Add a `.f90` program under `integration_tests/` and register it in `integration_tests/CMakeLists.txt` using the `RUN(...)` macro (labels like `gfortran`, `llvm`, `cpp`, etc.).
- See `integration_tests/CMakeLists.txt` (search for `macro(RUN` and existing `RUN(NAME ...)` entries).
- Avoid custom generation; place real sources in the tree and check them in.
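A registration entry might look like the following sketch (the test name is illustrative; copy the exact argument shape from the neighboring `RUN(...)` entries, since the macro's full signature lives in `integration_tests/CMakeLists.txt`):

```cmake
# Illustrative entry; mirror the existing RUN(...) lines around it.
RUN(NAME my_feature_test_1 LABELS gfortran llvm)
```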
- Prefer integration tests; all new tests should be integration tests.
- Ensure integration tests pass locally: `cd integration_tests && ./run_tests.py -j16`.
- Add checks for correct results inside the `.f90` file using `if (i /= 4) error stop`-style idioms.
- Always label new tests with at least `gfortran` (to ensure the code compiles with GFortran and does not rely on any LFortran-specific behavior) and `llvm` (to test with LFortran's default LLVM backend).
- When fixing a bug, add an integration test that reproduces the failure and now compiles/runs successfully.
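As a sketch, a self-checking integration test using the `error stop` idiom could look like this (the file and variable names are illustrative):

```fortran
! integration_tests/my_feature_1.f90 (name illustrative)
program my_feature_1
    implicit none
    integer :: i
    i = 2 + 2
    ! Self-check: abort with a nonzero exit code on a wrong result,
    ! so both GFortran and the LLVM backend catch regressions.
    if (i /= 4) error stop "expected i == 4"
    print *, "ok"
end program my_feature_1
```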
- CI parity (recommended): run with the same environment and scripts CI uses.
  - Use micromamba with `ci/environment.yml` to match the toolchain (LLVM, etc.).
  - Set env like CI and call the same helper scripts:
    ```
    export LFORTRAN_CMAKE_GENERATOR=Ninja
    export ENABLE_RUNTIME_STACKTRACE=yes  # Linux/macOS
    ```
  - Build: `bash ci/build.sh`
  - Quick integration run (LLVM): `bash ci/test.sh` (runs a CMake+CTest LLVM pass and the runner passes), or:
    `cd integration_tests && ./run_tests.py -b llvm && ./run_tests.py -b llvm -f -nf16`
  - GFortran pass: `cd integration_tests && ./run_tests.py -b gfortran`
  - Other backends as in CI:
    ```
    ./run_tests.py -b llvm2 llvm_rtlib llvm_nopragma && ./run_tests.py -b llvm2 llvm_rtlib llvm_nopragma -f
    ./run_tests.py -b cpp c c_nopragma        # and with -f
    ./run_tests.py -b wasm                    # and with -f
    ./run_tests.py -b llvm_omp/target_offload/fortran -j1
    ```
- Minimal local setup (without micromamba):
  - Build: `cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Debug -DWITH_LLVM=ON -DWITH_RUNTIME_STACKTRACE=yes`
  - Run: `cd integration_tests && ./run_tests.py -b llvm && ./run_tests.py -b llvm -f -nf16`
  - If diagnostics need line/column mapping during local debugging, inject: `FFLAGS="--debug-with-line-column" ./run_tests.py -b llvm`
- If builds fail with messages about missing debug info or line/column emission:
  - Install LLVM tools so `llvm-dwarfdump` is available (e.g., `sudo pacman -S llvm`, `apt install llvm`, or `conda install -c conda-forge llvm-tools`).
  - Rebuild with runtime stacktraces if needed: `cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Debug -DWITH_LLVM=ON -DWITH_RUNTIME_STACKTRACE=yes -DWITH_UNWIND=ON`
  - Run integration tests with LFortran flags injected via env: `FFLAGS="--debug-with-line-column" ./run_tests.py -j8`
  - More details: `integration_tests/run_tests.py` (CLI flags and supported backends).
- Use only when an integration test is not yet feasible (e.g., feature doesn’t compile end‑to‑end). Prefer integration tests for all new work.
- If possible, still add a test under `integration_tests/`, but register only `gfortran` (not `llvm`), then register the test in `tests/tests.toml` with the needed outputs (`ast`, `asr`, `llvm`, `run`, etc.). Use `.f90` or `.f` (fixed form is auto-handled). Only if that cannot be done, add a new test into `tests/`.
- See `tests/tests.toml` for examples; reference outputs live under `tests/reference/`.
- Multi-file modules: set `extrafiles = "mod1.f90,mod2.f90"`.
- Run locally: `./run_tests.py -j16` (use `-s` to debug).
- Update references only when outputs intentionally change: `./run_tests.py -t path/to/test -u -s`.
- Error messages: add to `tests/errors/continue_compilation_1.f90` and update references.
- If your integration test does not compile yet, temporarily validate the change by adding a reference test that checks AST/ASR construction (enable `asr = true` and/or `ast = true` in `tests/tests.toml`). Promote it to an integration test once end-to-end compilation succeeds.
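For illustration, a `tests/tests.toml` entry for such a reference test might look like the sketch below (the filename is hypothetical; copy the exact key names from the neighboring entries in the file):

```toml
[[test]]
filename = "my_feature_1.f90"
ast = true
asr = true
# For multi-file module tests (per the extrafiles note above):
# extrafiles = "mod1.f90,mod2.f90"
```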
- Modfile version mismatch: if you see "Incompatible format: LFortran Modfile...", clean and recompile (`ninja clean && ninja`). Ensure the current `build/src/bin` is first on `PATH` when running tests.
- Run all tests: `ctest` and `./run_tests.py -j16`
- Run a specific test: `./run_tests.py -t pattern -s`
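The PATH advice for the Modfile mismatch above can be sketched as follows (paths assume the repo root with a `build/` directory, as in the build commands earlier):

```shell
# After a rebuild (e.g. `ninja clean && ninja` inside build/), make sure the
# freshly built binaries win PATH lookup when running tests.
export PATH="$PWD/build/src/bin:$PATH"

# Sanity check: build/src/bin should now be the first PATH entry.
case ":$PATH:" in
  ":$PWD/build/src/bin:"*) echo "build bin first on PATH" ;;
  *) echo "WARNING: another entry precedes build/src/bin" ;;
esac
```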
References
- Developer docs: `doc/src/installation.md` (Tests) and `doc/src/progress.md` (workflow).
- Online docs: https://docs.lfortran.org/en/installation/ (Tests: run, update, integration).
- CI examples: `.github/workflows/Quick-Checks-CI.yml` and `ci/test.sh`.
- Commits: small, single-topic, imperative (e.g., "fix: handle BOZ constants").
- PRs target `upstream/main`; reference issues ("fixes #123") and explain the rationale.
- Include test evidence (commands + summary); ensure CI passes.
- Do not commit generated artifacts, large binaries, or local configs.
- Use Draft PRs while iterating; click “Ready for review” only when satisfied.
- Use plain Markdown in PR descriptions (no escaped `\n`). Keep them clean and minimal, with simple headings (Summary, Scope, Verification, Rationale).
- Before marking ready: ensure all local tests pass (unit + integration) and include evidence.