Important
We highly appreciate contributions, but simple typo fixes (e.g., minor spelling errors, punctuation changes, or trivial rewording) will be ignored unless they significantly improve clarity or fix a critical issue. If you are unsure whether your change is substantial enough, consider opening an issue first to discuss it.
If you want to make a substantial contribution, please first make sure a corresponding issue exists. Then contact the Walrus maintainers through that issue to check if anyone is already working on this and to discuss details and design choices before starting the actual implementation.
Before contributing, please read the important note above.
We generally follow the GitHub flow in our project. In a nutshell, this requires the following steps to contribute:
- Fork the repository (only required if you don't have write access to the repository).
- Create a feature branch.
- Make changes and create a commit.
- Push your changes to GitHub and create a pull request (PR); note that we enforce a particular style for the PR titles, see below.
- Wait for maintainers to review your changes and, if necessary, revise your PR.
- When all requirements are met, a reviewer or the PR author (if they have write permissions) can merge the PR.
To keep our code clean, readable, and maintainable, we strive to follow various conventions. These are described in detail in the following subsections. Note that some but not all of them are enforced by our CI pipeline and our pre-commit hooks.
We do not use `unwrap` in production code; `unwrap` should only be used in tests, benchmarks, or
similar code. If possible, rewrite the code such that neither `unwrap` nor `expect` is
needed. If this is not possible or is cumbersome, but we know for sure that a value cannot be
`None` or `Err`, use `expect` with an explanation of why it cannot fail.
Otherwise, handle these values explicitly, using `Option` or `Result` return types if needed.
Furthermore, if a function can panic under certain conditions, prefer an explicit `panic!` and make
sure to document this in the function's docstring; see also below.
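As a minimal sketch of these conventions (the function and values below are illustrative, not taken from the Walrus codebase):

```rust
/// Returns the first element of a batch.
///
/// # Panics
///
/// Panics if `batch` is empty.
fn first_item(batch: &[u8]) -> u8 {
    // Explicit panic with a clear message, documented in the docstring above.
    if batch.is_empty() {
        panic!("batch must not be empty");
    }
    batch[0]
}

fn main() {
    // `expect` with an explanation is acceptable when the invariant is guaranteed:
    let parsed: u32 = "42".parse().expect("literal is a valid u32");

    // Preferred: handle possible failure explicitly instead of unwrapping.
    let maybe: Option<u32> = "not-a-number".parse().ok();

    println!("{parsed} {maybe:?} {}", first_item(&[3, 4]));
}
```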
Type cast expressions
with `as`, especially on numeric types, can have unwanted and unexpected semantics,
including silent truncation, wrapping, or loss of precision. Consequently, we recommend using
`from`/`into` or, if those are not available, `try_from`/`try_into` with proper error
handling for type conversions.
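The difference can be seen in a short example: `as` silently wraps an out-of-range value, while `try_from` reports the problem as an error.

```rust
fn main() {
    let big: u32 = 300;

    // `as` silently truncates: 300 does not fit in a u8 and wraps to 44.
    let truncated = big as u8;
    assert_eq!(truncated, 44);

    // `try_from` surfaces the out-of-range value as an error instead.
    assert!(u8::try_from(big).is_err());

    // Lossless widening conversions can use `from`/`into` directly.
    let widened: u64 = u64::from(big);
    assert_eq!(widened, 300);
}
```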
We use tracing for logging within our crates. Please add reasonable spans and logging events to your code at an appropriate logging level. In addition, please consider the following conventions:
- Log entries generally start with a lowercase letter and do not end in a full stop. You can, however, use commas and semicolons.
- Prefer including additional data as metadata fields instead of including them in the message. Use the shorthand form whenever possible. Only include variables directly in the string if they are necessary to create a useful message in the first place.
- In async code, generally use `#[instrument]` attributes instead of manually creating and entering spans, as this automatically handles await points correctly.
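A minimal sketch of these conventions, assuming the tracing crate (the function and field names are illustrative):

```rust
use tracing::{info, instrument};

// `#[instrument]` creates a span covering the whole async function and
// handles await points correctly; large arguments are skipped.
#[instrument(skip(payload))]
async fn store_blob(blob_id: u64, payload: Vec<u8>) {
    // Lowercase message without a trailing full stop; additional data is
    // attached as a metadata field using the shorthand form.
    let size = payload.len();
    info!(size, "storing blob");
}
```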
We generally follow the naming conventions of the Rust API Guidelines. In addition, please consider the following recommendations:
- All components should have descriptive names that match their purpose.
- The larger the scope of a component, the more expressive its name should be.
- Choose full words over abbreviations. The only exceptions are very frequent and common abbreviations like `min`, `max`, and `id`.
- The only situation in which very short or even single-letter names are acceptable is for parameters of very short closures.
We use the "modern" naming convention where a module `some_module` is called `some_module.rs` and
its submodules are in the directory `some_module/`; see the corresponding page in the Rust
reference.
To ensure a consistent Git history (from which we can later easily generate changelogs automatically), we always squash commits when merging a PR and enforce that all PR titles comply with the conventional-commit format. For examples, please take a look at our commit history.
Make sure all public structs, enums, functions, etc. are covered by docstrings. Docstrings for
private or `pub(crate)` components are appreciated as well but not enforced.
In general, we follow the guidelines about documentation in the Rust documentation. In particular, please adhere to the following conventions:
- All docstrings are written as full sentences, starting with a capital letter.
- The first line should be a short sentence summarizing the component. Details should be described after an empty line.
- If a function can panic, this must be documented in a `# Panics` section in the docstring.
- An `# Examples` section is often useful and can simultaneously serve as documentation tests.
- Docstrings should be cross-linked whenever it makes sense.
- Module docstrings should be inside the respective module file with `//!` (instead of at the module inclusion location).
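A docstring following these conventions might look as follows (the function is illustrative):

```rust
/// Returns the integer quotient of `dividend` and `divisor`.
///
/// Any remainder is discarded.
///
/// # Panics
///
/// Panics if `divisor` is zero.
///
/// # Examples
///
/// ```
/// assert_eq!(quotient(7, 2), 3);
/// ```
pub fn quotient(dividend: u64, divisor: u64) -> u64 {
    assert!(divisor != 0, "divisor must be non-zero");
    dividend / divisor
}

fn main() {
    println!("{}", quotient(7, 2));
}
```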
Additionally, if you made any user-facing changes, please adjust our documentation under docs/book; see the section about the documentation below for further information.
We use a few unstable formatting options of Rustfmt. Unfortunately, these can only be used with a
stable toolchain when specified via the `--config` command-line option. This is done in
CI and in our pre-commit hooks (see also above).
If your editor supports reading rust-analyzer preferences from `.vscode/settings.json`, you may want
to add the following configuration to that file to set up autoformatting. Note that this repo ignores
`.vscode/*` to allow you to further customize your workspace settings.
```json
{
  "rust-analyzer.rustfmt.extraArgs": [
    "--config",
    "group_imports=StdExternalCrate,imports_granularity=Crate,imports_layout=HorizontalVertical"
  ]
}
```

Also make sure you use the correct version of Rustfmt; see rust-toolchain.toml for the current version. This also impacts other checks, for example Clippy.
We use the @mysten/prettier-plugin-move npm package to format Move code. If you're using VSCode,
you can install the Move Formatter
extension. The formatter is also run automatically in the pre-commit hooks.
To use it as a stand-alone tool, we recommend installing it globally (requires Node.js and npm):

```sh
npm i -g prettier @mysten/prettier-plugin-move
```

The Move formatter can then be run manually by executing:

```sh
prettier-move --write <path-to-move-file-or-folder>
```

Our main documentation at docs.wal.app is built using
mdBook from source files in the docs/book
directory. See book.toml for the configuration and parameters.
We use additional preprocessors for various features:
- mdbook-admonish for callout boxes
- mdbook-katex for rendering LaTeX expressions
- mdbook-linkcheck to make sure all our internal and external links are valid
- mdbook-tabs for internal tabs
- mdbook-template for templating based on Handlebars
These preprocessors treat certain tokens in a special way and require you to escape them if you want to use them literally:
- mdBook itself replaces certain `{{# }}` patterns like `{{#include <filename>}}`; see the mdBook documentation. It also performs some special processing of code blocks for Rust code.
- mdbook-tabs recognizes and processes some `{{# }}` patterns.
- Handlebars detects and processes some `{{ }}` patterns. If you want to use such patterns literally, you need to escape the opening braces as `\{{`. GitHub Actions `${{ }}` syntax is automatically preserved. You can add additional data for the templates in the `paths` parameter in the `preprocessor.template` section of the book.toml file.
- KaTeX renders content between `\(` and `\)` as inline math and between `\[` and `\]` as math blocks. To use these tokens literally, you need to put them between backticks or escape them as `\\(`.
You can build the Walrus documentation locally (assuming you have Rust installed):

```sh
cargo install mdbook mdbook-admonish mdbook-katex mdbook-linkcheck mdbook-tabs --locked
cargo install --git https://github.com/MystenLabs/mdbook-template --locked
mdbook serve
```

After this, you can browse the documentation at http://localhost:3000.
The documentation is built and published after all relevant changes in the main branch through a
GitHub workflow. Additionally, a preview is created in relevant PRs.
We lint our documentation using markdownlint. See
the configuration at .markdownlint-cli2.yaml. To locally disable certain
rules, you can use <!-- markdownlint-disable <rulename> --> and <!-- markdownlint-enable <rulename> -->. See all rules here.
We have CI jobs running for every PR to test and lint the repository. You can install Git pre-commit hooks to ensure that these checks pass even before pushing your changes to GitHub. To use this, the following steps are required:
- Install Rust.
- Install nextest.
- Install pre-commit using `pip` or your OS's package manager.
- Run `pre-commit install` in the repository.
After this setup, the code will be checked, reformatted, and tested whenever you create a Git commit.
You can also use a custom pre-commit configuration if you wish:
- Create a file `.custom-pre-commit-config.yaml` (this is set to be ignored by Git).
- Run `pre-commit install -c .custom-pre-commit-config.yaml`.
The majority of our code is covered by automatic unit and integration tests which you can run
through cargo test or cargo nextest run (requires nextest).
Integration and end-to-end tests are excluded by default when running `cargo nextest` as they depend
on additional packages and take longer to run. These tests can be run as follows:

```sh
cargo nextest run --run-ignored ignored-only # run *only* ignored tests
cargo nextest run --run-ignored all          # run *all* tests
```

Integration tests that require a running Sui test cluster can use an external cluster. This requires a one-time setup:
```sh
CLUSTER_CONFIG_DIR="$PWD/target/sui-start"
mkdir "$CLUSTER_CONFIG_DIR"
sui genesis -f --with-faucet --working-dir "$CLUSTER_CONFIG_DIR"
```

For running tests, start the external cluster with `sui start`, set the environment variable
`SUI_TEST_CONFIG_DIR` to the configuration directory, and run the tests using `cargo test -- --ignored`:

```sh
CLUSTER_CONFIG_DIR="$PWD/target/sui-start"
SUI_CONFIG_DIR="$CLUSTER_CONFIG_DIR" sui start &
SUI_PID=$!
SUI_TEST_CONFIG_DIR="$CLUSTER_CONFIG_DIR" cargo test -- --ignored
```

This runs the tests with the newest contract version.
After the tests have completed, you can stop the cluster:
```sh
kill $SUI_PID
```

Note that it is currently not possible to use an external cluster with `cargo nextest`.
In addition to publicly deployed Walrus systems, you can deploy a Walrus testbed on your local
machine for manual testing. All you need to do is run the script scripts/local-testbed.sh. See
scripts/local-testbed.sh -h for further usage information.
The script generates configuration that you can use when running the Walrus client and prints the path to that configuration file.
In addition, you can spin up a local Grafana instance to visualize the metrics collected by the
storage nodes. This can be done via `cd docker/grafana-local; docker compose up` and should work
with the default storage node configuration.
Note that while the Walrus storage nodes of this testbed run on your local machine, the Sui devnet
is used by default to deploy and interact with the contracts. To run the testbed fully locally, simply
start a local network with sui start --with-faucet --force-regenesis
(requires sui to be v1.28.0 or higher) and specify localnet when starting the Walrus testbed.
We use simulation testing to ensure that the Walrus system keeps working under various failure
scenarios. The tests are in the walrus-simtest and walrus-service crates, with most of the
necessary plumbing primarily in walrus-service.
To run simulation tests, first install the cargo simtest tool:
```sh
./scripts/simtest/install.sh
```

You can then run all simtests with:

```sh
cargo simtest
```

See further information about the simtest framework.
We run micro-benchmarks for encoding, decoding, and authentication using Criterion.rs. These benchmarks are not run automatically in our pipeline, as there is explicit advice against doing so.
You can run the benchmarks by calling `cargo bench` from the project's root directory. Criterion
will output some data to the command line and also generate HTML reports including plots; the root
file is located at `target/criterion/report/index.html`.
Criterion automatically compares the results from multiple runs. To check if your code changes
improve or worsen the performance, run the benchmarks first on the latest main branch and then
again with your code changes, or explicitly set and use baselines with `--save-baseline` and
`--baseline`. See the Criterion documentation for further details.
To get quick insights into where the program spends most of its time, you can use the flamegraph
tool. After installing it with `cargo install flamegraph`, you can run binaries, tests, or
benchmarks and produce SVG outputs. For example, to analyze the blob_encoding benchmark, you can
run the following:

```sh
CARGO_PROFILE_BENCH_DEBUG=true cargo flamegraph --root --bench blob_encoding --open
```

See the documentation for further details and options.
When making changes that are expected to influence performance, it may be beneficial to record and review traces of the execution. The storage-node, aggregator, publisher, and CLI client all support exporting traces.
To view traces, you need to run a trace collector and viewer. One such tool is Jaeger, which can be
run in a container with the following command:
```sh
podman run --rm --name jaeger \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 14250:14250 \
  -p 14268:14268 \
  -p 9411:9411 \
  jaegertracing/jaeger:2.0.0 \
  --set receivers.otlp.protocols.http.endpoint=0.0.0.0:4318 \
  --set receivers.otlp.protocols.grpc.endpoint=0.0.0.0:4317
```

Traces can then be viewed at http://localhost:16686/search.
Setting the environment variable `TRACE_FILTER=<filter>` when running one of
the services will cause the service to export spans and traces matching the
filter. The filter format is the same as that of the `RUST_LOG` environment variable.
The default OTLP endpoint to which traces are exported is localhost:4317,
but this can be changed by setting the `OTLP_ENDPOINT` environment
variable. Additionally, the `SAMPLE_RATE` environment variable can be set to a
floating-point value in the interval [0.0, 1.0] to sample a subset of the
traces.
To export traces from the client, use the `--trace-cli` command-line argument.
If set to `--trace-cli otlp`, traces are exported directly to any OTLP collector
listening at localhost:4317. The collector's address can be changed with the
`OTLP_ENDPOINT` environment variable.
If set to `--trace-cli file=<path>`, the traces are written gzip-compressed to the specified
file path. These can later be ingested into an OTLP collector using
scripts/ingest-traces.sh <path>. The collector should be listening at
localhost:4318. The above Jaeger command opens both ports 4317 and 4318 for
this reason.
We appreciate it if you configure Git to sign your commits.