diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index fa1f6ca249..398f633c78 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -9,4 +9,3 @@ * @ava-labs/platform-evm /.github/ @maru-ava @ava-labs/platform-evm /triedb/firewood/ @alarso16 @ava-labs/platform-evm - diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 9e89d2d0da..a64b3a6169 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -1,14 +1,14 @@ # Contributing -Thank you for considering to help out with the source code! We welcome -contributions from anyone on the internet, and are grateful for even the +Thank you for considering to help out with the source code! We welcome +contributions from anyone on the internet, and are grateful for even the smallest of fixes! -If you'd like to contribute to coreth, please fork, fix, commit and send a +If you'd like to contribute to coreth, please fork, fix, commit and send a pull request for the maintainers to review and merge into the main code base. If -you wish to submit more complex changes though, please check up with the core -devs first on [Discord](https://chat.avalabs.org) to -ensure those changes are in line with the general philosophy of the project +you wish to submit more complex changes though, please check up with the core +devs first on [Discord](https://chat.avalabs.org) to +ensure those changes are in line with the general philosophy of the project and/or get some early feedback which can make both your efforts much lighter as well as our review and merge procedures quick and simple. @@ -16,38 +16,41 @@ well as our review and merge procedures quick and simple. Please make sure your contributions adhere to our coding guidelines: - * Code must adhere to the official Go -[formatting](https://go.dev/doc/effective_go#formatting) guidelines -(i.e. uses [gofmt](https://pkg.go.dev/cmd/gofmt)). 
- * Code must be documented adhering to the official Go -[commentary](https://go.dev/doc/effective_go#commentary) guidelines. - * Pull requests need to be based on and opened against the `master` branch. - * Pull reuqests should include a detailed description - * Commits are required to be signed. See [here](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits) - for information on signing commits. - * Commit messages should be prefixed with the package(s) they modify. - * E.g. "eth, rpc: make trace configs optional" +- Code must adhere to the official Go + [formatting](https://go.dev/doc/effective_go#formatting) guidelines + (i.e. uses [gofmt](https://pkg.go.dev/cmd/gofmt)). +- Code must be documented adhering to the official Go + [commentary](https://go.dev/doc/effective_go#commentary) guidelines. +- Pull requests need to be based on and opened against the `master` branch. +- Pull requests should include a detailed description. +- Commits are required to be signed. See [here](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits) + for information on signing commits. +- Commit messages should be prefixed with the package(s) they modify. + - E.g. "eth, rpc: make trace configs optional" ## Can I have feature X -Before you submit a feature request, please check and make sure that it isn't +Before you submit a feature request, please check and make sure that it isn't possible through some other means. ## Mocks Mocks are auto-generated using [mockgen](https://pkg.go.dev/go.uber.org/mock/mockgen) and `//go:generate` commands in the code. -* To **re-generate all mocks**, use the command below from the root of the project: +- To **re-generate all mocks**, use the command below from the root of the project: - ```sh - go generate -run "go.uber.org/mock/mockgen" ./... - ``` + ```sh + go generate -run "go.uber.org/mock/mockgen" ./...
+ ``` + +- To **add** an interface that needs a corresponding mock generated: + + - if the file `mocks_generate_test.go` exists in the package where the interface is located, either: + + - modify its `//go:generate go run go.uber.org/mock/mockgen` to generate a mock for your interface (preferred); or + - add another `//go:generate go run go.uber.org/mock/mockgen` to generate a mock for your interface according to specific mock generation settings -* To **add** an interface that needs a corresponding mock generated: - * if the file `mocks_generate_test.go` exists in the package where the interface is located, either: - * modify its `//go:generate go run go.uber.org/mock/mockgen` to generate a mock for your interface (preferred); or - * add another `//go:generate go run go.uber.org/mock/mockgen` to generate a mock for your interface according to specific mock generation settings - * if the file `mocks_generate_test.go` does not exist in the package where the interface is located, create it with content (adapt as needed): + - if the file `mocks_generate_test.go` does not exist in the package where the interface is located, create it with content (adapt as needed): ```go // Copyright (C) 2025-2025, Ava Labs, Inc. All rights reserved. @@ -59,10 +62,13 @@ Mocks are auto-generated using [mockgen](https://pkg.go.dev/go.uber.org/mock/moc ``` Notes: + 1. Ideally generate all mocks to `mocks_test.go` for the package you need to use the mocks for and do not export mocks to other packages. This reduces package dependencies, reduces production code pollution and forces you to define narrow, locally scoped interfaces. 1. Prefer using reflect mode to generate mocks rather than source mode, unless you need a mock for an unexported interface, which should be rare. -* To **remove** an interface from having a corresponding mock generated: + +- To **remove** an interface from having a corresponding mock generated: + 1.
Edit the `mocks_generate_test.go` file in the directory where the interface is defined 1. If the `//go:generate` mockgen command line: - * generates a mock file for multiple interfaces, remove your interface from the line - * generates a mock file only for the interface, remove the entire line. If the file is empty, remove `mocks_generate_test.go` as well. + - generates a mock file for multiple interfaces, remove your interface from the line + - generates a mock file only for the interface, remove the entire line. If the file is empty, remove `mocks_generate_test.go` as well. diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index 049810cd57..d41f4838f5 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -4,7 +4,6 @@ about: Create a report to help us improve title: '' labels: bug assignees: '' - --- **Describe the bug** diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md index bbcbbe7d61..2f28cead03 100644 --- a/.github/ISSUE_TEMPLATE/feature_request.md +++ b/.github/ISSUE_TEMPLATE/feature_request.md @@ -4,7 +4,6 @@ about: Suggest an idea for this project title: '' labels: '' assignees: '' - --- **Is your feature request related to a problem? Please describe.** diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml new file mode 100644 index 0000000000..6a152d552f --- /dev/null +++ b/.pre-commit-config.yaml @@ -0,0 +1,98 @@ +# Usage: +# 1) Install the hooks: +# pre-commit install && pre-commit install --hook-type pre-push +# 2) [Optional] Run the hooks on demand before committing: +# pre-commit run --all-files +# 3) [Optional] Run the hooks on demand before pushing changes: +# pre-commit run --hook-stage pre-push --all-files +# +# NOTE: 2) and 3) are only needed to run the hooks manually. +# The hooks are automatically run by git when committing or pushing changes. 
+# +# If you want to temporarily disable the hooks without uninstalling them, you can use: +# git commit --no-verify +# git push --no-verify +# +# If you want to uninstall the hooks, you can use: +# pre-commit uninstall +# +# If you want to re-install the hooks, you can run 1) again. + +minimum_pre_commit_version: "3.6.0" + +repos: + # Fast feedback at commit time: baseline golangci-lint (v1) + - repo: https://github.com/golangci/golangci-lint + rev: v1.63.4 + hooks: + - id: golangci-lint + name: golangci-lint (baseline) + args: ["--config=.golangci.yml"] + stages: [pre-commit] + + # Local utility scripts and full repo checks + - repo: local + hooks: + # Stricter extra lints (golangci-lint v2) on staged changes at pre-commit + - id: extra-golangci-lint + name: golangci-lint (extra via scripts/lint.sh) + entry: scripts/lint.sh + language: system + stages: [pre-commit] + pass_filenames: false + env: + TESTS: "avalanche_golangci_lint" + + - id: actionlint + name: actionlint + entry: scripts/actionlint.sh + language: system + files: ^\.github/workflows/.*\.(yml|yaml)$ + stages: [pre-commit] + pass_filenames: false + + - id: shellcheck + name: shellcheck + entry: scripts/shellcheck.sh + language: system + files: \.sh$ + stages: [pre-commit] + pass_filenames: false + + # General purpose housekeeping hooks + - repo: https://github.com/pre-commit/pre-commit-hooks + rev: v6.0.0 + hooks: + - id: trailing-whitespace + name: trailing-whitespace (filtered) + entry: scripts/filter_precommit_files.sh trailing-whitespace-fixer -- + pass_filenames: true + args: [--markdown-linebreak-ext=md] + - id: end-of-file-fixer + name: end-of-file-fixer (filtered) + entry: scripts/filter_precommit_files.sh end-of-file-fixer -- + pass_filenames: true + - id: check-merge-conflict + - id: check-yaml + name: check-yaml (filtered) + entry: scripts/filter_precommit_files.sh check-yaml -- + pass_filenames: true + - id: check-toml + name: check-toml (filtered) + entry: 
scripts/filter_precommit_files.sh check-toml -- + pass_filenames: true + - id: check-json + name: check-json (filtered) + entry: scripts/filter_precommit_files.sh check-json -- + pass_filenames: true + # Exclude intentionally invalid JSON fixtures used by tests + exclude: ^rpc/testdata/invalid-.*\.json$ + + # Markdown formatter + - repo: https://github.com/hukkin/mdformat + rev: 0.7.22 + hooks: + - id: mdformat + additional_dependencies: + - mdformat-gfm==0.4.1 + - mdformat-frontmatter==2.0.8 diff --git a/README.md b/README.md index 385f7f4e3b..550c52c811 100644 --- a/README.md +++ b/README.md @@ -31,11 +31,11 @@ test run, require binary dependencies. One way of making these dependencies avai to use a nix shell which will give access to the dependencies expected by the test tooling: - - Install [nix](https://nixos.org/). The [determinate systems - installer](https://github.com/DeterminateSystems/nix-installer?tab=readme-ov-file#install-nix) - is recommended. - - Use `./scripts/dev_shell.sh` to start a nix shell - - Execute the dependency-requiring command (e.g. `./scripts/tests.e2e.sh --start-collectors`) +- Install [nix](https://nixos.org/). The [determinate systems + installer](https://github.com/DeterminateSystems/nix-installer?tab=readme-ov-file#install-nix) + is recommended. +- Use `./scripts/dev_shell.sh` to start a nix shell +- Execute the dependency-requiring command (e.g. `./scripts/tests.e2e.sh --start-collectors`) This repo also defines a `.envrc` file to configure [devenv](https://direnv.net/). With `devenv` and `nix` installed, a shell at the root of the repo will automatically start a nix @@ -50,7 +50,7 @@ The C-Chain supports the following API namespaces: - `txpool` - `debug` -Only the `eth` namespace is enabled by default. +Only the `eth` namespace is enabled by default. 
To enable the other namespaces see the instructions for passing the C-Chain config to AvalancheGo [here.](https://docs.avax.network/nodes/configure/chain-config-flags#enabling-evm-apis) Full documentation for the C-Chain's API can be found [here.](https://docs.avax.network/reference/avalanchego/c-chain/api) @@ -82,16 +82,52 @@ To support these changes, there have been a number of changes to the C-Chain blo ### Block Body -* `Version`: provides version of the `ExtData` in the block. Currently, this field is always 0. -* `ExtData`: extra data field within the block body to store atomic transaction bytes. +- `Version`: provides version of the `ExtData` in the block. Currently, this field is always 0. +- `ExtData`: extra data field within the block body to store atomic transaction bytes. ### Block Header -* `ExtDataHash`: the hash of the bytes in the `ExtDataHash` field -* `BaseFee`: Added by EIP-1559 to represent the base fee of the block (present in Ethereum as of EIP-1559) -* `ExtDataGasUsed`: amount of gas consumed by the atomic transactions in the block -* `BlockGasCost`: surcharge for producing a block faster than the target rate +- `ExtDataHash`: the hash of the bytes in the `ExtData` field +- `BaseFee`: Added by EIP-1559 to represent the base fee of the block (present in Ethereum as of EIP-1559) +- `ExtDataGasUsed`: amount of gas consumed by the atomic transactions in the block +- `BlockGasCost`: surcharge for producing a block faster than the target rate ## Releasing See [docs/releasing/README.md](docs/releasing/README.md) for the release process. + +## Local pre-commit hooks + +We use [pre-commit](https://pre-commit.com/) to provide fast local feedback and consistent formatting/linting in CI. + +1. Install the `pre-commit` tool locally. For instructions check [here](https://pre-commit.com/). + +1. Install hooks locally (done only once): + + ```bash + pre-commit install + ``` + +1.
Just use `git commit` and `git push` as normal and the hooks will automatically run. + +Additional notes: + +- Run manually on the whole repo: + + ```bash + pre-commit run --all-files + ``` + +- Temporarily bypass hooks: + + ```bash + git commit --no-verify + git push --no-verify + ``` + +- For macOS developers: + + - Some local scripts require GNU grep with PCRE support. Install via Homebrew: + `brew install grep`. If needed, add `$(brew --prefix grep)/libexec/gnubin` to your PATH. + - Our lint runner (`scripts/lint.sh`) will attempt to auto-detect and use Homebrew’s GNU grep + and Bash 4+ where available. diff --git a/RELEASES.md b/RELEASES.md index e02177af5d..4f710b3d29 100644 --- a/RELEASES.md +++ b/RELEASES.md @@ -30,11 +30,15 @@ ## [v0.15.0](https://github.com/ava-labs/coreth/releases/tag/v0.15.0) - Bump golang version to v1.23.6 + - Bump golangci-lint to v1.63 and add linters + - Implement ACP-176 + - Add `GasTarget` to the chain config to allow modifying the chain's `GasTarget` based on the ACP-176 rules - Added `eth_suggestPriceOptions` API to suggest gas prices (slow, normal, fast) based on the current network conditions + - Added `"price-options-slow-fee-percentage"`, `"price-options-fast-fee-percentage"`, `"price-options-max-base-fee"`, and `"price-options-max-tip"` config flags to configure the new `eth_suggestPriceOptions` API ## [v0.14.1](https://github.com/ava-labs/coreth/releases/tag/v0.14.1) diff --git a/SECURITY.md b/SECURITY.md index 90dd1fb37f..62057a3490 100644 --- a/SECURITY.md +++ b/SECURITY.md @@ -17,4 +17,3 @@ Please refer to the [Bug Bounty Page](https://hackenproof.com/avalanche) for the ## Supported Versions Please use the [most recently released version](https://github.com/ava-labs/coreth/releases/latest) to perform testing and to validate security issues. 
- diff --git a/Taskfile.yml b/Taskfile.yml index 6bc26ba3f5..37e8381286 100644 --- a/Taskfile.yml +++ b/Taskfile.yml @@ -118,4 +118,4 @@ tasks: update-avalanchego-version: desc: Update AvalancheGo version in go.mod and sync GitHub Actions workflow custom action version - cmd: bash -x ./scripts/update_avalanchego_version.sh # ci.yml \ No newline at end of file + cmd: bash -x ./scripts/update_avalanchego_version.sh # ci.yml diff --git a/cmd/abigen/namefilter.go b/cmd/abigen/namefilter.go index 1785c5b76b..fba96f622c 100644 --- a/cmd/abigen/namefilter.go +++ b/cmd/abigen/namefilter.go @@ -9,7 +9,6 @@ // // Much love to the original authors for their work. // ********** - package main import ( diff --git a/cmd/abigen/namefilter_test.go b/cmd/abigen/namefilter_test.go index 1cb691c0f9..174ad8114c 100644 --- a/cmd/abigen/namefilter_test.go +++ b/cmd/abigen/namefilter_test.go @@ -9,7 +9,6 @@ // // Much love to the original authors for their work. // ********** - package main import ( diff --git a/cmd/simulator/README.md b/cmd/simulator/README.md index 1f602f61fa..79d5eb7dc5 100644 --- a/cmd/simulator/README.md +++ b/cmd/simulator/README.md @@ -43,7 +43,7 @@ WARNING: The `--sybil-protection-enabled=false` flag is only suitable for local testing. Disabling staking serves two functions explicitly for testing purposes: 1. Ignore stake weight on the P-Chain and count each connected peer as having a stake weight of 1 -2. Automatically opts in to validate every Subnet +1. Automatically opts in to validate every Subnet Once you have AvalancheGo running locally, it will be running an HTTP Server on the default port `9650`. This means that the RPC Endpoint for the C-Chain will be http://127.0.0.1:9650/ext/bc/C/rpc and ws://127.0.0.1:9650/ext/bc/C/ws for WebSocket connections. 
diff --git a/core/blockchain_ext.go b/core/blockchain_ext.go index 7da04e6c5c..ab995d8c51 100644 --- a/core/blockchain_ext.go +++ b/core/blockchain_ext.go @@ -1,5 +1,6 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. + package core import "github.com/ava-labs/libevm/metrics" diff --git a/docs/releasing/README.md b/docs/releasing/README.md index 17cddcfd40..bc8b9d8a57 100644 --- a/docs/releasing/README.md +++ b/docs/releasing/README.md @@ -15,52 +15,55 @@ export VERSION=v0.15.0 1. Create your branch, usually from the tip of the `master` branch: - ```bash - git fetch origin master:master - git checkout master - git checkout -b "releases/$VERSION_RC" - ``` + ```bash + git fetch origin master:master + git checkout master + git checkout -b "releases/$VERSION_RC" + ``` 1. Update the [RELEASES.md](../../RELEASES.md) file by renaming the "Pending" section to the new release version `$VERSION` and creating a new "Pending" section at the top. + 1. Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to the desired `$VERSION`. + 1. Because AvalancheGo and coreth depend on each other, and because we create releases of AvalancheGo before coreth, you can use a recent commit hash or recent release candidate of AvalancheGo in your `go.mod` file. Coreth releases should be tightly coordinated with AvalancheGo releases. + 1. Commit your changes and push the branch: - ```bash - git add . - git commit -S -m "chore: release $VERSION_RC" - git push -u origin "releases/$VERSION_RC" - ``` + ```bash + git add . + git commit -S -m "chore: release $VERSION_RC" + git push -u origin "releases/$VERSION_RC" + ``` 1.
Create a pull request (PR) from your branch targeting master, for example using [`gh`](https://cli.github.com/): - ```bash - gh pr create --repo github.com/ava-labs/coreth --base master --title "chore: release $VERSION_RC" - ``` + ```bash + gh pr create --repo github.com/ava-labs/coreth --base master --title "chore: release $VERSION_RC" + ``` 1. Wait for the PR checks to pass with - ```bash - gh pr checks --watch - ``` + ```bash + gh pr checks --watch + ``` 1. Squash and merge your release branch into `master`, for example: - ```bash - gh pr merge "releases/$VERSION" --squash --delete-branch --subject "chore: release $VERSION" --body "\n- Update AvalancheGo from v1.12.3 to v1.13.0" - ``` + ```bash + gh pr merge "releases/$VERSION" --squash --delete-branch --subject "chore: release $VERSION" --body "\n- Update AvalancheGo from v1.12.3 to v1.13.0" + ``` 1. Create and push a tag from the `master` branch: - ```bash - git fetch origin master:master - git checkout master - # Double check the tip of the master branch is the expected commit - # of the squashed release branch - git log -1 - git tag -s "$VERSION_RC" - git push origin "$VERSION_RC" - ``` + ```bash + git fetch origin master:master + git checkout master + # Double check the tip of the master branch is the expected commit + # of the squashed release branch + git log -1 + git tag -s "$VERSION_RC" + git push origin "$VERSION_RC" + ``` 1. Once the release candidate tag is created, create a pull request on the AvalancheGo repository, bumping the coreth dependency to use this release candidate. Once proven stable, an AvalancheGo release should be created, after which you can create a coreth release. @@ -72,43 +75,30 @@ Following the previous example in the [Release candidate section](#release-candi 1. 
Create and push a tag from the `master` branch: - ```bash - git checkout master - git pull origin - # Double check the tip of the master branch is the expected commit - # of the squashed release branch - git log -1 - git tag -s "$VERSION" - git push origin "$VERSION" - ``` + ```bash + git checkout master + git pull origin + # Double check the tip of the master branch is the expected commit + # of the squashed release branch + git log -1 + git tag -s "$VERSION" + git push origin "$VERSION" + ``` 1. Create a new release on Github, either using: - - the [Github web interface](https://github.com/ava-labs/coreth/releases/new) - 1. In the "Choose a tag" box, select the tag previously created `$VERSION` (`v0.15.0`) - 1. Pick the previous tag, for example as `v0.14.0`. - 1. Set the "Release title" to `$VERSION` (`v0.15.0`) - 1. Set the description using this format: - - ```markdown - This is the Coreth version used in AvalancheGo@v1.13.1 - - # Breaking changes - # Features + - the [Github web interface](https://github.com/ava-labs/coreth/releases/new) - # Fixes + 1. In the "Choose a tag" box, select the tag previously created `$VERSION` (`v0.15.0`) - # Documentation + 1. Pick the previous tag, for example as `v0.14.0`. - ``` + 1. Set the "Release title" to `$VERSION` (`v0.15.0`) - 1. Only tick the box "Set as the latest release" - 1. Click on the "Create release" button - - the Github CLI `gh`: + 1. Set the description using this format: - ```bash - PREVIOUS_VERSION=v0.14.0 - NOTES="This is the Coreth version used in AvalancheGo@v1.13.1 + ```markdown + This is the Coreth version used in AvalancheGo@v1.13.1 # Breaking changes @@ -118,36 +108,61 @@ Following the previous example in the [Release candidate section](#release-candi # Documentation - " - gh release create "$VERSION" --notes-start-tag "$PREVIOUS_VERSION" --notes-from-tag "$VERSION" --title "$VERSION" --notes "$NOTES" --verify-tag ``` + 1. Only tick the box "Set as the latest release" + + 1. 
Click on the "Create release" button + + - the Github CLI `gh`: + + ```bash + PREVIOUS_VERSION=v0.14.0 + NOTES="This is the Coreth version used in AvalancheGo@v1.13.1 + + # Breaking changes + + # Features + + # Fixes + + # Documentation + + " + gh release create "$VERSION" --notes-start-tag "$PREVIOUS_VERSION" --notes-from-tag "$VERSION" --title "$VERSION" --notes "$NOTES" --verify-tag + ``` + Note this release will likely never be used in AvalancheGo, which should always be using release candidates to accelerate the development process. However, it is still useful to have a release to indicate the last stable version of coreth. ### Post-release + After you have successfully released a new coreth version, you need to bump all of the versions again in preparation for the next release. Note that the release here is not final, and will be reassessed, and possibly changed prior to release. Some releases require a major version update, but this will usually be `$VERSION` + `0.0.1`. For example: + ```bash export P_VERSION=v1.15.1 ``` + 1. Create a branch, from the tip of the `master` branch after the release PR has been merged: - ```bash - git fetch origin master:master - git checkout master - git checkout -b "prep-$P_VERSION-release" - ``` + ```bash + git fetch origin master:master + git checkout master + git checkout -b "prep-$P_VERSION-release" + ``` 1. Bump the version number to the next pending release version, `$P_VERSION` - - Update the [RELEASES.md](../../RELEASES.md) file with `$P_VERSION`, creating a space for maintainers to place their changes as they make them. - - Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to `$P_VERSION`. + +- Update the [RELEASES.md](../../RELEASES.md) file with `$P_VERSION`, creating a space for maintainers to place their changes as they make them. +- Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to `$P_VERSION`.
+ 1. Create a pull request (PR) from your branch targeting master, for example using [`gh`](https://cli.github.com/): - ```bash - gh pr create --repo github.com/ava-labs/coreth --base master --title "chore: prep next release $P_VERSION" - ``` + ```bash + gh pr create --repo github.com/ava-labs/coreth --base master --title "chore: prep next release $P_VERSION" + ``` 1. Wait for the PR checks to pass with - ```bash - gh pr checks --watch - ``` + ```bash + gh pr checks --watch + ``` 1. Squash and merge your branch into `master`, for example: - ```bash - gh pr merge "prep-$P_VERSION-release" --squash --subject "chore: prep next release $P_VERSION" - ``` + ```bash + gh pr merge "prep-$P_VERSION-release" --squash --subject "chore: prep next release $P_VERSION" + ``` 1. Pat yourself on the back for a job well done. diff --git a/license_header.yml b/license_header.yml index 7693376369..179a3fdd89 100644 --- a/license_header.yml +++ b/license_header.yml @@ -1,3 +1,3 @@ -header: | +header: |- // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. - // See the file LICENSE for licensing terms. \ No newline at end of file + // See the file LICENSE for licensing terms. diff --git a/license_header_for_upstream.yml b/license_header_for_upstream.yml index ccb97b8dc6..7aeffe79a2 100644 --- a/license_header_for_upstream.yml +++ b/license_header_for_upstream.yml @@ -1,4 +1,4 @@ -header: | +header: |- // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. // @@ -9,4 +9,4 @@ header: | // original code from which it is derived. // // Much love to the original authors for their work. 
- // ********** \ No newline at end of file + // ********** diff --git a/plugin/evm/ExampleWarp.abi b/plugin/evm/ExampleWarp.abi index 9d4b442caa..0642d8793f 100644 --- a/plugin/evm/ExampleWarp.abi +++ b/plugin/evm/ExampleWarp.abi @@ -102,4 +102,4 @@ "stateMutability": "view", "type": "function" } -] \ No newline at end of file +] diff --git a/plugin/evm/api.md b/plugin/evm/api.md index 9bd0c51a9b..2ced4de401 100644 --- a/plugin/evm/api.md +++ b/plugin/evm/api.md @@ -1,6 +1,6 @@ --- title: C-Chain API -description: "This page is an overview of the C-Chain API associated with AvalancheGo." +description: This page is an overview of the C-Chain API associated with AvalancheGo. --- @@ -46,7 +46,7 @@ For example, to interact with the C-Chain's Ethereum APIs via websocket on local ws://127.0.0.1:9650/ext/bc/C/ws ``` -} > +\} > On localhost, use `ws://`. When using the [Public API](/docs/tooling/rpc-providers) or another host that supports encryption, use `wss://`. diff --git a/plugin/evm/atomic/sync/extender.go b/plugin/evm/atomic/sync/extender.go index 0f0aa4543f..28d97fe5ef 100644 --- a/plugin/evm/atomic/sync/extender.go +++ b/plugin/evm/atomic/sync/extender.go @@ -1,5 +1,6 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. + package sync import ( diff --git a/plugin/evm/atomic/sync/summary_parser.go b/plugin/evm/atomic/sync/summary_parser.go index dac3fe7bc4..353b6f49b3 100644 --- a/plugin/evm/atomic/sync/summary_parser.go +++ b/plugin/evm/atomic/sync/summary_parser.go @@ -1,5 +1,6 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. + package sync import ( diff --git a/plugin/evm/atomic/sync/summary_provider.go b/plugin/evm/atomic/sync/summary_provider.go index a10be898e1..d489d3d191 100644 --- a/plugin/evm/atomic/sync/summary_provider.go +++ b/plugin/evm/atomic/sync/summary_provider.go @@ -1,5 +1,6 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. 
All rights reserved. // See the file LICENSE for licensing terms. + package sync import ( diff --git a/plugin/evm/atomic/vm/api.go b/plugin/evm/atomic/vm/api.go index c718bf7b89..cd5ec93923 100644 --- a/plugin/evm/atomic/vm/api.go +++ b/plugin/evm/atomic/vm/api.go @@ -1,5 +1,6 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. + package vm import ( diff --git a/plugin/evm/atomic/vm/bonus_blocks.go b/plugin/evm/atomic/vm/bonus_blocks.go index a81ceb9850..b40d1dcc4b 100644 --- a/plugin/evm/atomic/vm/bonus_blocks.go +++ b/plugin/evm/atomic/vm/bonus_blocks.go @@ -1,5 +1,6 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. + package vm import "github.com/ava-labs/avalanchego/ids" diff --git a/plugin/evm/atomic/vm/fuji_ext_data_hashes.json b/plugin/evm/atomic/vm/fuji_ext_data_hashes.json index dac86c0ec4..0635aa035f 100644 --- a/plugin/evm/atomic/vm/fuji_ext_data_hashes.json +++ b/plugin/evm/atomic/vm/fuji_ext_data_hashes.json @@ -2242,4 +2242,4 @@ "0xffca061763aa6efa63beb050d9a6df86783fdc3130e64c91a86f8d3465dfb1c1": "0x4687fe565539d3dfa8f44bd1c110209d990570ba112b1757125c105c6f0d0fee", "0xfff1eb38ce0478dfa9d38495d58959f23717558c9419ad6bf132f3d479d397a2": "0x64c4778480bb0c1dc6691928ddd316dc4f972b44e84037abfb14feabb380ca50", "0xfff2e6d17597e3482a772c1d6d3d67ad13eae97fb99b74d0e4ed62a295845410": "0xb3dce642c4fdfb90fc18a0b42d68bcee26b022b1c3e6cc90d4a74e9fe7a1b7e6" -} \ No newline at end of file +} diff --git a/plugin/evm/atomic/vm/mainnet_ext_data_hashes.json b/plugin/evm/atomic/vm/mainnet_ext_data_hashes.json index d14f22c7ef..67f100cc5e 100644 --- a/plugin/evm/atomic/vm/mainnet_ext_data_hashes.json +++ b/plugin/evm/atomic/vm/mainnet_ext_data_hashes.json @@ -63290,4 +63290,4 @@ "0xfffc67b57a37f88bf7d9fd77cbc78bde0360a4bb1ab16e8c313170c870764525": "0xe3f8380582d31e2106fee82d861bb588d1f6f3dc7f2b26ab9b6e4faf7fc1d4ff", 
"0xfffd5d18f951f6af41b443d288a76f18279a7f3ef981087ab68eb729993b1d83": "0xa9cf2edc42ca08fb16046a4d758263ce59e19b8359d3d19a808d1fb9d9a08fe7", "0xfffdfb0eb1bde9b50a621c78c21d92a9878cbe0d338a902285b1101975971852": "0x621fda6c7a0eae8611f2a6511db192b0b7e757866a1b4fab9dc7984b33c28796" -} \ No newline at end of file +} diff --git a/plugin/evm/config/config.md b/plugin/evm/config/config.md index 08e155a4e4..fb9882290b 100644 --- a/plugin/evm/config/config.md +++ b/plugin/evm/config/config.md @@ -104,7 +104,7 @@ Enables the Warp API. Defaults to `false`. ### Enabling EVM APIs -### `eth-apis` (\[\]string) +### `eth-apis` ([]string) Use the `eth-apis` field to specify the exact set of below services to enable on your node. If this field is not set, then the default list will be: `["eth","eth-filter","net","web3","internal-eth","internal-blockchain","internal-transaction"]`. @@ -113,23 +113,22 @@ The names used in this configuration flag have been updated in Coreth `v0.8.14`. The mapping of deprecated values and their updated equivalent follows: -|Deprecated |Use instead | -|--------------------------------|--------------------| -|public-eth |eth | -|public-eth-filter |eth-filter | -|private-admin |admin | -|private-debug |debug | -|public-debug |debug | -|internal-public-eth |internal-eth | -|internal-public-blockchain |internal-blockchain | -|internal-public-transaction-pool|internal-transaction| -|internal-public-tx-pool |internal-tx-pool | -|internal-public-debug |internal-debug | -|internal-private-debug |internal-debug | -|internal-public-account |internal-account | -|internal-private-personal |internal-personal | - - +| Deprecated | Use instead | +| -------------------------------- | -------------------- | +| public-eth | eth | +| public-eth-filter | eth-filter | +| private-admin | admin | +| private-debug | debug | +| public-debug | debug | +| internal-public-eth | internal-eth | +| internal-public-blockchain | internal-blockchain | +| internal-public-transaction-pool | 
internal-transaction | +| internal-public-tx-pool | internal-tx-pool | +| internal-public-debug | internal-debug | +| internal-private-debug | internal-debug | +| internal-public-account | internal-account | +| internal-private-personal | internal-personal | If you populate this field, it will override the defaults so you must include every service you wish to enable. @@ -369,7 +368,7 @@ The maximum gas to be consumed by an RPC Call (used in `eth_estimateGas` and `et _Integer_ -Global transaction fee (price \* `gaslimit`) cap (measured in AVAX) for send-transaction variants. Defaults to `100`. +Global transaction fee (price * `gaslimit`) cap (measured in AVAX) for send-transaction variants. Defaults to `100`. ### `api-max-duration` @@ -445,7 +444,7 @@ If `true`, the APIs will allow transactions that are not replay protected (EIP-1 ### `allow-unprotected-tx-hashes` -_\[\]TxHash_ +_[]TxHash_ Specifies an array of transaction hashes that should be allowed to bypass replay protection. This flag is intended for node operators that want to explicitly allow specific transactions to be issued through their API. Defaults to an empty list. diff --git a/plugin/evm/customlogs/log_ext.go b/plugin/evm/customlogs/log_ext.go index ad1a2b70b8..6074cc0ff0 100644 --- a/plugin/evm/customlogs/log_ext.go +++ b/plugin/evm/customlogs/log_ext.go @@ -1,5 +1,6 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. + package customlogs import ethtypes "github.com/ava-labs/libevm/core/types" diff --git a/plugin/evm/customtypes/block_ext_test.go b/plugin/evm/customtypes/block_ext_test.go index bf85246af4..5d468cc906 100644 --- a/plugin/evm/customtypes/block_ext_test.go +++ b/plugin/evm/customtypes/block_ext_test.go @@ -1,6 +1,5 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms.
- package customtypes import ( diff --git a/plugin/evm/message/block_sync_summary_parser.go b/plugin/evm/message/block_sync_summary_parser.go index cb22c9249a..13c9b738b8 100644 --- a/plugin/evm/message/block_sync_summary_parser.go +++ b/plugin/evm/message/block_sync_summary_parser.go @@ -1,5 +1,6 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. + package message import ( diff --git a/plugin/evm/message/block_sync_summary_provider.go b/plugin/evm/message/block_sync_summary_provider.go index 8facac0af1..47b24f4878 100644 --- a/plugin/evm/message/block_sync_summary_provider.go +++ b/plugin/evm/message/block_sync_summary_provider.go @@ -1,5 +1,6 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. + package message import ( diff --git a/plugin/evm/vmtest/genesis.go b/plugin/evm/vmtest/genesis.go index 4af162ce7d..482053ffe1 100644 --- a/plugin/evm/vmtest/genesis.go +++ b/plugin/evm/vmtest/genesis.go @@ -1,5 +1,6 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. + package vmtest import ( diff --git a/plugin/evm/vmtest/test_syncervm.go b/plugin/evm/vmtest/test_syncervm.go index 887389ce48..1f412ab8f1 100644 --- a/plugin/evm/vmtest/test_syncervm.go +++ b/plugin/evm/vmtest/test_syncervm.go @@ -1,5 +1,6 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. + package vmtest import ( diff --git a/precompile/contracts/warp/README.md b/precompile/contracts/warp/README.md index 2164f237b1..257dfee0fa 100644 --- a/precompile/contracts/warp/README.md +++ b/precompile/contracts/warp/README.md @@ -17,11 +17,11 @@ For more details on Avalanche Warp Messaging, see the AvalancheGo [Warp README]( The Avalanche Warp Precompile enables this flow to send a message from blockchain A to blockchain B: 1. 
Call the Warp Precompile `sendWarpMessage` function with the arguments for the `UnsignedMessage` -2. Warp Precompile emits an event / log containing the `UnsignedMessage` specified by the caller of `sendWarpMessage` -3. Network accepts the block containing the `UnsignedMessage` in the log, so that validators are willing to sign the message -4. An off-chain relayer queries the validators for their signatures of the message and aggregates the signatures to create a `SignedMessage` -5. The off-chain relayer encodes the `SignedMessage` as the [predicate](#predicate-encoding) in the AccessList of a transaction to deliver on blockchain B -6. The transaction is delivered on blockchain B, the signature is verified prior to executing the block, and the message is accessible via the Warp Precompile's `getVerifiedWarpMessage` during the execution of that transaction +1. Warp Precompile emits an event / log containing the `UnsignedMessage` specified by the caller of `sendWarpMessage` +1. Network accepts the block containing the `UnsignedMessage` in the log, so that validators are willing to sign the message +1. An off-chain relayer queries the validators for their signatures of the message and aggregates the signatures to create a `SignedMessage` +1. The off-chain relayer encodes the `SignedMessage` as the [predicate](#predicate-encoding) in the AccessList of a transaction to deliver on blockchain B +1. The transaction is delivered on blockchain B, the signature is verified prior to executing the block, and the message is accessible via the Warp Precompile's `getVerifiedWarpMessage` during the execution of that transaction ### Warp Precompile @@ -57,7 +57,7 @@ To use this function, the transaction must include the signed Avalanche Warp Mes This leads to the following advantages: 1. The EVM execution does not need to verify the Warp Message at runtime (no signature verification or external calls to the P-Chain) -2. 
The EVM can deterministically re-execute and re-verify blocks assuming the predicate was verified by the network (e.g., in bootstrapping) +1. The EVM can deterministically re-execute and re-verify blocks assuming the predicate was verified by the network (e.g., in bootstrapping) This pre-verification is performed using the ProposerVM Block header during [block verification](../../../plugin/evm/block.go#L355) & [block building](../../../miner/worker.go#L200). @@ -89,9 +89,9 @@ Sending messages from the C-Chain remains unchanged. However, when L1 XYZ receives a message from the C-Chain, it changes the semantics to the following: 1. Read the `SourceChainID` of the signed message (C-Chain) -2. Look up the `SubnetID` that validates C-Chain: Primary Network -3. Look up the validator set of L1 XYZ (instead of the Primary Network) and the registered BLS Public Keys of L1 XYZ at the P-Chain height specified by the ProposerVM header -4. Continue Warp Message verification using the validator set of L1 XYZ instead of the Primary Network +1. Look up the `SubnetID` that validates C-Chain: Primary Network +1. Look up the validator set of L1 XYZ (instead of the Primary Network) and the registered BLS Public Keys of L1 XYZ at the P-Chain height specified by the ProposerVM header +1. Continue Warp Message verification using the validator set of L1 XYZ instead of the Primary Network This means that C-Chain to L1 communication only requires a threshold of stake on the receiving L1 to sign the message instead of a threshold of stake for the entire Primary Network. @@ -118,7 +118,7 @@ As a result, we require that the block itself provides a deterministic hint whic To provide that hint, we've explored two designs: 1. Include a predicate in the transaction to ensure any referenced message is valid -2. Append the results of checking whether a Warp Message is valid/invalid to the block data itself +1. 
Append the results of checking whether a Warp Message is valid/invalid to the block data itself The current implementation uses option (1). diff --git a/predicate/Predicate.md b/predicate/Predicate.md new file mode 100644 index 0000000000..ed625933e6 --- /dev/null +++ b/predicate/Predicate.md @@ -0,0 +1,11 @@ +# Predicate + +This package contains the predicate data structure and its encoding and helper functions to unpack/pack the data structure. + +## Encoding + +A byte slice of size N is encoded as: + +1. Slice of N bytes +1. Delimiter byte `0xff` +1. Appended 0s to the nearest multiple of 32 bytes diff --git a/predicate/Results.md b/predicate/Results.md new file mode 100644 index 0000000000..846d29a4cd --- /dev/null +++ b/predicate/Results.md @@ -0,0 +1,128 @@ +# Results + +The results package defines how to encode `PredicateResults` within the block header's `Extra` data field. + +For more information on the motivation for encoding the results of predicate verification within a block, see [here](../../x/warp/README.md#re-processing-historical-blocks). + +## Serialization + +Note: PredicateResults are encoded using the AvalancheGo codec, which serializes a map by serializing the length of the map as a uint32 and then serializes each key-value pair sequentially. 
+ +PredicateResults: + +``` ++---------------------+----------------------------------+-------------------+ +| codecID : uint16 | 2 bytes | ++---------------------+----------------------------------+-------------------+ +| results : map[[32]byte]TxPredicateResults | 4 + size(results) | ++---------------------+----------------------------------+-------------------+ + | 6 + size(results) | + +-------------------+ +``` + +- `codecID` is the codec version used to serialize the payload and is hardcoded to `0x0000` +- `results` is a map of transaction hashes to the corresponding `TxPredicateResults` + +TxPredicateResults + +``` ++--------------------+---------------------+------------------------------------+ +| txPredicateResults : map[[20]byte][]byte | 4 + size(txPredicateResults) bytes | ++--------------------+---------------------+------------------------------------+ + | 4 + size(txPredicateResults) bytes | + +------------------------------------+ +``` + +- `txPredicateResults` is a map of precompile addresses to the corresponding byte array returned by the predicate + +### Examples + +#### Empty Predicate Results Map + +``` +// codecID +0x00, 0x00, +// results length +0x00, 0x00, 0x00, 0x00 +``` + +#### Predicate Map with a Single Transaction Result + +``` +// codecID +0x00, 0x00, +// Results length +0x00, 0x00, 0x00, 0x01, +// txHash (key in results map) +0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +// TxPredicateResults (value in results map) +// TxPredicateResults length +0x00, 0x00, 0x00, 0x01, +// precompile address (key in TxPredicateResults map) +0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, +// Byte array results (value in TxPredicateResults map) +// Length of bytes result +0x00, 0x00, 0x00, 0x03, +// bytes +0x01, 0x02, 0x03 +``` + 
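The single-result example above can be cross-checked by hand-rolling the documented byte layout. This sketch uses `encoding/binary` directly and is not the AvalancheGo codec itself; it only mirrors the field order and length prefixes shown in the listing.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeSingleResult builds the documented layout for one entry:
// codecID, results length, then one txHash -> {precompile address -> result bytes}.
func encodeSingleResult(txHash [32]byte, addr [20]byte, res []byte) []byte {
	out := []byte{0x00, 0x00}                                  // codecID (hardcoded 0x0000)
	out = binary.BigEndian.AppendUint32(out, 1)                // results map length
	out = append(out, txHash[:]...)                            // key: tx hash
	out = binary.BigEndian.AppendUint32(out, 1)                // TxPredicateResults map length
	out = append(out, addr[:]...)                              // key: precompile address
	out = binary.BigEndian.AppendUint32(out, uint32(len(res))) // byte array length
	out = append(out, res...)                                  // predicate result bytes
	return out
}

func main() {
	var txHash [32]byte
	txHash[0] = 0x01
	var addr [20]byte
	addr[0] = 0x02
	enc := encodeSingleResult(txHash, addr, []byte{0x01, 0x02, 0x03})
	fmt.Println(len(enc)) // prints 69: 2 + 4 + 32 + 4 + 20 + 4 + 3 bytes
}
```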
+#### Predicate Map with Two Transaction Results + +``` +// codecID +0x00, 0x00, +// Results length +0x00, 0x00, 0x00, 0x02, +// txHash (key in results map) +0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +// TxPredicateResults (value in results map) +// TxPredicateResults length +0x00, 0x00, 0x00, 0x01, +// precompile address (key in TxPredicateResults map) +0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, +// Byte array results (value in TxPredicateResults map) +// Length of bytes result +0x00, 0x00, 0x00, 0x03, +// bytes +0x01, 0x02, 0x03 +// txHash2 (key in results map) +0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +// TxPredicateResults (value in results map) +// TxPredicateResults length +0x00, 0x00, 0x00, 0x01, +// precompile address (key in TxPredicateResults map) +0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, +// Byte array results (value in TxPredicateResults map) +// Length of bytes result +0x00, 0x00, 0x00, 0x03, +// bytes +0x01, 0x02, 0x03 +``` + +### Maximum Size + +Results has a maximum size of 1MB enforced by the codec. The actual size depends on how much data the Precompile predicates may put into the results, the gas cost they charge, and the block gas limit. + +The Results maximum size should comfortably exceed the maximum value that could happen in practice, so that a correct block builder will not attempt to build a block and fail to marshal the predicate results using the codec. + +We make this easy to reason about by assigning a minimum gas cost to the `PredicateGas` function of precompiles. 
In the case of Warp, the minimum gas cost is set to 200k gas, which can lead to at most 32 additional bytes being included in Results. + +The additional bytes come from the transaction hash (32 bytes), length of tx predicate results (4 bytes), the precompile address (20 bytes), length of the bytes result (4 bytes), and the additional byte in the results bitset (1 byte). This results in 200k gas contributing a maximum of 61 additional bytes to Result. + +For a block with a maximum gas limit of 100M, the block can include up to 500 validated predicates contributing to the size of Result. At 61 bytes / validated predicate, this yields ~30KB, which is well short of the 1MB cap. diff --git a/scripts/actionlint.sh b/scripts/actionlint.sh index 2f0b27b536..f8093f7706 100755 --- a/scripts/actionlint.sh +++ b/scripts/actionlint.sh @@ -2,7 +2,11 @@ set -euo pipefail -go run github.com/rhysd/actionlint/cmd/actionlint@v1.7.1 "${@}" +if [[ $# -gt 0 ]]; then + go run github.com/rhysd/actionlint/cmd/actionlint@v1.7.1 "$@" +else + go run github.com/rhysd/actionlint/cmd/actionlint@v1.7.1 +fi echo "Checking use of scripts/* in GitHub Actions workflows..." SCRIPT_USAGE= @@ -21,4 +25,4 @@ done if [[ -n "${SCRIPT_USAGE}" ]]; then echo "Error: the lines listed above must be converted to use scripts/run_task.sh to ensure local reproducibility."
exit 1 -fi \ No newline at end of file +fi diff --git a/scripts/build_test.sh b/scripts/build_test.sh index b2390e53ea..6b21fc0b0a 100755 --- a/scripts/build_test.sh +++ b/scripts/build_test.sh @@ -33,7 +33,7 @@ do if [[ ${command_status:-0} == 0 ]]; then rm test.out exit 0 - else + else unset command_status # Clear the error code for the next run fi diff --git a/scripts/eth-allowed-packages.txt b/scripts/eth-allowed-packages.txt index e8d62708db..484566e7ee 100644 --- a/scripts/eth-allowed-packages.txt +++ b/scripts/eth-allowed-packages.txt @@ -38,4 +38,4 @@ "github.com/ava-labs/libevm/trie/triestate" "github.com/ava-labs/libevm/trie/utils" "github.com/ava-labs/libevm/triedb" -"github.com/ava-labs/libevm/triedb/database" \ No newline at end of file +"github.com/ava-labs/libevm/triedb/database" diff --git a/scripts/filter_precommit_files.sh b/scripts/filter_precommit_files.sh new file mode 100755 index 0000000000..bd2d598bfd --- /dev/null +++ b/scripts/filter_precommit_files.sh @@ -0,0 +1,158 @@ +#!/usr/bin/env bash + +set -euo pipefail + +# Filters a list of filenames (from pre-commit) to exclude any paths that match +# directories listed (non-negated) in scripts/upstream_files.txt, and skips +# non-text files. Designed to be used as: +# scripts/filter_precommit_files.sh [cmd-args ...] -- +# with pass_filenames: true so pre-commit appends filenames after '--'. + +SCRIPT_DIR=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" &>/dev/null && pwd) +REPO_ROOT=$(cd -- "${SCRIPT_DIR}/.." &>/dev/null && pwd) +UPSTREAM_FILE="${REPO_ROOT}/scripts/upstream_files.txt" + +die() { + echo "error: $*" >&2 + exit 2 +} + +require_upstream_file() { + [[ -f "${UPSTREAM_FILE}" ]] || die "not found: ${UPSTREAM_FILE}" +} + +build_dirs_regex() { + # Build a regex like ^(core/|eth/|internal/|node/) from non-negated entries + # in scripts/upstream_files.txt. Returns empty string if no entries. 
+ require_upstream_file + local topdirs_list=() + while IFS= read -r line; do + # strip leading/trailing whitespace + line="${line#"${line%%[![:space:]]*}"}" + line="${line%"${line##*[![:space:]]}"}" + [[ -z "${line}" ]] && continue + [[ "${line}" = \!* ]] && continue + first_segment="${line%%/*}" + [[ -z "${first_segment}" ]] && continue + [[ "${first_segment}" = "." || "${first_segment}" = "*" ]] && continue + topdirs_list+=("${first_segment}") + done < "${UPSTREAM_FILE}" + + if [[ ${#topdirs_list[@]} -eq 0 ]]; then + printf '%s' "" + return 0 + fi + + local unique_topdirs + unique_topdirs=$(printf '%s\n' "${topdirs_list[@]}" | sort -u) + local parts=() + while IFS= read -r d; do + [[ -z "$d" ]] && continue + parts+=("${d}/") + done <<< "${unique_topdirs}" + + if [[ ${#parts[@]} -eq 0 ]]; then + printf '%s' "" + return 0 + fi + + local joined_parts + joined_parts=$(printf '%s|' "${parts[@]}") + joined_parts=${joined_parts%|} + printf '^(%s)' "$joined_parts" +} + +is_text_file() { + # Return 0 if the given path appears to be a text file, 1 if binary. + local f="$1" + [[ -f "$f" ]] || return 1 + [[ ! -s "$f" ]] && return 0 + if command -v file >/dev/null 2>&1; then + local mime + mime=$(file -b --mime "$f" || true) + case "$mime" in + *charset=binary*) return 1 ;; + esac + fi + if LC_ALL=C grep -Iq . 
"$f" 2>/dev/null; then + return 0 + fi + return 1 +} + +should_skip_path() { + # Args: + local p="$1" + local re="${2-}" + # Normalize to repo-relative for regex matching against top-level dirs + local rel="$p" + case "$p" in + "${REPO_ROOT}/"*) rel="${p#"${REPO_ROOT}/"}" ;; + esac + # Strip leading './' if present + case "$rel" in + ./*) rel="${rel#./}" ;; + esac + # Skip directories + [[ -d "$p" ]] && return 0 + # Skip obvious binaries by extension + case "$p" in + *.bin) return 0 ;; + esac + # Skip non-text + is_text_file "$p" || return 0 + # Skip upstream directories if regex provided + if [[ -n "$re" ]] && [[ "$rel" =~ $re ]]; then + return 0 + fi + return 1 +} + +parse_args() { + # Populate CMD[] and FILES[] from argv using '--' as delimiter + CMD=() + FILES=() + local found_delim=false + for arg in "$@"; do + if ! $found_delim; then + if [[ "$arg" == "--" ]]; then + found_delim=true + continue + fi + CMD+=("$arg") + else + FILES+=("$arg") + fi + done + if [[ ${#CMD[@]} -eq 0 ]]; then + die "usage: $0 [args...] 
-- [files...]" + fi +} + +main() { + parse_args "$@" + local dirs_regex + dirs_regex="$(build_dirs_regex)" + + # If no filenames provided, run the command as-is + if [[ ${#FILES[@]} -eq 0 ]]; then + exec "${CMD[@]}" + fi + + local filtered=() + for p in "${FILES[@]}"; do + if should_skip_path "$p" "$dirs_regex"; then + continue + fi + filtered+=("$p") + done + + # Nothing left to process + if [[ ${#filtered[@]} -eq 0 ]]; then + exit 0 + fi + + exec "${CMD[@]}" "${filtered[@]}" +} + +main "$@" diff --git a/scripts/known_flakes.txt b/scripts/known_flakes.txt index dc539bc74f..e586594c2f 100644 --- a/scripts/known_flakes.txt +++ b/scripts/known_flakes.txt @@ -7,4 +7,5 @@ TestResyncNewRootAfterDeletes TestTransactionSkipIndexing TestUpdatedKeyfileContents TestWaitDeployedCornerCases -TestWebsocketLargeRead \ No newline at end of file +TestWebsocketLargeRead + diff --git a/scripts/run_task.sh b/scripts/run_task.sh index a6ed1f4f84..beec43e21c 100755 --- a/scripts/run_task.sh +++ b/scripts/run_task.sh @@ -7,4 +7,4 @@ if command -v task > /dev/null 2>&1; then exec task "${@}" else go run github.com/go-task/task/v3/cmd/task@v3.39.2 "${@}" -fi +fi diff --git a/scripts/shellcheck.sh b/scripts/shellcheck.sh index f57b853362..3897e5d2ce 100755 --- a/scripts/shellcheck.sh +++ b/scripts/shellcheck.sh @@ -52,4 +52,8 @@ for file in ${IGNORED_FILES}; do IGNORED_CONDITIONS+=(-path "${REPO_ROOT}/${file}" -prune) done -find "${REPO_ROOT}" \( "${IGNORED_CONDITIONS[@]}" \) -o -type f -name "*.sh" -print0 | xargs -0 "${SHELLCHECK}" "${@}" +if [[ $# -gt 0 ]]; then + find "${REPO_ROOT}" \( "${IGNORED_CONDITIONS[@]}" \) -o -type f -name "*.sh" -print0 | xargs -0 "${SHELLCHECK}" "$@" +else + find "${REPO_ROOT}" \( "${IGNORED_CONDITIONS[@]}" \) -o -type f -name "*.sh" -print0 | xargs -0 "${SHELLCHECK}" +fi diff --git a/scripts/upstream_files.txt b/scripts/upstream_files.txt index 295a3fadfa..5a7b7fa60d 100644 --- a/scripts/upstream_files.txt +++ b/scripts/upstream_files.txt @@ -49,4 +49,4 @@ 
triedb/* !internal/ethapi/api_extra_test.go !tests/utils/* !tests/warp/* -!triedb/firewood/* \ No newline at end of file +!triedb/firewood/* diff --git a/sync/README.md b/sync/README.md index d0a676e113..2fcf50eb43 100644 --- a/sync/README.md +++ b/sync/README.md @@ -1,33 +1,38 @@ # State sync ## Overview + Normally, a node joins the network through bootstrapping: First it fetches all blocks from genesis to the chain's last accepted block from peers, then it applies the state transition specified in each block to reach the state necessary to join consensus. State sync is an alternative in which a node downloads the state of the chain from its peers at a specific _syncable_ block height. Then, the node processes the rest of the chain's blocks (from syncable block to tip) via normal bootstrapping. -Blocks at heights divisible by `defaultSyncableInterval` (= 16,384 or 2**14) are considered syncable. +Blocks at heights divisible by `defaultSyncableInterval` (= 16,384 or 2\*\*14) are considered syncable. _Note: `defaultSyncableInterval` must be divisible by `CommitInterval` (= 4096). This is so the state corresponding to syncable blocks is available on nodes with pruning enabled._ State sync is faster than bootstrapping and uses less bandwidth and computation: + - Nodes joining the network do not process all the state transitions. - The amount of data sent over the network is proportionate to the amount of state not the chain's length _Note: nodes joining the network through state sync will not have historical state prior to the syncable block._ ## What is the chain state? 
+ The node needs the following data from its peers to continue processing blocks from a syncable block: + - Accounts trie & storage tries for all accounts (at the state root corresponding to the syncable block), - State of the cross-chain shared memory (this data is fetched from peers as a merkelized trie containing add/remove operations from Import and Export Txs, known as the _atomic trie_), - Contract code referenced in the account trie, - 256 parents of the syncable block (required for the BLOCKHASH opcode) ## Code structure + State sync code is structured as follows: - `sync/handlers`: Nodes that have joined the network are expected to respond to valid requests for the chain state: - `LeafsRequestHandler`: handles requests for trie data (leafs) - `CodeRequestHandler`: handles requests for contract code - `BlockRequestHandler`: handles requests for blocks - - _Note: There are response size and time limits in place so peers joining the network do not overload peers providing data. Additionally, the engine tracks the CPU usage of each peer for such messages and throttles inbound requests accordingly._ + - _Note: There are response size and time limits in place so peers joining the network do not overload peers providing data. Additionally, the engine tracks the CPU usage of each peer for such messages and throttles inbound requests accordingly._ - `sync/client`: Validates responses from peers and provides support for syncing tries. - `sync/statesync`: Uses `sync/client` to sync EVM related state: Accounts, storage tries, and contract code. - `plugin/evm/syncer`: Uses `sync/client` to sync the atomic trie. @@ -37,8 +42,8 @@ State sync code is structured as follows: - `peer`: Contains abstractions used by `sync/statesync` to send requests to peers (`AppRequest`) and receive responses from peers (`AppResponse`). - `message`: Contains structs that are serialized and sent over the network during state sync. 
- ## Sync summaries & engine involvement + When a new node wants to join the network via state sync, it will need a few pieces of information as a starting point so it can make valid requests to its peers: - Number (height) and hash of the latest available syncable block, @@ -47,16 +52,17 @@ When a new node wants to join the network via state sync, it will need a few pie The above information is called a _state summary_, and each syncable block corresponds to one such summary (see `message.Summary`). The engine and VM interact as follows to find a syncable state summary: - -1. The engine calls `StateSyncEnabled`. The VM returns `true` to initiate state sync, or `false` to start bootstrapping. In `coreth`, this is controlled by the `state-sync-enabled` flag. -1. The engine calls `GetOngoingSyncStateSummary`. If the VM has a previously interrupted sync to resume it returns that summary. Otherwise, it returns `ErrNotFound`. By default, `coreth` will resume an interrupted sync. +1. The engine calls `StateSyncEnabled`. The VM returns `true` to initiate state sync, or `false` to start bootstrapping. In `coreth`, this is controlled by the `state-sync-enabled` flag. +1. The engine calls `GetOngoingSyncStateSummary`. If the VM has a previously interrupted sync to resume it returns that summary. Otherwise, it returns `ErrNotFound`. By default, `coreth` will resume an interrupted sync. 1. The engine samples peers for their latest available summaries, then verifies the correctness and availability of each sampled summary with validators. The messaging flow is documented [here](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/README.md). 1. The engine calls `Accept` on the chosen summary. The VM may return `false` to skip syncing to this summary (`coreth` skips state sync for less than `defaultStateSyncMinBlocks = 300_000` blocks). 
If the VM decides to perform the sync, it must return `true` without blocking and fetch the state from its peers asynchronously. 1. The VM sends `common.StateSyncDone` on the `toEngine` channel on completion. 1. The engine calls `VM.SetState(Bootstrapping)`. Then, blocks after the syncable block are processed one by one. ## Syncing state + The following steps are executed by the VM to sync its state from peers (see `stateSyncClient.StateSync`): + 1. Wipe snapshot data 1. Sync 256 parents of the syncable block (see `BlockRequest`), 1. Sync the atomic trie, @@ -64,6 +70,7 @@ The following steps are executed by the VM to sync its state from peers (see `st 1. Update in-memory and on-disk pointers. Steps 3 and 4 involve syncing tries. To sync trie data, the VM will send a series of `LeafRequests` to its peers. Each request specifies: + - Type of trie (`NodeType`): - `statesync.StateTrieNode` (account trie and storage tries share the same database) - `atomic.TrieNode` (atomic trie has an independent database) @@ -73,18 +80,20 @@ Steps 3 and 4 involve syncing tries. To sync trie data, the VM will send a serie Peers responding to these requests send back trie leafs (key/value pairs) beginning at `Start` and up to `End` (or a maximum number of leafs). The response must also contain include a merkle proof for the range of leafs it contains. Nodes serving state sync data are responsible for constructing these proofs (see `sync/handlers/leafs_request.go`) `client.GetLeafs` handles sending a single request and validating the response. This method will retry the request from a different peer up to `maxRetryAttempts` (= 32) times if the peer's response is: + - malformed, - does not contain a valid merkle proof, - or is not received in time. - -If there are more leafs in a trie than can be returned in a single response, the client will make successive requests to continue fetching data (with `Start` set to the last key received) until the trie is complete. 
`CallbackLeafSyncer` manages this process and does a callback on each batch of received leafs. +If there are more leafs in a trie than can be returned in a single response, the client will make successive requests to continue fetching data (with `Start` set to the last key received) until the trie is complete. `CallbackLeafSyncer` manages this process and does a callback on each batch of received leafs. ### EVM state: Account trie, code, and storage tries + `sync/statesync.stateSyncer` uses `CallbackLeafSyncer` to sync the account trie. When the leaf callback is invoked, each leaf represents an account: + - If the account has contract code, it is requested from peers using `client.GetCode` - If the account has a storage root, it is added to the list of trie roots returned from the callback. `CallbackLeafSyncer` has `defaultNumThreads` (= 4) goroutines to fetch these tries concurrently. -If the account trie encounters a new storage trie task and there are already 4 in-progress trie tasks (1 for the account trie and 3 for in-progress storage trie tasks), then the account trie worker will block until one of the storage trie tasks finishes and it can create a new task. + If the account trie encounters a new storage trie task and there are already 4 in-progress trie tasks (1 for the account trie and 3 for in-progress storage trie tasks), then the account trie worker will block until one of the storage trie tasks finishes and it can create a new task. When an account leaf is received, it is converted to `SlimRLP` format and written to the snapshot. To reconstruct the trie, `stateSyncer` inserts leafs as they arrive in a `StackTrie`. Since leafs arrive sorted by increasing key order, the `StackTrie` can create intermediary trie nodes as soon as all possible children for a given path are known (by hashing the children). This allows the sync process to recreate the trie locally, without the need to transmit non-leaf nodes over the network. 
@@ -93,13 +102,16 @@ When the trie is complete, an `OnFinish` callback is called and we hash any rema When a storage trie leaf is received, it is stored in the account's storage snapshot. A `StackTrie` is used here to reconstruct intermediary trie nodes & root as well. ### Atomic trie + `plugin/evm.syncer` uses `CallbackLeafSyncer` to sync the atomic trie. In this trie, each leaf represents a set of put or remove shared memory operations and is structured as follows: + - Key: block height + peer blockchain ID - Value: codec serialized `atomic.Requests` (includes `PutRequests` and `RemoveRequests`) -For each 4096 blocks (`commitHeightInterval`) inserted in the atomic trie, a root is constructed and the trie is persisted. There is no concurrency in sycing this trie. +For each 4096 blocks (`commitHeightInterval`) inserted in the atomic trie, a root is constructed and the trie is persisted. There is no concurrency in syncing this trie. ### Updating in-memory and on-disk pointers + `plugin/evm.stateSyncClient.StateSyncSetLastSummaryBlock` is the last step in state sync. Once the tries have been synced, this method: @@ -109,8 +121,8 @@ Once the tries have been synced, this method: - Updates VM's last accepted block. - Applies the atomic operations from the atomic trie to shared memory. (Note: the VM will resume applying these operations even if the VM is shutdown prior to completing this step) - ## Resuming a partial sync operation + While state sync is faster than normal bootstrapping, the process may take several hours to complete. In case the node is shut down in the middle of a state sync, progress on syncing the account trie and storage tries is preserved: - When starting a sync, `stateSyncClient` persists the state summary to disk. This is so if the node is shut down while the sync is ongoing, this summary can be found and returned to the engine from `GetOngoingSyncStateSummary` upon node restart.
@@ -121,10 +133,10 @@ While state sync is faster than normal bootstrapping, the process may take sever ## Configuration flags -| flag | type | description | default | -|------|------|-------------|---------| -| `state-sync-enabled` | `bool` | set to true to enable state sync | `false` | -| `state-sync-skip-resume` | `bool` | set to true to avoid resuming an ongoing sync | `false` | -| `state-sync-min-blocks` | `uint64` | Minimum number of blocks the chain must be ahead of local state to prefer state sync over bootstrapping | `300,000` | -| `state-sync-server-trie-cache` | `int` | Size of trie cache to serve state sync data in MB. Should be set to multiples of `64`. | `64` | -| `state-sync-ids` | `string` | a comma separated list of `NodeID-` prefixed node IDs to sync data from. If not provided, peers are randomly selected. | | +| flag | type | description | default | +| ------------------------------ | -------- | ---------------------------------------------------------------------------------------------------------------------- | --------- | +| `state-sync-enabled` | `bool` | set to true to enable state sync | `false` | +| `state-sync-skip-resume` | `bool` | set to true to avoid resuming an ongoing sync | `false` | +| `state-sync-min-blocks` | `uint64` | Minimum number of blocks the chain must be ahead of local state to prefer state sync over bootstrapping | `300,000` | +| `state-sync-server-trie-cache` | `int` | Size of trie cache to serve state sync data in MB. Should be set to multiples of `64`. | `64` | +| `state-sync-ids` | `string` | a comma separated list of `NodeID-` prefixed node IDs to sync data from. If not provided, peers are randomly selected. | | diff --git a/sync/vm/server.go b/sync/vm/server.go index d5d40c33eb..1159082ec3 100644 --- a/sync/vm/server.go +++ b/sync/vm/server.go @@ -1,5 +1,6 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. 
+ package vm import ( diff --git a/utils/rpc/handler.go b/utils/rpc/handler.go index 1203af3708..0dab358d61 100644 --- a/utils/rpc/handler.go +++ b/utils/rpc/handler.go @@ -1,5 +1,6 @@ // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. + package rpc import (