This module is similar to Baseledger Lakewood (https://github.com/Baseledger/baseledger-lakewood) and is used for storing proofs. Everyone who executes a proof-storing transaction needs to pay a Work token fee that depends on the payload size (details on this will follow). The module also exposes a custom cosmos-sdk client for signing and broadcasting transactions via a REST endpoint, just by sending the proof as a string; it uses preconfigured keys stored in the node's file keyring (https://docs.cosmos.network/master/run-node/keyring.html).
This module is forked from Gravity Bridge (https://github.com/Gravity-Bridge/Gravity-Bridge). Unlike Gravity Bridge, it is a one-way bridge (Ethereum => Cosmos), and it listens for and handles our application-specific events. Even though its purpose is different and it is not a separate chain but only a module, the structure and flow of the bridge follow Gravity Bridge good practices: there is an orchestrator (with only an ethereum oracle in our case) that validators need to run, which listens for events and sends claim transactions that are then voted on within attestations. Also, unlike Gravity Bridge, we use starport (https://github.com/tendermint/starport) to scaffold the cosmos module.
An overview of the changes compared to Gravity Bridge:
- one way bridge (Ethereum => Cosmos)
- different smart contract (we do not use Gravity.sol)
- different events
- removed everything that we don't need (relayer, ethereum key etc.); the only thing left is the ethereum oracle for listening to baseledger-specific events
- simplified module structure in the orchestrator due to the simplified overall code
- cosmos module was generated using starport ...
To make this work locally, Rust needs to be installed in addition to starport. Then:

- check https://github.com/Baseledger/baseledger-contracts to run hardhat
- run `starport chain serve --verbose` in the baseledger folder (if starting from scratch, run `starport chain serve --verbose --reset-once` and copy the alice and bob mnemonics for further usage)
- run `cargo build --all` in the root of the orchestrator folder
- navigate to the baseledger_bridge folder and execute:

```shell
cargo run -- init
cargo run -- keys set-orchestrator-key --phrase="<STARPORT_BOB_PHRASE>"
cargo run -- keys register-orchestrator-address --fees="0token" --validator-phrase="<STARPORT_ALICE_PHRASE>"
export COINMARKETCAP_API_TOKEN=<token>
export COINAPI_API_TOKEN=<token>
cargo run -- orchestrator --ethereum-rpc="http://localhost:8545" --baseledger-contract-address="<BASELEDGER_TEST_CONTRACT_ADDRESS>"
```

To change the proto files in baseledger bridge:

- navigate to /baseledger
- run `starport chain build --proto-all-modules`
- navigate to /orchestrator/proto_build
- run `cargo run`
Scripts that have N and M next to their names take optional parameters controlling how many nodes and orchestrators will be spawned. If you leave these out, the default is 3.
- Navigate to tests
- Run `build-container.sh` - This should be run only once, to build the docker images.
- Run `start-containers.sh N` - Starts N (default 3) baseledger nodes and a hardhat node.
- Run `deploy-contracts.sh` - Deploys the dummy UBT token contract as well as the BaseledgerUBTSplitter contract.
- Run `setup-validators.sh N` - Creates and shares genesis.json among N (default 3) validators and creates gentx files.
- Run `run-testnet.sh N M` - Starts N (default 3) nodes, then registers and starts M (default 3) orchestrators.
- Run `add-new-node.sh N` - Starts the Nth (default 4th) node and orchestrator and adds them to the local dockerized testnet.
- Run `clean.sh N` - Cleans N (default 3) baseledger nodes, the hardhat node and the network.