Prereq: #346
Here's the idea:
```toml
# Coming Soon: Specify live contracts to bind & import in this project using the given name.
# During initialization, these contracts will also be "spooned" into the development network,
# meaning that their data will match the live network at the given sequence number.
# [development.contracts.eurc]
# environment = "production"
# address = "C..."
# at-ledger-sequence = 50153603
```
This would allow dev & testing environments to quickly get to a sensible starting point, to avoid doing things like this (showing an example that creates dummy data for an oracle):
```toml
[development.contracts.data_feed]
client = true
init = """
admin_set --new-admin me
sep40_init --resolution 300 --assets '[{"Other":"USDT"},{"Other":"XLM"}]' --decimals 14 --base '{"Other":"USDT"}'
set_asset_price --price 10000000000000 --asset '{"Other":"XLM"}' --timestamp "$(date +%s)"
set_asset_price --price 100000000000000 --asset '{"Other":"USDT"}' --timestamp "$(date +%s)"
"""
```
This either requires pulling a Docker image for an indexer and building up a local index of the desired contract (which could take a long time, potentially an hour or more), or querying a large archival index (presumably run as a SaaS, and thus requiring payment).
It also requires a way to directly update the local network's key-value store with the enormous amount of data.
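A minimal sketch of what that bulk write might look like, assuming the local network's store exposes some batch-put interface. The `LedgerStore` class and `bulk_load` helper here are hypothetical stand-ins, not an actual API; the point is only that an enormous snapshot has to be chunked rather than written in one shot:

```python
from itertools import islice


class LedgerStore:
    """Hypothetical stand-in for the local network's key-value store."""

    def __init__(self):
        self._data = {}

    def put_batch(self, entries):
        self._data.update(entries)

    def get(self, key):
        return self._data.get(key)


def bulk_load(store, entries, batch_size=10_000):
    """Write a large set of archival ledger entries in batches, so an
    enormous snapshot never has to fit in a single write. Returns the
    number of entries loaded."""
    it = iter(entries.items())
    loaded = 0
    while batch := dict(islice(it, batch_size)):
        store.put_batch(batch)
        loaded += len(batch)
    return loaded


# Example: 25,000 dummy entries loaded in three batches.
store = LedgerStore()
snapshot = {f"entry-{i}": f"value-{i}" for i in range(25_000)}
loaded = bulk_load(store, snapshot, batch_size=10_000)
```

In practice the batch size would be tuned to whatever the local network's write path can absorb.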
And even after all that, the public keys and contract addresses embedded in that archival data, once loaded into the local network's data store, will not match the actual public keys and contract addresses on the local network. So some sort of mapping (or subsequent crawling & spooning) would be necessary.
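To illustrate the mapping problem, here's a rough sketch, assuming spooned entries are represented as plain dicts and that live-network addresses can simply be substituted with locally generated ones. The entry shape and the `remap_addresses` helper are invented for illustration; the only real assumption is that Stellar account keys start with `G` and contract addresses with `C`:

```python
def remap_addresses(entries, address_map):
    """Replace live-network addresses embedded in spooned ledger entries
    with their local-network equivalents. Any address not yet in the map
    is collected, so a subsequent crawl/spoon pass could fetch it too."""
    remapped, missing = [], set()

    def swap(value):
        # Treat G... (account) and C... (contract) strings as addresses.
        if isinstance(value, str) and value.startswith(("G", "C")):
            if value in address_map:
                return address_map[value]
            missing.add(value)
        return value

    for entry in entries:
        remapped.append({key: swap(value) for key, value in entry.items()})
    return remapped, missing


# Example: one spooned oracle entry; only the contract address is mapped,
# so the admin key surfaces as still needing a local equivalent.
live_entries = [
    {"contract": "CLIVEORACLE", "admin": "GLIVEADMIN", "price": 10_000},
]
mapping = {"CLIVEORACLE": "CLOCALORACLE"}
new_entries, unmapped = remap_addresses(live_entries, mapping)
```

The `missing` set is what would drive the "subsequent crawling & spooning" mentioned above: each unmapped address is another piece of live state to pull in and remap.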