diff --git a/.gitbook/assets/C2D_Ocean Enterprise.png b/.gitbook/assets/C2D_Ocean Enterprise.png
new file mode 100644
index 000000000..f7cd25307
Binary files /dev/null and b/.gitbook/assets/C2D_Ocean Enterprise.png differ
diff --git a/.gitbook/assets/Contribute.png b/.gitbook/assets/Contribute.png
new file mode 100644
index 000000000..111da1e8e
Binary files /dev/null and b/.gitbook/assets/Contribute.png differ
diff --git a/.gitbook/assets/Deployment.png b/.gitbook/assets/Deployment.png
new file mode 100644
index 000000000..1d756dfa0
Binary files /dev/null and b/.gitbook/assets/Deployment.png differ
diff --git a/.gitbook/assets/Hertz car reservation.pdf b/.gitbook/assets/Hertz car reservation.pdf
new file mode 100644
index 000000000..356263dcb
Binary files /dev/null and b/.gitbook/assets/Hertz car reservation.pdf differ
diff --git a/.gitbook/assets/OE diagrams.pdf b/.gitbook/assets/OE diagrams.pdf
new file mode 100644
index 000000000..1ec0a38ee
Binary files /dev/null and b/.gitbook/assets/OE diagrams.pdf differ
diff --git a/.gitbook/assets/OEC_Trust.png b/.gitbook/assets/OEC_Trust.png
new file mode 100644
index 000000000..b3a5db53a
Binary files /dev/null and b/.gitbook/assets/OEC_Trust.png differ
diff --git a/.gitbook/assets/OEC_Whitepaper.pdf b/.gitbook/assets/OEC_Whitepaper.pdf
new file mode 100644
index 000000000..b75646e6c
Binary files /dev/null and b/.gitbook/assets/OEC_Whitepaper.pdf differ
diff --git "a/.gitbook/assets/Ocean Enterprise_Cover-Styles_Zeichenfla\314\210che 1 Kopie 20.jpg" "b/.gitbook/assets/Ocean Enterprise_Cover-Styles_Zeichenfla\314\210che 1 Kopie 20.jpg"
new file mode 100644
index 000000000..97d85d3b7
Binary files /dev/null and "b/.gitbook/assets/Ocean Enterprise_Cover-Styles_Zeichenfla\314\210che 1 Kopie 20.jpg" differ
diff --git a/.gitbook/assets/TA-SSI (1).png b/.gitbook/assets/TA-SSI (1).png
new file mode 100644
index 000000000..0f876012d
Binary files /dev/null and b/.gitbook/assets/TA-SSI (1).png differ
diff --git a/.gitbook/assets/TA-SSI.png b/.gitbook/assets/TA-SSI.png
new file mode 100644
index 000000000..2e752396b
Binary files /dev/null and b/.gitbook/assets/TA-SSI.png differ
diff --git a/.gitbook/assets/TA-no SSI.png b/.gitbook/assets/TA-no SSI.png
new file mode 100644
index 000000000..7ad370e43
Binary files /dev/null and b/.gitbook/assets/TA-no SSI.png differ
diff --git a/.gitbook/assets/Technical.png b/.gitbook/assets/Technical.png
new file mode 100644
index 000000000..f1e0154f5
Binary files /dev/null and b/.gitbook/assets/Technical.png differ
diff --git a/.gitbook/assets/Technical2.png b/.gitbook/assets/Technical2.png
new file mode 100644
index 000000000..445e2c568
Binary files /dev/null and b/.gitbook/assets/Technical2.png differ
diff --git a/.gitbook/assets/User Guide2.png b/.gitbook/assets/User Guide2.png
new file mode 100644
index 000000000..7a3009c2e
Binary files /dev/null and b/.gitbook/assets/User Guide2.png differ
diff --git a/.gitbook/assets/architecture/Ocean101.png b/.gitbook/assets/architecture/Ocean101.png
deleted file mode 100644
index 20820afff..000000000
Binary files a/.gitbook/assets/architecture/Ocean101.png and /dev/null differ
diff --git a/.gitbook/assets/architecture/architecture_overview.png b/.gitbook/assets/architecture/architecture_overview.png
deleted file mode 100644
index fa75be2b6..000000000
Binary files a/.gitbook/assets/architecture/architecture_overview.png and /dev/null differ
diff --git a/.gitbook/assets/architecture/decentralized_exchanges_marketplaces.png b/.gitbook/assets/architecture/decentralized_exchanges_marketplaces.png
deleted file mode 100644
index 45be621e1..000000000
Binary files a/.gitbook/assets/architecture/decentralized_exchanges_marketplaces.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/Set-a-price-algo.png b/.gitbook/assets/c2d/Set-a-price-algo.png
deleted file mode 100644
index 540af2263..000000000
Binary files a/.gitbook/assets/c2d/Set-a-price-algo.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/Sign-transactions.png b/.gitbook/assets/c2d/Sign-transactions.png
deleted file mode 100644
index 348ccf43c..000000000
Binary files a/.gitbook/assets/c2d/Sign-transactions.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/Submit-compute-settings.png b/.gitbook/assets/c2d/Submit-compute-settings.png
deleted file mode 100644
index 3e424eaa3..000000000
Binary files a/.gitbook/assets/c2d/Submit-compute-settings.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/algo-asset.png b/.gitbook/assets/c2d/algo-asset.png
deleted file mode 100644
index 8dcb5aad4..000000000
Binary files a/.gitbook/assets/c2d/algo-asset.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/algorithm-privacy.png b/.gitbook/assets/c2d/algorithm-privacy.png
deleted file mode 100644
index 54069c2af..000000000
Binary files a/.gitbook/assets/c2d/algorithm-privacy.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/buy-compute-job.png b/.gitbook/assets/c2d/buy-compute-job.png
deleted file mode 100644
index c2c360e55..000000000
Binary files a/.gitbook/assets/c2d/buy-compute-job.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/c2d_compute_job.png b/.gitbook/assets/c2d/c2d_compute_job.png
deleted file mode 100644
index 6c3c4e3d0..000000000
Binary files a/.gitbook/assets/c2d/c2d_compute_job.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/c2d_detailed_flow.png b/.gitbook/assets/c2d/c2d_detailed_flow.png
deleted file mode 100644
index 60e0c79d2..000000000
Binary files a/.gitbook/assets/c2d/c2d_detailed_flow.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/compute-to-data-parameters-publish-algorithm.png b/.gitbook/assets/c2d/compute-to-data-parameters-publish-algorithm.png
deleted file mode 100644
index a77fc3041..000000000
Binary files a/.gitbook/assets/c2d/compute-to-data-parameters-publish-algorithm.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/compute-to-data-parameters-publish-dataset.png b/.gitbook/assets/c2d/compute-to-data-parameters-publish-dataset.png
deleted file mode 100644
index 4ec576605..000000000
Binary files a/.gitbook/assets/c2d/compute-to-data-parameters-publish-dataset.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/data-nft-c2d-preview.png b/.gitbook/assets/c2d/data-nft-c2d-preview.png
deleted file mode 100644
index 68216e5ee..000000000
Binary files a/.gitbook/assets/c2d/data-nft-c2d-preview.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/dataset-compute-option.png b/.gitbook/assets/c2d/dataset-compute-option.png
deleted file mode 100644
index a9f98314d..000000000
Binary files a/.gitbook/assets/c2d/dataset-compute-option.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/dataset-default-option.png b/.gitbook/assets/c2d/dataset-default-option.png
deleted file mode 100644
index bf25c247a..000000000
Binary files a/.gitbook/assets/c2d/dataset-default-option.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/docker-image.png b/.gitbook/assets/c2d/docker-image.png
deleted file mode 100644
index 1e5c1a172..000000000
Binary files a/.gitbook/assets/c2d/docker-image.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/double-check-work.png b/.gitbook/assets/c2d/double-check-work.png
deleted file mode 100644
index cbd39efc1..000000000
Binary files a/.gitbook/assets/c2d/double-check-work.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/edit-asset-link.png b/.gitbook/assets/c2d/edit-asset-link.png
deleted file mode 100644
index c988720cb..000000000
Binary files a/.gitbook/assets/c2d/edit-asset-link.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/edit-compute-settings.png b/.gitbook/assets/c2d/edit-compute-settings.png
deleted file mode 100644
index a2022e8ee..000000000
Binary files a/.gitbook/assets/c2d/edit-compute-settings.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/preview-publish.png b/.gitbook/assets/c2d/preview-publish.png
deleted file mode 100644
index c0c362722..000000000
Binary files a/.gitbook/assets/c2d/preview-publish.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/publish.png b/.gitbook/assets/c2d/publish.png
deleted file mode 100644
index bf3a381d5..000000000
Binary files a/.gitbook/assets/c2d/publish.png and /dev/null differ
diff --git a/.gitbook/assets/c2d/select-algorithm-for-compute.png b/.gitbook/assets/c2d/select-algorithm-for-compute.png
deleted file mode 100644
index e6b1b07fb..000000000
Binary files a/.gitbook/assets/c2d/select-algorithm-for-compute.png and /dev/null differ
diff --git a/.gitbook/assets/capture_011_28112025_170604.jpg b/.gitbook/assets/capture_011_28112025_170604.jpg
new file mode 100644
index 000000000..cb8e0f365
Binary files /dev/null and b/.gitbook/assets/capture_011_28112025_170604.jpg differ
diff --git a/.gitbook/assets/capture_031_09122025_183602.jpg b/.gitbook/assets/capture_031_09122025_183602.jpg
new file mode 100644
index 000000000..2128d2337
Binary files /dev/null and b/.gitbook/assets/capture_031_09122025_183602.jpg differ
diff --git a/.gitbook/assets/cli/flow.puml b/.gitbook/assets/cli/flow.puml
deleted file mode 100644
index 1059d3796..000000000
--- a/.gitbook/assets/cli/flow.puml
+++ /dev/null
@@ -1,80 +0,0 @@
-@startuml "DDO Publishing Flow"
-title "DDO Publishing Flow"
-
-skinparam sequenceArrowThickness 2
-skinparam roundcorner 10
-skinparam maxmessagesize 85
-skinparam sequenceParticipant underline
-
-actor "End User" as end_user
-participant "Consumer\n(Ocean CLI)" as consumer
-participant "Ocean.js" as ocean_js
-participant "Ocean Node" as ocean_node
-database "Ocean Node's Database\n(Typesense/ElasticSearch)" as db
-participant "Smart Contracts" as smart_contracts
-participant "DDO.js" as ddo_js
-
-group Asset Creation
-end_user -> consumer: Requests to publish a dataset with selected file.
-note over end_user
-Command: **npm run publish **
-end note
-consumer -> ocean_js: Calls asset creation function
-group DataNFT, Datatoken and Pricing Schema Deployment
-ocean_js -> smart_contracts: Deploy data NFT & datatoken with pricing schema
-note over smart_contracts
- Pricing schemas are:
- - dispenser
- - fixed rate exchange.
-end note
-smart_contracts --> smart_contracts: Events emitted
-ocean_js -> smart_contracts: Retrieve **NFTCreated** and **DatatokenCreated** events
-alt Datatoken is **not** template 4 for EVM credentials
- ocean_js -> ocean_node: Requests encryption for DDO services files
- alt Encryption is successful
- ocean_node --> ocean_js: Returns **200 OK** response
- else Encryption is **NOT** successful
- ocean_node --> ocean_js: Returns error status code
- end
-else Datatoken is template 4 for EVM credentials
- note over ocean_js
- No encryption for service files needed,
- because on Oasis the datatoken, which contains
- the services, is deployed in encrypted form.
- end note
-end
-end group
-group DDO Encryption
-ocean_js -> ocean_node: Requests encryption for DDO object
-alt Encryption is successful
- ocean_node --> ocean_js: Returns **200 OK** response
- else Encryption is **NOT** successful
- ocean_node --> ocean_js: Returns error status code
- end
-end group
-group DDO Validation
-ocean_js -> ocean_node: Requests validation for DDO structure
-ocean_node -> ddo_js: Validate DDO structure
-ddo_js -> ddo_js: Use SHACL schemas depending on DDO version for validation
-alt Validation is successful
- ddo_js --> ocean_node: Response with success
- ocean_node --> ocean_js: Returns **200 OK** response
- ocean_js -> smart_contracts: Calls **setMetadata** on chain
- ocean_js --> consumer: Returns DID
- consumer --> end_user: Returns DID
- group Ocean Node indexes new DDOs
- smart_contracts --> smart_contracts: Emit event **MetadataCreated**
- ocean_node -> smart_contracts: Listens to **MetadataCreated** events
- ocean_node -> ocean_node: Indexes DDO from chain in processMetadataEvent
- ocean_node -> db: Stores DDO in the database
- db --> ocean_node
- end group
-
- else Validation is **NOT** successful
- ddo_js --> ocean_node: Response with error and log the message
- ocean_node --> ocean_js: Returns error status code
- end
-
-end group
-end group
-@enduml
\ No newline at end of file
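For orientation, a minimal TypeScript sketch of the publishing sequence the deleted flow.puml above describes: deploy the data NFT and datatoken, encrypt the service files via Ocean Node unless the datatoken is template 4 (Oasis), encrypt and validate the DDO, then set the metadata on chain. Every function here is a hypothetical stub for illustration, not the real ocean.js or Ocean Node API.

// Hypothetical sketch of the publishing flow from the deleted diagram.
// All helpers below are stand-in stubs, not real ocean.js calls.

type DDO = { id: string; services: object[] };

interface Deployment {
  nftAddress: string;
  datatokenAddress: string;
  template: number; // datatoken template id read from the DatatokenCreated event
}

// Stub: deploy data NFT + datatoken with a pricing schema
// (dispenser or fixed rate exchange) and read back the
// NFTCreated / DatatokenCreated events.
async function deployNftAndDatatoken(): Promise<Deployment> {
  return { nftAddress: "0xNft", datatokenAddress: "0xDt", template: 1 }; // placeholders
}

// Stub: POST a payload to an Ocean Node encrypt endpoint; a non-200
// response would surface here as a thrown error.
async function encryptWithOceanNode(payload: unknown): Promise<string> {
  return "0xencrypted"; // placeholder ciphertext
}

// Stub: ask Ocean Node to validate the DDO; the node delegates to
// DDO.js, which picks a SHACL schema based on the DDO version.
async function validateDDO(ddo: DDO): Promise<boolean> {
  return true; // placeholder
}

// Stub: call setMetadata on the data NFT contract; the resulting
// MetadataCreated event is what Ocean Node later indexes.
async function setMetadataOnChain(nft: string, encryptedDDO: string): Promise<void> {}

async function publish(ddo: DDO): Promise<string> {
  const { nftAddress, template } = await deployNftAndDatatoken();

  // Template 4 (Oasis) deploys the datatoken, which contains the
  // services, in encrypted form, so the service files skip this step.
  if (template !== 4) {
    await encryptWithOceanNode(ddo.services);
  }

  const encryptedDDO = await encryptWithOceanNode(ddo);

  if (!(await validateDDO(ddo))) {
    throw new Error("DDO failed SHACL validation");
  }

  await setMetadataOnChain(nftAddress, encryptedDDO);
  return ddo.id; // the DID handed back to the CLI / end user
}

The template-4 branch mirrors the note in the diagram: on Oasis the services already live inside the datatoken in encrypted form, so the separate service-file encryption round-trip is unnecessary.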
diff --git a/.gitbook/assets/components/aquarius.png b/.gitbook/assets/components/aquarius.png
deleted file mode 100644
index e228c3658..000000000
Binary files a/.gitbook/assets/components/aquarius.png and /dev/null differ
diff --git a/.gitbook/assets/components/aquarius_deployment.png b/.gitbook/assets/components/aquarius_deployment.png
deleted file mode 100644
index c03b04154..000000000
Binary files a/.gitbook/assets/components/aquarius_deployment.png and /dev/null differ
diff --git a/.gitbook/assets/components/barge.png b/.gitbook/assets/components/barge.png
deleted file mode 100644
index d1e26cd63..000000000
Binary files a/.gitbook/assets/components/barge.png and /dev/null differ
diff --git a/.gitbook/assets/components/ocean_py.png b/.gitbook/assets/components/ocean_py.png
deleted file mode 100644
index 9becf0ac9..000000000
Binary files a/.gitbook/assets/components/ocean_py.png and /dev/null differ
diff --git a/.gitbook/assets/components/provider.png b/.gitbook/assets/components/provider.png
deleted file mode 100644
index 39efdde47..000000000
Binary files a/.gitbook/assets/components/provider.png and /dev/null differ
diff --git a/.gitbook/assets/components/subgraph.png b/.gitbook/assets/components/subgraph.png
deleted file mode 100644
index dba06b03e..000000000
Binary files a/.gitbook/assets/components/subgraph.png and /dev/null differ
diff --git a/.gitbook/assets/cover/data_farming_banner.png b/.gitbook/assets/cover/data_farming_banner.png
deleted file mode 100644
index 966c83563..000000000
Binary files a/.gitbook/assets/cover/data_farming_banner.png and /dev/null differ
diff --git a/.gitbook/assets/cover/data_scientists_banner.png b/.gitbook/assets/cover/data_scientists_banner.png
deleted file mode 100644
index 26b989898..000000000
Binary files a/.gitbook/assets/cover/data_scientists_banner.png and /dev/null differ
diff --git a/.gitbook/assets/cover/defi_banner.png b/.gitbook/assets/cover/defi_banner.png
deleted file mode 100644
index a453ebfec..000000000
Binary files a/.gitbook/assets/cover/defi_banner.png and /dev/null differ
diff --git a/.gitbook/assets/cover/defi_card.png b/.gitbook/assets/cover/defi_card.png
deleted file mode 100644
index c1f097bff..000000000
Binary files a/.gitbook/assets/cover/defi_card.png and /dev/null differ
diff --git a/.gitbook/assets/cover/docs_banner.png b/.gitbook/assets/cover/docs_banner.png
deleted file mode 100644
index 427abfb6d..000000000
Binary files a/.gitbook/assets/cover/docs_banner.png and /dev/null differ
diff --git a/.gitbook/assets/cover/predictoor_banner.png b/.gitbook/assets/cover/predictoor_banner.png
deleted file mode 100644
index 399ff4227..000000000
Binary files a/.gitbook/assets/cover/predictoor_banner.png and /dev/null differ
diff --git a/.gitbook/assets/data-farming/predictoordf_main.png b/.gitbook/assets/data-farming/predictoordf_main.png
deleted file mode 100644
index df12e0309..000000000
Binary files a/.gitbook/assets/data-farming/predictoordf_main.png and /dev/null differ
diff --git a/.gitbook/assets/data-scientists/data-value-creation-loop.png b/.gitbook/assets/data-scientists/data-value-creation-loop.png
deleted file mode 100644
index 973616e0f..000000000
Binary files a/.gitbook/assets/data-scientists/data-value-creation-loop.png and /dev/null differ
diff --git a/.gitbook/assets/ddo.js/validate-flow.puml b/.gitbook/assets/ddo.js/validate-flow.puml
deleted file mode 100644
index 17e0c8599..000000000
--- a/.gitbook/assets/ddo.js/validate-flow.puml
+++ /dev/null
@@ -1,39 +0,0 @@
-
-@startuml "DDO Validation Flow using DDO.js Library"
-title "DDO Validation Flow using DDO.js Library"
-
-skinparam sequenceArrowThickness 2
-skinparam roundcorner 10
-skinparam maxmessagesize 85
-skinparam sequenceParticipant underline
-
-participant "Ocean.js" as ocean_js
-participant "Ocean Node" as ocean_node
-database "Ocean Node's Database\n(Typesense/ElasticSearch)" as db
-participant "Smart Contracts" as smart_contracts
-participant "DDO.js" as ddo_js
-
-group DDO Validation
-ocean_js -> ocean_node: Requests validation for DDO structure
-ocean_node -> ddo_js: Validate DDO structure
-ddo_js -> ddo_js: Use SHACL schemas depending on DDO version for validation
-alt Validation is successful
- ddo_js --> ocean_node: Response with success
- ocean_node --> ocean_js: Returns **200 OK** response
- ocean_js -> smart_contracts: Calls **setMetadata** on chain
- ocean_js --> ocean_js: Returns DID
- group Ocean Node indexes new DDOs
- smart_contracts --> smart_contracts: Emit event **MetadataCreated**
- ocean_node -> smart_contracts: Listens to **MetadataCreated** events
- ocean_node -> ocean_node: Indexes DDO from chain in processMetadataEvent
- ocean_node -> db: Stores DDO in the database
- db --> ocean_node
- end group
- else Validation is **NOT** successful
- ddo_js --> ocean_node: Response with error and log the message
- ocean_node --> ocean_js: Returns error status code
- end
-
-end group
-
-@enduml
\ No newline at end of file
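A companion sketch for the "Ocean Node indexes new DDOs" group in the deleted validate-flow.puml: a listener that subscribes to MetadataCreated events and stores the decoded DDO. The event signature, addresses, RPC URL, and decoding below are simplified assumptions for illustration; only the ethers.js calls (JsonRpcProvider, Contract, contract.on) are real API.

// Sketch (ethers v6) of the indexing step: react to MetadataCreated
// and persist the DDO. The ABI here is an assumed, simplified shape,
// not the real Ocean data NFT template ABI.
import { Contract, JsonRpcProvider } from "ethers";

const abi = ["event MetadataCreated(address indexed createdBy, bytes data)"];

// Stub for the Typesense/Elasticsearch write the diagram shows.
async function storeInDatabase(ddo: object): Promise<void> {
  console.log("indexed DDO", ddo);
}

async function main() {
  const provider = new JsonRpcProvider("http://localhost:8545"); // assumed RPC endpoint
  const nft = new Contract("0xDataNftAddress", abi, provider);   // placeholder address

  // Roughly what processMetadataEvent does: decode the DDO carried in
  // the event payload, then store it. This assumes a plain-JSON
  // payload; a real node also handles encryption/compression flags.
  nft.on("MetadataCreated", async (createdBy: string, data: string) => {
    const ddo = JSON.parse(Buffer.from(data.slice(2), "hex").toString("utf8"));
    await storeInDatabase(ddo);
  });
}

main();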
diff --git a/.gitbook/assets/deployment/image (3).png b/.gitbook/assets/deployment/image (3).png
deleted file mode 100644
index 8c49613fa..000000000
Binary files a/.gitbook/assets/deployment/image (3).png and /dev/null differ
diff --git a/.gitbook/assets/deployment/image (4).png b/.gitbook/assets/deployment/image (4).png
deleted file mode 100644
index 8c49613fa..000000000
Binary files a/.gitbook/assets/deployment/image (4).png and /dev/null differ
diff --git a/.gitbook/assets/general/dao.jpeg b/.gitbook/assets/general/dao.jpeg
deleted file mode 100644
index 6c291bc36..000000000
Binary files a/.gitbook/assets/general/dao.jpeg and /dev/null differ
diff --git a/.gitbook/assets/general/developers.png b/.gitbook/assets/general/developers.png
deleted file mode 100644
index 3f6a86bf8..000000000
Binary files a/.gitbook/assets/general/developers.png and /dev/null differ
diff --git a/.gitbook/assets/general/explore_ocean.png b/.gitbook/assets/general/explore_ocean.png
deleted file mode 100644
index f2140ae88..000000000
Binary files a/.gitbook/assets/general/explore_ocean.png and /dev/null differ
diff --git a/.gitbook/assets/general/whirlpool.png b/.gitbook/assets/general/whirlpool.png
deleted file mode 100644
index c3b1422b9..000000000
Binary files a/.gitbook/assets/general/whirlpool.png and /dev/null differ
diff --git a/.gitbook/assets/gif/200.webp b/.gitbook/assets/gif/200.webp
deleted file mode 100644
index b978dd0ae..000000000
Binary files a/.gitbook/assets/gif/200.webp and /dev/null differ
diff --git a/.gitbook/assets/gif/aquaman-fade.gif b/.gitbook/assets/gif/aquaman-fade.gif
deleted file mode 100644
index 29c6c6138..000000000
Binary files a/.gitbook/assets/gif/aquaman-fade.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/aquaman-gold.gif b/.gitbook/assets/gif/aquaman-gold.gif
deleted file mode 100644
index 3f83e002e..000000000
Binary files a/.gitbook/assets/gif/aquaman-gold.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/avatar-pick-whale.gif b/.gitbook/assets/gif/avatar-pick-whale.gif
deleted file mode 100644
index 7762b2274..000000000
Binary files a/.gitbook/assets/gif/avatar-pick-whale.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/avatar-plugin.gif b/.gitbook/assets/gif/avatar-plugin.gif
deleted file mode 100644
index 08d1ec35a..000000000
Binary files a/.gitbook/assets/gif/avatar-plugin.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/big-money.gif b/.gitbook/assets/gif/big-money.gif
deleted file mode 100644
index 9276d3918..000000000
Binary files a/.gitbook/assets/gif/big-money.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/cash-flow.gif b/.gitbook/assets/gif/cash-flow.gif
deleted file mode 100644
index 89d495c49..000000000
Binary files a/.gitbook/assets/gif/cash-flow.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/clueless-shopping.gif b/.gitbook/assets/gif/clueless-shopping.gif
deleted file mode 100644
index 67146b0a9..000000000
Binary files a/.gitbook/assets/gif/clueless-shopping.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/giphy.gif b/.gitbook/assets/gif/giphy.gif
deleted file mode 100644
index a5e74195f..000000000
Binary files a/.gitbook/assets/gif/giphy.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/giphy.webp b/.gitbook/assets/gif/giphy.webp
deleted file mode 100644
index 21682362a..000000000
Binary files a/.gitbook/assets/gif/giphy.webp and /dev/null differ
diff --git a/.gitbook/assets/gif/hustlin.gif b/.gitbook/assets/gif/hustlin.gif
deleted file mode 100644
index 6a2e918e8..000000000
Binary files a/.gitbook/assets/gif/hustlin.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/just-publish.gif b/.gitbook/assets/gif/just-publish.gif
deleted file mode 100644
index dfa0b1e3a..000000000
Binary files a/.gitbook/assets/gif/just-publish.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/kermit-typing.gif b/.gitbook/assets/gif/kermit-typing.gif
deleted file mode 100644
index 4e0991658..000000000
Binary files a/.gitbook/assets/gif/kermit-typing.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/like-a-boss.gif b/.gitbook/assets/gif/like-a-boss.gif
deleted file mode 100644
index 3cf63a3a6..000000000
Binary files a/.gitbook/assets/gif/like-a-boss.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/love-ice-melting.gif b/.gitbook/assets/gif/love-ice-melting.gif
deleted file mode 100644
index 9a57fd1a7..000000000
Binary files a/.gitbook/assets/gif/love-ice-melting.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/mafs.gif b/.gitbook/assets/gif/mafs.gif
deleted file mode 100644
index 3d19c0e1f..000000000
Binary files a/.gitbook/assets/gif/mafs.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/making-money-is-fun.gif b/.gitbook/assets/gif/making-money-is-fun.gif
deleted file mode 100644
index f1930f3db..000000000
Binary files a/.gitbook/assets/gif/making-money-is-fun.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/many-penguins.gif b/.gitbook/assets/gif/many-penguins.gif
deleted file mode 100644
index f0361c8de..000000000
Binary files a/.gitbook/assets/gif/many-penguins.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/money-robot.gif b/.gitbook/assets/gif/money-robot.gif
deleted file mode 100644
index 4ea8aa78b..000000000
Binary files a/.gitbook/assets/gif/money-robot.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/morpheus-taunting.gif b/.gitbook/assets/gif/morpheus-taunting.gif
deleted file mode 100644
index 2211fb359..000000000
Binary files a/.gitbook/assets/gif/morpheus-taunting.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/my-data.gif b/.gitbook/assets/gif/my-data.gif
deleted file mode 100644
index f2dd42fb4..000000000
Binary files a/.gitbook/assets/gif/my-data.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/shopping-minions.gif b/.gitbook/assets/gif/shopping-minions.gif
deleted file mode 100644
index d49c78764..000000000
Binary files a/.gitbook/assets/gif/shopping-minions.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/talk-data-to-me.gif b/.gitbook/assets/gif/talk-data-to-me.gif
deleted file mode 100644
index c147240a4..000000000
Binary files a/.gitbook/assets/gif/talk-data-to-me.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/the-algorithm.gif b/.gitbook/assets/gif/the-algorithm.gif
deleted file mode 100644
index 4df0b3394..000000000
Binary files a/.gitbook/assets/gif/the-algorithm.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/underwater-buddy-peewee.gif b/.gitbook/assets/gif/underwater-buddy-peewee.gif
deleted file mode 100644
index b87c3f4a1..000000000
Binary files a/.gitbook/assets/gif/underwater-buddy-peewee.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/underwater-treasure.gif b/.gitbook/assets/gif/underwater-treasure.gif
deleted file mode 100644
index 6cc01470d..000000000
Binary files a/.gitbook/assets/gif/underwater-treasure.gif and /dev/null differ
diff --git a/.gitbook/assets/gif/welcome-to-my-dojo.gif b/.gitbook/assets/gif/welcome-to-my-dojo.gif
deleted file mode 100644
index 309ebcb0c..000000000
Binary files a/.gitbook/assets/gif/welcome-to-my-dojo.gif and /dev/null differ
diff --git a/.gitbook/assets/hosting/azure1.png b/.gitbook/assets/hosting/azure1.png
deleted file mode 100644
index fa00009ac..000000000
Binary files a/.gitbook/assets/hosting/azure1.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/azure2.png b/.gitbook/assets/hosting/azure2.png
deleted file mode 100644
index 511ff1999..000000000
Binary files a/.gitbook/assets/hosting/azure2.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/azure3.png b/.gitbook/assets/hosting/azure3.png
deleted file mode 100644
index 4cf63916f..000000000
Binary files a/.gitbook/assets/hosting/azure3.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/azure4.png b/.gitbook/assets/hosting/azure4.png
deleted file mode 100644
index 2b0746419..000000000
Binary files a/.gitbook/assets/hosting/azure4.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/azure5.png b/.gitbook/assets/hosting/azure5.png
deleted file mode 100644
index c07525e5b..000000000
Binary files a/.gitbook/assets/hosting/azure5.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/azure6.png b/.gitbook/assets/hosting/azure6.png
deleted file mode 100644
index 4c576189e..000000000
Binary files a/.gitbook/assets/hosting/azure6.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/azure7.png b/.gitbook/assets/hosting/azure7.png
deleted file mode 100644
index 000dc780b..000000000
Binary files a/.gitbook/assets/hosting/azure7.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/azure8.png b/.gitbook/assets/hosting/azure8.png
deleted file mode 100644
index d424ccae2..000000000
Binary files a/.gitbook/assets/hosting/azure8.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/azure9.png b/.gitbook/assets/hosting/azure9.png
deleted file mode 100644
index f46b3c237..000000000
Binary files a/.gitbook/assets/hosting/azure9.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/uploader-ui-connect.png b/.gitbook/assets/hosting/uploader-ui-connect.png
deleted file mode 100644
index a84161b57..000000000
Binary files a/.gitbook/assets/hosting/uploader-ui-connect.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/uploader-ui-copy-ID.png b/.gitbook/assets/hosting/uploader-ui-copy-ID.png
deleted file mode 100644
index f3ddca19d..000000000
Binary files a/.gitbook/assets/hosting/uploader-ui-copy-ID.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/uploader-ui-get-ddo-link.png b/.gitbook/assets/hosting/uploader-ui-get-ddo-link.png
deleted file mode 100644
index 3c0ea9a53..000000000
Binary files a/.gitbook/assets/hosting/uploader-ui-get-ddo-link.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/uploader-ui-get-quote.png b/.gitbook/assets/hosting/uploader-ui-get-quote.png
deleted file mode 100644
index cc3b906a2..000000000
Binary files a/.gitbook/assets/hosting/uploader-ui-get-quote.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/uploader-ui-market-publish.png b/.gitbook/assets/hosting/uploader-ui-market-publish.png
deleted file mode 100644
index 7229ec335..000000000
Binary files a/.gitbook/assets/hosting/uploader-ui-market-publish.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/uploader-ui-network.png b/.gitbook/assets/hosting/uploader-ui-network.png
deleted file mode 100644
index 22eda668c..000000000
Binary files a/.gitbook/assets/hosting/uploader-ui-network.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/uploader-ui-upload.png b/.gitbook/assets/hosting/uploader-ui-upload.png
deleted file mode 100644
index 4ee477a9e..000000000
Binary files a/.gitbook/assets/hosting/uploader-ui-upload.png and /dev/null differ
diff --git a/.gitbook/assets/hosting/uploader-ui-wait.png b/.gitbook/assets/hosting/uploader-ui-wait.png
deleted file mode 100644
index c90b0c67d..000000000
Binary files a/.gitbook/assets/hosting/uploader-ui-wait.png and /dev/null differ
diff --git a/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..59c4961b5
Binary files /dev/null and b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..522a688b1
Binary files /dev/null and b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..e4761ce56
Binary files /dev/null and b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..6f9929fdc
Binary files /dev/null and b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..79b954430
Binary files /dev/null and b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..75c05a4b1
Binary files /dev/null and b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..a217ecd46
Binary files /dev/null and b/.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..41a9dda6e
Binary files /dev/null and b/.gitbook/assets/image (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (1) (1) (1) (1) (1).png b/.gitbook/assets/image (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..c3a43d1bf
Binary files /dev/null and b/.gitbook/assets/image (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (1) (1) (1) (1).png b/.gitbook/assets/image (1) (1) (1) (1).png
new file mode 100644
index 000000000..dd59f41a1
Binary files /dev/null and b/.gitbook/assets/image (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (1) (1) (1).png b/.gitbook/assets/image (1) (1) (1).png
new file mode 100644
index 000000000..83bdad381
Binary files /dev/null and b/.gitbook/assets/image (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (1) (1).png b/.gitbook/assets/image (1) (1).png
new file mode 100644
index 000000000..00a3d0636
Binary files /dev/null and b/.gitbook/assets/image (1) (1).png differ
diff --git a/.gitbook/assets/image (1).png b/.gitbook/assets/image (1).png
index 4f0abba46..f0ce00450 100644
Binary files a/.gitbook/assets/image (1).png and b/.gitbook/assets/image (1).png differ
diff --git a/.gitbook/assets/image (10) (1) (1) (1) (1).png b/.gitbook/assets/image (10) (1) (1) (1) (1).png
new file mode 100644
index 000000000..25d849355
Binary files /dev/null and b/.gitbook/assets/image (10) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (10) (1) (1) (1).png b/.gitbook/assets/image (10) (1) (1) (1).png
new file mode 100644
index 000000000..3a131657d
Binary files /dev/null and b/.gitbook/assets/image (10) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (10) (1) (1).png b/.gitbook/assets/image (10) (1) (1).png
new file mode 100644
index 000000000..3cd913f8c
Binary files /dev/null and b/.gitbook/assets/image (10) (1) (1).png differ
diff --git a/.gitbook/assets/image (10) (1).png b/.gitbook/assets/image (10) (1).png
new file mode 100644
index 000000000..5294ba623
Binary files /dev/null and b/.gitbook/assets/image (10) (1).png differ
diff --git a/.gitbook/assets/image (10).png b/.gitbook/assets/image (10).png
new file mode 100644
index 000000000..32cc33899
Binary files /dev/null and b/.gitbook/assets/image (10).png differ
diff --git a/.gitbook/assets/image (11) (1) (1) (1).png b/.gitbook/assets/image (11) (1) (1) (1).png
new file mode 100644
index 000000000..25d849355
Binary files /dev/null and b/.gitbook/assets/image (11) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (11) (1) (1).png b/.gitbook/assets/image (11) (1) (1).png
new file mode 100644
index 000000000..3a131657d
Binary files /dev/null and b/.gitbook/assets/image (11) (1) (1).png differ
diff --git a/.gitbook/assets/image (11) (1).png b/.gitbook/assets/image (11) (1).png
new file mode 100644
index 000000000..13b60f5a1
Binary files /dev/null and b/.gitbook/assets/image (11) (1).png differ
diff --git a/.gitbook/assets/image (11).png b/.gitbook/assets/image (11).png
new file mode 100644
index 000000000..441cc978c
Binary files /dev/null and b/.gitbook/assets/image (11).png differ
diff --git a/.gitbook/assets/image (12) (1) (1) (1).png b/.gitbook/assets/image (12) (1) (1) (1).png
new file mode 100644
index 000000000..5194324fd
Binary files /dev/null and b/.gitbook/assets/image (12) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (12) (1) (1).png b/.gitbook/assets/image (12) (1) (1).png
new file mode 100644
index 000000000..89488a91e
Binary files /dev/null and b/.gitbook/assets/image (12) (1) (1).png differ
diff --git a/.gitbook/assets/image (12) (1).png b/.gitbook/assets/image (12) (1).png
new file mode 100644
index 000000000..ef6d9da51
Binary files /dev/null and b/.gitbook/assets/image (12) (1).png differ
diff --git a/.gitbook/assets/image (12).png b/.gitbook/assets/image (12).png
new file mode 100644
index 000000000..f5ee3c846
Binary files /dev/null and b/.gitbook/assets/image (12).png differ
diff --git a/.gitbook/assets/image (13) (1) (1) (1).png b/.gitbook/assets/image (13) (1) (1) (1).png
new file mode 100644
index 000000000..25d849355
Binary files /dev/null and b/.gitbook/assets/image (13) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (13) (1) (1).png b/.gitbook/assets/image (13) (1) (1).png
new file mode 100644
index 000000000..89488a91e
Binary files /dev/null and b/.gitbook/assets/image (13) (1) (1).png differ
diff --git a/.gitbook/assets/image (13) (1).png b/.gitbook/assets/image (13) (1).png
new file mode 100644
index 000000000..ef6d9da51
Binary files /dev/null and b/.gitbook/assets/image (13) (1).png differ
diff --git a/.gitbook/assets/image (13).png b/.gitbook/assets/image (13).png
new file mode 100644
index 000000000..c75742c83
Binary files /dev/null and b/.gitbook/assets/image (13).png differ
diff --git a/.gitbook/assets/image (14) (1) (1) (1).png b/.gitbook/assets/image (14) (1) (1) (1).png
new file mode 100644
index 000000000..e1a4f04df
Binary files /dev/null and b/.gitbook/assets/image (14) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (14) (1) (1).png b/.gitbook/assets/image (14) (1) (1).png
new file mode 100644
index 000000000..f13fc3a47
Binary files /dev/null and b/.gitbook/assets/image (14) (1) (1).png differ
diff --git a/.gitbook/assets/image (14) (1).png b/.gitbook/assets/image (14) (1).png
new file mode 100644
index 000000000..6590b06b1
Binary files /dev/null and b/.gitbook/assets/image (14) (1).png differ
diff --git a/.gitbook/assets/image (14).png b/.gitbook/assets/image (14).png
new file mode 100644
index 000000000..0d18e58a1
Binary files /dev/null and b/.gitbook/assets/image (14).png differ
diff --git a/.gitbook/assets/image (15) (1) (1) (1).png b/.gitbook/assets/image (15) (1) (1) (1).png
new file mode 100644
index 000000000..078f70745
Binary files /dev/null and b/.gitbook/assets/image (15) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (15) (1) (1).png b/.gitbook/assets/image (15) (1) (1).png
new file mode 100644
index 000000000..1e86ed48e
Binary files /dev/null and b/.gitbook/assets/image (15) (1) (1).png differ
diff --git a/.gitbook/assets/image (15) (1).png b/.gitbook/assets/image (15) (1).png
new file mode 100644
index 000000000..dd8001957
Binary files /dev/null and b/.gitbook/assets/image (15) (1).png differ
diff --git a/.gitbook/assets/image (15).png b/.gitbook/assets/image (15).png
new file mode 100644
index 000000000..13697381b
Binary files /dev/null and b/.gitbook/assets/image (15).png differ
diff --git a/.gitbook/assets/image (16) (1) (1) (1).png b/.gitbook/assets/image (16) (1) (1) (1).png
new file mode 100644
index 000000000..1a5993303
Binary files /dev/null and b/.gitbook/assets/image (16) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (16) (1) (1).png b/.gitbook/assets/image (16) (1) (1).png
new file mode 100644
index 000000000..1e86ed48e
Binary files /dev/null and b/.gitbook/assets/image (16) (1) (1).png differ
diff --git a/.gitbook/assets/image (16) (1).png b/.gitbook/assets/image (16) (1).png
new file mode 100644
index 000000000..dc9e46196
Binary files /dev/null and b/.gitbook/assets/image (16) (1).png differ
diff --git a/.gitbook/assets/image (16).png b/.gitbook/assets/image (16).png
new file mode 100644
index 000000000..f54f2c946
Binary files /dev/null and b/.gitbook/assets/image (16).png differ
diff --git a/.gitbook/assets/image (17) (1) (1) (1).png b/.gitbook/assets/image (17) (1) (1) (1).png
new file mode 100644
index 000000000..9f7398401
Binary files /dev/null and b/.gitbook/assets/image (17) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (17) (1) (1).png b/.gitbook/assets/image (17) (1) (1).png
new file mode 100644
index 000000000..64bc16141
Binary files /dev/null and b/.gitbook/assets/image (17) (1) (1).png differ
diff --git a/.gitbook/assets/image (17) (1).png b/.gitbook/assets/image (17) (1).png
new file mode 100644
index 000000000..df554fdb7
Binary files /dev/null and b/.gitbook/assets/image (17) (1).png differ
diff --git a/.gitbook/assets/image (17).png b/.gitbook/assets/image (17).png
new file mode 100644
index 000000000..9c8157730
Binary files /dev/null and b/.gitbook/assets/image (17).png differ
diff --git a/.gitbook/assets/image (18) (1) (1) (1).png b/.gitbook/assets/image (18) (1) (1) (1).png
new file mode 100644
index 000000000..475024190
Binary files /dev/null and b/.gitbook/assets/image (18) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (18) (1) (1).png b/.gitbook/assets/image (18) (1) (1).png
new file mode 100644
index 000000000..64bc16141
Binary files /dev/null and b/.gitbook/assets/image (18) (1) (1).png differ
diff --git a/.gitbook/assets/image (18) (1).png b/.gitbook/assets/image (18) (1).png
new file mode 100644
index 000000000..c25293223
Binary files /dev/null and b/.gitbook/assets/image (18) (1).png differ
diff --git a/.gitbook/assets/image (18).png b/.gitbook/assets/image (18).png
new file mode 100644
index 000000000..a0546de87
Binary files /dev/null and b/.gitbook/assets/image (18).png differ
diff --git a/.gitbook/assets/image (19) (1) (1) (1).png b/.gitbook/assets/image (19) (1) (1) (1).png
new file mode 100644
index 000000000..cae12438b
Binary files /dev/null and b/.gitbook/assets/image (19) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (19) (1) (1).png b/.gitbook/assets/image (19) (1) (1).png
new file mode 100644
index 000000000..64bc16141
Binary files /dev/null and b/.gitbook/assets/image (19) (1) (1).png differ
diff --git a/.gitbook/assets/image (19) (1).png b/.gitbook/assets/image (19) (1).png
new file mode 100644
index 000000000..46a5b136a
Binary files /dev/null and b/.gitbook/assets/image (19) (1).png differ
diff --git a/.gitbook/assets/image (19).png b/.gitbook/assets/image (19).png
new file mode 100644
index 000000000..ca91627f4
Binary files /dev/null and b/.gitbook/assets/image (19).png differ
diff --git a/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..59c4961b5
Binary files /dev/null and b/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..71aa27a94
Binary files /dev/null and b/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..21f0d61b9
Binary files /dev/null and b/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..dfe7f40e6
Binary files /dev/null and b/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..79b954430
Binary files /dev/null and b/.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (2) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (2) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..a217ecd46
Binary files /dev/null and b/.gitbook/assets/image (2) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (2) (1) (1) (1) (1).png b/.gitbook/assets/image (2) (1) (1) (1) (1).png
new file mode 100644
index 000000000..c3a43d1bf
Binary files /dev/null and b/.gitbook/assets/image (2) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (2) (1) (1) (1).png b/.gitbook/assets/image (2) (1) (1) (1).png
new file mode 100644
index 000000000..e2d3379c8
Binary files /dev/null and b/.gitbook/assets/image (2) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (2) (1) (1).png b/.gitbook/assets/image (2) (1) (1).png
new file mode 100644
index 000000000..06ac4bc74
Binary files /dev/null and b/.gitbook/assets/image (2) (1) (1).png differ
diff --git a/.gitbook/assets/image (2) (1).png b/.gitbook/assets/image (2) (1).png
new file mode 100644
index 000000000..2c980fc08
Binary files /dev/null and b/.gitbook/assets/image (2) (1).png differ
diff --git a/.gitbook/assets/image (2).png b/.gitbook/assets/image (2).png
new file mode 100644
index 000000000..d13e89ce6
Binary files /dev/null and b/.gitbook/assets/image (2).png differ
diff --git a/.gitbook/assets/image (20) (1) (1) (1).png b/.gitbook/assets/image (20) (1) (1) (1).png
new file mode 100644
index 000000000..3c78c18b0
Binary files /dev/null and b/.gitbook/assets/image (20) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (20) (1) (1).png b/.gitbook/assets/image (20) (1) (1).png
new file mode 100644
index 000000000..bd53d4ced
Binary files /dev/null and b/.gitbook/assets/image (20) (1) (1).png differ
diff --git a/.gitbook/assets/image (20) (1).png b/.gitbook/assets/image (20) (1).png
new file mode 100644
index 000000000..07a7c2d1d
Binary files /dev/null and b/.gitbook/assets/image (20) (1).png differ
diff --git a/.gitbook/assets/image (20).png b/.gitbook/assets/image (20).png
new file mode 100644
index 000000000..9289fd7a0
Binary files /dev/null and b/.gitbook/assets/image (20).png differ
diff --git a/.gitbook/assets/image (21) (1) (1).png b/.gitbook/assets/image (21) (1) (1).png
new file mode 100644
index 000000000..109f8fc36
Binary files /dev/null and b/.gitbook/assets/image (21) (1) (1).png differ
diff --git a/.gitbook/assets/image (21) (1).png b/.gitbook/assets/image (21) (1).png
new file mode 100644
index 000000000..2c687973f
Binary files /dev/null and b/.gitbook/assets/image (21) (1).png differ
diff --git a/.gitbook/assets/image (21).png b/.gitbook/assets/image (21).png
new file mode 100644
index 000000000..3a45aa52d
Binary files /dev/null and b/.gitbook/assets/image (21).png differ
diff --git a/.gitbook/assets/image (22) (1).png b/.gitbook/assets/image (22) (1).png
new file mode 100644
index 000000000..097cf7216
Binary files /dev/null and b/.gitbook/assets/image (22) (1).png differ
diff --git a/.gitbook/assets/image (22).png b/.gitbook/assets/image (22).png
new file mode 100644
index 000000000..f303b8752
Binary files /dev/null and b/.gitbook/assets/image (22).png differ
diff --git a/.gitbook/assets/image (23) (1).png b/.gitbook/assets/image (23) (1).png
new file mode 100644
index 000000000..b51dd9253
Binary files /dev/null and b/.gitbook/assets/image (23) (1).png differ
diff --git a/.gitbook/assets/image (23).png b/.gitbook/assets/image (23).png
new file mode 100644
index 000000000..5edeca18d
Binary files /dev/null and b/.gitbook/assets/image (23).png differ
diff --git a/.gitbook/assets/image (24) (1).png b/.gitbook/assets/image (24) (1).png
new file mode 100644
index 000000000..b51dd9253
Binary files /dev/null and b/.gitbook/assets/image (24) (1).png differ
diff --git a/.gitbook/assets/image (24).png b/.gitbook/assets/image (24).png
new file mode 100644
index 000000000..5d808c74b
Binary files /dev/null and b/.gitbook/assets/image (24).png differ
diff --git a/.gitbook/assets/image (25) (1).png b/.gitbook/assets/image (25) (1).png
new file mode 100644
index 000000000..9484e7ceb
Binary files /dev/null and b/.gitbook/assets/image (25) (1).png differ
diff --git a/.gitbook/assets/image (25).png b/.gitbook/assets/image (25).png
new file mode 100644
index 000000000..7c0a4cb69
Binary files /dev/null and b/.gitbook/assets/image (25).png differ
diff --git a/.gitbook/assets/image (26) (1).png b/.gitbook/assets/image (26) (1).png
new file mode 100644
index 000000000..ff7c8ff32
Binary files /dev/null and b/.gitbook/assets/image (26) (1).png differ
diff --git a/.gitbook/assets/image (26).png b/.gitbook/assets/image (26).png
new file mode 100644
index 000000000..8e5dfd7f8
Binary files /dev/null and b/.gitbook/assets/image (26).png differ
diff --git a/.gitbook/assets/image (27) (1).png b/.gitbook/assets/image (27) (1).png
new file mode 100644
index 000000000..ff7c8ff32
Binary files /dev/null and b/.gitbook/assets/image (27) (1).png differ
diff --git a/.gitbook/assets/image (27).png b/.gitbook/assets/image (27).png
new file mode 100644
index 000000000..f9248dd24
Binary files /dev/null and b/.gitbook/assets/image (27).png differ
diff --git a/.gitbook/assets/image (28) (1).png b/.gitbook/assets/image (28) (1).png
new file mode 100644
index 000000000..1701322a3
Binary files /dev/null and b/.gitbook/assets/image (28) (1).png differ
diff --git a/.gitbook/assets/image (28).png b/.gitbook/assets/image (28).png
new file mode 100644
index 000000000..f29591681
Binary files /dev/null and b/.gitbook/assets/image (28).png differ
diff --git a/.gitbook/assets/image (29) (1).png b/.gitbook/assets/image (29) (1).png
new file mode 100644
index 000000000..b25b97465
Binary files /dev/null and b/.gitbook/assets/image (29) (1).png differ
diff --git a/.gitbook/assets/image (29).png b/.gitbook/assets/image (29).png
new file mode 100644
index 000000000..1cb787faa
Binary files /dev/null and b/.gitbook/assets/image (29).png differ
diff --git a/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..59c4961b5
Binary files /dev/null and b/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..ff401a8c5
Binary files /dev/null and b/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..21f0d61b9
Binary files /dev/null and b/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..91ea3b94e
Binary files /dev/null and b/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..7564046d9
Binary files /dev/null and b/.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (3) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (3) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..740a05107
Binary files /dev/null and b/.gitbook/assets/image (3) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (3) (1) (1) (1) (1).png b/.gitbook/assets/image (3) (1) (1) (1) (1).png
new file mode 100644
index 000000000..14e7fe026
Binary files /dev/null and b/.gitbook/assets/image (3) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (3) (1) (1) (1).png b/.gitbook/assets/image (3) (1) (1) (1).png
new file mode 100644
index 000000000..c73e8c23d
Binary files /dev/null and b/.gitbook/assets/image (3) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (3) (1) (1).png b/.gitbook/assets/image (3) (1) (1).png
new file mode 100644
index 000000000..5d0af724e
Binary files /dev/null and b/.gitbook/assets/image (3) (1) (1).png differ
diff --git a/.gitbook/assets/image (3) (1).png b/.gitbook/assets/image (3) (1).png
new file mode 100644
index 000000000..5f32f389c
Binary files /dev/null and b/.gitbook/assets/image (3) (1).png differ
diff --git a/.gitbook/assets/image (3).png b/.gitbook/assets/image (3).png
new file mode 100644
index 000000000..84a620fe3
Binary files /dev/null and b/.gitbook/assets/image (3).png differ
diff --git a/.gitbook/assets/image (30) (1).png b/.gitbook/assets/image (30) (1).png
new file mode 100644
index 000000000..f0b4ca049
Binary files /dev/null and b/.gitbook/assets/image (30) (1).png differ
diff --git a/.gitbook/assets/image (30).png b/.gitbook/assets/image (30).png
new file mode 100644
index 000000000..e1b2136bf
Binary files /dev/null and b/.gitbook/assets/image (30).png differ
diff --git a/.gitbook/assets/image (31) (1).png b/.gitbook/assets/image (31) (1).png
new file mode 100644
index 000000000..939af1944
Binary files /dev/null and b/.gitbook/assets/image (31) (1).png differ
diff --git a/.gitbook/assets/image (31).png b/.gitbook/assets/image (31).png
new file mode 100644
index 000000000..69de75134
Binary files /dev/null and b/.gitbook/assets/image (31).png differ
diff --git a/.gitbook/assets/image (32).png b/.gitbook/assets/image (32).png
new file mode 100644
index 000000000..93072664e
Binary files /dev/null and b/.gitbook/assets/image (32).png differ
diff --git a/.gitbook/assets/image (33).png b/.gitbook/assets/image (33).png
new file mode 100644
index 000000000..69d4d47f5
Binary files /dev/null and b/.gitbook/assets/image (33).png differ
diff --git a/.gitbook/assets/image (34).png b/.gitbook/assets/image (34).png
new file mode 100644
index 000000000..302b3539e
Binary files /dev/null and b/.gitbook/assets/image (34).png differ
diff --git a/.gitbook/assets/image (35).png b/.gitbook/assets/image (35).png
new file mode 100644
index 000000000..a8f01d4d2
Binary files /dev/null and b/.gitbook/assets/image (35).png differ
diff --git a/.gitbook/assets/image (36).png b/.gitbook/assets/image (36).png
new file mode 100644
index 000000000..70ddc26d4
Binary files /dev/null and b/.gitbook/assets/image (36).png differ
diff --git a/.gitbook/assets/image (37).png b/.gitbook/assets/image (37).png
new file mode 100644
index 000000000..70ddc26d4
Binary files /dev/null and b/.gitbook/assets/image (37).png differ
diff --git a/.gitbook/assets/image (38).png b/.gitbook/assets/image (38).png
new file mode 100644
index 000000000..3e7b990dc
Binary files /dev/null and b/.gitbook/assets/image (38).png differ
diff --git a/.gitbook/assets/image (39).png b/.gitbook/assets/image (39).png
new file mode 100644
index 000000000..2815f2751
Binary files /dev/null and b/.gitbook/assets/image (39).png differ
diff --git a/.gitbook/assets/image (4) (1) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (4) (1) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..2eba40695
Binary files /dev/null and b/.gitbook/assets/image (4) (1) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (4) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (4) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..5996980ed
Binary files /dev/null and b/.gitbook/assets/image (4) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (4) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (4) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..21f0d61b9
Binary files /dev/null and b/.gitbook/assets/image (4) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (4) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (4) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..6e3d59152
Binary files /dev/null and b/.gitbook/assets/image (4) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (4) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (4) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..f59c50625
Binary files /dev/null and b/.gitbook/assets/image (4) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (4) (1) (1) (1) (1).png b/.gitbook/assets/image (4) (1) (1) (1) (1).png
new file mode 100644
index 000000000..740a05107
Binary files /dev/null and b/.gitbook/assets/image (4) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (4) (1) (1) (1).png b/.gitbook/assets/image (4) (1) (1) (1).png
new file mode 100644
index 000000000..14e7fe026
Binary files /dev/null and b/.gitbook/assets/image (4) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (4) (1) (1).png b/.gitbook/assets/image (4) (1) (1).png
new file mode 100644
index 000000000..c73e8c23d
Binary files /dev/null and b/.gitbook/assets/image (4) (1) (1).png differ
diff --git a/.gitbook/assets/image (4) (1).png b/.gitbook/assets/image (4) (1).png
new file mode 100644
index 000000000..b0e7c861d
Binary files /dev/null and b/.gitbook/assets/image (4) (1).png differ
diff --git a/.gitbook/assets/image (4).png b/.gitbook/assets/image (4).png
new file mode 100644
index 000000000..3caeb8ec0
Binary files /dev/null and b/.gitbook/assets/image (4).png differ
diff --git a/.gitbook/assets/image (40).png b/.gitbook/assets/image (40).png
new file mode 100644
index 000000000..c71a65a0c
Binary files /dev/null and b/.gitbook/assets/image (40).png differ
diff --git a/.gitbook/assets/image (41).png b/.gitbook/assets/image (41).png
new file mode 100644
index 000000000..415b9bc56
Binary files /dev/null and b/.gitbook/assets/image (41).png differ
diff --git a/.gitbook/assets/image (42).png b/.gitbook/assets/image (42).png
new file mode 100644
index 000000000..f6860147b
Binary files /dev/null and b/.gitbook/assets/image (42).png differ
diff --git a/.gitbook/assets/image (43).png b/.gitbook/assets/image (43).png
new file mode 100644
index 000000000..f6860147b
Binary files /dev/null and b/.gitbook/assets/image (43).png differ
diff --git a/.gitbook/assets/image (44).png b/.gitbook/assets/image (44).png
new file mode 100644
index 000000000..c6e77fdcb
Binary files /dev/null and b/.gitbook/assets/image (44).png differ
diff --git a/.gitbook/assets/image (45).png b/.gitbook/assets/image (45).png
new file mode 100644
index 000000000..4e9c86abd
Binary files /dev/null and b/.gitbook/assets/image (45).png differ
diff --git a/.gitbook/assets/image (46).png b/.gitbook/assets/image (46).png
new file mode 100644
index 000000000..4e9c86abd
Binary files /dev/null and b/.gitbook/assets/image (46).png differ
diff --git a/.gitbook/assets/image (47).png b/.gitbook/assets/image (47).png
new file mode 100644
index 000000000..9b6b89c05
Binary files /dev/null and b/.gitbook/assets/image (47).png differ
diff --git a/.gitbook/assets/image (48).png b/.gitbook/assets/image (48).png
new file mode 100644
index 000000000..37ad5a8c4
Binary files /dev/null and b/.gitbook/assets/image (48).png differ
diff --git a/.gitbook/assets/image (49).png b/.gitbook/assets/image (49).png
new file mode 100644
index 000000000..ae8393f3b
Binary files /dev/null and b/.gitbook/assets/image (49).png differ
diff --git a/.gitbook/assets/image (5) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (5) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..919a99edd
Binary files /dev/null and b/.gitbook/assets/image (5) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (5) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (5) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..3be03ce8b
Binary files /dev/null and b/.gitbook/assets/image (5) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (5) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (5) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..21f0d61b9
Binary files /dev/null and b/.gitbook/assets/image (5) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (5) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (5) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..f59c50625
Binary files /dev/null and b/.gitbook/assets/image (5) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (5) (1) (1) (1) (1).png b/.gitbook/assets/image (5) (1) (1) (1) (1).png
new file mode 100644
index 000000000..16406f3cf
Binary files /dev/null and b/.gitbook/assets/image (5) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (5) (1) (1) (1).png b/.gitbook/assets/image (5) (1) (1) (1).png
new file mode 100644
index 000000000..73f7cd378
Binary files /dev/null and b/.gitbook/assets/image (5) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (5) (1) (1).png b/.gitbook/assets/image (5) (1) (1).png
new file mode 100644
index 000000000..47b51d98f
Binary files /dev/null and b/.gitbook/assets/image (5) (1) (1).png differ
diff --git a/.gitbook/assets/image (5) (1).png b/.gitbook/assets/image (5) (1).png
new file mode 100644
index 000000000..8a7e04738
Binary files /dev/null and b/.gitbook/assets/image (5) (1).png differ
diff --git a/.gitbook/assets/image (5).png b/.gitbook/assets/image (5).png
new file mode 100644
index 000000000..81f5cadac
Binary files /dev/null and b/.gitbook/assets/image (5).png differ
diff --git a/.gitbook/assets/image (50).png b/.gitbook/assets/image (50).png
new file mode 100644
index 000000000..faa45e5b0
Binary files /dev/null and b/.gitbook/assets/image (50).png differ
diff --git a/.gitbook/assets/image (51).png b/.gitbook/assets/image (51).png
new file mode 100644
index 000000000..99ccac768
Binary files /dev/null and b/.gitbook/assets/image (51).png differ
diff --git a/.gitbook/assets/image (52).png b/.gitbook/assets/image (52).png
new file mode 100644
index 000000000..4a7439a72
Binary files /dev/null and b/.gitbook/assets/image (52).png differ
diff --git a/.gitbook/assets/image (53).png b/.gitbook/assets/image (53).png
new file mode 100644
index 000000000..875629d30
Binary files /dev/null and b/.gitbook/assets/image (53).png differ
diff --git a/.gitbook/assets/image (54).png b/.gitbook/assets/image (54).png
new file mode 100644
index 000000000..385bffe38
Binary files /dev/null and b/.gitbook/assets/image (54).png differ
diff --git a/.gitbook/assets/image (55).png b/.gitbook/assets/image (55).png
new file mode 100644
index 000000000..385bffe38
Binary files /dev/null and b/.gitbook/assets/image (55).png differ
diff --git a/.gitbook/assets/image (56).png b/.gitbook/assets/image (56).png
new file mode 100644
index 000000000..6bcb0f702
Binary files /dev/null and b/.gitbook/assets/image (56).png differ
diff --git a/.gitbook/assets/image (57).png b/.gitbook/assets/image (57).png
new file mode 100644
index 000000000..1b02d112b
Binary files /dev/null and b/.gitbook/assets/image (57).png differ
diff --git a/.gitbook/assets/image (58).png b/.gitbook/assets/image (58).png
new file mode 100644
index 000000000..41e3ddf2e
Binary files /dev/null and b/.gitbook/assets/image (58).png differ
diff --git a/.gitbook/assets/image (59).png b/.gitbook/assets/image (59).png
new file mode 100644
index 000000000..612f5bcec
Binary files /dev/null and b/.gitbook/assets/image (59).png differ
diff --git a/.gitbook/assets/image (6) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (6) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..ee099f6da
Binary files /dev/null and b/.gitbook/assets/image (6) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (6) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (6) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..b06cc8606
Binary files /dev/null and b/.gitbook/assets/image (6) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (6) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (6) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..f231c23bd
Binary files /dev/null and b/.gitbook/assets/image (6) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (6) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (6) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..3f98f26f3
Binary files /dev/null and b/.gitbook/assets/image (6) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (6) (1) (1) (1) (1).png b/.gitbook/assets/image (6) (1) (1) (1) (1).png
new file mode 100644
index 000000000..c8d2ea081
Binary files /dev/null and b/.gitbook/assets/image (6) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (6) (1) (1) (1).png b/.gitbook/assets/image (6) (1) (1) (1).png
new file mode 100644
index 000000000..5288f0360
Binary files /dev/null and b/.gitbook/assets/image (6) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (6) (1) (1).png b/.gitbook/assets/image (6) (1) (1).png
new file mode 100644
index 000000000..78a0360ff
Binary files /dev/null and b/.gitbook/assets/image (6) (1) (1).png differ
diff --git a/.gitbook/assets/image (6) (1).png b/.gitbook/assets/image (6) (1).png
new file mode 100644
index 000000000..e1b2136bf
Binary files /dev/null and b/.gitbook/assets/image (6) (1).png differ
diff --git a/.gitbook/assets/image (6).png b/.gitbook/assets/image (6).png
new file mode 100644
index 000000000..9a1d49a99
Binary files /dev/null and b/.gitbook/assets/image (6).png differ
diff --git a/.gitbook/assets/image (60).png b/.gitbook/assets/image (60).png
new file mode 100644
index 000000000..3dc1a8860
Binary files /dev/null and b/.gitbook/assets/image (60).png differ
diff --git a/.gitbook/assets/image (61).png b/.gitbook/assets/image (61).png
new file mode 100644
index 000000000..40045c9ba
Binary files /dev/null and b/.gitbook/assets/image (61).png differ
diff --git a/.gitbook/assets/image (62).png b/.gitbook/assets/image (62).png
new file mode 100644
index 000000000..ce4b38400
Binary files /dev/null and b/.gitbook/assets/image (62).png differ
diff --git a/.gitbook/assets/image (63).png b/.gitbook/assets/image (63).png
new file mode 100644
index 000000000..bd5690f58
Binary files /dev/null and b/.gitbook/assets/image (63).png differ
diff --git a/.gitbook/assets/image (64).png b/.gitbook/assets/image (64).png
new file mode 100644
index 000000000..bd5690f58
Binary files /dev/null and b/.gitbook/assets/image (64).png differ
diff --git a/.gitbook/assets/image (65).png b/.gitbook/assets/image (65).png
new file mode 100644
index 000000000..d77511fd0
Binary files /dev/null and b/.gitbook/assets/image (65).png differ
diff --git a/.gitbook/assets/image (66).png b/.gitbook/assets/image (66).png
new file mode 100644
index 000000000..c6f198904
Binary files /dev/null and b/.gitbook/assets/image (66).png differ
diff --git a/.gitbook/assets/image (67).png b/.gitbook/assets/image (67).png
new file mode 100644
index 000000000..c8b0b714a
Binary files /dev/null and b/.gitbook/assets/image (67).png differ
diff --git a/.gitbook/assets/image (68).png b/.gitbook/assets/image (68).png
new file mode 100644
index 000000000..d52e9204e
Binary files /dev/null and b/.gitbook/assets/image (68).png differ
diff --git a/.gitbook/assets/image (69).png b/.gitbook/assets/image (69).png
new file mode 100644
index 000000000..733a4b1a5
Binary files /dev/null and b/.gitbook/assets/image (69).png differ
diff --git a/.gitbook/assets/image (7) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (7) (1) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..ee099f6da
Binary files /dev/null and b/.gitbook/assets/image (7) (1) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (7) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (7) (1) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..677f02245
Binary files /dev/null and b/.gitbook/assets/image (7) (1) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (7) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (7) (1) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..355b1efc7
Binary files /dev/null and b/.gitbook/assets/image (7) (1) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (7) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (7) (1) (1) (1) (1) (1).png
new file mode 100644
index 000000000..e2ba9931b
Binary files /dev/null and b/.gitbook/assets/image (7) (1) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (7) (1) (1) (1) (1).png b/.gitbook/assets/image (7) (1) (1) (1) (1).png
new file mode 100644
index 000000000..1fec11b97
Binary files /dev/null and b/.gitbook/assets/image (7) (1) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (7) (1) (1) (1).png b/.gitbook/assets/image (7) (1) (1) (1).png
new file mode 100644
index 000000000..5288f0360
Binary files /dev/null and b/.gitbook/assets/image (7) (1) (1) (1).png differ
diff --git a/.gitbook/assets/image (7) (1) (1).png b/.gitbook/assets/image (7) (1) (1).png
new file mode 100644
index 000000000..64f42629d
Binary files /dev/null and b/.gitbook/assets/image (7) (1) (1).png differ
diff --git a/.gitbook/assets/image (7)
(1).png b/.gitbook/assets/image (7) (1).png new file mode 100644 index 000000000..d73b6a2c2 Binary files /dev/null and b/.gitbook/assets/image (7) (1).png differ diff --git a/.gitbook/assets/image (7).png b/.gitbook/assets/image (7).png new file mode 100644 index 000000000..41413cdcf Binary files /dev/null and b/.gitbook/assets/image (7).png differ diff --git a/.gitbook/assets/image (70).png b/.gitbook/assets/image (70).png new file mode 100644 index 000000000..733a4b1a5 Binary files /dev/null and b/.gitbook/assets/image (70).png differ diff --git a/.gitbook/assets/image (71).png b/.gitbook/assets/image (71).png new file mode 100644 index 000000000..dad1d897b Binary files /dev/null and b/.gitbook/assets/image (71).png differ diff --git a/.gitbook/assets/image (72).png b/.gitbook/assets/image (72).png new file mode 100644 index 000000000..1fe707efb Binary files /dev/null and b/.gitbook/assets/image (72).png differ diff --git a/.gitbook/assets/image (73).png b/.gitbook/assets/image (73).png new file mode 100644 index 000000000..cc3166795 Binary files /dev/null and b/.gitbook/assets/image (73).png differ diff --git a/.gitbook/assets/image (74).png b/.gitbook/assets/image (74).png new file mode 100644 index 000000000..599b658e8 Binary files /dev/null and b/.gitbook/assets/image (74).png differ diff --git a/.gitbook/assets/image (75).png b/.gitbook/assets/image (75).png new file mode 100644 index 000000000..599b658e8 Binary files /dev/null and b/.gitbook/assets/image (75).png differ diff --git a/.gitbook/assets/image (76).png b/.gitbook/assets/image (76).png new file mode 100644 index 000000000..6e3c3a24b Binary files /dev/null and b/.gitbook/assets/image (76).png differ diff --git a/.gitbook/assets/image (77).png b/.gitbook/assets/image (77).png new file mode 100644 index 000000000..3179d4687 Binary files /dev/null and b/.gitbook/assets/image (77).png differ diff --git a/.gitbook/assets/image (78).png b/.gitbook/assets/image (78).png new file mode 100644 index 000000000..0e317bfec Binary files /dev/null and b/.gitbook/assets/image (78).png differ diff --git a/.gitbook/assets/image (79).png b/.gitbook/assets/image (79).png new file mode 100644 index 000000000..976165c05 Binary files /dev/null and b/.gitbook/assets/image (79).png differ diff --git a/.gitbook/assets/image (8) (1) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (8) (1) (1) (1) (1) (1) (1) (1) (1).png new file mode 100644 index 000000000..d3d272845 Binary files /dev/null and b/.gitbook/assets/image (8) (1) (1) (1) (1) (1) (1) (1) (1).png differ diff --git a/.gitbook/assets/image (8) (1) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (8) (1) (1) (1) (1) (1) (1) (1).png new file mode 100644 index 000000000..15dc9c827 Binary files /dev/null and b/.gitbook/assets/image (8) (1) (1) (1) (1) (1) (1) (1).png differ diff --git a/.gitbook/assets/image (8) (1) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (8) (1) (1) (1) (1) (1) (1).png new file mode 100644 index 000000000..f9c850ac2 Binary files /dev/null and b/.gitbook/assets/image (8) (1) (1) (1) (1) (1) (1).png differ diff --git a/.gitbook/assets/image (8) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (8) (1) (1) (1) (1) (1).png new file mode 100644 index 000000000..ab7d3baad Binary files /dev/null and b/.gitbook/assets/image (8) (1) (1) (1) (1) (1).png differ diff --git a/.gitbook/assets/image (8) (1) (1) (1) (1).png b/.gitbook/assets/image (8) (1) (1) (1) (1).png new file mode 100644 index 000000000..09c2120d3 Binary files /dev/null and b/.gitbook/assets/image (8) (1) 
(1) (1) (1).png differ diff --git a/.gitbook/assets/image (8) (1) (1) (1).png b/.gitbook/assets/image (8) (1) (1) (1).png new file mode 100644 index 000000000..7ff41b43a Binary files /dev/null and b/.gitbook/assets/image (8) (1) (1) (1).png differ diff --git a/.gitbook/assets/image (8) (1) (1).png b/.gitbook/assets/image (8) (1) (1).png new file mode 100644 index 000000000..b76581d27 Binary files /dev/null and b/.gitbook/assets/image (8) (1) (1).png differ diff --git a/.gitbook/assets/image (8) (1).png b/.gitbook/assets/image (8) (1).png new file mode 100644 index 000000000..7100ce298 Binary files /dev/null and b/.gitbook/assets/image (8) (1).png differ diff --git a/.gitbook/assets/image (8).png b/.gitbook/assets/image (8).png new file mode 100644 index 000000000..b17f1befb Binary files /dev/null and b/.gitbook/assets/image (8).png differ diff --git a/.gitbook/assets/image (80).png b/.gitbook/assets/image (80).png new file mode 100644 index 000000000..960344e03 Binary files /dev/null and b/.gitbook/assets/image (80).png differ diff --git a/.gitbook/assets/image (81).png b/.gitbook/assets/image (81).png new file mode 100644 index 000000000..028a893ac Binary files /dev/null and b/.gitbook/assets/image (81).png differ diff --git a/.gitbook/assets/image (82).png b/.gitbook/assets/image (82).png new file mode 100644 index 000000000..21cfad2f4 Binary files /dev/null and b/.gitbook/assets/image (82).png differ diff --git a/.gitbook/assets/image (83).png b/.gitbook/assets/image (83).png new file mode 100644 index 000000000..297acf5c6 Binary files /dev/null and b/.gitbook/assets/image (83).png differ diff --git a/.gitbook/assets/image (84).png b/.gitbook/assets/image (84).png new file mode 100644 index 000000000..86b018300 Binary files /dev/null and b/.gitbook/assets/image (84).png differ diff --git a/.gitbook/assets/image (85).png b/.gitbook/assets/image (85).png new file mode 100644 index 000000000..aa2066040 Binary files /dev/null and b/.gitbook/assets/image (85).png differ diff --git a/.gitbook/assets/image (86).png b/.gitbook/assets/image (86).png new file mode 100644 index 000000000..aa2066040 Binary files /dev/null and b/.gitbook/assets/image (86).png differ diff --git a/.gitbook/assets/image (87).png b/.gitbook/assets/image (87).png new file mode 100644 index 000000000..e14a75207 Binary files /dev/null and b/.gitbook/assets/image (87).png differ diff --git a/.gitbook/assets/image (88).png b/.gitbook/assets/image (88).png new file mode 100644 index 000000000..81622a85d Binary files /dev/null and b/.gitbook/assets/image (88).png differ diff --git a/.gitbook/assets/image (89).png b/.gitbook/assets/image (89).png new file mode 100644 index 000000000..d633f1ce5 Binary files /dev/null and b/.gitbook/assets/image (89).png differ diff --git a/.gitbook/assets/image (9) (1) (1) (1) (1) (1).png b/.gitbook/assets/image (9) (1) (1) (1) (1) (1).png new file mode 100644 index 000000000..5332c9d3e Binary files /dev/null and b/.gitbook/assets/image (9) (1) (1) (1) (1) (1).png differ diff --git a/.gitbook/assets/image (9) (1) (1) (1) (1).png b/.gitbook/assets/image (9) (1) (1) (1) (1).png new file mode 100644 index 000000000..3a131657d Binary files /dev/null and b/.gitbook/assets/image (9) (1) (1) (1) (1).png differ diff --git a/.gitbook/assets/image (9) (1) (1) (1).png b/.gitbook/assets/image (9) (1) (1) (1).png new file mode 100644 index 000000000..4e8db36e7 Binary files /dev/null and b/.gitbook/assets/image (9) (1) (1) (1).png differ diff --git a/.gitbook/assets/image (9) (1) (1).png 
b/.gitbook/assets/image (9) (1) (1).png new file mode 100644 index 000000000..281d56538 Binary files /dev/null and b/.gitbook/assets/image (9) (1) (1).png differ diff --git a/.gitbook/assets/image (9) (1).png b/.gitbook/assets/image (9) (1).png new file mode 100644 index 000000000..bc76efbf9 Binary files /dev/null and b/.gitbook/assets/image (9) (1).png differ diff --git a/.gitbook/assets/image (9).png b/.gitbook/assets/image (9).png new file mode 100644 index 000000000..3b69af67b Binary files /dev/null and b/.gitbook/assets/image (9).png differ diff --git a/.gitbook/assets/image.png b/.gitbook/assets/image.png index 1efb3b3b9..f0ce00450 100644 Binary files a/.gitbook/assets/image.png and b/.gitbook/assets/image.png differ diff --git a/.gitbook/assets/market/Access.png b/.gitbook/assets/market/Access.png deleted file mode 100644 index af31b621e..000000000 Binary files a/.gitbook/assets/market/Access.png and /dev/null differ diff --git a/.gitbook/assets/market/Enter-Metadata.png b/.gitbook/assets/market/Enter-Metadata.png deleted file mode 100644 index 26afa886c..000000000 Binary files a/.gitbook/assets/market/Enter-Metadata.png and /dev/null differ diff --git a/.gitbook/assets/market/Preview.png b/.gitbook/assets/market/Preview.png deleted file mode 100644 index 9be6db898..000000000 Binary files a/.gitbook/assets/market/Preview.png and /dev/null differ diff --git a/.gitbook/assets/market/Price.png b/.gitbook/assets/market/Price.png deleted file mode 100644 index 1bccc402f..000000000 Binary files a/.gitbook/assets/market/Price.png and /dev/null differ diff --git a/.gitbook/assets/market/Publish-Link.png b/.gitbook/assets/market/Publish-Link.png deleted file mode 100644 index 5e8a5a141..000000000 Binary files a/.gitbook/assets/market/Publish-Link.png and /dev/null differ diff --git a/.gitbook/assets/market/Screenshot 2023-06-13 at 14.39.17.png b/.gitbook/assets/market/Screenshot 2023-06-13 at 14.39.17.png deleted file mode 100644 index d21c05310..000000000 Binary files a/.gitbook/assets/market/Screenshot 2023-06-13 at 14.39.17.png and /dev/null differ diff --git a/.gitbook/assets/market/Screenshot 2023-06-13 at 14.43.25.png b/.gitbook/assets/market/Screenshot 2023-06-13 at 14.43.25.png deleted file mode 100644 index ebde7c97a..000000000 Binary files a/.gitbook/assets/market/Screenshot 2023-06-13 at 14.43.25.png and /dev/null differ diff --git a/.gitbook/assets/market/Screenshot 2023-06-14 at 14.30.59.png b/.gitbook/assets/market/Screenshot 2023-06-14 at 14.30.59.png deleted file mode 100644 index 1f9b8c899..000000000 Binary files a/.gitbook/assets/market/Screenshot 2023-06-14 at 14.30.59.png and /dev/null differ diff --git a/.gitbook/assets/market/connect-wallet.png b/.gitbook/assets/market/connect-wallet.png deleted file mode 100644 index 9d5327b70..000000000 Binary files a/.gitbook/assets/market/connect-wallet.png and /dev/null differ diff --git a/.gitbook/assets/market/consume-1.png b/.gitbook/assets/market/consume-1.png deleted file mode 100644 index 6e849e5e3..000000000 Binary files a/.gitbook/assets/market/consume-1.png and /dev/null differ diff --git a/.gitbook/assets/market/consume-2.png b/.gitbook/assets/market/consume-2.png deleted file mode 100644 index eb171cd0f..000000000 Binary files a/.gitbook/assets/market/consume-2.png and /dev/null differ diff --git a/.gitbook/assets/market/consume-3.png b/.gitbook/assets/market/consume-3.png deleted file mode 100644 index 70486dcfa..000000000 Binary files a/.gitbook/assets/market/consume-3.png and /dev/null differ diff --git 
a/.gitbook/assets/market/consume-4.png b/.gitbook/assets/market/consume-4.png deleted file mode 100644 index 1d9c0767e..000000000 Binary files a/.gitbook/assets/market/consume-4.png and /dev/null differ diff --git a/.gitbook/assets/market/consume-5.png b/.gitbook/assets/market/consume-5.png deleted file mode 100644 index 617242ba7..000000000 Binary files a/.gitbook/assets/market/consume-5.png and /dev/null differ diff --git a/.gitbook/assets/market/consume-connect-wallet.png b/.gitbook/assets/market/consume-connect-wallet.png deleted file mode 100644 index ee05b3906..000000000 Binary files a/.gitbook/assets/market/consume-connect-wallet.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-10.1.png b/.gitbook/assets/market/market-customisation-10.1.png deleted file mode 100644 index 7519a33ed..000000000 Binary files a/.gitbook/assets/market/market-customisation-10.1.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-10.2.png b/.gitbook/assets/market/market-customisation-10.2.png deleted file mode 100644 index 37e36e4b1..000000000 Binary files a/.gitbook/assets/market/market-customisation-10.2.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-11.1.png b/.gitbook/assets/market/market-customisation-11.1.png deleted file mode 100644 index c85ddea0e..000000000 Binary files a/.gitbook/assets/market/market-customisation-11.1.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-12.png b/.gitbook/assets/market/market-customisation-12.png deleted file mode 100644 index 5133cf8db..000000000 Binary files a/.gitbook/assets/market/market-customisation-12.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-13.png b/.gitbook/assets/market/market-customisation-13.png deleted file mode 100644 index 3e4458b4f..000000000 Binary files a/.gitbook/assets/market/market-customisation-13.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-14.png b/.gitbook/assets/market/market-customisation-14.png deleted file mode 100644 index 8c1826ff5..000000000 Binary files a/.gitbook/assets/market/market-customisation-14.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-15.png b/.gitbook/assets/market/market-customisation-15.png deleted file mode 100644 index 28b70e09e..000000000 Binary files a/.gitbook/assets/market/market-customisation-15.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-18.png b/.gitbook/assets/market/market-customisation-18.png deleted file mode 100644 index cf5ed0f3d..000000000 Binary files a/.gitbook/assets/market/market-customisation-18.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-19.png b/.gitbook/assets/market/market-customisation-19.png deleted file mode 100644 index 725e984bf..000000000 Binary files a/.gitbook/assets/market/market-customisation-19.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-20.png b/.gitbook/assets/market/market-customisation-20.png deleted file mode 100644 index f12bf8910..000000000 Binary files a/.gitbook/assets/market/market-customisation-20.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-21.png b/.gitbook/assets/market/market-customisation-21.png deleted file mode 100644 index 6772204a0..000000000 Binary files a/.gitbook/assets/market/market-customisation-21.png and /dev/null differ diff --git 
a/.gitbook/assets/market/market-customisation-22.png b/.gitbook/assets/market/market-customisation-22.png deleted file mode 100644 index 683f129c9..000000000 Binary files a/.gitbook/assets/market/market-customisation-22.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-23.png b/.gitbook/assets/market/market-customisation-23.png deleted file mode 100644 index 3452489a6..000000000 Binary files a/.gitbook/assets/market/market-customisation-23.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-24.png b/.gitbook/assets/market/market-customisation-24.png deleted file mode 100644 index ae0d0fe0f..000000000 Binary files a/.gitbook/assets/market/market-customisation-24.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-25.png b/.gitbook/assets/market/market-customisation-25.png deleted file mode 100644 index 81591570c..000000000 Binary files a/.gitbook/assets/market/market-customisation-25.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-3.png b/.gitbook/assets/market/market-customisation-3.png deleted file mode 100644 index f66fcebee..000000000 Binary files a/.gitbook/assets/market/market-customisation-3.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-4.1.png b/.gitbook/assets/market/market-customisation-4.1.png deleted file mode 100644 index 4d32b0e92..000000000 Binary files a/.gitbook/assets/market/market-customisation-4.1.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-4.2.jpg b/.gitbook/assets/market/market-customisation-4.2.jpg deleted file mode 100644 index fcc7bcbf6..000000000 Binary files a/.gitbook/assets/market/market-customisation-4.2.jpg and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-4.png b/.gitbook/assets/market/market-customisation-4.png deleted file mode 100644 index 18d2b27c1..000000000 Binary files a/.gitbook/assets/market/market-customisation-4.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-5.png b/.gitbook/assets/market/market-customisation-5.png deleted file mode 100644 index 6a8669613..000000000 Binary files a/.gitbook/assets/market/market-customisation-5.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-6.1.png b/.gitbook/assets/market/market-customisation-6.1.png deleted file mode 100644 index c4b638687..000000000 Binary files a/.gitbook/assets/market/market-customisation-6.1.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-6.png b/.gitbook/assets/market/market-customisation-6.png deleted file mode 100644 index f9c106805..000000000 Binary files a/.gitbook/assets/market/market-customisation-6.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-7.1.png b/.gitbook/assets/market/market-customisation-7.1.png deleted file mode 100644 index 5a3e6ec70..000000000 Binary files a/.gitbook/assets/market/market-customisation-7.1.png and /dev/null differ diff --git a/.gitbook/assets/market/market-customisation-8.png b/.gitbook/assets/market/market-customisation-8.png deleted file mode 100644 index 7ec32182f..000000000 Binary files a/.gitbook/assets/market/market-customisation-8.png and /dev/null differ diff --git a/.gitbook/assets/market/publish-5.png b/.gitbook/assets/market/publish-5.png deleted file mode 100644 index f0de3f1f7..000000000 Binary files a/.gitbook/assets/market/publish-5.png and /dev/null differ diff --git 
a/.gitbook/assets/market/publish-6.png b/.gitbook/assets/market/publish-6.png deleted file mode 100644 index 5debf4ed9..000000000 Binary files a/.gitbook/assets/market/publish-6.png and /dev/null differ diff --git a/.gitbook/assets/market/publish-7.png b/.gitbook/assets/market/publish-7.png deleted file mode 100644 index c362a36a2..000000000 Binary files a/.gitbook/assets/market/publish-7.png and /dev/null differ diff --git a/.gitbook/assets/market/publish-page-2.png b/.gitbook/assets/market/publish-page-2.png deleted file mode 100644 index 0cb13b872..000000000 Binary files a/.gitbook/assets/market/publish-page-2.png and /dev/null differ diff --git a/.gitbook/assets/market/publish-page-before-edit.png b/.gitbook/assets/market/publish-page-before-edit.png deleted file mode 100644 index caeb4e3c0..000000000 Binary files a/.gitbook/assets/market/publish-page-before-edit.png and /dev/null differ diff --git a/.gitbook/assets/templates/TokengatedAIChatbot.png b/.gitbook/assets/templates/TokengatedAIChatbot.png deleted file mode 100644 index 15076fa9c..000000000 Binary files a/.gitbook/assets/templates/TokengatedAIChatbot.png and /dev/null differ diff --git a/.gitbook/assets/uploader/Diagrams_Retrieve_file_flow.png b/.gitbook/assets/uploader/Diagrams_Retrieve_file_flow.png deleted file mode 100644 index 993505788..000000000 Binary files a/.gitbook/assets/uploader/Diagrams_Retrieve_file_flow.png and /dev/null differ diff --git a/.gitbook/assets/uploader/Diagrams_Upload_file_flow.png b/.gitbook/assets/uploader/Diagrams_Upload_file_flow.png deleted file mode 100644 index e8a036618..000000000 Binary files a/.gitbook/assets/uploader/Diagrams_Upload_file_flow.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_1.png b/.gitbook/assets/uploader/uploader_screen_1.png deleted file mode 100644 index 2fafd7ef1..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_1.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_10.png b/.gitbook/assets/uploader/uploader_screen_10.png deleted file mode 100644 index a0a777349..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_10.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_11.png b/.gitbook/assets/uploader/uploader_screen_11.png deleted file mode 100644 index 0ade5189f..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_11.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_12.png b/.gitbook/assets/uploader/uploader_screen_12.png deleted file mode 100644 index 02cf37bf7..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_12.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_13.png b/.gitbook/assets/uploader/uploader_screen_13.png deleted file mode 100644 index 7229ec335..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_13.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_14.png b/.gitbook/assets/uploader/uploader_screen_14.png deleted file mode 100644 index 37928beb9..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_14.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_15.png b/.gitbook/assets/uploader/uploader_screen_15.png deleted file mode 100644 index 21c17f390..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_15.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_2.png b/.gitbook/assets/uploader/uploader_screen_2.png deleted file mode 
100644 index a84161b57..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_2.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_3.png b/.gitbook/assets/uploader/uploader_screen_3.png deleted file mode 100644 index 22eda668c..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_3.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_4.png b/.gitbook/assets/uploader/uploader_screen_4.png deleted file mode 100644 index cc3b906a2..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_4.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_5.png b/.gitbook/assets/uploader/uploader_screen_5.png deleted file mode 100644 index 4ee477a9e..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_5.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_6.png b/.gitbook/assets/uploader/uploader_screen_6.png deleted file mode 100644 index c90b0c67d..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_6.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_7.png b/.gitbook/assets/uploader/uploader_screen_7.png deleted file mode 100644 index 3c0ea9a53..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_7.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_8.png b/.gitbook/assets/uploader/uploader_screen_8.png deleted file mode 100644 index 0ade5189f..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_8.png and /dev/null differ diff --git a/.gitbook/assets/uploader/uploader_screen_9.png b/.gitbook/assets/uploader/uploader_screen_9.png deleted file mode 100644 index 5d58b2ade..000000000 Binary files a/.gitbook/assets/uploader/uploader_screen_9.png and /dev/null differ diff --git a/.gitbook/assets/vscode/main-screenshot.png b/.gitbook/assets/vscode/main-screenshot.png deleted file mode 100644 index a22e21264..000000000 Binary files a/.gitbook/assets/vscode/main-screenshot.png and /dev/null differ diff --git a/.gitbook/assets/vscode/setup-screenshot.png b/.gitbook/assets/vscode/setup-screenshot.png deleted file mode 100644 index 862774d7d..000000000 Binary files a/.gitbook/assets/vscode/setup-screenshot.png and /dev/null differ diff --git a/.gitbook/assets/wallet/binance-receive.png b/.gitbook/assets/wallet/binance-receive.png deleted file mode 100644 index 343c81d82..000000000 Binary files a/.gitbook/assets/wallet/binance-receive.png and /dev/null differ diff --git a/.gitbook/assets/wallet/confirm-backup-phrase (1).png b/.gitbook/assets/wallet/confirm-backup-phrase (1).png deleted file mode 100644 index 5fa40a1c5..000000000 Binary files a/.gitbook/assets/wallet/confirm-backup-phrase (1).png and /dev/null differ diff --git a/.gitbook/assets/wallet/confirm-backup-phrase.png b/.gitbook/assets/wallet/confirm-backup-phrase.png deleted file mode 100644 index 5fa40a1c5..000000000 Binary files a/.gitbook/assets/wallet/confirm-backup-phrase.png and /dev/null differ diff --git a/.gitbook/assets/wallet/create-new-metamask-wallet (1).png b/.gitbook/assets/wallet/create-new-metamask-wallet (1).png deleted file mode 100644 index f53a81cd2..000000000 Binary files a/.gitbook/assets/wallet/create-new-metamask-wallet (1).png and /dev/null differ diff --git a/.gitbook/assets/wallet/create-new-metamask-wallet.png b/.gitbook/assets/wallet/create-new-metamask-wallet.png deleted file mode 100644 index f53a81cd2..000000000 Binary files 
a/.gitbook/assets/wallet/create-new-metamask-wallet.png and /dev/null differ diff --git a/.gitbook/assets/wallet/manage-tokens (1).png b/.gitbook/assets/wallet/manage-tokens (1).png deleted file mode 100644 index 09a6f4c46..000000000 Binary files a/.gitbook/assets/wallet/manage-tokens (1).png and /dev/null differ diff --git a/.gitbook/assets/wallet/manage-tokens.png b/.gitbook/assets/wallet/manage-tokens.png deleted file mode 100644 index 09a6f4c46..000000000 Binary files a/.gitbook/assets/wallet/manage-tokens.png and /dev/null differ diff --git a/.gitbook/assets/wallet/metamask-add-network (1).png b/.gitbook/assets/wallet/metamask-add-network (1).png deleted file mode 100644 index 7b756c367..000000000 Binary files a/.gitbook/assets/wallet/metamask-add-network (1).png and /dev/null differ diff --git a/.gitbook/assets/wallet/metamask-add-network.png b/.gitbook/assets/wallet/metamask-add-network.png deleted file mode 100644 index 7b756c367..000000000 Binary files a/.gitbook/assets/wallet/metamask-add-network.png and /dev/null differ diff --git a/.gitbook/assets/wallet/metamask-browser-extension (1).png b/.gitbook/assets/wallet/metamask-browser-extension (1).png deleted file mode 100644 index 7f590505a..000000000 Binary files a/.gitbook/assets/wallet/metamask-browser-extension (1).png and /dev/null differ diff --git a/.gitbook/assets/wallet/metamask-browser-extension.png b/.gitbook/assets/wallet/metamask-browser-extension.png deleted file mode 100644 index 7f590505a..000000000 Binary files a/.gitbook/assets/wallet/metamask-browser-extension.png and /dev/null differ diff --git a/.gitbook/assets/wallet/metamask-chrome-extension (1).png b/.gitbook/assets/wallet/metamask-chrome-extension (1).png deleted file mode 100644 index af811b08d..000000000 Binary files a/.gitbook/assets/wallet/metamask-chrome-extension (1).png and /dev/null differ diff --git a/.gitbook/assets/wallet/metamask-chrome-extension.png b/.gitbook/assets/wallet/metamask-chrome-extension.png deleted file mode 100644 index af811b08d..000000000 Binary files a/.gitbook/assets/wallet/metamask-chrome-extension.png and /dev/null differ diff --git a/.gitbook/assets/wallet/polygon-bridge.png b/.gitbook/assets/wallet/polygon-bridge.png deleted file mode 100644 index ab3ac3b57..000000000 Binary files a/.gitbook/assets/wallet/polygon-bridge.png and /dev/null differ diff --git a/.gitbook/assets/wallet/polygon-explorer.png b/.gitbook/assets/wallet/polygon-explorer.png deleted file mode 100644 index 6cb11ed3a..000000000 Binary files a/.gitbook/assets/wallet/polygon-explorer.png and /dev/null differ diff --git a/.gitbook/assets/wallet/polygon-login.png b/.gitbook/assets/wallet/polygon-login.png deleted file mode 100644 index e9b366dcd..000000000 Binary files a/.gitbook/assets/wallet/polygon-login.png and /dev/null differ diff --git a/.gitbook/assets/wallet/polygon-ocean.png b/.gitbook/assets/wallet/polygon-ocean.png deleted file mode 100644 index aec7500b2..000000000 Binary files a/.gitbook/assets/wallet/polygon-ocean.png and /dev/null differ diff --git a/.gitbook/assets/wallet/polygon-wallet-page.png b/.gitbook/assets/wallet/polygon-wallet-page.png deleted file mode 100644 index 66421870c..000000000 Binary files a/.gitbook/assets/wallet/polygon-wallet-page.png and /dev/null differ diff --git a/.gitbook/assets/wallet/secret-backup-phrase (1).png b/.gitbook/assets/wallet/secret-backup-phrase (1).png deleted file mode 100644 index 04a2a2785..000000000 Binary files a/.gitbook/assets/wallet/secret-backup-phrase (1).png and /dev/null differ diff --git 
a/.gitbook/assets/wallet/secret-backup-phrase.png b/.gitbook/assets/wallet/secret-backup-phrase.png deleted file mode 100644 index 04a2a2785..000000000 Binary files a/.gitbook/assets/wallet/secret-backup-phrase.png and /dev/null differ diff --git a/README.md b/README.md index a89512c6b..c63b7b8a6 100644 --- a/README.md +++ b/README.md @@ -1,15 +1,9 @@ --- -description: Help for wherever you are on your Ocean Protocol journey. +description: 'Data Sovereignty for the Data Economy: Next Generation Data and AI Ecosystems' layout: landing --- -# 👋 Ocean docs +# Ocean Enterprise docs + +
+| discover | Learn how Ocean Enterprise transforms data sharing and monetization with its powerful Web3 open source tools. | discover | Ocean Enterprise_Cover-Styles_Zeichenfläche 1 Kopie 20.jpg |
+| user-guides | Follow the step-by-step instructions to unleash the power of Ocean Enterprise technologies! | user-guides | User Guide2.png |
+| developers | Find APIs, libraries, and other tools to build awesome dApps or integrate with the Ocean Enterprise ecosystem. | developers | Technical2.png |
+| infrastructure | For software architects and developers: deploy your own components in the Ocean Enterprise ecosystem. | infrastructure | Deployment.png |
-| discover | Learn how Ocean Protocol transforms data sharing and monetization with its powerful Web3 open source tools. | discover | discover_card.png |
-| user-guides | Follow the step-by-step instructions for a no-code solution to unleash the power of Ocean Protocol technologies! | user-guides | user_guides_card.png |
-| developers | Find APIs, libraries, and other tools to build awesome dApps or integrate with the Ocean Protocol ecosystem. | developers | developer_card.png |
-| data-scientists | Earn $ from AI models, track provenance, get more data. | data-scientists | data_scientists_card.png |
-| predictoor | Run AI-powered prediction bots or trading bots to earn $. | predictoor | predictoor_card.jpg |
-| data farming | Earn OCEAN rewards by predicting (and more streams to come). | data farming | data_farming_card.png |
-| infrastructure | For software architects and developers: deploy your own components in the Ocean Protocol ecosystem. | infrastructure | infrastructure_card.png |
-| contribute | Get involved! Learn how to contribute to Ocean Protocol. | contribute | contribute_card.png |
diff --git a/SUMMARY.md b/SUMMARY.md index e742e7de0..4ffe63c32 100644 --- a/SUMMARY.md +++ b/SUMMARY.md @@ -1,133 +1,63 @@ # Table of contents -* [👋 Ocean docs](README.md) -* [🌊 Discover Ocean](discover/README.md) - * [Why Ocean?](discover/why-ocean.md) - * [What is Ocean?](discover/what-is-ocean.md) - * [What can you do with Ocean?](discover/benefits.md) - * [OCEAN: The Ocean token](discover/ocean-token.md) - * [Networks](discover/networks/README.md) - * [Network Bridges](discover/networks/bridges.md) - * [FAQ](discover/faq.md) - * [Glossary](discover/glossary.md) -* [📚 User Guides](user-guides/README.md) - * [Basic concepts](user-guides/basic-concepts.md) - * [Using Wallets](user-guides/wallets/README.md) - * [Set Up MetaMask](user-guides/wallets/metamask-setup.md) - * [Host Assets](user-guides/asset-hosting/README.md) - * [Uploader](user-guides/asset-hosting/uploader.md) - * [Arweave](user-guides/asset-hosting/arweave.md) - * [AWS](user-guides/asset-hosting/aws.md) - * [Azure Cloud](user-guides/asset-hosting/azure-cloud.md) - * [Google Storage](user-guides/asset-hosting/google-storage.md) - * [Github](user-guides/asset-hosting/github.md) - * [Liquidity Pools \[deprecated\]](user-guides/remove-liquidity-pools.md) -* [💻 Developers](developers/README.md) - * [Architecture Overview](developers/architecture.md) - * [Ocean Nodes](developers/ocean-node/README.md) - * [Node Architecture](developers/ocean-node/node-architecture.md) - * [Contracts](developers/contracts/README.md) - * [Data NFTs](developers/contracts/data-nfts.md) - * [Datatokens](developers/contracts/datatokens.md) - * [Data NFTs and Datatokens](developers/contracts/datanft-and-datatoken.md) - * [Datatoken Templates](developers/contracts/datatoken-templates.md) - * [Roles](developers/contracts/roles.md) - * [Pricing Schemas](developers/contracts/pricing-schemas.md) - * [Fees](developers/contracts/fees.md) - * [Publish Flow Overview](developers/publishing-flow-architecture.md) - * [Revenue](developers/contracts/revenue.md) - * [Fractional Ownership](developers/fractional-ownership.md) - * [Community Monetization](developers/community-monetization.md) - * [Metadata](developers/metadata.md) - * [Identifiers (DIDs)](developers/identifiers.md) - * [New DDO Specification](developers/new-ddo-specification.md) - * [Obsolete DDO Specification](developers/ddo-specification.md) - * [Storage Specifications](developers/storage.md) - * [Fine-Grained Permissions](developers/fg-permissions.md) - * [Retrieve datatoken/data NFT addresses & Chain ID](developers/retrieve-datatoken-address.md) - * [Get API Keys for Blockchain Access](developers/get-api-keys-for-blockchain-access.md) - * [Barge](developers/barge/README.md) - * [Local Setup](developers/barge/local-setup-ganache.md) - * [Ocean.js](developers/ocean.js/README.md) - * [Configuration](developers/ocean.js/configuration.md) - * [Creating a data NFT](developers/ocean.js/creating-datanft.md) - * [Publish](developers/ocean.js/publish.md) - * [Mint Datatokens](developers/ocean.js/mint-datatoken.md) - * [Update Metadata](developers/ocean.js/update-metadata.md) - * [Asset Visibility](developers/ocean.js/asset-visibility.md) - * [Consume Asset](developers/ocean.js/consume-asset.md) - * [Run C2D Jobs](developers/ocean.js/cod-asset.md) - * [Ocean CLI](developers/ocean-cli/README.md) - * [Install](developers/ocean-cli/install.md) - * [Publish](developers/ocean-cli/publish.md) - * [Edit](developers/ocean-cli/edit.md) - * [Consume](developers/ocean-cli/consume.md) - * [Run C2D 
Jobs](developers/ocean-cli/run-c2d.md) - * [DDO.js](developers/ddo.js/README.md) - * [Instantiate a DDO](developers/ddo.js/instantiate-ddo.md) - * [DDO Fields interactions](developers/ddo.js/retrieve-fields.md) - * [Validate](developers/ddo.js/validate.md) - * [Edit DDO Fields](developers/ddo.js/edit-fields.md) - * [Compute to data](developers/compute-to-data/README.md) - * [Compute to data](developers/compute-to-data/README.md) - * [Architecture](developers/compute-to-data/compute-to-data-architecture.md) - * [Datasets & Algorithms](developers/compute-to-data/compute-to-data-datasets-algorithms.md) - * [Workflow](developers/compute-to-data/compute-workflow.md) - * [Writing Algorithms](developers/compute-to-data/compute-to-data-algorithms.md) - * [Compute Options](developers/compute-to-data/compute-options.md) - * [Uploader](developers/uploader/README.md) - * [Uploader.js](developers/uploader/uploader-js.md) - * [Uploader UI](developers/uploader/uploader-ui.md) - * [Uploader UI to Market](developers/uploader/uploader-ui-marketplace.md) - * [VSCode Extension](developers/vscode/README.md) - * [Old Infrastructure](developers/old-infrastructure/README.md) - * [Aquarius](developers/old-infrastructure/aquarius/README.md) - * [Asset Requests](developers/old-infrastructure/aquarius/asset-requests.md) - * [Chain Requests](developers/old-infrastructure/aquarius/chain-requests.md) - * [Other Requests](developers/old-infrastructure/aquarius/other-requests.md) - * [Provider](developers/old-infrastructure/provider/README.md) - * [General Endpoints](developers/old-infrastructure/provider/general-endpoints.md) - * [Encryption / Decryption](developers/old-infrastructure/provider/encryption-decryption.md) - * [Compute Endpoints](developers/old-infrastructure/provider/compute-endpoints.md) - * [Authentication Endpoints](developers/old-infrastructure/provider/authentication-endpoints.md) - * [Subgraph](developers/old-infrastructure/subgraph/README.md) - * [Get data NFTs](developers/old-infrastructure/subgraph/list-data-nfts.md) - * [Get data NFT information](developers/old-infrastructure/subgraph/get-data-nft-information.md) - * [Get datatokens](developers/old-infrastructure/subgraph/list-datatokens.md) - * [Get datatoken information](developers/old-infrastructure/subgraph/get-datatoken-information.md) - * [Get datatoken buyers](developers/old-infrastructure/subgraph/get-datatoken-buyers.md) - * [Get fixed-rate exchanges](developers/old-infrastructure/subgraph/list-fixed-rate-exchanges.md) - * [Get veOCEAN stats](developers/old-infrastructure/subgraph/get-veocean-stats.md) - * [Developer FAQ](developers/dev-faq.md) -* [📊 Data Scientists](data-scientists/README.md) - * [Ocean.py](data-scientists/ocean.py/README.md) - * [Install](data-scientists/ocean.py/install.md) - * [Local Setup](data-scientists/ocean.py/local-setup.md) - * [Remote Setup](data-scientists/ocean.py/remote-setup.md) - * [Publish Flow](data-scientists/ocean.py/publish-flow.md) - * [Consume Flow](data-scientists/ocean.py/consume-flow.md) - * [Compute Flow](data-scientists/ocean.py/compute-flow.md) - * [Ocean Instance Tech Details](data-scientists/ocean.py/technical-details.md) - * [Ocean Assets Tech Details](data-scientists/ocean.py/ocean-assets-tech-details.md) - * [Ocean Compute Tech Details](data-scientists/ocean.py/ocean-compute-tech-details.md) - * [Datatoken Interface Tech Details](data-scientists/ocean.py/datatoken-interface-tech-details.md) - * [Join a Data Challenge](data-scientists/join-a-data-challenge.md) - * [Sponsor a Data 
Challenge](data-scientists/sponsor-a-data-challenge.md) - * [Data Value-Creation Loop](data-scientists/the-data-value-creation-loop.md) - * [What data is valuable?](data-scientists/data-engineers.md) -* [👀 Predictoor](predictoor/README.md) -* [💰 Data Farming](data-farming/README.md) - * [Predictoor DF](data-farming/predictoordf.md) - * [Guide to Predictoor DF](data-farming/predictoordf-guide.md) - * [FAQ](data-farming/faq.md) -* [🔨 Infrastructure](infrastructure/README.md) - * [Set Up a Server](infrastructure/setup-server.md) - * [Deploy Aquarius](infrastructure/deploying-aquarius.md) - * [Deploy Provider](infrastructure/deploying-provider.md) - * [Deploy Ocean Subgraph](infrastructure/deploying-ocean-subgraph.md) - * [Deploy C2D](infrastructure/compute-to-data-minikube.md) - * [For C2D, Set Up Private Docker Registry](infrastructure/compute-to-data-docker-registry.md) -* [🤝 Contribute](contribute/README.md) - * [Collaborators](contribute/projects-using-ocean.md) - * [Contributor Code of Conduct](contribute/code-of-conduct.md) - * [Legal Requirements](contribute/legal-reqs.md) +* [Ocean Enterprise docs](README.md) +* [Introduction](discover/README.md) + * [What is Ocean Enterprise?](discover/what-is-ocean.md) + * [What can you do with Ocean Enterprise?](discover/benefits.md) + * [Ocean Enterprise Collective e.V.](discover/ocean-enterprise-collective-e.v..md) + * [Licensing](discover/licensing.md) + * [Whitepaper](discover/whitepaper.md) + * [Privacy Policy](discover/privacy-policy.md) + * [Imprint](discover/imprint.md) +* [User Guides](user-guides/README.md) + * [Using the OE Marketplace](user-guides/using-the-oe-marketplace/README.md) + * [Onboarding to the Marketplace](user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/README.md) + * [Install and configure Metamask in the browser](user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/install-and-configure-metamask-in-the-browser.md) + * [Adding funds to the wallet](user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/adding-funds-to-the-wallet.md) + * [Setting up the SSI wallet](user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/setting-up-the-ssi-wallet.md) + * [Logging in to the Marketplace](user-guides/using-the-oe-marketplace/logging-in-to-the-marketplace.md) + * [Publishing an asset](user-guides/using-the-oe-marketplace/publishing-an-asset/README.md) + * [Asset Metadata](user-guides/using-the-oe-marketplace/publishing-an-asset/asset-metadata.md) + * [Asset Level Credentials](user-guides/using-the-oe-marketplace/publishing-an-asset/asset-level-credentials.md) + * [Service Metadata and Credentials](user-guides/using-the-oe-marketplace/publishing-an-asset/service-metadata-and-credentials.md) + * [Service Pricing](user-guides/using-the-oe-marketplace/publishing-an-asset/service-pricing.md) + * [Additional Asset Description](user-guides/using-the-oe-marketplace/publishing-an-asset/additional-asset-description.md) + * [Preview](user-guides/using-the-oe-marketplace/publishing-an-asset/preview.md) + * [Submit](user-guides/using-the-oe-marketplace/publishing-an-asset/submit.md) + * [Editing an asset](user-guides/using-the-oe-marketplace/editing-an-asset/README.md) + * [Update the asset's attributes and state](user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-attributes-and-state.md) + * [Update the asset's services](user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-services/README.md) + * [Update an existing 
service](user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-services/update-an-existing-service.md) + * [Create a new service](user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-services/create-a-new-service.md) + * [Consuming an asset's service](user-guides/using-the-oe-marketplace/consuming-an-assets-service.md) + * [Running Compute-To-Data Jobs](user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/README.md) + * [C2D Concepts](user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/c2d-concepts.md) + * [Running and managing C2D jobs](user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/README.md) + * [Run a C2D starting from a dataset](user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/run-a-c2d-starting-from-a-dataset.md) + * [Run a C2D job starting from an algorithm](user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/run-a-c2d-job-starting-from-an-algorithm.md) + * [Manage the escrow account](user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/manage-the-escrow-account.md) + * [C2D jobs history](user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/c2d-jobs-history.md) +* [Technical Architecture](developers/README.md) + * [High-Level Architecture](developers/architecture.md) + * [Dataspace Configuration Options](developers/architecture-1.md) + * [OE software stack components](developers/oe-software-stack-components.md) + * [Assets and Services](developers/assets-and-services/README.md) + * [Identifiers (DIDs)](developers/assets-and-services/identifiers.md) + * [Storage Specifications](developers/assets-and-services/storage.md) + * [Metadata - to be updated](developers/assets-and-services/metadata.md) + * [OE DDO Specification - to be updated](developers/assets-and-services/new-ddo-specification.md) + * [Managing access to assets - to be updated](developers/fg-permissions.md) + * [Supported networks & currencies](developers/networks.md) + * [Fees](developers/fees.md) +* [Deployment guides](infrastructure/README.md) + * [Get API Keys for Blockchain Access](infrastructure/get-api-keys-for-blockchain-access.md) + * [OE Node](infrastructure/oe-node.md) + * [SSI Stack](infrastructure/ssi-stack/README.md) + * [walt.id Verifier](infrastructure/ssi-stack/walt.id-verifier.md) + * [walt.id Issuer](infrastructure/ssi-stack/walt.id-issuer.md) + * [walt.id Wallet](infrastructure/ssi-stack/walt.id-wallet.md) + * [OPA server](infrastructure/ssi-stack/opa-server.md) + * [Policy Server](infrastructure/policy-server/README.md) + * [Policy Server proxy](infrastructure/policy-server/policy-server-proxy.md) + * [OE Marketplace](infrastructure/oe-marketplace.md) + * [KYB service](infrastructure/kyb-service.md) diff --git a/contribute/README.md b/contribute/README.md deleted file mode 100644 index 026fac79b..000000000 --- a/contribute/README.md +++ /dev/null @@ -1,78 +0,0 @@ ---- -title: Ways to Contribute -description: Help develop Ocean Protocol software like a superhero -cover: ../.gitbook/assets/cover/contribute_banner.png -coverY: 0 ---- - -# 🤝 Contribute - -
-
-### Report a bug 🐞
-
-Have you found a bug in the code? To report a bug that _isn't a vulnerability_, go to the relevant GitHub repository, click on the _Issues_ tab, and select _Bug Report_.
-
-First, make sure that you search existing open + closed issues + PRs to see if your bug has already been reported there. If not, then go ahead and create a new bug report! 🦸
-
-#### Do you see an error in the Ocean Market?
-
-Follow the steps below to properly document your bug! Paste the screenshots into your GitHub issue.
-
-{% embed url="https://app.arcade.software/share/fUNrK6z2eurJ2C1ty2OG" fullWidth="false" %}
-{% endembed %}
-
-### Report vulnerabilities
-
-For all the super sleuths out there, you may be able to earn a bounty for reporting vulnerabilities in sensitive parts of the code. Check out this page on [Immunefi](https://immunefi.com/bounty/oceanprotocol/) for the latest bug bounties available. You can also responsibly disclose flaws by emailing us at [security@oceanprotocol.com](mailto:security@oceanprotocol.com).

-(figure: "Did you find a glitch in the code matrix?")
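If you prefer the command line, the same search-then-report flow can be driven with GitHub's `gh` CLI. A minimal sketch, assuming `gh` is installed and authenticated; the repository, search string, issue text, and `bug` label are illustrative placeholders:

```bash
# Search existing issues (open and closed) before filing anything new;
# use `gh search prs` to cover pull requests as well
gh search issues "publish fails" --repo oceanprotocol/market

# No match? File the bug report (all fields below are placeholders)
gh issue create \
  --repo oceanprotocol/market \
  --title "Publish flow fails at step 3 with HTTP 500" \
  --body "Steps to reproduce, expected vs. actual behaviour; screenshots attached." \
  --label bug
```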

-
-### Suggest a new feature 🤔💭
-
-Use the _Issues_ section of each repository and select _`Feature request`_ to suggest and discuss any features you would like to see added.
-
-As with bug reports, don't forget to search existing open + closed issues + PRs to see if something has already been suggested.
-
-### Improve core software
-
-It takes a tribe of awesome coders to build a tech stack, and you're invited to pitch in 😊 We'd love to have you contribute to any repository within the `oceanprotocol` [GitHub](https://github.com/oceanprotocol) organization!
-
-Before you start coding, please follow these basic guidelines:
-
-* If no feature request issue for your case is present, **please open one first before starting to work on something, so it can be discussed openly with the Ocean core team**.
-* Make yourself familiar with the repository-specific contribution requirements and code style requirements.
-* Because of the weird world of intellectual property, we need you to follow the [legal requirements](legal-reqs.md) for contributing code.
-* Be excellent to each other in the comments, as outlined in the [Contributor Code of Conduct](code-of-conduct.md).
-
-#### Your contribution workflow
-
-1. As an external developer, fork the respective repo and **push your code changes to your own fork.** Ocean core developers push directly to the repo under the `oceanprotocol` org.
-2. Provide the issue number when you open a PR, for example: `issue-001-short-feature-description`. The issue number `issue-001` needs to reference the GitHub issue that you are trying to fix. The short feature description helps us to quickly distinguish your PR among the other PRs in play.
-3. To get visibility and Continuous Integration feedback as early as possible, open your Pull Request as a `Draft`.
-4. Give it a meaningful title, and at least link to the respective issue in the Pull Request description, like `Fixes #23`. Describe your changes and mention things for reviewers to look out for; for UI changes, screenshots and videos are helpful.
-5. Once your Pull Request is ready, mark it as `Ready for Review`; in most repositories, code owners are automatically notified and asked for review.
-6. Get all CI checks green and address any change requests.
-7. If your PR stays open for a while and merge conflicts appear, merge or rebase your branch against the current `main` branch.
-8. Once a Pull Request is approved, you can merge it.
-
-Depending on the release management of each repository, your contribution will either be included in the next release, or deployed live automatically.
-
-Beyond GitHub, you can chat with most Ocean Protocol core developers on [Discord](https://discord.gg/TnXjkR5) if you have further development questions.
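A minimal command-line sketch of the workflow above, assuming your GitHub username is `YOURUSER` and using the `market` repository and a hypothetical issue #23 as placeholders:

```bash
# 1. Fork oceanprotocol/market on GitHub, then clone your fork
git clone https://github.com/YOURUSER/market.git
cd market
git remote add upstream https://github.com/oceanprotocol/market.git

# 2. Name the branch after the issue you are fixing
git checkout -b issue-23-fix-publish-form

# ...hack, then commit with a sign-off (see the legal requirements page)
git commit -s -m "Fix publish form validation, fixes #23"

# 3./4. Push, then open a Draft PR on GitHub that links the issue
git push -u origin issue-23-fix-publish-form

# 7. If merge conflicts appear later, rebase onto the current main
git fetch upstream
git rebase upstream/main
git push --force-with-lease
```

Opening the PR itself happens on GitHub (or with `gh pr create --draft`), per step 3.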
-
-### Develop a dApp or integration on top of Ocean Protocol
-
-We LOVE builders of dApps on Ocean! Nothing makes us feel prouder than seeing you create awesome things with these open-source tools.
-
-If you need ANY help, then we're here to talk with you on [Discord](https://discord.gg/TnXjkR5) to give you advice. We're also consistently improving these docs to help you. And... you're here :)
-
-### Improve the docs
-
-The docs repo can always be improved. If you find a mistake or have an improvement to make, then follow the steps in the [contribution workflow](./#your-contribution-workflow) to submit your changes.
-
-### Apply for a developer job
-
-Do you REALLY love building on Ocean Protocol? Consider joining us full-time! Our openings are listed at [https://github.com/oceanprotocol/jobs](https://github.com/oceanprotocol/jobs).
-
-Scroll a bit further, and at this page's footer, you'll find the social media links that allow you to join the Ocean community or engage in direct chats with us. 😊 Toodles!
diff --git a/contribute/code-of-conduct.md b/contribute/code-of-conduct.md
deleted file mode 100644
index cbf443937..000000000
--- a/contribute/code-of-conduct.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Contributor Code of Conduct
-description: Be excellent to each other.
----
-
-# Contributor Code of Conduct
-
-As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute to the project.
-
-We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or species.
-
-Examples of unacceptable behavior by participants include:
-
-* The use of sexualized language or imagery
-* Personal attacks
-* Trolling or insulting/derogatory comments
-* Public or private harassment
-* Publishing others' private information, such as physical or electronic addresses, without explicit permission
-* Deliberate intimidation
-* Other unethical or unprofessional conduct
-
-Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
-
-By adopting this Code of Conduct, project maintainers commit themselves to fairly and consistently applying these principles to every aspect of managing this project. Project maintainers who do not follow or enforce the Code of Conduct may be permanently removed from the project team.
-
-This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community.
-
-Instances of abusive, harassing, or otherwise unacceptable behavior directed at yourself or another community member may be reported by contacting a project maintainer at [conduct@oceanprotocol.com](mailto:conduct@oceanprotocol.com). All complaints will be reviewed and investigated and will result in a response that is appropriate to the circumstances. Maintainers are obligated to maintain confidentiality with regard to the reporter of an incident.
-
-This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.3.0, available at [contributor-covenant.org/version/1/3/0/](http://contributor-covenant.org/version/1/3/0/)
diff --git a/contribute/legal-reqs.md b/contribute/legal-reqs.md
deleted file mode 100644
index dad745042..000000000
--- a/contribute/legal-reqs.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-title: Legal Requirements when Contributing Code
-description: How to make sure your code contributions can be included in the Ocean Protocol codebase.
----
-
-## Ocean Protocol Software Licensing
-
-All Ocean Protocol code (software) is licensed under an [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.html).
This page describes the Ocean Protocol policy to ensure that all contributions to the Ocean Protocol code are also licensed under the Apache 2.0 license (and that the contributor has the right to license it as such). - -If you are: - -- contributing code to complete a _currently-open_ [Ocean Protocol bounty](https://gitcoin.co/explorer?network=mainnet&idx_status=open&keywords=oceanprotocol&order_by=-web3_created&org=oceanprotocol) or -- a _current_ employee of BigchainDB GmbH - -then there is nothing extra for you to do: licensing is already handled. - -Otherwise you are an "external contributor" and you must do the following: - -1. Make sure that every file you modified or created contains a copyright notice comment like the following (at the top of the file): - - ```text - # Copyright Ocean Protocol contributors - # SPDX-License-Identifier: Apache-2.0 - ``` - - - If a copyright notice is not present, then add one. - - If the first line of the file is a line beginning with `#!` (e.g. `#!/usr/bin/python3`) then leave that as the first line and add the copyright notice afterwards. - - If a copyright notice is present but it says something like `Copyright 2023 Ocean Protocol Foundation` then please change it to say the above. - - Make sure you're using the correct syntax for comments (which varies from language to language). The example shown above is for a Python file. - -1. Read the [Developer Certificate of Origin, Version 1.1](https://developercertificate.org/). -1. You will be asked to include a Signed-off-by line in all your commit messages. (Instructions are given in the next step.) Make sure you understand that including a Signed-off-by line in your commits certifies that you can make the statements in the Developer Certificate of Origin. If you have questions about this, then please [ask on Discord](https://discord.gg/TnXjkR5) or elsewhere. Do not continue until you fully understand. -1. Make sure that all your commit messages include a Signed-off-by line of the form: - - ```text - Signed-off-by: Random J Developer <random@developer.example.org> - ``` - - with your real name and your real email address. Sorry, no pseudonyms or anonymous contributions. Tip: You can tell Git to include a Signed-off-by line in a commit message by using `git commit --signoff` or `git commit -s`. - -## Credits - -The Developer Certificate of Origin was developed by the Linux community and has since been adopted by other projects, including many under the Linux Foundation umbrella (e.g. Hyperledger Fabric). -The process described above (with the Signed-off-by line in Git commits) is also based on [the process used by the Linux community](https://github.com/torvalds/linux/blob/master/Documentation/process/submitting-patches.rst#11-sign-your-work---the-developers-certificate-of-origin). - -## The Future - -In the future, the Ocean Protocol Foundation will dissolve and the policy will probably change to work more like the Linux Kernel, where _every_ contributor must include a Signed-off-by line in all Git commits. diff --git a/contribute/projects-using-ocean.md b/contribute/projects-using-ocean.md deleted file mode 100644 index ef5a8cb06..000000000 --- a/contribute/projects-using-ocean.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Collaborators -description: We are so proud of the companies that use Ocean Protocol tools! ---- - -# Collaborators -
- -From startups to full enterprises, we have so many collaborators using Ocean tech. Curious who's working with Ocean tools? Check out the up-to-date list of collaborators on the [Ecosystem page](https://oceanprotocol.com/ecosystem). - -### Show your support by trading OCEAN - -Visit [CoinGecko's OCEAN markets page](https://www.coingecko.com/en/coins/ocean-protocol#markets) to see all the exchanges that support OCEAN. Here, you can see the most liquid exchanges, and many of them even offer liquidity mining and other yield opportunities. - -### Acknowledgements - -[GitBook](https://www.gitbook.com/) supports this open-source project by providing hosting for this documentation. diff --git a/data-farming/README.md b/data-farming/README.md deleted file mode 100644 index 08a4297a7..000000000 --- a/data-farming/README.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -description: Earn OCEAN rewards by predicting (and more streams to come). -cover: ../.gitbook/assets/cover/data_farming_banner.png -coverY: 0 ---- - -# 💰 Data Farming - -**Data Farming (DF) is Ocean's incentive program.** It rewards OCEAN to participants who make predictions (and more streams to come). - -[**The DF webapp**](https://df.oceandao.org) is where users perform most DF actions. - -## Current DF Streams & Budgets - -DF currently has one stream: - -* [**Predictoor DF**](predictoordf.md)**.** Run prediction bots to earn continuously. Weekly Predictoor rewards are 3,750 OCEAN + 20,000 ROSE through 2025. - -All streams repeat **weekly**, starting Thursday at 00:00 UTC and ending Wednesday at 23:59 UTC. - -DF streams evolve over time. The next two sections cover past & future DF streams. - -## Past DF Streams - -In **Passive DF**, users locked OCEAN for veOCEAN. In **Active DF**, users allocated veOCEAN to curate data assets. - -* veOCEAN, Passive DF, and Active DF were [retired](https://blog.oceanprotocol.com/passive-volume-data-farming-airdrop-has-completed-they-are-now-retired-6933520b5fcb) on May 3, 2024, alongside an airdrop to veOCEAN holders. -* **veOCEAN holders can claim the airdrop & past rewards at** [**df.oceandao.org/rewards**](https://df.oceandao.org/rewards). -* The locked OCEAN will unlock according to its schedule (up to 4 years). -* [This article](https://blog.oceanprotocol.com/passive-volume-data-farming-airdrop-has-completed-they-are-now-retired-6933520b5fcb) has details. - -In **Challenge DF**, users did weekly one-off predictions. It was [retired](https://blog.oceanprotocol.com/df62-completes-and-df63-launches-predictoor-df-is-here-081fc78ceb70) on Nov 30, 2023. - -For further details, the ["Data Farming Series" article](https://blog.oceanprotocol.com/ocean-data-farming-series-c7922f1d0e45) chronicles week-by-week rewards and DF evolution. - -## Future DF Streams - -Potential DF evolution includes: - -* Scaling up Predictoor DF rewards. [Details](https://blog.oceanprotocol.com/ocean-protocol-update-2024-e463bf855b03#4da0). -* New stream: run Unified Backend nodes. [Details](https://blog.oceanprotocol.com/ocean-protocol-update-2024-e463bf855b03#f779). -* New stream: decentralized model training for world models. [Details](https://blog.oceanprotocol.com/ocean-protocol-update-2024-e463bf855b03#4da0). - -## Networks - -To engage in Predictoor DF, users submit predictions on Oasis Sapphire. Rewards for Predictoor DF are on Oasis Sapphire as well (the two are intertwined). - -Passive DF and Volume DF reward payouts are on the Ethereum network.
- -The [networks docs](../discover/networks/) have more info. - -## Further resources - -* The [**DF FAQ**](faq.md) answers more questions. -* Main DF GitHub repos: [df-py (backend)](https://github.com/oceanprotocol/df-py), [df-web (frontend)](https://github.com/oceanprotocol/df-web) -* The [Ocean Data Farming Series](https://blog.oceanprotocol.com/ocean-data-farming-series-c7922f1d0e45) article has a chronological account of all Data Farming activities since its inception. It links to related blog posts. - -*** - -_Next:_ [_Predictoor DF_](predictoordf.md) - -_Back:_ [_Docs main_](../) diff --git a/data-farming/faq.md b/data-farming/faq.md deleted file mode 100644 index 76a318653..000000000 --- a/data-farming/faq.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: Data Farming FAQ -description: Frequently Asked Questions about Data Farming ---- -## Data Farming FAQ - -### Staking and Risk -
- -What does "staking" mean in an Ocean context? - -Its precise meaning depends on the DF stream. - -- Predictoor DF: put OCEAN into a prediction transaction - -
- -
- -Are there any risks associated with DF? - -As with any system, inherent risks exist. We try to minimize them, as follows. - -- Predictoor DF: you stake a small amount of OCEAN in each epoch (e.g. every 5 min). If issues arise, you can get out quickly. -
- -
- -Is there any impermanent loss (IL) in my staking? - -No. IL is typically associated with providing liquidity to decentralized exchanges or pools. There are no pools involved in any of the DF streams [1]. -
- - -### Rewards Payout - -
- -What APYs can I expect? - -For Predictoor DF, it varies widely based on your prediction accuracy and other factors. -
- -
- -Can the DF rewards change during a given week? - -No. At the beginning of a new DF round, the rules are laid out: implicitly if nothing changed from the previous round, or explicitly in a blog post if there are new rules. - -Caveat: it's "no" in theory, at least! Sometimes there may be tweaks, e.g. if there is community consensus or a bug. -
- -
- -Where do I learn more about Predictoor DF? - -On its [docs page](predictoordf.md). -
- - - -Congrats! You've finished reading the Data Farming docs. - -_Next: Jump to [DF main](README.md)._ - -_Or: Jump to [Docs main](../README.md) and click on your interest._ - -_Back: [Predictoor DF Guide](predictoordf-guide.md)_ diff --git a/data-farming/predictoordf-guide.md b/data-farming/predictoordf-guide.md deleted file mode 100644 index fed32b712..000000000 --- a/data-farming/predictoordf-guide.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -description: >- - How to earn $ via Predictoor DF ---- - -# Guide to Predictoor DF -
- -In Predictoor DF (and Predictoor proper), you run prediction bots to earn continuously. This guide describes how to become eligible for OCEAN rewards and claim them, and the same for Oasis ROSE rewards. And of course, the first thing you need to do is become a predictoor. - -## How to become a predictoor - -- Play with the dapp: http://predictoor.ai -- Then go through "How to earn as a Predictoor" in the [Predictoor docs](https://docs.predictoor.ai) -- Or, go straight to the [quickstart README](https://github.com/oceanprotocol/pdr-backend/blob/main/READMEs/predictoor.md) :) 🏎️ - -## On OCEAN Rewards in Predictoor DF - -- **Duration:** ongoing -- **To be eligible:** predictoors are automatically eligible 🧘 -- **To claim:** recall that the OCEAN rewards act as more sales coming to you (as a predictoor). So you claim your OCEAN from sales in the usual way, by running the OCEAN payout script. See the [payout README](https://github.com/oceanprotocol/pdr-backend/blob/main/READMEs/payout.md) for specific instructions. - - -## On ROSE rewards in Predictoor DF - -- ⚠️ **To be eligible** for a given DF round: you MUST run the [OCEAN payout script](https://github.com/oceanprotocol/pdr-backend/blob/main/READMEs/payout.md) within 4 days after the round ends, i.e. between Thu 00:00 UTC & Sun 23:59 UTC -- **To claim:** See the [payout README](https://github.com/oceanprotocol/pdr-backend/blob/main/READMEs/payout.md) for specific instructions. - ---- - -_Next: Jump to [DF FAQ](faq.md)._ - -_Back: [Predictoor DF](predictoordf.md)_ diff --git a/data-farming/predictoordf.md b/data-farming/predictoordf.md deleted file mode 100644 index 7ec23dffc..000000000 --- a/data-farming/predictoordf.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -description: >- - Baseline sales for predictoors ---- -
- -**This page** is about Predictoor DF, and [this page](predictoordf-guide.md) is a guide. - -# Predictoor DF Overview - -**Predictoor DF** is a DF stream that amplifies predictoors' earnings, via extra sales of ASI Predictoor data feeds. - -Predictoor DF weekly rewards are 3,750 [OCEAN](https://www.coingecko.com/en/coins/ocean-protocol) and 20,000 ROSE (through 2025). - -## Introduction - -**[ASI Predictoor](../predictoor/README.md)** data feeds predict whether BTC, ETH etc will rise or fall 5min or 1h into the future. These feeds are crowdsourced by "predictoors": people running AI-powered prediction bots. - -**[Data Farming (DF)](../data-farming/README.md)** is Ocean's incentive program that rewards OCEAN to people who make crypto price predictions. - -You should be familiar with both Predictoor and DF before reading on. - -## Predictoor DF Timing - -Predictoor DF started on Nov 9, 2023, at the beginning of Data Farming Round 63 (DF63). It runs indefinitely. - -## Predictoor DF Rewards - -Predictoor DF has one component: [OCEAN](https://www.coingecko.com/en/coins/ocean-protocol) rewards. - -### OCEAN Rewards - -- A special "DF buyer" bot purchases Predictoor feeds. It started operating on Nov 9, 2023. Every day, it spends 1/7 of the weekly Predictoor OCEAN budget for another 24h subscription. It spends an equal amount per feed. (Currently there are 20 feeds: 10 x 5min and 10 x 1h.) -- The OCEAN comes from the Ocean DF budget, and specifically, the Active DF budget. -- Payout happens on Mondays, 4 days after the end of the DF round. -- Payout for a given predictoor is pro rata to the net earnings of that predictoor over that DF round, specifically (total sales $ to the predictoor) minus (predictoor stake slashed due to being wrong). - -## How to Earn $ Via Predictoor DF - -**Running a predictoor bot will automatically make you eligible for Predictoor DF rewards.** - -The [Predictoor DF user guide](predictoordf-guide.md) tells how to get started as a predictoor, and how to claim rewards. - ---- - -_Next: [Predictoor DF Guide](predictoordf-guide.md)_ - -_Back: [DF Main](README.md)_ diff --git a/data-scientists/README.md b/data-scientists/README.md deleted file mode 100644 index 99b36e081..000000000 --- a/data-scientists/README.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -description: Earn $, track data & compute provenance, and get more data -cover: ../.gitbook/assets/cover/data_scientists_banner.png -coverY: 0 ---- - -# 📊 Data Scientists - -### How does Ocean benefit data scientists? - -It offers three main benefits: - -* **Earn.** You can earn $ by doing crypto price predictions via [Predictoor](../predictoor/), by curating data in [Data Farming](../data-farming/), by competing in a [data challenge](join-a-data-challenge.md), and by selling data & models. -* **More Data.** Use [Compute-to-Data](../developers/compute-to-data/) to run your AI modeling algorithms against private data that was previously inaccessible. Browse [Ocean Market](https://market.oceanprotocol.com) and other Ocean-powered markets to find more data to improve your AI models. -* **Provenance.** The acts of publishing data, purchasing data, and consuming data are all recorded on the blockchain to make a tamper-proof audit trail. Know where your AI training data came from! - -### How do data scientists start using Ocean? - -Here are the most relevant Ocean tools to work with: - -* The [**ocean.py**](ocean.py/) library is built for the key environment of data scientists: Python.
It can simply be imported alongside other Python data science tools like numpy, matplotlib, scikit-learn and tensorflow. You can use it to publish & sell data assets, buy assets, transfer ownership, and more. -* Predictoor's [**pdr-backend repo**](https://github.com/oceanprotocol/pdr-backend) has Python-based tools to run bots for crypto prediction or trading. -* [**Compete in a data challenge**](join-a-data-challenge.md), or [sponsor one](sponsor-a-data-challenge.md). - -### Are there mental models for earning $ in data? - -Yes. This section has two other pages which elaborate: - -* [The Data Value Creation Loop](the-data-value-creation-loop.md) lays out the life cycle of data, and how to focus on high-value use cases. -* [What data is valuable](data-engineers.md) helps you think about pricing data. - -### Further resources - -The blog post ["How Ocean Can Benefit Data Scientists"](https://blog.oceanprotocol.com/how-ocean-can-benefit-data-scientists-7e502e5f1a5f) elaborates further on the benefits of more data, provenance, and earning.
diff --git a/data-scientists/data-engineers.md b/data-scientists/data-engineers.md deleted file mode 100644 index e95523247..000000000 --- a/data-scientists/data-engineers.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -description: How to research where supply meets demand... 💰🧑‍🏫 ---- - -# What data is valuable? - -

When you sell the right data at the right price to meet demand.

- -### Simple Truths - -A lot of people miss the mark on tokenizing data that actually _sells_. If your goal is to make money, then you have to research target audiences, find out who's currently buying data, and **correctly price** your data to meet that demand. - -To figure out which market segments are paying for data, it may help you to **go to the Ocean Market and sort by Sales.** - -But even then, it's not enough to just publish useful data on Ocean. **You need to market your data assets** to close sales. - -If you're still encountering challenges in generating income, don't worry! You can enter one of the [data challenges](https://oceanprotocol.com/challenges) to make sweet OCEAN rewards and build your data science skills. - -But what if you're a well-heeled company looking to create dApps or source data predictions? You can kickstart the value creation loop by working with Ocean Protocol to [sponsor a data challenge](sponsor-a-data-challenge.md). - -### What data could be useful for dApp builders? - -* **Government Open Data:** Governments serve as a rich and reliable source of data. However, this data often lacks proper documentation or poses challenges for data scientists to work with effectively. One idea is to clean and organize this data so that others can tap into this wealth of information with ease. For example, in one of the [data challenges](https://desights.ai/shared/challenge/8) we leveraged public real estate data from Dubai to build use cases for understanding and predicting valuations and rents. Local, state, and federal governments around the world provide access to valuable data. So make consuming that data easier to help consumers build useful products and help your local community. -* **Public APIs:** Data scientists can use free, public APIs to tokenize data in such a way that consumers can easily access it. [This](https://github.com/public-apis/public-apis) is a repository of some public APIs for a wide range of topics, from weather to gaming to finance. -* **On-Chain Data:** There is consistent demand for good decentralized finance (DeFi) data and an emerging need for decentralized social data. Thus, data scientists can query blockchain data to build and sell valuable datasets for consumers. -* **Datasets for training AI and foundation models:** Much of the uniqueness and value in your data consists of aggregating and cleaning data from different sources. You can scrape the web or source data from other sources to present to AI/ML engineers looking for data to train their models. diff --git a/data-scientists/join-a-data-challenge.md b/data-scientists/join-a-data-challenge.md deleted file mode 100644 index de8f9b79b..000000000 --- a/data-scientists/join-a-data-challenge.md +++ /dev/null @@ -1,27 +0,0 @@ ---- -description: >- - Roll with the brightest data scientists and machine learning experts for - prizes ---- - -# Join a Data Challenge -

Bring on the data challenges.

- -Hone your skills, work on real business problems, and earn sweet dosh along the way. - -### What is an Ocean Protocol data challenge? - -Ocean Protocol's data challenges are open competitions where participants must solve a real business problem using data science or machine learning skills. Some challenges are designed for data exploration, analysis, and reporting, while others require developing machine learning models. Data challenges thus come in a variety of formats, topics, and sponsors. One of the main advantages of these data challenges is that users retain ownership of their IP and the ability to further monetize their work outside of the competition. - -### Where can I find the data challenges? - -[Discover open challenges here.](https://oceanprotocol.com/challenges) - -### What is the typical flow for a data challenge? - -1. Participants download the necessary dataset(s) from the Ocean Market according to the data challenge instructions. -2. Participants may be tasked with building a report that combines data visualization and written explanations for a dataset, or with building a machine learning model to predict a specific target value. -3. Participants publish their results either publicly or privately using Ocean Protocol smart contracts by the deadline as directed by the challenge instructions. -4. Winners are selected and announced, generally within 2 weeks. -5. Winners will be sent instructions to claim their crypto prizes. diff --git a/data-scientists/ocean.py/README.md b/data-scientists/ocean.py/README.md deleted file mode 100644 index 3997e6d48..000000000 --- a/data-scientists/ocean.py/README.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -description: >- - Python library to privately & securely publish, exchange, and consume - data. ---- - -# Ocean.py - -[Ocean.py](https://github.com/oceanprotocol/ocean.py) helps data scientists earn $ from their AI models, track provenance of data & compute, and get more data. (More details [here](../../data-scientists/README.md).) - -Ocean.py makes these tasks easy: - -* **Publish** data services: data feeds, REST APIs, downloadable files or compute-to-data. Create an ERC721 **data NFT** for each service, and an ERC20 **datatoken** for access (1.0 datatokens per access). -* **Sell** datatokens for a fixed price. Sell data NFTs. -* **Transfer** data NFTs & datatokens to another owner, and perform all other ERC721 & ERC20 actions using web3. - -As a Python library, Ocean.py is built for the key environment of data scientists. It can simply be imported alongside other Python data science tools like numpy, matplotlib, scikit-learn and tensorflow.
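To give a feel for the API before the quickstart, here is a minimal sketch of the publish → sell → transfer shape. It assumes an `ocean` instance plus funded `alice` and `bob` wallets (set up as in the local or remote setup pages); the name and URL are illustrative placeholders. The calls themselves mirror the publish, consume, and tech-details pages later in this section.

```python
from ocean_lib.ocean.util import to_wei

# Publish: create a data NFT, datatoken, and DDO for a URL-hosted file
# (create_url_asset is detailed in "Ocean Assets Tech Details")
name = "My dataset"                   # placeholder name
url = "https://example.com/data.csv"  # placeholder URL
(data_nft, datatoken, ddo) = ocean.assets.create_url_asset(name, url, {"from": alice})

# Sell: post access for a fixed price of 100 OCEAN per datatoken
exchange = datatoken.create_exchange({"from": alice}, to_wei(100), ocean.OCEAN_address)

# Transfer: datatokens are plain ERC20, so standard token actions apply
datatoken.mint(alice, to_wei(1), {"from": alice})
datatoken.transfer(bob, to_wei(1), {"from": alice})
```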
- -### Quickstart 🚀 - -Follow these steps in sequence to ramp into Ocean. - -1. [Install Ocean](install.md) 📥 -2. Setup 🛠️ - - [Remote](remote-setup.md) (Win, MacOS, Linux) - - or [Local](local-setup.md) (Linux only) -3. [Publish asset](publish-flow.md), post for free / for sale, dispense it / buy it, and [consume](consume-flow.md) it -4. Run algorithms through the [Compute-to-Data flow](compute-flow.md) using the Ocean environment. - -After these quickstart steps, the main [README](https://github.com/oceanprotocol/ocean.py/blob/main/README.md) points to several other use cases, such as [Volume Data Farming](https://github.com/oceanprotocol/ocean.py/blob/main/READMEs/df.md), on-chain key-value stores ([public](https://github.com/oceanprotocol/ocean.py/blob/main/READMEs/key-value-public.md) or [private](https://github.com/oceanprotocol/ocean.py/blob/main/READMEs/key-value-private.md)), and other types of data assets ([REST API](https://github.com/oceanprotocol/ocean.py/blob/main/READMEs/publish-flow-restapi.md), [GraphQL](https://github.com/oceanprotocol/ocean.py/blob/main/READMEs/publish-flow-graphql.md), [on-chain](https://github.com/oceanprotocol/ocean.py/blob/main/READMEs/publish-flow-onchain.md)). diff --git a/data-scientists/ocean.py/compute-flow.md b/data-scientists/ocean.py/compute-flow.md deleted file mode 100644 index 8246f86e7..000000000 --- a/data-scientists/ocean.py/compute-flow.md +++ /dev/null @@ -1,219 +0,0 @@ ---- -description: This page shows how you run a compute flow. ---- - -# Compute Flow - -On this page, we provide the steps for publishing an algorithm asset, running it in the Ocean C2D environment, and retrieving the result logs, using ocean.py. - -We assume that you have completed the installation with your preferred setup. - -Here are the steps: - -1. Alice publishes dataset -2. Alice publishes algorithm -3. Alice allows the algorithm for C2D for that data asset -4. Bob acquires datatokens for data and algorithm -5. Bob starts a compute job using a free C2D environment (no provider fees) -6. Bob monitors logs / algorithm output - -Let's go through each step. - -### 1. Alice publishes dataset - -In the same Python console: - -{% code overflow="wrap" %} -```python -# Publish data NFT, datatoken, and asset for dataset based on url - -# ocean.py offers multiple file object types. A simple url file is enough here -from ocean_lib.structures.file_objects import UrlFile -DATA_url_file = UrlFile( - url="https://raw.githubusercontent.com/oceanprotocol/c2d-examples/main/branin_and_gpr/branin.arff" -) - -name = "Branin dataset" -(DATA_data_nft, DATA_datatoken, DATA_ddo) = ocean.assets.create_url_asset(name, DATA_url_file.url, {"from": alice}, with_compute=True, wait_for_aqua=True) -print(f"DATA_data_nft address = '{DATA_data_nft.address}'") -print(f"DATA_datatoken address = '{DATA_datatoken.address}'") - -print(f"DATA_ddo did = '{DATA_ddo.did}'") -``` -{% endcode %} - -To customise the privacy and accessibility of your compute service, add the `compute_values` argument to `create_url_asset` to set values according to the [DDO specs](/developers/identifiers.md). The function assumes the documented defaults. - -### 2. 
Alice publishes an algorithm - -In the same Python console: - -{% code overflow="wrap" %} -```python -# Publish data NFT & datatoken for algorithm -ALGO_url = "https://raw.githubusercontent.com/oceanprotocol/c2d-examples/main/branin_and_gpr/gpr.py" - -name = "gpr" -(ALGO_data_nft, ALGO_datatoken, ALGO_ddo) = ocean.assets.create_algo_asset(name, ALGO_url, {"from": alice}, wait_for_aqua=True) - -print(f"ALGO_data_nft address = '{ALGO_data_nft.address}'") -print(f"ALGO_datatoken address = '{ALGO_datatoken.address}'") -print(f"ALGO_ddo did = '{ALGO_ddo.did}'") -``` -{% endcode %} - -### 3. Alice allows the algorithm for C2D for that data asset - -In the same Python console: - -{% code overflow="wrap" %} -```python -compute_service = DATA_ddo.services[1] -compute_service.add_publisher_trusted_algorithm(ALGO_ddo) -DATA_ddo = ocean.assets.update(DATA_ddo, {"from": alice}) -``` -{% endcode %} - -### 4. Bob acquires datatokens for data and algorithm - -In the same Python console: - -```python -# Alice mints DATA datatokens and ALGO datatokens to Bob. -# Alternatively, Bob might have bought these in a market. -from ocean_lib.ocean.util import to_wei -DATA_datatoken.mint(bob, to_wei(5), {"from": alice}) -ALGO_datatoken.mint(bob, to_wei(5), {"from": alice}) -``` - -You can choose any of the methods for getting access from the [consume flow approaches](consume-flow.md). - -### 5. Bob starts a compute job using a free C2D environment - -The only inputs needed are DATA\_did and ALGO\_did; everything else can be computed as needed. For demo purposes, we will use the free C2D environment, which requires no provider fees. - -In the same Python console: - -{% code overflow="wrap" %} -```python -# Convenience variables -DATA_did = DATA_ddo.did -ALGO_did = ALGO_ddo.did - -# Operate on updated and indexed assets -DATA_ddo = ocean.assets.resolve(DATA_did) -ALGO_ddo = ocean.assets.resolve(ALGO_did) - -compute_service = DATA_ddo.services[1] -algo_service = ALGO_ddo.services[0] -free_c2d_env = ocean.compute.get_free_c2d_environment(compute_service.service_endpoint, DATA_ddo.chain_id) - -from datetime import datetime, timedelta, timezone -from ocean_lib.models.compute_input import ComputeInput - -DATA_compute_input = ComputeInput(DATA_ddo, compute_service) -ALGO_compute_input = ComputeInput(ALGO_ddo, algo_service) - -# Pay for dataset and algo for 1 day -datasets, algorithm = ocean.assets.pay_for_compute_service( - datasets=[DATA_compute_input], - algorithm_data=ALGO_compute_input, - consume_market_order_fee_address=bob.address, - tx_dict={"from": bob}, - compute_environment=free_c2d_env["id"], - valid_until=int((datetime.now(timezone.utc) + timedelta(days=1)).timestamp()), - consumer_address=free_c2d_env["consumerAddress"], -) -assert datasets, "pay for dataset unsuccessful" -assert algorithm, "pay for algorithm unsuccessful" - -# Start compute job -job_id = ocean.compute.start( - consumer_wallet=bob, - dataset=datasets[0], - compute_environment=free_c2d_env["id"], - algorithm=algorithm, -) -print(f"Started compute job with id: {job_id}") -``` -{% endcode %} - -### 6. 
Bob monitors logs / algorithm output - -In the same Python console, you can check the job status as many times as needed: - -```python -# Wait until job is done -import time -from decimal import Decimal -succeeded = False -for _ in range(0, 200): - status = ocean.compute.status(DATA_ddo, compute_service, job_id, bob) - if status.get("dateFinished") and Decimal(status["dateFinished"]) > 0: - succeeded = True - break - time.sleep(5) -``` - -This will output the status of the current job. Here is a list of possible results: [Operator Service Status description](https://github.com/oceanprotocol/operator-service/blob/main/API.md#status-description). - -Once the returned status dictionary contains the `dateFinished` key, Bob can retrieve the job results using `ocean.compute.result` or, more specifically, just the output if the job was successful. For the purpose of this tutorial, let's choose the second option. - -```python -# Retrieve algorithm output and log files -output = ocean.compute.compute_job_result_logs( - DATA_ddo, compute_service, job_id, bob -)[0] - -import pickle -model = pickle.loads(output) # the Gaussian model result -assert len(model) > 0, "unpickle result unsuccessful" -``` - -You can use the result however you like. For the purpose of this example, let's plot it. - -Make sure you have the `matplotlib` package installed in your virtual environment. - -{% code overflow="wrap" %} -```python -import numpy -from matplotlib import pyplot - -X0_vec = numpy.linspace(-5., 10., 15) -X1_vec = numpy.linspace(0., 15., 15) -X0, X1 = numpy.meshgrid(X0_vec, X1_vec) -b, c, t = 0.12918450914398066, 1.5915494309189535, 0.039788735772973836 -u = X1 - b * X0 ** 2 + c * X0 - 6 -r = 10. * (1. - t) * numpy.cos(X0) + 10 -Z = u ** 2 + r - -fig, ax = pyplot.subplots(subplot_kw={"projection": "3d"}) -ax.scatter(X0, X1, model, c="r", label="model") -pyplot.title("Data + model") -pyplot.show() # or pyplot.savefig("test.png") to save the plot as a .png file instead -``` -{% endcode %} - -You should see something like this: - -
- -### Appendix. Tips & tricks - -This README has a simple ML algorithm. However, Ocean C2D is not limited to usage in ML. The file [c2d-flow-more-examples.md](https://github.com/oceanprotocol/ocean.py/blob/main/READMEs/c2d-flow-more-examples.md) has examples from vision and other fields. - -In the "publish algorithm" step, to replace the sample algorithm with another one: - -* Use one of the standard [Ocean algo_dockers images](https://github.com/oceanprotocol/algo_dockers) or publish a custom docker image. -* Use the image name and tag in the `container` part of the algorithm metadata. -* The image must have basic support for installing dependencies, e.g. "pip" for the case of Python. You can use other languages, of course. -* More info is available on the [algorithms page](../../developers/compute-to-data/compute-to-data-algorithms.md). - -The `pay_for_compute_service` function automates order starting and order reusing, and performs all the necessary Provider and on-chain requests. It modifies the contents of the given ComputeInput as follows: - -* If the dataset/algorithm contains a `transfer_tx_id` property, it will try to reuse that previous transfer id. If provider fees have expired but the order is still valid, then the order is reused on-chain. -* If the dataset/algorithm does not contain a `transfer_tx_id` or the order has expired (based on the Provider's response), then one new order will be created. - -This means you can reuse the same ComputeInput and you don't need to regenerate it every time it is sent to `pay_for_compute_service`. This step makes sure you are not paying unnecessary or duplicated fees. - -If you wish to upgrade the compute resources, you can use any (paid) C2D environment. Inspect the results of `ocean.ocean_compute.get_c2d_environments(service.service_endpoint, DATA_ddo.chain_id)` and `ocean.retrieve_provider_fees_for_compute(datasets, algorithm_data, consumer_address, compute_environment, duration)` for a preview of what you will pay. Don't forget to handle any minting, allowance or approvals on the desired token to ensure transactions pass. diff --git a/data-scientists/ocean.py/consume-flow.md b/data-scientists/ocean.py/consume-flow.md deleted file mode 100644 index 3c10daba8..000000000 --- a/data-scientists/ocean.py/consume-flow.md +++ /dev/null @@ -1,104 +0,0 @@ ---- -description: This page shows how you can get datatokens & download an asset ---- - -# Consume Flow - -The consume flow highlights the methods for getting a datatoken to access an asset from Ocean Market, and for downloading the content of the asset. - -We assume that you have completed the publish flow presented previously. - -Now let's see how Bob can get access to Alice's asset in order to download/consume it. - -### Get access for a dataset 🔑 - -Below, we show four possible approaches: - -* A & B are when Alice is in contact with Bob. She can mint directly to him, or mint to herself and transfer to him. -* C is when Alice wants to share access for free, to anyone -* D is when Alice wants to sell access - -
- -In the same Python console: - -```python -from ocean_lib.ocean.util import to_wei - -#Approach A: Alice mints datatokens to Bob -datatoken.mint(bob, to_wei(1), {"from": alice}) - -#Approach B: Alice mints for herself, and transfers to Bob -datatoken.mint(alice, to_wei(1), {"from": alice}) -datatoken.transfer(bob, to_wei(1), {"from": alice}) - -#Approach C: Alice posts for free, via a dispenser / faucet; Bob requests & gets -datatoken.create_dispenser({"from": alice}) -datatoken.dispense(to_wei(1), {"from": bob}) - -#Approach D: Alice posts for sale; Bob buys -# D.1 Alice creates exchange -price = to_wei(100) -exchange = datatoken.create_exchange({"from": alice}, price, ocean.OCEAN_address) - -# D.2 Alice makes 100 datatokens available on the exchange -datatoken.mint(alice, to_wei(100), {"from": alice}) -datatoken.approve(exchange.address, to_wei(100), {"from": alice}) - -# D.3 Bob lets exchange pull the OCEAN needed -OCEAN_needed = exchange.BT_needed(to_wei(1), consume_market_fee=0) -ocean.OCEAN_token.approve(exchange.address, OCEAN_needed, {"from":bob}) - -# D.4 Bob buys datatoken -exchange.buy_DT(to_wei(1), consume_market_fee=0, tx_dict={"from": bob}) -``` - -For more info, check the [Technical Details](technical-details.md) about ocean.py's most used functions and also the smart contracts for [Dispenser](https://github.com/oceanprotocol/contracts/blob/main/contracts/pools/dispenser/Dispenser.sol) & [Fixed Rate Exchange](https://github.com/oceanprotocol/contracts/blob/main/contracts/pools/fixedRate/FixedRateExchange.sol). - -### Consume the asset ⬇️ - -To "consume" an asset typically means placing an "order", where you pass in 1.0 datatokens and get back a URL. Then, you typically download the asset from the URL. - -Bob now has the datatoken for the dataset! Time to download the dataset and use it. - -
- -In the same Python console: - -```python -# Bob sends a datatoken to the service to get access -order_tx_id = ocean.assets.pay_for_access_service(ddo, {"from": bob}) - -# Bob downloads the file. If the connection breaks, Bob can try again -asset_dir = ocean.assets.download_asset(ddo, bob, './', order_tx_id) - -import os -file_name = os.path.join(asset_dir, "file0") -``` - -Let's check that the file is downloaded. In a new console: - -```bash -cd my_project/datafile.did:op:* -cat file0 -``` - -The _beginning_ of the file should contain the following contents: - -```bash -% 1. Title: Branin Function -% 3. Number of instances: 225 -% 6. Number of attributes: 2 - -@relation branin - -@attribute 'x0' numeric -@attribute 'x1' numeric -@attribute 'y' numeric - -@data --5.0000,0.0000,308.1291 --3.9286,0.0000,206.1783 -... -``` - diff --git a/data-scientists/ocean.py/datatoken-interface-tech-details.md b/data-scientists/ocean.py/datatoken-interface-tech-details.md deleted file mode 100644 index 4f088ad9e..000000000 --- a/data-scientists/ocean.py/datatoken-interface-tech-details.md +++ /dev/null @@ -1,584 +0,0 @@ ---- -description: Technical details about Datatoken functions ---- - -# Datatoken Interface Tech Details - -The `Datatoken contract interface` is like the superhero that kicks off the action-packed adventure of contract calls! It's here to save the day by empowering us to unleash the mighty powers of dispensers, fixed rate exchanges, and initializing orders. On this page, we present the utility functions that set you off on the Ocean journey. - -### Create Dispenser - -* **create\_dispenser**(`self`, `tx_dict: dict`, `max_tokens: Optional[Union[int, str]] = None`, `max_balance: Optional[Union[int, str]] = None`, `with_mint: Optional[bool] = True`) - -Through the datatoken, you can deploy a new dispenser, which is used for creating free assets because its behaviour is similar to a faucet. ⛲ - -It is implemented in DatatokenBase, inherited by Datatoken2, so it can be called within both instances. - -**Parameters** - -* `tx_dict` - is the configuration `dictionary` for that specific transaction. Usually for `development` we include just the `from` wallet, but for remote networks, you can provide gas fees, required confirmations for that block etc. -* `max_tokens` - maximum amount of tokens to dispense in wei. The default is a large number. -* `max_balance` - maximum balance of requester in wei. The default is a large number. -* `with_mint` - boolean; `True` (the default) allows the dispenser to mint datatokens as needed - -**Returns** - -`str` - -Return value is a hex string which denotes the transaction hash of the dispenser deployment. - -**Defined in** - -[models/datatoken.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#LL336C5-L377C18)
- -Source code - -```python -@enforce_types - def create_dispenser( - self, - tx_dict: dict, - max_tokens: Optional[Union[int, str]] = None, - max_balance: Optional[Union[int, str]] = None, - with_mint: Optional[bool] = True, - ): - """ - For this datataken, create a dispenser faucet for free tokens. - - This wraps the smart contract method Datatoken.createDispenser() - with a simpler interface. - - :param: max_tokens - max # tokens to dispense, in wei - :param: max_balance - max balance of requester - :tx_dict: e.g. {"from": alice_wallet} - :return: tx - """ - # already created, so nothing to do - if self.dispenser_status().active: - return - - # set max_tokens, max_balance if needed - max_tokens = max_tokens or MAX_UINT256 - max_balance = max_balance or MAX_UINT256 - - # args for contract tx - dispenser_addr = get_address_of_type(self.config_dict, "Dispenser") - with_mint = with_mint # True -> can always mint more - allowed_swapper = ZERO_ADDRESS # 0 -> so anyone can call dispense - - # do contract tx - tx = self.createDispenser( - dispenser_addr, - max_tokens, - max_balance, - with_mint, - allowed_swapper, - tx_dict, - ) - return tx -``` - -
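As a short usage sketch, mirroring approach C of the [Consume Flow](consume-flow.md) page (`datatoken` and `alice` assumed set up as in the setup pages):

```python
# Deploy a dispenser (faucet) for this datatoken, with default limits.
# If a dispenser already exists, the call returns early and does nothing.
tx = datatoken.create_dispenser({"from": alice})
```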
- -### Dispense Datatokens - -* **dispense**(`self`, `amount: Union[int, str]`, `tx_dict: dict`) - -This function is used to retrieve funds or datatokens for a user who wants to start an order. - -It is implemented in DatatokenBase, so it can be called within the Datatoken class. - -**Parameters** - -* `amount` - amount of datatokens to be dispensed in wei (int or string format) -* `tx_dict` - is the configuration `dictionary` for that specific transaction. Usually for `development` we include just the `from` wallet, but for remote networks, you can provide gas fees, required confirmations for that block etc. - -**Returns** - -`str` - -Return value is a hex string which denotes the transaction hash of the dispensed datatokens, like a proof. - -**Defined in** - -[models/datatoken.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#LL379C5-L400C18)
- -Source code - -```python - @enforce_types - def dispense(self, amount: Union[int, str], tx_dict: dict): - """ - Dispense free tokens via the dispenser faucet. - - :param: amount - number of tokens to dispense, in wei - :tx_dict: e.g. {"from": alice_wallet} - :return: tx - """ - # args for contract tx - datatoken_addr = self.address - from_addr = ( - tx_dict["from"].address - if hasattr(tx_dict["from"], "address") - else tx_dict["from"] - ) - - # do contract tx - tx = self._ocean_dispenser().dispense( - datatoken_addr, amount, from_addr, tx_dict - ) - return tx -``` - -
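A usage sketch, again mirroring the [Consume Flow](consume-flow.md) page (`datatoken` and `bob` assumed set up, and a dispenser already created as above):

```python
from ocean_lib.ocean.util import to_wei

# Bob requests 1.0 datatokens for free from the dispenser faucet
tx = datatoken.dispense(to_wei(1), {"from": bob})
```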
- -### Dispense Datatokens & Order - -* **dispense\_and\_order**(`self`, `consumer: str`, `service_index: int`, `provider_fees: dict`, `transaction_parameters: dict`, `consume_market_fees=None`) -> `str` - -This function dispenses free datatokens to the buyer (if needed) and then starts an order. - -It is implemented in `Datatoken2`, so it can be called within the `Datatoken2` class (using the enterprise template). - -**Parameters** - -* `consumer` - address of the consumer wallet that needs funding -* `service_index` - service index as int for identifying the service that you want to further call [`start_order`](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#LL169C5-L197C10) for. -* `provider_fees` - dictionary which includes the provider fees generated when the `initialize` endpoint from `Provider` was called. -* `transaction_parameters` - is the configuration `dictionary` for that specific transaction. Usually for `development` we include just the `from` wallet, but for remote networks, you can provide gas fees, required confirmations for that block etc. -* `consume_market_fees` - [`TokenFeeInfo` ](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#L31)object which contains the consume market fee amount, address & token address. If it is not explicitly specified, by default it is an empty `TokenFeeInfo` object. - -**Returns** - -`str` - -Return value is a hex string which denotes the transaction hash of the started order. - -**Defined in** - -[models/datatoken.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#LL439C5-L483C1)
- -Source code - -{% code overflow="wrap" %} -```python -def dispense_and_order( - self, - consumer: str, - service_index: int, - provider_fees: dict, - transaction_parameters: dict, - consume_market_fees=None, - ) -> str: - if not consume_market_fees: - consume_market_fees = TokenFeeInfo() - - buyer_addr = ( - transaction_parameters["from"].address - if hasattr(transaction_parameters["from"], "address") - else transaction_parameters["from"] - ) - - bal = from_wei(self.balanceOf(buyer_addr)) - if bal < 1.0: - dispenser_addr = get_address_of_type(self.config_dict, "Dispenser") - from ocean_lib.models.dispenser import Dispenser # isort: skip - - dispenser = Dispenser(self.config_dict, dispenser_addr) - - # catch key failure modes - st = dispenser.status(self.address) - active, allowedSwapper = st[0], st[6] - if not active: - raise ValueError("No active dispenser for datatoken") - if allowedSwapper not in [ZERO_ADDRESS, buyer_addr]: - raise ValueError(f"Not allowed. allowedSwapper={allowedSwapper}") - - # Try to dispense. If other issues, they'll pop out - dispenser.dispense( - self.address, "1 ether", buyer_addr, transaction_parameters - ) - - return self.start_order( - consumer=ContractBase.to_checksum_address(consumer), - service_index=service_index, - provider_fees=provider_fees, - consume_market_fees=consume_market_fees, - transaction_parameters=transaction_parameters, - ) -``` -{% endcode %} - -
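A hedged usage sketch: `provider_fees` is assumed to have been obtained from the Provider's `initialize` endpoint (see the Start Order parameters below), and service index 0 is an assumed target:

```python
# Dispense 1.0 datatokens to the buyer if their balance is low, then start the order
tx = datatoken.dispense_and_order(
    consumer=bob.address,
    service_index=0,              # assumed target service
    provider_fees=provider_fees,  # dict from the Provider initialize endpoint
    transaction_parameters={"from": bob},
)
```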
- -### Dispenser Status - -* **dispenser\_status**(`self`) -> `DispenserStatus` - -**Returns** - -`DispenserStatus` - -Returns a `DispenserStatus` object, built from `Dispenser.sol::status(dt_addr)`, which is composed of: - -* bool active -* address owner -* bool isMinter -* uint256 maxTokens -* uint256 maxBalance -* uint256 balance -* address allowedSwapper - -These are Solidity return values & types, but `uint256` means `int` in Python and `address` is a `string` instance. - -For tips & tricks, check [this section](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/READMEs/main-flow.md#faucet-tips--tricks) from the [README](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/READMEs/main-flow.md). - -It is implemented in `DatatokenBase`, inherited by `Datatoken2`, so it can be called within both instances. - -**Defined in** - -[models/datatoken.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#LL402C1-L409C43)
- -Source code - -```python -@enforce_types - def dispenser_status(self): - """:return: DispenserStatus object""" - # import here to avoid circular import - from ocean_lib.models.dispenser import DispenserStatus - - status_tup = self._ocean_dispenser().status(self.address) - return DispenserStatus(status_tup) -``` - -
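A quick usage sketch; the `.active` field is the same one that `create_dispenser` (above) checks before returning early:

```python
st = datatoken.dispenser_status()
print(f"dispenser active: {st.active}")  # remaining fields mirror Dispenser.sol's status()
```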
- -### Create Fixed Rate Exchange - -* **create\_exchange**(`self`, `rate: Union[int, str]`, `base_token_addr: str`, `tx_dict: dict`, `owner_addr: Optional[str] = None`, `publish_market_fee_collector: Optional[str] = None, publish_market_fee: Union[int, str] = 0`, `allowed_swapper: str = ZERO_ADDRESS`, `full_info: bool = False`) -> `Union[OneExchange, tuple]` - -It is implemented in `DatatokenBase`, inherited by `Datatoken2`, so it can be called within both instances. - -For this datatoken, create a single fixed-rate exchange (`OneExchange`). - -This wraps the smart contract method `Datatoken.createFixedRate()` with a simpler interface. - -**Parameters** - -* `rate` - how many base tokens does 1 datatoken cost? In wei or string -* `base_token_addr` - e.g. OCEAN address -* `tx_dict` - is the configuration `dictionary` for that specific transaction. Usually for `development` we include just the `from` wallet, but for remote networks, you can provide gas fees, required confirmations for that block etc. - -**Optional parameters** - -* `owner_addr` - owner of the datatoken -* `publish_market_fee_collector` - fee going to the publish market address -* `publish_market_fee` - in wei or string, e.g. `int(1e15)` or `"0.001 ether"` -* `allowed_swapper` - if `ZERO_ADDRESS`, anyone can swap -* `full_info` - return just `OneExchange`, or `(OneExchange, tx_receipt)` - -**Returns** - -* `exchange` - OneExchange -* (maybe) `tx_receipt` - -**Defined in** - -[models/datatoken.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#LL236C4-L310C1)
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def create_exchange( - self, - rate: Union[int, str], - base_token_addr: str, - tx_dict: dict, - owner_addr: Optional[str] = None, - publish_market_fee_collector: Optional[str] = None, - publish_market_fee: Union[int, str] = 0, - allowed_swapper: str = ZERO_ADDRESS, - full_info: bool = False, - ) -> Union[OneExchange, tuple]: - - # import now, to avoid circular import - from ocean_lib.models.fixed_rate_exchange import OneExchange - - FRE_addr = get_address_of_type(self.config_dict, "FixedPrice") - from_addr = ( - tx_dict["from"].address - if hasattr(tx_dict["from"], "address") - else tx_dict["from"] - ) - BT = Datatoken(self.config_dict, base_token_addr) - owner_addr = owner_addr or from_addr - publish_market_fee_collector = publish_market_fee_collector or from_addr - - tx = self.contract.createFixedRate( - checksum_addr(FRE_addr), - [ - checksum_addr(BT.address), - checksum_addr(owner_addr), - checksum_addr(publish_market_fee_collector), - checksum_addr(allowed_swapper), - ], - [ - BT.decimals(), - self.decimals(), - rate, - publish_market_fee, - 1, - ], - tx_dict, - ) - - exchange_id = tx.events["NewFixedRate"]["exchangeId"] - FRE = self._FRE() - exchange = OneExchange(FRE, exchange_id) - if full_info: - return (exchange, tx) - return exchange -``` -{% endcode %} - -
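A usage sketch, using keyword arguments that match the parameter names above (`datatoken`, `alice`, and `ocean` assumed set up; a rate of 100 base tokens per datatoken):

```python
from ocean_lib.ocean.util import to_wei

# Price access at 100 OCEAN per datatoken, with OCEAN as the base token
exchange = datatoken.create_exchange(
    rate=to_wei(100),
    base_token_addr=ocean.OCEAN_address,
    tx_dict={"from": alice},
)
```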
- -### Buy Datatokens & Order - -* **buy\_DT\_and\_order**(`self`, `consumer: str`, `service_index: int`, `provider_fees: dict`, `exchange: Any`, `transaction_parameters: dict`, `consume_market_fees=None`) -> `str` - -This function buys 1.0 datatokens from the given fixed-rate exchange for a user and then starts an order. - -It is implemented in the `Datatoken` class and it is also inherited in the `Datatoken2` class. - -**Parameters** - -* `consumer` - address of the consumer wallet that needs funding -* `service_index` - service index as int for identifying the service that you want to further call [`start_order`](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#LL169C5-L197C10) for. -* `provider_fees` - dictionary which includes the provider fees generated when the `initialize` endpoint from `Provider` was called. -* `exchange` - the `OneExchange` to buy from; a raw exchange id is also accepted and wrapped into a `OneExchange` automatically. -* `transaction_parameters` - is the configuration `dictionary` for that specific transaction. Usually for `development` we include just the `from` wallet, but for remote networks, you can provide gas fees, required confirmations for that block etc. -* `consume_market_fees` - [`TokenFeeInfo` ](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#L31)object which contains the consume market fee amount, address & token address. If it is not explicitly specified, by default it is an empty `TokenFeeInfo` object. - -**Returns** - -`str` - -Return value is a hex string for the transaction hash which denotes the proof of the started order. - -**Defined in** - -[models/datatoken.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#LL484C4-L518C10)
- -Source code - -```python - @enforce_types - def buy_DT_and_order( - self, - consumer: str, - service_index: int, - provider_fees: dict, - exchange: Any, - transaction_parameters: dict, - consume_market_fees=None, - ) -> str: - fre_address = get_address_of_type(self.config_dict, "FixedPrice") - - # import now, to avoid circular import - from ocean_lib.models.fixed_rate_exchange import OneExchange - - if not consume_market_fees: - consume_market_fees = TokenFeeInfo() - - if not isinstance(exchange, OneExchange): - exchange = OneExchange(fre_address, exchange) - - exchange.buy_DT( - datatoken_amt=to_wei(1), - consume_market_fee_addr=consume_market_fees.address, - consume_market_fee=consume_market_fees.amount, - tx_dict=transaction_parameters, - ) - - return self.start_order( - consumer=ContractBase.to_checksum_address(consumer), - service_index=service_index, - provider_fees=provider_fees, - consume_market_fees=consume_market_fees, - transaction_parameters=transaction_parameters, - ) - - -``` - -
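A hedged usage sketch, assuming an `exchange` from `create_exchange` above, and `provider_fees` from the Provider's `initialize` endpoint:

```python
# Buy 1.0 datatokens from the fixed-rate exchange, then start the order
tx = datatoken.buy_DT_and_order(
    consumer=bob.address,
    service_index=0,              # assumed target service
    provider_fees=provider_fees,  # dict from the Provider initialize endpoint
    exchange=exchange,            # a OneExchange, e.g. from create_exchange()
    transaction_parameters={"from": bob},
)
```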
- -### Get Exchanges - -* **get\_exchanges**(`self`) -> `list` - -**Returns** - -`list` - -Returns `List[OneExchange]` - all the exchanges for this datatoken. - -It is implemented in `DatatokenBase`, inherited by `Datatoken2`, so it can be called within both instances. - -**Defined in** - -[models/datatoken.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#LL311C4-L322C25) - -
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def get_exchanges(self) -> list: - """return List[OneExchange] - all the exchanges for this datatoken""" - # import now, to avoid circular import - from ocean_lib.models.fixed_rate_exchange import OneExchange - - FRE = self._FRE() - addrs_and_exchange_ids = self.getFixedRates() - exchanges = [ - OneExchange(FRE, exchange_id) for _, exchange_id in addrs_and_exchange_ids - ] - return exchanges -``` -{% endcode %} - -
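For example:

```python
exchanges = datatoken.get_exchanges()  # List[OneExchange]
print(f"{len(exchanges)} exchange(s) exist for this datatoken")
```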
- -### Start Order - -* **start\_order**(`self`, `consumer: str`, `service_index: int`, `provider_fees: dict`, `transaction_parameters: dict`, `consume_market_fees=None`) -> `str` - -Starts an order for a certain datatoken. - -It is implemented in the Datatoken class and it is also inherited in the Datatoken2 class. - -**Parameters** - -* `consumer` - address of the consumer wallet that needs funding -* `service_index` - service index as int for identifying the service to which you want to apply `start_order`. -* `provider_fees` - dictionary which includes the provider fees generated when the `initialize` endpoint from `Provider` was called. -* `transaction_parameters` - is the configuration `dictionary` for that specific transaction. Usually for `development` we include just the `from` wallet, but for remote networks, you can provide gas fees, required confirmations for that block etc. -* `consume_market_fees` - [`TokenFeeInfo` ](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#L31)object which contains the consume market fee amount, address & token address. If it is not explicitly specified, by default it is an empty `TokenFeeInfo` object. - -**Returns** - -`str` - -Return value is a hex string for the transaction hash which denotes the proof of the started order. - -**Defined in** - -[models/datatoken.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#LL169C5-L197C10)
- -Source code - -```python -@enforce_types - def start_order( - self, - consumer: str, - service_index: int, - provider_fees: dict, - transaction_parameters: dict, - consume_market_fees=None, - ) -> str: - - if not consume_market_fees: - consume_market_fees = TokenFeeInfo() - - return self.contract.startOrder( - checksum_addr(consumer), - service_index, - ( - checksum_addr(provider_fees["providerFeeAddress"]), - checksum_addr(provider_fees["providerFeeToken"]), - int(provider_fees["providerFeeAmount"]), - provider_fees["v"], - provider_fees["r"], - provider_fees["s"], - provider_fees["validUntil"], - provider_fees["providerData"], - ), - consume_market_fees.to_tuple(), - transaction_parameters, - ) -``` - -
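A hedged usage sketch; the buyer must already hold 1.0 datatokens (e.g. via the dispenser or an exchange), and `provider_fees` is assumed to come from the Provider's `initialize` endpoint:

```python
tx = datatoken.start_order(
    consumer=bob.address,
    service_index=0,              # assumed target service
    provider_fees=provider_fees,  # dict from the Provider initialize endpoint
    transaction_parameters={"from": bob},
)
# Keep the resulting transaction hash: reuse_order (below) can reuse this order later
```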
- -### Reuse Order - -* **reuse\_order**(`self`, `order_tx_id: Union[str, bytes]`, `provider_fees: dict`, `transaction_parameters: dict` ) -> `str` - -Reuses an order from a certain datatoken. - -It is implemented in the Datatoken class and it is also inherited in the Datatoken2 class. - -**Parameters** - -* `order_tx_id` - transaction hash of a previous order, in string or bytes format. -* `provider_fees` - dictionary which includes the provider fees generated when the `initialize` endpoint from `Provider` was called. -* `transaction_parameters` - is the configuration `dictionary` for that specific transaction. Usually for `development` we include just the `from` wallet, but for remote networks, you can provide gas fees, required confirmations for that block etc. - -**Returns** - -`str` - -Return value is a hex string for the transaction hash which denotes the proof of the reused order. - -**Defined in** - -[models/datatoken.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#LL199C5-L219C10)
- -Source code - -```python - @enforce_types - def reuse_order( - self, - order_tx_id: Union[str, bytes], - provider_fees: dict, - transaction_parameters: dict, - ) -> str: - return self.contract.reuseOrder( - order_tx_id, - ( - checksum_addr(provider_fees["providerFeeAddress"]), - checksum_addr(provider_fees["providerFeeToken"]), - int(provider_fees["providerFeeAmount"]), - provider_fees["v"], - provider_fees["r"], - provider_fees["s"], - provider_fees["validUntil"], - provider_fees["providerData"], - ), - transaction_parameters, - ) -``` - -
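A hedged usage sketch, assuming `order_tx_id` is the transaction hash of a previous `start_order` and `provider_fees` is a fresh dict from the Provider's `initialize` endpoint:

```python
tx = datatoken.reuse_order(
    order_tx_id,                  # tx hash from a previous start_order
    provider_fees=provider_fees,
    transaction_parameters={"from": bob},
)
```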
diff --git a/data-scientists/ocean.py/install.md b/data-scientists/ocean.py/install.md deleted file mode 100644 index b8daf3947..000000000 --- a/data-scientists/ocean.py/install.md +++ /dev/null @@ -1,52 +0,0 @@ -# Install - -Let's start interacting with the Python library by first installing it and its prerequisites. - -From the adventurous `Python 3.8.5` all the way up to `Python 3.10.4`, ocean.py has got your back! 🚀 - -While `ocean.py` can join you on your `Python 3.11` journey, a few manual tweaks may be required. But worry not, brave explorers, we've got all the juicy details for you below! 📚✨ -⚠️ Make sure that you have `autoconf`, `pkg-config` and `build-essential` or their equivalents installed on your host. - -### Installing ocean.py - -ocean.py is a Python library [on PyPI as ocean-lib](https://pypi.org/project/ocean-lib/). So after you have completed the prerequisites step, let's create a new console for library installation: - -```bash -# Create your working directory -mkdir my_project -cd my_project - -# Initialize virtual environment and activate it. Install artifacts. -# Make sure your Python version inside the venv is >=3.8. -# Anaconda is not fully supported for now, please use venv -python3 -m venv venv -source venv/bin/activate - -# Avoid errors for the step that follows -pip install wheel - -# Install Ocean library. -pip install ocean-lib -``` - -### Potential issues & workarounds - -Issue: M1 \* `coincurve` or `cryptography` - -* If you have an Apple M1 processor, `coincurve` and `cryptography` installation may fail due to missing packages, which come pre-packaged in other operating systems. -* Workaround: ensure you have `autoconf`, `automake` and `libtool` installed as mentioned in the prerequisites, e.g. using Homebrew or MacPorts. - -Issue: MacOS "Unsupported Architecture" - -* If you run MacOS, you may encounter an "Unsupported Architecture" issue. -* Workaround: install including ARCHFLAGS: `ARCHFLAGS="-arch x86_64" pip install ocean-lib`. [Details](https://github.com/oceanprotocol/ocean.py/issues/486). - -### Why we 🥰 ocean.py - -`ocean.py` treats each Ocean smart contract as a Python class, and each deployed smart contract as a Python object. We love this feature, because it means Python programmers can treat Solidity code as Python code! 🤯 - -### Helpful resources - -Oh, buoy! 🌊🐙 When it comes to installation, ocean.py has you covered with a special README called ["install.md"](https://github.com/oceanprotocol/ocean.py/blob/main/READMEs/install.md). It's like a trusty guide that helps you navigate all the nitty-gritty details. So, let's dive in and ride the waves of installation together! 🏄‍♂️🌊 - - diff --git a/data-scientists/ocean.py/local-setup.md b/data-scientists/ocean.py/local-setup.md deleted file mode 100644 index 7381cbb3e..000000000 --- a/data-scientists/ocean.py/local-setup.md +++ /dev/null @@ -1,112 +0,0 @@ ---- -description: Local setup for running & testing ocean.py ---- - -# Local Setup - -On this page, we continue our journey from the [installation part](install.md) and do the setup for local testing. Local setup means that we will use Ganache as the local blockchain where we can make transactions, and all the services point to this network. - -⚠️ Ocean local setup uses Docker, which is fine for Linux/Ubuntu but plays badly with MacOS and Windows. If you are on these, you'll want the [remote setup](remote-setup.md). - -Here are the steps for configuring ocean.py on the Ganache network using Barge. - -### Prerequisites - -Ahoy there, matey!
🌊⚓️ When it comes to setting up ocean.py locally, we're diving into the world of Docker containers. These clever containers hold the trusty local blockchain node (Ganache) and the mighty Ocean middleware (the Aquarius metadata cache and the Provider, which aids in consuming data assets). But fear not, for a smooth sailing experience, you'll need to ensure the following Docker components are shipshape and ready to go:

1. [Docker](https://docs.docker.com/engine/install/) 🐳
2. [Docker Compose](https://docs.docker.com/compose/install/) 🛠️
3. Oh, and don't forget to [allow those non-root users](https://www.thegeekdiary.com/run-docker-as-a-non-root-user/) to join in on the fun! 🙅‍♂️

So hoist the anchor, prepare your Docker crew, and let's embark on an exciting voyage with ocean.py! 🚢⛵️

### 1. Download barge and run services

Ocean `barge` runs Ganache (local blockchain), Provider (data service), and Aquarius (metadata cache).

Barge helps you quickly become familiar with Ocean, because the local blockchain has low latency and no transaction fees.

In a new console:

```bash
# Grab repo
git clone https://github.com/oceanprotocol/barge
cd barge

# Clean up old containers (to be sure)
docker system prune -a --volumes

# Run barge: start Ganache, Provider, Aquarius; deploy contracts; update ~/.ocean
export GANACHE_FORK=london # for support of type 2 transactions
./start_ocean.sh
```

Let barge do its magic and wait until the blockchain is fully synced. You'll know it's ready when the barge console starts printing `eth_blockNumber` continuously.

### 2. Set envvars

From here on, use a console different from the barge one. (E.g. the console where you installed Ocean, or a new one.)

First, ensure that you're in the working directory, with venv activated:

```bash
cd my_project
source venv/bin/activate
```

For this tutorial, Alice is the publisher of the dataset and Bob is the consumer. As a Linux user, you'll use `export` to set the private keys. In the same console:

```bash
# keys for alice and bob
export TEST_PRIVATE_KEY1=0x8467415bb2ba7c91084d932276214b11a3dd9bdb2930fefa194b666dd8020b99
export TEST_PRIVATE_KEY2=0x1d751ded5a32226054cd2e71261039b65afb9ee1c746d055dd699b1150a5befc

# key for minting fake OCEAN
export FACTORY_DEPLOYER_PRIVATE_KEY=0xc594c6e5def4bab63ac29eed19a134c130388f74f019bc74b8f4389df2837a58
```

### 3. Setup in Python

In the same console, run the Python console:

```bash
python
```

In the Python console:

```python
# Create Ocean instance
from ocean_lib.example_config import get_config_dict
config = get_config_dict("http://localhost:8545")

from ocean_lib.ocean.ocean import Ocean
ocean = Ocean(config)

# Create OCEAN object. Barge auto-deployed OCEAN, and the ocean instance knows its address
OCEAN = ocean.OCEAN_token

# Mint fake OCEAN to Alice & Bob
from ocean_lib.ocean.mint_fake_ocean import mint_fake_OCEAN
mint_fake_OCEAN(config)

# Create Alice's wallet
import os
from eth_account import Account

alice_private_key = os.getenv("TEST_PRIVATE_KEY1")
alice = Account.from_key(private_key=alice_private_key)
assert alice.balance() > 0, "Alice needs ETH"
assert OCEAN.balanceOf(alice) > 0, "Alice needs OCEAN"

# Create additional wallets. While some flows just use Alice's wallet, it's simpler to create all wallets here.
bob_private_key = os.getenv('TEST_PRIVATE_KEY2')
bob = Account.from_key(private_key=bob_private_key)
assert bob.balance() > 0, "Bob needs ETH"
assert OCEAN.balanceOf(bob) > 0, "Bob needs OCEAN"

# Compact wei <> eth conversion
from ocean_lib.ocean.util import to_wei, from_wei
```

diff --git a/data-scientists/ocean.py/ocean-assets-tech-details.md b/data-scientists/ocean.py/ocean-assets-tech-details.md deleted file mode 100644 index a3ff32f44..000000000 --- a/data-scientists/ocean.py/ocean-assets-tech-details.md +++ /dev/null @@ -1,981 +0,0 @@
---
description: Technical details about OceanAssets functions
---

# Ocean Assets Tech Details

Through this class we can publish different types of assets & consume them to make 💲💲💲

### Creates URL Asset

* **create\_url\_asset**(`self`, `name: str`, `url: str`, `publisher_wallet`, `wait_for_aqua: bool = True`) -> `tuple`

It is one of the most used functions in the READMEs.

Creates an asset of type "dataset", having `UrlFiles`, with good defaults.

It can be called after instantiating an Ocean object.

**Parameters**

* `name` - name of the asset, `string`
* `url` - URL that is stored in the asset, `string`
* `publisher_wallet` - wallet of the asset publisher/owner, `eth Account`
* `wait_for_aqua` - boolean, default `True`; waiting for Aquarius to index the asset takes additional time, so keep the default if you want to be sure that your asset is indexed.

**Returns**

`tuple`

A tuple which contains the data NFT, datatoken and the data asset.

**Defined in**

[ocean/ocean_assets.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/ocean/ocean_assets.py#LL178C1-L185C82)
- -Source code - -{% code overflow="wrap" %} -```python - @enforce_types - def create_url_asset( - self, name: str, url: str, publisher_wallet, wait_for_aqua: bool = True - ) -> tuple: - """Create asset of type "data", having UrlFiles, with good defaults""" - metadata = self._default_metadata(name, publisher_wallet) - files = [UrlFile(url)] - return self._create_1dt(metadata, files, publisher_wallet, wait_for_aqua) -``` -{% endcode %} - -
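As a quick illustration, here's a minimal publish sketch, assuming the `ocean` instance and a funded `alice` wallet from the setup flows:

```python
name = "Branin dataset"
url = "https://raw.githubusercontent.com/trentmc/branin/main/branin.arff"

# One call creates the data NFT, the datatoken, and the DDO
(data_nft, datatoken, ddo) = ocean.assets.create_url_asset(name, url, {"from": alice})
print(f"Just published asset, with did={ddo.did}")
```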
### Creates Algorithm Asset

* **create\_algo\_asset**(`self`, `name: str`, `url: str`, `publisher_wallet`, `image: str = "oceanprotocol/algo_dockers"`, `tag: str = "python-branin"`, `checksum: str = "sha256:8221d20c1c16491d7d56b9657ea09082c0ee4a8ab1a6621fa720da58b09580e4"`, `wait_for_aqua: bool = True`) -> `tuple`

Creates an asset of type "algorithm", having `UrlFiles`, with good defaults.

It can be called after instantiating an Ocean object.

**Parameters**

* `name` - name of the asset, `string`
* `url` - URL that is stored in the asset, `string`
* `publisher_wallet` - wallet of the asset publisher/owner, `eth Account`
* `image` - Docker image of that algorithm, `string`
* `tag` - Docker tag for that algorithm image, `string`
* `checksum` - Docker checksum for the algorithm's image, `string`
* `wait_for_aqua` - boolean, default `True`; waiting for Aquarius to index the asset takes additional time, so keep the default if you want to be sure that your asset is indexed.

**Returns**

`tuple`

A tuple which contains the algorithm NFT, algorithm datatoken and the algorithm asset.

**Defined in**

[ocean/ocean_assets.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/ocean/ocean_assets.py#LL146C4-L176C82)
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def create_algo_asset( - self, - name: str, - url: str, - publisher_wallet, - image: str = "oceanprotocol/algo_dockers", - tag: str = "python-branin", - checksum: str = "sha256:8221d20c1c16491d7d56b9657ea09082c0ee4a8ab1a6621fa720da58b09580e4", - wait_for_aqua: bool = True, - ) -> tuple: - """Create asset of type "algorithm", having UrlFiles, with good defaults""" - - if image == "oceanprotocol/algo_dockers" or tag == "python-branin": - assert image == "oceanprotocol/algo_dockers" and tag == "python-branin" - - metadata = self._default_metadata(name, publisher_wallet, "algorithm") - metadata["algorithm"] = { - "language": "python", - "format": "docker-image", - "version": "0.1", - "container": { - "entrypoint": "python $ALGO", - "image": image, - "tag": tag, - "checksum": checksum, - }, - } - - files = [UrlFile(url)] - return self._create_1dt(metadata, files, publisher_wallet, wait_for_aqua) -``` -{% endcode %} - -
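A minimal usage sketch, assuming the `ocean` instance and a funded `alice` wallet; the URL is illustrative and should point at your algorithm script. The default `image`/`tag`/`checksum` values target the sample Branin-style container:

```python
name = "grid search"
url = "https://raw.githubusercontent.com/trentmc/branin/main/gpr.py"  # illustrative

# Publishes the algorithm NFT, datatoken, and DDO in one call
(algo_nft, algo_datatoken, algo_ddo) = ocean.assets.create_algo_asset(
    name, url, {"from": alice}
)
```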
### Creates Arweave Asset

* **create\_arweave\_asset**(`self`, `name: str`, `transaction_id: str`, `publisher_wallet`, `wait_for_aqua: bool = True`) -> `tuple`

Creates an asset of type "data", having `ArweaveFile`, with good defaults.

It can be called after instantiating an Ocean object.

**Parameters**

* `name` - name of the asset, `string`
* `transaction_id` - transaction ID of the Arweave file, `string`
* `publisher_wallet` - wallet of the asset publisher/owner, `eth Account`
* `wait_for_aqua` - boolean, default `True`; waiting for Aquarius to index the asset takes additional time, so keep the default if you want to be sure that your asset is indexed.

**Returns**

`tuple`

A tuple which contains the data NFT, datatoken and the data asset.

**Defined in**

[ocean/ocean_assets.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/ocean/ocean_assets.py#LL187C5-L198C82)
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def create_arweave_asset( - self, - name: str, - transaction_id: str, - publisher_wallet, - wait_for_aqua: bool = True, - ) -> tuple: - """Create asset of type "data", having ArweaveFiles, with good defaults""" - metadata = self._default_metadata(name, publisher_wallet) - files = [ArweaveFile(transaction_id)] - return self._create_1dt(metadata, files, publisher_wallet, wait_for_aqua) -``` -{% endcode %} - -
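A minimal usage sketch, assuming the `ocean` instance and `alice` wallet; the transaction id below is a placeholder for a real Arweave transaction id:

```python
name = "Branin dataset (Arweave)"
transaction_id = "a4qJoQZa1poIv5guEzkfgZYSAD0uYm7Vw4zm_tCswVQ"  # placeholder

(data_nft, datatoken, ddo) = ocean.assets.create_arweave_asset(
    name, transaction_id, {"from": alice}
)
```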
### Creates GraphQL Asset

* **create\_graphql\_asset**(`self`, `name: str`, `url: str`, `query: str`, `publisher_wallet`, `wait_for_aqua: bool = True`) -> `tuple`

Creates an asset of type "data", having `GraphqlQuery` files, with good defaults.

It can be called after instantiating an Ocean object.

**Parameters**

* `name` - name of the asset, `string`
* `url` - URL of the subgraph that you are using, `string`
* `query` - GraphQL query, `string`
* `publisher_wallet` - wallet of the asset publisher/owner, `eth Account`
* `wait_for_aqua` - boolean, default `True`; waiting for Aquarius to index the asset takes additional time, so keep the default if you want to be sure that your asset is indexed.

**Returns**

`tuple`

A tuple which contains the data NFT, datatoken and the data asset.

**Defined in**

[ocean/ocean_assets.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/ocean/ocean_assets.py#LL200C5-L212C82)
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def create_graphql_asset( - self, - name: str, - url: str, - query: str, - publisher_wallet, - wait_for_aqua: bool = True, - ) -> tuple: - """Create asset of type "data", having GraphqlQuery files, w good defaults""" - metadata = self._default_metadata(name, publisher_wallet) - files = [GraphqlQuery(url, query)] - return self._create_1dt(metadata, files, publisher_wallet, wait_for_aqua) -``` -{% endcode %} - -
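A minimal usage sketch, assuming the `ocean` instance and `alice` wallet; the subgraph URL and query below are illustrative:

```python
name = "Data NFTs in Ocean"
url = "https://v4.subgraph.sepolia.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph"  # illustrative
query = """
query {
  nfts(orderBy: createdTimestamp, orderDirection: desc) {
    id
    symbol
    createdTimestamp
  }
}
"""

(data_nft, datatoken, ddo) = ocean.assets.create_graphql_asset(
    name, url, query, {"from": alice}
)
```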
### Creates Onchain Asset

* **create\_onchain\_asset**(`self`, `name: str`, `contract_address: str`, `contract_abi: dict`, `publisher_wallet`, `wait_for_aqua: bool = True`) -> `tuple`

Creates an asset of type "data", having `SmartContractCall` files, with good defaults.

It can be called after instantiating an Ocean object.

**Parameters**

* `name` - name of the asset, `string`
* `contract_address` - contract address that is stored in the asset, `string`
* `contract_abi` - ABI of the contract function to be called, `dict`
* `publisher_wallet` - wallet of the asset publisher/owner, `eth Account`
* `wait_for_aqua` - boolean, default `True`; waiting for Aquarius to index the asset takes additional time, so keep the default if you want to be sure that your asset is indexed.

**Returns**

`tuple`

A tuple which contains the data NFT, datatoken and the data asset.

**Defined in**

[ocean/ocean_assets.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/ocean/ocean_assets.py#LL214C5-L229C1)
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def create_onchain_asset( - self, - name: str, - contract_address: str, - contract_abi: dict, - publisher_wallet, - wait_for_aqua: bool = True, - ) -> tuple: - """Create asset of type "data", having SmartContractCall files, w defaults""" - chain_id = self._chain_id - onchain_data = SmartContractCall(contract_address, chain_id, contract_abi) - files = [onchain_data] - metadata = self._default_metadata(name, publisher_wallet) - return self._create_1dt(metadata, files, publisher_wallet, wait_for_aqua) -``` -{% endcode %} - -
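A minimal usage sketch, assuming the `ocean` instance and `alice` wallet; the contract address and single-function ABI below are illustrative placeholders:

```python
name = "swapOceanFee function call"
contract_address = "0x282d8efCe846A88B159800bd4130ad77443Fa1A1"  # illustrative
contract_abi = {
    "inputs": [],
    "name": "swapOceanFee",
    "outputs": [{"internalType": "uint256", "name": "", "type": "uint256"}],
    "stateMutability": "view",
    "type": "function",
}

(data_nft, datatoken, ddo) = ocean.assets.create_onchain_asset(
    name, contract_address, contract_abi, {"from": alice}
)
```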
### Creates Asset (for advanced skills)

* **create**(`self`, `metadata: dict`, `publisher_wallet`, `credentials: Optional[dict] = None`, `data_nft_address: Optional[str] = None`, `data_nft_args: Optional[DataNFTArguments] = None`, `deployed_datatokens: Optional[List[Datatoken]] = None`, `services: Optional[list] = None`, `datatoken_args: Optional[List["DatatokenArguments"]] = None`, `encrypt_flag: Optional[bool] = True`, `compress_flag: Optional[bool] = True`, `wait_for_aqua: bool = True`) -> `tuple`

Registers an asset on-chain. Asset = {data\_NFT, >=0 datatokens, DDO}

It creates/deploys a data NFT contract on-chain and stores the DDO in the Metadata store (Aquarius).

**Parameters**

* `metadata`: `dictionary` conforming to the Metadata accepted by Ocean Protocol.
* `publisher_wallet` - `eth Account` of the publisher registering this asset.
* `credentials` - credentials `dictionary` necessary for the asset, which establishes who can consume the asset and who cannot.
* `data_nft_address` - hex string, the address of the data NFT. The new asset will be associated with this data NFT address.
* `data_nft_args` - object of `DataNFTArguments` type if creating a new data NFT.
* `deployed_datatokens` - list of datatokens which are already deployed.
* `services` - list of `Service` objects if you want to run multiple services for a datatoken, or you have multiple datatokens with a single service each.
* `datatoken_args` - list of objects of `DatatokenArguments` type if creating new datatokens.
* `encrypt_flag` - bool for encryption of the DDO.
* `compress_flag` - bool for compression of the DDO.
* `wait_for_aqua` - bool to wait for the DDO to be indexed in Aquarius.

**Returns**

`tuple`

A tuple which contains the data NFT, datatoken and the data asset.

**Defined in**

[ocean/ocean_assets.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/ocean/ocean_assets.py#LL259C5-L390C43)
- -Source code - -{% code overflow="wrap" %} -```python -def create( - self, - metadata: dict, - publisher_wallet, - credentials: Optional[dict] = None, - data_nft_address: Optional[str] = None, - data_nft_args: Optional[DataNFTArguments] = None, - deployed_datatokens: Optional[List[Datatoken]] = None, - services: Optional[list] = None, - datatoken_args: Optional[List["DatatokenArguments"]] = None, - encrypt_flag: Optional[bool] = True, - compress_flag: Optional[bool] = True, - wait_for_aqua: bool = True, - ) -> Optional[DDO]: - - self._assert_ddo_metadata(metadata) - - provider_uri = DataServiceProvider.get_url(self._config_dict) - - if not data_nft_address: - data_nft_args = data_nft_args or DataNFTArguments( - metadata["name"], metadata["name"] - ) - data_nft = data_nft_args.deploy_contract( - self._config_dict, publisher_wallet - ) - # register on-chain - if not data_nft: - logger.warning("Creating new NFT failed.") - return None, None, None - logger.info(f"Successfully created NFT with address {data_nft.address}.") - else: - data_nft = DataNFT(self._config_dict, data_nft_address) - - # Create DDO object - ddo = DDO() - - # Generate the did, add it to the ddo. - ddo.did = data_nft.calculate_did() - # Check if it's already registered first! - if self._aquarius.ddo_exists(ddo.did): - raise AquariusError( - f"Asset id {ddo.did} is already registered to another asset." - ) - ddo.chain_id = self._chain_id - ddo.metadata = metadata - - ddo.credentials = credentials if credentials else {"allow": [], "deny": []} - - ddo.nft_address = data_nft.address - datatokens = [] - - if not deployed_datatokens: - services = [] - for datatoken_arg in datatoken_args: - new_dt = datatoken_arg.create_datatoken( - data_nft, publisher_wallet, with_services=True - ) - datatokens.append(new_dt) - - services.extend(datatoken_arg.services) - - for service in services: - ddo.add_service(service) - else: - if not services: - logger.warning("services required with deployed_datatokens.") - return None, None, None - - datatokens = deployed_datatokens - dt_addresses = [] - for datatoken in datatokens: - if deployed_datatokens[0].address not in data_nft.getTokensList(): - logger.warning( - "some deployed_datatokens don't belong to the given data nft." - ) - return None, None, None - - dt_addresses.append(datatoken.address) - - for service in services: - if service.datatoken not in dt_addresses: - logger.warning("Datatoken services mismatch.") - return None, None, None - - ddo.add_service(service) - - # Validation by Aquarius - _, proof = self.validate(ddo) - proof = ( - proof["publicKey"], - proof["v"], - proof["r"][0], - proof["s"][0], - ) - - document, flags, ddo_hash = self._encrypt_ddo( - ddo, provider_uri, encrypt_flag, compress_flag - ) - - data_nft.setMetaData( - 0, - provider_uri, - Web3.toChecksumAddress(publisher_wallet.address.lower()).encode("utf-8"), - flags, - document, - ddo_hash, - [proof], - {"from": publisher_wallet}, - ) - - # Fetch the ddo on chain - if wait_for_aqua: - ddo = self._aquarius.wait_for_ddo(ddo.did) - - return (data_nft, datatokens, ddo) -``` -{% endcode %} - -**Publishing Alternatives** - -Here are some examples similar to the `create()` above, but exposes more fine-grained control. 
In the same Python console:

```python
# Specify metadata and services, using the Branin test dataset
date_created = "2021-12-28T10:55:11Z"
metadata = {
    "created": date_created,
    "updated": date_created,
    "description": "Branin dataset",
    "name": "Branin dataset",
    "type": "dataset",
    "author": "Trent",
    "license": "CC0: PublicDomain",
}

# Use "UrlFile" asset type. (There are other options)
from ocean_lib.structures.file_objects import UrlFile
url_file = UrlFile(
    url="https://raw.githubusercontent.com/trentmc/branin/main/branin.arff"
)

# Publish data asset
from ocean_lib.models.datatoken_base import DatatokenArguments
_, _, ddo = ocean.assets.create(
    metadata,
    {"from": alice},
    datatoken_args=[DatatokenArguments(files=[url_file])],
)
```

**DDO Encryption or Compression**

The DDO is stored on-chain. It's encrypted and compressed by default. Therefore it supports GDPR "right-to-be-forgotten" compliance rules by default.

You can control this during `create()`:

* To disable encryption, use `ocean.assets.create(..., encrypt_flag=False)`.
* To disable compression, use `ocean.assets.create(..., compress_flag=False)`.
* To disable both, use `ocean.assets.create(..., encrypt_flag=False, compress_flag=False)`.

**Create _just_ a data NFT**

Calling `create()` like above generates a data NFT, a datatoken for that NFT, and a ddo. This is the most common case. However, sometimes you may want _just_ the data NFT, e.g. if using a data NFT as a simple key-value store. Here's how:

```python
data_nft = ocean.data_nft_factory.create({"from": alice}, 'NFT1', 'NFT1')
```

If you call `create()` after this, you can pass in an argument `data_nft_address:string` and it will use that NFT rather than creating a new one.

**Create a datatoken from a data NFT**

Calling `create()` like above generates a data NFT, a datatoken for that NFT, and a ddo object. However, we may want a second datatoken. Or, we may have started with _just_ the data NFT, and want to add a datatoken to it. Here's how:

```python
datatoken = data_nft.create_datatoken({"from": alice}, "Datatoken 1", "DT1")
```

If you call `create()` after this, you can pass in an argument `deployed_datatokens:List[Datatoken1]` and it will use those datatokens during creation.

**Create an asset & pricing schema simultaneously**

Ocean Assets allows you to bundle several common scenarios into a single transaction, thus lowering gas fees.

Any of the `ocean.assets.create_*_asset()` functions can also take an optional parameter that describes a bundled pricing schema (Dispenser or Fixed Rate Exchange).

Here is an example involving an exchange:

{% code overflow="wrap" %}
```python
from ocean_lib.models.fixed_rate_exchange import ExchangeArguments
(data_nft, datatoken, ddo) = ocean.assets.create_url_asset(
    name,
    url,
    {"from": alice},
    pricing_schema_args=ExchangeArguments(rate=to_wei(3), base_token_addr=ocean.OCEAN_address, dt_decimals=18)
)

assert len(datatoken.get_exchanges()) == 1
```
{% endcode %}
### Updates Asset

* **update**(`self`, `ddo: DDO`, `publisher_wallet`, `provider_uri: Optional[str] = None`, `encrypt_flag: Optional[bool] = True`, `compress_flag: Optional[bool] = True`) -> `Optional[DDO]`

Updates a DDO on-chain.

**Parameters**

* `ddo` - DDO to update
* `publisher_wallet` - wallet of the publisher of this DDO
* `provider_uri` - URL of the service provider. This will be used as the base to construct the serviceEndpoint for the `access` (download) service.
* `encrypt_flag` - boolean value for encrypting the DDO
* `compress_flag` - boolean value for compressing the DDO

**Returns**

`DDO` or `None`

The updated DDO, or `None` if the updated DDO is not found in Aquarius.

**Defined in**

[ocean/ocean_assets.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/ocean/ocean_assets.py#LL392C5-L454C19)
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def update( - self, - ddo: DDO, - publisher_wallet, - provider_uri: Optional[str] = None, - encrypt_flag: Optional[bool] = True, - compress_flag: Optional[bool] = True, - ) -> Optional[DDO]: - - self._assert_ddo_metadata(ddo.metadata) - - if not provider_uri: - provider_uri = DataServiceProvider.get_url(self._config_dict) - - assert ddo.nft_address, "need nft address to update a ddo" - data_nft = DataNFT(self._config_dict, ddo.nft_address) - - assert ddo.chain_id == self._chain_id - - for service in ddo.services: - service.encrypt_files(ddo.nft_address) - - # Validation by Aquarius - validation_result, errors_or_proof = self.validate(ddo) - if not validation_result: - msg = f"DDO has validation errors: {errors_or_proof}" - logger.error(msg) - raise ValueError(msg) - - document, flags, ddo_hash = self._encrypt_ddo( - ddo, provider_uri, encrypt_flag, compress_flag - ) - - proof = ( - errors_or_proof["publicKey"], - errors_or_proof["v"], - errors_or_proof["r"][0], - errors_or_proof["s"][0], - ) - - tx_result = data_nft.setMetaData( - 0, - provider_uri, - Web3.toChecksumAddress(publisher_wallet.address.lower()).encode("utf-8"), - flags, - document, - ddo_hash, - [proof], - {"from": publisher_wallet}, - ) - - ddo = self._aquarius.wait_for_ddo_update(ddo, tx_result.txid) - - return ddo -``` -{% endcode %} - -
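A minimal update sketch, assuming `ocean`, `alice`, and a previously published `ddo` from the publish flow: edit the metadata locally, then push the change on-chain.

```python
# Tweak a metadata field, then re-publish the DDO
ddo.metadata["description"] = "Branin dataset (updated)"
ddo = ocean.assets.update(ddo, alice)
assert ddo is not None, "updated DDO was not found in Aquarius"
```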
- -### Resolves Asset - -* **resolve**(`self`, `did: str`) -> `"DDO"` - -Resolves the asset from Metadata Cache store (Aquarius). - -**Parameter** - -* `did` - identifier of the DDO to be searched & resolved in Aquarius - -**Returns** - -`DDO` - -Returns DDO instance. - -**Defined in** - -[ocean/ocean_assets.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/ocean/ocean_assets.py#LL456C5-L458C43) - -
- -Source code - -```python -@enforce_types - def resolve(self, did: str) -> "DDO": - return self._aquarius.get_ddo(did) -``` - -
### Searches Assets by Text

* **search**(`self`, `text: str`) -> `list`

Searches for DDOs matching a given text.

**Parameter**

* `text` - text to search for in assets, `string`.

**Returns**

`list`

A list of DDOs that match the provided text.

**Defined in**

[ocean/ocean_assets.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/ocean/ocean_assets.py#LL460C4-L475C10)
- -Source code - -```python -@enforce_types - def search(self, text: str) -> list: - """ - Search for DDOs in aquarius that contain the target text string - :param text - target string - :return - List of DDOs that match with the query - """ - logger.info(f"Search for DDOs containing text: {text}") - text = text.replace(":", "\\:").replace("\\\\:", "\\:") - return [ - DDO.from_dict(ddo_dict["_source"]) - for ddo_dict in self._aquarius.query_search( - {"query": {"query_string": {"query": text}}} - ) - if "_source" in ddo_dict - ] -``` - -
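A minimal usage sketch, assuming the `ocean` instance:

```python
ddos = ocean.assets.search("Branin")
print(f"Found {len(ddos)} assets mentioning 'Branin'")
```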
### Searches Assets by Query

* **query**(`self`, `query: dict`) -> `list`

Searches for DDOs matching a given query.

**Parameter**

* `query` - query to search assets with, `dictionary` (Elasticsearch-style, as accepted by Aquarius).

**Returns**

`list`

A list of DDOs that match the provided query.

**Defined in**

[ocean/ocean_assets.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/ocean/ocean_assets.py#LL477C4-L490C10)
- -Source code - -{% code overflow="wrap" %} -```python - @enforce_types - def query(self, query: dict) -> list: - """ - Search for DDOs in aquarius with a search query dict - :param query - dict with query parameters - More info at: https://docs.oceanprotocol.com/api-references/aquarius-rest-api - :return - List of DDOs that match the query. - """ - logger.info(f"Search for DDOs matching query: {query}") - return [ - DDO.from_dict(ddo_dict["_source"]) - for ddo_dict in self._aquarius.query_search(query) - if "_source" in ddo_dict - ] -``` -{% endcode %} - -
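A minimal usage sketch, assuming the `ocean` instance. The query dict follows the Elasticsearch syntax that Aquarius accepts; the exact fields below are illustrative:

```python
query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"metadata.type": "dataset"}},
                {"match": {"metadata.name": "Branin"}},
            ]
        }
    }
}
ddos = ocean.assets.query(query)
```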
### Downloads Asset

* **download\_asset**(`self`, `ddo: DDO`, `consumer_wallet`, `destination: str`, `order_tx_id: Union[str, bytes]`, `service: Optional[Service] = None`, `index: Optional[int] = None`, `userdata: Optional[dict] = None`) -> `str`

Downloads the files of an asset that has already been ordered.

**Parameters**

* `ddo` - DDO to be downloaded.
* `consumer_wallet` - `eth Account` of the wallet that "ordered" the asset.
* `destination` - destination path, as string, where the asset will be downloaded.
* `order_tx_id` - transaction ID of the placed order; string and bytes formats are accepted.

**Optional parameters**

* `service` - optionally, the `Service` object through which you ordered the asset.
* `index` - optionally, to download a specific file rather than the whole asset, specify the file's index as a non-negative `integer`.
* `userdata` - `dictionary` of additional user data.

**Returns**

`str`

The full path to the downloaded file(s) as a `string`.

**Defined in**

[ocean/ocean_assets.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/ocean/ocean_assets.py#LL492C5-L516C20)
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def download_asset( - self, - ddo: DDO, - consumer_wallet, - destination: str, - order_tx_id: Union[str, bytes], - service: Optional[Service] = None, - index: Optional[int] = None, - userdata: Optional[dict] = None, - ) -> str: - service = service or ddo.services[0] # fill in good default - - if index is not None: - assert isinstance(index, int), logger.error("index has to be an integer.") - assert index >= 0, logger.error("index has to be 0 or a positive integer.") - - assert ( - service and service.type == ServiceTypes.ASSET_ACCESS - ), f"Service with type {ServiceTypes.ASSET_ACCESS} is not found." - - path: str = download_asset_files( - ddo, service, consumer_wallet, destination, order_tx_id, index, userdata - ) - return path -``` -{% endcode %} - -
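A minimal usage sketch, assuming `ddo`, a consumer wallet `bob`, and an `order_tx_id` returned by `pay_for_access_service` (next section):

```python
file_path = ocean.assets.download_asset(
    ddo, bob, destination="./", order_tx_id=order_tx_id
)
print(f"Downloaded files to {file_path}")
```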
### Pays for Access Service

* **pay\_for\_access\_service**(`self`, `ddo: DDO`, `wallet`, `service: Optional[Service] = None`, `consume_market_fees: Optional[TokenFeeInfo] = None`, `consumer_address: Optional[str] = None`, `userdata: Optional[dict] = None`)

Pays for an access service by calling the Provider's initialize endpoint and then starting the order.

**Parameters**

* `ddo` - DDO of the asset being paid for.
* `wallet` - `eth Account` for the wallet that pays for the asset.

**Optional parameters**

* `service` - optionally, the `Service` object that you want to pay for.
* `consume_market_fees` - `TokenFeeInfo` object which contains the consume market fee address, amount and token address.
* `consumer_address` - address of the consumer who pays for the access.
* `userdata` - `dictionary` of additional user data.

**Returns**

`str`

The transaction hash as a hex string, which is the proof of starting the order.

**Defined in**

[ocean/ocean_assets.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/ocean/ocean_assets.py#LL518C5-L571C28)
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def pay_for_access_service( - self, - ddo: DDO, - wallet, - service: Optional[Service] = None, - consume_market_fees: Optional[TokenFeeInfo] = None, - consumer_address: Optional[str] = None, - userdata: Optional[dict] = None, - ): - # fill in good defaults as needed - service = service or ddo.services[0] - consumer_address = consumer_address or wallet.address - - # main work... - dt = Datatoken(self._config_dict, service.datatoken) - balance = dt.balanceOf(wallet.address) - - if balance < to_wei(1): - raise InsufficientBalance( - f"Your token balance {balance} {dt.symbol()} is not sufficient " - f"to execute the requested service. This service " - f"requires 1 wei." - ) - - consumable_result = is_consumable( - ddo, - service, - {"type": "address", "value": wallet.address}, - userdata=userdata, - ) - if consumable_result != ConsumableCodes.OK: - raise AssetNotConsumable(consumable_result) - - data_provider = DataServiceProvider - - initialize_args = { - "did": ddo.did, - "service": service, - "consumer_address": consumer_address, - } - - initialize_response = data_provider.initialize(**initialize_args) - provider_fees = initialize_response.json()["providerFee"] - - receipt = dt.start_order( - consumer=consumer_address, - service_index=ddo.get_index_of_service(service), - provider_fees=provider_fees, - consume_market_fees=consume_market_fees, - transaction_parameters={"from": wallet}, - ) - - return receipt.txid -``` -{% endcode %} - -
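A minimal usage sketch, assuming `ddo` and a `bob` wallet holding at least 1.0 of the service's datatokens:

```python
# Pays the datatoken + provider fees and starts the order
order_tx_id = ocean.assets.pay_for_access_service(ddo, bob)
print(f"order_tx_id = {order_tx_id}")
```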
### Pays for Compute Service

* **pay\_for\_compute\_service**(`self`, `datasets: List[ComputeInput]`, `algorithm_data: Union[ComputeInput, AlgorithmMetadata]`, `compute_environment: str`, `valid_until: int`, `consume_market_order_fee_address: str`, `wallet`, `consumer_address: Optional[str] = None`)

Pays for a compute service by calling the Provider's `initializeCompute` endpoint to retrieve the provider fees, then starting the order.

**Parameters**

* `datasets` - list of `ComputeInput` objects, each of which must include the DDO and service.
* `algorithm_data` - either a `ComputeInput` object containing the whole DDO and service, or just the algorithm metadata as `AlgorithmMetadata`.
* `compute_environment` - `string` that represents the ID of the chosen C2D environment.
* `valid_until` - `UNIX timestamp` until which the algorithm can be used/run.
* `consume_market_order_fee_address` - string address which denotes the consume market fee address for that order; it can be the wallet address itself.
* `wallet` - the `eth Account` which pays for the compute service

**Optional parameters**

* `consumer_address` - string address of the C2D environment consumer.

**Returns**

`tuple`

A tuple composed of the list of datasets and the algorithm data (if present in the result): `(datasets, algorithm_data)`.

**Defined in**

[ocean/ocean_assets.py](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/ocean/ocean_assets.py#LL573C5-L627C30)
- -Source code - -```python - @enforce_types - def pay_for_compute_service( - self, - datasets: List[ComputeInput], - algorithm_data: Union[ComputeInput, AlgorithmMetadata], - compute_environment: str, - valid_until: int, - consume_market_order_fee_address: str, - wallet, - consumer_address: Optional[str] = None, - ): - data_provider = DataServiceProvider - - if not consumer_address: - consumer_address = wallet.address - - initialize_response = data_provider.initialize_compute( - [x.as_dictionary() for x in datasets], - algorithm_data.as_dictionary(), - datasets[0].service.service_endpoint, - consumer_address, - compute_environment, - valid_until, - ) - - result = initialize_response.json() - for i, item in enumerate(result["datasets"]): - self._start_or_reuse_order_based_on_initialize_response( - datasets[i], - item, - TokenFeeInfo( - consume_market_order_fee_address, - datasets[i].consume_market_order_fee_token, - datasets[i].consume_market_order_fee_amount, - ), - wallet, - consumer_address, - ) - - if "algorithm" in result: - self._start_or_reuse_order_based_on_initialize_response( - algorithm_data, - result["algorithm"], - TokenFeeInfo( - address=consume_market_order_fee_address, - token=algorithm_data.consume_market_order_fee_token, - amount=algorithm_data.consume_market_order_fee_amount, - ), - wallet, - consumer_address, - ) - - return datasets, algorithm_data - - return datasets, None -``` - -
diff --git a/data-scientists/ocean.py/ocean-compute-tech-details.md b/data-scientists/ocean.py/ocean-compute-tech-details.md deleted file mode 100644 index 3792ef6cc..000000000 --- a/data-scientists/ocean.py/ocean-compute-tech-details.md +++ /dev/null @@ -1,381 +0,0 @@
---
description: Technical details about OceanCompute functions
---

# Ocean Compute Tech Details

Using this class, we can manage a compute job: run it on an Ocean C2D environment and retrieve the results after the execution is finished.

### Start Compute Job

* **start**(`self`, `consumer_wallet`, `dataset: ComputeInput`, `compute_environment: str`, `algorithm: Optional[ComputeInput] = None`, `algorithm_meta: Optional[AlgorithmMetadata] = None`, `algorithm_algocustomdata: Optional[dict] = None`, `additional_datasets: List[ComputeInput] = []`) -> `str`

Starts a compute job.

It can be called within the Ocean Compute class.

**Parameters**

* `consumer_wallet` - the `eth Account` of the consumer who pays for & starts the compute job.
* `dataset` - `ComputeInput` object, which must include the DDO and service.
* `compute_environment` - `string` that represents the ID of the chosen C2D environment.
* `additional_datasets` - list of `ComputeInput` objects for additional datasets, in case of starting a compute job for multiple datasets.

**Optional parameters**

* `algorithm` - `ComputeInput` object, which must include the DDO and service for the algorithm.
* `algorithm_meta` - just the algorithm metadata, as `AlgorithmMetadata`.
* `algorithm_algocustomdata` - additional user data for the algorithm, as a dictionary.

**Returns**

`str`

Returns a string job ID.

**Defined in**

[ocean/ocean_compute.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean_compute.py#LL32C4-L70C33)
- -Source code - -```python - @enforce_types - def start( - self, - consumer_wallet, - dataset: ComputeInput, - compute_environment: str, - algorithm: Optional[ComputeInput] = None, - algorithm_meta: Optional[AlgorithmMetadata] = None, - algorithm_algocustomdata: Optional[dict] = None, - additional_datasets: List[ComputeInput] = [], - ) -> str: - metadata_cache_uri = self._config_dict.get("METADATA_CACHE_URI") - ddo = Aquarius.get_instance(metadata_cache_uri).get_ddo(dataset.did) - service = ddo.get_service_by_id(dataset.service_id) - assert ( - ServiceTypes.CLOUD_COMPUTE == service.type - ), "service at serviceId is not of type compute service." - - consumable_result = is_consumable( - ddo, - service, - {"type": "address", "value": consumer_wallet.address}, - with_connectivity_check=True, - ) - if consumable_result != ConsumableCodes.OK: - raise AssetNotConsumable(consumable_result) - - # Start compute job - job_info = self._data_provider.start_compute_job( - dataset_compute_service=service, - consumer=consumer_wallet, - dataset=dataset, - compute_environment=compute_environment, - algorithm=algorithm, - algorithm_meta=algorithm_meta, - algorithm_custom_data=algorithm_algocustomdata, - input_datasets=additional_datasets, - ) - return job_info["jobId"] -``` - -
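A minimal usage sketch, assuming `ocean`, a funded `bob` wallet, already-paid-for compute dataset/algorithm DDOs (`DATA_ddo`, `ALGO_ddo`) with their compute services, and a C2D environment id `c2d_env["id"]` (see the environment getters below):

```python
from ocean_lib.models.compute_input import ComputeInput

DATA_compute_input = ComputeInput(DATA_ddo, DATA_ddo.services[0])
ALGO_compute_input = ComputeInput(ALGO_ddo, ALGO_ddo.services[0])

job_id = ocean.compute.start(
    consumer_wallet=bob,
    dataset=DATA_compute_input,
    compute_environment=c2d_env["id"],
    algorithm=ALGO_compute_input,
)
print(f"Started compute job with id: {job_id}")
```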
- -### Compute Job Status - -* **status**(`self`, `ddo: DDO`, `service: Service`, `job_id: str`, `wallet`) -> `Dict[str, Any]` - -Gets status of the compute job. - -It can be called within Ocean Compute class. - -**Parameters** - -* `ddo` - DDO offering the compute service of this job -* `service` - Service object of compute -* `job_id` - ID of the compute job -* `wallet` - eth Account which initiated the compute job - -**Returns** - -`Dict[str, Any]` - -A dictionary which contains the status for an existing compute job, keys are `(ok, status, statusText)`. - -**Defined in** - -[ocean/ocean_compute.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean_compute.py#LL72C5-L88C24) - -
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def status(self, ddo: DDO, service: Service, job_id: str, wallet) -> Dict[str, Any]: - """ - Gets job status. - - :param ddo: DDO offering the compute service of this job - :param service: compute service of this job - :param job_id: str id of the compute job - :param wallet: Wallet instance - :return: dict the status for an existing compute job, keys are (ok, status, statusText) - """ - job_info = self._data_provider.compute_job_status( - ddo.did, job_id, service, wallet - ) - job_info.update({"ok": job_info.get("status") not in (31, 32, None)}) - - return job_info -``` -{% endcode %} - -
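In practice you'd poll `status()` until the job finishes. A minimal polling sketch, assuming `ocean`, `bob`, the compute service `compute_service`, `DATA_ddo`, and a `job_id` from `start()`; the exact status keys can vary by Provider version:

```python
import time

succeeded = False
for _ in range(0, 200):
    status = ocean.compute.status(DATA_ddo, compute_service, job_id, bob)
    # "dateFinished" is assumed to be set once the job completes
    if status.get("dateFinished") and float(status["dateFinished"]) > 0:
        succeeded = True
        break
    time.sleep(5)
```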
### Compute Job Result

* **result**(`self`, `ddo: DDO`, `service: Service`, `job_id: str`, `index: int`, `wallet`) -> `Dict[str, Any]`

Gets the compute job result.

It can be called within the Ocean Compute class.

**Parameters**

* `ddo` - DDO offering the compute service of this job
* `service` - Service object of compute
* `job_id` - ID of the compute job
* `index` - compute result index
* `wallet` - eth Account which initiated the compute job

**Returns**

`Dict[str, Any]`

A dictionary which contains the result/log URLs for an existing compute job; keys are `(did, urls, logs)`.

**Defined in**

[ocean/ocean_compute.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean_compute.py#LL90C5-L106C22)
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def result( - self, ddo: DDO, service: Service, job_id: str, index: int, wallet - ) -> Dict[str, Any]: - """ - Gets job result. - - :param ddo: DDO offering the compute service of this job - :param service: compute service of this job - :param job_id: str id of the compute job - :param index: compute result index - :param wallet: Wallet instance - :return: dict the results/logs urls for an existing compute job, keys are (did, urls, logs) - """ - result = self._data_provider.compute_job_result(job_id, index, service, wallet) - - return result -``` -{% endcode %} - -
### Compute Job Result Logs

* **compute\_job\_result\_logs**(`self`, `ddo: DDO`, `service: Service`, `job_id: str`, `wallet`, `log_type="output"`) -> `Dict[str, Any]`

Gets the job output, if it exists.

It can be called within the Ocean Compute class.

**Parameters**

* `ddo` - DDO offering the compute service of this job
* `service` - Service object of compute
* `job_id` - ID of the compute job
* `wallet` - eth Account which initiated the compute job
* `log_type` - string which selects what kind of logs to display; default "output"

**Returns**

`Dict[str, Any]`

A dictionary which includes the result/log URLs for an existing compute job; keys are `(did, urls, logs)`.

**Defined in**

[ocean/ocean_compute.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean_compute.py#LL108C5-L130C22)
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def compute_job_result_logs( - self, - ddo: DDO, - service: Service, - job_id: str, - wallet, - log_type="output", - ) -> Dict[str, Any]: - """ - Gets job output if exists. - - :param ddo: DDO offering the compute service of this job - :param service: compute service of this job - :param job_id: str id of the compute job - :param wallet: Wallet instance - :return: dict the results/logs urls for an existing compute job, keys are (did, urls, logs) - """ - result = self._data_provider.compute_job_result_logs( - ddo, job_id, service, wallet, log_type - ) - - return result -``` -{% endcode %} - -
- -### Stop Compute Job - -* **stop**(`self`, `ddo: DDO`, `service: Service`, `job_id: str`, `wallet`) -> `Dict[str, Any]` - -Attempts to stop the running compute job. - -It can be called within Ocean Compute class. - -**Parameters** - -* `ddo` - DDO offering the compute service of this job -* `service` - Service object of compute -* `job_id` - ID of the compute job -* `wallet` - eth Account which initiated the compute job - -**Returns** - -`Dict[str, Any]` - -A dictionary which contains the status for the stopped compute job, keys are `(ok, status, statusText)`. - -**Defined in** - -[ocean/ocean_compute.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean_compute.py#LL132C5-L146C24) - -
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def stop(self, ddo: DDO, service: Service, job_id: str, wallet) -> Dict[str, Any]: - """ - Attempt to stop the running compute job. - - :param ddo: DDO offering the compute service of this job - :param job_id: str id of the compute job - :param wallet: Wallet instance - :return: dict the status for the stopped compute job, keys are (ok, status, statusText) - """ - job_info = self._data_provider.stop_compute_job( - ddo.did, job_id, service, wallet - ) - job_info.update({"ok": job_info.get("status") not in (31, 32, None)}) - return job_info -``` -{% endcode %} - -
### Get Priced C2D Environments

* **get\_c2d\_environments**(`self`, `service_endpoint: str`, `chain_id: int`)

Gets the list of compute environments.

It can be called within the Ocean Compute class.

**Parameters**

* `service_endpoint` - string Provider URL that is stored in the compute service.
* `chain_id` - `int`; since Provider can be multichain, `chain_id` is required to specify the network of your environment.

**Returns**

`list`

A list of objects containing information about each compute environment, with the following keys: `(id, feeToken, priceMin, consumerAddress, lastSeen, namespace, status)`.

**Defined in**

[ocean/ocean_compute.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean_compute.py#LL148C4-L150C84)
- -Source code - -{% code overflow="wrap" %} -```python - @enforce_types - def get_c2d_environments(self, service_endpoint: str, chain_id: int): - return DataServiceProvider.get_c2d_environments(service_endpoint, chain_id) -``` -{% endcode %} - -
### Get Free C2D Environment

* **get\_free\_c2d\_environment**(`self`, `service_endpoint: str`, `chain_id`)

Gets the first free compute environment advertised by the Provider.

Note that not all Providers offer free environments (`priceMin = 0`).

It can be called within the Ocean Compute class.

**Parameters**

* `service_endpoint` - string Provider URL that is stored in the compute service.
* `chain_id` - `int`; since Provider can be multichain, `chain_id` is required to specify the network of your environment.

**Returns**

An object containing information about the free compute environment, with the following keys: `(id, feeToken, priceMin, consumerAddress, lastSeen, namespace, status)`.

**Defined in**

[ocean/ocean_compute.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean_compute.py#LL152C5-L155C87)
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def get_free_c2d_environment(self, service_endpoint: str, chain_id): - environments = self.get_c2d_environments(service_endpoint, chain_id) - return next(env for env in environments if float(env["priceMin"]) == float(0)) -``` -{% endcode %} - -
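A minimal usage sketch, assuming the `ocean` instance and a compute-enabled `DATA_ddo`; it asks the asset's Provider for its free environment:

```python
compute_service = DATA_ddo.services[0]  # assuming the compute service is first
free_c2d_env = ocean.compute.get_free_c2d_environment(
    compute_service.service_endpoint, DATA_ddo.chain_id
)
print(free_c2d_env["id"], free_c2d_env["consumerAddress"])
```

Note that, per the source above, `get_free_c2d_environment` raises `StopIteration` if the Provider advertises no free environment, so you may want to guard the call.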
diff --git a/data-scientists/ocean.py/publish-flow.md b/data-scientists/ocean.py/publish-flow.md deleted file mode 100644 index 4516b9405..000000000 --- a/data-scientists/ocean.py/publish-flow.md +++ /dev/null @@ -1,133 +0,0 @@ ---- -description: >- - This page shows how you can publish a data NFT, a datatoken & a data asset all - at once in different scenarios. ---- - -# Publish Flow - -In this page, we provide some tips & tricks for publishing an asset on Ocean Market using ocean.py. - -We assume you've already (a) [installed Ocean](install.md), and (b) done [local setup](local-setup.md) or [remote setup](remote-setup.md). This flow works for either one, without any changes between them. - -In the Python console: - -```python -#data info -name = "Branin dataset" -url = "https://raw.githubusercontent.com/trentmc/branin/main/branin.arff" - -#create data asset -(data_nft, datatoken, ddo) = ocean.assets.create_url_asset(name, url, {"from": alice}) - -#print -print("Just published asset:") -print(f" data_nft: symbol={data_nft.symbol()}, address={data_nft.address}") -print(f" datatoken: symbol={datatoken.symbol()}, address={datatoken.address}") -print(f" did={ddo.did}") -``` - -You've now published an Ocean asset! - -* [`data_nft`](../../developers/contracts/data-nfts.md) is the base (base IP) -* [`datatoken`](../../developers/contracts/datatokens.md) for access by others (licensing) -* [`ddo`](../../developers/ddo-specification.md) holding metadata - -
### Appendix

For more information regarding the data NFT & datatoken interfaces and how they are implemented in Solidity, we suggest this [article](../../developers/contracts/datanft-and-datatoken.md) and the [contracts repo](https://github.com/oceanprotocol/contracts) on GitHub.

To explore the DDO specs, structure & meaning further, we invite you to consult the [DDO Specification](../../developers/ddo-specification.md) section.

#### Publishing Alternatives

Here's an example similar to the `create()` step above, but it exposes more parameters to interact with, which requires deeper knowledge of ocean.py usage. The example below creates an asset and attempts to create a datatoken as well, with the files specified via the `DatatokenArguments` class. You have the freedom to customize the data NFT, the datatoken, and also fields of the DDO, such as:

* services
* metadata
* credentials

In the same Python console:

```python
# Specify metadata and services, using the Branin test dataset
date_created = "2021-12-28T10:55:11Z"
metadata = {
    "created": date_created,
    "updated": date_created,
    "description": "Branin dataset",
    "name": "Branin dataset",
    "type": "dataset",
    "author": "Trent",
    "license": "CC0: PublicDomain",
}

# Use "UrlFile" asset type. (There are other options)
from ocean_lib.structures.file_objects import UrlFile
url_file = UrlFile(
    url="https://raw.githubusercontent.com/trentmc/branin/main/branin.arff"
)

# Publish data asset
from ocean_lib.models.datatoken_base import DatatokenArguments
_, _, ddo = ocean.assets.create(
    metadata,
    {"from": alice},
    datatoken_args=[DatatokenArguments(files=[url_file])],
)
```

#### DDO Encryption or Compression

The DDO is stored on-chain. It's encrypted and compressed by default. Therefore it supports GDPR "right-to-be-forgotten" compliance rules by default.

You can control this during `create()`:

* To disable encryption, use [`ocean.assets.create(..., encrypt_flag=False)`](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean_assets.py#L425).
* To disable compression, use [`ocean.assets.create(..., compress_flag=False)`](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean_assets.py#L426).
* To disable both, use [`ocean.assets.create(..., encrypt_flag=False, compress_flag=False)`](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean_assets.py#LL425C8-L426C46).

#### Create a data NFT

Calling `create()` like above generates a data NFT, a datatoken for that NFT, and a ddo. This is the most common case. However, sometimes you may want _just_ the data NFT, e.g. if using a data NFT as a simple key-value store. Here's how:

```python
data_nft = ocean.data_nft_factory.create({"from": alice}, 'NFT1', 'NFT1')
```

If you call `create()` after this, you can pass in an argument `data_nft_address:string` and it will use that NFT rather than creating a new one.

#### Create a datatoken from a data NFT

Calling `create()` like above generates a data NFT, a datatoken for that NFT, and a ddo object. However, we may want a second datatoken. Or, we may have started with _just_ the data NFT, and want to add a datatoken to it. Here's how:

```python
datatoken = data_nft.create_datatoken({"from": alice}, "Datatoken 1", "DT1")
```

If you call `create()` after this, you can pass in an argument `deployed_datatokens:List[Datatoken1]` and it will use those datatokens during creation.
#### Create an asset & pricing schema simultaneously

Ocean Assets allows you to bundle several common scenarios into a single transaction, thus lowering gas fees.

Any of the `ocean.assets.create_*_asset()` functions can also take an optional parameter that describes a bundled [pricing schema](https://github.com/oceanprotocol/ocean.py/blob/4aa12afd8a933d64bc2ed68d1e5359d0b9ae62f9/ocean_lib/models/datatoken.py#LL199C5-L219C10) (Dispenser or Fixed Rate Exchange).

Here is an example involving an exchange:

{% code overflow="wrap" %}
```python
from ocean_lib.models.fixed_rate_exchange import ExchangeArguments
(data_nft, datatoken, ddo) = ocean.assets.create_url_asset(
    name,
    url,
    {"from": alice},
    pricing_schema_args=ExchangeArguments(rate=to_wei(3), base_token_addr=ocean.OCEAN_address, dt_decimals=18)
)

assert len(datatoken.get_exchanges()) == 1
```
{% endcode %}

diff --git a/data-scientists/ocean.py/remote-setup.md b/data-scientists/ocean.py/remote-setup.md deleted file mode 100644 index 78bed4deb..000000000 --- a/data-scientists/ocean.py/remote-setup.md +++ /dev/null @@ -1,150 +0,0 @@
---
description: Remote setup for running & testing ocean.py
---

# Remote Setup

This setup does not use barge; it uses a remote chain for the transactions. When the network URL is specified & configured, ocean.py will use components (such as Provider, Aquarius, C2D) matching the expected blockchain.

Here, we do the setup for Sepolia. It's similar for other remote chains.

Here, we will:

1. Configure networks
2. Create two accounts - `REMOTE_TEST_PRIVATE_KEY1` and `2`
3. Get test ETH on Sepolia
4. Get test OCEAN on Sepolia
5. Set envvars
6. Set up Alice and Bob wallets in Python

Let's go!

### 1. Configure Networks

#### 1.1 Supported networks

All [Ocean chain deployments](https://docs.oceanprotocol.com/discover/networks) (Eth mainnet, Polygon, etc.) are supported. For any supported network, use the RPC URL of your choice when passing it to the ocean config object.

#### 1.2 RPCs and Infura

To obtain API keys for blockchain access, follow [this document](http://127.0.0.1:5000/o/mTcjMqA4ylf55anucjH8/s/zQlpIJEeu8x5yl0OLuXn/) for tips & tricks.

**If you do have an Infura account**

Use the full RPC URL including the base and API key, e.g. for Sepolia: `https://sepolia.infura.io/v3/`

### 2. Create EVM Accounts (One-Time)

An EVM account is uniquely defined by its private key. Its address is a function of that key. Let's generate two accounts!

In a new or existing console, run Python:

```bash
python
```

In the Python console:

```python
from eth_account.account import Account
account1 = Account.create()
account2 = Account.create()

print(f"""
REMOTE_TEST_PRIVATE_KEY1={account1.key.hex()}, ADDRESS1={account1.address}
REMOTE_TEST_PRIVATE_KEY2={account2.key.hex()}, ADDRESS2={account2.address}
""")
```

Then, hit Ctrl-C to exit the Python console.

Now, you have two EVM accounts (address & private key). Save them somewhere safe, like a local file or a password manager.

These accounts will work on any EVM-based chain: production chains like Eth mainnet and Polygon, and testnets like Sepolia. Here, we'll use them for Sepolia.

### 3. Get (test) ETH on Sepolia

We need the network's native token to pay for transactions on the network.
[ETH](https://ethereum.org/en/get-eth/) is the native token for Ethereum mainnet, [MATIC](https://polygon.technology/matic-token/) is the native token for Polygon, and [(test) ETH](https://www.alchemy.com/faucets/ethereum-sepolia) is the native token for Sepolia.

To get free (test) ETH on Sepolia:

1. Go to the faucet [https://www.alchemy.com/faucets/ethereum-sepolia](https://www.alchemy.com/faucets/ethereum-sepolia). Log in or create an account on Alchemy.
2. Request funds for ADDRESS1
3. Request funds for ADDRESS2

### 4. Get (test) OCEAN on Sepolia

[OCEAN](https://oceanprotocol.com/token) can be used as a data payment token, and locked into veOCEAN for Data Farming / curation. The READMEs show how to use OCEAN in both cases.

* (Test) OCEAN is on each testnet. Test OCEAN on Sepolia is at [`0x1B083D8584dd3e6Ff37d04a6e7e82b5F622f3985`](https://sepolia.etherscan.io/address/0x1B083D8584dd3e6Ff37d04a6e7e82b5F622f3985).

To get free (test) OCEAN on Sepolia:

1. Go to the faucet [https://faucet.sepolia.oceanprotocol.com/](https://faucet.sepolia.oceanprotocol.com/)
2. Request funds for ADDRESS1
3. Request funds for ADDRESS2

You can confirm receiving funds by going to the following URL and checking your reported OCEAN balance: `https://sepolia.etherscan.io/address/0x1B083D8584dd3e6Ff37d04a6e7e82b5F622f3985?a=`

### 5. Set envvars

As usual, Linux/MacOS needs "`export`" and Windows needs "`set`". In the console:

**Linux & MacOS users:**

```bash
# For accounts: set private keys
export REMOTE_TEST_PRIVATE_KEY1=
export REMOTE_TEST_PRIVATE_KEY2=
```

**Windows users:**

```powershell
# For accounts: set private keys
set REMOTE_TEST_PRIVATE_KEY1=
set REMOTE_TEST_PRIVATE_KEY2=
```

### 6. Setup in Python

In your working console, run Python:

```bash
python
```

In the Python console:

```python
# Create Ocean instance
import os
from ocean_lib.example_config import get_config_dict
from ocean_lib.ocean.ocean import Ocean
config = get_config_dict("https://1rpc.io/sepolia")  # or use another Sepolia RPC URL, e.g. your Infura one
ocean = Ocean(config)

# Create OCEAN object. ocean_lib knows where OCEAN is on all remote networks
OCEAN = ocean.OCEAN_token

# Create Alice's wallet
from eth_account import Account

alice_private_key = os.getenv('REMOTE_TEST_PRIVATE_KEY1')
alice = Account.from_key(private_key=alice_private_key)
assert alice.balance() > 0, "Alice needs ETH"
assert OCEAN.balanceOf(alice) > 0, "Alice needs OCEAN"

# Create Bob's wallet. While some flows just use Alice's wallet, it's simpler to do all here.
bob_private_key = os.getenv('REMOTE_TEST_PRIVATE_KEY2')
bob = Account.from_key(private_key=bob_private_key)
assert bob.balance() > 0, "Bob needs ETH"
assert OCEAN.balanceOf(bob) > 0, "Bob needs OCEAN"

# Compact wei <> eth conversion
from ocean_lib.ocean.util import to_wei, from_wei
```

If you get a gas-related error like `transaction underpriced`, you'll need to change the `maxFeePerGas` or `maxPriorityFeePerGas`.
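As a sketch, gas overrides can ride along in the same transaction dict as `"from"`; this assumes `ocean`, `alice`, `name`, and `url` from the publish flow, and that your ocean.py version accepts the override keys named in the error message above:

```python
from web3 import Web3

(data_nft, datatoken, ddo) = ocean.assets.create_url_asset(
    name,
    url,
    {
        "from": alice,
        "maxFeePerGas": Web3.toWei(100, "gwei"),
        "maxPriorityFeePerGas": Web3.toWei(2, "gwei"),
    },
)
```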
diff --git a/data-scientists/ocean.py/technical-details.md b/data-scientists/ocean.py/technical-details.md deleted file mode 100644 index 47fd81b6c..000000000 --- a/data-scientists/ocean.py/technical-details.md +++ /dev/null @@ -1,531 +0,0 @@
---
description: Technical details about most used ocean.py functions
---

# Ocean Instance Tech Details

At the beginning of most flows, we create an `ocean` object, which is an instance of class [`Ocean`](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean.py). It exposes useful information, including the following:

* properties for config & OCEAN
* contract object retrieval
* users' orders
* provider fees

### Constructor

* **\_\_init\_\_**(`self`, `config_dict: Dict`, `data_provider: Optional[Type] = None`)

The Ocean class is the entry point into Ocean Protocol.

In order to initialize an Ocean object, you must provide `config_dict`, which is a `Dictionary` instance, and optionally a `DataServiceProvider` instance.

**Parameters**

* `config_dict`: `dict`, mandatory; contains the configuration in dictionary format.
* `data_provider`: `Optional[DataProvider]`, default `None`; if not provided, the constructor instantiates a new one from scratch.

**Returns**

`None`

**Defined in**

[ocean/ocean.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean.py#L43)
- -Source code - -{% code overflow="wrap" %} -```python -class Ocean: - """The Ocean class is the entry point into Ocean Protocol.""" - - @enforce_types - def __init__(self, config_dict: Dict, data_provider: Optional[Type] = None) -> None: - """Initialize Ocean class. - - Usage: Make a new Ocean instance - - `ocean = Ocean({...})` - - This class provides the main top-level functions in ocean protocol: - 1. Publish assets metadata and associated services - - Each asset is assigned a unique DID and a DID Document (DDO) - - The DDO contains the asset's services including the metadata - - The DID is registered on-chain with a URL of the metadata store - to retrieve the DDO from - - `ddo = ocean.assets.create(metadata, publisher_wallet)` - - 2. Discover/Search ddos via the current configured metadata store (Aquarius) - - - Usage: - `ddos_list = ocean.assets.search('search text')` - - An instance of Ocean is parameterized by a `Config` instance. - - :param config_dict: variable definitions - :param data_provider: `DataServiceProvider` instance - """ - config_errors = {} - for key, value in config_defaults.items(): - if key not in config_dict: - config_errors[key] = "required" - continue - - if not isinstance(config_dict[key], type(value)): - config_errors[key] = f"must be {type(value).__name__}" - - if config_errors: - raise Exception(json.dumps(config_errors)) - - self.config_dict = config_dict - - network_name = config_dict["NETWORK_NAME"] - check_network(network_name) - - if not data_provider: - data_provider = DataServiceProvider - - self.assets = OceanAssets(self.config_dict, data_provider) - self.compute = OceanCompute(self.config_dict, data_provider) - - logger.debug("Ocean instance initialized: ") -``` -{% endcode %} - -
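For orientation, a minimal construction sketch, mirroring the remote-setup flow earlier in this document (the RPC URL is illustrative):

```python
from ocean_lib.example_config import get_config_dict
from ocean_lib.ocean.ocean import Ocean

config = get_config_dict("https://polygon.llamarpc.com")  # any supported RPC URL
ocean = Ocean(config)  # data_provider defaults to DataServiceProvider
```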
-
-### Config Getter
-
-* **config**(`self`) -> `dict`
-
-It is a helper property for retrieving the user's ocean.py configuration.\
-It is called on an Ocean object and returns a Python dictionary.
-
-**Returns**
-
-`dict`
-
-Configuration fields as a dictionary.
-
-**Defined in**
-
-[ocean/ocean.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean.py#LL265C1-L268C32)
-
- -Source code - -```python -@property - @enforce_types - def config(self) -> dict: # alias for config_dict - return self.config_dict -``` - -
-
-### OCEAN Address
-
-* **ocean_address**(`self`) -> `str`
-
-It is a helper property for retrieving the OCEAN token's address.\
-It is called on an Ocean object and returns the address as a `string`.
-
-**Returns**
-
-`str`
-
-The OCEAN address for that network.
-
-**Defined in**
-
-[ocean/ocean.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean.py#LL100C1-L103C52)
-
-
-Source code
-
-```python
-    @property
-    @enforce_types
-    def OCEAN_address(self) -> str:
-        return get_ocean_token_address(self.config)
-```
-
-The [`get_ocean_token_address`](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/util.py#LL31C1-L38C89) function is a utility function which gets the address from the `address.json` file:
-
-{% code overflow="wrap" %}
-```python
-@enforce_types
-def get_ocean_token_address(config_dict: dict) -> str:
-    """Returns the OCEAN address for given network or web3 instance
-    Requires either network name or web3 instance.
-    """
-    addresses = get_contracts_addresses(config_dict)
-
-    return Web3.toChecksumAddress(addresses.get("Ocean").lower()) if addresses else None
-```
-{% endcode %}
-
-
-### OCEAN Token Object
-
-* **ocean_token**(`self`) -> `DatatokenBase`
-* **OCEAN**(`self`) -> `DatatokenBase`, an alias for the above
-
-It is a helper property for retrieving the OCEAN token object (a Datatoken class).\
-It is called on an Ocean object and returns the OCEAN Datatoken.
-
-**Returns**
-
-`DatatokenBase`
-
-The OCEAN token as a `DatatokenBase` object.
-
-**Defined in**
-
-[ocean/ocean.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean.py#LL105C1-L113C32)
-
- -Source code - -```python - @property - @enforce_types - def OCEAN_token(self) -> DatatokenBase: - return DatatokenBase.get_typed(self.config, self.OCEAN_address) - - @property - @enforce_types - def OCEAN(self): # alias for OCEAN_token - return self.OCEAN_token -``` - -
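A short usage sketch (assuming a wallet object `alice`, set up as in the remote-setup flow earlier in this document):

```python
OCEAN = ocean.OCEAN_token        # or the alias: ocean.OCEAN
print(OCEAN.balanceOf(alice))    # balance, in wei
```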
-
-### Data NFT Factory
-
-* **data\_nft\_factory**(`self`) -> `DataNFTFactoryContract`
-
-It is a property for getting the `Data NFT Factory` object for the singleton smart contract.\
-It is called on an Ocean object and returns the `DataNFTFactoryContract` instance.
-
-**Returns**
-
-`DataNFTFactoryContract`
-
-Data NFT Factory contract object, which provides access to all the smart contract functionality in Python.
-
-**Defined in**
-
-[ocean/ocean.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean.py#LL117C1-L120C80)
-
- -Source code - -{% code overflow="wrap" %} -```python -@property - @enforce_types - def data_nft_factory(self) -> DataNFTFactoryContract: - return DataNFTFactoryContract(self.config, self._addr("ERC721Factory")) -``` -{% endcode %} - -
-
-### Dispenser
-
-* **dispenser**(`self`) -> `Dispenser`
-
-The `Dispenser` is a faucet for free data.\
-It is a property for getting the `Dispenser` object for the singleton smart contract.\
-It is called on an Ocean object and returns the `Dispenser` instance.
-
-**Returns**
-
-`Dispenser`
-
-Dispenser contract object, which provides access to all the smart contract functionality in Python.
-
-**Defined in**
-
-[ocean/ocean.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean.py#LL122C1-L125C63)
-
- -Source code - -```python - @property - @enforce_types - def dispenser(self) -> Dispenser: - return Dispenser(self.config, self._addr("Dispenser")) -``` - -
-
-### Fixed Rate Exchange
-
-* **fixed\_rate\_exchange**(`self`) -> `FixedRateExchange`
-
-The fixed-rate exchange is used for priced data.\
-It is a property for getting the `FixedRateExchange` object for the singleton smart contract.\
-It is called on an Ocean object and returns the `FixedRateExchange` instance.
-
-**Returns**
-
-`FixedRateExchange`
-
-Fixed Rate Exchange contract object, which provides access to all the smart contract functionality in Python.
-
-**Defined in**
-
-[ocean/ocean.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean.py#LL127C1-L130C72)
-
- -Source code - -```python - @property - @enforce_types - def fixed_rate_exchange(self) -> FixedRateExchange: - return FixedRateExchange(self.config, self._addr("FixedPrice")) -``` - -
-
-### NFT Token Getter
-
-* **get\_nft\_token**(`self`, `token_address: str`) -> `DataNFT`
-
-It is a getter for a specific data NFT object based on its checksummed address.\
-It is called on an Ocean object and returns the `DataNFT` instance for the string `token_address` given as a parameter.
-
-**Parameters**
-
-* `token_address` - the checksummed address of the data NFT you are searching for, as a string.
-
-**Returns**
-
-`DataNFT`
-
-Data NFT object, which provides access to all the functionality available for the ERC721 template in Python.
-
-**Defined in**
-
-[ocean/ocean.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean.py#LL139C5-L145C51)
-
- -Source code - -```python - @enforce_types - def get_nft_token(self, token_address: str) -> DataNFT: - """ - :param token_address: Token contract address, str - :return: `DataNFT` instance - """ - return DataNFT(self.config, token_address) -``` - -
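A usage sketch. The address below is borrowed from the DID example elsewhere in this document and is illustrative; it must be a checksummed data NFT address:

```python
data_nft = ocean.get_nft_token("0xa331155197F70e5e1EA0CC2A1f9ddB1D49A9C1De")
print(data_nft.symbol())
```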
-
-### Datatoken Getter
-
-* **get\_datatoken**(`self`, `token_address: str`) -> `DatatokenBase`
-
-It is a getter for a specific `datatoken` object based on its checksummed address.\
-It is called on an Ocean object with a string `token_address` parameter and returns a `DatatokenBase` instance, depending on the datatoken's template index.
-
-**Parameters**
-
-* `token_address` - the checksummed address of the datatoken you are searching for, as a string.
-
-**Returns**
-
-`DatatokenBase`
-
-Datatoken object, which provides access to all the functionality available for the ERC20 templates in Python.
-
-**Defined in**
-
-[ocean/ocean.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean.py#LL147C5-L153C67)
-
- -Source code - -```python -@enforce_types - def get_datatoken(self, token_address: str) -> DatatokenBase: - """ - :param token_address: Token contract address, str - :return: `Datatoken1` or `Datatoken2` instance - """ - return DatatokenBase.get_typed(self.config, token_address) - -``` - -
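A usage sketch (the address placeholder is illustrative; the returned object is a `Datatoken1` or `Datatoken2`):

```python
datatoken = ocean.get_datatoken("0x...")  # checksummed ERC20 datatoken address
print(datatoken.symbol())
```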
-
-### User Orders Getter
-
-* **get\_user\_orders**(`self`, `address: str`, `datatoken: str`) -> `List[AttributeDict]`
-
-Returns the list of orders placed by a given user on a specific datatoken.
-
-It is called on an Ocean object.
-
-**Parameters**
-
-* `address` - the wallet address of the user
-* `datatoken` - the datatoken address
-
-**Returns**
-
-`List[AttributeDict]`
-
-List of all the orders placed on that `datatoken` by the specified user.
-
-**Defined in**
-
-[ocean/ocean.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean.py#LL157C5-L173C23)
-
- -Source code - -{% code overflow="wrap" %} -```python - @enforce_types - def get_user_orders(self, address: str, datatoken: str) -> List[AttributeDict]: - """ - :return: List of orders `[Order]` - """ - dt = DatatokenBase.get_typed(self.config_dict, datatoken) - _orders = [] - for log in dt.get_start_order_logs(address): - a = dict(log.args.items()) - a["amount"] = int(log.args.amount) - a["address"] = log.address - a["transactionHash"] = log.transactionHash - a = AttributeDict(a.items()) - - _orders.append(a) - - return _orders -``` -{% endcode %} - - - -
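A usage sketch (assuming `alice` and a `datatoken` object as above; each returned item is an `AttributeDict` with the fields shown in the source code):

```python
orders = ocean.get_user_orders(alice.address, datatoken.address)
for order in orders:
    print(order.amount, order.transactionHash)
```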
-
-### Provider Fees
-
-* **retrieve\_provider\_fees**( `self`, `ddo: DDO`, `access_service: Service`, `publisher_wallet` ) -> `dict`
-
-Calls the Provider to compute the provider fees, as a dictionary, for an access service.
-
-**Parameters**
-
-* `ddo` - the DDO object of the data asset
-* `access_service` - Service instance for the service that needs the provider fees
-* `publisher_wallet` - Wallet instance of the user that wants to retrieve the provider fees
-
-**Returns**
-
-`dict`
-
-A dictionary which contains the following keys: `providerFeeAddress`, `providerFeeToken`, `providerFeeAmount`, `providerData`, `v`, `r`, `s`, `validUntil`.
-
-**Defined in**
-
-[ocean/ocean.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean.py#LL177C4-L189C1)
-
- -Source code - -{% code overflow="wrap" %} -```python - @enforce_types - def retrieve_provider_fees( - self, ddo: DDO, access_service: Service, publisher_wallet - ) -> dict: - - initialize_response = DataServiceProvider.initialize( - ddo.did, access_service, consumer_address=publisher_wallet.address - ) - initialize_data = initialize_response.json() - provider_fees = initialize_data["providerFee"] - - return provider_fees -``` -{% endcode %} - -
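A usage sketch, assuming a published `ddo` whose first entry in `ddo.services` is the access service (an assumption for illustration):

```python
provider_fees = ocean.retrieve_provider_fees(ddo, ddo.services[0], alice)
print(provider_fees["providerFeeAmount"], provider_fees["validUntil"])
```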
-
-### Compute Provider Fees
-
-* **retrieve\_provider\_fees\_for\_compute**(`self`, `datasets: List[ComputeInput]`, `algorithm_data: Union[ComputeInput, AlgorithmMetadata]`, `consumer_address: str`, `compute_environment: str`, `valid_until: int`) -> `dict`
-
-Calls the Provider to generate the provider fees, as a dictionary, for a compute service.
-
-**Parameters**
-
-* `datasets` - list of `ComputeInput` which contains the data assets
-* `algorithm_data` - the necessary data for the algorithm; it can be either a `ComputeInput` object or just the algorithm metadata (`AlgorithmMetadata`)
-* `consumer_address` - address of the compute consumer wallet which is requesting the provider fees
-* `compute_environment` - id provided from the compute environment, as a `string`
-* `valid_until` - timestamp in UNIX milliseconds for the duration of the provider fees for the compute service.
-
-**Returns**
-
-`dict`
-
-A dictionary which contains the following keys: `providerFeeAddress`, `providerFeeToken`, `providerFeeAmount`, `providerData`, `v`, `r`, `s`, `validUntil`.
-
-**Defined in**
-
-[ocean/ocean.py](https://github.com/oceanprotocol/ocean.py/blob/main/ocean_lib/ocean/ocean.py#LL190C4-L210C1)
-
- -Source code - -{% code overflow="wrap" %} -```python -@enforce_types - def retrieve_provider_fees_for_compute( - self, - datasets: List[ComputeInput], - algorithm_data: Union[ComputeInput, AlgorithmMetadata], - consumer_address: str, - compute_environment: str, - valid_until: int, - ) -> dict: - - initialize_compute_response = DataServiceProvider.initialize_compute( - [x.as_dictionary() for x in datasets], - algorithm_data.as_dictionary(), - datasets[0].service.service_endpoint, - consumer_address, - compute_environment, - valid_until, - ) - - return initialize_compute_response.json() -``` -{% endcode %} - -
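A usage sketch; all the input objects are assumed to already exist (see the compute flow docs for constructing `ComputeInput` objects), and the timestamp is illustrative:

```python
fees = ocean.retrieve_provider_fees_for_compute(
    datasets=[dataset_input],        # List[ComputeInput] for the data assets
    algorithm_data=algo_input,       # ComputeInput or AlgorithmMetadata
    consumer_address=bob.address,
    compute_environment=env_id,      # environment id string
    valid_until=1735689600,          # illustrative UNIX timestamp
)
print(fees)
```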
diff --git a/data-scientists/sponsor-a-data-challenge.md b/data-scientists/sponsor-a-data-challenge.md deleted file mode 100644 index e44f14573..000000000 --- a/data-scientists/sponsor-a-data-challenge.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -description: Sponsor a data challenge to crowdsource solutions for your business problems ---- - -# Sponsor a Data Challenge - -

_Figure: Make the game, set the rules._

-
-Hosting a data challenge is a fun way to engage data scientists and machine learning experts around the world to **solve your real business problems**. Incentivize participants to **build products using your data**, **explain insights in your data**, or **provide useful data predictions** for your business. Plus, it's a whole lot cheaper than hiring an in-house data science team!
-
-### How to sponsor an Ocean Protocol data challenge?
-
-1. Establish the business problem you want to solve. The first step in building a data solution is understanding what you want to solve. For example, you may want to be able to predict the drought risk in an area to help price parametric insurance, or predict the price of ETH to optimize Uniswap LPing.
-2. Curate the dataset(s) that participants will use for the challenge. The key to hosting a good data challenge is to provide an exciting and thorough dataset that participants can use to build their solutions. Do your research to understand what data is available, whether it is free from an API, available for download, requires any transformations, etc. For the first challenge, it is alright if the created dataset is a static file. However, it is best to ensure there is a path to making the data available from a dynamic endpoint so that entries can eventually be applied to current, real-world use cases.
-3. Decide how the judging process will occur. This includes how long to make the review period, how to score submissions, and how to decide how any prizes will be divided among participants.
-4. Work with Ocean Protocol to gather participants for your data challenge. Creating blog posts and hosting Twitter Spaces is a good way to spread the word about your data challenge.
-5. To submit your application, kindly visit [this form](https://docs.google.com/forms/d/e/1FAIpQLSdBcTJepav-6k5PmIGwX5e4gpQgGb_82UxzwvCBhilVc59bXQ/viewform), and for more information, head over to this [page](https://oceanprotocol.com/earn/data-challenges/).
diff --git a/data-scientists/the-data-value-creation-loop.md b/data-scientists/the-data-value-creation-loop.md
deleted file mode 100644
index 172dcfe1d..000000000
--- a/data-scientists/the-data-value-creation-loop.md
+++ /dev/null
@@ -1,87 +0,0 @@
----
-description: Thrive in the open data economy by closing the loop towards speed and value
----
-
-# The Data Value-Creation Loop
-
-
-### Motivation
-
-The core infrastructure is in place for an open data economy. Dozens of teams are building on it. But it's not 100% obvious for teams how to make $.
-
-We ask:
-
-> How do people sustain and thrive in the emerging open data economy?
-
-**Our answer is simple: ensure that they can make money!**
-
-However, this isn't enough. We need to dive deeper.
-
-### The Data Value-Creation Loop
-
-The next question is:
-
-> How do people make money in the open data economy?
-
-**Our answer is: create value from data, make money from that value, and loop back and reinvest this value creation into further growth.**
-
-**We call this the Data Value-Creation Loop.** The figure above illustrates it.
-
-Let's go through the steps of the loop.
-
-- At the top, the user gets data by buying it or spending $ to create it.
-- Then, they build an AI model from the data.
-- Then they make predictions. E.g. "ETH will rise in the next 5 minutes".
-- Then, they choose actions. E.g. "buy ETH".
-- In executing these actions, the data scientist (or org) will make $ on average.
-- The $ earned is put back into buying more data, and other activities. And the loop repeats.
-
-In this loop, dapp builders can help their users make money; data scientists can earn directly; and crypto enthusiasts can catalyze the first two if incentivized properly (e.g. to curate valuable data).
-
-### The Data Value Supply Chain
-
-**If we unroll the loop, we get a data value supply chain.** In most supply chains, the most value creation is at the last step, right before the action is taken. Would you rather be a farmer in Costa Rica selling a sack of coffee beans for $5, or Starbucks selling five beans' worth of coffee for $5?
-
-Therefore, **for data value supply chains, the most value creation is in the prediction step.**
-
-To the question "How do people make money in the open data economy?", the answer "create value from data!" may almost seem like a truism. Don't fool yourself. It's highly useful in practice: **focus only on activities that fully go through the data value-creation loop.**
-
-However, this is still too open-ended. We need to dive deeper.
-
-### Which Vertical? How To Compare Opportunities
-
-There are perhaps dozens of verticals or hundreds of possible opportunities for creating and closing data value-creation loops. How to select which? We've found that two measuring sticks help the most.
-
-**Key criteria:**
-
-1. **How quickly can one go through the data value-creation loop?**
-2. **What's the $ size of the opportunity?**
-
-For (2), it's not just "what's the size of the market", it's also "can the product make an impact in the market and capture enough value to be meaningful".
-
-We analyzed dozens of possible verticals according to these criteria. For any given data application, the loop should be fast with serious $ opportunity.
-
-Here are some examples.
-
-- **Small $, slow**. Traditional music is small $ and slow, because incumbents like Universal dominate by controlling the back catalogue.
-- **Large $, slow**. Medicine is large $ but slow, due to the approval process.
-- **Small $, fast**. Decentralized music is fast but small $ (for now! Fingers crossed).
-
-**We want: large $, fast.** Here are the standouts.
-
-- **Decentralized Finance (DeFi)** is a great fit. One can loop at the speed of blocks (or faster), and trade volumes have serious $.
-- **LLMs and modern AI** is close: one can loop quickly, and with the right application make $. The challenge is: what's the right application?
-
-### Project Criteria
-
-We encourage you - as a builder - to choose projects that close data value-creation loops, especially loops with maximum $ and speed.
-
-We follow our own advice for internal projects too. Predictoor, Data Farming, and DeFi-oriented data challenges are standout examples.
-
-### Summary
-
-To sustain and thrive in the open data economy: make money!
-
-Do this by closing the data value-creation loop, in a vertical / opportunity where you can loop quickly and the $ opportunity is large.
-
diff --git a/developers/README.md b/developers/README.md
index 1ba06785e..b26d303de 100644
--- a/developers/README.md
+++ b/developers/README.md
@@ -1,42 +1,22 @@
---
-cover: ../.gitbook/assets/cover/developer_banner.png
+description: >-
+  This chapter describes the technical details of the Ocean Enterprise
+  stack.
+cover: ../.gitbook/assets/Technical2.png
 coverY: 0
---
-# 💻 Developers
+# Technical Architecture
+
+
-## What can you build with Ocean?
-1. **Token-gated dApps & REST APIs**: monetize by making your dApp or its REST API token-gated. [Here's how](https://github.com/oceanprotocol/token-gating-template).
-2. **AI dApps**: monetize your AI dApp by token-gating on AI training data, feature vectors, models, or predictions.
-3. **Data Markets**: build a decentralized data market. [Here's how](https://github.com/oceanprotocol/market)
-4. **Private user profile data**: storing user profile data on your centralized server exposes you to liability. Instead, have it on-chain encrypted by the user's wallet, and just-in-time decrypt for the app. [Video](https://www.youtube.com/watch?v=xTfI8spLq1k\&ab\_channel=ParticleNetwork), [slides](https://docs.google.com/presentation/d/1\_lkDVUkA0Rx1R7RpkaSeLkX3PeOBoMQyRhvxjwTvd6A/edit?usp=sharing).
-Example live dapps:
-* **Data Markets**: [Acentrik Market](https://market.acentrik.io/) for enterprises, and [Ocean Market](https://market.oceanprotocol.com) for general use.
-* **Token-gated dapps**: [Autobot](https://autobotocean.com/) for analytics, and [Ocean Waves](https://waves.oceanprotocol.com/) for music.
-* **Token-gated feeds**: [Ocean Predictoor](https://predictoor.ai) for AI prediction feeds.
-## How do developers start using Ocean?
-* **App level:** [**Use an Ocean Template**](https://oceanprotocol.com/templates).
-* **Library level:** [**Use ocean.js**](ocean.js), a library built for the key environment of dApp developers: JavaScript. Import it & use it in your frontend or NodeJS.
-* **Contract level:** [**Call Ocean contracts**](contracts/) on Eth mainnet [or other chains](../discover/networks/).
-## Developer Docs Quick-links -* [Architecture](architecture.md) - blockchain/contracts layer, middleware, dapps -* Earning revenue: [code to get payment](contracts/revenue.md), [fractional $](fractional-ownership.md), [community $](community-monetization.md) -* Schemas: [Metadata](metadata.md), [identifiers/DIDs](identifiers.md), [identifier objects/DDOs](ddo-specification.md), [storage](storage.md), [fine-grained permissions](fg-permissions.md) -* Components: - * [Barge](barge/) - local chain for testing - * [Ocean subgraph](old-infrastructure/subgraph/) - grabbing event data from the chain - * [Ocean CLI](ocean-cli/) - command-line interface - * [Compute-to-data](compute-to-data/) - practical privacy approach - * [Aquarius](old-infrastructure/aquarius/) - metadata cache - * [Provider](old-infrastructure/provider/) - handshaking for access control -* [FAQ](dev-faq.md) *** diff --git a/developers/architecture-1.md b/developers/architecture-1.md new file mode 100644 index 000000000..8d796ce41 --- /dev/null +++ b/developers/architecture-1.md @@ -0,0 +1,55 @@ +# Dataspace Configuration Options + +Depending on the required level of protection for the assets published in an OE-enabled dataspace, the OE stack can be configured either with or without **SSI-based access control**. Each configuration requires a distinct set of software components, which are detailed in this chapter. + +**Note:** For details on SSI-based access control, refer to [Managing access to assets](fg-permissions.md). + +## OE-enabled dataspace with SSI-based access control enabled (SSI on) + +The configuration of an OE-enabled dataspace with SSI on is presented in the diagram below. + +
+
+Participants interact with the dataspace through the **Marketplace**'s user interface, which enables them to manage their own assets and access assets shared by others. The **OE Node** serves as the core component of the system, supporting the secure publication, retrieval, and consumption of assets.
+
+### **Logging in to the Marketplace**
+
+To publish or consume assets, a user must first log in to the marketplace. Logging in requires establishing a connection to the marketplace server using both the **Web3 wallet** and the **SSI wallet**. The Web3 wallet stores the participant's Web3 private key, while the SSI wallet manages the participant's DID and associated Verifiable Credentials.
+
+### Publishing an asset
+
+When an asset is published, a corresponding NFT is created on the **Blockchain**. Then, the asset description (DDO) is encrypted by the OE Node, saved in **IPFS**, and the ID of the IPFS content is saved on-chain. The asset is then indexed by the OE Node and becomes available for consumption through the Marketplace.
+
+### Controlling access to assets
+
+In this configuration, the OE Node delegates asset access control to the **Policy Server**. When a participant attempts to consume the service of an asset, the OE Node forwards the request to the Policy Server, including the access control rules defined at both the asset and service levels and the participant's web3 address. Using this information, the Policy Server evaluates the request and determines whether the participant is authorized to access the service, should be denied, or - if SSI-based access policies apply - must complete additional verification.
+
+If additional verification is required, the Policy Server forwards the request to the Verifier component, which initiates an OIDC presentation session. During this session, the Verifier and the SSI wallet exchange several messages to determine which Verifiable Credentials must be presented. These messages flow between the Verifier and the SSI wallet through the Policy Server, the OE Node, and the **Policy Server Proxy**.
+
+Once this exchange is complete, the participant sees, in the Marketplace UI, a list of Verifiable Credentials that satisfy the presentation requirements. The participant selects the credentials to submit, and the SSI wallet packages them into a Verifiable Presentation, which is then sent to the Verifier.
+
+The Verifier evaluates the submitted credentials against the rules defined for the asset. If custom rules are present, the Verifier consults the **OPA (Open Policy Agent) Server**. Optionally, it may rely on an external **Credential Verification Service** to determine whether the credentials meet the verification criteria.
+
+Finally, the Verifier returns an allow/deny decision to the Policy Server, which relays the result back to the participant. Access to the service is granted or denied based on this outcome.
+
+## OE-enabled dataspace with SSI-based access control disabled (SSI off)
+
+The configuration of an OE-enabled dataspace with SSI off is presented in the diagram below.
+
+
+With SSI-based access control disabled, asset access decisions rely solely on web3 addresses. In this setup, the OE Node handles access verification internally. As a result, the dataspace architecture remains straightforward, requiring only the **Marketplace** and **OE Node** components.
+
+### **Logging in to the Marketplace**
+
+To publish or consume assets, a user must first log in to the marketplace. Logging in requires establishing a connection to the marketplace server using the **Web3 wallet**.
+
+### Publishing an asset
+
+When an asset is published, a corresponding NFT is created on the **Blockchain**. Then, the asset description (DDO) is encrypted by the OE Node, saved in **IPFS**, and the ID of the IPFS content is saved on-chain. The asset is then indexed by the OE Node and becomes available for consumption through the Marketplace.
+
+### Controlling access to assets
+
+When a participant attempts to consume the service of an asset, the OE Node verifies the participant's web3 address against the allow and deny rules defined for web3 addresses, at both the asset and the service levels. The deny list takes precedence. Access to the service is granted or denied based on this outcome.
diff --git a/developers/architecture.md b/developers/architecture.md
index 955014769..7171ba46c 100644
--- a/developers/architecture.md
+++ b/developers/architecture.md
@@ -1,65 +1,48 @@
---
-description: Ocean Protocol Architecture Adventure!
+description: This page describes the architecture of an Ocean Enterprise system
---
-# Architecture Overview
+# High-Level Architecture
-Embark on an exploration of the innovative realm of Ocean Protocol, where data flows seamlessly and AI achieves new heights. Dive into the intricately layered architecture that converges data and services, fostering a harmonious collaboration. Let us delve deep and uncover the profound design of Ocean Protocol.🐬
+Ocean Enterprise has a multi-layer architecture, as presented in the following diagram.

_Figure: Overview of the Ocean Protocol Architecture_

+
-### Layer 1: The Foundational Blockchain Layer
-At the core of Ocean Protocol lies the robust [Blockchain Layer](contracts/). Powered by blockchain technology, this layer ensures secure and transparent transactions. It forms the bedrock of decentralized trust, where data providers and consumers come together to trade valuable assets.
-The [smart contracts](contracts/) are deployed on the Ethereum mainnet and other compatible [networks](../discover/networks/). The libraries encapsulate the calls to these smart contracts and provide features like publishing new assets, facilitating consumption, managing pricing, and much more. To explore the contracts in more depth, go ahead to the [contracts](contracts/) section.
+**Data Storage Layer:** handles the saving and retrieval of the data managed by the OE stack. The following types of storage are used:
-### Layer 2: The Empowering Middle Layer
+* _IPFS_ for storing the asset description. When an asset is created in OE, its description, including all metadata attached to the asset, is saved in IPFS.
+* _Blockchain_ for storing the reference to the asset description. After the description of the asset is created in IPFS, the reference to the IPFS object is saved in a transaction on the blockchain.
+* _Web storage_ for storing the actual content of the assets. For assets of type dataset or algorithm, their content is saved on a storage platform accessible via the HTTP protocol. The storage platform can be either on the publisher's premises or in the cloud.
-Above the smart contracts, you'll find essential [libraries](architecture.md#libraries) employed by applications within the Ocean Protocol ecosystem, the [middleware components](architecture.md#middleware-components), and [Compute-to-Data](architecture.md#compute-to-data).
-#### Libraries
-These libraries include [Ocean.js](ocean.js), a JavaScript library, and [Ocean.py](../data-scientists/ocean.py), a Python library. They serve as powerful tools for developers, enabling integration and interaction with the protocol.
+**Business Logic Layer:** Ocean Enterprise enables a decentralized exchange between a publisher and a consumer: data on one side, value on the other. The core element that enables this exchange is the set of Smart Contracts deployed on-chain. To this end, the OE Smart Contracts implement the flows for publishing and consuming assets. This layer includes factory contracts, templates for data NFTs and data tokens, fixed-rate exchange contracts for paid assets, and Dispenser contracts for free assets.
-1. [Ocean.js](ocean.js): Ocean.js is a JavaScript library that serves as a powerful tool for developers looking to integrate their applications with the Ocean Protocol ecosystem. Designed to facilitate interaction with the protocol, Ocean.js provides a comprehensive set of functionalities, including data tokenization, asset management, and smart contract interaction. Ocean.js simplifies the process of implementing data access controls, building dApps, and exploring data sets within a decentralized environment.
-2. [Ocean.py](../data-scientists/ocean.py): Ocean.py is a Python library that empowers developers to integrate their applications with the Ocean Protocol ecosystem. With its rich set of functionalities, Ocean.py provides a comprehensive toolkit for interacting with the protocol.
Developers and [data scientists](../data-scientists/) can leverage Ocean.py to perform a wide range of tasks, including data tokenization, asset management, and smart contract interactions. This library serves as a bridge between Python and the decentralized world of Ocean Protocol, enabling you to harness the power of decentralized data.
-#### Ocean Nodes
-Ocean Node is a single component which runs all core middleware services within the Ocean stack. It replaces the roles of Aquarius, Provider and the Subgraph. It integrates the Indexer for metadata management and the Provider for secure data access. It ensures efficient and reliable interactions within the Ocean Protocol network.
+**Services Layer:** Represented by the Ocean Node component, this layer acts as a bridge between the Business Logic Layer and the SDK, orchestrating how requests are processed and how business logic is executed. It provides the following services:
-Ocean Node handles network communication through libp2p, supports secure data handling, and enables flexible compute-to-data operations.
+* orchestration of the entire consumption flows (download and Compute-to-Data)
+* validation of requests
+* data encryption/decryption
+* data streaming of the purchased asset to the consumer
+* indexing mechanisms for assets
+* abstraction of the complexity of the business and data layers, exposing clean APIs
-The functions of Ocean Node include:
-* It is crucial in handling asset downloads: it streams the purchased data directly to the buyer.
-* It conducts the permission and access checks during the consume flow.
-* The Node handles [new DDO structure](https://docs.oceanprotocol.com/developers/new-ddo-specification) (Decentralized Data Object) encryption, but it offers support for the [existing DDO format](https://docs.oceanprotocol.com/developers/ddo-specification).
-* It establishes communication with the operator-service for initiating Compute-to-Data jobs.
-* It provides a metadata cache, enhancing search efficiency by caching on-chain data into a Typesense database. This enables faster and more efficient data discovery.
-* It supports multiple chains.
-#### Old components
+**SDK:** The ocean.js library encapsulates the Smart Contract functions for creating assets, as well as the services provided by Ocean Node, in JavaScript functions which can be used to develop OE-enabled business applications.
-Previously, Ocean used the following middleware components:
-1. [Aquarius](old-infrastructure/aquarius/)
-2. [Provider](old-infrastructure/provider/)
-3. [Subgraph](old-infrastructure/subgraph/)
-#### Compute-to-Data
+**User Interface:** This facilitates the interaction between an end-user and the system. OE provides two interfaces:
-[Compute-to-Data](compute-to-data/) (C2D) represents a groundbreaking paradigm within the Ocean Protocol ecosystem, revolutionizing the way data is processed and analyzed. With C2D, the traditional approach of moving data to the computation is inverted, ensuring privacy and security. Instead, algorithms are securely transported to the data sources, enabling computation to be performed locally, without the need to expose sensitive data. This innovative framework facilitates collaborative data analysis while preserving data privacy, making it ideal for scenarios where data owners want to retain control over their valuable assets.
C2D provides a powerful tool for enabling secure and privacy-preserving data analysis and encourages collaboration among data providers, ensuring the utilization of valuable data resources while maintaining strict privacy protocols.
+* _Ocean Enterprise Marketplace_: a graphical interface where users can publish, retrieve, and consume assets in a very user-friendly manner. The user interface controls how data is displayed and how it responds to user actions. It also performs input validations before passing data to deeper levels.
-### Layer 3: The Accessible Application Layer
-Here, the ocean comes alive with a vibrant ecosystem of dApps, marketplaces, and more. This layer hosts a variety of user-friendly interfaces, applications, and tools, inviting data scientists and curious explorers alike to access, explore, and contribute to the ocean's treasures.
+* _Command Line Interface (CLI)_: a set of high-level tools that enable the creation and consumption of OE assets from a command line interface. This is appropriate when OE assets need to be manipulated in back-end-like applications, where a user interface is not required.
-Prominently featured within this layer is [Ocean Market](https://market.oceanprotocol.com), a hub where data enthusiasts and industry stakeholders converge to discover, trade, and unlock the inherent value of data assets. Beyond Ocean Market, the Application Layer hosts a diverse ecosystem of specialized applications and marketplaces, each catering to unique use cases and industries. Empowered by the capabilities of Ocean Protocol, these applications facilitate advanced data exploration, analytics, and collaborative ventures, revolutionizing the way data is accessed, shared, and monetized.
+**Access Control Layer:** A critical component that governs who can interact with specific resources in OE and under what conditions. It acts as a gatekeeper, enforcing policies that determine user permissions based on identity or other descriptive attributes. This layer controls, for instance, who is allowed to publish assets, who is allowed to consume a specific asset, or what algorithm is allowed to be executed on top of a specific dataset.
-### Layer 4: The Friendly Wallets
-
-At the top of the Ocean Protocol ecosystem, we find the esteemed [Web 3 Wallets](../user-guides/wallets/), the gateway for users to immerse themselves in the world of decentralized data transactions. These wallets serve as trusted companions, enabling users to seamlessly transact within the ecosystem, purchase and sell data NFTs, and acquire valuable datatokens. For a more detailed exploration of Web 3 Wallets and their capabilities, you can refer to the [wallet intro page](../user-guides/wallets/).
-
-With the layers of the architecture clearly delineated, the stage is set for a comprehensive exploration of their underlying logic and intricate design. By examining each individually, we can gain a deeper understanding of their unique characteristics and functionalities.
diff --git a/developers/assets-and-services/README.md b/developers/assets-and-services/README.md
new file mode 100644
index 000000000..bfda031f0
--- /dev/null
+++ b/developers/assets-and-services/README.md
@@ -0,0 +1,2 @@
+# Assets and Services
+
diff --git a/developers/identifiers.md b/developers/assets-and-services/identifiers.md
similarity index 86%
rename from developers/identifiers.md
rename to developers/assets-and-services/identifiers.md
index 79001ec77..583a19327 100644
--- a/developers/identifiers.md
+++ b/developers/assets-and-services/identifiers.md
@@ -10,10 +10,6 @@ description: >-
 
 In Ocean, we use decentralized identifiers (DIDs) to identify your asset within the network. Decentralized identifiers (DIDs) are a type of identifier that enables verifiable, decentralized digital identity. In contrast to typical, centralized identifiers, DIDs have been designed so that they may be decoupled from centralized registries, identity providers, and certificate authorities. Specifically, while other parties might be used to help enable the discovery of information related to a DID, the design enables the controller of a DID to prove control over it without requiring permission from any other party. DIDs are URIs that associate a DID subject with a DID document, allowing trustable interactions associated with that subject.
 
-{% embed url="https://www.youtube.com/watch?t=95s&v=I06AUNt7ee8" %}
-What is a DID and DDO?
-{% endembed %}
-
 ### Examples
 
 DIDs in Ocean follow [the generic DID scheme](https://w3c-ccg.github.io/did-spec/#the-generic-did-scheme) and look like this:
 
@@ -24,7 +20,7 @@
 
 did:op:0ebed8226ada17fde24b6bf2b95d27f8f05fcce09139ff5cec31f6d81a7cd2ea
 
 The part after `did:op:` is the ERC721 contract address (in checksum format) and the chainId (expressed to 10 decimal places). The following JavaScript example shows how to calculate the DID for the asset:
 
-```runkit nodeVersion="18.x.x"
+```runkit
 const CryptoJS = require('crypto-js')
 
 const dataNftAddress = '0xa331155197F70e5e1EA0CC2A1f9ddB1D49A9C1De'
 const chainId = 1
 const checksum = CryptoJS.SHA256(CryptoJS.enc.Hex.parse(dataNftAddress + chainId))
 const did = 'did:op:' + checksum
 console.log(did)
 ```
 
 Before creating a DID, you should first publish a data NFT. We suggest reading the following sections so you are familiar with the process:
 
-* [Creating a data NFT with ocean.js](ocean.js/creating-datanft.md)
-* [Publish flow with ocean.py](../data-scientists/ocean.py/publish-flow.md)
+* [Creating a data NFT with ocean.js](/broken/pages/5hloqSKuDCMkPEGOgqSn)
+* [Publish flow with ocean.py](../../data-scientists/ocean.py/publish-flow.md)
diff --git a/developers/metadata.md b/developers/assets-and-services/metadata.md
similarity index 68%
rename from developers/metadata.md
rename to developers/assets-and-services/metadata.md
index 8afd369a2..9afe71abf 100644
--- a/developers/metadata.md
+++ b/developers/assets-and-services/metadata.md
@@ -2,11 +2,9 @@
 description: How can you enhance data discovery?
---
-# Metadata
+# Metadata - to be updated
-Metadata plays a **crucial role** in asset **discovery**, providing essential information such as **asset type, name, creation date, and licensing details**. Each data asset can have a [decentralized identifier (DID)](identifiers.md) that resolves to a DID document ([DDO](ddo-specification.md)) containing associated metadata. The DDO is essentially a collection of fields in a [JSON](https://www.json.org/) object. To understand working with Ocean DIDs, you can refer to the [DID documentation](identifiers.md).
For a more comprehensive understanding of metadata structure, the [DDO Specification](ddo-specification.md) documentation provides in-depth information. - -

_Figure: Data discovery_

+Metadata plays a **crucial role** in asset **discovery**, providing essential information such as **asset type, name, creation date, and licensing details**. Each data asset can have a [decentralized identifier (DID)](identifiers.md) that resolves to a DID document ([DDO](/broken/pages/dd4e368QBuvVe0L8VV4B)) containing associated metadata. The DDO is essentially a collection of fields in a [JSON](https://www.json.org/) object. To understand working with OCEAN DIDs, you can refer to the [DID documentation](identifiers.md). For a more comprehensive understanding of metadata structure, the [DDO Specification](/broken/pages/dd4e368QBuvVe0L8VV4B) documentation provides in-depth information. In general, any dApp within the Ocean ecosystem is required to store metadata for every listed dataset. The metadata is useful to determine which datasets are the most relevant. @@ -38,25 +36,25 @@ Decentralized identifiers (DIDs) are a type of identifier that enable verifiable An _asset_ in Ocean represents a downloadable file, compute service, or similar. Each asset is a _resource_ under the control of a _publisher_. The Ocean network itself does _not_ store the actual resource (e.g. files). -An _asset_ has a DID and DDO. The DDO should include metadata about the asset, and define access in at least one [service](ddo-specification.md#services). Only _owners_ or _delegated users_ can modify the DDO. +An _asset_ has a DID and DDO. The DDO should include metadata about the asset, and define access in at least one [service](/broken/pages/dd4e368QBuvVe0L8VV4B#services). Only _owners_ or _delegated users_ can modify the DDO. -All DDOs are stored on-chain in encrypted form to be fully GDPR-compatible. A metadata cache like [_Aquarius_](old-infrastructure/aquarius/) can help in reading, decrypting, and searching through encrypted DDO data from the chain. Because the file URLs are encrypted on top of the full DDO encryption, returning unencrypted DDOs e.g. via an API is safe to do as the file URLs will still stay encrypted. +All DDOs are stored on-chain in encrypted form to be fully GDPR-compatible. A metadata cache like [_Aquarius_](../old-infrastructure/aquarius/) can help in reading, decrypting, and searching through encrypted DDO data from the chain. Because the file URLs are encrypted on top of the full DDO encryption, returning unencrypted DDOs e.g. via an API is safe to do as the file URLs will still stay encrypted. #### Publishing & Retrieving DDOs -The DDO is stored on-chain as part of the NFT contract and stored in encrypted form using the private key of the [_Provider_](old-infrastructure/provider/). To resolve it, a metadata cache like [_Aquarius_](old-infrastructure/aquarius/) must query the [Provider](old-infrastructure/provider/) to decrypt the DDO. +The DDO is stored on-chain as part of the NFT contract and stored in encrypted form using the private key of the [_Provider_](../old-infrastructure/provider/). To resolve it, a metadata cache like [_Aquarius_](../old-infrastructure/aquarius/) must query the [Provider](../old-infrastructure/provider/) to decrypt the DDO. Here is the flow: -

_Figure: DDO Flow_

To set up the metadata for an asset, you'll need to call the [**setMetaData**](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/templates/ERC721Template.sol#L247) function at the contract level. -* [**\_metaDataState**](ddo-specification.md#state) - Each asset has a state, which is held by the NFT contract. One of the following: active (0), end-of-life (1), deprecated (2), revoked (3), ordering temporarily disabled (4), and asset unlisted (5). +* [**\_metaDataState**](/broken/pages/dd4e368QBuvVe0L8VV4B#state) - Each asset has a state, which is held by the NFT contract. One of the following: active (0), end-of-life (1), deprecated (2), revoked (3), ordering temporarily disabled (4), and asset unlisted (5). * **\_metaDataDecryptorUrl** - You create the DDO and then the Provider encrypts it with its private key. Only that Provider can decrypt it. * **\_metaDataDecryptorAddress** - The decryptor address. * **flags** - Additional information to represent the state of the data. One of two values: 0 - plain text, 1 - compressed, 2 - encrypted. Used by Aquarius. -* **data -** The [DDO](ddo-specification.md) of the asset. You create the DDO as a JSON, send it to the [Provider](old-infrastructure/provider/) that encrypts it, and then you set it up at the contract level. +* **data -** The [DDO](/broken/pages/dd4e368QBuvVe0L8VV4B) of the asset. You create the DDO as a JSON, send it to the [Provider](../old-infrastructure/provider/) that encrypts it, and then you set it up at the contract level. * **\_metaDataHash** - Hash of the clear data **generated before the encryption.** It is used by Provider to check the validity of the data after decryption. * **\_metadataProofs** - Array with signatures of entities who validated data (before the encryption). Pass an empty array if you don't have any. @@ -80,7 +78,7 @@ While we utilize a specific DDO structure, you have the flexibility to customize {% endhint %} {% hint style="info" %} -As developers, we understand that you eat, breathe, and live code. That's why we invite you to explore the [ocean.py](../data-scientists/ocean.py/publish-flow.md#publishing-alternatives) and [ocean.js](ocean.js/update-metadata.md) pages, where you'll find practical examples of how to set up and update metadata for an asset :computer: +As developers, we understand that you eat, breathe, and live code. That's why we invite you to explore the [ocean.py](../../data-scientists/ocean.py/publish-flow.md#publishing-alternatives) and [ocean.js](/broken/pages/lKtgTZPvLHXWCDBtKFPG) pages, where you'll find practical examples of how to set up and update metadata for an asset :computer: {% endhint %} You'll have more information about the DIDs, on the [Identifiers](identifiers.md) page. diff --git a/developers/new-ddo-specification.md b/developers/assets-and-services/new-ddo-specification.md similarity index 91% rename from developers/new-ddo-specification.md rename to developers/assets-and-services/new-ddo-specification.md index 93ede7bbe..7fe4b22e3 100644 --- a/developers/new-ddo-specification.md +++ b/developers/assets-and-services/new-ddo-specification.md @@ -7,9 +7,9 @@ description: >- the DDO standard. --- -# New DDO Specification +# OE DDO Specification - to be updated -## New DDO Schema - High Level +## New DDO Schema - High Level The below diagram shows the high-level DDO schema depicting the content of each data structure and the relations between them. 
@@ -76,7 +76,7 @@ Stats "1" --> "1..*" Price A DDO in Ocean has these required attributes: -
| Attribute | Type | Description |
| --------- | ---- | ----------- |
| `@context` | Array of string | Contexts used for validation. |
| `id` | string | Computed as sha256(address of ERC721 contract + chainId). |
| `version` | string | Version information in SemVer notation referring to this DDO spec version, like `4.1.0`. |
| `chainId` | number | Stores the chainId of the network the DDO was published to. |
| `nftAddress` | string | NFT contract linked to this asset. |
| `metadata` | Metadata | Stores an object describing the asset. |
| `services` | Services | Stores an array of services defining access to the asset. |
| `credentials` | Credentials | Describes the credentials needed to access a dataset in addition to the services definition. |
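For example, the `credentials` field can allow- or deny-list web3 addresses at the asset level, with the deny list taking precedence, as described in the access-control section. A representative snippet (addresses are illustrative):

```json
{
  "credentials": {
    "allow": [{ "type": "address", "values": ["0x123..."] }],
    "deny": [{ "type": "address", "values": ["0x456..."] }]
  }
}
```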
@@ -278,7 +278,7 @@ A DDO in Ocean has these required attributes: This object holds information describing the actual asset. -
| Attribute | Type | Description |
| --------- | ---- | ----------- |
| `created` | ISO date/time string | Contains the date of the creation of the dataset content in ISO 8601 format, preferably with timezone designators, e.g. `2000-10-31T01:30:00Z`. |
| `updated` | ISO date/time string | Contains the date of the last update of the dataset content in ISO 8601 format, preferably with timezone designators, e.g. `2000-10-31T01:30:00Z`. |
| `description`\* | string | Details of what the resource is. For a dataset, this attribute explains what the data represents and what it can be used for. |
| `copyrightHolder` | string | The party holding the legal copyright. Empty by default. |
| `name`\* | string | Descriptive name or title of the asset. |
| `type`\* | string | Asset type. Includes "dataset" (e.g. csv file), "algorithm" (e.g. Python script). Each type needs a different subset of metadata attributes. |
| `author`\* | string | Name of the entity generating this data (e.g. Tfl, Disney Corp, etc.). |
| `license`\* | string | Short name referencing the license of the asset (e.g. Public Domain, CC-0, CC-BY, No License Specified, etc.). If it's not specified, the following value will be added: "No License Specified". |
| `links` | Array of string | Mapping of URL strings for data samples, or links to find out more information. Links may be to either a URL or another asset. |
| `contentLanguage` | string | The language of the content. Use one of the language codes from the IETF BCP 47 standard. |
| `tags` | Array of string | Array of keywords or tags used to describe this content. Empty by default. |
| `categories` | Array of string | Array of categories associated to the asset. Note: it is recommended to use `tags` instead. |
| `additionalInformation` | Object | Stores additional information; this is customizable by the publisher. |
| `algorithm`\*\* | Algorithm Metadata | Information about an asset of type algorithm. |
\* Required

@@ -310,7 +310,7 @@ Services define the access for an asset, and each service is represented by its

An asset should have at least one service to be actually accessible, and can have as many services as make sense for a specific use case.
| Attribute | Type | Description |
| --------- | ---- | ----------- |
| `id`\* | string | Unique ID. |
| `type`\* | string | Type of service: access, compute, wss, etc. |
| `name` | string | Service friendly name. |
| `description` | string | Service description. |
| `datatokenAddress`\* | string | Datatoken. |
| `serviceEndpoint`\* | string | Provider URL (schema + host). |
| `files`\* | Files | Encrypted file. |
| `timeout`\* | number | Describes how long the service can be used after consumption is initiated. A timeout of 0 represents no time limit. Expressed in seconds. |
| `compute`\*\* | Compute | If the service is of type compute, holds information about the compute-related privacy settings & resources. |
| `consumerParameters` | Consumer Parameters | An object that defines required consumer input before consuming the asset. |
| `additionalInformation` | Object | Stores additional information; this is customizable by the publisher. |
\* Required @@ -430,16 +430,13 @@ States details: 5. **Ordering is temporarily disabled**: Assets in this state are still discoverable, but ordering functionality is temporarily disabled. Users can view the asset and gather information, but they cannot place orders at that moment. However, these assets are still listed under the owner's profile. 6. **Asset unlisted**: Assets in the "Asset unlisted" state are not discoverable. However, users can still place orders for these assets, making them accessible. Unlisted assets are listed under the owner's profile, allowing users to view and access them. -### Aquarius Enhanced DDO Response - The following fields are added by _Aquarius_ in its DDO response for convenience reasons, where an asset returned by _Aquarius_ inherits the DDO fields stored on-chain. -These additional fields are never stored on-chain and are never taken into consideration when [hashing the DDO](ddo-specification.md#ddo-checksum). - +These additional fields are never stored on-chain and are never taken into consideration when [hashing the DDO](/broken/pages/dd4e368QBuvVe0L8VV4B#ddo-checksum). #### Datatokens -The `datatokens` array contains information about the ERC20 datatokens attached to [asset services](ddo-specification.md#services). +The `datatokens` array contains information about the ERC20 datatokens attached to [asset services](/broken/pages/dd4e368QBuvVe0L8VV4B#services).
| Attribute | Type | Description |
| --------- | ---- | ----------- |
| `address` | string | Contract address of the deployed ERC20 contract. |
| `name` | string | Name of the datatoken set in the contract. |
| `symbol` | string | Symbol of the datatoken set in the contract. |
| `serviceId` | string | ID of the service the datatoken is attached to. |
@@ -473,18 +470,19 @@ The `datatokens` array contains information about the ERC20 datatokens attached

**Indexed Metadata** contains off-chain data that helps store asset pricing details and display them properly within decentralized applications. **Indexed Metadata** is composed of the following objects:
+
* NFT
* Event
* Purgatory
* Stats

-When hashing is performed against a document, the **indexedMetadata** object has to be removed from the DDO structure, with its off-chain data being stored and maintained only in the **_Indexer_** database, within the **DDO** collection.
+When hashing is performed against a document, the **indexedMetadata** object has to be removed from the DDO structure, with its off-chain data being stored and maintained only in the _**Indexer**_ database, within the **DDO** collection.

#### NFT

The `nft` object contains information about the ERC721 NFT contract which represents the intellectual property of the publisher.
| Attribute | Type | Description |
| --------- | ---- | ----------- |
| `address` | `string` | Contract address of the deployed ERC721 NFT contract. |
| `name` | `string` | Name of NFT set in contract. |
| `symbol` | `string` | Symbol of NFT set in contract. |
| `owner` | `string` | ETH account address of the NFT owner. |
| `state` | `number` | State of the asset reflecting the NFT contract value. See State. |
| `created` | ISO date/time `string` | Contains the date of NFT creation. |
| `tokenURI` | `string` | tokenURI |
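As a hedged illustration of the table above, the `nft` object might be modeled like this (all values are placeholders):

```typescript
// Hypothetical shape of the `nft` object in an Aquarius DDO response.
interface NftInfo {
  address: string;  // contract address of the deployed ERC721 NFT contract
  name: string;     // NFT name set in the contract
  symbol: string;   // NFT symbol set in the contract
  owner: string;    // ETH account address of the NFT owner
  state: number;    // asset state mirroring the NFT contract value
  created: string;  // ISO date/time of NFT creation
  tokenURI: string; // token URI
}

const nft: NftInfo = {
  address: "0x0000000000000000000000000000000000000000", // placeholder
  name: "Ocean Data NFT",
  symbol: "OCEAN-NFT",
  owner: "0x0000000000000000000000000000000000000000",   // placeholder
  state: 0, // commonly the "active" state in the States list above
  created: "2023-01-01T00:00:00Z",
  tokenURI: "https://example.com/token-metadata.json",   // placeholder
};
```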
@@ -505,7 +503,6 @@ The `nft` object contains information about the ERC721 NFT contract which repres
- #### Event The `event` section contains information about the last transaction that created or updated the DDO. @@ -605,7 +602,7 @@ For algorithms and datasets that are used for compute to data, there are additional fields and objects within the DDO structure that you need to consider. These include: * `compute` attributes * `publisherTrustedAlgorithms` * `consumerParameters` -Details for each of these are explained on the [Compute Options page](compute-to-data/compute-options.md). +Details for each of these are explained on the [Compute Options page](../compute-to-data/compute-options.md). ## New DDO Schema - Detailed @@ -749,4 +746,3 @@ IndexedMetadata "1" --> "1" Purgatory Stats "1" --> "1..n" Price ``` - diff --git a/developers/storage.md b/developers/assets-and-services/storage.md similarity index 96% rename from developers/storage.md rename to developers/assets-and-services/storage.md index d645d75e4..5ad1c0beb 100644 --- a/developers/storage.md +++ b/developers/assets-and-services/storage.md @@ -154,7 +154,7 @@ Example: } ``` -To get information about the files after encryption, the `/fileinfo` endpoint of the [_Provider_](old-infrastructure/provider/) returns based on a passed DID an array of file metadata (based on the file type): +To get information about the files after encryption, the `/fileinfo` endpoint of the [_Provider_](../old-infrastructure/provider/) returns, for a passed DID, an array of file metadata (based on the file type): ```json [ diff --git a/developers/barge/README.md b/developers/barge/README.md deleted file mode 100644 index 520026353..000000000 --- a/developers/barge/README.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -description: 🧑🏽‍💻 Local Development Environment for Ocean Protocol ---- - -# Barge - -The Barge component of Ocean Protocol is a powerful tool designed to simplify the development process by providing Docker Compose files for running the full Ocean Protocol stack locally. It allows developers to set up and configure the various services required by Ocean Protocol for local testing and development purposes. - -By using the Barge component, developers can spin up an environment that includes default versions of [Aquarius](../old-infrastructure/aquarius/), [Provider](../old-infrastructure/provider/), [Subgraph](../old-infrastructure/subgraph/), and [Compute-to-Data](../compute-to-data/). Additionally, it deploys all the [smart contracts](../contracts/) from the ocean-contracts repository, ensuring a complete and functional local setup. The Barge component also starts additional services like [Ganache](https://trufflesuite.com/ganache/), which is a local blockchain simulator used for smart contract development, and [Elasticsearch](https://www.elastic.co/elasticsearch/), a powerful search and analytics engine required by Aquarius for efficient indexing and querying of data sets. A full list of components and exposed ports is available in the GitHub [repository](https://github.com/oceanprotocol/barge#component-versions-and-exposed-ports). - -

Load Ocean components locally by using Barge

- -To explore all the available options and gain a deeper understanding of how to utilize the Barge component, you can visit the official GitHub [repository](https://github.com/oceanprotocol/barge#all-options) of Ocean Protocol. - -By utilizing the Barge component, developers gain the freedom to conduct experiments, customize, and fine-tune their local development environment; it also offers the flexibility to override the Docker image tag associated with specific components. By setting the appropriate environment variable before executing the start\_ocean.sh command, developers can customize the versions of various components according to their requirements. For instance, developers can modify the `AQUARIUS_VERSION`, `PROVIDER_VERSION`, `CONTRACTS_VERSION`, `RBAC_VERSION`, and `ELASTICSEARCH_VERSION` environment variables to specify the desired Docker image tags for each respective component. - -{% hint style="warning" %} -⚠️ We've got an important heads-up about Barge that we want to share with you. Brace yourself, because **Barge is not for the faint-hearted**! Here's the deal: Barge works great on Linux, but we need to be honest about its limitations on macOS. And, well, it doesn't work at all on Windows. Sorry, Windows users! - -To make things easier for everyone, we **strongly** recommend giving a **testnet** a try first. Everything is configured already so it should be sufficient for your needs as well. Visit the [networks](../../discover/networks/) page to have clarity on the available test networks. ⚠️ -{% endhint %} diff --git a/developers/barge/local-setup-ganache.md b/developers/barge/local-setup-ganache.md deleted file mode 100644 index 75a13945c..000000000 --- a/developers/barge/local-setup-ganache.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -description: 🧑🏽‍💻 Your Local Development Environment for Ocean Protocol ---- - -# Local Setup - -**Functionalities of Barge** - -Barge offers several functionalities that enable developers to create and test the Ocean Protocol infrastructure efficiently. Here are its key components: - -
| Functionality | Description |
| ------------- | ----------- |
| Aquarius | A metadata storage and retrieval service for Ocean Protocol. Allows indexing and querying of metadata. |
| Provider | A service that facilitates interaction between users and the Ocean Protocol network. |
| Ganache | A local Ethereum blockchain network for testing and development purposes. |
| TheGraph | A decentralized indexing and querying protocol used for building subgraphs in Ocean Protocol. |
| ocean-contracts | Smart contracts repository for Ocean Protocol. Deploys and manages the necessary contracts for local development. |
| Customization and Options | Barge provides various options to customize component versions, log levels, and enable/disable specific blocks. |
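To sanity-check that the local stack is up, you can point any Ethereum client at the Ganache node. A minimal ethers.js sketch, assuming Barge's default local RPC endpoint on port 8545 (check the Barge repository's component/port list if you changed the defaults):

```typescript
import { JsonRpcProvider } from "ethers";

// Minimal sketch: connect to the Ganache node that Barge starts locally.
// Assumes the default RPC endpoint http://localhost:8545.
async function checkLocalChain(): Promise<void> {
  const provider = new JsonRpcProvider("http://localhost:8545");
  const network = await provider.getNetwork();
  const block = await provider.getBlockNumber();
  console.log(`Connected to chain ${network.chainId} at block ${block}`);
}

checkLocalChain().catch(console.error);
```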
- -Barge helps developers to get started with Ocean Protocol by providing a local development environment. With its modular and user-friendly design, developers can focus on building and testing their applications without worrying about the intricacies of the underlying infrastructure. - -To use Barge, you can follow the instructions in the [Barge repository](https://github.com/oceanprotocol/barge). - -Before getting started, make sure you have the following prerequisites: - -* **Linux** or **macOS** operating system. Barge does not currently support Windows, but you can run it inside a Linux virtual machine or use the Windows Subsystem for Linux (WSL). -* Docker installed on your system. You can download and install Docker from the [Docker website](https://www.docker.com/get-started). On Linux, you may need to allow non-root users to run Docker. On Windows or macOS, it is recommended to increase the memory allocated to Docker to 4 GB (default is 2 GB). -* Docker Compose, which is used to manage the Docker containers. You can find installation instructions in the [Docker Compose documentation](https://docs.docker.com/compose/). - -Once you have the prerequisites set up, you can clone the Barge repository and navigate to the repository folder using the command line: - -```bash -git clone git@github.com:oceanprotocol/barge.git -cd barge -``` - -The repository contains a shell script called `start_ocean.sh` that you can run to start the Ocean Protocol stack locally for development. To start Barge with the default configurations, simply run the following command: - -```bash -./start_ocean.sh -``` - -This command will start the default versions of Aquarius, Provider, and Ganache, along with the Ocean contracts deployed to Ganache. - -For more advanced options and customization, you can refer to the README file in the Barge repository. It provides detailed information about the available startup options, component versions, log levels, and more. - -To clean up your environment and stop all the Barge-related containers, volumes, and networks, you can run the following command: - -```bash -./cleanup.sh -``` - -Please refer to the Barge repository's README for more comprehensive instructions, examples, and details on how to use Barge for local development with the Ocean Protocol stack. diff --git a/developers/community-monetization.md b/developers/community-monetization.md deleted file mode 100644 index 798996512..000000000 --- a/developers/community-monetization.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -description: How can you build a self sufficient project? ---- - -# Community Monetization - -The intentions with all of the updates are to ensure that your project is able to become self-sufficient and profitable in the long run (if that’s your aim). We love projects that are built on top of Ocean and we want to ensure that you are able to generate enough income to keep your project running well into the future. - -### 1. Publishing & Selling Data - -**Do you have data that you can monetize?** :thinking: - -Ocean introduced the new crypto primitives of “data on-ramp” and “data off-ramp” via datatokens. The publisher creates ERC20 datatokens for a dataset (on-ramp). Then, anyone can access that dataset by acquiring and sending datatokens to the publisher via Ocean handshaking (data off-ramp). As a publisher, it’s in your best interest to create and publish useful data — datasets that people want to consume — because the more they consume the more you can **earn**. 
This is the heart of Ocean utility: connecting data publishers with data consumers :people\_hugging: - -The datasets can take one of many shapes. For AI use cases, they may be raw datasets, cleaned-up datasets, feature-engineered **data**, **AI models**, **AI model predictions**, or otherwise. (They can even be other forms of copyright-style IP such as **photos**, **videos**, or **music**!) Algorithms themselves may be sold as part of Ocean's Compute-to-Data feature. - -The first opportunity of data NFTs is the potential to sell the base intellectual property (IP) as an exclusive license to others. This is akin to EMI selling the Beatles' master tapes to Universal Music: whoever owns the masters has the right to create records, CDs, and digital [sub-licenses](../discover/glossary.md). It's the same for data: as the data NFT owner you have the **exclusive right** to create ERC20 datatoken sub-licenses. With Ocean, this right is now transferable as a data NFT. You can sell these data NFTs in **OpenSea** and other NFT marketplaces. - -If you're part of an established organization or a growing startup, you'll also love the new role structure that comes with data NFTs. For example, you can specify a different address to collect [revenue](contracts/revenue.md) compared to the address that owns the NFT. It's now possible to fully administer your project through these [roles](contracts/roles.md). - -**In short, if you have data to sell, then Ocean gives you superpowers to scale up and manage your data project. We hope this enables you to bring your data to new audiences and increase your profits.** - -### 2. Running Your Own Data dApp - -We have always been super encouraging of anyone who wishes to build a dApp on top of Ocean or to fork Ocean Market and make their own data marketplace. And now, we have taken this to the next level and introduced more opportunities and even more fee customization options. - -Ocean empowers dApp owners like yourself to have greater flexibility and control over the fees you can charge. This means you can tailor the fee structure to suit your specific needs and ensure the sustainability of your project. **The smart contracts enable you to collect a fee not only on consumption, but also on fixed-rate exchanges, and you can also set the fee value.** For more detailed information regarding the fees, we invite you to visit the [fees](contracts/fees.md) page. - -Another new opportunity is using your own **ERC20** token in your dApp, where it's used as the unit of exchange. This is fully supported and can be a great way to ensure the sustainability of your project. - -### 3. Running Your Own Provider - -Now this is a completely brand new opportunity to start generating [revenue](contracts/revenue.md) — running your own [provider](https://github.com/oceanprotocol/provider). We have been aware for a while now that many of you haven't taken up the opportunity to run your own provider, and the reason seems obvious — there aren't strong enough incentives to do so. - -For those that aren't aware, [Ocean Provider](old-infrastructure/provider/) is the proxy service that's responsible for encrypting/decrypting the data and streaming it to the consumer. It also validates if the user is allowed to access a particular data asset or service. It's a crucial component in Ocean's architecture. - -As mentioned above, fees are now paid to the individual or organization running the provider whenever a user downloads a data asset. The fees for downloading an asset are set as a cost per MB. 
In addition, there is also a provider fee that is paid whenever a compute job is run, which is set as a price per minute. - -The download and compute fees can both be set to any absolute amount and you can also decide which token you want to receive the fees in — they don’t have to be in the same currency used in the consuming market. So for example, the provider fee could be a fixed rate of 5 USDT per 1000 MB of data downloaded, and this fee remains fixed in USDT even if the marketplace is using a completely different currency. - -Additionally, provider fees are not limited to data consumption — they can also be used to charge for compute resources. So, for example, this means a provider can charge a fixed fee of 15 DAI to reserve compute resources for 1 hour. This has a huge upside for both the user and the provider host. From the user’s perspective, this means that they can now reserve a suitable amount of compute resources according to what they require. For the host of the provider, this presents another great opportunity to create an income. - -**Benefits to the Ocean Community** We’re always looking to give back to the Ocean community and collecting fees is an important part of that. As mentioned above, the Ocean Protocol Foundation retains the ability to implement community fees on data consumption. The tokens that we receive will either be burned or invested in the community via projects that they are building. These investments will take place either through [Data Farming](../data-farming/), [Ocean Shipyard](https://oceanprotocol.com/shipyard), or Ocean Ventures. - -Projects that utilize OCEAN or H2O are subject to a 0.1% fee. In the case of projects that opt to use different tokens, an additional 0.1% fee will be applied. We want to support marketplaces that use other tokens but we also recognize that they don’t bring the same wider benefit to the Ocean community, so we feel this small additional fee is proportionate. diff --git a/developers/compute-to-data/README.md b/developers/compute-to-data/README.md deleted file mode 100644 index a127c1adb..000000000 --- a/developers/compute-to-data/README.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -description: Compute to data version 2 (C2dv2) ---- - -# Compute to data - -### Introduction - -Certain datasets, such as health records and personal information, are too sensitive to be directly sold. However, Compute-to-Data offers a solution that allows you to monetize these datasets while keeping the data private. Instead of selling the raw data itself, you can offer compute access to the private data. This means you have control over which algorithms can be run on your dataset. For instance, if you possess sensitive health records, you can permit an algorithm to calculate the average age of a patient without revealing any other details. - -Compute-to-Data effectively resolves the tradeoff between leveraging the benefits of private data and mitigating the risks associated with data exposure. It enables the data to remain on-premise while granting third parties the ability to perform specific compute tasks on it, yielding valuable results like statistical analysis or AI model development. - -Private data holds immense value as it can significantly enhance research and business outcomes. However, concerns regarding privacy and control often impede its accessibility. Compute-to-Data addresses this challenge by granting specific access to the private data without directly sharing it. 
This approach finds utility in various domains, including scientific research, technological advancements, and marketplaces where private data can be securely sold while preserving privacy. Companies can seize the opportunity to monetize their data assets while ensuring the utmost protection of sensitive information. - -Private data has the potential to drive groundbreaking discoveries in science and technology, with increased data improving the predictive accuracy of modern AI models. Due to its scarcity and the challenges associated with accessing it, private data is often regarded as the most valuable. By utilizing private data through Compute-to-Data, significant rewards can be reaped, leading to transformative advancements and innovative breakthroughs. - -{% hint style="info" %} -The Ocean Protocol provides a compute environment that you can access at the following [address](https://stagev4.c2d.oceanprotocol.com/). Feel free to explore and utilize this platform for your needs. -{% endhint %} - -We suggest reading these guides to get an understanding of how compute-to-data works: - -### Architecture & Overview Guides - -* [Architecture](compute-to-data-architecture.md) -* [Datasets & Algorithms](compute-to-data-datasets-algorithms.md) -* [Writing Algorithms](compute-to-data-algorithms.md) -* [Compute options](compute-options.md) - -### User Guides - -* [How to write compute to data algorithms](broken-reference/) -* [How to publish a compute-to-data algorithm](broken-reference/) -* [How to publish a dataset for compute to data](broken-reference/) - -### Developer Guides - -* [How to use compute to data with ocean.js](../ocean.js/cod-asset.md) -* [How to use compute to data with ocean.py](../../data-scientists/ocean.py) - -### Infrastructure Deployment Guides - -* [Minikube Environment](../../infrastructure/compute-to-data-minikube.md) -* [Private docker registry](../../infrastructure/compute-to-data-docker-registry.md) diff --git a/developers/contracts/README.md b/developers/contracts/README.md deleted file mode 100644 index acddb2407..000000000 --- a/developers/contracts/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -description: Empowering the Decentralised Data Economy ---- - -# Contracts - -The suite of smart contracts serve as the backbone of the decentralized data economy. These contracts facilitate secure, transparent, and efficient interactions among data providers, consumers, and ecosystem participants. - -The smart contracts have been deployed across multiple [networks](../../discover/networks/) and are readily accessible through the GitHub [repository](https://github.com/oceanprotocol/contracts/tree/main/contracts). They introduced significant enhancements that encompass the following key **features**: - -### [**Data NFTs**](data-nfts.md) **for Enhanced Data IP Management** - -In Ocean V3, the publication of a dataset involved deploying an ERC20 "datatoken" contract along with relevant [metadata](../metadata.md). This process allowed the dataset publisher to claim copyright or exclusive rights to the underlying Intellectual Property (IP). Upon obtaining 1.0 ERC20 datatokens for a particular dataset, users were granted a license to consume that dataset, utilizing the Ocean infrastructure by spending the obtained datatokens. - -However, Ocean V3 faced limitations in terms of flexibility. It lacked support for different licenses associated with the same base IP, such as 1-day versus 1-month access, and the transferability of the base IP was not possible. 
Additionally, the ERC20 datatoken template was hardcoded, restricting customization options. - -Ocean V4 effectively tackles these challenges by adopting **ERC721** **tokens** to explicitly represent the **base IP** as "data NFTs" (Non-Fungible Tokens). [**Data NFT**](data-nfts.md) owners can now deploy ERC20 "datatoken" contracts specific to their data NFTs, with each datatoken contract offering its own distinct licensing terms. - -By utilizing ERC721 tokens, Ocean **grants data creators greater flexibility and control over licensing arrangements**. The introduction of data NFTs allows for the representation of [base IP](../../discover/glossary.md) and the creation of customized ERC20 datatoken contracts tailored to individual licensing requirements. - -

Ocean Protocol Smart Contracts

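To make the base-IP/licensing split concrete, here is an illustrative TypeScript model of the relationship. It is not an on-chain interface, just the shape of the idea; all addresses are placeholders:

```typescript
// Illustrative data model only, not an on-chain interface.
// One ERC721 data NFT (the base IP) can back several ERC20 datatoken
// contracts, each encoding its own licensing terms.
interface DatatokenLicense {
  datatokenAddress: string; // ERC20 contract granting this sub-license
  timeoutSeconds: number;   // 0 means no time limit
}

interface DataNft {
  nftAddress: string;           // ERC721 contract representing the base IP
  owner: string;                // current base-IP holder
  licenses: DatatokenLicense[]; // sub-licenses against the base IP
}

const example: DataNft = {
  nftAddress: "0x0000000000000000000000000000000000000000", // placeholder
  owner: "0x0000000000000000000000000000000000000000",      // placeholder
  licenses: [
    { datatokenAddress: "0x0000000000000000000000000000000000000001", timeoutSeconds: 86_400 },  // 1-day access
    { datatokenAddress: "0x0000000000000000000000000000000000000002", timeoutSeconds: 604_800 }, // 1-week access
  ],
};
```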
- -### [**Community monetization**](../community-monetization.md), to help the community create sustainable businesses. - -Ocean brings forth enhanced opportunities for dApp owners, creating a conducive environment for the emergence of a thriving market of **third-party Providers**. - -With Ocean, dApp owners can unlock additional benefits. Firstly, the smart contracts empower dApp owners to collect [fees](fees.md) not only during **data consumption** but also through **fixed-rate exchanges**. This expanded revenue model allows owners to derive more value from the ecosystem. Moreover, in Ocean, the dApp operator has the authority to determine the fee value, providing them with **increased control** over their pricing strategies. - -In addition to empowering dApp owners, Ocean facilitates the participation of third-party [Providers](../old-infrastructure/provider/) who can offer compute services in exchange for a fee. This paves the way for the development of a diverse marketplace of Providers. This model supports both centralized trusted providers, where data publishers and consumers have established trust relationships, as well as trustless providers that leverage decentralization or other privacy-preserving mechanisms. - -By enabling a marketplace of [Providers](../old-infrastructure/provider/), Ocean fosters competition, innovation, and choice. It creates an ecosystem where various providers can offer their compute services, catering to the diverse needs of data publishers and consumers. Whether based on trust or privacy-preserving mechanisms, this expansion in provider options enhances the overall functionality and accessibility of the Ocean Protocol ecosystem. - -Key features of the smart contracts: - -* Base IP is now represented by a data [NFT](data-nfts.md), from which a data publisher can create multiple ERC20s [datatokens](datatokens.md) representing different types of access for the same dataset. -* Interoperability with the NFT ecosystem (and DeFi & DAO tools). -* Allows new data [NFT & datatoken templates](datatoken-templates.md), for flexibility and future-proofing. -* Besides base data IP, you can use data NFTs to **implement comments & ratings, verifiable claims, identity credentials, and social media posts**. They can point to parent data NFTs, enabling the nesting of comments on comments, or replies to tweets. All on-chain, GDPR-compliant, easily searched, with js & py drivers 🤯 -* Introduce an advanced [Fee](fees.md) structure both for dApp and provider runners 💰 -* [Roles](roles.md) Administration: there are now multiple roles for a more flexible administration both at [NFT](data-nfts.md) and [ERC20](datatokens.md) levels 👥 -* When the NFT is transferred, it auto-updates all permissions, e.g. who receives payment, or who can mint derivative ERC20 datatokens. -* Key-value store in the NFT contract: NFT contract can be used to store custom key-value pairs (ERC725Y standard) enabling applications like soulbound tokens and Sybil protection approaches 🗃️ -* Multiple NFT template support: the Factory can deploy different types of NFT templates 🖼️ -* Multiple datatoken template support: the Factory can deploy different types of [datatoken templates](datatoken-templates.md). - -In the forthcoming pages, you will discover more information about the key features. 
If you have any inquiries or find anything missing, feel free to contact the core team on [Discord](https://discord.com/invite/TnXjkR5) 💬 diff --git a/developers/contracts/data-nfts.md b/developers/contracts/data-nfts.md deleted file mode 100644 index 4fcde9093..000000000 --- a/developers/contracts/data-nfts.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -description: ERC721 data NFTs represent holding the copyright/base IP of a data asset. ---- - -# Data NFTs - -A non-fungible token stored on the blockchain represents a unique asset. NFTs can represent images, videos, digital art, or any piece of information. NFTs can be traded, and allow the transfer of copyright/base IP. [EIP-721](https://eips.ethereum.org/EIPS/eip-721) defines an interface for handling NFTs on EVM-compatible blockchains. The creator of the NFT can deploy a new contract on Ethereum or any Blockchain supporting NFT-related interface and also, transfer the ownership of copyright/base IP through transfer transactions. - -## What is a Data NFT? - -A data NFT represents the **copyright** (or **exclusive license** against copyright) for a data asset on the blockchain — we call this the “**base IP**”. When a user publishes a dataset in Ocean, they create a new NFT as part of the process. This data NFT is proof of your claim of base IP. Assuming a valid claim, you are entitled to the revenue from that asset, just like a title deed gives you the right to receive rent. - -The data NFT smart contract holds metadata about the data asset, stores roles like “who can mint datatokens” or “who controls fees”, and an open-ended key-value store to enable custom fields. - -If you have the private key that controls the NFT, you own that NFT. The owner has the claim on the base IP and is the default recipient of any revenue. They can also assign another account to receive revenue. This enables the publisher to sell their base IP and the revenues that come with it. When the Data NFT is transferred to another user, all the information about roles and where the revenue should be sent is reset. The default recipient of the revenue is the new owner of the data NFT. - -### Key Features and Functionality - -Data NFTs offer several key features and functionalities within the Ocean Protocol ecosystem: - -1. **Ownership and Transferability**: Data NFTs establish ownership rights, enabling data owners to transfer or sell their data assets to other participants in the network. -2. **Metadata and Descriptions**: Each Data NFT contains metadata that describes the associated dataset, providing essential information such as title, description, creator, and licensing terms. -3. **Access Control and Permissions**: Data NFTs can include access control mechanisms, allowing data owners to define who can access and utilize their datasets, as well as the conditions and terms of usage. -4. **Interoperability**: Data NFTs conform to the ERC721 token standard, ensuring interoperability across various platforms, wallets, and marketplaces within the Ethereum ecosystem. - -#### Data NFTs Open Up New Possibilities - -By tokenizing data assets into Data NFTs, data owners can establish clear ownership rights and enable seamless transferability of the associated datasets. Data NFTs serve as digital certificates of authenticity, enabling data consumers to trust the origin and integrity of the data they access. - -With data NFTs, you are able to take advantage of the broader NFT ecosystem and all the tools and possibilities that come with it. 
As a first example, many leading crypto wallets have first-class support for NFTs, allowing you to manage data NFTs from those wallets. Or, you can post your data NFT for sale on a popular NFT marketplace like [OpenSea](https://www.opensea.io/) or [Rarible](https://www.rarible.com/). As a final example, we’re excited to see [data NFTs linked to physical items via WiseKey chips](https://www.globenewswire.com/news-release/2021/05/19/2232106/0/en/WISeKey-partners-with-Ocean-Protocol-to-launch-TrustedNFT-io-a-decentralized-marketplace-for-objects-of-value-designed-to-empower-artists-creators-and-collectors-with-a-unique-solu.html). - -### Implementation in Ocean Protocol - -We have implemented data NFTs using the [ERC721 standard](https://erc721.org/). Ocean Protocol defines the [ERC721Factory](https://github.com/oceanprotocol/contracts/blob/main/contracts/ERC721Factory.sol) contract, allowing **Base IP holders** to create their ERC721 contract instances on any supported networks. The deployed contract stores Metadata, ownership, sub-license information, and permissions. The contract creator can also create and mint ERC20 token instances for sub-licensing the **Base IP**. - -ERC721 tokens are non-fungible, and thus cannot be used for automatic price discovery like ERC20 tokens. ERC721 and ERC20 combined together can be used for sub-licensing. Ocean Protocol's [ERC721Template](https://github.com/oceanprotocol/contracts/blob/main/contracts/templates/ERC721Template.sol) solves this problem by using ERC721 for tokenizing the **Base IP** and tokenizing sub-licenses by using ERC20. To save gas fees, it uses [ERC1167](https://eips.ethereum.org/EIPS/eip-1167) proxy approach on the **ERC721 template**. - -Our implementation has been built on top of the battle-tested [OpenZeppelin contract library](https://docs.openzeppelin.com/contracts/4.x/erc721). However, there are a bunch of interesting parts of the implementation that go a bit beyond an out-of-the-box NFT. The data NFTs can be easily managed from any NFT marketplace like [OpenSea](https://opensea.io/). - -

Data NFT on Open Sea

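Because data NFTs follow the plain ERC721 interface, generic NFT tooling can read them. A minimal ethers.js sketch using only standard ERC721 calls; the contract address and RPC URL are placeholders you would supply:

```typescript
import { Contract, JsonRpcProvider } from "ethers";

// Standard ERC721 read-only fragment; nothing Ocean-specific is needed here.
const erc721Abi = [
  "function name() view returns (string)",
  "function symbol() view returns (string)",
  "function ownerOf(uint256 tokenId) view returns (address)",
];

async function inspectDataNft(nftAddress: string, rpcUrl: string): Promise<void> {
  const provider = new JsonRpcProvider(rpcUrl);
  const nft = new Contract(nftAddress, erc721Abi, provider);
  // Ocean data NFT contracts typically hold a single token with id 1.
  console.log(await nft.name(), await nft.symbol(), await nft.ownerOf(1));
}
```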
- -Something else that we’re super excited about in the data NFTs is a cutting-edge standard called [ERC725](https://github.com/ERC725Alliance/erc725/blob/main/docs/ERC-725.md) being driven by our friends at [Lukso](https://lukso.network/about). The ERC725y feature enables the NFT owner (or a user with the “store updater” role) to input and update information in a key-value store. These values can be viewed externally by anyone. - -ERC725y is incredibly flexible and can be used to store any string; you could use it for anything from additional metadata to encrypted values. This helps future-proof the data NFTs and ensure that they are suitable for a wide range of projects that have not been launched yet. As you can imagine, the inclusion of ERC725y has huge potential and we look forward to seeing the different ways people end up using it. If you’re interested in using this, take a look at [EIP725](https://eips.ethereum.org/EIPS/eip-725#erc725y). diff --git a/developers/contracts/datanft-and-datatoken.md b/developers/contracts/datanft-and-datatoken.md deleted file mode 100644 index e965203b0..000000000 --- a/developers/contracts/datanft-and-datatoken.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -title: Data NFTs and Datatokens -description: >- - In Ocean Protocol, ERC721 data NFTs represent holding the copyright/base IP of - a data asset, and ERC20 datatokens represent licenses to access the assets. ---- - -# Data NFTs and Datatokens - -

Data NFTs and Datatokens

- -In summary: A [**data NFT**](data-nfts.md) serves as a **representation of the copyright** or exclusive license for a data asset on the blockchain, known as the [**base IP**](../../discover/glossary.md). **Datatokens**, on the other hand, function as a crucial mechanism for **decentralized access** to data assets. - -For a specific data NFT, multiple ERC20 datatoken contracts can exist. Here's the main concept: Owning 1.0 datatokens grants you the ability to **consume** the corresponding dataset. Essentially, it acts as a **sub-license** from the [base IP](../../discover/glossary.md), allowing you to utilize the dataset according to the specified license terms (when provided by the publisher). License terms can be established with a "good default" or by the Data NFT owner. - -The choice to employ the ERC20 fungible token standard for datatokens is logical, as licenses themselves are fungible. This standard ensures compatibility and interoperability of datatokens with ERC20-based wallets, decentralized exchanges (DEXes), decentralized autonomous organizations (DAOs), and other relevant platforms. Datatokens can be transferred, acquired through marketplaces or exchanges, distributed via airdrops, and more. - -You can [publish](../../discover/glossary.md) a data NFT initially with no ERC20 datatoken contracts. This means you simply aren't ready to grant access to your data asset yet (sub-license it). Then, you can publish one or more ERC20 datatoken contracts against the data NFT. One datatoken contract might grant consume rights for **1 day**, another for **1 week**, etc. Each different datatoken contract is for **different** license terms. - -For a more comprehensive exploration of intellectual property and its practical connections with ERC721 and ERC20, you can read the blog post written by [Trent McConaghy](http://www.trent.st/), co-founder of Ocean Protocol. It delves into the subject matter in detail and provides valuable insights. - -{% embed url="https://blog.oceanprotocol.com/nfts-ip-1-practical-connections-of-erc721-with-intellectual-property-dc216aaf005d" %} - -**DataNFTs and Datatokens example:** - -* In step 1, Alice **publishes** her dataset with Ocean: this means deploying an ERC721 data NFT contract (claiming copyright/base IP), then an ERC20 datatoken contract (license against base IP). Then Alice mints ERC20 datatokens. -* In step 2, Alice **transfers** 1.0 of them to Bob's wallet; now he has a license to download that dataset (a minimal code sketch follows the figure below). - -

Data NFT & Datatokens flow

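A minimal sketch of step 2 with ethers.js: the standard ERC20 `transfer` is all that moves the license. The wallet, addresses, and token values are placeholders:

```typescript
import { Contract, Wallet, parseEther } from "ethers";

// Alice sends 1.0 datatokens to Bob, handing him a consume license.
// `alice` must be a Wallet connected to a provider.
const erc20Abi = ["function transfer(address to, uint256 amount) returns (bool)"];

async function sendLicense(alice: Wallet, datatokenAddress: string, bob: string) {
  const datatoken = new Contract(datatokenAddress, erc20Abi, alice);
  // Datatokens use 18 decimals, so 1.0 datatokens = 10^18 base units.
  const tx = await datatoken.transfer(bob, parseEther("1"));
  await tx.wait();
}
```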
- -What happens under the hood? 🤔 - -Publishing with smart contracts in Ocean Protocol involves a well-defined process that streamlines the publishing of data assets. It provides a systematic approach to ensure efficient management and exchange of data within the Ocean Protocol ecosystem. By leveraging smart contracts, publishers can securely create and deploy data NFTs, allowing them to tokenize and represent their data assets. Additionally, the flexibility of the smart contracts enables publishers to define pricing schemas for datatokens, facilitating fair and transparent transactions. This publishing framework empowers data publishers by providing them with greater control and access to a global marketplace, while ensuring trust, immutability, and traceability of their published data assets. - -The smart contract publishing flow includes the following steps: - -1. The data publisher initiates the creation of a new data NFT. -2. The data NFT factory deploys the template for the new data NFT. -3. The data NFT template creates the data NFT contract. -4. The address of the newly created data NFT is available to the data publisher. -5. The publisher is now able to create datatokens with a pricing schema for the data NFT. To accomplish this, the publisher initiates a call to the data NFT contract, specifically requesting the creation of a new datatoken with a fixed rate schema. -6. The data NFT contract deploys a new datatoken and a fixed rate schema by interacting with the datatoken template contract. -7. The datatoken contract is created (Datatoken-1 contract). -8. The datatoken template generates a new fixed rate schema for Datatoken-1. -9. The address of Datatoken-1 is now available to the data publisher. -10. Optionally, the publisher can create a new datatoken (Datatoken-2) with a free price schema. -11. The data NFT contract interacts with the Datatoken Template contract to create a new datatoken and a dispenser schema. -12. The datatoken template deploys the Datatoken-2 contract. -13. The datatoken template creates a dispenser for the Datatoken-2 contract. - -Below is a visual representation that illustrates the flow: - -

Data NFT & Datatokens flow

- -We have some awesome hands-on experience when it comes to publishing a data NFT and minting datatokens. - -* Publish using [ocean.py](../../data-scientists/ocean.py/publish-flow.md) -* Publish using [ocean.js](../ocean.js/publish.md) - -### Other References - -* [Data & NFTs 1: Practical Connections of ERC721 with Intellectual Property](https://blog.oceanprotocol.com/nfts-ip-1-practical-connections-of-erc721-with-intellectual-property-dc216aaf005d) -* [Data & NFTs 2: Leveraging ERC20 Fungibility](https://blog.oceanprotocol.com/nfts-ip-2-leveraging-erc20-fungibility-bcee162290e3) -* [Data & NFTs 3: Combining ERC721 & ERC20](https://blog.oceanprotocol.com/nfts-ip-3-combining-erc721-erc20-b69ea659115e) -* [Fungibility sightings in NFTs](https://blog.oceanprotocol.com/on-difficult-to-explain-fungibility-sightings-in-nfts-26bc18620f70) diff --git a/developers/contracts/datatoken-templates.md b/developers/contracts/datatoken-templates.md deleted file mode 100644 index daf988cf8..000000000 --- a/developers/contracts/datatoken-templates.md +++ /dev/null @@ -1,102 +0,0 @@ ---- -description: Discover all about the extensible & flexible smart contract templates. ---- - -# Datatoken Templates - -Each [data NFT](data-nfts.md) or [datatoken](datatokens.md) within Ocean Protocol is generated from pre-defined [template](https://github.com/oceanprotocol/contracts/tree/main/contracts/templates) contracts. The _**templateId**_ parameter specifies the template used for creating a data NFT or datatoken, which can be set during the creation process. The _**templateId**_ is stored within the smart contract code and can be accessed using the [_**getId**_](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/interfaces/IERC20Template.sol#L134)_**()**_ function. - -```javascript -it("#getId - should return templateId", async () => { -  const templateId = 1; -  assert((await erc20Token.getId()) == templateId); -}); -``` - -Currently, Ocean Protocol supports **1** [template](https://github.com/oceanprotocol/contracts/blob/main/contracts/templates/ERC721Template.sol) type for data NFTs and **2** template variants for datatokens: the [**regular template**](https://github.com/oceanprotocol/contracts/blob/main/contracts/templates/ERC20Template.sol) and the [**enterprise template**](https://github.com/oceanprotocol/contracts/blob/main/contracts/templates/ERC20TemplateEnterprise.sol). While these templates share the same interfaces, they differ in their underlying implementation and may offer additional features. - -The details regarding currently supported **datatoken templates** are as follows: - -### **Regular template** - -The regular template allows users to buy/sell/hold datatokens. The datatokens can be minted by the address having a [`MINTER`](roles.md#minter) role, making the supply of datatoken variable. This template is assigned _**`templateId =`**_`1` and the source code is available [here](https://github.com/oceanprotocol/contracts/blob/main/contracts/templates/ERC20Template.sol). - -### **Enterprise template** - -The enterprise template has additional functions apart from methods in the ERC20 interface. This additional feature allows access to the service by paying in the basetoken instead of the datatoken. Internally, the smart contract handles the conversion of basetoken to datatoken, initiating an order to access the service, and minting/burning the datatoken. The total supply of the datatoken effectively remains 0 in the case of the enterprise template. 
This template is assigned _**`templateId =`**_`2` and the source code is available [here](https://github.com/oceanprotocol/contracts/blob/main/contracts/templates/ERC20TemplateEnterprise.sol). - -#### Set the template - -When you're creating an ERC20 datatoken, you can specify the desired template by passing on the template index. - -{% tabs %} -{% tab title="Ocean.js" %} -To specify the datatoken template via ocean.js, you need to customize the [DatatokenCreateParams](https://github.com/oceanprotocol/ocean.js/blob/ae2ff1ccde53ace9841844c316a855de271f9a3f/src/%40types/Datatoken.ts#L3) with your desired `templateIndex`. - -The default template used is 1. - -```typescript -export interface DatatokenCreateParams { - templateIndex: number - minter: string - paymentCollector: string - mpFeeAddress: string - feeToken: string - feeAmount: string - cap: string - name?: string - symbol?: string -} -``` -{% endtab %} - -{% tab title="Ocean.py" %} -To specify the datatoken template via ocean.py, you need to customize the [DatatokenArguments](https://github.com/oceanprotocol/ocean.py/blob/bad11fb3a4cb00be8bab8febf3173682e1c091fd/ocean_lib/models/datatoken_base.py#L64) with your desired template\_index. - -The default template used is 1. - -```python -class DatatokenArguments: - def __init__( - self, - name: Optional[str] = "Datatoken 1", - symbol: Optional[str] = "DT1", - template_index: Optional[int] = 1, - minter: Optional[str] = None, - fee_manager: Optional[str] = None, - publish_market_order_fees: Optional = None, - bytess: Optional[List[bytes]] = None, - services: Optional[list] = None, - files: Optional[List[FilesType]] = None, - consumer_parameters: Optional[List[Dict[str, Any]]] = None, - cap: Optional[int] = None, - ): -``` -{% endtab %} -{% endtabs %} - -{% hint style="info" %} -By default, all assets published through the Ocean Market use the Enterprise Template. -{% endhint %} - -#### Retrieve the template - -To identify the template used for a specific asset, you can easily retrieve this information using the network explorer. Here are the steps to follow: - -1. Visit the network explorer where the asset was published. -2. Search for the datatoken address :mag: -3. Once you have located the datatoken address, click on the contract tab to access more details. -4. Within the contract details, we can identify and determine the template used for the asset. - - - -We like making things easy :sunglasses: so here is an even easier way to retrieve the info for [this](https://market.oceanprotocol.com/asset/did:op:cd086344c275bc7c560e91d472be069a24921e73a2c3798fb2b8caadf8d245d6) asset published in the Ocean Market: - -{% embed url="https://app.arcade.software/share/wxBPSc42eSYUiawSY8rC" fullWidth="false" %} -{% endembed %} - -{% hint style="info" %} -_It's important to note that Ocean Protocol may introduce new templates to support additional variations of data NFTs and datatokens in the future._ -{% endhint %} diff --git a/developers/contracts/datatokens.md b/developers/contracts/datatokens.md deleted file mode 100644 index d3b44a64c..000000000 --- a/developers/contracts/datatokens.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -description: ERC20 datatokens represent licenses to access the assets. ---- - -# Datatokens - -Fungible tokens are a type of digital asset that are identical and interchangeable with each other. Each unit of a fungible token holds the same value and can be exchanged on a one-to-one basis. 
This means that one unit of a fungible token is indistinguishable from another unit of the same token. Examples of fungible tokens include cryptocurrencies like Bitcoin (BTC) and Ethereum (ETH), where each unit of the token is equivalent to any other unit of the same token. Fungible tokens are widely used for transactions, trading, and as a means of representing value within blockchain-based ecosystems. - -## What is a Datatoken? - -Datatokens are fundamental within Ocean Protocol, representing a key mechanism to **access** data assets in a decentralized manner. In simple terms, a datatoken is an **ERC20-compliant token** that serves as **access control** for a data/service represented by a [data NFT](data-nfts.md). - -Datatokens enable data assets to be tokenized, allowing them to be easily traded, shared, and accessed within the Ocean Protocol ecosystem. Each datatoken is associated with a particular data asset, and its value is derived from the underlying dataset's availability, scarcity, and demand. - -By using datatokens, data owners can retain ownership and control over their data while still enabling others to access and utilize it based on predefined license terms. These license terms define the conditions under which the data can be accessed, used, and potentially shared by data consumers. - -### Understanding Datatokens and Licenses - -Each datatoken represents a [**sub-license**](../../discover/glossary.md) from the base intellectual property (IP) owner, enabling users to access and consume the associated dataset. The license terms can be set by the data NFT owner or default to a predefined "good default" license. The fungible nature of ERC20 tokens aligns perfectly with the fungibility of licenses, facilitating seamless exchangeability and interoperability between different datatokens. - -By adopting the ERC20 standard for datatokens, Ocean Protocol ensures compatibility and interoperability with a wide array of ERC20-based wallets, [decentralized exchanges (DEXes)](https://blog.oceanprotocol.com/ocean-datatokens-will-be-tradeable-on-decentrs-dex-41715a166a1f), decentralized autonomous organizations (DAOs), and other blockchain-based platforms. This standardized approach enables users to effortlessly transfer, purchase, exchange, or receive datatokens through various means such as marketplaces, exchanges, or airdrops. - -### Utilizing Datatokens - -Data owners and consumers can engage with datatokens in numerous ways. Datatokens can be acquired through transfers or obtained by purchasing them on dedicated marketplaces or exchanges. Once in possession of the datatokens, users gain access to the corresponding dataset, enabling them to utilize the data within the boundaries set by the associated license terms. - -Once someone has generated datatokens, they can be used in any ERC20 exchange, centralized or decentralized. In addition, Ocean provides a convenient default marketplace that is tuned for data: [Ocean Market](https://market.oceanprotocol.com). It’s a vendor-neutral reference data marketplace for use by the Ocean community. - -You can publish a [data NFT](data-nfts.md) initially with no ERC20 datatoken contracts. This means you simply aren’t ready to grant access to your data asset yet (sub-license it). Then, you can publish one or more ERC20 datatoken contracts against the data NFT. One datatoken contract might grant consume rights for 1 day, another for 1 week, etc. Each different datatoken contract is for different license terms. 
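Putting the "1.0 datatokens = one license" rule into code, here is a simplified read-only check with ethers.js. This is only the balance side; actually consuming an asset additionally spends the datatoken through an order via the Provider. Addresses and the RPC URL are placeholders:

```typescript
import { Contract, JsonRpcProvider, parseEther } from "ethers";

// Simplified view only: holding >= 1.0 datatokens is what entitles an
// account to consume; the real consume flow spends them via an order.
const erc20Abi = ["function balanceOf(address owner) view returns (uint256)"];

async function holdsLicense(datatokenAddress: string, user: string, rpcUrl: string) {
  const provider = new JsonRpcProvider(rpcUrl);
  const datatoken = new Contract(datatokenAddress, erc20Abi, provider);
  const balance: bigint = await datatoken.balanceOf(user);
  return balance >= parseEther("1"); // datatokens use 18 decimals
}
```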
diff --git a/developers/contracts/fees.md b/developers/contracts/fees.md deleted file mode 100644 index ececcbc63..000000000 --- a/developers/contracts/fees.md +++ /dev/null @@ -1,119 +0,0 @@ ---- -description: The Ocean Protocol defines various fees for creating a sustainability loop. ---- - -# Fees - -One transaction may have fees going to several entities, such as the market where the asset was published, or the Ocean Community. Here are all of them: - -* Publish Market: the market where the asset was published. -* Consume Market: the market where the asset was consumed. -* Provider: the entity facilitating asset consumption. May serve up data, run compute, etc. -* Ocean Community: Ocean Community Wallet. - -### Publish fee - -When you publish an asset on the Ocean marketplace, there are currently no charges for publishing fees :tada: - -However, if you're building a custom marketplace, you have the flexibility to include a publishing fee by adding an extra transaction in the publish flow. Depending on your marketplace's unique use case, you, as the marketplace owner, can decide whether or not to implement this fee. We believe in giving you the freedom to tailor your marketplace to your specific needs and preferences. - -| Value in Ocean Market | Value in Other Markets | -| :-------------------: | :----------------------------: | -| 0% | Customizable in market config. | - -### Swap fee - -Swap fees are incurred as a transaction cost whenever someone exchanges one type of token for another within a [fixed rate exchange](pricing-schemas.md#fixed-pricing). These exchanges can involve swapping a datatoken for a basetoken, like OCEAN or H2O, or vice versa, where basetoken is exchanged for datatoken. The specific value of the swap fee depends on the type of token being used in the exchange. - -The swap fee values are set at the smart contract level and can only be modified by the Ocean Protocol Foundation (OPF). - -| Value for OCEAN or H2O | Value for other ERC20 tokens | -| :--------------------: | :--------------------------: | -| 0.1% | 0.2% | - -### Consume (a.k.a. Order) fee - -When a user exchanges a [datatoken](datatokens.md) for the privilege of downloading an asset or initiating a compute job that utilizes the asset, consume fees come into play. These fees are associated with accessing an asset and include: - -1. **Publisher Market** Consumption Fee - * Defined during the ERC20 [creation](https://github.com/oceanprotocol/contracts/blob/b937a12b50dc4bdb7a6901c33e5c8fa136697df7/contracts/templates/ERC721Template.sol#L334). - * Defined as Address, Token, Amount. The amount is an absolute value (not a percentage). - * A marketplace can charge a specified amount per order. - * E.g.: A market can set a fixed fee of 10 USDT per order, no matter what pricing schemas are used (fixed rate with ETH, BTC, dispenser, etc.). -2. **Consume Market** Consumption Fee - * A market can specify what fee it wants on the order function. -3. **Provider** Consumption Fees - * Defined by the [Provider](../old-infrastructure/provider/) for any consumption. - * Expressed in: Address, Token, Amount (absolute), Timeout. - * You can retrieve them when calling the initialize endpoint. - * E.g.: A provider can charge a fixed fee of 10 USDT per consume, irrespective of the pricing schema used (e.g., fixed rate with ETH, BTC, dispenser). -4. **Ocean Community** Fee - * Ocean's smart contracts collect **Ocean Community fees** during order operations. 
These fees are reinvested in community projects and distributed to the veOCEAN holders through Data Farming. - * This fee is set at the [smart contract](https://github.com/oceanprotocol/contracts/blob/main/contracts/communityFee/OPFCommunityFeeCollector.sol) level. - * It can be updated by Ocean Protocol Foundation. See details in the [smart contracts](https://github.com/oceanprotocol/contracts/blob/main/contracts/pools/FactoryRouter.sol#L391-L407). - -
- -Update Ocean Community Fees - -The Ocean Protocol Foundation can [change](https://github.com/oceanprotocol/contracts/blob/main/contracts/pools/FactoryRouter.sol#L391-L407) the Ocean community fees. - -```solidity -/** -* @dev updateOPCFee - * Updates OP Community Fees - * @param _newSwapOceanFee Amount charged for swapping with ocean approved tokens - * @param _newSwapNonOceanFee Amount charged for swapping with non ocean approved tokens - * @param _newConsumeFee Amount charged from consumeFees - * @param _newProviderFee Amount charged for providerFees - */ -function updateOPCFee(uint256 _newSwapOceanFee, uint256 _newSwapNonOceanFee, - uint256 _newConsumeFee, uint256 _newProviderFee) external onlyRouterOwner { - - swapOceanFee = _newSwapOceanFee; - swapNonOceanFee = _newSwapNonOceanFee; - consumeFee = _newConsumeFee; - providerFee = _newProviderFee; - emit OPCFeeChanged(msg.sender, _newSwapOceanFee, _newSwapNonOceanFee, _newConsumeFee, _newProviderFee); -} -``` - -
- -Each of these fees plays a role in ensuring fair compensation and supporting the Ocean community. - -| Fee | Value in Ocean Market | Value in Other Markets | -| ---------------- | :-------------------: | -------------------------------------------------------- | -| Publisher Market | 0 | Customizable in market config. | -| Consume Market | 0 | Customizable in market config. | -| Provider | 0 | Customizable. See details [below](fees.md#provider-fee). | -| Ocean Community | 0.03 DT | 0.03 DT | - -### Provider fee - -[Providers](../old-infrastructure/provider/) facilitate data consumption, initiate compute jobs, encrypt and decrypt DDOs, and verify user access to specific data assets or services. - -Provider fees serve as [compensation](../community-monetization.md#3.-running-your-own-provider) to the individuals or organizations operating their own provider instances when users request assets. - -* Defined by the [Provider](../old-infrastructure/provider/) for any consumption. -* Expressed in: Address, Token, Amount (absolute), Timeout. -* You can retrieve them when calling the initialize endpoint. -* These fees can be set as a **fixed amount** rather than a percentage. -* Providers have the flexibility to specify the token in which the fees must be paid, which can differ from the token used in the consuming market. -* Provider fees can be utilized to charge for [computing](../compute-to-data/) resources. Consumers can select the desired payment amount based on the compute resources required to execute an algorithm within the [Compute-to-Data](../compute-to-data/) environment, aligning with their specific needs. -* Eg: A provider can charge a fixed fee of 10 USDT per consume, irrespective of the pricing schema used (e.g., fixed rate with ETH, BTC, dispenser). -* Eg: A provider may impose a fixed fee of 15 DAI to reserve compute resources for 1 hour, enabling the initiation of compute jobs. - -These fees play a crucial role in incentivizing individuals and organizations to operate provider instances and charge consumers based on their resource usage. By doing so, they contribute to the growth and sustainability of the Ocean Protocol ecosystem. - -| Type | OPF Provider | 3rd party Provider | -| ---------------------------------------------------------------------------- | :--------------------: | -------------------------------------------------------------------- | -| Token to charge the fee: `PROVIDER_FEE_TOKEN` | OCEAN |
 Customizable by the Provider owner, e.g. USDC. |
-| Download: `COST_PER_MB` | 0 | Customizable in the Provider `envvars`. |
-| Compute: `COST_PER_MIN` (environment: 1 CPU, 60 secs max) | 0 | Customizable in the OperatorEngine `envvars`. |
-| Compute: `COST_PER_MIN` (environment: 1 CPU, 1 hour max) | 1.0 OCEAN/min | Customizable in the OperatorEngine `envvars`. |
-| Ocean Community | 0% of the Provider fee | 0% of the Provider fee. |
- -{% hint style="info" %} -Stay up-to-date with the latest information! The values within the system are regularly updated. We recommend verifying the most recent values directly from the [contracts](https://github.com/oceanprotocol/contracts) and the [market](https://github.com/oceanprotocol/market). -{% endhint %} diff --git a/developers/contracts/pricing-schemas.md b/developers/contracts/pricing-schemas.md deleted file mode 100644 index 7a1b2fc45..000000000 --- a/developers/contracts/pricing-schemas.md +++ /dev/null @@ -1,188 +0,0 @@ ---- -description: Choose the revenue model during asset publishing. ---- - -# Pricing Schemas - -Ocean Protocol offers you flexible and customizable pricing options to monetize your valuable data assets. You have two main pricing models to choose from: - -* [Fixed pricing](pricing-schemas.md#fixed-pricing) -* [Free pricing](pricing-schemas.md#free-pricing) - -These models are designed to cater to your specific needs and ensure a smooth experience for data consumers. - -The price of an asset is determined by the number of tokens (this can be OCEAN or any ERC20 token configured when publishing the asset) a buyer must pay to access the data. When users pay the tokens, they get a _datatoken_ in their wallets, a tokenized representation of the access right stored on the blockchain. To read more about datatokens and data NFTs, click [here](datanft-and-datatoken.md). - -To provide you with even greater flexibility in monetizing your data assets, Ocean Protocol allows you to customize the pricing schema by configuring your own ERC20 token when publishing the asset. This means that instead of using OCEAN as the pricing currency, you can utilize your own token, aligning the pricing structure with your specific requirements and preferences. - -You can customize your token this way: - -{% tabs %} -{% tab title="Ocean Market" %} -```javascript -NEXT_PUBLIC_OCEAN_TOKEN_ADDRESS='0x00000' // YOUR TOKEN'S ADDRESS -``` -{% endtab %} - -{% tab title="Ocean.js" %}
-```typescript
```typescript
// https://github.com/oceanprotocol/ocean.js/blob/main/CodeExamples.md#61-publish-a-dataset-create-nft--datatoken-with-a-fixed-rate-exchange
const freParams: FreCreationParams = {
  fixedRateAddress: addresses.FixedPrice,
  baseTokenAddress: addresses.Ocean, // you can customize this with any ERC20 token
  owner: await publisherAccount.getAddress(),
  marketFeeCollector: await publisherAccount.getAddress(),
  baseTokenDecimals: 18,
  datatokenDecimals: 18,
  fixedRate: '1',
  marketFee: '0.001',
  allowedConsumer: ZERO_ADDRESS,
  withMint: true
}
```
{% endtab %}

{% tab title="Ocean.py" %}
```python
exchange_args = ExchangeArguments(
    rate=to_wei(1),  # you can customize this with any price
    base_token_addr=OCEAN.address,  # you can customize this with any ERC20 token
    owner_addr=publisher_wallet.address,
    publish_market_fee_collector=ZERO_ADDRESS,
    publish_market_fee=0,
    allowed_swapper=ZERO_ADDRESS,
    full_info=False,
    dt_decimals=datatoken.decimals()
)
```
{% endtab %}
{% endtabs %}

Furthermore, Ocean Protocol recognizes that different data assets may have distinct pricing needs. That's why the platform supports multiple pricing schemas, allowing you to implement various pricing models for different datasets or use cases. This flexibility ensures that you can tailor the pricing strategy to each specific asset, maximizing its value and potential for monetization.
*Figure: Pricing Schemas*
### Fixed pricing

With the fixed pricing model, you set a specific price for your data assets: buyers who want to access your data pay the designated amount of the configured token. To make things even easier, Ocean automatically creates a special token called a "datatoken" behind the scenes.

This datatoken represents the access right to your data, so buyers don't have to worry about the technical details. If you ever want to adjust the price of your dataset, you have the flexibility to do so whenever you need.

The fixed pricing model relies on the [createWithDecimals](https://github.com/oceanprotocol/contracts/blob/d288e3f94cf6ba2be151a3284da0a3606a263bb9/contracts/pools/fixedRate/FixedRateExchange.sol#L201) function in the FixedRateExchange smart contract, which securely stores the pricing information for assets published using this model.
**Create NFT with Fixed Rate Pricing**

```solidity
/**
 * @dev createNftWithErc20WithFixedRate
 *      Creates a new NFT, then an ERC20, then a FixedRateExchange, all in one call.
 *      Use this carefully, because if the Fixed Rate creation fails, you still pay a lot of gas.
 * @param _NftCreateData input data for NFT Creation
 * @param _ErcCreateData input data for ERC20 Creation
 * @param _FixedData input data for FixedRate Creation
 */
function createNftWithErc20WithFixedRate(
    NftCreateData calldata _NftCreateData,
    ErcCreateData calldata _ErcCreateData,
    FixedData calldata _FixedData
) external nonReentrant returns (address erc721Address, address erc20Address, bytes32 exchangeId){
    // we add ourselves as an ERC20 Deployer, because we need it in order to deploy the fixed rate
    erc721Address = deployERC721Contract(
        _NftCreateData.name,
        _NftCreateData.symbol,
        _NftCreateData.templateIndex,
        address(this),
        address(0),
        _NftCreateData.tokenURI,
        _NftCreateData.transferable,
        _NftCreateData.owner);
    erc20Address = IERC721Template(erc721Address).createERC20(
        _ErcCreateData.templateIndex,
        _ErcCreateData.strings,
        _ErcCreateData.addresses,
        _ErcCreateData.uints,
        _ErcCreateData.bytess
    );
    exchangeId = IERC20Template(erc20Address).createFixedRate(
        _FixedData.fixedPriceAddress,
        _FixedData.addresses,
        _FixedData.uints
    );
    // remove ourselves from the erc20DeployerRole
    IERC721Template(erc721Address).removeFromCreateERC20List(address(this));
}
```
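To make the buyer's side concrete, the sketch below approximates what a consumer pays on a fixed-rate exchange: the datatoken amount times the fixed rate, plus the configured market and community fees, assuming fees are charged on top of the swap amount. This is a simplification for intuition only; the authoritative math (including decimals handling and rounding) lives in the FixedRateExchange contract:

```typescript
// Rough approximation of a fixed-rate quote in the buy direction.
function approxBaseTokenCost(
  dtAmount: number,    // datatokens the consumer wants
  fixedRate: number,   // base tokens per datatoken
  marketFee: number,   // e.g. 0.001 = 0.1%
  communityFee: number // Ocean community swap fee
): number {
  const swapAmount = dtAmount * fixedRate
  return swapAmount + swapAmount * marketFee + swapAmount * communityFee
}

// 1 datatoken at rate 1, with 0.1% market fee and 0.1% community fee: about 1.002 base tokens
console.log(approxBaseTokenCost(1, 1, 0.001, 0.001))
```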
{% hint style="info" %}
There are two templates available: [ERC20Template](datatoken-templates.md#regular-template) and [ERC20TemplateEnterprise](datatoken-templates.md#enterprise-template).

In the case of [ERC20TemplateEnterprise](datatoken-templates.md#enterprise-template), when you deploy a fixed rate exchange, the funds generated as revenue are automatically sent to the owner's address. The owner receives the revenue without any manual intervention.

On the other hand, with [ERC20Template](datatoken-templates.md#regular-template), for a fixed rate exchange, the revenue is available at the fixed rate exchange level. The owner or the payment collector has the authority to manually retrieve the revenue.
{% endhint %}

### Free pricing

On the other hand, the free pricing model gives data consumers access to your asset without requiring a direct payment. Users can freely access your data; the only cost is the transaction fees of the blockchain network.

In this model, datatokens are allocated to a dispenser smart contract, which dispenses datatokens to users at no charge when they access your asset. This is perfect if you want to make your data widely available and encourage collaboration. It's particularly suitable for individuals and organizations working in the public domain or for assets that need to comply with open-access licenses.

The free pricing model relies on the [create](https://github.com/oceanprotocol/contracts/blob/d288e3f94cf6ba2be151a3284da0a3606a263bb9/contracts/pools/dispenser/Dispenser.sol#L154) function in the Dispenser smart contract, which securely stores the pricing information for assets published using this model.
**Create NFT with Free Pricing**

```solidity
/**
 * @dev createNftWithErc20WithDispenser
 *      Creates a new NFT, then an ERC20, then a Dispenser, all in one call.
 *      Use this carefully.
 * @param _NftCreateData input data for NFT Creation
 * @param _ErcCreateData input data for ERC20 Creation
 * @param _DispenserData input data for Dispenser Creation
 */
function createNftWithErc20WithDispenser(
    NftCreateData calldata _NftCreateData,
    ErcCreateData calldata _ErcCreateData,
    DispenserData calldata _DispenserData
) external nonReentrant returns (address erc721Address, address erc20Address){
    // we add ourselves as an ERC20 Deployer, because we need it in order to deploy the dispenser
    erc721Address = deployERC721Contract(
        _NftCreateData.name,
        _NftCreateData.symbol,
        _NftCreateData.templateIndex,
        address(this),
        address(0),
        _NftCreateData.tokenURI,
        _NftCreateData.transferable,
        _NftCreateData.owner);
    erc20Address = IERC721Template(erc721Address).createERC20(
        _ErcCreateData.templateIndex,
        _ErcCreateData.strings,
        _ErcCreateData.addresses,
        _ErcCreateData.uints,
        _ErcCreateData.bytess
    );
    IERC20Template(erc20Address).createDispenser(
        _DispenserData.dispenserAddress,
        _DispenserData.maxTokens,
        _DispenserData.maxBalance,
        _DispenserData.withMint,
        _DispenserData.allowedSwapper
    );
    // remove ourselves from the erc20DeployerRole
    IERC721Template(erc721Address).removeFromCreateERC20List(address(this));
}
```
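For the consumer side, here is a minimal sketch of requesting free datatokens from a deployed dispenser. `DISPENSER_ADDRESS`, `DISPENSER_ABI`, and `DATATOKEN_ADDRESS` are placeholders, and the `dispense` signature should be checked against the Dispenser contract before use:

```typescript
import { ethers } from 'ethers'

declare const DISPENSER_ADDRESS: string                 // placeholder: deployed Dispenser contract
declare const DISPENSER_ABI: ethers.ContractInterface   // placeholder: Dispenser ABI
declare const DATATOKEN_ADDRESS: string                 // placeholder: the asset's datatoken
declare const consumerSigner: ethers.Signer             // the consumer's wallet

async function requestFreeDatatoken() {
  const dispenser = new ethers.Contract(DISPENSER_ADDRESS, DISPENSER_ABI, consumerSigner)
  // The consumer pays only network gas; the dispenser hands out the datatoken.
  await dispenser.dispense(
    DATATOKEN_ADDRESS,
    ethers.utils.parseEther('1'),        // 1 datatoken (18 decimals)
    await consumerSigner.getAddress()    // destination wallet
  )
}
```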
To make the most of these pricing models, you can rely on user-friendly libraries such as [Ocean.js](../ocean.js/) and [Ocean.py](../../data-scientists/ocean.py/), specifically developed for interacting with Ocean Protocol.

With Ocean.js, you can use the [createFRE()](../ocean.js/publish.md) function to effortlessly deploy a data NFT and datatoken with a fixed-rate exchange pricing model. Similarly, in Ocean.py, the [create\_url\_asset()](../../data-scientists/ocean.py/publish-flow.md#create-an-asset--pricing-schema-simultaneously) function allows you to create an asset with fixed pricing. These libraries simplify the process of interacting with Ocean Protocol, managing pricing, and handling asset creation.

By taking advantage of Ocean Protocol's pricing options and leveraging the capabilities of [Ocean.js](../ocean.js/) and [Ocean.py](../../data-scientists/ocean.py/) (or by using the [Market](https://market.oceanprotocol.com)), you can effectively monetize your data assets while ensuring transparent and seamless access for data consumers.

diff --git a/developers/contracts/revenue.md b/developers/contracts/revenue.md
deleted file mode 100644
index 112f383e8..000000000
--- a/developers/contracts/revenue.md
+++ /dev/null
@@ -1,49 +0,0 @@
---
description: Explore and manage the revenue generated from your data NFTs.
---

# Revenue

Having a [data NFT](data-nfts.md) that generates revenue continuously, even when you're not actively involved, is an excellent source of income. Each time someone buys access to your NFT, you receive money, letting you enjoy the rewards of your asset while minimizing the need for constant engagement. :moneybag:
*Figure: Make it rain*
- -By default, the revenue generated from a [data NFT](data-nfts.md) is directed to the [owner](roles.md#nft-owner) of the NFT. This arrangement automatically updates whenever the data NFT is transferred to a new owner. - -However, there are scenarios where you may prefer the revenue to be sent to a different account instead of the owner. This can be accomplished by designating a new payment collector. This feature becomes particularly beneficial when the data NFT is owned by an organization or enterprise rather than an individual. - -{% hint style="info" %} -There are two templates available: [ERC20Template](datatoken-templates.md#regular-template) and [ERC20TemplateEnterprise](datatoken-templates.md#enterprise-template). - -In the case of [ERC20TemplateEnterprise](datatoken-templates.md#enterprise-template), when you deploy a fixed rate exchange, the funds generated as revenue are automatically sent to the owner's address. The owner receives the revenue without any manual intervention. - -On the other hand, with [ERC20Template](datatoken-templates.md#regular-template), for a fixed rate exchange, the revenue is available at the fixed rate exchange level. The owner or the payment collector has the authority to manually retrieve the revenue. -{% endhint %} - -There are several methods available for establishing a new **payment collector**. You have the option to utilize the ERC20Template/ERC20TemplateEnterprise contract directly. Another approach is to leverage the [ocean.py](../../data-scientists/ocean.py/) and [ocean.js](../ocean.js/) libraries. Alternatively, you can employ the network explorer associated with your asset. Lastly, you can directly set it up within the Ocean Market. - -Here are some examples of how to set up a new payment collector using the mentioned methods: - -1. Using [Ocean.js](https://github.com/oceanprotocol/ocean.js/blob/ae2ff1ccde53ace9841844c316a855de271f9a3f/src/contracts/Datatoken.ts#L393). - -```typescript -datatokenAddress = 'Your datatoken address' -paymentCollectorAddress = 'New payment collector address' - -await datatoken.setPaymentCollector(datatokenAddress, callerAddress, paymentCollectorAddress) -``` - -2. Using [Ocean.py](https://github.com/oceanprotocol/ocean.py/blob/bad11fb3a4cb00be8bab8febf3173682e1c091fd/ocean\_lib/models/test/test\_datatoken.py#L39). - -```python -datatokenAddress = 'Your datatoken address' -paymentCollectorAddress = 'New payment collector address' - -datatoken.setPaymentCollector(paymentCollectorAddress, {"from": publisher_wallet}) -``` - -3. Using the [Ocean Market](https://market.oceanprotocol.com/). - -Go to the asset detail page and then click on “Edit Asset” and then scroll down to the field called “Payment Collector Address”. Add the new Ethereum address in this field and then click “Submit“. Finally, you will then need to sign two transactions to finalize the update. - -
*Figure: Update payment collector*
diff --git a/developers/contracts/roles.md b/developers/contracts/roles.md deleted file mode 100644 index f8555fcad..000000000 --- a/developers/contracts/roles.md +++ /dev/null @@ -1,351 +0,0 @@ ---- -title: Data NFTs and datatoken roles -description: >- - The permissions stored on chain in the contracts control the access to the - data NFT (ERC721) and datatoken (ERC20) smart contract functions. ---- - -# Roles - -The permissions governing access to the smart contract functions are stored within the [data NFT](data-nfts.md) (ERC721) smart contract. Both the [data NFT](data-nfts.md) (ERC721) and [datatoken](datatokens.md) (ERC20) smart contracts utilize this information to enforce restrictions on certain actions, limiting access to authorized users. The tables below outline the specific actions that are restricted and can only be accessed by allowed users. - -The [data NFT](data-nfts.md) serves as the foundational intellectual property (IP) for the asset, and all datatokens are inherently linked to the data NFT smart contract. This linkage has enabled the introduction of various exciting capabilities related to role administration. - -### NFT Owner - -The NFT owner is the owner of the base-IP and is therefore at the highest level. The NFT owner can perform any action or assign any role but crucially, the NFT owner is the only one who can assign the manager role. Upon deployment or transfer of the data NFT, the NFT owner is automatically added as a manager. The NFT owner is also the only role that can’t be assigned to multiple users — the only way to share this role is via multi-sig or a DAO. - -## Roles-NFT level - -
*Figure: Roles at the data NFT level*
- -{% hint style="info" %} -With the exception of the NFT owner role, all other roles can be assigned to multiple users. -{% endhint %} - -There are several methods available to assign roles and permissions. One option is to utilize the [ocean.py](../../data-scientists/ocean.py/) and [ocean.js](../ocean.js/) libraries that we provide. These libraries offer a streamlined approach for assigning roles and permissions programmatically. - -Alternatively, for a more straightforward solution that doesn't require coding, you can utilize the network explorer of your asset's network. By accessing the network explorer, you can directly interact with the contracts associated with your asset. Below, we provide a few examples to help guide you through the process. - -### Manager - -The ability to add or remove Managers is exclusive to the **NFT Owner**. If you are the NFT Owner and wish to add/remove a new manager, simply call the [addManager](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/templates/ERC721Template.sol#L426)/[removeManager](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/templates/ERC721Template.sol#L438) function within the ERC721Template contract. This function enables you to grant managerial permissions to the designated individual. - -
- -Add/Remove Manager Contract functions - -```solidity -/** -* @dev addManager -* Only NFT Owner can add a new manager (Roles admin) -* There can be multiple minters -* @param _managerAddress new manager address -*/ - -function addManager(address _managerAddress) external onlyNFTOwner { - _addManager(_managerAddress); -} - -/** -* @dev removeManager -* Only NFT Owner can remove a manager (Roles admin) -* There can be multiple minters -* @param _managerAddress new manager address -*/ -function removeManager(address _managerAddress) external onlyNFTOwner { - _removeManager(_managerAddress); -} -``` - -
- -The **manager** can assign or revoke three main roles (**deployer, metadata updater, and store updater**). The manager is also able to call any other contract (ERC725X implementation). - -{% embed url="https://app.arcade.software/share/qC8QpkLsFIQk3NxPzB8p" fullWidth="false" %} - -### Metadata Updater - -There is also a specific role for updating the metadata. The [Metadata](../metadata.md) updater has the ability to update the information about the data asset (title, description, sample data etc) that is displayed to the user on the asset detail page within the market. - -To add/remove a metadata updater, the manager can use the [addToMetadataList](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/utils/ERC721RolesAddress.sol#L164)/[removeFromMetadataList](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/utils/ERC721RolesAddress.sol#L183) functions from the ERC721RolesAddress. - -
- -Add/Remove Metadata Updater Contract functions - -```solidity -/** -* @dev addToMetadataList -* Adds metadata role to an user. -* It can be called only by a manager -* @param _allowedAddress user address -*/ -function addToMetadataList(address _allowedAddress) public onlyManager { - _addToMetadataList(_allowedAddress); -} - - -/** -* @dev removeFromMetadataList -* Removes metadata role from an user. -* It can be called by a manager or by the same user, if he already has metadata role -* @param _allowedAddress user address -*/ -function removeFromMetadataList(address _allowedAddress) public { - if(permissions[msg.sender].manager == true || - (msg.sender == _allowedAddress && permissions[msg.sender].updateMetadata == true) - ){ - Roles storage user = permissions[_allowedAddress]; - user.updateMetadata = false; - emit RemovedFromMetadataList(_allowedAddress,msg.sender,block.timestamp,block.number); - _SafeRemoveFromAuth(_allowedAddress); - } - else{ - revert("ERC721RolesAddress: Not enough permissions to remove from metadata list"); - } -} -``` - -
- -### Store Updater - -The store updater can store, remove or update any arbitrary key value using the ERC725Y implementation (at the ERC721 level). The use case for this role depends a lot on what data is being stored in the ERC725Y key-value pair — as mentioned above, this is highly flexible. - -To add/remove a store updater, the manager can use the [addTo725StoreList](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/utils/ERC721RolesAddress.sol#L61)/[removeFrom725StoreList](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/utils/ERC721RolesAddress.sol#L76) functions from the ERC721RolesAddress. - -
- -Add/Remove Store Updater Contract functions - -```solidity -/** -* @dev addTo725StoreList -* Adds store role to an user. -* It can be called only by a manager -* @param _allowedAddress user address -*/ -function addTo725StoreList(address _allowedAddress) public onlyManager { - if(_allowedAddress != address(0)){ - Roles storage user = permissions[_allowedAddress]; - user.store = true; - _pushToAuth(_allowedAddress); - emit AddedTo725StoreList(_allowedAddress,msg.sender,block.timestamp,block.number); - } -} - -/** -* @dev removeFrom725StoreList -* Removes store role from an user. -* It can be called by a manager or by the same user, if he already has store role -* @param _allowedAddress user address -*/ -function removeFrom725StoreList(address _allowedAddress) public { - if(permissions[msg.sender].manager == true || - (msg.sender == _allowedAddress && permissions[msg.sender].store == true) - ){ - Roles storage user = permissions[_allowedAddress]; - user.store = false; - emit RemovedFrom725StoreList(_allowedAddress,msg.sender,block.timestamp,block.number); - _SafeRemoveFromAuth(_allowedAddress); - } - else{ - revert("ERC721RolesAddress: Not enough permissions to remove from 725StoreList"); - } -} -``` - -
- -### ERC20 Deployer - -The Deployer has a bunch of privileges at the ERC20 datatoken level. They can deploy new datatokens with fixed price exchange, or free pricing. They can also update the ERC725Y key-value store and **assign** **roles** at the ERC20 level(datatoken level). - -To add/remove an ERC20 deployer, the manager can use the [addToCreateERC20List](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/utils/ERC721RolesAddress.sol#L111)/[removeFromCreateERC20List](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/utils/ERC721RolesAddress.sol#L129) functions from the ERC721RolesAddress. - -
- -Add/Remove ERC20 Deployer Contract functions - -```solidity -/** -* @dev addToCreateERC20List -* Adds deployERC20 role to an user. -* It can be called only by a manager -* @param _allowedAddress user address -*/ -function addToCreateERC20List(address _allowedAddress) public onlyManager { - _addToCreateERC20List(_allowedAddress); -} - -/** -* @dev removeFromCreateERC20List -* Removes deployERC20 role from an user. -* It can be called by a manager or by the same user, if he already has deployERC20 role -* @param _allowedAddress user address -*/ -function removeFromCreateERC20List(address _allowedAddress) public { - if(permissions[msg.sender].manager == true || - (msg.sender == _allowedAddress && permissions[msg.sender].deployERC20 == true) - ){ - Roles storage user = permissions[_allowedAddress]; - user.deployERC20 = false; - emit RemovedFromCreateERC20List(_allowedAddress,msg.sender,block.timestamp,block.number); - _SafeRemoveFromAuth(_allowedAddress); - } - else{ - revert("ERC721RolesAddress: Not enough permissions to remove from ERC20List"); - } -} -``` - -
- -{% hint style="info" %} -To assign/remove all the above roles(ERC20 Deployer, Metadata Updater, or Store Updater), the manager can use the [**addMultipleUsersToRoles**](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/utils/ERC721RolesAddress.sol#L268) function from the ERC721RolesAddress. -{% endhint %} - -
**Assign multiple roles at once Contract function**

```solidity
/**
* @dev addMultipleUsersToRoles
* Add multiple users to multiple roles
* @param addresses Array of addresses
* @param roles Array of corresponding roles
*/
function addMultipleUsersToRoles(address[] memory addresses, RolesType[] memory roles) external onlyManager {
    require(addresses.length == roles.length && roles.length>0 && roles.length<50, "Invalid array size");
    uint256 i;
    for(i=0; i<roles.length; i++){
        if(roles[i] == RolesType.Manager) _addManager(addresses[i]);
        if(roles[i] == RolesType.DeployERC20) _addToCreateERC20List(addresses[i]);
        if(roles[i] == RolesType.UpdateMetadata) _addToMetadataList(addresses[i]);
        if(roles[i] == RolesType.Store) _addTo725StoreList(addresses[i]);
    }
}
```

### Roles & permissions in data NFT (ERC721) smart contract
The following actions are restricted at the data NFT level; each one is limited to specific roles among **NFT Owner, Manager, ERC20 Deployer, Store Updater, and Metadata Updater**, as enforced by the ERC721 contract:

* Set token URI
* Add manager
* Remove manager
* Clean permissions
* Set base URI
* Set Metadata state
* Set Metadata
* Create new datatoken
* Execute any other smart contract
* Set new key-value in store
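Putting the pieces together, here is a rough programmatic sketch of granting the NFT-level roles above through the template contract with ethers.js. `NFT_ADDRESS` and `ERC721_TEMPLATE_ABI` are placeholders, and each call must be sent from a signer that already holds the required role (the NFT Owner for `addManager`, a Manager for the list functions):

```typescript
import { ethers } from 'ethers'

declare const NFT_ADDRESS: string                            // placeholder: your data NFT
declare const ERC721_TEMPLATE_ABI: ethers.ContractInterface  // placeholder: ERC721Template ABI
declare const ownerSigner: ethers.Signer                     // NFT Owner (also a Manager by default)

async function grantRoles() {
  const dataNft = new ethers.Contract(NFT_ADDRESS, ERC721_TEMPLATE_ABI, ownerSigner)

  await dataNft.addManager('0xNewManager')              // NFT Owner only
  await dataNft.addToMetadataList('0xMetadataUpdater')  // Manager only
  await dataNft.addTo725StoreList('0xStoreUpdater')     // Manager only
  await dataNft.addToCreateERC20List('0xErc20Deployer') // Manager only
}
```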
## Roles-datatokens level
*Figure: Roles at the datatokens level*
- -### Minter - -The Minter has the ability to mint new datatokens, provided the limit has not been exceeded. - -To add/remove a minter, the ERC20 deployer can use the [addMinter](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/templates/ERC20Template.sol#L617)/[removeMinter](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/templates/ERC20Template.sol#L628) functions from the ERC20Template. - -
- -Add/Remove Minter Contract functions - -```solidity -/** -* @dev addMinter -* Only ERC20Deployer (at 721 level) can update. -* There can be multiple minters -* @param _minter new minter address -*/ - -function addMinter(address _minter) external onlyERC20Deployer { - _addMinter(_minter); -} - -/** -* @dev removeMinter -* Only ERC20Deployer (at 721 level) can update. -* There can be multiple minters -* @param _minter minter address to remove -*/ - -function removeMinter(address _minter) external onlyERC20Deployer { - _removeMinter(_minter); -} -``` - -
- -{% embed url="https://app.arcade.software/share/OHlwsPbf29S1PLh03FM7" fullWidth="false" %} - -### Fee Manager - -Finally, we also have a fee manager which has the ability to set a new fee collector — this is the account that will receive the datatokens when a data asset is consumed. If no fee collector account has been set, the **datatokens will be sent by default to the NFT Owner**. - -{% hint style="info" %} -The applicable fees (market and community fees) are automatically deducted from the datatokens that are received. -{% endhint %} - -To add/remove a fee manager, the ERC20 deployer can use the [addPaymentManager](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/templates/ERC20Template.sol#L639)/[removePaymentManager](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/templates/ERC20Template.sol#L653) functions from the ERC20Template. - -
- -Add/Remove Fee Manager Contract functions - -```solidity -/** -* @dev addPaymentManager (can set who's going to collect fee when consuming orders) -* Only ERC20Deployer (at 721 level) can update. -* There can be multiple paymentCollectors -* @param _paymentManager new minter address -*/ -function addPaymentManager(address _paymentManager) external onlyERC20Deployer -{ - _addPaymentManager(_paymentManager); -} - -/** -* @dev removePaymentManager -* Only ERC20Deployer (at 721 level) can update. -* There can be multiple paymentManagers -* @param _paymentManager _paymentManager address to remove -*/ - -function removePaymentManager(address _paymentManager) external onlyERC20Deployer -{ - _removePaymentManager(_paymentManager); -} -``` - -
{% hint style="info" %}
When the NFT ownership is transferred to another wallet address, all the roles and permissions are [cleared](https://github.com/oceanprotocol/contracts/blob/9e29194d910f28a4f0ef17ce6dc8a70741f63309/contracts/templates/ERC721Template.sol#L511).

```solidity
function cleanPermissions() external onlyNFTOwner {
    _cleanPermissions();
    //Make sure that owner still has permissions
    _addManager(ownerOf(1));
}
```
{% endhint %}

### Roles & permissions in datatoken (ERC20) smart contract
The following actions are restricted at the datatoken level; each one is limited to specific roles among **ERC20 Deployer, Minter, NFT Owner, and Fee Manager**, as enforced by the ERC20 contract:

* Create Fixed Rate exchange
* Create Dispenser
* Add minter
* Remove minter
* Add fee manager
* Remove fee manager
* Set data
* Clean permissions
* Mint
* Set fee collector
diff --git a/developers/ddo-specification.md b/developers/ddo-specification.md deleted file mode 100644 index e79eff93e..000000000 --- a/developers/ddo-specification.md +++ /dev/null @@ -1,597 +0,0 @@ ---- -title: DDO -slug: /developers/ddo/ -section: developers -description: >- - Specification of decentralized identifiers for assets in Ocean Protocol using - the DDO standard. ---- - -# DDO Specification - -### DDO Schema - High Level - -The below diagram shows the high-level DDO schema depicting the content of each data structure and the relations between them. - -Please note that some data structures apply only on certain types of services or assets. - -```mermaid ---- -title: DDO High Level Diagram ---- -classDiagram - - class DDO{ - } - - class Metadata{ - } - class Credentials{ - } - - class AlgorithmMetadata["AlgorithmMetadata\n(for algorithm type)"] { - } - - class Container{ - } - class Service{ - } - class ConsumerParameters["Consumer\nParameters"]{ - } - class Compute{ - } -DDO "1" --> "1" Metadata -DDO "1" --> "0..n" Credentials -DDO "1" --> "1..*" Service - -Metadata "1" --> "0..1" AlgorithmMetadata -AlgorithmMetadata "1" --> "1..*" Container -AlgorithmMetadata "1" --> "1..*" ConsumerParameters - -Service "1" --> "0..n" Compute -Service "1" --> "0..n" ConsumerParameters -``` - -### Required Attributes - -A DDO in Ocean has these required attributes: - -
| Attribute | Type | Description |
| --- | --- | --- |
| `@context` | Array of `string` | Contexts used for validation. |
| `id` | `string` | Computed as `sha256(address of ERC721 contract + chainId)`. |
| `version` | `string` | Version information in SemVer notation referring to this DDO spec version, like `4.1.0`. |
| `chainId` | `number` | Stores the chainId of the network the DDO was published to. |
| `nftAddress` | `string` | NFT contract linked to this asset. |
| `metadata` | [Metadata](ddo-specification.md#metadata) | Stores an object describing the asset. |
| `services` | [Services](ddo-specification.md#services) | Stores an array of services defining access to the asset. |
| `credentials` | [Credentials](ddo-specification.md#credentials) | Describes the credentials needed to access a dataset in addition to the `services` definition. |
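As a quick illustration of the `id` rule above, the following sketch derives a DID from the NFT address and chain ID. It mirrors how ocean.js generates DIDs, but treat the checksumming detail as an assumption and verify against the library before relying on it:

```typescript
import { createHash } from 'crypto'
import { getAddress } from 'ethers/lib/utils'

// Sketch of: id = sha256(address of ERC721 contract + chainId)
function generateDid(nftAddress: string, chainId: number): string {
  const checksummed = getAddress(nftAddress) // EIP-55 checksummed address (assumed, as in ocean.js)
  const hash = createHash('sha256')
    .update(checksummed + chainId.toString(10))
    .digest('hex')
  return `did:op:${hash}`
}

console.log(generateDid('0xBB1081DbF3227bbB233Db68f7117114baBb43656', 137))
```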
- -Full Enhanced DDO Example - -{% code overflow="wrap" %} -```json -{ - "@context": ["https://w3id.org/did/v1"], - "id": "did:op:ACce67694eD2848dd683c651Dab7Af823b7dd123", - "version": "4.1.0", - "chainId": 1, - "nftAddress": "0x123", - "metadata": { - "created": "2020-11-15T12:27:48Z", - "updated": "2021-05-17T21:58:02Z", - "description": "Sample description", - "name": "Sample asset", - "type": "dataset", - "author": "OPF", - "license": "https://market.oceanprotocol.com/terms" - }, - "services": [ - { - "id": "1", - "type": "access", - "files": "0x044736da6dae39889ff570c34540f24e5e084f4e5bd81eff3691b729c2dd1465ae8292fc721e9d4b1f10f56ce12036c9d149a4dab454b0795bd3ef8b7722c6001e0becdad5caeb2005859642284ef6a546c7ed76f8b350480691f0f6c6dfdda6c1e4d50ee90e83ce3cb3ca0a1a5a2544e10daa6637893f4276bb8d7301eb35306ece50f61ca34dcab550b48181ec81673953d4eaa4b5f19a45c0e9db4cd9729696f16dd05e0edb460623c843a263291ebe757c1eb3435bb529cc19023e0f49db66ef781ca692655992ea2ca7351ac2882bf340c9d9cb523b0cbcd483731dc03f6251597856afa9a68a1e0da698cfc8e81824a69d92b108023666ee35de4a229ad7e1cfa9be9946db2d909735", - "name": "Download service", - "description": "Download service", - "datatokenAddress": "0x123", - "serviceEndpoint": "https://myprovider.com", - "timeout": 0, - "consumerParameters": [ - { - "name": "surname", - "type": "text", - "label": "Name", - "required": true, - "default": "NoName", - "description": "Please fill your name" - }, - { - "name": "age", - "type": "number", - "label": "Age", - "required": false, - "default": 0, - "description": "Please fill your age" - } - ] - }, - { - "id": "2", - "type": "compute", - "files": "0x044736da6dae39889ff570c34540f24e5e084f4e5bd81eff3691b729c2dd1465ae8292fc721e9d4b1f10f56ce12036c9d149a4dab454b0795bd3ef8b7722c6001e0becdad5caeb2005859642284ef6a546c7ed76f8b350480691f0f6c6dfdda6c1e4d50ee90e83ce3cb3ca0a1a5a2544e10daa6637893f4276bb8d7301eb35306ece50f61ca34dcab550b48181ec81673953d4eaa4b5f19a45c0e9db4cd9729696f16dd05e0edb460623c843a263291ebe757c1eb3435bb529cc19023e0f49db66ef781ca692655992ea2ca7351ac2882bf340c9d9cb523b0cbcd483731dc03f6251597856afa9a68a1e0da698cfc8e81824a69d92b108023666ee35de4a229ad7e1cfa9be9946db2d909735", - "name": "Compute service", - "description": "Compute service", - "datatokenAddress": "0x124", - "serviceEndpoint": "https://myprovider.com", - "timeout": 3600, - "compute": { - "allowRawAlgorithm": false, - "allowNetworkAccess": true, - "publisherTrustedAlgorithmPublishers": ["0x234", "0x235"], - "publisherTrustedAlgorithms": [ - { - "did": "did:op:123", - "filesChecksum": "100", - "containerSectionChecksum": "200" - }, - { - "did": "did:op:124", - "filesChecksum": "110", - "containerSectionChecksum": "210" - } - ] - } - } - ], - "credentials": { - "allow": [ - { - "type": "address", - "values": ["0x123", "0x456"] - } - ], - "deny": [ - { - "type": "address", - "values": ["0x2222", "0x333"] - } - ] - }, - - "nft": { - "address": "0x123", - "name": "Ocean Protocol Asset", - "symbol": "OCEAN-A", - "owner": "0x0000000", - "state": 0, - "created": "2000-10-31T01:30:00", - "tokenURI": "xxx" - }, - - "datatokens": [ - { - "address": "0x000000", - "name": "Datatoken 1", - "symbol": "DT-1", - "serviceId": "1" - }, - { - "address": "0x000001", - "name": "Datatoken 2", - "symbol": "DT-2", - "serviceId": "2" - } - ], - - "event": { - "tx": "0x8d127de58509be5dfac600792ad24cc9164921571d168bff2f123c7f1cb4b11c", - "block": 12831214, - "from": "0xAcca11dbeD4F863Bb3bC2336D3CE5BAC52aa1f83", - "contract": "0x1a4b70d8c9DcA47cD6D0Fb3c52BB8634CA1C0Fdf", - "datetime": 
"2000-10-31T01:30:00" - }, - - "purgatory": { - "state": false - }, - - "stats": { - "orders": 4 - } -} -``` -{% endcode %} - -
- -### Metadata - -This object holds information describing the actual asset. - -
| Attribute | Type | Description |
| --- | --- | --- |
| `created` | ISO date/time `string` | Contains the date of the creation of the dataset content in ISO 8601 format, preferably with timezone designators, e.g. `2000-10-31T01:30:00Z`. |
| `updated` | ISO date/time `string` | Contains the date of the last update of the dataset content in ISO 8601 format, preferably with timezone designators, e.g. `2000-10-31T01:30:00Z`. |
| `description`\* | `string` | Details of what the resource is. For a dataset, this attribute explains what the data represents and what it can be used for. |
| `copyrightHolder` | `string` | The party holding the legal copyright. Empty by default. |
| `name`\* | `string` | Descriptive name or title of the asset. |
| `type`\* | `string` | Asset type. Includes `"dataset"` (e.g. a CSV file) and `"algorithm"` (e.g. a Python script). Each type needs a different subset of metadata attributes. |
| `author`\* | `string` | Name of the entity generating this data (e.g. Tfl, Disney Corp, etc.). |
| `license`\* | `string` | Short name referencing the license of the asset (e.g. Public Domain, CC-0, CC-BY, No License Specified, etc.). If it's not specified, the value "No License Specified" will be added. |
| `links` | Array of `string` | Mapping of URL strings for data samples, or links to find out more information. Links may be to either a URL or another asset. |
| `contentLanguage` | `string` | The language of the content. Use one of the language codes from the IETF BCP 47 standard. |
| `tags` | Array of `string` | Array of keywords or tags used to describe this content. Empty by default. |
| `categories` | Array of `string` | Array of categories associated with the asset. Note: it is recommended to use `tags` instead of this. |
| `additionalInformation` | Object | Stores additional information; customizable by the publisher. |
| `algorithm`\*\* | Algorithm Metadata | Information about assets of type `algorithm`. |

\* Required

\*\* Required for algorithms only
- -Metadata Example - -```json -{ - "metadata": { - "created": "2020-11-15T12:27:48Z", - "updated": "2021-05-17T21:58:02Z", - "description": "Sample description", - "name": "Sample asset", - "type": "dataset", - "author": "OPF", - "license": "https://market.oceanprotocol.com/terms" - } -} -``` - -
- -#### Services - -Services define the access for an asset, and each service is represented by its respective datatoken. - -An asset should have at least one service to be actually accessible and can have as many services which make sense for a specific use case. - -
| Attribute | Type | Description |
| --- | --- | --- |
| `id`\* | `string` | Unique ID |
| `type`\* | `string` | Type of service: `access`, `compute`, `wss`, etc. |
| `name` | `string` | Service-friendly name |
| `description` | `string` | Service description |
| `datatokenAddress`\* | `string` | Datatoken address |
| `serviceEndpoint`\* | `string` | Provider URL (schema + host) |
| `files`\* | Files | Encrypted file URLs. |
| `timeout`\* | `number` | Describes how long the service can be used after consumption is initiated. A timeout of 0 represents no time limit. Expressed in seconds. |
| `compute`\*\* | Compute | If the service is of type `compute`, holds information about the compute-related privacy settings & resources. |
| `consumerParameters` | Consumer Parameters | An object that defines required consumer input before consuming the asset. |
| `additionalInformation` | Object | Stores additional information; customizable by the publisher. |

\* Required

\*\* Required for compute assets only

#### Files

The `files` field is returned as a `string` which holds the encrypted file URLs.
- -Files Example - -{% code overflow="wrap" %} -```json -{ - "files": "0x044736da6dae39889ff570c34540f24e5e084f4e5bd81eff3691b729c2dd1465ae8292fc721e9d4b1f10f56ce12036c9d149a4dab454b0795bd3ef8b7722c6001e0becdad5caeb2005859642284ef6a546c7ed76f8b350480691f0f6c6dfdda6c1e4d50ee90e83ce3cb3ca0a1a5a2544e10daa6637893f4276bb8d7301eb35306ece50f61ca34dcab550b48181ec81673953d4eaa4b5f19a45c0e9db4cd9729696f16dd05e0edb460623c843a263291ebe757c1eb3435bb529cc19023e0f49db66ef781ca692655992ea2ca7351ac2882bf340c9d9cb523b0cbcd483731dc03f6251597856afa9a68a1e0da698cfc8e81824a69d92b108023666ee35de4a229ad7e1cfa9be9946db2d909735" -} -``` -{% endcode %} - -
- -#### Credentials - -By default, a consumer can access a resource if they have 1 datatoken. _Credentials_ allow the publisher to optionally specify more fine-grained permissions. - -Consider a medical data use case, where only a credentialed EU researcher can legally access a given dataset. Ocean supports this as follows: a consumer can only access the resource if they have 1 datatoken _and_ one of the specified `"allow"` credentials. - -This is like going to an R-rated movie, where you can only get in if you show both your movie ticket (datatoken) _and_ some identification showing you're old enough (credential). - -Only credentials that can be proven are supported. This includes Ethereum public addresses and in the future [W3C Verifiable Credentials](https://www.w3.org/TR/vc-data-model/) and more. - -Ocean also supports `deny` credentials: if a consumer has any of these credentials, they can not access the resource. - -Here's an example object with both `allow` and `deny` entries: - -
- -Credentials Example - -```json -{ - "credentials": { - "allow": [ - { - "type": "address", - "values": ["0x123", "0x456"] - } - ], - "deny": [ - { - "type": "address", - "values": ["0x2222", "0x333"] - } - ] - } -} -``` - -
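The sketch below captures the semantics just described for address-based credentials: any matching `deny` entry blocks access, and when an `allow` list is present the consumer must appear in it. The types are simplified stand-ins for the Credentials object above:

```typescript
interface CredentialEntry { type: string; values: string[] }
interface Credentials { allow?: CredentialEntry[]; deny?: CredentialEntry[] }

function mayAccess(credentials: Credentials | undefined, consumer: string): boolean {
  const matches = (entries?: CredentialEntry[]) =>
    (entries ?? []).some(e => e.type === 'address' &&
      e.values.map(v => v.toLowerCase()).includes(consumer.toLowerCase()))

  if (matches(credentials?.deny)) return false                        // any deny credential blocks access
  if (credentials?.allow?.length) return matches(credentials.allow)   // allow list, when present, is exclusive
  return true                                                         // no allow list: the datatoken alone suffices
}
```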
- -#### DDO Checksum - -In order to ensure the integrity of the DDO, a checksum is computed for each DDO: - -```js -const checksum = sha256(JSON.stringify(ddo)); -``` - -The checksum hash is used when publishing/updating metadata using the `setMetaData` function in the ERC721 contract, and is stored in the event generated by the ERC721 contract. - -
- -MetadataCreated and MetadataUpdated smart contract events - -```solidity -event MetadataCreated( - address indexed createdBy, - uint8 state, - string decryptorUrl, - bytes flags, - bytes data, - bytes metaDataHash, - uint256 timestamp, - uint256 blockNumber -); - -event MetadataUpdated( - address indexed updatedBy, - uint8 state, - string decryptorUrl, - bytes flags, - bytes data, - bytes metaDataHash, - uint256 timestamp, - uint256 blockNumber -); -``` - -
- -_Aquarius_ should always verify the checksum after data is decrypted via a _Provider_ API call. - -#### State - -Each asset has a state, which is held by the NFT contract. The possible states are: - -
| State | Description | Discoverable in Ocean Market | Ordering allowed | Listed under profile |
| :---: | --- | :---: | :---: | :---: |
| `0` | Active | Yes | Yes | Yes |
| `1` | End-of-life | Yes | No | No |
| `2` | Deprecated (by another asset) | No | No | No |
| `3` | Revoked by publisher | No | No | No |
| `4` | Ordering is temporarily disabled | Yes | No | Yes |
| `5` | Asset unlisted | No | Yes | Yes |
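Expressed in code, the table above maps each on-chain state value to flags a marketplace UI can branch on. A minimal sketch (the enum names are illustrative; the numeric values and flags come straight from the table):

```typescript
enum AssetState { Active = 0, EndOfLife = 1, Deprecated = 2, Revoked = 3, OrderingDisabled = 4, Unlisted = 5 }

const STATE_FLAGS: Record<AssetState, { discoverable: boolean; orderable: boolean; listedUnderProfile: boolean }> = {
  [AssetState.Active]:           { discoverable: true,  orderable: true,  listedUnderProfile: true },
  [AssetState.EndOfLife]:        { discoverable: true,  orderable: false, listedUnderProfile: false },
  [AssetState.Deprecated]:       { discoverable: false, orderable: false, listedUnderProfile: false },
  [AssetState.Revoked]:          { discoverable: false, orderable: false, listedUnderProfile: false },
  [AssetState.OrderingDisabled]: { discoverable: true,  orderable: false, listedUnderProfile: true },
  [AssetState.Unlisted]:         { discoverable: false, orderable: true,  listedUnderProfile: true }
}
```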
- -States details: - -1. **Active**: Assets in the "Active" state are fully functional and available for discovery in Ocean Market, and other components. Users can search for, view, and interact with these assets. Ordering is allowed, which means users can place orders to purchase or access the asset's services. -2. **End-of-life**: Assets in the "End-of-life" state remain discoverable but cannot be ordered. This state indicates that the assets are usually deprecated or outdated, and they are no longer actively promoted or maintained. -3. **Deprecated (by another asset)**: This state indicates that another asset has deprecated the current asset. Deprecated assets are not discoverable, and ordering is not allowed. Similar to the "End-of-life" state, deprecated assets are not listed under the owner's profile. -4. **Revoked by publisher**: When an asset is revoked by its publisher, it means that the publisher has explicitly revoked access or ownership rights to the asset. Revoked assets are not discoverable, and ordering is not allowed. -5. **Ordering is temporarily disabled**: Assets in this state are still discoverable, but ordering functionality is temporarily disabled. Users can view the asset and gather information, but they cannot place orders at that moment. However, these assets are still listed under the owner's profile. -6. **Asset unlisted**: Assets in the "Asset unlisted" state are not discoverable. However, users can still place orders for these assets, making them accessible. Unlisted assets are listed under the owner's profile, allowing users to view and access them. - -### Aquarius Enhanced DDO Response - -The following fields are added by _Aquarius_ in its DDO response for convenience reasons, where an asset returned by _Aquarius_ inherits the DDO fields stored on-chain. - -These additional fields are never stored on-chain and are never taken into consideration when [hashing the DDO](ddo-specification.md#ddo-checksum). - -#### NFT - -The `nft` object contains information about the ERC721 NFT contract which represents the intellectual property of the publisher. - -
| Attribute | Type | Description |
| --- | --- | --- |
| `address` | `string` | Contract address of the deployed ERC721 NFT contract. |
| `name` | `string` | Name of NFT set in contract. |
| `symbol` | `string` | Symbol of NFT set in contract. |
| `owner` | `string` | ETH account address of the NFT owner. |
| `state` | `number` | State of the asset reflecting the NFT contract value. See [State](ddo-specification.md#state). |
| `created` | ISO date/time `string` | Contains the date of NFT creation. |
| `tokenURI` | `string` | tokenURI |
- -NFT Object Example - -```json -{ - "nft": { - "address": "0x000000", - "name": "Ocean Protocol Asset", - "symbol": "OCEAN-A", - "owner": "0x0000000", - "state": 0, - "created": "2000-10-31T01:30:00Z" - } -} -``` - -
- -#### Datatokens - -The `datatokens` array contains information about the ERC20 datatokens attached to [asset services](ddo-specification.md#services). - -
| Attribute | Type | Description |
| --- | --- | --- |
| `address` | `string` | Contract address of the deployed ERC20 contract. |
| `name` | `string` | Name of the datatoken set in contract. |
| `symbol` | `string` | Symbol of the datatoken set in contract. |
| `serviceId` | `string` | ID of the service the datatoken is attached to. |
- -Datatokens Array Example - -```json -{ - "datatokens": [ - { - "address": "0x000000", - "name": "Datatoken 1", - "symbol": "DT-1", - "serviceId": "1" - }, - { - "address": "0x000001", - "name": "Datatoken 2", - "symbol": "DT-2", - "serviceId": "2" - } - ] -} -``` - -
- -#### Event - -The `event` section contains information about the last transaction that created or updated the DDO. - -
- -Event Example - -{% code overflow="wrap" %} -```json -{ - "event": { - "tx": "0x8d127de58509be5dfac600792ad24cc9164921571d168bff2f123c7f1cb4b11c", - "block": 12831214, - "from": "0xAcca11dbeD4F863Bb3bC2336D3CE5BAC52aa1f83", - "contract": "0x1a4b70d8c9DcA47cD6D0Fb3c52BB8634CA1C0Fdf", - "datetime": "2000-10-31T01:30:00" - } -} -``` -{% endcode %} - -
- -#### Purgatory - -Contains information about an asset's purgatory status defined in [`list-purgatory`](https://github.com/oceanprotocol/list-purgatory). Marketplace interfaces are encouraged to prevent certain user actions like adding liquidity on assets in purgatory. - -
| Attribute | Type | Description |
| --- | --- | --- |
| `state` | `boolean` | If `true`, asset is in purgatory. |
| `reason` | `string` | If asset is in purgatory, contains the reason for being there as defined in list-purgatory. |
- -Purgatory Example - -```json -{ - "purgatory": { - "state": true, - "reason": "Copyright violation" - } - -} -``` - -```json -{ - "purgatory": { - "state": false - } -} -``` - -
- -#### Statistics - -The `stats` section contains different statistics fields. - -
| Attribute | Type | Description |
| --- | --- | --- |
| `orders` | `number` | How often an asset was ordered, meaning how often it was either downloaded or used as part of a compute job. |
- -Statistics Example - -```json -{ - "stats": { - "orders": 4 - } -} -``` - -
- -### Compute to data - -For algorithms and datasets that are used for compute to data, there are additional fields and objects within the DDO structure that you need to consider. These include: - -* `compute` attributes -* `publisherTrustedAlgorithms` -* `consumerParameters` - -Details for each of these are explained on the [Compute Options page](compute-to-data/compute-options.md). - -### DDO Schema - Detailed - -The below diagram shows the detailed DDO schema depicting the content of each data structure and the relations between them. - -Please note that some data structures apply only on certain types of services or assets. - -```mermaid ---- -title: DDO Detailed Diagram ---- -classDiagram - - class DDO{ - +@context - +id - +version - +chainId - +nftAddress - +Metadata - +Credentials - +Service - } - - class Metadata{ - +created - +updated - +description - +name - +type ["dataset"/"algorithm"] - +author - +license - +tags - +links - +contentLanguage - +categories - +copyrightHolder - +additionalInformation - +AlgorithmMetadata [for "algorithm" type] - } - class Credentials{ - +allow - +deny - } - - class AlgorithmMetadata["AlgorithmMetadata (for algorithm)"] { - +language - +version - +ConsumerParameters - +Container - } - - class Container{ - +entrypoint - +image - +tag - +checksum - } - class Service{ - +id - +type ["access"/"compute"] - +files - +name - +description - +datatokenAddress - +serviceEndpoint - +timeout - +additionalInformation - +ConsumerParameters - +Compute - } - class ConsumerParameters{ - +type - +name - +label - +required - +description - +default - +options - } - class Compute{ - +publisherTrustedAlgorithms - +publisherTrustedAlgorithmPublishers - } -DDO "1" --> "1" Metadata -DDO "1" --> "1..n" Service -DDO "1" --> "*" Credentials - - -Metadata "1" --> "0..1" AlgorithmMetadata -AlgorithmMetadata "1" --> "1..*" Container -AlgorithmMetadata "1" --> "1..*" ConsumerParameters - -Service "1" --> "0..n" Compute -Service "1" --> "0..n" ConsumerParameters -``` - diff --git a/developers/ddo.js/README.md b/developers/ddo.js/README.md deleted file mode 100644 index e6eebab25..000000000 --- a/developers/ddo.js/README.md +++ /dev/null @@ -1,27 +0,0 @@ ---- -description: >- - Ocean Protocol's JavaScript library to manipulate with DDO and Asset fields and to validate DDO structures depending on version. ---- - -# DDO.js - -Welcome to the DDO.js! Your utility library for working with DDOs and Assets like a pro. 🚀 - -The DDO.js offers a wide range of functionalities, enabling you to: - -* [**Instantiate DDO**](instantiate-ddo.md) 📤 by `DDOManager` depending on version. -* [**Retrieve**](retrieve-fields.md) 📥 DDO data together with Asset fields using defined helper methods. -* [**Validate DDO**](validate.md) 📤 using SHACL schemas. -* [**Edit**](edit-fields.md) ✏️ existing fields of DDO and Asset. - -## Installation - -It is available as `npm` package, therefore to install in your `js` project, simply run in the console: - -```bash -npm install @oceanprotocol/ddo-js -``` - -## Key Information - -Let's dive into the DDO.js's capabilities together! If you're ready to explore each functionality in detail, simply go through the next pages. 
\ No newline at end of file
diff --git a/developers/ddo.js/edit-fields.md b/developers/ddo.js/edit-fields.md
deleted file mode 100644
index ef9a7fa65..000000000
--- a/developers/ddo.js/edit-fields.md
+++ /dev/null
@@ -1,41 +0,0 @@
# Edit DDO Fields ✏️

To edit fields in the DDO structure, create a DDO instance through `DDOManager` and call its `updateFields` method. The method exists for all DDO types, but it targets specific DDO fields according to the DDO's version.

**NOTE**: Some fields cannot be updated on certain DDO types, so check the restrictions before updating.

For example, updating the `services` key is not supported on a `deprecatedDDO`, because a `deprecatedDDO` is not supposed to store `services` information. It is designed to support only: `id`, `nftAddress`, `chainId`, `indexedMetadata.nft.state`.

The supported fields for updates are:

```javascript
export interface UpdateFields {
  id?: string;
  nftAddress?: string;
  chainId?: number;
  datatokens?: AssetDatatoken[];
  indexedMetadata?: IndexedMetadata;
  services?: ServiceV4[] | ServiceV5[];
  issuer?: string;
  proof?: Proof;
}
```

## Usage of Update Fields Function

Now let's use the [DDO V4 example](./instantiate-ddo.md#usage-examples), `DDOExampleV4`, in the following JavaScript code, assuming `@oceanprotocol/ddo-js` has already been installed as a dependency:

```javascript
const { DDOManager } = require('@oceanprotocol/ddo-js');

const ddoInstance = DDOManager.getDDOClass(DDOExampleV4);
const nftAddressToUpdate = '0xfF4AE9869Cafb5Ff725f962F3Bbc22Fb303A8aD8';
ddoInstance.updateFields({ nftAddress: nftAddressToUpdate }); // multiple fields can be updated in one call
// The same script can be applied to the DDO V5 and deprecated DDO examples from the `Instantiate DDO` section.
```

**Execute script**

```bash
node update-ddo-fields.js
```
\ No newline at end of file
diff --git a/developers/ddo.js/instantiate-ddo.md b/developers/ddo.js/instantiate-ddo.md
deleted file mode 100644
index 53ea22169..000000000
--- a/developers/ddo.js/instantiate-ddo.md
+++ /dev/null
@@ -1,267 +0,0 @@
# Instantiate a DDO 📥

Within the `DDO.js` library, DDO instantiation is done through the `static` class `DDOManager`, which returns the dedicated DDO class according to the DDO's version.

Supported versions are: `4.1.0`, `4.3.0`, `4.5.0`, `4.7.0`, `5.0.0`, `deprecated`.

`DDOManager` has three child classes:
- `V4DDO` for `4.1.0`, `4.3.0`, `4.5.0`, `4.7.0`
- `V5DDO` for `5.0.0`, which contains the credential subject used for enterprise purposes
- `DeprecatedDDO` for `deprecated`, which represents a shorter form of DDO due to the `deprecated` state of the NFT field (the NFT state value is different from `0`).
- -## Usage Examples - -DDO V4 example: - -``` -export const DDOExampleV4 = { - '@context': ['https://w3id.org/did/v1'], - id: 'did:op:fa0e8fa9550e8eb13392d6eeb9ba9f8111801b332c8d2345b350b3bc66b379d5', - nftAddress: '0xBB1081DbF3227bbB233Db68f7117114baBb43656', - version: '4.1.0', - chainId: 137, - metadata: { - created: '2022-12-30T08:40:06Z', - updated: '2022-12-30T08:40:06Z', - type: 'dataset' as 'dataset' | 'algorithm', - name: 'DEX volume in details', - description: - 'Volume traded and locked of Decentralized Exchanges (Uniswap, Sushiswap, Curve, Balancer, ...), daily in details', - tags: ['index', 'defi', 'tvl'], - author: 'DEX', - license: 'https://market.oceanprotocol.com/terms', - additionalInformation: { - termsAndConditions: true - } - }, - services: [ - { - id: '24654b91482a3351050510ff72694d88edae803cf31a5da993da963ba0087648', - type: 'access', - files: - '0x04beba2f90639ff7559618160df5a81729904022578e6bd5f60c3bebfe5cb2aca59d7e062228a98ed88c4582c290045f47cdf3824d1c8bb25b46b8e10eb9dc0763ce82af826fd347517011855ce1396ac94af8cc6f29b78012b679cb78a594d9064b6f6f4a8229889f0bb53262b6ab62b56fa5c608ea126ba228dd0f87290c0628fe07023416280c067beb01a42d0a4df95fdb5a857f1f59b3e6a13b0ae4619080369ba5bede6c7beff6afc7fc31c71ed8100e7817d965d1f8f1abfaace3c01f0bd5d0127df308175941088a1f120a4d9a0290be590d65a7b4de01ae1efe24286d7a06fadeeafba83b5eab25b90961abf1f24796991f06de6c8e1c2357fbfb31f484a94e87e7dba80a489e12fffa1adde89f113b4c8c4c8877914911a008dbed0a86bdd9d14598c35894395fb4a8ea764ed2f9459f6acadac66e695b3715536338f6cdee616b721b0130f726c78ca60ec02fc86c', - datatokenAddress: '0xfF4AE9869Cafb5Ff725f962F3Bbc22Fb303A8aD8', - serviceEndpoint: 'https://v4.provider.polygon.oceanprotocol.com', - timeout: 604800 - } - ], - indexedMetadata: { - event: { - txid: '0xceb617f13a8db82ba9ef24efcee72e90d162915fd702f07ac6012427c31ac952', - block: 39326976, - from: '0x0DB823218e337a6817e6D7740eb17635DEAdafAF', - contract: '0xBB1081DbF3227bbB233Db68f7117114baBb43656', - datetime: '2023-02-15T16:42:22' - }, - nft: { - address: '0xBB1081DbF3227bbB233Db68f7117114baBb43656', - name: 'Ocean Data NFT', - symbol: 'OCEAN-NFT', - state: 0, - tokenURI: - 'data:application/json;base64,eyJuYW1lIjoiT2NlYW4gRGF0YSBORlQiLCJzeW1ib2wiOiJPQ0VBTi1ORlQiLCJkZXNjcmlwdGlvbiI6IlRoaXMgTkZUIHJlcHJlc2VudHMgYW4gYXNzZXQgaW4gdGhlIE9jZWFuIFByb3RvY29sIHY0IGVjb3N5c3RlbS5cblxuVmlldyBvbiBPY2VhbiBNYXJrZXQ6IGh0dHBzOi8vbWFya2V0Lm9jZWFucHJvdG9jb2wuY29tL2Fzc2V0L2RpZDpvcDpmYTBlOGZhOTU1MGU4ZWIxMzM5MmQ2ZWViOWJhOWY4MTExODAxYjMzMmM4ZDIzNDViMzUwYjNiYzY2YjM3OWQ1IiwiZXh0ZXJuYWxfdXJsIjoiaHR0cHM6Ly9tYXJrZXQub2NlYW5wcm90b2NvbC5jb20vYXNzZXQvZGlkOm9wOmZhMGU4ZmE5NTUwZThlYjEzMzkyZDZlZWI5YmE5ZjgxMTE4MDFiMzMyYzhkMjM0NWIzNTBiM2JjNjZiMzc5ZDUiLCJiYWNrZ3JvdW5kX2NvbG9yIjoiMTQxNDE0IiwiaW1hZ2VfZGF0YSI6ImRhdGE6aW1hZ2Uvc3ZnK3htbCwlM0Nzdmcgdmlld0JveD0nMCAwIDk5IDk5JyBmaWxsPSd1bmRlZmluZWQnIHhtbG5zPSdodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyclM0UlM0NwYXRoIGZpbGw9JyUyM2ZmNDA5Mjc3JyBkPSdNMCw5OUwwLDIzQzEzLDIwIDI3LDE4IDM3LDE4QzQ2LDE3IDUyLDE4IDYyLDIwQzcxLDIxIDg1LDI0IDk5LDI3TDk5LDk5WicvJTNFJTNDcGF0aCBmaWxsPSclMjNmZjQwOTJiYicgZD0nTTAsOTlMMCw1MkMxMSw0OCAyMyw0NCAzMyw0NEM0Miw0MyA1MCw0NSA2MSw0OEM3MSw1MCA4NSw1MiA5OSw1NUw5OSw5OVonJTNFJTNDL3BhdGglM0UlM0NwYXRoIGZpbGw9JyUyM2ZmNDA5MmZmJyBkPSdNMCw5OUwwLDcyQzgsNzMgMTcsNzUgMjksNzZDNDAsNzYgNTMsNzYgNjYsNzdDNzgsNzcgODgsNzcgOTksNzhMOTksOTlaJyUzRSUzQy9wYXRoJTNFJTNDL3N2ZyUzRSJ9', - owner: '0x0DB823218e337a6817e6D7740eb17635DEAdafAF', - created: '2022-12-30T08:40:43' - }, - purgatory: { - state: false - }, - stats: [ - { - orders: 36, - price: { - value: 1000, - 
tokenAddress: '0x282d8efCe846A88B159800bd4130ad77443Fa1A1', - tokenSymbol: 'mOCEAN' - } - } - ] - }, - datatokens: [ - { - address: '0xfF4AE9869Cafb5Ff725f962F3Bbc22Fb303A8aD8', - name: 'Boorish Fish Token', - symbol: 'BOOFIS-23', - serviceId: - '24654b91482a3351050510ff72694d88edae803cf31a5da993da963ba0087648' - } - ], - accessDetails: { - templateId: 2, - publisherMarketOrderFee: '0', - type: 'fixed', - addressOrId: - '0xd829c22afa50a25ad965e2c2f3d89940a6a27dbfabc2631964ea882883bc7d11', - price: '1000', - isPurchasable: true, - baseToken: { - address: '0x282d8efce846a88b159800bd4130ad77443fa1a1', - name: 'Ocean Token (PoS)', - symbol: 'mOCEAN', - decimals: 18 - }, - datatoken: { - address: '0xff4ae9869cafb5ff725f962f3bbc22fb303a8ad8', - name: 'Boorish Fish Token', - symbol: 'BOOFIS-23' - } - }, - credentials: null -}; -``` - -DDO V5 example: - -``` -export const DDOExampleV5 = { - '@context': ['https://www.w3.org/ns/credentials/v2'], - version: '5.0.0', - id: 'did:ope:fa0e8fa9550e8eb13392d6eeb9ba9f8111801b332c8d2345b350b3bc66b379d5', - credentialSubject: { - id: 'did:ope:fa0e8fa9550e8eb13392d6eeb9ba9f8111801b332c8d2345b350b3bc66b379d5', - metadata: { - created: '2024-10-03T14:35:20Z', - updated: '2024-10-03T14:35:20Z', - type: 'dataset', - name: 'DDO 5.0.0 Asset', - description: { - '@value': 'New asset published using ocean CLI tool with version 5.0.0', - '@language': 'en', - '@direction': 'ltr' - }, - copyrightHolder: 'Your Copyright Holder', - providedBy: 'Your Organization', - author: 'oceanprotocol', - license: { - name: 'https://market.oceanprotocol.com/terms' - }, - tags: ['version-5', 'new-schema'], - categories: ['data', 'ocean-protocol'], - additionalInformation: { - termsAndConditions: true - } - }, - services: [ - { - id: 'ccb398c50d6abd5b456e8d7242bd856a1767a890b537c2f8c10ba8b8a10e6025', - type: 'access', - name: 'Access Service', - description: { - '@value': 'Service for accessing the dataset', - '@language': 'en', - '@direction': 'ltr' - }, - datatokenAddress: '0xff4ae9869cafb5ff725f962f3bbc22fb303a8ad8', - nftAddress: '0xBB1081DbF3227bbB233Db68f7117114baBb43656', - serviceEndpoint: 'https://v4.provider.oceanprotocol.com', - files: - 'https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-abstract10.xml.gz-rss.xml', - timeout: 86400, - compute: { - allowRawAlgorithm: false, - allowNetworkAccess: true - }, - state: 0, - credentials: [{}] - } - ], - credentials: { - allow: { - request_credentials: [ - { - type: 'VerifiableId', - format: 'jwt_vc_json' - }, - { - type: 'ProofOfResidence', - format: 'jwt_vc_json' - }, - { - type: 'OpenBadgeCredential', - format: 'jwt_vc_json', - policies: ['signature'] - } - ] - } - }, - indexedMetadata: { - event: { - txid: '0xceb617f13a8db82ba9ef24efcee72e90d162915fd702f07ac6012427c31ac952', - block: 39326976, - from: '0x0DB823218e337a6817e6D7740eb17635DEAdafAF', - contract: '0xBB1081DbF3227bbB233Db68f7117114baBb43656', - datetime: '2023-02-15T16:42:22' - }, - nft: { - address: '0xBB1081DbF3227bbB233Db68f7117114baBb43656', - name: 'Ocean Data NFT', - symbol: 'OCEAN-NFT', - state: 0, - tokenURI: - 
'data:application/json;base64,eyJuYW1lIjoiT2NlYW4gRGF0YSBORlQiLCJzeW1ib2wiOiJPQ0VBTi1ORlQiLCJkZXNjcmlwdGlvbiI6IlRoaXMgTkZUIHJlcHJlc2VudHMgYW4gYXNzZXQgaW4gdGhlIE9jZWFuIFByb3RvY29sIHY0IGVjb3N5c3RlbS5cblxuVmlldyBvbiBPY2VhbiBNYXJrZXQ6IGh0dHBzOi8vbWFya2V0Lm9jZWFucHJvdG9jb2wuY29tL2Fzc2V0L2RpZDpvcDpmYTBlOGZhOTU1MGU4ZWIxMzM5MmQ2ZWViOWJhOWY4MTExODAxYjMzMmM4ZDIzNDViMzUwYjNiYzY2YjM3OWQ1IiwiZXh0ZXJuYWxfdXJsIjoiaHR0cHM6Ly9tYXJrZXQub2NlYW5wcm90b2NvbC5jb20vYXNzZXQvZGlkOm9wOmZhMGU4ZmE5NTUwZThlYjEzMzkyZDZlZWI5YmE5ZjgxMTE4MDFiMzMyYzhkMjM0NWIzNTBiM2JjNjZiMzc5ZDUiLCJiYWNrZ3JvdW5kX2NvbG9yIjoiMTQxNDE0IiwiaW1hZ2VfZGF0YSI6ImRhdGE6aW1hZ2Uvc3ZnK3htbCwlM0Nzdmcgdmlld0JveD0nMCAwIDk5IDk5JyBmaWxsPSd1bmRlZmluZWQnIHhtbG5zPSdodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyclM0UlM0NwYXRoIGZpbGw9JyUyM2ZmNDA5Mjc3JyBkPSdNMCw5OUwwLDIzQzEzLDIwIDI3LDE4IDM3LDE4QzQ2LDE3IDUyLDE4IDYyLDIwQzcxLDIxIDg1LDI0IDk5LDI3TDk5LDk5WicvJTNFJTNDcGF0aCBmaWxsPSclMjNmZjQwOTJiYicgZD0nTTAsOTlMMCw1MkMxMSw0OCAyMyw0NCAzMyw0NEM0Miw0MyA1MCw0NSA2MSw0OEM3MSw1MCA4NSw1MiA5OSw1NUw5OSw5OVonJTNFJTNDL3BhdGglM0UlM0NwYXRoIGZpbGw9JyUyM2ZmNDA5MmZmJyBkPSdNMCw5OUwwLDcyQzgsNzMgMTcsNzUgMjksNzZDNDAsNzYgNTMsNzYgNjYsNzdDNzgsNzcgODgsNzcgOTksNzhMOTksOTlaJyUzRSUzQy9wYXRoJTNFJTNDL3N2ZyUzRSJ9', - owner: '0x0DB823218e337a6817e6D7740eb17635DEAdafAF', - created: '2022-12-30T08:40:43' - }, - purgatory: { - state: false - }, - stats: [ - { - orders: 36, - price: { - value: 1000, - tokenAddress: '0x282d8efCe846A88B159800bd4130ad77443Fa1A1', - tokenSymbol: 'mOCEAN' - } - } - ] - }, - datatokens: [ - { - address: '0xfF4AE9869Cafb5Ff725f962F3Bbc22Fb303A8aD8', - name: 'Boorish Fish Token', - symbol: 'BOOFIS-23', - serviceId: - '24654b91482a3351050510ff72694d88edae803cf31a5da993da963ba0087648' - } - ], - chainId: 137, - nftAddress: '0xBB1081DbF3227bbB233Db68f7117114baBb43656' - }, - issuer: 'did:op:issuer-did', - type: ['VerifiableCredential'], - additionalDdos: [{ type: '', data: '' }] -}; - -``` - -Deprecated DDO Example: - -``` -export const deprecatedDDO = { - id: 'did:op:fa0e8fa9550e8eb13392d6eeb9ba9f8111801b332c8d2345b350b3bc66b379d5', - version: 'deprecated', - chainId: 137, - nftAddress: '0xBB1081DbF3227bbB233Db68f7117114baBb43656', - indexedMetadata: { - nft: { - state: 5 - } - } -}; -``` - -Now let's use these DDO examples, `DDOExampleV4`, `DDOExampleV5`, `deprecatedDDO` into the following javascript code, assuming `@oceanprotocol/ddo-js` has been installed as dependency before: - -```javascript -const { DDOManager } = require ('@oceanprotocol/ddo-js'); - -const ddoV4Instance = DDOManager.getDDOClass(DDOExampleV4); -const ddoV5Instance = DDOManager.getDDOClass(DDOExampleV5); -const deprecatedDdoInstance = DDOManager.getDDOClass(deprecatedDDO); -``` -**Execute script** - -```bash -node instantiate-ddo.js -``` \ No newline at end of file diff --git a/developers/ddo.js/retrieve-fields.md b/developers/ddo.js/retrieve-fields.md deleted file mode 100644 index 39180f79d..000000000 --- a/developers/ddo.js/retrieve-fields.md +++ /dev/null @@ -1,90 +0,0 @@ -# Retrieve DDO Fields 📥 - - -After creating DDO instance based on DDO's version, we can interact with the DDO fields through the following methods: - -- `getDDOFields()` which returns DDO fields such as: - - **id**: The Decentralized Identifier (DID) of the asset. - - **version**: The version of the DDO. - - **metadata**: The metadata describing the asset. - - **services**: An array of services associated with the asset. - - **credentials**: An array of verifiable credentials. - - **chainId**: The blockchain chain ID where the asset is registered. 
- - **nftAddress**: The address of the NFT representing the asset. -- `getAssetFields()` which returns Asset fields such as: - - **datatokens** (optional): The datatokens associated with the asset. - - **indexedMetadata** (optional): Encapsulates data about blockchain asset related event, NFT, stats (pricing of the asset, number of orders per asset), purgatory (if the asset belongs or not in the purgatory). - - **event** (optional): The last event related to the asset. - - **nft** (optional): Information about the NFT representing the asset. - - **purgatory** (optional): Purgatory status of the asset, if applicable. - - **stats** (optional): Statistical information about the asset (e.g., usage, views). - **Example of indexedMetadata** - ```json - indexedMetadata: { - event: { - txid: '0xceb617f13a8db82ba9ef24efcee72e90d162915fd702f07ac6012427c31ac952', - block: 39326976, - from: '0x0DB823218e337a6817e6D7740eb17635DEAdafAF', - contract: '0xBB1081DbF3227bbB233Db68f7117114baBb43656', - datetime: '2023-02-15T16:42:22' - }, - nft: { - address: '0xBB1081DbF3227bbB233Db68f7117114baBb43656', - name: 'Ocean Data NFT', - symbol: 'OCEAN-NFT', - state: 0, - tokenURI: - 'data:application/json;base64,eyJuYW1lIjoiT2NlYW4gRGF0YSBORlQiLCJzeW1ib2wiOiJPQ0VBTi1ORlQiLCJkZXNjcmlwdGlvbiI6IlRoaXMgTkZUIHJlcHJlc2VudHMgYW4gYXNzZXQgaW4gdGhlIE9jZWFuIFByb3RvY29sIHY0IGVjb3N5c3RlbS5cblxuVmlldyBvbiBPY2VhbiBNYXJrZXQ6IGh0dHBzOi8vbWFya2V0Lm9jZWFucHJvdG9jb2wuY29tL2Fzc2V0L2RpZDpvcDpmYTBlOGZhOTU1MGU4ZWIxMzM5MmQ2ZWViOWJhOWY4MTExODAxYjMzMmM4ZDIzNDViMzUwYjNiYzY2YjM3OWQ1IiwiZXh0ZXJuYWxfdXJsIjoiaHR0cHM6Ly9tYXJrZXQub2NlYW5wcm90b2NvbC5jb20vYXNzZXQvZGlkOm9wOmZhMGU4ZmE5NTUwZThlYjEzMzkyZDZlZWI5YmE5ZjgxMTE4MDFiMzMyYzhkMjM0NWIzNTBiM2JjNjZiMzc5ZDUiLCJiYWNrZ3JvdW5kX2NvbG9yIjoiMTQxNDE0IiwiaW1hZ2VfZGF0YSI6ImRhdGE6aW1hZ2Uvc3ZnK3htbCwlM0Nzdmcgdmlld0JveD0nMCAwIDk5IDk5JyBmaWxsPSd1bmRlZmluZWQnIHhtbG5zPSdodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyclM0UlM0NwYXRoIGZpbGw9JyUyM2ZmNDA5Mjc3JyBkPSdNMCw5OUwwLDIzQzEzLDIwIDI3LDE4IDM3LDE4QzQ2LDE3IDUyLDE4IDYyLDIwQzcxLDIxIDg1LDI0IDk5LDI3TDk5LDk5WicvJTNFJTNDcGF0aCBmaWxsPSclMjNmZjQwOTJiYicgZD0nTTAsOTlMMCw1MkMxMSw0OCAyMyw0NCAzMyw0NEM0Miw0MyA1MCw0NSA2MSw0OEM3MSw1MCA4NSw1MiA5OSw1NUw5OSw5OVonJTNFJTNDL3BhdGglM0UlM0NwYXRoIGZpbGw9JyUyM2ZmNDA5MmZmJyBkPSdNMCw5OUwwLDcyQzgsNzMgMTcsNzUgMjksNzZDNDAsNzYgNTMsNzYgNjYsNzdDNzgsNzcgODgsNzcgOTksNzhMOTksOTlaJyUzRSUzQy9wYXRoJTNFJTNDL3N2ZyUzRSJ9', - owner: '0x0DB823218e337a6817e6D7740eb17635DEAdafAF', - created: '2022-12-30T08:40:43' - }, - purgatory: { - state: false - }, - stats: [ - { - datatokenAddress: "0x34e84f653Dcb291838aa8AF8Be1E1eF30e749ba0", - name: "BDT2", - symbol: "DT2", - serviceId: "1", - orders: 5, - prices: [ - { - type: "fixedrate", - price: "1", - token:"0x967da4048cD07aB37855c090aAF366e4ce1b9F48", - contract: "freContractAddress", - exchangeId: "0x23434" - } - ] - } - ] - } - ``` -- `getDDOData()` which simply retruns as `Record` the full DDO structure including DDO and Asset fields. -- `getDid()` which returns only the Decentralized Identifier (DID), as `string`, of the asset. 
-
-## Usage of DDO Manager Functions
-
-Now let's use the [DDO V4 example](./instantiate-ddo.md#usage-examples), `DDOExampleV4`, in the following JavaScript code, assuming `@oceanprotocol/ddo-js` has already been installed as a dependency:
-
-```javascript
-const { DDOManager } = require('@oceanprotocol/ddo-js');
-
-const ddoInstance = DDOManager.getDDOClass(DDOExampleV4);
-
-// DDO V4
-console.log('DDO V4 Fields: ', ddoInstance.getDDOFields());
-// Individual field access
-console.log('DDO V4 chain ID: ', ddoInstance.getDDOFields().chainId);
-console.log('DDO V4 Asset Fields: ', ddoInstance.getAssetFields());
-console.log('DDO V4 Data: ', ddoInstance.getDDOData());
-console.log('DDO V4 DID: ', ddoInstance.getDid());
-
-// The same script can be applied to DDO V5 and the deprecated DDO from the `Instantiate DDO` section.
-```
-**Execute script**
-
-```bash
-node retrieve-ddo-fields.js
-```
\ No newline at end of file
diff --git a/developers/ddo.js/validate.md b/developers/ddo.js/validate.md
deleted file mode 100644
index 50c96063f..000000000
--- a/developers/ddo.js/validate.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# Validate a DDO 📥
-
-The DDO validation within the `DDO.js` library is performed based on SHACL schemas, which enforce DDO field types and structure according to the DDO version.
-
-**NOTE**: For more information regarding the DDO structure, please consult the [new DDO specification](../new-ddo-specification.md).
-

DDO Validation Flow using DDO.js

-
-The above diagram depicts the high-level flow of Ocean core stack interaction for DDO validation using DDO.js, which is called by Ocean Node whenever a new DDO is to be published.
-
-Based on the DDO version, `ddo.js` will apply the corresponding SHACL schema and validate the DDO fields against it.
-
-Supported SHACL schemas can be found [here](https://github.com/oceanprotocol/ddo.js/tree/main/schemas).
-
-**NOTE**: `indexedMetadata` is not taken into consideration during DDO validation.
-
-## Usage of DDO validation from the Library
-
-Now let's use the [DDO V4 example](./instantiate-ddo.md#usage-examples), `DDOExampleV4`, in the following JavaScript code, assuming `@oceanprotocol/ddo-js` has already been installed as a dependency:
-
-```javascript
-const { DDOManager } = require('@oceanprotocol/ddo-js');
-
-const ddoInstance = DDOManager.getDDOClass(DDOExampleV4);
-const validation = await ddoInstance.validate();
-console.log('Validation true/false: ' + validation[0]);
-console.log('Validation message: ' + validation[1]);
-```
-**Execute script**
-
-```bash
-node validate-ddo.js
-```
diff --git a/developers/dev-faq.md b/developers/dev-faq.md
deleted file mode 100644
index 5005b0be1..000000000
--- a/developers/dev-faq.md
+++ /dev/null
@@ -1,81 +0,0 @@
----
-title: Development FAQ
-description: Frequently Asked Questions About Ocean Technology
----
-## Development FAQ
-
-Have some questions about the Ocean Protocol tech stack?
-
-Hopefully, you'll find the answers here! If not, then please don't hesitate to reach out to us on [discord](https://discord.gg/EdmenE7eTj) - there are no stupid questions!
-
-The blockchain is public - does this mean that anyone can access my data? - -The blockchain being public means that transaction information is transparent and can be viewed by anyone. However, your data isn't directly accessible to the public. Ocean Protocol employs various mechanisms, including encryption and access control, to safeguard your data. Access to the data is determined by the permissions you set, ensuring that only authorized users can retrieve and work with your data. So, while blockchain transactions are public, your data remains protected and accessible only to those with proper authorization. - -
- -
How are datatokens created?
-
-Datatokens are created within the Ocean Protocol ecosystem when you tokenize a dataset (i.e., convert it into a fungible token that can be traded). More details are on the [datatokens page](../developers/contracts/datatokens.md).
-
- -
How does the datatoken creator make money?
-
-You can generate revenue as a dataset publisher by selling datatokens that grant access to your published dataset. For more details, please visit the [community monetization](https://docs.oceanprotocol.com/developers/community-monetization#1.-publishing-and-selling-data) page.
-
- -
-Where can I find information about the number of datatokens created and track their progress? - -To access this data, some technical expertise is required. You can find this information at the subgraph level. In the documentation, we provide a few examples of how to retrieve this data using JavaScript. Feel free to give it a shot by visiting this [page](../developers/subgraph/list-datatokens). If it doesn't meet your requirements, don't hesitate to reach out to us on Discord. -
- -
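As a quick illustration of the JavaScript approach linked above, a subgraph query can be sent with a plain HTTP POST. This is only a sketch: the endpoint shown is the public v4 Ethereum mainnet subgraph, and the entity/field names are assumptions that may differ from the linked examples.

```javascript
// Sketch: list a few datatokens from the Ocean subgraph (assumed endpoint/fields)
const query = `{
  tokens(first: 5, where: { isDatatoken: true }) {
    name
    symbol
  }
}`;

fetch('https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query })
})
  .then((res) => res.json())
  .then((result) => console.log(result.data));
```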
-How can developers use Ocean technology to build their own data marketplaces? - -You can fork Ocean Market and then make changes as you wish. Please see the [customising your market](../developers/build-a-marketplace/customising-your-market) page for details. -
- -
-Is there a trading platform or stock exchange that has successfully forked the Ocean marketplace codebase? - -Ocean technology is actively used by Daimler/Acentrik, deltaDAO/GAIA-X, and several other entities. You can find further details on the Ocean [ecosystem page](https://oceanprotocol.com/explore/ecosystem). - -
- -
What are the Ocean faucets and how can they be used?
-
-An Ocean faucet is a site to get (fake) OCEAN for use on a given testnet. There's an Ocean faucet for each testnet that Ocean is deployed to. The [networks](../discover/networks/) page has more information.
-
- -
-How can I convert tokens from the BEP20 network to the ERC20 network? - -Please follow this [tutorial](https://x.com/ASI_Alliance/status/1848393597722165429) to bridge from/to BNB Smart Chain. Please double-check the addresses and make sure you are using the right smart contracts. -
- -
How do I bridge my mOcean back to OCEAN?
-
-Please follow this [tutorial](../discover/networks/bridges#polygon-ex-matic-bridge) to bridge to/from Polygon mainnet. Please double-check the addresses and make sure you are using the right smart contracts.
-
- -
-Is it possible to reverse engineer a dataset on Ocean by having access to both the algorithm and the output? - -Not to our knowledge. But please, give it a shot and share the results with us 😄 - -PS: We offer good rewards 😇 -
- -
If a dataset consists of 100 individuals' private data, does this solution allow each individual to maintain sovereign control over their data while still enabling algorithms to compute as if it were one dataset?
-
-Yes. Each individual could publish their dataset themselves to get a data NFT. From the data NFT, they can mint datatokens, which are used to access the data. They have sovereign control over this, as they hold the keys to the data NFTs and datatokens, and have great flexibility in how they give others access. For example, they could send a datatoken to a DAO for the DAO to manage, or they could grant datatoken-minting permissions to the DAO. The DAO could use this to assemble a dataset across 100 individuals.
-
-Learn more about Data NFTs on the [Docs](../developers/contracts/data-nfts).
-
diff --git a/developers/fees.md b/developers/fees.md new file mode 100644 index 000000000..5f7e6a33b --- /dev/null +++ b/developers/fees.md @@ -0,0 +1,22 @@ +--- +description: >- + The Ocean Enterprise Collective defines various fees for creating a + sustainability loop. +--- + +# Fees + +One transaction may have fees going to several entities, such as the marketplace operator where the asset was published, the Ocean Node provider, and the Ocean Enterprise Collective e.V. + +* **Marketplace**: the marketplace where the asset is published or consumed +* **Provider**: the Ocean Node facilitating asset consumption. May serve up data, run C2D jobs, etc. +* **Ocean Enterprise Collective**: for developing and maintaining the OE code base + +
| Fee Type | Value |
| --- | --- |
| Marketplace | Set by the marketplace operator, for asset publishing and/or consumption |
| Provider | Set by the Ocean Node operator |
| Ocean Enterprise Collective | Set by OEC e.V. at 1.9% of the asset and compute job price, with a minimum value of 1 cent |
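To make the OEC fee rule concrete, here is a small illustrative sketch. The 1.9% rate and the 1-cent minimum come from the table above; the function name, the cent-based units, and the rounding are assumptions made for this example, not part of the OE code base.

```javascript
// Sketch of the OEC fee rule from the table above (assumed helper, not OE API):
// 1.9% of the asset or compute job price, with a minimum of 1 cent.
function oecFeeInCents(priceInCents) {
  const fee = Math.round(priceInCents * 0.019);
  return Math.max(fee, 1); // the 1-cent floor applies to very low prices
}

console.log(oecFeeInCents(1000)); // asset priced at 10.00 -> 19 (i.e. 0.19)
console.log(oecFeeInCents(10));   // asset priced at 0.10 -> 1 (minimum applies)
```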
{% hint style="info" %}
Stay up-to-date with the latest information! The values within the system are regularly updated. We recommend verifying the most recent values directly from the [contracts](https://github.com/oceanprotocol/contracts) and the [market](https://github.com/oceanprotocol/market).
{% endhint %}

diff --git a/developers/fg-permissions.md b/developers/fg-permissions.md
index 5a0d3e426..136a65223 100644
--- a/developers/fg-permissions.md
+++ b/developers/fg-permissions.md
@@ -5,7 +5,23 @@ description: >-
   can publish, buy or browse data
 ---
 
-# Fine-Grained Permissions
+# Managing access to assets - to be updated
+
+An OE-enabled dataspace can be deployed in one of two security configurations:
+
+1. SSI Security Disabled: In configurations where SSI security is disabled, access control relies solely on the consumer’s Web3 address. Access to an asset is granted if:
+
+* The address is explicitly listed in the allow list, or
+* The address is not present in the deny list.
+
+2. SSI Security Enabled: In a dataspace secured by Self-Sovereign Identity (SSI), access to assets is granted upon successful verification of:
+
+* The consumer’s Web3 address
+* The Verifiable Credentials required by each asset.
+
+This dual-layered approach ensures that only authorized identities possessing the appropriate credentials can interact with sensitive resources.
+
 A large part of Ocean is about access control, which is primarily handled by datatokens. Users can access a resource (e.g. a file) by redeeming datatokens for that resource. We recognize that enterprises and other users often need more precise ways to specify and manage access, and we have introduced fine-grained permissions for these use cases.
 
 Fine-grained permissions mean that access can be controlled precisely at two levels:

diff --git a/developers/fractional-ownership.md b/developers/fractional-ownership.md
deleted file mode 100644
index 9112c8bca..000000000
--- a/developers/fractional-ownership.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-description: >-
-  Exploring fractional ownership in Web3, combining NFTs and DeFi for
-  co-ownership of data IP and tokenized DAOs for collective data management.
----
-
-# Fractional Ownership
-
-Fractional ownership represents an exciting subset within the realm of Web3, combining the realms of NFTs and DeFi. It introduces the concept of co-owning data intellectual property (IP).
-
-Ocean offers two approaches to facilitate fractional ownership:
-
-1. Sharded Holding of ERC20 Datatokens: Under this approach, each holder of ERC20 tokens possesses the typical datatoken rights outlined earlier. For instance, owning 1.0 datatoken allows consumption of a particular asset. Ocean conveniently provides this feature out of the box.
-2. Sharding ERC721 Data NFT: This method involves dividing the ownership of an ERC721 data NFT among multiple individuals, granting each co-owner the right to a portion of the earnings generated from the underlying IP. Moreover, these co-owners collectively control the data NFT. For instance, a dedicated DAO may be established to hold the data NFT, featuring its own ERC20 token. DAO members utilize their tokens to vote on updates to data NFT roles or the deployment of ERC20 datatokens associated with the ERC721.
-
-It's worth noting that for the second approach, one might consider utilizing platforms like Niftex for sharding. However, important questions arise in this context:
-
-* What specific rights do shard-holders possess?
-* It's possible that they have limited rights, just as Amazon shareholders don't have the authority to roam the hallways of Amazon's offices simply because they own shares -* Additionally, how do shard-holders exercise control over the data NFT? - -These concerns are effectively addressed by employing a tokenized DAO, as previously described. - -

DAO

- -Data DAOs present a fascinating use case whenever a group of individuals desires to collectively manage data or consolidate data for increased bargaining power. Such DAOs can take the form of unions, cooperatives, or trusts. - -Consider the following example involving a mobile app: You install the app, which includes an integrated crypto wallet. After granting permission for the app to access your location data, it leverages the DAO to sell your anonymized location data on your behalf. The DAO bundles your data with that of thousands of other DAO members, and as a member, you receive a portion of the generated profits. - -This use case can manifest in several variations. Each member's data feed could be represented by their own data NFT, accompanied by corresponding datatokens. Alternatively, a single data NFT could aggregate data feeds from all members into a unified feed, which is then fractionally owned through sharded ERC20 tokens (as described in approach 1) or by sharding the ERC721 data NFT (as explained in approach 2). If you're interested in establishing a data union, we recommend reaching out to our associates at [Data Union](https://www.dataunion.app/). diff --git a/developers/networks.md b/developers/networks.md new file mode 100644 index 000000000..afa1fc233 --- /dev/null +++ b/developers/networks.md @@ -0,0 +1,69 @@ +--- +title: null +description: >- + All the public networks the Ocean Enterprise contracts are deployed to and the + currencies in which the published assets can be listed. +--- + +# Supported networks & currencies + +## Supported Networks + +Ocean Enterprise smart contracts are deployed on multiple public networks: several production chains and several testnets. + +The file [`address.json`](https://github.com/oceanprotocol/contracts/blob/v4main/addresses/address.json) holds up-to-date deployment addresses for all Ocean contracts. + +### Networks Summary + +The networks where Ocean Enterprise smart contracts are deployed are: + +**Production Networks:** + +* Ethereum Mainnet +* Optimism (OP) Mainnet + +**Test Networks:** + +* Ethereum Sepolia +* Optimism (OP) Sepolia + + + +### Production Networks + +The smart contracts deployed by O.E.C. in the production networks and used by default by the Ocean Enterprise components are available at the following addresses. + +#### Ethereum Mainnet + +
| Smart Contract | Address |
| --- | --- |
| EnterpriseFeeCollector | 0x254302d1Ae1e1200319c885D93D40a8927ACFcD7 |
| FixedPriceEnterprise | 0x6C97D128f7E7D21ac3C722458Dc5d71f7e1bBa6e |
| EnterpriseEscrow | 0x7F773EE2B8AFE158FA03B72fC20672B408Cd9818 |
| ERC20Template Enterprise | 0x3E85e7Cb15880b6d4871092E74bF65CE03E8448D |

#### Optimism Mainnet
| Smart Contract | Address |
| --- | --- |
| EnterpriseFeeCollector | 0xE9397625Df9B63f0C152f975234b7988b54710B8 |
| FixedPriceEnterprise | 0x1d535147a97bd87c8443125376E6671B60556E07 |
| EnterpriseEscrow | 0xc313e19146Fc9a04470689C9d41a4D3054693531 |
| ERC20Template Enterprise | 0x1B083D8584dd3e6Ff37d04a6e7e82b5F622f3985 |

### Test Networks

The smart contracts deployed by O.E.C. in the test networks and used by default by the Ocean Enterprise components are available at the following addresses.

#### Ethereum Sepolia
| Smart Contract | Address |
| --- | --- |
| EnterpriseFeeCollector | 0x4D49eEedFac8Ea03328c0E4871b680C06d892092 |
| FixedPriceEnterprise | 0xEcD0C3519a081e3924D6F3197f86980eA7dfCf71 |
| EnterpriseEscrow | 0x5494711392a67DA50D3bC7b1fcC2d1877cFaA4d2 |
| ERC20Template Enterprise | 0xDEfD0018969cd2d4E648209F876ADe184815f038 |

## Supported currencies

Ocean Enterprise supports the following currencies:

### Production Networks currencies
| Blockchain Network | Supported Currency | Contract Address |
| --- | --- | --- |
| Ethereum mainnet | USDC | 0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48 |
| Ethereum mainnet | EURC | 0x1aBaEA1f7C830bD89Acc67eC4af516284b1bC33c |
| Ethereum mainnet | EURAU | 0x4933A85b5b5466Fbaf179F72D3DE273c287EC2c2 |
| Optimism mainnet | USDC | 0x0b2C639c533813f4Aa9D7837CAf62653d097Ff85 |

### Test Networks currencies
| Blockchain Network | Supported Currency | Contract Address |
| --- | --- | --- |
| Ethereum Sepolia | USDC | 0xf08A50178dfcDe18524640EA6618a1f965821715 |
| Ethereum Sepolia | EURC | 0x08210F9170F89Ab7658F0B5E3fF39b0E03C594D4 |
| Optimism Sepolia | USDC | 0x5fd84259d66Cd46123540766Be93DFE6D43130D7 |

***

diff --git a/developers/ocean-cli/README.md b/developers/ocean-cli/README.md
deleted file mode 100644
index 76c41e0dc..000000000
--- a/developers/ocean-cli/README.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-description: >-
-  CLI tool to interact with Ocean Protocol's JavaScript library to privately
-  & securely publish, consume and run compute on data.
----
-
-# Ocean CLI
-
-Welcome to the Ocean CLI, your powerful command-line tool for seamless interaction with Ocean Protocol's data-sharing capabilities. 🚀
-
-The Ocean CLI offers a wide range of functionalities, enabling you to:
-
-* [**Publish**](publish.md) 📤 data services: downloadable files or compute-to-data.
-* [**Edit**](edit.md) ✏️ existing assets.
-* [**Consume**](consume.md) 📥 data services, ordering datatokens and downloading data.
-* [**Compute to Data**](run-c2d.md) 💻 on publicly available datasets using a published algorithm. A free version of the compute-to-data feature is available.
-
-## Key Information
-
-The Ocean CLI is powered by the [ocean.js](../ocean.js/) JavaScript library, an integral part of the [Ocean Protocol](https://oceanprotocol.com) toolset. 🌐
-
-Let's dive into the CLI's capabilities and unlock the full potential of Ocean Protocol together! If you're ready to explore each functionality in detail, simply go through the next pages.
diff --git a/developers/ocean-cli/consume.md b/developers/ocean-cli/consume.md
deleted file mode 100644
index 9c955e0f0..000000000
--- a/developers/ocean-cli/consume.md
+++ /dev/null
@@ -1,13 +0,0 @@
-# Consume a Dataset 📥
-
-The process of consuming an asset is straightforward. To achieve this, you only need to execute a single command:
-
-```bash
-npm run cli download 'assetDID' 'download-location-path'
-```
-
-In this command, replace `assetDID` with the specific DID of the asset you want to consume, and `download-location-path` with the desired path where you wish to store the downloaded asset content.
-
-Once executed, this command orchestrates both the **ordering** of a [datatoken](../contracts/datatokens.md) and the subsequent download operation. The asset's content will be automatically retrieved and saved at the specified location, simplifying the consumption process for users.
-
Consume
diff --git a/developers/ocean-cli/edit.md b/developers/ocean-cli/edit.md deleted file mode 100644 index c27de80d3..000000000 --- a/developers/ocean-cli/edit.md +++ /dev/null @@ -1,22 +0,0 @@ -# Edit a Dataset ✏️ - -To make changes to a dataset, you'll need to start by retrieving the asset's [Decentralized Data Object](../ddo-specification.md) (DDO). - -## Retrieve DDO - -Obtaining the DDO of an asset is a straightforward process. You can accomplish this task by executing the following command: - -```bash -npm run cli getDDO 'assetDID' -``` - -
Retrieve DDO
-
-## Edit the Dataset
-
-After retrieving the asset's DDO and saving it as a JSON file, you can proceed to edit the metadata as needed. Once you've made the necessary changes, you can utilize the following command to apply the updated metadata:
-
-```bash
-npm run cli editAsset 'DATASET_DID' 'PATH_TO_UPDATED_FILE'
-```
diff --git a/developers/ocean-cli/install.md b/developers/ocean-cli/install.md
deleted file mode 100644
index ab828ab89..000000000
--- a/developers/ocean-cli/install.md
+++ /dev/null
@@ -1,77 +0,0 @@
-# Installation and Configuration 🛠️
-
-To get started with the Ocean CLI, follow these steps for a seamless setup:
-
-## Clone the Repository
-
-Begin by cloning the repository. You can achieve this by executing the following command in your terminal:
-
-```bash
-$ git clone https://github.com/oceanprotocol/ocean-cli.git
-```
-
-Cloning the repository will create a local copy on your machine, allowing you to access and work with its contents.
-
-## Install NPM Dependencies
-
-After successfully cloning the repository, you should install the necessary npm dependencies to ensure that the project functions correctly. This can be done with the following command:
-
-```bash
-npm install
-```
-
-## Build the TypeScript code
-
-To compile the TypeScript code and prepare the CLI for use, execute the following command:
-
-```bash
-npm run build
-```
-
-Now, let's configure the environment variables required for the CLI to function effectively. 🚀
-
-
-## Setting Environment Variables 🌐
-
-To successfully configure the CLI tool, two essential steps must be undertaken: the setting of the account's private key and the definition of the desired RPC endpoint. These actions are pivotal in enabling the CLI tool to function effectively.
-
-### Private Key Configuration
-
-The CLI tool requires the configuration of the account's private key (by exporting the env var "PRIVATE_KEY") or a mnemonic (by exporting the env var "MNEMONIC").
-Both serve as the means by which the CLI tool establishes a connection to the associated wallet and play a crucial role in authenticating and authorizing the operations performed by the tool. You must choose either one option or the other; the tool will not utilize both simultaneously.
-
-```bash
-export PRIVATE_KEY="XXXX"
-```
-or
-
-```bash
-export MNEMONIC="XXXX"
-```
-
-### RPC Endpoint Specification
-Additionally, it is imperative to specify the RPC endpoint that corresponds to the desired network for executing operations. The CLI tool relies on this user-provided RPC endpoint to connect to the network required for its functions. This connection to the network is vital as it enables the CLI tool to interact with the blockchain and execute operations seamlessly.
-
-```bash
-export RPC='XXXX'
-```
-
-Furthermore, there are additional environment variables that can be configured to enhance the flexibility and customization of the environment. These variables include options such as the metadataCache URL and Provider URL, which can be specified if you prefer to utilize a custom deployment of Aquarius or Provider in contrast to the default settings. Moreover, you have the option to provide a custom address file path if you wish to use customized smart contracts or deployments for your specific use case. Remember, setting the following environment variables is optional.
- -```bash -export AQUARIUS_URL='XXXX' -export PROVIDER_URL='XXXX' -export ADDRESS_FILE='../path/to/your/address-file' -``` - -## Usage - -To explore the commands and option flags available in the Ocean CLI, simply run the following command: - -```bash -npm run cli h -``` - -
Available CLI commands & options
- -With the Ocean CLI successfully installed and configured, you're ready to dive into its capabilities and unlock the full potential of Ocean Protocol. If you encounter any issues during the setup process or have questions, feel free to seek assistance from the [support](https://discord.com/invite/TnXjkR5) team. 🌊 diff --git a/developers/ocean-cli/publish.md b/developers/ocean-cli/publish.md deleted file mode 100644 index 9bd07f7a3..000000000 --- a/developers/ocean-cli/publish.md +++ /dev/null @@ -1,85 +0,0 @@ -# Publish a Dataset 📤 - -Once you've configured the RPC environment variable, you're ready to publish a new dataset on the connected network. The flexible setup allows you to switch to a different network simply by substituting the RPC endpoint with one corresponding to another network. 🌐 - -For setup configuration on Ocean CLI, please consult first [install section](install.md) - -To initiate the dataset publishing process, we'll start by updating the helper [DDO](../ddo-specification.md)(Decentralized Data Object) example named "SimpleDownloadDataset.json." This example can be found in the `./metadata` folder, located at the root directory of the cloned Ocean CLI project. - -```json -{ - "@context": ["https://w3id.org/did/v1"], - "id": "", - "nftAddress": "", - "version": "4.1.0", - "chainId": 80001, - "metadata": { - "created": "2021-12-20T14:35:20Z", - "updated": "2021-12-20T14:35:20Z", - "type": "dataset", - "name": "ocean-cli demo asset", - "description": "asset published using ocean cli tool", - "tags": ["test"], - "author": "oceanprotocol", - "license": "https://market.oceanprotocol.com/terms", - "additionalInformation": { - "termsAndConditions": true - } - }, - "services": [ - { - "id": "ccb398c50d6abd5b456e8d7242bd856a1767a890b537c2f8c10ba8b8a10e6025", - "type": "access", - "files": { - "datatokenAddress": "0x0", - "nftAddress": "0x0", - "files": [ - { - "type": "url", - "url": "https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-abstract10.xml.gz-rss.xml", - "method": "GET" - } - ] - }, - "datatokenAddress": "", - "serviceEndpoint": "https://v4.provider.oceanprotocol.com", - "timeout": 86400 - } - ], - "event": {}, - "nft": { - "address": "", - "name": "Ocean Data NFT", - "symbol": "OCEAN-NFT", - "state": 5, - "tokenURI": "", - "owner": "", - "created": "" - }, - "purgatory": { - "state": false - }, - "datatokens": [], - "stats": { - "allocated": 0, - "orders": 0, - "price": { - "value": "2" - } - } -} -``` - -{% hint style="info" %} -The provided example creates a consumable asset with a predetermined price of 2 OCEAN. If you wish to modify this and create an asset that is freely accessible, you can do so by replacing the value of "stats.price.value" with 0 in the JSON example mentioned above. -{% endhint %} - -Now, let's run the command to publish the dataset: - -```bash -npm run cli publish metadata/simpleDownloadDataset.json -``` - -
Publish dataset
-
-Executing this command will initiate the dataset publishing process, making your dataset accessible and discoverable on the Ocean Protocol network. 🌊
\ No newline at end of file
diff --git a/developers/ocean-cli/run-c2d.md b/developers/ocean-cli/run-c2d.md
deleted file mode 100644
index 866bb0af1..000000000
--- a/developers/ocean-cli/run-c2d.md
+++ /dev/null
@@ -1,85 +0,0 @@
-# Run C2D Jobs 🚀
-
-## Get Compute Environments
-
-Before creating a compute-to-data job, you must select the preferred environment to run the algorithm on. This can be
-accomplished by running the CLI command `getComputeEnvironments`, as follows:
-```bash
-npm run cli getComputeEnvironments
-```
-
-
-## Start a Compute Job 🎯
-
-Initiating a compute job can be accomplished through two primary methods.
-1. The first approach involves publishing both the dataset and algorithm, as explained in the previous section, [Publish a Dataset](./publish.md). Once that's completed, you can proceed to initiate the compute job.
-2. Alternatively, you have the option to explore available datasets and algorithms and kickstart a compute-to-data job by combining your preferred choices.
-
-To illustrate the latter option, you can use the following command:
-
-```bash
-npm run cli startCompute 'DATASET_DID' 'ALGO_DID'
-```
-In this command, replace `DATASET_DID` with the specific DID of the dataset you intend to utilize and `ALGO_DID` with the DID of the algorithm you want to apply. By executing this command, you'll trigger the initiation of a compute-to-data job that harnesses the selected dataset and algorithm for processing.
-
Start a compute job
-
-## Start a Free Compute Job 🎯
-
-To run an algorithm for free by starting a compute job, follow these steps.
-**Note**
-For free compute only, the user is **not required** to provide a dataset on the command line. The required command line parameters are the algorithm DID and the environment ID, retrieved from the `getComputeEnvironments`
-command.
-1. The first step involves publishing the algorithm, as explained in the previous section, [Publish a Dataset](./publish.md). Once that's completed, you can proceed to initiate the compute job.
-2. Alternatively, you have the option to explore available algorithms and kickstart a free compute-to-data job by combining your preferred choices.
-
-To illustrate the latter option, you can use the following command for running free start compute with **additional datasets**:
-
-```bash
-npm run cli freeStartCompute ['DATASET_DID1','DATASET_DID2'] 'ALGO_DID' 'ENV_ID'
-```
-
-In this command, replace `DATASET_DID` with the specific DID of the dataset you intend to utilize, `ALGO_DID` with the DID of the algorithm you want to apply, and `ENV_ID` with the environment for **free** start compute returned from `npm run cli getComputeEnvironments`.
-By executing this command, you'll trigger the initiation of a free compute-to-data job with the provided algorithm.
-Free start compute can be run **without** published datasets; only the algorithm and environment are required:
-```bash
-npm run cli freeStartCompute [] 'ALGO_DID' 'ENV_ID'
-```
-**NOTE:** For the `zsh` console, please surround `[]` with quotes like this: `"[]"`.
Start a free compute job
-
-## Download Compute Results 🧮
-
-To obtain the compute results, we'll follow a two-step process. First, we'll employ the `getJobStatus` method, patiently monitoring its status until it signals the job's completion. Afterward, we'll utilize this method to acquire the actual results.
-
-## Retrieving Algorithm Logs
-
-To monitor the algorithm's execution logs and its setup configuration,
-this command does the trick:
-
-```bash
-npm run cli computeStreamableLogs
-```
-
-
-### Monitor Job Status
-To track the status of a job, you'll require both the dataset DID and the compute job DID. You can initiate this process by executing the following command:
-
-```bash
-npm run cli getJobStatus 'DATASET_DID' 'JOB_ID'
-```
-
-Executing this command will allow you to observe the job's status and verify its successful completion.
-
Get Job Status
- -### Download C2D Results - -For the second method, the dataset DID is no longer required. Instead, you'll need to specify the job ID, the index of the result you wish to download from the available results for that job, and the destination folder where you want to save the downloaded content. The corresponding command is as follows: - -```bash - npm run cli downloadJobResults 'JOB_ID' 'RESULT_INDEX' 'DESTINATION_FOLDER' -``` - -
Download C2D Job Results
diff --git a/developers/ocean-node/README.md b/developers/ocean-node/README.md
deleted file mode 100644
index a513e98f3..000000000
--- a/developers/ocean-node/README.md
+++ /dev/null
@@ -1,77 +0,0 @@
----
-description: The new Ocean stack
----
-
-# Ocean Nodes
-
-Ocean Nodes are a vital part of the Ocean Protocol core technology stack. The Ocean Nodes monorepo replaces the three previous components: [Provider](../old-infrastructure/provider/), [Aquarius](../old-infrastructure/aquarius/) and [subgraph](../old-infrastructure/subgraph/). It has been designed to significantly simplify the process of starting the Ocean stack - it runs everything you need with one simple command.
-
-It integrates multiple services for secure and efficient data operations, utilizing technologies like libp2p for peer-to-peer communication. Its modular and scalable architecture supports various use cases, from simple data retrieval to complex compute-to-data (C2D) tasks.
-
-The node is structured into separate layers, including the network layer for communication, and the components layer for core services like the Indexer and Provider. This layered architecture ensures efficient data management and high security.
-
-Flexibility and extensibility are key features of Ocean Node, allowing multiple compute engines, such as Docker and Kubernetes, to be managed within the same framework. The orchestration layer coordinates interactions between the core node and execution environments, ensuring the smooth operation of compute tasks.
-
-For details on how to run a node, see the [readme](https://github.com/oceanprotocol/ocean-node/) in the GitHub repository.
-
-However, your nodes must meet specific criteria in order to be eligible for incentives. Here’s what’s required:
-1) Public Accessibility: Nodes must have a public IP address
-2) API and P2P Ports: Nodes must expose both HTTP API and P2P ports to facilitate seamless communication within the network
-
-You can easily check the eligibility of your nodes by connecting to the [Ocean Nodes Dashboard](http://nodes.oceanprotocol.com/) and looking for the green status indicator next to your IP address.
-
-Follow these steps to install the Node and be eligible for rewards:
-
-1) Find your public IP: You’ll need this for the configuration. You can easily find it by googling “my IP”
-2) Run the [Quickstart Guide](https://github.com/oceanprotocol/ocean-node/blob/main/docs/dockerDeployment.md): If you’ve already deployed a node, we recommend either redeploying with the guide or ensuring that your environment variables are correct and you’re running the latest version
-3) Get your Node ID: After starting the node, you can retrieve the ID from the console
-
-![image](https://miro.medium.com/v2/resize:fit:720/format:webp/0*BKLTEYu92MX1Q6BW.png)
-
-4) Expose your Node to the Internet: From a different device, check that your node is accessible by running `telnet {your ip} {P2P_ipV4BindTcpPort}`. To forward the node port, please follow the instructions provided by your router manufacturer — ex: [Asus](https://www.asus.com/support/faq/1037906/), [TpLink](https://www.tp-link.com/en/support/faq/1379/), [Huawei](https://consumer.huawei.com/en/support/content/en-us15806329/), [Mercusys](https://www.mercusys.com/ro/faq-121) etc.
-5) Verify eligibility on the Ocean Node Dashboard: Check https://nodes.oceanprotocol.com/ and search for your peerID to ensure your node is correctly configured.
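The telnet check above can also be done with a few lines of Node.js. A minimal sketch, assuming you substitute your own public IP and the port configured as `P2P_ipV4BindTcpPort` (both are placeholders here):

```javascript
// Sketch: check that the node's P2P TCP port is reachable from outside.
// 'YOUR_PUBLIC_IP' and 9000 are placeholders - use your own IP and the
// value of P2P_ipV4BindTcpPort from your node configuration.
const net = require('net');

const socket = net.createConnection({ host: 'YOUR_PUBLIC_IP', port: 9000 }, () => {
  console.log('P2P port is reachable');
  socket.end();
});

socket.on('error', (err) => {
  console.error('P2P port is not reachable:', err.message);
});
```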
-
-#### Ocean Nodes replace the Provider:
-
-* The Node is the only component that can access your data
-* It performs checks on-chain for buyer permissions and payments
-* Encrypts the URL and metadata during publish
-* Decrypts the URL when the dataset is downloaded or a compute job is started
-* Provides access to data assets by streaming data (and never the URL)
-* Provides compute services (connects to C2D environment)
-* Typically run by the Data owner
-
-#### Ocean Nodes replace Aquarius:
-
-* A new component called Indexer replaces the functionality of Aquarius.
-* The Indexer acts as a cache for on-chain data. It stores the metadata from the smart contract events off-chain in a Typesense database.
-* It monitors events: It continually checks for MetadataCreated and MetadataUpdated events, processing these events and updating them in the database.
-* Serves as an API: It provides a REST API that fetches data from the off-chain datastore.
-* Offers easy query access: The API provides a convenient method to access metadata without scanning the blockchain.
-
-**Ocean Nodes replace the Subgraph:**
-
-* Indexing the data from the smart contract events.
-* The data is indexed and updated in real-time.
-* Providing an API which receives and responds to queries.
-* Simplifying the development experience for anyone building on Ocean.
-
-### API
-
-For details on all of the HTTP endpoints exposed by the Ocean Nodes API, refer to the API.md file in the GitHub repository.
-
-{% embed url="https://github.com/oceanprotocol/ocean-node/blob/develop/API.md" %}
-
-### Compute to Data (C2D)
-
-Ocean Nodes provide a convenient and easy way to run a compute-to-data environment. This gives you the opportunity to monetize your node, as you can charge fees for using the C2D environment, and there are also additional incentives provided by the Ocean Protocol Foundation (OPF). Soon we will also be releasing C2D V2, which will provide different environments and new ways to pay for computation.
-
-For more details on the C2D V2 architecture, refer to the documentation in the repository:\
-
-
-{% embed url="https://github.com/oceanprotocol/ocean-node/blob/develop/docs/C2DV2.md" %}
diff --git a/developers/ocean-node/node-architecture.md b/developers/ocean-node/node-architecture.md
deleted file mode 100644
index b94379b10..000000000
--- a/developers/ocean-node/node-architecture.md
+++ /dev/null
@@ -1,77 +0,0 @@
-# Node Architecture
-
-Ocean Nodes are the core infrastructure component within the Ocean Protocol ecosystem, designed to facilitate decentralized data exchange and management. They operate by leveraging a multi-layered architecture that includes network, components, and module layers.
-
-Key features include secure peer-to-peer communication via libp2p, flexible and secure encryption solutions, and support for various Compute-to-Data (C2D) operations.
-
-Ocean Node's modular design allows for customization and scalability, enabling seamless integration of its core services—such as the Indexer for metadata management and the Provider for secure data transactions—ensuring robust and efficient decentralized data operations.
-
-### Architecture Overview
-
-The Node stack is divided into the following layers:
-
-* Network layer (libp2p & HTTP API)
-* Components layer (Indexer, Provider)
-* Modules layer
-

Ocean Nodes Infrastructure diagram

- -### Features - -* libp2p supports ECDSA key pairs, and node identity should be defined as a public key. -* Multiple ways of storing URLs: - * Choose one node and use that private key to encrypt URLs (enterprise approach). - * Choose several nodes, so your files can be accessed even if one node goes down (given at least one node is still alive). -* Supports multiple C2D types: - * Light Docker only (for edge nodes). - * Ocean C2D (Kubernetes). -* Each component can be enabled/disabled on startup (e.g., start node without Indexer). - -### Nodes and Network Model - -Nodes can receive user requests in two ways: - -* HTTP API -* libp2p from another node - -They are merged into a common object and passed to the appropriate component. - -Nodes should be able to forward requests between them if the local database is missing objects. (Example: Alice wants to get DDO id #123 from Node A. Node A checks its local database. If the DDO is found, it is sent back to Alice. If not, Node A can query the network and retrieve the DDO from another node that has it.) - -Nodes' libp2p implementation: - -* Should support core protocols (ping, identify, kad-dht for peering, circuit relay for connections). -* For peer discovery, we should support both mDNS & Kademlia DHT. -* All Ocean Nodes should subscribe to the topic: OceanProtocol. If any interesting messages are received, each node is going to reply. - -### Components & Modules - -#### Indexer - -An off-chain, multi-chain metadata & chain events cache. It continually monitors the chains for well-known events and caches them (V4 equivalence: Aquarius). - -Features: - -* Monitors MetadataCreated, MetadataUpdated, MetadataState and stores DDOs in the database. -* Validates DDOs according to multiple SHACL schemas. When hosting a node, you can provide your own SHACL schema or use the ones provided. -* Provides proof for valid DDOs. -* Monitors all transactions and events from the data token contracts. This includes minting tokens, creating pricing schema (fixed & free pricing), and orders. -* Allows queries for all the above. - -#### Provider - -* Performs checks on-chain for buyer permissions and payments. -* The provider is crucial in checking that all the relevant fees have been paid before the consumer is able to download the asset. See the [Fees page](../contracts/fees.md) for details on all of the different types of fees. -* Encrypts the URL and metadata during publishing. -* Decrypts the URL when the dataset is downloaded or a compute job is started. -* Encrypts/decrypts files before storage/while accessing. -* Provides access to data assets by streaming data (and never the URL). -* Provides compute services. -* The node operator can charge provider fees, compensating the individuals or organizations operating their own node when users request assets. -* Currently, we are providing the legacy Ocean C2D compute services (which run in Kubernetes) via the node. In the future, we will soon be releasing C2D V2 which will also allow connections to multiple C2D engines: light, Ocean C2D, and third parties. 
- -For more details on the C2D V2 architecture, refer to the documentation in the repository: - -{% embed url="https://github.com/oceanprotocol/ocean-node/blob/develop/docs/C2DV2.md" %} - -### diff --git a/developers/ocean.js/README.md b/developers/ocean.js/README.md deleted file mode 100644 index a0caca2a4..000000000 --- a/developers/ocean.js/README.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -description: >- - JavaScript library to privately & securely publish, exchange, and consume - data. ---- - -# Ocean.js - -With ocean.js, you can: - -* **Publish** data services: downloadable files or compute-to-data. Create an ERC721 **data NFT** for each service, and ERC20 **datatoken** for access (1.0 datatokens to access). -* **Sell** datatokens for a fixed price. Sell data NFTs. -* **Transfer** data NFTs & datatokens. - -Ocean.js is part of the [Ocean Protocol](https://oceanprotocol.com) toolset. - -{% embed url="https://www.youtube.com/watch?v=lqGXPkPUCqI" %} -Introducing Ocean.JS -{% endembed %} - -The Ocean.js library adopts the module architectural pattern, ensuring clear separation and organization of code units. Utilizing ES6 modules simplifies the process by allowing you to import only the necessary module for your specific task. - -The module structure follows this format: - -* Types -* Config -* Contracts -* Services -* Utils - -When working with a particular module, you will need to provide different parameters. To instantiate classes from the contracts module, you must pass objects such as Signer, which represents the wallet instance, or the contract address you wish to utilize, depending on the scenario. As for the services modules, you will need to provide the provider URI or metadata cache URI. - - -# Examples and Showcases 🌟🚀 - -Ocean.js is more than just a library; it's a gateway to unlocking your potential in the world of decentralized data services. To help you understand its real-world applications, we've curated a collection of examples and showcases. These examples demonstrate how you can use Ocean.js to create innovative solutions that harness the power of decentralized technologies. Each example provides a unique perspective on how you can apply Ocean.js, from decentralized marketplaces for workshops to peer-to-peer platforms for e-books and AI-generated art. These showcases serve as an inspiration for developers like you, looking to leverage Ocean.js in your projects, showcasing its adaptability and transformative capabilities. Dive into these examples to see how Ocean.js can bring your creative visions to life. 📚 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Project | Description | Image | Link |
| --- | --- | --- | --- |
| Decentralised Data Marketplace 🌊 | A decentralised marketplace for peer-to-peer online workshops. | marketplace.png | https://github.com/oceanprotocol/market |
| Music NFTs Marketplace 🎼 | A peer-to-peer platform for buying on-demand music NFTs. | ocean_waves.png | https://github.com/oceanprotocol/waves |
| Tokengated Content 🔒 | A decentralised marketplace for buying & selling AI-generated Art. | tokengate.png | https://github.com/oceanprotocol/token-gating-template |
| Tokengated AI Chatbot 🤖 | A decentralised e-commerce platform to sell templates, UI kits and plugins. | tokengated_chatbot.png | https://github.com/oceanprotocol/tokengated-next-chatgpt |
| Buy & Sell Online Workshops 🎓 | A decentralised marketplace for peer-to-peer online workshops. | oceanLearn.jpg | https://www.figma.com/proto/8nT6qEUMMmJsMs8Ow2KzAN/OceanLearn?type=design&scaling=min-zoom&page-id=5%3A44&starting-point-node-id=5%3A91 |
| E-Books On-Demand 📖 | A peer-to-peer platform for reading on-demand e-books. | oceanReads.jpg | https://www.figma.com/proto/xReYRMMnhrynRsNqdy63tT/OceanReads?type=design&node-id=28-380&scaling=min-zoom&page-id=28%3A380&starting-point-node-id=135%3A92 |
| Buy Templates, UI Kits, and plugins 🎨 | A decentralized e-commerce platform to sell templates, UI kits, and plugins. | webPallete.png | https://www.figma.com/proto/xAcyc0rqZNTA8TdW43NN5P/WebPalette?type=design&node-id=0-1&scaling=min-zoom&page-id=0%3A1&starting-point-node-id=9%3A138 |
| Decentralised Ticketing Mobile App 📱 | The first end-to-end decentralized mobile App to buy, sell & trade tickets of any type. | TicketingMobileApp.png | https://www.figma.com/proto/lu5ODNDwIrJmlM0WqBeBJ3/OceanTickets?page-id=75%3A386&type=design&node-id=336-126&viewport=131%2C706%2C0.19&t=ia9UyDUfZxZQS4k1-1&scaling=scale-down&starting-point-node-id=336%3A126 |
| Publish & Collect Digital Art 🖼️ | A decentralised marketplace for buying & selling AI-generated Art. | oceanArt.jpg | https://www.figma.com/proto/LwbMqloVagXnmlaeDCFiJC/OceanArt?type=design&node-id=13-122&scaling=min-zoom&page-id=13%3A122&starting-point-node-id=13%3A169 |

With these examples and showcases, you've seen just a glimpse of what you can achieve with this library. Now, it's your turn to dive in, explore, and unleash your creativity using Ocean.js. 🚀

diff --git a/developers/ocean.js/asset-visibility.md b/developers/ocean.js/asset-visibility.md
deleted file mode 100644
index 276c0810a..000000000
--- a/developers/ocean.js/asset-visibility.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# Asset Visibility
-
-In the Ocean Protocol ecosystem, each asset is associated with a state that is maintained by the NFT (Non-Fungible Token) contract. The [state of an asset](../ddo-specification.md#state) determines its visibility and availability for different actions on platforms like Ocean Market, as well as its appearance in user profiles. To explore the various asset states in detail, please check out the [DDO Specification](../ddo-specification.md#state) page. It provides comprehensive information about the different states that assets can be in.
-
-By assigning specific states to assets, Ocean Protocol enables a structured approach to asset management and visibility. These states help regulate asset discoverability, ordering permissions, and the representation of assets in user profiles, ensuring a controlled and reliable asset ecosystem.
-
-It is possible to remove assets from Ocean Protocol by modifying the state of the asset. Each asset has a state, which is stored in the NFT contract. Additional details regarding asset states can be found at this [link](../ddo-specification.md#state). There is also an asset purgatory that contains information about the purgatory status of an asset, as defined in list-purgatory. For more information about the purgatory, please refer to the [DID and DDO Identifier docs](/developers/identifiers.md).
-
-We can reuse a portion of the previous tutorial on updating metadata and incorporate the steps to update the asset's state in the asset DDO.
-
-#### Prerequisites
-
-* [Obtain an API key](../get-api-keys-for-blockchain-access.md)
-* [Set up the .env file](configuration.md#create-a-env-file)
-* [Install the dependencies](configuration.md#setup-dependencies)
-* [Create a configuration file](configuration.md#create-a-configuration-file)
-
-{% hint style="info" %}
-The variables **AQUARIUS\_URL** and **PROVIDER\_URL** should be set correctly in the `.env` file
-{% endhint %}
-
-#### Create a script to update the state of an asset by updating the asset's metadata
-
-Create a new file in the same working directory where the configuration file (`config.js`) and `.env` files are present, and copy the code as listed below.
-
-{% code overflow="wrap" %}
-```javascript
-// Note: Make sure .env file and config.js are created and set up correctly
-const { oceanConfig } = require('./config.js');
-const { Nft } = require('@oceanprotocol/lib');
-
-// replace the did here
-const did = "did:op:a419f07306d71f3357f8df74807d5d12bddd6bcd738eb0b461470c64859d6f0f";
-
-// This function takes a did as a parameter and updates the data NFT state
-const updateAssetState = async (did) => {
-
-  const publisherAccount = await oceanConfig.publisherAccount.getAddress();
-
-  // Fetch the DDO from Aquarius
-  const asset = await oceanConfig.aquarius.resolve(did);
-
-  const nft = new Nft(oceanConfig.ethersProvider);
-
-  // Update the metadata state and bring it to end-of-life state ("1")
-  await nft.setMetadataState(
-    asset?.nft?.address,
-    publisherAccount,
-    1
-  )
-
-  // Check if the DDO is correctly updated in Aquarius
-  await oceanConfig.aquarius.waitForAqua(did);
-
-  // Fetch the updated asset from Aquarius
-  const updatedAsset = await oceanConfig.aquarius.resolve(did);
-
-  console.log(`Resolved asset did [${updatedAsset.id}] from aquarius.`);
-  console.log(`Updated asset state: [${updatedAsset.nft.state}].`);
-
-};
-
-// Call the updateAssetState(...) function defined above
-updateAssetState(did).then(() => {
-  process.exit();
-}).catch((err) => {
-  console.error(err);
-  process.exit(1);
-});
-```
-{% endcode %}
- -Please note that the implementation details of Compute-to-Data can vary depending on your specific use case. The code examples available in the Ocean.js GitHub repository provide comprehensive illustrations of working with Compute-to-Data in Ocean Protocol. Visit [ComputeExamples.md](https://github.com/oceanprotocol/ocean.js/blob/main/ComputeExamples.md) for detailed code snippets and explanations that guide you through leveraging Compute-to-Data capabilities. - -#### Prerequisites - -* [Obtain an API key](../get-api-keys-for-blockchain-access.md) -* [Set up the .env file](configuration.md#create-a-env-file) -* [Install the dependencies](configuration.md#setup-dependencies) -* [Create a configuration file](configuration.md#create-a-configuration-file) - -{% hint style="info" %} -The variable **AQUARIUS\_URL** and **PROVIDER\_URL** should be set correctly in `.env` file -{% endhint %} - -#### Create a script that starts compute to data using an already published dataset and algorithm - -Create a new file in the same working directory where configuration file (`config.js`) and `.env` files are present, and copy the code as listed below. - -
// Note: Make sure .env file and config.js are created and setup correctly
-const { oceanConfig } = require('./config.js');
-const { ProviderInstance, FixedRateExchange, approve } = require('@oceanprotocol/lib');
-
-// replace the did here
-const datasetDid = "did:op:a419f07306d71f3357f8df74807d5d12bddd6bcd738eb0b461470c64859d6f0f";
-const algorithmDid = "did:op:a419f07306d71f3357f8df74807d5d12bddd6bcd738eb0b461470c64859d6f0f";
-
-// replace these placeholders too: the OCEAN token address plus the
-// fixed-rate exchange addresses/IDs used by the dataset and the algorithm
-const oceanTokenAddress = "0x...";
-const fixedRateExchangeAddress = "0x...";
-const datasetFreAddress = "0x...";
-const algoFreAddress = "0x...";
-
-// This function takes dataset and algorithm dids as a parameters,
-// and starts a compute job for them
-const startComputeJob = async (datasetDid, algorithmDid) => {
-  
-  const consumer = await oceanConfig.consumerAccount.getAddress();
-  
-  // Fetch the dataset and the algorithm from Aquarius
-  const dataset = await oceanConfig.aquarius.resolve(datasetDid);
-  const algorithm = await oceanConfig.aquarius.resolve(algorithmDid);
-  
-  // Fetch the compute environments from the provider and choose the free one
-  const computeEnvs = await ProviderInstance.getComputeEnvironments(oceanConfig.providerURI);
-  const computeEnv = computeEnvs[dataset.chainId].find(
-    (ce) => ce.priceMin === 0
-  )
-  
-  // Request five minutes of compute access
-  const mytime = new Date()
-  const computeMinutes = 5
-  mytime.setMinutes(mytime.getMinutes() + computeMinutes)
-  const computeValidUntil = Math.floor(mytime.getTime() / 1000)
-  
-  // Let's initialize the provider for the compute job
-  const asset = {
-    documentId: dataset.id,
-    serviceId: dataset.services[0].id
-  }
-
-  const algo = {
-    documentId: algorithm.id,
-    serviceId: algorithm.services[0].id
-  }
-  
-  const providerInitializeComputeResults = await ProviderInstance.initializeCompute(
-    [asset],
-    algo,
-    computeEnv.id,
-    computeValidUntil,
-    providerUrl,
-    await consumerAccount.getAddress()
-  )
-  
-  await approve(
-    consumerAccount,
-    config,
-    await consumerAccount.getAddress(),
-    addresses.Ocean,
-    datasetFreAddress,
-    '100'
-  )
-  
-  await approve(
-    consumerAccount,
-    config,
-    await consumerAccount.getAddress(),
-    addresses.Ocean,
-    algoFreAddress,
-    '100'
-  )
-    
-  const fixedRate = new FixedRateExchange(fixedRateExchangeAddress, consumerAccount)
-  const buyDatasetTx = await fixedRate.buyDatatokens(datasetFreAddress, '1', '2')
-  const buyAlgoTx = await fixedRate.buyDatatokens(algoFreAddress, '1', '2')
- 
-  
-  // We now order both the dataset and the algorithm
-  algo.transferTxId = await handleOrder(
-    providerInitializeComputeResults.algorithm,
-    algorithm.services[0].datatokenAddress,
-    consumerAccount,
-    computeEnv.consumerAddress,
-    0
-  )
-  
-  asset.transferTxId = await handleOrder(
-    providerInitializeComputeResults.datasets[0],
-    dataset.services[0].datatokenAddress,,
-    consumerAccount,
-    computeEnv.consumerAddress,
-    0
-  )
-  
-  // Start the compute job for the given dataset and algorithm
-  const computeJobs = await ProviderInstance.computeStart(
-    providerUrl,
-    consumerAccount,
-    computeEnv.id,
-    assets[0],
-    algo
-  )
-  
-  return  computeJobs[0].jobId
-  
-};
-
-const checkIfJobFinished = async (jobId) => {
-  const jobStatus = await ProviderInstance.computeStatus(
-      providerUrl,
-      await consumerAccount.getAddress(),
-      computeJobId,
-      DATASET_DDO.id
-    )
-  if (jobStatus?.status === 70) return true
-  else checkIfJobFinished(jobId)
-}
-
-const checkIfJobFinished = async (jobId) => {
-  const jobStatus = await ProviderInstance.computeStatus(
-      providerUrl,
-      await consumerAccount.getAddress(),
-      computeJobId,
-      DATASET_DDO.id
-    )
-  if (jobStatus?.status === 70) return true
-  else checkIfJobFinished(jobId)
-}
-
-const downloadComputeResults = async (jobId) => {
-  const downloadURL = await ProviderInstance.getComputeResultUrl(
-      oceanConfig.providerURI,
-      oceanConfig.consumerAccount,
-      jobId,
-      0
-    )
-}
-
-// Call startComputeJob(...) checkIfJobFinished(...) downloadComputeResults(...)
-// functions defined above in that particular order 
-startComputeJob(datasetDid, algorithmDid).then((jobId) => {
-  checkIfJobFinished(jobId).then((result) => {
-    downloadComputeResults(jobId).then((result) => {
-      process.exit();
-    })
-  })
-}).catch((err) => {
-  console.error(err);
-  process.exit(1);
-});
-
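-
-The script above relies on a `handleOrder(...)` helper that either reuses a previous order or starts a fresh one. The version below is a simplified sketch adapted from the flow in [ComputeExamples.md](https://github.com/oceanprotocol/ocean.js/blob/main/ComputeExamples.md); it ignores provider-fee re-validation and market fees, so prefer copying the full helper from that file for production use.
-
-{% code overflow="wrap" %}
-```javascript
-const { Datatoken } = require('@oceanprotocol/lib');
-
-// Simplified sketch: returns the transaction ID that proves the order
-async function handleOrder(order, datatokenAddress, payerAccount, consumerAddress, serviceIndex) {
-  // If the provider reports a still-valid order with no new fees, just reuse it
-  if (order.validOrder && !order.providerFee) return order.validOrder;
-
-  const datatoken = new Datatoken(payerAccount);
-  // Otherwise start a fresh order, paying one datatoken plus provider fees
-  const tx = await datatoken.startOrder(
-    datatokenAddress,
-    consumerAddress,
-    serviceIndex,
-    order.providerFee
-  );
-  const receipt = await tx.wait();
-  return receipt.transactionHash;
-}
-```
-{% endcode %}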
-
diff --git a/developers/ocean.js/configuration.md b/developers/ocean.js/configuration.md
deleted file mode 100644
index 3f94dbe8b..000000000
--- a/developers/ocean.js/configuration.md
+++ /dev/null
@@ -1,179 +0,0 @@
-# Configuration
-
-For obtaining the API keys for blockchain access and setting the correct environment variables, please consult [this section](../get-api-keys-for-blockchain-access.md) first, then proceed with the next steps.
-
-### Create a directory
-
-Let's start by creating a working directory where we will store the environment variable file, the configuration files, and the scripts.
-
-```bash
-mkdir my-ocean-project
-cd my-ocean-project
-```
-
-### Create a `.env` file
-
-In the working directory create a `.env` file. This file will store the values for the following variables:
-
-| Variable name | Description | Required |
-| ------------- | ----------- | -------- |
-| OCEAN_NETWORK | Name of the network where the Ocean Protocol smart contracts are deployed. | Yes |
-| OCEAN_NETWORK_URL | The URL of the Ethereum node (along with the API key for non-local networks). | Yes |
-| PRIVATE_KEY | The private key of the account which you want to use. A private key is made up of 64 hex characters. Make sure you have sufficient balance to pay for the transaction fees. | Yes |
-| AQUARIUS_URL | The URL of Aquarius. This value is needed when reading an asset from the off-chain store. | No |
-| PROVIDER_URL | The URL of the Provider. This value is needed when publishing a new asset or updating an existing asset. | No |
-
-{% hint style="info" %}
-Treat this file as a secret and do not commit it to git or share its content publicly. If you are using git, add the file name to your `.gitignore` file.
-{% endhint %}
-
-The tabs below show partially filled `.env` file content for some of the supported networks.
-
-{% tabs %}
-{% tab title="Mainnet" %}
-{% code title=".env" %}
-
-```bash
-# Mandatory environment variables
-
-OCEAN_NETWORK=mainnet
-OCEAN_NETWORK_URL=
-PRIVATE_KEY=
-
-# Optional environment variables
-
-AQUARIUS_URL=https://v4.aquarius.oceanprotocol.com/
-PROVIDER_URL=https://v4.provider.oceanprotocol.com
-```
-
-{% endcode %}
-{% endtab %}
-
-{% tab title="Polygon" %}
-{% code title=".env" %}
-
-```bash
-# Mandatory environment variables
-
-OCEAN_NETWORK=polygon
-OCEAN_NETWORK_URL=
-PRIVATE_KEY=
-
-# Optional environment variables
-
-AQUARIUS_URL=https://v4.aquarius.oceanprotocol.com/
-PROVIDER_URL=https://v4.provider.oceanprotocol.com
-```
-
-{% endcode %}
-{% endtab %}
-
-{% tab title="Local (using Barge)" %}
-{% code title=".env" %}
-
-```bash
-# Mandatory environment variables
-OCEAN_NETWORK=development
-OCEAN_NETWORK_URL=http://172.15.0.3:8545/
-AQUARIUS_URL=http://172.15.0.5:5000
-PROVIDER_URL=http://172.15.0.4:8030
-
-# Replace PRIVATE_KEY if needed
-PRIVATE_KEY=0xc594c6e5def4bab63ac29eed19a134c130388f74f019bc74b8f4389df2837a58
-```
-
-{% endcode %}
-{% endtab %}
-{% endtabs %}
-
-Replace the empty values with the appropriate ones. You can see the configuration of all supported networks in Ocean.js' [config helper](https://github.com/oceanprotocol/ocean.js/blob/main/src/config/ConfigHelper.ts#L42).
-
-### Setup dependencies
-
-In this step, all required dependencies will be installed.
-
-### Installation & Usage
-
-Let's install the Ocean.js library into your current project by running:
-
-{% tabs %}
-{% tab title="Terminal" %}
-{% code overflow="wrap" %}
-
-```bash
-npm init
-npm i @oceanprotocol/lib@latest dotenv crypto-js ethers@5.7.4 @truffle/hdwallet-provider
-```
-
-{% endcode %}
-{% endtab %}
-{% endtabs %}
-
-### Create a configuration file
-
-The configuration file reads the content of the `.env` file and initializes the required configuration objects, which will be used in the following tutorials. The script below creates an ethers wallet instance and an Ocean configuration object.
-
-Create the configuration file in the working directory, i.e. at the same path where the `.env` file is located.
-
-{% tabs %}
-{% tab title="config.js" %}
-{% code title="config.js" %}
-
-```javascript
-require("dotenv").config();
-const {
-  Aquarius,
-  ConfigHelper,
-  configHelperNetworks,
-} = require("@oceanprotocol/lib");
-const ethers = require("ethers");
-const fs = require("fs");
-const { homedir } = require("os");
-
-async function oceanConfig() {
-  const provider = new ethers.providers.JsonRpcProvider(
-    process.env.OCEAN_NETWORK_URL || configHelperNetworks[1].nodeUri
-  );
-  const publisherAccount = new ethers.Wallet(process.env.PRIVATE_KEY, provider);
-
-  let oceanConfig = new ConfigHelper().getConfig(
-    parseInt(String((await publisherAccount.provider.getNetwork()).chainId))
-  );
-  const aquarius = new Aquarius(oceanConfig?.metadataCacheUri);
-
-  // If using the local development environment, read the addresses from the local file.
-  // The local deployment address file can be generated using Barge.
-  if (process.env.OCEAN_NETWORK === "development") {
-    const addresses = JSON.parse(
-      // eslint-disable-next-line security/detect-non-literal-fs-filename
-      fs.readFileSync(
-        process.env.ADDRESS_FILE ||
-          `${homedir()}/.ocean/ocean-contracts/artifacts/address.json`,
-        "utf8"
-      )
-    ).development;
-
-    oceanConfig = {
-      ...oceanConfig,
-      oceanTokenAddress: addresses.Ocean,
-      fixedRateExchangeAddress: addresses.FixedPrice,
-      dispenserAddress: addresses.Dispenser,
-      nftFactoryAddress: addresses.ERC721Factory,
-      opfCommunityFeeCollector: addresses.OPFCommunityFeeCollector,
-    };
-  }
-
-  oceanConfig = {
-    ...oceanConfig,
-    publisherAccount: publisherAccount,
-    consumerAccount: publisherAccount,
-    aquarius: aquarius,
-  };
-
-  return oceanConfig;
-}
-
-module.exports = {
-  oceanConfig,
-};
-```
-
-{% endcode %}
-{% endtab %}
-{% endtabs %}
-
-Now you have set up the necessary files and configurations to interact with Ocean Protocol's smart contracts using ocean.js. You can proceed with further tutorials or development using these configurations.
diff --git a/developers/ocean.js/consume-asset.md b/developers/ocean.js/consume-asset.md
deleted file mode 100644
index bb0171888..000000000
--- a/developers/ocean.js/consume-asset.md
+++ /dev/null
@@ -1,129 +0,0 @@
-# Consume Asset
-
-Consuming an asset involves a two-step process: **placing an order** and then **using the order** transaction to **download** and **access** the asset's files. Let's delve into each step in more detail.
-
-To initiate the ordering process, there are two scenarios, depending on the pricing schema of the asset. Firstly, if the asset has a fixed-rate pricing schema configured, you need to acquire the corresponding datatoken by purchasing it. Once you have obtained the datatoken, you send it to the publisher to place the order for the asset.
-
-The second scenario applies when the asset follows a free pricing schema. In this case, you can obtain a free datatoken from the dispenser service provided by Ocean Protocol. Using the acquired free datatoken, you can place the order for the desired asset.
-
-However, it's crucial to note that even when utilizing free assets, network gas fees still apply. These fees cover the costs associated with executing transactions on the blockchain network.
-
-Additionally, the specific type of datatoken associated with an asset influences the ordering process. There are two common datatoken templates: Template 1 (regular template) and Template 2 (enterprise template). The type of template determines the sequence of method calls required before placing an order.
-
-For assets utilizing Template '1', prior to ordering, you need to perform two separate method calls. First, you call the `approve` method to grant permission for the fixedRateExchange contract to spend the required amount of datatokens. Then, you call the `buyDatatokens` method on the fixedRateExchange contract. This process ensures that you have the necessary datatokens in your possession to successfully place the order. Alternatively, if the asset follows a free pricing schema, you can employ the `dispenser.dispense` method to obtain the free datatoken before proceeding with the order.
-
-On the other hand, assets utilizing Template '2' offer bundled methods for a more streamlined approach. For ordering such assets, you can use methods like `buyFromFreAndOrder` or `buyFromDispenserAndOrder`. These bundled methods handle the acquisition of the necessary datatokens and the subsequent ordering process in a single step, simplifying the workflow for enterprise-template assets (a minimal sketch of the bundled call follows the consume script below).
-
-Later on, when working with the ocean.js library, you can use the order transaction identifier returned by this step to call the `getDownloadUrl` method of the provider service class. This method allows you to retrieve the download URL for accessing the asset's files.
-
-#### Prerequisites
-
-- [Obtain an API key](../get-api-keys-for-blockchain-access.md)
-- [Set up the .env file](configuration.md#create-a-env-file)
-- [Install the dependencies](configuration.md#setup-dependencies)
-- [Create a configuration file](configuration.md#create-a-configuration-file)
-
-{% hint style="info" %}
-The variables **AQUARIUS_URL** and **PROVIDER_URL** should be set correctly in the `.env` file
-{% endhint %}
-
-#### Create a script to consume an asset
-
-Create a new file in the same working directory where the configuration file (`config.js`) and `.env` files are present, and copy the code listed below.
-
-{% code overflow="wrap" %}
-```javascript
-// Note: Make sure .env file and config.js are created and set up correctly
-const { oceanConfig } = require("./config.js");
-const {
-  Datatoken,
-  ProviderInstance,
-  FixedRateExchange,
-  approve,
-  getEventFromTx
-} = require("@oceanprotocol/lib");
-
-// Replace the DID here
-const did = "did:op:a419f07306d71f3357f8df74807d5d12bddd6bcd738eb0b461470c64859d6f0f";
-
-// Replace this with the exchange ID of the asset's fixed-rate exchange
-const exchangeId = "0x...";
-
-// This function takes a DID as a parameter and consumes the corresponding asset
-const consumeAsset = async (did) => {
-  const config = await oceanConfig();
-  const consumer = config.consumerAccount;
-  const consumerAddress = await consumer.getAddress();
-
-  // Fetch the DDO from Aquarius
-  const asset = await config.aquarius.resolve(did);
-
-  // Allow the fixed-rate exchange to spend one OCEAN on our behalf
-  await approve(
-    consumer,
-    config,
-    consumerAddress,
-    config.oceanTokenAddress,
-    config.fixedRateExchangeAddress,
-    "1"
-  );
-
-  const fixedRate = new FixedRateExchange(
-    config.fixedRateExchangeAddress,
-    consumer
-  );
-
-  // Buy one datatoken, paying at most two base tokens
-  const txBuyDt = await fixedRate.buyDatatokens(
-    exchangeId,
-    "1",
-    "2"
-  );
-
-  const initializeData = await ProviderInstance.initialize(
-    asset.id,
-    asset.services[0].id,
-    0,
-    consumerAddress,
-    config.providerUri
-  );
-
-  const providerFees = {
-    providerFeeAddress: initializeData.providerFee.providerFeeAddress,
-    providerFeeToken: initializeData.providerFee.providerFeeToken,
-    providerFeeAmount: initializeData.providerFee.providerFeeAmount,
-    v: initializeData.providerFee.v,
-    r: initializeData.providerFee.r,
-    s: initializeData.providerFee.s,
-    providerData: initializeData.providerFee.providerData,
-    validUntil: initializeData.providerFee.validUntil,
-  };
-
-  const datatoken = new Datatoken(consumer);
-
-  // Place the order by spending the datatoken
-  const tx = await datatoken.startOrder(
-    asset.services[0].datatokenAddress,
-    consumerAddress,
-    0,
-    providerFees
-  );
-
-  const orderTx = await tx.wait();
-  const orderStartedTx = getEventFromTx(orderTx, "OrderStarted");
-
-  // Use the order transaction to retrieve the download URL
-  const downloadURL = await ProviderInstance.getDownloadUrl(
-    asset.id,
-    asset.services[0].id,
-    0,
-    orderTx.transactionHash,
-    config.providerUri,
-    consumer
-  );
-
-  console.log(`Download URL: [${downloadURL}]`);
-};
-
-// Call the consumeAsset(...) function defined above
-consumeAsset(did).then(() => {
-  process.exit();
-}).catch((err) => {
-  console.error(err);
-  process.exit(1);
-});
-```
-{% endcode %}
-
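-
-For Template 2 (enterprise) assets, the approve/buy/order steps above collapse into one call. The sketch below is a rough illustration, not taken from the original examples: it reuses the `config`, `asset` and `exchangeId` names from the script above, and the exact parameter shapes (order params, fixed-rate order params) vary across ocean.js versions, so treat every field as an assumption to verify against the version you installed.
-
-{% code overflow="wrap" %}
-```javascript
-// Hypothetical sketch of the Template 2 bundled flow
-const buyAndOrderTemplate2 = async (config, asset, exchangeId, providerFees) => {
-  const consumer = config.consumerAccount;
-  const datatoken = new Datatoken(consumer);
-
-  const orderParams = {
-    consumer: await consumer.getAddress(),
-    serviceIndex: 0,
-    _providerFee: providerFees,
-    _consumeMarketFee: {
-      consumeMarketFeeAddress: '0x0000000000000000000000000000000000000000',
-      consumeMarketFeeToken: '0x0000000000000000000000000000000000000000',
-      consumeMarketFeeAmount: '0'
-    }
-  };
-
-  const freParams = {
-    exchangeContract: config.fixedRateExchangeAddress,
-    exchangeId: exchangeId,
-    maxBaseTokenAmount: '2',
-    baseTokenAddress: config.oceanTokenAddress,
-    baseTokenDecimals: 18,
-    swapMarketFee: '0',
-    marketFeeAddress: '0x0000000000000000000000000000000000000000'
-  };
-
-  // One transaction: buy the datatoken from the fixed-rate exchange and place the order
-  const tx = await datatoken.buyFromFreAndOrder(
-    asset.services[0].datatokenAddress,
-    orderParams,
-    freParams
-  );
-  return await tx.wait();
-};
-```
-{% endcode %}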
diff --git a/developers/ocean.js/creating-datanft.md b/developers/ocean.js/creating-datanft.md
deleted file mode 100644
index c78cd6008..000000000
--- a/developers/ocean.js/creating-datanft.md
+++ /dev/null
@@ -1,78 +0,0 @@
-# Creating a data NFT
-
-This tutorial guides you through the process of creating your own data NFT using the Ocean libraries. To learn more about data NFTs, please refer to [this page](../contracts/data-nfts.md).
-
-#### Prerequisites
-
-* [Obtain an API key](../get-api-keys-for-blockchain-access.md)
-* [Set up the .env file](configuration.md#create-a-env-file)
-* [Install the dependencies](configuration.md#setup-dependencies)
-* [Create a configuration file](configuration.md#create-a-configuration-file)
-
-#### Create a script to deploy a data NFT
-
-The provided script demonstrates how to create a data NFT using Ocean.js.
-
-First, create a new file in the working directory, alongside the `config.js` and `.env` files. Name it `create_dataNFT.js` (or any appropriate name). Then, copy the following code into the newly created file:
-
-{% tabs %}
-{% tab title="create_dataNFT.js" %}
-{% code title="create_dataNFT.js" overflow="wrap" %}
-```javascript
-// Note: Make sure .env file and config.js are created and set up correctly
-const { oceanConfig } = require('./config.js');
-const { NftFactory, getEventFromTx } = require('@oceanprotocol/lib');
-
-// Define a function which will create a data NFT using the Ocean.js library
-const createDataNFT = async () => {
-  let config = await oceanConfig();
-  // Create an NftFactory instance
-  const factory = new NftFactory(config.nftFactoryAddress, config.publisherAccount);
-
-  const publisherAddress = await config.publisherAccount.getAddress();
-
-  // Define data NFT parameters
-  const nftParams = {
-    name: '72120Bundle',
-    symbol: '72Bundle',
-    // Optional parameters
-    templateIndex: 1,
-    tokenURI: 'https://example.com',
-    transferable: true,
-    owner: publisherAddress
-  };
-
-  const tx = await factory.createNFT(nftParams);
-  const trxReceipt = await tx.wait();
-
-  // Read the address of the freshly deployed NFT from the NFTCreated event
-  const nftAddress = getEventFromTx(trxReceipt, 'NFTCreated').args.newTokenAddress;
-
-  return {
-    trxReceipt,
-    nftAddress
-  };
-};
-
-// Call the createDataNFT() function defined above
-createDataNFT()
-  .then(({ nftAddress }) => {
-    console.log(`DataNft address ${nftAddress}`);
-    process.exit();
-  })
-  .catch((err) => {
-    console.error(err);
-    process.exit(1);
-  });
-```
-{% endcode %}
-
-Run script:
-
-```bash
-node create_dataNFT.js
-```
-{% endtab %}
-{% endtabs %}
-
-* Check out these [code examples](https://github.com/oceanprotocol/ocean.js/blob/main/CodeExamples.md#L0-L1) or [compute to data examples](https://github.com/oceanprotocol/ocean.js/blob/main/ComputeExamples.md#L417) to see how you can use ocean.js.
-* If you have any difficulties or if you have further questions about how to use ocean.js please reach out to us on [Discord](https://discord.gg/TnXjkR5).
-* If you notice any bugs or issues with ocean.js please [open an issue on github](https://github.com/oceanprotocol/ocean.js/issues/new?assignees=\&labels=bug\&template=bug\_report.md\&title=).
-* Visit the [Ocean Protocol website](https://oceanprotocol.com/) for general information about Ocean Protocol.
diff --git a/developers/ocean.js/mint-datatoken.md b/developers/ocean.js/mint-datatoken.md
deleted file mode 100644
index 971794b16..000000000
--- a/developers/ocean.js/mint-datatoken.md
+++ /dev/null
@@ -1,85 +0,0 @@
-# Mint Datatokens
-
-This tutorial guides you through the process of minting datatokens and sending them to a receiver address. The tutorial assumes that you already have the address of a datatoken contract that you own.
-
-#### Prerequisites
-
-* [Obtain an API key](../get-api-keys-for-blockchain-access.md)
-* [Set up the .env file](configuration.md#create-a-env-file)
-* [Install the dependencies](configuration.md#setup-dependencies)
-* [Create a configuration file](configuration.md#create-a-configuration-file)
-
-#### Create a script to mint datatokens
-
-Create a new file in the same working directory where the configuration file (`config.js`) and `.env` files are present, and copy the code listed below.
-
-{% tabs %}
-{% tab title="mint_datatoken.js" %}
-{% code title="mint_datatoken.js" overflow="wrap" %}
-```javascript
-// Note: Make sure .env file and config.js are created and set up correctly
-const { oceanConfig } = require('./config.js');
-const { amountToUnits, sendTx } = require('@oceanprotocol/lib');
-const ethers = require('ethers');
-
-// Define a function createMINT()
-const createMINT = async () => {
-
-  let config = await oceanConfig();
-  const publisher = config.publisherAccount
-  const publisherAddress = await config.publisherAccount.getAddress()
-
-  // Minimal ABI exposing only the mint(to, value) function
-  const minAbi = [
-    {
-      constant: false,
-      inputs: [
-        { name: 'to', type: 'address' },
-        { name: 'value', type: 'uint256' }
-      ],
-      name: 'mint',
-      outputs: [{ name: '', type: 'bool' }],
-      payable: false,
-      stateMutability: 'nonpayable',
-      type: 'function'
-    }
-  ]
-
-  // Replace config.oceanTokenAddress with the address of your own datatoken contract
-  const tokenContract = new ethers.Contract(config.oceanTokenAddress, minAbi, publisher)
-  const estGasPublisher = await tokenContract.estimateGas.mint(
-    publisherAddress,
-    amountToUnits(null, null, '1000', 18)
-  )
-  // Mint 1000 tokens to the publisher address
-  const trxReceipt = await sendTx(
-    estGasPublisher,
-    publisher,
-    1,
-    tokenContract.mint,
-    publisherAddress,
-    amountToUnits(null, null, '1000', 18)
-  )
-
-  return {
-    trxReceipt
-  };
-};
-
-// Call the createMINT() function
-createMINT()
-  .then(({ trxReceipt }) => {
-    console.log(`TX Receipt ${trxReceipt}`);
-    process.exit();
-  })
-  .catch((err) => {
-    console.error(err);
-    process.exit(1);
-  });
-```
-{% endcode %}
-
-**Execute script**
-
-```bash
-node mint_datatoken.js
-```
-{% endtab %}
-{% endtabs %}
diff --git a/developers/ocean.js/publish.md b/developers/ocean.js/publish.md
deleted file mode 100644
index e0efeb6e1..000000000
--- a/developers/ocean.js/publish.md
+++ /dev/null
@@ -1,190 +0,0 @@
-# Publish
-
-This tutorial guides you through the process of creating your own data NFT and datatoken using the Ocean libraries. To learn more about data NFTs and datatokens, please refer to [this page](../contracts/datanft-and-datatoken.md). Ocean Protocol supports different pricing schemes which can be set while publishing an asset. Please refer to [this page](../contracts/pricing-schemas.md) for more details on pricing schemes.
-
-#### Prerequisites
-
-* [Obtain an API key](../get-api-keys-for-blockchain-access.md)
-* [Set up the .env file](configuration.md#create-a-env-file)
-* [Install the dependencies](configuration.md#setup-dependencies)
-* [Create a configuration file](configuration.md#create-a-configuration-file)
-
-#### Create a script to deploy a data NFT and datatoken with the price schema you chose
-
-Create a new file in the same working directory where the configuration file (`config.js`) and `.env` files are present, and copy the code listed below.
-
-{% hint style="info" %}
-**Fees**: The code snippets below define fee-related parameters. Please refer to the [fees page](../contracts/fees.md) for more details.
-{% endhint %}
-
-The code utilizes methods such as `NftFactory` and `Datatoken` from the Ocean libraries to enable you to interact with Ocean Protocol and perform various operations related to data NFTs and datatokens.
-
-The `createFRE()` function performs the following steps:
-
-1. Creates a web3 instance and imports the Ocean configs.
-2. Retrieves the accounts from the web3 instance and sets the publisher.
-3. Defines parameters for the data NFT, including name, symbol, template index, token URI, transferability, and owner.
-4. Defines parameters for the datatoken, including name, symbol, template index, cap, fee amount, payment collector address, fee token address, minter, and multi-party fee address.
-5. Defines parameters for the price schema, including the fixed rate address, base token address, owner, market fee collector, base token decimals, datatoken decimals, fixed rate, market fee, and optional parameters.
-6. Uses the NftFactory to create a data NFT and datatoken with the fixed rate exchange, using the specified parameters.
-7. Retrieves the addresses of the data NFT and datatoken from the result.
-8. Returns the data NFT and datatoken addresses.
-
-{% tabs %}
-{% tab title="create_datatoken_with_fre.js" %}
-{% code title="create_datatoken_with_fre.js" overflow="wrap" %}
-```javascript
-// Note: Make sure .env file and config.js are created and set up correctly
-const { oceanConfig } = require('./config.js');
-const { ZERO_ADDRESS, NftFactory } = require('@oceanprotocol/lib');
-
-// Define a function createFRE()
-const createFRE = async () => {
-
-  const FRE_NFT_NAME = 'Datatoken 2'
-  const FRE_NFT_SYMBOL = 'DT2'
-
-  let config = await oceanConfig();
-
-  // Create an NftFactory instance
-  const factory = new NftFactory(config.nftFactoryAddress, config.publisherAccount);
-
-  const nftParams = {
-    name: FRE_NFT_NAME,
-    symbol: FRE_NFT_SYMBOL,
-    templateIndex: 1,
-    tokenURI: '',
-    transferable: true,
-    owner: await config.publisherAccount.getAddress()
-  }
-
-  const datatokenParams = {
-    templateIndex: 1,
-    cap: '100000',
-    feeAmount: '0',
-    paymentCollector: ZERO_ADDRESS,
-    feeToken: ZERO_ADDRESS,
-    minter: await config.publisherAccount.getAddress(),
-    mpFeeAddress: ZERO_ADDRESS
-  }
-
-  const freParams = {
-    fixedRateAddress: config.fixedRateExchangeAddress,
-    baseTokenAddress: config.oceanTokenAddress,
-    owner: await config.publisherAccount.getAddress(),
-    marketFeeCollector: await config.publisherAccount.getAddress(),
-    baseTokenDecimals: 18,
-    datatokenDecimals: 18,
-    fixedRate: '1',
-    marketFee: '0.001',
-    allowedConsumer: ZERO_ADDRESS,
-    withMint: true
-  }
-
-  const bundleNFT = await factory.createNftWithDatatokenWithFixedRate(
-    nftParams,
-    datatokenParams,
-    freParams
-  )
-
-  const trxReceipt = await bundleNFT.wait()
-
-  return {
-    trxReceipt
-  };
-};
-
-// Call the createFRE() function
-createFRE()
-  .then(({ trxReceipt }) => {
-    console.log(`TX Receipt ${trxReceipt}`);
-    process.exit();
-  })
-  .catch((err) => {
-    console.error(err);
-    process.exit(1);
-  });
-```
-{% endcode %}
-
-Execute script
-
-```bash
-node create_datatoken_with_fre.js
-```
-{% endtab %}
-
-{% tab title="create_datatoken_with_free.js" %}
-{% code title="create_datatoken_with_free.js" overflow="wrap" %}
-```javascript
-// Note: Make sure .env file and config.js are created and set up correctly
-const { oceanConfig } = require('./config.js');
-const { ZERO_ADDRESS, NftFactory } = require('@oceanprotocol/lib');
-
-// Define a function createDispenser()
-const createDispenser = async () => {
-
-  const DISP_NFT_NAME = 'Datatoken 3'
-  const DISP_NFT_SYMBOL = 'DT3'
-
-  let config = await oceanConfig();
-
-  // Create an NftFactory instance
-  const factory = new NftFactory(config.nftFactoryAddress, config.publisherAccount);
-
-  const nftParams = {
-    name: DISP_NFT_NAME,
-    symbol: DISP_NFT_SYMBOL,
-    templateIndex: 1,
-    tokenURI: '',
-    transferable: true,
-    owner: await config.publisherAccount.getAddress()
-  }
-
-  const datatokenParams = {
-    templateIndex: 1,
-    cap: '100000',
-    feeAmount: '0',
-    paymentCollector: ZERO_ADDRESS,
-    feeToken: ZERO_ADDRESS,
-    minter: await config.publisherAccount.getAddress(),
-    mpFeeAddress: ZERO_ADDRESS
-  }
-
-  const dispenserParams = {
-    dispenserAddress: config.dispenserAddress,
-    maxTokens: '1',
-    maxBalance: '1',
-    withMint: true,
-    allowedSwapper: ZERO_ADDRESS
-  }
-
-  const bundleNFT = await factory.createNftWithDatatokenWithDispenser(
-    nftParams,
-    datatokenParams,
-    dispenserParams
-  )
-
-  const trxReceipt = await bundleNFT.wait()
-
-  return {
-    trxReceipt
-  };
-};
-
-// Call the createDispenser() function
-createDispenser()
-  .then(({ trxReceipt }) => {
-    console.log(`TX Receipt ${trxReceipt}`);
-    process.exit();
-  })
-  .catch((err) => {
-    console.error(err);
-    process.exit(1);
-  });
-```
-{% endcode %}
-{% endtab %}
-{% endtabs %}
-
-By utilizing these dependencies and configuration settings, the script can leverage the functionality provided by the Ocean libraries and interact effectively with the Ocean Protocol ecosystem.
diff --git a/developers/ocean.js/update-metadata.md b/developers/ocean.js/update-metadata.md
deleted file mode 100644
index d15a8e439..000000000
--- a/developers/ocean.js/update-metadata.md
+++ /dev/null
@@ -1,96 +0,0 @@
-# Update Metadata
-
-This tutorial guides you through updating an existing asset published on-chain using the Ocean libraries. The tutorial assumes that you already have the `did` of the asset which needs to be updated. In this tutorial, we will update the name, description, and tags of the data NFT. Please refer to [the page on DDO](../ddo-specification.md) to learn more about the additional fields that can be updated.
-
-#### Prerequisites
-
-* [Obtain an API key](../get-api-keys-for-blockchain-access.md)
-* [Set up the .env file](configuration.md#create-a-env-file)
-* [Install the dependencies](configuration.md#setup-dependencies)
-* [Create a configuration file](configuration.md#create-a-configuration-file)
-
-{% hint style="info" %}
-The variables **AQUARIUS\_URL** and **PROVIDER\_URL** should be set correctly in the `.env` file
-{% endhint %}
-
-#### Create a script to update the metadata
-
-Create a new file in the same working directory where the configuration file (`config.js`) and `.env` files are present, and copy the code listed below.
-
-{% tabs %}
-{% tab title="ocean.js" %}
-{% code title="updateMetadata.js" overflow="wrap" %}
-```javascript
-// Note: Make sure .env file and config.js are created and set up correctly
-const { oceanConfig } = require('./config.js');
-const { getHash, Nft, ProviderInstance } = require('@oceanprotocol/lib');
-
-// Replace the DID here
-const did = "did:op:a419f07306d71f3357f8df74807d5d12bddd6bcd738eb0b461470c64859d6f0f";
-
-// This function takes a DID as a parameter and updates the data NFT metadata
-const setMetadata = async (did) => {
-
-  const config = await oceanConfig();
-  const publisherAccount = await config.publisherAccount.getAddress();
-
-  // Fetch the DDO from Aquarius
-  const ddo = await config.aquarius.resolve(did);
-
-  const nft = new Nft(config.publisherAccount);
-
-  // Update the DDO here
-  ddo.metadata.name = "Sample dataset v2";
-  ddo.metadata.description = "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam";
-  ddo.metadata.tags = ["new tag1", "new tag2"];
-
-  // Encrypt the updated DDO via the Provider service
-  const encryptedResponse = await ProviderInstance.encrypt(ddo, ddo.chainId, config.providerUri);
-  const metadataHash = getHash(JSON.stringify(ddo));
-
-  // Update the data NFT metadata
-  await nft.setMetadata(
-    ddo.nftAddress,
-    publisherAccount,
-    0,
-    config.providerUri,
-    '',
-    '0x2',
-    encryptedResponse,
-    `0x${metadataHash}`
-  );
-
-  // Check that the DDO is correctly updated in Aquarius
-  await config.aquarius.waitForAqua(ddo.id);
-
-  console.log(`Resolved asset did [${ddo.id}] from Aquarius.`);
-  console.log(`Updated name: [${ddo.metadata.name}].`);
-  console.log(`Updated description: [${ddo.metadata.description}].`);
-  console.log(`Updated tags: [${ddo.metadata.tags}].`);
-
-};
-
-// Call the setMetadata(...) function defined above
-setMetadata(did).then(() => {
-  process.exit();
-}).catch((err) => {
-  console.error(err);
-  process.exit(1);
-});
-```
-{% endcode %}
-
-Execute the script
-
-```bash
-node updateMetadata.js
-```
-{% endtab %}
-{% endtabs %}
-
-We provided several code examples using the Ocean.js library for interacting with Ocean Protocol. Some highlights from the [code examples](https://github.com/oceanprotocol/ocean.js/blob/main/CodeExamples.md) ([compute examples](https://github.com/oceanprotocol/ocean.js/blob/main/ComputeExamples.md)) are:
-
-1. **Minting an NFT** - This example demonstrates how to mint an NFT (Non-Fungible Token) using the Ocean.js library. It shows the necessary steps, including creating an NftFactory instance, defining NFT parameters, and calling the `create()` method to mint the NFT.
-2. **Publishing a dataset** - This example explains how to publish a dataset on the Ocean Protocol network. It covers steps such as creating a DDO, signing the DDO, and publishing the dataset.
-3. **Consuming a dataset** - This example demonstrates how to consume a published dataset. It shows how to search for available assets, retrieve the DDO for a specific asset, order the asset using a specific datatoken, and then download the asset.
-
-You can explore more detailed code examples and explanations in the Ocean.js [readme](https://github.com/oceanprotocol/ocean.js#readme).
diff --git a/developers/oe-software-stack-components.md b/developers/oe-software-stack-components.md
new file mode 100644
index 000000000..aa2a0a1bb
--- /dev/null
+++ b/developers/oe-software-stack-components.md
@@ -0,0 +1,76 @@
+# OE software stack components
+
+The Ocean Enterprise software stack comprises the following components:
+
+## Marketplace
+
+The Marketplace provides the user interface through which dataspace participants manage their own assets and access assets shared by others. Main features include:
+
+* Support for assets with multiple services
+* Verification flow for SSI-based access control to assets
+* Compute-to-Data wizard for starting C2D jobs
+* Support for consumers' parameters in download and C2D flows
+* User dashboard
+
+## OE Node
+
+The OE Node is the heart of the system. It is a multi-role component providing the following features:
+
+* Encryption of the asset's URI and description during asset publishing
+* Indexing of the published assets
+* On-chain verification of consumer payments
+* Streaming of the asset's data directly to the consumer, without revealing the asset's URI
+* Provisioning of a Compute-to-Data environment to run jobs using the published assets
+* Management of the C2D jobs
+
+## Policy Server
+
+The Policy Server is used by the OE Node to determine whether a consumer is authorized to access an asset’s service. It validates access credentials using both the consumer’s web3 address and any SSI‑based access policies defined for the asset. Its core responsibilities include:
+
+* Verifying access based on the consumer’s web3 address.
+* Checking for SSI-based access control policies if the web3 address is permitted.
+* Forwarding any SSI-based access policies to the Verifier component for evaluation.
+* Facilitating the data exchange between the Verifier and the consumer’s SSI Wallet to validate the Verifiable Credentials against the asset’s SSI policies.
+* Relaying the Verifier’s allow/deny decision back to the OE Node.
+
+## Policy Server Proxy
+
+The Policy Server Proxy exposes a set of endpoints that the SSI Wallet uses to communicate with the Verifier. Its purpose is to preserve the expected communication flow between the SSI Wallet and the Verifier while routing all interactions through the OE Node and the Policy Server components.
+
+## Verifier
+
+The Verifier (a component provided by [walt.id](https://walt.id/)) validates a wide range of digital credentials, such as W3C VCs and SD‑JWT VCs. It checks signatures, formats, and trust policies, and allows you to customize verification behavior through configurable policies, including support for ecosystems like EBSI. For more information about the walt.id verifier, please consult this link: [https://docs.walt.id/community-stack/verifier/getting-started](https://docs.walt.id/community-stack/verifier/getting-started)
+
+In the context of the OE software stack, the Verifier provides the following core functionalities:
+
+* Receives the requested credentials and the verification policies from the Policy Server and initiates a presentation session
+* Validates the Verifiable Credentials submitted by the SSI Wallet, for a specific verification request, against the verification policies and returns a success/failure message to the Policy Server
+* Invokes the OPA Server to assess and enforce custom policies during the verification process
+* Optionally interacts with external credential verification services to perform advanced or domain‑specific validation of Verifiable Credentials
+
+## SSI Wallet
+
+The SSI Wallet (a component provided by [walt.id](https://walt.id/)) is an API-driven identity wallet that lets users store, manage, and present a wide range of digital credentials, including W3C VCs and SD‑JWT VCs, using the OIDC4VC standards. It also allows users to manage keys and DIDs.
+
+In the context of the OE software stack, the SSI Wallet provides the following core functionalities:
+
+* Securely stores the participant's private keys, DIDs, and Verifiable Credentials
+* Constructs a Verifiable Presentation from the credentials selected by the participant and submits it to the Verifier for validation
+
+**Note**: It is recommended that, in a production environment, each participant deploys its own SSI Wallet instance within its infrastructure to securely store keys, DIDs, and Verifiable Credentials. However, the marketplace operator may also provide a default SSI Wallet instance, which the marketplace will automatically use whenever a participant does not supply its own.
+
+## OPA Server
+
+The Open Policy Agent Server (available [here](https://www.openpolicyagent.org/)) is a general-purpose policy engine that unifies policy enforcement, based on a high-level declarative language.
+
+In the context of the OE software stack, the OPA Server is invoked by the Verifier to assess a set of rules and return a true/false response.
diff --git a/developers/old-infrastructure/README.md b/developers/old-infrastructure/README.md
deleted file mode 100644
index a8ac4b3b8..000000000
--- a/developers/old-infrastructure/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Old Infrastructure
-
-Ocean Protocol is now using Ocean Nodes for all backend infrastructure. Previously we used these three components:
-
-1. [Aquarius](aquarius/): Aquarius is a metadata cache used to enhance search efficiency by caching on-chain data into Elasticsearch. By accelerating metadata retrieval, Aquarius enables faster and more efficient data discovery.
-2. [Provider](provider/): The Provider component was used to facilitate various operations within the ecosystem. It assists in asset downloading, handles [DDO](../ddo-specification.md) (Decentralized Data Object) encryption, and establishes communication with the operator-service for Compute-to-Data jobs. This ensures secure and streamlined interactions between different participants.
-3. [Subgraph](subgraph/): The Subgraph is an off-chain service that utilizes GraphQL to offer efficient access to information related to datatokens, users, and balances. By leveraging the subgraph, data retrieval becomes faster compared to an on-chain query. This enhances the overall performance and responsiveness of applications that rely on accessing this information.
diff --git a/developers/old-infrastructure/aquarius/README.md b/developers/old-infrastructure/aquarius/README.md
deleted file mode 100644
index b924fc0ee..000000000
--- a/developers/old-infrastructure/aquarius/README.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Aquarius
-
-### What is Aquarius?
-
-Aquarius is a tool that tracks and caches the metadata from each chain where the Ocean Protocol smart contracts are deployed. It operates off-chain, running an Elasticsearch database. This makes it easy to query the metadata generated on-chain.
-
-The core job of Aquarius is to continually look out for new metadata being created or updated on the blockchain. Whenever such events occur, Aquarius takes note of them, processes this information, and adds it to its database. This allows it to keep an up-to-date record of the metadata activity on the chains.
-
-Aquarius has its own interface (API) that allows you to easily query this metadata. With Aquarius, you don't need to do the time-consuming task of scanning the data chains yourself. It offers you a convenient shortcut to the information you need, and it's ideal when you need a search feature within your dApp.
-

-*Aquarius high level overview*

-
-### What does Aquarius do?
-
-1. Acts as a cache: It stores metadata from multiple blockchains off-chain in an Elasticsearch database.
-2. Monitors events: It continually checks for MetadataCreated and MetadataUpdated events, processing these events and updating them in the database.
-3. Offers easy query access: The Aquarius API provides a convenient method to access metadata without needing to scan the blockchain.
-4. Serves as an API: It provides a REST API that fetches data from the off-chain datastore.
-5. Features an EventsMonitor: This component runs continually to retrieve and index chain metadata, saving results into an Elasticsearch database.
-6. Configurable components: The EventsMonitor has customizable features like the MetadataContract, the Decryptor class, allowed publishers, purgatory settings, VeAllocate, start blocks, and more.
-
-### How to run Aquarius?
-
-We recommend checking the README in the [Aquarius GitHub repository](https://github.com/oceanprotocol/aquarius) for the steps to run Aquarius. If you see any errors in the instructions, please open an issue within the GitHub repository.
-
-### What technology does Aquarius use?
-
-* Python: This is the main programming language used in Aquarius.
-* Flask: This Python framework is used to construct the Aquarius API.
-* Elasticsearch: This is a search and analytics engine used for efficient data indexing and retrieval.
-* REST API: Aquarius uses this software architectural style to provide interoperability between computer systems on the internet.
-
-### Postman documentation
-
-Click [here](https://documenter.getpostman.com/view/2151723/UVkmQc7r) to explore the documentation and more examples in Postman.
diff --git a/developers/old-infrastructure/aquarius/asset-requests.md b/developers/old-infrastructure/aquarius/asset-requests.md
deleted file mode 100644
index 14708ed6e..000000000
--- a/developers/old-infrastructure/aquarius/asset-requests.md
+++ /dev/null
@@ -1,275 +0,0 @@
-# Asset Requests
-
-The universal Aquarius Endpoint is [`https://v4.aquarius.oceanprotocol.com`](https://v4.aquarius.oceanprotocol.com).
-
-### **DDO**
-
-A method for retrieving all information about the asset using a unique identifier known as a Decentralized Identifier (DID).
-
-* **Endpoint**: `GET /api/aquarius/assets/ddo/<did>`
-* **Purpose**: This endpoint is used to fetch the Decentralized Document (DDO) of a particular asset. A DDO is a detailed information package about a specific asset, including its ID, metadata, and other necessary data.
-* **Parameters**: The `<did>` in the URL is a placeholder for the DID, a unique identifier for the asset you want to retrieve the DDO for.
-
-| Name | Description | Type | Within | Required |
-| ---- | ----------- | ---- | ------ | -------- |
-| did | DID of the asset | string | path | true |
-
-Here are some typical responses you might receive from the API:
-
-* **200**: This is a successful HTTP response code. In this case, it means the server successfully found and returned the DDO for the given DID. The returned data is formatted in JSON.
-* **404**: This is an HTTP response code that signifies the requested resource couldn't be found on the server. In this context, it means the asset DID you requested isn't found in Elasticsearch, the database Aquarius uses. The server responds with a JSON-formatted message stating that the asset DID wasn't found.
-
-#### Curl Example
-
-{% code overflow="wrap" %}
-```bash
-curl --location --request GET 'https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/did:op:cd086344c275bc7c560e91d472be069a24921e73a2c3798fb2b8caadf8d245d6'
-```
-{% endcode %}
-
-#### Javascript Example
-
-```runkit nodeVersion="18.x.x"
-const axios = require('axios')
-const did = 'did:op:ce3f161fb98c64a2ded37fd34e25f28343f2c88d0c8205242df9c621770d4b3b'
-const response = await axios(`https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/${did}`)
-console.log(response.status)
-console.log(response.data.nftAddress)
-console.log(response.data.metadata.name)
-console.log(response.data.metadata.description)
-
-```
-
-### **Metadata**
-
-A method for retrieving the metadata about the asset using the Decentralized Identifier (DID).
-
-* **Endpoint**: `GET /api/aquarius/assets/metadata/<did>`
-* **Purpose**: This endpoint is used to fetch the metadata of a particular asset. It includes details about the asset such as its name, description, creation date, owner, etc.
-* **Parameters**: The `<did>` in the URL is a placeholder for the DID, a unique identifier for the asset you want to retrieve the metadata for.
-
-Here are some typical responses you might receive from the API:
-
-* **200**: This is a successful HTTP response code. In this case, it means the server successfully found and returned the metadata for the given DID. The returned data is formatted in JSON.
-* **404**: This is an HTTP response code that signifies the requested resource couldn't be found on the server. In this context, it means the asset DID you requested isn't found in the database. The server responds with a JSON-formatted message stating that the asset DID wasn't found.
-
-#### Parameters
-
-| Name | Description | Type | Within | Required |
-| ---- | ----------- | ---- | ------ | -------- |
-| did | DID of the asset | string | path | true |
- -#### Curl Example - -{% code overflow="wrap" %} -```bash -curl --location --request GET 'https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/metadata/did:op:cd086344c275bc7c560e91d472be069a24921e73a2c3798fb2b8caadf8d245d6' -``` -{% endcode %} - -#### Javascript Example - -```runkit nodeVersion="18.x.x" -const axios = require('axios') -const did = 'did:op:ce3f161fb98c64a2ded37fd34e25f28343f2c88d0c8205242df9c621770d4b3b' -const response = await axios(`https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/metadata/${did}`) -console.log(response.status) -console.log(response.data.name) -console.log(response.data.description) - -``` - -### **Asset Names** - -Used to retrieve the names of a group of assets using a list of unique identifiers known as Decentralized Identifiers (DIDs). - -Here's a more detailed explanation: - -* **Endpoint**: `POST /api/aquarius/assets/names` -* **Purpose**: This endpoint is used to fetch the names of specific assets. These assets are identified by a list of DIDs provided in the request payload. The returned asset names are those specified in the assets' metadata. -* **Parameters**: The parameters are sent in the body of the POST request, formatted as JSON. Specifically, an array of DIDs (named "didList") should be provided. - -Here are some typical responses you might receive from the API: - -* **200**: This is a successful HTTP response code. In this case, it means the server successfully found and returned the names for the assets corresponding to the provided DIDs. The returned data is formatted in JSON, mapping each DID to its respective asset name. -* **400**: This is an HTTP response code that signifies a client error in the request. In this context, it means that the "didList" provided in the request payload was empty. The server responds with a JSON-formatted message indicating that the requested "didList" cannot be empty. - -#### Parameters - -
-| Name | Description | Type | Within | Required |
-| ---- | ----------- | ---- | ------ | -------- |
-| didList | list of asset DIDs | list | body | true |
- -#### Curl Example - -```bash -curl --location --request POST 'https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/names' \ ---header 'Content-Type: application/json' \ ---data-raw '{ - "didList" : ["did:op:cd086344c275bc7c560e91d472be069a24921e73a2c3798fb2b8caadf8d245d6"] -}' -``` - -#### Javascript Example - -```runkit nodeVersion="18.x.x" -const axios = require('axios') - -const body = {didList : ["did:op:cd086344c275bc7c560e91d472be069a24921e73a2c3798fb2b8caadf8d245d6", "did:op:ce3f161fb98c64a2ded37fd34e25f28343f2c88d0c8205242df9c621770d4b3b"]} - -const response = await axios.post('https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/names', body) -console.log(response.status) -for (let key in response.data) { - console.log(key + ': ' + response.data[key]); -} - -``` - -### Query Assets - -Used to run a custom search query on the assets using Elasticsearch's native query syntax. We recommend reading the [Elasticsearch documentation](https://www.elastic.co/guide/index.html) to understand their syntax. - -* **Endpoint**: `POST /api/aquarius/assets/query` -* **Purpose**: This endpoint is used to execute a native Elasticsearch (ES) query against the stored assets. This allows for highly customizable searches and can be used to filter and sort assets based on complex criteria. The body of the request should contain a valid JSON object that defines the ES query. -* **Parameters**: The parameters for this endpoint are provided in the body of the POST request as a valid JSON object that conforms to the Elasticsearch query DSL (Domain Specific Language). - -Here are some typical responses you might receive from the API: - -* **200**: This is a successful HTTP response code. It means the server successfully ran your ES query and returned the results. The results are returned as a JSON object. -* **500**: This HTTP status code represents a server error. In this context, it typically means there was an error with Elasticsearch while trying to execute the query. It could be due to an invalid or malformed query, an issue with the Elasticsearch service, or some other server-side problem. The specific details of the error are typically included in the response body. - -#### Curl Example - -{% code overflow="wrap" %} -```bash -curl --location --request POST 'https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/query' \ ---header 'Content-Type: application/json' \ ---data-raw '{ - "query": { - "match_all": {} - } -}' -``` -{% endcode %} - -#### Javascript Example - -```runkit nodeVersion="18.x.x" -const axios = require('axios') - -const body = { "query": { "match_all": { } } } - - -const response = await axios.post('https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/query', body) -console.log(response.status) -console.log(response.data.hits.hits[0]) -for (const value of response.data.hits.hits) { - console.log(value); -} - -``` - -### Validate DDO - -Used to validate the content of a DDO (Decentralized Identifier Document). - -* **Endpoint**: `POST /api/aquarius/assets/ddo/validate` -* **Purpose**: This endpoint is used to verify the validity of a DDO. This could be especially helpful prior to submitting a DDO to ensure it meets the necessary criteria and avoid any issues or errors. The endpoint consumes `application/octet-stream`, which means the data sent should be in binary format, often used for handling different data types. 
-* **Parameters**: The parameters for this endpoint are provided in the body of the POST request as a valid JSON object, which represents the DDO that needs to be validated.
-
-Here are some typical responses you might receive from the API:
-
-* **200**: This is a successful HTTP response code. It means the server successfully validated your DDO content and it meets the necessary criteria.
-* **400**: This HTTP status code indicates a client error. In this context, it means that the submitted DDO format is invalid. You will need to revise the DDO content according to the required specifications and resubmit it.
-* **500**: This HTTP status code represents a server error. This indicates an internal server error while processing your request. The specific details of the error are typically included in the response body.
-
-#### Curl Example
-
-{% code overflow="wrap" %}
-```bash
-curl --location --request POST 'https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/validate' \
---header 'Content-Type: application/json' \
---data-raw ''
-```
-{% endcode %}
-
-#### Javascript Example
-
-```runkit nodeVersion="18.x.x"
-const axios = require('axios')
-
-const body = {
-    "@context": ["https://w3id.org/did/v1"],
-    "id": "did:op:56c3d0ac76c02cc5cec98993be2b23c8a681800c08f2ff77d40c895907517280",
-    "version": "4.1.0",
-    "chainId": 1337,
-    "nftAddress": "0xabc",
-    "metadata": {
-      "created": "2000-10-31T01:30:00.000-05:00Z",
-      "updated": "2000-10-31T01:30:00.000-05:00",
-      "name": "Ocean protocol white paper",
-      "type": "dataset",
-      "description": "Ocean protocol white paper -- description",
-      "author": "Ocean Protocol Foundation Ltd.",
-      "license": "CC-BY",
-      "contentLanguage": "en-US",
-      "tags": ["white-papers"],
-      "additionalInformation": {"test-key": "test-value"},
-      "links": [
-        "http://data.ceda.ac.uk/badc/ukcp09/data/gridded-land-obs/gridded-land-obs-daily/",
-        "http://data.ceda.ac.uk/badc/ukcp09/data/gridded-land-obs/gridded-land-obs-averages-25km/",
-        "http://data.ceda.ac.uk/badc/ukcp09/"
-      ]
-    },
-    "services": [
-      {
-        "id": "test",
-        "type": "access",
-        "datatokenAddress": "0xC7EC1970B09224B317c52d92f37F5e1E4fF6B687",
-        "name": "Download service",
-        "description": "Download service",
-        "serviceEndpoint": "http://172.15.0.4:8030/",
-        "timeout": 0,
-        "files": "encryptedFiles"
-      }
-    ]
-  }
-
-const response = await axios.post('https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/validate', body)
-console.log(response.status)
-console.log(response.data)
-
-```
-
-### Trigger Caching
-
-Used to manually initiate the process of DDO caching based on a transaction ID. This transaction ID should include either MetadataCreated or MetadataUpdated events.
-
-* **Endpoint**: `POST /api/aquarius/assets/triggerCaching`
-* **Purpose**: This endpoint is used to manually trigger the caching process of a DDO (Decentralized Identifier Document). This process is initiated based on a specific transaction ID, which should include either MetadataCreated or MetadataUpdated events. This can be particularly useful in situations where immediate caching of metadata changes is required.
-* **Parameters**: The parameters for this endpoint are provided in the body of the POST request as a valid JSON object. This includes the transaction ID and log index associated with the metadata event.
-
-| Name | Description | Type | Within | Required |
-| ---- | ----------- | ---- | ------ | -------- |
-| transactionId | transaction ID containing the metadata event | string | body | true |
-| logIndex | custom log index for the transaction | int | body | false |
- -Here are some typical responses you might receive from the API: - -* **200**: This is a successful HTTP response code. It means the server successfully initiated the DDO caching process and the updated asset is returned. -* **400**: This HTTP status code indicates a client error. In this context, it suggests issues with the request: either the log index was not found, or the transaction log did not contain MetadataCreated or MetadataUpdated events. You should revise your input parameters and try again. -* **500**: This HTTP status code represents a server error. This indicates an internal server error while processing your request. The specific details of the error are typically included in the response body. - -#### Curl Example - -{% code overflow="wrap" %} -```bash -curl --location --request POST 'https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/triggerCaching' \ ---header 'Content-Type: application/json' \ ---data-raw '' -``` -{% endcode %} - -#### Javascript Example - -```runkit nodeVersion="18.x.x" -const axios = require('axios') - -const body = { "transactionId": "0x945596edf2a26d127514a78ed94fea86b199e68e9bed8b6f6d6c8bb24e451f27", "logIndex": 0} -const response = await axios.post( 'https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/triggerCaching', body) -console.log(response.status) -console.log(response.data) - -``` - diff --git a/developers/old-infrastructure/aquarius/chain-requests.md b/developers/old-infrastructure/aquarius/chain-requests.md deleted file mode 100644 index 2668536e9..000000000 --- a/developers/old-infrastructure/aquarius/chain-requests.md +++ /dev/null @@ -1,82 +0,0 @@ -# Chain Requests - -The universal Aquarius Endpoint is [`https://v4.aquarius.oceanprotocol.com`](https://v4.aquarius.oceanprotocol.com). - -### Chain List - -Retrieves a list of chains that are currently supported or recognized by the Aquarius service. - -* **Endpoint**: `GET /api/aquarius/chains/list` -* **Purpose**: This endpoint provides a list of the chain IDs that are recognized by the Aquarius service. Each chain ID represents a different blockchain network, and the boolean value indicates if the chain is currently active (true) or not (false). -* **Parameters**: This endpoint does not require any parameters. You simply send a GET request to it. - -Here are some typical responses you might receive from the API: - -* **200**: This is a successful HTTP response code. It means the server has successfully processed the request and returns a JSON object containing chain IDs as keys and their active status as values. - -Example response: - -```json -{ "246": true, "3": true, "137": true, - "2021000": true, "4": true, "1": true, - "56": true, "80001": true, "1287": true -} -``` - -#### Curl Example - -{% code overflow="wrap" %} -```bash -curl --location --request GET 'https://v4.aquarius.oceanprotocol.com/api/aquarius/chains/list' -``` -{% endcode %} - -#### Javascript Example - -```runkit nodeVersion="18.x.x" -const axios = require('axios') - -const response = await axios( 'https://v4.aquarius.oceanprotocol.com/api/aquarius/chains/list') -console.log(response.status) -console.log(response.data) - -``` - -### **Chain Status** - -Retrieves the index status for a specific chain\_id from the Aquarius service. - -* **Endpoint**: `GET /api/aquarius/chains/status/{chain_id}` -* **Purpose**: This endpoint is used to fetch the index status for a specific blockchain chain, identified by its chain\_id. 
The status, expressed as the "last\_block", gives the most recent block that Aquarius has processed on this chain.
-* **Parameters**: This endpoint requires a chain\_id as a parameter in the path. This chain\_id represents the specific chain you want to get the index status for.
-
-Here are some typical responses you might receive from the API:
-
-* **200**: This is a successful HTTP response code. It means the server has successfully processed the request and returns a JSON object containing the "last\_block", which is the most recent block that Aquarius has processed on this chain. In the example response below, "25198729" is the last block processed on the chain with the chain\_id "137".
-
-Example response:
-
-```json
-{"last_block": 25198729}
-```
-
-#### Curl Example
-
-{% code overflow="wrap" %}
-```bash
-curl --location --request GET 'https://v4.aquarius.oceanprotocol.com/api/aquarius/chains/status/137'
-```
-{% endcode %}
-
-#### Javascript Example
-
-```runkit nodeVersion="18.x.x"
-const axios = require('axios')
-const chainId = 1
-
-const response = await axios(`https://v4.aquarius.oceanprotocol.com/api/aquarius/chains/status/${chainId}`)
-console.log(response.status)
-console.log(response.data)
-
-```
-
diff --git a/developers/old-infrastructure/aquarius/other-requests.md b/developers/old-infrastructure/aquarius/other-requests.md
deleted file mode 100644
index 40d9eede8..000000000
--- a/developers/old-infrastructure/aquarius/other-requests.md
+++ /dev/null
@@ -1,100 +0,0 @@
-# Other Requests
-
-The universal Aquarius Endpoint is [`https://v4.aquarius.oceanprotocol.com`](https://v4.aquarius.oceanprotocol.com).
-
-### **Info**
-
-Retrieves version, plugin, and software information from the Aquarius service.
-
-* **Endpoint**: `GET /`
-* **Purpose**: This endpoint is used to fetch key information about the Aquarius service, including its current version, the plugin it's using, and the name of the software itself.
-
-Here are some typical responses you might receive from the API:
-
-* **200**: This is a successful HTTP response code. It means the server has successfully processed the request and returns a JSON object containing the `plugin`, `software`, and `version`.
-
-Example response:
-
-```json
-{
-  "plugin": "elasticsearch",
-  "software": "Aquarius",
-  "version": "4.2.0"
-}
-```
-
-#### Curl Example
-
-```bash
-curl --location --request GET 'https://v4.aquarius.oceanprotocol.com/'
-```
-
-#### Javascript Example
-
-```runkit nodeVersion="18.x.x"
-const axios = require('axios')
-
-const response = await axios('https://v4.aquarius.oceanprotocol.com/')
-console.log(response.status)
-console.log(response.data)
-
-```
-
-### **Health**
-
-Retrieves the health status of the Aquarius service.
-
-* **Endpoint**: `GET /health`
-* **Purpose**: This endpoint is used to fetch the current health status of the Aquarius service. This can be helpful for monitoring and ensuring that the service is running properly.
-
-Here are some typical responses you might receive from the API:
-
-* **200**: This is a successful HTTP response code. It means the server has successfully processed the request and returns a message indicating the health status. For example, "Elasticsearch connected" indicates that the Aquarius service is able to connect to Elasticsearch, which is a good sign of its health.
-
-#### Curl Example
-
-```bash
-curl --location --request GET 'https://v4.aquarius.oceanprotocol.com/health'
-```
-
-#### Javascript Example
-
-```runkit nodeVersion="18.x.x"
-const axios = require('axios')
-
-const response = await axios( 'https://v4.aquarius.oceanprotocol.com/health')
-console.log(response.status)
-console.log(response.data)
-
-```
-
-### **Spec**
-
-Retrieves the Swagger specification for the Aquarius service.
-
-* **Endpoint**: `GET /spec`
-* **Purpose**: This endpoint is used to fetch the Swagger specification of the Aquarius service. Swagger is a set of rules (in other words, a specification) for a format describing REST APIs. This endpoint returns a document that describes the entire API, including the available endpoints, their methods, parameters, and responses.
-
-Here are some typical responses you might receive from the API:
-
-* **200**: This is a successful HTTP response code. It means the server has successfully processed the request and returns the Swagger specification.
-
-#### Curl Example
-
-```bash
-curl --location --request GET 'https://v4.aquarius.oceanprotocol.com/spec'
-```
-
-#### Javascript Example
-
-```runkit nodeVersion="18.x.x"
-const axios = require('axios')
-
-const response = await axios( 'https://v4.aquarius.oceanprotocol.com/spec')
-console.log(response.status)
-console.log(response.data.info)
-
-```
-
diff --git a/developers/old-infrastructure/provider/README.md b/developers/old-infrastructure/provider/README.md
deleted file mode 100644
index c30ba562b..000000000
--- a/developers/old-infrastructure/provider/README.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-description: An integral part of the Ocean Protocol stack
----
-
-# Provider
-
-### What is Provider?
-
-Provider is a REST API designed specifically for the provision of data services. It essentially acts as a proxy that encrypts and decrypts the metadata and access information for the data asset.
-
-Constructed using the Python Flask HTTP server, the Provider service is the only component in the Ocean Protocol stack with the ability to access your data, making it an important layer of security for your information.
-
-The Provider service has several key functions. Firstly, it performs on-chain checks to ensure the buyer has permission to access the asset. Secondly, it encrypts the URL and metadata during the publication phase, providing security for your data during the initial upload.
-
-The Provider decrypts the URL when a dataset is downloaded and streams the data directly to the buyer; it never reveals the asset URL to the buyer. This provides a layer of security and ensures that access is only granted when necessary.
-
-Additionally, the Provider service offers compute services by establishing a connection to the C2D environment. This enables users to compute and manipulate data within the Ocean Protocol stack, adding a new level of utility and function to this data services platform.
-
-### What does the Provider do?
-
-* The only component that can access your data
-* Performs checks on-chain for buyer permissions and payments
-* Encrypts the URL and metadata during publish
-* Decrypts the URL when the dataset is downloaded or a compute job is started
-* Provides access to data assets by streaming data (and never the URL)
-* Provides compute services (connects to C2D environment)
-* Typically run by the Data owner
-

_Figure: Ocean Provider - publish & consume_

-
-In the publishing process, the provider plays a crucial role by encrypting the DDO using its private key. The encrypted DDO is then stored on the blockchain.
-
-During the consumption flow, after a consumer obtains access to the asset by purchasing a datatoken, the provider takes responsibility for decrypting the DDO and fetching the data from the source used by the data publisher.
-
-### What technology is used?
-
-* Python: This is the main programming language used in Provider.
-* Flask: This Python framework is used to construct the Provider API.
-* HTTP Server: Provider responds to HTTP requests from clients (like web browsers), facilitating the exchange of data and information over the internet.
-
-### How to run the provider?
-
-We recommend checking the README in the Provider [GitHub repository](https://github.com/oceanprotocol/provider) for the steps to run the Provider. If you see any errors in the instructions, please open an issue within the GitHub repository.
-
-### Ocean Provider Endpoints Specification
-
-The following pages in this section specify the endpoints for Ocean Provider that have been implemented by the core developers.
-
-For inspecting the errors received from `Provider` and their reasons, please refer to this [document](https://github.com/oceanprotocol/provider/blob/main/ocean\_provider/routes/README.md).
diff --git a/developers/old-infrastructure/provider/authentication-endpoints.md b/developers/old-infrastructure/provider/authentication-endpoints.md
deleted file mode 100644
index 0fd7391f5..000000000
--- a/developers/old-infrastructure/provider/authentication-endpoints.md
+++ /dev/null
@@ -1,114 +0,0 @@
-# Authentication Endpoints
-
-Provider offers an alternative to signing each request, by allowing users to generate auth tokens. The generated auth token can be used until its expiration in all supported requests. Simply omit the signature parameter and add the AuthToken request header based on a created token.
-
-Please note that if a signature parameter exists, it will take precedence over the AuthToken headers. All routes that support a signature parameter support the replacement, with the exception of auth-related ones (createAuthToken and deleteAuthToken need to be signed).
-
-### Create Auth Token
-
-**Endpoint:** `GET /api/services/createAuthToken`
-
-**Description:** Allows the user to create an authentication token that can be used to authenticate requests to the provider API, instead of signing each request. The generated auth token can be used until its expiration in all supported requests.
-
-**Parameters:**
-
-* `address`: The Ethereum address of the consumer (Optional).
-* `nonce`: A unique identifier for this request, to prevent replay attacks (Required).
-* `signature`: A digital signature proving ownership of the `address`. The signature should be generated by signing the hashed concatenation of the `address` and `nonce` parameters (Required).
-* `expiration`: A valid future UTC timestamp representing when the auth token will expire (Required).
-
-**Curl Example:**
-
-{% code overflow="wrap" %}
-```
-GET /api/services/createAuthToken?address=<address>&nonce=<nonce>&expiration=<expiration>&signature=<signature>
-```
-{% endcode %}
-
-Inside the angle brackets, the user should provide valid values for the request.
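-
-Because the `signature` must be produced by the consumer's wallet, it usually cannot be typed by hand. The snippet below is a minimal sketch of one way to build it, assuming ethers v5 and a personal_sign-style signature over the keccak256 hash of the concatenated `address` and `nonce`; the key and address are placeholders, so check the Provider repository for the exact signing scheme before relying on this.
-
-```javascript
-const { ethers } = require('ethers')
-
-// Placeholder values: replace with your own key, address and nonce.
-const privateKey = '<private-key>'
-const address = '<address>'
-const nonce = Date.now().toString()
-
-async function buildSignature() {
-  const wallet = new ethers.Wallet(privateKey)
-  // Hash the concatenation of address and nonce, then sign the hash bytes.
-  const hash = ethers.utils.keccak256(ethers.utils.toUtf8Bytes(address + nonce))
-  return wallet.signMessage(ethers.utils.arrayify(hash))
-}
-
-buildSignature().then(console.log)
-```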
-
-Response:
-
-{% code overflow="wrap" %}
-```
-{"token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJleHAiOjE2NjAwNTMxMjksImFkZHJlc3MiOiIweEE3OGRlYjJGYTc5NDYzOTQ1QzI0Nzk5MTA3NUUyYTBlOThCYTdBMDkifQ.QaRqYeSYxZpnFayzPmUkj8TORHHJ_vRY-GL88ZBFM0o"}
-```
-{% endcode %}
-
-#### Javascript Example:
-
-```runkit nodeVersion="18.x.x"
-const axios = require('axios');
-const address = "0x7e2a2FA2a064F693f0a55C5639476d913Ff12D05"
-const nonce = "1"
-const expiration = "" // a valid future UTC timestamp
-const signature = ""  // signature over the hashed address + nonce
-const url = `http://provider.oceanprotocol.com/api/services/createAuthToken?address=${address}&nonce=${nonce}&expiration=${expiration}&signature=${signature}`;
-axios.get(url).then(response => {
-    console.log(response.data);
-}).catch(error => {
-    console.error(error);
-});
-
-```
-
-#### Delete Auth Token
-
-#### DELETE /api/services/deleteAuthToken
-
-Allows the user to delete an existing auth token before it naturally expires.
-
-Parameters
-
-```
-address: String object containing consumer's address (optional)
-nonce: Integer, Nonce (required)
-signature: String object containing user signature (signed message)
-    The signature is based on hashing the following parameters:
-    address + nonce
-token: token to be expired
-```
-
-Returns: Success message if the token is successfully deleted. If the token is not found or already expired, returns an error message.
-
-#### Javascript Example:
-
-{% code overflow="wrap" %}
-```javascript
-const axios = require('axios');
-
-// Define the address, token, and signature
-const address = ''; // Replace with your address
-const token = ''; // Replace with your token
-const signature = ''; // Replace with your signature
-
-// Define the URL for the deleteAuthToken endpoint
-const deleteAuthTokenURL = 'http://<provider-url>/api/services/deleteAuthToken'; // Replace with your provider's URL
-
-// Make the DELETE request
-axios.delete(deleteAuthTokenURL, {
-  data: {
-    address: address,
-    token: token
-  },
-  headers: {
-    'Content-Type': 'application/json',
-    'signature': signature
-  }
-})
-.then(response => {
-  console.log(response.data);
-})
-.catch(error => {
-  console.log('Error:', error);
-});
-
-```
-{% endcode %}
-
-Replace `<address>`, `<token>`, `<signature>`, and `<provider-url>` with actual values. This script sends a DELETE request to the `deleteAuthToken` endpoint and logs the response. Please ensure that `axios` is installed in your environment (`npm install axios`).
-
-#### Example Response:
-
-```
-{"success": "Token has been deactivated."}
-```
diff --git a/developers/old-infrastructure/provider/compute-endpoints.md b/developers/old-infrastructure/provider/compute-endpoints.md
deleted file mode 100644
index 6fa1b48c8..000000000
--- a/developers/old-infrastructure/provider/compute-endpoints.md
+++ /dev/null
@@ -1,319 +0,0 @@
-# Compute Endpoints
-
-All compute endpoints respond with an Array of status objects, each object describing a compute job's info.
-
-Each status object will contain:
-
-```
-    owner: The owner of this compute job
-    documentId: String object containing document id (e.g. a DID)
-    jobId: String object containing workflowId
-    dateCreated: Unix timestamp of job creation
-    dateFinished: Unix timestamp when job finished (null if job not finished)
-    status: Int, see below for list
-    statusText: String, see below
-    algorithmLogUrl: URL to get the algo log (for user)
-    resultsUrls: Array of URLs for algo outputs
-    resultsDid: If published, the DID
-```
-
-Status description (`statusText`): (see Operator-Service for full status list)
-
-| status | Description                   |
-| ------ | ----------------------------- |
-| 1      | Warming up                    |
-| 10     | Job started                   |
-| 20     | Configuring volumes           |
-| 30     | Provisioning success          |
-| 31     | Data provisioning failed      |
-| 32     | Algorithm provisioning failed |
-| 40     | Running algorithm             |
-| 50     | Filtering results             |
-| 60     | Publishing results            |
-| 70     | Job completed                 |
-
-### Create or restart compute job
-
-**Endpoint:** POST /api/services/compute
-
-Start a new job
-
-Parameters
-
-{% code overflow="wrap" %}
-```
-    signature: String object containing user signature (signed message) (required)
-    consumerAddress: String object containing consumer's ethereum address (required)
-    nonce: Integer, Nonce (required)
-    environment: String representing a compute environment offered by the provider
-    dataset: Json object containing dataset information
-    dataset.documentId: String, object containing document id (e.g. a DID) (required)
-    dataset.serviceId: String, ID of the service the datatoken is attached to (required)
-    dataset.transferTxId: Hex string, the id of on-chain transaction for approval of datatokens transfer
-        given to the provider's account (required)
-    dataset.userdata: Json, user-defined parameters passed to the dataset service (optional)
-    algorithm: Json object, containing algorithm information
-    algorithm.documentId: Hex string, the did of the algorithm to be executed (optional)
-    algorithm.meta: Json object, defines the algorithm attributes and url or raw code (optional)
-    algorithm.serviceId: String, ID of the service to use to process the algorithm (optional)
-    algorithm.transferTxId: Hex string, the id of on-chain transaction of the order to use the algorithm (optional)
-    algorithm.userdata: Json, user-defined parameters passed to the algorithm running service (optional)
-    algorithm.algocustomdata: Json object, algorithm custom parameters (optional)
-    additionalDatasets: Json object containing a list of dataset objects (optional)
-
-    One of `algorithm.documentId` or `algorithm.meta` is required, `algorithm.meta` takes precedence
-```
-{% endcode %}
-
-Returns: Array of `status` objects as described above; in this case the array will have only one object
-
-Example:
-
-```json
-POST /api/services/compute
-payload:
-{
-    "signature": "0x00110011",
-    "consumerAddress": "0x123abc",
-    "nonce": 1,
-    "environment": "env",
-    "dataset": {
-        "documentId": "did:op:2222...",
-        "serviceId": "compute",
-        "transferTxId": "0x0232123..."
-    }
-}
-```
-
-Response:
-
-```json
-[
-  {
-    "jobId": "0x1111:001",
-    "status": 1,
-    "statusText": "Warming up",
-    ...
-  }
-]
-```
-
-### Status and Result
-
-#### GET /api/services/compute
-
-Get all jobs and corresponding stats
-
-Parameters
-
-{% code overflow="wrap" %}
-```
-    signature: String object containing user signature (signed message)
-    documentId: String object containing document did (optional)
-    jobId: String object containing workflowID (optional)
-    consumerAddress: String object containing consumer's address (optional)
-
-    At least one parameter from documentId, jobId and consumerAddress is required (can be any of them)
-```
-{% endcode %}
-
-Returns
-
-Array of `status` objects as described above
-
-Example:
-
-```
-GET /api/services/compute?signature=0x00110011&documentId=did:op:1111&jobId=012023
-```
-
-Response:
-
-```json
-[
-  {
-    "owner": "0x1111",
-    "documentId": "did:op:2222",
-    "jobId": "3333",
-    "dateCreated": "2020-10-01T01:00:00Z",
-    "dateFinished": "2020-10-01T01:00:00Z",
-    "status": 5,
-    "statusText": "Job finished",
-    "algorithmLogUrl": "http://example.net/logs/algo.log",
-    "resultsUrls": [
-      "http://example.net/logs/output/0",
-      "http://example.net/logs/output/1"
-    ],
-    "resultsDid": "did:op:87bdaabb33354d2eb014af5091c604fb4b0f67dc6cca4d18a96547bffdc27bcf"
-  },
-  {
-    "owner": "0x1111",
-    "documentId": "did:op:2222",
-    "jobId": "3334",
-    "dateCreated": "2020-10-01T01:00:00Z",
-    "dateFinished": "2020-10-01T01:00:00Z",
-    "status": 5,
-    "statusText": "Job finished",
-    "algorithmLogUrl": "http://example.net/logs2/algo.log",
-    "resultsUrls": [
-      "http://example.net/logs2/output/0",
-      "http://example.net/logs2/output/1"
-    ],
-    "resultsDid": ""
-  }
-]
-```
-
-#### GET /api/services/computeResult
-
-Allows download of the compute job result file(s).
-
-Parameters
-
-```
-    jobId: String object containing workflowId (optional)
-    index: Integer, index of the result to download (optional)
-    consumerAddress: String object containing consumer's address (optional)
-    nonce: Integer, Nonce (required)
-    signature: String object containing user signature (signed message)
-```
-
-Returns: Bytes string containing the compute result.
-
-Example:
-
-{% code overflow="wrap" %}
-```
-GET /api/services/computeResult?index=0&consumerAddress=0xA78deb2Fa79463945C247991075E2a0e98Ba7A09&jobId=4d32947065bb46c8b87c1f7adfb7ed8b&nonce=1644317370
-```
-{% endcode %}
-
-Response:
-
-```
-b'{"result": "0x0000000000000000000000000000000000000000000000000000000000000001"}'
-```
-
-### Stop
-
-#### PUT /api/services/compute
-
-Stop a running compute job.
-
-Parameters
-
-{% code overflow="wrap" %}
-```
-    signature: String object containing user signature (signed message)
-    documentId: String object containing document did (optional)
-    jobId: String object containing workflowID (optional)
-    consumerAddress: String object containing consumer's address (optional)
-
-    At least one parameter from documentId, jobId and consumerAddress is required (can be any of them)
-```
-{% endcode %}
-
-Returns
-
-Array of `status` objects as described above
-
-Example:
-
-```
-PUT /api/services/compute?signature=0x00110011&documentId=did:op:1111&jobId=012023
-```
-
-Response:
-
-```json
-[
-  {
-    ...,
-    "status": 7,
-    "statusText": "Job stopped",
-    ...
-  }
-]
-```
-
-### Delete
-
-#### DELETE /api/services/compute
-
-Delete a compute job and all resources associated with the job. If the job is running, it will be stopped first.
-
-Parameters
-
-```
-    signature: String object containing user signature (signed message)
-    documentId: String object containing document did (optional)
-    jobId: String object containing workflowId (optional)
-    consumerAddress: String object containing consumer's address (optional)
-
-    At least one parameter from documentId, jobId is required (can be any of them),
-    in addition to consumerAddress and signature
-```
-
-Returns
-
-Array of `status` objects as described above
-
-Example:
-
-```
-DELETE /api/services/compute?signature=0x00110011&documentId=did:op:1111&jobId=012023
-```
-
-Response:
-
-```json
-[
-  {
-    ...,
-    "status": 8,
-    "statusText": "Job deleted successfully",
-    ...
-  }
-]
-```
-
-#### GET /api/services/computeEnvironments
-
-Returns the list of compute environments offered by the Provider.
-
-Parameters
-
-{% code overflow="wrap" %}
-```
-chainId: Int object representing the chain ID that the Provider is connected to (mandatory)
-```
-{% endcode %}
-
-Returns: List of compute environments.
-
-Example:
-
-```
-GET /api/services/computeEnvironments?chainId=8996
-```
-
-Response:
-
-```json
-[
-  {
-    "cpuType":"AMD Ryzen 7 5800X 8-Core Processor",
-    "currentJobs":0,
-    "desc":"This is a mocked environment",
-    "diskGB":2,
-    "gpuType":"AMD RX570",
-    "id":"ocean-compute",
-    "maxJobs":10,
-    "nCPU":2,
-    "nGPU":0,
-    "priceMin":2.3,
-    "ramGB":1
-  },
-  ...
-]
-```
diff --git a/developers/old-infrastructure/provider/encryption-decryption.md b/developers/old-infrastructure/provider/encryption-decryption.md
deleted file mode 100644
index 9ccdde2da..000000000
--- a/developers/old-infrastructure/provider/encryption-decryption.md
+++ /dev/null
@@ -1,101 +0,0 @@
-# Encryption / Decryption
-
-### Encrypt endpoint
-
-* **Endpoint**: `POST /api/services/encrypt`
-* **Parameters**: The body of the request should contain a binary application/octet-stream.
-* **Purpose**: This endpoint is used to encrypt a document. It accepts binary data and returns an encrypted bytes string.
-* **Responses**:
-  * **200**: This is a successful HTTP response code. It returns a bytes string containing the encrypted document. For example: `b'0x04b2bfab1f4e...7ed0573'`
-
-Example response:
-
-```python
-b'0x04b2bfab1f4e...7ed0573'
-```
-
-#### Javascript Example
-
-```runkit nodeVersion="18.x.x"
-const fetch = require('cross-fetch')
-
-const data = "test"
-const response = await fetch('https://v4.provider.oceanprotocol.com/api/services/encrypt?chainId=1', {
-  method: 'POST',
-  body: JSON.stringify(data),
-  headers: { 'Content-Type': 'application/octet-stream' }
-  })
-console.log(response)
-
-```
-
-### Decrypt endpoint
-
-* **Endpoint**: `POST /api/services/decrypt`
-* **Parameters**: The body of the request should contain a JSON object with the following properties:
-  * `decrypterAddress`: A string containing the address of the decrypter (required).
-  * `chainId`: The chain ID of the network the document is on (required).
-  * `transactionId`: The transaction ID of the encrypted document (optional).
-  * `dataNftAddress`: The address of the data non-fungible token (optional).
-  * `encryptedDocument`: The encrypted document (optional).
-  * `flags`: The flags of the encrypted document (optional).
-  * `documentHash`: The hash of the encrypted document (optional).
-  * `nonce`: The nonce of the encrypted document (required).
-  * `signature`: The signature of the encrypted document (required).
-* **Purpose**: This endpoint is used to decrypt a document.
It accepts the decrypter address, chain ID, and other parameters, and returns the decrypted document.
-* **Responses**:
-  * **200**: This is a successful HTTP response code. It returns a bytes string containing the decrypted document.
-
-#### Javascript Example
-
-{% code overflow="wrap" %}
-```javascript
-const axios = require('axios');
-
-async function decryptAsset(payload) {
-  // Define the base URL of the services.
-  const SERVICES_URL = ""; // Replace with your base services URL.
-
-  // Define the endpoint.
-  const endpoint = `${SERVICES_URL}/api/services/decrypt`;
-
-  try {
-    // Send a POST request to the endpoint with the payload in the request body.
-    const response = await axios.post(endpoint, payload);
-
-    // Check the response.
-    if (response.status !== 200) {
-      throw new Error(`Response status code is not 200: ${response.data}`);
-    }
-
-    // Use the response data here.
-    console.log(response.data);
-
-  } catch (error) {
-    console.error(error);
-  }
-}
-
-// Define the payload.
-let payload = {
-  "decrypterAddress": "", // Replace with your decrypter address.
-  "chainId": "", // Replace with your chain ID.
-  "transactionId": "", // Replace with your transaction ID.
-  "dataNftAddress": "", // Replace with your Data NFT Address.
-  "nonce": "", // Replace with your nonce (required).
-  "signature": "", // Replace with your signature (required).
-};
-
-// Run the function.
-decryptAsset(payload);
-
-```
-{% endcode %}
-
-
-
-Example response:
-
-{% code overflow="wrap" %}
-```python
-b'{"@context": ["https://w3id.org/did/v1"], "id": "did:op:0c184915b07b44c888d468be85a9b28253e80070e5294b1aaed81c ...'
-```
-{% endcode %}
diff --git a/developers/old-infrastructure/provider/general-endpoints.md b/developers/old-infrastructure/provider/general-endpoints.md
deleted file mode 100644
index 2d1fa7be9..000000000
--- a/developers/old-infrastructure/provider/general-endpoints.md
+++ /dev/null
@@ -1,218 +0,0 @@
-# General Endpoints
-
-### Nonce
-
-Retrieves the last-used nonce value for a specific user's Ethereum address.
-
-* **Endpoint**: `GET /api/services/nonce`
-* **Parameters**: `userAddress`: This is a string that should contain the Ethereum address of the user. It is passed as a query parameter in the URL.
-* **Purpose**: This endpoint is used to fetch the last-used nonce value for a user's Ethereum address. A nonce is a number that can only be used once, and it's typically used in cryptography to prevent replay attacks. While this endpoint provides the last-used nonce, it's recommended to use the current UTC timestamp as a nonce where one is required in other endpoints.
-
-Here are some typical responses you might receive from the API:
-
-* **200**: This is a successful HTTP response code. It means the server has successfully processed the request and returns a JSON object containing the nonce value.
-
-Example response:
-
-```json
-{
-  "nonce": 23
-}
-```
-
-#### Javascript Example
-
-```runkit nodeVersion="18.x.x"
-const axios = require('axios')
-
-const response = await axios( `https://v4.provider.oceanprotocol.com/api/services/nonce?userAddress=0x0db823218e337a6817e6d7740eb17635deadafaf`)
-
-console.log(response.status)
-console.log(response.data)
-
-```
-
-### File Info
-
-Retrieves Content-Type and Content-Length from the given URL or asset.
-
-* **Endpoint**: `POST /api/services/fileinfo`
-* **Parameters**: The body of the request should contain a JSON object with the following properties:
-  * `did`: This is a string representing the Decentralized Identifier (DID) of the dataset.
-  * `serviceId`: This is a string representing the ID of the service.
-* **Purpose**: This endpoint is used to retrieve the `Content-Type` and `Content-Length` from a given URL or asset. For published assets, `did` and `serviceId` should be provided. It also accepts file objects (as described in the Ocean Protocol documentation) and can compute a checksum if the file size is less than `MAX_CHECKSUM_LENGTH`. For larger files, the checksum will not be computed.
-* **Responses**:
-  * **200**: This is a successful HTTP response code. It returns a JSON object containing the file info.
-
-Example response:
-
-```json
-[
-  {
-    "contentLength":"1161",
-    "contentType":"application/json",
-    "index":0,
-    "valid": true
-  },...
-]
-```
-
-#### Javascript Example
-
-```runkit nodeVersion="18.x.x"
-const axios = require('axios')
-
-// Replace with the DID and service ID of a published asset.
-const body = { "did": "did:op:...", "serviceId": "..." }
-const response = await axios.post('https://v4.provider.oceanprotocol.com/api/services/fileinfo', body)
-console.log(response.status)
-console.log(response.data)
-
-```
-
-### Download
-
-* **Endpoint**: `GET /api/services/download`
-* **Parameters**: The query parameters for this endpoint should contain the following properties:
-  * `documentId`: A string containing the document id (e.g., a DID).
-  * `serviceId`: A string representing the ID of the service within the asset.
-  * `transferTxId`: A hex string representing the ID of the on-chain transaction for approval of data tokens transfer given to the provider's account.
-  * `fileIndex`: An integer representing the index of the file from the files list in the dataset.
-  * `nonce`: The nonce.
-  * `consumerAddress`: A string containing the consumer's Ethereum address.
-  * `signature`: A string containing the user's signature (signed message).
-* **Purpose**: This endpoint is used to retrieve the attached asset files. It returns a file stream of the requested file.
-* **Responses**:
-  * **200**: This is a successful HTTP response code. It means the server has successfully processed the request and returned the file stream.
-
-#### Javascript Example
-
-Before calling the `/download` endpoint, you need to follow these steps:
-
-1. You need to set up and connect a wallet for the consumer. The consumer needs to have purchased the datatoken for the asset that you are trying to download. Libraries such as ocean.js or ocean.py can be used for this.
-2. Get the nonce. This can be done by calling the `/getnonce` endpoint above.
-3. Sign a message from the account that has purchased the datatoken.
-4. Add the nonce and signature to the payload.
-
-```javascript
-const axios = require('axios');
-
-async function downloadAsset(payload) {
-  // Define the base URL of the services.
-  const SERVICES_URL = ""; // Replace with your base services URL.
-
-  // Define the endpoint.
-  const endpoint = `${SERVICES_URL}/api/services/download`;
-
-  try {
-    // Send a GET request to the endpoint with the payload as query parameters.
-    const response = await axios.get(endpoint, { params: payload });
-
-    // Check the response.
-    if (response.status !== 200) {
-      throw new Error(`Response status code is not 200: ${response.data}`);
-    }
-
-    // Use the response data here.
-    console.log(response.data);
-
-  } catch (error) {
-    console.error(error);
-  }
-}
-
-// Define the payload.
-let payload = {
-  "documentId": "", // Replace with your document ID.
-  "serviceId": "", // Replace with your service ID.
-  "consumerAddress": "", // Replace with your consumer address.
- "transferTxId": "", // Replace with your transfer transaction ID. - "fileIndex": 0 -}; - -// Run the function. -downloadAsset(payload); - -``` - -### Initialize - -In order to consume a data service the user is required to send one datatoken to the provider. - -The datatoken is transferred on the blockchain by requesting the user to sign an ERC20 approval transaction where the approval is given to the provider's account for the number of tokens required by the service. - -* **Endpoint**: `GET /api/services/initialize` -* **Parameters**: The query parameters for this endpoint should contain the following properties: - * `documentId`: A string containing the document id (e.g., a DID). - * `serviceId`: A string representing the ID of the service the data token is attached to. - * `consumerAddress`: A string containing the consumer's Ethereum address. - * `environment`: A string representing a compute environment offered by the provider. - * `validUntil`: An integer representing the date of validity of the service (optional). - * `fileIndex`: An integer representing the index of the file from the files list in the dataset. If set, the provider will validate the file access (optional). -* **Purpose**: This endpoint is used to initialize a service and return a quote for the number of tokens to transfer to the provider's account. -* **Responses**: - * **200**: This is a successful HTTP response code. It returns a JSON object containing information about the quote for tokens to be transferred. - -#### Javascript Example - -```javascript -const axios = require('axios'); - -async function initializeServiceAccess(payload) { - // Define the base URL of the services. - const SERVICES_URL = ""; // Replace with your base services URL. - - // Define the endpoint. - const endpoint = `${SERVICES_URL}/api/services/initialize`; - - try { - // Send a GET request to the endpoint with the payload in the request query. - const response = await axios.get(endpoint, { params: payload }); - - // Check the response. - if (response.status !== 200) { - throw new Error(`Response status code is not 200: ${response.data}`); - } - - // Use the response data here. - console.log(response.data); - - } catch (error) { - console.error(error); - } -} - -// Define the payload. -let payload = { - "documentId": "", // Replace with your document ID. - "consumerAddress": "", // Replace with your consumer address. - "serviceId": "", // Replace with your service ID. - // Add other necessary parameters as needed. -}; - -// Run the function. -initializeServiceAccess(payload); - -``` - -Example response: - -```json -{ - "datatoken": "0x21fa3ea32892091...", - "nonce": 23, - "providerFee": { - "providerFeeAddress": "0xabc123...", - "providerFeeToken": "0xabc123...", - "providerFeeAmount": "200", - "providerData": "0xabc123...", - "v": 27, - "r": "0xabc123...", - "s": "0xabc123...", - "validUntil": 123456, - }, - "computeAddress": "0x8123jdf8sdsa..." -} -``` diff --git a/developers/old-infrastructure/subgraph/README.md b/developers/old-infrastructure/subgraph/README.md deleted file mode 100644 index 3e3a18bb8..000000000 --- a/developers/old-infrastructure/subgraph/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -description: >- - Unlocking the Speed: Subgraph - Bringing Lightning-Fast Retrieval to On-Chain - Data. ---- - -# Subgraph - -### What is the Subgraph? 
-
-The [Ocean Subgraph](https://github.com/oceanprotocol/ocean-subgraph) is built on top of [The Graph](https://thegraph.com/) (the popular :sunglasses: indexing and querying protocol for blockchain data) and is an essential component of the Ocean Protocol ecosystem. It provides an off-chain service that utilizes GraphQL to offer efficient access to information related to datatokens, users, and balances. By leveraging the subgraph, data retrieval becomes much faster than querying the chain directly. The data sourced from the Ocean Subgraph can be accessed through [GraphQL](https://graphql.org/learn/) queries.
-
-Imagine this 💭: if you were to always fetch data straight from the chain, you'd start to feel a little...old :older\_woman: Like your queries are stuck in a time warp. But fear not! When you embrace the power of the subgraph, data becomes your elixir of youth.
-

_Figure: Ocean Subgraph_

-
-The subgraph reads data from the blockchain, extracting relevant information. Additionally, it indexes events emitted from the Ocean smart contracts. This collected data is then made accessible to any decentralized applications (dApps) that require it, through GraphQL queries. The subgraph organizes and presents the data in a JSON format, facilitating efficient and structured access for dApps.
-
-### How to use the Subgraph?
-
-You can utilize the Subgraph instances provided by Ocean Protocol or deploy your own instance. Deploying your own instance allows you to have more control and customization options for your specific use case. To learn how to host your own Ocean Subgraph instance, refer to the guide available on the [Deploying Ocean Subgraph](../../../infrastructure/deploying-ocean-subgraph.md) page.
-
-If you're eager to use the Ocean Subgraph, here's some important information for you: We've deployed an Ocean Subgraph for each of the supported networks. Take a look at the table below, where you'll find handy links to both the subgraph instance and GraphiQL for each network. With the user-friendly GraphiQL interface, you can execute GraphQL queries directly, without any additional setup. It's a breeze! :ocean:
-
-{% hint style="info" %}
-When it comes to fetching valuable information about [Data NFTs](../../contracts/data-nfts.md) and [datatokens](../../contracts/datatokens.md), the subgraph queries play a crucial role. They retrieve numerous details, but the Subgraph cannot decrypt the DDO. Worry not, we have a dedicated component for that: [Aquarius](../aquarius/)! 🐬 Aquarius communicates with the provider and decrypts the encrypted information, making it readily available for queries.
-{% endhint %}
-
-### Ocean Subgraph deployments
-
-| Network               | Subgraph URL                                               | GraphiQL URL                                                                                                   |
-| --------------------- | ---------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- |
-| Ethereum              | [Subgraph](https://v4.subgraph.mainnet.oceanprotocol.com)  | [GraphiQL](https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql)  |
-| Polygon               | [Subgraph](https://v4.subgraph.polygon.oceanprotocol.com/) | [GraphiQL](https://v4.subgraph.polygon.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql)  |
-| OP Mainnet (Optimism) | [Subgraph](https://v4.subgraph.optimism.oceanprotocol.com) | [GraphiQL](https://v4.subgraph.optimism.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql) |
-| Sepolia               | [Subgraph](https://v4.subgraph.sepolia.oceanprotocol.com)  | [GraphiQL](https://v4.subgraph.sepolia.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql)  |
-
-{% hint style="warning" %}
-When making subgraph queries, please remember that the parameters you send, such as a datatoken address or a data NFT address, should be in **lowercase**. This is an essential requirement to ensure accurate processing of the queries.
-{% endhint %}
-
-In the following pages, we've prepared a few examples just for you. From running queries to exploring data, you'll have the chance to dive right into the Ocean Subgraph data. You'll find a wide range of additional code snippets and examples that showcase the power and versatility of the Ocean Subgraph. So, grab a virtual snorkel, and let's explore together! 🤿
-
-{% hint style="info" %}
-For more examples, visit the subgraph GitHub [repository](https://github.com/oceanprotocol/ocean-subgraph), where you'll discover an extensive collection of code snippets and examples that highlight the Subgraph's capabilities and adaptability.
-{% endhint %}
diff --git a/developers/old-infrastructure/subgraph/get-data-nft-information.md b/developers/old-infrastructure/subgraph/get-data-nft-information.md
deleted file mode 100644
index 4972981be..000000000
--- a/developers/old-infrastructure/subgraph/get-data-nft-information.md
+++ /dev/null
@@ -1,195 +0,0 @@
----
-description: >-
-  Explore the Power of Querying: Unveiling In-Depth Details of Individual Data
-  NFTs
----
-
-# Get data NFT information
-
-Now that you are familiar with the process of retrieving a list of data NFTs 😎, let's explore how to obtain more specific details about a particular NFT through querying. By utilizing the knowledge you have gained, you can customize your GraphQL query to include additional parameters such as the NFT's metadata, creator information, template, or any other relevant data points. This will enable you to delve deeper into the intricacies of a specific NFT and gain a comprehensive understanding of its attributes. With this newfound capability, you can unlock valuable insights and make informed decisions based on the specific details retrieved. So, let's dive into the fascinating world of querying and unravel the unique characteristics of individual data NFTs.
-
-
-
-The result of the following GraphQL query returns the information about a particular data NFT. In this example, the data NFT address is `0x1c161d721e6d99f58d47f709cdc77025056c544c`.
-
-_PS: In this example, the query is executed on the Ocean subgraph deployed on the mainnet. If you want to change the network, please refer to_ [_this table_](README.md#ocean-subgraph-deployments)_._
-
-{% tabs %}
-{% tab title="Javascript" %}
-The javascript below can be used to run the query and fetch the information of a data NFT. If you wish to change the network, replace the variable's value `network` as needed. Change the value of the variable `datanftAddress` to the address of your choice.
-
-```runkit nodeVersion="18.x.x"
-var axios = require('axios');
-
-const datanftAddress = "0x1c161d721e6d99f58d47f709cdc77025056c544c";
-
-const query = `{
-  nft (id:"${datanftAddress}", subgraphError:deny){
-    id
-    name
-    symbol
-    owner
-    address
-    assetState
-    tx
-    block
-    transferable
-    creator
-    createdTimestamp
-    providerUrl
-    managerRole
-    erc20DeployerRole
-    storeUpdateRole
-    metadataRole
-    tokenUri
-    template
-    orderCount
-  }
-}`
-
-const network = "mainnet"
-var config = {
-  method: 'post',
-  url: `https://v4.subgraph.${network}.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph`,
-  headers: { "Content-Type": "application/json" },
-  data: JSON.stringify({ "query": query })
-};
-
-axios(config)
-  .then(function (response) {
-    let result = JSON.stringify(response.data)
-    console.log(result)
-  })
-  .catch(function (error) {
-    console.log(error);
-  });
-
-```
-{% endtab %}
-
-{% tab title="Python" %}
-The Python script below can be used to run the query and fetch the details about a data NFT. If you wish to change the network, replace the variable's value `base_url` as needed. Change the value of the variable `dataNFT_address` to the address of the data NFT of your choice.
- -**Create script** - -{% code title="dataNFT_information.py" %} -```python -import requests -import json - -dataNFT_address = "0x1c161d721e6d99f58d47f709cdc77025056c544c" -query = """ -{{ - nft (id:"{0}", subgraphError:deny){{ - id - name - symbol - owner - address - assetState - tx - block - transferable - creator - createdTimestamp - providerUrl - managerRole - erc20DeployerRole - storeUpdateRole - metadataRole - tokenUri - template - orderCount - }} -}}""".format( - dataNFT_address -) - -base_url = "https://v4.subgraph.mainnet.oceanprotocol.com" -route = "/subgraphs/name/oceanprotocol/ocean-subgraph" - -url = base_url + route - -headers = {"Content-Type": "application/json"} -payload = json.dumps({"query": query}) -response = requests.request("POST", url, headers=headers, data=payload) -result = json.loads(response.text) - -print(json.dumps(result, indent=4, sort_keys=True)) -``` -{% endcode %} - -**Execute script** - -
-```
-python dataNFT_information.py
-```
-{% endtab %} - -{% tab title="Query" %} -Copy the query to fetch the information about a data NFT in the Ocean Subgraph [GraphiQL interface](https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql). If you want to fetch the information about another NFT, replace the `id` with the address of your choice. - -```graphql -{ - nft (id:"0x1c161d721e6d99f58d47f709cdc77025056c544c", subgraphError:deny){ - id - name - symbol - owner - address - assetState - tx - block - transferable - creator - createdTimestamp - providerUrl - managerRole - erc20DeployerRole - storeUpdateRole - metadataRole - tokenUri - template - orderCount - } -} -``` -{% endtab %} -{% endtabs %} - -
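-
-Once the query returns (a full sample response is shown below), the interesting fields sit a few levels deep in the JSON. Here is a minimal sketch of pulling them out; it assumes `result` holds the parsed response body, exactly as produced by the Javascript tab above, and `summarizeNft` is just an illustrative name:
-
-```javascript
-// Minimal sketch: summarize a few fields from the nft query response.
-function summarizeNft(result) {
-  const nft = result.data.nft
-  return {
-    owner: nft.owner,
-    creator: nft.creator,
-    // Role arrays can be null when no address holds the role.
-    erc20Deployers: nft.erc20DeployerRole || [],
-    managers: nft.managerRole || [],
-    createdAt: new Date(nft.createdTimestamp * 1000).toISOString()
-  }
-}
-```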
- -Sample response - -```json -{ - "data": { - "nft": { - "address": "0x1c161d721e6d99f58d47f709cdc77025056c544c", - "assetState": 0, - "block": 15185270, - "createdTimestamp": 1658397870, - "creator": "0xd30dd83132f2227f114db8b90f565bca2832afbd", - "erc20DeployerRole": [ - "0x1706df1f2d93558d1d77bed49ccdb8b88fafc306" - ], - "id": "0x1c161d721e6d99f58d47f709cdc77025056c544c", - "managerRole": [ - "0xd30dd83132f2227f114db8b90f565bca2832afbd" - ], - "metadataRole": null, - "name": "Ocean Data NFT", - "orderCount": "1", - "owner": "0xd30dd83132f2227f114db8b90f565bca2832afbd", - "providerUrl": "https://v4.provider.mainnet.oceanprotocol.com", - "storeUpdateRole": null, - "symbol": "OCEAN-NFT", - "template": "", - "tokenUri": "", - "transferable": true, - "tx": "0x327a9da0d2e9df945fd2f8e10b1caa77acf98e803c5a2f588597172a0bcbb93a" - } - } -} -``` - -
diff --git a/developers/old-infrastructure/subgraph/get-datatoken-buyers.md b/developers/old-infrastructure/subgraph/get-datatoken-buyers.md deleted file mode 100644 index c858ee39a..000000000 --- a/developers/old-infrastructure/subgraph/get-datatoken-buyers.md +++ /dev/null @@ -1,373 +0,0 @@ ---- -description: Query the Subgraph to see the buyers of a datatoken. ---- - -# Get datatoken buyers - -The result of the following GraphQL query returns the list of buyers for a particular datatoken. Here, `0xc22bfd40f81c4a28c809f80d05070b95a11829d9` is the address of the datatoken. - -_PS: In this example, the query is executed on the Ocean subgraph deployed on the **Sepolia** network. If you want to change the network, please refer to_ [_this table_](README.md#ocean-subgraph-deployments)_._ - -{% tabs %} -{% tab title="JavaScript" %} -The javascript below can be used to run the query and fetch the list of buyers for a datatoken. If you wish to change the network, replace the variable's value `network` as needed. Change the value of the variable `datatoken` with the address of your choice. - -```runkit nodeVersion="18.x.x" -const axios = require('axios') - -const datatoken = "0xc22bfd40f81c4a28c809f80d05070b95a11829d9".toLowerCase() - -const query = `{ - token(id : "${datatoken}") { - id, - orders( - orderBy: createdTimestamp - orderDirection: desc - first: 1000 - ) { - id - consumer { - id - } - payer { - id - } - reuses { - id - } - block - createdTimestamp - amount - } - } -}` - -const network = "sepolia" -var config = { - method: 'post', - url: `https://v4.subgraph.${network}.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph`, - headers: { "Content-Type": "application/json" }, - data: JSON.stringify({ "query": query }) -}; - -axios(config) - .then(function (response) { - const orders = response.data.data.token.orders - console.log(orders) - for (let order of orders) { - console.log('id:' + order.id + ' consumer: ' + order.consumer.id + ' payer: ' + order.payer.id) - } - console.log(response.data.data.token.orders) - }) - .catch(function (error) { - console.log(error); -}); - -``` -{% endtab %} - -{% tab title="Python" %} -The Python script below can be used to run the query and fetch the list of buyers for a datatoken. If you wish to change the network, replace the variable's value `base_url` as needed. Change the value of the variable `datatoken_address` with the address of the datatoken of your choice. 
- -**Create Script** - -{% code title="datatoken_buyers.py" %} -```python -import requests -import json - -datatoken_address = "0xc22bfd40f81c4a28c809f80d05070b95a11829d9" -query = """ -{{ - token(id:"{0}"){{ - id, - orders( - orderBy: createdTimestamp - orderDirection: desc - first: 1000 - ){{ - id - consumer{{ - id - }} - payer{{ - id - }} - reuses{{ - id - }} - block - createdTimestamp - amount - }} - }} -}}""".format( - datatoken_address -) - -base_url = "https://v4.subgraph.sepolia.oceanprotocol.com" -route = "/subgraphs/name/oceanprotocol/ocean-subgraph" - -url = base_url + route - -headers = {"Content-Type": "application/json"} -payload = json.dumps({"query": query}) -response = requests.request("POST", url, headers=headers, data=payload) -result = json.loads(response.text) - -print(json.dumps(result, indent=4, sort_keys=True)) -``` -{% endcode %} - -**Execute Script** - -``` -python datatoken_buyers.py -``` -{% endtab %} - -{% tab title="Query" %} -Copy the query to fetch the list of buyers for a datatoken in the Ocean Subgraph [GraphiQL interface](https://v4.subgraph.sepolia.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph). - -```graphql - - token(id : "0xc22bfd40f81c4a28c809f80d05070b95a11829d9") { - id, - orders( - orderBy: createdTimestamp - orderDirection: desc - first: 1000 - ) { - id - consumer { - id - } - payer { - id - } - reuses { - id - } - block - createdTimestamp - amount - } - } -``` -{% endtab %} -{% endtabs %} - -
- -Sample response - -{% code overflow="wrap" %} -```json -{ - "data": { - "token": { - "id": "0xc22bfd40f81c4a28c809f80d05070b95a11829d9", - "orders": [ - { - "amount": "1", - "block": 36669814, - "consumer": { - "id": "0x0b58857708a6f84e7ee04beaef069a7e6d1d4a0b" - }, - "createdTimestamp": 1686386048, - "id": "0xd65c927af039bed60be4bfcb00a75eebe7db695598350ba9bc6cb5d6a6180062-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0x0b58857708a6f84e7ee04beaef069a7e6d1d4a0b-38.0", - "payer": { - "id": "0x0b58857708a6f84e7ee04beaef069a7e6d1d4a0b" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 35582325, - "consumer": { - "id": "0x027bfbe29df80bde49845b6fecf5e4ed14518f1f" - }, - "createdTimestamp": 1684067341, - "id": "0x118317568256f457a6ac29ba03875ad83815d5d8ec834c721ea20d80643d8629-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0x027bfbe29df80bde49845b6fecf5e4ed14518f1f-0.0", - "payer": { - "id": "0x027bfbe29df80bde49845b6fecf5e4ed14518f1f" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 35578590, - "consumer": { - "id": "0x86874bf84f0d27dcfc6c4c34ab99aad8ced8d892" - }, - "createdTimestamp": 1684059403, - "id": "0xe9668b60b5fe7cbfacf0311ae4dc93c50c43484c0a8cf94db783ffbee1be7cd5-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0x86874bf84f0d27dcfc6c4c34ab99aad8ced8d892-1.0", - "payer": { - "id": "0x86874bf84f0d27dcfc6c4c34ab99aad8ced8d892" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 35511102, - "consumer": { - "id": "0xb62e762af637b49eb4870bce8fe21bfff189e495" - }, - "createdTimestamp": 1683915991, - "id": "0x047a7ce1b3c69a5fc4c2c8078a2cc356164519077ef095265e4bcba1e0baf6c9-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0xb62e762af637b49eb4870bce8fe21bfff189e495-0.0", - "payer": { - "id": "0xb62e762af637b49eb4870bce8fe21bfff189e495" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 35331127, - "consumer": { - "id": "0x85c1bbdc1b6a199e0964cb849deb59aef3045edd" - }, - "createdTimestamp": 1683533500, - "id": "0x8cbfb5a85d43f5a5b4aff4a2d657fe7dac4528a86cc78f21897fdd0169d3b3c3-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0x85c1bbdc1b6a199e0964cb849deb59aef3045edd-0.0", - "payer": { - "id": "0x85c1bbdc1b6a199e0964cb849deb59aef3045edd" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 35254580, - "consumer": { - "id": "0xf9df381272afc2d1bd8fbbc0061cdb1d387c2032" - }, - "createdTimestamp": 1683370838, - "id": "0x246637f9a410664c6880e7768880696763e7fd66aa7cc286fdc62d5d8589481c-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0xf9df381272afc2d1bd8fbbc0061cdb1d387c2032-3.0", - "payer": { - "id": "0xf9df381272afc2d1bd8fbbc0061cdb1d387c2032" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 35110175, - "consumer": { - "id": "0x726ab53c8da3efed40a32fe6ab5daa65b9da7ede" - }, - "createdTimestamp": 1683063962, - "id": "0xed9bcc6149cab8ee67a38d6b423a05ca328533d43ff83aff140fe9c424e449ee-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0x726ab53c8da3efed40a32fe6ab5daa65b9da7ede-9.0", - "payer": { - "id": "0x726ab53c8da3efed40a32fe6ab5daa65b9da7ede" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 35053093, - "consumer": { - "id": "0x56e08babb8bf928bd8571d2a2a78235ae57ae5bd" - }, - "createdTimestamp": 1682942664, - "id": "0xa97fa2c99f8e5f16ba7245989830c552bace1f72476f5dee4da01c0d56ada7be-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0x56e08babb8bf928bd8571d2a2a78235ae57ae5bd-12.0", - "payer": { - "id": "0x56e08babb8bf928bd8571d2a2a78235ae57ae5bd" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 34985052, - "consumer": { - "id": "0x56e08babb8bf928bd8571d2a2a78235ae57ae5bd" - }, - 
"createdTimestamp": 1682798076, - "id": "0xb9b72efad41ded4fcb7e23f14a7caa3ebc4fdfbb710318cbf25d92068c8a650d-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0x56e08babb8bf928bd8571d2a2a78235ae57ae5bd-0.0", - "payer": { - "id": "0x56e08babb8bf928bd8571d2a2a78235ae57ae5bd" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 34984847, - "consumer": { - "id": "0x3f0cc2ad70839e2b684f173389f7dd71fe5186ff" - }, - "createdTimestamp": 1682797640, - "id": "0x9d616c85fdfe8655640bf77ecea0e42a7a9d331c5f51975f2a56b4f5ac8ec955-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0x3f0cc2ad70839e2b684f173389f7dd71fe5186ff-0.0", - "payer": { - "id": "0x3f0cc2ad70839e2b684f173389f7dd71fe5186ff" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 34982389, - "consumer": { - "id": "0x3f0cc2ad70839e2b684f173389f7dd71fe5186ff" - }, - "createdTimestamp": 1682792418, - "id": "0x16eee832f9e85ca8ac8f82aecb8861e5bb5378c2771bf9abd3930b9438dbbc01-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0x3f0cc2ad70839e2b684f173389f7dd71fe5186ff-9.0", - "payer": { - "id": "0x3f0cc2ad70839e2b684f173389f7dd71fe5186ff" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 34980112, - "consumer": { - "id": "0x3f0cc2ad70839e2b684f173389f7dd71fe5186ff" - }, - "createdTimestamp": 1682787580, - "id": "0x5264d4694fc78d9211a658363d98571f8d455dfcf89f3450520909416a103c2c-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0x3f0cc2ad70839e2b684f173389f7dd71fe5186ff-0.0", - "payer": { - "id": "0x3f0cc2ad70839e2b684f173389f7dd71fe5186ff" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 34969169, - "consumer": { - "id": "0x616b5249aaf1c924339f8b8e94474e64ceb22af3" - }, - "createdTimestamp": 1682764326, - "id": "0x7222faab923d80218b242aec2670c1a775c77a254a28782e04aed5cb36c395d3-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0x616b5249aaf1c924339f8b8e94474e64ceb22af3-18.0", - "payer": { - "id": "0x616b5249aaf1c924339f8b8e94474e64ceb22af3" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 34938635, - "consumer": { - "id": "0x71eb23e03d3005803db491639a7ebb717810bd04" - }, - "createdTimestamp": 1682699439, - "id": "0x3eae9d33fe3223e25ca058955744c98ba8aa211b1e3e1bf62eb653c0d0441b79-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0x71eb23e03d3005803db491639a7ebb717810bd04-0.0", - "payer": { - "id": "0x71eb23e03d3005803db491639a7ebb717810bd04" - }, - "reuses": [] - }, - { - "amount": "1", - "block": 34938633, - "consumer": { - "id": "0x726ab53c8da3efed40a32fe6ab5daa65b9da7ede" - }, - "createdTimestamp": 1682699435, - "id": "0x8dfe458aa689a29ceea3208f55856420dbfd80ed777fd01103581cff9d7d76b7-0xc22bfd40f81c4a28c809f80d05070b95a11829d9-0x726ab53c8da3efed40a32fe6ab5daa65b9da7ede-0.0", - "payer": { - "id": "0x726ab53c8da3efed40a32fe6ab5daa65b9da7ede" - }, - "reuses": [] - } - ] - } - } -} -``` -{% endcode %} - -
diff --git a/developers/old-infrastructure/subgraph/get-datatoken-information.md b/developers/old-infrastructure/subgraph/get-datatoken-information.md deleted file mode 100644 index 9b3846c96..000000000 --- a/developers/old-infrastructure/subgraph/get-datatoken-information.md +++ /dev/null @@ -1,238 +0,0 @@ ---- -description: >- - Explore the Power of Querying: Unveiling In-Depth Details of Individual - Datatokens ---- - -# Get datatoken information - -To fetch detailed information about a specific datatoken, you can utilize the power of GraphQL queries. By constructing a query tailored to your needs, you can access key parameters such as the datatoken's ID, name, symbol, total supply, creator, and associated dataTokenAddress. This allows you to gain a deeper understanding of the datatoken's characteristics and properties. With this information at your disposal, you can make informed decisions, analyze market trends, and explore the vast potential of datatokens within the Ocean ecosystem. Harness the capabilities of GraphQL and unlock a wealth of datatoken insights. - - - -The result of the following GraphQL query returns the information about a particular datatoken. Here, `0x122d10d543bc600967b4db0f45f80cb1ddee43eb` is the address of the datatoken. - -_PS: In this example, the query is executed on the Ocean subgraph deployed on the mainnet. If you want to change the network, please refer to_ [_this table_](README.md#ocean-subgraph-deployments)_._ - -{% tabs %} -{% tab title="Javascript" %} -The javascript below can be used to run the query and fetch the information of a datatoken. If you wish to change the network, replace the variable's value `network` as needed. Change the value of the variable `datatokenAddress` with the address of your choice. - -```runkit nodeVersion="18.x.x" -var axios = require('axios'); - -const datatokenAddress = "0x122d10d543bc600967b4db0f45f80cb1ddee43eb"; - -const query = `{ - token(id:"${datatokenAddress}", subgraphError: deny){ - id - symbol - nft { - name - symbol - address - } - name - symbol - cap - isDatatoken - holderCount - orderCount - orders(skip:0,first:1){ - amount - serviceIndex - payer { - id - } - consumer{ - id - } - estimatedUSDValue - lastPriceToken - lastPriceValue - } - } - fixedRateExchanges(subgraphError:deny){ - id - price - active - } -}` - -const network = "mainnet" -var config = { - method: 'post', - url: `https://v4.subgraph.${network}.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph`, - headers: { "Content-Type": "application/json" }, - data: JSON.stringify({ "query": query }) -}; - -axios(config) - .then(function (response) { - let result = JSON.stringify(response.data) - console.log(result); - }) - .catch(function (error) { - console.log(error); - }); - -``` -{% endtab %} - -{% tab title="Python" %} -The Python script below can be used to run the query and fetch a datatoken information. If you wish to change the network, replace the variable's value `base_url` as needed. Change the value of the variable `datatoken_address` with the address of the datatoken of your choice. 
-
-**Create script**
-
-{% code title="datatoken_information.py" %}
-```python
-import requests
-import json

-datatoken_address = "0x122d10d543bc600967b4db0f45f80cb1ddee43eb"
-query = """
-{{
-  token(id:"{0}", subgraphError: deny){{
-    id
-    symbol
-    nft {{
-      name
-      symbol
-      address
-    }}
-    name
-    symbol
-    cap
-    isDatatoken
-    holderCount
-    orderCount
-    orders(skip:0,first:1){{
-      amount
-      serviceIndex
-      payer {{
-        id
-      }}
-      consumer{{
-        id
-      }}
-      estimatedUSDValue
-      lastPriceToken
-      lastPriceValue
-    }}
-  }}
-  fixedRateExchanges(subgraphError:deny){{
-    id
-    price
-    active
-  }}
-}}""".format(
-    datatoken_address
-)
-
-base_url = "https://v4.subgraph.mainnet.oceanprotocol.com"
-route = "/subgraphs/name/oceanprotocol/ocean-subgraph"
-
-url = base_url + route
-
-headers = {"Content-Type": "application/json"}
-payload = json.dumps({"query": query})
-response = requests.request("POST", url, headers=headers, data=payload)
-result = json.loads(response.text)
-
-print(json.dumps(result, indent=4, sort_keys=True))
-```
-{% endcode %}
-
-**Execute script**
-
-```
-python datatoken_information.py
-```
-{% endtab %} - -{% tab title="Query" %} -Copy the query to fetch the information of a datatoken in the Ocean Subgraph [GraphiQL interface](https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql). - -``` -{ - token(id:"0x122d10d543bc600967b4db0f45f80cb1ddee43eb", subgraphError: deny){ - id - symbol - nft { - name - symbol - address - } - name - symbol - cap - isDatatoken - holderCount - orderCount - orders(skip:0,first:1){ - amount - serviceIndex - payer { - id - } - consumer{ - id - } - estimatedUSDValue - lastPriceToken - lastPriceValue - } - } - fixedRateExchanges(subgraphError:deny){ - id - price - active - } -} -``` -{% endtab %} -{% endtabs %} - -
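-
-Once the response arrives (a sample is shown below), you can, for example, read the active fixed-rate exchange prices. Note that the subgraph returns numeric values such as `price` as strings, so convert them before doing arithmetic. A minimal sketch, assuming `result` is the parsed response body:
-
-```javascript
-// Minimal sketch: collect the prices of all active fixed-rate exchanges.
-function activePrices(result) {
-  return result.data.fixedRateExchanges
-    .filter((exchange) => exchange.active)
-    .map((exchange) => Number(exchange.price))
-}
-```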
- -Sample response - -```json -{ - "data": { - "fixedRateExchanges": [ - { - "active": true, - "id": "0xfa48673a7c36a2a768f89ac1ee8c355d5c367b02-0x06284c39b48afe5f01a04d56f1aae45dbb29793b190ee11e93a4a77215383d44", - "price": "33" - }, - { - "active": true, - "id": "0xfa48673a7c36a2a768f89ac1ee8c355d5c367b02-0x2719862ebc4ed253f09088c878e00ef8ee2a792e1c5c765fac35dc18d7ef4deb", - "price": "35" - }, - { - "active": true, - "id": "0xfa48673a7c36a2a768f89ac1ee8c355d5c367b02-0x2dccaa373e4b65d5ec153c150270e989d1bda1efd3794c851e45314c40809f9c", - "price": "33" - } - ], - "token": { - "cap": "115792089237316195423570985008687900000000000000000000000000", - "holderCount": "0", - "id": "0x122d10d543bc600967b4db0f45f80cb1ddee43eb", - "isDatatoken": true, - "name": "Brave Lobster Token", - "nft": { - "address": "0xea615374949a2405c3ee555053eca4d74ec4c2f0", - "name": "Ocean Data NFT", - "symbol": "OCEAN-NFT" - }, - "orderCount": "0", - "orders": [], - "symbol": "BRALOB-11" - } - } -} -``` - -
diff --git a/developers/old-infrastructure/subgraph/get-veocean-stats.md b/developers/old-infrastructure/subgraph/get-veocean-stats.md deleted file mode 100644 index c4ffd1bdb..000000000 --- a/developers/old-infrastructure/subgraph/get-veocean-stats.md +++ /dev/null @@ -1,724 +0,0 @@ ---- -description: 'Discover the World of veOCEAN: Retrieving a Stats' ---- - -# Get veOCEAN stats - -If you are already familiarized with veOCEAN, you're off to a great start. However, if you need a refresher, we recommend visiting the [veOCEAN](../../../data-farming/passivedf.md) page for a quick overview :mag: - -On this page, you'll find a few examples to fetch the stats of veOCEANS from the Ocean Subgraph. These examples serve as a valuable starting point to help you retrieve essential information about veOCEAN. However, if you're eager to delve deeper into the topic, we invite you to visit the [GitHub](https://github.com/oceanprotocol/ocean-subgraph/blob/main/test/integration/VeOcean.test.ts) repository. There, you'll discover a wealth of additional examples, which provide comprehensive insights. Feel free to explore and expand your knowledge! :books: - -{% hint style="info" %} -The veOCEAN is deployed on the Ethereum mainnet, along with two test networks. The statistical data available is specifically limited to these networks. -{% endhint %} - -### - -### Get the total amount of locked OCEAN - -{% tabs %} -{% tab title="JavaScript" %} -You can utilize the following JavaScript code snippet to execute the query and retrieve the total number of locked OCEAN: - -```runkit nodeVersion="18.x.x" -var axios = require('axios'); - -const query = `query{ - globalStatistics{ - totalOceanLocked - } - }` - -var config = { - method: 'post', - url: `https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph`, - headers: { "Content-Type": "application/json" }, - data: JSON.stringify({ "query": query }) -}; - -axios(config) - .then(function (response) { - console.log(response.data.data.globalStatistics) - }) - .catch(function (error) { - console.log(error); - }); - -``` -{% endtab %} - -{% tab title="Python" %} -You can employ the following Python script to execute the query and retrieve the total amount of locked OCEAN from the subgraph: - -**Create script** - -{% code title="get_ocean_locked.py" %} -```python -import requests -import json - -query = """ -{ - globalStatistics { - totalOceanLocked - } -}""" - -base_url = "https://v4.subgraph.mainnet.oceanprotocol.com" -route = "/subgraphs/name/oceanprotocol/ocean-subgraph" - -url = base_url + route - -headers = {"Content-Type": "application/json"} -payload = json.dumps({"query": query}) -response = requests.request("POST", url, headers=headers, data=payload) -result = response.json() - -print(json.dumps(result, indent=4, sort_keys=True)) -``` -{% endcode %} - -**Execute script** - -``` -python get_ocean_locked.py -``` -{% endtab %} - -{% tab title="Query" %} -To fetch the total amount of Ocean locked in the Ocean Subgraph [GraphiQL](https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql) interface, you can use the following query: - -```graphql -query { - globalStatistics { - totalOceanLocked - } -} -``` -{% endtab %} -{% endtabs %} - -
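-
-As with the other queries, `totalOceanLocked` comes back as a string (see the sample response below), so convert it before using it in calculations. A minimal sketch, assuming `result` is the parsed response body:
-
-```javascript
-// Minimal sketch: read the total locked OCEAN as a number.
-function totalOceanLocked(result) {
-  // globalStatistics is an array with a single entry.
-  return Number(result.data.globalStatistics[0].totalOceanLocked)
-}
-```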
- -Sample response - -```json -{ - "data": { - "globalStatistics": [ - { - "totalOceanLocked": "38490790.606836146522318627" - } - ] - } -} -``` - -
- -### Get the veOCEAN holders list - -{% tabs %} -{% tab title="JavaScript" %} -You can utilize the following JavaScript code snippet to execute the query and fetch the list of veOCEAN holders. - -```runkit nodeVersion="18.x.x" -var axios = require('axios'); - -const query = `query { - veOCEANs { - id, - lockedAmount - unlockTime - } -}` - -var config = { - method: 'post', - url: `https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph`, - headers: { "Content-Type": "application/json" }, - data: JSON.stringify({ "query": query }) -}; - -axios(config) - .then(function (response) { - for (let veHolder of response.data.data.veOCEANs) { - console.log(veHolder) - } - }) - .catch(function (error) { - console.log(error); - }); - -``` -{% endtab %} - -{% tab title="Python" %} -You can employ the following Python script to execute the query and fetch the list of veOCEAN holders from the subgraph. - -{% code title="get_veOcean_holders.py" %} -```python -import requests -import json - -query = """ -{ - veOCEANs { - id, - lockedAmount - unlockTime - } -}""" - -base_url = "https://v4.subgraph.mainnet.oceanprotocol.com" -route = "/subgraphs/name/oceanprotocol/ocean-subgraph" - -url = base_url + route - -headers = {"Content-Type": "application/json"} -payload = json.dumps({"query": query}) -response = requests.request("POST", url, headers=headers, data=payload) -result = json.loads(response.text) - -print(json.dumps(result, indent=4, sort_keys=True)) -``` -{% endcode %} - -**Execute script** - -```bash -python get_veOcean_holders.py -``` -{% endtab %} - -{% tab title="Query" %} -To fetch the list of veOCEAN holders in the Ocean Subgraph [GraphiQL](https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql) interface, you can use the following query: - -```graphql -query { - veOCEANs { - id, - lockedAmount - unlockTime - } -} -``` -{% endtab %} -{% endtabs %} - -
- -Sample response - -{% code overflow="wrap" %} -```json -{ - "data": { - "veOCEANs": [ - { - "id": "0x000afce0e19523ca2566b142bd12968fe1e44fe8", - "lockedAmount": "1011", - "unlockTime": "1727913600" - }, - { - "id": "0x001b71fad769b3cd47fd4c9849c704fdfabf6096", - "lockedAmount": "8980", - "unlockTime": "1790208000" - }, - { - "id": "0x002570980aa53893c6981765698b6ebab8ae7ea1", - "lockedAmount": "126140", - "unlockTime": "1790208000" - }, - { - "id": "0x006d0f31a00e1f9c017ab039e9d0ba699433a28c", - "lockedAmount": "75059", - "unlockTime": "1812585600" - }, - { - "id": "0x006d559fc29090589d02fb71d4142aa58b030013", - "lockedAmount": "100", - "unlockTime": "1793232000" - }, - { - "id": "0x008ed443f31a4b3aee02fbfe61c7572ddaf3a679", - "lockedAmount": "1100", - "unlockTime": "1795651200" - }, - { - "id": "0x009ec7d76febecabd5c73cb13f6d0fb83e45d450", - "lockedAmount": "11200", - "unlockTime": "1790812800" - }, - { - "id": "0x01d5595949fdbe521fbc39eaf09192dffb3bfc17", - "lockedAmount": "28576", - "unlockTime": "1675900800" - }, - { - "id": "0x02535d7bab47a83d33623c9a4ca854a1b1192121", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x02a6ab92964309e0d8a739e0252b3acfd3a58972", - "lockedAmount": "1178", - "unlockTime": "1712188800" - }, - { - "id": "0x02aa319b5ce28294b7207bdce3bbcf4bf514c05b", - "lockedAmount": "300", - "unlockTime": "1736985600" - }, - { - "id": "0x02ae6dfaffc2c1f410fcad1f36885f6cc8b677d5", - "lockedAmount": "1009", - "unlockTime": "1730937600" - }, - { - "id": "0x034e1f7a66b582b68e511b325ed0ccb71bb4bc12", - "lockedAmount": "15919", - "unlockTime": "1727913600" - }, - { - "id": "0x035a209abf018e4f94173fdeabe5abe69f1efbed", - "lockedAmount": "1907", - "unlockTime": "1714003200" - }, - { - "id": "0x03d4682823c33995184a6a85a97f4ca1715c9d5c", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x04aa87fa73238b563417d17ca7e57fd91ccd521e", - "lockedAmount": "9435", - "unlockTime": "1801699200" - }, - { - "id": "0x04c697561092c9cc56be6ff5b8e2789b0ca5837c", - "lockedAmount": "226", - "unlockTime": "1681948800" - }, - { - "id": "0x051f12380b842104391a0f9c55b32f6636cc7a0f", - "lockedAmount": "24900", - "unlockTime": "1685577600" - }, - { - "id": "0x054e061f1e1c1d775a2e5f20304aab83af7dab63", - "lockedAmount": "5000", - "unlockTime": "1701907200" - }, - { - "id": "0x054efb6d55466ba2ffb4133f39ae67985a314bed", - "lockedAmount": "33083", - "unlockTime": "1697068800" - }, - { - "id": "0x05a79e69c0dcb9335cbfa5b579635cbbd60f70ba", - "lockedAmount": "15837", - "unlockTime": "1728518400" - }, - { - "id": "0x05b2716d750f50c4fcd2110c5bff3f74bf0910e6", - "lockedAmount": "744", - "unlockTime": "1796256000" - }, - { - "id": "0x05b93ddd5a0ecfbdda3ccccd11882820f9cf7454", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x05c01104bd6c4c099fe4d13b0faf0a8c94f11082", - "lockedAmount": "106026", - "unlockTime": "1723680000" - }, - { - "id": "0x06a2006ca85813e652506b865e590f44eae3928a", - "lockedAmount": "3100", - "unlockTime": "1727308800" - }, - { - "id": "0x0705adac1869aa2648ddcf00da24b0ab6b76ede1", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x07dee7fb11086d543ed943bf075ad6ac2007aada", - "lockedAmount": "34", - "unlockTime": "1665014400" - }, - { - "id": "0x0848db7cb495e7b9ada1d4dc972b9a526d014d84", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x0861fcabe37a5ce396a8d85cd816e0cc6b4633ff", - "lockedAmount": "500", - "unlockTime": "1738800000" - }, - { - "id": "0x08c26d09393dc0adc7349c0c8d1bdae63555c312", - "lockedAmount": "0", - "unlockTime": 
"0" - }, - { - "id": "0x0a8162d91d6bf4530950e539068c75f7ddf972bc", - "lockedAmount": "534", - "unlockTime": "1791417600" - }, - { - "id": "0x0abe9b7740686cbf24b9f206e7d4e8ec25519476", - "lockedAmount": "230", - "unlockTime": "1690416000" - }, - { - "id": "0x0aef715335d0a19b870ca20fb540e16a6e606fbd", - "lockedAmount": "210", - "unlockTime": "1696464000" - }, - { - "id": "0x0b5665d637f45d6fff6c4afd4ea4191904ef38bb", - "lockedAmount": "10000", - "unlockTime": "1710979200" - }, - { - "id": "0x0bc1e0d21e3806056eeca20b69dd3f33bb49d0c7", - "lockedAmount": "690", - "unlockTime": "1738195200" - }, - { - "id": "0x0bc9cd548cc04bfcf8ef2fca50c13b9b4f62f6d4", - "lockedAmount": "1250", - "unlockTime": "1796256000" - }, - { - "id": "0x0bdf0d54e6f64da97728051e702fa0b9f61d2375", - "lockedAmount": "1024", - "unlockTime": "1701302400" - }, - { - "id": "0x0be1b7f1a2eacde1cf5b48a4a1034c70dac06a70", - "lockedAmount": "19982", - "unlockTime": "1800489600" - }, - { - "id": "0x0c16b6d59a9d242f9cf6ca1999e372dd89a098a2", - "lockedAmount": "1000", - "unlockTime": "1723075200" - }, - { - "id": "0x0c21d79f460f7cacf3fd35172151bdbc5d61d9c1", - "lockedAmount": "10", - "unlockTime": "1676505600" - }, - { - "id": "0x0c4f299cce0e56004a6e3a30f43146a205bd2b9d", - "lockedAmount": "250", - "unlockTime": "1690416000" - }, - { - "id": "0x0c59aeeb4f82bbb7e38958900df5bf499c3e9e4f", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x0c6415489a8cc61ca7d32a29f7cdc1e980af16f1", - "lockedAmount": "3788", - "unlockTime": "1725494400" - }, - { - "id": "0x0ca0c241a45a9e8abad30a632df1a9a09a4eb692", - "lockedAmount": "24987", - "unlockTime": "1729123200" - }, - { - "id": "0x0cf776d57e0223f47ed3a101927bb78d41ad8a13", - "lockedAmount": "16967", - "unlockTime": "1790208000" - }, - { - "id": "0x0d04e73d950ff53e586da588c43bb3ac5ae53872", - "lockedAmount": "19517", - "unlockTime": "1703721600" - }, - { - "id": "0x0daefc5251f8f7f5a5dc987e8a6c96d9deb84559", - "lockedAmount": "3000", - "unlockTime": "1727308800" - }, - { - "id": "0x0e0bab764f38d63abf08680a50b33718c98b90e6", - "lockedAmount": "13782", - "unlockTime": "1797465600" - }, - { - "id": "0x0ed8063fcc5b44f664333b59a12d187de6551088", - "lockedAmount": "265", - "unlockTime": "1804118400" - }, - { - "id": "0x0ed8486119b992258a3754decaa36bf8bed543e8", - "lockedAmount": "25881", - "unlockTime": "1697068800" - }, - { - "id": "0x0efbdc4e858cbb269545d48f7b30ab260a3e5d10", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x0f1107f97af6ae6eb37a9d35060aaa21cdaa109f", - "lockedAmount": "15000", - "unlockTime": "1790812800" - }, - { - "id": "0x0f84452c0dcda0c9980a0a802eb8b8dbaaf52c54", - "lockedAmount": "25", - "unlockTime": "1687392000" - }, - { - "id": "0x1019b7e639234c589c34385955adfbe0af8d8453", - "lockedAmount": "2121", - "unlockTime": "1706140800" - }, - { - "id": "0x104e9bce2d1a6fb449c14272f0157422a00adaa5", - "lockedAmount": "7300", - "unlockTime": "1744243200" - }, - { - "id": "0x111849a4943891b071f7cdb1babebcb74415204a", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x11300251b903ba70f51262f3e49aa7c22f81e1b2", - "lockedAmount": "1504", - "unlockTime": "1794441600" - }, - { - "id": "0x119b6e8c6b258b2b93443e949ef5066a85d75e44", - "lockedAmount": "30000", - "unlockTime": "1748476800" - }, - { - "id": "0x11e43d79e4193dfc1247697cb0ae15b17d27fc5b", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x1215fed867ad6eb5f078fc8b477a1a32eb59d75d", - "lockedAmount": "18752", - "unlockTime": "1730332800" - }, - { - "id": 
"0x126bc064dbd1d0205fc608c3178a60c9706b482c", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x1280cfea89a214b490c202fa22688813df8d8c04", - "lockedAmount": "26000", - "unlockTime": "1727913600" - }, - { - "id": "0x13203b4fef73f05b3db709c41c96179b37bf01eb", - "lockedAmount": "293", - "unlockTime": "1738195200" - }, - { - "id": "0x1479a4884dee82dc8471e0006102f9d400445332", - "lockedAmount": "13009", - "unlockTime": "1698883200" - }, - { - "id": "0x149756907221491eca8c5816a6b5d6b60fcd7d60", - "lockedAmount": "4985", - "unlockTime": "1701907200" - }, - { - "id": "0x153785d85dffe5b92083e30003aa58f18344d032", - "lockedAmount": "50", - "unlockTime": "1802304000" - }, - { - "id": "0x15558eb2aeb93ed561515a47441bf49250933ba9", - "lockedAmount": "500000", - "unlockTime": "1804118400" - }, - { - "id": "0x15a919e499d88a71e94d34ab76986799f69b4ff2", - "lockedAmount": "4940", - "unlockTime": "1733961600" - }, - { - "id": "0x15abf18f424cd2755e9d680eeeaa02bc00c1f00e", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x15f311af257d6e8520ebf29eae5ba76c4dd45c6a", - "lockedAmount": "1420", - "unlockTime": "1796860800" - }, - { - "id": "0x1609665376e39e9d9cdfdc75e44f80bb899e9d21", - "lockedAmount": "8016", - "unlockTime": "1699488000" - }, - { - "id": "0x1694ab8e597e90fcb4cd637bafa3e553fc1d0083", - "lockedAmount": "364", - "unlockTime": "1734566400" - }, - { - "id": "0x175437b00da09f18d89571b95a41a15aa8415eba", - "lockedAmount": "88050", - "unlockTime": "1798675200" - }, - { - "id": "0x1758bc68a87abfede6a213666d15c028f2708b2b", - "lockedAmount": "1494", - "unlockTime": "1731542400" - }, - { - "id": "0x1789bf2df0fffa3ab5d235b41ecb72f48294d955", - "lockedAmount": "920", - "unlockTime": "1701302400" - }, - { - "id": "0x1843c3d1dd3e2564fada8ea50bb73819c6b53047", - "lockedAmount": "3354", - "unlockTime": "1793836800" - }, - { - "id": "0x184f19323defce76af86bb5a63aa976cd9f256d7", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x18559e7f5d87f5c607a34ed45453d62832804c97", - "lockedAmount": "3275", - "unlockTime": "1687996800" - }, - { - "id": "0x1891c8d948bc041b5e7c1a35185cc593a33b4a6c", - "lockedAmount": "7436", - "unlockTime": "1790208000" - }, - { - "id": "0x1a0d80e1bd429127bc9a4acee880426b818764ee", - "lockedAmount": "420", - "unlockTime": "1807747200" - }, - { - "id": "0x1a2409444f2f349c2e539eb013eed985b9d54e2f", - "lockedAmount": "500", - "unlockTime": "1687996800" - }, - { - "id": "0x1a9a6198c28d4dd5b9ab58e84677520ec741cb29", - "lockedAmount": "2565", - "unlockTime": "1683158400" - }, - { - "id": "0x1ab21891e9230e4a8c3e09d88e3c0b48d54f1a86", - "lockedAmount": "980", - "unlockTime": "1734566400" - }, - { - "id": "0x1bafc574581ea4b938dcfe0d0d93778303cb3fb7", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x1c175ce4f8f3e8a16df7165f15057a82a88c025c", - "lockedAmount": "953", - "unlockTime": "1692230400" - }, - { - "id": "0x1c7b100cc8a2966d35ac6cc0ccaf4d5cba463b94", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x1cd1b778cdc329292d196e490b65b7950bee1c97", - "lockedAmount": "301", - "unlockTime": "1700092800" - }, - { - "id": "0x1d11c308464f09228f7c81daa253ff9f415ea4f7", - "lockedAmount": "21908", - "unlockTime": "1697068800" - }, - { - "id": "0x1d3c2dc18ca3da0406cfb3634faab589c769215b", - "lockedAmount": "625", - "unlockTime": "1689811200" - }, - { - "id": "0x1dc865705a03d63953e7df83caefc8928e555b6c", - "lockedAmount": "5245", - "unlockTime": "1812585600" - }, - { - "id": "0x1ddb98275a09552b5be11e8e3118684ed6a809fc", - "lockedAmount": "10000", - 
"unlockTime": "1725494400" - }, - { - "id": "0x1e180d121eff6cd1b376af9318d4128093c46032", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x1e2394b6b88f9329127d98347f6e696e4af33e13", - "lockedAmount": "0", - "unlockTime": "0" - }, - { - "id": "0x1e38e305126bfe9b6329f5fdce28d72fdf9d5647", - "lockedAmount": "183844", - "unlockTime": "1801699200" - }, - { - "id": "0x1f130be1f04e159ef98c54f677b9b980b012417b", - "lockedAmount": "10663", - "unlockTime": "1745452800" - }, - { - "id": "0x1f3bcd409b2b2d88259aca77115e858ea3c65e9c", - "lockedAmount": "2000", - "unlockTime": "1732147200" - }, - { - "id": "0x1fac06467b7d9c3a9361f42ab7bd09e6a5719ec7", - "lockedAmount": "81285", - "unlockTime": "1802908800" - }, - { - "id": "0x1fba4f4446859ab451cb7f3b8fbce9bcdc97fdb9", - "lockedAmount": "560", - "unlockTime": "1689206400" - }, - { - "id": "0x200fa3e7e3fbfeb15b76e53f2810faec71a5336d", - "lockedAmount": "2375", - "unlockTime": "1805932800" - }, - { - "id": "0x2017ade0a289de891ca7e733513b264cfec2c8ce", - "lockedAmount": "9119", - "unlockTime": "1703721600" - } - ] - } -} -``` -{% endcode %} - -
diff --git a/developers/old-infrastructure/subgraph/list-data-nfts.md b/developers/old-infrastructure/subgraph/list-data-nfts.md deleted file mode 100644 index 6d3bc9726..000000000 --- a/developers/old-infrastructure/subgraph/list-data-nfts.md +++ /dev/null @@ -1,250 +0,0 @@ ---- -description: 'Discover the World of NFTs: Retrieving a List of Data NFTs' ---- - -# Get data NFTs - -If you are already familiar with the concept of NFTs, you're off to a great start. However, if you require a refresher, we recommend visiting the [data NFTs and datatokens page](../../contracts/datanft-and-datatoken.md) for a quick overview. - -Now, let us delve into the realm of utilizing the subgraph to extract a list of data NFTs that have been published using the Ocean contracts. By employing GraphQL queries, we can seamlessly retrieve the desired information from the subgraph. You'll see how simple it is :sunglasses: - -You'll find below an example of a GraphQL query that retrieves the first 10 data NFTs from the subgraph. The GraphQL query is structured to access the "nfts" route, extracting the first 10 elements. For each item, it fetches the `id`, `name`, `symbol`, `owner`, `address`, `assetState`, `tx`, `block` and `transferable` parameters. - -There are several options available to see this query in action. Below, you will find three: - -1. Run the GraphQL query in the GraphiQL interface. -2. Execute the query in Python by following the code snippet. -3. Execute the query in JavaScript by clicking on the "Run" button of the JavaScript tab. - -_PS: In these examples, the query is executed on the Ocean subgraph deployed on the mainnet. If you want to change the network, please refer to_ [_this table_](./#ocean-subgraph-deployments)_._ - -{% tabs %} -{% tab title="JavaScript" %} -The JavaScript below can be used to run the query and retrieve a list of NFTs. If you wish to change the network, then replace the value of the `network` variable as needed. - -```runkit nodeVersion="18.x.x" -const axios = require('axios') - -const query = `{ - nfts (skip:0, first: 10, subgraphError:deny){ - id - name - symbol - owner - address - assetState - tx - block - transferable - } -}` - -const network = "mainnet" -const config = { - method: 'post', - url: `https://v4.subgraph.${network}.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph`, - headers: { 'Content-Type': 'application/json' }, - data: JSON.stringify({ query: query }) -} - -const response = await axios(config) -for (let nft of response.data.data.nfts) { - console.log(' id:' + nft.id + ' name: ' + nft.name + ' address: ' + nft.address) -} - -``` -{% endtab %} - -{% tab title="Python" %} -The Python script below can be used to run the query to fetch a list of data NFTs from the subgraph. If you wish to change the network, replace the value of the variable `base_url` as needed.
- -**Create script** - -{% code title="list_dataNFTs.py" %} -```python -import requests -import json - - -query = """ -{ - nfts (skip:0, first: 10, subgraphError:deny){ - id - name - symbol - owner - address - assetState - tx - block - transferable - } -}""" - - -base_url = "https://v4.subgraph.mainnet.oceanprotocol.com" -route = "/subgraphs/name/oceanprotocol/ocean-subgraph" - -url = base_url + route - -headers = {"Content-Type": "application/json"} -payload = json.dumps({"query": query}) -response = requests.request("POST", url, headers=headers, data=payload) -result = json.loads(response.text) - -print(json.dumps(result, indent=4, sort_keys=True)) -``` -{% endcode %} - -**Execute script** - -``` -python list_dataNFTs.py -``` -{% endtab %} - -{% tab title="Query" %} -Copy the query to fetch a list of data NFTs in the Ocean Subgraph [GraphiQL interface](https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql). - -```graphql -{ - nfts (skip:0, first: 10, subgraphError:deny){ - id - name - symbol - owner - address - assetState - tx - block - transferable - } -} -``` -{% endtab %} -{% endtabs %} - -
- -Sample response - -```json -{ - "data": { - "nfts": [ - { - "address": "0x1c161d721e6d99f58d47f709cdc77025056c544c", - "assetState": 0, - "block": 15185270, - "id": "0x1c161d721e6d99f58d47f709cdc77025056c544c", - "name": "Ocean Data NFT", - "owner": "0xd30dd83132f2227f114db8b90f565bca2832afbd", - "symbol": "OCEAN-NFT", - "transferable": true, - "tx": "0x327a9da0d2e9df945fd2f8e10b1caa77acf98e803c5a2f588597172a0bcbb93a" - }, - { - "address": "0x1e06501660623aa973474e3c59efb8ba542cb083", - "assetState": 0, - "block": 15185119, - "id": "0x1e06501660623aa973474e3c59efb8ba542cb083", - "name": "Ocean Data NFT", - "owner": "0xd30dd83132f2227f114db8b90f565bca2832afbd", - "symbol": "OCEAN-NFT", - "transferable": true, - "tx": "0xd351ccee22b505d811c29fa524db920815936672b20b8f3a09485e389902fd27" - }, - { - "address": "0x2eaa55236f799c6ebec72e77a1a6296ea2e704b1", - "assetState": 0, - "block": 15185009, - "id": "0x2eaa55236f799c6ebec72e77a1a6296ea2e704b1", - "name": "Ocean Data NFT", - "owner": "0xd30dd83132f2227f114db8b90f565bca2832afbd", - "symbol": "OCEAN-NFT", - "transferable": true, - "tx": "0xf6d55306ab4dc339dc1655a2d119af468a79a70fa62ea11de78879da61e89e7b" - }, - { - "address": "0x2fbe924f6c92825929dc7785fe05d15e35f2612b", - "assetState": 0, - "block": 15185235, - "id": "0x2fbe924f6c92825929dc7785fe05d15e35f2612b", - "name": "Ocean Data NFT", - "owner": "0xd30dd83132f2227f114db8b90f565bca2832afbd", - "symbol": "OCEAN-NFT", - "transferable": true, - "tx": "0xa9ff9d461b4b7344ea181de32fa6412c7ea8e21f485ab4d8a7b9cfcdb68d9d51" - }, - { - "address": "0x4c04433bb1760a66be7713884bb6370e9c567cef", - "assetState": 0, - "block": 15185169, - "id": "0x4c04433bb1760a66be7713884bb6370e9c567cef", - "name": "Ocean Data NFT", - "owner": "0xd30dd83132f2227f114db8b90f565bca2832afbd", - "symbol": "OCEAN-NFT", - "transferable": true, - "tx": "0x54c5463e8988b5fa4e4cfe71ee391505801931abe9e94bf1588dd538ec3aa4c9" - }, - { - "address": "0x619c500dcb0251b31cd480030db2dcc19866c0c3", - "assetState": 0, - "block": 15236619, - "id": "0x619c500dcb0251b31cd480030db2dcc19866c0c3", - "name": "abc", - "owner": "0x12fe650c86cd4346933ef1bcab21a1979d4c6786", - "symbol": "GOAL-9956", - "transferable": true, - "tx": "0x6178b03589cda98573ff52a1afbcc07b14a2fddacc0132595949e9d8a0ed1b32" - }, - { - "address": "0x6d45a5b38a122a6dbc042601236d6ecc5c8e343e", - "assetState": 0, - "block": 15109853, - "id": "0x6d45a5b38a122a6dbc042601236d6ecc5c8e343e", - "name": "Ocean Data NFT", - "owner": "0xbbd33afa85539fa65cc08a2e61a001876d2f13fe", - "symbol": "OCEAN-NFT", - "transferable": true, - "tx": "0x27aa77a0bf3f7878910dc7bfe2116d9271138c222b3d898381a5c72eefefe624" - }, - { - "address": "0x7400078c5d4fd7704afca45a928d9fc97cbea744", - "assetState": 0, - "block": 15185056, - "id": "0x7400078c5d4fd7704afca45a928d9fc97cbea744", - "name": "Ocean Data NFT", - "owner": "0xd30dd83132f2227f114db8b90f565bca2832afbd", - "symbol": "OCEAN-NFT", - "transferable": true, - "tx": "0x2025374cd347e25e2651feec2f2faa2feb26664698eaea42b5dad1a31eda57f8" - }, - { - "address": "0x81decdb59dce5b4323e683a76f8fa8dd0eabc560", - "assetState": 0, - "block": 15185003, - "id": "0x81decdb59dce5b4323e683a76f8fa8dd0eabc560", - "name": "Ocean Data NFT", - "owner": "0xd30dd83132f2227f114db8b90f565bca2832afbd", - "symbol": "OCEAN-NFT", - "transferable": true, - "tx": "0x6ad6ec2ce86bb70e077590a64c886d72975374bd2e993f143d9da8edcaace82b" - }, - { - "address": "0x8684119ecf77c5be41f01760ad466725ffd9b960", - "assetState": 0, - "block": 14933034, - "id": 
"0x8684119ecf77c5be41f01760ad466725ffd9b960", - "name": "Ocean Data NFT", - "owner": "0x87b5606fba13529e1812319d25c6c2cd5c3f3cbc", - "symbol": "OCEAN-NFT", - "transferable": true, - "tx": "0x55ba746cd8e8fb4c739b8544a9034848082b627500b854cb8db0802dd7beb172" - } - ] - } -} -``` - -
diff --git a/developers/old-infrastructure/subgraph/list-datatokens.md b/developers/old-infrastructure/subgraph/list-datatokens.md deleted file mode 100644 index a93fefeb3..000000000 --- a/developers/old-infrastructure/subgraph/list-datatokens.md +++ /dev/null @@ -1,213 +0,0 @@ ---- -description: 'Discover the World of datatokens: Retrieving a List of datatokens' ---- - -# Get datatokens - -With your newfound knowledge of fetching data NFTs and retrieving the associated information, fetching a list of datatokens will be a breeze :ocean:. By applying similar techniques and leveraging the power of GraphQL queries, you can effortlessly navigate the landscape of datatokens and access the wealth of information they hold. So, let's dive right in. - -_PS: In this example, the query is executed on the Ocean subgraph deployed on the mainnet. If you want to change the network, please refer to_ [_this table_](README.md#ocean-subgraph-deployments)_._ - -{% tabs %} -{% tab title="JavaScript" %} -The JavaScript below can be used to run the query. If you wish to change the network, replace the value of the `network` variable as needed. - -```runkit nodeVersion="18.x.x" -var axios = require('axios'); - -const query = `{ - tokens(skip:0, first: 2, subgraphError: deny){ - id - symbol - nft { - name - symbol - address - } - name - symbol - cap - isDatatoken - holderCount - orderCount - orders(skip:0,first:1){ - amount - serviceIndex - payer { - id - } - consumer{ - id - } - estimatedUSDValue - lastPriceToken - lastPriceValue - } - } -}` - -const network = "mainnet" -var config = { - method: 'post', - url: `https://v4.subgraph.${network}.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph`, - headers: { "Content-Type": "application/json" }, - data: JSON.stringify({ "query": query }) -}; - -axios(config) - .then(function (response) { - let result = JSON.stringify(response.data) - console.log(result); - }) - .catch(function (error) { - console.log(error); - }); - -``` -{% endtab %} - -{% tab title="Python" %} -The Python script below can be used to run the query and fetch a list of datatokens. If you wish to change the network, then replace the value of the variable `base_url` as needed.
- -**Create script** - -{% code title="list_all_tokens.py" %} -```python -import requests -import json - -query = """ -{ - tokens(skip:0, first: 2, subgraphError: deny){ - id - symbol - nft { - name - symbol - address - } - name - symbol - cap - isDatatoken - holderCount - orderCount - orders(skip:0,first:1){ - amount - serviceIndex - payer { - id - } - consumer{ - id - } - estimatedUSDValue - lastPriceToken - lastPriceValue - } - } -}""" - -base_url = "https://v4.subgraph.mainnet.oceanprotocol.com" -route = "/subgraphs/name/oceanprotocol/ocean-subgraph" - -url = base_url + route - -headers = {"Content-Type": "application/json"} -payload = json.dumps({"query": query}) -response = requests.request("POST", url, headers=headers, data=payload) -result = json.loads(response.text) - -print(json.dumps(result, indent=4, sort_keys=True)) -``` -{% endcode %} - -**Execute script** - -```bash -python list_all_tokens.py -``` -{% endtab %} - -{% tab title="Query" %} -Copy the query to fetch a list of datatokens in the Ocean Subgraph [GraphiQL interface](https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql). - -```graphql -{ - tokens(skip:0, first: 2, subgraphError: deny){ - id - symbol - nft { - name - symbol - address - } - name - symbol - cap - isDatatoken - holderCount - orderCount - orders(skip:0,first:1){ - amount - serviceIndex - payer { - id - } - consumer{ - id - } - estimatedUSDValue - lastPriceToken - lastPriceValue - } - } -} -``` -{% endtab %} -{% endtabs %} - -
- -Sample Response - -```json -{ - "data": { - "tokens": [ - { - "cap": null, - "holderCount": "0", - "id": "0x0642026e7f0b6ccac5925b4e7fa61384250e1701", - "isDatatoken": false, - "name": "H2O", - "nft": null, - "orderCount": "0", - "orders": [], - "symbol": "H2O" - }, - { - "cap": "115792089237316195423570985008687900000000000000000000000000", - "holderCount": "0", - "id": "0x122d10d543bc600967b4db0f45f80cb1ddee43eb", - "isDatatoken": true, - "name": "Brave Lobster Token", - "nft": { - "address": "0xea615374949a2405c3ee555053eca4d74ec4c2f0", - "name": "Ocean Data NFT", - "symbol": "OCEAN-NFT" - }, - "orderCount": "0", - "orders": [], - "symbol": "BRALOB-11" - } - ] - } -} -``` - -
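Note that the `tokens` collection also contains base tokens such as H2O (see `isDatatoken: false` in the sample response above). If you want datatokens only, a filter on that field narrows the results; a minimal sketch, assuming the standard The Graph `where` argument:

```javascript
var axios = require('axios');

// Only return actual datatokens, excluding base tokens such as H2O.
const query = `{
  tokens(first: 10, where: { isDatatoken: true }) {
    id
    name
    symbol
    orderCount
  }
}`

var config = {
  method: 'post',
  url: 'https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph',
  headers: { "Content-Type": "application/json" },
  data: JSON.stringify({ "query": query })
};

axios(config)
  .then(function (response) {
    console.log(response.data.data.tokens)
  })
  .catch(function (error) {
    console.log(error);
  });
```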
diff --git a/developers/old-infrastructure/subgraph/list-fixed-rate-exchanges.md b/developers/old-infrastructure/subgraph/list-fixed-rate-exchanges.md deleted file mode 100644 index 76b14089a..000000000 --- a/developers/old-infrastructure/subgraph/list-fixed-rate-exchanges.md +++ /dev/null @@ -1,242 +0,0 @@ ---- -description: 'Discover the World of Fixed-Rate Exchanges: Retrieving a List of Fixed-Rate Exchanges' ---- - -# Get fixed-rate exchanges - -Having gained knowledge about fetching lists of data NFTs and datatokens and extracting specific information about each, let's now explore the process of retrieving information about fixed-rate exchanges. A fixed-rate exchange refers to a mechanism where data assets can be traded at a predetermined rate or price. These exchanges offer stability and predictability in data transactions, enabling users to securely and reliably exchange data assets based on fixed rates. If you need a refresher on fixed-rate exchanges, visit the [asset pricing](../../contracts/pricing-schemas.md#fixed-pricing) page. - -_PS: In this example, the query is executed on the Ocean subgraph deployed on the mainnet. If you want to change the network, please refer to_ [_this table_](./#ocean-subgraph-deployments)_._ - -{% tabs %} -{% tab title="JavaScript" %} -The JavaScript below can be used to run the query and fetch a list of fixed-rate exchanges. If you wish to change the network, replace the value of the `network` variable as needed. - -```runkit nodeVersion="18.x.x" -var axios = require('axios'); - -const query = `{ - fixedRateExchanges(skip:0, first:2, subgraphError:deny){ - id - contract - exchangeId - owner{id} - datatoken{ - id - name - symbol - } - price - datatokenBalance - active - totalSwapValue - swaps(skip:0, first:1){ - tx - by { - id - } - baseTokenAmount - dataTokenAmount - createdTimestamp - } - updates(skip:0, first:1){ - oldPrice - newPrice - newActive - createdTimestamp - tx - } - } -}` - -const network = "mainnet" -var config = { - method: 'post', - url: `https://v4.subgraph.${network}.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph`, - headers: { "Content-Type": "application/json" }, - data: JSON.stringify({ "query": query }) -}; - -axios(config) - .then(function (response) { - let result = JSON.stringify(response.data) - console.log(result) - }) - .catch(function (error) { - console.log(error); - }); - -``` -{% endtab %} - -{% tab title="Python" %} -The Python script below can be used to run the query and retrieve a list of fixed-rate exchanges. If you wish to change the network, then replace the value of the variable `base_url` as needed.
- -**Create script** - -{% code title="list_fixed_rate_exchanges.py" %} -```python -import requests -import json - - -query = """ -{ - fixedRateExchanges(skip:0, first:2, subgraphError:deny){ - id - contract - exchangeId - owner{id} - datatoken{ - id - name - symbol - } - price - datatokenBalance - active - totalSwapValue - swaps(skip:0, first:1){ - tx - by { - id - } - baseTokenAmount - dataTokenAmount - createdTimestamp - } - updates(skip:0, first:1){ - oldPrice - newPrice - newActive - createdTimestamp - tx - } - } -}""" - - -base_url = "https://v4.subgraph.mainnet.oceanprotocol.com" -route = "/subgraphs/name/oceanprotocol/ocean-subgraph" - -url = base_url + route - -headers = {"Content-Type": "application/json"} -payload = json.dumps({"query": query}) -response = requests.request("POST", url, headers=headers, data=payload) -result = json.loads(response.text) - -print(json.dumps(result, indent=4, sort_keys=True)) -``` -{% endcode %} - -**Execute script** - -``` -python list_fixed_rate_exchanges.py -``` -{% endtab %} - -{% tab title="Query" %} -Copy the query to fetch a list of fixed-rate exchanges in the Ocean Subgraph [GraphiQL interface](https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql). - -``` -{ - fixedRateExchanges(skip:0, first:2, subgraphError:deny){ - id - contract - exchangeId - owner{id} - datatoken{ - id - name - symbol - } - price - datatokenBalance - active - totalSwapValue - swaps(skip:0, first:1){ - tx - by { - id - } - baseTokenAmount - dataTokenAmount - createdTimestamp - } - updates(skip:0, first:1){ - oldPrice - newPrice - newActive - createdTimestamp - tx - } - } -} -``` -{% endtab %} -{% endtabs %} - -
- -Sample response - -```json -{ - "data": { - "fixedRateExchanges": [ - { - "active": true, - "contract": "0xfa48673a7c36a2a768f89ac1ee8c355d5c367b02", - "datatoken": { - "id": "0x9b39a17cc72c8be4813d890172eff746470994ac", - "name": "Delightful Pelican Token", - "symbol": "DELPEL-79" - }, - "datatokenBalance": "0", - "exchangeId": "0x06284c39b48afe5f01a04d56f1aae45dbb29793b190ee11e93a4a77215383d44", - "id": "0xfa48673a7c36a2a768f89ac1ee8c355d5c367b02-0x06284c39b48afe5f01a04d56f1aae45dbb29793b190ee11e93a4a77215383d44", - "owner": { - "id": "0x03ef3f422d429bcbd4ee5f77da2917a699f237ed" - }, - "price": "33", - "swaps": [ - { - "baseTokenAmount": "33.033", - "by": { - "id": "0x9b39a17cc72c8be4813d890172eff746470994ac" - }, - "createdTimestamp": 1656563684, - "dataTokenAmount": "1", - "tx": "0x0b55482f69169c103563062e109f9d71afa01d18f201c425b24b1c74d3c282a3" - } - ], - "totalSwapValue": "0", - "updates": [] - }, - { - "active": true, - "contract": "0xfa48673a7c36a2a768f89ac1ee8c355d5c367b02", - "datatoken": { - "id": "0x2cf074e36a802241f2f8ddb35f4a4557b8f1179b", - "name": "Arcadian Eel Token", - "symbol": "ARCEEL-17" - }, - "datatokenBalance": "0", - "exchangeId": "0x2719862ebc4ed253f09088c878e00ef8ee2a792e1c5c765fac35dc18d7ef4deb", - "id": "0xfa48673a7c36a2a768f89ac1ee8c355d5c367b02-0x2719862ebc4ed253f09088c878e00ef8ee2a792e1c5c765fac35dc18d7ef4deb", - "owner": { - "id": "0x87b5606fba13529e1812319d25c6c2cd5c3f3cbc" - }, - "price": "35", - "swaps": [], - "totalSwapValue": "0", - "updates": [] - } - ] - } -} -``` - -
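The Graph also supports sorting. Building on the query above, the following sketch lists the highest-priced active exchanges first; it assumes the standard `orderBy`/`orderDirection` arguments are available for the `price` field:

```javascript
var axios = require('axios');

// List the most expensive active fixed-rate exchanges first.
const query = `{
  fixedRateExchanges(
    first: 5
    orderBy: price
    orderDirection: desc
    where: { active: true }
  ) {
    id
    price
    active
    datatoken {
      symbol
    }
  }
}`

var config = {
  method: 'post',
  url: 'https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph',
  headers: { "Content-Type": "application/json" },
  data: JSON.stringify({ "query": query })
};

axios(config)
  .then(function (response) {
    console.log(response.data.data.fixedRateExchanges)
  })
  .catch(function (error) {
    console.log(error);
  });
```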
diff --git a/developers/publishing-flow-architecture.md b/developers/publishing-flow-architecture.md deleted file mode 100644 index 4f44e5dc4..000000000 --- a/developers/publishing-flow-architecture.md +++ /dev/null @@ -1,39 +0,0 @@ -# High-Level Publish Flow - -Let's revisit how Ocean's stack components interact during the DDO publishing flow! - -For this particular flow, we selected Ocean CLI as the consumer. -To explore more details regarding Ocean CLI usage, kindly check [this dedicated section](../developers/ocean-cli/README.md). - -The sequence diagram below illustrates the flow, followed by step-by-step explanations. -

DDO Publish Flow

- -1. **Asset Creation Begins** -- The End User initiates the process by running the command **`npm run publish`**. -This redirects to Ocean CLI (the Consumer) to start publishing the dataset with the selected file. - -- The Consumer then calls `ocean.js`, which handles the asset creation logic. - -2. **Smart Contract Deployment** - -- Ocean.js interacts with the Smart Contracts to deploy: -the Data NFT, the Datatoken, and a pricing schema such as __Dispenser__ -for free assets and __Fixed Rate Exchange__ for priced assets. - -- Once deployed, the smart contracts emit the **NFTCreated** and **DatatokenCreated** events (and additionally **DispenserCreated** and **FixedRateCreated** for pricing schema deployments). - -- Ocean.js listens to these events and checks the datatoken template. If it is template 4, then no encryption is needed for service files, because the [template 4 ERC20 contract](https://github.com/oceanprotocol/contracts/blob/main/contracts/templates/ERC20Template4.sol) is used on top of confidential EVM chains, which already encrypt the information on-chain, e.g. Sapphire Testnet. Otherwise, service files need to be encrypted by Ocean Node's dedicated handler. - -3. **DDO Validation** -Ocean.js requests Ocean Node to validate the DDO structure against the SHACL schemas, depending on the DDO version. For this task, Ocean Node uses util functions from the `DDO.js` library, which is our dedicated tool for DDO interactions. - -- ✅ _If Validation Succeeds_: -Ocean.js calls `setMetadata` on-chain and then returns the DID to the Consumer, which is passed back to the End User. In parallel, the DID gets indexed: Ocean Node's Indexer listens to blockchain events, including `MetadataCreated`, and the DDO is processed and stored within `Ocean Node's Database`. - -- ❌ _If Validation Fails_: -Ocean Node logs the issue and responds to Ocean.js with an error status, and asset creation halts here. - -## Hands-On Approach - -To publish new datasets through the consumer, Ocean CLI, please consult [this dedicated section](../developers/ocean-cli/publish.md). \ No newline at end of file diff --git a/developers/retrieve-datatoken-address.md b/developers/retrieve-datatoken-address.md deleted file mode 100644 index 855ed1749..000000000 --- a/developers/retrieve-datatoken-address.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -description: >- - Use these steps to reveal the information contained within an asset's DID and - list the buyers of a datatoken ---- - -# Retrieve datatoken/data NFT addresses & Chain ID - -### How to find the network, datatoken address, and data NFT address from an Ocean Market link? - -If you are given an Ocean Market link, then the network and datatoken address for the asset are visible on the Ocean Market webpage. For example, given this asset's Ocean Market link: [https://odc.oceanprotocol.com/asset/did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1](https://odc.oceanprotocol.com/asset/did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1) the webpage shows that this asset is hosted on the Mumbai network, and one simply clicks the datatoken's hyperlink to reveal the datatoken's address, as shown in the screenshot below: -

See the Network and Datatoken Address for an Ocean Market asset by visiting the asset's Ocean Market page.

- -#### More Detailed Info: - -You can also access all the information for the Ocean Market asset by **enabling Debug mode**. To do this, follow these steps: - -**Step 1** - Click the Settings button in the top right corner of the Ocean Market -

Click the Settings button

- -**Step 2** - Check the Activate Debug Mode box in the dropdown menu - -

Check 'Activate Debug Mode'

- -**Step 3** - Go to the page for the asset you would like to examine, and scroll through the DDO information to find the NFT address, datatoken address, chain ID, and other information. - -
- -### How to use Aquarius to find the chainID and datatoken address from a DID? - -If you know the DID:op but you don't know the source link, then you can use Ocean Aquarius to resolve the metadata for the DID:op and find the `chainId` + datatoken address of the asset. Simply enter "[https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/](https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1)" in your browser to fetch the metadata. - -For example, for the following DID:op: "did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1" the Ocean Aquarius URL can be modified to add the DID:op and resolve its metadata. Simply add "[https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/](https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1)" to the beginning of the DID:op and enter the link in your browser like this: [https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1](https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1)
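You can also resolve the DDO programmatically rather than in the browser. A minimal sketch along the same lines; the `chainId` and `services[0].datatokenAddress` fields follow the V4 DDO structure shown in the metadata printout below:

```javascript
const axios = require('axios')

// Resolve a DID's metadata via Aquarius, then read the chain ID and
// the datatoken address attached to the asset's first service.
const did = 'did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1'
const url = `https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/${did}`

axios.get(url)
  .then((response) => {
    const ddo = response.data
    console.log('chainId:', ddo.chainId)
    console.log('datatoken:', ddo.services[0].datatokenAddress)
  })
  .catch(console.error)
```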

The metadata printout for this DID:op with the network's Chain ID and datatoken address circled in red

- -Here are the networks and their corresponding chain IDs: - -```json -{ - "mumbai": 80001, - "polygon": 137, - "bsc": 56, - "energyweb": 246, - "moonriver": 1285, - "mainnet": 1, - "goerli": 5, - "polygonedge": 81001, - "gaiaxtestnet": 2021000, - "alfajores": 44787, - "gen-x-testnet": 100, - "filecointestnet": 3141, - "oasis_saphire_testnet": 23295, - "development": 8996 -} -``` - diff --git a/developers/uploader/README.md b/developers/uploader/README.md deleted file mode 100644 index ce89fe256..000000000 --- a/developers/uploader/README.md +++ /dev/null @@ -1,74 +0,0 @@ -# Uploader - -### What's Uploader? - -The Uploader represents a cutting-edge solution designed to streamline the upload process within a decentralized network. Built with efficiency and scalability in mind, Uploader leverages advanced technologies to provide secure, reliable, and cost-effective storage solutions to users. - -### Architecture Overview - -Uploader is built on a robust architecture that seamlessly integrates various components to ensure optimal performance. The architecture consists of: - -- Uploader API Layer: Exposes both public and private APIs for frontend and microservices interactions, respectively. -- 1-N Storage Microservices: Multiple microservices, each specializing in different storage types, responsible for handling storage operations. -- IPFS Integration: Temporary storage using the InterPlanetary File System (IPFS). - -### Streamlined File Uploads - -Uploader streamlines the file uploading process, providing users with a seamless experience to effortlessly incorporate their digital assets into a decentralized network. Whether you're uploading images, documents, or other media, Uploader enhances accessibility and ease of use, fostering a more decentralized and inclusive digital landscape. - -### Unique Identifiers - -Obtain unique identifiers such as hashes or CIDs for your uploaded files. These unique identifiers play a pivotal role in enabling efficient tracking and interaction with decentralized assets. By obtaining these identifiers, users gain a crucial toolset for managing, verifying, and engaging with their digital assets on the decentralized network, ensuring a robust and secure mechanism for overseeing the lifecycle of their contributed files. - -### Features - -Uploader offers a range of powerful features tailored to meet the needs of any decentralized storage: - -- User Content Uploads: Users can seamlessly upload their content through the user-friendly frontend interface. -- Payment Handling: Uploader integrates with payment systems to manage the financial aspects of storage services. -- Decentralized Storage: Content is pushed to decentralized storage networks like Filecoin and Arweave for enhanced security and redundancy. -- API Documentation: Comprehensive API documentation on each repo to allow users to understand and interact with the system effortlessly. -- Uploader.js: a TypeScript library designed to simplify interaction with the Uploader API. This library provides a user-friendly and intuitive interface for calling API endpoints within the Uploader Storage system.
- -### Components - -- [Uploader](https://github.com/oceanprotocol/decentralized_storage_backend) -- [Uploader.js](https://github.com/oceanprotocol/uploader.js) -- [Uploader UI](https://github.com/oceanprotocol/uploader-ui-lib) - -### Microservices - -- [Filecoin](https://github.com/oceanprotocol/uploader_filecoin) (WIP) -- [Arweave](https://github.com/oceanprotocol/uploader_arweave) - -### User Workflow - -Uploader simplifies the user workflow, allowing for easy management of storage operations: - -- Users fetch available storage types and payment options from the frontend. -- Users request quotes for storing files on the microservice network. -- Files are uploaded from the frontend to Uploader, which handles temporary storage via IPFS. -- The microservice takes over, ensuring data is stored on the selected network securely. -- Users can monitor upload status and retrieve links to access their stored content. - -#### File storage flow -

Ocean Uploader - storage flow 1

- -#### File retrieval flow - -

Ocean Uploader - file retrieval flow

- -### API Documentation - -Documentation is provided in the repos to facilitate seamless integration and interaction with the Uploader. The documentation outlines all API endpoints, payload formats, and example use cases, empowering developers to effectively harness the capabilities of the Uploader solution. - -### Troubleshooting - -Did you encounter a problem? Open an issue in Ocean Protocol's repos: - -- [Uploader](https://github.com/oceanprotocol/decentralized_storage_backend/issues) -- [Uploader.js](https://github.com/oceanprotocol/uploader.js/issues) -- [Filecoin Microservice](https://github.com/oceanprotocol/uploader_filecoin/issues) -- [Arweave Microservice](https://github.com/oceanprotocol/uploader_arweave/issues) -- [Uploader UI Library](https://github.com/oceanprotocol/uploader-ui-lib/issues) diff --git a/developers/uploader/uploader-js.md b/developers/uploader/uploader-js.md deleted file mode 100644 index d7b82b928..000000000 --- a/developers/uploader/uploader-js.md +++ /dev/null @@ -1,154 +0,0 @@ -# Uploader.js - -Uploader.js is a robust TypeScript library that serves as a vital bridge to interact with the Ocean Uploader API. It simplifies the process of managing file storage uploads, obtaining quotes, and more within the Ocean Protocol ecosystem. This library offers developers a straightforward and efficient way to access the full range of Uploader API endpoints, facilitating seamless integration of decentralized storage capabilities into their applications. - -Whether you're building a decentralized marketplace, a content management system, or any application that involves handling digital assets, Uploader.js provides a powerful toolset to streamline your development process and enhance your users' experience. - -### Browser Usage - -Ensure that the Signer object (signer in this case) you're passing to the function when you call it from the browser is properly initialized and is compatible with the browser. For instance, if you're using something like MetaMask as your Ethereum provider in the browser, you'd typically use ethers' Web3Provider to generate a signer. - -### How to Safely Store Your Precious Files with Ocean Uploader Magic 🌊✨ - -Excited to get your files safely stored? Let's breeze through the process using Ocean Uploader. First things first, install the package with npm or yarn: - -```bash -npm install @oceanprotocol/uploader -``` - -or - -```bash -yarn add @oceanprotocol/uploader -``` - -Got that done? Awesome!
Now, let's dive into a bit of TypeScript: - -```typescript -import { ethers } from 'ethers'; -import { - UploaderClient, - GetQuoteArgs, - GetQuoteResult -} from '@oceanprotocol/uploader'; -import dotenv from 'dotenv'; - -dotenv.config(); - -// Set up a new instance of the Uploader client -const signer = new ethers.Wallet(process.env.PRIVATE_KEY); -const client = new UploaderClient(process.env.UPLOADER_URL, process.env.UPLOADER_ACCOUNT, signer); - -async function uploadAsset() { - // Get storage info - const info = await client.getStorageInfo(); - - // Fetch a quote using the local file path - const quoteArgs: GetQuoteArgs = { - type: info[0].type, - duration: 4353545453, - payment: { - chainId: info[0].payment[0].chainId, - tokenAddress: info[0].payment[0].acceptedTokens[0].value - }, - userAddress: process.env.USER_ADDRESS, - filePath: ['/home/username/ocean/test1.txt'] // example file path - }; - const quoteResult: GetQuoteResult = await client.getQuote(quoteArgs); - - // Upload the file using the returned quote - await client.upload(quoteResult.quoteId, quoteArgs.filePath); - console.log('Files uploaded successfully.'); -} - -uploadAsset().catch(console.error); - -``` - -There you go! That's all it takes to upload your files using Uploader.js. Easy, right? Now go ahead and get those files stored securely. You got this! 🌟💾 - -For additional details, please visit the [Uploader.js](https://github.com/oceanprotocol/uploader.js) repository. - -### API - -The library offers developers a versatile array of methods designed for seamless interaction with the Ocean Uploader API. These methods collectively empower developers to utilize Ocean's decentralized infrastructure for their own projects: - -
- - constructor(baseURL: string) - - - ``` - Create a new instance of the UploaderClient. - ``` -
-
- - getStorageInfo() - - - ``` - Fetch information about supported storage types and payments. - ``` -
-
- - getQuote(args: GetQuoteArgs) - - - ``` - Fetch a quote for storing files on a specific storage. - ``` -
-
- - upload(quoteId: string, nonce: number, signature: string, files: File[]) - - - ``` - Upload files according to the quote request. - ``` -
-
- - getStatus(quoteId: string) - - - ``` - Fetch the status of an asset during upload. - ``` -
-
- - getLink(quoteId: string, nonce: number, signature: string) - - - ``` - Fetch hash reference for the asset. For example: CID for Filecoin, Transaction Hash for Arweave. - ``` -
-
- - registerMicroservice(args: RegisterArgs) - - - ``` - Register a new microservice that handles a storage type. - ``` -
-
- - getHistory(page: number = 1, pageSize: number = 25) - - - ``` - Retrieves the quote history for the given user address, nonce, and signature. - ``` -
-
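To tie a few of these methods together, here is a minimal sketch of checking an upload and retrieving its storage reference. It assumes `client` is the `UploaderClient` created earlier, and that `quoteId`, `nonce`, and `signature` come from your own quote/upload flow:

```javascript
// Minimal sketch: check an upload's status, then fetch its hash reference
// (e.g. a CID on Filecoin, a transaction hash on Arweave).
// Assumes `client`, `quoteId`, `nonce`, and `signature` come from the
// quote/upload flow shown earlier.
async function checkAndGetLink(client, quoteId, nonce, signature) {
  const status = await client.getStatus(quoteId)
  console.log('Upload status:', status)

  const link = await client.getLink(quoteId, nonce, signature)
  console.log('Stored at:', link)
}
```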
-Whether you're a developer looking to integrate Ocean Uploader into your application or a contributor interested in enhancing this TypeScript library, we welcome your involvement. By following the provided documentation, you can harness the capabilities of Uploader.js to make the most of decentralized file storage in your projects. - -Feel free to explore the API reference, contribute to the library's development, and become a part of the Ocean Protocol community's mission to democratize data access and storage. \ No newline at end of file diff --git a/developers/uploader/uploader-ui-marketplace.md b/developers/uploader/uploader-ui-marketplace.md deleted file mode 100644 index b67107c2a..000000000 --- a/developers/uploader/uploader-ui-marketplace.md +++ /dev/null @@ -1,27 +0,0 @@ ---- -description: With the Uploader UI, users can effortlessly upload their files and obtain a unique hash or CID (Content Identifier) for each uploaded asset to use on the Marketplace. ---- - -# Uploader UI Marketplace - -Step 1: Copy the hash or CID from your upload. - - - -Step 2: Open the Ocean Marketplace. Go to publish and fill in all the information for your dataset. - - - -Step 3: When selecting the file to publish, open the hosting provider (e.g., the "Arweave" tab). - - - -Step 4: Paste the hash you copied earlier. - - - -Step 5: Click on "VALIDATE" to ensure that your file gets validated correctly. - - - -This feature not only simplifies the process of storing and managing files but also seamlessly integrates with the Ocean Marketplace. Once your file is uploaded via Uploader UI, you can conveniently use the generated hash or CID to interact with your assets on the Ocean Marketplace, streamlining the process of sharing, validating, and trading your digital content. \ No newline at end of file diff --git a/developers/uploader/uploader-ui.md b/developers/uploader/uploader-ui.md deleted file mode 100644 index 797ed4c01..000000000 --- a/developers/uploader/uploader-ui.md +++ /dev/null @@ -1,178 +0,0 @@ -# Uploader UI - -The [Uploader UI](https://github.com/oceanprotocol/uploader-ui-lib) stands as a robust React UI library dedicated to optimizing the upload of, and interaction with, digital assets. - -Through an intuitive platform, the tool significantly simplifies the entire process, offering users a seamless experience for uploading files, acquiring unique identifiers such as hashes or CIDs, and effectively managing their decentralized assets. Developed using React, TypeScript, and CSS modules, the library seamlessly connects to Ocean remote components by default, ensuring a cohesive and efficient integration within the ecosystem. - -## 🚀 Usage - -Integrating [Uploader UI](https://github.com/oceanprotocol/uploader-ui-lib) into your application is straightforward. The package facilitates seamless uploads but requires a wallet connector library to function optimally. Compatible wallet connection choices include [ConnectKit](https://docs.family.co/), [Web3Modal](https://web3modal.com/), [Dynamic](https://dynamic.xyz/) and [RainbowKit](https://www.rainbowkit.com/docs/installation). - -**Step 1:** Install the necessary packages. For instance, if you're using ConnectKit, the installation command would be: - -```bash -npm install connectkit @oceanprotocol/uploader-ui-lib -``` - -**Step 2:** Incorporate the UploaderComponent from the uploader-ui-lib into your app. It's crucial to ensure the component is nested within both the WagmiConfig and ConnectKit providers.
Here's a basic implementation (the JSX below restores the provider nesting described above; the exact `UploaderComponent` props depend on your deployment): - -```javascript -import React from 'react' -import { WagmiConfig, createConfig } from 'wagmi' -import { polygon } from 'wagmi/chains' -import { - ConnectKitProvider, - getDefaultConfig, - ConnectKitButton -} from 'connectkit' -import UploaderComponent from 'uploader-ui-lib' - -export default function App () { - // Initialize the Wagmi client - const wagmiConfig = createConfig( - getDefaultConfig({ - appName: 'Ocean Uploader UI', - infuraId: 'Your infura ID', - chains: [polygon], - walletConnectProjectId: 'Your wallet connect project ID' - }) - ) - - return ( - <WagmiConfig config={wagmiConfig}> - <ConnectKitProvider> - <ConnectKitButton /> - {/* Your App */} - {/* UploaderComponent props depend on your deployment (see repo docs) */} - <UploaderComponent /> - </ConnectKitProvider> - </WagmiConfig> - ) -} - -``` - -By following the steps above, you can smoothly incorporate the Uploader UI into your application while ensuring the essential providers wrap the necessary components. - -Alternatively, the example below shows how you could use [uploader-ui-lib](https://github.com/oceanprotocol/uploader-ui-lib) with RainbowKit: - -```javascript -import React from 'react' -import { WagmiConfig, createConfig } from 'wagmi' -import { polygon } from 'wagmi/chains' -import { RainbowKitProvider, ConnectButton } from '@rainbow-me/rainbowkit'; -import { getDefaultConfig } from 'connectkit' // wagmi config helper, reused from the ConnectKit example; adapt to your setup -import UploaderComponent from 'uploader-ui-lib' - -export default function App () { - // Initialize the Wagmi client - const wagmiConfig = createConfig( - getDefaultConfig({ - appName: 'Ocean Uploader UI', - infuraId: 'Your infura ID', - chains: [polygon], - walletConnectProjectId: 'Your wallet connect project ID' - }) - ) - - return ( - <WagmiConfig config={wagmiConfig}> - <RainbowKitProvider> - <ConnectButton /> - {/* Your App */} - <UploaderComponent /> - </RainbowKitProvider> - </WagmiConfig> - ) -} - -``` - -\*\* under development - -## NextJS Setup for Ocean Protocol Uploader UI Library - -1. To use Ocean's Uploader UI library in your NextJS project, modify your `next.config.js` file to include these fallbacks: - -```javascript -module.exports = { - webpack: (config) => { - config.resolve.fallback = { - fs: false, - process: false, - net: false, - tls: false - } - return config - } -} -``` - -\*\* add these fallbacks to avoid any issue related to webpack 5 Polyfills incompatibility: https://github.com/webpack/changelog-v5#automatic-nodejs-polyfills-removed - -2. Install dependencies: - -```bash -npm install @oceanprotocol/uploader-ui-lib -``` - -3. Import the library's CSS into your project: - -```javascript -import '@oceanprotocol/uploader-ui-lib/dist/index.es.css'; -``` - -4. Dynamically import the Uploader component and ensure it is not processed during server-side rendering (SSR) using the next/dynamic function: - -```javascript -import dynamic from 'next/dynamic'; -... - -const Uploader = dynamic(() => import('@oceanprotocol/uploader-ui-lib').then((module) => module.Uploader), { ssr: false }); -``` - -5. Render the component (its props depend on your setup): - -```javascript -<Uploader /> -``` - -When incorporating the Uploader component into your application, make sure to set 'use client' on top in your app's component. This ensures that the component operates on the client side, bypassing SSR when rendering: - -```javascript -'use client' -import dynamic from 'next/dynamic' -``` - -This comprehensive setup ensures the proper integration and functioning of the Ocean Protocol's Uploader UI library within a NextJS application. - -For more details visit the [Uploader UI](https://github.com/oceanprotocol/uploader-ui) project.
\ No newline at end of file diff --git a/developers/vscode/README.md b/developers/vscode/README.md deleted file mode 100644 index 9a137b4c1..000000000 --- a/developers/vscode/README.md +++ /dev/null @@ -1,53 +0,0 @@ -# Ocean Protocol VSCode Extension - -Run compute jobs on Ocean Protocol directly from VS Code. The extension automatically detects your active algorithm file and streamlines job submission, monitoring, and results retrieval. Simply open a Python or JavaScript file and click **Start Compute Job**. -
Ocean Protocol VSCode Extension
Ocean Protocol VSCode Extension
- -## Getting Started - -Once installed, the extension adds an Ocean Protocol section to your VSCode workspace. Here you can configure your compute settings and run compute jobs using the currently active algorithm file. - -1. Install the extension from the VS Code Marketplace -2. Open the Ocean Protocol panel from the activity bar -3. Configure your compute settings: - - Node URL (pre-filled with default Ocean compute node) - - Optional private key for your wallet -4. Select your files: - - Algorithm file (JS or Python) - - Optional dataset file (JSON) - - Results folder location -5. Click **Start Compute Job** -6. Monitor the job status and logs in the output panel -7. Once completed, the results file will automatically open in VSCode - -### Requirements - -VS Code 1.96.0 or higher - -### Troubleshooting - -- Verify your RPC URL, Ocean Node URL, and Compute Environment URL if connections fail. -- Check the output channels for detailed logs. -- For further assistance, refer to the Ocean Protocol documentation or join the Discord community. - -### Optional Setup - -- Custom Compute Node: Enter your own node URL or use the default Ocean Protocol node -- Wallet Integration: Use auto-generated wallet or enter private key for your own wallet -- Custom Docker Images. If you need a custom environment with your own dependencies installed, you can use a custom docker image. Default is oceanprotocol/algo_dockers (Python) or node (JavaScript) -- Docker Tags: Specify version tags for your docker image (like python-branin or latest) -- Algorithm: The vscode extension automatically detects open JavaScript or Python files. Or alternatively you can specify the algorithm file manually here. -- Dataset: Optional JSON file for input data -- Results Folder: Where computation results will be saved - -
Ocean Protocol VSCode Extension Optional Setup
Optional Setup Configuration
- -## Contributing - -Your contributions are welcome! Please check our [GitHub repository](https://github.com/oceanprotocol/vscode-extension) for the contribution guidelines. - -## Resources - -- [Ocean Protocol Documentation](https://docs.oceanprotocol.com) -- [GitHub Repository](https://github.com/oceanprotocol) \ No newline at end of file diff --git a/discover/README.md b/discover/README.md index 8847c00e1..c57ae8667 100644 --- a/discover/README.md +++ b/discover/README.md @@ -1,36 +1,25 @@ --- -cover: ../.gitbook/assets/cover/discover_banner.png -coverY: 7.413145539906106 +cover: ../.gitbook/assets/Ocean Enterprise_Cover-Styles_Zeichenfläche 1 Kopie 20.jpg +coverY: 0 --- -# 🌊 Discover Ocean +# Introduction -Ocean's mission is to level the playing field for AI and data. +Growing demand for AI products and data sovereignty **is fuelling a proliferation of institutional AI and data ecosystems.** -How? **By helping you monetize AI models and data, while preserving privacy.** +Since the initial release of Ocean Protocol, there has been strong interest from the business and enterprise community in Ocean Protocol’s next generation data and AI ecosystem technology. -Ocean is a decentralized data exchange protocol to drive AI. Its core tech is: - -* Data NFTs & datatokens, to enable token-gated access control, data wallets, data DAOs, and more. -* Compute-to-data: buy & sell private data, while preserving privacy - -### Ocean Users Are... - -* [**Developers**](../developers/)**.** Build token-gated AI dApps & APIs -* [**Data scientists**](../data-scientists/)**.** Earn via predictions & challenges -* [**OCEAN holders**](../data-farming/)**.** Earn rewards by running prediction bots on DeFi crypto tokens to accurately predict their price directions in 5-minute and 1-hour timeframes. -* [**Ocean ambassadors**](https://oceanprotocol.com/explore/community) +Now, thanks to Ocean Enterprise, businesses can leverage a fully compliant, stable and secure version of Ocean Protocol that includes a wide range of specially developed, enterprise-ready features for immediate deployment at scale. ### Quick Links -* [Why Ocean?](why-ocean.md) and [What is Ocean?](what-is-ocean.md) -* [What can you do with Ocean?](benefits.md) -* [OCEAN: The Ocean token](ocean-token.md) -* [Networks](networks/), [Bridges](networks/bridges.md) -* [FAQ](faq.md), [Glossary](glossary.md) +* [What is Ocean Enterprise?](what-is-ocean.md) +* [What can you do with Ocean Enterprise?](benefits.md) +* [Licensing Information](licensing.md) +* [FAQ](/broken/pages/3Y6sOofwqndzkjpx5dym), [Glossary](/broken/pages/R2OLx4Fnk7DPAqs49qjw) *** -_Next:_ [_Why Ocean?_](why-ocean.md) +_Next:_ _Why Ocean?_ _Back:_ [_Docs main_](../) diff --git a/discover/benefits.md b/discover/benefits.md index ef5051019..17809bfb6 100644 --- a/discover/benefits.md +++ b/discover/benefits.md @@ -1,21 +1,13 @@ -# What can you do with Ocean? +# What can you do with Ocean Enterprise? -This page shows things you can do with Ocean... +Ocean Enterprise provides enterprises with a comprehensive platform to build data marketplaces, enable secure data sharing, and create new revenue streams from their data assets while maintaining compliance and security standards. -* As a builder -* As a data scientist -* As an OCEAN holder -* Become an Ocean ambassador +**Core Capabilities:** -Let's explore each... - -## What builders can do -
- -
- -
+* **Data Monetization & Sharing**: Create secure, controlled environments for businesses to share and monetize their data assets +* **Compute-to-Data (C2D)**: Enable data processing without exposing the underlying data, maintaining privacy and security +* **Advanced Pricing Models**: Implement sophisticated pricing mechanisms for data assets and services +* **IP Licensing**: Advanced intellectual property licensing capabilities for data and algorithms
@@ -35,82 +27,8 @@ Focus on the backend: make a Web3-native REST API. Like the token-gated dApps, c
-
- -Build Your Data Market - -Build a decentralized data marketplace by [forking Ocean Market code](../developers/build-a-marketplace/) to quickly get something good, or by building up from Ocean components for a more custom look. - -
- -To dive deeper, please go to the [Developers page](../developers/). - -## What data scientists can do - -<div>
- -
- -
- -
- -Use Ocean in Python - -The [**ocean.py**](../data-scientists/ocean.py/) library is built for the key environment of data scientists: Python. Use it to earn $ from your data, share your data, get more data from others, and see provenance of data usage. - -
- -
- -Do crypto price predictions - -With [Ocean Predictoor](../predictoor/), you submit predictions for the future price of BTC, ETH, etc., and earn. The more accurate your predictions, the more $ you can earn. - -</div>
- -
- -Compete in a Data Challenge - -Ocean regularly offers [data science challenges](../data-scientists/join-a-data-challenge.md) on real-world problems. Showcase your skills, and earn $ prizes. - -</div>
- -To dive deeper, please go to the [Data Scientists page](../data-scientists/). - -## What OCEAN holders can do - -<div>
- -Earn Rewards via Data Farming - -Ocean's [Data Farming](../data-farming/) incentives program rewards participants with OCEAN for making accurate predictions of the price directions of DeFi crypto tokens. Most of the activity happens on [Predictoor.ai](https://www.predictoor.ai/). Explore more [here](https://docs.oceanprotocol.com/data-farming/predictoordf). - -</div>
- -## Become an Ocean Ambassador - -
- -Become an Ambassador - -As an ambassador, you are an advocate for the protocol, promoting its vision and mission. By sharing your knowledge and enthusiasm, you can educate others about the benefits of Ocean Protocol, inspiring them to join the ecosystem. As part of a global community of like-minded individuals, you gain access to exclusive resources, networking opportunities, and collaborations that further enhance your expertise in the data economy. Of course, the Ocean Protocol Ambassador Program rewards contributors with weekly bounties and discretionary grants for growing the Ocean Protocol community worldwide. - -To become a member of the Ambassador Program, follow these steps: - -1. Join Ocean Protocol's [Discord](https://discord.com/invite/TnXjkR5) server. -2. Join the Discord channel called #treasure-hunter. -3. Access the application form: "[Apply](https://discord.com/channels/612953348487905282/1133478278531911790) to use this channel." -4. Answer the questions in the application form. -5. Once you've completed the application process, you can start earning experience points (XP) by actively engaging in discussions on various topics related to the Ocean Protocol. - -</div>
- *** -_Next:_ [_OCEAN: The Ocean token_](ocean-token.md) +_Next:_ [Ocean Enterprise Collective e.V](ocean-enterprise-collective-e.v..md) _Back:_ [_What is Ocean?_](what-is-ocean.md) diff --git a/discover/faq.md b/discover/faq.md deleted file mode 100644 index dbad2e43c..000000000 --- a/discover/faq.md +++ /dev/null @@ -1,338 +0,0 @@ ---- -title: FAQs -description: ---- - -# FAQ - -Have some questions about Ocean Protocol? - -Hopefully, you'll find the answers here! If not then please don't hesitate to reach out to us on [discord](https://discord.gg/TnXjkR5) - there are no stupid questions! - -## General - -
-How is Ocean Protocol related to AI? - -Modern Artificial Intelligence (AI) models require vast amounts of training data. - -In fact, _every stage_ in the AI modeling life cycle is about data: raw training data -> cleaned data -> feature vectors -> trained models -> model predictions. - -Ocean's all about managing data: getting it, sharing it, selling it, and making $ from it -- all with Web3 benefits like decentralized control, data provenance, privacy, sovereign control, and more. - -Thus, Ocean helps manage data all along the AI model life cycle: -- Ocean helps with raw training data -- Ocean helps with cleaned data & feature vectors -- Ocean helps with trained models as data -- Ocean helps with model predictions as data - -A great example is [Ocean Predictoor](../predictoor/), where users make $ from their model predictions in a decentralized, private fashion. - -</div>
- -
-How is Ocean Protocol aiming to start a new Data Economy? - -Ocean Protocol's mission is to develop tools and services that facilitate the emergence of a new Data Economy. This new economy aims to empower data owners with control, maintain privacy, and catalyze the commercialization of data, including the establishment of data marketplaces. - -To understand more about Ocean's vision, check out this [blog post](https://blog.oceanprotocol.com/mission-values-for-ocean-protocol-aba998e95b8). -
- -
-How does Ocean Protocol generate revenue? - -The protocol generates revenue through transaction fees. These fees serve multiple purposes: they fund the ongoing development of Ocean technology and support the buy-and-burn process of OCEAN. - -To get a glimpse of the revenue generated on the Polygon network, which is the most frequently used network, you can find detailed information [here](https://polygonscan.com/address/0x042BFbd88c3998282153088604207b2AeF045b43#tokentxns). - -To monitor burned tokens, visit [etherscan](https://etherscan.io/token/0x967da4048cd07ab37855c090aaf366e4ce1b9f48?a=0x000000000000000000000000000000000000dead). As of September 2023, approximately 1.4 million tokens have been burned. 🔥📈 -</div>
- -
-How decentralized is Ocean? - -To be fully decentralized means no single point of control, at any level of the stack. - -- OCEAN is already fully decentralized. -- The Ocean core tech stack is already fully decentralized too: smart contracts on permissionless chains, and anyone can run support middleware. -- Predictoor is fully decentralized. -- Data Farming has some centralized components; we aim to decentralize those in the next 12-24 months. - -</div>
- - -## About OCEAN - -
-What is the ASI token and what is its major use case? -In late March, Ocean Protocol, SingularityNET & Fetch.ai joined forces to form the Superintelligence Alliance and announced a token merger, combining OCEAN, FET & AGIX into a single ASI token. The ASI token will fund the Superintelligence Alliance's mission to build decentralized Artificial Superintelligence (ASI) for the benefit of humanity. We're focused on developing decentralized AI tools for today's business and retail applications, while also securing decentralized compute power for the future of AI. - -</div>
- -
-How is OCEAN used? How does it capture value? - -The OCEAN token's major usage is currently in Predictoor DF, i.e. rewarding predictoors who submit predictions on the price directions of DeFi token price feeds. To know more about this, navigate [here](https://docs.oceanprotocol.com/data-farming). - -</div>
- -
-What is the total supply of OCEAN? - -1.41 billion OCEAN. -</div>
- -
-Can OCEAN supply become deflationary? - -A portion of the revenue earned in the Ocean ecosystem is earmarked for buy-and-burn. If the transaction volume on Ocean reaches scale and is broadly adopted to the point where the buy-burn mechanism outruns the emissions of OCEAN, the supply would deflate. -
- -
-Does OCEAN also have governance functionality? - -During the OceanDAO grants program (2021-2022), OCEAN was used for community voting and governance. Currently, there are no governance functions associated with the token. -
- -
- Which blockchain network currently has the highest liquidity for OCEAN? - -Ethereum mainnet. -
- -
-Can the Ocean tech stack be used without OCEAN? - -All Ocean modules and components are open-source and freely available to the community. Developers can change the default currency from OCEAN to a different one for their dApp. - -
- -
-How does the ecosystem and the token benefit from the usage of the open-source tech stack when transactions can be paid in any currency? - -For each consume transaction, the Ocean community gets a small fee. This happens whether OCEAN is used or not. [Here are details](../developers/contracts/fees.md). -
- -## Ocean Nodes - -
-What are Ocean Nodes? - - -Ocean Nodes is a decentralized solution that simplifies running and monetizing AI models by allowing users to manage data, computational resources, and AI models through Ocean Protocol's infrastructure. It enables easier, more secure data sharing and decentralized AI model development. Learn more [here](https://docs.oceanprotocol.com/developers/ocean-node). - -</div>
- -
-What are the minimum requirements to run a node? Can it be run on a phone or other small devices? -We recommend the following minimum system requirements for running one Ocean node, though these may vary depending on your configuration: -- 1 vCPU -- 2 GB RAM for basic operations -- 4 GB storage -- Operating System: We recommend using the latest LTS version of Ubuntu or the latest macOS. However, nodes should also work on other operating systems, including Windows. - -While it is technically feasible to run a node on smaller devices, such as phones, the limited processing power and memory of these devices can lead to significant performance issues, making them unreliable for stable node operation. -</div>
- -
-Can I run a node using Windows or macOS, and are there any recommended guides for those operating systems? -Yes, you can run an Ocean node on both Windows and macOS. - -For Windows, it's recommended to use WSL2 (Windows Subsystem for Linux) to create a Linux environment, as it works better with Docker. Once WSL2 is set up, you can follow the Linux installation guides. Here’s a [helpful link](https://techcommunity.microsoft.com/t5/windows-11/how-to-install-the-linux-windows-subsystem-in-windows-11/m-p/2701207) to get started with WSL2. - -For macOS, you can install Docker directly and run the Docker image. It’s also recommended to use Homebrew to install necessary dependencies like Node.js. - -For a detailed setup guide, refer to the [OceanNode GitHub Repository](https://github.com/oceanprotocol/ocean-node). - -</div>
- -
-Is there a maximum number of nodes allowed, and are there rules against running multiple nodes on the same IP? -There’s no limit to the number of nodes you can run; however, there are a few guidelines to keep in mind. You can run multiple nodes on the same IP address, as long as each node uses a different port. - -</div>
- -
-How long does it take for a new node to appear in the dashboard? -The time it takes for a new node to appear on the dashboard depends on the system load. Typically, nodes become visible within a few hours, though this can vary based on network conditions. - -
-
-How can I verify that my node is running successfully? -To verify your node is running properly, follow these steps: - -1) Check the Local Dashboard: Go to http://your_ip:8000/dashboard to view the status of your node, including connected peers and the indexer status. - -2) Verify on the Ocean Node Dashboard: After a few hours, visit the [Ocean Node Dashboard](https://nodes.oceanprotocol.com/) and search for your Node ID, Wallet, or IP to confirm your node is correctly configured and visible on the network. - -
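If you prefer to script the first check, the small probe below polls the local dashboard endpoint from step 1. It assumes nothing beyond the answer above (plain HTTP on port 8000); replace `your_ip` with your node's address.

```python
import urllib.request

# Probe the local Ocean node dashboard described above.
URL = "http://your_ip:8000/dashboard"  # substitute your node's IP address

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        print("Dashboard reachable, HTTP status:", resp.status)
except OSError as err:  # URLError subclasses OSError
    print("Dashboard unreachable:", err)
```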
-
-Are there penalties if my node goes offline? -If your node goes offline, it won't be treated as a new node when you restart it - the timer will pick up from where it left off. However, frequent disconnections can impact your eligibility and uptime metrics, which are important for earning rewards. To qualify for rewards, your node must maintain at least 90% uptime. For example, in a week (10,080 minutes), your node needs to be active for at least 9,072 minutes. If your node is down for more than 16 hours and 48 minutes in a week, it will not be eligible for rewards. - -
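The 90% rule translates directly into minutes; this short calculation reproduces the numbers from the answer above.

```python
# Weekly uptime eligibility, per the 90% rule above.
WEEK_MINUTES = 7 * 24 * 60               # 10,080 minutes in a week
REQUIRED = int(WEEK_MINUTES * 0.90)      # 9,072 minutes of uptime needed
MAX_DOWNTIME = WEEK_MINUTES - REQUIRED   # 1,008 minutes = 16 h 48 min

def eligible(active_minutes: int) -> bool:
    """True if the node met the weekly uptime threshold."""
    return active_minutes >= REQUIRED

print(REQUIRED, MAX_DOWNTIME, eligible(9100))  # 9072 1008 True
```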
-
-How many nodes can a user run using a single wallet or on a single server? -Each node needs its own wallet - one node per wallet. You can use an Admin wallet to manage multiple nodes, but it’s not recommended to use the same private key for multiple nodes. Since the node ID is derived from the private key, using the same key for different nodes may cause issues. - -You can run as many nodes on a server as its resources allow. -</div>
- -
- Why does my node show “Reward Eligibility: false” and “No peer data” even though it is connected? - -Your node may show "Reward Eligibility: false" and "No peer data" even when connected, and this may be for a few reasons: - -1) Random Round Checks: The node status may change due to random round checks. If your node is unreachable during one of these checks, it could trigger these messages. - -2) Configuration Issues: Misconfigurations, like an incorrect P2P_ANNOUNCE_ADDRESS, can impact communication. Ensure your settings are correct. - -3) Port Accessibility: Make sure the required ports are open and accessible for your node to operate properly. -
- -
-How do I backup or migrate my node to a new server without losing uptime? - -To back up or migrate your node without losing uptime, follow these steps: - -1) Run a Parallel Node: Start a new node on the new VPS while keeping the old one active. This ensures uninterrupted uptime during migration. - -2) Use the Same Private Key: Configure the new node with the same private key as the old one. This will retain the same node ID and ensure continuity in uptime and rewards eligibility. - -3) Update Configuration: Update the new node's configuration, including the announce_address in the Docker YAML file, to reflect the new IP address. - -4) Verify on the Dashboard: Check the [Ocean Node Dashboard](https://nodes.oceanprotocol.com/) to confirm that the new node is recognized and that the IP address has been correctly updated. - -
- -
-How do I resolve the "No peer data" issue that affects node eligibility? -It's normal for a node's status to change automatically from time to time due to random round checks conducted on each node. If a node is unreachable during a check, the system will display the reason on the dashboard. - -To resolve the "No peer data" issue, consider the following steps: - -1) Restart Your Node: This simple action has been helpful for some users facing similar issues. - -2) Check Configuration: -a) Ensure that your P2P_ANNOUNCE_ADDRESS is configured correctly. -b) Verify that the necessary ports are open. - -3) Local Dashboard Access: Confirm that you can access your node from the local dashboard by visiting http://your_ip:8000/dashboard. -
- -
-Do I need to open all ports to the outside world (e.g., 9000-9003, 8000)? -It's not necessary to open all ports; typically, opening port 8000 is sufficient for most operations. However, if you are running services that require additional ports - such as ports 9000-9003 for P2P connections - you may need to open those based on your specific setup and requirements. -
- -
-How is the node's reward calculated, and will my income depend on the server's capacity? -The rewards for Ocean nodes are mainly determined by your node's uptime. Nodes that maintain an uptime of 90% or higher qualify for rewards from a substantial reward pool of 250,000 ROSE per epoch. Your income is not affected by the server's capacity; it relies solely on the reliability and uptime of your node. -
- -
-What are the rewards for running a node, and how is the distribution handled? -Rewards for running a node are 360,000 ROSE per epoch and are automatically sent to your wallet if you meet all the requirements. These rewards are distributed in ROSE tokens within the Oasis Sapphire network. -
- -
-Does my node's hardware setup (CPU, RAM, storage) impact the rewards I receive? - -Your node's hardware setup - CPU, RAM, and storage - does not directly influence your rewards. The primary factor for receiving rewards is your node's uptime. As long as your node meets the minimum system requirements and maintains high availability (at least 90% uptime), you remain eligible for rewards. Rewards are based on uptime rather than hardware specifications. -</div>
- - -## Grants, challenges, and ecosystem - -
-Is Acentrik from Mercedes Benz built on top of Ocean? - -Third-party markets such as Gaia-X, BDP, and Acentrik use Ocean components to power their marketplaces. They will likely use another currency for the exchange of services. If these marketplaces are publicly accessible, indexable, and abide by the fee structure set out by Ocean Protocol, transaction fees would be remitted back to the Ocean community. These transaction fees would be allocated according to the plan set out [here](https://blog.oceanprotocol.com/ocean-token-model-3e4e7af210f9). - -</div>
- -
-What is Ocean Shipyard? - -Ocean Shipyard is an early-stage grant program established to fund the next generation of Web3 dApps built on Ocean Protocol. It is made for entrepreneurs looking to build Web3 solutions on Ocean, make valuable data available, build innovations, and create value for the Ocean ecosystem. - -The [Shipyard page](https://oceanprotocol.com/shipyard) has details. -
- -
-Where can we see previous data challenges and submitted solutions? - -You can find a list of past data challenges on the [website](https://oceanprotocol.com/challenges). -
- -
-What are the steps needed to encourage people to use the Ocean ecosystem? - -There is a wide range of technical, business, and cultural barriers to overcome before volume sales can scale. Blockchain and crypto technology are relatively new and adopted by a niche group of enthusiasts. On top of that, the concept of a Data Economy is still nascent. Data buyers are generally restricted to data scientists, researchers, or large corporations, while data providers are mainly corporations and government entities. The commercialization of data is still novel and the processes are being developed and refined. -</div>
- - -## Data security - - -
-Is my data secure? - -Yes. Ocean Protocol understands that some data is too sensitive to be shared — potentially due to GDPR or other reasons. For these types of datasets, we offer a unique service called [compute-to-data](../developers/compute-to-data/README.md). This enables you to monetize the dataset that sits behind a firewall without ever revealing the raw data to the consumer. For example, researchers and data scientists pay to run their algorithms on the data set, and the computation is performed behind a firewall; all the researchers or data scientists receive is the results generated by their algorithm. -
- - -
-How does Ocean Protocol enforce penalties if data is shared without permission? - -Determining whether someone has downloaded your data and is reselling it is quite challenging. While they are bound by a contract not to do so, it's practically impossible to monitor their actions. If you want to maintain the privacy of your dataset, you can explore the option of using compute-to-data (C2D). Via C2D, your data remains private and people can only run algorithms (that you approve of) to extract intelligence. - -This issue is similar to what any digital distribution platform faces. For instance, can Netflix prevent individuals from downloading and redistributing their content? Not entirely. They invest significant resources in security, but ultimately, complete prevention is extremely difficult. They mainly focus on making it more challenging for such activities to occur. -</div>
- - -## Data marketplaces & Ocean Market - -
-What is a decentralized data marketplace? - -A data marketplace allows providers to publish data and buyers to consume data. - -Unlike centralized data marketplaces, decentralized ones give users more control over their data and algorithms by minimizing custodianship and providing transparent and immutable records of every transaction. - -Ocean Market is a reference decentralized data marketplace powered by the Ocean stack. - -Ocean Compute-to-Data (C2D) enables data and algorithms to be ingested into secure Docker containers where escapes are avoided, protecting both the data and algorithms. C2D can be used from Ocean Market. -</div>
- -
-Is there a website or platform that tracks the consume volume of Ocean Market? - - -Yes. See [autobotocean.com](https://autobotocean.com/). -
- -
-Since Ocean Market is open source, what are the future plans for the project in terms of its economic direction? - -Ocean Market is a showcase for the practical application of Ocean, showing others what a decentralized data marketplace looks like. - -Fees generated from Ocean Market go to the Ocean community. The earlier Q&A on revenue has details. -</div>
- -## Contacting Ocean core team - -
-Who is the right person to talk to regarding a marketing proposal or collaboration? - -For collaborations, please fill in this [form](https://docs.google.com/forms/d/e/1FAIpQLSdBz7cblsz5yuOKMVoPVfK0Pp1Xuqjwner1kCkRibIIbYMe-w/viewform). -One member of our team will reach out to you 🤝 -
- - ----- - -_Next: [Glossary](glossary.md)_ - -_Back: [Bridges](networks/bridges.md)_ - diff --git a/discover/glossary.md b/discover/glossary.md deleted file mode 100644 index c0e917a97..000000000 --- a/discover/glossary.md +++ /dev/null @@ -1,434 +0,0 @@ ---- -description: >- - Key terms, concepts, and acronyms used in Ocean ---- - -# Glossary - -## Ocean Protocol Concepts - -
- -Ocean Protocol -Ocean Protocol is a decentralized data exchange protocol that enables individuals and organizations to share, sell, and consume data in a secure, transparent, and privacy-preserving manner. The protocol is designed to address the current challenges in data sharing, such as data silos, lack of interoperability, and data privacy concerns. Ocean Protocol uses blockchain technology, smart contracts, and cryptographic techniques to create a network where data providers can offer their data assets for sale, data consumers can purchase and access the data, and developers can build data-driven applications and services on top of the protocol. - -
- -
- -OCEAN - -The Ocean Protocol's token (OCEAN) is a utility token used in the Ocean Protocol ecosystem. It serves as a medium of exchange and a unit of value for data services in the network. Participants in the Ocean ecosystem can use OCEAN to buy and sell data, stake on data assets, and participate in the governance of the protocol. - -
- -
- -Data Consume Volume (DCV) - -The data consume volume (DCV) is a key metric that refers to the amount of $ spent over a time period to buy data assets that are subsequently consumed. - -</div>
- -
- -Transaction Volume (TV) - -The transaction volume is a key metric that refers to the number of blockchain transactions done over a time period. - -</div>
- -
- -Ocean Data Challenges - -[Ocean Data Challenges](https://oceanprotocol.com/challenges) is a program organized by Ocean Protocol that seeks to expedite the shift into a New Data Economy by incentivizing data-driven insights and the building of algorithms geared toward solving complex business challenges. The challenges aim to encourage the Ocean community and other data enthusiasts to collaborate and leverage the capabilities of the Ocean Protocol to produce data-driven insights and design algorithms that are specifically tailored to solving intricate business problems. - -Ocean Data Challenges typically involve a specific data problem or use case, for which participants are asked to develop a solution. The challenges are open to many participants, including data scientists, developers, researchers, and entrepreneurs. Participants are given access to relevant data sets, tools, and resources and invited to submit their solutions. - -</div>
- -
- -Ocean Market - -The [Ocean Market](http://market.oceanprotocol.com) is a decentralized data marketplace built on top of the Ocean Protocol. It is a platform where data providers can list their data assets for sale, and data consumers can browse and purchase data that meets their specific needs. The Ocean Market supports a wide range of data types, including but not limited to, text, images, videos, and sensor data. - -While the Ocean Market is a vital part of the Ocean Protocol ecosystem and is anticipated to facilitate the unlocking of data value and stimulate data-driven innovation, it is important to note that it is primarily a **technology demonstrator**. As a decentralized data marketplace built on top of the Ocean Protocol, the Ocean Market **showcases** the capabilities and features of the protocol, including secure and transparent data exchange, flexible access control, and token-based incentivization. It serves as a testbed for the development and refinement of the protocol's components and provides a sandbox environment for experimentation and innovation. As such, the Ocean Market is a powerful tool for demonstrating the potential of the Ocean Protocol and inspiring the creation of new data-driven applications and services. - -
- -
- -Ocean Shipyard - -[Ocean Shipyard](https://oceanprotocol.com/shipyard) is an early-stage grant program established to fund the next generation of Web3 dApps built on Ocean Protocol. It is made for entrepreneurs looking to build open-source Web3 solutions on Ocean, make valuable data available, build innovations, and create value for the Ocean ecosystem. - -In Shipyard, the Ocean core team curates project proposals that are set up to deliver according to clear delivery milestone timelines and bring particular strategic value for the future development of Ocean. - -
- -
- -veOCEAN - -_ve_ tokens have been introduced by several projects such as [Curve](https://curve.fi/) and [Balancer](https://balancer.fi/). These tokens require users to lock _project tokens_ in return for _ve_ tokens. - -[veOCEAN](https://df.oceandao.org/veocean) gives token holders the ability to lock OCEAN to earn yield and curate data. - -In exchange for locking tokens, users can earn rewards. The amount of reward depends on how long the tokens are locked. Furthermore, veTokens can be used for asset curation. - -</div>
- -
- -Ocean Data Farming (DF) - -[Ocean Data Farming (DF)](https://df.oceandao.org/) incentivizes the growth of Data Consume Volume (DCV) in the Ocean ecosystem. [DF](../data-farming/README.md) is like DeFi liquidity mining, but tuned for DCV. DF emits OCEAN for passive rewards and active rewards. - -* As a veOCEAN holder, you get _passive_ rewards by default. -* If you _actively_ curate data by allocating veOCEAN towards data assets with high Data Consume Volume (DCV), then you can earn more. - -</div>
- -
- -Passive Rewards - -When a user locks their OCEAN for a finite period of time, they get veOCEAN in return. Based on the quantity of veOCEAN, the user accumulates weekly OCEAN rewards. Because rewards are generated without human intervention, these are called [Passive Rewards](../data-farming/README.md). OCEAN Data Farming Passive Rewards are claimable every Thursday on the [Rewards page](https://df.oceandao.org/rewards). - -
- -
- -Volume DF - -When a user allocates veOCEAN to Ocean Market projects, then weekly OCEAN rewards are given to a user based on the sales of those projects. Since these rewards depend on human intervention to decide the allocations, these are categorized as [Volume DF](../data-farming/README.md) rewards. OCEAN Data Farming Volume DF rewards are claimable every Thursday on the [Rewards page](https://df.oceandao.org/rewards). - -
- -## Intellectual Property (IP) Concepts - -
- -Base IP - -**Base IP** means the artifact being copyrighted. Represented by the {ERC721 address, tokenId} from the publish transactions. - -
- -
- -Base IP holder - -**Base IP holder** means the holder of the Base IP. Represented as the actor that did the initial "publish" action. - -
- -
- -Sub-licensee - -**Sub-licensee** is the holder of the sub-license. Represented as the entity that controls address ERC721.\_owners\[tokenId=x]. - -
- -
- -To Publish - -Claim copyright or exclusive base license. - -
- -
- -To Sub-license - -Transfer one (of many) sub-licenses to new licensee: ERC20.transfer(to=licensee, value=1.0). - -
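To make the sub-license transfer concrete, here is a hedged web3.py sketch that moves 1.0 token (18 decimals) to a licensee. The RPC URL and addresses are placeholders, the minimal ABI covers only `transfer`, and the call assumes an unlocked, node-managed account; method names follow web3.py v6.

```python
from web3 import Web3

# Placeholder connection details; substitute real values.
RPC_URL = "https://example-rpc.invalid"
TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000"
LICENSEE = "0x0000000000000000000000000000000000000001"

# Minimal ERC-20 ABI fragment: just the transfer() function.
ERC20_ABI = [{
    "name": "transfer",
    "type": "function",
    "stateMutability": "nonpayable",
    "inputs": [
        {"name": "to", "type": "address"},
        {"name": "value", "type": "uint256"},
    ],
    "outputs": [{"name": "", "type": "bool"}],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
token = w3.eth.contract(address=TOKEN_ADDRESS, abi=ERC20_ABI)

# ERC20.transfer(to=licensee, value=1.0): on-chain, the value is denominated
# in the token's smallest unit, so 1.0 tokens with 18 decimals is 10**18.
tx_hash = token.functions.transfer(LICENSEE, 10**18).transact(
    {"from": w3.eth.accounts[0]}  # assumes an unlocked, node-managed account
)
print("sub-license transferred, tx:", tx_hash.hex())
```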
- - -## Web3 Fundamentals - -
- -Web3 - -Web3 (also known as Web 3.0 or the decentralized web) is a term used to describe the next evolution of the internet, where decentralized technologies are used to enable greater privacy, security, and user control over data and digital assets. - -While the current version of the web (Web 2.0) is characterized by centralized platforms and services that collect and control user data, Web3 aims to create a more decentralized and democratized web by leveraging technologies such as blockchain, peer-to-peer networking, and decentralized file storage. - -Ocean Protocol is designed to be a Web3-compatible platform that allows users to create and operate decentralized data marketplaces. This means that data providers and consumers can transact directly with each other, without the need for intermediaries or centralized authorities. - -
- -
- -Blockchain - -A distributed ledger technology (DLT) that enables secure, transparent, and decentralized transactions. Blockchains use cryptography to maintain the integrity and security of the data they store. - -By using blockchain technology, Ocean Protocol provides a transparent and secure way to share and monetize data, while also protecting the privacy and ownership rights of data providers. Additionally, blockchain technology enables the creation of immutable and auditable records of data transactions, which can be used for compliance, auditing, and other purposes. - -
- -
- -Decentralization - -Decentralization is the distribution of power, authority, or control away from a central authority or organization, towards a network of distributed nodes or participants. Decentralized systems are often characterized by their ability to operate without a central point of control, and their ability to resist censorship and manipulation. - -In the context of Ocean Protocol, decentralization refers to the use of blockchain technology to create a decentralized data exchange protocol. Ocean Protocol leverages decentralization to enable the sharing and monetization of data while preserving privacy and data ownership. - -
- -
- -Block Explorer - -A tool that allows users to view information about transactions, blocks, and addresses on a blockchain network. Block explorers provide a [graphical interface](https://etherscan.io/token/0x967da4048cD07aB37855c090aAF366e4ce1b9F48) for interacting with a blockchain, and they allow users to search for specific transactions, view the details of individual blocks, and track the movement of cryptocurrency between addresses. Block explorers are commonly used by cryptocurrency enthusiasts, developers, and businesses to monitor network activity and verify transactions. - -
- -
- -Cryptocurrency - -A digital or virtual currency that uses cryptography for security and operates independently of a central bank. Cryptocurrencies use blockchain or other distributed ledger technologies to maintain their transaction history and prevent fraud. - -Ocean Protocol uses a cryptocurrency called Ocean (OCEAN) as its native token. OCEAN is used as a means of payment for data transactions on the ecosystem, and it is also used to incentivize network participants, such as data providers, validators, and curators. - -Like other cryptocurrencies, OCEAN operates on a blockchain, which ensures that transactions are secure, transparent, and immutable. The use of a cryptocurrency like OCEAN provides a number of benefits for the Ocean Protocol network, including faster transaction times, lower transaction fees, and greater transparency and trust. - -
- -
- -Decentralized applications (dApps) - -dApps (short for decentralized applications) are software applications that run on decentralized peer-to-peer networks, such as blockchain. Unlike traditional software applications that rely on a centralized server or infrastructure, dApps are designed to be decentralized, open-source, and community-driven. - -dApps in the Ocean ecosystem are designed to enable secure and transparent data transactions between data providers and consumers, without the need for intermediaries or centralized authorities. These applications can take many forms, including data marketplaces, data analysis tools, data-sharing platforms, and many more. A good example of a dApp is the [Ocean Market](https://market.oceanprotocol.com/). - -
- -
- -Interoperability - -The ability of different blockchain networks to communicate and interact with each other. Interoperability is important for creating a seamless user experience and enabling the transfer of value across different blockchain ecosystems. - -In the context of Ocean Protocol, interoperability enables the integration of the protocol with other blockchain networks and decentralized applications (dApps). This enables data providers and users to access and share data across different networks and applications, creating a more open and connected ecosystem for data exchange. - -
- -
- -Smart contract - -Smart contracts are self-executing digital contracts that allow for the automation and verification of transactions without the need for a third party. They are programmed using code and operate on a decentralized blockchain network. Smart contracts are designed to enforce the rules and regulations of a contract, ensuring that all parties involved fulfill their obligations. Once the conditions of the contract are met, the smart contract automatically executes the transaction, ensuring that the terms of the contract are enforced in a transparent and secure manner. - -Ocean ecosystem smart contracts are deployed on multiple blockchains like Polygon, Energy Web Chain, BNB Smart Chain, and others. The code is open source and available on the organization's [GitHub](https://github.com/oceanprotocol/contracts). - -
- -
- -Ethereum Virtual Machine (EVM) - -The Ethereum Virtual Machine (EVM) is a runtime environment that executes smart contracts on the Ethereum blockchain. It is a virtual machine that runs on top of the Ethereum network, allowing developers to create and deploy decentralized applications (dApps) on the network. The EVM provides a platform for developers to create smart contracts in various programming languages, including Solidity, Vyper, and others. - -The Ocean Protocol ecosystem is a decentralized data marketplace built on the Ethereum blockchain. It is designed to provide a secure and transparent platform for sharing and selling data. - -
- -
- -ERC - -ERC stands for Ethereum Request for Comments and refers to a series of technical standards for Ethereum-based tokens and smart contracts. ERC standards are created and proposed by developers to the Ethereum community for discussion, review, and implementation. These standards ensure that smart contracts and tokens are compatible with other applications and platforms built on the Ethereum blockchain. - -In the context of Ocean Protocol, several ERC standards are used to create and manage tokens on the network. Standards like [ERC-20](https://ethereum.org/en/developers/docs/standards/tokens/erc-20/), [ERC-721](https://eips.ethereum.org/EIPS/eip-721) and [ERC-1155](https://eips.ethereum.org/EIPS/eip-1155). - -
- -
- -ERC-20 - -[ERC-20](https://ethereum.org/en/developers/docs/standards/tokens/erc-20/) is a technical standard used for smart contracts on the Ethereum blockchain that defines a set of rules and requirements for creating tokens that are compatible with the Ethereum ecosystem. ERC-20 tokens are fungible, meaning they are interchangeable with other ERC-20 tokens and have a variety of use cases such as creating digital assets, utility tokens, or fundraising tokens for initial coin offerings (ICOs). - -The ERC-20 standard is used for creating fungible tokens on the Ocean Protocol network. Fungible tokens are identical and interchangeable with each other, allowing them to be used interchangeably on the network. - -
- -
- -ERC-721 - -[ERC-721](https://eips.ethereum.org/EIPS/eip-721) is a technical standard used for smart contracts on the Ethereum blockchain that defines a set of rules and requirements for creating non-fungible tokens (NFTs). ERC-721 tokens are unique and cannot be exchanged for other tokens or assets on a one-to-one basis, making them ideal for creating digital assets such as collectibles, game items, and unique digital art. - -The ERC-721 standard is used for creating non-fungible tokens (NFTs) on the Ocean Protocol network. NFTs are unique and non-interchangeable tokens that can represent a wide range of assets, such as digital art, collectibles, and more. - -
- -
- -ERC-1155 - -[ERC-1155](https://eips.ethereum.org/EIPS/eip-1155) is a technical standard for creating smart contracts on the Ethereum blockchain that allows for the creation of both fungible and non-fungible tokens within the same contract. This makes it a "multi-token" standard that provides more flexibility than the earlier ERC-20 and ERC-721 standards, which only allow for the creation of either fungible or non-fungible tokens, respectively. - -The ERC-1155 standard is used for creating multi-token contracts on the Ocean Protocol network. Multi-token contracts allow for the creation of both fungible and non-fungible tokens within the same contract, providing greater flexibility for developers. - -
- -
- -Consensus Mechanism - -A consensus mechanism is a method used in blockchain networks to ensure that all participants in the network agree on the state of the ledger or the validity of transactions. Consensus mechanisms are designed to prevent fraud, double-spending, and other types of malicious activity on the network. - -In the context of Ocean Protocol, the consensus mechanism used is Proof of Stake (PoS). - -
- -
- -Proof of Stake (PoS) - -A consensus mechanism used in blockchain networks that requires validators to hold a certain amount of cryptocurrency as a stake in order to participate in the consensus process. PoS is an alternative to proof of work (PoW) and is designed to be more energy efficient. - -</div>
- -
- -Proof of Work (PoW) - -A consensus mechanism used in blockchain networks that requires validators to solve complex mathematical puzzles in order to participate in the consensus process. PoW is the original consensus mechanism used in the Bitcoin blockchain and is known for its high energy consumption. - -</div>
- -
- -BUIDL - -A term used in the cryptocurrency and blockchain space to encourage developers and entrepreneurs to build new products and services. The term is a deliberate misspelling of the word "build" and emphasizes the importance of taking action and creating value in the ecosystem. - -
- -## Decentralized Finance (DeFi) fundamentals - -<div>
- -DeFi - -A financial system that operates on a decentralized, blockchain-based platform, rather than relying on traditional financial intermediaries such as banks, brokerages, or exchanges. In a DeFi system, financial transactions are executed using smart contracts, which are self-executing computer programs that automatically enforce the terms of an agreement between parties. - -
- -
- -Decentralized exchange (DEX) - -A Decentralized exchange (DEX) is an exchange that operates on a decentralized platform, allowing users to trade cryptocurrencies directly with one another without the need for a central authority or intermediary. DEXs typically use smart contracts to facilitate trades and rely on a network of nodes to process transactions and maintain the integrity of the exchange. - -
- -
- -Staking - -The act of holding a cryptocurrency in a wallet or on a platform to support the network and earn rewards. Staking is typically used in proof-of-stake (PoS) blockchain networks as a way to secure the network and maintain consensus. - -
- -
- -Lending - -The act of providing cryptocurrency to a borrower in exchange for interest payments. Lending platforms match borrowers with lenders and use smart contracts to facilitate loan agreements. - -
- -
- -Borrowing - -The act of borrowing cryptocurrency from a lender and agreeing to repay the loan with interest. Borrowing platforms match borrowers with lenders and use smart contracts to facilitate loan agreements. - -
- -
- -Farming - -A strategy in which investors provide liquidity to a DeFi protocol in exchange for rewards in the form of additional cryptocurrency or governance tokens. Farming typically involves providing liquidity to a liquidity pool and earning a share of the trading fees generated by the pool. Yield farming is a type of farming strategy. - -
- -
- -Annual Percentage Yield (APY) - -Represents the total amount of interest earned on a deposit or investment account over one year, including the effect of compounding. - -</div>
- -
- -Annual Percentage Rate (APR) - -Represents the annual cost of borrowing money, including the interest rate and any fees or charges associated with the loan, expressed as a percentage. - -
- -
- -Liquidity pools (LP) - -Liquidity Pools (LPs) are pools of tokens that are locked in a smart contract on a decentralized exchange (DEX) in order to facilitate the trading of those tokens. LPs provide liquidity to the DEX and allow traders to exchange tokens without needing a counterparty, while LP providers earn a share of the trading fees in exchange for providing liquidity. - -</div>
- -
- -Yield Farming - -A strategy in which investors provide liquidity to a DeFi protocol in exchange for rewards in the form of additional cryptocurrency or governance tokens. Yield farming is designed to incentivize users to contribute to the growth and adoption of a DeFi protocol. - -
- -## Data Science Terminology - -
- -AI - -AI stands for Artificial Intelligence. It refers to the development of computer systems that can perform tasks that would typically require human intelligence to complete. AI technologies enable computers to learn, reason, and adapt in a way that resembles human cognition. - -
- -
- -Machine learning - -Machine learning is a subfield of artificial intelligence (AI) that involves teaching computers to learn from data, without being explicitly programmed. In other words, it is a way for machines to automatically learn and improve from experience, without being explicitly told what to do in every situation. - -
- - ---- - - -Congrats! You've completed this quick introduction to Ocean. - -_Next: Jump to [Docs main](../README.md) and click on your interest._ - -_Back: [FAQ](faq.md)_ - - - diff --git a/discover/imprint.md b/discover/imprint.md new file mode 100644 index 000000000..257f4349f --- /dev/null +++ b/discover/imprint.md @@ -0,0 +1,20 @@ +# Imprint + +Thanks for your interest in the OEC. + +**Ocean Enterprise Collective e.V.**\ +Carmerstrasse 18\ +10623 Berlin, Germany + +**E-Mail:** [info@oceanenterprise.io](mailto:info@oceanenterprise.io) + +**Members of the Board:** Mihai Badea, Alexander Eger, Sheridan Johns + +**Association register:** Vereinsregister, Amtsgericht Charlottenburg (Berlin), VR 41774 B + +**Accountable pursuant to § 18 MStV:**\ +Sheridan Johns\ +Carmerstrasse 18\ +10623 Berlin, Germany + +The European Commission provides a platform for online dispute resolution, which you can find here: [https://ec.europa.eu/consumers/odr/](https://ec.europa.eu/consumers/odr/). We are not obliged or willing to participate in a dispute resolution procedure before a consumer arbitration board. diff --git a/discover/licensing.md b/discover/licensing.md new file mode 100644 index 000000000..dad70a266 --- /dev/null +++ b/discover/licensing.md @@ -0,0 +1,336 @@ +# Licensing + +## PREAMBLE + +A. This repository is dual-licensed software, available under commercial and open-source license terms. + +B. These open-source license terms (“OS License Terms”) solely apply to the use of the Software in conjunction with smart contracts provided by OEC e.V. The latest version of the addresses of the smart contracts provided by OEC e.V. can be found on the website of OEC e.V. under \[[https://docs.oceanenterprise.io/developers/networks](../developers/networks.md)]. + +C. The commercial license terms (“Commercial License Terms”) apply to the use of the Software in conjunction with smart contracts provided by parties other than OEC e.V. + +## OPEN-SOURCE LICENSE (GPLv3) + +1. OWNERSHIP AND DELIVERY OF THE SOFTWARE + +1.1 OEC e.V. is the sole and exclusive owner of all rights of use in the Software and any associated documentation and manuals. + +1.2 The use of the Software shall be subject to these OS License Terms and to the standard open-source license terms referred to herein. + +1.3 The Software is made available in source code and object code form, along with any associated documentation and manuals. + +2. LICENSE 2.1 The Software is provided free of charge, under the GNU General Public License 3 (available here and attached as Schedule 1) (“GNU General Public License 3 Terms”), with the rights and obligations set forth therein. + +2.2 The use of the Software under the OS License Terms is subject to the following conditions: 2.2.1 All access to and use, including, but not limited to, propagation and conveying, of the Software shall be in accordance with these OS License Terms and the GNU General Public License 3 Terms. + +2.2.2 Unless otherwise provided herein, OEC e.V.’s role hereunder is that of “Licensor” under the GNU General Public License 3 Terms, and the rights and obligations stipulated for the “Licensor” under the GNU General Public License 3 Terms shall apply to OEC e.V. mutatis mutandis. + +2.2.3 The right to use the Software is limited to the use in conjunction with the use of smart contracts provided by OEC e.V. The latest version of the addresses of the smart contracts provided by OEC e.V. can be found on the website of OEC e.V.
under \[https://docs.oceanenterprise.io/developers/networks]. + +2.2.4 Any use of the Software other than in conjunction with the use of smart contracts provided by OEC e.V. shall require a commercial license. OEC e.V. makes the Software available for such use under the Commercial License Terms. + +2.2.5 The right to use the Software under these OS License Terms is subject to the condition subsequent of any use of the Software inconsistent with these OS License Terms. + +2.3 The user shall be authorized to modify the Software or portions of the Software and to copy and distribute the results of such modification under these OS License Terms and the GNU General Public License 3 Terms, provided that the user causes any subject matter containing the Software or portions of the Software, whether modified or not, to be licensed at no charge to any third party under these OS License Terms and the GNU General Public License 3 Terms. + +2.4 The use and compatibility of the Software with other Open Source Licenses is permitted in accordance with the GNU General Public License 3 Terms. The use of the Software is especially compatible with the Apache 2.0 terms (available here). In all cases, any resulting work is licensed under the GNU General Public License 3 Terms. + +2.5 The user is not allowed to grant sublicenses. Any recipient of the Software automatically receives a license from OEC e.V. to use, modify and propagate the Software, subject to these OS License Terms and the GNU General Public License 3 Terms. + +2.6 Any use of the Software not expressly permitted herein shall require OEC e.V.’s prior consent. The user shall be liable for any unauthorized use of the Software without limitation. + +2.7 Clause 2.6 shall not apply to the use of the Software for internal evaluation and testing in test environments (e.g. Ethereum Sepolia, Optimism Sepolia, etc.). + +3. LIMITATION OF LIABILITY 3.1 OEC e.V. shall be liable in accordance with applicable laws in the following cases: 3.1.1 injury to life, body or health of a person; + +3.1.2 where damages, losses, costs or expenses are caused by intent or gross negligence; and + +3.1.3 where liability cannot be limited under applicable law, such as the Product Liability Act in German law. + +3.2 OEC e.V. shall not be liable in cases where damages, losses, costs or expenses result from slight negligence, except for breaches of essential contractual obligations, i.e., such obligations the violation of which endangers the purpose of these OS License Terms and on the fulfillment of which the user relies and may rely to a particular extent (cardinal obligations or Kardinalpflichten); such liability shall be limited to an amount reasonably foreseeable for the kind of relationship contemplated in these OS License Terms. + +3.3 In case of a loss of data, OEC e.V.’s liability hereunder shall be limited to the amount of typical recovery costs which would have arisen if proper and regular data backup measures had been carried out by the user. 3.4 Any other liability of OEC e.V. shall be excluded. + +4. AUDIT OEC e.V. shall be authorized to appoint a qualified third party to conduct audits of the use of the Software at the user’s premises or otherwise, to verify compliance with these OS License Terms. Such audit may occur once every calendar year, with reasonable prior notice, and outside this interval if there is reasonable suspicion of the user’s non-compliance with these OS License Terms.
The user shall provide to the auditor all physical and other access and any cooperation reasonably required by the auditor to carry out the audit. +5. EXPORT CONTROL The user shall comply with all applicable export control laws, rules and regulations, as amended from time to time, and shall indemnify and hold OEC e.V. harmless from any liability arising out of use of the Software in violation of these laws, rules or regulations. +6. MISCELLANEOUS 6.1 These OS License Terms shall be governed by and construed in accordance with the laws of the Federal Republic of Germany, excluding the provisions of the United Nations Convention on Contracts for the International Sale of Goods dated 11.4.1980 (CISG). + +6.2 For all disputes arising out of or in connection with these OS License Terms, the courts of \[…] shall have exclusive jurisdiction. + +6.3 Should any provision of these OS License Terms be or become invalid, this shall not affect the validity of the remaining provisions. + +SCHEDULE 1 + +GNU General Public License 3 Version 3, 29 June 2007 Copyright © 2007 Free Software Foundation, Inc. [https://fsf.org/](https://fsf.org/) Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. + +Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so.
This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. + +TERMS AND CONDITIONS 0. Definitions. “This License” refers to version 3 of the GNU General Public License. + +“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. + +“The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations. + +To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work. + +A “covered work” means either the unmodified Program or a work based on the Program. + +To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. + +To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. + +An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. + +1. Source Code. + +The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work. A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. 
+ +The “System Libraries” of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. + +The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. + +The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. + +The Corresponding Source for a work in source code form is that same work. + +2. Basic Permissions. + +All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. + +You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. + +Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. + +3. Protecting Users' Legal Rights From Anti-Circumvention Law. + +No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. 
+ +When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. + +4. Conveying Verbatim Copies. + +You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. + +5. Conveying Modified Source Versions. + +You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: + +a) The work must carry prominent notices stating that you modified it, and giving a relevant date. + +b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”. + +c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. + +d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. + +A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. + +6. Conveying Non-Source Forms. + +You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: + +a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. 
+ +b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. + +c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. + +d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. + +e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. + +A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. + +A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. + +“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. 
+ +If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). + +The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. + +Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. + +7. Additional Terms. + +“Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. + +When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
+ +Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: + +a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or + +b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or + +c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or + +d) Limiting the use for publicity purposes of names of licensors or authors of the material; or + +e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or + +f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. + +All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. + +If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. + +Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. + +8. Termination. + +You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). + +However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. + +Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. + +Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. 
If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. + +9. Acceptance Not Required for Having Copies. + +You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. + +10. Automatic Licensing of Downstream Recipients. + +Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. + +An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. + +You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. + +11. Patents. + +A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's “contributor version”. + +A contributor's “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. + +Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. + +In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. 
+ +If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. + +If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. + +A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. + +Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. + +12. No Surrender of Others' Freedom. + +If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. + +13. Use with the GNU Affero General Public License. + +Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. 
The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. + +14. Revised Versions of this License. + +The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. + +Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. + +If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. + +Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. + +15. Disclaimer of Warranty. + +THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + +16. Limitation of Liability. + +IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. + +17. Interpretation of Sections 15 and 16. + +If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. + +END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs + +If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. 
It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found. \<one line to give the program's name and a brief idea of what it does.\> Copyright (C) \<year\> \<name of author\> + +``` +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program. If not, see <https://www.gnu.org/licenses/>. +``` + +Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: \<program\> Copyright (C) \<year\> \<name of author\> This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an “about box”. You should also get your employer (if you work as a programmer) or school, if any, to sign a “copyright disclaimer” for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see [https://www.gnu.org/licenses/](https://www.gnu.org/licenses/). The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read [https://www.gnu.org/licenses/why-not-lgpl.html](https://www.gnu.org/licenses/why-not-lgpl.html). + +## COMMERCIAL LICENSE + +1. OWNERSHIP AND DELIVERY OF THE SOFTWARE 1.1 OEC e.V. is the sole and exclusive owner of all rights of use in the Software and any associated documentation and manuals. + +1.2 The use of the Software shall be subject to these Commercial License Terms. + +1.3 The Software is made available in machine-readable form, along with any associated instructions and manuals. + +2. LICENSE + +2.1 The use of the Software under these Commercial License Terms is subject to the following conditions: + +2.1.1 Any access to and use of the Software shall be in accordance with these Commercial License Terms. + +2.1.2 The right of use of the Software hereunder shall be non-exclusive, non-transferable, non-sublicensable, perpetual, worldwide and subject to the timely payment of the License Fee. 2.1.3 Any reproduction and decompilation or reverse engineering (in each case except as permitted by law), distribution, translation, creation of derivative works of the Software and other modifications of the Software shall be strictly prohibited. + +2.2 Any use of the Software not expressly permitted herein shall require OEC e.V.’s prior consent. The user shall be liable for any unauthorized use of the Software without limitation.
+ +2.3 Clause 2.2 shall not apply to the use of the Software for internal evaluation and testing in test environments (i.e. Ethereum Sepolia, Polygon Amoy etc.) for up to \[90] days. + +3. LICENSE FEES 3.1 The license fee shall be payable as an annual one-time fee (“License Fee”) of 40.000 EUR. + +3.2 The License Fee referred to in clause 3.1 is exclusive of any applicable VAT or other taxes. + +3.3 The License Fee shall be paid without any setoff or withholding for any reason (except for mandatory withholding tax under applicable laws). + +3.4 All taxes shall be paid by the user. + +4. WARRANTY AND INDEMNIFICATION + +4.1 OEC e.V. warrants that it is entitled to grant the rights of use of the Software as stipulated in these Commercial License Terms. + +4.2 The user’s rights in case of defects of the Software shall become statute-barred twelve (12) months after delivery of the Software and any associated documentation and manuals. The same period shall apply to any updates, upgrades and new versions of the Software which OEC e.V. may make available from time to time. + +4.3 If any third party claims that the use of the Software in accordance with these Commercial License Terms infringes its rights, OEC e.V. will defend such a claim and indemnify the user from any adverse final judgment and any settlement to which OEC e.V. consents, provided that the user promptly notifies OEC e.V. of the third-party claim or threat, supports OEC e.V.’s defense of the claim as reasonably requested by OEC e.V., and the alleged infringement is not caused by any unauthorized modification of the Software. + +4.4 OEC e.V. may, at its sole discretion, obtain from the third party the rights necessary to stop the alleged infringement or modify the Software in such manner that the infringement no longer occurs, provided that such modification does not substantially impair the Software’s functionality. + +4.5 Any claims for damages in connection with the use of the Software are subject to the limitations set forth under clause 5. + +5. LIMITATION OF LIABILITY + +5.1 OEC e.V. shall be liable in accordance with applicable laws in the following cases: 5.1.1 injury to life, body or health of a person; + +5.1.2 where damages, losses, costs or expenses are caused by intent or gross negligence; and + +5.1.3 where liability cannot be limited under applicable law, such as the Product Liability Act in German law. + +5.2 OEC e.V. shall not be liable in cases where damages, losses, costs or expenses result from slight negligence except for breaches of essential contractual obligations, i.e., such obligations the violation of which endangers the purpose of these Commercial License Terms and on the fulfillment of which the user relies and may rely to a particular extent (cardinal obligations or Kardinalpflichten); such liability shall be limited to an amount reasonably foreseeable for the kind of relationship contemplated in these Commercial License Terms. 5.3 In case of a loss of data, OEC e.V.’s liability hereunder shall be limited to the amount of typical recovery costs which would have arisen if proper and regular data backup measures had been carried out by the user. 5.4 Any other liability of OEC e.V. shall be excluded. + +6. AUDIT + +OEC e.V. shall be authorized to appoint a qualified third party to conduct audits of the use of the Software at the user’s premises or otherwise, to verify compliance with these Commercial License Terms.
Such audit may occur once every calendar year, with reasonable prior notice, and outside this interval if there is reasonable suspicion of the user’s non-compliance with these Commercial License Terms. The user shall provide to the auditor all physical and other access and any cooperation reasonably required by the auditor to carry out the audit. + +7. EXPORT CONTROL + +The user shall comply with all applicable export control laws, rules and regulations, as amended from time to time, and shall indemnify and hold OEC e.V. harmless from any liability arising out of use of the Software in violation of these laws, rules or regulations. + +8. MISCELLANEOUS + +8.1 These Commercial License Terms shall be governed by and construed in accordance with the laws of the Federal Republic of Germany, excluding the provisions of the United Nations Convention on Contracts for the International Sale of Goods dated 11.4.1980 (CISG). + +8.2 For all disputes arising out of or in connection with these Commercial License Terms the courts of \[…] shall have exclusive jurisdiction. + +8.3 Should any provision of these Commercial License Terms be or become invalid, this shall not affect the validity of the remaining provisions.

diff --git a/discover/networks/README.md b/discover/networks/README.md deleted file mode 100644 index 759317edf..000000000 --- a/discover/networks/README.md +++ /dev/null @@ -1,204 +0,0 @@

---
title:
description: All the public networks the Ocean Protocol contracts are deployed to.
---

# Networks

Ocean Protocol's smart contracts and [OCEAN](../ocean-token.md) are deployed on multiple public networks: several production chains, and several testnets too.

The file [`address.json`](https://github.com/oceanprotocol/contracts/blob/v4main/addresses/address.json) holds up-to-date deployment addresses for all Ocean contracts.

On tokens:

- You need the network's native token to pay for gas to make transactions: ETH for Ethereum mainnet, MATIC for Polygon, etc. You typically get these from exchanges.
- You may get OCEAN from an exchange, and bridge it as needed.
- For testnets, you'll need "fake" native tokens to pay for gas, and "fake" OCEAN. Typically, you get these from faucets.
- Below, we give token-related instructions for each network.

## Networks Summary

Here are the networks that Ocean is deployed to.

**Production Networks:**

- Ethereum mainnet
- Polygon mainnet
- Oasis Sapphire mainnet
- BNB Smart Chain
- Energy Web Chain
- Optimism (OP) Mainnet
- Moonriver

**Test Networks:**

- Görli
- Sepolia
- Oasis Sapphire testnet
- Optimism (OP) Sepolia

The rest of this doc gives details for each network. You can skip it until you need the reference information.

## Production Networks

### Ethereum Mainnet
| | |
| --- | --- |
| Native token | ETH |
| OCEAN address | 0x967da4048cD07aB37855c090aAF366e4ce1b9F48 |
| Explorer | https://etherscan.io |
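Rather than copying addresses by hand, you can also pull the OCEAN address for any network straight out of `address.json` (linked above). A minimal sketch, assuming `curl` and `jq` are installed; the JSON keys (`mainnet`, `Ocean`) reflect the file's current layout and are worth double-checking if the query returns `null`:

```bash
# Fetch the Ocean contract address book and extract the OCEAN token
# address for Ethereum mainnet.
curl -s https://raw.githubusercontent.com/oceanprotocol/contracts/v4main/addresses/address.json \
  | jq -r '.mainnet.Ocean'
# Expected output: 0x967da4048cD07aB37855c090aAF366e4ce1b9F48
```

The same query works for the other networks on this page by swapping the top-level key.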
**Wallet.** To connect to Ethereum mainnet with e.g. MetaMask, click on the network name dropdown and select "Ethereum mainnet" from the list.

### Polygon Mainnet
| | |
| --- | --- |
| Native token | MATIC |
| OCEAN address | 0x282d8efCe846A88B159800bd4130ad77443Fa1A1 |
| Explorer | https://polygonscan.com |
**Wallet.** If you can't find Polygon Mainnet as a predefined network, follow [Polygon's guide](https://wiki.polygon.technology/docs/develop/metamask/config-polygon-on-metamask/#add-the-polygon-network-manually).

**Bridge.** Follow the [Polygon Bridge guide](bridges.md) in our docs.

### Oasis Sapphire Mainnet

[Ocean Predictoor](../../predictoor/README.md) is deployed on Oasis Sapphire mainnet for its ability to keep EVM transactions private. This deployment does not currently support ocean.js, ocean.py, or Ocean Market.
| | |
| --- | --- |
| Native token | ROSE |
| OCEAN address | 0x39d22B78A7651A76Ffbde2aaAB5FD92666Aca520 |
| Explorer | https://explorer.oasis.io/mainnet/sapphire |
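Before adding the network by hand (the RPC parameters are in the Wallet note below), it can be worth confirming that the endpoint really is Sapphire. A quick sketch using plain JSON-RPC, assuming `curl` is available; `eth_chainId` should return `0x5afe`, which is 23294 in decimal:

```bash
# Ask the Sapphire RPC endpoint for its chain ID.
curl -s -X POST https://sapphire.oasis.io \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}'
# Expected: {"jsonrpc":"2.0","id":1,"result":"0x5afe"}
```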
**Wallet.** If you cannot find Oasis Sapphire Mainnet as a predefined network, you can manually connect by entering the following during import: Network Name: `Oasis Sapphire`, RPC URL: `https://sapphire.oasis.io`, Chain ID: `23294`, Token: `ROSE`. For further info, see [Oasis tokens docs](https://docs.oasis.io/general/manage-tokens/).

**Bridge.** Use [Celer](https://cbridge.celer.network/1/23294/OCEAN) to bridge OCEAN from Ethereum mainnet to Oasis Sapphire mainnet.

### BNB Smart Chain
| | |
| --- | --- |
| Native token | BSC BNB |
| OCEAN address | 0xdce07662ca8ebc241316a15b611c89711414dd1a |
| Explorer | https://bscscan.com/ |
This is one of the [Binance](https://binance.com)-spawned chains. BNB is the token of Binance.

**Wallet.** If BNB Smart Chain is not listed as a predefined network in your wallet, see [Binance's Guide](https://academy.binance.com/en/articles/connecting-metamask-to-binance-smart-chain) to manually connect.

**Bridge.** Our [BNB Smart Chain Bridge Guide](bridges.md#bnb-smart-chain-bridge) describes how to get OCEAN to BNB Smart Chain.

### Energy Web Chain (EWC)
| | |
| --- | --- |
| Native token | Energy Web Chain EWT |
| OCEAN address | 0x593122aae80a6fc3183b2ac0c4ab3336debee528 |
| Explorer | https://explorer.energyweb.org/ |
This is the chain for [Energy Web Foundation](https://www.energyweb.org/).

**Wallet.** If you cannot find Energy Web Chain as a predefined network in your wallet, you can manually connect to it by following this [guide](https://energy-web-foundation.gitbook.io/energy-web/how-tos-and-tutorials/connect-to-energy-web-chain-main-network-with-metamash).

**Bridge.** To bridge assets between Energy Web Chain and Ethereum mainnet, you can use [Omni bridge by Carbonswap](https://bridge.carbonswap.exchange/).

### Optimism (OP) Mainnet
| | |
| --- | --- |
| Native token | ETH |
| OCEAN address | 0x2561aa2bB1d2Eb6629EDd7b0938d7679B8b49f9E |
| Explorer | https://optimistic.etherscan.io |
**Wallet.** If you cannot find Optimism as a predefined network in your wallet, you can manually connect to it with [this OP guide](https://community.optimism.io/docs/useful-tools/networks/#op-mainnet).

**Bridge.** Follow the [OP Bridge guide](https://docs.optimism.io/builders/dapp-developers/bridging/standard-bridge).

### Moonriver
| | |
| --- | --- |
| Native token | Moonriver MOVR |
| OCEAN address | 0x99C409E5f62E4bd2AC142f17caFb6810B8F0BAAE |
| Explorer | https://blockscout.moonriver.moonbeam.network |
[Moonriver](https://moonbeam.network/networks/moonriver/) is an EVM-based parachain of Kusama.

**Wallet.** If Moonriver is not listed as a predefined network in your wallet, you can manually connect to it by following [Moonriver's guide](https://docs.moonbeam.network/builders/get-started/networks/moonriver/#connect-metamask).

**Bridge.** To bridge assets between Moonriver and Ethereum mainnet, you can use [Celer](https://cbridge.celer.network/bridge/moonriver-ethereum/).

## Test Networks

Unlike production networks, tokens on test networks do not hold real economic value.

### Sepolia
| | |
| --- | --- |
| Native token | Sepolia (fake) ETH |
| Native token faucet | Here |
| OCEAN address | 0x1B083D8584dd3e6Ff37d04a6e7e82b5F622f3985 |
| OCEAN faucet | Here |
| Explorer | https://sepolia.etherscan.io |
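Once the faucets have paid out, you can confirm the (fake) OCEAN arrived without opening a block explorer: `balanceOf(address)` on any ERC-20 is just an `eth_call` with the well-known selector `0x70a08231`. A hedged sketch; `0xYourAddressHere` is a placeholder, and the public RPC endpoint shown is an assumption (any Sepolia RPC works):

```bash
# Query the Sepolia OCEAN contract for an account's balance.
ADDR=0xYourAddressHere   # placeholder: your wallet address, lowercase hex
OCEAN=0x1B083D8584dd3e6Ff37d04a6e7e82b5F622f3985
# calldata = selector + the address left-padded to 32 bytes
DATA=0x70a08231000000000000000000000000${ADDR#0x}
curl -s -X POST https://rpc.sepolia.org \
  -H 'Content-Type: application/json' \
  -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"eth_call\",\"params\":[{\"to\":\"$OCEAN\",\"data\":\"$DATA\"},\"latest\"]}"
# "result" is the balance as a 32-byte hex number (18 decimals)
```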
**Wallet.** To connect with e.g. MetaMask, select "Sepolia" from the network dropdown list (enable "Show test networks").

### Oasis Sapphire Testnet

[Ocean Predictoor](../../predictoor/README.md) is deployed on Oasis Sapphire testnet. This deployment does not currently support ocean.js, ocean.py, or Ocean Market.
| | |
| --- | --- |
| Native token | (fake) ROSE |
| Native token faucet | Here |
| OCEAN address | 0x973e69303259B0c2543a38665122b773D28405fB |
| OCEAN faucet | Here |
| Explorer | https://explorer.oasis.io/testnet/sapphire |
**Wallet.** If you cannot find Oasis Sapphire Testnet as a predefined network, you can manually connect to it by entering the following during import: Network Name: `Oasis Sapphire Testnet`, RPC URL: `https://testnet.sapphire.oasis.dev`, Chain ID: `23295`, Token: `ROSE`. For further info, see [Oasis tokens docs](https://docs.oasis.io/general/manage-tokens/).

### Optimism (OP) Sepolia
| | |
| --- | --- |
| Native token | Sepolia (fake) ETH |
| Native token faucet | Here |
| OCEAN address | 0xf26c6C93f9f1d725e149d95f8E7B2334a406aD10 |
| OCEAN faucet | Here |
| Explorer | https://sepolia-optimism.etherscan.io |
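The same JSON-RPC trick works for checking that the faucet's (fake) gas ETH landed; `eth_getBalance` needs no contract call at all. A sketch; the endpoint is an assumption, so substitute any OP Sepolia RPC you trust:

```bash
# Check an account's native-token balance on OP Sepolia.
curl -s -X POST https://sepolia.optimism.io \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_getBalance","params":["0xYourAddressHere","latest"]}'
# "result" is the balance in wei, hex-encoded
```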
**Wallet.** If OP Sepolia is not listed as a predefined network, follow [OP's Guide](https://community.optimism.io/docs/useful-tools/networks/#op-sepolia).

----

_Next: [Bridges](bridges.md)_

_Back: [OCEAN: the Ocean token](../ocean-token.md)_

diff --git a/discover/networks/bridges.md b/discover/networks/bridges.md deleted file mode 100644 index e83c6a438..000000000 --- a/discover/networks/bridges.md +++ /dev/null @@ -1,26 +0,0 @@

---
title: Bridges
description: Token migration between two blockchain networks.
---

# Bridges

On [March 26th](https://blog.oceanprotocol.com/fetch-ai-ocean-protocol-and-singularitynet-unite-to-create-artificial-superintelligence-alliance-0768d608ecfa), Ocean Protocol, SingularityNET, and Fetch.ai joined forces to form the Superintelligence Alliance and announced a strategic merger of their tokens—OCEAN, FET, and AGIX—into a single unified token called “ASI.”

If you'd like to migrate your OCEAN tokens to FET, please follow the instructions below according to the network where you currently hold your tokens:

1) Ethereum (ERC-20): For OCEAN tokens on the Ethereum network, you can participate in the Phase 1 migration to FET by visiting [here](https://singularitydao.ai/migrate-asi).
2) Polygon: For OCEAN tokens on the Polygon network, first swap them to the Polygon (POL) token, then send it to an exchange that has listed FET and do the rest of the conversion there. You might come across the name "Matic" in some places instead of "Polygon" because the network is still using its old brand name in certain instances. Don't worry though, it's the same network whether you see Matic or Polygon.
3) Binance Smart Chain (BEP-20): If you hold OCEAN tokens on the Binance Smart Chain network, transfer them to Binance on the BEP-20 network, where you can convert them to FET.

For other bridges and networks, see the [Networks page](README.md).

_Next: [FAQ](../faq.md)_

_Back: [Networks](README.md)_

diff --git a/discover/ocean-enterprise-collective-e.v..md b/discover/ocean-enterprise-collective-e.v..md new file mode 100644 index 000000000..26d303396 --- /dev/null +++ b/discover/ocean-enterprise-collective-e.v..md @@ -0,0 +1,11 @@

# Ocean Enterprise Collective e.V.

### Ocean Enterprise Collective e.V.

Ocean Enterprise is designed, developed, maintained, and governed by the Ocean Enterprise Collective e.V. (OEC): a non-profit association registered in Germany that was founded by companies representing a wide range of countries and industries including agriculture, energy, health, human resources, manufacturing and public sector.

Whether startup, enterprise, or government entity, we welcome you to become a member of the OEC and join a pioneering community of passionate business leaders shaping the future of data sharing, AI compliance, and next generation digital ecosystems.

OEC membership opens the door to cutting-edge technologies, high-value partnerships, and new business opportunities in the new data economy.

Learn more about [OEC membership opportunities and benefits](https://www.oceanenterprise.io/member-benefits).
diff --git a/discover/ocean-token.md b/discover/ocean-token.md deleted file mode 100644 index 15fcc3ddc..000000000 --- a/discover/ocean-token.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -description: ---- - -## The OCEAN Token - -The Ocean Token (OCEAN) was the utility token powering the Ocean Protocol ecosystem, used for staking, governance, and purchasing data services, enabling secure, transparent, and decentralized data exchange and monetization. -
- -# The ASI Token - -On [March 26th](https://blog.oceanprotocol.com/fetch-ai-ocean-protocol-and-singularitynet-unite-to-create-artificial-superintelligence-alliance-0768d608ecfa), Ocean Protocol, SingularityNET, and [Fetch.ai](http://fetch.ai/) joined forces to form the Superintelligence Alliance and announced a strategic merger of their tokens—OCEAN, FET, and AGIX—into a single unified token called “ASI.” The primary vision behind this Alliance is to empower individuals with the freedom to own and control their data and AI, while upholding each person’s autonomy and sovereignty in the emerging AI-driven economy. - -Starting with FET as the base token of the Alliance, the FET token will be renamed ASI, with an additional 1.48 Billion tokens minted: 867 million ASI allocated to AGIX holders and 611 million ASI allocated to OCEAN token holders. The total supply of ASI tokens will be 2.63 Billion tokens. - -If you are holding OCEAN tokens on the Ethereum network, then you can participate in the Phase 1 migration to FET [here](https://singularitydao.ai/migrate-asi). - -For more info, navigate to this [section](https://oceanprotocol.com/about-us/asi-token/) of our official website. - - -_Next: [Networks](networks/README.md)_ - -_Back: [What can you do with Ocean?](benefits.md)_ diff --git a/discover/privacy-policy.md b/discover/privacy-policy.md new file mode 100644 index 000000000..be3cd4d32 --- /dev/null +++ b/discover/privacy-policy.md @@ -0,0 +1,109 @@ +# Privacy Policy + +_Last updated on June 5, 2025._ + +This privacy policy informs you about how **Ocean Enterprise Collective e.V. (in the following OEC, we, us, our)** processes your personal data. Moreover, this privacy policy informs you about your rights. + +#### 1. Contact details of the controller + +The controller pursuant to the EU General Data Protection Regulation ("GDPR") for the processing of your personal data is: + +**Ocean Enterprise Collective e.V.**\ +Carmerstrasse 18\ +10623 Berlin\ +Germany + +E-mail: [**info@oceanenterprise.io**](mailto:info@oceanenterprise.io) + +#### 2. What's personal data? + +Personal data is any information that can be directly or indirectly associated with you. OEC processes the following personal data. + +* **Log file data including IP addresses:** Logfile data including IP addresses are processed when visiting our website. +* **E-mail:** If you contact OEC via e-mail, we process your e-mail address and any personal data you decide to provide in your message (such as your name). + +You can find further information about the processing of your personal data in the chapter "Processing operations according to Article 13 GDPR". + +#### 3. Processing operations according to Article 13 GDPR + +**3.1 Providing our website and creating log files** + +We host our website with Webflow (Webflow, Inc. located at 398 11th Street, 2nd Floor, San Francisco, CA 94103, USA). When you visit our website, Webflow collects and uses your IP address and creates logfiles including your IP address. + +**Purpose:** Collecting and using your IP address is necessary for providing our website because it is a technical requirement for ensuring communication between your device and our website. Logfiles including your IP address are created for security, fraud-prevention, abuse-prevention, and troubleshooting purposes. + +**Legal basis:** The legal basis for this processing is our legitimate interest, pursuant to Art. 6(1)(f) GDPR. 
+ +**Legitimate interests:** Our legitimate interest is to provide our website to you and to enable security, a technically error-free presentation, and the optimization of the website. + +**Retention period:** Webflow stores your personal data for 15 days. + +**3.2 Contact via e-mail** + +If you contact us via e-mail, OEC collects, uses, and stores your e-mail address, and any other information you provide us in your message, such as your name. When you send us an e-mail, our (mail) service provider supports us in processing your personal data so we can communicate with you. + +**Purpose:** We collect, use and store this personal data to respond to your inquiries. + +**Legal basis:** The legal basis for this processing is our legitimate interest, according to Art. 6(1)(f) GDPR. + +**Legitimate interests:** Our legitimate interest is to answer your inquiries. + +**Retention period:** We store your personal data as long as we need it to process your inquiries. We store your personal data beyond this period if we are obliged to do so due to retention obligations under tax and commercial law or in the event of legal disputes. If the latter is the case, your personal data will be erased after the retention period has expired. + +#### 4. Cookies + +Our website uses cookies. You can manage cookies via your browser settings, including disabling or deleting cookies. If you want to change your cookie consent, use the Cookie Settings link in the footer when available. + +#### 5. Automated decision making including profiling according to Article 13(2)(f) GDPR + +Automated decision making including profiling does not take place. + +#### 6. External links + +Our website contains links to websites owned by third parties. These websites are beyond our control and responsibility. + +#### 7. Your rights + +**7.1 Right to withdraw consent (Art. 7(3) GDPR)** + +You have the right to withdraw your consent at any time. The withdrawal of consent does not affect the lawfulness of processing based on consent before its withdrawal. + +**7.2 Right of access (Art. 15 GDPR)** + +You have the right to obtain confirmation as to whether OEC processes personal data about you. If we are processing personal data about you, you have the right to access these personal data and to gain the information defined in Art. 15 GDPR. + +**7.3 Right to rectification (Art. 16 GDPR)** + +You have the right to obtain without undue delay the rectification of inaccurate personal data about you. Additionally, you have the right to have incomplete personal data about you completed. + +**7.4 Right to erasure (Art. 17 GDPR)** + +You have the right to obtain without undue delay the erasure of personal data about you, where the defined legal grounds in Art. 17 GDPR apply. + +**7.5 Right to restriction of processing (Art. 18 GDPR)** + +Moreover, you have the right to obtain the restriction of processing your personal data where the defined legal grounds in Art. 18 GDPR apply. + +**7.6 Right to data portability (Art. 20 GDPR)** + +You have the right to receive your personal data in a structured, commonly used, and machine-readable format. Additionally, you have the right to transmit those data to another controller without hindrance, where the defined legal grounds in Art. 20 GDPR apply. You can make use of your right to data portability by contacting us. + +**7.7 Right to object (Art.
21 GDPR)** + +On grounds relating to your particular situation, you have the right to object to the processing of your personal data where we based the processing on legitimate interests (Art. 6(1)(f) GDPR). If you object, OEC will no longer process your personal data unless we can demonstrate compelling legitimate grounds for the processing, overriding your rights, freedoms, and interests, or if the processing is required to establish, exercise, or defend legal claims. + +**7.8 Right to lodge a complaint (Art. 77 GDPR)** + +You have the right to lodge a complaint with a supervisory authority if you consider the processing of your personal data by OEC to infringe the GDPR. You can lodge a complaint in particular + +* in the Member State of your habitual residence, +* in the Member State of your place of work, and +* in the place of the alleged infringement. + +#### 8. Questions + +If you have any questions about our privacy policy, please send us an e-mail at [**info@oceanenterprise.io**](mailto:info@oceanenterprise.io). + +#### 9. Changes to the Privacy Policy + +This privacy policy will be amended from time to time. You can see the date of the last alteration at the top of the privacy policy. diff --git a/discover/what-is-ocean.md b/discover/what-is-ocean.md index a44fd704c..747da48fc 100644 --- a/discover/what-is-ocean.md +++ b/discover/what-is-ocean.md @@ -1,71 +1,45 @@ ---- -description: ---- +# What is Ocean Enterprise? -
+### What is Ocean Enterprise? -## What is Ocean? +Ocean Enterprise is a free open-source enterprise-ready data ecosystem software solution that enables companies and public institutions to securely manage and monetize proprietary AI & data products and services in a trusted and compliant environment.\ +\ +Domain agnostic and collectively governed by an independent non-profit association, Ocean Enterprise is shaping a new transparent era of the data economy and is already being used by leading data-driven businesses in aerospace, agriculture, manufacturing, mobility, smart cities and more. -Ocean is a decentralized data exchange protocol. +### Tech: Data NFTs and Datatokens -AI lives on data; Ocean facilitates it. +Ocean Enterprise enables decentralized access control via token-gating using Data NFTs (for assigning IP) and Datatokens (for consuming data services). \ +\ +Key principles: -Ocean has two specific parts: -- A live tech stack. At the core is **Datatokens** and **Compute-to-Data** -- A lively community. This includes **builders, data scientists**, and **Ocean Ambassadors**. Ocean's community is active on **social media**. - -Let's drill into each. - -## Tech: Ocean data NFTs and datatokens - -These enable decentralized access control, via token-gating. Key principles: - -- Publish data services as ERC721 data NFTs and ERC20 datatokens -- You can access the dataset / data service if you hold 1.0 datatokens -- Consuming data services = spending datatokens +* Publish data services as a data NFT (ERC721) +* Access the datasets and data services if you hold datatokens (ERC20) +* Consuming data services = spending datatokens Crypto wallets, exchanges, and DAOs become _data_ wallets, exchanges, and DAOs. -
_Figure: Data NFTs & datatokens are an on-ramp and off-ramp for data assets into DeFi_
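To make the token-gating concrete: a datatoken is a standard ERC-20, so 'may this wallet consume the asset?' reduces to an ordinary balance check against 1.0 datatokens, i.e. 10^18 base units (`0x0de0b6b3a7640000`). A hedged sketch; the datatoken address, user address, and RPC URL are all placeholders:

```bash
# Gate check: does USER hold at least 1.0 of this datatoken?
DATATOKEN=0xDatatokenAddressHere   # placeholder
USER=0xUserAddressHere             # placeholder, lowercase hex
DATA=0x70a08231000000000000000000000000${USER#0x}   # balanceOf(USER)
curl -s -X POST https://your-rpc-endpoint.example \
  -H 'Content-Type: application/json' \
  -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"eth_call\",\"params\":[{\"to\":\"$DATATOKEN\",\"data\":\"$DATA\"},\"latest\"]}"
# If "result" >= 0x0de0b6b3a7640000 (1.0 * 10^18), the gate opens.
```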
- -Data can be on Azure or AWS, Filecoin or Arweave, REST APIs or smart contract feeds. Data may be raw AI training data, feature vectors, trained models, even AI model predictions, or non-AI data. - -## Tech: Ocean Compute-to-Data - -This enables one to buy & sell private data, while preserving privacy -- Private data is valuable: using it can improve research and business outcomes. But concerns over privacy and control make it hard to access. -- Compute-to-Data (C2D) grants access to run compute against the data, _on the same premises as the data_. Only the results are visible to the consumer. The data never leaves the premises. Decentralized blockchain technology does the handshaking. -- C2D enables people to sell private data while preserving privacy, as an opportunity for companies to monetize their data assets. -- C2D can also be used for data sharing in science or technology contexts, with lower liability risk, because the data doesn't move. - -
_Figure: Compute-to-Data flow_
- -## Community: Ocean Ecosystem - -Ocean has a lively [ecosystem](https://oceanprotocol.com/explore/ecosystem) of dapps grown over years, built by enthusiastic developers. - -
+### Tech: Compute-to-Data (C2D) -The Ocean ecosystem also contains many data scientists and AI enthusiasts, excited about the future of AI & data. You can find them doing [predictions](https://www.predictoor.ai/), [data challenges](https://competitions.desights.ai/challenge/list), [Data Farming](https://docs.oceanprotocol.com/data-farming), and more. +Ocean Enterprise enables you to buy & sell private data, while preserving privacy: -## Community: Ocean Ambassadors +* Private data is valuable: using it can improve research and business outcomes. But concerns over privacy and control make it hard to access. +* Compute-to-Data (C2D) grants access to run compute against the data, _on the same premises as the data_. Only the results are visible to the consumer. The data never leaves the premises. Decentralized blockchain technology does the handshaking. +* C2D enables people to sell private data while preserving privacy, as an opportunity for companies to monetize their data assets. +* C2D can also be used for data sharing in science or technology contexts, with lower liability risk, because the data doesn't move. +* Data can be on Azure or AWS, Filecoin or Arweave, REST APIs or smart contract feeds. Data may be raw AI training data, feature vectors, trained models, even AI model predictions, or non-AI data. -Ocean has an excellent [community of ambassadors](https://oceanprotocol.com/explore/community). Anyone can join. +

_Figure: Compute-to-Data flow_
+### Ecosystem, News & Updates -## Community: Social Media +Ocean Enterprise Collective emerged out of the Ocean Protocol [ecosystem](https://oceanprotocol.com/explore/ecosystem), a vibrant and forward-thinking community of data scientists and AI enthusiasts actively building and shaping the future of AI & data. \ +\ +Keep up to date on all the latest Ocean Enterprise developments and news by following Ocean Enterprise on [LinkedIn](https://www.linkedin.com/company/ocean-enterprise-collective) and [Medium](https://medium.com/ocean-enterprise-collective). Or, track Ocean Enterprise progress directly on [GitHub](https://github.com/OceanProtocolEnterprise). -Follow Ocean on [Twitter](https://twitter.com/OceanProtocol) or [Telegram](https://t.me/oceanprotocol_community) to keep up to date. Chat directly with the Ocean community on [Discord](https://discord.gg/TnXjkR5). Or, track Ocean progress directly on [GitHub](https://github.com/oceanprotocol). -Finally, the [Ocean blog](https://blog.oceanprotocol.com/) has regular updates. ---- +*** -_Next: [What can you do with Ocean?](benefits.md)_ +_Next:_ [_What can you do with Ocean?_](benefits.md) -_Back: [Why Ocean?](why-ocean.md)_ +_Back:_ [_Why Ocean?_](/broken/pages/EF1FkqJ9GSKnM8iRAjrc) diff --git a/discover/whitepaper.md b/discover/whitepaper.md new file mode 100644 index 000000000..b67eda3c8 --- /dev/null +++ b/discover/whitepaper.md @@ -0,0 +1,9 @@ +# Whitepaper + +**ABSTRACT** + +Ocean Enterprise is an open-source software framework and governance model designed to create next-generation data spaces and AI ecosystems. It addresses challenges such as data silos, compliance, control, and value creation by enabling organizations to collaborate efficiently and create value from digital resources at scale without surrendering control. The framework utilizes a federated architecture that combines technical data sovereignty, value creation and exchange, transparency, sovereign deployments and Compute-to-Data capabilities that provide peer-to-peer access control to privately held data. This allows data to be used for computation and AI model training without the underlying intellectual property leaving its owner’s control. Governed by the Ocean Enterprise Collective e.V., a non-profit association, it ensures transparent governance and alignment with key European data standards and regulations. + +Access the full whitepaper below. + +{% file src="../.gitbook/assets/OEC_Whitepaper.pdf" %} diff --git a/discover/why-ocean.md b/discover/why-ocean.md deleted file mode 100644 index ba623fe4e..000000000 --- a/discover/why-ocean.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -description: ---- - -### Why Ocean? - -Ocean was founded to level the playing field for AI and data. - -{% embed url="https://youtu.be/4P72ZelkEpQ" %} - -To dive deeper, see [this blog](https://blog.oceanprotocol.com/from-ai-to-blockchain-to-data-meet-ocean-f210ff460465) or [this video](https://youtu.be/XN_PHg1K61w). - ---- - -_Next: [What is Ocean?](what-is-ocean.md)_ - -_Back: [Discover Ocean: main](README.md)_ \ No newline at end of file diff --git a/infrastructure/README.md b/infrastructure/README.md index 2559b91f5..046bcad9f 100644 --- a/infrastructure/README.md +++ b/infrastructure/README.md @@ -1,10 +1,14 @@ --- -description: Learn how to deploy Ocean components in your environment. -cover: ../.gitbook/assets/cover/infrastructure_banner.png +description: Learn how to deploy Ocean Enterprise in your environment.
+cover: ../.gitbook/assets/Deployment.png coverY: 0 --- -# 🔨 Infrastructure +# Deployment guides + + + + There are many ways in which the components can be deployed, from simple configurations used for development and testing to complex configurations, used for production systems. diff --git a/infrastructure/compute-to-data-docker-registry.md b/infrastructure/compute-to-data-docker-registry.md deleted file mode 100644 index 6e1255af9..000000000 --- a/infrastructure/compute-to-data-docker-registry.md +++ /dev/null @@ -1,314 +0,0 @@ ---- -title: Setting up private docker registry for Compute-to-Data environment -description: >- - Learn how to setup your own docker registry and push images for running - algorithms in a C2D environment. ---- - -# C2D - Private Docker Registry - -The document is intended for a production setup. The tutorial provides the steps to set up a private docker registry on the server for the following scenarios: - -* Allow registry access only to the C2D environment. -* Anyone can pull the image from the registry but, only authenticated users will push images to the registry. - -### Setup 1: Allow registry access only to the C2D environment - -To implement this use case, 1 domain will be required: - -* **example.com**: This domain will allow only image pull operations - -_Note: Please change the domain names to your application-specific domain names._ - -#### 1.1 Prerequisites - -* A docker environment running on a Linux server. -* Docker compose is installed. -* C2D environment is running. -* The domain names are mapped to the server hosting the registry. - -#### 1.2 Generate certificates - -```bash -# install certbot: https://certbot.eff.org/ -sudo certbot certonly --standalone --cert-name example.com -d example.com -``` - -_Note: Check the access right of the files/directories where certificates are stored. Usually, they are at `/etc/letsencrypt/`._ - -#### 1.3 Generate a password file - -Replace content in `<>` with appropriate content. - -```bash -docker run \ - --entrypoint htpasswd \ - httpd:2 -Bbn > /auth/htpasswd -``` - -#### 1.4 Docker compose template file for registry - -Copy the below `yml` content to `docker-compose.yml` file and replace content in `<>`. - -```yml -version: '3' - -services: - registry: - restart: always - container_name: my-docker-registry - image: registry:2 - ports: - - 5050:5000 - environment: - REGISTRY_AUTH: htpasswd - REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd - REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm - REGISTRY_HTTP_SECRET: - volumes: - - /data:/var/lib/registry - - /auth:/auth - nginx: - image: nginx:latest - container_name: nginx - volumes: - - /nginx/logs:/app/logs/ - - nginx.conf:/etc/nginx/nginx.conf - - /etc/letsencrypt/:/etc/letsencrypt/ - ports: - - 80:80 - - 443:443 - depends_on: - - registry -``` - -#### 1.5 Nginx configuration - -Copy the below nginx configuration to a `nginx.conf` file. 
- -``` -events {} -http { - access_log /app/logs/access.log; - error_log /app/logs/error.log; - - server { - client_max_body_size 4096M; - listen 80 default_server; - server_name _; - return 301 https://$host$request_uri; - } - - server { - # Allowed request size should be large enough to allow pull operations - client_max_body_size 4096M; - listen 443 ssl; - server_name example.com; - ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; - ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; - location / { - proxy_connect_timeout 75s; - proxy_pass http://registry-read-only:5000; - } - } -} -``` - -#### 1.6 Create Kubernetes secret in C2D server - -Login into the compute-to-data environment and run the following command with the appropriate credentials: - -```bash -kubectl create secret docker-registry regcred --docker-server=example.com --docker-username= --docker-password= --docker-email= -n ocean-compute -``` - -#### 1.7 Update operator-engine configuration - -Add `PULL_SECRET` property with value `regcred` in the [operator.yml](https://github.com/oceanprotocol/operator-engine/blob/main/kubernetes/operator.yml) file of operator-engine configuration. For more details on operator-engine properties refer to the [operator-engine readme](https://github.com/oceanprotocol/operator-engine/blob/v4main/README.md). - -Apply updated operator-engine configuration. - -```bash -kubectl config set-context --current --namespace ocean-compute -kubectl apply -f operator-engine/kubernetes/operator.yml -``` - -### Steup 2: Allow anonymous `pull` operations - -To implement this use case, 2 domains will be required: - -* **example.com**: This domain will only allow image push/pull operations from authenticated users. -* **readonly.example.com**: This domain will allow only image pull operations - -_Note: Please change the domain names to your application-specific domain names._ - -#### 2.1 Prerequisites - -* Running docker environment on the Linux server. -* Docker compose is installed. -* 2 domain names are mapped to the same server IP address. - -#### 2.2 Generate certificates - -```bash -# install certbot: https://certbot.eff.org/ -sudo certbot certonly --standalone --cert-name example.com -d example.com -sudo certbot certonly --standalone --cert-name readonly.example.com -d readonly.example.com -``` - -_Note: Do check the access right of the files/directories where certificates are stored. Usually, they are at `/etc/letsencrypt/`._ - -#### 2.3 Generate a password file - -Replace content in `<>` with appropriate content. - -```bash -docker run \ - --entrypoint htpasswd \ - httpd:2 -Bbn > /auth/htpasswd -``` - -#### 2.4 Docker compose template file for registry - -Copy the below `yml` content to `docker-compose.yml` file and replace content in `<>`. Here, we will be creating two services of the docker registry so that anyone can `pull` the images from the registry but, only authenticated users can `push` the images. 
- -```yml -version: '3' - -services: - registry: - restart: always - container_name: my-docker-registry - image: registry:2 - ports: - - 5050:5000 - environment: - REGISTRY_AUTH: htpasswd - REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd - REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm - REGISTRY_HTTP_SECRET: - volumes: - - /data:/var/lib/registry - - /auth:/auth - registry-read-only: - restart: always - container_name: my-registry-read-only - image: registry:2 - read_only: true - ports: - - 5051:5000 - environment: - REGISTRY_HTTP_SECRET: ${REGISTRY_HTTP_SECRET} - volumes: - - /docker-registry/data:/var/lib/registry:ro - depends_on: - - registry - nginx: - image: nginx:latest - container_name: nginx - volumes: - - /nginx/logs:/app/logs/ - - nginx.conf:/etc/nginx/nginx.conf - - /etc/letsencrypt/:/etc/letsencrypt/ - ports: - - 80:80 - - 443:443 - depends_on: - - registry-read-only -``` - -#### 2.5 Nginx configuration - -Copy the below nginx configuration to a `nginx.conf` file. - -``` -events {} -http { - access_log /app/logs/access.log; - error_log /app/logs/error.log; - - server { - client_max_body_size 4096M; - listen 80 default_server; - server_name _; - return 301 https://$host$request_uri; - } - - server { - # Allowed request size should be large enough to allow push operations - client_max_body_size 4096M; - listen 443 ssl; - server_name readonly.example.com; - ssl_certificate /etc/letsencrypt/live/readonly.example.com/fullchain.pem; - ssl_certificate_key /etc/letsencrypt/live/readonly.example.com/privkey.pem; - location / { - proxy_connect_timeout 75s; - proxy_pass http://registry:5000; - } - } - - server { - # Allowed request size should be large enough to allow pull operations - client_max_body_size 4096M; - listen 443 ssl; - server_name example.com; - ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; - ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; - location / { - proxy_connect_timeout 75s; - proxy_pass http://registry-read-only:5000; - } - } -} -``` - -### Start the registry - -```bash -docker-compose -f docker-compose.yml up -``` - -### Working with registry - -#### Login to registry - -```bash -docker login example.com -u -p -``` - -#### Build and push an image to the registry - -Use the commands below to build an image from a `Dockerfile` and push it to your private registry. - -```bash -docker build . -t example.com/my-algo:latest -docker image push example.com/my-algo:latest -``` - -#### List images in the registry - -```bash -curl -X GET -u : https://example.com/v2/_catalog -``` - -#### Pull an image from the registry - -Use the commands below to build an image from a `Dockerfile` and push it to your private registry. - -```bash -# requires login -docker image pull example.com/my-algo:latest - -# allows anonymous pull if 2nd setup scenario is implemented -docker image pull readonly.example.com/my-algo:latest -``` - -#### Next step - -You can publish an algorithm asset with the metadata containing the registry URL, image, and tag information to enable users to run C2D jobs. 
- -### Further references - -* [Setup Compute-to-Data environment](compute-to-data-minikube.md) -* [Writing algorithms](../developers//compute-to-data/compute-to-data-algorithms.md) -* [C2D example](https://github.com/oceanprotocol/ocean.py/blob/main/READMEs/c2d-flow.md) diff --git a/infrastructure/compute-to-data-minikube.md b/infrastructure/compute-to-data-minikube.md deleted file mode 100644 index 73592244a..000000000 --- a/infrastructure/compute-to-data-minikube.md +++ /dev/null @@ -1,254 +0,0 @@ ---- -title: Minikube Compute-to-Data Environment ---- - -# Deploying C2D - -This chapter will present how to deploy the C2D component of the Ocean stack. As mentioned in the [C2D Architecture chapter](../developers/compute-to-data/#architecture-and-overview-guides), the Compute-to-Data component uses Kubernetes to orchestrate the creation and deletion of the pods in which the C2D jobs are run. - -For the ones that do not have a Kubernetes environment available, we added to this guide instructions on how to install Minikube, which is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. In case you have a Kubernetes environment in place, please skip directly to step 4 of this guide. - - - -### Requirements - -* Communications: a functioning internet-accessible provider service -* Hardware: a server capable of running compute jobs (e.g. we used a machine with 8 CPUs, 16 GB Ram, 100GB SSD, and a fast internet connection). See [this guide](setup-server.md) for how to create a server; -* Operating system: Ubuntu 22.04 LTS - - - -### Steps - -1. [Install Docker and Git](compute-to-data-minikube.md#install-docker-and-git) -2. [Install Minikube](compute-to-data-minikube.md#install-minikube) -3. [Start Minikube](compute-to-data-minikube.md#start-minikube) -4. [Install the Kubernetes command line tool (kubectl)](compute-to-data-minikube.md#install-the-kubernetes-command-line-tool-kubectl) -5. [Download all required files](compute-to-data-minikube.md#download-all-required-files) -6. [Create namespaces](compute-to-data-minikube.md#create-namespaces) -7. [Setup up Postgresql](compute-to-data-minikube.md#setup-up-postgresql) -7. [Run the IPFS host (optional)](compute-to-data-minikube.md#run-the-ipfs-host-optional) -8. [Update the storage class](compute-to-data-minikube.md#update-the-storage-class) -9. [Setup C2D Orchestrator](compute-to-data-minikube.md#setup-c2d-orchestrator) -10. [Setup your first environment](compute-to-data-minikube.md#setup-your-first-environment) -11. [Update Provider](compute-to-data-minikube.md#update-provider) -12. [Automated deployment example](compute-to-data-minikube.md#automated-deployment-example) - - -#### Install Docker and Git - -```bash -sudo apt update -sudo apt install git docker.io -sudo usermod -aG docker $USER && newgrp docker -``` - -#### Install Minikube - -```bash -wget -q --show-progress https://github.com/kubernetes/minikube/releases/download/v1.22.0/minikube_1.22.0-0_amd64.deb -sudo dpkg -i minikube_1.22.0-0_amd64.deb -``` - -#### Start Minikube - -The first command is important and solves a [PersistentVolumeClaims problem](https://github.com/kubernetes/minikube/issues/7828). 
- -```bash -minikube config set kubernetes-version v1.16.0 -minikube start --cni=calico --driver=docker --container-runtime=docker -``` - -Depending on the number of available CPUs, RAM, and the required resources for running the job, consider adding options `--cpu`, `--memory`, and `--disk-size` to avoid runtime issues. - -For other options to run minikube refer to this [link](https://minikube.sigs.k8s.io/docs/commands/start/) - -#### Install the Kubernetes command line tool (kubectl) - -```bash -curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" -curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" -echo "$(> /etc/hosts' - -``` - -#### Update the storage class - -The storage class is used by Kubernetes to create the temporary volumes on which the data used by the algorithm will be stored. - -Please ensure that your class allocates volumes in the same region and zone where you are running your pods. - -You need to consider the storage class available for your environment. - -For Minikube, you can use the default 'standard' class. - -In AWS, we created our own 'standard' class: - -```bash -kubectl get storageclass standard -o yaml -``` - -```yaml -allowedTopologies: -- matchLabelExpressions: - - key: failure-domain.beta.kubernetes.io/zone - values: - - us-east-1a -apiVersion: storage.k8s.io/v1 -kind: StorageClass -parameters: - fsType: ext4 - type: gp2 -provisioner: kubernetes.io/aws-ebs -reclaimPolicy: Delete -volumeBindingMode: Immediate -``` - -For more information, please visit https://kubernetes.io/docs/concepts/storage/storage-classes/ - -If you need to use your own classes, you will need to edit 'operator_engine/kubernetes/operator.yml'. - -#### Setup C2D Orchestrator - -C2D Orchestrator (aka operator-service) has two main functions: - - First, it's the outside interface of your C2D Cluster to the world. External components(like Provider) are calling APIs exposed by this - - Secondly, operator-service manages multiple environments and sends the jobs to the right environment. - -Edit `operator-service/kubernetes/deployment.yaml`. Change `ALLOWED_ADMINS` to a nice long random password. - -Let's deploy C2D Orchestrator. - -```bash -kubectl config set-context --current --namespace ocean-operator -kubectl apply -f operator-service/kubernetes/deployment.yaml -``` - -Now, let's expose the service. - -```bash -kubectl expose deployment operator-api --namespace=ocean-operator --port=8050 -``` - -You can run a port forward in a new terminal (see below) or create your ingress service and setup DNS and certificates (not covered here): - -```bash -kubectl -n ocean-operator port-forward svc/operator-api 8050 -``` - -Alternatively you could use another method to communicate between the C2D Environment and the provider, such as an SSH tunnel. - -And now it's time to initialize the database. - - -If your Minikube is running on compute.example.com: - -```bash -curl -X POST "https://compute.example.com/api/v1/operator/pgsqlinit" -H "accept: application/json" -H "Admin: myAdminPass" -``` -(where myAdminPass is configured in [Setup C2D Orchestrator](compute-to-data-minikube.md#setup-c2d-orchestrator)) - -Congrats, you have operator-service running. - -#### Setup your first environment - -Let's create our first environment. -Edit `operator-service/kubernetes/deployment.yaml`. - - set OPERATOR_PRIVATE_KEY. This has to be unique among multiple environments. 
In the future, this will be the account credited with fees. - - optionally change more env variables, to customize your environment. Check the [README](https://github.com/oceanprotocol/operator-engine#customize-your-operator-engine-deployment) section of the operator engine to customize your deployment. At a minimum, you should add your IPFS URLs or AWS settings, and add (or remove) notification URLs. - -Finally, let's deploy it: - -```bash -kubectl config set-context --current --namespace ocean-compute -kubectl create -f operator-service/kubernetes/postgres-configmap.yaml -kubectl apply -f operator-engine/kubernetes/sa.yml -kubectl apply -f operator-engine/kubernetes/binding.yml -kubectl apply -f operator-engine/kubernetes/operator.yml -``` - -**Optional**: For production enviroments, it's safer to block access to metadata. To do so run the below command: - -```bash -kubectl -n ocean-compute apply -f /ocean/operator-engine/kubernetes/egress.yaml -``` -Congrats,your c2d environment is running. - -If you want to deploy another one, just repeat the steps above, with a different namespace and different OPERATOR_PRIVATE_KEY. - - - -#### Update Provider - -Update your existing provider service by updating the `operator_service.url` value in `config.ini`, or set the appropiate ENV variable. - - -```ini -operator_service.url = https://compute.example.com/ -``` - -Restart your provider service. - -#### Automated deployment example - -If your setup is more complex, you can checkout (our automated deployment example)[https://github.com/oceanprotocol/c2d_barge/blob/main/c2d_barge_deployer/docker-entrypoint.sh]. -This script is used by barge to automaticly deploy the C2D cluster, with two environments. \ No newline at end of file diff --git a/infrastructure/deploy-SSI-infrastructure b/infrastructure/deploy-SSI-infrastructure new file mode 100644 index 000000000..584d1deb6 --- /dev/null +++ b/infrastructure/deploy-SSI-infrastructure @@ -0,0 +1,11 @@ +Walt ID stack +- issuer +- wallet +- verifier + + +Policy Server + +Policy Server Proxy Endpoints + +KYB component \ No newline at end of file diff --git a/infrastructure/deploy-marketplace b/infrastructure/deploy-marketplace new file mode 100644 index 000000000..e69de29bb diff --git a/infrastructure/deploy-node b/infrastructure/deploy-node new file mode 100644 index 000000000..e69de29bb diff --git a/infrastructure/deploying-aquarius.md b/infrastructure/deploying-aquarius.md deleted file mode 100644 index f29dd0de4..000000000 --- a/infrastructure/deploying-aquarius.md +++ /dev/null @@ -1,552 +0,0 @@ -# Deploying Aquarius - -### About Aquarius - -Aquarius is an off-chain component that caches the asset's metadata published on-chain. By deploying their own instance of Aquarius, developers can control which assets are visible in their DApp. For example, having a custom Aquarius instance allows only the assets from specific addresses to be visible in the DApp. - -This tutorial will provide the steps to deploy Aquarius. Ocean Protocol provides Aquarius Docker images which can be viewed [here](https://hub.docker.com/r/oceanprotocol/aquarius/tags). Visit [this](https://github.com/oceanprotocol/aquarius) page to view the Aquarius source code. - -Aquarius consists of two parts: - -* **API:** The Aquarius API provides a user with a convenient way to access the metadata without scanning the chain itself. -* **Event monitor:** Aquarius continually monitors the chains for MetadataCreated and MetadataUpdated events, processes these events, and adds them to the database. 
- -As mentioned in the [Setup a Server](setup-server.md) document, all Ocean components can be deployed in two configurations: simple, based on Docker Engine and Docker Compose, and complex, based on Kubernetes with Docker Engine. This document will present how to deploy Aquarius in each of these configurations. - -## Deploying Aquarius using Docker Engine and Docker Compose - -This guide will deploy Aquarius, including Elasticsearch as a single systemd service. - -### Prerequisites - -* A server for hosting Aquarius. See [this guide](setup-server.md) for how to create a server; -* Docker Compose and Docker Engine are installed and configured on the server. See [this guide](setup-server.md#install-docker-engine-and-docker-compose) for how to install these products. -* The RPC URLs and API keys for each of the networks to which the Aquarius will be connected. See [this guide](../developers/obtaining-api-keys-for-blockchain-access.md) for how to obtain the URL and the API key. - -### Steps - -#### 1. Create the /etc/docker/compose/aquarius/docker-compose.yml file - -From a terminal console, create /etc/docker/compose/aquarius/docker-compose.yml file, then copy and paste the following content to it. Check the comments in the file and replace the fields with the specific values of your implementation. The following example is for deploying Aquarius for Goerli network. - -For each other network in which you want to deploy Aquarius, add to the file a section similar to "aquarius-events-goerli" included in this example and update the corresponding parameters (i.e. EVENTS\_RPC, OCEAN\_ADDRESS, SUBGRAPH\_URLS) specific to that network. - -```yaml -version: '3.9' -services: - elasticsearch: - image: elasticsearch:8.7.0 - container_name: elasticsearch - restart: on-failure - environment: - ES_JAVA_OPTS: "-Xms512m -Xmx512m" - MAX_MAP_COUNT: "64000" - discovery.type: "single-node" - ELASTIC_PASSWORD: "changeme" - xpack.security.enabled: "false" - xpack.security.http.ssl.enabled: "false" - volumes: - - data:/usr/share/elasticsearch/data - ports: - - 9200:9200 - networks: - - backend - aquarius: - image: oceanprotocol/aquarius:v5.1.2 - container_name: aquarius - restart: on-failure - ports: - - 5000:5000 - networks: - - backend - depends_on: - - elasticsearch - environment: - DB_MODULE: elasticsearch - DB_HOSTNAME: http://elasticsearch - DB_PORT: 9200 - DB_USERNAME: elastic - DB_PASSWORD: changeme - DB_NAME: aquarius - DB_SCHEME: http - DB_SSL : "false" - LOG_LEVEL: "INFO" - AQUARIUS_URL: "http://0.0.0.0:5000" - AQUARIUS_WORKERS : "4" - RUN_AQUARIUS_SERVER: "1" - AQUARIUS_CONFIG_FILE: "config.ini" - EVENTS_ALLOW: 0 - RUN_EVENTS_MONITOR: 0 - ALLOWED_PUBLISHERS: '[""]' - aquarius-events-goerli: - image: oceanprotocol/aquarius:v5.1.2 - container_name: aquarius-events-goerli - restart: on-failure - networks: - - backend - depends_on: - - elasticsearch - environment: - DB_MODULE: elasticsearch - DB_HOSTNAME: http://elasticsearch - DB_PORT: 9200 - DB_USERNAME: elastic - DB_PASSWORD: changeme - DB_NAME: aquarius - DB_SCHEME: http - DB_SSL : "false" - LOG_LEVEL: "INFO" - AQUARIUS_URL: "http://0.0.0.0:5000" - AQUARIUS_WORKERS : "1" - RUN_AQUARIUS_SERVER : "0" - AQUARIUS_CONFIG_FILE: "config.ini" - ALLOWED_PUBLISHERS: '[""]' - NETWORK_NAME: "goerli" - EVENTS_RPC: "https://goerli.infura.io/v3/" - METADATA_UPDATE_ALL : "0" - OCEAN_ADDRESS : 0xcfdda22c9837ae76e0faa845354f33c62e03653a - RUN_EVENTS_MONITOR: 1 - BLOCKS_CHUNK_SIZE: "5000" - SUBGRAPH_URLS: "5: https://v4.subgraph.goerli.oceanprotocol.com" -volumes: - 
data: - driver: local -networks: - backend: - driver: bridge -``` - -#### 2. Create the /etc/systemd/system/docker-compose@aquarius.service file - -Create the _/etc/systemd/system/docker-compose@aquarius.service_ file then copy and paste the following content to it. This example file could be customized if needed. - -```yaml -[Unit] -Description=%i service with docker compose -Requires=docker.service -After=docker.service - -[Service] -Type=oneshot -RemainAfterExit=true -Environment="PROJECT=ocean" -WorkingDirectory=/etc/docker/compose/%i -ExecStartPre=/usr/bin/env docker-compose -p $PROJECT pull -ExecStart=/usr/bin/env docker-compose -p $PROJECT up -d -ExecStop=/usr/bin/env docker-compose -p $PROJECT stop -ExecStopPost=/usr/bin/env docker-compose -p $PROJECT down - - -[Install] -WantedBy=multi-user.target -``` - -#### 3. Reload the systemd manager configuration - -Run the following command to reload the systemd manager configuration - -```bash -sudo systemctl daemon-reload -``` - -Optionally, you can enable the services to start at boot, using the following command: - -```bash -sudo systemctl enable docker-compose@aquarius.service -``` - -#### 4. Start Aquarius service - -To start the Aquarius service, run the following command: - -```bash -sudo systemctl start docker-compose@aquarius.service -``` - -#### 5. Check the service's status - -Check the status of the service by running the following commands: - -```bash -sudo systemctl status docker-compose@aquarius.service -``` - -#### 6. Confirm Aquarius is accessible - -Run the following commands to access Aquarius The output should be similar to the one displayed here. - -
$ curl localhost:9200
-{
-  "name" : "a93d989293ac",
-  "cluster_name" : "docker-cluster",
-  "cluster_uuid" : "Bs16cyCwRCOIbmaBUEj5fA",
-  "version" : {
-    "number" : "8.7.0",
-    "build_flavor" : "default",
-    "build_type" : "docker",
-    "build_hash" : "09520b59b6bc1057340b55750186466ea715e30e",
-    "build_date" : "2023-03-27T16:31:09.816451435Z",
-    "build_snapshot" : false,
-    "lucene_version" : "9.5.0",
-    "minimum_wire_compatibility_version" : "7.17.0",
-    "minimum_index_compatibility_version" : "7.0.0"
-  },
-  "tagline" : "You Know, for Search"
-}
-
- -```bash -$ curl localhost:5000 -{"plugin":"module","software":"Aquarius","version":"5.1.2"} -``` - -#### 7. Use Docker CLI to check the Aquarius service's logs - -If needed, use docker CLI to check Aquarius' service logs. - -First, identify the container id: - -```bash -$ docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -355baee34d50 oceanprotocol/aquarius:v5.1.2 "/aquarius/docker-en…" About a minute ago Up About a minute 5000/tcp aquarius-events-goerli -f1f97d6f146f oceanprotocol/aquarius:v5.1.2 "/aquarius/docker-en…" About a minute ago Up About a minute 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp aquarius -a93d989293ac elasticsearch:8.7.0 "/bin/tini -- /usr/l…" About a minute ago Up About a minute 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp elasticsearch - -``` - -Then, check the logs from the Aqauarius' Docker containers: - -```bash -$ docker logs aquarius [--follow] -$ docker logs aquarius-events-goerli [--follow] -``` - -## Deploying Aquarius using Kubernetes - -Aquarius depends on the backend database and in this example we will deploy the following resources: - -* Elasticsearch. -* Aquarius ([Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)) - -Templates (yaml files) are provided and could be customized based on the environment's specifics. - -### Prerequisites - -* A server for hosting Aquarius. See [this guide](setup-server.md) for how to create a server; -* Kubernetes with Docker Engine is installed and configured on the server. See [this chapter](setup-server.md#install-kubernetes-with-docker-engine) for information on installing Kubernetes. -* The RPC URLs and API keys for each of the networks to which the Aquarius will be connected. See [this guide](../developers/obtaining-api-keys-for-blockchain-access.md) for how to obtain the URL and the API key. - -### Steps - -1. [Deploy Elasticsearch service](deploying-aquarius.md#1.-deploy-elasticsearch) -2. [Deploy Aquarius service](deploying-aquarius.md#2.-deploy-aquarius) - -#### 1. Deploy Elasticsearch - -It is recommended to deploy Elasticsearch through Helm [chart](https://github.com/elastic/cloud-on-k8s). - -a. Once the Elasticsearch pods are running, the database service should be available: - -```bash -$ kubectl port-forward --namespace ocean svc/elasticsearch-master 9200:9200 -Forwarding from 127.0.0.1:9200 -> 9200 -Forwarding from [::1]:9200 -> 9200 -``` - -b. Check that the Elasticsearch service is accessible: - -``` -$ curl localhost:9200 -{ - "name" : "elasticsearch-master-2", - "cluster_name" : "elasticsearch", - "cluster_uuid" : "KMAfL5tVSJWFfmCOklT0qg", - "version" : { - "number" : "8.5.2", - "build_flavor" : "default", - "build_type" : "docker", - "build_hash" : "a846182fa16b4ebfcc89aa3c11a11fd5adf3de04", - "build_date" : "2022-11-17T18:56:17.538630285Z", - "build_snapshot" : false, - "lucene_version" : "9.4.1", - "minimum_wire_compatibility_version" : "7.17.0", - "minimum_index_compatibility_version" : "7.0.0" - }, - "tagline" : "You Know, for Search" -} -``` - -#### 2. Deploy Aquarius - -Aquarius supports indexing multiple chains using a single instance to serve API requests and one instance for each chain that must be indexed. - -

Figure: Aquarius deployment - multiple chains indexing

- -The following deployment templates could be used for guidance. Some parameters are [optional](https://github.com/oceanprotocol/aquarius) and the template could be adjusted based on these considerations. Common cases are the deployments for one/multiple Ethereum networks: - -* Mainnet -* Sepolia - -a. Create a YAML file for Aquarius configuration. - -The following templates (annotated) could be edited and used for deployment. - -* [_aquarius-deployment.yaml_](https://github.com/oceanprotocol/aquarius/blob/update-deploy-docs/deployment/aquarius-deployment.yaml) (annotated): this deployment is responsible for serving API requests - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - annotations: - labels: - app: aquarius - name: aquarius -spec: - progressDeadlineSeconds: 600 - replicas: 1 - revisionHistoryLimit: 5 - selector: - matchLabels: - app: aquarius - strategy: - rollingUpdate: - maxSurge: 25% - maxUnavailable: 25% - type: RollingUpdate - template: - metadata: - creationTimestamp: null - labels: - app: aquarius - spec: - containers: - - env: - - name: LOG_LEVEL - value: DEBUG - - name: AQUARIUS_URL - value: http://0.0.0.0:5000 - - name: AQUARIUS_WORKERS - value: "4" - - name: DB_HOSTNAME - value: < ES service hostname > - - name: DB_MODULE - value: elasticsearch - - name: DB_NAME - value: aquarius - - name: DB_PORT - value: "9200" - - name: DB_SCHEME - value: http - - name: DB_USERNAME - value: < ES username > - - name: DB_PASSWORD - value: < ES password > - - name: DB_SSL - value: "false" - - name: RUN_AQUARIUS_SERVER - value: "1" - - name: RUN_EVENTS_MONITOR - value: "0" - - name: EVENTS_ALLOW - value: "0" - - name: CONFIG_FILE - value: config.ini - - name: ALLOWED_PUBLISHERS - value: '[""]' - image: oceanprotocol/aquarius:v5.1.2 => check the available versions: https://hub.docker.com/repository/docker/oceanprotocol/aquarius/tags?page=1&ordering=last_updated - imagePullPolicy: Always - livenessProbe: - failureThreshold: 3 - httpGet: - path: / - port: 5000 - scheme: HTTP - initialDelaySeconds: 20 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 2 - name: aquarius - ports: - - containerPort: 5000 - protocol: TCP - readinessProbe: - failureThreshold: 3 - httpGet: - path: / - port: 5000 - scheme: HTTP - initialDelaySeconds: 20 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 1 - resources: - limits: - cpu: 800m - memory: 1Gi - requests: - cpu: 800m - memory: 1Gi - terminationMessagePath: /dev/termination-log - terminationMessagePolicy: File - dnsPolicy: ClusterFirst - restartPolicy: Always - schedulerName: default-scheduler - terminationGracePeriodSeconds: 30ya -``` - -Example deployment for Sepoia (Polygon testnet): - -* [aquarius-events-sepolia-deployment.yaml](https://github.com/oceanprotocol/aquarius/blob/update-deploy-docs/deployment/aquarius-events-sepolia-deployment.yaml) (annotated) - this deployment will be responsible for indexing the block and storing the metadata published on-chain: - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - annotations: - labels: - app: aquarius-events-sepolia - name: aquarius-events-sepolia -spec: - progressDeadlineSeconds: 600 - replicas: 1 - revisionHistoryLimit: 5 - selector: - matchLabels: - app: aquarius-events-sepolia - strategy: - rollingUpdate: - maxSurge: 25% - maxUnavailable: 25% - type: RollingUpdate - template: - metadata: - creationTimestamp: null - labels: - app: aquarius-events-sepolia - spec: - containers: - - env: - - name: LOG_LEVEL - value: DEBUG - - name: AQUARIUS_URL - value: 
http://0.0.0.0:5000 - - name: AQUARIUS_WORKERS - value: "1" - - name: DB_HOSTNAME - value: < ES service hostname > - - name: DB_MODULE - value: elasticsearch - - name: DB_NAME - value: aquarius - - name: DB_PORT - value: "9200" - - name: DB_SCHEME - value: http - - name: DB_USERNAME - value: < ES username > - - name: DB_PASSWORD - value: < ES password > - - name: DB_SSL - value: "false" - - name: RUN_AQUARIUS_SERVER - value: "0" - - name: RUN_EVENTS_MONITOR - value: "1" - - name: CONFIG_FILE - value: config.ini - - name: ALLOWED_PUBLISHERS - value: '[""]' - - name: NETWORK_NAME - value: sepolia - - name: EVENTS_RPC - value: https://polygon-sepolia.infura.io/v3/< INFURA ID > => or another RPC service for this network - - name: METADATA_UPDATE_ALL - value: "0" - - name: ASSET_PURGATORY_URL - value: https://raw.githubusercontent.com/oceanprotocol/list-purgatory/main/list-assets.json - - name: ACCOUNT_PURGATORY_URL - value: https://raw.githubusercontent.com/oceanprotocol/list-purgatory/main/list-accounts.json - - name: PURGATORY_UPDATE_INTERVAL - value: "60" - - name: OCEAN_ADDRESS - value: 0xd8992Ed72C445c35Cb4A2be468568Ed1079357c8 - - name: SUBGRAPH_URLS - value: | - {"80001": "https://v4.subgraph.sepolia.oceanprotocol.com"} => or your own deployed Ocean Subgraph service for this network - - name: BLOCKS_CHUNK_SIZE - value: "3500" - - name: EVENTS_HTTP - value: "1" - image: oceanprotocol/aquarius:v5.1.2 => check the available versions: https://hub.docker.com/repository/docker/oceanprotocol/aquarius/tags?page=1&ordering=last_updated - imagePullPolicy: Always - livenessProbe: - failureThreshold: 3 - httpGet: - path: / - port: 5001 - scheme: HTTP - initialDelaySeconds: 20 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 1 - name: aquarius-events-sepolia - ports: - - containerPort: 5000 - protocol: TCP - readinessProbe: - failureThreshold: 3 - httpGet: - path: / - port: 5001 - scheme: HTTP - initialDelaySeconds: 20 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 1 - resources: - limits: - cpu: 500m - memory: 1Gi - requests: - cpu: 500m - memory: 1Gi - terminationMessagePath: /dev/termination-log - terminationMessagePolicy: File - dnsPolicy: ClusterFirst - restartPolicy: Always - schedulerName: default-scheduler - terminationGracePeriodSeconds: 30 -``` - -Tip: before deployment, you can [validate](https://github.com/instrumenta/kubeval) the yaml file. - -b. Deploy the configuration - -Deploy the configuration in Kubernetes using the following commands. - -```bash -$ kubectl apply -f aquarius-deployment.yaml -$ kubectl apply -f aquarius-events-rinkeby-deployment.yaml - - -kubectl get pods -l app=aquarius -NAME READY STATUS RESTARTS AGE -aquarius-6fd9cc975b-fxr4d 1/1 Running 0 1d - - kubectl get pods -l app=aquarius-events-sepolia -NAME READY STATUS RESTARTS AGE -aquarius-events-sepolia-8748976c4-mh24n 1/1 Running 0 1d -``` - -Check the logs for newly deployed Aquarius by running the following command: - -```bash -$ kubectl logs aquarius-6fd9cc975b-fxr4d [--follow] - -$ kubectl logs aquarius-events-sepolia-8748976c4-mh24n [--follow] -``` - -c. Create a Kubernetes service - -The next step is to create a Kubernetes service (eg. ClusterIP, NodePort, Loadbalancer, ExternalName) for this deployment, depending on the environment specifications. Follow [this link](https://kubernetes.io/docs/concepts/services-networking/service/) for details on how to create a Kubernetes service. 
diff --git a/infrastructure/deploying-ocean-subgraph.md b/infrastructure/deploying-ocean-subgraph.md deleted file mode 100644 index a2e4c99aa..000000000 --- a/infrastructure/deploying-ocean-subgraph.md +++ /dev/null @@ -1,703 +0,0 @@ -# Deploying Ocean Subgraph - -### About Ocean Subgraph - -Ocean subgraph allows querying the datatoken, data NFT, and all event information using GraphQL. Hosting the Ocean subgraph saves the cost and time required in querying the data directly from the blockchain. The steps in this tutorial will explain how to host Ocean subgraph for the EVM-compatible chains supported by Ocean Protocol. - -Ocean Subgraph is deployed on top of [graph-node](https://github.com/graphprotocol/graph-node), therefore, in this document, we will show first how to deploy graph-node - either using Docker Engine or Kubernetes - and then how to install Ocean Subgraph on the graph-node system. - -## Deploying Graph-node using Docker Engine and Docker Compose - -### Prerequisites - -* A server for hosting Graph-node. See [this guide](setup-server.md) for how to create a server; -* Docker Compose and Docker Engine are installed and configured on the server. See [this guide](setup-server.md#install-docker-engine-and-docker-compose) for how to install these products. -* The RPC URLs and API keys for each of the networks to which Ocean Subgraph will be connected. See [this guide](../developers/obtaining-api-keys-for-blockchain-access.md) for how to obtain the URL and the API key. - -### Steps - -1. [Create the /etc/docker/compose/graph-node/docker-compose.yml file](deploying-ocean-subgraph.md#1-create-the-etcdockercomposegraph-nodedocker-composeyml-file) -2. [Create the /etc/systemd/system/docker-compose@graph-node.service file](deploying-ocean-subgraph.md#2-create-the-etcsystemdsystemdocker-composegraph-nodeservice-file) -3. [Reload the systemd manager configuration](deploying-ocean-subgraph.md#3.-reload-the-systemd-manager-configuration) -4. [Start the Ocean Subgraph service](deploying-ocean-subgraph.md#4-deploy-ocean-subgraph) -5. [Check the service's status](deploying-ocean-subgraph.md#5.-check-the-services-status) -6. [Check Ocean Subgraph's service logs](deploying-ocean-subgraph.md#6-check-graph-node-service-logs) - -#### 1. Create the /etc/docker/compose/graph-node/docker-compose.yml file - -From a terminal console, create the _/etc/docker/compose/graph-node/docker-compose.yml_ file, then copy and paste the following content to it (. Check the comments in the file and replace the fields with the specific values of your implementation. 
- -_/etc/docker/compose/graph-node/docker-compose.yml_ (annotated - example for `sepolia` network) - -```yaml -version: '3' -services: - graph-node: - image: graphprotocol/graph-node:v0.28.2 - container_name: graph-node - restart: on-failure - ports: - - '8000:8000' - - '8020:8020' - - '8030:8030' - - '8040:8040' - depends_on: - - ipfs - - postgres-graph - environment: - postgres_host: postgres-graph - postgres_user: graph-node - postgres_pass: < password > - postgres_db: sepolia - ipfs: 'ipfs:5001' - ethereum: 'sepolia:https://sepolia.infura.io/v3/' - GRAPH_LOG: info - ipfs: - image: ipfs/go-ipfs:v0.4.23 - container_name: ipfs - restart: on-failure - ports: - - '5001:5001' - volumes: - - ipfs-graph-node:/data/ipfs - postgres-graph: - image: postgres:15.3 - container_name: postgres - restart: on-failure - ports: - - '5432:5432' - command: ["postgres", "-cshared_preload_libraries=pg_stat_statements"] - environment: - POSTGRES_USER: graph-node - POSTGRES_PASSWORD: < password > - POSTGRES_DB: sepolia - volumes: - - pgdata-graph-node:/var/lib/postgresql/data -volumes: - pgdata-graph-node: - driver: local - ipfs-graph-node: - driver: local -``` - -#### 2. Create the /etc/systemd/system/docker-compose@graph-node.service file - -Create the _/etc/systemd/system/docker-compose@graph-node.service_ file then copy and paste the following content to it. This example file could be customized if needed. - -``` -[Unit] -Description=%i service with docker compose -Requires=docker.service -After=docker.service - -[Service] -Type=oneshot -RemainAfterExit=true -Environment="PROJECT=ocean" -WorkingDirectory=/etc/docker/compose/%i -ExecStartPre=/usr/bin/env docker-compose -p $PROJECT pull -ExecStart=/usr/bin/env docker-compose -p $PROJECT up -d -ExecStop=/usr/bin/env docker-compose -p $PROJECT stop -ExecStopPost=/usr/bin/env docker-compose -p $PROJECT down - - -[Install] -WantedBy=multi-user.target -``` - -#### 3. Reload the systemd manager configuration - -Run the following command to reload the systemd manager configuration - -```bash -sudo systemctl daemon-reload -``` - -Optionally, you can enable the services to start at boot, using the following command: - -```bash -sudo systemctl enable docker-compose@graph-node.service -``` - -#### 4. Start graph-node service - -To start the Ocean Subgraph service, run the following command: - -```bash -sudo systemctl start docker-compose@graph-node.service -``` - -#### 5. Check the service's status - -Check the status of the service by running the following command. The output of the command should be similar to the one presented here. 
- -```bash -$ sudo systemctl status docker-compose@graph-node.service -● docker-compose@graph-node.service - graph-node service with docker compose - Loaded: loaded (/etc/systemd/system/docker-compose@graph-node.service; disabled; vendor preset: enabled) - Active: active (exited) since Sun 2023-06-25 17:05:25 UTC; 6s ago - Process: 4878 ExecStartPre=/usr/bin/env docker-compose -p $PROJECT pull (code=exited, status=0/SUCCESS) - Process: 4887 ExecStart=/usr/bin/env docker-compose -p $PROJECT up -d (code=exited, status=0/SUCCESS) - Main PID: 4887 (code=exited, status=0/SUCCESS) - CPU: 123ms - -Jun 25 17:05:24 testvm env[4887]: Container ipfs Created -Jun 25 17:05:24 testvm env[4887]: Container graph-node Creating -Jun 25 17:05:24 testvm env[4887]: Container graph-node Created -Jun 25 17:05:24 testvm env[4887]: Container ipfs Starting -Jun 25 17:05:24 testvm env[4887]: Container postgres Starting -Jun 25 17:05:24 testvm env[4887]: Container ipfs Started -Jun 25 17:05:25 testvm env[4887]: Container postgres Started -Jun 25 17:05:25 testvm env[4887]: Container graph-node Starting -Jun 25 17:05:25 testvm env[4887]: Container graph-node Started -Jun 25 17:05:25 testvm systemd[1]: Finished graph-node service with docker compose. - -``` - -#### 6. Check graph-node service logs - -If needed, use docker CLI to check Ocean Subgraph service logs. - -First, check the container status - -```bash -$ docker ps --format "table {{.Image}}\t{{.Ports}}\t{{.Names}}\t{{.Status}}" -IMAGE PORTS NAMES STATUS -graphprotocol/graph-node:v0.28.2 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:8020->8020/tcp, :::8020->8020/tcp, 0.0.0.0:8030->8030/tcp, :::8030->8030/tcp, 0.0.0.0:8040->8040/tcp, :::8040->8040/tcp, 8001/tcp graph-node Up 55 minutes -ipfs/go-ipfs:v0.4.23 4001/tcp, 8080-8081/tcp, 0.0.0.0:5001->5001/tcp, :::5001->5001/tcp ipfs Up 55 minutes -postgres:15.3 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp postgres Up 55 minutes -``` - -Then, check the logs of the Ocean Subgraph docker container: - -```bash -docker logs graph-node [--follow] -``` - -## Deploying graph-node using Kubernetes - -In this example, we will deploy graph-node as a Kubernetes deployment service. [graph-node](https://github.com/graphprotocol/graph-node) has the following dependencies: PostgreSQL and IPFS. - -### Prerequisites: - -* A server for hosting graph-node. See [this guide](setup-server.md) for how to create a server; -* Kubernetes with Docker Engine is installed and configured on the server. See [this chapter](setup-server.md#install-kubernetes-with-docker-engine) for information on installing Kubernetes. -* The RPC URLs and API keys for each of the networks to which the Provider will be connected. See [this guide](../developers/obtaining-api-keys-for-blockchain-access.md) for how to obtain the URL and the API key. - -### Steps - -1. [Deploy PostgreSQL](deploying-ocean-subgraph.md#1.-deploy-postgresql) -2. [Deploy IPFS](deploying-ocean-subgraph.md#2.-deploy-ipfs) -3. [Deploy Graph-node](deploying-ocean-subgraph.md#deploy-graph-node) - -#### 1. Deploy PostgreSQL - -It is recommended to deploy PostgreSQL as helm chart. - -References: [https://github.com/bitnami/charts/tree/main/bitnami/postgresql/#installing-the-chart](https://github.com/bitnami/charts/tree/main/bitnami/postgresql/#installing-the-chart) - -Once PostgreSQL pods are running, a database must be created: eg. `sepolia.` - -#### 2. Deploy IPFS - -The following template can be customized to deploy IPFS statefulset and service. 
- -```yaml -apiVersion: apps/v1 -kind: StatefulSet -metadata: - labels: - app: ipfs - name: ipfs -spec: - podManagementPolicy: OrderedReady - replicas: 1 - revisionHistoryLimit: 10 - selector: - matchLabels: - app: ipfs - serviceName: ipfs - template: - metadata: - creationTimestamp: null - labels: - app: ipfs - spec: - containers: - - image: ipfs/go-ipfs:v0.4.22 - imagePullPolicy: IfNotPresent - livenessProbe: - failureThreshold: 3 - httpGet: - path: /debug/metrics/prometheus - port: api - scheme: HTTP - initialDelaySeconds: 15 - periodSeconds: 3 - successThreshold: 1 - timeoutSeconds: 1 - name: s1-ipfs - ports: - - containerPort: 5001 - name: api - protocol: TCP - - containerPort: 8080 - name: gateway - protocol: TCP - readinessProbe: - failureThreshold: 3 - httpGet: - path: /debug/metrics/prometheus - port: api - scheme: HTTP - initialDelaySeconds: 15 - periodSeconds: 3 - successThreshold: 1 - timeoutSeconds: 1 - terminationMessagePath: /dev/termination-log - terminationMessagePolicy: File - volumeMounts: - - mountPath: /data/ipfs - name: ipfs-storage - dnsPolicy: ClusterFirst - restartPolicy: Always - schedulerName: default-scheduler - securityContext: - fsGroup: 1000 - runAsUser: 1000 - terminationGracePeriodSeconds: 30 - updateStrategy: - rollingUpdate: - partition: 0 - type: RollingUpdate - volumeClaimTemplates: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - creationTimestamp: null - name: ipfs-storage - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1G - volumeMode: Filesystem - status: - phase: Pending ---- -apiVersion: v1 -kind: Service -metadata: - labels: - app: ipfs - name: ipfs -spec: - clusterIP: - clusterIPs: - ipFamilies: - - IPv4 - ipFamilyPolicy: SingleStack - ports: - - name: api - port: 5001 - - name: gateway - port: 8080 - selector: - app: ipf -``` - -#### Deploy Graph-node - -The following annotated templated can be customized to deploy graph-node deployment and service: - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - annotations: - labels: - app: sepolia-graph-node - name: sepolia-graph-node -spec: - progressDeadlineSeconds: 600 - replicas: 1 - revisionHistoryLimit: 10 - selector: - matchLabels: - app: sepolia-graph-node - strategy: - rollingUpdate: - maxSurge: 25% - maxUnavailable: 25% - type: RollingUpdate - template: - metadata: - creationTimestamp: null - labels: - app: sepolia-graph-node - spec: - containers: - - env: - - name: ipfs - value: ipfs..svc.cluster.local:5001 - - name: postgres_host - value: postgresql..svc.cluster.local - - name: postgres_user - value: < postgresql user > - - name: postgres_pass - value: < postgresql database password > - - name: postgres_db - value: < postgresql database > - - name: ethereum - value: sepolia:https://sepolia.infura.io/v3/< INFURA ID> - - name: GRAPH_KILL_IF_UNRESPONSIVE - value: "true" - image: graphprotocol/graph-node:v0.28.2 - imagePullPolicy: IfNotPresent - livenessProbe: - failureThreshold: 3 - httpGet: - path: / - port: 8000 - scheme: HTTP - initialDelaySeconds: 20 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 1 - name: sepolia-graph-node - ports: - - containerPort: 8000 - name: graphql - protocol: TCP - - containerPort: 8020 - name: jsonrpc - protocol: TCP - - containerPort: 8030 - name: indexnode - protocol: TCP - - containerPort: 8040 - name: metrics - protocol: TCP - readinessProbe: - failureThreshold: 3 - httpGet: - path: / - port: 8000 - scheme: HTTP - initialDelaySeconds: 20 - periodSeconds: 10 - successThreshold: 1 - 
timeoutSeconds: 1 - resources: - limits: - cpu: "2" - memory: 1536Mi - requests: - cpu: 1500m - memory: 1536Mi - terminationMessagePath: /dev/termination-log - terminationMessagePolicy: File - dnsPolicy: ClusterFirst - restartPolicy: Always - schedulerName: default-scheduler - terminationGracePeriodSeconds: 30 ---- -apiVersion: v1 -kind: Service -metadata: - labels: - app: sepolia-graph-node - name: sepolia-graph-node -spec: - clusterIP: - clusterIPs: - internalTrafficPolicy: Cluster - ipFamilies: - - IPv4 - ipFamilyPolicy: SingleStack - ports: - - name: graphql - port: 8000 - - name: jsonrpc - port: 8020 - - name: indexnode - port: 8030 - - name: metrics - port: 8040 - selector: - app: sepolia-graph-nodeyam -``` - -## Deploy Ocean Subgraph - -After you deployed graph-node, either using Kubernetes or Docker Compose, you can proceed to deploy Ocean Subgraph on top of it. - -### Prerequisites - -* graph-node up-and-running - -### Steps - -1. [Install Node.js locally](deploying-ocean-subgraph.md#1.-install-node.js-locally) -2. [Download and extract Ocean-subgraph](#2.-download-and-extract-ocean-subgraph) - -#### 1. Install Node.js locally - -To install Node.js locally, please refer to this [link ](https://nodejs.org/en/download)for instructions. - -#### 2. Download and extract Ocean-subgraph - -Download and extract [Ocean-subgraph](https://github.com/oceanprotocol/ocean-subgraph) (check [here](https://github.com/oceanprotocol/ocean-subgraph/releases) the available releases). - -#### 3. Install dependencies - -From the directory where Ocean subgraph was extracted, run the following command: - -```bash -npm i -``` - -#### 4. Deploy Ocean Subgraph - -In the following example, we are deploying on Ocean Subgraph on graph-node running for `sepolia` testnet. - -Note: for `ocean-subgraph` deployment in the Kubernetes environment, both `graph-node` and `ipfs` services must be locally forwarded using `kubectl port-forward` command. 
- -Run the following command: - -```bash -$ npm run quickstart:sepolia - -> ocean-subgraph@3.0.8 quickstart:sepolia -> node ./scripts/generatenetworkssubgraphs.js sepolia && npm run codegen && npm run create:local && npm run deploy:local - -Creating subgraph.yaml for sepolia - Adding veOCEAN -Skipping polygon -Skipping bsc -Skipping energyweb -Skipping moonriver -Skipping mainnet -Skipping polygonedge -Skipping gaiaxtestnet -Skipping alfajores -Skipping gen-x-testnet -Skipping filecointestnet - -> ocean-subgraph@3.0.8 codegen -> graph codegen --output-dir src/@types - - Skip migration: Bump mapping apiVersion from 0.0.1 to 0.0.2 - Skip migration: Bump mapping apiVersion from 0.0.2 to 0.0.3 - Skip migration: Bump mapping apiVersion from 0.0.3 to 0.0.4 - Skip migration: Bump mapping apiVersion from 0.0.4 to 0.0.5 - Skip migration: Bump mapping apiVersion from 0.0.5 to 0.0.6 - Skip migration: Bump manifest specVersion from 0.0.1 to 0.0.2 - Apply migration: Bump manifest specVersion from 0.0.2 to 0.0.4 -✔ Apply migrations -✔ Load subgraph from subgraph.yaml - Load contract ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/ERC721Factory.sol/ERC721Factory.json - Load contract ABI from abis/ERC20.json - Load contract ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/pools/FactoryRouter.sol/FactoryRouter.json - Load contract ABI from abis/ERC20.json - Load contract ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veAllocate.sol/veAllocate.json - Load contract ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veOCEAN.vy/veOCEAN.json - Load contract ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veDelegation.vy/veDelegation.json - Load contract ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veFeeDistributor.vy/veFeeDistributor.json - Load contract ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/df/DFRewards.sol/DFRewards.json -✔ Load contract ABIs - Generate types for contract ABI: ERC721Factory (node_modules/@oceanprotocol/contracts/artifacts/contracts/ERC721Factory.sol/ERC721Factory.json) - Write types to src/@types/ERC721Factory/ERC721Factory.ts - Generate types for contract ABI: ERC20 (abis/ERC20.json) - Write types to src/@types/ERC721Factory/ERC20.ts - Generate types for contract ABI: FactoryRouter (node_modules/@oceanprotocol/contracts/artifacts/contracts/pools/FactoryRouter.sol/FactoryRouter.json) - Write types to src/@types/FactoryRouter/FactoryRouter.ts - Generate types for contract ABI: ERC20 (abis/ERC20.json) - Write types to src/@types/FactoryRouter/ERC20.ts - Generate types for contract ABI: veAllocate (node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veAllocate.sol/veAllocate.json) - Write types to src/@types/veAllocate/veAllocate.ts - Generate types for contract ABI: veOCEAN (node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veOCEAN.vy/veOCEAN.json) - Write types to src/@types/veOCEAN/veOCEAN.ts - Generate types for contract ABI: veDelegation (node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veDelegation.vy/veDelegation.json) - Write types to src/@types/veDelegation/veDelegation.ts - Generate types for contract ABI: veFeeDistributor (node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veFeeDistributor.vy/veFeeDistributor.json) - Write types to src/@types/veFeeDistributor/veFeeDistributor.ts - Generate types for contract ABI: DFRewards 
(node_modules/@oceanprotocol/contracts/artifacts/contracts/df/DFRewards.sol/DFRewards.json) - Write types to src/@types/DFRewards/DFRewards.ts -✔ Generate types for contract ABIs - Generate types for data source template ERC20Template - Generate types for data source template ERC721Template - Generate types for data source template Dispenser - Generate types for data source template FixedRateExchange - Write types for templates to src/@types/templates.ts -✔ Generate types for data source templates - Load data source template ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/templates/ERC20Template.sol/ERC20Template.json - Load data source template ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/templates/ERC20TemplateEnterprise.sol/ERC20TemplateEnterprise.json - Load data source template ABI from abis/ERC20.json - Load data source template ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/utils/ERC20Roles.sol/ERC20Roles.json - Load data source template ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/templates/ERC721Template.sol/ERC721Template.json - Load data source template ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/utils/ERC721RolesAddress.sol/ERC721RolesAddress.json - Load data source template ABI from abis/ERC20.json - Load data source template ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/pools/dispenser/Dispenser.sol/Dispenser.json - Load data source template ABI from abis/ERC20.json - Load data source template ABI from node_modules/@oceanprotocol/contracts/artifacts/contracts/pools/fixedRate/FixedRateExchange.sol/FixedRateExchange.json - Load data source template ABI from abis/ERC20.json -✔ Load data source template ABIs - Generate types for data source template ABI: ERC20Template > ERC20Template (node_modules/@oceanprotocol/contracts/artifacts/contracts/templates/ERC20Template.sol/ERC20Template.json) - Write types to src/@types/templates/ERC20Template/ERC20Template.ts - Generate types for data source template ABI: ERC20Template > ERC20TemplateEnterprise (node_modules/@oceanprotocol/contracts/artifacts/contracts/templates/ERC20TemplateEnterprise.sol/ERC20TemplateEnterprise.json) - Write types to src/@types/templates/ERC20Template/ERC20TemplateEnterprise.ts - Generate types for data source template ABI: ERC20Template > ERC20 (abis/ERC20.json) - Write types to src/@types/templates/ERC20Template/ERC20.ts - Generate types for data source template ABI: ERC20Template > ERC20Roles (node_modules/@oceanprotocol/contracts/artifacts/contracts/utils/ERC20Roles.sol/ERC20Roles.json) - Write types to src/@types/templates/ERC20Template/ERC20Roles.ts - Generate types for data source template ABI: ERC721Template > ERC721Template (node_modules/@oceanprotocol/contracts/artifacts/contracts/templates/ERC721Template.sol/ERC721Template.json) - Write types to src/@types/templates/ERC721Template/ERC721Template.ts - Generate types for data source template ABI: ERC721Template > ERC721RolesAddress (node_modules/@oceanprotocol/contracts/artifacts/contracts/utils/ERC721RolesAddress.sol/ERC721RolesAddress.json) - Write types to src/@types/templates/ERC721Template/ERC721RolesAddress.ts - Generate types for data source template ABI: ERC721Template > ERC20 (abis/ERC20.json) - Write types to src/@types/templates/ERC721Template/ERC20.ts - Generate types for data source template ABI: Dispenser > Dispenser 
(node_modules/@oceanprotocol/contracts/artifacts/contracts/pools/dispenser/Dispenser.sol/Dispenser.json) - Write types to src/@types/templates/Dispenser/Dispenser.ts - Generate types for data source template ABI: Dispenser > ERC20 (abis/ERC20.json) - Write types to src/@types/templates/Dispenser/ERC20.ts - Generate types for data source template ABI: FixedRateExchange > FixedRateExchange (node_modules/@oceanprotocol/contracts/artifacts/contracts/pools/fixedRate/FixedRateExchange.sol/FixedRateExchange.json) - Write types to src/@types/templates/FixedRateExchange/FixedRateExchange.ts - Generate types for data source template ABI: FixedRateExchange > ERC20 (abis/ERC20.json) - Write types to src/@types/templates/FixedRateExchange/ERC20.ts -✔ Generate types for data source template ABIs -✔ Load GraphQL schema from schema.graphql - Write types to src/@types/schema.ts -✔ Generate types for GraphQL schema - -Types generated successfully - - -> ocean-subgraph@3.0.8 create:local -> graph create oceanprotocol/ocean-subgraph --node http://127.0.0.1:8020 - -Created subgraph: oceanprotocol/ocean-subgraph - -> ocean-subgraph@3.0.8 deploy:local -> graph deploy oceanprotocol/ocean-subgraph subgraph.yaml -l $npm_package_version --debug --ipfs http://127.0.0.1:5001 --node http://127.0.0.1:8020 - - Skip migration: Bump mapping apiVersion from 0.0.1 to 0.0.2 - Skip migration: Bump mapping apiVersion from 0.0.2 to 0.0.3 - Skip migration: Bump mapping apiVersion from 0.0.3 to 0.0.4 - Skip migration: Bump mapping apiVersion from 0.0.4 to 0.0.5 - Skip migration: Bump mapping apiVersion from 0.0.5 to 0.0.6 - Skip migration: Bump manifest specVersion from 0.0.1 to 0.0.2 - Skip migration: Bump manifest specVersion from 0.0.2 to 0.0.4 -✔ Apply migrations -✔ Load subgraph from subgraph.yaml - Compile data source: ERC721Factory => build/ERC721Factory/ERC721Factory.wasm - Compile data source: FactoryRouter => build/FactoryRouter/FactoryRouter.wasm - Compile data source: veAllocate => build/veAllocate/veAllocate.wasm - Compile data source: veOCEAN => build/veOCEAN/veOCEAN.wasm - Compile data source: veDelegation => build/veDelegation/veDelegation.wasm - Compile data source: veFeeDistributor => build/veFeeDistributor/veFeeDistributor.wasm - Compile data source: DFRewards => build/DFRewards/DFRewards.wasm - Compile data source template: ERC20Template => build/templates/ERC20Template/ERC20Template.wasm - Compile data source template: ERC721Template => build/templates/ERC721Template/ERC721Template.wasm - Compile data source template: Dispenser => build/templates/Dispenser/Dispenser.wasm - Compile data source template: FixedRateExchange => build/templates/FixedRateExchange/FixedRateExchange.wasm -✔ Compile subgraph - Copy schema file build/schema.graphql - Write subgraph file build/ERC721Factory/node_modules/@oceanprotocol/contracts/artifacts/contracts/ERC721Factory.sol/ERC721Factory.json - Write subgraph file build/ERC721Factory/abis/ERC20.json - Write subgraph file build/FactoryRouter/node_modules/@oceanprotocol/contracts/artifacts/contracts/pools/FactoryRouter.sol/FactoryRouter.json - Write subgraph file build/FactoryRouter/abis/ERC20.json - Write subgraph file build/veAllocate/node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veAllocate.sol/veAllocate.json - Write subgraph file build/veOCEAN/node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veOCEAN.vy/veOCEAN.json - Write subgraph file build/veDelegation/node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veDelegation.vy/veDelegation.json - 
Write subgraph file build/veFeeDistributor/node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veFeeDistributor.vy/veFeeDistributor.json - Write subgraph file build/DFRewards/node_modules/@oceanprotocol/contracts/artifacts/contracts/df/DFRewards.sol/DFRewards.json - Write subgraph file build/ERC20Template/node_modules/@oceanprotocol/contracts/artifacts/contracts/templates/ERC20Template.sol/ERC20Template.json - Write subgraph file build/ERC20Template/node_modules/@oceanprotocol/contracts/artifacts/contracts/templates/ERC20TemplateEnterprise.sol/ERC20TemplateEnterprise.json - Write subgraph file build/ERC20Template/abis/ERC20.json - Write subgraph file build/ERC20Template/node_modules/@oceanprotocol/contracts/artifacts/contracts/utils/ERC20Roles.sol/ERC20Roles.json - Write subgraph file build/ERC721Template/node_modules/@oceanprotocol/contracts/artifacts/contracts/templates/ERC721Template.sol/ERC721Template.json - Write subgraph file build/ERC721Template/node_modules/@oceanprotocol/contracts/artifacts/contracts/utils/ERC721RolesAddress.sol/ERC721RolesAddress.json - Write subgraph file build/ERC721Template/abis/ERC20.json - Write subgraph file build/Dispenser/node_modules/@oceanprotocol/contracts/artifacts/contracts/pools/dispenser/Dispenser.sol/Dispenser.json - Write subgraph file build/Dispenser/abis/ERC20.json - Write subgraph file build/FixedRateExchange/node_modules/@oceanprotocol/contracts/artifacts/contracts/pools/fixedRate/FixedRateExchange.sol/FixedRateExchange.json - Write subgraph file build/FixedRateExchange/abis/ERC20.json - Write subgraph manifest build/subgraph.yaml -✔ Write compiled subgraph to build/ - Add file to IPFS build/schema.graphql - .. QmQa3a9ypCLC84prHGQdhbcGG4DHJceqADGxmZMmAAXuTz - Add file to IPFS build/ERC721Factory/node_modules/@oceanprotocol/contracts/artifacts/contracts/ERC721Factory.sol/ERC721Factory.json - .. QmSoG3r5vyWXqjEfKAQYjwtQcQkZCsZEcJXVFWVq1tT1dD - Add file to IPFS build/ERC721Factory/abis/ERC20.json - .. QmXuTbDkNrN27VydxbS2huvKRk62PMgUTdPDWkxcr2w7j2 - Add file to IPFS build/FactoryRouter/node_modules/@oceanprotocol/contracts/artifacts/contracts/pools/FactoryRouter.sol/FactoryRouter.json - .. QmcBVA1R3yi2167UZMvV4LvG4cMHjL8ZZXmPMriCjn8DEe - Add file to IPFS build/FactoryRouter/abis/ERC20.json - .. QmXuTbDkNrN27VydxbS2huvKRk62PMgUTdPDWkxcr2w7j2 (already uploaded) - Add file to IPFS build/veAllocate/node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veAllocate.sol/veAllocate.json - .. Qmc3iwQkQAhqe1PjzTt6KZLh9rsWQvyxkFt7doj2iXv8C3 - Add file to IPFS build/veOCEAN/node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veOCEAN.vy/veOCEAN.json - .. QmahFjirJqiwKpytFZ9CdE92LdPGBUDZs6AWpsrH2wn1VP - Add file to IPFS build/veDelegation/node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veDelegation.vy/veDelegation.json - .. QmfU6kZ5sksLdj3q88n7SUP63C1cnhQjU8vuMmRYwf2v5r - Add file to IPFS build/veFeeDistributor/node_modules/@oceanprotocol/contracts/artifacts/contracts/ve/veFeeDistributor.vy/veFeeDistributor.json - .. QmVU51oBr62D4UFXTwnMcbzuBBAAeQssqmqM9jic7A6L3v - Add file to IPFS build/DFRewards/node_modules/@oceanprotocol/contracts/artifacts/contracts/df/DFRewards.sol/DFRewards.json - .. QmcckRMahzpL7foEFGpWfkDBsyoWbNRfLC32uFq8ceUV3a - Add file to IPFS build/ERC721Factory/ERC721Factory.wasm - .. QmVfDAgZdKWxMuNfT7kso1LbFre2xhYbEeHBGm3gH3R9oE - Add file to IPFS build/FactoryRouter/FactoryRouter.wasm - .. QmYCC9AcaYw3nGSqNXNFHVsuB67FQEyZ8twRjRXrprcgyp - Add file to IPFS build/veAllocate/veAllocate.wasm - .. 
QmUFaYDxChi5nKEJLvHQZP1cRoqqP5k3fYSwk2JjuSceiJ - Add file to IPFS build/veOCEAN/veOCEAN.wasm - .. QmRYCyYKwHdSeM55vuvL1mdCooDkFQm6d2TQ7iK2N1qgur - Add file to IPFS build/veDelegation/veDelegation.wasm - .. QmaTjRLirzfidtQTYgzxqVVD9AX9e69TN1Y8fEsNQ9AEZq - Add file to IPFS build/veFeeDistributor/veFeeDistributor.wasm - .. QmZCEp4yxiDyuksEjSaceogJwLMto2UGfV1KxVuJTJLTqg - Add file to IPFS build/DFRewards/DFRewards.wasm - .. QmRSxe52B836bdfoJbuDY4tUCawzqgkHRNxe9ucU1JdYm5 - Add file to IPFS build/ERC20Template/node_modules/@oceanprotocol/contracts/artifacts/contracts/templates/ERC20Template.sol/ERC20Template.json - .. QmPkhFvnBbqA3You7NsK5Zsyh8kkizXUHF9pcC5V6qDJQu - Add file to IPFS build/ERC20Template/node_modules/@oceanprotocol/contracts/artifacts/contracts/templates/ERC20TemplateEnterprise.sol/ERC20TemplateEnterprise.json - .. QmZnogwnfr4TeBPykvmCL2oaX63AKQP1F1uBAbbfnyPAzB - Add file to IPFS build/ERC20Template/abis/ERC20.json - .. QmXuTbDkNrN27VydxbS2huvKRk62PMgUTdPDWkxcr2w7j2 (already uploaded) - Add file to IPFS build/ERC20Template/node_modules/@oceanprotocol/contracts/artifacts/contracts/utils/ERC20Roles.sol/ERC20Roles.json - .. QmTWTzg4jTx4GxGApVyxirNRTxB7QovS4bHGuWnnW8Ciz2 - Add file to IPFS build/templates/ERC20Template/ERC20Template.wasm - .. QmUcxes5La7n9481Vf9AoQ2Mjt1CrbS7T6tDhpnfF77Uh5 - Add file to IPFS build/ERC721Template/node_modules/@oceanprotocol/contracts/artifacts/contracts/templates/ERC721Template.sol/ERC721Template.json - .. QmPE82CiACicgu1WxEjeFrLmskiJADroQRnxH7owpK6jaP - Add file to IPFS build/ERC721Template/node_modules/@oceanprotocol/contracts/artifacts/contracts/utils/ERC721RolesAddress.sol/ERC721RolesAddress.json - .. Qmdhi7UK6Ww8vXH9YC3JxVUEFjTyx3XycF53rRZapVK5c3 - Add file to IPFS build/ERC721Template/abis/ERC20.json - .. QmXuTbDkNrN27VydxbS2huvKRk62PMgUTdPDWkxcr2w7j2 (already uploaded) - Add file to IPFS build/templates/ERC721Template/ERC721Template.wasm - .. QmNhLws24szwpz8LM2sL6HHKc6KK4vtJwzfeZWkghuqn7Q - Add file to IPFS build/Dispenser/node_modules/@oceanprotocol/contracts/artifacts/contracts/pools/dispenser/Dispenser.sol/Dispenser.json - .. QmdiN7Fhw9sjoVVJgHtTtzxv5fwtFMHLNH1x1yqbswsThW - Add file to IPFS build/Dispenser/abis/ERC20.json - .. QmXuTbDkNrN27VydxbS2huvKRk62PMgUTdPDWkxcr2w7j2 (already uploaded) - Add file to IPFS build/templates/Dispenser/Dispenser.wasm - .. QmTpn9wagpmH6byjjdCBZdgypFgcw2mva3bC52nC4z3eLW - Add file to IPFS build/FixedRateExchange/node_modules/@oceanprotocol/contracts/artifacts/contracts/pools/fixedRate/FixedRateExchange.sol/FixedRateExchange.json - .. Qmd2ToAptK74j8pGxe8mZXfAvY3AxstgmYH8JDMAfLtAGd - Add file to IPFS build/FixedRateExchange/abis/ERC20.json - .. QmXuTbDkNrN27VydxbS2huvKRk62PMgUTdPDWkxcr2w7j2 (already uploaded) - Add file to IPFS build/templates/FixedRateExchange/FixedRateExchange.wasm - .. QmRrwwoFF33LvPhnGCGgLBLyuLizrFgD44kW9io81tPZzX -✔ Upload subgraph to IPFS - -Build completed: QmVUKpgwuyDh9KgUxTzZvVNFJbdevc56YrZpZjQvu8Yp7q - -Deployed to http://127.0.0.1:8000/subgraphs/name/oceanprotocol/ocean-subgraph/graphql - -Subgraph endpoints: -Queries (HTTP): http://127.0.0.1:8000/subgraphs/name/oceanprotocol/ocean-subgraph -``` - -Ocean Subgraph is deployed under /subgraphs/name/oceanprotocol/ocean-subgraph/. To access it from the server on which it was deployed, open a browser and go to [http://127.0.0.1:8000/subgraphs/name/oceanprotocol/ocean-subgraph/graphql](http://127.0.0.1:8000/subgraphs/name/oceanprotocol/ocean-subgraph/graphql). 
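To quickly verify that the subgraph is up and indexing, you can also query the endpoint from the command line. A minimal sketch using `curl` and the standard graph-node `_meta` field (the endpoint is the one printed in the deploy output above):

```bash
# Ask the local subgraph for the latest block it has indexed.
curl -X POST http://127.0.0.1:8000/subgraphs/name/oceanprotocol/ocean-subgraph \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ _meta { block { number } hasIndexingErrors } }"}'
```

A healthy deployment returns the latest indexed block number and `"hasIndexingErrors": false`.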
diff --git a/infrastructure/deploying-provider.md b/infrastructure/deploying-provider.md deleted file mode 100644 index 1eedc0cc9..000000000 --- a/infrastructure/deploying-provider.md +++ /dev/null @@ -1,312 +0,0 @@ -# Deploying Provider - -### About Provider - -Provider encrypts the URL and metadata during publishing and decrypts the URL when the dataset is downloaded or a compute job is started. It enables access to the data assets by streaming data (and never the URL). It performs checks on-chain for buyer permissions and payments. It also provides compute services (connects to a C2D environment). - -Provider is a multichain component, meaning that it can handle these tasks on multiple chains with the proper configurations. The source code of Provider can be accessed from [here](https://github.com/oceanprotocol/provider). - -As mentioned in the Setup a Server document, all Ocean components can be deployed in two types of configurations: simple, based on Docker Engine and Docker Compose, and complex, based on Kubernetes with Docker Engine. In this document, we will present how to deploy Provider in each of these configurations. - - -## Deploying Provider using Docker Engine and Docker Compose - -In this guide, we will deploy Provider for Sepolia (Ethereum test network). Therefore, please note that in the following configuration files, "11155111" is the chain ID for Sepolia. - - -### Prerequisites - -* A server for hosting Provider. See [this guide](setup-server.md) for how to create a server; -* Docker Compose and Docker Engine are installed and configured on the server. See [this guide](setup-server.md#install-docker-engine-and-docker-compose) for how to install these products. -* The RPC URLs and API keys for each of the networks to which the Provider will be connected. See [this guide](../developers/obtaining-api-keys-for-blockchain-access.md) for how to obtain the URL and the API key. -* The private key that will be used by Provider to encrypt/decrypt URLs. - -### Steps - -The steps to deploy the Provider using Docker Engine and Docker Compose are: - -1. [Create the /etc/docker/compose/provider/docker-compose.yml file](deploying-provider.md#1-create-the-etcdockercomposeproviderdocker-composeyml-file) -2. [Create the /etc/systemd/system/docker-compose@provider.service file](deploying-provider.md#2-create-the-etcsystemdsystemdocker-composeproviderservice-file) -3. [Reload the systemd manager configuration](deploying-provider.md#3.-reload-the-systemd-manager-configuration) -4. [Start the Provider service](deploying-provider.md#4.-start-the-provider-service) -5. [Check the service's status](deploying-provider.md#5.-check-the-services-status) -6. [Confirm the Provider is accessible](deploying-provider.md#6.-confirm-the-provider-is-accessible) -7. [Check Provider service logs](deploying-provider.md#7.-check-provider-service-logs) - - -#### 1. Create the /etc/docker/compose/provider/docker-compose.yml file - -From a terminal console, create the /etc/docker/compose/provider/docker-compose.yml file, then copy and paste the following content to it. Check the comments in the file and replace the fields with the specific values of your implementation.
- -```yaml -version: '3' -services: - provider: - image: oceanprotocol/provider-py:latest # check https://hub.docker.com/r/oceanprotocol/provider-py for a specific tag - container_name: provider - restart: on-failure - ports: - - 8030:8030 - networks: - backend: - environment: - ARTIFACTS_PATH: "/ocean-contracts/artifacts" - NETWORK_URL: '{"11155111":"https://sepolia.infura.io/v3/<your API key>"}' - PROVIDER_PRIVATE_KEY: '{"11155111":"<your private key>"}' - OCEAN_PROVIDER_TIMEOUT: "9000" - OPERATOR_SERVICE_URL: "https://stagev4.c2d.oceanprotocol.com" # use a custom value for the Operator Service URL - AQUARIUS_URL: "http://localhost:5000" # use a custom value for the Aquarius URL - REQUEST_TIMEOUT: "10" -networks: - backend: - driver: bridge -``` - - -#### 2. Create the _/etc/systemd/system/docker-compose@provider.service_ file - -Create the _/etc/systemd/system/docker-compose@provider.service_ file, then copy and paste the following content to it. This example file can be customized if needed. - -``` -[Unit] -Description=%i service with docker compose -Requires=docker.service -After=docker.service - -[Service] -Type=oneshot -RemainAfterExit=true -Environment="PROJECT=ocean" -WorkingDirectory=/etc/docker/compose/%i -ExecStartPre=/usr/bin/env docker-compose -p $PROJECT pull -ExecStart=/usr/bin/env docker-compose -p $PROJECT up -d -ExecStop=/usr/bin/env docker-compose -p $PROJECT stop - - -[Install] -WantedBy=multi-user.target -``` - - -#### 3. Reload the systemd manager configuration - -Run the following command to reload the systemd manager configuration. - -```bash -sudo systemctl daemon-reload -``` - -Optionally, you can enable the services to start at boot, using the following command: - -```bash -sudo systemctl enable docker-compose@provider.service -``` - - -#### 4. Start the Provider service - -To start the Provider service, run the following command: - -```bash -sudo systemctl start docker-compose@provider.service -``` - - -#### 5. Check the service's status - -Check the status of the service by running the following command. The output of the command should be similar to the one presented here. - -```bash -$ sudo systemctl status docker-compose@provider.service -● docker-compose@provider.service - provider service with docker compose - Loaded: loaded (/etc/systemd/system/docker-compose@provider.service; disabled; vendor preset: enabled) - Active: active (exited) since Wed 2023-06-14 09:41:53 UTC; 20s ago - Process: 4118 ExecStartPre=/usr/bin/env docker-compose -p $PROJECT pull (code=exited, status=0/SUCCESS) - Process: 4126 ExecStart=/usr/bin/env docker-compose -p $PROJECT up -d (code=exited, status=0/SUCCESS) - Main PID: 4126 (code=exited, status=0/SUCCESS) - CPU: 93ms - -Jun 14 09:41:52 testvm systemd[1]: Starting provider service with docker compose... -Jun 14 09:41:52 testvm env[4118]: provider Pulling -Jun 14 09:41:53 testvm env[4118]: provider Pulled -Jun 14 09:41:53 testvm env[4126]: Container provider Created -Jun 14 09:41:53 testvm env[4126]: Container provider Starting -Jun 14 09:41:53 testvm env[4126]: Container provider Started -Jun 14 09:41:53 testvm systemd[1]: Finished provider service with docker compose. -``` - - -#### 6. Confirm the Provider is accessible - -Once started, the Provider service is accessible on `localhost` port 8030/tcp. Run the following command to access the Provider. The output should be similar to the one displayed here.
- -```bash -$ curl localhost:8030 -{"chainIds":[5,80001],"providerAddresses":{"5":"0x00c6A0BC5cD0078d6Cd0b659E8061B404cfa5704","80001":"0x4256Df50c94D9a7e04610976cde01aED91eB531E"},"serviceEndpoints":{"computeDelete":["DELETE","/api/services/compute"],"computeEnvironments":["GET","/api/services/computeEnvironments"],"computeResult":["GET","/api/services/computeResult"],"computeStart":["POST","/api/services/compute"],"computeStatus":["GET","/api/services/compute"],"computeStop":["PUT","/api/services/compute"],"create_auth_token":["GET","/api/services/createAuthToken"],"decrypt":["POST","/api/services/decrypt"],"delete_auth_token":["DELETE","/api/services/deleteAuthToken"],"download":["GET","/api/services/download"],"encrypt":["POST","/api/services/encrypt"],"fileinfo":["POST","/api/services/fileinfo"],"initialize":["GET","/api/services/initialize"],"initializeCompute":["POST","/api/services/initializeCompute"],"nonce":["GET","/api/services/nonce"],"validateContainer":["POST","/api/services/validateContainer"]},"software":"Provider","version":"2.0.2"} -``` - - -#### 7. Check Provider service logs - -If needed, use the Docker CLI to check the Provider service logs. - -First, identify the container id: - -```bash -$ docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -594415b13f8c oceanprotocol/provider-py:v2.0.2 "/ocean-provider/doc…" 12 minutes ago Up About a minute 0.0.0.0:8030->8030/tcp, :::8030->8030/tcp provider - -``` - -Then, check the logs from the Provider's docker container: - -```bash -$ docker logs --follow provider -[2023-06-14 09:31:02 +0000] [8] [INFO] Starting gunicorn 20.0.4 -[2023-06-14 09:31:02 +0000] [8] [INFO] Listening at: http://0.0.0.0:8030 (8) -[2023-06-14 09:31:02 +0000] [8] [INFO] Using worker: sync -[2023-06-14 09:31:02 +0000] [10] [INFO] Booting worker with pid: 10 -2023-06-14 09:31:02 594415b13f8c rlp.codec[10] DEBUG Consider installing rusty-rlp to improve pyrlp performance with a rust based backend -2023-06-14 09:31:12 594415b13f8c ocean_provider.run[10] INFO incoming request = http, GET, 172.18.0.1, /? -2023-06-14 09:31:12 594415b13f8c ocean_provider.run[10] INFO root endpoint called -2023-06-14 09:31:12 594415b13f8c ocean_provider.run[10] INFO root endpoint response = -[2023-06-14 09:41:53 +0000] [8] [INFO] Starting gunicorn 20.0.4 -[2023-06-14 09:41:53 +0000] [8] [INFO] Listening at: http://0.0.0.0:8030 (8) -[2023-06-14 09:41:53 +0000] [8] [INFO] Using worker: sync -[2023-06-14 09:41:53 +0000] [10] [INFO] Booting worker with pid: 10 -2023-06-14 09:41:54 594415b13f8c rlp.codec[10] DEBUG Consider installing rusty-rlp to improve pyrlp performance with a rust based backend -2023-06-14 09:42:40 594415b13f8c ocean_provider.run[10] INFO incoming request = http, GET, 172.18.0.1, /? -2023-06-14 09:42:40 594415b13f8c ocean_provider.run[10] INFO root endpoint called -2023-06-14 09:42:40 594415b13f8c ocean_provider.run[10] INFO root endpoint response = - -``` - - -## Deploying Provider using Kubernetes with Docker Engine - - -In this example, we will run Provider as a Kubernetes deployment resource. We will deploy Provider for Sepolia (Ethereum test network). Therefore, please note that in the following configuration files, "11155111" is the chain ID for Sepolia. - -### Prerequisites - -* A server for hosting Provider. See [this guide](setup-server.md) for how to create a server; -* Kubernetes with Docker Engine is installed and configured on the server.
See [this chapter](setup-server.md#install-kubernetes-with-docker-engine) for information on installing Kubernetes. -* The RPC URLs and API keys for each of the networks to which the Provider will be connected. See [this guide](../developers/obtaining-api-keys-for-blockchain-access.md) for how to obtain the URL and the API key. -* The private key that will be used by Provider to encrypt/decrypt URLs. -* Aquarius is up and running - -### Steps - -The steps to deploy the Provider in Kubernetes are: - -[1. Create a YAML file for Provider configuration.](deploying-provider.md#1-create-a-yaml-file-for-provider-configuration) - -[2. Deploy the configuration.](deploying-provider.md#2.-deploy-the-configuration) - -[3. Create a Kubernetes service.](deploying-provider.md#3.-create-a-kubernetes-service) - - -#### 1. Create a YAML file for Provider configuration. - -From a terminal window, create a YAML file (in our example the file is named provider-deploy.yaml), then copy and paste the following content. Check the comments in the file and replace the fields with the specific values of your implementation (RPC URLs, the private key, etc.). - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - labels: - app: provider - name: provider -spec: - progressDeadlineSeconds: 2147483647 - replicas: 1 - revisionHistoryLimit: 2147483647 - selector: - matchLabels: - app: provider - strategy: - rollingUpdate: - maxSurge: 25% - maxUnavailable: 25% - type: RollingUpdate - template: - metadata: - labels: - app: provider - spec: - containers: - - env: - - name: ARTIFACTS_PATH - value: /ocean-provider/artifacts - - name: NETWORK_URL - value: | - {"11155111":"https://sepolia.infura.io/v3/<your API key>"} - - name: PROVIDER_PRIVATE_KEY - value: | - {"11155111":"<your private key>"} - - name: LOG_LEVEL - value: DEBUG - - name: OCEAN_PROVIDER_URL - value: http://0.0.0.0:8030 - - name: OCEAN_PROVIDER_WORKERS - value: "4" - - name: IPFS_GATEWAY - value: < your IPFS gateway > - - name: OCEAN_PROVIDER_TIMEOUT - value: "9000" - - name: OPERATOR_SERVICE_URL - value: < Operator service URL> - - name: AQUARIUS_URL - value: < Aquarius URL > - - name: UNIVERSAL_PRIVATE_KEY - value: - - name: REQUEST_TIMEOUT - value: "10" - image: oceanprotocol/provider-py:latest # check https://hub.docker.com/r/oceanprotocol/provider-py for a specific tag - imagePullPolicy: Always - name: provider - ports: - - containerPort: 8030 - protocol: TCP - resources: - limits: - cpu: 500m - memory: 700Mi - requests: - cpu: 500m - memory: 700Mi - terminationMessagePath: /dev/termination-log - terminationMessagePolicy: File - dnsPolicy: ClusterFirst - restartPolicy: Always - schedulerName: default-scheduler - terminationGracePeriodSeconds: 30 -``` - -Tip: before deployment, you can [validate](https://github.com/instrumenta/kubeval) the YAML file. - - -#### 2. Deploy the configuration - -Deploy the configuration in Kubernetes using the following commands. - -```bash -kubectl config set-context --current --namespace ocean -kubectl apply -f provider-deploy.yaml -deployment.apps/provider created - -kubectl get pod -l app=provider -NAME READY STATUS RESTARTS AGE -provider-865cb8cf9d-r9xm4 1/1 Running 0 67s -``` - - -#### 3. Create a Kubernetes service - -The next step is to create a Kubernetes service (e.g., ClusterIP, NodePort, LoadBalancer, ExternalName) for this deployment, depending on the environment specifications. Follow [this link](https://kubernetes.io/docs/concepts/services-networking/service/) for details on how to create a Kubernetes service.
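For reference, below is a minimal sketch of a ClusterIP service for the deployment above. The service name and file name are illustrative assumptions; the selector matches the `app: provider` label defined in provider-deploy.yaml.

```yaml
# provider-service.yaml (illustrative file name)
apiVersion: v1
kind: Service
metadata:
  name: provider
  namespace: ocean        # the namespace used in the deploy step above
spec:
  type: ClusterIP
  selector:
    app: provider         # matches the Deployment's pod label
  ports:
    - port: 8030          # Provider listens on 8030/tcp
      targetPort: 8030
      protocol: TCP
```

Apply it with `kubectl apply -f provider-service.yaml`; the Provider then becomes reachable inside the cluster at `provider.ocean:8030`.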
- diff --git a/developers/get-api-keys-for-blockchain-access.md b/infrastructure/get-api-keys-for-blockchain-access.md similarity index 92% rename from developers/get-api-keys-for-blockchain-access.md rename to infrastructure/get-api-keys-for-blockchain-access.md index 2f2445bae..669ecf929 100644 --- a/developers/get-api-keys-for-blockchain-access.md +++ b/infrastructure/get-api-keys-for-blockchain-access.md @@ -16,6 +16,6 @@ Choose any API provider of your choice. Some of the commonly used are: * [Alchemy](https://www.alchemy.com/) * [Moralis](https://moralis.io/) -The supported networks are listed [here](../discover/networks/README.md). +The supported networks are listed [here](../developers/networks.md). Let's configure the remote setup for the mentioned components in the following sections. diff --git a/infrastructure/kyb-service.md b/infrastructure/kyb-service.md new file mode 100644 index 000000000..a5c47571e --- /dev/null +++ b/infrastructure/kyb-service.md @@ -0,0 +1,2 @@ +# KYB + diff --git a/infrastructure/oe-marketplace.md b/infrastructure/oe-marketplace.md new file mode 100644 index 000000000..6ba9dcff4 --- /dev/null +++ b/infrastructure/oe-marketplace.md @@ -0,0 +1,2 @@ +# OE Marketplace + diff --git a/infrastructure/oe-node.md b/infrastructure/oe-node.md new file mode 100644 index 000000000..0648175de --- /dev/null +++ b/infrastructure/oe-node.md @@ -0,0 +1,2 @@ +# OE Node + diff --git a/infrastructure/policy-server/README.md b/infrastructure/policy-server/README.md new file mode 100644 index 000000000..1dba5fe90 --- /dev/null +++ b/infrastructure/policy-server/README.md @@ -0,0 +1,2 @@ +# Policy Server + diff --git a/infrastructure/policy-server/policy-server-proxy.md b/infrastructure/policy-server/policy-server-proxy.md new file mode 100644 index 000000000..fa102df5d --- /dev/null +++ b/infrastructure/policy-server/policy-server-proxy.md @@ -0,0 +1,5 @@ +# Policy Server proxy + +The Policy Server proxy exposes two endpoints needed by the Wallet API. To install the Policy Server in proxy mode, perform the following steps: + + diff --git a/infrastructure/setup-server.md b/infrastructure/setup-server.md deleted file mode 100644 index c5240239c..000000000 --- a/infrastructure/setup-server.md +++ /dev/null @@ -1,122 +0,0 @@ ---- -description: >- - The following tutorial shows how to create a server ready for hosting Ocean - Protocol's components. ---- - -# Setup a Server - -Each deployment of the Ocean components starts with setting up a server on which these will be installed, either on-premise or hosted in a cloud platform. - -## Prerequisites - -For simple configurations: - -* Operating System: Linux distribution supported by the Docker Engine and Docker Compose products. Please refer to these links for choosing a compatible operating system: [Docker Compose supported platforms](https://docs.docker.com/desktop/install/linux-install/); [Docker Engine supported platforms](https://docs.docker.com/engine/install/). - -For complex configurations: - -* Operating System: Linux distribution supported by Kubernetes and Docker Engine. Please refer to this link for details: [Kubernetes with Docker Engine](https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker). 
- - - -## Server Size - -The required CPU and memory for the server depend on the number of requests the component is expected to serve; however, the minimum configuration of the server is: - -* 1 core -* 1 GB RAM - -## Steps - -The steps for setting up a server on which to deploy the Ocean components are the following: - -For simple configurations: - -1. [Install the operating system](setup-server.md#install-the-operating-system) -2. [Install Docker Engine and Docker Compose](setup-server.md#install-docker-engine-and-docker-compose) - - - -For complex configurations: - -1. [Install the operating system](setup-server.md#install-the-operating-system) -2. [Install Kubernetes with Docker Engine](setup-server.md#install-kubernetes-with-docker-engine) - -### Install the operating system - -As mentioned earlier, you can use either an on-premise server or one hosted in the cloud (AWS, Azure, DigitalOcean, etc.). To install the operating system on an on-premise server, please refer to the installation documentation of the operating system. - -If you choose to use a server hosted in the cloud, you need to create the server using the user interface provided by the cloud platform. Following is an example of how to create a server in DigitalOcean. - -#### Example: Create an Ubuntu Linux server in the DigitalOcean cloud - -1. Create an account and set billing - -Go to [https://www.digitalocean.com/](https://www.digitalocean.com/) and create an account. Provide the appropriate information for billing and accounting. - -2. Create a server - -Click on the **`Create`** button and choose the **`Droplets`** option from the dropdown. - -
-*Figure: Select Droplet*
- - - -3. Select a server configuration - -Select Ubuntu OS, and choose a plan and a configuration. - -
-*Figure: Configure the server*
- -4. Select the region and set the root password - -Select the region where you want the component to be hosted and set a root password. - -
-*Figure: Select the region and set the root password*
- - - -5. Finish the configuration and create the server - -Specify a hostname for the server, specify the project to which you assign the server, and then click on `Create Droplet.` - -
-*Figure: Finalize and create the server*
- -6. Access the server's console - -After the server is ready, select the `Access console` option from the dropdown list to open a terminal window. - -
-*Figure: Access the server's console*
- -### Install Docker Engine and Docker Compose - -From the terminal window, run the following commands to install Docker and Docker Compose. - -```bash -sudo apt-get update -sudo apt-get install ca-certificates curl gnupg lsb-release -sudo mkdir -p /etc/apt/keyrings -curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg -echo \ - "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \ - $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null -sudo apt-get update -sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin - -# Now install docker-compose -sudo apt-get update -sudo apt-get install docker-compose-plugin -``` - -### Install Kubernetes with Docker Engine - -Kubernetes is an orchestration engine for containerized applications and the initial setup is dependent on the platform on which it is deployed - presenting how this product must be installed and configured is outside the scope of this document. - -For cloud deployment, most of the cloud providers have dedicated turnkey solutions for Kubernetes. A comprehensive list of such cloud providers is presented [here](https://kubernetes.io/docs/setup/production-environment/turnkey-solutions/). - -For an on-premise deployment of Kubernetes, please refer to this [link](https://kubernetes.io/docs/setup/). - -Now that the execution environment is prepared and the prerequisites installed, we can proceed to deploy the Ocean's components. diff --git a/infrastructure/ssi-stack/README.md b/infrastructure/ssi-stack/README.md new file mode 100644 index 000000000..cd9b63cd5 --- /dev/null +++ b/infrastructure/ssi-stack/README.md @@ -0,0 +1,2 @@ +# SSI Stack + diff --git a/infrastructure/ssi-stack/opa-server.md b/infrastructure/ssi-stack/opa-server.md new file mode 100644 index 000000000..811d5576a --- /dev/null +++ b/infrastructure/ssi-stack/opa-server.md @@ -0,0 +1,2 @@ +# OPA server + diff --git a/infrastructure/ssi-stack/walt.id-issuer.md b/infrastructure/ssi-stack/walt.id-issuer.md new file mode 100644 index 000000000..fbea61fa0 --- /dev/null +++ b/infrastructure/ssi-stack/walt.id-issuer.md @@ -0,0 +1,2 @@ +# walt.id Issuer + diff --git a/infrastructure/ssi-stack/walt.id-verifier.md b/infrastructure/ssi-stack/walt.id-verifier.md new file mode 100644 index 000000000..876ca8079 --- /dev/null +++ b/infrastructure/ssi-stack/walt.id-verifier.md @@ -0,0 +1,2 @@ +# walt.id Verifier + diff --git a/infrastructure/ssi-stack/walt.id-wallet.md b/infrastructure/ssi-stack/walt.id-wallet.md new file mode 100644 index 000000000..f56bd9308 --- /dev/null +++ b/infrastructure/ssi-stack/walt.id-wallet.md @@ -0,0 +1,2 @@ +# walt.id Wallet + diff --git a/predictoor/README.md b/predictoor/README.md deleted file mode 100644 index cff267d11..000000000 --- a/predictoor/README.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -description: Run AI-powered prediction bots or trading bots on crypto price feeds to earn $ ---- - -# 👀 Predictoor - -**Predictoor docs are now at** [**docs.predictoor.ai**](https://docs.predictoor.ai)**.** - -
diff --git a/user-guides/README.md b/user-guides/README.md index 0f876c5c9..a0c52a5cd 100644 --- a/user-guides/README.md +++ b/user-guides/README.md @@ -1,49 +1,9 @@ --- -description: >- - Guides to use Ocean, with no coding needed. -cover: ../.gitbook/assets/cover/user_guides_banner.png +description: Guides to use Ocean Enterprise +cover: ../.gitbook/assets/User Guide2.png coverY: 0 --- -# 📚 User Guides +# User Guides -
- -**Contents:** -- Basic concepts -- Using wallets -- Host assets - -Let's dive in! - -## Basic concepts - -For blockchain beginners - -{% content-ref url="basic-concepts.md" %} -[basic-concepts.md](basic-concepts.md) -{% endcontent-ref %} - -## Using wallets - -{% content-ref url="wallets/README.md" %} -[wallets/README.md](wallets/README.md) -{% endcontent-ref %} - -{% content-ref url="wallets/metamask-setup.md" %} -[wallets/metamask-setup.md](wallets/metamask-setup.md) -{% endcontent-ref %} - -## Data Storage - -{% content-ref url="asset-hosting/" %} -[asset-hosting](asset-hosting/README.md) -{% endcontent-ref %} - -## Antique Stuff 🏺 - -If you have OCEAN in old pools, this will help. - -{% content-ref url="remove-liquidity-pools.md" %} -[remove-liquidity-pools.md](remove-liquidity-pools.md) -{% endcontent-ref %} +[Using the OE Marketplace](using-the-oe-marketplace/) diff --git a/user-guides/asset-hosting/README.md b/user-guides/asset-hosting/README.md deleted file mode 100644 index 5732bc9b4..000000000 --- a/user-guides/asset-hosting/README.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -description: How to host your data and algorithm NFT assets like a champ 🏆 😎 ---- - -# Host Assets - -The most important thing to remember is that wherever you host your asset... it needs to be **reachable & downloadable**. It cannot live behind a private firewall such as a private Github repo. You need to **use a proper hosting service!** - -**The URL to your asset is encrypted in the publishing process!** - -### Publish. Cool. Things. - -If you want to publish cool things on the Ocean Marketplace, then you'll first need a place to host your assets as **Ocean doesn't store data**; you're responsible for hosting it on your chosen service and providing the necessary details for publication. You have SO many options for where to host your asset, including centralized and decentralized storage systems. Places to host may include: Github, IPFS, Arweave, AWS, Azure, Google Cloud, and your own personal home server (if that's you, then you probably don't need a tutorial on hosting assets). Really, anywhere with a downloadable link to your asset is fine. - -In this section, we'll walk you through three options to store your assets: Arweave (decentralized storage), AWS (centralized storage), and Azure (centralized storage). Let's goooooo! - -Read on if you are interested in the security details! - -### Security Considerations - -{% embed url="https://media.giphy.com/media/81xwEHX23zhvy/giphy.gif" %} -These guys know what's up -{% endembed %} - -When you publish your asset as an NFT, the URL/TX ID/CID required to access the asset is encrypted and stored as a part of the NFT's [DDO](../../developers/identifiers.md) on the blockchain. Buyers don't have direct access to this information, but they interact with the [Provider](https://github.com/oceanprotocol/provider#provider), which decrypts the DDO and acts as a proxy to serve the asset. - -We recommend implementing a security policy that allows **only the Provider's IP address to access the file** and blocks requests from other unauthorized actors. Since not all hosting services provide this feature, **you must carefully consider the security features while choosing a hosting service.** - -{% hint style="warning" %} -**Please use a proper hosting solution to keep your files.** Systems like `Google Drive` are not specifically designed for this use case.
They include various virus checks and rate limiters that prevent the [`Provider`](../../developers/old-infrastructure/provider/) from downloading the asset once it has been purchased. -{% endhint %} diff --git a/user-guides/asset-hosting/arweave.md b/user-guides/asset-hosting/arweave.md deleted file mode 100644 index 2158c788a..000000000 --- a/user-guides/asset-hosting/arweave.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -description: How to use decentralized hosting for your NFT assets ---- - -# Arweave - -### Using Arweave with Uploader - -Enhance the efficiency of your file uploads by leveraging the simplicity of the [Ocean Uploader](./Uploader.md) storage system for Arweave. Dive into our comprehensive guide [here](./Uploader.md) to discover detailed steps and tips, ensuring a smooth and hassle-free uploading process. Your experience matters, and we're here to make it as straightforward as possible. - -### Arweave - -[Arweave](https://www.arweave.org/) is a global, permanent, and decentralized data storage layer that allows you to store documents and applications forever. Arweave is different from other decentralized storage solutions in that there is only one up-front cost to upload each file. - -**Step 1 - Get a new wallet and AR tokens** - -Download & save a new wallet (JSON key file) and receive a small amount of AR tokens for free using the [Arweave faucet](https://faucet.arweave.net/). If you already have an Arweave browser wallet, you can skip to Step 3. - -At the time of writing, the faucet provides 0.02 AR which is more than enough to upload a file. - -If at any point you need more AR tokens, you can fund your wallet from one of Arweave's [supported exchanges](https://arwiki.wiki/#/en/Exchanges). - -**Step 2 - Load the key file into the arweave.app web wallet** - -Open [arweave.app](https://arweave.app/) in a browser. Select the '+' icon in the bottom left corner of the screen. Import the JSON key file from step 1. - -![Arweave.app import key file](../../.gitbook/assets/hosting/arweave-1.png) - -**Step 3 - Upload file** - -Select the newly imported wallet by clicking the "blockies" style icon in the top left corner of the screen. Select **Send.** Click the **Data** field and select the file you wish to upload. - -![Arweave.app upload file](../../.gitbook/assets/hosting/arweave-2.png) - -The fee in AR tokens will be calculated based on the size of the file and displayed near the bottom middle part of the screen. Select **Submit** to submit the transaction. - -After submitting the transaction, select **Transactions** and wait until the transaction appears and eventually finalizes. This can take over 5 minutes, so please be patient. - -**Step 4 - Copy the transaction ID** - -Once the transaction finalizes, select it, and copy the transaction ID. - -![Arweave.app transaction ID](../../.gitbook/assets/hosting/arweave-3.png) - -**Step 5 - Publish the asset with the transaction ID** - -![Ocean Market - Publish with arweave transaction ID](../../.gitbook/assets/hosting/arweave-4.png) diff --git a/user-guides/asset-hosting/aws.md b/user-guides/asset-hosting/aws.md deleted file mode 100644 index 3125a4e2c..000000000 --- a/user-guides/asset-hosting/aws.md +++ /dev/null @@ -1,114 +0,0 @@ ---- -description: How to use AWS centralized hosting for your NFT assets ---- - -# AWS - -### Amazon Web Services - -AWS provides various options to host data and multiple configuration possibilities. Publishers are required to do their research and decide what would be the right choice.
The steps below describe one possible way to host data using an AWS S3 bucket and publish it on Ocean Marketplace. - -**Prerequisite** - -Create an account on [AWS](https://aws.amazon.com/s3/). Users might also be asked to provide payment details and billing addresses, which are outside the scope of this tutorial. - -**Step 1 - Create a storage account** - -**Go to AWS portal** - -Go to the AWS portal for S3: https://aws.amazon.com/s3/ and select from the upper right corner `Create an AWS account` as shown below. - -![Click the orange create an account button](../../.gitbook/assets/hosting/aws-1.png) - -**Fill in the details** - -![Create an account - 2](../../.gitbook/assets/hosting/aws-2.png) - -**Create a bucket** - -After logging into the new account, search for the available services and select the `S3` type of storage. - -![Select S3 storage](../../.gitbook/assets/hosting/aws-3.png) - -To create an S3 bucket, choose `Create bucket`. - -![Create a bucket](../../.gitbook/assets/hosting/aws-4.png) - -Fill in the form with the necessary information. Then, the bucket is up & running. - -![Check that the bucket is up and running](../../.gitbook/assets/hosting/aws-5.png) - -**Step 2 - Upload asset on S3 bucket** - -Now, the asset can be uploaded by selecting the bucket name and choosing `Upload` in the `Objects` tab. - -![Upload asset on S3 bucket](../../.gitbook/assets/hosting/aws-6.png) - -**Add files to the bucket** - -Get the files and add them to the bucket. - -The file is an example used in multiple Ocean repositories, and it can be found [here](https://raw.githubusercontent.com/oceanprotocol/c2d-examples/main/branin_and_gpr/branin.arff). - -![Upload asset on S3 bucket](../../.gitbook/assets/hosting/aws-7.png) - -The permissions and properties can be set afterward; for the moment, keep them as default. - -After selecting `Upload`, make sure that the status is `Succeeded`. - -![Upload asset on S3 bucket](../../.gitbook/assets/hosting/aws-8.png) - -**Step 3 - Access the Object URL on S3 Bucket** - -By default, access permissions for the file in the S3 bucket are set to private. To publish an asset on the market, the S3 URL needs to be public. This step shows how to set up access control policies to grant permissions to others. - -**Editing permissions** - -Go to the `Permissions` tab, select `Edit`, and then uncheck the `Block all public access` boxes to give everyone read access to the object, then click `Save`. - -If editing the permissions is unavailable, modify the `Object Ownership` by enabling the ACLs as shown below. - -![Access the Object URL on S3 Bucket](../../.gitbook/assets/hosting/aws-9.png) - -**Modifying bucket policy** - -To grant the bucket public access, its policy needs to be modified accordingly. - -Note that the `<BUCKET_NAME>` placeholder must be replaced with the name of your bucket, as shown in the personal buckets dashboard. - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "Public S3 Bucket", - "Principal": "*", - "Effect": "Allow", - "Action": "s3:GetObject", - "Resource": "arn:aws:s3:::<BUCKET_NAME>/*" - } - ] -} -``` - -After saving the changes, the bucket should appear as `Public` access. - -![Access the Object URL on S3 Bucket](../../.gitbook/assets/hosting/aws-10.png) - -**Verify the object URL on public access** - -Select the file from the bucket that needs verification and select `Open`. Now download the file on your system.
- -![Access the Object URL on S3 Bucket](../../.gitbook/assets/hosting/aws-11.png) - -**Step 4 - Get the S3 Bucket Link & Publish Asset on Market** - -Now that the S3 endpoint has public access, the asset will be hosted successfully. - -Go to [Ocean Market](https://market.oceanprotocol.com/publish/1) to complete the form for asset creation. - -Copy the `Object URL` that can be found at `Object Overview` from the AWS S3 bucket and paste it into the `File` field from the form found at [step 2](https://market.oceanprotocol.com/publish/2), as illustrated below. - -![Get the S3 Bucket Link & Publish Asset on Market](../../.gitbook/assets/hosting/aws-12.png) diff --git a/user-guides/asset-hosting/azure-cloud.md b/user-guides/asset-hosting/azure-cloud.md deleted file mode 100644 index 2cc8f5a39..000000000 --- a/user-guides/asset-hosting/azure-cloud.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -description: How to use centralized hosting with Azure Cloud for your NFT assets ---- - -# Azure Cloud - -### Microsoft Azure - -Azure provides various options to host data and multiple configuration possibilities. Publishers are required to do their research and decide what would be the right choice. The steps below describe one possible way to host data using Azure storage and publish it on Ocean Marketplace. - -**Prerequisite** - -Create an account on [Azure](https://azure.microsoft.com/en-us/). Users might also be asked to provide payment details and billing addresses, which are outside the scope of this tutorial. - -**Step 1 - Create a storage account** - -**Go to Azure portal** - -Go to the Azure portal: https://portal.azure.com/#home and select `Storage accounts` as shown below. - -![Select storage accounts](<../../.gitbook/assets/hosting/azure1 (1).png>) - -**Create a new storage account** - -![Create a storage account](<../../.gitbook/assets/hosting/azure2 (1).png>) - -**Fill in the details** - -![Add details](<../../.gitbook/assets/hosting/azure3 (1).png>) - -**Storage account created** - -![Storage account created](<../../.gitbook/assets/hosting/azure4 (1).png>) - -**Step 2 - Create a blob container** - -![Create a blob container](<../../.gitbook/assets/hosting/azure5 (1).png>) - -**Step 3 - Upload a file** - -![Upload a file](<../../.gitbook/assets/hosting/azure6 (1).png>) - -**Step 4 - Share the file** - -**Select the file to be published and click Generate SAS** - -![Click generate SAS](<../../.gitbook/assets/hosting/azure7 (1).png>) - -**Configure the SAS details and click `Generate SAS token and URL`** - -![Generate link to file](<../../.gitbook/assets/hosting/azure8 (1).png>) - -**Copy the generated link** - -![Copy the link](<../../.gitbook/assets/hosting/azure9 (1).png>) - -**Step 5 - Publish the asset using the generated link** - -Now, copy and paste the link into the Publish page in the Ocean Marketplace. - -![Publish the file as an asset](../../.gitbook/assets/hosting/azure10.png) diff --git a/user-guides/asset-hosting/github.md b/user-guides/asset-hosting/github.md deleted file mode 100644 index 704fb5076..000000000 --- a/user-guides/asset-hosting/github.md +++ /dev/null @@ -1,66 +0,0 @@ ---- -description: How to use Github for your NFT assets ---- - -# Github - -### **Github** - -GitHub can be used to host and share files. This allows you to easily share and collaborate on files, track changes using commits, and keep a history of updates. GitHub's hosting capabilities enable you to make your content accessible on the web.
- -### **Prerequisites** - -Create an account on [Github](https://github.com/). Users might also be asked to provide details and billing addresses that are outside of this tutorial's scope. - -**Step 1 - Create a new repository on GitHub or navigate to an existing repository where you want to host your files.** - -
-*Figure: Create new repository*
- -Fill in the repository details. **Make sure your Repo is public.** - -
-*Figure: Make the repository public*
- -### Host Your File - -**Step 2 - Upload a file** - -Go to your repo in Github and above the list of files, select the Add file dropdown menu and click Upload files. Alternatively, you can use version control to push your file to the repo. - -
-*Figure: Upload file on Github*
- -To select the files you want to upload, drag and drop the file or folder, or click 'choose your files'. - -
-*Figure: Drag and drop new files on your GitHub repo*
- -In the "Commit message" field, type a short, meaningful commit message that describes the change you made. - -
-*Figure: Commit changes*
- -Below the commit message field, decide whether to add your commit to the current branch or to a new branch. If your current branch is the default branch, then you should choose to create a new branch for your commit and then create a pull request. - -After you make your commit (and merge your pull request, if applicable), then click on the file. - -
-*Figure: Upload successful*
- -**Step 3 - Get the RAW version of your file** - -To use your file on the Market **you need to use the raw url of the asset**. Also, make sure your Repo is publicly accessible to allow the market to use that file. - -Open the File and click on the "Raw" button on the right side of the page. - -
-*Figure: Click the Raw button*
- -Copy the link in your browser's URL - it should begin with "https://raw.githubusercontent.com/...." like in the image below. - -
-*Figure: Grab the RAW github URL from your browser's URL bar*
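For reference, raw GitHub URLs follow this general pattern (the bracketed segments are placeholders for your account, repository, branch, and file path):

```
https://raw.githubusercontent.com/<user>/<repository>/<branch>/<path-to-file>
```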
- -
-*Figure: Copy paste the raw url*
- -**Step 4 - Publish the asset using the Raw link** - -Now, copy and paste the Raw Github URL into the File field of the Access page in the Ocean Market. - -
-*Figure: Upload on the Ocean Market*
- -Et voilà! You have now successfully hosted your asset on Github and properly linked it on the Ocean Market. - diff --git a/user-guides/asset-hosting/google-storage.md b/user-guides/asset-hosting/google-storage.md deleted file mode 100644 index 51ba9d582..000000000 --- a/user-guides/asset-hosting/google-storage.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -description: How to use Google Storage for your NFT assets ---- - -# Google Storage - -**Google Storage** - -Google Cloud Storage is a scalable and reliable object storage service provided by Google Cloud. It allows you to store and retrieve large amounts of unstructured data, such as files, with high availability and durability. You can organize your data in buckets and benefit from features like access control, encryption, and lifecycle management. With various storage classes available, you can optimize cost and performance based on your data needs. Google Cloud Storage integrates seamlessly with other Google Cloud services and provides APIs for easy integration and management. - -**Prerequisite** - -Create an account on [Google Cloud](https://console.cloud.google.com/). Users might also be asked to provide payment details and billing addresses that are out of this tutorial's scope. - -**Step 1 - Create a storage account** - -**Go to** [**Google Cloud console**](https://console.cloud.google.com/storage/browser) - -In the Google Cloud console, go to the Cloud Storage Buckets page - -
- -**Create a new bucket** - -
- -**Fill in the details** - -
- -**Allow access to your recently created Bucket** - -
- -**Step 2 - Upload a file** - -
- -**Step 3 - Change your file's access (optional)** - -**If your bucket's access policy is restricted, click on Edit access in the menu on the right (skip this step if your bucket is publicly accessible)** - -
- -
- -**Step 4 - Share the file** - -**Open the file and copy the generated link** - -
- -
- -**Step 5 - Publish the asset using the generated link** - -Now, copy and paste the link into the Publish page in the Ocean Marketplace. - -
- diff --git a/user-guides/asset-hosting/uploader.md b/user-guides/asset-hosting/uploader.md deleted file mode 100644 index 2a8178fe4..000000000 --- a/user-guides/asset-hosting/uploader.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -description: How to use Ocean Uploader ---- - -# Ocean Uploader - -### What is Ocean Uploader? - -Uploader is designed to simplify the process of storing your assets on decentralized networks (such as [Arweave](https://www.arweave.org/) and [Filecoin](https://filecoin.io/)). It provides access to multiple secure, reliable, and cost-effective storage solutions in an easy-to-use UI and JavaScript library. - -### What decentralized storage options are available? - -Currently, we support Arweave and IPFS. We may support other storage options in the future. - -### How to store an asset on Arweave with [Ocean Uploader](https://uploader.oceanprotocol.com/)? - -Ready to dive into the world of decentralized storage with [Ocean Uploader](https://uploader.oceanprotocol.com/)? Let's get started: - -{% embed url="https://app.arcade.software/share/88CYjl3SPhTS20qKqBGU" fullWidth="false" %} -{% endembed %} - -Woohoo 🎉 You did it! You now have an IPFS CID for your asset. Pop over to https://ipfs.oceanprotocol.com/ipfs/{CID} to admire your handiwork; you'll be able to access your file at that link. You can use it to publish your asset on [Ocean Market](../../developers/uploader/uploader-ui-marketplace.md). diff --git a/user-guides/basic-concepts.md b/user-guides/basic-concepts.md deleted file mode 100644 index b92db67ff..000000000 --- a/user-guides/basic-concepts.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -description: Learn the blockchain concepts behind Ocean ---- - -# Basic Concepts - -You'll need to know a thing or two about blockchains to understand Ocean Protocol's tech... Let's get started with the basics 🧑‍🏫 - -
-*Figure: Prepare yourself, my friend*
- -### Blockchain: The backbone of Ocean - -Blockchain is a revolutionary technology that enables the decentralized nature of Ocean. At its core, blockchain is a **distributed ledger** that securely **records and verifies transactions across a network of computers**. It operates on the following key concepts that ensure trust and immutability: - -* **Decentralization**: Blockchain eliminates the need for intermediaries by enabling a peer-to-peer network where transactions are validated collectively. This decentralized structure reduces reliance on centralized authorities, enhances transparency, and promotes a more inclusive data economy. -* **Immutability**: Once a transaction is recorded on the blockchain, it becomes virtually impossible to alter or tamper with. The data is stored in blocks, which are cryptographically linked together, forming an unchangeable chain of information. Immutability ensures the integrity and reliability of data, providing a foundation of trust in the Ocean ecosystem. Furthermore, it enables reliable traceability of historical transactions. -* **Consensus Mechanisms**: Blockchain networks employ consensus mechanisms to validate and agree upon the state of the ledger. These mechanisms ensure that all participants validate transactions without relying on a central authority, crucially maintaining a reliable view of the blockchain's history. The consensus mechanisms make it difficult for malicious actors to manipulate the blockchain's history or conduct fraudulent transactions. Popular consensus mechanisms include Proof of Work (PoW) and Proof of Stake (PoS). - -Ocean harnesses the power of blockchain to facilitate secure and auditable data exchange. This ensures that data transactions are transparent, verifiable, and tamper-proof. Here's how Ocean uses blockchains: - -* **Data Asset Representation**: Data assets in Ocean are represented as non-fungible tokens (NFTs) on the blockchain. NFTs provide a unique identifier for each data asset, allowing for seamless tracking, ownership verification, and access control. Through NFTs and datatokens, data assets become easily tradable and interoperable within the Ocean ecosystem. -* **Smart Contracts**: Ocean uses smart contracts to automate and enforce the terms of data exchange. Smart contracts act as self-executing agreements that facilitate the transfer of data assets between parties based on predefined conditions - they are the exact mechanisms of decentralization. This enables cyber-secure data transactions and eliminates the need for intermediaries. -* **Tamper-Proof Audit Trail**: Every data transaction on Ocean is recorded on the blockchain, creating an immutable and tamper-proof audit trail. This ensures the transparency and traceability of data usage, providing data scientists with a verifiable record of the data transaction history. Data scientists can query addresses of data transfers on-chain to understand data usage. - -By integrating blockchain technology, Ocean establishes a trusted infrastructure for data exchange. It empowers individuals and organizations to securely share, monetize, and leverage data assets while maintaining control and privacy. diff --git a/user-guides/remove-liquidity-pools.md b/user-guides/remove-liquidity-pools.md deleted file mode 100644 index 83c354029..000000000 --- a/user-guides/remove-liquidity-pools.md +++ /dev/null @@ -1,42 +0,0 @@ -# Liquidity Pools \[deprecated] - -Liquidity pools and dynamic pricing used to be supported in previous versions of the Ocean Market. 
However, these features have been deprecated, and we now advise everyone to remove their liquidity from the remaining pools. It is no longer possible to do this via Ocean Market, so please follow this guide to remove your liquidity via Etherscan. - -## Remove liquidity using Etherscan - -### Get your balance of pool share tokens - -1. Go to the pool's Etherscan/Polygonscan page. You can find it by inspecting your transactions on your account's Etherscan page under _Erc20 Token Txns_. -2. Click _View All_ and look for Ocean Pool Token (OPT) transfers. Those transactions always come from the pool contract, which you can click on. -3. On the pool contract page, go to _Contract_ -> _Read Contract_. - -
-*Figure: Read Contract*
- -4\. Go to field `20. balanceOf` and insert your ETH address. This will retrieve your pool share token balance in wei. - -
-*Figure: Balance Of*
- -5\. Copy this number as later you will use it as the `poolAmountIn` parameter. - -6\. Go to field `55. totalSupply` to get the total amount of pool shares, in wei. - -
-*Figure: Total Supply*
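If you prefer the command line, the same two reads can be done against any RPC endpoint. A minimal sketch using Foundry's `cast` (not part of the original guide; `POOL`, `MY_ADDRESS`, and `RPC_URL` are illustrative placeholders you must set yourself):

```bash
# Your pool share token balance in wei (same value as field 20. balanceOf)
cast call $POOL "balanceOf(address)(uint256)" $MY_ADDRESS --rpc-url $RPC_URL

# Total supply of pool shares in wei (same value as field 55. totalSupply)
cast call $POOL "totalSupply()(uint256)" --rpc-url $RPC_URL
```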
- -7\. Divide the number by 2 to get the maximum number of pool shares you can send in one pool exit transaction. If the number retrieved in the former step is bigger, you have to send multiple transactions. - -8\. Go to _Contract_ -> _Write Contract_ and connect your wallet. Be sure to have your wallet connected to the network of the pool. - -
-*Figure: Write Contract*
- -9\. Go to the field `5. exitswapPoolAmountIn` - -* For `poolAmountIn` add your pool shares in wei -* For `minAmountOut` use anything, like `1` -* Hit _Write_ - -
-*Figure: Remove Liquidity*
- -10\. Confirm transaction in Metamask - -
-*Figure: Confirm transaction*
diff --git a/user-guides/using-the-oe-marketplace/README.md b/user-guides/using-the-oe-marketplace/README.md new file mode 100644 index 000000000..26d7223bb --- /dev/null +++ b/user-guides/using-the-oe-marketplace/README.md @@ -0,0 +1,2 @@ +# Marketplace + diff --git a/user-guides/using-the-oe-marketplace/consuming-an-assets-service.md b/user-guides/using-the-oe-marketplace/consuming-an-assets-service.md new file mode 100644 index 000000000..7ff597f3e --- /dev/null +++ b/user-guides/using-the-oe-marketplace/consuming-an-assets-service.md @@ -0,0 +1,149 @@ +# Consuming an asset's service + +Consuming an asset’s service means accessing its URI and retrieving the result on your local machine. Depending on the type of resource available at the URI, this may involve: + +* Downloading a file directly to your local system. +* Calling an API endpoint and retrieving the returned results. +* Running a GraphQL query and collecting the query’s output. + +**Note**: Only services of type `access` can be consumed as listed above. Services of type `compute` can only be used in a C2D job. + + + +For a consumer to access an asset's service, the following conditions must be met: + +* **Access Rights**: the consumer must be authorized to access both the asset and the service. Authorization is evaluated at both levels (asset and service) based on: + * The consumer’s Web3 address + * The required verifiable credentials defined in the SSI access policies of the asset and the service +* **Payment**: If the service is paid, the consumer must purchase access by paying: + * The service’s listed price + * Any applicable additional fees + + + +## Precondition + +* The consumer has logged in to the marketplace + + + +## Steps + +1\. Locate the asset in the Catalogue and open it by clicking its tile. The Asset Details screen will appear, showing the available services on the right side. Each service includes information about its access type (access or compute) and price. + +
+ + + +2\. Select the service you want from the service list by pressing on its tile. The marketplace will then perform an initial access verification based on the consumer's web3 address. + +**Note:** + +* If the consumer’s address passes verification, no message is shown on the screen. +* If the consumer's address fails marketplace verification, an error message is displayed (see image below). + +
+ + +3\. Press the "**Check Credentials**" button. The next steps vary depending on whether or not the asset is governed by SSI-based access policies. + +* **The asset does not have SSI-based access policies** + * The consumer's address is verified by Ocean Node against the access rules defined in the asset's description. + + * If verification succeeds: + * An information message is displayed on-screen. + * The "Calculate Total Price" button is shown. + +
+ + + * If verification fails: + * An error message is displayed on-screen. + * The "Retry" button is shown. + +
+ + +* **The asset has SSI-based access policies** + * In accordance with the SSI policies defined at the asset and service levels, the marketplace receives a request for the consumer to present one or more verifiable credentials from their SSI wallet. + * The marketplace retrieves the verifiable credentials that match the SSI policy criteria from the consumer's SSI wallet. + + * If no verifiable credentials correspond to the SSI policy criteria, a message stating that no credentials were found is displayed. + +
+ + + + * If one or more verifiable credentials are found, a window with all found verifiable credentials is displayed + +
+ * Select the verifiable credentials to present and press "**Accept**". + * The DID Selector window is displayed. Select the DID that will be used to sign the verifiable presentation in which the selected verifiable credentials will be sent for verification. Then press "**Confirm**".
+ +
+ * The verifiable credentials are submitted for verification against the SSI policy defined in the asset metadata + * If the verification fails, an error message will be displayed on-screen. + * If verification succeeds: + + * An information message is displayed on-screen + * The "**Calculate Total Price**" button is shown. + +
+ + + + + +4\. Click "**Calculate Total Price**". + +
+ + +5\. The detailed list of costs associated with the purchase of the asset is shown. + +* Check the "I agree to the Terms and Conditions" checkbox. The Terms and Conditions can be reviewed by clicking on the link. +* Check the "I agree to the License Terms under which this asset was made available" checkbox. The License Terms of the asset can be reviewed by clicking on the link. +* Click "**Buy**" + +
+ + + + + +6\. The MetaMask wallet will display a spending cap request. Click "**Confirm**". + +
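The spending cap prompt is MetaMask's interface for a standard ERC-20 `approve` call: you authorize a contract to spend up to a fixed amount of your payment token. A minimal sketch of the equivalent call, with hypothetical placeholder addresses:

```typescript
// Sketch: the ERC-20 allowance behind MetaMask's "spending cap request".
import { ethers } from "ethers";

const ERC20_ABI = ["function approve(address spender, uint256 amount) returns (bool)"];

async function grantSpendingCap(
  signer: ethers.Signer,
  tokenAddress: string,   // the payment token, e.g. a stablecoin contract
  spenderAddress: string, // the contract that will pull the payment (placeholder)
  amount: bigint,         // the cap shown in the MetaMask dialog
): Promise<void> {
  const token = new ethers.Contract(tokenAddress, ERC20_ABI, signer);
  const tx = await token.approve(spenderAddress, amount);
  await tx.wait(); // mined once you click "Confirm"
}
```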
+ + +7\. The MetaMask wallet will display a transaction request. Click "**Confirm**". + +
+ + +8\. The service is purchased, and the "**Download**" button is displayed. + +
+ + + +**Note**: If a service requires consumer parameters, the corresponding fields for entering values will be displayed (see picture below). The user must enter a value for each mandatory field to enable the "Download" button. + +
+ + +9\. Click "**Download**". The MetaMask wallet will display a signature request message. + +
+ + +10\. The result of accessing the service's URI is downloaded. + +
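Unlike the earlier steps, the signature request in step 9 is not a blockchain transaction and costs no gas: signing a message simply proves to the Ocean Node that you control the consumer address. A minimal sketch (the exact message format is node-specific and assumed here):

```typescript
// Sketch: proving address ownership with an off-chain signature.
import { ethers } from "ethers";

async function proveAddressOwnership(
  signer: ethers.Signer,
  message: string, // challenge defined by the Ocean Node (assumption)
): Promise<string> {
  // MetaMask shows this as a "signature request" rather than a transaction.
  return signer.signMessage(message);
}
```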
+ diff --git a/user-guides/using-the-oe-marketplace/editing-an-asset/README.md b/user-guides/using-the-oe-marketplace/editing-an-asset/README.md new file mode 100644 index 000000000..a8bfc7574 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/editing-an-asset/README.md @@ -0,0 +1,34 @@ +# Editing an asset + +Editing an asset allows the asset's owner to: + +* change the asset's attributes +* change the state of an asset (enabled/disabled) +* add, delete, or change the asset's services +* change the status of a service (enabled/disabled) + +**Note**: Once created, an asset cannot be deleted; it can only be deactivated, meaning none of its services can be consumed. Similarly, a service created within an asset cannot be deleted; it can only be deactivated, which means it cannot be consumed. However, other enabled services within the same asset can be consumed. + + + +## Precondition + +The user has logged in to the marketplace. + + + +## Steps + +1\. Select the asset. If the user is the asset owner, the "Edit Asset" option appears under the services list. + +
+ + + +2\. Click "**Edit Asset**". The Edit screen is displayed with two options to select from: + +* [_Edit Asset_](update-the-assets-attributes-and-state.md) (preselected), to update the asset's attributes and state +* [_Edit Services_](update-the-assets-services/) to update the attributes or the state of an existing service, or to add a new service to the asset + +
+ diff --git a/user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-attributes-and-state.md b/user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-attributes-and-state.md new file mode 100644 index 000000000..63763a396 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-attributes-and-state.md @@ -0,0 +1,37 @@ +# Update the asset's attributes and state + +To update the asset's attributes, perform the following: + +1\. From the Edit page, select the option "**Edit Asset**". You can update the following attributes: + +* Title +* Description +* Sample file URL +* Tags +* Author +* Access rules +* State. Select one of the following: + * Active - the asset is consumable + * EndOfLife - the asset is not consumable + * Deprecated - the asset is not consumable + * RevokedByPublisher - the asset is not consumable + * OrderingIsTemporaryDisabled - the asset is not consumable + * Unlisted - the asset is not consumable +* License file +* Additional Asset Description + +For a description of each of these attributes, please review the [Publishing an asset](../publishing-an-asset/) page, steps 1 and 2. + +**Note**: The Asset Type (Dataset or Algorithm) cannot be changed once set. + +2\. After the changes were made, click "**Submit**". A transaction request notification from Metamask appears on the screen. Press "**Confirm**". + +
+ +3\. A confirmation message is displayed on the screen. Click "**Back to Asset**" to return to the asset details screen. + +
+ + + +Please note that from the time an asset is updated on the blockchain until it is indexed by the Ocean Node’s indexer, a delay may occur. This delay typically ranges from a few seconds to several minutes, depending on factors such as RPC endpoint performance, the current indexed block, and the machine’s processing capacity running the Ocean Node. diff --git a/user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-services/README.md b/user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-services/README.md new file mode 100644 index 000000000..6df219910 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-services/README.md @@ -0,0 +1,14 @@ +# Update the asset's services + +To update the asset's services, perform the following: + +1\. From the Edit page, select the option "**Edit Service**". The existing services are listed, along with a button to add a new service. + +
+ + + +From this screen, you can perform the following: + +* [Update an existing service](update-an-existing-service.md) +* [Create a new service](create-a-new-service.md) diff --git a/user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-services/create-a-new-service.md b/user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-services/create-a-new-service.md new file mode 100644 index 000000000..bb9cf16e7 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-services/create-a-new-service.md @@ -0,0 +1,36 @@ +# Create a new service + +To create a new service, perform the following: + +1\. Click on the "**Add a new service**" button. The "Add a new service" form is displayed. + +2\. Fill in the following fields: + +* Service Name +* Service Description +* Service Language +* Price: If you want the service to be free, set the price to zero. For a paid service, set the price to a value other than zero. +* Payment Collector Address: By default, the payment collector address is the publisher's address; however, it can be changed to a different address. +* Provider URL: the Ocean Node that will encrypt the asset. +* The asset's file +* Timeout +* Access Rules +* Consumer Parameters + +For a description of each of these attributes, please review step 3 of the [Publishing an asset](../../publishing-an-asset/) page. + + + +3\. After the service's attributes are entered, click "**Submit**". A transaction request notification from Metamask appears on the screen. Press "**Confirm**". + +
+ + + +4\. A confirmation message is displayed on the screen. Click "**Back to Asset**" to return to the asset details screen. + +
+ + + +Please note that from the time an asset is updated on the blockchain until it is indexed by the Ocean Node’s indexer, a delay may occur. This delay typically ranges from a few seconds to several minutes, depending on factors such as RPC endpoint performance, the current indexed block, and the machine’s processing capacity running the Ocean Node. diff --git a/user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-services/update-an-existing-service.md b/user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-services/update-an-existing-service.md new file mode 100644 index 000000000..914e348ab --- /dev/null +++ b/user-guides/using-the-oe-marketplace/editing-an-asset/update-the-assets-services/update-an-existing-service.md @@ -0,0 +1,43 @@ +# Update an existing service + +To update an existing service, perform the following: + +1\. Click on the service. The service attributes are displayed. + +2\. You can change the following attributes: + +* Service Name +* Service Description +* Service Language +* Price: Please note that if the service is paid, it cannot be changed to a free service and vice versa. +* Payment Collector Address: By default, the payment collector address is the publisher's address; however, it can be changed to a different address. +* Provider URL: the Ocean Node that will encrypt the asset. +* The asset's file: it is protected and not displayed in this field. If you want to change it, first delete the existing file, then add the new one. +* Timeout +* Service State: select one of the following: + * Active - the service is consumable + * EndOfLife - the service is not consumable + * Deprecated - the service is not consumable + * RevokedByPublisher - the service is not consumable + * OrderingIsTemporaryDisabled - the service is not consumable + * Unlisted - the service is not consumable +* Access Rules +* Consumer Parameters + +For a description of each of these attributes, please review the [Publishing an asset](../../publishing-an-asset/) page, step 3. + + + +3\. After making the changes, click "**Submit**". A transaction request notification from Metamask appears on the screen. Press "**Confirm**". + +
+ + + +4\. A confirmation message is displayed on the screen. Click "**Back to Asset**" to return to the asset details screen. + +
+ + + +Please note that from the time an asset is updated on the blockchain until it is indexed by the Ocean Node’s indexer, a delay may occur. This delay typically ranges from a few seconds to several minutes, depending on factors such as RPC endpoint performance, the current indexed block, and the machine’s processing capacity running the Ocean Node. diff --git a/user-guides/using-the-oe-marketplace/logging-in-to-the-marketplace.md b/user-guides/using-the-oe-marketplace/logging-in-to-the-marketplace.md new file mode 100644 index 000000000..90a776e8c --- /dev/null +++ b/user-guides/using-the-oe-marketplace/logging-in-to-the-marketplace.md @@ -0,0 +1,76 @@ +--- +description: >- + This page will guide you through the process of logging in to the OE + marketplace, to publish or access assets. +--- + +# Logging in to the Marketplace + +To publish or consume assets, a user must first log in to the marketplace. Logging in requires establishing a connection to the marketplace server using both the MetaMask web3 wallet and the SSI wallet. + +## Precondition + +The user has gone through the onboarding process, at the end of which they have the following: + +* The Metamask web3 wallet is configured with a valid web3 address. +* The SSI wallet account is set up and populated with DIDs and Verifiable Credentials. +* Optional: The user has enough native tokens in their wallet to pay the gas for blockchain transactions. +* Optional: The user has added funds to their web3 address for the token(s) used in the marketplace to purchase assets. + +## Steps + +To log in to the marketplace, perform the following steps: + +1\. On the main page, press the Connect Wallet button. + +
+ + + +2\. The Connect Wallet window is displayed. Click on MetaMask. + +
+ + + +3\. The MetaMask login screen is displayed. Enter your password and press Enter. + +
+ + +4\. The signature request window is displayed. Press "**Confirm**". + +
+ + +5\. The first time the user connects to the marketplace, the SSI Wallet API screen is displayed, prefilled with the URL of the default SSI wallet provided by the marketplace. Enter the URL of your SSI wallet and press "Set SSI Wallet API & Connect SSI". + +
+ + +**Note:** the SSI Wallet URL is cached in the browser, so future logins to the marketplace won't require entering the SSI Wallet URL. If the user wants to update the SSI Wallet URL, it can be done from Settings -> Update SSI Wallet API, as shown below. + +
+ + +6\. The signature request window is displayed. Press "**Confirm**". + +
+ + +7\. The user is now connected to the Marketplace with both the web3 and SSI wallets. + +
+ + +8\. From the main menu of the marketplace, the user can do the following: + +* [Publish an asset](publishing-an-asset/) +* access the assets catalogue, from where services can be [accessed](consuming-an-assets-service.md), and C2D jobs can be initiated +* access the user profile, where information related to the user's activity is logged diff --git a/user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/README.md b/user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/README.md new file mode 100644 index 000000000..6ab5397ed --- /dev/null +++ b/user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/README.md @@ -0,0 +1,18 @@ +--- +description: >- + This page will guide you through the process of onboarding to an Ocean + Enterprise marketplace. +--- + +# Onboarding to the Marketplace + +To interact with the marketplace, a user needs to set up their connection to the market, set up their profile, and provision the funds required to purchase assets, run C2D jobs, and perform transactions. + +The following activities need to be performed to onboard to the marketplace: + +[Install MetaMask in the browser](install-and-configure-metamask-in-the-browser.md) + +[Set up Metamask](../publishing-an-asset/asset-metadata.md) + +[Adding funds to the wallet](adding-funds-to-the-wallet.md) + diff --git a/user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/adding-funds-to-the-wallet.md b/user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/adding-funds-to-the-wallet.md new file mode 100644 index 000000000..a92df830e --- /dev/null +++ b/user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/adding-funds-to-the-wallet.md @@ -0,0 +1,25 @@ +# Adding funds to the wallet + +To cover transaction fees on an OE-enabled dataspace, users must have sufficient funds in the native currency of the blockchain network, for example, ETH for a dataspace deployed on the Ethereum network. + +Furthermore, to purchase published assets or run Compute-To-Data jobs, users must have sufficient funds of the currency in which the asset is listed (e.g., USDC, EURC). + +**Note**: The currencies supported by Ocean Enterprise are listed [here](../../../developers/networks.md). + +## Adding funds for production environments + +To add funds for use on the production network (mainnet), users must first acquire the necessary currency from a supported exchange. Once purchased, initiate a withdrawal from the exchange to your Metamask wallet address. + +**Note**: The currencies on production environments have financial value. + +## Adding funds for testnet environments + +For testnet environments, you can use a "faucet" to claim free test tokens. + +* Test USDC & EURC: Claim these from the official Circle faucet: [`https://faucet.circle.com/`](https://faucet.circle.com/) +* EURAU: Please note that a faucet for EURAU is not currently available.
+ +**Note**: The currencies on test environments have no financial value. + + + diff --git a/user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/install-and-configure-metamask-in-the-browser.md b/user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/install-and-configure-metamask-in-the-browser.md new file mode 100644 index 000000000..4ff9944c4 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/install-and-configure-metamask-in-the-browser.md @@ -0,0 +1,73 @@ +# Install and configure Metamask in the browser + +## Prerequisites + +* Supported Browser: Google Chrome, Mozilla Firefox, Brave, Microsoft Edge, Opera +* Pen and Paper: Required for the physical backup of your Secret Recovery Phrase (SRP). Do not skip this. + +## Step 1: Installation + +Go to the [Metamask.io](https://metamask.io/) web page, press "Get Metamask", and follow the instructions on screen. + +1\. Open your browser and go to [https://metamask.io](https://metamask.io/). + +Verification: Ensure the lock icon appears in the address bar. + +2\. Select your platform: Click the "Get Metamask" button. The site should automatically detect your browser. + +Select "Install MetaMask for \[Your Browser]". + +3\. You will be redirected to your browser’s official web store (e.g., Chrome Web Store, Firefox Add-ons). + +Click Add to Chrome (or Firefox/Brave/Edge). + +Review the permissions prompt and click Add Extension. + +**Result**: Upon completion, the MetaMask "fox" icon will appear in your browser toolbar, and a "Let's get started" tab will open automatically. + +Tip: If the icon disappears, click the "Puzzle" piece icon (Extensions) in your browser toolbar and click the Pin icon next to MetaMask to keep it visible. + +## Step 2: Wallet initialization + +1\. Begin Setup: On the Welcome screen, click Create a new wallet. + +**Note**: "Import an existing wallet" is only used if you already have a Secret Recovery Phrase from a previous installation. + +2\. Choose an option to log in to MetaMask: a Google account, an Apple account, or a Secret Recovery Phrase. + +3\. If you chose to use a Secret Recovery Phrase in the previous step, you will be asked to create a strong password (minimum 8 characters). Click the checkbox next to "If I lose this password, MetaMask can’t reset it", then press "Create password". + +**Note**: This password encrypts your private keys locally on your device. It is not your master key; it only unlocks the extension on this specific computer. + + + +## Step 3: Securing your assets + +This is the most important step in the process. You will be assigned a Secret Recovery Phrase (SRP): a sequence of 12 random words. + +**The Golden Rule**: If you lose this phrase, you lose your funds forever. If someone else gets this phrase, they can steal your funds. + +1\. Reveal the Phrase: Click the lock icon/blurred area to reveal your 12 words. + +2\. Physical Backup (Required): Write down the words in the exact order on a piece of paper. + +* DO NOT take a screenshot. +* DO NOT copy/paste them into a cloud document (Google Docs, Notes, etc.). +* DO NOT email them to yourself. + +3\. Verify the Phrase: Click Continue. MetaMask will ask you to confirm your backup by selecting the words in the correct order or filling in missing words. + +4\. Finalize: Once verified, click Continue -> Got it. The message "Your wallet is ready!" is displayed. Press "Done" to finish the setup.
+ + + +## Step 4: Post-installation Checks + +Network Status: By default, MetaMask connects to a list of production blockchains - both EVM and non-EVM-compatible. You can see this in the top-left corner of the wallet interface. + +Account Address: By default, an account (Account 1) is created when you install Metamask. Your public wallet address (starts with 0x...) is located at the top center, under Account 1. Click on "network addresses" to display the networks, then click the copy button next to the network whose address you want to copy to your clipboard. This is the address you share to receive funds. + + + + + diff --git a/user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/setting-up-the-ssi-wallet.md b/user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/setting-up-the-ssi-wallet.md new file mode 100644 index 000000000..b11e22bc6 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/onboarding-to-the-marketplace/setting-up-the-ssi-wallet.md @@ -0,0 +1,207 @@ +--- +description: >- + Add DIDs and Verifiable Credentials to the SSI wallet to publish and consume + assets in an SSI-enabled OE marketplace +--- + +# Setting up the SSI wallet + +In an SSI-enabled marketplace, publishers require a Decentralized ID (DID) to sign the asset's DDO, thereby proving its provenance. Furthermore, consumers must present Verifiable Credentials to access assets. DIDs and VCs must be added to the SSI wallet for the OE marketplace to access them. + +Depending on their setup and security requirements, participants in a dataspace can use either the default SSI wallet instance provided by the dataspace or their own instance. + +Setting up the SSI wallet means: + +1. adding DIDs to the wallet, and +2. adding Verifiable Credentials to the wallet + + + +## Preconditions + +* The user must have a basic understanding of SSI concepts, such as cryptographic key, DID, DID method, and Verifiable Credential +* The SSI wallet instance has been installed and configured, as described here. +* The user has logged in to Metamask. + + + +## Steps + +To set up the SSI wallet, perform the following steps: + +1\. Connect to the SSI wallet's user interface by accessing the SSI wallet instance URL. + + + +2\. The login screen of the SSI wallet is displayed. + +
+ + + +3\. Click the "**Connect with web3**" button. A MetaMask notification message for a signature request appears on the screen. + +
+ + + +4\. Click "**Confirm**". The Select wallet screen is displayed. Press "**View wallet**". + +
+ + + +5\. The main menu of the SSI wallet is displayed. + +
+ +From the SSI wallet's user interface, users can manage the cryptographic keys, DIDs, and Verifiable Credentials associated with their account. + +**Note**: The first time the user connects to the SSI wallet instance, a DID named "Onboarding", of type JWK, and a corresponding key are created by default. You can choose to delete or keep them. + + + +6\. **Adding DIDs to the SSI wallet** + +There are two methods to add a DID to the SSI wallet: create a new DID or import an existing DID. + +* **Create a new DID** + + * From the left side menu, click "**DIDs**" + +
+ + + * The DIDs menu is displayed. Click "**New**". + +
+ + + * The DID types menu is displayed. From here, the user can choose the type of DID they want to create. did:key and did:jwk are primarily used in testing scenarios, while did:web is used for production cases. The following steps show how to create a did:web. Click "**Create did:web**" + + + + * The **Create WEB DID (did:web)** screen is displayed. + + * **Key id field**: enter the name of an existing key to be assigned to the DID, or leave it blank so a new key will be generated and attached to the DID. + + + + * **Alias:** enter an alias for the DID. + + + + * **Domain**: if you want the DID to be resolved (retrieved by a DID resolver using the did:web method), enter the domain where the DID is located (e.g. _example.com_). Please note that the SSI wallet instance comes with a web registry where DIDs of type did:web can be hosted. If you want to use the web registry provided by the SSI wallet instance, enter the hostname of the wallet instance in this field. + + + + * **Path**: Multiple DIDs can be hosted under one domain by using paths. Enter the path where the DID is located. If you use the web registry provided by the SSI wallet instance, enter `/wallet-api/registry/` in this field. + +
+ + + + * Click "**Create did:web**". An information message is displayed indicating that the DID has been created. + +
+ + + + * To test that the DID can be resolved, go to [https://dev.uniresolver.io/](https://dev.uniresolver.io/), enter the DID in the did-url field, and click **Resolve**. The DID document should be retrieved and displayed. + +
+ + + +* **Import an existing DID** + + * From the left side menu, click "**DIDs**" + +
+ + + * The DIDs menu is displayed. Click "**Import**". + +
+ + + * The "**Import your DIDs**" screen is displayed. + + * **DID**: enter the DID to be imported + * **Associated key (PEM or JSON)**: enter the private key of the DID in either PEM or JSON format + * **Alias**: provide an alias for the imported DID + +
+ + + + * Click **Import DID.** An information message is displayed indicating that the DID has been imported. + +
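Whether created or imported, a did:web identifier is resolved by fetching a well-known URL derived from the DID itself, which is why the Domain and Path fields above matter. The sketch below follows the did:web method specification:

```typescript
// Sketch: the URL a resolver fetches for a given did:web identifier.
function didWebToUrl(did: string): string {
  const parts = did.split(":");
  if (parts[0] !== "did" || parts[1] !== "web") {
    throw new Error("not a did:web identifier");
  }
  const domain = decodeURIComponent(parts[2]); // may contain a percent-encoded port
  const pathSegments = parts.slice(3).map(decodeURIComponent);
  return pathSegments.length === 0
    ? `https://${domain}/.well-known/did.json`                // bare-domain DID
    : `https://${domain}/${pathSegments.join("/")}/did.json`; // DID with a path
}

// Example with a hypothetical wallet host, using the web registry path above:
// didWebToUrl("did:web:wallet.example.com:wallet-api:registry:my-did")
//   => "https://wallet.example.com/wallet-api/registry/my-did/did.json"
```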
+ + + +7\. **Adding Verifiable Credentials to the SSI wallet** + +There are two ways to add Verifiable Credentials to the SSI wallet: import existing credentials (in JWT format) or receive credentials during the credential-issuing process based on the OID4VCI (OpenID for Verifiable Credential Issuance) protocol. + +* **Import existing Verifiable Credentials in JWT format** + + * From the main menu, click "**Credentials**". + +
+ + + + * The Credentials page is displayed. Click "**Import credential (JWT)**." + +
+ + + + * The Import Credential (JWT) page is displayed. + + * **Signed VC JWT**: paste the signed VC. + * **Associated DID**: select the DID associated with the imported VC. + +
+ + + * Click "**Import credential**". An information message is displayed indicating that the VC has been imported. + + + + * You can then find the imported VC on the Credentials page. + +
+ + + + * **Receive credentials during the credential-issuing process** + + * In the credential issuance process based on the OID4VCI protocol, a credential issuer component receives a credential issuance request that includes the raw credential data (the information to be issued as a VC) and the data that identifies the VC's issuer (the issuer's DID and private key). Then, the credential issuer component generates an OID4VC offer URL that any OID-compliant wallet can accept to receive credential(s). More details on the credential issuance process based on the walt.id SSI stack can be found [here](https://docs.walt.id/community-stack/issuer/api/credential-issuance/vc-oid4vc). + * To receive a credential through an OID4VC offer URL, from the "Credentials" screen click "**Scan to receive or present credentials**". + +
+ + + + * The screen to receive credentials is displayed. Enter the OID4VC offer URL in the input field and click "**Receive credential**". + +
+ + + + * The Receive single credential screen is displayed, indicating the credential issuer component from which the credential will be received and the credential type. + * **Select DID**: from this dropdown list, select the DID associated with the received VC. + +
+ + + + * Press "**Accept**". The VC will be added to the wallet and displayed on the Credentials screen. + +
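Before importing a signed VC JWT (see the import flow earlier in step 7), you can inspect what it contains: a JWT is three base64url-encoded segments separated by dots, and the middle one holds the claims. The sketch below decodes the payload only; it does not verify the signature:

```typescript
// Sketch: decode the payload of a VC JWT for a quick sanity check (Node.js).
function decodeVcJwtPayload(jwt: string): unknown {
  const segments = jwt.split(".");
  if (segments.length !== 3) throw new Error("not a JWT");
  const base64 = segments[1].replace(/-/g, "+").replace(/_/g, "/");
  const json = Buffer.from(base64, "base64").toString("utf8");
  return JSON.parse(json); // typically includes iss, sub and a `vc` claim
}
```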
+ diff --git a/user-guides/using-the-oe-marketplace/publishing-an-asset/README.md b/user-guides/using-the-oe-marketplace/publishing-an-asset/README.md new file mode 100644 index 000000000..39c8a08bd --- /dev/null +++ b/user-guides/using-the-oe-marketplace/publishing-an-asset/README.md @@ -0,0 +1,42 @@ +--- +description: This chapter describes the process of publishing an asset in Ocean Enterprise. +--- + +# Publishing an asset + +## Introduction + +Publishing an asset involves recording its description on-chain and generating the associated smart contracts. Once this information is stored, the asset description is indexed and cached within the OE node’s indexer database, enabling users to easily discover, access, and utilize the asset. + +**Note**: In OE, an asset can have multiple services associated, as outlined here. During the publishing process, the asset’s initial service is automatically created. To add additional services, simply edit the asset after publication. + + + +### What happens when an asset is published + +When an asset is published, the following actions occur: + +1. The asset smart contracts (data NFT, data token) are created on-chain. The price is saved in the Fixed Rate Exchange contract. +2. The location of the service files of the asset is encrypted +3. Asset's DDO is created in the form of a Verifiable Credential, in JWT format, encrypted, and saved in IPFS +4. The state of the asset and of the first service of the asset are set to "Active", meaning they are consumable +5. The Content ID of the file is saved on-chain + + + +## Precondition + +The user has logged in to the marketplace. + + + +## Steps + +1\. Press the Publish button from the main page. The Asset Publishing flow has started. + +
+ + + + + diff --git a/user-guides/using-the-oe-marketplace/publishing-an-asset/additional-asset-description.md b/user-guides/using-the-oe-marketplace/publishing-an-asset/additional-asset-description.md new file mode 100644 index 000000000..36e3c21f7 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/publishing-an-asset/additional-asset-description.md @@ -0,0 +1,11 @@ +# Additional Asset Description + +23\. The **Additional Asset Description** screen is displayed. On this screen, additional descriptions of the asset in other formats can be added. The additional descriptions will be saved in a dedicated field in the asset's DDO and can be retrieved from the cache. + +* To include an additional asset description in the DDO, press the **"Create Additional Asset Description"** button. + +
+* In the **Type** field, enter the type of the additional asset description (e.g. GAIA-X) +* In the **Content** field, insert the asset description +* You can insert as many additional asset descriptions as you need +* Press the **Continue** button diff --git a/user-guides/using-the-oe-marketplace/publishing-an-asset/asset-level-credentials.md b/user-guides/using-the-oe-marketplace/publishing-an-asset/asset-level-credentials.md new file mode 100644 index 000000000..89c905c2e --- /dev/null +++ b/user-guides/using-the-oe-marketplace/publishing-an-asset/asset-level-credentials.md @@ -0,0 +1,176 @@ +# Asset Level Credentials + +5\. The Asset Level Credentials screen is displayed. This screen allows you to define the access rules at the asset level. For a better understanding of how access credentials work, please check this link. + +6\. The **Access Rules** group is displayed. Using the fields in this group, you can decide who is allowed or denied access to the asset. The rules are based on web3 addresses. + +7\. The "**Allow ETH Address**" option enables the user to define who can access the asset: + +* To grant access to everybody, select "_Allow all addresses_" + +
+ + + +* To restrict access to specific users, select "_Allow specific addresses_". + * A text field is displayed. Enter the web3 address and press _Add new address_. You can add multiple addresses. + +
+ + +8\. The "**Deny ETH Address**" option enables the user to define who is denied access to the asset: + +* To deny access to everybody, select "_Deny all addresses_" + +
+ +* To deny access to specific addresses, select _"Deny specific addresses"_. + * A text field is displayed. Enter the web3 address and press _Add new address_. You can add multiple addresses. + +
+ + + +**Note:** Selecting both "Allow all addresses" and "Deny all addresses" simultaneously will result in access being denied to all users, as the deny list takes precedence. + + + +9\. To enable access rules based on SSI credentials, select the "Enable SSI Policies" checkbox. The SSI Policies group is displayed. + +
+Using this user interface, the publisher can define access rules at the asset level based on the Verifiable Credentials (VCs) owned by the consumer in their SSI wallet. The VC-based access rules are referred to as SSI policies or simply policies. Three types of SSI policies can be defined: + +* **Policies applied to all requested VCs (static policies)**: their scope includes all requested VCs. The following static policies can be applied: + + * _signature_: verifies the signature of the VC + * _not-before_: verifies the credential is not used before its validity time + * _revoked-status-list_: verifies that the credential has not been revoked + * _expired_: verifies that the credential has not expired + * _signature\_sd-jwt-vc_: verifies the signature for the selective disclosure JWT (SD-JWT) type of VCs. + + **Note**: by default, certain policies are enforced by the marketplace and are preselected. Additionally, the component that evaluates the submitted VCs applies a set of predefined policies automatically. Therefore, even if you manually deselect a default policy, it may still be enforced due to underlying system rules. +* **Policies applied to a specific VC**: applicable only to the VC for which they were defined. The following policies can be applied at the VC level: + * _Static policies_ (see the list above) + * _Allowed issuer:_ verifies that the VC was issued by one of a list of specific entities defined by their DIDs. If the VC was not issued by any of the DIDs in the list, the policy fails + * _Custom policy:_ verification rules based on the fields within the requested VCs. For instance, the publisher can enforce a rule that only legal entities from Germany can access the asset. This policy verifies that the `credentialSubject.gx:headquartersAddress.gx:countryCode` equals `"DE"`. + * _Custom URL policy_: A custom policy authored in the REGO language and hosted at a designated URL. This approach enables advanced verification scenarios by allowing tailored logic based on the specific fields within the requested Verifiable Credentials (VCs). +* **Advanced policies:** applicable to all VCs. The following advanced policies can be applied: + + * _Credential presenter same as credential owner:_ verifies that the entity that issues the verifiable presentation (VP) that embeds the VC is the same as the subject of the VC. In case the entity that submits the VP for verification is not the subject of the VC, the policy fails. + * _All requested credential types are necessary for verification:_ verifies that all requested VCs are submitted for verification. If this policy is not enabled and the access rules to the asset request, for instance, two VCs - LegalPerson and LegalRegistrationNumber - a consumer who submits just one of these credentials passes the verification. With the policy enabled, passing just one of the credentials will result in failure. + * _Minimum number of credentials required_: Set the minimum number of credentials that must be presented for successful verification. Presenting fewer VCs than the minimum number of credentials will result in failure. + * _Maximum number of credentials required_: Set the maximum number of credentials that may be presented for successful verification. + + **Note**: some of the advanced policies are enforced by default by the marketplace and are checked in the user interface. + + + +10\. **Policies applied to all credentials:** To add a new policy applied to all credentials, mark the corresponding checkbox. + +
+ + + +11\. **Policies applied to a specific VC**: To define policies applicable to a particular VC, perform the following steps: + +* Click the **New Credential Request** button. The **Credential Request #1** group is displayed. + +
+ + +* From the **Type** list, select the VC to be requested. The list of supported VCs will be updated periodically. Please consult the list of supported VCs here. \ From the **Format** list, select the format in which the VC should be presented: `jwt_vc_json`, `mso_mdoc` or `vc+sd_jwt`. + +
+ +* To apply a static policy to the requested VC, perform the following: + + * Click the **Add policy** button and from the list select **Static Policy**. + +
+ + + + * The **Static Policy** list is displayed. Select a static policy from the list. + +
+ + +* To apply the Allowed issuer policy to the requested VC, perform the following: + + * Click the **Add policy** button and from the list select **Allowed Issuer**. + +
+ + + + * The allowed-issuer policy is displayed. Press the **New Issuer DID** button and in the **Issuer DID** field enter the DID of the issuer. You can add multiple entries by pressing the **New Issuer DID** button. + +
+ + +* To apply a custom policy to the requested VC, perform the following: + + * Click the **Add policy** button and from the list select **Custom Policy**. + +
+ + + + * The **Name** field is displayed. Enter a meaningful name for the custom policy, using letters and numbers. \ + For consistency and readability, it's recommended to use camelCase notation when naming your policy. + +
+ + + + * To create a new rule, click the **New rule** button. From the **Credential field** list, choose a field from the selected VC that you want to evaluate. \ Next, select the appropriate operator from the **Operator** list. \ Finally, enter the desired value in the **Value** field. Please note that for strings, the comparison is case-insensitive (e.g. "DE", "de" and "De" have the same value).\ You can add multiple rules in the same custom policy. + +
+ + +* To apply a custom policy available at a URL, perform the following: + + * Click the **Add policy** button and from the list select **Custom URL Policy**.
+ +
+ + + + * The UI group for Custom URL policies is displayed. When using custom URL policies, ensure you follow these guidelines; otherwise, they will not work. + +
+ * Enter the policy name in the **Custom URL Policy Name** text field + * Enter the URL where the policy is located in the **Policy URL** text field + * If the custom policy needs arguments to run, add them by clicking the **New argument** button + * Add the parameter name in the **Parameter Name** field and its value in the **Value** field. You can add multiple parameters. + +
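To make the rule semantics of the custom policies above concrete, the sketch below shows what a single rule conceptually evaluates: a dotted path into the credential, an operator, and a value, with case-insensitive string comparison as noted earlier. It is an illustration, not the verifier's actual engine:

```typescript
// Illustrative sketch of one custom-policy rule, e.g. the "country code must
// equal DE" example from the policy descriptions above.
type Rule = { field: string; operator: "equals"; value: string };

function evaluateRule(credential: Record<string, unknown>, rule: Rule): boolean {
  // Walk a dotted path such as
  // "credentialSubject.gx:headquartersAddress.gx:countryCode".
  const actual = rule.field.split(".").reduce<unknown>(
    (node, key) => (node as Record<string, unknown> | undefined)?.[key],
    credential,
  );
  if (typeof actual !== "string") return false;
  return actual.toLowerCase() === rule.value.toLowerCase(); // "DE" equals "de"
}
```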
+ + + +12\. **Advanced Policies.** To set up advanced features related to how the verification of the presented VC is done, perform the following steps: + +* Select the **Edit Advanced Policy Features** checkbox. The **Advanced SSI Policy Features** group is displayed. + +
+* Some advanced features are selected by default when the group is displayed. +* Please select the policies relevant to your case. For both the minimum and maximum number of credentials required, enter a numerical value as illustrated below. + +
+ +**Note**: Ensure you understand the function of the advanced policies and how they impact the verification process of VCs for the respective asset. + + + +13\. Press the **Continue** button. + diff --git a/user-guides/using-the-oe-marketplace/publishing-an-asset/asset-metadata.md b/user-guides/using-the-oe-marketplace/publishing-an-asset/asset-metadata.md new file mode 100644 index 000000000..4df3c46d3 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/publishing-an-asset/asset-metadata.md @@ -0,0 +1,53 @@ +--- +description: This page describes the Asset Metadata screen in the asset publishing flow. +--- + +# Asset Metadata + +2\. The first step in this process - Asset Metadata - is displayed on the screen. + +
+ + +3\. Fill in the following fields: + +* **Title:** input a descriptive name for the Asset you are publishing + +
+ +* **Description:** create a detailed description of the Asset. You can use free text or Markdown + +
+ +* **Tags:** add descriptive tags for your asset. The tags help users filter the assets. You can add as many tags as you wish. + +
+ +* **Author:** Enter the name of the author of the asset. It could be your name/company name or an alias. + +
+ +* **Asset type:** Select whether the asset is a Dataset or an Algorithm. To understand the difference between the two types, please consult this page. + +
+ +* If the selected asset type is **Algorithm**, the following additional fields must be completed. + * **Docker image**: select the Docker image to be used for running the algorithm. It can be one of the following: + * _node:latest_ + * _python:latest_ + * _Custom._ If you selected this option, the following fields are required: + * _Custom Docker Image_: specify the name and the tag of a public Docker Hub image, or the custom image if you have it hosted in a third-party repository + * _Docker Image Checksum_: enter the checksum (DIGEST) of your Docker image. + * _Docker Image Entrypoint_: define the command to be executed to run the algorithm. + * **Custom Parameters**: If the algorithm has custom parameters, select the checkbox "This asset uses algorithm custom parameters". Then, for each custom parameter, enter the Parameter Name and Parameter Label. + +
+ +* If the selected asset type is **Dataset**, the checkbox labeled "**Consent of data subjects**" will appear. To proceed, you must check this box to confirm that you have obtained the necessary consent from the data subjects or holders to publish the asset. +* **License Type:** Each asset is accompanied by a license that outlines the terms and conditions for its use. All participants intending to consume the asset must adhere to the specified license requirements. Please choose one of the following options to attach the license file to the asset: + * _Upload a license file_: the file will be saved in IPFS, and the link to it will be saved in the asset's description + * URL: provide the URL where the file is located and press "Validate". After the file location is validated, it is saved in the asset's description. +* **Terms and Conditions:** All participants within the dataspace are required to comply with its Terms and Conditions. You can review these by clicking the "Terms and Conditions" link. To proceed with the publishing workflow, please confirm your agreement by checking the box labeled "I agree to the Terms and Conditions." + +4\. Press Continue. diff --git a/user-guides/using-the-oe-marketplace/publishing-an-asset/preview.md b/user-guides/using-the-oe-marketplace/publishing-an-asset/preview.md new file mode 100644 index 000000000..b942aa772 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/publishing-an-asset/preview.md @@ -0,0 +1,10 @@ +# Preview + +24\. The **Preview** screen is displayed. Here you can see how the asset and service will look after they are published. + +
+ +If everything is fine, proceed to the last step by pressing the **Continue** button. + + + diff --git a/user-guides/using-the-oe-marketplace/publishing-an-asset/service-metadata-and-credentials.md b/user-guides/using-the-oe-marketplace/publishing-an-asset/service-metadata-and-credentials.md new file mode 100644 index 000000000..33a3505f9 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/publishing-an-asset/service-metadata-and-credentials.md @@ -0,0 +1,145 @@ +# Service Metadata and Credentials + +14\. The Service Metadata and Credentials screen is displayed. This screen allows you to define the metadata and the access credentials of the first service created, along with the asset. For a better understanding of how access credentials work, please check this link. + +
+ + +15\. **Service Data** group + +Fill in the following fields: + +* **Title:** input a descriptive name for the Service you are publishing +* **Description:** create a detailed description of the Service. You can use free text or Markdown +* **Service language**: select the language of the service +* **Service language direction**: the direction of the text in the selected language. It is set automatically, based on the selected language + +
+ + +16\. **Access Type / Algorithm Privacy** + +* **Access Type:** In case an asset of type **dataset** is created, the **Access Type** group is displayed. You can choose either _Download_ or _Compute_. + * Choose **Download** if you want the dataset to be downloaded by the consumer when the asset is purchased. This will give the consumer full access to the content of the downloaded dataset. + +
+ * Choose **Compute** if you want the asset to be accessible only through a C2D job, meaning that only an algorithm can be run on the dataset, and only the results of the algorithm will be accessible to the consumer. + * In case you selected **Compute**, the "**Set Allowed Algorithms**" group is displayed on the screen. + * In this group, you can select which algorithms are allowed to run on the dataset. You can select either specific algorithms or algorithms published by trusted publishers. + * **Allowed Algorithms**: In this dropdown list, the "_Allow selected algorithms_" option is selected and the "Selected Algorithms" list is active. The list will include all the published algorithms to which the dataset publisher has access based on their web3 address. + + * Select one or more algorithms from the list that will be allowed to run on the dataset + +
+ + * If you want to allow all published algorithms to run on the dataset, in the "Allowed Algorithms" dropdown list select "_Allow any published algorithm_". Once you select this option, the "**Selected Algorithms**" and "**Allow Trusted Algorithm Publishers**" lists will be disabled. + +
+ + * **Allowed Trusted Algorithm Publishers**: in this dropdown list, the "_Allow specific algorithm publishers_" option is selected and an input field is displayed. + + * To allow the algorithms published by a specific publisher, enter the web3 address of the publisher and click "Add". The address will be added to the list. You can add as many publishers as you need. + +
+ + * To allow algorithms published by all publishers, in the "**Allowed Trusted Algorithm Publishers**" list, select the option _"Allow all trusted algorithm publishers"_. Once you select this option, the "**Selected Algorithms**" and "**Allow Trusted Algorithm Publishers**" lists will be disabled. + +
+ **Note**: if you select nothing in the "**Allowed Algorithms**" and "**Allow Trusted Algorithm Publishers**" lists, no algorithm will be allowed to run on the dataset. + + + +* **Algorithm Privacy**: In case an asset of type **algorithm** is created, the **Algorithm Privacy** group is displayed. In this group, the checkbox "**Keep my algorithm private for Compute-to-Data**" is displayed. + * If you want the algorithm to only be run in C2D jobs, check the checkbox + * If you want consumers to be able to download the algorithm and access its code, uncheck the checkbox + +
+ +**17. Service Configuration Group** + +In this group, the dataset location and the node that will encrypt the file location are specified. + +* **Dataset location:** based on the content's location, four types of assets can be registered in Ocean Enterprise: URL, IPFS, Arweave, or GraphQL. + * **URL**: to register content stored at a URL, select the **URL** tab. + + * In the **File** field, add the URL. The URL can point to a file or to an API endpoint + * From the right side, select the HTTP method: GET or POST + * If header parameters are required, specify the key and value for each parameter and press "**Add**" + * Press "**Submit URL**" to verify the URL is accessible. + +
+ * **IPFS**: to register content stored in IPFS, select the **IPFS** tab. + + * In the CID field, enter the content identifier of the content you want to register and press **Validate** + +
+ * **Arweave**: to register content stored in Arweave, select the Arweave tab + * In the **Transaction ID** field, enter the transaction ID of the content and press **Validate** + +
+ * **GraphQL**: to register a GraphQL query, select the **GraphQL** tab + * In the **URL** field, enter the URL of the GraphQL server + * If header parameters are required, specify the key and value for each parameter and press "**Add**" + * In the **Query** field, enter the query to run on the GraphQL server + * Press **Submit Query** to verify the URL + +
+* **Provider URL, Sample File, Timeout** + * **Provider URL**: This field indicates the Ocean Node that will encrypt the URL. By default, this field is prepopulated with the Ocean Node URL used by the marketplace. If you want to use a different node, press Delete, then insert the URL of the desired Ocean Node and press Validate. + +
+* **Sample File** (optional field): Enter the URL where a sample file of the asset is located and press **Validate**. +* **Timeout**: the time the consumer who purchased an asset has access to the asset. In the marketplace, it can be set to: 1 day, 1 week, 1 month, 1 year, or forever. The time counter starts the moment the asset is purchased. Once the time expires, the asset has to be purchased again to access it. + + * Select a value from the dropdown list + + + +**18. Access Rules** + +* **Allow Eth Address** and **Deny Eth Address** lists: Use the fields in this group to determine who is allowed or denied access to the service. The rules are based on web3 addresses. These fields work the same way as the ones defined at the asset-level credentials, so please refer to steps 5 - 8 on the [Asset Level Credentials page](asset-level-credentials.md). + +**Note:** To assess a user's right to access a service of an asset, the allow and deny lists at the asset and service level are combined and evaluated together. + + + +**19. SSI Policies:** to enable access rules based on SSI credentials at the service level, select the "Enable SSI Policies" checkbox. The SSI Policies group is displayed. Please refer to step 11 on the [Asset Level Credentials page](asset-level-credentials.md) for an understanding of how these fields work. + + + +20\. **Consumer Parameters** + +Consumer Parameters are the parameters the asset uses. For a dataset of type URL, the consumer parameters are the query parameters used to call the URL. For an asset of type algorithm, the consumer parameters are the arguments passed to the program. + +* To define consumer parameters for an asset, check the **"This asset uses user-defined parameters"** checkbox. The Custom parameters group is displayed. + +
+* Four types of parameters can be defined in the interface: text, number, boolean, or select (list of values). +* To define a consumer parameter, input the following fields: + * **Parameter Name**: the name of the parameter + * **Parameter Label**: the label that will be displayed on screen + * **Description**: parameter description + * **Parameter Type**: one of the four types listed above + * **Required**: whether the field is required or optional + * **Default value**: the default value of the parameter. It will be used if no value is input by the consumer at the time of consumption + +
+ + * For parameters of type "select", the screen includes additional fields where the list's values are entered. + +
+ * To add a new consumer parameter, press "**Add parameter**" + +21\. Press the **Continue** button. + + + + + + + + + diff --git a/user-guides/using-the-oe-marketplace/publishing-an-asset/service-pricing.md b/user-guides/using-the-oe-marketplace/publishing-an-asset/service-pricing.md new file mode 100644 index 000000000..37143a3fb --- /dev/null +++ b/user-guides/using-the-oe-marketplace/publishing-an-asset/service-pricing.md @@ -0,0 +1,24 @@ +# Service Pricing + +22\. The Service Pricing screen is displayed. On this screen, the asset's price is set. + +
+ +An asset published on Ocean Enterprise can either be free or paid. + + + +* **Publishing a free asset**: Click on the **Free** tab. Check the "I want this asset to be free. I understand network fees are still to be paid" checkbox and press **Continue**. + +
+ + +* **Publishing a paid asset**: Click on the **Fixed** tab. + +
+ + * In the **Price** field, enter the price for the service. + * The currency of the price is displayed at the left side of the Price field + * Press **Continue** + diff --git a/user-guides/using-the-oe-marketplace/publishing-an-asset/submit.md b/user-guides/using-the-oe-marketplace/publishing-an-asset/submit.md new file mode 100644 index 000000000..fd0dc9afc --- /dev/null +++ b/user-guides/using-the-oe-marketplace/publishing-an-asset/submit.md @@ -0,0 +1,21 @@ +# Submit + +25\. The Submit screen is displayed. + +* Press the **Submit** button to publish the asset. + +
+ +* During the publishing process, the Metamask wallet will display notification messages that require your approval to perform the transaction on the blockchain. Approve all transactions + +
+ +* After the asset is published, a confirmation message is displayed on screen. + +
+ +* Press the **View Asset** button to show the asset. + +
+ +Please note that from the time an asset is created on the blockchain until it is indexed by the Ocean Node’s indexer, a delay may occur. This delay typically ranges from a few seconds to several minutes, depending on factors such as RPC endpoint performance, the current indexed block, and the machine’s processing capacity running the Ocean Node. diff --git a/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/README.md b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/README.md new file mode 100644 index 000000000..acae6da28 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/README.md @@ -0,0 +1,11 @@ +# Running Compute-To-Data Jobs + +The Compute-to-Data (C2D) feature in the OE stack enables algorithms to be executed against published datasets without exposing the raw data to end users, preserving privacy and security. + +**Note:** C2D jobs can use **only** assets of type _"compute"_. C2D jobs cannot be run on assets of type "download". + +This chapter includes the following information: + +* [Introduction to C2D](c2d-concepts.md), where the main concepts are introduced +* [Step-by-step guide](running-and-managing-c2d-jobs/) to run and manage C2D jobs and the related information + diff --git a/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/c2d-concepts.md b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/c2d-concepts.md new file mode 100644 index 000000000..6472bae74 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/c2d-concepts.md @@ -0,0 +1,69 @@ +# C2D Concepts + +## How C2D jobs work + +To run a C2D job, the user needs to purchase an "_algorithm_" asset and zero or more "_dataset_" assets. The algorithm is the program that will be executed in a container on the Ocean Node server. The datasets are downloaded into the container and accessed by the running algorithm. When the job finishes, the algorithm’s output and log files are saved. The user who started the C2D job can then download them. + +**Note:** In contrast to asset downloads, which prevent users from downloading their own assets, publishers are permitted to execute C2D jobs on assets they have published. + +## C2D environment + +When starting a C2D job, the user can configure the environment in which the job will run, including the processing and storage resources allocated to it, as well as its maximum duration. The following resources can be configured: + +* Number of processing cores +* RAM +* Disk space +* Maximum job duration + +**Note**: Ensure the maximum job duration is sufficient for the job to finish. If the duration expires, the job will be terminated automatically. + +## C2D environment types + +Ocean Node provides two C2D environments to run jobs: + +* **Free environment:** Best for testing. It has limited compute, storage, and job duration, and is free of charge. +* **Paid environment**: Best for production. It offers more compute, storage, and longer job durations. Used resources are billed; running jobs here generates costs. + +Each environment has a pool of processing and storage resources. When a C2D job starts, resources are allocated from the resource pool for the duration of its execution.
Unused pool resources remain available for other C2D jobs. When a job completes, its resources are automatically released back into the pool. + +**Note**: the Ocean Node operator configures each environment, so resources and job duration limits can differ from one Ocean Node to another. + +## C2D job cost + +In a paid environment, running C2D jobs incurs costs. Each resource has a per‑minute price (set by the Ocean Node provider), and the job cost is determined by multiplying the resource prices by the job’s duration in minutes. Let's take the following example: + +In the paid environment, the unit price of each resource is: + +* CPU: 0.2 EUR per 1 core / 1 minute +* RAM: 0.1 EUR per 1 GB / 1 minute +* Storage: 0.05 EUR per 1 GB / 1 minute + +If the C2D job runs in a paid environment with **2 cores, 8 GB of RAM and 16 GB of storage**, for **3 minutes**, the total cost associated with this job is calculated as: + +`C2D env cost/minute = (2 cores) * 0.2 EUR + (8 GB RAM) * 0.1 EUR + (16 GB storage) * 0.05 EUR = 2 EUR/min` + +`Job cost = (C2D env cost/minute) * 3 minutes = 2 * 3 = 6 EUR` + +So, running the job in this environment costs 6 EUR. + +To start a C2D job in a paid environment, the user needs to allocate in advance the amount that covers running the job in the selected environment for the specified maximum duration. + +Continuing with our example, if the user wants to run a job for **a maximum duration of 10 minutes** in the environment described above, the allocated amount is calculated as: + +**Allocated amount** = **C2D env cost/minute** \* **maximum job duration** = 2 EUR/minute \* 10 minutes = **20 EUR** + +## Escrow account + +When running a C2D job in a paid environment, the user must deposit the full amount needed for the maximum job duration into an escrow account. This account is set by the Ocean Enterprise Collective and is unique to each blockchain. + +The following happens when the C2D job starts in a paid environment: + +1. The user deposits the allocated amount into the escrow account. +2. The Ocean Node that runs the job locks this amount while the job runs. +3. Once the job finishes, the actual cost is calculated based on the per‑minute environment cost and the job’s duration (rounded up to the nearest minute). +4. The Ocean Node deducts the job cost and unlocks the rest. +5. Any leftover funds remain in the escrow account, available for future jobs or withdrawal. + diff --git a/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/README.md b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/README.md new file mode 100644 index 000000000..9d97d74ab --- /dev/null +++ b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/README.md @@ -0,0 +1,11 @@ +# Running and managing C2D jobs + +Ocean Marketplace provides a wizard to make it easier to run C2D jobs and collect results. You can start a C2D job using either a dataset or an algorithm of type _"compute"_.
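As a reminder of the cost model described in [C2D Concepts](../c2d-concepts.md), the sketch below reproduces the worked example from that page (2 cores, 8 GB RAM, 16 GB storage at the example unit prices); real prices are set by each Ocean Node operator:

```typescript
// Sketch: the C2D cost arithmetic from the concepts page.
interface EnvPrices {
  cpuPerCoreMinute: number; // EUR per core per minute
  ramPerGbMinute: number;   // EUR per GB of RAM per minute
  diskPerGbMinute: number;  // EUR per GB of storage per minute
}

const examplePrices: EnvPrices = { cpuPerCoreMinute: 0.2, ramPerGbMinute: 0.1, diskPerGbMinute: 0.05 };

function costPerMinute(p: EnvPrices, cores: number, ramGb: number, diskGb: number): number {
  return cores * p.cpuPerCoreMinute + ramGb * p.ramPerGbMinute + diskGb * p.diskPerGbMinute;
}

const perMinute = costPerMinute(examplePrices, 2, 8, 16); // = 2 EUR/min
const jobCost = perMinute * 3;           // 3-minute job           -> 6 EUR
const escrowAllocation = perMinute * 10; // 10-minute max duration -> 20 EUR
```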
+ +In the Ocean Marketplace, you can: + +* [Run a C2D job starting from a dataset](run-a-c2d-starting-from-a-dataset.md) +* [Run a C2D job starting from an algorithm](run-a-c2d-job-starting-from-an-algorithm.md) +* [Manage your escrow account](manage-the-escrow-account.md) +* [See all the C2D jobs you've executed](c2d-jobs-history.md) + diff --git a/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/c2d-jobs-history.md b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/c2d-jobs-history.md new file mode 100644 index 000000000..d64ca8509 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/c2d-jobs-history.md @@ -0,0 +1,21 @@ +# C2D jobs history + +## Precondition + +* The user is logged in to the marketplace +* The user has a good understanding of the C2D concepts (see [C2D Concepts](../c2d-concepts.md)) + + + +The list of C2D jobs started by a user is displayed in two different places: + +* The C2D jobs that were executed using the service(s) of a specific asset (dataset or algorithm) are displayed on the asset details page, within the **Your Compute Jobs** display group. + +
+ + + +* The entire list of C2D jobs executed by a user is displayed on the profile details page, under the **Compute Jobs** tab. + +
+ diff --git a/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/manage-the-escrow-account.md b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/manage-the-escrow-account.md new file mode 100644 index 000000000..fcdffc66a --- /dev/null +++ b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/manage-the-escrow-account.md @@ -0,0 +1,42 @@ +# Manage the escrow account + +## Precondition + +* The user is logged in to the marketplace +* The user has a good understanding of the C2D concepts (see [C2D Concepts](../c2d-concepts.md)) + + + +## Steps + +1\. Hover over the user's web3 address and select **View Profile** from the context menu. + +
+ + + +2\. The Profile view is displayed. The top of the screen shows summary information, including the funds available in the escrow account and the funds locked by C2D jobs that have not yet been claimed. + +
+ + + +3\. To withdraw the available funds from the escrow account, click the **Withdraw** button under the Escrow Available Funds field. + +
+ + + +4\. The **Withdraw Escrow Funds** window is displayed. Enter the amount to withdraw or press **Max** to withdraw all available funds, then press **Withdraw**. + +
+ + + +5\. MetaMask will display a transaction request for approval. Press **Confirm**. + +
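+ +Under the hood, the withdrawal is a single transaction against the escrow smart contract. The sketch below (TypeScript with ethers.js) is illustrative only: the escrow address, ABI fragment, and function name are assumptions, and the real interface is defined by the OE contract deployment for the connected chain.

```typescript
import { ethers } from "ethers";

// Illustrative only: the actual escrow address, ABI, and function name come
// from the OE contract deployment for the connected blockchain.
const ESCROW_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder
const ESCROW_ABI = ["function withdraw(address token, uint256 amount)"]; // assumed fragment

async function withdrawFromEscrow(tokenAddress: string, amount: bigint): Promise<void> {
  // MetaMask injects window.ethereum; this mirrors what the marketplace
  // triggers when the Withdraw button is pressed.
  const provider = new ethers.BrowserProvider((window as any).ethereum);
  const signer = await provider.getSigner();
  const escrow = new ethers.Contract(ESCROW_ADDRESS, ESCROW_ABI, signer);
  const tx = await escrow.withdraw(tokenAddress, amount); // opens the MetaMask prompt
  await tx.wait(); // funds move from the escrow account back to the wallet
}
```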
+ + + +6\. The funds will be moved from the escrow account to the user's wallet. diff --git a/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/run-a-c2d-job-starting-from-an-algorithm.md b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/run-a-c2d-job-starting-from-an-algorithm.md new file mode 100644 index 000000000..75d818793 --- /dev/null +++ b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/run-a-c2d-job-starting-from-an-algorithm.md @@ -0,0 +1,206 @@ +# Run a C2D job starting from an algorithm + +## Precondition + +* The user is logged in to the marketplace + + + +## Steps + +1\. Access the Catalogue. The catalogue lists all registered assets, both datasets and algorithms, of any type (download or compute). + +
+ +**Note**: To refine your selection in the catalogue, use the search bar at the top of the list or the filters on the left side of the catalogue. + + + +2\. Select an algorithm from the list. The asset details page is displayed. + +
+ + + +3\. From the services list, select a service of type _compute_. The "**Start Compute**" button is displayed. + +
+ + + +4\. Click "**Start Compute**". The C2D wizard starts, and the Step 1 - Select Datasets window is shown. + +
+ + + +5\. This screen lists only the datasets that include services on which the selected algorithm can run (see [Service Metadata and Credentials](../../publishing-an-asset/service-metadata-and-credentials.md)). + +When running a C2D job starting from an algorithm, you can select zero or more datasets on which the algorithm will be executed. + +a) **Run C2D job without datasets:** If no datasets are needed to run the algorithm, mark the "Proceed without Dataset Selection" checkbox. This will disable the datasets list, remove unnecessary steps from the wizard, and enable the Continue button. Click **Continue**, then move to step 8 of this page. + +
+ + + +b) **Run C2D job with one or more datasets:** If one or more datasets are needed to run the algorithm, select them from the list by clicking on their tiles. The "Selected" status will be displayed next to the asset's name, and the Continue button will be enabled. Click **Continue**. + +
+ + + +6\. The "**Select Services**" page appears, showing the assets chosen in the previous step along with the services on which the algorithm can run. Select the services you want to run the algorithm on and click **Continue**. + +
+ + + +7\. The "**Preview Selected Datasets and Services**" screen is displayed, showing the selection made so far. If you're satisfied with the selected dataset services, press **Continue**. + +
+ + + +8\. If any of the selected services (datasets or algorithms) contain consumer parameters, the **User Parameters** screen will open. It displays all parameters along with their default values. Provide the required inputs and select **Continue**. + +
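+ +How these parameter values reach the algorithm depends on the node's C2D implementation; in Ocean-derived stacks they are commonly delivered to the job container as a JSON document. A minimal, purely illustrative sketch of an algorithm reading them (the file path and parameter name below are assumptions, not the confirmed OE layout):

```typescript
import { readFileSync } from "node:fs";

// Illustrative only: the mount path and parameter names are assumptions;
// consult the OE / Ocean Node documentation for the actual container layout.
const PARAMS_PATH = "/data/inputs/algoCustomData.json"; // assumed path

const params = JSON.parse(readFileSync(PARAMS_PATH, "utf8"));
console.log("iterations =", params.iterations); // a consumer-supplied value
```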
+ + + +9\. The **"Select C2D Environment"** screen appears. Here you’ll see the Ocean Nodes linked to the dataspace that can run C2D jobs. Each node shows which environments are available—free, paid, or both. Select a node and press **Continue**. + +
+ +**Note**: At present, the list displays only the node linked to the dataspace. Future releases of OE will support multiple nodes serving a single dataspace. + + + +10\. The **C2D Environment Configuration** screen opens. Here you’ll see the available environments for the selected Ocean Node. Each environment lists its resources, and if you choose a paid environment, you’ll also see the per-minute price for each resource. + +* Select the environment +* Set the resources your job will use +* Choose the maximum job duration. \ + **Note:** For the paid environment, the C2D Environment Price field shows the total cost for the selected duration. You’ll also see a message about your escrow account balance, indicating whether it is enough to cover the job cost or whether you need to deposit more. For more details, see the job cost and escrow account sections on the [C2D Concepts](../c2d-concepts.md) page; a short worked sketch of the calculation also follows the figure below. +* When you’re ready, confirm the configuration and click **Continue**. + +
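+ +The price shown here is simply the per-minute environment cost multiplied by the maximum duration, as described on the C2D Concepts page. A minimal sketch of the arithmetic (TypeScript, using the illustrative unit prices from that page; actual prices are set by the Ocean Node operator):

```typescript
// Illustrative per-minute unit prices (EUR); real values are set by the
// Ocean Node operator and shown on the configuration screen.
const PRICES = { cpuPerCoreMinute: 0.2, ramPerGbMinute: 0.1, diskPerGbMinute: 0.05 };

// Per-minute cost of a given environment configuration.
function costPerMinute(cores: number, ramGb: number, diskGb: number): number {
  return (
    cores * PRICES.cpuPerCoreMinute +
    ramGb * PRICES.ramPerGbMinute +
    diskGb * PRICES.diskPerGbMinute
  );
}

// Amount to allocate in escrow: per-minute cost times the maximum duration.
function allocatedAmount(cores: number, ramGb: number, diskGb: number, maxMinutes: number): number {
  return costPerMinute(cores, ramGb, diskGb) * maxMinutes;
}

console.log(costPerMinute(2, 8, 16)); // 2 (EUR/min, as in the worked example)
console.log(allocatedAmount(2, 8, 16, 10)); // 20 (EUR for a 10-minute maximum)
```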
+ + + +11\. The "**Review**" screen opens. + +
+ + + +This screen is divided into three sections: Assets, C2D Resources, and Fees. + +* _Assets_ + * Displays the price of each selected asset (algorithms and datasets). + * Includes a credential verification button next to each asset. + +
+ + + +* _C2D Resources_ + * Shows the calculated cost of the C2D job. + * Displays the user’s escrow account balance. + * Indicates the additional amount to deposit if the job cost exceeds the current escrow balance.
+ +
+ + + +* _Fees_: Lists the applicable fees, organized into the following categories: + * Marketplace fees (datasets and algorithms) + * OEC fees (datasets and algorithms) + * Provider fees (datasets and algorithms)\ + **Note**: The Provider fee is not displayed initially; it is calculated after the assets' credentials are verified.
+ +
+ + + +a) In the Assets section, click the "**Check Credential**" button next to one of the assets to initiate the asset credentials verification process. The process verifies the consumer's credentials against the access rules defined for each asset. The verification is performed asset by asset. + +
+ + + +b) If the current asset has SSI-based access policies defined, the marketplace queries the consumer's SSI wallet and lists the Verifiable Credentials that match the specified criteria. Select the Verifiable Credentials you want to send for verification and click **Accept**. + +
+ + + +c) The DID selector window opens, listing all DIDs from the SSI wallet. Select the DID you want to use to sign the Verifiable Presentation in which the Verifiable Credentials selected in the previous step will be wrapped before being sent for verification. Then click **Confirm**. + +
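+ +The payload sent for verification has roughly the following shape: the selected Verifiable Credentials are embedded in a Verifiable Presentation, which is signed with the chosen DID. The sketch below (TypeScript) follows the W3C Verifiable Credentials data model; the field values are illustrative, not the exact payload the marketplace produces.

```typescript
// Illustrative shape of a Verifiable Presentation (W3C VC data model).
// All identifiers below are placeholders.
const presentation = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiablePresentation"],
  holder: "did:example:holder123", // the DID selected in this step
  verifiableCredential: [
    {
      "@context": ["https://www.w3.org/2018/credentials/v1"],
      type: ["VerifiableCredential"],
      issuer: "did:example:issuer456",
      credentialSubject: {
        id: "did:example:holder123", // claims checked against the asset's access policy
      },
      proof: {}, // the issuer's signature over the credential
    },
  ],
  proof: {}, // the holder's signature, created with the selected DID
};
```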
+ + + +d) Credential verification is performed, and each asset is marked with the Verified tag. For assets protected by SSI-based access control, the verification session remains valid only for a limited time, and the C2D job must be initiated within this period; otherwise, it will fail. A countdown timer indicates the remaining validity. If the validity period expires, you will have to reinitiate the credential verification process. + +
+**Note**: If credential verification fails, the system displays an error message, and the C2D job cannot be executed. To proceed, you must either supply alternative credentials for verification or select different assets. + + + +e) If credential verification succeeds, check the two checkboxes at the bottom of the screen to confirm: + +* agreement with the marketplace Terms and Conditions, and +* agreement with the license terms governing each selected asset. + +
+ + + +**Note**: To view the license terms for an asset, click the link provided next to it. The link opens the asset details page in a new browser tab. + +
+ + + +f) Click **Calculate Extra Fees**. The provider fees will be retrieved and displayed in the Fees section. + +
+ + + +g) Review the total cost associated with running the C2D job and click **Buy Compute Job**. + +
+ + + +h) Next, there will be several interactions with the MetaMask wallet for spending cap approvals and payments, as the user purchases each asset and transfers funds into the escrow account. + +
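+ +Each MetaMask prompt corresponds to a standard ERC-20 interaction: before a contract can pull a payment, the token must grant it an allowance. A hedged sketch of that approval step (TypeScript with ethers.js; the spender address and payment token depend on the OE contract deployment):

```typescript
import { ethers } from "ethers";

// Standard ERC-20 approval; MetaMask surfaces this as a "spending cap" request.
const ERC20_ABI = ["function approve(address spender, uint256 amount) returns (bool)"];

async function approveSpending(tokenAddress: string, spender: string, amount: bigint): Promise<void> {
  const provider = new ethers.BrowserProvider((window as any).ethereum);
  const signer = await provider.getSigner();
  const token = new ethers.Contract(tokenAddress, ERC20_ABI, signer);
  const tx = await token.approve(spender, amount); // one MetaMask prompt per approval
  await tx.wait();
}
```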
+ + + +i) Finally, a message will notify the user that the C2D job was started. Click **Continue**. + +
+ + + +j) The user will be returned to the Service Details screen, and the C2D job will appear in the Your Compute Jobs list. Click **Refresh** to monitor the job's status. + +
+ + + +k) When the job finishes, click **Show Details**. + +
+ + + +l) The Job Details page is displayed. It shows the assets used to execute the job, along with information such as the actual job duration and cost. The Results section provides links to the job’s logs and output files. To download a file, click its name in the list. + +
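+ +If you prefer scripting, the same monitoring and download flow can be sketched against the Ocean Node's HTTP API. The endpoint paths, query parameters, and status values below are placeholders rather than the node's confirmed API; consult the Ocean Node documentation for the real compute routes.

```typescript
// Hypothetical sketch: poll a C2D job until it finishes, then fetch a result.
// NODE_URL and both endpoints are placeholders, not the confirmed Ocean Node API.
const NODE_URL = "https://my-ocean-node.example.com";

async function waitForJob(consumerAddress: string, jobId: string): Promise<void> {
  for (;;) {
    const res = await fetch(
      `${NODE_URL}/compute/status?consumerAddress=${consumerAddress}&jobId=${jobId}`
    );
    const { status } = await res.json(); // placeholder response shape
    if (status === "finished") return;
    await new Promise((resolve) => setTimeout(resolve, 10_000)); // poll every 10 s
  }
}

async function downloadResult(jobId: string, index: number): Promise<ArrayBuffer> {
  // index selects a log or output file from the job's Results list.
  const res = await fetch(`${NODE_URL}/compute/result?jobId=${jobId}&index=${index}`);
  return res.arrayBuffer(); // raw bytes of the selected file
}
```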
+ diff --git a/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/run-a-c2d-starting-from-a-dataset.md b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/run-a-c2d-starting-from-a-dataset.md new file mode 100644 index 000000000..fbecfc72a --- /dev/null +++ b/user-guides/using-the-oe-marketplace/running-compute-to-data-jobs/running-and-managing-c2d-jobs/run-a-c2d-starting-from-a-dataset.md @@ -0,0 +1,194 @@ +--- +description: Use the C2D wizard to run a job by selecting a dataset as the starting point. +--- + +# Run a C2D starting from a dataset + +## Precondition + +* The user is logged in to the marketplace + + + +## Steps + +1\. Access the Catalogue. The catalogue lists all registered assets, both datasets and algorithms, of any type (download or compute). + +
+ +**Note**: To refine your selection in the catalogue, use the search bar at the top of the list or the filters on the left side of the catalogue. + + + +2\. Select a dataset from the list. The asset details page is displayed. + +
+ +3\. From the services list, select a service of type _compute_. The "**Start Compute**" button is displayed. + +
+4\. Click "**Start Compute**". The C2D wizard starts, and the Step 1 - Select Algorithm window is shown. + +
+ + + +5\. A list of the algorithms allowed to run on the selected dataset is displayed. Select the algorithm you want to run on the dataset; it will be moved to the top of the list. Press **Continue**. + +
+ + + +6\. The "**Select Algorithm Services**" screen is displayed. The list contains the services of the selected algorithm. Press **Continue**. + +
+ +7\. The "**Preview Algorithm and Service**" screen is displayed, showing the selection made so far. If you're satisfied with the selected algorithm service, press **Continue**. + +
+ + + +8\. If any of the selected services (datasets or algorithms) contain consumer parameters, the **User Parameters** screen will open. It displays all parameters along with their default values. Provide the required inputs and select **Continue**. + +
+ + + +9\. The **"Select C2D Environment"** screen appears. Here you’ll see the Ocean Nodes linked to the dataspace that can run C2D jobs. Each node shows which environments are available—free, paid, or both. Select a node and press **Continue**. + +
+ +**Note**: At present, the list displays only the node linked to the dataspace. Future releases of OE will support multiple nodes serving a single dataspace. + + + +10\. The **C2D Environment Configuration** screen opens. Here you’ll see the available environments for the selected Ocean Node. Each environment lists its resources, and if you choose a paid environment, you’ll also see the per-minute price for each resource. + +* Select the environment +* Set the resources your job will use +* Choose the maximum job duration. \ + **Note:** For the paid environment, the C2D Environment Price field shows the total cost for the selected duration. You’ll also see a message about your escrow account balance, indicating whether it is enough to cover the job cost or whether you need to deposit more. For more details, see the job cost and escrow account sections on the [C2D Concepts](../c2d-concepts.md) page; a short sketch of the escrow check also follows the figure below. +* When you’re ready, confirm the configuration and click **Continue**. + +
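+ +The escrow message on this screen boils down to comparing the required allocation with the funds already available, and the final charge is rounded up to the nearest minute, as described on the C2D Concepts page. A minimal sketch of both checks (TypeScript; `envCostPerMinute` is derived from the selected environment's resource prices, and the example numbers are illustrative):

```typescript
// Additional deposit needed before the job can start: the allocation for the
// maximum duration, minus whatever is already available in escrow.
function requiredDeposit(
  envCostPerMinute: number,
  maxDurationMinutes: number,
  escrowAvailable: number
): number {
  const allocation = envCostPerMinute * maxDurationMinutes;
  return Math.max(0, allocation - escrowAvailable);
}

// Final charge once the job ends: actual duration, rounded up to a full minute.
function settledCost(envCostPerMinute: number, actualSeconds: number): number {
  return envCostPerMinute * Math.ceil(actualSeconds / 60);
}

console.log(requiredDeposit(2, 10, 15)); // 5  -> 5 EUR still to deposit
console.log(settledCost(2, 150)); // 6  -> 2.5 min rounds up to 3 min = 6 EUR
```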
+ + + +11\. The "**Review**" screen opens. + +
+ + + +This screen is divided into three sections: Assets, C2D Resources, and Fees. + +* _Assets_ + * Displays the price of each selected asset (algorithms and datasets). + * Includes a credential verification button next to each asset. + +
+ + + +* _C2D Resources_ + * Shows the calculated cost of the C2D job. + * Displays the user’s escrow account balance. + * Indicates the additional amount to deposit if the job cost exceeds the current escrow balance.
+ +
+ + + +* _Fees_: Lists the applicable fees, organized into the following categories: + * Marketplace fees (datasets and algorithms) + * OEC fees (datasets and algorithms) + * Provider fees (datasets and algorithms)\ + **Note**: The Provider fee is not displayed initially; it is calculated after the assets' credentials are verified.
+ +
+ + + +a) In the Assets section, click the "**Check Credential**" button next to one of the assets to initiate the asset credentials verification process. The process verifies the consumer's credentials against the access rules defined for each asset. The verification is performed asset by asset. + +
+ + + +b) If the current asset has SSI-based access policies defined, the marketplace queries the consumer's SSI wallet and lists the Verifiable Credentials that match the specified criteria. Select the Verifiable Credentials you want to send for verification and click **Accept**. + +
+ + c) The DID selector window opens, listing all DIDs from the SSI wallet. Select the DID you want to use to sign the Verifiable Presentation in which the Verifiable Credentials selected in the previous step will be wrapped before being sent for verification. Then click **Confirm**. + +
+ + + +d) Credential verification is performed, and each asset is marked with the Verified tag. For assets protected by SSI-based access control, the verification session remains valid only for a limited time, and the C2D job must be initiated within this period; otherwise, it will fail. A countdown timer indicates the remaining validity. If the validity period expires, you will have to reinitiate the credential verification process. + +
+**Note**: If credential verification fails, the system displays an error message, and the C2D job cannot be executed. To proceed, you must either supply alternative credentials for verification or select different assets. + + + +e) If credential verification succeeds, check the two checkboxes at the bottom of the screen to confirm: + +* agreement with the marketplace Terms and Conditions, and +* agreement with the license terms governing each selected asset. + +
+ + + +**Note**: To view the license terms for an asset, click the link provided next to it. The link opens the asset details page in a new browser tab. + +
+ + + +f) Click **Calculate Extra Fees**. The provider fees will be retrieved and displayed in the Fees section. + +
+ + + +g) Review the total cost associated with running the C2D job and click **Buy Compute Job**. + +
+ + + +h) Next, there will be several interactions with the MetaMask wallet for spending cap approvals and payments, as the user purchases each asset and transfers funds into the escrow account. + +
+ + + +i) Finally, a message will notify the user that the C2D job was started. Click **Continue**. + +
+ + + +j) The user will be returned to the Service Details screen, and the C2D job will appear in the Your Compute Jobs list. Click **Refresh** to monitor the job's status. + +
+ + + +k) When the job finishes, click **Show Details**. + +
+ + + +l) The Job Details page is displayed. It shows the assets used to execute the job, along with information such as the actual job duration and cost. The Results section provides links to the job’s logs and output files. To download a file, click its name in the list. + +
+ + + diff --git a/user-guides/wallets/README.md b/user-guides/wallets/README.md deleted file mode 100644 index a9a36c649..000000000 --- a/user-guides/wallets/README.md +++ /dev/null @@ -1,35 +0,0 @@ ---- -description: Fundamental knowledge of using ERC-20 crypto wallets. ---- - -# Wallets - -Ocean Protocol users require an ERC-20 compatible wallet to manage their OCEAN and ETH tokens. In this guide, we will provide some recommendations for different wallet options. - -
- -### What is a wallet? - -In the blockchain world, a wallet is a software program that stores cryptocurrencies secured by private keys to allow users to interact with the blockchain network. Private keys are used to sign transactions and provide proof of ownership for the digital assets stored on the blockchain. Wallets can be used to send and receive digital currencies, view account balances, and monitor transaction history. There are several types of wallets, including desktop wallets, mobile wallets, hardware wallets, and web-based wallets. Each type of wallet has its own unique features, advantages, and security considerations. - -### Recommendations - -* **Easiest:** Use the [MetaMask](https://metamask.io/) browser plug-in. -* **Still easy, but more secure:** Get a [Trezor](https://trezor.io/) or [Ledger](https://www.ledger.com/) hardware wallet, and use MetaMask to interact with it. -* The [token page](https://oceanprotocol.com/token) at oceanprotocol.com lists some other possible wallets. - -### Related Terminology - -When you set up a new wallet, it might generate a **seed phrase** for you. Store that seed phrase somewhere secure and non-digital (e.g. on paper in a safe). It's extremely secret and sensitive. Anyone with your wallet's seed phrase could spend all tokens of all the accounts in your wallet. - -Once your wallet is set up, it will have one or more **accounts**. - -Each account has several **balances**, e.g. an Ether balance, an OCEAN balance, and maybe other balances. All balances start at zero. - -An account's Ether balance might be 7.1 ETH in the Ethereum Mainnet, 2.39 ETH in Görli testnet. You can move ETH from one network to another only with a special setup exchange or bridge. Also, you can't transfer tokens from networks holding value such as Ethereum mainnet to networks not holding value, i.e., testnets like Görli. The same is true of the OCEAN balances. - -Each account has one **private key** and one **address**. The address can be calculated from the private key. You must keep the private key secret because it's what's needed to spend/transfer ETH and OCEAN (or to sign transactions of any kind). You can share the address with others. In fact, if you want someone to send some ETH or OCEAN to an account, you give them the account's address. - -{% hint style="info" %} -Unlike traditional pocket wallets, crypto wallets don't actually store ETH or OCEAN. They store private keys. -{% endhint %} diff --git a/user-guides/wallets/metamask-setup.md b/user-guides/wallets/metamask-setup.md deleted file mode 100644 index e4fa7954d..000000000 --- a/user-guides/wallets/metamask-setup.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -description: How to set up a MetaMask wallet on Chrome ---- - -# Set Up MetaMask - -Before you can publish or purchase assets, you will need a crypto wallet. As Metamask is one of the most popular crypto wallets around, we made a tutorial to show you how to get started with Metamask to use Ocean's tech. - -> MetaMask can be connected with a TREZOR or Ledger hardware wallet but we don't cover those options below; see [the MetaMask documentation](https://metamask.zendesk.com/hc/en-us/articles/360020394612-How-to-connect-a-Trezor-or-Ledger-Hardware-Wallet). - -### Set up - -1. Go to the [Chrome Web Store for extensions](https://chrome.google.com/webstore/category/extensions) and search for MetaMask. - -![metamask-chrome-store](<../../.gitbook/assets/wallet/metamask-chrome-extension (2).png>) - -* Install MetaMask. 
The wallet provides a friendly user interface that will help you through each step. MetaMask gives you two options: importing an existing wallet or creating a new one. Choose to `Create a Wallet`: - -![Create a wallet](<../../.gitbook/assets/wallet/create-new-metamask-wallet (2).png>) - -* In the next step create a new password for your wallet. Read through and accept the terms and conditions. After that, MetaMask will generate Secret Backup Phrase for you. Write it down and store it in a safe place. - -![Secret Backup Phrase](<../../.gitbook/assets/wallet/secret-backup-phrase (2).png>) - -* Continue forward. On the next page, MetaMask will ask you to confirm the backup phrase. Select the words in the correct sequence: - -![Confirm secret backup phrase](<../../.gitbook/assets/wallet/confirm-backup-phrase (2).png>) - -* Voila! Your account is now created. You can access MetaMask via the browser extension in the top right corner of your browser. - -![MetaMask browser extension](<../../.gitbook/assets/wallet/metamask-browser-extension (2).png>) - -* You can now manage ETH and OCEAN with your wallet. You can copy your account address to the clipboard from the options. When you want someone to send ETH or OCEAN to you, you will have to give them that address. It's not a secret. - -![Manage tokens](<../../.gitbook/assets/wallet/manage-tokens (2).png>) - -You can also watch this [video tutorial](https://www.youtube.com/playlist?list=PL\_dn0wVs9kWolBCbtHaFxsi408cumOeth) if you want more help setting up MetaMask. - -### Set Up Custom Network - -Sometimes it is required to use custom or external networks in MetaMask. We can add a new one through MetaMask's Settings. - -Open the Settings menu and find the `Networks` option. When you open it, you'll be able to see all available networks your MetaMask wallet currently use. Click the `Add Network` button. - -![Add custom/external network](<../../.gitbook/assets/wallet/metamask-add-network (2).png>) - -There are a few empty inputs we need to fill in: - -* **Network Name:** this is the name that MetaMask is going to use to differentiate your network from the rest. -* **New RPC URL:** to operate with a network we need an endpoint (RPC). This can be a public or private URL. -* **Chain Id:** each chain has an Id -* **Currency Symbol:** it's the currency symbol MetaMask uses for your network -* **Block Explorer URL:** MetaMask uses this to provide a direct link to the network block explorer when a new transaction happens - -When all the inputs are filled just click `Save`. MetaMask will automatically switch to the new network.